| PROGRAMMING | ARTIFICIAL INTELLIGENCE |

Scaling Isn’t Everything: How Bigger Models Fail Harder

Are Large Language Models really understanding programming languages?

Salvatore Raieli
6 min read · May 31, 2023

AI code completion and generation. Image by Man Chung on Unsplash

Code generation and completion models have disturbed programmers' dreams, so much so that they have led to lawsuits and controversy. On the other hand, it is still unclear how much these models actually understand programming. Moreover, the more parameters a model has, the more capable it should be. But what if that isn't true?

The model that dreamed of programming

With the development of LLMs, it came naturally to apply these models not only to language but also to programming. Preliminary investigations showed that GPT-3 could generate simple programs from Python docstrings, despite the fact that the model had not been specifically trained to generate code. Given the abundance of publicly available source code, many research groups then focused on developing models specifically for code generation. For example, Codex:
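To make the docstring-to-code task concrete, here is a hypothetical sketch of the kind of completion such a model might produce: the prompt is just a function signature plus a docstring, and the model fills in the body. The function name and logic below are illustrative assumptions, not examples taken from the GPT-3 or Codex experiments.

```python
# Prompt given to the model: a signature plus a docstring.
def is_palindrome(s: str) -> bool:
    """Return True if s reads the same forwards and backwards,
    ignoring case and spaces."""
    # A completion a code model might plausibly generate:
    normalized = s.lower().replace(" ", "")
    return normalized == normalized[::-1]

print(is_palindrome("Never odd or even"))  # → True
```

Benchmarks such as HumanEval evaluate exactly this setup: the model sees the signature and docstring, and its generated body is run against hidden unit tests.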

Written by Salvatore Raieli

Senior data scientist | writing about science, machine learning, and AI. Top writer in Artificial Intelligence
