"participants who had access to an AI assistant wrote significantly less secure code" and "were also more likely to believe they wrote secure code" - 2023 Stanford University study published at CCS23


https://arxiv.org/pdf/2211.03622


which does support the idea that there is a limit to how good they can get.

I absolutely agree. I'm not necessarily one to say LLMs will become incredible general-intelligence-level AIs. I'm really just disagreeing with the negative sentiment that they're getting worse or are scams; that isn't true at the moment.

It doesn't prove it either: as I said, 2 data points aren't enough to derive a curve.

Yeah, the only reason I didn't include more is that it's a pain in the ass pulling together multiple research papers and results spanning GPT-2, 3, 3.5, 4, o1, etc.