Can you trust ChatGPT’s package recommendations?
https://vulcan.io/blog/ai-hallucinations-package-risk
ChatGPT can offer coding solutions, but its tendency to hallucinate gives attackers an opportunity. Here's what we learned.
From https://twitter.com/llm_sec/status/1667573374426701824
- People ask LLMs to write code
- LLMs recommend imports that don't actually exist
- Attackers figure out these hallucinated package names, then create and upload packages under those names carrying malicious payloads
- People using LLM-written code then install the malware themselves (a quick check for hallucinated names is sketched after this list)
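
A minimal sketch of one precaution, not from the original post: before installing packages that an LLM suggested, check whether each name actually exists on PyPI via its public JSON API. The package names in the example are hypothetical.

```python
import urllib.error
import urllib.request


def exists_on_pypi(package: str) -> bool:
    """Return True if `package` is published on PyPI (HTTP 200 from the JSON API)."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # 404 means no such package -- likely a hallucinated name
        return False


# Hypothetical names an LLM might emit in generated code
suggested = ["requests", "totally-real-http-lib"]
for name in suggested:
    if exists_on_pypi(name):
        print(f"{name}: found on PyPI (still verify maintainer and release history)")
    else:
        print(f"{name}: NOT on PyPI - do not pip install blindly")
```

Note that an existence check alone is not enough once an attacker has already registered the hallucinated name; the point of the attack is that the package will exist. Inspecting the maintainer, release history, and download counts before installing anything an LLM recommends is the safer habit.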