Can you trust ChatGPT’s package recommendations?

https://vulcan.io/blog/ai-hallucinations-package-risk

ChatGPT can offer coding solutions, but its tendency to hallucinate presents attackers with an opportunity. Here's what we learned.

From https://twitter.com/llm_sec/status/1667573374426701824

  1. People ask LLMs to write code
  2. LLMs recommend imports that don't actually exist
  3. Attackers work out what these hallucinated package names are, then create and upload packages under those names with malicious payloads
  4. People using LLM-written code then install the malware themselves (see the sketch below)
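
A cheap defensive check, as a minimal sketch: before installing anything an LLM suggested, ask PyPI whether the package has ever been published. The JSON endpoint (`https://pypi.org/pypi/<name>/json`) is real and returns 404 for names that don't exist; the `exists_on_pypi` helper and the command-line wrapper here are illustrative, not a vetted tool.

```python
import sys
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    """Return True if `name` has ever been published on PyPI.

    A 404 from the JSON API means the package does not exist, which is
    a strong hint that an LLM hallucinated the import.
    """
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other HTTP errors: don't silently treat as "missing"

if __name__ == "__main__":
    # Hypothetical usage: python check_pkgs.py flask requests some-suggested-lib
    for name in sys.argv[1:]:
        verdict = "exists" if exists_on_pypi(name) else "NOT FOUND -- possible hallucination"
        print(f"{name}: {verdict}")
```

Note the limitation: per step 3 above, a package *existing* is exactly what an attacker arranges, so a 200 response proves only that the name is taken, not that the package is benign. Complementary signals such as upload date and download counts help flag freshly squatted names.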