AI Models Trained on Buggy Code Mirror Errors, Study Finds
Researchers Examined 7 LLMs to Determine How They Dealt With Flawed Code
Large language models trained on flawed data tend to replicate those mistakes, researchers found. "In bug-prone tasks, the likelihood of LLMs generating correct code is nearly the same as generating buggy code," found researchers from institutions including the Chinese Academy of Sciences.