So first off, go take a look at this curl bug report. It’s an 8.6 severity security problem, a buffer overflow in WebSockets.

Yes, a strcpy call can be dangerous, if there aren’t proper length checks. But this code has pretty robust length checks, and there just doesn’t seem to be a vulnerability here. (There’s a quick sketch of that guarded-copy pattern at the end of this post.)

This is a bug report that was generated with one of the Large Language Models (LLMs), like Google Bard or ChatGPT. There are some big bug bounties that get paid out, so naturally people are trying to leverage AI to score those bounties. But as [Daniel Stenberg] points out, LLMs are not actually AI; after all, the I in LLM stands for intelligence.

There have always been vulnerability reports of dubious quality, sent by people who either don’t understand how vulnerability research works, or are willing to waste maintainer time by sending in raw vulnerability scanner output without putting in any real effort. What LLMs do is provide an illusion of competence that takes longer for a maintainer to wade through before realizing that the claim is bogus. [Daniel Stenberg] is more charitable than I might be, suggesting that LLMs may help with communicating real issues through language barriers. But still, this suggests that the long term solution may be “simply” detecting LLM-generated reports, and marking them as spam.
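To make that strcpy point concrete, here’s a minimal C sketch of the guarded-copy pattern. This is not curl’s actual code, and `copy_payload` is a hypothetical name; it just illustrates why a strcpy call sitting behind an explicit length check is not a buffer overflow.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical example, not curl's code: a strcpy guarded by an
 * explicit length check, the pattern the bogus report overlooked. */
static int copy_payload(char *dst, size_t dst_size, const char *src)
{
    size_t len = strlen(src);

    /* The check that makes the strcpy below safe: reject any input
     * that, including its NUL terminator, won't fit in dst. */
    if (len >= dst_size)
        return -1; /* too long: refuse rather than overflow */

    strcpy(dst, src); /* safe: len + 1 <= dst_size is guaranteed */
    return 0;
}

int main(void)
{
    char buf[64];

    if (copy_payload(buf, sizeof(buf), "example websocket payload") == 0)
        printf("copied: %s\n", buf);
    return 0;
}
```

A naive scanner, or an LLM paraphrasing one, flags every strcpy call as a potential overflow; a maintainer reading the surrounding code sees the length check and knows the claim is bogus.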