Chatbots trigger next misinformation nightmare

News Summary
- At least some generative AI will be subject to "injection attacks," in which malicious users teach lies to the programs, which then spread them.
- The misinformation threat posed by everyday users unintentionally spreading falsehoods through bad results is also large, but less pressing.
- Yes, but: "The challenge for an end user is that they may not know which answer is correct, and which one is completely inaccurate," Chirag Shah, a professor at the Information School at the University of Washington, told Axios.
New generative AI tools like OpenAI's ChatGPT, Microsoft's BingGPT and Google's Bard, which have stoked a tech-industry frenzy, are also capable of releasing a vast flood of online misinformation.