AI technology can go quite wrong, OpenAI CEO tells Senate

News Summary
- He suggested that Congress "define capability thresholds" and place AI models that can perform certain functions under a strict licensing regime. As examples, Altman said that licenses could be required for AI models "that can persuade, manipulate, influence a person's behavior, a person's beliefs," or "help create novel biological agents."
- "Before we released GPT-4, our latest model, we spent over six months conducting extensive evaluations, external red teaming, and dangerous capability testing," Altman said. He also said that people should be able to opt out of having their personal data used for training AI models.
- Montgomery said that Congress should clearly define the risks of AI and impose "different rules for different risks," with the strongest rules "applied to use cases with the greatest risks to people and society."
- "No person anywhere should be tricked into interacting with an AI system... the era of AI cannot be another era of move fast and break things," she said.She also said the US should quickly hold companies accountable for deploying AI "that disseminates misinformation on things like elections.
- Altman said it would be simpler to require licensing for any system above a certain threshold of computing power, but that he would prefer to draw the regulatory line based on specific capabilities. OpenAI consists of both nonprofit and for-profit entities.
- "Altman said he doesn't think burdensome requirements should apply to companies and researchers whose models are much less advanced than OpenAI's.
[Image caption: OpenAI CEO Sam Altman testifies about AI rules before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law on May 16, 2023, in Washington, DC.]