Gary Marcus is happy to help regulate AI on behalf of the U.S. government
News Summary
- I thought it was interesting that weapons didn’t come up. We covered a bunch of ground, but there are lots of things we didn’t get to, including enforcement, which is really important, and national security and autonomous weapons and things like that.
- Maybe you want to have some kind of licensing around things that are going to be deployed at very large scale and that carry particular risks, including security risks.
- Maybe they did, but the decision process with that or, say, Bing, is basically just: a company decides we’re going to do this. But some of the things that companies decide might carry harm, whether in the near future or in the long term.
- I interviewed Brooks on stage back in 2017, and he said then that he didn’t think Elon Musk really understood AI and that Musk was wrong that AI was an existential threat.
- I think Rod and I share skepticism about whether current AI is anything like artificial general intelligence.
- So I think governments and scientists should increasingly have some role in deciding what goes out there [through a kind of] FDA for AI where, if you want to do widespread deployment, first you do a trial.
On Tuesday of this week, neuroscientist, founder and author Gary Marcus sat between OpenAI CEO Sam Altman and Christina Montgomery, IBM’s chief privacy and trust officer, as all three testified before a Senate Judiciary subcommittee.