Senators heard from prominent technology executives and others in private Wednesday on how to accomplish the potentially impossible task of passing bipartisan legislation within the next year that encourages the rapid development of artificial intelligence and mitigates its biggest risks.
Transcript
00:00 [laughter]
00:06 We had a diverse group of participants.
00:10 They talked to each other, unvarnished.
00:13 Everyone learned from everybody else.
00:15 And so I am really pleased.
00:18 As some of the people who came out said, it was historic.
00:22 We got some consensus on some things.
00:24 First, I asked everyone in the room, is government needed to play a role in regulating AI?
00:32 And every single person raised their hands, even though they had diverse views.
00:39 So that gives us a message here, that we have to try to act, as difficult as the process
00:44 may be.
00:45 Elon, what did you say to the Senator, sir?
00:48 The key point was really that it's important for us to have a referee, just as you have
00:55 a referee in a sports game, or all sports games, and that the games are better for it,
00:59 to ensure that the players obey the rules, play fairly.
01:06 I think it is important for similar reasons to have a regulator, which you can think of
01:11 as a referee, to ensure that companies take actions that are safe and in the interest
01:16 of the general public.
01:17 Some sort of AI regulatory agency that stands on its own,
01:21 similar to the FAA or FCC, seems likely at some point.
01:24 Do you think so?
01:25 I think so.
01:26 The reason that I've been such an advocate for AI safety in advance of anything terrible
01:34 happening is that I think the consequences of AI going wrong are severe.
01:40 So we have to be proactive rather than reactive.
01:43 In the past, if you take, say, and speaking of regulators, I'm running somewhat late,
01:48 I'm a little late for the FAA, I'm meeting with the FAA administrator.
01:50 We don't want to hold you up.
01:51 No, sir.
01:52 But if you take the example of, say, seatbelts, seatbelts were opposed by the auto industry
02:00 for a very long time, even though the data was very clear that they're safe, that they'd
02:04 radically reduce injuries.
02:09 So we don't want to be in that situation where we're fighting regulations even though there's
02:14 a safety thing.
02:16 We can't wait for millions of people to die in auto accidents.
02:22 And it's important to just elevate the question here.
02:24 The question is really one of civilizational risk.
02:28 So it's not like one group versus another, one group of humans versus another.
02:32 It's like, hey, this is something that's potentially risky for all humans everywhere.
02:37 And it's important to understand that.
02:40 Huge opportunities, curing cancer, education, and huge risks.
02:48 And I think that was the big tension: the risks people are talking about are civilization-sized
02:53 risks.
02:54 Maybe not so big in terms of percentage, but very smart people have concerns about some
03:02 very big risks.
03:04 And then the other tension, I think, in the room was open source.
03:12 Is this open to everybody, versus a more closed system?
03:16 And it's the same issues with regard to risk and reward.
03:21 And then finally, one area of agreement, American leadership.
03:24 It's critical.
03:25 It won't happen without American leadership.
03:28 You don't want China leading on this.
03:29 I think that's, I would say, unanimous.
03:35 Regulations are on the table.
03:38 And I believe they are a reality in this field.
03:41 It's a brand new field.
03:43 And I think it needs some guidance.
03:44 Anything that stood out to you the most during the meeting?
03:45 Well, I looked at the diversity of the people on the panel.
03:52 And they represent all segments.
03:54 And they seem to be in general agreement about moving forward together.