At Thursday's Senate Commerce Committee hearing, Sen. Ted Cruz (R-TX) questioned OpenAI CEO Sam Altman about ChatGPT, DeepSeek, and a range of other topics.
00:00All right. I have a few more questions and then we will wrap up. Mr. Altman, what has been the most surprising use for ChatGPT you've seen? What are applications that you're seeing that are surprising?
00:17You know, people message ChatGPT billions of times per day, so they use it for all sorts of incredibly creative things. I will tell one personal story, which is, as I mentioned earlier, I recently had a newborn.
00:27Clearly people did it, but I don't know how people figured out how to take care of newborns without ChatGPT. That has been a real lifesaver.
00:35So I will tell you a story that I've told you before, but my teenage daughter several months ago sent me this long detailed text and it was emotional and it was really well written.
00:47And I actually commented, I'm like, wow, this is really well written. She said, oh, I used ChatGPT to write it.
00:51I'm like, wait, you're texting your dad and you don't... It is something about the new generation, that it is so seamlessly integrated into life. She's sending an email, she's doing whatever, and she doesn't even hesitate to think about going to ChatGPT to capture her thoughts.
01:10I have complicated feelings about that.
01:12Well, use the app and then tell me what you're doing.
01:16Okay.
01:20Google just revealed that their search traffic on Safari declined for the first time ever.
01:30They didn't send me a Christmas card.
01:32Will ChatGPT replace Google as the primary search engine and if so, when?
01:43Probably not.
01:45I mean, I think some use cases that people use search engines for today are definitely better done on a service like ChatGPT, but Google is like a ferocious competitor.
01:56They have a very strong AI team, a lot of infrastructure, a very well-protected business, and they're making great progress putting AI into their search.
02:11All right.
02:11So, a question. I have spent a lot of time talking to business leaders, CEOs in the tech space, AI, and one question that I've asked that I get different answers on, and I'm curious what the four of you say.
02:25How big a deal was DeepSeek?
02:28Is it a major, seismic, shocking development from China?
02:33Is it not that big a deal?
02:34Is it somewhere in between?
02:36And what's coming next?
02:37And let's hear from each of the four of you.
02:40Not a huge deal.
02:42There are two things about DeepSeek.
02:43One is that they made a good open source model, and the other is that they made a consumer app that for the first time briefly surpassed ChatGPT as the most downloaded AI tool, maybe the most downloaded app overall.
02:54There are going to be a lot of good open source models, and clearly there are incredibly talented people working at DeepSeek doing great research, so I'd expect more great models to come.
03:05Hopefully also us and some of our colleagues will put out great models too.
03:08On the consumer app, I think if the DeepSeek consumer app looked like it was going to beat ChatGPT and our American colleagues' apps as sort of the default AI systems that people use, that would be bad.
03:26But that does not currently look to us like what's happening.
03:31I would say it's somewhere in between, Chairman Cruz.
03:35When you think about what we learned, what we learned is there are different ways of doing things.
03:40So we have lots of incredibly innovative people in the United States.
03:46American models are clearly the best by far.
03:48However, when you have constraints that are placed, there are other ways of doing things, and I think we learned a few things in the process.
03:55I think the open source nature of DeepSeek was one of the things that probably was most impactful, in just terms of how much can be done in an open source type of model and open ecosystem.
04:06But clearly, you know, the United States is leading and we need to continue, as we said, to accelerate innovation and adoption as you started this hearing with.
04:18I think DeepSeek did a lot of things.
04:23One of the things that it did was it sort of raised the specter of China's AI capability to a much broader audience than was perhaps focused on it prior to that, right?
04:41And so you saw that kind of reverberate through the financial markets.
04:44You saw, like, broad-based reaction, and suddenly everyone knows what DeepSeek is, and the fact that China is not just theoretically in the race for AI dominance but actually is very much a formidable competitor.
04:58And so, you know, it was a starting gun in some ways for the broader population and kind of maybe the broader consciousness of the fact that this is not a fait accompli and that we're going to have to work as America together to kind of propel our solutions forward.
05:26And so I think that was one of the lasting impacts that we will see from that.
05:33I would say, like Lisa, that somewhere in between, it wasn't shocking.
05:39I mean, it was one of a number of startups that we were following in China that we saw as having the potential to be innovative in this space.
05:48I do think there's a really interesting and important point that constraints encourage innovation in other ways.
05:58And I just think one of the interesting facts about DeepSeek is that of their, say, 200 or more employees, that was their size when they released these models,
06:11almost all of their employees, by design, were four years or less out of university.
06:19They wanted to hire people that would not bring to their work traditional ways of doing things.
06:26So the kids are taking over the world?
06:29To every generation.
06:30Related to that, were you finished with that, Mr. Smith?
06:39Related to that, we talked at the outset about the AI diffusion rule being rescinded, which I'm glad.
06:47I think it was a bad rule.
06:48I think it was overly complex.
06:50I think it put on a number of our trading partners unfair restrictions.
06:54And so I'm glad that the president is rescinding it.
06:59That doesn't necessarily mean that there should be no restrictions.
07:02And there are a variety of views on what the rules should be concerning AI diffusion.
07:12NVIDIA has argued that we want American chips everywhere, even in China.
07:17Others have argued that we want to restrict at least the most advanced processors.
07:23I'm curious, for each of the four of you: what do you think the rule, if anything, should be to replace the AI diffusion rule?
07:32And Mr. Altman, we'll start with you.
07:35I also was glad to see that rescinded.
07:38I agree there will need to be some constraints.
07:40But I think if the sort of mental model is winning diffusion instead of stopping diffusion, that directionally seems right.
07:48That doesn't mean there's no guardrails.
07:49It doesn't mean we say, like, we're going to go build a bigger data center in some other country than the U.S.
07:54Our intention is to build our biggest and best data centers in the U.S., do training in the U.S., build models here, have our core research here.
08:01But then we do want to build inference centers with our partners around the world.
08:05And we've been working with the U.S. government on that.
08:07I think that'll be good.
08:08To this point, that influence comes from people adopting U.S. products and services up and down the stack, maybe most obviously if they're using ChatGPT versus DeepSeek,
08:19but also if they're using U.S. chips and U.S. data center technology and all of the amazing stuff Microsoft does, that's a win for us.
08:26And I think we should embrace that, but make sure that, you know, the most critical stuff, the creation of these models that will be so impactful, that should still happen here.
08:34Dr. Su?
08:37I think we would totally agree with the concept that some restrictions are necessary.
08:44This is a matter of national security as much as it is about AI diffusion.
08:48That being the case, we were happy to see the rescinding as well.
08:52And, you know, we view this as an opportunity to really simplify, right?
08:55At the end of the day, you know, we've talked about the need to drive widespread adoption of our technology and our ecosystem.
09:02You know, simple rules that can be, you know, easily applied that really allow our allies to, you know, protect our technology while still utilizing the best that the United States has to offer,
09:15I think is a good start in terms of where we're going.
09:18And, you know, again, this is an area where I think the devil's in the details and it requires a lot of balance.
09:24And so, from an industry standpoint, you know, it's our job to put on the broader hat and work hand-in-hand with the administration and Congress to, you know, make our best recommendations so that it is a policy that has some stability as we go forward as well.
09:39Mr. Intrator.
09:40So, I'll echo what Sam and Lisa said, but, you know, national security is paramount.
09:50And then once you've addressed the limitations around national security, the opportunity to work with regulators to put together a regulatory framework beyond that makes a lot of sense.
10:05And the diffusion rule didn't allow us that opportunity to participate fully enough to feel like we're going to come away with what would be an optimal outcome at this point.
10:15Mr. Smith.
10:16I think we've all discussed the right recipe: simplify, eliminate these tier two quantitative restrictions that undermine confidence in and access to American technology,
10:29but enable even the most advanced GPUs the country has to be exported to data centers that are run by a trusted provider,
10:39that meet certain security standards.
10:41That means both physical and cybersecurity standards, that there is protection against diversion of the chips,
10:48and there are precautions against certain uses.
10:52And that means two things.
10:53One is that there are controls in place to ensure that, say, the PLA, the Chinese military,
11:00isn't accessing and using these advanced models or advanced chips, you know, in a data center, regardless of the country that it's in.
11:09And there are certain harmful uses that one should want to prohibit and preclude, like using a model to create the next pandemic,
11:19a biological weapon, a nuclear weapon.
11:22And I think that there is an approach that is coming together that can be retained and can move forward and that strikes the right balance.
11:31Okay, final question for each of you.
11:35Would you support a 10-year learning period on states issuing comprehensive AI regulation or some form of federal preemption to create an even playing field for AI developers and deployers?
11:48I'm not sure what a 10-year learning period means, but I think having one federal approach focused on light touch and an even playing field sounds great to me.
12:02Aligned federal approach with, you know, really thoughtful regulation would be very, very much appreciated.
12:09I agree with both of my colleagues.
12:14Yeah, I think that builds, obviously, on the op-ed that you and Senator Graham published last year.
12:20And I think giving the country time; your analogy, your example, was how this worked for the Internet.
12:25There's a lot of details that need to be hammered out.
12:30But giving the federal government the ability to lead, especially in the areas around product safety and pre-release reviews and the like, would help this industry grow.
12:42Well, I want to thank you.