Thoughts on Lex Fridman's interview with Sam Altman
An unimpressive interview with an impressive man
Lex Fridman interviewed Sam Altman, the CEO and one of the founders of OpenAI. OpenAI is the company responsible for ChatGPT and GPT-4, so naturally their discussion revolved around recent developments in large language models and the GPT product.
I’ve written several posts on individual Lex Fridman interviews. His earlier interviews were very good. He obviously did his research on the person being interviewed and had a reasonable understanding of technical matters in the interviewee’s field. He always had quirks, mainly espousing the power of love and empathy, but they weren’t all-encompassing. In recent interviews, though, Lex leans more on these tropes to cover for a lack of technical insight.
I thought the interview with Altman would be strong because Fridman is (was?) an AI researcher, so they would be able to delve into technical details that other interviewers wouldn’t get out of Altman.
But the interview left a lot to be desired, although I still recommend listening to it. I won’t do a full breakdown of everything said, but I’ll give some highlights and cringe moments I found memorable.
Leading the interviewee
This is common amongst interviewers who want to interject their own thoughts and lead the interviewee to a certain point. Here’s an example [00:43:10]:
Lex: Do you feel pressure from clickbait journalism that looks at 10,000, that looks at the worst possible output of GPT, do you feel a pressure to not be transparent because of that?
Sam: No.
Lex: Because you're sort of making mistakes in public and you're burned for the mistakes. Is there a pressure culturally within OpenAI that you're afraid, it might close you up a little bit?
Sam: I mean, evidently there doesn't seem to be, we keep doing our thing, you know?
Lex: So you don't feel that, I mean, there is a pressure
Sam: I'm sure it has all sorts of subtle effects. I don't fully understand, but I don't perceive much of that. I mean, we're happy to admit when we're wrong. We wanna get better and better. I think we're pretty good about trying to listen to every piece of criticism, think it through, internalize what we agree with, but like the breathless clickbait headlines, you know, try to let those flow through us.
This is a small gripe, but if you ask the interviewee whether there is pressure and he says no, don’t tell him there obviously is pressure. Let him answer the question. The lead-up is unnecessary because it guides the interviewee toward a position that may or may not reflect his own beliefs.
Lex asks whether he feels pressure, then reminds him that he’s making mistakes in public and getting burned for them. It’s just bad interviewer form. It’s fine to push back in a conversation, but Lex conducts his podcast more as an interview with scripted questions, where that’s less appropriate. Here I miss Larry King-style interviews, where you ask one simple question (no lead-up) and the rest of the interview flows from there.
Consciousness
[1:07:09]
Lex: Do you think GPT-4 is conscious?
Sam: I think no, but.
Lex: I asked GPT-4 and of course it says no. No.
Sam: Do you think GPT-4 is conscious?
Lex: (long dramatic pause) I think it knows how to fake consciousness, yes.
Sam: How to fake consciousness?
Lex: Yeah, if you provide the right interface and the right prompts. It definitely can answer as if it were. Yeah, and then it starts getting weird. It's like, what is the difference between pretending to be conscious and conscious
I wish Lex had let him continue the “but”. I don’t think any serious AI researcher believes LLMs are conscious. They may argue about whether they’re “intelligent” or just playing probabilistic tricks, but the conversation about consciousness is kind of silly at this point.
What does it mean that it “fakes consciousness”? Lex doesn’t explain, and he admits just before that when you ask GPT whether it’s conscious, it replies no, which is the opposite of faking consciousness.
Does expressing meaningful output or ideas imply consciousness? Are calculators conscious? How about books? It’s a weird take for an AI researcher to have.
Proposed test for consciousness
[1:09:12]
Sam: We [Altman and Ilya, his cofounder] were talking about how you would know if a model were conscious or not. And I've heard many ideas thrown around, but he said one that I think is interesting. If you trained a model on a data set that you were extremely careful to have no mentions of consciousness or anything close to it in the training process, like not only was the word never there, but nothing about this sort of subjective experience of it or related concepts. And then you started talking to that model about here are some things that you weren't trained about. And for most of them, the model was like, I have no idea what you're talking about, but then you asked it up. You sort of described the experience, the subjective experience of consciousness and the model immediately responded, unlike the other questions. Yes, I know exactly what you're talking about. That would update me somewhat
This was an interesting insight into a test for consciousness. Obviously it’s not definitive, but it points to the idea that an LLM is trained on a large corpus of data that necessarily shapes what its purported beliefs are.
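To make the data-curation step of that thought experiment concrete, here is a minimal sketch of filtering consciousness-related documents out of a training corpus. This is my own hypothetical illustration, not anything Altman or OpenAI described doing; the term list and function names are made up.

```python
import re

# Hypothetical list of terms to exclude (illustrative only).
BLOCKED_TERMS = [
    "conscious", "consciousness", "sentient", "sentience",
    "self-aware", "subjective experience", "qualia",
]

# Match any blocked term, case-insensitively.
BLOCKED_PATTERN = re.compile(
    "|".join(re.escape(term) for term in BLOCKED_TERMS), re.IGNORECASE
)

def filter_corpus(documents):
    """Keep only documents with no surface-level mention of the blocked terms."""
    return [doc for doc in documents if not BLOCKED_PATTERN.search(doc)]

# Example: the second document would be dropped from the training set.
corpus = [
    "The cat sat on the mat.",
    "Philosophers debate whether qualia can be measured.",
]
print(filter_corpus(corpus))  # ['The cat sat on the mat.']
```

Of course, the hard part of the test as Altman describes it is scrubbing not just the word but every related concept and description of subjective experience; a keyword filter like this would only be the crudest first pass.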
Open source
Sam: Do you think we should open source GPT-4?
Lex: My personal opinion, because I know people at OpenAI, is no.
Sam: What does knowing the people at OpenAI have to do with it?
Lex: Because I know they're good people. I know a lot of people. I know they're good human beings. From a perspective of people that don't know the human beings, there's a concern. There's a super powerful technology in the hands of a few that's closed.
Sam: It's closed in some sense, but we give more access to it. If this had just been Google's game, I feel it's very unlikely that anyone would have put this API out. There's PR risk with it. I get personal threats because of it all the time. I think most companies wouldn't have done this. So maybe we didn't go as open as people wanted, but we've distributed it pretty broadly
I don’t care that Fridman doesn’t think it should be open sourced, but the reasoning is just stupid. You don’t want it to be open sourced because you trust the people? Sam even wondered why that was relevant.
Presumably, if OpenAI were not run by good people, Fridman would want it open sourced. But there are good and bad people outside the group too, so open sourcing would hand the model to everyone. The original bad group would still have access even after it was open sourced, and new bad people would gain access as well. So the point makes no sense. It’s as though Fridman never thought about the question deeply and just relied on a simple trope. If he didn’t have an opinion, he should have just said so.
The obligatory Fridman love question
[2:16:09]
Lex: What if it's capable of love? Do you think there will be romantic relationships like in the movie Her or GPT?
Sam: There are companies now that offer, like for lack of a better word, like romantic companionship AIs.
Lex: Replica is an example of such a company.
Sam: Yeah, I personally don't feel any interest in that.
Lex: So you're focusing on creating intelligent tools.
Sam: But I understand why other people do.
Lex: That's interesting.
This is just boring. What does it even mean for a bunch of numbers digitally represented in silicon to “feel love”?
Questions I wish were asked
Altman mentioned that the technical leap from ChatGPT to GPT-4 was a hundred small improvements, and that OpenAI was good at stringing small breakthroughs together into a better product. I wish Fridman had followed up on that idea and asked for some examples. Other questions I would have liked:
Are you still getting juice from training on the current data set?
How do you incorporate new data into the model?
What technical evaluations do you perform when comparing models?
Are you still seeing benefits from increases in model size/parameter count?
Ask about plugins. The taping might have been before the announcement, but integrations were pretty obvious.
How many parallel models are you currently building? Do you have candidates for the next version? How much experimentation is there, and what kind? Anything radically different?
These are the questions I would hope someone with even a cursory knowledge of AI and LLMs would ask. Unfortunately, Lex would rather talk about whether the model can feel love.