Letters #168/169: Daniel Kahneman (2017/2015)
Nobel Prize Winner and Author of Thinking, Fast and Slow | NBER Economics of AI Remarks | McGill Honorary Degree Speech
Hi there! Welcome to A Letter a Day. If you want to know more about this newsletter, see “The Archive.” At a high level, you can expect to receive a memo/essay or speech/presentation transcript from an investor, founder, or entrepreneur (IFO) each edition. More here. If you find yourself interested in any of these IFOs and wanting to learn more, shoot me a DM or email and I’m happy to point you to more or similar resources.
If you like this piece, please consider tapping the ❤️ above or subscribing below! It helps me understand which types of letters you like best and helps me choose which ones to share in the future. Thank you!
Earlier this week, Daniel Kahneman passed away. Today, I’d like to share two transcripts of talks he gave relating to the nature of humans—the first on how technological advancements will affect humanity in the future and the second about how humans can improve themselves in the present. The first is the transcript of remarks Daniel gave in 2017 at the NBER Economics of AI conference, where he first shared his thoughts on the preceding speaker’s presentation on AI and behavioral economics, before diving into his observations and learnings from the previous day’s talks, most notably the belief that machines will be able to do anything humans can do. The second is the transcript of a speech he gave at McGill University after receiving an honorary doctorate (one of over 20), where he shares the most important psychological principle he could describe in about four minutes, the one with the greatest potential for changing your life for the better, applicable equally to your own behavior and someone else’s.
Daniel Kahneman was a psychologist best known for his contributions to the field of behavioral economics. Notably, he received the Nobel Prize in Economics and authored the critically acclaimed book Thinking, Fast and Slow. He was one of the most influential thinkers on Wall Street and in Silicon Valley.
I hope you enjoy these two talks as much as I did!
[Transcriptions and any errors are mine.]
Transcript 1
So that was my conclusion from yesterday. I couldn't understand most of what was going on, and yet I had the feeling I was learning a lot. Now, I'll have some remarks about Colin, and then some remarks about the few things that I noticed yesterday that I could understand.
I certainly agree with Colin about--I think it's a lovely idea that if you have massive data and you use deep learning, you will find out much more than your theory, in general. And I would hope that machine learning can be a source of hypotheses--that is, that some of these variables that you identify are genuinely interesting. At least in my field, the bar for successful, publishable science is very low. We consider theories confirmed even when they explain very little of the variance. If they yield statistically significant predictions, we treat the residual variance as noise. And a deeper look into the residual variance, which machine learning is good at, is clearly an advantage.
So, as an outsider, actually, I have been surprised not to hear more about that, about the superiority of AI, to what people can do. Perhaps as a psychologist, this is what interests me most. Now, I'm not sure that new signals will always be interesting, but I suppose that some may lead to new theory, and that would be useful.
Now, I don't really fully agree with Colin's second idea, that it's useful to view human intelligence as a weak version of artificial intelligence. There--certainly there are similarities, and certainly you can model some of human overconfidence in that way. But I do think that the processes that occur in human judgment are really quite different--the processes that produce overconfidence.
Now, I left myself time for some remarks of my own on what I learned yesterday. And one of the recurrent issues, both in talks and in conversations, was whether AI can eventually do whatever people can do. And will there be anything that is reserved for human beings? And frankly, I don't see any reason to set limits on what AI can do. We have in our heads a wonderful computer. It's made of meat, but it's a computer. It's extremely noisy. It does parallel processing. It is extraordinarily efficient. But there is no magic there. And so it's very difficult to imagine that with sufficient data, there will remain things that only humans can do.
Now, the reason that we see so many limitations, I think, is that this field is really at the very beginning. I mean, we're talking about a development, deep learning, that took off--I mean, the idea is old, but the development took off eight years ago, so that's the sort of landmark date that people are mentioning. And that's nothing.
You have to imagine what it might be like in 50 years. Because the one thing that I find extraordinarily surprising and interesting in what is happening in AI these days is that everything is happening faster than was expected. So people were saying that it would take 10 years for AI to beat Go, and the interesting thing is it took eight months. So this excess of speed at which the thing is developing and accelerating, I think, is very remarkable. So setting limits is certainly premature.
One point that was made yesterday was about the uniqueness of humans when it comes to evaluations. It was called judgment here. In my jargon, it's evaluation, evaluation of outcomes, and, basically, the utility side of the decision function. And I really don't see why that should be reserved to humans. On the contrary, I'd like to make the following argument: people--the main characteristic of people is that they are very noisy. You show them the same stimulus twice, they don't give you the same response twice. You show the same choice twice--I mean, that's why we have stochastic choice theory, because there is so much variability in people's choices, given the same stimuli.
Now, what can be done with AI--it can be done even without AI--is a program that observes an individual's choices. Such a program will be better than the individual at a wide variety of things. In particular, it will make better choices for the individual, because it will be noise free. And that we know from the literature that Colin cited, the literature from Meehl, on predictions.
There's an interesting tidbit: if you take clinicians, and you have them predict some criterion a large number of times, and then you develop a simple equation that predicts not the outcome but the clinician's judgment, that model does better in predicting the outcome than the clinician himself. That is fundamental. This is telling you that one of the major limitations on human performance is not bias--it is just noise.
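[This is the classic "model of the judge" (bootstrapping) result from the Meehl tradition that Kahneman is describing. The following is my own minimal simulation, not anything from the talk; the cue weights and noise levels are made up purely for illustration.]

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Two cues the clinician observes, and a criterion partly driven by them.
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
criterion = 0.6 * x1 + 0.4 * x2 + rng.normal(scale=1.0, size=n)

# The clinician weights the cues sensibly, but adds trial-to-trial noise.
clinician = 0.6 * x1 + 0.4 * x2 + rng.normal(scale=1.0, size=n)

# "Model of the judge": least-squares fit predicting the clinician's own
# judgments from the cues -- it captures the weights but not the noise.
X = np.column_stack([x1, x2, np.ones(n)])
w, *_ = np.linalg.lstsq(X, clinician, rcond=None)
model_of_judge = X @ w

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print("clinician vs criterion:", corr(clinician, criterion))
print("model     vs criterion:", corr(model_of_judge, criterion))
```

Because the model applies the clinician's average weights without the trial-to-trial variability, its correlation with the criterion comes out reliably higher than the clinician's own.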
And I'm maybe partly responsible for this, but people, now, when they talk about error, tend to think of bias as an explanation. That's the first thing that comes to mind. Well, this is a bias, and it is an error. And in fact, most of the errors that people make are better viewed as random noise. And there's an awful lot of it. And admitting the existence of noise means something--it has implications for practice. And one implication is obvious: that you should replace humans by algorithms whenever possible. And this is really happening.
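[A quick way to see why noise alone can dominate error is the standard decomposition of mean squared error, MSE = bias² + noise variance. The numbers below are my own toy illustration, not Kahneman's: one judge is biased but consistent, the other unbiased but noisy.]

```python
import numpy as np

rng = np.random.default_rng(1)
true_value = 100.0

# Judge A: biased (+5) but consistent. Judge B: unbiased but very noisy.
judge_a = true_value + 5.0 + rng.normal(scale=1.0, size=10_000)
judge_b = true_value + 0.0 + rng.normal(scale=8.0, size=10_000)

def decompose(judgments, truth):
    """Split mean squared error into squared bias plus noise variance."""
    bias_sq = (judgments.mean() - truth) ** 2
    noise_var = judgments.var()
    mse = np.mean((judgments - truth) ** 2)
    return bias_sq, noise_var, mse

for name, j in [("biased/consistent", judge_a), ("unbiased/noisy", judge_b)]:
    b2, nv, mse = decompose(j, true_value)
    print(f"{name}: bias^2={b2:.1f}  noise={nv:.1f}  mse={mse:.1f}")
```

The unbiased-but-noisy judge ends up with far larger total error than the biased-but-consistent one, which is the sense in which removing noise can matter more than removing bias.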
Even when the algorithms don't do very well, humans do so poorly, and are so noisy, that just by removing the noise, you can do better than people. And the other implication is that when you can't replace humans, you try to have humans simulate the algorithm. The idea is that by enforcing regularity, process, and discipline on judgment and on choice, you reduce the noise and you improve performance, because noise is so poisonous.

Now, Yann LeCun said yesterday that humans would always prefer emotional contact with other humans. That strikes me as probably wrong. It is extremely easy to develop stimuli to which people will respond emotionally. A face, an expressive face, a face that changes expressions--that will [buy] people, especially if it's sort of baby shaped. So there are cues that will make people feel very emotional. Robots will have these cues. Furthermore, it is already the case that AI reads faces better than people do, and undoubtedly it will be able to predict emotions, and developments in emotions, far better than people can. And I really can imagine that one of the major uses of robots will be taking care of the old, because I can imagine that many old people will prefer to be taken care of by robots--by friendly robots that have a name, that have a personality, that are always pleasant. They will prefer that to being taken care of by their children.

Now, I want to end on a story. A well-known novelist--I'm not sure he would appreciate my giving his name--wrote me some time ago that he's planning a novel. And the novel is about a love triangle between two humans and a robot.
And what he wanted to know is, How would the robot be different from the individuals? And I proposed three main differences. One is obvious: the robot will be much better at statistical reasoning, and less enamored with stories and narratives than people are. The second is that the robot would have much higher emotional intelligence. And the third is that the robot would be wiser. And wisdom is breadth. Wisdom is not having too narrow a view. That's the essence of wisdom. It's broad framing. And a robot will be endowed with broad framing. When it has learned enough, it will be wiser than we are, because we don't have broad framing. We're narrow thinkers, we're noisy thinkers. It's very easy to improve upon us. And I don't think that there is very much that we can do that computers will not eventually be programmed to do. Thank you.
Transcript 2
Well, it's a pleasure and an honor to be here. I'm a psychologist, and I like to talk about things I know something about. So I looked for the most important psychological principle that I could describe in about four minutes, the one with the greatest potential for changing your life for the better.
The principle applies whenever you want to change someone's behavior. And it applies equally well when that someone is you or somebody else. Furthermore, it can be stated in three words: Make it easy.
When we want to change someone's behavior, our culture provides three main ways of accomplishing that goal. We are socialized to argue, to promise, and to threaten. We use arguments to convince people of the errors of their current ways, and of the great benefits of doing things our way. We promise them rewards if their behavior changes as we wish it to change, and we warn them of bad consequences if they do not change. These are all natural ways to answer what seems to be the relevant question: How do I get this person, or this group, to do what I want them to do?
But there is a fourth approach, which starts from two different questions. The first one is, Why isn't this person already doing the wonderful thing I want her to do? What is keeping her from doing the right thing? And how could I change this person's situation so that it would be easy for her to do as I hope? This principle has been around in psychology for a long time. I learned a version of it when I was an undergraduate a very long time ago. And in recent years, it has been adopted by the thriving field of behavioral economics. And it has acquired a new name, which has become part of everyday language. It's called nudging.
The most famous application of nudging has to do with how to encourage people to save more for the future. It is widely known that people in the West, certainly in the US and probably in Canada as well, save less than they want to. Indeed, they plan to save more starting next year. But next year somehow never comes. Why is it, then, that people don't save more than they do, and as much as they would like to? The answer is: human nature.
Most of us are much less interested in the future than in the present. Most of us also dislike the idea of cutting our consumption. And we are too lazy to take the actions that are needed to organize a higher saving rate. And this is why people are trapped in a situation that they would actually like to get out of.
Now, the inventor of the idea of nudging, Richard Thaler--you should all buy his very recent book Misbehaving--had a brilliant idea. And the idea was to create a situation in which people's laziness and lack of passion about the future can be harnessed to make it easy to adopt a higher rate of saving.

Employees in an organization were offered a plan that is now known as Save More Tomorrow. In that plan, they do not increase their saving rate immediately. They only commit themselves to increase their saving rate by 2% of their salary the next time they get a raise, the saving to be deducted automatically from their paycheck. The series of automatic increases will continue until the employee decides to stop it, at which point the saving rate will stop increasing.

Note how this works. The increased saving will take place later, not now. It will not involve a cut in consumption, because it will be associated with a raise. And it requires no action, because it will happen automatically. On the contrary, laziness will work to increase the saving rate, because an action is required to opt out of the program.

The saving rate in that organization actually increased from an average of about 3% to 11%. Nudging is a very powerful force. Millions of workers in several countries are now willingly enrolled in Save More Tomorrow plans. People are actually happy to enroll in a plan that will cause them to do what they wish to do, which is to save more, without putting any load on their willpower. They save more when it is made easy to save more. There are many similar nudges, and you get the general idea: Make it easy.
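[The mechanics Kahneman describes reduce to a simple schedule: the saving rate steps up only at each raise, until the employee opts out. Here's my own sketch of that schedule; the 3% start and 2-point steps are from the talk, while the optional cap parameter is my illustrative addition.]

```python
def smart_saving_rate(start_rate, step, raises_until_opt_out, cap=None):
    """Save More Tomorrow schedule: the rate rises by `step` percentage
    points at each raise, until the employee opts out (or hits a cap)."""
    rate = start_rate
    schedule = [rate]
    for _ in range(raises_until_opt_out):
        rate += step
        if cap is not None:
            rate = min(rate, cap)
        schedule.append(rate)
    return schedule

# Four raises at +2 points each take a 3% saver to 11% -- roughly the
# average shift Thaler observed.
print(smart_saving_rate(3, 2, 4))  # [3, 5, 7, 9, 11]
```

No willpower is consumed anywhere in this schedule: the default does all the work, and stopping it is the only step that requires an action.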
Now, I promised you a principle that can be applied to changing your own behavior, as well as influencing the behavior of others. So how can we nudge ourselves? How do we make ourselves improve our own behavior? Here, the make-it-easy principle takes a slightly different form. How do we create a situation in which we will behave well? How do I create a situation in which I will behave well, better than I'm inclined to, and with a minimum exertion of willpower?
Willpower is a scarce resource, and we should conserve on it. Fortunately, it is often possible to create an environment in which behaving well is easy. For a cookie freak like myself, for example, it is far easier not to have cookies at home than to have them and not devour them. And even if there must be cookies at home, I'm better off if they're out of sight in a kitchen cabinet than if they are on the table. And if they are in a kitchen cabinet, I'm better off if they're on a high shelf and on the second row than if they are within easy reach. Setting up these arrangements of my environment is relatively straightforward, and certainly much easier than forcing myself to abstain from cookies that I hanker for.
I believe I have kept my promise, though I may have exceeded my four minute limit by a few seconds. I have offered you a principle that can substantially empower you when you attempt to influence others or yourselves: make it easy--and have good and productive lives. Good luck.
If you got this far and you liked this piece, please consider tapping the ❤️ above, subscribing, or sharing this letter! It helps me understand which types of letters you like best and helps me choose which ones to share in the future. Thank you!
Wrap-up
If you’ve got any thoughts, questions, or feedback, please drop me a line - I would love to chat! You can find me on twitter at @kevg1412 or my email at kevin@12mv2.com.
If you're a fan of business or technology in general, please check out some of my other projects!
Speedwell Research — Comprehensive research on great public companies including Constellation Software, Floor & Decor, Meta (Facebook) and interesting new frameworks like the Consumer’s Hierarchy of Preferences.
Cloud Valley — Beautifully written, in-depth biographies that explore the defining moments, investments, and life decisions of investing, business, and tech legends like Dan Loeb, Bob Iger, Steve Jurvetson, and Cyan Banister.
DJY Research — Comprehensive research on publicly-traded Asian companies like Alibaba, Tencent, Nintendo, Sea Limited (FREE SAMPLE), Coupang (FREE SAMPLE), and more.
Compilations — “An international treasure”.
Memos — A selection of some of my favorite investor memos.
Bookshelves — Collection of recommended booklists.