Collaborative Intelligence: Leading in the Age of AI with Shawn Bice & Nick Beim

Published
Aug 23, 2021

This week on World Reimagined, we discuss the future of AI and how leaders should be approaching and utilizing this revolutionary technology with two guests uniquely positioned to discuss this topic. Listen wherever you get your podcasts.

While the benefits of AI are undeniable, it’s not yet clear exactly how it will define the role of tomorrow’s leaders. In this episode, Host Gautam Mukunda speaks with two industry leaders with unique vantage points on how AI is transforming the world around us and redefining the future of leadership.

Shawn Bice is the president of Products and Technology at Splunk, the world’s first Data-to-Everything platform designed to remove the barriers between data and action. And, Nick Beim, partner at Venrock, is one of the world’s leading investors in AI, software, and fintech and is also on the Board of the Council on Foreign Relations.

The AI revolution will force a philosophical reexamination of how we make our most important decisions in different spheres of society... It's almost as if by encountering a new form of intelligence, we learn more about ourselves and become more conscious and deliberate about how we make decisions.
Nick Beim
I think leaders that have emotion where they understand mood, or relationships, and how they’re interacting with others is great. To me, a machine is going to have an incredible IQ, but I don’t know that it’s going to have EQ. That’s where I think that real limitation comes in.
Shawn Bice

Follow @GMukunda on Twitter or email us at WorldReimagined@nasdaq.com

Guest information for Collaborative Intelligence:

Shawn Bice is President of Products & Technology, with overall responsibility for Product, Engineering, Design, Architecture, CTO, CIO and CISO functions. With previous leadership roles at Amazon Web Services and Microsoft, he brings nearly 25 years of expertise in managing massive data operations and native cloud services at scale. Shawn is driven by a passion for curiosity, learning and applying those qualities to create new value for customers. Shawn holds a B.S. from Eastern Michigan University. Married for nearly three decades, Shawn, his wife and two sons live in the Seattle area. As a former college athlete, Shawn enjoys watching lacrosse, football and hockey.

Nick Beim is a partner at Venrock, the venture capital firm created by the Rockefeller family that helped pioneer the venture capital industry. He focuses primarily on artificial intelligence, software, financial technology and defense. He serves on the boards of directors of Dataminr, an AI platform that identifies critical breaking information from publicly available data; Interos, an AI-driven platform for 3rd party risk; Rebellion Defense, which develops AI-driven products that serve the mission of national defense; Percipient.ai, an advanced computer vision analytics platform; and Altruist, a digital investment platform for financial advisors. 

Episode transcript:

Gautam Mukunda:

To lead is to choose and to be responsible for your choices. But what does leadership mean when AI can decide faster and better than any human? Are you ready to share your responsibilities with a machine?

Speaker 1:

I think of it as trying to create a new world, the kind of world that we perhaps have always wanted to live in.

Speaker 2:

Climate change is a systemic risk to the entire economy. You cannot diversify away from it.

Speaker 3:

To intervene when your country, your company, your family need you to do so... That's leadership character.

Speaker 4:

World Reimagined with Gautam Mukunda, a leadership podcast for a changing world. An original podcast from NASDAQ.

Speaker 5:

Why do leaders fail? Unwillingness to learn, a fear of showing their vulnerability, and a fear of being themselves. Lack of authenticity.

Speaker 6:

Character of a corporation is not the personality. Character of a corporation is the integrity and the morality of a company.

Speaker 7:

So without truth and trust, there is no democracy.

Gautam Mukunda:

If you've flown on a plane it's happened to you. You've been in the air for a little while now and so far, everything's been as smooth as could be. Takeoff was perfect, no delays for once, and everything after that has been a breeze. Maybe you even had a chance to get some work done or take a nap. And then when your mind was anywhere but 30,000 feet in the air, turbulence. Not enough to alarm anybody. The oxygen masks don't drop from the ceiling like they do in the movies, but enough to remind you that you are, in fact, in a tube hurtling through thin air, suspended several miles above the earth by the laws of physics. You're not too concerned, but you grab the arm rest anyway, just out of instinct. And at that moment you assure yourself that it'll be fine because the pilot knows what they're doing. Well, I have some bad news for you because in all likelihood, the pilot isn't flying the plane at all.

Shawn Bice:

I think of a friend of mine who's a pilot. He is the leader of this aircraft flying 300 people, oftentimes across the ocean. And what my friend has told me is that... I'll never forget the story of how these pilots will put autopilot on. Now the machine has taken over and it is reasoning and making decisions to safely get you from A to B.

Gautam Mukunda:

Shawn Bice is the president of products and technology at Splunk, a NASDAQ 100 company that is the world's first data to everything platform designed to remove the barriers between data and action. We were joined in this conversation by Nick Beim, a partner at Venrock who is one of the world's leading investors in AI, software, and FinTech, and is also on the board of the Council on Foreign Relations. We wanted to chat with them both because they have different vantage points on the AI revolution. One leads a company that is on the cutting edge of the field, while the other needs to understand the system level effects of AI, both to succeed as an investor and to help guide our public policy response to it. So I wanted to ask Nick and Shawn, what is AI going to mean for leaders and for the world?

Gautam Mukunda:

There is I think no technology that's getting more attention right now than AI and the rise of AI. And I would say this is a cyclic phenomenon. It seems like almost every 25 years this becomes a huge focus of attention. We're in the midst of that now. So let me ask the two of you, just at a high level and a macro question, 10 years, 20 years from now, what will this AI-powered world look like? How will the rise of this new technology impact the way we work, we live, we connect? What do you see happening?

Shawn Bice:

I'll jump in here. And I think the possibilities are exciting. You think of an AI-powered world... I can wrap my head around AI inspecting medical images to help diagnose disease or accelerating drug discovery by possibly predicting clinical trial outcomes or flying drones automatically that can deliver things. Look what's happening in sports today with the use of real-time statistics and play predictions. You're looking at games, particularly the NFL, in a way that makes it new and exciting. Or think of AI just looking into documents to understand contents maybe like a human, but better. But I just think the possibility to make the world better is exciting.

Nick Beim:

I would agree. I'm quite optimistic on what AI can bring and I'm also optimistic about our ability to make the kind of decisions we need to to integrate it into broader social and political decision making. Maybe to add some things to what Shawn said, I think in science there'll be a growing body of things that we can predict with incredible accuracy but not yet explain or explain in a way that humans would find satisfying. So I think there's a fascinating impact on science, a new gray zone between theory and knowledge.

Nick Beim:

In politics I think regulation of AI and, relatedly, antitrust to limit AI-driven monopolies will become exceptionally important. And we really need, as a government, to get much more sophisticated on these issues. My personal view is I don't think we'll be anywhere remotely close to general artificial intelligence anytime soon.

Nick Beim:

I think maybe most interestingly the AI revolution will force a philosophical reexamination of how we make our most important decisions in different spheres of society, so in politics, in science, in business, national security, and other fields, and decide where AI can augment or replace human decision making, to what extent we let it, and why. And I think there's a very salutary impact in that. I think it'll lead to a more examined society, although no doubt getting there will involve a lot of bumps. It's almost as if by encountering a new form of intelligence we learn more about ourselves and become more conscious and deliberate about how we make decisions.

Gautam Mukunda:

As Nick mentioned, AI is now at a point where it will soon be affecting how we make decisions in all walks of life. There's not yet any danger of confusing an AI with a person, but it's also increasingly impossible to deny that AI systems are doing something that seems a lot like thinking. And it's getting better. So what will that mean for businesses?

Gautam Mukunda:

What I think is particularly striking for business leaders is that right now, at least in the way that you described it, it seems like AI has been particularly helpful for large companies, that many of them have been able to adapt their business models to it and take advantage of the incredible pools of data that they have available. Going forward do you see these sorts of technologies advantaging incumbents or disruptors?

Nick Beim:

One of the most helpful starting points is just to highlight that AI is intrinsically feudal in nature. I think in the tech world we've gotten used to software being relatively egalitarian. Lots of people could learn to code. There's explosions in software development all over the world. But what makes AI feudal is that it benefits overwhelmingly by having lots of data and this critical amount of data required to even do certain things. So I think the big consumer internet platforms have amassed the most data of any businesses in human history. And they have the ability to use AI on that data to do things that no one else can come close to achieving because they have the data.

Nick Beim:

And so I think large companies that control a lot of data in particular really do have a structural advantage that's very significant. I think this greatly accelerates the monopolistic tendencies of technology. And all of these companies are great at gathering more data. A lot of, "Let's feed the machine and get smarter as much as we can."

Nick Beim:

So AI does help the large companies the most, but interestingly those companies are now providing platform AI services to lots of other companies and there's a feudal relationship underlying it, which is "We'll give you these services," sometimes for free. But I think it's also true that AI is enabling small companies who will always be the most innovative to do vastly new things. And they can partner with other companies to use their data. They can partner with governments to use their data. They can aggregate the data in specific areas of a lot of customers that those platforms just don't have access to to do a lot of new things. I think a lot of AI capabilities become building blocks, much as software has become much more building block oriented, where they can take advantage of the big advances of the platforms and use those building blocks as a foundation to do something really special on top of.

Gautam Mukunda:

So AI is going to change the way businesses compete. And that's only one of the ways it's going to matter, because it's going to change the way we think. It will increase our abilities in some ways and decrease them in others.

Gautam Mukunda:

This happens to all of us, right? There's all this research showing that people who rely on GPS don't know their way around cities and aren't even as aware of their surroundings. And just to go further, AI is not the first technology to do this. It was routine in the Middle Ages, and going back to preliterate societies, for poets to memorize the Odyssey and the Iliad. Tens of thousands of lines of poetry. That was just something you did if that was your job of going around and being a storyteller. How long do you think you and I would have to work to memorize tens of thousands... Tens of thousands of lines!

Shawn Bice:

Yeah! Sitting there reading forever!

Gautam Mukunda:

Right. So the human capacity to perform those feats of memory was lost when we could write things down. So I tell my students over and over again that to lead is to choose. If you're going to lead, you're going to have to make choices. What does that mean for us as a leader when making choices might mean relying on a computer system to tell us what to do when our own ability to make those choices is vastly inferior to the inputs that we're getting from the model?

Shawn Bice:

This question reminds me of when I learned how to scuba dive. On paper, I'd think through how deep I was going, how long I was staying at a depth, so that when I surfaced, I did it in a way where I wouldn't get hurt. It was all manual. Point being, I had to actually do all the reasoning. Or I remember when I was younger, whenever I was doing a road trip, there was no Google or GPS; you would call AAA and get these maps. And then you would study the map. You would really take the time to understand all the routes and decisions you'd have to make. Or I even remember the first time I flew an airplane and it's not relying on GPS. It's thinking about wind and speed.

Shawn Bice:

But my point to all of that is: all of those things I have described have been replaced with computers. Now when you dive you just have a computer that does all that reasoning for you. It just tells you where you need to go and how long you need to stay. Flying an airplane, GPS. You don't call AAA for a map. You just turn on GPS and you tell Google, "Hey, here's where I'm going. Guide me there." And what I often worry about is relying on things like that so much that you become lazy in decision-making and you actually forget all that goes into these types of decisions. And I'll admit I am very lazy when it comes to... If I'm going somewhere I don't even think about roads anymore. I just type in an address. And I fear, even in the way I behave on this, that as time goes on, do I become lazy in decision-making just because a computer is going to do it for me? And then, to the question, all of a sudden I'm just assuming it's always correct and working, and when it isn't, it could be big.

Gautam Mukunda:

Almost everyone agrees that AI is going to change the world, but no one's quite sure how. During the COVID pandemic, for example, AI researchers rushed to create machine learning algorithms which could help hard-pressed doctors diagnose patients. It seemed like the perfect situation for AI to show what it could contribute in an emergency. A number of recent studies that looked at hundreds of those models, however, all concluded that not even one of them was helpful and some might've actually misled physicians and potentially harmed patients. So AI has a lot of promise, but we've often guessed wrong about where that will come to fruition.

Nick Beim:

If you think about the cognitive skills that AI has progressed through from its early days, starting with gaming, Pong, and going up through Go and mechanical skills and robotics and perception where we've seen this huge revolution over the last 10, 15 years, people were expecting a lot more progress to be made in complex decision-making, in cognition, in judgment, in logic, and ultimately in... I don't even know if this is philosophically possible. It's an interesting question. Understanding of really complex topics for AI.

Nick Beim:

Interestingly, the progress there has been remarkably slow, and there's a lot of interesting research going on, but I think if you'd asked most leaders in the field to predict where we would be in those higher-order judgment skills, people would have thought AI could go much further. So I don't think we're going to be at a point soon where complex decision-making that involves not only many variables, but particularly variables that involve an understanding of humans and human behavior, will be replaced by machines anytime soon.

Nick Beim:

I think humans will always, or for an extraordinarily long time, be better at understanding the human mind and understanding what a customer really is trying to say or really wants. There are of course lots of examples where AI can see things in human behavior that we can't. But when it comes to holistic understanding of, "What's really going on here?" or, "How do I really achieve this end as a leader?" I think it's going to be a long time before AI can take on really meaningful decision-making roles.

Shawn Bice:

And also when you think about a leader relying on a computer or a machine to do something, what I find interesting in that context is a buddy of mine, this pilot, in all of his stories where, "Hey, all of a sudden, I'm coming in for a landing and I'm turning autopilot off immediately because I have to start making the decisions," I think that's a good real world example of, "Okay, now the human being, the pilot, has taken over because he or she now needs to reason and make decisions."

Shawn Bice:

And sometimes I think that's because humans have emotion. And emotion allows you to understand your circumstance, a mood, or the relationship with your copilot, passengers, what have you. And I just don't know that machines will ever have emotion. I don't know that they're capable of having emotion, but when you add that emotional factor into when that plays into a decision in the right way... I don't know. I just think it's such an interesting concept. Why is it that, using a pilot example, our pilot's immediately turning off autopilot and taking control of an aircraft when the most critical decision-making needs to be done?

Gautam Mukunda:

So automation does a lot of the work in flying planes. And for a long time we thought it would play the same role in cars.

Gautam Mukunda:

I remember one of my colleagues when I was at [inaudible 00:15:52] school said to me he was certain that he would never teach his kid to drive. That was about five years ago. I don't think anyone would say that now. So what went wrong? Why was there such a huge consensus that this technology was going in this way and it didn't pan out, so much so that companies invested billions of dollars into it? And what can we learn from that example about thinking through the future impacts of these sorts of technologies?

Shawn Bice:

I think in this particular case it's a good example of a practical outcome: a self-driving car that's going to make decisions better than humans and potentially be safer than humans... I understand that, but I think the part where we might've, I don't know, maybe gotten a little too far out over our skis on this one is that such a small number of people actually understood what was truly going on for a car to have a radar and reason about speed and things that are in proximity and reason all of that into when it would slow down, speed up, make a lane change.

Shawn Bice:

I just can't think of a lot of people that really understand how that works. How does a team really work? Well, in my experience, to be a good teammate, you really do have to understand the subject and what it is that you're looking at together so that you can contribute. And maybe because a lot of people are still trying to sort out what is artificial intelligence and what is involved, maybe that reality is why we thought cars would be driving themselves today. And they aren't. Maybe there's a certain set of people that need to certify and sign off on that that just don't understand how it works. And they don't want to make a decision that could have a really bad outcome.

Nick Beim:

To add to what Shawn said, I'd say that humans have been notoriously poor at predicting the future evolution of AI, both on the upside and on the downside. There have been long droughts preceded by these waves of hype about AI. And there have been pessimistic periods at the end of those droughts that have been followed by immense bursts of productivity and new things that AI can do. So it's hard to predict, but I think what we can say is that AI has tended to move in big bursts and the bursts have tended to coincide with changes in one of its key inputs. So computing power, data, and algorithms, roughly speaking. And so I think the explosion of productivity and perception had a lot to do with the aggregation of all this perception-oriented data by the big consumer internet platforms and their ability to take advantage of that with AI.

Gautam Mukunda:

AI is going to automate away lots of jobs. It's almost the first thing everyone talks about when they talk about AI. That's inevitable and it's not the first time that's happened. Today a computer is electronic, but it used to be a job, one often held by women who would do the long calculations that underlay some of the most important scientific and engineering projects ever conducted, like the Manhattan Project and the moon landing. Modern computing, of course, destroyed this entire profession, but no one regrets this any more than we regret the fact that cars destroyed the buggy whip manufacturing industry or kerosene and electric lights eliminated the need for whale oil. But many fear AI will be different.

Nick Beim:

Now, certainly there's dislocations associated with certain job types, but I think it will be significantly outweighed by the new job creation. Just to take three quick historical examples, when the automated loom came out, it gave rise to the Luddites who threw rocks at the looms and sabotaged them. What the automated loom ultimately did was great for weavers because they became the loom operators and there was a huge boom in the production of textiles because the price went down. And so they were able to move up the value chain, get paid more in what became a booming industry.

Nick Beim:

When the ATM came out, bank tellers were incredibly worried that they were going to lose their jobs, but it led to more bank tellers being employed because it lowered the cost of opening new bank branches and the bank tellers that were there got to do higher value activities besides just handing out cash.

Nick Beim:

And then the last one I'd mention is with the birth of the computer, math-intensive professions like accounting were incredibly worried. And as I said, the US government actually started a task force on the massive job loss that would come from computers. It didn't happen. Accountants have never done better than when they had computers to do a lot more and offer higher value services and outsource what they were previously doing with calculators to spreadsheets and move on to higher value things. So I think, again, due to our evolutionary instincts of imagining the worst-case scenario first and how we might be threatened, journalists and public debate quickly got caught up in both a massive job crisis to come and a Terminator-like all-powerful intelligence. And I disagree with both of them.

Gautam Mukunda:

So AI is going to replace some jobs for sure, maybe more and more with time, and it's going to create whole new ones. That's going to require a response from society, particularly the government. For that response to be effective the government will need to learn a lot more about the technology and how to adapt quickly in response to it.

Shawn Bice:

I think that's one of the most interesting questions for us to ask, but I think AI has to be explainable at a satisfying level to be used in critical decision-making systems and law and business. And what counts as the right level of explainability is very interesting. I think it will vary by subject area. I think the good news is there's a lot of research being done in the general area of AI explainability. And I think we'll make a lot of progress.

Shawn Bice:

I worry more about government and levels of sophistication, of really digging into not only what might make the most sense at a given point in time, but how quickly it changes. Governments generally aren't good at keeping up with change in most spheres. So I think it's going to be hard and I think we're going to have to upgrade our capabilities in government to do it well.

Gautam Mukunda:

It's very hard to predict the future of technology. In the 19th century, British law required any self-propelled vehicle to be led at a walking pace by someone carrying a red flag as a safety precaution. Even contemporaneously the New York Times noted that this law made horseless carriages useless. It was not repealed until 1896. And even then, cars were limited to no more than 14 miles per hour. Not much faster than a horse or a bicycle. It helped destroy the British market for cars and it was one of the reasons that Britain, at the time the world's most advanced country, failed to establish a significant automotive industry.

Gautam Mukunda:

It could have been a sensible safety precaution, if you didn't know where the technology was going and how important it would turn out to be. But if the government gets AI regulation right, we're going to have capabilities at our fingertips that we couldn't even have imagined a few years ago. Think about how much the ability to do a simple Google search has transformed almost all white collar jobs. AI could make that look like a minor course correction. Learning how to incorporate AI-produced insights into every part of our life at work and beyond it will be a struggle for organizations of every kind. But what does that mean for leaders? How is AI going to help or hinder them?

Gautam Mukunda:

So we started out speaking about the potential of all the things AI can do, which I agree with. And now we're moving into the limitations of where the technology might be. The idea of AI fulfilling a leadership role is so far in the future that it's not all that meaningful to talk about it yet.

Shawn Bice:

It's possible. Maybe it boils down to what do you think leadership is? And I think what makes leaders exceptional is their empathy or ability to understand and share the feelings of another. I think that is awesome. And I think leaders that have emotion where they really can understand mood or relationships and how they're interacting with others is great. To me a machine is going to have incredible IQ, but I don't know that it's going to have EQ. That's just where I think that real limitation comes in, but the possibilities of what it can do up to that point are incredible.

Nick Beim:

I think Shawn is right. I think AI in our lifetimes will never be as good as other people in understanding human beings and inspiring human beings. It's hard to get inspired by a ghost in a machine. And I think there's one other area where humans, for as far as I can imagine, certainly in our lifetimes, will be superior: creativity and strategy. I think AI is phenomenal when there's a defined context and a lot of data, but think about business and government and a lot of areas of human endeavor where the most important decisions are out of the box. AI needs a box.

Gautam Mukunda:

Flying a plane is surprisingly much easier for a machine than driving a car. Wind speed, altitude, atmospheric pressure, fuel levels. These are all things a computer can read and make good decisions about way faster than any human being ever could. But it's much, much harder for a machine going 20 miles an hour on a suburban road to see a little girl playing with a ball in a yard and do what every human being on earth knows immediately to do in that situation: slow down.

Gautam Mukunda:

The world's most powerful computer, backed by sophisticated sensors and billions of dollars of investor capital, is still way worse at making that sort of decision than the average 16 year old. So we need to understand where humans have advantages, where machines do and how they can work together. That won't be easy. Computers are now far better at chess than humans are, although teams of humans and computers can outperform either on their own. Simple enough, but computers have now reached a point where even the best human chess experts cannot understand what machine chess players are doing.

Gautam Mukunda:

For example, in chess, one endgame is a rook and a knight versus two knights. Computers figured out the solution a long time ago, but it involves a sequence of 262 moves and even the best human chess players cannot explain why those moves are the right ones. In the words of the chess grandmaster Tim Krabbé, "The knights jump, the kings orbit, the sun goes down, and every move is the truth." It's like being revealed the meaning of life, but it's in Estonian. If you're a leader and an AI tells you to do something, will you be willing to take its advice when it can't even explain to you why this is the right thing to do?

Shawn Bice:

I love the chess example because it is a perfect illustration of how machines can think faster and with more precision than a human being. But when I think about decision-making, it always makes me think about in the software industry, all these developers are writing code and making a decision to check that code into a product or service, but not all check-ins are the same. And there's this rule. It's called the two-person rule. And what that means is for a check-in that could have a massive blast radius, you can't check that in without a second person looking at what you've done to just be that other point of view and then off you go. And I often wonder, if you use that as an analogy, I often wonder AI and machines can certainly help make a lot of decisions, but maybe not all. And for those decisions that have a massive blast radius, I wonder if there would be something like a two person rule where the machine was one person, but a human being actually has to look at the decision before it actually happens.

Gautam Mukunda:

So I know [inaudible 00:28:28] has done research arguing that in a lot of circumstances, human AI teams will outperform just humans or just AI. So it sounds to me like one thing that you're saying is a key leadership skill is going to become the ability to work as a team with these expert systems. And what does it take to do that well?

Shawn Bice:

Yeah. I think that could be a thing because if you think about that two-person rule, that is about, look, this decision and this check-in is different. It has a bigger blast radius. So we're always going to go get that second person to look at it. And think about how does that team work well? What does it take for a two-person rule to work well? It's an understanding of what it is that you're looking at together. It's the ability to communicate and understand the logic and reasoning that led you to whatever decision that you're going to make. And in that context, if you apply that two-person rule where the other person is a machine and you have to work well together, I think it does suggest that the human is absolutely going to need to understand what the machine is looking at and the reasoning and logic that was processed that got you to the particular decision. And much like humans work in a team fashion, I wonder if that's how it could be when your teammate is actually a machine and not a person.

Nick Beim:

I very much agree with Shawn. And maybe just to add a few additional thoughts, we've somewhat handicapped ourselves in talking about AI by having our evolutionary instincts kick in and think, "Wow, this thing is potentially a competitor to us. And what can it do that we can't? What can we do that it can't?" And so many of the public debates have involved us versus AI, rather than human AI teaming. And if you look at almost any really significant technology revolution... Computers for one, it was the very same set of evolutionary instincts that started the debates where people thought they'd be replaced by computers. And of course what happened is the combination of humans working together with computers were incredibly more productive than they were previously and could do lots more things.

Nick Beim:

I think it'll be very much the same with AI. And as Shawn pointed out and Gautam to your question on what will leaders need to do, I think we need to develop an additional skill set in senior decision makers to really understand how these systems work, to understand how to tune them. It's a very important feedback loop to improve AI systems that is enabled by having humans who interact with them understand where they have potential biases. And so that combination, just like the combination of human AI chess teams, I think will yield the most significant breakthroughs in productivity and be very important for leadership to harness.

Gautam Mukunda:

So AI is going to change leadership just as much as it will everything else. Leaders have always needed to understand emotions and inspire their followers, two things AI will struggle to do for the foreseeable future. But they're also going to need to learn how to work in teams with teammates who aren't human. They're going to have to know when to subordinate their judgment to the machines and when to override them. That's going to be a test of judgment and wisdom unlike any that have come before. The importance of those two qualities made me particularly eager to hear Nick and Shawn's answers to our final question.

Gautam Mukunda:

Over the course of your careers you've undoubtedly met extraordinary people. Is there one who most impressed you? And who was it and why?

Shawn Bice:

There's two. So Andy Jassy is somebody that I got to know well who I worked with at Amazon. And what an amazing person. Andy is somebody who is humble, hungry, and smart. That's why. It's fascinating when you meet somebody like that who really deeply cares about the world and improving the customer experience and in every way you could possibly think about it. It's so real. And I'm so excited for him to be the CEO of Amazon today. What a remarkable human being, great leader.

Shawn Bice:

And then another person that I would add to this is somebody who I've recently met. I participate on the technical committee with the Seattle Kraken, which is a new team in the National Hockey League. And I've gotten to know Tod Leiweke. And what an amazing leader! I always thought that anybody who's running a professional sports organization is first thinking about sponsors and all of the things you need to actually fund the team and make it work. But Tod is the opposite. He's running this new NHL team and he's always talking about fans and the fan experience. And when I think of the Seattle Seahawks and what he did there in creating the 12th Man and what he's doing with the Seattle Kraken... Just what an incredible leader who is determined to make sure that the fan experience is by far the best it could possibly be. And that's just not something I expected. So two great, great, great people.

Gautam Mukunda:

Awesome. And Nick, same question for you?

Nick Beim:

The person I'd say who's particularly relevant to this discussion on AI is Daniel Kahneman, who, as you all know, was a psychologist who revolutionized a variety of fields, including economics, where he won a Nobel prize for showing how rife with cognitive biases the human mind is and how it has these different systems of thinking that compete with each other to produce our actions. It's so interesting in these AI discussions, there's a tendency to think of all the biases an AI model could have and how horrible that would be. And then when you actually look at the human mind, you see that we are an evolutionary [inaudible 00:34:34] of extraordinary biases that have served us well over time from an evolutionary perspective, but can create all sorts of problems in solving the kinds of problems that we're trying to have AI solve. It's a humbling reminder of our own limitations.

Gautam Mukunda:

AI is going to change the world. It's already begun to. I really think that, and it's not just because Person of Interest was one of my favorite TV shows. It's not that we're going to invent something smarter than a human, or at least not yet. It's going to change the world because it's going to give the humans who figure out how to work with AI a huge leg up, and maybe even abilities we would have thought something from science fiction not that long ago. And oddly enough, that's not anything new.

Gautam Mukunda:

The very first telegram, sent in May of 1844, read, "What hath God wrought?" Samuel Morse, who invented the telegraph, was that amazed by this thing he had created. And that made sense. The telegraph gave human beings the power to communicate almost instantaneously with people on the other side of the world. That used to be the stuff of legend. Now we check our phones an average of once every 10 minutes.

Gautam Mukunda:

Our array of powers has become quite impressive over the years, from protecting ourselves from lightning to accessing all the world's knowledge to raining unimaginable destruction on our enemies to reshaping the very stuff of life itself. AI is the newest of these technological miracles. And as with each of the previous ones, the effects of the technology are not inherent in the technology. They are the product of choices. We had to learn how to use each of these technologies, capture their benefits, and mitigate their downsides. For some of them that learning process is still ongoing.

Gautam Mukunda:

Learning is part of what makes us human. Today it's an ability we're starting to give to machines. If we can learn together, use our emotional intelligence to complement AI's calculating power, guided by leaders who understand both the science and the ethics of AI, we'll be able to work together to shape a better world, even when the person we're working with isn't a person at all.

Speaker 4:

World Reimagined with Gautam Mukunda, a leadership podcast for a changing world, an original podcast from NASDAQ. Visit the World Reimagined website at nasdaq.com/world-reimagined-podcast.
