
Change Starts Here: Shaping a Future of Equitable AI

Michael C. Bush, CEO, Great Place To Work

Join Michael C. Bush as he shares insights into what leaders must consider now as they navigate the transformative potential of AI and its profound impact on society, work, and leadership. Challenging conventional notions of intelligence, leadership, and success, Michael describes a 500-year vision in which AI serves as a catalyst for inclusivity, equity, and long-term thinking. By reimagining the role of technology in shaping a better world, Michael inspires audiences to embrace a collective responsibility in harnessing the potential of AI to create inclusion and belonging for all.


Transcript
Michael C. Bush (00:15):

Hello. So I'm going to go ahead and get started, and I'm going to talk about something called the Cody problem. This is Cody Coleman. Some of you may have heard about him. Cody Coleman was born in prison. He doesn't know exactly how it happened, but his mother, who suffered from mental health issues, was in and out of prison. She was in prison when he was born. He never met his father. As soon as he was born, he was put into the foster care system, and then his grandparents became his legal guardians.

(00:56):

He was living in the poorest part of New Jersey. The schools he ended up going to ranked 330th out of 332 schools in the state. Poor people are very giving, though. Any minister will tell you that. So whenever the school would have a food drive to bring food to those who really needed it, which Cody thought was a lot of fun, what he realized was that most of the food ended up at his house, because that's how poor he was.

(01:30):

And yet he had a dream, and his dream was to one day go to Princeton. That's where he wanted to go. He didn't know if it would happen, actually didn't think it would happen, but it just became a dream of his. And he had a gift: an older brother, Sean, 18 years his senior. What his older brother Sean did for him was give him hope, inspiration, and wisdom, the kind of accumulated knowledge that's usually applied with a sense of empathy, ethics, and enlightenment. That's real wisdom. And he put in Cody a sense of, "Why not?" Why can't I go to Princeton?

(02:14):

Cody didn't realize it, but he had something that Angela Duckworth, who will be here on our stage tomorrow, calls grit. He had perseverance and passion, and he started to put them to work. He went from an average student to a straight-A student in his junior year and continued that through his senior year. And he decided, "Yeah, Princeton's fine, but I think I'll go to MIT." He went to MIT and graduated with a perfect GPA.

(02:46):

But along the way, his high school trigonometry teacher, Chantel Smith, whom he calls Mom, kind of embraced him. When his teeth needed repair, she took him to the dentist and paid for the appointment. When he was getting ready to go to MIT and didn't have clothes for really cold weather, she bought him all the clothes. She just took care of him, guided him, made sure he had a laptop and all the things required to succeed. She's what Angela Duckworth calls a guardian angel. Here's a picture of him with Angela Duckworth. He finished MIT with a perfect GPA. You can't do any better than that. That led him on to Stanford University, where he got a PhD in computer science.

(03:39):

So this is him now. He's the co-founder and CEO of an AI startup called Coactive. He raised $14 million, got started, and he's well on his way. Then, after his 30th birthday, he got some news. Somebody recommended, "Maybe you need to see somebody." He found out that he had dyslexia. Just another thing in the life of Cody Coleman. He got this news and he learned things about how his brain works. And to hear him talk about it, he tells the story in a very interesting way, like he's just on a journey, learning things about himself.

(04:16):

He said, "When you have dyslexia, it's like your brain works different than other people. And it's kind of like an airplane and a car." And this is the way he talks about it. He said, "A car, if you're trying to go from London to New York, a car's not going to really help you. Or if you're trying to go from New York to California because you've got a sick family member, a car's not really going to help you. But a plane is really going to help you. But if you need to go from your house to the local market in the neighborhood, a plane's not really going to help you." So he said, "These things do two different things and they are both needed. That's just like me with dyslexia." And he said, "But let me tell you, me with dyslexia, I really understand computers." So he knew that he was good at it. He was especially gifted.

(05:02):

I'm not saying today that Cody is anything special, but what I am saying is he's not the only one. There's no way. If you think about this planet, there are a lot of Codys on it. The world is full of Codys waiting for us to find a way to find them.

(05:24):

The AI experiment, this age that we're all living in now. This is Marc Andreessen. If you don't know who he is, he's one of the most successful venture capitalists in history and a co-founder of Netscape. Some of you will have to Google what that was. You just don't know anything about it, let me tell you. Okay? It was this thing. It was all the rage. You'd have this cord at home and you'd plug in, and click, click, click, click, click. Boom. It'd take you like four hours; finally you'd get the internet, then you'd immediately lose it. I'm sure my kids just can't imagine that world, but I remember living in it. A lot of that came from him.

(06:03):

Through his firm, Andreessen Horowitz (a16z), he's made more investments in artificial intelligence than anyone in the world. He is definitely viewed as an expert, and he has written something that I recommend you read, his Techno-Optimist Manifesto. Now, I don't know if you drink or not, but it might drive you to drink. He has a view of the future in which everything is going to be perfect. He is what you call a techno-optimist. But I encourage you to read it so you can see what he's thinking about.

(06:36):

Now, there are others in the world who have seen movies about what happens as machines gain more influence over our lives, and who aren't quite as optimistic as Marc might be. Both of these views are worthy of debate. What we aren't debating is that there's going to be great transformation as a result of this technology. I think that debate is over. We understand something is coming. And we, at Great Place To Work, like the metaphor of the caterpillar and the butterfly, because we think it's beautiful. We think it's a story about something that seems impossible: how this thing that crawls, face down on a leaf, can one day fly 3,000 miles at speeds between 10 and 30 miles an hour. We just think that's impossible to think about. That's why we're inspired by it, because we're inspired to try and create a world that might be impossible to see right now.

(07:30):

We love this transformation story, and we are in a period of transformation in terms of AI right now. And we're wondering, "Will it make things better?" We aren't really sure. Will it do something about the great divide between those who are very rich and those who don't have very much? We are wondering how we could shape and inform the DNA of a computer. What will machines learn from us? They'll learn that we'll do anything to survive and to protect those near us. They'll learn that we can build great communities. They'll learn that we can destroy great communities. They'll learn that we can be really selfless and do wonderful things for a complete stranger, like Chantel did. And they'll learn that we can be about only one thing: ourselves. They will learn that we are willing to make the ultimate sacrifice. Any vets out there? Give them a round of applause. They will learn that we have this thing called religion, and there seem to be many gods. But regardless of the religion or the god, there's a concept of love in every one of them. And yet there's war related to religion and related to this thing called love.

(09:11):

So machines will go through everything from the beginning of time to understand how this can be true, and they will find that there was one human, apparently a genius, who got it all figured out. What's love got to do with it? It doesn't seem to have much to do with it. We have tribes. We have geographic boundaries around countries, around nations. We have states that are blue and red; now there's purple and light blue, all separating us from one another based on where we come from. Part of a tribe. Here's a tribe. Any Manchester fans out there? Okay, two of them. Part of a tribe. These people are part of a tribe too, but what's love got to do with it? I'm not throwing any shade at Brazil or Argentina; I'm just saying that passion, love, can be complicated. Because when we're in tribes, we feel that we are right and that they are wrong. That's the problem. A shot of Moscow; one tribe, right? Everybody agrees with each other in Moscow. Maybe not.

(10:43):

Here's one thing about tribes: they're often led by a leader who has more power than anyone else in the tribe. That's something about tribes. And within the same tribe (I'm not going to go too deep), there are people who aren't doing too well, yet they are part of the tribe.

(11:06):

For a computer, this defies logic. What is this? It can't make any sense of it. What are these things doing? They have a stock market that goes up and comes down, and the next time it goes up, people start writing about how it will never go back down. Inflation, bubbles, wars; from everything a computer can see, only a couple of them perhaps made sense, and the rest were questionable. Yet we still do it.

(11:42):

Computers will look at what we have done to our natural resources. Lakes like this; oceans that don't have any fish in them but have a lot of other things. And zip codes in parts of countries where people who are very rich are separated only by a road from those who are very poor. We are the teachers. History is devoid of an all-powerful ruling leader who has benefited all of society. The computers will look for one and find that they can't. They will know it's not a good idea.

(12:21):

We have two fatal flaws. One of them is zero-sum thinking, this idea that I can only gain if you lose something. It's just my slice of the pie; the only way for me to get more pie is for you to have a smaller slice. This notion. I have a friend who works for a multinational company in the U.S. He called about another friend of his in Brazil who got promoted. And he was like, "She got promoted. She got my job." I'm like, "You're in the United States. She speaks Portuguese, English, and French, and you've denied all requests for the company to send you to Brazil. But she took your job." Okay? That's zero-sum thinking.

(13:06):

And here's the problem with zero-sum thinking: it trains our mind to see a scarcity in the world that is completely false, and it leads to short-term thinking. That's very problematic when you're trying to solve complex problems. Short-term thinking, zero-sum thinking, makes us think of me.

(13:31):

I want to break that mold. I think we need to start thinking about the next 500 years. We really need to start thinking about the next 500 years, especially to solve the complex problems that face us today and the ones that are coming. And just to show that it's not so crazy: $69 million. Anybody from Cisco here? All right, some Cisconians over here. That was the revenue of Cisco in 1990, $69 million. And that's the revenue of Cisco last year, $56 billion. All right? Yeah, give y'all something. That's just 34 years. This is naturally what happens. This is naturally what is happening. So I just did a little math and I said, "What if this revenue continued to grow at 6% a year?" And there's no way Chuck Robbins is growing at less than 6% a year, but let's say that he did, for 500 years. That's what the annual revenue of Cisco would be in 500 years. This is going to happen. It's going to be a number far bigger than that. You want to know what that number is? That's $3 octillion. If you're familiar with octillion tables, that's a lot of money. It is going to happen, not just for Cisco. This is absolutely going to happen, and it's going to be accelerated by generative AI.
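If you want to check the compound-growth arithmetic yourself, here is a minimal sketch. The ~$56 billion starting figure and the 6% rate come from the talk; the 8% rate is an added assumption for comparison, since the result over 500 years is extremely sensitive to the rate.

```python
# Compound growth: future_value = present * (1 + rate) ** years.
# The $56B starting point and 6% rate are from the talk; the 8% rate
# is an illustrative assumption, not a figure from the talk.

def future_revenue(present: float, annual_rate: float, years: int) -> float:
    """Revenue after `years` of steady compound growth."""
    return present * (1 + annual_rate) ** years

present = 56e9  # ~$56 billion

for rate in (0.06, 0.08):
    print(f"At {rate:.0%} for 500 years: ${future_revenue(present, rate, 500):.2e}")
```

For what it's worth, a steady 6% compounds to a number on the order of $10^23 (hundreds of sextillions), while the $3 octillion headline figure corresponds to a rate closer to 8%. Either way, the point stands: compounding over 500 years produces numbers beyond anything we plan for today.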

(15:02):

I want to talk about the marshmallow experiment. This is an experiment where researchers in the '70s brought kids into a room like this one and gave them one marshmallow. See the kid looking at the marshmallow? Except they were given an instruction: don't eat the marshmallow. "And if you haven't eaten the marshmallow by the time I come back, we're going to give you another marshmallow." And they watched the kids to see what they would do. The grand experiment.

(15:34):

Decades later, they made some interesting observations and found correlations. They found that the kids who waited for the second marshmallow did way better than the other kids on the SAT. They found that the kids who immediately ate the marshmallow became CEOs in Silicon Valley.

(16:02):

So years later, somebody looked at the results, did a little more work, and found out something that was really game-changing. They found that if the child trusted the researcher, they were more likely to wait, by a factor of four. By a factor of four. Because without trust, people think short term. Without trust, people think about me.

(16:34):

So in 1993, I'm in business school. I go to AT&T to learn about strategic planning. We went to a floor of people, a hundred people all doing strategic planning. We went to the CEO's office, and he said, "Michael, you see those binders? All AT&T blue binders? Those are our strategic plans for the next 10 years." Late in the day, when he was drinking bourbon and talking to us, he goes, "When I think about the future, the only thing I know is nothing in those books is going to happen." Okay? That's exactly what happened. Now, the bourbon had some effect. But this is what's happened in terms of strategic planning: we used to talk about 10 years, then five years, then three years. Now it's 90 days. Short-term thinking has become the norm.

(17:27):

Now, CEOs and marshmallows. If I said to a CEO, "Hey, I'll give you some marshmallows. But if you wait, I'll give you twice as many marshmallows," you know what a CEO would say? "Can I get them by the end of Q2?" That's what would happen. Focusing on right now. Not picking their head up. I'm not throwing shade at all CEOs; I'm just making a point about picking your head up. When your head is up, you start to think about unintended consequences. When your head is up, you think about much more than the world you're living in and the problems you're trying to solve, because you see the rest of the world.

(18:12):

Here's a woman working in a company, just doing her thing. Doesn't really look like she's working at a great place to work, but she's doing her thing. Then she gets access to a database and finds out that she's making twice as much as anyone else in the company. Just like that, here's how she feels. Just that little piece of information, and this is how she feels. Then a recruiter reaches out and says, "Hey, got a job for you at another company doing exactly what you're doing, except you're going to make twice as much money." She takes it. She's really happy now. But what she didn't know is that the new company has 100% pay transparency, so she has access to a database, and she finds out she's far from the highest-paid person in the company. Now she looks like this. She's still making twice as much money as she was when she looked like this, but just that little bit of information, me versus somebody else, changes the way she feels. She's making more money, but that's the way she feels.

(19:26):

This is zero-sum thinking. This is me versus you. It's problematic. I love this little girl. This little girl, I'll just tell you in advance, doesn't have a high level of trust in others. Watch what happens when they set down the marshmallow and give her the instruction that if she waits, she'll get two marshmallows. She grabs that marshmallow, looks right in the camera, and goes, "I don't know what you're talking about. I'm eating this marshmallow." Because, from her life's experience, she's like, "This is the winning move for me: get this marshmallow." Because without trust, we think about me.

(20:15):

Machines equal hope, as far as I'm concerned. They equal hope. I believe machines can optimize outcomes for all. Machines will think zero-sum thinking makes no sense, so they will not do it. Machines will not engage in short-term thinking, because they're not worried about managing their careers or retiring at 65. Those things would be irrational. They'll be thinking about things like 500 years.

(20:45):

We all know AI is going to take a lot of energy; that's a big problem that has to be solved. That's why people are planning more wind farms and more solar farms to fuel the machines that are going to be doing the work of generative AI. Now, this problem of creating a world that's really good for everybody, forgetting about tribes and thinking about everyone, has clearly been unsolvable for human beings. How long will it take a computer to solve this problem? I think it's like seven milliseconds, and they can do it with two size-D batteries. It's not that complicated, because it's actually simple. When you think about 500 years, you realize that everyone does better when everyone does better. It's as simple as that. The math will bear this out.

(21:36):

And what do we mean by everyone? We mean everyone: all the employees in your organization, your partners, your suppliers, the people you're doing business with. We mean your investors, your shareholders. We mean the community, making sure it's safe, because our employees want to live in safe communities. It's part of our responsibility, and machines can be used to help solve this problem. We want to make sure our schools are good; your employees do much better when they feel like their kids are getting well educated. We want to make sure our countries are safe and not under stress from things going on around them. These become everyone's concern, because everyone does better when everyone does better, including the planet. When you think in terms of 500 years, you think more carefully about what you put in the water. This is what machines can do.

(22:34):

The space program? Put a question mark around that. Ultimately, it comes down to what we believe. I know what some people believe. They believe in yachts. They believe in buying islands. They believe in building bunkers. They believe in going to Mars. That's what some people believe. What do I believe? I believe in humanity, for sure. Some interesting statistics here as you think about the Meta family of companies: Facebook, 3 billion users; Instagram, 2 billion users; WhatsApp, 2 billion users. Seven billion users. Let's think about the money: $1.2 trillion in market cap value for Meta, and in terms of other commerce done on those platforms, another $50 trillion. That's the impact of technology. It's roughly a 50-to-one return. Yeah, the technology company does great, but others make money too. But I believe no amount of money is worth sacrificing the mental health of young people.

(23:43):

I believe we shouldn't accept a system that requires guardian angels to rescue a kid like Cody from poverty. We should not accept that. We should not be hoping to breed more guardian angels. This is a problem that should be systematically solved. And what do we mean by better? I just want to be real clear. We mean everybody feeling like they are respected, treated fairly and equitably, and communicated with honestly. That they can trust what they see and have ways of verifying what they see. That they can have a sense of pride and know that they matter, regardless of where they are in the world, and be working around other people who feel that everyone does better when everyone does better.

(24:36):

We believe in equity in terms of compensation; representation, like Marion talked about; opportunity to move up; and well-being, the stage Ariana and DJ just shared, talking about people and their mental health, everyone and their families, their physical health, as well as their financial health. This is what we mean by for everyone, for all. Going to work looking like this woman, doing something she likes to do and feeling really good about it; she knows that it matters. This team doing things together that they couldn't do on their own. This person feeling like, "I'm at a company with pet insurance. The company cares about me and my cat." That's a good feeling. Now, it's a pretty unproductive posture here, but it's a good feeling. Because with the cat in the way, on the mouse, this person can't be getting too much done.

(25:29):

Or these people; I think this is a shot of some employees from Appy coming together to do what they could never do on their own. It gives you a really good feeling. These people from DHL. Yep, give yourselves a shout-out. They worked every day during the pandemic, never took a day off, and kept delivering not only medicine but other things that people needed. Because that's their mission. That was their mandate. This is the power of people. Hilton people; they probably don't really do this, because then they'd have to remake everything, but it's a great shot. These people from Hilti. These people from Cisco. I shouldn't have done two Ciscos. Somebody's going to be pissed off.

(26:18):

And volunteerism, which runs all through the Great Place To Work network. All of our companies support their employees volunteering, being selfless, giving to others. It gives us a great feeling: making our schools better, getting out into our community, taking care of our future. When we think about 500 years, we're going to want to do more of it. Yeah, we change the world with our products and services, but there's so much more that we can do.

(26:45):

What's it going to take to shape this future? We're the ones who are going to have to do it. We're responsible. I wouldn't be looking to anyone else. I think it's us. ERGs, can I hear from you? The thing I love about people in ERGs is that they all have serious jobs with serious deadlines and serious expectations, paying rent, mortgages, healthcare, all the things everybody else has, and yet they volunteer their time to make their company better. These are special people. If you're not in an ERG, give the ERG people a round of applause. We know these people are special. We know these people are part of the solution. Some used to say think global, act local. No, you've got to think global and act global. You can't let go of one and focus on the other. We have to be thinking about everyone all the time.

(27:55):

Action: what can we do? Learn, read, watch videos, consume. Don't be afraid of this AI notion; learn about it. Encourage your kids to learn about it. We want to get schooled up. Here's a video I recommend for you. It's an hour and two minutes: Amy Webb's presentation at South by Southwest. To hear Amy, you have to get in line at like 5:00 A.M. to hear her talk about AI. If you haven't seen this, I'd watch it. Now, you might need something to drink after this one too, because she goes deep into the future and what can happen. But I encourage you to do it. You're going to walk through the future. You've got to face things that make you uncomfortable, that you're afraid of.

(28:40):

This is Dr. Daniel Wendler, who'll be on our stage tomorrow to teach us all more about neurodivergent people: how to recruit them, support them, lead them, and how to support leaders who are neurodivergent. We're going to go to school. This is part of the learning. And I'd pay attention tomorrow, because if you spend any time in Silicon Valley, it becomes clear there's a high percentage of neurodivergent people founding AI companies. Something's going on. I think it's important in terms of our future.

(29:11):

Angela Duckworth will be on our stage tomorrow talking about grit, the skill that I believe is the secret weapon. It's a skill of perseverance and passion, and a lot of people who haven't traditionally been around the table, like ERG members, happen to have this thing called grit. And organizations are going to need it, and plenty of it.

(29:34):

You need to establish a framework for yourself, a stakeholder model of what everyone means to you, and start to try to influence your organization to think the same way. We're going to be a partner of yours on this journey. We understand that going forward we have to do much more than measure the employee experience. We have to do much more than measure the experience that leaders are providing. We have to make sure that everyone is going to do better when everyone does better.

(30:02):

Gen T; this is something that Amy talks about. She says we are Gen T, the test generation, for the next few decades. I didn't like the feeling of that. I don't like being the test anything. It just didn't make me feel that great. That isn't the kind of workplace I'm aspiring to. I want to be driving change. I don't want to be the test subject. I want to be in the room where it happens, and I'm sure you do too. Here's the problem: the room where it happens doesn't always look like this room. As for Cody Coleman, he says he's naively optimistic. He's 30-something. I'm 60-something. I'm not naively optimistic, because I'm not naive. I've been living in the real world a little longer than he has. I hope he stays this way; life has its way of changing things. I'm not naive about anything. I see the real world the way that it is, but I am optimistic. I can certainly match his optimism.

(31:08):

The Amy Webb test. One of the things she did, as you'll see in the video: two years ago, she took all of the available generative AI tools and asked them to produce a photo of the CEO of a large company. Two years ago, all white males. A year ago, all white males. She said, "With all the progress and change, I'm going to do it right now, today." All white males. That's what happened. That's the problem. The human race isn't really the human race; it's one part of the human race making decisions.

(31:44):

This is a great opportunity, I feel. Now, one thing I've noticed: there are a lot of leadership groups around the world that are dominated by one group, and there are a whole lot of people who aren't there. There are three women in this shot. Go back through G20 shots and you'll find no more than three, usually one, and a long era of none.

(32:10):

Part of the reason that we are where we are is all the people who have been left out, all the Cody Colemans who have something to say and something to offer but have been locked out. Machines are going to bring them in, because machines are just going to be optimizing for the best talent, period.

(32:28):

Job descriptions; they're very useful, huh? I'm being cynical. I'll stop there. A job description plays a part in hiring: finding people, filling the role. But if you listen to Cody Coleman, and there's plenty of his material available, you'll realize that if somebody posts a job description on LinkedIn, they're not going to find a Cody Coleman. First of all, job descriptions are not a description of the job. When's the last time you looked at your job description? I'm sure you look at it every day to know exactly what you're doing. Now, I know they have a purpose. Don't send me anything. Okay? But I'm just saying that in the real world, in real life, every day, they don't really mean that much. And in this world that's changing every minute, they can't mean very much, because the jobs are fluid. Jobs are changing. You're not going to find Cody that way.

(33:24):

Intelligence; I looked at multiple definitions. One: the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria. "One's" is in red because it's very me. Another, for human intelligence: the mental quality that consists of the ability to learn from experience and to use knowledge to manipulate one's environment. All the definitions of intelligence that I found were about you or me or the person. Nothing about we or for all. It's as if that's non-intelligent. What's intelligent is me focusing on me and what I can do.

(34:10):

Currently, companies match people to jobs. There's this paper description and a person. They put out a call, 536 people apply, a portion of the people in the available world, and there's only a sliver that you actually get to, due to proximity bias: same recruiters, same schools, same connections, same referrals. You're not going to find Cody Coleman that way. I believe AI is going to match people to measurable skills. You aren't going to find that to be an easy, instant thing to do. All those red dots; to me, those red dots are Cody Colemans waiting to be found, to get involved.

(34:52):

Looking at this, these are admission rates at MIT, Harvard, and these other elite schools, all 4 to 5%. They brag about the fact that if 100 people apply, "we would reject 95." That's a weird thing to be proud about. We would reject 95. We would reject 96. We're number one. We would reject 96. Now, that would be okay if they were spending all their money on those students. Look at their endowments. If they're doing so great and their mission is to educate, why shouldn't they take that endowment money and let a lot more people in, to help them be great and do great things? It's just logic. Now, computers are going to do something about this. They're going to say, "This makes no sense. If these places can deliver talented and skilled people, we need a lot more of them, and we'll find some way to do it."

(35:42):

I just want to talk about what it's going to mean to be Great Place To Work Certified in 2034. It's going to be about more than the employee experience. It's going to be about your organization's ability to think about everyone, to think long term, to think about more than just one or two stakeholders, to think about the entire system. Because we know this is the right answer, and we believe that machines can provide some hope here.

(36:10):

Jeff Bezos; I don't know him. I never met him. I'd like to. Elon Musk; I don't know him. I've never met him. I actually don't want to meet him. Okay? He's not on my list. Oh, he's on my list, but he's not on the want to meet list. Okay? I know enough about what's going on in his organizations to not feel good about anything related to them. But what these two men have in common is they have the best information in the world, and they have bunkers and yachts, and they want to go to Mars. I just don't know if they want me on Mars. I'm not so sure if their vision of Mars in the future includes me or people that look like me.

(36:59):

And by the way, I don't want to go to Mars. I don't want to be walking around like this in a suit that's holding everything for a long period of time. I can't even keep my phone charged, let alone know that I got to be back in before I have no oxygen left. I want to live in the real world, people. Play with my grandkid in the real world on the grass, not in a suit, where I can touch him and feel it. I want to be here on earth. This is my city, Oakland. Some of you may have heard about Oakland. That's Oakland.

(37:35):

You can't be thinking about me. Can't be thinking about we. Have to be thinking about all. And I believe machines offer some hope here because of what we're able to do. I think with a couple of size D batteries, machines will help us know that everyone does better when everyone does better. Thank you very much for coming, and have a great summit. Thank you very much. Thank you.