Transcript for “Catching A.I. in the Act” with Jesús Mantas

AI-generated content has exploded into people’s feeds, and with it comes greater concern over deep fakes and misinformation. As we scroll through election content this season, how can we tell whether something is real or fake? In this interview, IBM’s Jesús Mantas addresses how his work proactively stops “fake news” and what we, as ordinary citizens, can do to spot and combat misinformation, especially when it is AI-generated.

Jesús Mantas is the Global Managing Partner in IBM Consulting responsible for Business Transformation Services, leading the $10B unit transforming and operating mission-critical businesses with digital technology and AI. He also serves as an Independent Director and Chair of the Compensation and Management Development Committee on the Board of Biogen (NASDAQ: BIIB), a leading biotechnology company focused on neuroscience, where he is also a member of the Audit Committee. He is a member of the World Economic Forum AI Global Council as well. Prior to joining IBM, Jesús was a Partner in the High Technology practice of PricewaterhouseCoopers Consulting, an adjunct professor at the University of California, Irvine Graduate School of Management, and an officer in the Air Force of Spain.

Thank you to Starts with Us for their collaboration on this series. Starts with Us is an organization committed to overcoming extreme political and cultural division. Check them out at startswith.us.


Jesús Mantas: I think it’s the potency, if you want, of the level at which you can misinform that creates that problem with the use of deep fakes, fake news, highly personalized misinformation with the intention of interfering with the democratic process. I think that’s the problem.

Don MacPherson: That is Jesús Mantas, a global leader at IBM and an artificial intelligence expert. Jesús joined 12 Geniuses to discuss how AI is being used to disrupt elections and what voters can do to better identify when something is real or fake.

My name is Don MacPherson, your host of 12 Geniuses. Heading into any election season can be divisive, which is why 12 Geniuses has partnered with Starts With Us on this series to help you navigate the 2024 election. Elections have always been subject to fake news and lies, big and small, but AI has brought a scale to election manipulation that has never been seen before.

Jesús Mantas is here to help us better understand this disruptive technology. Jesús is a global leader at IBM Global Business Services, a $17 billion unit of IBM. He’s a member of the World Economic Forum AI Global Council, an investor in numerous AI startups, and was an early pioneer in artificial intelligence and supply chain enterprise solutions.

Thank you to Starts With Us for their collaboration on this series. Starts With Us is an organization committed to overcoming extreme political and cultural division. Check them out at startswith.us.

Jesús, welcome to 12 Geniuses. Can you tell us who you are and what you’ve been doing over the course of your career?

Jesús: Sure. I’m a kid who was born in the south of Spain, a product of a great family and a good education system. I went to university in Madrid, became an engineer, and ended up, by some serendipity, in the United States to spend three years. And now it’s been 30. I’ve made a career here in the United States, and I’m very proud of it.

Don: The focus of this conversation is artificial intelligence and its use in the upcoming 2024 election. We are recording this January 23rd, 2024, and we’re going to talk about how AI has been used to influence elections in the past and how you expect it to be used in this upcoming election. But first, can you give us some background on your experience with artificial intelligence?

Jesús: I’ve spent more time than the average person around the evolution of artificial intelligence and its application. Most recently, in the last decade or so, I’ve spent time at IBM both on deep learning and on what you would now almost call traditional AI, which is used more for forecasting and clustering and things like that, and in the last two or three years on foundation models and what we call generative AI.

Don: And one of your roles is as a member of the World Economic Forum AI Global Council. Are you still participating in that?

Jesús: I was. That was the original council, created in 2018 as part of the Fourth Industrial Revolution group of the World Economic Forum. I was one of the founding members. There were 40 of us who came together to figure out, even back then, what the ethical use of artificial intelligence is. We could all see the potential it would have, but with any new technology or capability, there are always two sides to it. So back in 2018, one of our priorities was to figure out the ethical framework around that. And that ethical framework eventually became more of a priority, especially as AI has been consumerized by generative AI.

Don: When you say two sides of AI, are you meaning the risks and the potential benefits to humanity, or what do you mean by that?

Jesús: Whenever you have a tool, a technology, a new capability, there are always two sides. One is that you have to protect against the bad uses and the risks of having it available. And the other side is the benefits that a society or a business can obtain with it.

Don: Yeah, what fascinates me about that conversation is just how democratized AI is and how rapidly it can spread, in a way that I don’t think any other technology in human history ever has. Would you agree with that assessment?

Jesús: Yeah, definitely — the speed and the adoption. We have been saying this for 10 years, by the way. So, with the internet…

Don: Well, you have been, but the general population hasn’t been (laughs).

Jesús: Well, think about that. We probably said that with the browser in 1996 with Netscape. When Netscape came up, it was like, “Oh, wow.” I mean, think about that: now you can have the entire information of the world at your fingertips. How is the world ever going to go back? And it’s true; 10 years later, none of us could remember a world where the internet didn’t exist. And then we had smartphones, mobile phones, and we said, “Well, listen, now you can actually access everything wherever you are. How are we going to go back?” And then came Facebook, and that was the fastest-growing application. “Oh, look at that.” And now we are saying that with ChatGPT, which was indeed the fastest-adopted application to date.

So yeah, I think we keep passing some of those tests. But I think it has captured everybody’s imagination because there is a little bit of an element of magic to it: you have this very simple-to-use application that behaves as if it can talk to you, that can do things you wouldn’t have thought technology was capable of before. I think that magic, together with the accessibility, as you said, that everybody can just use it on their cell phone, has made it an incredibly fast technology to adopt.

Don: Can you share your definitions for fake news, deep fakes, and misinformation? And maybe there’s overlap between those definitions, but how do you define those terms?

Jesús: I think I define it as the use of untrue information with the intent to mislead. And the reason why I say that is you could actually create a clone of myself dancing, and you could do that for fun, and I wouldn’t call that misinformation. Or you could create a deep fake of me doing something evil and put it on TV with the intention of getting me in jail. One of those is misinformation, and the other one is a fun use of the same technology, right? In my view, I would put it on the intent to misinform or mislead. So, the intention to persuade someone with knowingly wrong information is what would make something fake news, a deep fake, or misinformation.

Don: What is the extent of the problem with fake news and misinformation in elections? We can talk historically, and it can be in the U.S. or outside the U.S.

Jesús: I think the issue is you’re basically manipulating a population at scale, based on potentially untrue information, in order to change the outcome of the democratic process. Now, that definition, by itself, I don’t think is new, because probably since elections have existed, there have been negative ads. And I’m pretty sure that not every comment and every word in every negative ad that a political campaign has ever run was a hundred percent true. So then you’re like, “Okay, well, that has been happening because people are trying to win an election. What’s the difference now?” And I think the difference now is the scale of the tool with which you can do that, the personalization level, the targeting level, and therefore the effectiveness.

Maybe with a broad ad where you stretch the truth and don’t get caught, you’re 1% to 3% effective. But now you have a tool with AI that can be highly credible, highly targeted, and therefore highly effective, and it is very hard to ever undo its effect, right? Let’s say you get caught and your ad is incorrect, and you’re told, “Listen, we’re going to stop the ad, because otherwise somebody is going to sue you,” and you basically have to run another ad and say, “No, that wasn’t really true.” Before, that could limit the damage. But with this new tool, the combination of the fact that you can generate highly credible, misleading content and distribute it highly effectively to the target population in a personalized manner makes it really hard to undo once you have done it.

I think it is the potency, if you want, of the level at which you can misinform that creates that problem with the use of deep fakes, fake news, highly personalized misinformation with the intention of interfering with the democratic process.

Don: We’re using really, really advanced technologies, but we’re tapping into something that’s very primal, which is belief. And we know belief is more powerful than actual facts. It’s very difficult to untangle our beliefs in the presence of facts. So, I find those two things to be very, very interesting.

Jesús: I think the biggest challenge, I mean the biggest risk, in my view, of the technology is that we start to erode the ability, as a society or even as people, to understand what ground truth is. It starts eroding the basic levels of trust upon which we build society, right? Take any element of society. In civil law, there is the fact that you can own land. You can own land because there is a trust system that ultimately determines: do you or do you not own this land? And if you do not own this land, who does? That becomes a foundation of the society we build. Or the fact that you have truth in lending, right? When a bank gives you money, or you give money to a bank, there is an insurance system that says, don’t worry: if you need your money back, up to a hundred thousand dollars in the United States, the government guarantees you can get it back. So there is truth in lending.

And upon those trust systems, we build each subsequent layer of society, right? We build the education system. We build all of the systems upon which society evolves. I think the biggest challenge of this technology is that it goes to the heart of some of those systems in such a credible way. You go to court, and if you show a jury a video of you or me killing somebody, how do they know that’s not true? You’re there in the video. “I’ve seen it with my own eyes.” A lot of the basic things, like “Okay, you can’t tell me that didn’t happen; I’ve seen it with my eyes,” are so credible that, whether true or false, you can no longer believe them. That, I think, is the biggest risk of the misuse of AI technology: the erosion of the basic trust layers of society.

And with that, you could crumble things. You mentioned elections, so: you could crumble the democratic system if you embed enough doubt that nobody ever believes what the ground truth is, because there is no way to actually determine what is true. Somebody says, “Oh, I’ve seen with my own eyes that they were all fake ballots.” It’s like, “Well, I can prove to you that they’re not.” “Well, it doesn’t matter what you prove to me. I’ve seen it with my own eyes.” I think that’s the biggest danger here.

Don: How do we restore this trust?

Jesús: I think the companies that use AI technology need to be responsible for how that technology is used. And that means they need to understand, one, the intent with which they’re using it. Are they intending to create harm, or are they intending to help? I think that intent has to be clearly established. The second thing is we encourage every company we engage with to maintain a library, a list of all the places in which they’re using AI. It is very difficult to govern the use of AI if you don’t have a list. So start with a list: where are you using AI? And then start asking these questions, what we call a fact sheet: “Okay, so we’re using AI. Do you actually understand what data was used to train that AI? Is it public? Was intellectual property used to train some of these algorithms? Did you actually have the right to use that or not?” Because, otherwise, you could be liable for using somebody else’s intellectual property.
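
To make the fact-sheet idea concrete, here is a minimal sketch in Python of what one entry in such an AI-use inventory might look like. The field names and the example entry are illustrative assumptions, not IBM’s actual fact-sheet format.

```python
from dataclasses import dataclass

# Illustrative sketch of one entry in an AI-use inventory ("fact sheet").
# Field names are assumptions for illustration, not IBM's actual format.
@dataclass
class AIFactSheet:
    use_case: str                     # where AI is being used
    intent: str                       # stated purpose: help, not harm
    training_data_sources: list[str]  # lineage: what data trained the model
    data_is_public: bool              # was the training data public?
    ip_rights_cleared: bool           # do we have the right to use that data?

# "Start with a list. Where are you using AI?"
registry: list[AIFactSheet] = [
    AIFactSheet(
        use_case="customer-support chat assistant",  # hypothetical example
        intent="answer product questions",
        training_data_sources=["internal support tickets", "product manuals"],
        data_is_public=False,
        ip_rights_cleared=True,
    ),
]

# Governance starts with being able to answer these questions per use.
for sheet in registry:
    if not sheet.ip_rights_cleared:
        print(f"Review needed: {sheet.use_case}")
```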

Those elements are valuable. I think that would, from the very beginning of the application, start governing and preventing a lot of the ill use, because you already know the intent, you already know the lineage of the data, and therefore you can better govern how it is going to be used. That’s on one side. On the other side of the equation, on the receiving side, or the consumer side, a lot of the same solutions that we have been exploring for cybersecurity are helpful in this environment. When you get a phishing email, somebody saying, “Oh, your order was just rejected, click here and give me your password,” we are training more and more consumers and people to say, “Yeah, you shouldn’t click there.”

We’re doing both. On the technology side, we’re building more and more technology to detect those patterns and say, “Hey, careful, this doesn’t seem like an email that is legit.” There’s a lot of technology going into flagging content and saying, “Yeah, this content may be suspicious.” The other side is really a great deal of education: in the same way that you need to be careful which email you click on, and you shouldn’t give your password to anybody on the internet you don’t know, I think we have to grow the education of this healthy skepticism. Just because you watch a video that says something, you shouldn’t necessarily blindly believe it.

And I think a healthy habit is to just look at counterpoints. Look at the point, look at the counterpoint. If you see something that looks odd, then try to find another, either confirming or contrarian, point. A lot of it has to be education. As much as we would like a great tool that just flags it for you, I think human agency should still be retained, and therefore education is very important.
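
As a toy illustration of the pattern-based flagging Jesús describes, here is a minimal Python sketch. Real filters rely on trained models and far richer signals; these hand-written rules are assumptions for illustration only.

```python
import re

# Toy sketch of rule-based content flagging. Real systems use trained
# models and many more signals; these patterns are illustrative only.
SUSPICIOUS_PATTERNS = [
    r"click here",                                # link bait
    r"(verify|confirm) your (password|account)",  # credential harvesting
    r"order .*(rejected|suspended)",              # manufactured urgency
]

def flag_message(text: str) -> bool:
    """Return True if the message matches any suspicious pattern."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in SUSPICIOUS_PATTERNS)

# The phishing example from the conversation gets flagged:
print(flag_message("Your order was just rejected, click here"))  # True
print(flag_message("Your package ships tomorrow."))              # False
```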

Don: Well, Jesús, I love you because you’re so optimistic. I’m a little bit skeptical. It just seems like it would take so long to get the mass of humanity to start doing this sort of due diligence. But I agree with you. I think there is some personal accountability that has to happen in order for us to move forward and restore this trust you’ve been talking about that’s been eroded. I want to get back to a couple of things you mentioned. You talked about the technologies that are being used to amplify fake news, and these would be social media companies. We don’t have to talk about which ones specifically, but how do we hold these companies accountable, through regulation or other sorts of governance, when holding them accountable runs counter to the incentives they’re pursuing?

Jesús: 87% of us would say we actually understand the reasonable limits on both sides of it. Not everybody should be sued for everything, but on the other side, if you’re intentionally misinforming and causing harm, you should be held accountable for that. I think the dilemma is that the technology is moving a lot faster than our legislation, and the legislation has to catch up. I think we need to take a risk-based approach to these things. As I mentioned, I don’t think we should regulate or legislate hammers because somebody could hit you in the head with one and kill you. We should regulate or legislate the act of hitting somebody in the head with a hammer, but we should allow people to use hammers to build houses.

I looked at it last week in Davos. I actually looked at how many initiatives worldwide are being taken to either create guiding principles for, regulate, or legislate AI. And there are over 1,600 across 70 countries. So, 1,600 initiatives are being taken to proactively figure out the boundaries of the use of AI, whether they translate into guiding principles or eventually into legislation. I think that evolution, hopefully, is going to land us in a reasonable place where we allow people to use the hammer for good things, but we hold them accountable when they’re harming other people.

Don: Very difficult to hold somebody in Russia accountable if they’re interfering with U.S. elections. But certainly we should be able to hold people accountable if they’re spreading and disseminating fake news in the United States about the U.S. election. That’s just my opinion, and I don’t know if you agree with that.

Jesús: Yeah, and I think the challenge is, I mean, if it were easy, it would already be done. As I said, it is just keeping that lane. A lot of these social media companies do a lot of good for a lot of groups. They enable a lot of groups to self-organize to achieve great things. They enable grandparents to talk to their grandchildren. There are a lot of positive uses of those technologies. The other element you mentioned that is a challenge is that when your business model depends on capturing and retaining people’s attention, and when your business model depends on monetizing people’s data, you’re always going to have an inherent conflict between your business model and, at some point, what may be good for society.

We’re fortunate that at IBM, the company I work for, our business model does not depend on monetizing anybody’s data. We’re always very clear: your data is your data. We sell you software, but we don’t monetize your data in any way. But some of our competitors have a different business model where they actually have to monetize data, and that presents a bigger ethical challenge, right? Because we’re very clear where the delineation is. We’re very clear on our commitment to trustworthy and ethical use of the technology, and our business model doesn’t depend on it. I’m pretty sure a lot of the executives at the other companies do want to be ethical and trustworthy, but they have to figure out how to resolve this business-model conflict.

Don: I want to ask you about two potential technology solutions for fighting fake news, misinformation, and disinformation. The first is blockchain. Going back to your point about a trust ledger and a lineage of data, do you see blockchain being a potential tool to help root out this sort of misinformation, disinformation, or fake news?

Jesús: In some use cases, it can be a really good way of indelibly watermarking information, so that you have full traceability of it. In some scenarios, I think that’s a valid way to at least achieve the first element I mentioned, which is to track the lineage of data and show that it has not been tampered with; you can actually assure whether something is original or not. So, for some use cases, it could be useful. At a wide level, blockchain is an extremely computationally expensive technology, and that limits its scalability against the zettabytes of information that are flying everywhere. For that at-scale information, I don’t see it. It would just be too computationally expensive.

It would put a lot of burden on it. Maybe somebody who is more of an expert on blockchain than me can give a better answer, but I haven’t seen it at scale. That’s why other companies, like Adobe, have found an alternative way. If we’re trying to figure out whether an image was generated by an algorithm or is an original, companies like Adobe are creating a consortium where you have a library of millions of the originals, and then you can basically say, “Well, that’s the original; that’s not,” right? I think other companies are trying to figure out if we can use AI to determine whether something was AI-generated, because you can see a lot of fingerprints. When AI generates an image, that image is generated statistically. It’s not generated based on physical principles.

You typically can see that the shadows are not correct, because AI doesn’t really know that if the sun is there, the shadow goes here; AI just statistically produces an image based on a bunch of images it has seen, right? You can see some of those fingerprints, if you want, and then you could say, “Well, it’s highly likely that this image is fake rather than real.” I think there are going to be multiple approaches to this, which is why I said maybe, in some use cases, blockchain can be useful. As a general answer, I don’t think it is.
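
Here is a minimal Python sketch of the “library of originals” idea. It uses an exact SHA-256 lookup, which only matches bit-identical files; real provenance systems rely on signed metadata and perceptual matching, so this is purely an illustration of the registry concept.

```python
import hashlib

# Minimal sketch of a "library of originals": register a fingerprint of
# each known original, then check whether new content matches one.
# An exact hash only matches bit-identical files; real provenance systems
# use signed metadata and perceptual matching. Illustration only.

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

originals: set[str] = set()  # the consortium's registry of known originals

def register_original(data: bytes) -> None:
    originals.add(fingerprint(data))

def is_known_original(data: bytes) -> bool:
    return fingerprint(data) in originals

camera_photo = b"...raw bytes of a camera original..."  # placeholder bytes
register_original(camera_photo)

print(is_known_original(camera_photo))      # True: matches the registry
print(is_known_original(b"altered bytes"))  # False: edited or generated
```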

Don: When I was a kid growing up in the seventies and eighties, we had something called Consumer Reports, which was a product guide. If you were buying a new car or a microwave or something like that, Consumer Reports would go out and do the research on how long the car would last and what sort of mileage it got. Maybe you had something similar in Spain. You would just buy Consumer Reports, it would help you with your purchasing decision, and it gave you a rating system. So, my theory, or my thought, is there should be a way to use AI to score sources of information. Jesús is a 94 out of 100; what he puts out there is highly trustworthy. Whereas somebody else who’s spreading a lot of disinformation might be a 22 or something like that.

And so we would have a very quick way of determining credibility, using AI, right in the social media app. I wonder if there’s any credence you would lend to that, whether it’s a valid idea, and whether maybe somebody is already trying it.

Jesús: I was about to say, it seems to me like you’ve got a great idea for a new venture. I think it’s plausible. I mean, think about it: we trust peer-rated products or articles more than we trust institutional sources. And the first company that I can remember that figured that out, and made a very lucrative business model of it, was eBay. The magic of eBay was not actually creating a marketplace. The magic of eBay was the trust system created by the scoring: the fact that you can look at somebody who has done a hundred transactions and is a hundred percent positively rated. You don’t know this person, you don’t know where they live, you don’t know if they’re going to take your money and run, but you don’t think so. I remember, in the early days of eBay, people like me said, “Oh, wow, people are going to buy cars like this on eBay.”

I was laughed at, really laughed at: “No, this is just for $10 things. Nobody’s ever going to spend a thousand dollars on eBay.” Well, yeah, good luck. I think it’s an interesting idea, especially if it is not done by an algorithm. If the actual rating is done by an algorithm, I think you start eroding the trust; it’s like, hmm, I don’t know what that is. But if it’s other people rating, I think that’s a pretty good idea.
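
For illustration, here is a minimal Python sketch of an eBay-style peer trust score. The Laplace smoothing (a prior of one positive and one negative rating) is an arbitrary modeling assumption, chosen so that a perfect but thin rating history doesn’t outrank a long track record.

```python
# Minimal sketch of an eBay-style peer trust score with Laplace smoothing.
# The prior counts (+1 positive, +1 negative) are an arbitrary assumption
# so that a thin rating history yields only modest trust.

def trust_score(positive: int, total: int) -> float:
    """Smoothed fraction of positive peer ratings, scaled to 0-100."""
    return 100 * (positive + 1) / (total + 2)

print(round(trust_score(2, 2)))     # 75: perfect but thin history
print(round(trust_score(98, 100)))  # 97: long, strong track record
print(round(trust_score(22, 100)))  # 23: close to Don's hypothetical 22
```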

Don: In which ways can social media companies be incented to eliminate damaging fake news?

Jesús: Their business model is tied to a lot of this, so that’s a challenge. Technologically, I’m pretty sure they’re putting people, technology, and resources into trying to tag as much of this content as they can, but there’s a limit to that. I think at some point, there’s going to be some form of accountability for it, right? If there is damage done, how do you actually hold them accountable for it? I think there’s going to be an evolution toward some level of that, without reaching the point where everybody sues them for everything and therefore they can’t really provide the service that is helping a lot of people do good.

I think the other aspect of the solution is the fact that human attention is part of the issue. We don’t report on 10,000 plane landings every day, but we all pay attention to the one crash. That’s just human nature. And that human nature translates into the business model for media, which is eyeballs and clicks, and the unfortunate situation that we all provide more eyeballs and clicks when there is blood and drama than when there is good news. I haven’t figured that one out, because it’s really hard. What are you going to do, create a business model that doesn’t depend on clicks? Media companies already have problems as it is making money and staying alive.

So, I don’t really know how to solve that one. If anybody has any ideas, I’m willing to listen. But that’s part of the issue: both the political system and the media system are designed to be polarizing. They are not designed to be unifying.

Don: Yes. I do think you identified the solution, and that is awareness — self-awareness. For example, do you prefer to be at peace or outraged? Just curious.

Jesús: I prefer to be at peace.

Don: At peace. I think most people do, but yet we are drawn to things that outrage us, right? And once I recognized that, I realized, no, I’m done with cable news. No, I’m done with political satire, even if it’s funny, even if it makes me laugh, if it’s going to outrage me. But it took that self-awareness to say, “Oh, this is putting me in the wrong kind of mood, so I’m done with it.” But that’s a lot.

Jesús: Maybe-

Don: There’s 8 billion people, there’s 330 million in the United States. That’s a lot of self-awareness that we need to start spreading.

Jesús: Yeah. I mean, you’re giving me some ideas in the middle of this conversation. Our species, Homo sapiens, conquered the tree of evolution because of our social skills, because we could organize, and because we had this affinity for being together as opposed to being alone.

And that represented an evolutionary advantage. Part of that advantage, and I’m probably going to oversimplify, is that we have a system called mirror neurons in our brain. Most of our senses are based on physical properties: the sound waves, we detect them; the molecules in the air, we can smell them; the light, we can see it. But there is no physical property to an emotion. There is no actual medium to propagate an emotion. So, “evolution” invented for us the system of mirror neurons. We have over a hundred muscles in our face, and it’s a flagging system: when you’re with another human being, you have these neurons that recreate the emotional state of the person in front of you.

So when you see somebody crying, you cry. But there is no physical medium, no molecules, no waves communicating that emotion. You are recreating it based on these mirror neurons. I think part of the problem you described is that we don’t feel catastrophe unless it is really close to us. You prefer to be at peace; you don’t want to be at war. But the moment you’re watching it on a TV, there are no mirror neurons firing; you’re not feeling the horror or the pain or the drama. You’re curious, but you don’t feel consequences. Maybe part of it is we need to develop a higher range of empathy amplification, if you want: a mechanism through which we immerse ourselves more, to both understand the other viewpoint and also feel a little bit of the suffering.

And that’s a little bit of what’s happening with climate change right now, if you think about it. This last year, 2023, was, in my view, the first time that society at large felt climate change. Before, intellectually, we were talking about two degrees or one and a half degrees or the Paris Accord, but you don’t feel anything about that. But when you start seeing hurricanes develop in 24 hours, and you start seeing record heat that destroys your livelihood, or your house is underwater, now you’re feeling it. Now it’s becoming, “Oh, I don’t want that.” Maybe we need more mechanisms to amplify empathy as a society.

Don: I’m curious to know in what ways the U.S. government can be more diligent in thwarting foreign actors from spreading misinformation.

Jesús: I think they’re doing the best they can. I think it’s a combination of the cyber defense systems, the now increasing regulation of the application of AI, and actually investing more into becoming an AI superpower. For a while, many other governments outspent the United States on AI, and that was a risk, a very clear risk. That is probably the number one way: making sure that we, as the United States, are developing more resources and elevating technical groups to create those responses. Then it becomes a matter of resource allocation. There is little a government can do other than allocate resources to the areas that are most important, and I think research and development in cybersecurity, AI, and quantum technologies is really important for the future. That’s probably the number one answer I would give.

Don: Can you leave us with some final advice for how voters can stay informed while avoiding fake news and misinformation?

Jesús: Yeah, I mean, go back to trusted sources, sources that are vetted. Develop the habit of always trying to look both at the arguments that support your beliefs and at the arguments that counter your beliefs, and don’t be offended by it. There’s a great habit that my friend Daniel Lubetzky helped me develop, which is this idea of: just take a week, and whichever TV news channel you watch, take the other side and watch it for a week. It’s a really hard thing to do, by the way, because for the first 10 minutes, your blood is boiling. But then you start developing this mechanism where, whether you agree or disagree, you at least understand that another person like you, listening to this for a week, would probably have a certain set of beliefs.

So, you’re able to understand that good people can have bad opinions, or opinions different from yours. That doesn’t necessarily make them bad people. They’re just subject to a different environment, a different set of messages, a different set of beliefs, and if you spend enough time with those, they become reasonable. You may disagree with them, they may not be true in your opinion, but you can then understand that, for those people, when they do the same thing and flip, they probably have a similar view of your beliefs. And I think that becomes a really great mechanism for developing both empathy and skepticism: not everything on your side of the belief system is necessarily a hundred percent correct, and not everything in the belief system of the other side is a hundred percent incorrect.

That still lets you form your own opinion, but this idea of, as a habit, always figuring out point and counterpoint has been really helpful for me.

Don: What fills you with a sense of optimism? Because you’re an optimist, clearly.

Jesús: I bet on human ingenuity. When you’re designing something, you are obligated to be an optimist, because you tend to design what you believe; unless you are an optimist, you can’t really design great solutions. And over and over again, human ingenuity has conquered the most difficult challenges throughout the last 3,000 years of history, at every level. At the time, for a period, a challenge may look impossible, may look unachievable, and yet we figure out how to find a way. Look at the challenges of our time: climate change; the polarization of politics around the world, and I think 2024 is the year that stress-tests democracy on the planet; the AI that we just talked about and the potential erosion of the trust foundation of our society. Pretty big challenges.

I think human ingenuity, over and over and over again, has come up with that spark, that new idea, that new way of doing things. And for 2,000 years, we have found a way. You could say, “Oh, wait a minute, when we invented the crane, all those people lifting things ran out of jobs.” Or, more recently, “Oh, now we have Hawk-Eye on tennis matches. What are you going to do with all those people whose job was to just watch the line and call the ball in or out? They don’t have a job anymore.” Well, they probably have a better job now.

That is what gives me optimism. But optimism is not a given; it has to be led, and it has to be conquered. I still think that we, as a society, have to be accountable for steering a lot of these new possibilities, new tools, and new ways of doing things in a direction that is balanced and sustainable. But as a designer of that future, I have to be optimistic, and I have to design solutions that leverage human ingenuity toward a positive outcome.

Don: Jesús, this has been a phenomenal conversation. I could talk to you for hours. Thank you for your time, and thank you for being a genius.

Jesús: Of course, anytime.

Don: Thank you for listening to 12 Geniuses. In our next episode, I interview Scott Shigeoka. Scott is a curiosity expert who shares his research and in-the-field experience on using curiosity to navigate polarizing issues. If you are learning from and enjoying the podcast, please share it with others who might find value in it, and please consider rating the show on your favorite podcast app. Thanks for listening, and thank you for being a genius.