
Cautions About AI and How to Avoid Them

Francois Candelon of BCG Henderson Institute


Summary


Learn more about the current trends in AI.


Francois Candelon (00:00):

Most of my clients weren't even thinking about AI and now we hear about it all the time. I'm just kind of curious, like, why do you think that is? Like what has happened recently that suddenly just catapulted AI to like the forefront of technology conversation?

 

Speaker 2 (00:17):

Welcome to Paychex THRIVE, a Business Podcast, where you'll hear timely insights to help you navigate marketplace dynamics and propel your business forward. Here is your host, Gene Marks.

 

Gene Marks (00:33):

Hey, everybody, and welcome back to another episode of the "Paychex THRIVE" podcast. My name is Gene Marks. Very happy that you are joining us this week. We're gonna talk about AI and regulations and some of the issues that could potentially be impacting your business. And we have Francois Candelon with us now, speaking to us from Paris. Francois is a Managing Director and Senior Partner and the Global Director of the BCG Henderson Institute. So, first of all, Francois, thank you very much for joining me. I'm really happy that you're here.

 

Francois Candelon (01:02):

I'm very pleased to be with you.

 

Gene Marks (01:04):

So, let's talk first of all about BCG Henderson and, you know, what that institute is all about and what your role is there. And I think we'd all like to learn how you got into that role. So, tell us a little bit about yourself and the institute.

 

Francois Candelon (01:17):

So, basically I've been a BCG partner for 30 years, and the BCG Henderson Institute is BCG's think tank. And I think that we have a very specific role, because it's not just a think tank made of academic work; it sits at a sweet spot where we are both academics and practitioners. So, I'm dedicating 50% of my time to the think tank, but 50% of my time as well to developing AI transformations at our TMT clients.

 

Gene Marks (01:54):

Got it. So, you're consulting for clients, mostly larger clients, obviously, as you said before, but-

 

Francois Candelon (01:59):

Usually large clients.

 

Gene Marks (02:01):

Right. And you're specializing in AI and regulations. Tell us a little bit about your background. How did you, you know, get to the point where you're specializing in that?

 

Francois Candelon (02:10):

So, basically I've been working for almost the last 30 years in tech and telecom, and through that I've been helping develop tech capabilities. And it is clear that regulation is an important component of AI, and of tech in general. With digitization, we saw what happened in social media, but in the gaming industry as well. And today, AI is critical, and it's a topic where regulation will play a decisive role in what the future of AI looks like.

 

Gene Marks (02:46):

You know, you have to admit like the, you know, AI has been around for a long time, you know? I mean, I remember my dad, you know, 40 years ago was reading books and talking a little bit, he was in the technology world, about artificial intelligence. But it's only really been in the past couple years where it has sort of just exploded into the sort of mindset. You know, most of my clients weren't even thinking about AI, and now we hear about it all the time. I'm just kind of curious, like, why do you think that is? Like what has happened recently that's suddenly just catapulted AI to like the forefront of technology conversations?

 

Francois Candelon (03:20):

So, I would say two things. We've had analytical AI for a long time, and it is true that semiconductors, through AI chips, have made it, I would say, easier and cheaper to work with, so analytical AI was really coming into its own. But what is true as well is that ChatGPT, released last November, it will celebrate its first "birthday" in a couple of months, really captured everyone's imagination. And why? Because it is so easy to consume.

 

Gene Marks (04:05):

Right.

 

Francois Candelon (04:06):

Everyone can prompt, everyone has tried it, and everyone has understood that it will have a significant impact on their job.

 

Gene Marks (04:19):

Right.

 

Francois Candelon (04:21):

But just because it's easy to consume doesn't mean it's easy to deploy generative AI in a company as a source of competitive advantage. That's of a different nature.

 

Gene Marks (04:35):

Makes sense. It makes sense. And you mentioned generative AI. I mean, that is really the AI that most of us are dealing with right now. And, Francois, correct me if I'm wrong, but you're basically having a conversation with a chatbot that's been trained on some data, and it's generating responses back to you, which can give you advice on anything from writing a blog to a job description to doing market research. That's generative AI. You know, when I'm doing this on ChatGPT, it doesn't seem harmful. It seems great; it's giving me lots of great information. So why is there such concern that AI could go beyond this and actually create harm and issues, not only for individuals, but for businesses?

 

Francois Candelon (05:26):

So, you have plenty of it. And first of all, when you say train on some data, basically it's trained on all the data that exists.

 

Gene Marks (05:35):

Correct.

 

Francois Candelon (05:35):

So basically, it's as if you had read everything and you were remembering everything. But as we all know, memory is not something that is reliable.

 

Gene Marks (05:47):

Right.

 

Francois Candelon (05:48):

And I think that's part of the key issues we face. The first one is that from time to time, you can have what we call hallucinations. So basically, you believe that something is right, true, and it is not. One of my very good friends, a professor at Harvard, was telling me last February that he wanted to create a new curriculum made of six lessons. He asked ChatGPT and, let's say, he interacted with it, and at some point the answers were good. And then he asked, okay, but what should the pre-reads be? And one specific one captured his attention: it was a case study from his lab, written by him.

 

Gene Marks (06:38):

Right.

 

Francois Candelon (06:38):

It was a very interesting abstract. The bad thing was that he had never written that piece. But the interesting thing was that it was so relevant that he decided to write it. And I think that's what generative AI is: something you can interact with that will provide you ideas, but you need to be very careful about the risk of hallucination. The second thing that is really potentially harmful is deepfakes, which can multiply cybersecurity risks by one or two orders of magnitude.

 

Gene Marks (07:25):

Right.

 

Francois Candelon (07:26):

And so, we need to be very careful about that part as well. Beyond that, you have questions about biases, and you have the issue of copyright as well, because as we said, it was trained on a myriad of data. All of the internet was read, but okay, what about the copyrights? So, we are facing a new world that not everyone can understand, because, you know, we are at the beginning of an industrial revolution.

 

Gene Marks (07:58):

Right?

 

Francois Candelon (08:01):

And copyright itself was created later: if you go back to the print industry and Gutenberg, it basically took us a century before copyright was developed. So there are many other things to create here as well. In this industrial revolution, we will have to figure out how humans and AI collaborate in a company, but also what regulation should do, what things have to be done, and so on. So, I think that we are in a trial-and-error mode, both for regulation and for corporations in the way they use AI.

 

Gene Marks (08:41):

You know, you-

 

Francois Candelon (08:48):

Sorry, one more point on generative AI: very often people now focus on generative AI, as I said, but there is also analytical AI, which helps you make better-informed decisions, can be either predictive or prescriptive, and is creating lots of value. So, we need to keep both in mind.

 

Gene Marks (09:16):

There's no question about the value. And all the things that you bring up, you know, the deepfakes, where AI can spoof somebody's video or voice, and already that's being used by criminals to dupe, you know, finance people into making transactions; they think they're talking to the CEO of the company and it turns out they're not. There's hallucinations, which is basically just wrong information that your AI is coming up with. There's misinformation, which can be used the wrong way, like in political campaigns; we actually saw some of those things happening last election. All of those are issues, and they're certainly problems that need to be addressed. And then I read last week that Ukraine is deploying drones that use AI and can be instructed to fire upon the enemy at will; in other words, learn the battlefield and have the ability to make assessments and start firing at people. And that to me is sort of the next level of concern, where we have AGI, you know, artificial general intelligence, that's not only generating responses, but also just doing stuff, because we're giving it the ability to do that stuff. To me, that's the really scary part of AI. And I'm curious what thoughts you have on that. I mean, that's coming sooner rather than later. It's kind of already here.

 

Francois Candelon (10:50):

Yes, and before you get to AGI, because AGI is supposed, in our minds, to have a kind of consciousness and so on, you'll have something relatively similar that we call autonomous agents.

 

Gene Marks (11:02):

Yeah, that's right.

 

Francois Candelon (11:02):

Depending on who you ask, they will be available in two to five years, and there is honestly, at the moment, no reason why it could not happen; no theoretical or substantive issue that would prevent it. And this is basically exactly what you've just described, and it can be applied in other ways, not just on the battlefield. An autonomous agent is something that is able to interact with the world, to assess it, and then to act. With LLMs, with generative AI, we are interacting with them, but agents can actually act. So, you could have your own autonomous agent dealing with mine, so that we come to something and we agree. And so it doesn't need to-

 

Gene Marks (11:58):

Yeah, or I mean, aren't self-driving cars an example of that?

 

Francois Candelon (12:02):

Yes, but-

 

Gene Marks (12:05):

But they're evaluating data that's coming in from all different places and then they're saying-

 

Francois Candelon (12:06):

Absolutely.

 

Gene Marks (12:07):

We need to turn left here and it turns left, right?

 

Francois Candelon (12:10):

Yes, but basically, I think that what we need to keep in mind is that for this to happen, it will require what I call a kind of social license.

 

Gene Marks (12:19):

Okay.

 

Francois Candelon (12:21):

So, I mean, the social license is made of three things. First, it needs to be responsible AI; there are questions about fairness and transparency that have to be dealt with. The second thing you need in this social license is a cost-benefit assessment, and it can differ from region to region. I used to live in China, and, you know, on health data: in western China you don't have great GPs, so if you want a good diagnosis, it's quite difficult. The government tried to push AI there, and of course you needed to bring your own data to make it work. But people accepted it, on the one hand because the relationship to personal data is different there, and on the other because the benefit, a good diagnosis, was really amazing.

 

Gene Marks (12:27):

Sure.

 

Francois Candelon (13:29):

While in Europe, when we tried at the beginning of COVID to create these health databases, nobody wanted them. So, the question of cost-benefit, which depends on the region, is the second thing. The third thing, as you mentioned in relation to autonomous driving, is: okay, but who will be responsible? Can we trust these companies to have the right level of capabilities? And I think only if we get all three buckets will we have a social license and be able to discuss things differently. I was discussing with a regulator from the airline industry the other day, who was telling me, "Oh, basically, you know, we don't need pilots. If we have pilots, it's just to make sure that the passengers in the airplane feel at ease. Basically, they're not necessary." However, what will be interesting with AI, if I take this airline analogy, is that at least today we have pilots who are actually able to fly the plane. But in the coming years, once we're accustomed to using AI, will we be able to keep our capabilities? Because we are all lazy, you are lazy, I am lazy.

 

Gene Marks (14:59)

Yes.

 

Francois Candelon (15:00):

We'll be in a position where it will be difficult, and a kind of atrophy might happen.

 

Gene Marks (15:06):

So, you know, the future is scary, but it's also bright, and in the end, I do want to get your thoughts about that. But right now I see online or on television our technology leaders, you know, executives from Facebook and Microsoft and Google and OpenAI, and they're testifying before congressional committees about the potential risks of AI and the potential regulations that might be needed to protect the public. And of course, they're testifying to people, I don't know if you've seen these congressional committees, some of whom I don't even know if they can plug in a television set, let alone understand the implications of what they're hearing. So, does that concern you? Like, do you trust the tech companies to come up with their own framework to protect society from their own technology?

 

Francois Candelon (16:05):

So, what I know, and we can see that there are very different perspectives in different regions, but what I know, and I've done some research on it, is that self-regulation has worked once. It was in Japan at the end of the '90s, when the gaming industry, under attack because of the violence and sexual content in some games, self-regulated itself. But that's the only case that worked. It doesn't mean that AI could not be the second one, but what I mean is that it's not something we can totally trust; especially at a moment when the world is fragmenting and AI can become a source of advantage for the competitiveness of some nations.

 

Gene Marks (17:02):

Yes. Right.

 

Francois Candelon (17:04):

So, this is what makes me feel that we cannot just rely on self-regulation to make it happen. But it doesn't mean that I don't trust people, and I'm sure they are willing to do good, because it's in their interest.

 

Gene Marks (17:19):

Sure.

 

Francois Candelon (17:19):

Because if they are not trusted, we won't get this social license that I mentioned previously, and it won't happen. Look, for instance, at self-driving cars: one of the reasons they are not developing as much as they could is probably a lack of trust in the full environment. That's why everyone would be better off with regulation that can be trusted.

 

Gene Marks (17:48):

So, relying on companies to self-regulate obviously does not seem to be the right path forward. Meanwhile, we just came off a global pandemic where, you know, every single country was handling it its own way, with little coordination with each other. I mean, do you feel optimistic that there could be a worldwide framework for AI that protects people?

 

Francois Candelon (18:13):

No, I personally don't believe there will be a worldwide framework. I was discussing this the other day with Gabriela Ramos from UNESCO, which developed such a framework, and we agreed that it would be difficult to make it happen, but it is worth trying.

 

Gene Marks (18:33):

Okay.

 

Francois Candelon (18:33):

I doubt it, but it's worth trying. At the same time, I believe that for corporations, it means they will have to deal with different frameworks in different regions, and this is something they need to take into account, because, for instance, it will be different in the US, in Europe, and in China. And if I take Europe, which I know well: the EU AI Act that is coming won't exist in a vacuum. You have the Digital Services Act and the Digital Markets Act, the DSA and the DMA, and all of this is really important. So it will be critical for companies to think in terms of a framework, not in a vacuum. That's point one.

 

Gene Marks (19:35):

Okay.

 

Francois Candelon (19:37):

The second thing I believe they will have to add to this framework, which they will have to build, and that will take time, but they need to do it: it will save a lot of time later.

 

Gene Marks (19:48):

Right.

 

Francois Candelon (19:49):

They probably also need to, how could I say, add multinational or regional arms to that framework.

 

Gene Marks (19:57):

Right.

 

Francois Candelon (19:59):

Because it's true that even when you look at moderation, for instance, you've seen that there was an issue between the EU Commission and Twitter/X. We'll see how it gets resolved over time. But for instance, moderation in Europe will be done very differently than it is in the US. So when you are, let's say, a businessman, how to deal with this is important. It will take time, it will take resources. It's not optimal, but this is the way to go.

 

Gene Marks (20:35):

So, put this in the framework of a US business owner, Francois. Say you're running a small construction firm in Michigan, and maybe you're dabbling a little bit in AI, using the applications that you have. Is there anything that a business owner should be doing now to prepare for these potential issues? Is there anything that you would be doing?

 

Francois Candelon (21:02):

So, for instance, one of the first things they need to understand is that they will be considered liable for what AI does. I was talking the other day with a US commissioner, who was telling me, "You know, answers of the type 'Oh, I don't understand, the AI decided it, I don't understand what it does' won't be accepted."

 

Gene Marks (21:31):

You can't say that, right, 'cause you-

 

Francois Candelon (21:32):

You cannot say that, even if many people say it. There will be fines, and you will have to pay for that. Which means you absolutely need to create, let's say, a kind of internal watchdog.

 

Gene Marks (21:51):

Yeah, yeah. Not to interrupt you, but here's an example from the accounting standpoint: we tell our clients all the time that just because a firm prepares your tax return and you sign it, that doesn't mean you can push all the responsibility onto that firm if there's a problem with the return.

 

Francois Candelon (22:06):

Absolutely.

 

Gene Marks (22:06):

It's your tax return. And it's the same thing with your technology, right?

 

Francois Candelon (22:12):

Absolutely. And so it has a real impact for them, and especially for small and medium businesses that need to be able to do that. Nevertheless, and this is one of the things I saw living in China, one of the main differences I find between China, the US, and Europe is that China has an ecosystem that helps small and medium businesses adopt AI, and this is, I believe, a real competitive advantage for China. The US innovates much more; China adopts much more easily. And Europe does neither, but that's another story. You know, Michael Porter, the famous strategy professor at Harvard, used to say that the competitiveness of a nation depends on the capacity of its industry to innovate and upgrade. You innovate, the Chinese adopt, the Chinese upgrade, and Europe is waiting.

 

Gene Marks (23:19):

So, are you optimistic about the future, Francois? I mean, there's a lot to be concerned about, but.

 

Francois Candelon (23:25):

Yeah, I'm neither optimistic nor pessimistic. It will happen. It's as if you were asking someone in the middle of the 19th century, during the second industrial revolution, of course there were no podcasts then, "Are you optimistic or pessimistic?"

 

Gene Marks (23:45):

Right.

 

Francois Candelon (23:47):

We are entering an industrial revolution.

 

Gene Marks (23:50):

Right.

 

Francois Candelon (23:51):

If I use, let's say, the Schumpeterian notion of creative destruction cycles: many jobs will get destroyed, many jobs will get transformed, many jobs will get created. We're at that moment. And it doesn't mean that because a job is destroyed somewhere, a new one is created in the same place.

 

Gene Marks (24:14):

Right.

 

Francois Candelon (24:15):

But it is important for each company to understand that. And I believe in the ability of companies to adopt these technologies, because we were mentioning ChatGPT a year ago; now we'll have Gemini coming from Google DeepMind, we'll have multimodality, where you can transform sound into images and vice versa, and so on. In other words, this is coming. Let's adapt.

 

Gene Marks (24:44):

Yep.

 

Francois Candelon (24:46):

And let's adopt.

 

Gene Marks (24:48):

Francois Candelon is a managing director and senior partner and the global director of the BCG Henderson Institute in Paris, a think tank and research firm that also provides consulting services to companies, big and small, about the impact of technologies like AI. Francois, thank you very much for joining us. It was very illuminating, and I hope to talk to you again sometime in the near future.

 

Francois Candelon (25:11):

Thank you.

 

Gene Marks (25:12):

Do you have a topic or a guest that you would like to hear on THRIVE? Please let us know. Visit payx.me/thrivetopics and send us your ideas or matters of interest. Also, if your business is looking to simplify your HR, payroll, benefits, or insurance services, see how Paychex can help. Visit the resource hub at paychex.com/worx, that's W-O-R-X. Paychex can help manage those complexities while you focus on all the ways you want your business to thrive. I'm your host, Gene Marks, and thanks for joining us. Till next time, take care.

 

Speaker 2 (25:49):

This podcast is property of Paychex, Incorporated. 2023. All rights reserved.
