
Stop Using AI Wrong: Constant Contact’s AI Director Louis Gutierrez

Summary

Discover how AI is transforming small business operations with Louis Gutierrez, director of AI at Constant Contact. He joins Gene to explore the reality behind AI hype — from accuracy challenges to trust concerns — along with what’s next. Learn why establishing clear AI goals matters more than rushing to adopt tools, how agentic systems will revolutionize your future marketing campaigns, and why AI should empower your team rather than replace them.

Topics include:
00:00 – Episode preview and welcome
01:50 – Guest introduction: Louis Gutierrez
03:33 – Louis’s journey into AI
05:03 – AI’s role in business
07:16 – Challenges with AI accuracy
11:01 – How data and objectives can impact AI performance
13:48 – Creating your AI governance strategy
16:46 – The future of marketing automation
19:27 – Building customer trust in AI-powered tools
23:08 – Current and future AI capabilities
27:34 – Fine-tuning AI models for better results
29:37 – Will AI replace jobs?
32:24 – Wrap up and thank you

Spending too much time on HR tasks? Discover how the right support can give you hours back in your week.

Have a question for upcoming episodes or a topic you want covered? Let us know!

View Transcript

Gene Marks (00:00)

Hey, everyone. You know, managing your team does not have to be complicated. I mean, from hiring the right people, getting them onboarded, keeping things running smoothly, Paychex has the HR tools and support that you need to do it all. It's like having an extra set of hands when you really need it. So, if you're curious, visit us at paychex.com/MeetPaychex. That's P-A-Y-C-H-E-X.com/M-E-E-T-P-A-Y-C-H-E-X. You can also find the link in our show notes.

Louis Gutierrez (00:33)

One of the things that we think is really important for small businesses is to first just establish, even if it's just one or two people, an AI governance team. Like, what do we actually want to get out of this? What do we want to get out of AI? What are our goals? What are our values in some cases, right? Some people want to make sure that AI is not, you know, injecting bias into the problem, right? Or some people want to maximize growth with AI. So, I think, like, that's the most important thing is even just, even just talking to some people in your company and saying, let me write down, like the top five things that I want to get out of AI and then understand, like, how you can bridge, like the tools that you do have access to to what your actual goals and values are and your strategy around AI. I think that's probably the most important step to take.

Announcer (01:25)

Welcome to THRIVE, a Paychex Business Podcast. Your blueprint for navigating everything from people to policies to profits. And now your host, Gene Marks.

Gene Marks (01:35)

Hey, everybody, it's Gene Marks. Thank you so much for joining us again for another episode of the Paychex THRIVE podcast. We are going to continue to talk because we've been having some other episodes that have talked about AI, but this is a really good one, you know. I'm speaking with Louis Gutierrez, the director of AI at Constant Contact. I know most of you guys, I am sure, are familiar with Constant Contact as a marketing, email marketing platform. I've used Constant Contact and been a fan of the platform for years. Louis, my company actually, we sell CRM software and people always say, like, well, can't we use one of the CRM platforms that you sell for, you know, for, you know, bulk email or email campaigns? I'm always like, no, use, like, a Constant Contact because that's like what you guys do, you know. So I just want to reiterate that for people that are watching or listening that, you know, if you are planning on sending, doing bulk marketing and digital marketing and email campaigns, even if you have a CRM assistant that professes to be able to do that, I strongly recommend that you integrate it with a good platform like a Constant Contact because that, you know, that's what they're in business to do. But so, Louis, first of all, thank you for joining.

Louis Gutierrez (02:49)

It is, absolutely. Thank you.

Gene Marks (02:51)

It's great to have you here. So, your title is director of AI at Constant Contact. I'm going to make, I'm going to make a big assumption that you're probably the first person to hold that role at Constant Contact. Tell me a little bit about yourself and how you got to be in that role and what you do.

Louis Gutierrez (03:10)

Yeah, yeah. Even moreover, I'm probably one of the few people that has AI in their title at Constant Contact, except for some other AI engineers that I've hired.

Gene Marks (03:19)

Right. And for all I know, you're a bot. You are human. Right? I want to make sure.

Louis Gutierrez (03:24)

I'll leave that, I'll leave that to the interpretation of some of your viewers.

Gene Marks (03:27)

Okay, fair enough. You seem pretty real. You seem pretty real.

Louis Gutierrez (03:30)

Right.

Gene Marks (03:30)

So how'd you get that? How'd you get that job? Yeah.

Louis Gutierrez (03:33)

So, my story is a while back, during the financial crisis of 2008, I was a software engineer at a company back then, Sun Microsystems, which doesn't exist anymore. So I lost my job, got laid off. Then after snowboarding for a winter and enjoying myself, I decided to go to grad school. Originally, I was going to do security. So, I was in grad school going to do my PhD. Do security. And then one day, as part of our, as part of our requirements for first year PhD students, we needed to go to poster sessions, talks, and I saw someone give a talk about a machine learning algorithm that they were doing to analyze some articles, I think in the New York Times. And I saw it and it was just horrible. The results were just, just really, really bad. And I did what every good first year grad student does. I roasted the person up there, asked him a bunch of really hard questions, made him sweat. And so then afterwards, I uncharacteristically went up there, started to talk to him, maybe asked him about some of the work that he was doing that turned into coffee. And then by the next day, I was looking to switch my advisor to machine learning AI. Yeah. And I think at some point during that conversation, I realized that someday I would be here talking to people about how AI and machine learning are changing the world, how everybody's adopting it, how it's changing everything and that we need to learn how to use this for productivity, for small businesses, for everything. I just knew that it was going to play an integral role in everything that we did as a society. So that's how I got started in it. I worked at several companies. Most recently, before Constant Contact, I was at a fintech. We were working a lot with AI and machine learning models on underwriting loans, fraud detection. I wanted to come over to Constant Contact because it represented a really good opportunity right now for some of the models that are becoming really popular. Content generation. Obviously, like, in fintech, you're dealing with people's money, so there's a lot more regulation around it. And if you've noticed, like, healthcare and fintech have been a little slower to adopt some of these generative models. So that's one of the reasons why I wanted to come over to Constant Contact. What my job is at Constant Contact, I think we're still defining that to a certain extent. Right. I think it's one part AI strategist. So, working with the AI governance team to understand what do we want to get out of AI, what are our goals, what are our overarching goals, what are our values that we want to communicate with AI from a security, from a legal perspective. One part manager of managers where I manage several teams that do technical work. So I'm still heavily embedded in some of the technical aspects of it and then another part on the productivity side. So understanding, you know, Constant Contact is a small business as well. Right. We're a medium-sized business. And so we want to adopt these tools. So how do we adopt them? How do we see ROI? How do we make sure that we're enabling our employees to get the most out of these tools and then also just dealing with, you know, all kinds of AI issues that come up at any given point is probably a large part of it as well.

Gene Marks (06:47)

It is, it's a bit, it's a, it's a responsible job and a big job, and obviously I think a lot of Constant Contact's future, just like, you know, many tech companies, really hinges on the successful implementation and rollout of AI, and, you know, so your job is, I think it's critical to the company, and I'll make sure I tell your boss that as well because, you know, everybody needs to be reminded of that.

Louis Gutierrez (07:10)

I'm going to make sure to connect you two after this call.

Gene Marks (07:12)

Yes. You'll have to make sure that. Okay, so, being in that world, and your timing is terrible because as we're having this conversation right now, just over the past few weeks, I mean, the Wall Street Journal, the New York Times, various tech trades have all been writing about recent surveys that have come out, recent studies that are done. Work slop, you know, where AI is creating, you know, no, people aren't laying people off because of AI. It's actually creating more work for a lot of companies trying to fix all the wrong stuff that AI generates, you know. I write for Forbes like, six to eight times a month and I just wrote last week just to say, like, I did a, just, I used ChatGPT and Grok and Gemini's image creators because they're all, you know, oh, my God, these are, like, fantastic. And I asked, you know, just like, some simple prompts about, like, you know, have a, you know, create a Yorkshire Terrier playing baseball, hitting a home run, you know, at the plate or whatever. And you can see the article, there's like, it's all over the place. You know, I mean, it's really not great. And then I would come back with more prompts to, can you fix this? Can you make it like, whatever? And maybe it's just me. I'm just not writing good prompts. But I mean, like, that's, you know, I'm like a typical average dude, like, trying to use this stuff and it's not... Anyway, you know, you must deal with that a lot. Like, what, what are your thoughts on the accuracy and the reliability of what AI is doing right now? And where do you think it's going? Will it get better?

Louis Gutierrez (08:47)

Yeah. Yeah. What a great question. So, I can even, I've experienced that myself as well. So, I've been coding since the 90s, right?

Gene Marks (08:54)

Yes.

Louis Gutierrez (08:54)

To me, coding is second nature, and I'm just not very proficient with some of these AI tools, like Cursor. So, we adopted Cursor for all of our engineers at Constant Contact. And to be honest with you, for me personally, I see that it generates a lot of code, and a lot of times I have to go back and fix that code. And that seems like it might take more time than having me just write the code. And then even more so, I also teach at UT Austin. I teach Introduction to Fine Tuning LLMs. I teach explainable AI. And I think I'm seeing a similar phenomenon, too. A lot of the students aren't really learning coding anymore. They're learning to use these tools. And a lot of times it's just this really verbose code that takes me a long time to go through and figure out what they're doing. So it's a big problem. I think, and even to compound that issue, I think there's a lot of, not necessarily untrue things out there, but a lot of, like, exaggerations about what AI can do. So, for me, I always think of AI... So, I grew up in San Antonio, and I meet people sometimes who come to San Antonio and they're like, yeah, this is our one vacation a year and we came to San Antonio. I'm like, you came here? I know everything about San Antonio. It's not so great, right?

Gene Marks (10:19)

San Antonio is really nice by the way, don't say that. The Riverwalk is great.

Louis Gutierrez (10:21)

That's exactly what I'm talking about. So I have the same feeling for AI and machine learning. These are tools that I've been using since I started my PhD in, you know, 2009, finished in 2014, right? So, I'm very familiar with everything that we're doing. Everything that everybody's experiencing, I experienced over the course of 15 years already. And so, I'm not surprised that some of these tools don't necessarily meet up to the expectation of, like, here's a magic box and I'm going to give it something and I'm going to get something in return. There's this old saying in statistics that generalizes to AI and machine learning: all models are bad, but some are useful, right? And that's essentially saying, like, these models inherently aren't good at anything, right? They're only good because you're able to strategically put them in a place, in a workflow, that can bring you value, and then you're able to measure that value. And that's the really, really hard part: how do I know that a model is actually doing something good, or how do I measure the impact of it? So all those things are not surprising to me. And it goes to this, that adopting some of these tools, like just being able to adopt the tools, is only the first step. Being able to become proficient with them, being able to measure them, being able to grow as the tools change and as your problems change. There are these concepts in machine learning called data drift and concept drift, right? So data drift means, like, I use a model, I deploy it in this very specific scenario, this very specific workflow, and then what ends up happening is that it was good for that moment, but then the distribution, like my customer base, starts to change, or something externally in the world starts to change, you know, tariffs go up or something else happens. And so then, like, the distribution of my data changes. And so now you can no longer expect the model to perform the same way that it did when you first put it there. And then concept drift is a change in what I define as success. Right. So, at Constant Contact, success might be that somebody sent a campaign, right? So maybe that starts to change as well. Right. So the success metric for your model changes as well. And then at that point your model is no longer good. You need to reassess it. You need to think about how do I change this to fit what the new data is saying.
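For readers who want to see the drift idea in code, here is a minimal sketch of how a team might check for both kinds of drift. It assumes the scipy library and simple tabular data; the feature values, metrics, and thresholds are illustrative placeholders, not anything from Constant Contact's pipeline.

import numpy as np
from scipy.stats import ks_2samp

def data_drift(train_values: np.ndarray, live_values: np.ndarray, alpha: float = 0.05) -> bool:
    """Data drift: has the live distribution of a feature moved away from what the model saw in training?"""
    result = ks_2samp(train_values, live_values)   # two-sample Kolmogorov-Smirnov test
    return result.pvalue < alpha                   # True -> distributions differ; consider retraining

def concept_drift(baseline_success: float, current_success: float, tolerance: float = 0.05) -> bool:
    """Concept drift (loosely): has the success metric the model is judged on degraded or changed?"""
    return (baseline_success - current_success) > tolerance

# Illustrative usage with synthetic data
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 5_000)   # a feature's distribution at training time
live = rng.normal(0.4, 1.2, 5_000)    # the same feature after the world (or the customer base) shifts
if data_drift(train, live):
    print("Data drift: the inputs no longer look like the training data.")
if concept_drift(baseline_success=0.30, current_success=0.22):
    print("Concept drift: the metric that defines 'good' has moved; reassess the model.")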

Gene Marks (12:41)

Right. You know, it's, you've used the word data, like, you know, half a dozen times, you know, in what you're just saying. And your, Constant Contact focuses on, for the most part, SMBs. Right. I mean, that's who, you know, that's your bread-and-butter material. Speaking just, it's not even anecdotally, I mean, just, like, real world, all the clients I deal with, our databases are a disaster. You know, I mean, they're, you know, I mean, out-of-date email addresses, misspellings, things that are wrong, not updated, blank, whatever. I mean, you know, how does, how do you deal with that? I mean, the only way that your platform is a success is if I'm sending out emails to my list and then I get, you know, good open rates and response rates, and I'm like, oh, I love Constant Contact and I love the AI tools that are helping me do this faster. But where does AI come in to help me, you know, with my data, which is so lousy, and to have a successful campaign? What can you guys do to work on that?

Louis Gutierrez (13:48)

Yeah, yeah, great question. So, I think that's one of the biggest... I mean, Constant Contact, when I first came here, it's a 30-year-old company, so we have lots of legacy data, lots of old data, like, legacy infrastructure. So I think you hit a really, really important point that I think is something that small businesses and large, you know, corporations experience, which is data fragmentation. Data is all in different places, and you want your AI to make some sort, to be the logic, the decision maker. But, you know, sending over that data as content that the AI can ingest and then make decisions on becomes really, really difficult. Yeah, so I think that's a huge problem. In addition, I think, like, one of the things that we think is really important for small businesses is to first just establish, even if it's just one or two people, an AI governance team. Like, what do we actually want to get out of this? What do we want to get out of AI? What are our goals? What are our values, in some cases? Right. And some, some people want to make sure that AI is not, you know, injecting bias into the problem. Right. Or some people want to maximize growth with AI. So I think, like, that's the most important thing: even just talking to some people in your company and saying, let me write down, like, the top five things that I want to get out of AI, and then understand, like, how you can bridge, like, the tools that you do have access to to what your actual goals and values are and your strategy around AI is. I think that's probably the most important step to take.

Gene Marks (15:23)

Yeah, I think that's really, really great advice. And I do think that it's a, I think it's important for people to own their databases and how they're using AI. I have a number of companies that, bigger companies, but smaller companies should also know to create AI policies as well for the governance team. Right. And what can AI be used for? What can it not be used for? So the next question I have for you is, there's generative AI and there's agentic AI. And I'm wondering, to me, to me the Mecca of Constant Contact, and again, I'm a user, I'm a subscriber. The Mecca would be, Louis, for me to say, create an email that does this, send it to everybody with blue eyes and green hair in this list, and Constant Contact just goes to it. You know, like, to me, like, that is, again, this is, you know, we're far away from that point. But I want you to know that's sort of like the perception that a lot of business owners like me have about how to truly use AI. Like, instead of paying a marketing person, I go to Constant Contact and I talk to it as if there is a marketing person and say, all right, you're, you know, you've got all the AI genius there, so you do this work, you know, you act as my agent to do that. Is that unrealistic in the future? Do you think that's where things are heading? Or am I, or is this way too far out in the future?

Louis Gutierrez (16:46)

Yeah, no, I think it's very realistic and it's aligned with our internal AI strategy. So yes, you do have generative models, and then you have other types of models as well, discriminative models. And then you have an agentic framework, which is more like a delivery system around those tools, right? And so, yeah, at Constant Contact, our immediate goals, which, you know, in the very near future we're going to release some features for this, but essentially our immediate goals are anything that a human can do when they log in to the interface, the web interface, the agent system can do, and you can interface with that agent system through a prompt. And so, when you think about agentic systems, and I have a personal affinity for agentic systems, I actually worked on them a really long time ago. We didn't use generative models, but we used reinforcement learning models. I was at the Department of Energy during some of my graduate work. And essentially, at its most fundamental, it's just, it's super simple. It's basically, like, you have a model that's a decision maker. In this case, it's going to be an LLM, and it has access to three things. It has access to prompts, it has access to data, and it has access to tools, things that can do things, like a content generator, right? And so then an input prompt comes in, and then the model decides, like, there's an objective in here. It loops through its resources, those three things, until it meets that objective, and then it returns to the user. And so, in that case, I think you can, like, what you're asking is not far away. Constant Contact is headed in that direction. And I think, like, hopefully you can have me back at some point soon, and I can say I can give you an announcement of the release of a feature that's really similar, that'll get 80% of what you're saying right now. Obviously, that last 20% that you talked about, about identifying particular users from, like, you know, like you said, blue eyes and green hair or something, that one is a little tougher. Like, we, you would have to, like, integrate with external data sources to understand more about your user base and so forth. But I think that something like that is in the future, where we can start to cohort across, like, these organic lines that exist. So in machine learning, it's actually called, like, unsupervised learning. So cohorts that may not be very obvious to us because those relationships are too complex. Like, the model will be able to decipher those cohorts, put them aside and say, okay, this email goes to here, and I'm going to modify this email to this particular cohort because they're going to respond better based on some optimization metrics that are predefined. So, yeah, the future is almost the present, I think.
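To make the loop Louis sketches concrete, here is a minimal, hypothetical agent loop in Python: a decision-making model with access to prompts, data, and tools that iterates until the objective is met. The call_llm stub and the tool functions are illustrative placeholders, not Constant Contact's actual agent system.

from typing import Callable, Dict

# Tools the agent can call; in a real system these would hit content generators, data stores, etc.
TOOLS: Dict[str, Callable[[str], str]] = {
    "generate_content": lambda brief: f"[draft email for: {brief}]",
    "fetch_brand_data": lambda account: f"[brand colors, logo, tone for {account}]",
}

def call_llm(context: str) -> dict:
    """Stand-in for a real model call; a real agent would send `context` to an LLM API."""
    if "generate_content returned" in context:
        return {"done": True, "answer": "Here is your campaign draft, ready to review."}
    return {"done": False, "tool": "generate_content", "tool_input": "fall sale announcement"}

def run_agent(objective: str, max_steps: int = 5) -> str:
    context = f"Objective: {objective}\n"
    for _ in range(max_steps):
        decision = call_llm(context)                 # the model decides the next action
        if decision.get("done"):                     # objective met -> return to the user
            return decision["answer"]
        tool_output = TOOLS[decision["tool"]](decision["tool_input"])
        context += f"\nTool {decision['tool']} returned: {tool_output}"   # loop with enriched context
    return "Stopped after max_steps without meeting the objective."

print(run_agent("Send a campaign announcing our fall sale"))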

Gene Marks (19:27)

Okay, all right, that's good to hear. Now, when you say the future is almost the present, do you think that your typical customer is actually going to trust Constant Contact to do this? I mean. And by the way, you guys are inheriting, like any tech company, you know, you do realize, I mean, this is decades old. I mean, I grew up with Microsoft releasing versions of Windows back in the day that just were not ready for prime time. But, you know, the whole goal was for users to use it, they would be the ones to find the bugs, report it back, and by the time you get into it... That's why there's always this old saying: you never buy version one of anything Microsoft makes; you wait until, like, version three before they actually get it right, you know. And I feel like, I feel like the same mistakes are happening again and again. Like, with AI, there's such a rush to get it out there that, you know, the product that gets put out there doesn't work as well or isn't as trustworthy or reliable. And then people lose confidence in it. Like, business owners like me, they're like, jeez, this stuff, like, doesn't work like they say it's going to work. So how am I going to believe Constant Contact, to trust them with my customer or prospect data, to send out the right campaigns and organize it that way? What do you say to a customer when they have that sort of challenge with trust, you know?

Louis Gutierrez (20:36)

Yeah, no, I agree. And I think that's, that's a legitimate question and fear to have, right? Because a lot of times you're entrusting some of these AI model workflows to do things that a human would do. And there's a lot of risk associated with it. So, I think every company, like, needs to define, like, what level of risk are they willing to take? Because all these models are inherently risky. That's actually the definition of probabilistic modeling. It's that, you know, there is no... Two plus two equals four, but in probabilistic modeling, there's always going to be a percentage, a delta of error associated with it. So, before a company, you know, takes that leap into adopting AI tools, for example, like a healthcare company, decisions that a healthcare company may make may be different than that of, you know, like an ice cream shop or something. Right? Yeah. So why should customers trust us? One is that everything that we do, every decision that we make from the AI strategy perspective comes from a governance committee that is a cross of our legal team, our security team, our tech team, and we all come together and we decide at a high level, here are the things, here are the things that are important to us. And at the top of that list is customer trust, deliverability, and adopting AI in a way, in a clear-eyed way that brings measurable value to our customers. Right. So it's not about like we need to just superficially put in some agentic system that, you know, that shows that we're adopting quickly. We've been working on these problems for, I've been at the company for a year now and some of these, some of these solutions have been, you know, worked on for six months, eight months, nine months, because we want to make sure that when we release them, they're secure, they're fast, and there's an accuracy that we can measure and be, and it's predictable. And most importantly, like that it brings value to our customers. Right. And this goes to, like you said, this company's been around for decades and decades, so we have a core set of users who like it the way that it is. Right. So that part, that component, that user experience isn't changing for them. But we do have new users who are willing to adopt these tools faster to be like, like you said, those beta users in some cases, like kind of work through some of the issues, but mostly like we don't release anything here until we're 100% sure that we can predict what the behavior will be in a production environment.

Gene Marks (23:07)

Okay, that's great. All right, so speak to me as a customer, which I am of Constant Contact, and finish the sentence. You know, Gene, our AI tools at Constant Contact will enable you to do this right now and will enable you to do this in the next six to 12 months.

Louis Gutierrez (23:28)

Yeah, that's like the actual operation of an LLM, right? It just completes tokens. Yeah, so I'll play the LLM here. So, Gene, right now our AI tools will enable you to have a minimal prompt to send out a multi-channel campaign based on that minimal prompt. Because we take in your brand data to enrich the context that we send over to the AI. So essentially something that used to take you 30 minutes, 45 minutes, in some cases an hour, I've talked to customers who used to spend hours and hours wordsmithing an email. One of the most interesting things that we've noticed in our data is that over the course of the year we've seen the length of the prompt, the input prompt, go from relatively long, where people were giving lots and lots of instruction, to really, really small. In fact, at this point, probably these numbers are off the top of my head, but you know, under three, four, five words is a prompt, right? Most of our prompts. And so people are still seeing increased campaigns sent. We're seeing increased campaign sends based on that. So essentially right now you can save a lot of time. You can, you can exceed and have campaigns sent off that really target your audience, that communicate what you want to say about your business in a shorter amount of time so that you can go spend more time doing whatever it is that you love about your business. So, Gene, where are things going in the near future? So I'll talk about the near near future, meaning this quarter. So in the near future, we're going to release an agent system that's going to have a chat interface that allows you to do most of the things. We're targeting about 80% of what you can do in Constant Contact, generate campaigns, access knowledge documents, ask for advice, and so forth. Right. So basically, a marketing assistant there, which is going to be released this quarter, that's going to allow you to do, to send out campaigns and to essentially shrink that time having to log in and actually having to click on buttons. Instead you can just say, what should I do today for my company in order to grow the business? It'll offer up some examples and then say, like, you should generate these campaigns, how about these examples? It'll generate a template preview and then you can click to adopt it and then send it out. Where are we going in the future? So one of my really big beliefs about this industry, and it's not anything, any prediction that I'm making, but it's things that I've observed historically, is that these models, like you said, are great, they're amazing. Like, what OpenAI has done, what Meta has done, they've pumped in a lot of money, taken all the experts in the world, taken all the data in the world, and trained these really, really amazing models. The way that I like to describe it is, and this probably hints at some of the frustration that you're having, it's like a C student. It's like okay at everything, right? Yeah. And so a lot of times that's not good enough for, like, our customers. We want an A-plus student, we want the overachiever student. Right. And so, where we're headed, and we'll probably be looking at this early in H1 next year, or at least we currently already have some versions of this, is we're fine-tuning models. Right. So, what does fine-tuning mean? So one of the ways that you augment models to make them better, and that we all do, is we send context over to it. I ask it, like, I want to eat, you know, I want to have dinner tonight somewhere. And then it gives me some examples and I say, oh, wait a minute, I'm in Austin and I'm interested in Chinese food. Right. And so, I'm starting to send more context and it's giving a better solution. So, the way to turn the level up on that is taking your data and then, instead of just sending context, which doesn't change the model, like, the base-level model stays the same during that, what you do is you actually train the model and update the parameters within the model, so you fundamentally change the model.
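As a rough, hedged illustration of the difference Louis is describing, the sketch below contrasts prompting with extra context (the model's weights never change) against parameter-efficient fine-tuning in the Hugging Face transformers/peft style (the weights do change). The base model name, dataset file, and hyperparameters are assumptions for the example, not Constant Contact's actual setup.

# 1) Context only: the base model stays exactly the same; you just enrich the prompt.
prompt = (
    "You are a marketing assistant for a small bakery in Austin.\n"   # added context
    "Brand voice: warm and playful.\n"
    "Write a short promotional email for our fall cookie sale."
)

# 2) Fine-tuning: actually update (a small set of) the model's parameters on proprietary data.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "meta-llama/Llama-3.2-1B"                      # placeholder open-source base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small adapter matrices instead of every weight, keeping fine-tuning affordable.
model = get_peft_model(model, LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16,
                                         target_modules=["q_proj", "v_proj"]))

data = load_dataset("json", data_files="marketing_copy.jsonl")["train"]   # your own examples
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-marketing", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()                                        # after this, the model itself has changed
model.save_pretrained("ft-marketing-adapter")          # can be hosted internally, no per-token API cost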

Gene Marks (27:31)

That's great. And this is all to improve accuracy. Correct?

Louis Gutierrez (27:34)

All to improve accuracy and even more. Right. So, in some cases, like, for us, like, we're not so concerned with the code generation of a model. Right. Or whether it could write a book report on World War II. Unless you're, like, a World War II company, memorial company or something, right? And so, the idea here is, like, can we make it into an A student for generating marketing copy at the risk of possibly making it a D student in code generation, which we don't really care that much about. Right. So, then we're going to squeeze out more performance based on the fact that we're a 30-year-old company, have been doing this for a really long time, we are surrounded by experts, we have lots and lots of data that OpenAI and Meta don't have access to.

Gene Marks (28:18)

Right.

Louis Gutierrez (28:18)

So we have our own proprietary data. So we're going to train that model to squeeze out more performance based on optimization metrics that we define as success. And so that improves the performance, reduces the error rate, and also then it allows, like, in some cases we can actually make the model smaller. Right. And so that'll decrease latency, so it'll be faster. And then once we have this open-source model that we train, we can start to host it internally. Right. So that increases security. Right. So now that, now that we have control, it's in our VPC, assuming, so long as, like, AWS isn't down. Yeah, it's in our VPC. So we have complete control. We're no longer sending data off to a third-party API. Right. And then we can control cost on that as well, because we're not paying for tokens anymore. We're just paying for whatever computation cost it has to host that model. So we're talking about improved performance, decreased error, we're talking about increased speed, and then we're talking about cheaper costs. So we don't have to raise any prices, or we don't have to, like, incur that cost. If OpenAI all of a sudden decides to raise their API costs tomorrow, a lot of companies are going to be put in a bad position. So then, you know, getting away from that dependency as well.

Gene Marks (29:37)

I love it. I love it. I only have, we only have another minute to go and you've given me great information. I just, one of the other takeaways that I get from this, particularly for small and midsize businesses that are watching this, is that like anything else, like, I don't picture myself doing this with Constant Contact. Like, I don't picture the typical business owner, unless they're really, really small, leveraging these tools. I believe that it's our marketing people who are doing this. And for those, you know, people that are out there, I know it's those, you know, tech companies like to say, we're not replacing people, we're not replacing people, which is baloney because I think some people will be replaced, a lot of people will be replaced by AI. But there are certain knowledge jobs, like, to me, I look at, if you're a really good marketing person in a company and you're using Constant Contact, these AI tools that you're introducing are nothing more than helping me do my job even better, you know, and also doing more of it as well. Like, I don't feel like this is something, oh it's going to take my job away. It's like, I need to learn this stuff and become really proficient with it so I can be a great marketer in 2030 because the, you know, the skills of that job are clearly changing. Does that make sense? Do you agree with that?

Louis Gutierrez (30:51)

Yeah, I, no, we agree with that completely here at Constant Contact. And I think that, in a sense, like, AI for large, gigantic companies and AI for small businesses mean two different things. I think at the very large scale, like, there is, there is a strategy of leaner teams. Leaner teams enabled with AI tools that can do more. Whereas, like, before, the strategy was, like, let's buy up all of the talent and have this really big team. And the interesting thing here is that that's always been the strategy of small businesses, a lean team that can do more, that can wear lots of hats. So I think that you're absolutely right. Like, AI for small businesses essentially is a tool that can help them do more and free up time to do other things. Because I think a lot of times it's probably the case with, with your business, your marketing, you know, your marketing manager probably does other stuff as well.

Gene Marks (31:44)

Yeah, we're not doing enough. We're not doing enough. You know, I mean, I would be having her send, you know, Constant Contact campaigns like all day, every day, but she's got other stuff that she needs to do.

Louis Gutierrez (31:53)

Yeah, exactly.

Gene Marks (31:55)

These tools just seem like they would just, they would make her that much more productive. And I guess for people that are watching or listening to this that are employees or marketing managers of companies, you should not be scared. You should be embracing this stuff because that will increase your value to your employers. And that's, you know, that's the takeaway that I get from this conversation. Louis, you're great. I really appreciate all the time that you have taken today, you know, and yeah, we definitely would love to have you come back in, like, a year or so. I mean, like, you know, because it's just the world is changing so much in the area that you're in, and it'd be interesting to see if some of your plans are going to come to fruition and how well they're working out. But I really do appreciate you taking the time.

Louis Gutierrez (32:37)

Yeah, thanks so much. It's been really fun. Yeah, I'm happy to come back.

Gene Marks (32:40)

Do you have a topic or a guest that you would like to hear on THRIVE? Please let us know. Visit payx.me/ThriveTopics and send us your ideas or matters of interest. Also, if your business is looking to simplify your HR, payroll, benefits, or insurance services, see how Paychex can help. Visit the resource hub at paychex.com/worx. That's W-O-R-X. Paychex can help manage those complexities while you focus on all the ways you want your business to thrive. I'm your host, Gene Marks, and thanks for joining us. Till next time, take care.

Announcer (33:14)

This podcast is property of Paychex Incorporated 2025. All rights reserved.