The Digital Project Manager
The Digital Project Manager is the home of digital project management inspiration, how-to guides, tips, tricks, tools, funnies, and jobs. We provide project management guidance for the digital wild west where demanding stakeholders, tiny budgets and stupid deadlines reign supreme.
How to Actually Measure AI’s Business Impact
In a world where AI hype is everywhere, what does meaningful, grounded transformation actually look like? In this episode, Galen Low sits down with Michael Domanic, VP and Head of AI at UserTesting, to unpack how AI is being strategically integrated into core business functions—not just to ride the hype wave, but to unlock measurable value. From demystifying the ROI of AI to cultivating a culture of experimentation and enablement, Michael shares his real-world approach to driving AI transformation that sticks.
They dive into the mindset shifts needed as organizations mature in their AI journey, how UX professionals are becoming more essential than ever, and why the future of AI in business may not be about tech at all—but about how people adapt to ongoing change.
Resources from this episode:
- Join the Digital Project Manager Community
- Subscribe to the newsletter to get our latest articles and podcasts
- Connect with Michael on LinkedIn
- Check out UserTesting
Is it even possible to quantify the impact AI is having on an organization?
Michael Domanic: If you're improving a part of the sales process, then you want to measure how much improvement did that thing make? What outcome did it achieve? Did we sell more? Did we achieve a revenue target as a result of implementing that thing into the business? You know, same thing in marketing, we're improving a lot of process in marketing. Okay, great. So if we save time, for example, creating marketing campaigns, how many more campaigns can we create, and what is the value of those additional campaigns that we're creating? I think that's where you want to get a little bit more focused on measuring value.
Galen Low: What sort of organizational mindsets and behaviors need to change along the way to keep pace with the AI transformation journey?
Michael Domanic: Everyone's gonna have to be comfortable with the fact that we're all going to be in transformation and learning mode for the foreseeable future. One of the things that we know is that the pace of proliferation of capabilities isn't going to stop. It's going to continue to advance, and we're gonna have to be really adaptive to all of these changes and all of these advancements.
Galen Low: You just published an article about your predictions for AI in 2026. One thing that you also hypothesized was that UX researchers and designers will become some of the most important people at companies in 2026. Can you walk me through what you mean by that?
Michael Domanic: Yeah, so...
Galen Low: Welcome to The Digital Project Manager podcast, the show that helps delivery leaders work smarter, deliver smoother, and lead their teams with confidence in the age of AI. I'm Galen, and every week we dive into real world strategies, emerging trends, proven frameworks, and the occasional war story from the project front lines. Whether you're steering massive transformation projects, wrangling AI workflows, or just trying to keep the chaos under control, you're in the right place. Let's get into it. Okay, today we are talking about slaying false assumptions about AI within enterprise culture and giving it a system reset in order to better understand how AI is impacting your core business, not just vanity metrics. We'll be covering the right and wrong ways of measuring the impact of AI, how an organization's view of AI needs to change as its AI maturity changes, and we'll be covering a few predictions about how AI will impact business trends throughout 2026. With me today is Michael Domanic, Head of AI at UserTesting. Michael is on a mission to drive enterprise-wide transformation with AI to unlock meaningful business value. He does that by partnering closely with cross-functional leaders to embed AI into core workflows, foster a culture of innovation, and scale AI fluency across his organization using a pragmatic, impact-driven approach. Michael, thank you for being with me here today.
Michael Domanic: Thanks, Galen. I really appreciate the invite to join you today.
Galen Low: I'm excited about this conversation because we've had some really interesting pre-calls leading up to this. And I know that we'll probably zig and zag throughout this topic, but just in case, because I'm that project manager, here's the roadmap that I've sketched out for us today. So to start us off, I wanted to just set the stage by getting your hot take on a big hairy controversial question, but then I'd like to zoom out from that and talk about three things. First, I wanted to talk about how you've been implementing AI and measuring the impact of AI at UserTesting, and also why you're approaching things the way that you're approaching them. And then I'd like to explore where the rabbit hole goes as an organization matures throughout their AI journey, like what mindsets, behaviors, and values need to shift or maybe even be discarded completely in order to remain successful. And then lastly, I'd like to dive into some of your predictions for how AI will impact our work lives and the way we do business in 2026. How does that sound to you?
Michael Domanic: Sounds great. Let's do some zigging and zagging.
Galen Low: Let's zig and zag. Let's go. Alright, so I thought I'd start us off with one big hairy question, and I'm just gonna give this a bit of context. The way I see it, most enterprises want to be able to measure the success of their implementation of AI, that ever-elusive ROI of AI that goes beyond the binary of failing or not failing. But opinions seem to be a bit divided on this. Like, some people I talk to say that AI doesn't even need to be measured because it's clearly creating improvement. Others say that organizations need to come up with net new AI-related metrics and measure them rigorously. So my big hairy question is this: is it even possible to quantify the impact AI is having on an organization, or is it a bit of a fool's errand at this stage for most organizations to be thinking about their transformation that way?
Michael Domanic: Yeah, it is a good question, and it is one that does get hotly debated quite often, especially by characters like me who are running, you know, AI in organizations. I think that there is a school of thought that says, you know, don't bother measuring the ROI of AI, it's so early. And also, this should feel like table stakes to most organizations, right? Like, you don't measure the ROI of electricity. You don't measure the ROI of email. So why measure the ROI of this? My view is a little bit different. Like, I understand all that and I agree with parts of it. You know, we have a CFO, we have a board. We're spending a lot of money on our AI transformation. We need to be able to show some kind of return on these investments, right? So instead of trying to come up with some holistic number and say, this is the ROI across our organization from bringing AI in, what we do instead is, on a quarterly basis, we look at our top 10 most meaningful implementations of AI, and we measure the return on investment of those implementations. So for example, if someone in our organization creates a GPT that's going to help a critical part of the sales process, we're gonna measure that, right? We're gonna measure the outcome, you know, not necessarily the time saved. I think a lot of times when people are measuring ROI, they tend to get too focused on time savings. Time savings is not an outcome, it's a benefit, right? So the outcome is really what you achieved by implementing this AI solution into this part of the business. So again, if you're improving a part of the sales process, then you want to measure how much improvement did that thing make? What outcome did it achieve? Did we sell more? Did we achieve a revenue target as a result of implementing that thing into the business?
Same thing in marketing: we're improving a lot of process in marketing. Okay, great. So if we save time, for example, creating marketing campaigns, how many more campaigns can we create, and what is the value of those additional campaigns that we're creating? I think that's where you want to get a little bit more focused on measuring value.
Galen Low: I love that fidelity, like that level of, you know, not necessarily going so granular as to be counting hours saved, and not necessarily going way out of your way to create new metrics and a new way of thinking about the business. Ultimately, it's about the outcome, and your point is well taken about the fact that we don't measure email efficiency right now, you know. In some way, shape or form, this is new. It's transformative. But, I mean, I'm skipping ahead to predictions, but you know, a decade from now, it will be unremarkable in some way, shape, or form.
Michael Domanic: Agreed. And look, I think that there are two different types of things to measure. There's what I call the squishy ROI, and then there's the hard ROI. What I described is the hard ROI, right? Like, that's the thing that you can actually measure and say, we booked 64% more discovery calls this quarter because of this AI solution. We know what the value of a discovery call is; it's a critical part of the sales process in a SaaS company, right? But then there are the squishy metrics, or the squishy measures, which is: we are an AI-enabled workforce. We know that people want to go work for AI-enabled workforces, and we know that's adding value and helping us recruit better talent. That's a little bit harder to measure, but we know it's important.
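To make the hard-ROI arithmetic concrete, here's a minimal sketch of the kind of quarterly calculation Michael describes. Every figure in it (baseline call volume, per-call value, tooling cost) is a hypothetical placeholder for illustration, not UserTesting's actual data; only the 64% uplift echoes the example above:

```python
# Hypothetical hard-ROI calculation for one AI implementation, in the spirit
# of the discovery-call example. All numbers below are invented assumptions.

def hard_roi(baseline_calls, uplift_pct, value_per_call, quarterly_cost):
    """Return (incremental pipeline value, ROI ratio) for one quarter."""
    extra_calls = baseline_calls * uplift_pct          # additional calls booked
    incremental_value = extra_calls * value_per_call   # outcome, not time saved
    roi = (incremental_value - quarterly_cost) / quarterly_cost
    return incremental_value, roi

value, roi = hard_roi(
    baseline_calls=200,      # discovery calls booked without the GPT (assumed)
    uplift_pct=0.64,         # "64% more discovery calls this quarter"
    value_per_call=1_500,    # assumed expected value of one discovery call
    quarterly_cost=30_000,   # assumed tooling + enablement spend
)
print(f"Incremental value: ${value:,.0f}, ROI: {roi:.0%}")
# prints: Incremental value: $192,000, ROI: 540%
```

The design point mirrors the conversation: the inputs are business outcomes (calls booked, value per call), not hours saved, so the result reads as revenue impact rather than a time-savings vanity number.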
Galen Low: I love that. And yeah, it's been coming up in conversations all over right now, right? It's almost like, to your point, to attract the right talent, we need to be thinking about it in this way: people do want to work at organizations that are progressive, that are leading the charge, that are at, you know, the razor's edge, the front line of modern business, because that bodes well.
Michael Domanic: Yeah, totally. And one of the recommendations that I make to anyone that is in an interview process, you know, with a company: one of the most important questions that you need to ask in that interview process is, tell me about your AI transformation. Tell me about how AI-enabled you are. And, you know, my advice to companies is, you better have a really good answer to that question if you expect to attract top talent.
Galen Low: I'd love to dive into that; it's a whole other episode in terms of, yes, the interview process, the job-seeking process, the, you know, the brand of an organization. That's a great tip, because I know for a fact there's a lot of organizations that think they have a solid approach, a solid strategy for AI, but might not actually be able to answer that question succinctly in an interview. That's very interesting. I wonder if we could zoom out a bit, because UserTesting is a platform that a lot of my project teams have used. It's always been something I view as an intelligent platform that is focused on, you know, creating impactful user experiences based on real-world insights, and I'm all about that. What I found interesting about you is that you actually have a deep history with this organization. You started in a customer success role, and then you moved your way into AI strategy before becoming, I think your formal title right now is, VP and Head of Generative AI Business Strategy, which is awesome. Congrats. But for all of those reasons, I'm really keen to pick your brain about AI, how to measure the impact it's having on an organization, and how to change organizational culture. But maybe even just to level set, can you gimme a few examples of how the team at UserTesting is using AI in their work today? We've talked about, you know, go-to-market and booking calls. Where does this rabbit hole go for you?
Michael Domanic: Yeah, I mean, AI plays a big role in almost everything that we do now, because we really have kind of moved past the adoption curve and are implementing it everywhere we think it's gonna add value, which is kind of most of the things that we're doing. There are a lot of granular use cases that I can give you, but from a high level, you know, again, we're a SaaS company, right? So there are things that we need to do really well as a SaaS company to be successful: we need to grow revenue, we need to retain our customers, and we need to increase our product velocity. I could spend all of my days focusing on all parts of the business, but that's where I tend to try to get really focused, because I know that's where implementations of AI are gonna have the biggest impact. So for example, in our product. I think most product teams are probably doing this by now, or they know that they should be. But you know, if you're an engineer, you should be using tools like Codex. You should be using tools like Copilot, Cursor, right? Like, tools that can help you augment your process, create greater efficiencies and greater rigor in the code that you're writing. Designers on our product team are using rapid prototyping tools. They're vibe coding, right? Like, they're creating prototypes much faster in order for us to, again, increase that product velocity. We wanna create better products and we wanna create those products faster. So the whole product team is involved in that process. If you move over to go-to-market, I talked about some examples of things that our sales team is doing. We create custom GPTs to improve parts of the sales process. Our SDRs have a series of GPTs that help them target prospects that are more relevant, have better conversations with those prospects, and get their interest in having a call with us, a discovery call, so we can identify, you know, what challenges that company has that we could potentially be a solution to.

Our marketing team is using it to create better campaigns, create those campaigns faster, write and edit copy, create images. So, yeah, I mean, it's being used across our entire business.
Galen Low: I think SaaS is such a good use case for this, where, to your point, velocity matters. It's intensely competitive in any SaaS space. Not only that, but the barriers to entry are now much lower for all the reasons you just explained, right? You don't have an entire, you know, hundred-person engineering team? Doesn't matter. You don't have 50 interface designers or product owners with two decades of experience? It's now closing that gap. But the competition can catch up much more quickly, and everything moves so ferociously fast in that world that it's actually justified. I know a lot of my project manager audience, you know, they're listening and there's always that pressure, right? Okay, yeah, deliver faster, better, cheaper. And they're like, that isn't as motivating. It's like, okay, so that an organization can line its pockets. But what I like about what you're saying is the health and the progressiveness of the organization, the success of the organization: it's lifting all boats, because this velocity matters in your industry.
Michael Domanic: And I mean, to your point about the barriers falling, we are well aware of the quickly falling barriers to entry to SaaS, right? So we know that we have to be better at everything that we're doing. Just to drill into this point a little bit: it doesn't matter if you're a SaaS company; for pretty much any company, there are three ways that you use AI in a business setting, right, in a professional setting. So we use AI first as an intern, and that's a valuable use case where you're going to use AI to help you summarize documents, help you do a little bit of research, right? Like, if you're a salesperson, you wanna understand the accounts that you're selling to. So if you had this super smart intern, you would be asking that person to help you do research on your prospects, do research on the customers that you're working with. The second way that we use AI in an organization is as an assistant, and that gets into the more generative stuff, right? So your marketing team is going to use AI to create copy. You're gonna use it to create emails or LinkedIn posts, right? Like, it's all the creation stuff that you can use AI for. And those first two use cases are very valuable, but nothing comes close to the value of the third use case, which is to use it as a thought partner. AI is very good at giving you strategic insights into the things that you're trying to do in your business. So I talk about this all the time in our organization, right? I want everyone to use it for those first two use cases, but what I really want you doing is having a really strategic conversation with the AI tool that you're using, to help enhance your strategy, help enhance the approach that you're taking to big challenges in the company.
Galen Low: I really like that. It gets me curious about your role. I think, you know, we've been talking about how to measure the impact of AI, and your role is literally to guide the transformation of the organization through AI. How involved do you get? Like, are you kind of sitting there overseeing all of the teams? Is it more of a sort of less frequent coaching? How are you planting these seeds and getting people to activate these mindsets around how to use AI, and also how to think critically about AI? Not just, you know, as a thought partner, not just as someone who's gonna do the work and then you copy and paste, but actually how to challenge what's coming back, and how to add to and enhance what's coming back from the tools with your own human judgment.
Michael Domanic: Yeah, I mean, there's a lot to unpack there, and there's so much that we're trying to do. When it comes to the expectations to transform an organization around AI, we know that we need to bring everyone tools, training, and enablement to do that, right? And we're a company of around 750, 800 people, so I can't be everywhere at once. Like, I can't be having these conversations with every team, even on a weekly basis, right? So I try to get focused on the things that matter most in the organization. We talked about those three buckets earlier. Outside of those three buckets, there are a lot of valuable things to do. I think a critical piece to this, and I would strongly encourage most organizations to do this as well: look across your organization and find the people that are already experimenting, the people who are leaning in. They're going to become your center of excellence. So we do have a center of excellence here at UserTesting. It's about two dozen cross-functional folks from around the organization: folks from HR, sales, marketing, product, engineering, right? Like, every part of the company. We're kind of deputizing them to be the AI experts within their teams. So they're signing up for the mission to say, I'm gonna stay deeply involved in this. I'm going to try to wrap my arms around all this constant change in AI, and I'm going to try to figure out what the most meaningful implementations of AI are for my role. I'm gonna talk to people who have the same role, I'm gonna talk to people who are in roles adjacent to my role, and we're gonna figure this thing out together. So that has actually been a really big part of our adoption journey, right? It's been a big part of our transformation: enabling that center of excellence to be experts in their teams and experts in their part of the business.
Galen Low: I love that approach. I love the enablement approach. You know, so much of what I hear is from people who feel like AI is being crammed down their throats with very little direction. I think that this sets the right balance. You can't be everywhere in your organization, and I'm assuming you don't have a large gaggle of underlings who are also AI transformation people that you can send out into the organization. But the community of practice model, I really like it, because it does take into account, like you said right there, that the experts in what they do should be educated on AI, but should be making the decisions about where there is the most value to be had, and then how to roll that out.
Michael Domanic: And like, I feel like I know enough to be dangerous around most of the business, but I'm not an expert in pretty much any function of the business. The people who are doing it every day, they are the experts. And as long as we're having a conversation around the capabilities of AI, they're the ones that are actually gonna decode what those capabilities mean for their roles and for other people in their roles, again, people adjacent to their roles.
Galen Low: The picture I have in my head, right, is people being deputized, raising their hands, being empowered. Have you encountered the opposite? Like, what kind of pushback or maybe false assumptions around AI have you come up against in your role as you facilitate the transformation of the business, and how have you been dealing with that?
Michael Domanic: Yeah, so there are conscientious objectors in every organization who maybe don't feel like they're on board with this. I take a pretty open-minded and respectful approach to those individuals. There is no hard mandate that you absolutely have to use AI in your organization, but we do have the expectation, and I do believe, that you are at a disadvantage at this point if you're gonna say, I'm not gonna experiment, I'm not gonna change the way that I work. And I try to help people recognize that, while also being respectful at the same time that there are some legitimate concerns around this, right? It's moving so fast. There are potential environmental impacts that, you know, maybe we're not signing up for here. So, you know, I do try to be respectful of conscientious objectors. I think even more than the people who are excited about AI and really wanna lean in, those conscientious objectors are the ones that I actually wanna bring to the table the most, because they're probably being more thoughtful than the rest of us, right, about what this all means. And I really value that perspective. I try to articulate that as much as possible to that group of individuals: again, I respect that you want to kind of sit this one out, but I also want you involved, right? Because you have a really important perspective and a voice in all of this.
Galen Low: That's a really good point, and, you know, I like the callout of conscientious objectors, right? These are people who are thinking about this hard, having a deep think about it. They're reevaluating things against their morals and their values and their, you know, wellbeing, their livelihood. Differentiated from, I would say, the kind of blatant objectors who haven't really engaged with it or scratched the surface on it, and who, in fairness, are probably afraid, have anxiety around it, haven't really been thinking about it. But I like that you're consciously bringing these conscientious objectors to the table, because from a project management view, they represent risk. They're thinking about risks. And that's actually, in some ways, what we need: someone to maybe pump the brakes, those humans in the loop who are going to challenge and question and not just take all these things at face value. Because on the other end of the spectrum, if we're taking everything at face value, at the speed at which it's coming at us, it's not necessarily going to lead to the best outcome for every organization.
Michael Domanic: To be clear, I think most organizations, when you look at that group of people who are maybe not using AI and not adopting, I think they fall into two groups. One group is, I'm just not interested enough to think about how I should change the way that I work. I don't wanna spend a ton of energy there, but I do wanna spend a lot of energy with the folks who are not using it because they're being really thoughtful about it. Right. That's actually very informative for us.
Galen Low: That's interesting. And also, you know, in some ways, that's user testing in some way, shape, or form, right? I thought maybe I'd go there. I'm curious, because I love the platform, love the tool: do you use your own product to test new AI ways of working?
Michael Domanic: We do, quite often actually. So, funny enough, my journey into UserTesting started seven years ago, and even before that, 10 years ago, I was building AI solutions and testing them inside of UserTesting. That was kind of hype cycle 1.0 for AI, right? And then that died down, and now we're in hype cycle 2.0, which is much, much bigger than 1.0. But anyway. So, yeah, my job is to drive transformation in the organization, so I'm constantly using our platform to capture feedback around the company on the things that we're doing. That might be our biannual AI proficiency survey, where we're asking questions around how people feel about AI, right? Where do they feel like they need more support in their own personal transformations? We're looking at how people are using AI in many different categories across the organization. We're asking about the tools that maybe we don't have access to today that people want to use. That's super informative for me, and it helps me augment the enablement that we're providing across the company. We've built a really strong culture around custom GPTs here at UserTesting. We've actually built nearly a thousand of them now, and we're a company of around 750 people. Now, look, a lot of that is just experimentation, valuable experimentation, but there are a lot of custom GPTs that have been built across UserTesting that are being used by large cross-sections of the company, if not the entire company. So those GPTs that we're building that get used by large audiences, we will test them inside of UserTesting with an internal test group, and we'll make sure that each one is intuitive, we'll make sure that it's actually solving the problem that it's intended to solve, and that it's not too complicated, right? Like, I don't want anyone to have to be some kind of advanced prompt engineer to be using those custom GPTs.

So we're using our platform to capture feedback on the AI solutions that we're bringing to our employees, to our workforce.
Galen Low: That's such a cool practice, and probably something that a lot more folks need to do, whether with UserTesting or another tool, right? I think there's so much fascination right now with piloting and experimenting, and I see less conversation about optimizing, getting feedback, and talking to people about how to improve something. I see a lot of folks just moving on to the next custom GPT, or moving on to the next, you know, vibe-coded app, or moving on to the next what-have-you, whereas actually, to mature an organization, some of those GPTs are great and also can be improved over time based on user feedback. I think that's a really good practice. Also, can I just say, I'm really fond of what I'm hearing about the culture there. You know, you live these human-centered values you mentioned earlier, right? Not mandated to use AI tools. I know some organizations that are literally measuring, you know, how often somebody logs in to Cursor, or, you know, putting metrics around AI usage on performance reviews, and, you know, I get it, but it also seems very forced. I actually like the picture you're painting of the fact that people are empowered to make decisions for themselves. You're encouraging them to explore it, and from an authentic place, right? You don't want them to fall behind. You think it's actually to their benefit to be learning these tools. I totally get it. And the other thing I liked was that your metrics, even for yourself, but also for the teams, are based on core business metrics. You know, it's not necessarily measuring AI specifically or independently, but how it's actually impacting the business. I guess where I wanna go from here is that you and I, when we were talking before, touched on the fact that none of this is static. Everything's always moving.

Transformation is not, you know, standing in one place. And, you know, I see things online where folks are like, there's no best practice right now because it's all moving, and there's not gonna be one silver bullet for all eternity to succeed at AI transformation. It goes through multiple stages or phases, you know, as an organization achieves its goals of reaching deeper AI maturity. So I thought I'd ask you: what sort of organizational mindsets and behaviors need to change along the way to keep pace with the AI transformation journey, even if they're things that the organization just figured out or just learned in the previous phase?
Michael Domanic: Yeah, I think everyone's gonna have to be comfortable with the fact that we're all going to be in transformation and learning mode for the foreseeable future. That could be the next five years, 10 years. I dunno how long it's gonna be, but it's gonna be a while. Like, dig in at this point is my advice. One way of looking at that, or thinking about that, is: just think about all of the progress that has been made in AI over the last three years. Now, imagine all of that progress was paused today, and we go forward into the future with all the capabilities that have been built over the last three years. It would take us at least 10 years to unpack and unravel how that's going to impact our organizations. It'd probably take longer than that, but at least 10 years is how long that transformation journey would be. Now, one of the things that we know is that the pace of proliferation of capabilities isn't going to stop. It's going to continue to advance, and we're gonna have to be really adaptive to all of these changes and all of these advancements for the foreseeable future. So I think my advice to every organization at this point is: you gotta start being really comfortable being very adaptive, very quickly, to all of this transformation and all of this change.
Galen Low: I love that. It's intimidating and, in some ways, terrifying. And you see it, right? You see these organizations where the AI transformation is like, yeah, we'll just leap over this river, we're gonna be good. But actually they're stepping into an ocean that, you know, is endless.
Michael Domanic: I think that's where this transformation differs from probably all transformations in the past. With those, it felt like, okay, here's this big thing that we need to transform around, that we need to adapt to, and it might take a couple years, maybe longer than that, but we know that there's something on the other side of it. And I think when it comes to, again, generative AI proliferation, and the fact that AGI and ASI may be on the horizon, the transformation that we've gone through over the last three years probably doesn't even scratch the surface of what we're gonna be dealing with over the next 10 years.
Galen Low:That is wild.
Michael Domanic:No, I know. Like it's kind of doom and gloom and a little bit scary, but it is the reality, right? Like we just have to get used to that reality.
Galen Low:And I think that's the clincher there: accepting the fact that what's ahead of us is expansive. It's not just this thing that we start and stop. It's actually, you know, a new epoch in the way we think about it. And if we can embrace that, then we'll be much better off, in some way, shape, or form, than assuming that this is temporary.
Michael Domanic:Yes.
Galen Low:Can I return quickly to the metrics and the measurements? Because you were saying that right now you're in a phase where it's just attached to core business. You know, we don't measure these things outside of core business. It just augments it. Will that change in your organization as you enter a future phase? Do you see it being measured differently as we start to understand the depth of change that we're facing and then start innovating our businesses outside of what we understand to be the core business model?
Michael Domanic:To answer your question, will it change? Yes. How will it change? That's a little bit murkier, right? One of the things that we look at right now is, you know, we're entering 2026. When I look at the last 18 months of our formal transformation, there were two things that we've been measuring. The first is adoption: adoption of AI across the organization. A little bit of a vanity metric, but it is critical. You're not gonna transform unless you're using the tools, so we need to track that. Right now we're at a point where I would say we're at full adoption of AI. Our weekly active and daily active users of AI tools are nearly a hundred percent. The other thing that we were tracking over the last 18 months was how is this impacting business outcomes? We're definitely gonna continue to do that, and we'll probably place less emphasis on the adoption metrics. But going forward, what I've been describing to our executive leadership over the last few months is that we're now moving into phase two, and phase two is really gonna be about autonomous agentic AI and bringing that into business processes around the organization. We're gonna start to measure that. We've been doing some experiments over the last few months, but once we really get into the formal part of rolling that out across the organization, then we're gonna need to get hyperfocused on how that's impacting processes and workflows around the company. I don't know quite what that looks like yet, but I have an idea. And I think this probably comes back to measuring business outcomes, but maybe doing it in a different way. We'll continue to refine that as we go.
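An editorial aside: Michael's two yardsticks, adoption and business outcomes, can be sketched in a few lines of code. Nothing below comes from UserTesting; the headcounts, hours, dollar values, and function names are all illustrative assumptions, meant only to show how a team might compute an adoption rate from weekly and daily active users, and how "time saved" on marketing campaigns (as in the intro) might be translated into value from additional campaigns.

```python
from dataclasses import dataclass

@dataclass
class AdoptionSnapshot:
    """One period of AI-tool usage for an org (all figures illustrative)."""
    total_employees: int
    weekly_active_users: int
    daily_active_users: int  # averaged over the week

def adoption_rates(s: AdoptionSnapshot) -> dict:
    """Adoption as a share of headcount: the 'vanity metric' that still
    matters early in a transformation."""
    return {
        "wau_pct": round(100 * s.weekly_active_users / s.total_employees, 1),
        "dau_pct": round(100 * s.daily_active_users / s.total_employees, 1),
    }

def campaign_value_unlocked(hours_saved_per_campaign: float,
                            hours_per_campaign: float,
                            campaigns_before: int,
                            value_per_campaign: float) -> float:
    """Translate time saved into extra campaigns shipped, then into value.

    If each campaign used to take hours_per_campaign and AI saves
    hours_saved_per_campaign of that, the freed hours can fund extra
    campaigns, each worth value_per_campaign (a hypothetical figure).
    """
    freed_hours = campaigns_before * hours_saved_per_campaign
    extra_campaigns = freed_hours / hours_per_campaign
    return extra_campaigns * value_per_campaign

# Example with made-up numbers: 392 of 400 employees active weekly,
# and AI saving 10 of 40 hours on each of 20 campaigns.
snap = AdoptionSnapshot(total_employees=400, weekly_active_users=392,
                        daily_active_users=360)
print(adoption_rates(snap))                       # {'wau_pct': 98.0, 'dau_pct': 90.0}
print(campaign_value_unlocked(10, 40, 20, 5000))  # 25000.0
```

The point of the second function is the one Michael makes in the intro: don't stop at "we saved time"; convert the saved time into an outcome (more campaigns, more revenue) before calling it value.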
Galen Low:I like that. And you know, you talked earlier about the squishy side of things: culture. I actually do like the way you're approaching usage of tools, right? Not mandated, but, like, are people using it? And now you're entering phase two and it's, are we using agents? There are a lot of question marks that businesses will need to figure out for themselves, for their unique business, around how they measure that. I wanted to switch gears and maybe round out by talking about the future, especially because you just published an article, which I enjoyed, by the way, about your predictions for AI in 2026. Among them were things like roles like your own, a head of AI, being kind of table stakes for any medium to large enterprise, or AI agents quietly running the show in the background like we've been talking about. You mentioned post-AGI strategy decks being circulated among the C-suite, which I think is really interesting, whether or not people fully understand what that means. And the other thing being that our feeds, you know, our social media and our information sources, will become much more, and I'll put this in quotes, "sloppier," as in there's more AI slop in them. Probably good AI stuff as well, but also, you know, other slop. But the one thing that you also hypothesized is that UX researchers and designers will become some of the most important people at companies in 2026. Can you walk me through what you mean by that?
Michael Domanic:So we talked about this earlier, how the barriers to entry in creating product are falling really fast. It's never been easier to create wireframes and prototypes, to do that at scale, even to create a new business, right? So companies are now using these tools to increase their product velocity. But one of the things that's still incredibly difficult is creating products that people actually want to use. That is, I think, where designers and UX researchers come in. They're going to be the experts in the room who help the business understand, out of all of the vibe coding tools we're using that are creating all these new products for us, which of these are actually going to resonate with our customers? UX designers and UX researchers have always played an incredibly important role, especially for SaaS companies like ours, and even if you're creating a consumer product, it's a super important role. But there's going to be a demand for companies to create these products a lot faster, to again increase that product velocity, and it's no easier today to create products that resonate with your customers than it was three years ago. I think people in the UX discipline are going to be those really in-demand experts who help their companies do that. Because if you roll out 10x more features next year than you did last year, and those features are not resonating with your customers, you did not move the needle at all. You did not actually increase your product velocity.
Galen Low:There's so much in there. I love all of that. Long-time listeners know that I've got a really deep soft spot in my heart for good user experience design, coming from a sort of human-centered design background, but...
Michael Domanic:We have that same soft spot.
Galen Low:Yeah, right. And I think what you mentioned at the start there is really important. Some folks might be like, wait a minute, you just said that wireframing's getting easier, coding's getting easier, developing experiences is getting easier. Why would a UX person be valuable? Isn't that their whole job? And therein lies the problem, which is that actually, that's not what the job is. The job is to resonate with users. And then I'm thinking about your transformation even internally, not even the external product. You have users interfacing with digital technology that you have built to achieve an AI transformation of some sort. That's UX, and it needs that same attention. They're users now, not just, you know, employees using an internal tool. And I think everyone listening, and yourself, we've all had experience with that internal tool that no one pays attention to because it's internal. It's just like, oh yeah, we suffer through this terrible, cobbled-together pseudo-ERP so that we can run our business, but no one would think about improving it. And yet there's that adoption challenge, right? To get people starting to use these tools, and for them to incrementally get better. Not just, cool, we did the thing, we have our custom GPTs, we're done, but actually creating an experience that people will use and find valuable. That uplifts the culture so the business is healthier, so you can attract the talent who's gonna do great work, and still achieve that velocity and that level of innovation that needs to happen, especially in SaaS, but in any business, so that we can continue on, I guess, into that expanse.
Michael Domanic:Totally agree.
Galen Low:I'm just riffing off what you said. I really enjoyed that article, and I'm gonna include a link to it in the show notes. As tempted as I am to dive into each one of those predictions, I think there's so much to unpack there. Maybe that's a conversation for another time.
Michael Domanic:You know, when I think of what I wrote in that article, what might be really resonant with your audience is obviously the UX thing, but also: get ready for the slop apocalypse, because it's coming. By the end of 2026, I'm predicting a massive decline in usage of social media, 'cause I think people are just gonna be fed up with the slop and not really know what they're looking at. And I actually think that, you know, UX plays a role in there as well, right? If you're a social media company, you need to be thinking really strategically about what this means for your business and what kind of experience you wanna deliver to the people using your products. So we're all trying to avoid that slop apocalypse, but it's coming.
Galen Low:You know, we wear out our channels so quickly these days, right? Like my inbox, forget about it. Email is a channel that we barely use anymore because most of what I get is spam. It'll be interesting to see what steps up to replace it, or how we adapt.
Michael Domanic:Totally.
Galen Low:Michael, this has been awesome. I really appreciate it. Just for fun, do you have a question that you wanna ask me?
Michael Domanic:What are your predictions for AI in 2026?
Galen Low:You know, at a very high level, and maybe from an optimistic angle, I think that some of the myths, especially the fearmongering, will start to fall away. It'll start to become irrelevant, just through the forcing function of people actually using it and realizing what it can be used for, what it's good at doing, and what it's not good at doing. A lot of, call it hype cycle 3.0, will be people just being like, okay, I can't make it this boogeyman anymore. It's a thing, and it's not necessarily good or bad as a technology. But some of the less factual hot takes about AI and transformation and what organizations are doing and how that's impacting business, I hope those will start to dissipate from the conversation. Yes, maybe more slop will enter, but some of that sort of human-led fearmongering that comes from an uninformed place, I think, will start to dissipate.
Michael Domanic:Interesting. So we're all gonna stop reading people like Gary Marcus who are AI doomers.
Galen Low:I mean, I guess there will probably still be doomers. I just hope fewer. Michael, I really appreciate it. For folks who wanna learn more about you and UserTesting, where can they learn more about you?
Michael Domanic:You can follow me on LinkedIn. I post a lot of stuff about AI on LinkedIn if you're interested in following along. Yeah. Otherwise you know, if you're in a company that's interested in getting feedback on anything that you're doing, come see us at UserTesting. It could be your AI transformation program, could be the products that you're creating. It could be the slop that you're creating, like whatever. If you wanna get feedback, let us know.
Galen Low:I love those three use cases. Awesome. Michael, thank you again. I really appreciate it.
Michael Domanic:All right, thanks, Galen. Great to chat with you.
Galen Low:That's it for today's episode of The Digital Project Manager podcast. If you enjoyed this conversation, make sure to subscribe wherever you're listening. And if you want even more tactical insights like case studies and playbooks, head on over to thedigitalprojectmanager.com. Until next time, thanks for listening.