
The Product Manager
Successful products don’t happen in a vacuum. Hosted by Hannah Clark, Editor of The Product Manager, this show takes a 360º view of product through the perspectives of those in the inner circle, outer perimeter, and fringes of the product management process. If you manage, design, develop, or market products, expect candid and actionable insights that can guide you through every stage of the product life cycle.
The Product Manager
How to Use UX Research to Delight Your Users (with Laura Klein, Steve Portigal, and Thomas Stokes)
Not to get all "check your privileges" on you, but if your organization has an in-house research team, or works with a research firm, or even has just one UXR on staff, you gotta count yourself lucky. According to the 2024 State of Research Report by User Interviews, for every one dedicated researcher, there are five PWDRs—that stands for 'people who do research'. So by my math, that means that there's a 1 in 6 chance that one of those PWDRs is you.
So if you do identify as a PWDR, you're likely in a situation where you're doing the absolute best job you can doing UXR off the side of your desk, while painfully aware that you don't know what you don't know about doing it better.
And since 1 in 6 of us are in this exact position, we held a phenomenal panel event with three renowned user research experts who really get it and want to help. In this recording, you'll learn what good, decent, and great user research looks like, the traits that distinguish good, decent, and great UX design, and useful strategies to connect UX insights to your product's unique selling proposition.
Resources from this episode:
- Subscribe to The Product Manager newsletter
- Connect with Laura, Steve, and Thomas on LinkedIn
- Check out Users Know, Portigal Consulting, and Drill Bit Labs
Hannah Clark:Let's jump in. Today, we have three really fantastic guests I'm very excited to introduce to you. We've got Laura Klein, who's the author of Build Better Products. Laura is a seasoned UX expert, known for her ability to bridge the gap between research and actionable design. And I'm going to start everybody off with a little bit of a Jeopardy question. So Laura, I'll throw the first one to you. You've got some pretty helpful insights for anyone deciding whether they should be living at the top of a mountain or at the bottom. What would you say would be the best place to live?
Laura Klein:Oh, I mean, this is not a tough one for me because I live at the top of a very short mountain, and I really love it. But I think the most important thing is, whether you live at the top or the bottom of the mountain, you should always have a funicular. A funicular is the correct way to get to or from your home. That's the most important thing, top or the bottom, yeah.
Hannah Clark:I live in an apartment building, and I think I should also have a funicular. That sounds like...
Laura Klein:...much more zip line or fireman's pole.
Hannah Clark:I think I would much prefer that to walking up the staircase.
Laura Klein:You'd be the most popular person in the apartment.
Hannah Clark:I think so, too. We also have Steve Portigal joining us today. He's the author of Interviewing Users: How to Uncover Compelling Insights, which is a must-read; if you're not familiar with the work, you should check it out. Steve is a master of user research, renowned for his work in uncovering insights that drive meaningful design decisions, as the book title would suggest. So Steve, your Jeopardy question for today: you just took a week off to camp and Airbnb around Oregon. Which place should we check out next time we're in the area?
Steve Portigal:I've got two, and they're both connected because they're just, I don't know, whatever. The Barnacle Bistro in Gold Beach, Oregon has got just the best sign. I encourage everyone to Google it. It's a very strange character playing banjo. You can get food there, but they also have just the craziest sign. And along those lines, in Medford, Oregon, is Blackbird, which is this hardware store with a parking lot with a giant, kind of sculptural, black bird. That is a good roadside-attraction kind of place. We went to Blackbird after having been there maybe 20 years ago, to see how the bird was doing, and the bird is doing well.
Hannah Clark:We also have Thomas Stokes joining us. He's the principal of UX research and digital strategy at Drill Bit Labs. Thomas is a leader in product strategy and specializes in creating compelling user experiences that are tightly aligned with business objectives, which is, of course, highly relevant to what we're talking about today. Thomas, you just took a week off to hike part of the Appalachian Trail. Did you grow out the grizzled Appalachian Trail beard, like you see in the before-and-after pics of others who have hiked it?
Thomas Stokes:I gave it my best shot, but I really grow a better mustache than a beard. It's pretty full here, not so full here. So I shaved really promptly on my return, but it's gonna be back. And thanks for having me here, Hannah.
Hannah Clark:Yeah, we're looking forward to it. Just a few other words about The Product Manager membership before we get started with the discussion. If you are not yet a member of the community, first of all, welcome, and thank you for attending. This is one of the monthly sessions we conduct that we invite members and non-members to attend. But if you'd like to learn a little bit more about membership and some of the exclusive events that we host, please learn more at theproductmanager.com/membership. We would love to have you on board; we have a lot of fun. And with that, let's get into the discussion. This discussion is going to take place in three parts. First, we'll have a discussion on UX research: what separates the good, decent, and great UX research from the bad. Then we'll look at what separates good, decent, and great UX, or user experience, from the bad. And we'll also look at how we can connect the dots between those two significant areas to create really meaningful experiences for our users. So to kick us off on section one: what separates the good, decent, and great UX research from the bad? Steve, what would you say to that?
Steve Portigal:Right, research is a big set of activities. So if you say good research or bad research, I think we often start with data collection or interviewing or testing, whatever your method is. So good versus bad for interview quality: interviewers that don't have a lot of experience or training ask good questions but don't ask follow-ups. That's the first part. But then, even stepping back from that, what to research is an area to think about what is good and bad. I think sometimes there's a naive application of research to the challenges that businesses face, where all you do is test. You make decisions about what to make, and then you test. Or research just means, hey, show people the prototypes that you're thinking of shipping and see if they give you a thumbs up or thumbs down. And you hear those phrases, oh, we don't want our customers to tell us what to do, because of something that Steve Jobs maybe said or didn't say or whatever. So those sort of naive understandings of how you would use research to make what decisions, I think, limit the quality of research. I think sometimes we do research and it gets treated like stenography. Again, that's the customer telling us what to do. The idea of requirements gathering, like you just ask people and then you tabulate what the requests are, versus thinking of research as something that feeds into this very active, very creative synthesis process, where you make things called insights. Those are not quotes from people directly. And if you don't know that is possible, if you don't know that research can do that because you've never been exposed to it, then your application of research is limited. Another piece, I guess, is that research is a way of driving change in the culture. And so you hear these things like, oh, well, is bad research better than no research? And this is a wonderfully messy kind of hot topic.
Bad research sends you down the path of making bad business decisions. So that's not good. But getting companies out there, people in companies, teams, stakeholders, whoever, out there seeing real people and learning from them, even if it's not good by whatever measure of good you have, that's good for the organizational culture: hey, we don't know everything, our customers are going to tell us things that we don't know. And then there's this piece around what makes research good or not good that I actually disagree with, and probably the other folks can add way more nuance than I can. There's this kind of self-own that I see researchers doing, which says that my research isn't good if someone else doesn't adopt it and integrate it. I think, yes, that's why we're doing research. But when you put that in front of you, I'm not successful in doing research if somebody else, who I can't control ultimately, doesn't make some decision, doesn't build something, doesn't implement something. Research has a lot of other, softer outcomes than just another person taking an action. So I like to keep it open. I think the question of how we assess if research is good is not so blunt-force as just, did the feature we recommended ship. We have to look for other kinds of signals before we can even say whether a piece of research was good. Anyway, rant over for now, but that's my take on that.
Hannah Clark:Does anyone have anything they want to add to that?
Laura Klein:I want to address that, because I have made the comment before that decent research gets used. And this is, I'm sorry, Steve, but I think that I actually agree with you. Maybe it's the whole good, bad, decent thing. You can do great research and it doesn't get used, and, I totally agree, that's not your fault. We should never blame the human when a system fails, but I do think it's a systemic failure if you're paying people to do all of this great research and learn all about your users and bring all this great insight, and you're not using it. So maybe my framing is, that's not bad research, that's a bad company, right? That's bad product decision-making, to have all of this insight and just be like, whatever, my gut says we should do this other thing. So I never, ever want to blame the researcher in that case. And I have found that researchers tend to blame themselves a lot for stuff like that, and I find it so sad. The question that I always get asked anytime I talk about research is, how do I get people to actually listen to me and do the things that I recommend? And obviously, there are a lot of great answers to that. There's storytelling, there's how to share the information, there's stakeholder management, there's all this stuff. And also, sometimes you are dealing with people who aren't going to listen to you. And it's very important that you not internalize that and make that your fault. You can do everything right and have that not turn out well because of systemic reasons. So I just wanted to say, I do think it's important that research get used. And again, I also agree, it may not get used in the "you built this feature" way. It may get used in the way of, now we all generally know more about the people and the context and the flow and what they're doing and why they're doing it, and maybe that helps us make a better decision next time. So sometimes it's a little slow.
Hannah Clark:That's a really good thing to point out as well: sometimes the research, even if it doesn't get used in the moment, is still valuable. There's still a lot of value in conducting it, and conducting it properly, but it doesn't always go well. Thomas, I'm curious to hear your thoughts on some of the things that can go wrong throughout the research process.
Thomas Stokes:Yeah, and I want to add one thing to what Laura just said real quick before I talk about that. I think there's a useful frame of mind here, and I'm not often into sports analogies, but this one is a sports analogy. There's this coach, I think he coached the 49ers or some football team, but he had this thing he said to his players: the score takes care of itself. And the reason he did that was so that they stopped paying attention to the end score. So in this case, I guess the score would be, okay, they shipped something because of my research. He specifically wanted people to focus on the things they could control, like their process, their input, maybe the way they execute a play. Again, I'm talking outside of my expertise on the football aspect, but with the research aspect, what that could look like is: okay, focus on my process. What can I do as maybe the person who's helping to actually do and then deliver research findings? And that's all the stuff you were talking about, right, Laura? It's, okay, what's in my control? Clearly articulating my findings, doing it in a way that's convincing, putting it in a format that we know people will access and listen to, so that it has the maximum potential of actually converting and making that effect. So maybe that's the nuance of the conversation: there are things within our control. We can't always pay attention to the end result, but in theory, organizational misgivings set aside, there are things we can do to make it so that people will act on a research finding, right? And Hannah, you were asking about what can go wrong. I've got this way of thinking about ways research can go wrong, where we talk about what we do before, during, and after conducting a study. All of that was about after.
Once we have findings, how do we carry them forward? But we can also think about what we do before a study. That's planning: are we doing research on the right things? Are we selecting methods that are appropriate? And do we go in with unbiased objectives, so that we don't have a finger on the scale, so to speak, unduly influencing results so that the findings never really mean anything because we went in with an assumption we're trying to prove? And we can also talk about, and I think, Steve, you mentioned this in your initial response, once we start to do the study, are we doing it well? Are we experienced enough, and do we have enough knowledge about the method, so that, like you said, we're using good interviewing techniques, we ask follow-up questions, we ask questions in a way that's not leading, that sort of stuff? So we can look at before, during, and after conducting research to frame up where things might go wrong.
Laura Klein:Can I just make one mention on the methods thing? Because y'all both brought up methods, and I think that is so unbelievably important. I'm a huge fan of mixed methods, and specifically bringing quantitative data and qualitative data together, but also just knowing all of the amazing opportunities we have to do different kinds of experimentation and testing and research and ethnography. These are all really useful for very different things. And if you are trying to A/B test your way through when you should be doing ethnography, or vice versa, you're probably not getting the results that you want. That's the time you really need to bring in an expert who does know the sort of, oh, you should actually be doing, maybe this should be a diary study, not yet another prototype test.
Steve Portigal:I want to recommend Christian Rohrer's article. I think it's called "When to Use Which User-Experience Research Methods," and I think it's a Nielsen Norman Group article that's been revised steadily over the last 10 years or so. There's a great matrix in there that shows different methods and what they produce, and the whole article, I think in one revision, talks about what question you have, or where you are in your product maturity, and what types of methods are used to answer what questions. Because to your point, Laura, that's expertise. Not everyone's going to read Christian's article and be like, yep, I know how to use that, but at least it says this is not hit or miss. There is a process here, and Christian, in my experience, has done the best job at documenting that in a way where, oh, we can actually choose.
Hannah Clark:Awesome. While we're shouting out additional resources on some of Laura's thoughts, Laura has actually joined us on The Product Manager podcast on basically that exact topic. I believe the episode is called "How to Have Fun With User Research." If you're interested in hearing some of Laura's more elaborated thoughts on that topic, please check it out. It's a great episode. And that brings us to section two, which is what separates the good, decent, and great user experience from the not-so-good. So, Laura, I actually have you set to take this one on head-on, if you like.
Laura Klein:This one's for a user experience design, right?
Hannah Clark:Yes.
Laura Klein:So, it's so funny, because when you first set the name of the panel, the "how to delight users" thing, I'm like, you know what I find delightful? I find it delightful when things work. It's so funny, because we always talk about, oh, we need to delight users, and people think that means it needs to be fun and enjoyable. And I'm like, yeah, it needs to do the thing that I want it to do and get the hell out of my way. Which brings me to the point that good user experience takes into account what the user is trying to get out of the product. Maybe it's a productivity app that I have to use every single day for my job. Or, I play a lot of video games, right? If it's a game, it should have a very different kind of user experience than a CRM. I get a little bit annoyed when people try to force all of the same things into the user experience. It should take into account, again, my context, the flow of what I am doing. It should not interrupt me, it should not force me to work a different way if possible, unless that way is a way you can teach me such that I'm like, oh no, this is really much better. Good design is about behavior change, right? You are changing my behavior, but you should be changing it in such a way that I get what I want out of the product, and I should get to be or do whatever I want to do, within reason. But mostly, I don't know that you need to delight me. You just need to make it work. You need to figure out what I'm trying to do, how I'm trying to do it, and help me to do it the correct way. So I think bad user experience often tries to be too clever. Sometimes it tries to be too pretty. Sometimes it tries to be too minimal, the misuse of the word lean. Sometimes it does a tenth of what I want it to do.
And then I'm like, well, that doesn't really help me. So the funny thing is, a lot of us are willing to put up with suboptimal user experiences for products that actually help us do things that we want to do. That doesn't mean we should have to. It very much depends on how important the thing is that we're trying to do with the product. It's an "it depends" answer. There you go.
Hannah Clark:I think this is a very common theme with many things, but in this case, yeah, the context is really everything. Thomas, I know that you had some frameworks in mind that were applicable, despite the fact that there's obviously a lot of context that comes into play with each individual situation.
Thomas Stokes:Yeah, I think one that we can plug into this conversation really well is Aaron Walter's hierarchy of user needs, for anyone who hasn't come across that before. It's very similar to Maslow's hierarchy of human needs, if you've come across that: similar in idea, but recontextualized to user experience. The idea is that there are some very foundational needs towards the bottom of the hierarchy and higher-level needs as you advance, and there are four levels going up. First, the design is functional: it does what it's intended to do. The next step up would be reliable: not only does it do the thing, it does it consistently, at all times. Then, from there, not only is it functional and reliable, the third level would be that it's usable: it's user-friendly. And at the highest level, Walter argues that it's pleasurable. So in theory, if we want to take that at face value, we could say good user experiences achieve those higher-level goals, while less good ones fail to achieve the higher-level ones, or maybe don't even achieve the lower-level ones. And Laura, one of the things you said really stands out to me: if a product tries to be too clever, too beautiful, without actually addressing the foundation of usability below that, or even reliability or functionality, then what's the point of that pleasurable layer if it's not actually meeting all the other needs that support it? I think we can do it all if we're conscious of it, but it's a matter of recognizing that there are foundations, right? We've got to build, from the ground up, an experience that achieves all four of those elements.
Laura Klein:I agree that we should strive to achieve all four. I actually struggle with it being so linear, because, and this is a weird thing that most people don't talk about, I think for different types of products, those may be in a different order. If you are doing something that appeals to, like, shopping or clothes or makeup or beauty or whatever, actually making it beautiful may be more important to your users than having it be super reliable. I play, like I said, lots of video games, and I have a few that are what I would like to call not un-buggy. They're not super reliable, yet they're still fun enough that I play them and I enjoy them. And would I prefer that they work all the time? Yeah, absolutely, 100%. But they're fun enough that I'm willing to forgive that. That stack, I think, applies great to something like, again, a productivity app, or email, or something that you have to use. But for other things, you have to figure out where you live in that product, and how important your product is to the person, and what they are trying to get from it. But I agree that everything should hit all four of those, ideally.
Hannah Clark:Yeah, this is a really interesting insight, the idea that the hierarchy of needs itself can be contextual. That's an interesting takeaway. Just to ensure that we're hitting all the marks before we start to get into Q&A, I'd like to move into section three, which is: how do we connect the dots to elevate an experience, and offer a unique proposition for our product that combines all the elements of our research to create these great experiences? And since we haven't heard from you in a few moments here, Steve, I'll get you to start off with this one.
Steve Portigal:Yeah, and I think it builds nicely from what we were talking about, right? The unique aspect, I think, is key here. Laura and Thomas were talking about what it should hit, but also, what does that mean for you and your product? And because we're speaking broadly to a broad audience about broad topics, we're using words like pleasurable and delight and efficient and so on, and Laura's kind of hinting that it changes as you move from category to category. When we say beautiful, we think of something visual, but there's a beauty when two pieces snick together properly and you just go, ah, that feels good. I think there's beauty and delight there; we have to be diverse in how we think about what that looks like. I feel like a hundred people have written "it depends" in the comments, in the very enthusiastic, cheerleading way, like we're at a rally where we could just say it and everyone would fill in the phrase: it depends. And Laura, you're getting at this as well, right? Who are you and what do you stand for? I was thinking, as you were talking, Thomas, about Slack. Slack has a way, and this is more about content than experience, but if you update the Slack app, they have a very specific tone in their content. That's consistent for them, and nobody else writes like that. It's irreverent, but it says a little bit about who they are, who they think you are, and what they think their relationship is with you. I don't personally think Slack necessarily translates that into their user experience, but there is something about being a brand, having a personality, knowing who you are in a consistent way, and knowing who your users are. And it all ties back to research. You have to do a lot of work on yourself, to use therapy language here, and do the research to understand everybody else, and then do the design work to connect all of those in a way that's consistent.
And we're not talking here about, I mean, you talked about unique selling proposition. We're not even really getting to doing the thing that you're there to do, that people care about accomplishing, but just, what outfit do you wear while doing it? That's really what I'm talking about. So: knowing yourself, knowing your customers, integrating those consistently. That's the connecting-the-dots piece. I guess that's part of your question. Yeah, I'm running out of steam here. Somebody else jump in.
Hannah Clark:Yeah, Thomas, did you want to add any insights?
Thomas Stokes:Yeah, I'll throw one in. Steve, you said a really important line in there: do the research. And one element of that is that research should happen across the product development lifecycle, at all different stages, right? If we really want to have a USP, it's not enough to just usability test flows before things go live. That misses the mark. If we're really going to draw a circle that says, this is what users want, or this is what people need, and this is what we're good at, and we're going to find out what's in the middle of that, we have to be able to draw that circle: this is what people want and need. And to do that, we need very early discovery, foundational research. Also, we understand that what people want, or, if we want to keep saying it, what delights people, is a moving target that changes over time. It's not always going to be the same; it's not static. So we have to actually have strategies to measure things after they go live. I'm a big believer that a good UX measurement plan helps you keep your finger on the pulse of how people are receiving what you're putting out there. Having actual post-live measurement will help you understand how things are shifting. So at all stages, whether you're not even sure what to build, you're building the thing, or it's already out there, you have to be doing research on all of it to actually have that USP that people are after.
Hannah Clark:I think you'd also had some comments about how that's one part of it, but then there's the element of what's feasible; there are other matters to take into account.
Thomas Stokes:Yeah, and I think this comes full circle to something Steve said earlier. Steve, you often say research isn't stenography: you don't take it at face value, and it doesn't tell you what to do directly. To further that, right, we're talking about using research to essentially prove out user desirability and usability. But I'm talking to a bunch of product managers, so I'm sure everyone's heard this before, but it's worth just giving the caveat that there are going to be two other elements, in addition to that user desirability, that we've got to balance. There's, obviously, feasibility: we've got to be able to build the thing. It'd be great if user research revealed that we could build something where you click one button and your house is clean and your laundry is done and you've got groceries, but maybe that's not one button click, right? So it's got to be feasible, and it's also got to be viable for the business. You work within an organizational system that is going to select for things that advance its mission. So the business viability, the technical feasibility, and the user desirability all have to come together, and that involves a bit of decision-making, right?
Hannah Clark:Laura, did you have anything you wanted to add to that?
Laura Klein:Yeah, I have kind of a weird side take on this, which is that I think it's much easier to deliver great user experiences that align with your business needs if everybody's incentives are aligned, and that has to do with your business model. If you have what I would consider to be a fundamentally more ethical business model, one that says we are going to deliver a product so good that people are excited to give us money for it, then the better you make that product, the more people are going to want to give you money for it. Everybody's incentives are aligned, and that's fantastic. Obviously that's simplistic, and it's not the complete definition of ethical; you can still hurt a lot of people doing that. But that's the baseline: if you have a business model where your users' incentives are aligned with your business incentives, you can make a much better argument for, we want to make this better for users, because that's going to make it better for the business. When you maybe don't have that, or you have the customer-and-user split, which happens a lot in B2B, where the person who's buying isn't necessarily the person using it, then you have to be a little bit more creative about connecting those dots: if we make it better for the user, it actually makes it better for, say, the organization, which is a good reason for you to buy it. So you have to make that sort of leap. And then you also have the products where those things are just very disconnected, and there isn't really a great argument that making this a better product makes us more money, because it might not. I don't think those are great places to work, personally, and definitely not great places to be a researcher or designer. So keep that in mind when you're looking at your next job.
Hannah Clark: We did get a question that seemed to be building on what Thomas mentioned, which was: where would necessity come into play? I'm trying to see if the user who posed the question can maybe elaborate a little bit more on that. But does it provoke any kind of response from any of the panelists?
Laura Klein: Necessity for whom, or for what? Is it the case where you're forced to use the product? Then again, I mean, it would still be great if it were easy to use.
Thomas Stokes: Yeah, I suppose there could be that user-buyer disconnect in a lot of spaces, right? Where they build stuff for the person who's actually in the sales cycle, the one who decides on purchasing the experience for the team, all actual end users be damned. So I suppose that would be a negative implication of necessity, if you're required to use it in a B2B space. But even then, I think we could all argue that it's to the benefit of the folks building those experiences that we would, in theory, win out if we support the end user, and we can show through some meaningful case studies or whatever that by supporting the end user, we actually achieve better outcomes, which then influences the buyer's decision, right? So maybe it's a roundabout type thing. Or I'm not sure.
Hannah Clark: All right, so let's move on to the Q&A. Our first question from a member is: how do I get people to answer surveys? I send Slack DMs and email surveys, but without a budget to properly pay and incentivize customers, it's tough to get their time. People are stretched really thin these days, so it's understandable, but it makes my work very tough. I think Thomas had some preemptive thoughts about how we can manage this.
Thomas Stokes: Yeah, I think there was a key line in there about not having budget to incentivize people to participate. So I've got two main points, one specifically about budget. If we're not actually incentivizing people for their time, we don't have the appeal of a monetary incentive, so we've got to appeal to something else. One thing I've found is that you might be lucky enough to be in a situation where your customers actually really care about impacting the direction of the product, and so you might appeal to that. You might really emphasize, in whatever recruitment channels you're using, that hey, this is really something we are going to use to drive forward product improvements. You've got to make good on that promise, but use it to appeal to potential participants. The other thing is, if we're not rewarding people for their time with money, be relentlessly, I guess, scrupulous: really look over the survey you're building and prioritize what you actually need. It might not be the perfect survey you'd build given unlimited budget and time, but narrow things down so you at least get a good response rate. If the people who click in thinking, yeah, I'm going to help the product direction, see a million questions, they're probably still not going to answer. So go with both of those: try to appeal to something other than a monetary incentive, and relentlessly prioritize your survey down to what's essential for the decisions you might make off of it.
Hannah Clark: Great answer. Our next question from our members is: my challenge is in getting the truth from users about their reasoning and intent for using the software the way they do. We hear tons of stories about how they think things, quote unquote, should work or look, but it's hard to get them to open up about what they're actually doing when using the product. Steve, did you want to take this one on?
Steve Portigal: Yeah, I agree that interviews should cover what people are actually doing. I think the fun of this format is trying to infer some more context from the question. I really wonder: are these interviews? I'm thinking about how they're being set up and who's being asked to participate. The dynamic we have with our research participants matters, and I think the same thing is true for interviews and surveys. What's the expectation? Who's being asked to participate, when, and how? Are these people calling in with tech support questions and then being escalated to the interview? That's a different context versus, hey, we want to talk to you and learn. How is the interviewer introducing the subject of interest, as in, we want to talk to you about x and y? And how is the interviewer asking those questions? I usually ask questions about what people are doing (what are you doing? how does that work?) before I get to anything about what they'd want to see different in the future. It's a great question for me because it makes me wonder, well, why isn't that happening to begin with? So again, I think it's expectation setting, what questions you're asking, what follow-ups you're asking. And yes, in every interview, no matter how well you set it up, somebody is going to come to it with a different sense of what the purpose is. Sometimes you've just got to let them share what they want to share, rather than squelch them into your model of the conversation. Let them share what they want to share, and then say: this is great, we've got some other things we'd like to know. Can we talk about what your workflow looks like today? I want to go right back to the beginning. How do you configure this? I think you can keep asking questions to build to that kind of outcome. Learning how they're working is not asking, how are you working? It's many smaller questions that get at that information.
So I think it goes back to what's good and so on: it's technique, it's expectation management, it's all that kind of stuff. That's just me going on what the question makes me think of; my kind of riffing on where to think about making improvements.
Hannah Clark: So the next member question is about the administrative pains of research. Ongoing customer meetings to discuss their needs and desires and hopes and dreams are a painful administrative task, full of cancellations and rescheduling. Is there a way to ensure user turnout for these meetings? Does anyone want to take a crack at that one?
Laura Klein: I'm not really a research ops expert; I don't know if Steve or Thomas is either. But this feels very much like, can you make people show up to things? No. I mean, there are all sorts of things you can do to make it better. You can send reminders, make sure you're offering an incentive, all that kind of stuff. But people are people, and they're going to people. You're going to get ghosted. It's going to happen. I think that having a good research ops organization, if you can, or a person who is responsible and can guide that, at least takes away some of the hassle, if it's somebody's job to be able to say, oh, we got you an extra person. But you can't make people show up to things they don't want to show up to, especially these days.
Thomas Stokes: I'll give one too. There's a useful stat I think we can start off with: you can expect 10 to 20 percent no-shows and cancellations. That's generally true. So you can look at your own practices: if you're trending below that 10 percent, great work; if you're over that 20 percent, maybe there's a lot you can do to bring it down. Like Laura said, if you establish contact and do things ops-wise, like actually automating reminders and that sort of stuff, and make sure the incentives are around the right level to encourage people to show up, that's good. But I saw this one trick, and I'm wondering how well it works for most organizations. I'm not sure if anyone else has seen this one, but it's exploiting consistency bias. Essentially, in a screener or recruitment survey, you get people to agree or disagree with a scale question like, I'm the type of person who typically keeps my appointments, or, I show up on time, something like that. I've seen people claim that if you put those in and people agree, they're more likely to show up to your actual sessions. It sounds fun and interesting. I've yet to test it out myself, but who knows, it's worth a shot.
Laura Klein: That feels like one of those things that ends up getting written up in a very popular airport book, and then later on it turns out it was exactly eight people, all of whom were college students at Duke, and it's not at all applicable to anybody else in the world, or whatever it is. But it certainly sounds fun. It seems like an easy thing to try.
Hannah Clark: Yeah, I would love to run that experiment and get back...
Laura Klein: ...get back to us. I don't know.
Hannah Clark: Yeah, well, I think it's very interesting to play upon people's self-awareness, their perception of who they are as a person, as a means to incentivize them to show up for an appointment. Maybe I should try that on myself. Okay, we'll move right along here. This is an interesting question; it's a little bit more about bias management. How do you avoid the biased voice of the angry customer who's reaching out to vent and/or turn the call into a support case? I guess this is about doing research based on feedback that folks have volunteered. Does anyone want to take this one on?
Steve Portigal: I might start here. I think it's just like the other question, about what we're not hearing from people, and I have a lot of questions about context here. You're in control of your sample, so to me this is a sampling question and a method question. Thomas made a reference to screening: how are you screening people in or out of participating in research? By asking a question about someone's disposition towards the brand, or the quality of experiences they've had, you could filter people in or out to get a balanced sample, right? We do research on individuals, but we do research on a sample. So somebody has a perspective. Is an angry customer a biased voice, or is an angry customer someone with a lived experience, in today's parlance? And maybe the second piece of the question is: should we include angry people who don't like our product? Yeah, but maybe not exclusively, unless that suits the objectives of what we want to research. As for people wanting to turn the call into a support case, I think that goes back to the thing I was saying before: expectation management. Who's asking for the call? What's the purpose of the call? Let's reiterate that at the beginning of the call. And, I think this is really important, let's live up to that behavior. I see researchers telling participants, okay, first of all, there are no wrong answers, and then going on to be very excited or not about the responses, which basically tells people, yes, there are right and wrong answers. So how do you set and live up to that expectation in your actions and your interactions with participants? And if someone has something they have to tell you, well, a wise woman once said people are people and they're going to people.
You can't stop someone, and nor should you, from saying the thing. If they're the rage character from Inside Out, if that's where they're at, meet them there. That doesn't mean it's the tone or the content of your entire interaction with them; it's just a starting point. Lots and lots of interview participants come with an expectation, and it's a bias about the interview, and then you're like: yes, and; yes, and tell me about this, and tell me about that. So I think it's very manageable, but it just takes a little bit of know-how.
Hannah Clark: Yeah, shout out to the improv tactic there, the "yes, and." Very good points. We've talked a lot about the methodologies around how we're thinking about things and conducting research, but I also want to make sure we mention some of the more specific tools we use to be effective in our roles. So what are some of the go-to research tools you folks are using for capturing insights and gathering user feedback, especially at scale?
Laura Klein: I feel like this is a very different answer for people who are on large teams versus small teams, teams that have research ops versus teams that don't, and consultants versus in-house. Anyway, it's not a question I can answer specifically. There are a ton of tools available right now, and I would say that, as with anything else, you need to look at what your team needs the most. Do you need to, you know, democratize research? Do you need to make things easily searchable? Do you need to manage participants? What is your specific problem? There are so many more tools out there than there were back when I was doing user research every week, but they each fill a specific niche, so just make sure that's the niche you need filled.
Thomas Stokes: Yeah. And I'll also say, I've got a rule for myself that I don't answer this question in a public venue with people coming from many different backgrounds and organizations, but I do answer this question in private channels. So if anyone does want actual contextualized advice around tools, I will help you out with that; you can reach out. But yeah, like you said, Laura, there are just way too many things that influence whether someone should choose this tool or that tool. There are even elements of the way a tool's maker does business that might not work with your organization and the way your procurement works. So there are all these thorny details about actually getting in and starting to use a software tool in the B2B space that just make it a slog.
Laura Klein: No single person has used all of the ones available, and there might have been one released yesterday that solves everybody's problems, and that's wonderful, and I wouldn't want to miss that one by saying, use this thing that I used five years ago.
Hannah Clark: I think we have maybe time for one more, so we'll just go ahead and jump into it here. It's actually coming from an anonymous user: how can we make UX easy for a novice user, but at the same time get out of the way of an expert user? Oh, this is an interesting one.
Laura Klein: That is also its own whole thing. Here's the thing: it's not like advanced users want things to be harder. It's just that sometimes they're doing weird things with your product. I think a lot of times people think of easy as, we're going to strip away all of your choices or all of your options, or we're going to shove everything into a settings page. And I think the most important thing is just to understand what people need in order to actually get started and be successful with your product. That onboarding experience, by the way, is probably not five screenshots with little arrows saying, hey, do this; hey, we added five new features, check it out. It's probably not a product tour like that. It's probably some sort of process that helps them get to the first time using your product while teaching them the steps they'd need to go through to be successful, and then having the option, the next time, to maybe move through that a little more quickly. It's very much understanding what is getting in the way of the novice user; you don't drop them into a giant complicated interface and say, good luck. And then understanding what makes a power user: what is a power user trying to do that is fundamentally different from what a novice user is trying to do, and how do we make that, again, easy the first time? Even power users might be doing something with your product for the first time, and you still need to make that easy as well. So don't think about it so much in terms of noobs versus old hands or whatever. It's: how do I get people started doing the thing they want to do, in a way that helps them learn what they should do the next time they want to do it? And you can have settings and things for actual power users. I'm an ex-engineer, and sometimes we want hotkeys. Don't take them away from us.
Hannah Clark: Panelists, this was exactly as fun as I knew it would be. So thank you so much for being here, for giving your insights and your great personalities. We really appreciate your expertise and your time. Thanks for listening in. For more great insights, how-to guides, and tool reviews, subscribe to our newsletter at theproductmanager.com/subscribe. You can hear more conversations like this by subscribing to The Product Manager, wherever you get your podcasts.