ENGINEERING CHANGE® PODCAST
ENGINEERING CHΔNGE® is the podcast designed to help REDEFINE engineering by: RE-imaging who we see as engineers and what we see as engineering; DE-siloing our approach to academic programs, research, and problem solving; and FINE-tuning organizational conditions so people with different backgrounds and perspectives can contribute fully to outcomes that serve all of society. It's about being just as intentional with our organizational systems as we are with solving any other problem in engineering; applying a carefully planned, iterative process that includes the stakeholders from problematization through ideation, evaluation, and, ultimately, selection of the best solutions. Each episode will leave you with something concrete you can do to better understand your system and move forward from wherever you are in the process of ENGINEERING CHΔNGE®.
The Myth of Meritocracy in Engineering
Is engineering really a meritocracy?
We’re taught that hard work, strong performance, and clear metrics determine who advances. But what if the system isn’t as objective as it seems?
In this episode of ENGINEERING CH∆NGE®, I break down how "merit" is often interpreted - or even manufactured - rather than measured, and how the systems we trust to evaluate performance can actually distort it.
In this episode:
- Why performance without context is incomplete and often misinterpreted.
- How shifting standards and uneven scrutiny reshape who advances.
- What happens when metrics become targets and start driving behavior instead of reflecting impact.
Through real-world examples - from internship decisions to NSF review panels - this episode reveals how evaluation systems can manufacture merit rather than measure it.
If you’ve ever questioned how decisions really get made in academia, engineering, or leadership, this conversation will change how you see performance, potential, and fairness.
Ask yourself:
Are we rewarding true impact, or just what’s easiest to measure?
Grab a latte and listen.
If this conversation resonates with you, follow ENGINEERING CH∆NGE® and leave a five-star review to help more engineers and leaders join the conversation.
Visit the ENGINEERING CH∆NGE® podcast website to learn more and to request a free copy of my new brief, Engineering for Society.
ENGINEERING CHΔNGE® is a registered trademark held by Dr. Yvette E. Pearson for producing and providing podcasts.
Welcome to ENGINEERING CH∆NGE®, the podcast designed to help REDEFINE engineering by RE-imaging who we see as engineers and what we see as engineering; DE-siloing our approach to academic programs, research, and problem solving; and FINE-tuning organizational conditions so people with different backgrounds and perspectives can contribute fully to outcomes that serve all of society. Each episode offers actionable takeaways you can use wherever you are in the process of ENGINEERING CH∆NGE®. I'm your host, Dr. Yvette E. Pearson.

Hello, Agents of Change. Welcome to the ENGINEERING CH∆NGE® Podcast. Engineering is often described as a meritocracy. We like to believe that if you work hard, perform well, and meet the criteria, you advance. That outcomes are based on talent, effort, and measurable contributions alone. That belief feels especially natural in engineering. We're trained to value data, metrics, and objectivity. Numbers don't lie. Outputs reflect inputs. Performance speaks for itself.

But early in my career - actually, before my career even began - I had an experience that tested that belief. As an undergraduate engineering student, I was in a serious car accident one semester. I missed a significant amount of class time while I was recovering. And by the time I was able to get physically back on campus, I was behind, especially in two of my engineering courses. I had a choice to make. I could drop all of my classes and reboot the next semester, or I could drop down to 12 credit hours, which was the minimum required to keep my scholarships, and do the best I could with the remaining classes. I learned that if I dropped everything, I would lose my funding, so I decided to stay enrolled in four courses. I earned two As and I earned two Fs. On paper, that semester didn't look great. My GPA dropped. My transcript showed two failures in foundational engineering courses, and there was no footnote I could add to my transcript explaining the context, just the grades and the numbers.

Later, I went to interview for an internship with a company that was very specific about the GPA they required. Thankfully, this was way back in the 1900s, as my daughter would say, and the co-op office wasn't using computers at that time to screen folks for interviews, so I wasn't weeded out. The interviewer looked at my transcript. He looked at my GPA and at the two Fs, and he asked me quite directly why I was there, because my GPA was not what they were looking for. I explained what had happened. I explained the accident. I explained the scholarship situation. I explained the decision I made and why I made it. He paused. And instead of focusing on the numbers, he focused on my capabilities. He told me I showed good decision making and judgment, and I got the internship.

That moment has stayed with me for decades, because what happened in that interview, though I didn't fully understand it at the time, was an example of contextual evaluation. My GPA was an output, but it was an output produced in a given context under certain constraints. In engineering, we know outputs don't exist in isolation. They are shaped by inputs and conditions. If we ignore the conditions, we risk misinterpreting the outputs. That internship interview was the first time I realized something important about the idea of merit. Merit isn't just measured. It's interpreted. And interpretation depends on whether we're willing to examine context.
Over the course of my career in academia, in leadership, in federal service, and in consulting, I've seen that same dynamic play out again and again. Systems that claim to reward merit often rely on narrow metrics, and those metrics are treated as objective, but rarely is there a consideration for the conditions under which outcomes are produced. When we ignore context and constraints and irregularities, what we call merit can become something else entirely.

So today, I want to take a deeper look at the belief that many people hold almost reflexively, that engineering is a meritocracy. Here are some guiding questions to consider as we navigate this conversation. If two people produce the same output under very different conditions, is it really the same performance? And when we evaluate performance, do we consider the conditions under which it was produced? Or are we measuring output without context and calling it merit? These are questions I want to explore today. So grab a latte and listen as we dive into episode 32 of ENGINEERING CH∆NGE®, The Myth of Meritocracy in Engineering.

Consider this scenario that plays out in many organizations. A unit establishes clear criteria for advancement. The expectations are documented. The thresholds are explicit. People are told, "Meet these standards and you will advance." So someone does exactly that. They track the requirements carefully. They align their work to the stated criteria. They cross the finish line, believing the rules are stable.

But once the review process begins, the conversation shifts. Points that were counted in previous cases are interpreted differently for them. Weightings change subtly. New qualifiers appear; phrases like "fit," "trajectory," or "not quite there yet" enter the discussion, even though they were not part of the written standards. Policies are the same, but the goal posts have moved. The bar is raised after the jump.

And here's the problem. When criteria become fluid once a case is under review, merit stops functioning as an objective measure of contribution and starts functioning as a gatekeeping mechanism. It becomes situational. And situational standards produce different outcomes for different people, even though they appear objective on paper.

Now, don't get me wrong. Organizations absolutely have the right to evolve standards. Roles change, priorities change, expectations shift. That's expected. But refinement and retroactive reinterpretation are not the same thing. And when interpretation varies depending on the person being evaluated, something deeper is happening. Merit is no longer being measured. It is being manufactured.

On prior episodes, I've mentioned a survey I conducted of CEOs of small engineering firms across the US. It wasn't a lengthy survey, but it provided great insights into a number of issues. So I'd like to highlight one of them for this episode. Part of the survey asked CEOs to identify the three most common reasons employees leave their companies. I provided a list of nine possible reasons, including an open-ended option labeled "another reason." The most frequently selected reason, cited by just under 80% of respondents, was relocation or resignation for personal reasons. The second most common, which was selected by roughly 45%, was lack of growth and advancement opportunities. The third most common response, at about 40%, was "another reason."
And among those open-ended responses, the most frequently cited explanations were that employees had changed industries or careers altogether or that the CEO was unsure of the reason. Now, I want to be careful here. The survey did not measure whether personal reasons or career changes were connected to workplace experiences like the ones I've been describing. But as a systems thinker, I can't help but ask a question. When nearly half of leaders cite a lack of advancement opportunities and a substantial portion cite unexplained career changes as reasons employees leave, what might be happening inside those organizational systems?

When high-performing individuals encounter systems where standards move, where interpretation is inconsistent or where criteria are applied unevenly, some of us don't challenge the system. We don't appeal. We don't fight through 25 layers of review. We leave. And when we leave, it's often recorded as a personal choice, not a system failure, mostly because no one bothers to ask.

I get it. As leaders, it's not easy to look in the mirror and see where our systems fall short. But as James Baldwin said, "Not everything that is faced can be changed, but nothing can be changed until it is faced." We have to look in the mirror. And even when organizations do ask, many people won't say much because by that point they're done or they're thinking about future references and don't want to risk it. So they leave quietly.

And what's underneath all of this is how we define merit, because when people experience shifting expectations or uneven application of criteria, what they're really seeing is that merit isn't a constant, it's a variable.

And that's where this becomes an engineering problem. As engineers, we know if our measurement systems are flawed, our conclusions will be flawed. And yet in academia and often in engineering practice, we treat certain metrics as if they are synonymous with merit itself. In research-intensive environments, impact is often reduced to a short list of numbers - funding, publications, citations, and increasingly, composite indicators like the H-index. Now, don't get me wrong, I am not dismissing these metrics. Say it with me - "She's not dismissing these metrics." Thank you. But here's the engineering question. What happens when a metric stops being a measurement tool and becomes the target?

When that happens, behavior adapts. We've seen this dynamic in other systems. When standardized test scores became the dominant measure of success in K-12 schools in the US - and notice I emphasize schools and not students - instruction shifted toward optimizing for the test rather than optimizing for learning. That doesn't mean that the educators don't care about learning. It means that the systems and the people within the systems respond to what the systems reward.

The same thing happens in academia. Citation counts are rewarded more than practical applications, so scholars optimize for citation visibility. H-index has become shorthand for intellectual worth. So scholars are motivated to build citation networks, publish in venues that maximize academic referencing, and prioritize work that travels well inside scholarly circles. The metric begins to manufacture the merit, which begs the question, "Are we then truly measuring impact?"

Let's make this a bit more concrete. It's not uncommon to see scholarship that carries modest citation counts in academic databases, especially in research-to-practice oriented areas.
On paper, that can look unimpressive when compared to highly cited work, but some of that same scholarship may be downloaded thousands of times, and that suggests something different. It suggests the work is being accessed possibly by practitioners, policy leaders, educators, administrators, users, and decision makers who may never publish academic articles, but who are applying the ideas. Citations tell us who referenced the work in another scholarly publication. Downloads suggest who may be using it. Both are data points. Both demonstrate impact, but evaluation systems tend to elevate one and largely ignore the other. And when systems privilege one signal over the other, they send a clear message about what kind of impact counts.

When we then question how merit is defined, it's often framed as lowering standards. It's not. It's recognizing that different forms of impact travel through different channels. And if we only count what is easily cited, we may miss what is actually shaping practice.

From a systems perspective, that's an optimization problem. When evaluation systems are built around narrow indicators, people align their behavior accordingly. If individual authorship is rewarded more than collaboration, we incentivize solo work, even when complex problems require convergent approaches. If sole PI status is celebrated more than shared leadership, we shape how people build teams - or not. If H-index becomes a proxy for excellence, scholars organize their portfolios and networks around it. But what impact is the work truly creating? In other words, the metric doesn't just measure merit, it manufactures it.

In the ebook, Engineering for Society, I describe metrics like these as performance signals, the indicators a system elevates to represent value and define excellence. Because people respond to signals, the signals themselves shape what gets produced.

This also shows up in how we balance teaching and research and service in academia. In theory, all three matter. In practice, for tenure system faculty at research institutions, research dominates. For non-tenure system faculty, teaching dominates. And service, things like mentoring, committee work, cross-unit coordination, informal advising, often counts the least even when institutions publicly say they value things like collaboration, teamwork, mentorship, and student success. So what happens? Rational folks allocate their time toward what the system rewards most heavily. It's called structural alignment. When mentoring, outreach, recruitment, interdisciplinary bridge building, and knowledge translation are publicly praised, but minimally rewarded, they become optional and often invisible labor. So it makes sense that people choose to focus on the activities that move the promotion needle.

When we call a system a meritocracy, we have to ask, "Is it rewarding the full range of contributions that make engineering outcomes possible? Or is it rewarding the narrowest slice of what's easiest to count?" Because when merit is defined narrowly, excellence becomes narrow. And when excellence becomes narrow, we limit what kinds of contributions and contributors can advance. And once we decide that a narrow set of outputs equals merit, anyone who doesn't fit that output profile can be labeled as less impactful, even when their work may be shaping systems at scale.

If we look at how organizations are evolving, there's a clear shift in how performance is being understood.
Leading firms are finding that outcomes improve when decision making draws on better input, not just individual output. Many are moving toward skills-based models, evaluating people based on their capabilities and adaptability rather than just credentials or past metrics. Research shows that when organizations broaden how they define qualifications, they often uncover high-performing individuals they would have otherwise filtered out. If expanding how we define merit reveals talent we're missing, then the issue isn't a lack of high-performing people. It's that our definitions and our measurement systems haven't been capturing performance accurately in the first place.

Before we move on, let's not lose this thread. Performance outcomes don't happen in a vacuum. They're produced within systems and are shaped by conditions, constraints, and context. And when we define merit too narrowly or apply it inconsistently, we don't just simplify evaluation. We distort it. We start rewarding what's most visible and easiest to count while missing the contributions that actually drive results. At that point, merit isn't just being measured, it's being manufactured.

And here's where the conversation becomes even more complex, because even when the metrics are clear and even when the rules are written, merit is still interpreted by people, and human interpretation is never neutral.

I saw this very clearly when I served as a program officer at the National Science Foundation. I was leading a proposal review panel, and one of the proposals on that panel was from an HBCU. A panelist launched into an extended critique, not about the quality of what was being proposed, but about what the institution and its students were not going to be capable of accomplishing in that reviewer's eyes. Now, raising concerns about proposals and proposers is a part of the merit review process, so that wasn't a problem. The problem was that those same concerns were not raised about any of the proposals from other institutions. This panelist didn't speculate about whether those other universities' students were capable. They didn't preemptively question whether those institutions could execute the work. In fact, the reviewer began their comments by saying, "I may have an implicit bias here," which tells us immediately this was not an implicit bias. It was overt. Now, I have a lot more to say about that, but I'll hold off for now.

The bottom line is that scrutiny was uneven. This reviewer applied different evaluation criteria to that specific proposal, and that's where merit and perception begin to intertwine. Because before we ever get to counting publications or calculating grant funding, someone is deciding who gets the funding opportunities, who is presumed competent, and who must prove competence before it is granted. And when presumed competence flows easily in one direction and presumed incompetence flows automatically in another, merit is no longer functioning as a neutral filter. It's functioning as a selective amplifier.

Uneven scrutiny produces uneven opportunity. Uneven opportunity produces uneven output, and those outputs are then cited - listen to this - as evidence of merit or the absence thereof. When some people are automatically presumed competent and others are presumed incompetent, the system does not start everyone at the same line. The results will reflect that difference. And then those results are used to justify the very assumptions that shape them. That, my friends, is how myths sustain themselves.
So here's your system check. Over the next few weeks, take a look at how merit is defined where you work - not how it's described in mission statements, but how it is actually operationalized. What are the top three metrics that determine advancement? What performance signals are being sent, intentionally or unintentionally, about what excellence looks like? What behaviors do those metrics incentivize? What forms of contribution are praised publicly, but rewarded minimally? When someone falls short of a benchmark, does your organization examine only the output, or does it also consider the conditions under which that output was produced?

And here are a few harder questions. Who consistently receives the benefit of the doubt, and who must repeatedly prove themselves? If two people produce similar outcomes under different conditions, does your system see that difference, or does it collapse everything into a single number and call it merit? Because if merit is being defined narrowly, if scrutiny is uneven, if interpretation varies depending on the person, then what you have may not be a meritocracy, but rather, a myth.

Thank you for listening. If this episode was useful, do me a favor. Subscribe and leave a five-star rating and review. It helps this work reach others who are navigating change. To download resources or share ideas and questions for the show, visit engineeringchangepodcast.com. Until next time, remember, the most meaningful change comes from being as intentional about our systems as we are about our solutions. That, my friends, is ENGINEERING CH∆NGE®.