In my last two articles, we delved into how to Observe and Orient within the data landscape. Now, let's move on to the next phase of the OODA Loop: Decide. A year ago, when my company restructured its setup around new tools, I was thrust into a new team by my former manager. The challenge was to bring order to the newly created chaos. He joked that he was sending me into the jungle with only a Swiss Army knife, referring to the few technical skills I had mastered previously.
The message was clear: I’d need to depend on those skills to navigate this uncharted, sometimes hostile environment—much like Rambo cutting his way through the wilderness. The new setup posed a real challenge. After all, “Chaos is an integral part of change. With the more naïve two-stage model, we don’t expect Chaos. When it occurs, we mistake it for the New Status Quo.”1
Accustomed to relying on data dashboards for visibility, I found myself in unfamiliar territory where data insights were still new and somewhat alien. I had to quickly adapt to different ways of tracking and analyzing information while trying to establish structure in an environment that was still taking shape. Fortunately, I had some excellent resources to draw on. One example that mirrors my situation comes from “The Hard and Soft Sides of Change Management: Tools for Managing Process and People”: “The initiative not only involved replacing the outdated applicant tracking system, but also would entail changing some of the processes hiring managers and recruiters used to post job vacancies, review résumés, identify candidates to be interviewed, and send job offers (...) Changing the technology meant changing work processes, which meant changing the organizational structure.”2 What follows are a few of the key lessons I learned from this experience.
Scoring goals differently
I've noticed some fascinating parallels between performance on the sports field and in the workplace. Just as the movie “Moneyball” showed how analytics can revolutionize our understanding of talent management in sports, similar principles apply in the corporate world. For Brad Pitt's character in the film, finding the right metrics was the difference between success and failure. Ignoring or misinterpreting data can be just as misleading as focusing on the wrong numbers. With the growing importance of analytics in sports, where top performers often become role models, it's clear that data-driven decision-making is critical. But how do these insights translate to people analytics in the workplace?
Take hockey as an example. Wins and goals are usually highlighted, but they don't tell the full story of a team's performance. To truly understand which team is better, you need to dig deeper than these surface metrics. A more insightful metric is Shots-At-Goal (SAG), which tracks every scoring attempt and provides a more complete picture of a team's offensive capabilities.
At first glance, it may seem logical to judge a goalie by the number of goals he allows. However, a better approach is to consider how many shots he has stopped. A goalie who faces and stops more shots will often outperform a goalie who allows fewer goals but faces fewer attempts. This is where SAG comes in. SAG counts every attempt at the goal, not just the ones that result in a save or a goal, and because it yields far more data points, it helps counteract the law of small numbers.
Think about it: the average NHL team scores about 450 goals in a season, but it takes about 5,000 shots on goal (SOG) and makes about 9,000 SAG attempts. The difference matters. SOG only counts shots that either score or are stopped by the goalie. SAG, on the other hand, includes every attempt at the goal, whether or not it is on target, providing a more complete picture of a team's offensive efforts.3
This distinction is important because goals, which occur about 2.3 times per game, are much rarer than SAG attempts, which occur more than 10 times per game. With more SAG data, the impact of luck or chance diminishes. A lucky bounce that leads to a winning goal doesn't necessarily reflect a team's true strength. By capturing total offensive effort, SAG provides a more accurate picture of a team's skill level. It's like zooming in to see which team is consistently creating scoring chances - a true indicator of hockey talent. Similarly, in sales, success shouldn't be measured solely in terms of deals closed. Focusing only on the bottom line can be misleading. Instead, tracking behind-the-scenes activities - such as sales calls, follow-ups, and meetings - provides a clearer picture of a sales team's true effectiveness.
Gergely Orosz, in his critical response to the McKinsey consultants, argues that measuring team performance is often more meaningful than focusing on individuals. He points out that: “Engineering teams track performance by projects shipped, business impact, and other indicators, similarly to how sports teams track performance via numbers of wins, losses, and other stats.” That should be the case not only for software engineering productivity.4
In competitive environments such as sports, focusing solely on individual statistics can be misleading when evaluating overall team performance. Research shows that simply counting goals scored doesn't effectively predict future success. A better indicator might be the number of shots taken, but even that isn't enough. It's more valuable to consider assists and teamwork, with possession often the best predictor of success.
The key is to consider the complexity and rarity of the desired outcome. The more complex and uncommon the success metric (such as closing a big deal), the more important it is to measure the activities that lead to that outcome. By focusing on team performance rather than individual accomplishments, we foster a culture of collaboration rather than one centered on rock stars. When team members rely on each other, evaluating performance at the group level becomes more effective. This approach not only improves results, but also boosts team morale and cooperation.
One team, one dream
Speaking of team collaboration, it's important to recognize the significant impact that leaders have on team morale. A global consulting firm recently conducted a study analyzing the networks of some 80 partners. The study uncovered two types of valuable collaboration that were overlooked by the firm's performance management system, which focused primarily on individual revenue generation. These overlooked collaborations involved partners working together to win new clients and provide excellent service to existing clients.5
To address this oversight, the firm should update its performance evaluation system to recognize and reward partners who contribute to these collaborative efforts. By shifting the focus from individual performance to teamwork, the company can better capture the true value created by these partnerships.
In many organizations, tracking individual performance can be challenging and even counterproductive. Grouping employees into teams encourages collaboration and shifts the focus from individual performance to collective success. This approach contrasts sharply with the freelancer model, where individuals may prioritize personal gain over teamwork and may seek other opportunities if collaboration isn't emphasized.
A useful way to think about team dynamics is through the lens of a sports team rather than a family. Patty McCord, describing Netflix's culture, wrote: “we decided to use the metaphor that the company was like a sports team, not a family. Just as great sports teams are constantly scouting for new players and culling others from their lineups, our team leaders would need to continually look for talent and reconfigure team makeup”.6 The family metaphor, often criticized for blurring the lines between work and personal life, falls short because, unlike families, organizations sometimes have to let go of members for poor performance or financial reasons. The sports team metaphor is more realistic and practical for shaping organizational culture.
Another example comes from Google's Project Aristotle, a study of what makes teams work well together. The study revealed that success wasn't driven by seniority, experience, or diversity, but rather by giving every team member an equal opportunity to share their opinions. Once again, teamwork proved superior to solo efforts, both in terms of performance and in creating a positive team dynamic.7 This points to the importance of focusing on team performance and examining individual performance only when the team's impact appears to be limited.
Finding the flow
The performance of knowledge workers differs significantly from that of factory workers. Their tasks are less repetitive and mechanical, and involve more complexity, critical thinking, and dependencies that can cause delays. As emphasized in "Making Work Visible", task size and coordination are critical in this context.8 While efficiency is key in cost accounting for large, predictable projects such as building airplane engines, knowledge work such as software development requires a different approach. In these environments, coordination costs can increase dramatically as batch sizes increase. Unlike traditional manufacturing, where larger batches often lead to better economies of scale, managing knowledge work requires a shift in perspective.
In knowledge work, flow efficiency is a vital concept. It can be expressed by the formula: Flow Efficiency = (Work / (Wait + Work)) * 100. This equation calculates the percentage of time spent on productive work out of the total time, which includes both waiting and working. Understanding flow efficiency is essential for analyzing process and operational efficiency because it shows how much time is being used effectively versus time lost waiting.
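To make the formula concrete, here is a minimal Python sketch; the two-day/eight-day example is invented purely for illustration.

```python
def flow_efficiency(work_time: float, wait_time: float) -> float:
    """Percentage of elapsed time spent on productive work.

    Flow Efficiency = (Work / (Wait + Work)) * 100
    """
    total = work_time + wait_time
    if total == 0:
        raise ValueError("work_time + wait_time must be greater than zero")
    return work_time / total * 100


# Example: a task that took 2 days of actual work but sat waiting for 8 days.
print(flow_efficiency(work_time=2, wait_time=8))  # 20.0 -> only 20% of elapsed time was productive
```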
A key principle of lean manufacturing is to reduce batch sizes. This "less is more" approach minimizes the amount of work in progress at any one time, allowing teams to focus on efficiently completing tasks. The benefits are clear: faster feedback loops for quicker adjustments, improved quality through early identification of problems, and overall efficiency gains. In software development, where code can quickly become obsolete, working in smaller batches is even more critical. It helps maintain a smooth and consistent workflow, prevents bottlenecks, and keeps the organization on track for success.
Two important metrics to understand in this context are lead time and cycle time. Lead time measures the total time from the start of a process to its completion, such as from the time raw materials are ordered to the time the finished product is delivered. Cycle time, on the other hand, measures the time it takes to complete a specific process step, such as turning raw materials into a finished product. Customers care about lead time because it affects the speed of delivery, while teams focus on cycle time to improve efficiency by reducing the time spent at each stage. In lean organizations, the goal is to minimize both lead time and cycle time to achieve fast delivery and efficient production.
These concepts apply to recruiting as well. Lead time in recruiting covers the entire period from requisition to hire and reflects the candidate's journey. Cycle time begins when the candidate enters the active pipeline and ends at hire, focusing on process efficiency. Tracking these metrics helps identify delays and improve hiring stages. In addition, metrics such as work in progress (WIP) and throughput are critical to checking the operational health of a team. WIP tracks the number of tasks or projects currently being worked on, indicating whether the team is overloaded. Throughput measures the amount of work completed in a given time period, such as weekly or monthly, and can signal shifts in performance that require attention. By focusing on these metrics, organizations can identify inefficiencies and develop strategies for continuous improvement to ensure teams are performing at their best.
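As a rough illustration of these metrics, here is a small Python sketch built on hypothetical candidate records; the field names and dates are made up and not taken from any real applicant tracking system.

```python
from datetime import date

# Hypothetical candidate records: requisition opened, candidate entered the
# active pipeline, candidate hired. All dates are invented for illustration.
candidates = [
    {"req_opened": date(2024, 1, 5), "entered_pipeline": date(2024, 2, 1), "hired": date(2024, 3, 1)},
    {"req_opened": date(2024, 1, 10), "entered_pipeline": date(2024, 1, 20), "hired": date(2024, 2, 15)},
]

for c in candidates:
    lead_time = (c["hired"] - c["req_opened"]).days         # requisition -> hire: what the business feels
    cycle_time = (c["hired"] - c["entered_pipeline"]).days  # active pipeline -> hire: what the team controls
    print(f"lead time: {lead_time} days, cycle time: {cycle_time} days")

# Throughput: hires completed in a given period (WIP would be the count of
# candidates still in process at a point in time).
hires_in_february = sum(1 for c in candidates if c["hired"].month == 2)
print("February throughput:", hires_in_february)
```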
Seasonality, indexes and rankings
The law of small numbers can create the illusion of a trend when it's really just an outlier. Let me give you an example from my days at an agency. During an exercise to match prospects with job openings on Salesforce, I hit the 5x benchmark. My manager was ecstatic, but I had simply cherry-picked roles I knew well and prospects I remembered from previous sessions. I was lucky because I had a better understanding of the technical requirements than my peers. However, this success wouldn't be scalable to other roles and markets over time. This experience taught me an important lesson about interpreting performance metrics - sometimes what appears to be a trend is just a fluke.
In my article on understanding the use of LinkedIn Recruiter licenses, I emphasized that comparing teams or roles - not individuals - provides a better understanding of performance. Using indexes can be incredibly helpful, as long as everyone understands what they represent. I suggested the LRI Index 2.0 (shared with me by the LinkedIn team), which incorporates weights for four different metrics and generates a ranking sorted from highest to lowest performer. While the roles may change, the process remains the same: it's still about search, views, reach and daily activity. However, metrics can be tricky. Consider a situation involving reachout messages from LinkedIn Recruiter: “if the user is sending in July 500 messages and in August 50, the system will assume that all the replies received in August were matching those 50 InMails. Even if the replies are to InMails from August, July and even much earlier. But the same happens with comparing Applicants to Hires - if the candidate applies in December, but gets hired in January.”9
To address these delayed results, such as hiring decisions made after the initial application, consider using time-lagged attribution. This method attributes results to the period when the initial action occurred, such as sending messages or receiving applications. In addition, "carry-over metrics" can track activity across reporting periods, such as ongoing conversations or applications in process. Adjusting reporting cycles to better align with the lifecycle of activities can provide a more accurate picture of performance. Predictive analytics, which estimate future outcomes based on historical data, can also help understand long-term trends.
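One simple way to sketch time-lagged attribution is to tie each reply back to the month the original message was sent rather than the month the reply arrived. The example below uses invented message records and is not how LinkedIn Recruiter itself reports the data.

```python
from collections import defaultdict

# Hypothetical outreach log: message id -> month it was sent.
inmails = {"m1": "2024-07", "m2": "2024-07", "m3": "2024-08"}
# Ids of the messages that eventually received a reply.
replies = ["m1", "m3", "m2"]

# Attribute each reply to the month the original InMail went out,
# instead of the month the reply happened to arrive.
replies_by_send_month = defaultdict(int)
for message_id in replies:
    replies_by_send_month[inmails[message_id]] += 1

print(dict(replies_by_send_month))  # {'2024-07': 2, '2024-08': 1}
```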
A practical approach is to create a combined index that includes various metrics such as Offer Acceptance Rate (OAR), Quality of Hire, and Time to Hire. This approach provides a comprehensive view of a recruiter's performance. However, it's important to recognize that recruiters have different working styles. For example, a recruiter who excels at in-depth candidate assessments may have less daily activity than one who focuses on rapid outreach. Not all recruiters thrive in a fast-paced environment. Some excel at building strong relationships with candidates, while others are more metrics-driven. The best approach is to use a combined index for a broad view, while considering individual recruiter strengths to set realistic expectations.
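A combined index of this kind can be as simple as a weighted sum of normalized metrics. The weights and recruiter figures below are purely hypothetical (they are not the LRI Index 2.0 weights); in practice they should be agreed with stakeholders and revisited regularly.

```python
# Hypothetical recruiter metrics, already normalized to a 0-1 scale so they can
# be combined (time_to_hire is inverted so that higher means faster).
recruiters = {
    "Recruiter A": {"offer_acceptance_rate": 0.90, "quality_of_hire": 0.70, "time_to_hire": 0.60},
    "Recruiter B": {"offer_acceptance_rate": 0.75, "quality_of_hire": 0.85, "time_to_hire": 0.80},
}

# Illustrative weights - chosen for the example, not taken from any real index.
weights = {"offer_acceptance_rate": 0.4, "quality_of_hire": 0.4, "time_to_hire": 0.2}

scores = {
    name: sum(weights[metric] * value for metric, value in metrics.items())
    for name, metrics in recruiters.items()
}

# Rank from highest to lowest combined score.
for name, score in sorted(scores.items(), key=lambda item: item[1], reverse=True):
    print(f"{name}: {score:.2f}")
```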
Seasonality plays an important role in performance metrics. For example, in November, when companies close their budgets and want to know their annual results, the year isn't even over. By mid-November, you may only have data for the last 10 months. So how do you project results for the full year? Should you base your calculations on just 10 months instead of 12? One method is to use the average revenue from the first 10 months to estimate a baseline. While this provides a quick snapshot, it may miss important seasonal trends. Alternatively, you could extrapolate the current trend to predict full-year results. This approach provides a more complete picture, but requires you to account for seasonal variations that could skew the forecast. If your business experiences large fluctuations throughout the year, historical data can help you adjust your estimates for greater accuracy.
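The two projection methods can be compared side by side in a few lines; the monthly figures and the assumed November-December share below are invented for illustration.

```python
# Ten months of (hypothetical) hires, January through October.
monthly_hires = [12, 10, 14, 13, 15, 11, 9, 10, 16, 18]

# Method 1: naive extrapolation from the year-to-date monthly average.
naive_full_year = sum(monthly_hires) / len(monthly_hires) * 12

# Method 2: seasonal adjustment - suppose historical data shows that November
# and December together usually contribute about 20% of the yearly total.
historical_nov_dec_share = 0.20  # assumed from prior years
seasonal_full_year = sum(monthly_hires) / (1 - historical_nov_dec_share)

print(f"Naive projection:    {naive_full_year:.0f} hires")
print(f"Seasonal projection: {seasonal_full_year:.0f} hires")
```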
As Daniel Kahneman explains, ranking can reduce both pattern noise and level noise. When you compare the performance of two team members instead of giving each a separate grade, you're less likely to see inconsistencies. He offered an example: “If Lynn and Mary are evaluating the same group of twenty employees, and Lynn is more lenient than Mary, their average ratings will be different, but their average rankings will not. A lenient ranker and a tough ranker use the same ranks.”10 This approach is especially useful for roles where people work outside the regular schedule - weekends, nights, holidays. To make fair comparisons, normalize the data by dividing the metrics by the number of days or individual employees active during a given time period.
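Kahneman's point is easy to demonstrate: the ratings below are fabricated so that Lynn scores everyone higher than Mary, yet the resulting rankings are identical.

```python
# Hypothetical ratings of the same five employees by two evaluators.
lynn = {"Ann": 9, "Bob": 8, "Cyd": 7, "Dan": 6, "Eve": 5}   # lenient rater
mary = {"Ann": 6, "Bob": 5, "Cyd": 4, "Dan": 3, "Eve": 2}   # tough rater

def to_ranks(ratings: dict) -> dict:
    """Convert raw scores into ranks (1 = best)."""
    ordered = sorted(ratings, key=ratings.get, reverse=True)
    return {name: rank for rank, name in enumerate(ordered, start=1)}

print(to_ranks(lynn))  # {'Ann': 1, 'Bob': 2, 'Cyd': 3, 'Dan': 4, 'Eve': 5}
print(to_ranks(mary))  # identical ranks, even though Mary's average rating is lower
```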
Whichever method you choose, be clear about your approach and the assumptions behind it. Communicate these to stakeholders so they understand the reasoning behind your projections. Don't overlook the impact of seasonality - use historical data to inform your projections and ensure your estimates are as accurate as possible.
Over-reliance on algorithms
When it comes to predicting hiring performance, different methods offer different levels of accuracy. The simplest approach is to compare the characteristics of the best and worst performers and test for statistical significance. However, this often misses deeper complexities. A more accurate method compares these characteristics within the same cohort and job, taking into account role-specific differences. To gain further insight, multivariate regression can be used to analyze multiple factors simultaneously, providing a clearer understanding of performance drivers. The most effective approach combines multivariate regression with selection correction, which accounts for hiring and attrition biases and provides the most accurate and reliable predictions of hiring performance.
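As a rough sketch of the multivariate step only (selection correction needs more machinery, such as a Heckman-style model, and is left out here), a plain least-squares fit over a few simulated candidate characteristics might look like this.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated hiring data: three candidate characteristics (e.g. interview score,
# test score, years of experience) - random numbers, purely for shape.
n = 200
X = rng.normal(size=(n, 3))
# Simulated "performance", driven mostly by the first two predictors, plus noise.
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(scale=0.5, size=n)

# Multivariate least-squares regression: estimate all coefficients at once,
# so each predictor is assessed while holding the others constant.
X_with_intercept = np.column_stack([np.ones(n), X])
coefficients, *_ = np.linalg.lstsq(X_with_intercept, y, rcond=None)

print("intercept and coefficients:", np.round(coefficients, 2))
# Note: this sketch ignores selection bias - in real hiring data we only observe
# performance for people who were actually hired, which is what selection
# correction is meant to address.
```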
Given the complexity of these methods, it's important to ask: How does data analysis compare to human judgment? Unfortunately, even when various tests and selection methods are combined, much of the variation in performance remains unexplained. More telling still, attrition decreased when managers overrode the algorithm less often. This finding suggests that while algorithms often produce better results, human intervention can introduce inconsistencies.11
Despite the potential of algorithms to improve decision making, people still tend to prefer human judgment. This preference exists because people are generally more forgiving of human errors than those made by algorithms. However, this resistance to algorithmic decisions diminishes when people are involved in the process, even minimally. By participating in the process, people become more comfortable with and accept the results generated by algorithms.12
This preference for human judgment isn't limited to algorithms; it also affects performance reviews. Different interviewers can have very different opinions about the same candidates, and performance reviews of the same employee can vary widely, often reflecting the reviewer's perspective more than the employee's actual performance. This inconsistency reflects the subjective nature of personal evaluations in the workplace. Daniel Kahneman offers a “general rule: the combination of two or more correlated predictors is barely more predictive than the best of them on its own. Because, in real life, predictors are almost always correlated to one another.”13
This variability highlights the importance of process analysis in areas such as technical support, where traditional metrics often track individual performance in resolving problems. For example, if a manager notices that Specialist A takes longer to install a new operating system than Specialist B, the first reaction might be to focus on individual coaching. But a more effective approach would be to analyze the process itself. Are Mac installations more cumbersome than PC installations? Is there a lack of standardized tools or training materials? By focusing on process performance metrics - such as average task completion time or number of errors encountered during installations - managers can identify bottlenecks and inefficiencies within the system. This data-driven approach allows them to streamline processes, set the entire team up for success, and deliver more consistent, efficient service to customers. This method can also be applied to non-technical issues, especially when comparing inefficient processes to those that are already semi-automated. Focus on process analysis to identify and eliminate inefficiencies, while using individual coaching as a supplementary strategy.
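To see the process view in practice, the same task log can be aggregated by platform instead of by person; the records below are invented for the example.

```python
from statistics import mean

# Hypothetical installation log: (specialist, platform, minutes to complete).
installs = [
    ("A", "Mac", 95), ("A", "PC", 60), ("A", "Mac", 100),
    ("B", "PC", 55), ("B", "PC", 50), ("B", "Mac", 90),
]

def average_minutes_by(key_index: int) -> dict:
    """Average completion time grouped by the chosen column (0=specialist, 1=platform)."""
    groups = {}
    for record in installs:
        groups.setdefault(record[key_index], []).append(record[2])
    return {key: round(mean(times)) for key, times in groups.items()}

print("By specialist:", average_minutes_by(0))  # {'A': 85, 'B': 65}
print("By platform:  ", average_minutes_by(1))  # {'Mac': 95, 'PC': 55} - the process, not the person, drives the gap
```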
In addition to the challenges of algorithmic decision making versus human judgment, there's also the role of perceived luck in decision making. People sometimes think of luck as something inherent, almost like a personal trait, that affects their decisions in unpredictable ways. For example, studies have shown that “people were less willing to sell their lottery tickets when they had chosen the ticket number themselves than when the numbers had been chosen for them. (...) It is confusion between chance and skill, but the ‘skill’ involved may be exactly the sort of clairvoyance.”14 This illustrates how people often confuse chance with skill, which can further complicate the already subjective nature of human judgment in performance evaluations.
Emotional connection
Remember, all the numbers are there to help you make decisions. Dashboards are the journey, not the destination. Early in my analyst career, I made the mistake of building every possible permutation of data, trying to predict what end users would ask for. But I learned that less can actually mean more. While we strive to provide self-service solutions, not every question needs an answer. The issue is not just whether the data is correct, but whether the explanation makes sense. Or, as the author of “Communicating with Data” puts it: “If I didn't know an answer, stakeholders often saw it as a gap in the data. The data was correct; the problem was that the message wasn't fully conveyed correctly.”15
As I mentioned in my previous article, we often rush to see patterns even in meaningless information. This tendency can lead “to conspiracy theories and Malcolm Gladwell best-sellers. In the 21st century, society’s guardians of truth are the statisticians, and they name these false positives Type 1 errors.”16 Type 1 errors occur when we conclude that there is an effect or a difference when, in fact, there isn't one. This is why a critical approach is essential for anyone who wants to use data. In our post-truth world, verifying misinformation requires even more effort. We cannot afford to be data illiterate.
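A quick simulation shows how easily Type 1 errors appear when enough comparisons are run on pure noise; the 5% threshold below is the conventional significance level.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)

# Compare 1,000 pairs of groups drawn from the *same* distribution -
# any "significant" difference found here is a false positive (Type 1 error).
trials = 1000
false_positives = 0
for _ in range(trials):
    group_a = rng.normal(size=30)
    group_b = rng.normal(size=30)
    _, p_value = ttest_ind(group_a, group_b)
    if p_value < 0.05:
        false_positives += 1

print(f"{false_positives} of {trials} comparisons looked 'significant' by chance alone")
# Expect roughly 50 (about 5%), even though no real effect exists.
```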
Success is often attributed to someone's skill or luck, but we overlook the numerous failures they have faced because those failures are not publicized. A solid paper by Jonathan Baron and John C. Hershey, "Outcome Bias in Decision Evaluation," points out that decision makers are often judged on the outcomes of their decisions rather than on the quality of the information available to them at the time.17 In many areas - law, regulation, politics, and everyday life - people evaluate decisions with the benefit of hindsight, leading to biased judgments. This outcome bias, closely related to hindsight bias, occurs when we confuse the outcome of a decision with the quality of the decision itself. As a result, we may unfairly judge a decision as bad simply because it led to a bad outcome.
But being aware of this bias isn't enough to avoid it. When a decision leads to an unfavorable outcome, it's helpful to review the decision from the perspective of the person who made it, given the information they had at the time. This method ensures a fair assessment and can lead to better decisions in the future. Interestingly, society rarely sets up commissions of inquiry to examine decisions that turn out well, even though such analysis could be equally valuable.
In my previous article, I also mentioned how managers can game the system to meet certain quotas. It's not just about measurable goals, it's also about feeling good about yourself. Simply put, being human makes us act with all the implicit biases of a self-serving worldview. “Personal interests and involvements often distort the way people treat information and the way they argue, and emotional commitments make it harder to look at an issue from someone else's point of view.”18 Daniel Goleman, a leading expert on emotional intelligence (EQ), argues that emotions play a much larger role in thinking, decision-making, and individual success than is commonly recognized. In a world where we are drowning in data, the ability to think critically is undervalued. Often, intuition - learned from experience and recognizing recurring patterns - provides better guidance than blind faith in data.
As we can see, making a decision isn't easy, and being armed with numbers can complicate the picture even more. Changing the way you look at a problem, much like putting a picture in a different frame, can significantly change your thoughts and feelings about it. Martin Cohen, who penned “Critical Thinking for Dummies”, suggests using a concept called the Powers of Ten. “Basically, the method is to exaggerate everything and take it ‘to the extreme’. If you’re, say, designing a play area for children on a budget of £1,000, you may ask ‘what if the budget is only £10 – or what if it’s £1 million?’ If the area is likely to fit in a classroom, you may ask what if it’s just 1 metre square — or what if it's the size of the playing field?”19 This technique can help you reframe your perspective and approach problems from new angles, leading to more creative and effective solutions.
Critical communications
Does it make sense to bring transparency to data, especially when the results may be troubling? More often than not, it can spark important conversations that many of us tend to avoid. I recall a situation where a team leader was reluctant to track his team's efficiency, citing the stress of recent structural changes and the implementation of a new system. My manager, with a keen perspective, asked, "But don't you want to know what's causing your team's stress?" This question underscored the importance of uncovering the root causes of problems, even when the data may reveal uncomfortable truths. Fortunately, it opened the door to dialogue and understanding.
Problem solving and analysis require context, so data must be integrated across the HR function to effectively address broader business challenges. However, as we increasingly rely on AI tools for data analysis, we need to be wary of potential pitfalls. The issue of "hallucination" in generative AI - where AI confidently presents incorrect information - can also distort our expectations of each other. If people become too familiar with AI tools and start to trust them blindly, they run the risk of relying too much on the AI's output and potentially ignoring their own common sense. A recent experiment highlighted this danger, showing that when users trust AI results without applying critical thinking, they are more likely to make mistakes. Fabrizio Dell'Acqua's research aptly titled “Falling Asleep at the Wheel: Human/AI Collaboration in a Field Experiment on HR Recruiters” shows that when we rely on tools that automate all the work, we become more prone to error.20 For this reason, I believe that critical thinking is essential to continuous improvement. While data and AI tools are valuable resources, decisions should be based on sound judgment as well as data.
There's an old marketing adage: “If you can't measure it, you can't improve it.” But relying too heavily on data carries significant risks. Nike learned this the hard way when it lost market value. The company invested billions in a strategy that was easier to measure, overlooking a more effective option simply because it was harder to quantify.21 This mistake led to a significant financial loss - up to $25 billion in market capitalization - tied directly to ineffective branding decisions.
My former manager, whom I mentioned earlier, often loosely quoted Martin Luther King's philosophy: “It's not enough to be right; you have to act to make a meaningful impact”. While statistics and insights are valuable, they don't create value on their own. It's then essential to close the OODA (Observe, Orient, Decide, Act) loop with decisive action, a topic we'll explore in the upcoming final chapter.
–Michael Talarek
Timothy Lister, Tom DeMarco, “Peopleware: Productive Projects and Teams”
Kathryn Zukof, “The Hard and Soft Sides of Change Management: Tools for Managing Process and People”
Eric Van Vulpen, “The Basic principles of People Analytics” | https://www.aihr.com/resources/The_Basic_principles_of_People_Analytics.pdf
People Analytics course at Wharton University (Coursera) | Martine Haas, “Intervening in Collaboration Networks” | https://www.coursera.org/learn/wharton-people-analytics
Patty McCord, “Powerful: Building a Culture of Freedom and Responsibility”
Charles Duhigg, “What Google Learned From Its Quest to Build the Perfect Team”, The New York Times | https://www.nytimes.com/2016/02/28/magazine/what-google-learned-from-its-quest-to-build-the-perfect-team.html
Dominica DeGrandis, “Making Work Visible”
Daniel Kahneman, “Noise: A Flaw in Human Judgment”
People Analytics course at Wharton University (Coursera) | Prof. Matthew Bidwell, “Predicting Hiring Performance” | https://www.coursera.org/learn/wharton-people-analytics
People Analytics course at Wharton University (Coursera) | Cade Massey, “Talent Analytics: The Importance of Context” | https://www.coursera.org/learn/wharton-people-analytics
Daniel Kahneman, “Noise: A Flaw in Human Judgment”
Carl Allchin, “Communicating with Data: Making Your Case with Data”
Jonathan Baron, John C. Hershey, “Outcome Bias in Decision Evaluation” | https://www.sas.upenn.edu/~baron/papers.htm/judg.html
Martin Cohen, “Critical Thinking for Dummies”
Martin Cohen, “Critical Thinking for Dummies”
Fabrizio Dell'Acqua, “Falling Asleep at the Wheel: Human/AI Collaboration in a Field Experiment on HR Recruiters” | https://static1.squarespace.com/static/604b23e38c22a96e9c78879e/t/62d5d9448d061f7327e8a7e7/1658181956291/Falling+Asleep+at+the+Wheel+-+Fabrizio+DellAcqua.pdf
Massimo Giunco, “Nike: An Epic Saga of Value Destruction” | https://www.linkedin.com/pulse/nike-epic-saga-value-destruction-massimo-giunco-llplf/