Range Is The New Black - Part III
Why Developing Range Is the Most Important Thing You Can Do for Managing and Developing Your Career
If you missed the previous opening posts on this topic:
Part I is here.
Part II is here.
Continuing…
Chapter 4 — Learning, Fast and Slow
In this chapter, David dismantles another sacred cow of conventional career advice: the idea that fast learning is always better learning.
In schools, bootcamps, and even professional development programs, we’re conditioned to seek efficiency. The faster you acquire a skill, the smarter you must be. The more fluently you can recall, repeat, and replicate, the more advanced you’re perceived to be.
But as David shows, there’s a massive difference between fluency in the moment and flexibility over time. And surprisingly, it’s the struggler, not the natural, who often comes out ahead.
The Interleaving Study: Slow Learners, Fast Results
David cites a fascinating experiment with two groups of students learning to identify types of paintings. One group studied paintings by artist A, then artist B, then C, all in “blocked” sessions. The second group studied the same paintings, but mixed them up, a technique called interleaving.
The results were revealing.
During training, the blocked group outperformed the interleaved group by a significant margin. They “learned faster.” But when tested later, when the task required transferring knowledge, the interleaved group crushed it.
Why? Because the struggle of interleaving, of comparing, contrasting, and making distinctions created deeper encoding. It felt harder, but it worked better.
David’s conclusion? “Desirable difficulty,” the kind that makes learning slower, actually leads to more robust performance when it matters most.
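To make the contrast concrete, here is a minimal sketch (my own illustration, not from the book or the study) of the two practice schedules in plain Python. Blocked practice groups all items by category; interleaving shuffles them so every trial forces a comparison across categories. The artist and painting names are placeholders.

```python
import random

def blocked_schedule(items_by_artist):
    """Blocked practice: study all of artist A, then all of B, then C."""
    schedule = []
    for artist, paintings in items_by_artist.items():
        schedule.extend((artist, p) for p in paintings)
    return schedule

def interleaved_schedule(items_by_artist, seed=42):
    """Interleaved practice: shuffle paintings across artists,
    forcing contrast and discrimination on every trial."""
    schedule = [(a, p) for a, ps in items_by_artist.items() for p in ps]
    random.Random(seed).shuffle(schedule)  # fixed seed for reproducibility
    return schedule

study_set = {
    "Artist A": ["a1", "a2", "a3"],
    "Artist B": ["b1", "b2", "b3"],
    "Artist C": ["c1", "c2", "c3"],
}

print(blocked_schedule(study_set))      # long runs of the same artist
print(interleaved_schedule(study_set))  # artists mixed together
```

Same nine items either way; only the ordering differs. The blocked order feels fluent during practice, while the interleaved order creates the “desirable difficulty” that pays off at test time.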
Where This Shows Up in Tech Careers
Let’s translate that to your career.
The professional equivalent of blocked learning is what most engineers and analysts do early in their roles: focus on one problem domain, one language, one layer of the tech stack. The work becomes fluent. You can solve things quickly, almost without thinking.
It feels like mastery.
But then something changes: business priorities shift, your tools evolve, or, as we are learning, AI automates part of your workflow, if it hasn’t already. Suddenly, you’re asked to solve a new kind of problem: unfamiliar data, undefined metrics, a new ‘wicked environment’ requiring design and build tradeoffs that will change business outcomes.
So what happens to the people who struggled more in the early days, who rotated through different tools, took on the weird projects no one was willing to touch, or had to explain their work to non-technical audiences? They are the ones primed to make confident decisions in ambiguity. Because they learned how to learn.
The speed of your early fluency is not the same as your readiness for complexity.
AI Makes This Even More Critical
Let’s put this in the context of AI.
AI accelerates syntactic fluency. It helps you autocomplete code, translate frameworks, even write documentation. This is the kind of speed that flatters the fast learner.
But AI can’t do:
Model-to-business mapping
Organizational prioritization
Systems thinking across departments
Reconciliation between independently developed roadmaps and architectures
Human intuition in unknown domains
That’s where the interleavers, the slow, struggling learners, win.
Because they weren’t just memorizing patterns. They were building mental models that not only adapt to complexity but scale with it.
Why This Matters in a Wicked World
Wicked environments don’t reward people who are masters of tools. They reward people who can transfer principles from one domain to another.
When the problem is fuzzy, the systems have shaky foundations, and priorities conflict, you need more than fluent system builders. You need flexible thinkers.
So if you’ve felt like your learning journey has been messy, if you’ve had to figure things out slowly, through failure, across different stacks or domains, then that wasn’t wasted time. That was your real edge being built.
Practical Strategy: How to Train Like a Generalist
Practice “interleaved” learning in your career
Don’t optimize for staying in one domain just because it’s comfortable. Mix projects. Pair with someone in a different role. Explain your work to people outside your team. Each friction point builds cognitive flexibility.
Do things the hard way (on purpose)
Every once in a while, write code without your favorite helper tools. Build a model from scratch without AutoML. Recreate a report without a template. Struggle intentionally. That’s where the learning lives.
Debrief your decisions, not just your results
The fast learner just celebrates that something worked. The flexible learner asks: “Why did that work? Would it work here again? What was different?” That reflection is what trains adaptive judgment.
Teach what you learn
Teaching forces you to translate deep understanding into clear explanations. That process slows down learning and makes it permanent.
Bottom Line: Fast is Fragile. Flexible is Forever.
In a world where machines will outperform humans in speed, efficiency, and recall, our real advantage isn’t fluency. It’s range-informed flexibility.
The best careers aren’t built by those who sprinted to early mastery. They’re built by those who got their hands dirty across contexts and turned struggle into strength.
Chapter 5: Thinking Outside Experience
This is the chapter where David starts to seriously dismantle the idea that expertise comes only from repetition inside a single domain. In fact, he flips that logic completely. Instead of asking, “How deep is your experience?”, he suggests we ask a more powerful question. “Can you think beyond your experience?”
That’s a different kind of intelligence. And it’s exactly the kind that becomes invaluable in tech careers navigating wicked, unpredictable change.
The Firefighter Analogy That Fell Apart
David opens this chapter by revisiting a famous concept from psychology. It’s the idea that experts can make incredibly fast decisions because they have built up thousands of stored patterns in their brain. The classic example is the firefighter who senses that a building is about to collapse, even though no visible clues are present. It’s intuition, born from experience.
This is the gold standard in many fields. In fact, we tend to revere that kind of “instinctive expertise” in senior engineers, senior staff-level ICs, and technical architects. We assume that the more time someone has spent in a specific domain, the more likely they are to know what to do when a crisis hits.
But David then asks a harder question. What happens when the environment changes? What if you’re suddenly facing a different kind of fire?
He brings up the case of the Columbia space shuttle disaster. In this case, the engineers with the most NASA experience were the ones who insisted everything was fine. They had seen foam strikes before, and nothing catastrophic had ever happened. So they treated this one like all the others.
Except, this time, the foam strike did cause fatal damage. And the people who had the hardest time seeing the risk were the ones with the most direct experience. Their deep familiarity blinded them to a new kind of danger.
Experience Can Be a Liability in Machine Learning Too
If you’ve spent years mastering classical machine learning, you’ve likely developed a strong intuition for how to clean data, engineer features, tune hyperparameters, and validate models through careful experimentation. You’ve earned your credibility by building robust, explainable, well-evaluated systems often for narrow business use cases.
Since the advent of LLMs, the terrain has shifted.
Use cases now require building with and deploying LLMs and working with foundation models. You’re no longer training models from scratch: you’re prompting them, fine-tuning them, or chaining their outputs. Feature importance is no longer a topic of discussion in this context. There is no hyperparameter tuning; you are designing and experimenting with prompts. Evaluation is no longer the simple confusion matrix you would use in a classification problem. It’s about user behavior, open-ended quality metrics measured against user expectations and experience, and controlling for all the risks an LLM poses.
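The evaluation shift above can be sketched in a few lines of Python. This is my own toy illustration, not a real evaluation framework: the confusion matrix is the classical, discrete-label world; the rubric scorer stands in for the open-ended, multi-dimensional judging that LLM outputs require. The rubric dimensions and judge functions are entirely hypothetical.

```python
from collections import Counter

# Classical ML evaluation: a confusion matrix over discrete labels.
def confusion_matrix(y_true, y_pred):
    """Count (true_label, predicted_label) pairs."""
    return Counter(zip(y_true, y_pred))

# LLM-era evaluation (simplified, hypothetical): an open-ended response
# is scored on multiple quality dimensions, not marked right or wrong.
def rubric_score(response, rubric):
    """rubric maps a dimension name to a judge function returning 0..1."""
    return {dim: judge(response) for dim, judge in rubric.items()}

# Classical case: every prediction falls into a cell of the matrix.
cm = confusion_matrix(["spam", "ham", "spam"], ["spam", "spam", "spam"])
print(cm[("spam", "spam")])  # 2 correct spam predictions

# LLM case: the "right answer" is a bundle of judged qualities.
scores = rubric_score(
    "The refund was processed; you should see it in 3-5 days.",
    {
        "helpfulness": lambda r: 1.0 if "refund" in r else 0.0,
        "brevity": lambda r: 1.0 if len(r.split()) < 30 else 0.5,
    },
)
print(scores)
```

The point of the contrast: the first function has one unambiguous ground truth per example; the second has to encode judgment, and choosing the judges becomes the real design problem.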
Your traditional ML instincts, grounded in structured model development processes (CRISP-DM / SEMMA), precision, and explainability, can feel like they belong to a different world here. You may try to slow things down, asking where to reintroduce classical rigor, or question whether this is “real” machine learning.
But that instinct, however valid in the past, might be holding you back now.
Because generative AI is a different paradigm, not just a new toolkit. And applying your hard-earned ML patterns to it without updating your lens can lead to misalignment.
This is how experience, even deep technical experience, becomes a liability.
Not because it’s wrong, but because it’s out of context for classically trained statisticians, ML engineers, or data scientists.
So Where Do the Best Ideas Actually Come From?
In the book, David gives the example of scientists who made breakthroughs outside their primary domains. He cites a study that showed researchers who solved complex innovation problems were more likely to do so when their domain expertise was only loosely related to the problem.
They weren’t so close that they relied on familiar patterns. But they weren’t so far that they couldn’t grasp the problem at all.
That sweet spot, known as far transfer, is where innovation lives.
In practical terms, that means the best solutions to your hardest problems might come from someone who doesn’t “specialize” in that exact space. They might work in design. In ops. In legal. Or they might be you, if you’ve taken time to build perspective across disciplines.
Why This Matters in an AI World
As AI becomes embedded in every workflow, the boundaries between disciplines are likely to dissolve, or at least acquire an AI-human integration layer.
An engineer today might need to understand not just how models work, but how people trust them. A product manager might need to reason about ethics, safety, and infrastructure cost. A technical lead might need to navigate product strategy, user privacy, and regulatory compliance. Consider this: the job title of “AI Engineer” was practically non-existent before LLMs became mainstream.
In other words, you don’t just need more experience. You need the right kind of experience. That means varied, cross-disciplinary, and intentionally adjacent.
And it also means you need to practice thinking outside of it. Not just “What do I know that applies here?”, but “What is just outside my experience that could inform this?”
That’s what makes a strategist. That’s what makes someone future-proof.
Practical Strategy: Train for Far Transfer
Read outside your domain
If you’re a data person, read product design case studies. If you’re in backend engineering, explore behavioral psychology. The more frameworks you can draw from, the more novel connections you can make.
Run solution sprints with outsiders
Bring in someone from an unrelated team when solving a tough problem. Not for execution help, but to ask questions that break your usual framing.
Challenge your mental models
Ask yourself, “If I had to explain this to someone in marketing, how would I do it?” This forces you to abstract your knowledge and look for new metaphors.
Adopt a second lens
Pick a second professional identity to cultivate. Are you an engineer who also studies business models? A data analyst who reads philosophy? That second lens gives you clarity no specialist can replicate. Personally, I have spent a good part of a decade adopting a second lens myself. It definitely continues to serve me well.
Bottom Line: Mastery Is a Ceiling. Range Is a Bridge.
In a world where everything is changing from how we build systems to how we trust machines, the person who can think outside their experience will always have an edge over the one who only deepened theirs.
The next leap in your career won’t come from going deeper. It’ll come from seeing sideways, connecting dots no one else noticed, and pulling in lessons from other domains.
Chapter 6: The Trouble With Too Much Grit
We’ve been taught that grit is a defining trait of success. It’s the quality behind “winners never quit and quitters never win.” It’s the force that powers you through code reviews, late-night incident responses (yes, I too had a job two decades ago that involved both of these types of roles), and years of climbing the technical ranks.
But David asks a more uncomfortable question: What if grit keeps you stuck?
In this chapter, David makes an important pivot. He challenges not just how we learn, but how we persist. While grit, the celebrated ability to push through challenges and “never quit,” has become a professional virtue in popular self-development literature, David offers a counterpoint: there is a dark side to sticking to a path too long, especially when the world around you is shifting.
The Escalation of Commitment
David shares the story of elite students who enter prestigious career tracks such as pre-med, law, or finance and feel pressure to persist, even when it becomes obvious the path no longer fits them. These individuals aren’t failing. In fact, they’re often succeeding on paper. But they’re also stuck in what psychologists call the escalation of commitment: the tendency to continue investing in a decision because of the time and effort already sunk into it.
It’s not just that they fear quitting. It’s that they’ve spent so long building an identity around being gritty that switching feels like personal failure. How many people do you know professionally who fit this persona? Anyone?
David shows that many of the most innovative thinkers and leaders aren’t the grittiest in the traditional sense. They’re the ones who know when to walk away from the wrong path, even if it means disappointing others or starting over.
In other words, it’s not just perseverance that matters. It’s strategic quitting. (I also did this in 2007 - I wrote about this in part 1 here if you missed it)
Why AI Changes the Risk Equation
In an AI-transformed future, the lifespan of domain expertise will shrink.
Tools that once took years to learn can now be semi-automated.
Categories of technical work are being co-piloted by, you guessed it, Copilot (GitHub and others), no pun intended. Let’s not yet go into vibe coding; I have a point of view on that I will cover in future posts.
What was once construed as a “career moat” is now white space for a bright-eyed AI startup founder.
If you define grit as loyalty to a specific technical skill, you’re building permanence on sand.
Instead, your resilience needs to come from range: the ability to move, adapt, reframe, and re-engage in new contexts without fear of leaving your sunk costs behind.
Real-World Parallel: Andre Agassi vs. Roger Federer
Though discussed earlier, this is where the Federer analogy from Chapter 1 deepens. Federer didn’t lock himself into tennis at age four. He tried basketball, handball, soccer. He switched late and dominated.
Contrast that with the professional identity crisis Andre Agassi describes in his memoir Open, where early over-specialization left him burned out, struggling, and without a sense of self outside of the sport.
In tech, this happens all the time. People specialize so early and so deeply that when change comes, and it always does, they have no transferable story, no adjacent skills, and no appetite to start over.
Practical Strategy: Redefining Grit in a Dynamic World
Run periodic “career audits”
Ask yourself: “If I were starting from scratch today, would I still choose this path?” If the answer is no, don’t ignore it. You don’t need to quit tomorrow, but start sampling immediately.
De-risk switching by diversifying
Explore adjacent domains before you need them. Take on stretch projects. Join cross-functional initiatives. Build the escape ramps before the current lane closes.
Change your identity from ‘expert’ to ‘explorer’
When people ask what you do, don’t box yourself into a title. Frame yourself as someone who solves complex problems across contexts. This creates space to evolve without shame. My personal philosophy on this: titles don’t define who you are. Letting them is very self-limiting.
Measure progress by range, not just role
Promotions are one form of progress. But so is the ability to navigate uncertainty, influence across domains, and pivot with confidence. Track that as your growth metric.
Bottom Line: Smart Quitters Win
The most resilient tech professionals won’t be the ones who went the deepest. They’ll be the ones who knew when depth became a dead end, and had the courage to move laterally or even diagonally to stay relevant.
Letting go of a skill, title, or path doesn’t mean you gave up. It means you saw the future coming and moved.
Chapter 7: Flirting With Your Possible Selves
In this chapter, David introduces the concept of “possible selves”, a term from psychologist Hazel Markus. It describes the mental versions of ourselves we carry in our heads. The different careers, different lives, or different roles we could become. These are often undefined, rough sketches, not clear goals. But they serve a crucial purpose. They help us try, test, and refine who we really are.
David argues that success doesn’t come from identifying your “passion” early and chasing it relentlessly. Instead, it comes from actively testing different versions of yourself, seeing what fits, and letting experience inform identity, not the other way around.
He draws on the story of Frances Hesselbein, a woman who didn’t step into a leadership role until her 50s. She tried on multiple professional identities across her life: teacher, volunteer, community member, before eventually becoming CEO of the Girl Scouts and one of the most influential voices in leadership. Hesselbein didn’t plan her way into greatness. She sampled her way there.
Why This Matters in Tech
In tech, we are obsessed with clarity. Job ladders. Career paths. Titles. Promotions. Specializations. But clarity can easily harden into rigidity. And before long, the question “What do you do?” becomes a cage, not a launchpad. Again, self-limiting.
Here’s how it often plays out:
Say you start as a marketing analyst. You build performant dashboards, analyze experimentation results, apply attribution models, and know marketing KPIs inside out. You get known for it and rewarded for it. You become “the marketing analytics person.” Then someone asks, “Have you ever thought about moving into marketing strategy, since you understand marketing so well?” Or, “You have a knack for storytelling; why not join marketing comms?”
You respond, “That’s not really my thing.” But … how do you know?
That’s the point of trying on possible selves.
You don’t know which version of your career might fit or where your hidden strengths lie until you give yourself permission to test the edges.
This isn’t about abandoning your data skills.
It’s about using them as a foundation to explore what else you might be good at that is in the next adjacent space.
Because in a world where AI can generate queries and run analyses, it’s not just about who’s the best analyst, it’s about who has the range to connect data to influence, decisions, and strategy.
And you can’t build that kind of range if you never leave the role you’ve been told you’re good at.
The Professional Power of Trying Things On
David’s research shows that people who explore broadly, who take detours, pivots, lateral moves, or even temporary regressions, are more likely to land in careers that match both their strengths and their values.
Why? Because your sense of identity emerges from action, not introspection.
Trying out new domains, functions, or roles gives you feedback loops you can’t get from thinking alone. It tells you how your skills translate. It reveals what kind of work energizes you. It shows you which environments reward your instincts.
And in a world being reshaped by AI, this becomes your survival tool.
What It Looks Like in a Tech Career
Let’s say you’re a data scientist with solid modeling skills. But you’re curious about product strategy. You’ve never done it. You’re not sure if it’s for you. But part of you wonders, “Could I be good at that?”
You don’t need to quit your job or get an MBA.
You could:
Sit in on product roadmap meetings.
Shadow a PM for two sprints.
Take on a project where you translate data insights into feature prioritization.
Run a small experiment where you pitch a data-driven initiative to a cross-functional team.
In doing that, you try on a possible self: the strategic PM, the hybrid lead, the internal consultant. You don’t have to marry that identity. But you get to ask, “Does this feel like me?”
Multiply that process over years, and you’ll have a wide portfolio of tested identities. Which means when the time comes to pivot either because of interest or disruption you’ll know exactly where you fit next.
In an AI-Transformed World, Optionality Is Everything
As AI does more of the repeatable tasks in technical roles, affording yourself the optionality to move between value-creating roles becomes the real “career moat.”
The engineer who can hold product conversations.
The analyst who can evolve into a leader.
The technical IC who can step into strategy without losing credibility.
These are not linear transitions. They’re the result of possibility sampling: of having tested multiple selves in lower-stakes settings, so you’re ready when opportunity arrives.
Practical Strategy: Expand Your Range of Selves
Run career “micro-experiments”
Choose one identity you’re curious about (e.g. technical pre-sales, solution architect, team lead). Set a 90-day goal to explore it lightly. Shadow someone. Take on a side project. Reflect on how it feels.
Track energy, not just skill
When you try new things, don’t just assess whether you’re good at them. Ask: Did this energize me? The best roles lie at the intersection of ability and alignment.
Decouple identity from title
Don’t limit your professional identity to your job description. Try introducing yourself in broader terms. For example, “I help organizations turn messy data into clear decisions.” That gives you flexibility to try new expressions of the same value.
Stay open to unlikely paths
Frances Hesselbein didn’t plan to lead a major organization. Her leadership identity emerged from service. Your best opportunities may not come from plans. They may come from patterns that only make sense in retrospect.
Bottom Line: Test, Don’t Assume
The most successful and adaptive tech professionals aren’t the ones who picked the perfect ladder. They’re the ones who explored enough paths to build a multidimensional map of themselves.
If you feel unclear about where your career is going, that’s not failure. That’s signal. It means you’re still exploring, still testing, still building range.
The next big idea covered in the book is The Outsider Advantage, where we’ll explore how people from outside a domain often make the biggest breakthroughs, and why that matters when you’re thinking about your next professional leap.
Until next time
Vijay
If this helped you reframe your thinking about your career development, please consider sharing it with others. (And feel free to post it to your LinkedIn network.)
More content like this in the future. Stay tuned.



