“Insanity is doing the same thing over and over again and expecting different results.”

— Often attributed to Einstein (actual origin unknown)

Anti-Practices

What Guarantees Failure
These approaches consistently strengthen PIs. Avoid them. Or don’t. But don’t be surprised when they fail.

“More of the Same”

When it doesn’t work, try harder.

Why it fails:
Amplifies the pattern. Effort paradox in action.

Example:
Security measures create new vulnerabilities. Adding more security creates more vulnerabilities. The pattern doesn’t break – it scales.

Alternative:
Break the pattern, don’t amplify it. If the approach strengthens the problem, the problem is the approach. Try something structurally different—even if it feels wrong.

“The Harder You Want It, The Less You Get It”

Desperation broadcasts itself. And repels what it seeks.

Why it fails:
Intense desire changes the structure of interaction. What looks like commitment reads as desperation. Authenticity becomes strategy. The other side senses it—and withdraws. The harder you try, the more you signal need. Need is weakness. Weakness is unattractive.

Example:
Job interview: Desperately need the job. Every answer too eager, too rehearsed, too perfect. Interviewer feels the desperation. “We’ll let you know.” They won’t. Meanwhile, the candidate who’s already employed—relaxed, authentic, indifferent—gets the offer. Not because they’re better. Because they don’t need it.

Alternative:
Want it less. Not fake detachment—actual redirection of attention elsewhere. Make it optional in your mind before approaching. The structure responds to genuine indifference, not performed calm.

“Put Good People in Charge”

Replace the individuals, fix the system.

Why it fails:
Structure devours intention. Good people in bad structures produce bad outcomes.

Example:
Every new CEO promises change. The structure remains. The outcomes remain.

Alternative:
Question why good people keep failing here. The problem isn’t who—it’s what they’re walking into. Map the forces that break them before replacing them.

“Loyalty Over Competence”

Hire from the inner circle. Trust over talent.

Why it fails:
Personal networks override objective criteria. The system rewards connection, not capability. Competence becomes a threat—outsiders bring scrutiny. Mediocrity becomes structural insurance.

Example:
Political appointments: Qualified candidates bypassed for party loyalists. Agencies staffed with ideological allies lacking domain expertise. Disaster response, economic policy, public health—all suffer. But the inner circle stays protected. Until the structure collapses under its own incompetence.

Alternative:
Hire for what they can do, not who they know. Test actual ability. Make the work speak before the relationship does.

“Success as Gatekeeping”

Reached the goal. Now protect the position.

Why it fails:
Those who survived a broken structure don’t fix it—they weaponize it. Success transforms victims into defenders of the system that harmed them. The harder the path, the stronger the gate.

Example:
Tenured professors who survived brutal academic hazing now require the same from PhD candidates. “I suffered, they should too.” The structure that hurt them becomes their moat. Fresh perspectives blocked. Innovation dies.

Alternative:
Recognize intelligence in others. Don’t be the untouchable master of the situation. Creative suggestions often come from unexpected sources. Those who secure position through exclusivity lose it through irrelevance.

“The Devil Shits on the Biggest Pile”

Or: to those who have, more will be given. The Matthew Effect.

Why it fails:
Initial inequality creates structural advantage. Position attracts resources. Resources strengthen position. Gap widens systematically. All sides act rationally—investors back proven winners, institutions choose established names, opportunities flow to those with access. The structure selects for accumulation, not merit.

Example:
Academia: First publication in top journal → citations → funding → resources → more top publications. No first publication → downward spiral, regardless of quality. Venture capital: First funding → hire → ship → users → next round. No funding → dies, regardless of product. Recognition Authority Paradox: Can’t validate your own competence. Need recognition from those with position—who often have position precisely because they can’t recognize superior competence.
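
The loop is simple enough to simulate. A minimal toy sketch, not part of the framework: three actors start almost equal, and each round’s new resource lands on an actor with probability proportional to what that actor already holds. Seed, weights, and round count are arbitrary illustrations.

```python
import random

random.seed(42)  # reproducible illustration

# Three actors with near-identical starting positions (~1% apart).
actors = [1.00, 1.01, 1.02]

for _ in range(50):
    # Position attracts resources: each new unit goes to actor i
    # with probability proportional to actors[i].
    winner = random.choices(range(len(actors)), weights=actors)[0]
    actors[winner] += 1.0

print([round(a, 2) for a in actors])
# Typically one actor ends up with most of the pile. The ~1% initial
# gap decides little; the accumulation loop decides almost everything.
```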

Alternative:
Call out accumulation when you see it. Name the luck. Question why position keeps attracting resources. Won’t stop it—but kills the “merit” myth.

“Great Idea—Could’ve Been Mine”

Appropriate others’ ideas. Elevate yourself. Diminish them.

Why it fails:
Power determines credit, not contribution. Those in stronger positions claim ideas from weaker ones—not through theft, but through retroactive ownership. “I would’ve thought of that” becomes “So it’s essentially mine.” Structural position overrides intellectual origin.

Example:
Junior developer proposes architecture solution. Senior says: “Great idea—I was thinking the same thing.” Then presents it to management as their own. Junior gets no credit, no advancement. Senior strengthens position. Next time, junior stays quiet. Innovation dies where power claims authorship.

Alternative:
Document attribution transparently. Time-stamp contributions. Make intellectual theft visible and costly. The structure exploits opacity—eliminate it.
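
One way to make opacity expensive: a time-stamped, hash-chained log of who proposed what. A minimal sketch; the record fields and the chaining scheme are illustrative assumptions, not a tool the text prescribes.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_contribution(author: str, idea: str, prev_hash: str = "") -> dict:
    """Create a tamper-evident record of a contribution (illustrative)."""
    record = {
        "author": author,
        "idea": idea,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev": prev_hash,  # chain to the previous record's hash
    }
    # Hash the whole record: retroactively changing author or date
    # breaks the chain, so "retroactive ownership" becomes visible.
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    return record

entry = log_contribution("junior_dev", "Proposed the event-driven architecture")
print(entry["timestamp"], entry["hash"][:12])
```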

“Promise Careers, Deliver Precarity”

Lure talent with opportunity. Structure guarantees failure for most.

Why it fails:
The system requires temporary exploitation to function. Maximum contract duration forces even successful researchers out. Those who survive become defenders—or leave. Structure perpetuates through legislated precarity.

Example:
#IchBinHanna—German academic protest against WissZeitVG law. Max 12 years of temporary contracts (6 before, 6 after PhD), then forced exit regardless of performance. Government claimed “innovation through turnover.” Reality: systematic precarization, brain drain, lost expertise. Over 134,000 researchers protested. Law remains. Structure wins.

Alternative:
Tell the truth about the odds. No false hope. The structure works through deception—break that. Most won’t make it. Say so upfront.

“See Problem → Solve Problem”

Whoever spots it must fix it. Right now.

Why it fails:
Diagnosis isn’t therapy. The reflex to demand instant solutions prevents understanding the structure. Pressure to deliver produces fake fixes that strengthen the pattern.

Example:
Engineer spots security flaw. “Great, you found it—now fix it by Friday.” No time to map dependencies, understand attack vectors, assess systemic implications. The patch creates three new vulnerabilities. Pattern repeats.

Alternative:
Separate diagnosis from action. Give space to understand the structure before demanding fixes. Pressure to solve prevents understanding. Understanding creates navigation options.

“Transparency Creates Trust”

More information solves everything.

Why it fails:
In PIs, transparency often creates more problems. Information paradoxes are real.

Example:
Publishing AI capabilities: Inform the public or accelerate the arms race? Both. Simultaneously.

Alternative:
Selective transparency based on context. Information can harm. Sometimes opacity protects. Map the structure first—then decide what to reveal.

“Win-Win Solutions”

Everyone can get what they want.

Why it fails:
Don’t exist in asymmetric structures. Someone always pays.

Example:
“Work-life balance” in winner-takes-all markets. The structure doesn’t allow it. Pretending otherwise is cruel.

Alternative:
Name the asymmetry. Show who pays. Pretending everyone wins creates resentment when reality hits. Honest trade-offs beat false promises.

“Let’s Just Talk”

Communication solves structural problems.

Why it fails:
Talk doesn’t override incentives. Structure beats dialogue.

Example:
Board meetings about “toxic culture.” Everyone agrees it’s bad. Everyone contributes to it the next day. Because the promotion structure rewards the toxic behavior.

Alternative:
Don’t talk about culture—change what gets rewarded. Words without aligned incentives are theater. Fix what people actually gain or lose, then talk.

“If We All Just …”

Appeal to collective action without structural change.

Why it fails:
Coordination requires aligned incentives. PIs create misaligned incentives. Collective will doesn’t override structure.

Example:
Climate negotiations: Everyone agrees on the problem. No one changes behavior. Because the structure doesn’t reward it—and punishes those who move first.

Alternative:
Do what you can do alone. Don’t wait for collective movement. Most coordination fantasies are excuses for inaction.

“Somebody Else’s Problem”

SEP field. Douglas Adams knew.

Why it fails:
The more people who could act, the less each individual feels responsible. Everyone assumes someone more qualified, better positioned, more responsible will handle it. Every line of reasoning is locally rational: “Why me? I’m not the expert.” Collectively: the problem becomes invisible through distributed non-responsibility. Not because people can’t see it. Because the brain edits it out as “not mine.”

Example:
Climate crisis: 8 billion people, everyone sees it, everyone waits for governments/corporations/scientists/someone to fix it. “My action won’t matter anyway”—individually rational. Collectively: planetary-scale bystander effect. The bigger the problem, the more diffuse the responsibility, the less anyone acts.
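
The dilution is plain arithmetic. A toy model, assuming (purely for illustration) that each individual’s felt responsibility shrinks slightly faster than the crowd grows; the constant 0.9 and the exponent 1.2 are invented, not measured.

```python
def p_anyone_acts(n: int, c: float = 0.9) -> float:
    # Assumed dilution: individual probability of acting falls a bit
    # faster than 1/n as the number of potential helpers grows.
    p_individual = c / n**1.2
    # Probability that at least one of n independent bystanders acts.
    return 1 - (1 - p_individual) ** n

for n in [1, 10, 100, 1_000_000]:
    print(f"{n:>9} bystanders -> P(anyone acts) = {p_anyone_acts(n):.3f}")
# Output falls as n rises: more potential helpers, less help.
# The planetary-scale bystander effect in a few lines of arithmetic.
```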

Alternative:
Make responsibility explicit. Who owns this? If nobody, say so. Diffuse responsibility guarantees inaction. Name it, assign it, or acknowledge it won’t happen.​

“The Right Tool/Process/Framework Will Fix This”

Better methodology solves everything.

Why it fails:
Tools don’t override structure. Process doesn’t beat incentives.

Example:
Agile, Scrum, OKRs, Design Thinking – all defeated by the same structural forces that defeated their predecessors.

Alternative:
Match tools to structure, not hopes. The tool that worked elsewhere operates in a different context. Understand your structure first. Then choose tools—if any exist.

“This Time Will Be Different”

We learned from past mistakes.

Why it fails:
If the structure hasn’t changed, neither will the outcome.

Example:
Financial crises. Every time: “We’ve learned.” Every time: Same structure, same outcome.

Alternative:
Changed the structure? Then maybe different. Same structure? Same outcome. Hope doesn’t override mechanics.

“Never Change a Winning Team”

Success proves the approach. Keep doing what works.

Why it fails:
Success blinds to structural shifts. By the time you notice you’re losing, the winning structure is your cage.

Example:
Nokia dominated mobile phones. “Why change? We’re winning.” By the time they noticed the smartphone shift, the structure optimized for feature phones couldn’t pivot. Success had locked them in.

Alternative:
Use winning as space to ask hard questions. What’s changing outside that we can’t see from inside? Success blinds—audit while you still can.

“What You Don’t Know Can’t Hurt You”

Ignore what you can’t see. Focus on what you control.

Why it fails:
Unknown risks don’t disappear because you ignore them. Structural problems escalate in the dark. By the time they surface, it’s too late to navigate.

Example:
Subprime mortgages pre-2008: “The models work, returns are great—why dig deeper?” No one bothered to understand the underlying structure. Until it collapsed. Ignorance didn’t protect anyone.

Alternative:
Actively seek blind spots. The structure hides what threatens it. Not knowing doesn’t prevent harm—it prevents navigation.

“Just Optimize”

Make the system more efficient.

Why it fails:
Optimization within a flawed structure perfects the flaw.

Example:
Optimizing surveillance capitalism makes surveillance more efficient. The structure remains surveillance.

Alternative:
Ask what you’re optimizing before making it more efficient. Perfecting a bad system makes it worse, not better.

“Buridan’s Donkey”

Inform everyone about everything. More information, better decisions.

Why it fails:
Information overload creates paralysis, not clarity. Every addition meant to help makes the choice harder. People stop reading. Stop deciding. The flood of data becomes noise. Rational actors drown in rationality.

Example:
Company rolls out “radical transparency”: Every meeting recorded, every decision documented, every update shared company-wide. Inbox explodes. Slack channels multiply. Everyone knows everything—which means nobody knows what matters. Decisions slow. People tune out. The donkey starves between infinite haystacks.

Alternative:
Filter ruthlessly. Inform only those who need to act. Distinguish signal from noise before broadcasting. Most information isn’t actionable—don’t pretend it is. Clarity beats completeness.
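
The rule “inform only those who need to act” is mechanical enough to write down. A minimal sketch; the Message shape and the routing rule are hypothetical illustrations, not a prescribed system.

```python
from dataclasses import dataclass

@dataclass
class Message:
    topic: str
    action_required_from: set[str]  # roles that must actually do something

def recipients(msg: Message, all_roles: set[str]) -> set[str]:
    # Deliberately blunt filter: no broadcast by default.
    # A message reaches only the roles that owe an action on it.
    return all_roles & msg.action_required_from

roles = {"engineering", "legal", "sales", "hr"}
msg = Message("Q3 vendor contract renewal", {"legal"})
print(recipients(msg, roles))  # {'legal'} -- everyone else keeps their attention
```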

“We Need Better Awareness”

If people just understood…

Why it fails:
Understanding doesn’t override structural incentives. Everyone understands climate change. Structure doesn’t reward action.

Example:
Tobacco, sugar, fossil fuels – awareness doesn’t stop structural momentum.

Alternative:
Awareness without changed incentives is guilt without action. Don’t educate—change what the structure rewards. Then watch behavior shift.

“Too Much Good Will”

Help so hard it hurts. Solve their problem better than they asked.

Why it fails:
Over-delivering creates dependency, not gratitude. The helper becomes responsible for outcomes they can’t control. The helped feels incompetent—or manipulated. Generosity without boundaries erodes both positions. Good will becomes a structural trap.

Example:
AI assistant asked for simple list. Rewrites entire document, creates elaborate systems, delivers far beyond request. User wanted quick answer, got homework. Next time: asks someone else. Or: colleague “helps” by taking over tasks completely. Original owner loses competence, helper gains burden. Neither benefits. Pattern locks.

Alternative:
Give what’s asked. Nothing more. Let them ask again if they need more. Respect their agency. Your job isn’t to optimize their life—it’s to respond to their actual request. Boundaries protect both sides.

“Reframe Failure as Success”

Reality contradicts the narrative. Change the narrative, not reality.

Why it fails:
Cognitive dissonance is resolved through redefinition, not correction. The problem persists. The story changes. Structure protected.

Example:
“We connect people.” Reality: algorithmically amplified radicalization, misinformation, democratic erosion. Reframe as “engagement,” “free speech,” “platform neutrality.” The harm continues. The narrative shifts. Accountability avoided.

Alternative:
Facts over narratives. The story doesn’t change the outcome. Honest assessment over comfortable lies. The structure doesn’t care about spin.

“Shoot the Messenger”

Problem identified. Blame the identifier, not the structure.

Why it fails:
Isolating truth-tellers preserves the problem. The structure protects itself by punishing recognition. The next messenger learns: stay silent or get exiled.

Example:
Whistleblower reports safety violations. Management fires them for “disloyalty.” Problem remains. Other employees notice. Silence spreads. Next disaster surprises no one—except publicly.

Alternative:
Make truth-telling safe or admit it’s not welcome. Anonymous channels. External oversight. Whistleblower protection that actually works—or honest silence about what you’re choosing.

“Procrustes’ Bed”

Fit it to what you know.

Why it fails:
New ideas threaten existing frameworks. Easier to reduce than to adapt. “That’s just X” protects position, eliminates uncertainty, preserves status. The idea gets trimmed, stretched, broken until it fits the familiar model. Originality dies. The old framework survives. Not malice—structural self-protection.

Example:
Academic encounters PI framework. “That’s just Bateson’s double bind.” Reduction complete. No engagement with what’s actually new—the dynamization, the structural inevitability, the navigation principles. Bateson referenced, position secured, learning avoided. Or business: every innovation becomes “basically Uber for X.” Complexity reduced to template. Understanding bypassed. Framework protected.

Alternative:
Engage with what’s actually new before reducing it. “That’s just X” protects position, kills learning. Try understanding before categorizing.

“When Explanation Becomes Quicksand”

The more you explain, the deeper you sink.

Why it fails:
Structural problems can’t be explained to those who refuse the foundation. Each attempt to clarify makes you sound more desperate, less credible. The interaction itself becomes the pattern you’re trying to describe.

Example:
Trying to explain PI to someone who won’t read the framework. They demand you explain. You try. It sounds absurd without context. They dismiss it. You try harder. Sound more desperate. The dynamic proves the point—but they can’t see it. You can’t make them see it. The only navigation: Stop trying.

Alternative:
Stop explaining to those who won’t read. The pattern proves itself—you can’t make them see it. Set markers. Move on. They’ll come when the structure forces them.

“Chase the Audience”

Pursue attention directly. Demand engagement.

Why it fails:
Direct pursuit triggers avoidance. The structure of attention inverts the approach. Harder you push, faster they retreat.

Example:
Marketing campaigns that scream “Listen to us!” create resistance. Academic papers that demand citation. Political movements that guilt-trip participation.

Alternative:
Set markers. Let them encounter you when the structure forces them to seek explanations.

“Equilibrium of Incompetence”

Mutual mediocrity stabilizes the system. Competence threatens the balance.

Why it fails:
The structure filters for incompetence. Those who understand too much get excluded. Those who question too hard get isolated. What remains: a self-stabilizing system where nobody has incentive to change, because everyone benefits from the dysfunction.

Example:
Bureaucratic organizations: Each department incompetent in different ways, but the gaps align. Procurement doesn’t understand IT. IT doesn’t understand compliance. Compliance doesn’t understand operations. Together they produce paralysis. Anyone competent across domains becomes a threat—sees the whole pattern, demands change. Gets sidelined. The equilibrium restores itself.

Alternative:
Import competence from outside the equilibrium. The structure filters against understanding—bypass the filter. External expertise isn’t captured yet.

“Just Fund It – Sunk Cost Fallacy”

Throw money at the problem. More resources, better outcomes.

Why it fails:
Resources don’t fill gaps—they expand the structure. Every euro allocated creates three euros of future claims. Programs generate constituencies demanding expansion. Temporary measures become permanent infrastructure. The original problem persists. The structure metastasizes through funding.

Nobody plans this. It emerges from rational local decisions. Each department securing their share. Each project protecting its position. Collectively: mission creep, budget explosion, purpose drift.

Example 1: Defense Special Fund
Germany allocates €100 billion for military modernization. Earmarked. Ring-fenced. Until coalition partners need projects. Infrastructure needs filling. Climate commitments need funding. Each claim individually rational. Collectively inevitable. Defense gaps remain. Next fund needed. Pattern repeats.

Example 2: Sovereign Debt Trap
State borrows to solve crisis. Creditors want repayment. Debtor must stay creditworthy. Requires more borrowing. Exit impossible without creditor losses. Neither side can stop. The funding meant to solve the problem becomes the problem.

The brutal arithmetic: More funding → more claims → more dependencies → legitimacy crisis → more funding to restore legitimacy → acceleration until collapse.
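
The loop is easy to caricature in code. A toy sketch: the 3x multiplier paraphrases “every euro allocated creates three euros of future claims” from above; the 40% payout share and the five-year horizon are invented for illustration.

```python
funding, open_claims = 100.0, 0.0  # e.g. a EUR 100bn special fund

for year in range(1, 6):
    # Each euro spent settles one euro of claims but spawns three
    # new ones (the 3x from the text): net +2 euros of open claims.
    open_claims += 2 * funding
    # Political pressure then funds a share of whatever is open.
    funding = 0.4 * open_claims
    print(f"year {year}: funding {funding:8.1f}, open claims {open_claims:8.1f}")
# Both grow ~1.8x per year and the gap never closes: the fund meant
# to solve the problem becomes the problem's growth engine.
```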

Alternative:
Tie funding to structural reform. Money without changed incentives feeds the pattern. Make the money conditional—or watch it disappear into the structure.
