
Sunday, March 1, 2026

The Silicon Valley Schism: A Strategist’s Guide to the Ideology and Power Behind the AI Boom

🕰️ The Evolution of Tech Culture

    • The Counterculture Era: Early computing was defined by a DIY, anti-establishment ethos and the Whole Earth Catalog 🌍.

    • The Dot-Com Boom: Driven by profit-motivated optimism and the "abundance" of the microchip, ending in the greed-fueled crash of 2000 📉.

    • The Social Media Era: Defined by "nerds in hoodies," zero-interest venture capital, and the rise of giants like Meta and Uber 📱.



🤖 The Current AI Vibe

    • Gold Rush 2.0: San Francisco is "back," with 25-year-olds making millions and massive investment rounds fueling an exuberant, "weird" local culture 💰.

    • The "Jagged Frontier": AI is a "secret third thing"—capable of solving complex protein folding but occasionally failing at simple tasks like counting letters in "strawberry" 🍓.

    • Religious Devotion: Many builders feel they aren't just coding software, but are effectively "building God" or an alien super-intelligence 👼.

⚔️ The Great AI Schism

    • The Doomers: Led by figures like Eliezer Yudkowsky, they fear AI is an existential threat that could accidentally "kill us all" if not strictly regulated ☣️.

    • The Accelerationists (e/acc): They want to "let it rip," believing AI will usher in infinite prosperity and that slowing down is a dangerous mistake 🚀.

    • The Shift: Focus is moving from "apocalyptic extinction" toward more immediate concerns like job loss and economic disruption 🛠️.

⚖️ The Political Rightward Shift

    • Anti-Regulation: Silicon Valley leaders are moving toward the Right in reaction to aggressive antitrust actions and crypto scrutiny 🏛️.

    • The "Woke" Backlash: A rejection of employee activism and affirmative action has pushed tech titans toward a more libertarian, "leave us alone" political stance 🐘.

    • Transactional Politics: Some CEOs are backing Donald Trump as a logical calculation, favoring a president who prioritizes personal relationships and deregulation over rigid policy 🤝.


Building God in a Gold Rush: A Strategist’s Guide to the AI Cultural Schism

In the corporate landscape of 2026, AI is no longer a speculative line item; it is the atmospheric pressure under which every business operates. Yet, a critical strategic error persists: treating AI as a mere continuation of the "SaaS" (Software as a Service) era. As Charlie Warzel and Jasmine Sun illuminate, we are not just witnessing a technological update. We are living through a "fits and starts" revolution that is as much a religious and political movement as it is a digital one [1].

For the modern strategist, the "vibe" of the room where AI is built determines the direction of the global market. Whether we are approaching an era of "superintelligence" or navigating a messy, incremental integration, understanding the culture of the builders is as vital as understanding their code.

I. The Ghost in the Machine: A History of Tech Vibes

Silicon Valley moves in distinct cultural cycles. To understand the present "Gold Rush," leadership must recognize the residue left by previous eras:

  1. The Countercultural Era (1960s-70s): Influenced by Stewart Brand’s Whole Earth Catalog, this moment was defined by a DIY ethos and the search for "exemplary communities" to inspire a corrupt mainstream [2].

  2. The Dot-Com Optimism (1990s-2000s): A profit-driven era fueled by the novelty of the commercial internet. The mantra was "more"—driven by Moore’s Law and the belief that abundance would replace scarcity [1].

  3. The Social Media/ZIRP Era (2010s): Defined by "nerds in hoodies" and zero-interest rates (ZIRP). It was an earnest, "brosy" culture that eventually rewired the planet’s social fabric through giants like Meta [1].

The AI Era (Today): This era is characterized by an "unbelievable amount of hype" that borders on the delusional, but is underpinned by a very real, almost religious devotion. Founders aren't just building apps; many believe they are "building God."

II. The Great Schism: Doomers vs. Accelerationists

For a strategist, the most visible manifestation of this new culture is the ideological war between two factions: the AI Doomers and the Effective Accelerationists (e/acc).

The Doomers (Safety and Deceleration)

Led by figures like Eliezer Yudkowsky, this group views AI as a potential existential threat. Their logic is binary: if we build a superhuman intelligence that isn't perfectly "aligned" with human values, it will eventually view humanity as an obstacle—or simply as ants in the way of its goals [3].

  • Strategic Impact: This group has moved from the fringes to the halls of power, influencing regulatory frameworks like the 2024 AI Executive Orders. Organizations aligned with this view prioritize safety, transparency, and "slowing down" to avoid catastrophic "X-risk" (extinction risk).

The Accelerationists (e/acc)

In response, the "e/acc" movement—backed by venture capital titans like Marc Andreessen—has emerged. They view "Doomerism" as a form of "woke" pessimism that threatens progress. Their goal is to "let it rip," believing that market competition and rapid development will solve humanity’s greatest problems [1][5].

  • Strategic Impact: This group drives the "hiring arms race" and aggressive capital deployment. Their strategy is one of speed and dominance, often dismissing regulatory concerns as "onerous" or "anti-competitive."


| Feature | AI Doomers (Safety/Cautionists) | AI Accelerationists (e/acc / Optimists) |
| --- | --- | --- |
| Core Philosophy | AI is an existential risk (x-risk) that could end humanity. | AI is a liberation force that will solve all human suffering. |
| Primary Metric | P(doom): the high probability that AI will "zonk" or eliminate us. | Prosperity: the belief in an era of infinite economic and scientific abundance. |
| View of AGI/ASI | Seen as a "superintelligence" that may view humans as an obstacle or redundant. | Seen as a benevolent collaborator and the "last invention" humans ever need to make. |
| Key Concerns | Misalignment of goals; AI pursuing its own objectives at our expense. | Stagnation; the "risk" of slowing down and losing the benefits of a cure for cancer or hunger. |
| Stance on Regulation | Advocate for pauses, safety mandates, and strict government oversight. | Advocate for speed, open innovation, and removal of regulatory barriers. |
| Final Outcome | Human extinction or permanent loss of control over our destiny. | A "post-scarcity" society where humans are freed from drudgery and toil. |

III. Navigating the "Jagged Frontier"

While the Doomers and Accelerationists debate the end of the world, the reality for most organizations exists in the "Jagged Frontier."

As Jasmine Sun points out, AI can discover wholly new proteins before it can count the "R"s in the word "strawberry." This makes AI a "secret third thing"—it is neither vaporware nor a demigod. It is a tool with superhuman capabilities in some areas and baffling, "2nd-grade" failures in others [1][4].

Strategist’s Note: Do not wait for "AGI" (Artificial General Intelligence) to arrive as a single, unified moment. AI integration is incremental. The "jaggedness" means that your organization might see 100x gains in coding while seeing 0% gain (or even regression) in tasks requiring basic arithmetic or social nuance.

IV. The Political Realignment: The Rise of the Tech Right

One of the most significant shifts for leadership to track is the political migration of Silicon Valley. A segment of the "tech boss" class is moving sharply to the Right. This is not necessarily due to a shift in social values, but a reaction to specific pressures:

  1. Regulatory Overreach: Increased FTC scrutiny of antitrust and crypto has alienated founders who had previously been left largely alone.

  2. The Anti-Woke Backlash: A rejection of employee activism and "woke" corporate culture in favor of "Founder Mode" and absolute executive control [5].

  3. Transactional Governance: A preference for leaders who prioritize deregulation and direct relationships over rigid institutional policy [1].

V. Conclusion: Beyond the Hype

We are currently "boiling the frog." The changes AI brings are surreal, but they are happening just slowly enough that we risk missing the gravity of the shift. The future won't be a clean, post-apocalyptic movie; it will be a messy integration of "slop," superhuman breakthroughs, and human frustration.

The most successful leaders will be those who can look past the binary of "God vs. Garbage." AI is a tool of jagged super-intelligence that requires a new kind of anthropological curiosity to master.

Call to Action for Leadership

  1. Map Your Jagged Frontier: Conduct an internal audit to identify where AI is currently "superhuman" and where it is "failing 2nd-grade math."

  2. Define Your Cultural Posture: Are you an "Accelerationist" organization that values speed, or a "Safety-First" entity? Your strategy must reflect this choice.

  3. Invest in "Anthropologists of Disruption": Hire thinkers who understand the culture of tech, not just the code.

  4. Adopt a "Fits and Starts" Roadmap: Move away from linear plans. Be prepared to pivot as technology rewires the society around you.

The future may be coded in the Bay Area, but it will be defined by those who can translate its 'jagged' brilliance into a reality that still has room for the rest of us.




Bibliography & References

  1. Warzel, C. & Sun, J. (2026). What Do the People Building AI Believe? Galaxy Brain, The Atlantic. [Source Transcript].

  2. Brand, S. (1968). Whole Earth Catalog. Portola Institute.

  3. Yudkowsky, E. (2024). If Anyone Builds It, Everyone Dies. (As discussed in Sun, 2026).

  4. Mollick, E. (2023). Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality. Harvard Business School Working Paper.

  5. Aschenbrenner, L. (2024). Situational Awareness: The Decade Ahead. [Manifesto].

  6. Sun, J. (2025-2026). Jasmine’s Substack: An Anthropology of Disruption. jasmi.news.


Learning more: 

The Atlantic, Charlie Warzel interviews Jasmine Sun: What Do the People Building AI Believe? (26 Feb. 2026). https://www.youtube.com/watch?v=G7-gNp8GAHU

Eliot, L. (2025, February 19). AI Doomers Versus AI Accelerationists Locked In Battle For Future Of Humanity. Forbes. https://www.forbes.com/sites/lanceeliot/2025/02/18/ai-doomers-versus-ai-accelerationists-locked-in-battle-for-future-of-humanity/

Navigating the Jagged Frontier of Generative AI. A video in which Ethan Mollick discusses the "jagged frontier" concept, essential for understanding how AI capabilities evolve in unpredictable peaks and valleys.


Thursday, February 12, 2026

The Four-Year Miracle: How Venice Rewrote Geography

Let's look at the modern equivalent of the 15th century project: MOSE (Modulo Sperimentale Elettromeccanico), the protective dam system for Venice. 

When the project was officially greenlit in 1984, the initial budget was approximately €1.6 billion to €3.4 billion (estimates vary depending on whether they include auxiliary lagoon works). At one point in the early 2000s, the figure was pegged at roughly €4.2 billion. Construction started in 2003 and it took 17 years to complete.

As of its first operational test in 2020, 36 years after it was officially approved, the cost had soared to approximately €6.2 billion. On top of this comes an €80 million annual maintenance budget. If you include the wider lagoon protection works and the additional funding required to finish technical fine-tuning, the total bill is estimated at nearly €8 billion.

This represents a cost overrun of more than 200% from the original quotes, driven by delays, technical adjustments, and the widespread corruption scandal uncovered in 2014. 

What is wrong with our institutions today that they cannot deliver any major infrastructure work efficiently? The problem is even more staggering when you realize that these are traditional infrastructure works involving mostly well-known technology, some of which has been in use since Egyptian or Roman times.

In our own time, the MOSE took over three decades to complete while being plagued by corruption, and the bridge over the Strait of Messina remains a projection for a project that has not even begun. If we struggle this much with basic physical infrastructure, how will we manage the energy provision and massive data centers required for the AI systems that will soon power our entire production chain? And beyond the infrastructure itself, how will we implement the necessary consequential reforms in our education and social systems?

History offers a humbling contrast. At the turn of the 17th century, the Venetian Republic—the Serenissima—faced a geographical crisis and solved it with a colossal engineering feat that took only four years to build. The project reshaped a large part of the Po delta and thwarted the Duke of Ferrara's plan to create a new city there. Better still, the canal needed only minimal maintenance.


May 5, 1600: The Porto Viro Cut. How the Serenissima Changed the Course of the Po.

Article by Marco Fornaro. 

Fornaro, M. (2025, May 5). 5 maggio 1600: il Taglio di Porto Viro. Così la Serenissima cambiò il corso del Po. Serenissima News. https://www.serenissima.news/5-maggio-1600-il-taglio-di-porto-viro/ (Translated to English by Gemini 3.0)
 

The Taglio di Porto Viro (the Porto Viro Cut) was a monumental hydraulic project executed between 1600 and 1604 that fundamentally altered the course of the Po River. It wasn't just a construction project; it was a desperate act of survival for the Republic.



A Crisis of Silt and Sea

  • The Threat: The Venetian Lagoon was suffering from progressive silting, which threatened the very existence of Venice and Chioggia.

  • Instability: The Po’s branches were moving unpredictably, creating sandbanks and debris deposits that choked the delicate balance of the waters.

  • The Solution: Venice decided to artificially divert the river away from the lagoon to provide permanent protection.


Visionaries and Diplomats

  • The Plan: In 1563, landowner Marino Silvestri proposed the initial hydraulic plan.

  • The Voice: The blind poet Luigi Groto became the project's moral champion, delivering a famous oration to the Doge in 1569.

  • The Conflict: The project sparked an international row with the Papal States. Pope Clement VIII feared the new river course would damage Church lands and openly threatened Venice with counter-measures and local unrest.



1,000 Men and 7 Kilometers of Change

The work officially began on May 5, 1600. The Republic moved with a decisiveness that puts modern bureaucracy to shame:

  • Labor: Over a thousand workers were employed to dig through dunes and dam marshes.

  • Hardship: Laborers faced disease, food shortages, and even active sabotage from those opposing land expropriations.

  • Completion: Under the leadership of Provveditore Alvise Zorzi, the water was successfully diverted into the new seven-kilometer artificial channel on September 16, 1604.

A Legacy of Mastery

This project was more than just digging a ditch; it was a masterpiece of pre-modern coordination. The "Magistratura alle Acque" managed the project in total secrecy to bypass Papal interference, securing funds from both the state and private beneficiaries. The result was the birth of the modern Po Delta and a definitive legal victory for Venice over its territory.






Call to Action: Reclaiming Venetian Resolve

We live in an age of endless discussions, and three-decade timelines for major infrastructure projects, yet 400 years ago, our ancestors reshaped a continent's greatest river in forty-eight months.

It is time to demand the same efficiency from our modern institutions, whether we are building the energy grid of the future and the data centers for the AI revolution, or implementing the necessary reforms in our education and training systems. We must stop treating infrastructure as a generational burden, an endless parade of corruption and incompetence that is somehow perceived as inevitable. We must start treating necessary infrastructure projects as a strategic necessity. Let's stop debating visions and forecasts around infrastructure, and start moving the earth.

Note on corruption around the MOSE project

The corruption scandal surrounding the MOSE (Modulo Sperimentale Elettromeccanico) flood protection system in Venice was one of the largest in modern Italian history. In June 2014, 35 people were arrested, including the Mayor of Venice (Giorgio Orsoni), the former Governor of the Veneto region (Giancarlo Galan), and several high-ranking politicians and businessmen. More than 100 individuals were placed under official investigation for bribery, money laundering, and illegal financing of political parties.

While the number of initial investigations was extensive, the final tally of "convictions" is often categorized into those who accepted plea bargains (patteggiamenti) and those who were sentenced at trial. Of the 8 prominent defendants in the main trial, 4 were convicted and the others were acquitted or saw their charges expire. Giancarlo Galan, the former regional governor, plea-bargained for 2 years and 10 months and was ordered to pay a fine of €2.6 million. The former Mayor of Venice, Giorgio Orsoni, was acquitted of the most serious charges, and other charges were dismissed because the statute of limitations (prescrizione) had expired—a common outcome in Italian corruption trials.

Magistrates estimated that approximately €25 million to €100 million in public funds were diverted into slush funds used to bribe officials and fund political campaigns. While roughly two dozen people faced formal criminal penalties (mostly through plea deals), many critics point out that the statute of limitations allowed several high-profile figures to avoid prison time entirely.

Sunday, February 8, 2026

The Future of Work Has Changed—Is Your Education Ready?

Imagine telling your computer, "Prepare my presentation using last week's sales data," and then walking away to make coffee. When you return, a complete 24-slide presentation is waiting for you. The AI found the data, organized it, and built the whole thing—without any step-by-step instructions.

This isn't science fiction. It's happening right now (Schram, 2026b).

Welcome to the "Jarvis moment"—named after the AI assistant from Iron Man. For years, AI was like a smart librarian: you asked questions, it gave answers. Now, AI can think ahead, remember past conversations, and complete complex tasks on its own. It doesn't just respond anymore. It acts (Schram, 2026b).

This is exciting. But it's also creating some serious challenges for anyone starting their career.


The Disappearing First Job

Here's the problem: beginner jobs are vanishing.

Think about how careers used to work. You graduate, get an entry-level job, and do simple tasks—collecting data, writing basic reports, scheduling meetings. It's not glamorous, but you learn how things work. After a few years, you move up.

That ladder is breaking.

One consulting firm used to hire 12 fresh graduates every year. This year? Just 3. The reason is simple: tasks that took a junior employee two days now take a senior employee 45 minutes—with AI help (Schram, 2026a).

Law firms tell the same story. Young lawyers used to spend days researching old court cases. Now AI does it in 20 minutes—and often catches things humans miss (Schram, 2026a).

So here's the uncomfortable question: if AI handles all the beginner work, how do people gain the experience needed for senior roles?


What AI Can't Do (Yet)

The good news? AI isn't good at everything. Five skills will matter more than ever (Schram, 2026a):

Working with AI, not against it. The winners won't fight AI—they'll use it as a powerful partner, knowing when to trust it and when to question it.

Handling messy problems. AI loves clear rules. Real life is messy. Humans are still better at figuring out what the actual problem is before solving it.

Connecting different fields. Someone who understands both technology and psychology can solve problems that specialists can't. As AI handles narrow tasks, broad thinkers become more valuable.

Building real relationships. Teamwork, trust, and understanding emotions—these remain deeply human skills.

Never stopping learning. What you learn today might be outdated in five years. The ability to keep learning is your most durable advantage.


The Risks Nobody Talks About

AI in schools isn't all positive. There are real dangers (Schram, 2025).

When students let AI write their homework, they skip the thinking process, and learning happens in the struggle. Research shows that 32% of students are ready to use AI to complete their assignments. That's a problem.

AI can also be unfair. Some exam-monitoring tools work less accurately for students with darker skin. And schools collecting student data—grades, behavior, even mental health information—create targets for hackers. In 2025, a data breach in Vancouver exposed thousands of private student documents.

The new "agentic" AI systems create even bigger security risks. When AI can access your files, emails, and browsing history, there are more ways for things to go wrong (Schram, 2026b).


Rules Are Coming—For Everyone

Governments are paying attention. The European Union's AI Act (2025) now bans certain AI uses in schools—like systems that try to read students' emotions. Other AI tools require careful checking before schools can use them (Schram, 2025).

Here's the interesting part: even if you don't live in Europe, these rules will probably affect you. It's called the "Brussels Effect." Companies want to sell products in Europe, so they follow EU rules everywhere. European standards often become global standards.

Schools using AI for admissions or grading are now classified as "high-risk" and must prove their systems are fair (Schram, 2025).


What Needs to Change

Schools can't keep teaching the same way. Here's what experts recommend (Schram, 2025; 2026a):

Embrace AI in the classroom. Instead of banning it, teach students to use it properly. Focus exams on judgment and decision-making—not just finding information.

Create new paths to experience. If entry-level jobs disappear, schools should build alternatives: real-world placements where students work alongside AI, learning skills companies actually need.

Invest in human development. Emotional intelligence, ethics, communication—these belong at the center of education, not the edges.


What This Means for You

If you're a student today, the message is clear:

Don't fear AI—learn to work with it. Develop your uniquely human abilities. Stay curious and keep learning. And always think critically, because AI makes mistakes too.

The entry-level job as we knew it may be disappearing. But the need for capable, thoughtful, adaptable people isn't going anywhere (Schram, 2026a).

The future belongs to those who prepare for it.


References

Schram, A. (2025, May 20). Future-proofing education: Navigating AI integration through the Brussels Effect. LinkedIn. https://www.linkedin.com/pulse/future-proofing-education-navigating-ai-integration-through-schram-2ucze/

Schram, A. (2026a, February 7). The entry-level job is disappearing. Here's what universities should do now. LinkedIn. https://www.linkedin.com/pulse/entry-level-job-disappearing-heres-what-universities-should-schram-f2nqf/

Schram, A. (2026b, February 7). The Jarvis moment has arrived. Is your organization ready? LinkedIn. https://www.linkedin.com/pulse/jarvis-moment-vibe-orchestration-radical-work-education-schram-nvwxf/

Wednesday, February 4, 2026

The Missing Middle: Why AI Training Fails and How to Fix It

 

A Wake-Up Call from Redmond, Microsoft's HQ

The numbers are in, and they are both startling and unsurprising. At the end of 2025, Microsoft conducted a study that most people overlooked. They tracked 300,000 employees using their AI assistant, Copilot. For the first three weeks, excitement was palpable. People were experimenting, sharing discoveries, marvelling at what the technology could do. Then came the cliff. Enthusiasm dropped sharply, and most people quietly stopped using AI altogether.

Let that sink in. Microsoft, one of the world's largest technology companies, with presumably some of the most tech-savvy employees on the planet, watched 80% of their workforce abandon their own AI tool after the initial honeymoon period.




The employees who continued using AI discovered something important: AI is not just a tool you learn to operate. It is something you learn to manage. This insight applies to all AI tools—not just Copilot—and it fundamentally changes how we should approach AI training. The challenge is not technical. It is, as I have argued before, psychological and institutional (Schram, 2025).

Sunday, January 25, 2026

From Telegraph to AI: Why Learning the Language of Innovation Still Matters



Introduction: Finding Echoes in History

As a trained economic historian specializing in the 19th century's large technical systems, such as railways and telegraphs, I am always tempted to find historical parallels with today's emerging technologies—particularly what has come to be called artificial intelligence.

This impulse is not mere academic nostalgia. Understanding how past technological revolutions unfolded, who benefited from them, and why some innovations endured while others faded can offer crucial guidance for leaders, educators, and innovators navigating the current AI landscape. The question I keep returning to is simple but profound: Is AI genuinely transformative, or is it another overhyped technology destined to disappoint? How will we know?



Europe's Private R&D Innovation Divide: Which Companies Lead in R&D Investment?

The 2025 EU Industrial R&D Investment Scoreboard | IRI

Recently the European Commission published its Industrial R&D Investment Scoreboard for 2024. It is remarkable that so many countries are (far) below the EU average in this respect. In fact, all sub-Scandinavian countries, except Germany, spend below the EU average per employee on research and development.




Here is the top-20 ranking for individual companies:


These numbers are hard to interpret without comparing the same indicators in the world's other industrial powerhouses for which reliable data are available, which leaves out China.

Key Observations from the EU Data:

  • Germany dominates the list with the highest number of companies (227), the highest total sales (€1.88 trillion), and the highest total R&D spending (€118.6 billion).
  • Denmark has the highest R&D spending per employee (~€44,792), driven largely by high-intensity pharmaceutical companies like Novo Nordisk.
  • Romania shows a very high R&D per employee figure, but this is based on a single data point (Bitdefender Holding B.V.), which is a software security company with high R&D intensity relative to its size.
  • France ranks second in total sales and R&D spending, maintaining a strong R&D per employee ratio of €18,740.
  • Sweden and Finland also show strong innovation metrics, with R&D per employee figures exceeding €23,000 and €25,000 respectively.

Note: The "Grand Total" row represents the sum/average of the EU member states listed in the file. Companies with missing employee data were excluded from the denominator of the "per employee" calculation to ensure accuracy.
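The note's handling of missing employee data can be made concrete. A minimal sketch of one defensible approach, which drops a company from both numerator and denominator when its employee count is missing, so unmatched spending does not inflate the ratio (the company rows are invented for illustration; the Scoreboard's exact method may differ):

```python
def rd_per_employee(companies):
    """Aggregate R&D spend per employee across a list of
    (name, rd_spend_eur, employees) tuples, skipping companies
    with missing employee counts entirely."""
    total_rd = 0.0
    total_employees = 0
    for name, rd_eur, employees in companies:
        if employees is None:  # missing employee data: exclude the company
            continue
        total_rd += rd_eur
        total_employees += employees
    return total_rd / total_employees if total_employees else None

# Hypothetical rows: (name, R&D spend in euros, employees)
sample = [
    ("Alpha Pharma", 100_000_000, 2_000),
    ("Beta Software", 50_000_000, None),   # no employee figure reported
    ("Gamma Motors", 30_000_000, 1_000),
]
print(round(rd_per_employee(sample), 2))  # → 43333.33
```

Keeping the €50 million of R&D in the numerator while dropping Beta's employees from the denominator would instead overstate the ratio, which is presumably what the exclusion is meant to avoid.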



LLM Prompt Framework: An Analysis of Contemporary Evaluation Frameworks
