
Monday, April 6, 2026

Stop Bolting-On AI: Why Your "Factory Floor" Still Runs on Steam

The global economy is currently in the grip of a $1.3 trillion contradiction. Since the dawn of the 2020s, organizations have poured astronomical sums into digital transformation, yet the failure rate remains a haunting 70% to 80%. We are living through what I call the "Transformation Trap"—a period where the rapid irruption of technology is mistaken for the deep reorganization of the institutions that use it.

As a historian of large technical systems, I see a pattern today that is eerily familiar. When we ask engineers what the AI-embedded society of 2050 will look like, they describe faster algorithms and more GPUs. But history tells us that technology is never the bottleneck. The bottleneck is us: our hierarchies, our incentives, and our refusal to let go of the "central drive shafts" of a previous era.


Thursday, March 19, 2026

Why Your AI Teaching Assistant Keeps Getting It Wrong (And the One Skill That Fixes It)

#EdTech #AIinEducation #PromptCraft #ContextEngineering #TeacherAI #DigitalLearning

Freebie: here is my prompt engineering app for teachers: https://poe.com/DrAlbertPrompt.


Background

In January 2026, The Economist published "Failing the Screen Test," a sweeping investigation into educational technology [1]. The verdict was stark: ed tech is "mostly useless," a $165 billion global industry that delivers marginal gains while student achievement collapses worldwide. The piece opened with Principal Inge Esping in a Kansas middle school, watching laptops go back into closets after three years of broken promises from adaptive math software. Paper and pencil returned. The magic never came.


The Economist got a lot right. It also got some critical things terribly wrong. But the most important lesson from that article has nothing to do with whether technology works in classrooms. It has everything to do with how we ask technology to work for us, and what has changed in the last two months that makes that question more urgent than ever.

Sunday, March 1, 2026

The Silicon Valley Schism: A Strategist’s Guide to the Ideology and Power Behind the AI Boom

πŸ•°️ The Evolution of Tech Culture

    • The Counterculture Era: Early computing was defined by a DIY, anti-establishment ethos and the Whole Earth Catalog 🌍.

    • The Dot-Com Boom: Driven by profit-motivated optimism and the "abundance" of the microchip, ending in the greed-fueled crash of 2000 πŸ“‰.

    • The Social Media Era: Defined by "nerds in hoodies," zero-interest venture capital, and the rise of giants like Meta and Uber πŸ“±.



πŸ€– The Current AI Vibe

    • Gold Rush 2.0: San Francisco is "back," with 25-year-olds making millions and massive investment rounds fueling an exuberant, "weird" local culture πŸ’°.

    • The "Jagged Frontier": AI is a "secret third thing"—capable of solving complex protein folding but occasionally failing at simple tasks like counting letters in "strawberry" πŸ“.

    • Religious Devotion: Many builders feel they aren't just coding software, but are effectively "building God" or an alien super-intelligence πŸ‘Ό.

⚔️ The Great AI Schism

    • The Doomers: Led by figures like Eliezer Yudkowsky, they fear AI is an existential threat that could accidentally "kill us all" if not strictly regulated ☣️.

    • The Accelerationists (e/acc): They want to "let it rip," believing AI will usher in infinite prosperity and that slowing down is a dangerous mistake πŸš€.

    • The Shift: Focus is moving from "apocalyptic extinction" toward more immediate concerns like job loss and economic disruption πŸ› ️.

⚖️ The Political Rightward Shift

    • Anti-Regulation: Silicon Valley leaders are moving toward the Right in reaction to aggressive antitrust actions and crypto scrutiny πŸ›️.

    • The "Woke" Backlash: A rejection of employee activism and affirmative action has pushed tech titans toward a more libertarian, "leave us alone" political stance 🐘.

    • Transactional Politics: Some CEOs are backing Donald Trump as a logical calculation, favoring a president who prioritizes personal relationships and deregulation over rigid policy 🀝.


Building God in a Gold Rush: A Strategist’s Guide to the AI Cultural Schism

In the corporate landscape of 2026, AI is no longer a speculative line item; it is the atmospheric pressure under which every business operates. Yet, a critical strategic error persists: treating AI as a mere continuation of the "SaaS" (Software as a Service) era. As Charlie Warzel and Jasmine Sun illuminate, we are not just witnessing a technological update. We are living through a "fits and starts" revolution that is as much a religious and political movement as it is a digital one [1].

Thursday, February 12, 2026

The Four-Year Miracle: How Venice Rewrote Geography

Technology, and civil engineering in particular, has always served to protect humanity from nature. Let's look at the modern equivalent of the 15th-century project: MOSE (Modulo Sperimentale Elettromeccanico), the mobile flood-barrier system protecting Venice.

When the project was officially greenlit in 1984, the initial budget was approximately €1.6 billion to €3.4 billion (estimates vary depending on whether they include auxiliary lagoon works). At one point in the early 2000s, the figure was pegged at roughly €4.2 billion. Construction started in 2003 and it took 17 years to complete.

As of its first operational test in 2020, 36 years after it was officially approved, the cost had soared to approximately €6.2 billion. On top of this comes an €80 million annual maintenance budget. If you include the wider lagoon protection works and the additional funding required to finish technical fine-tuning, the total bill is estimated at nearly €8 billion.

This represents a cost overrun of more than 200% on the original estimates, driven by delays, technical adjustments, and the widespread corruption scandal uncovered in 2014.
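The overrun arithmetic is easy to check. A minimal sketch, using the figures quoted above; the exact percentage depends on which end of the original budget range you take as the baseline:

```python
def overrun_pct(original: float, final: float) -> float:
    """Percentage cost overrun of `final` relative to `original`."""
    return (final - original) / original * 100

# Figures as quoted above, in billions of euros.
low_estimate, high_estimate = 1.6, 3.4   # initial 1984 budget range
final_cost = 6.2                          # cost at first operational test (2020)
total_with_lagoon_works = 8.0             # including wider lagoon works

print(f"vs low estimate:  {overrun_pct(low_estimate, final_cost):.0f}%")   # ~288%
print(f"vs high estimate: {overrun_pct(high_estimate, final_cost):.0f}%")  # ~82%
print(f"total bill vs low estimate: "
      f"{overrun_pct(low_estimate, total_with_lagoon_works):.0f}%")        # ~400%
```

So "more than 200%" holds against the low end of the original range, and the full €8 billion bill pushes the overrun toward 400%.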

What is wrong with our institutions today that they cannot deliver any major infrastructure project efficiently? The issue is even more staggering when you realize that these are traditional infrastructure works involving mostly well-known technology, some of which has been in use since Egyptian or Roman times.

Sunday, February 8, 2026

The Future of Work Has Changed—Is Your Education Ready?

Imagine telling your computer, "Prepare my presentation using last week's sales data," and then walking away to make coffee. When you return, a complete 24-slide presentation is waiting for you. The AI found the data, organized it, and built the whole thing—without any step-by-step instructions.

This isn't science fiction. It's happening right now (Schram, 2026b).

Welcome to the "Jarvis moment"—named after the AI assistant from Iron Man. For years, AI was like a smart librarian: you asked questions, it gave answers. Now, AI can think ahead, remember past conversations, and complete complex tasks on its own. It doesn't just respond anymore. It acts (Schram, 2026b).

This is exciting. But it's also creating some serious challenges for anyone starting their career.


The Disappearing First Job

Here's the problem: beginner jobs are vanishing.

Think about how careers used to work. You graduate, get an entry-level job, and do simple tasks—collecting data, writing basic reports, scheduling meetings. It's not glamorous, but you learn how things work. After a few years, you move up.

That ladder is breaking.

One consulting firm used to hire 12 fresh graduates every year. This year? Just 3. The reason is simple: tasks that took a junior employee two days now take a senior employee 45 minutes—with AI help (Schram, 2026a).

Law firms tell the same story. Young lawyers used to spend days researching old court cases. Now AI does it in 20 minutes—and often catches things humans miss (Schram, 2026a).

So here's the uncomfortable question: if AI handles all the beginner work, how do people gain the experience needed for senior roles?


What AI Can't Do (Yet)

The good news? AI isn't good at everything. Five skills will matter more than ever (Schram, 2026a):

Working with AI, not against it. The winners won't fight AI—they'll use it as a powerful partner, knowing when to trust it and when to question it.

Handling messy problems. AI loves clear rules. Real life is messy. Humans are still better at figuring out what the actual problem is before solving it.

Connecting different fields. Someone who understands both technology and psychology can solve problems that specialists can't. As AI handles narrow tasks, broad thinkers become more valuable.

Building real relationships. Teamwork, trust, and understanding emotions—these remain deeply human skills.

Never stopping learning. What you learn today might be outdated in five years. The ability to keep learning is your most durable advantage.


The Risks Nobody Talks About

AI in schools isn't all positive. There are real dangers (Schram, 2025).

When students let AI write their homework, they skip the thinking process—and learning happens in the struggle. Research shows 32% of students are ready to use AI for assignments. That's a problem.

AI can also be unfair. Some exam-monitoring tools work less accurately for students with darker skin. And schools collecting student data—grades, behavior, even mental health information—create targets for hackers. In 2025, a data breach in Vancouver exposed thousands of private student documents.

The new "agentic" AI systems create even bigger security risks. When AI can access your files, emails, and browsing history, there are more ways for things to go wrong (Schram, 2026b).


Rules Are Coming—For Everyone

Governments are paying attention. The European Union's AI Act (2025) now bans certain AI uses in schools—like systems that try to read students' emotions. Other AI tools require careful checking before schools can use them (Schram, 2025).

Here's the interesting part: even if you don't live in Europe, these rules will probably affect you. It's called the "Brussels Effect." Companies want to sell products in Europe, so they follow EU rules everywhere. European standards often become global standards.

Schools using AI for admissions or grading are now classified as "high-risk" and must prove their systems are fair (Schram, 2025).


What Needs to Change

Schools can't keep teaching the same way. Here's what experts recommend (Schram, 2025; 2026a):

Embrace AI in the classroom. Instead of banning it, teach students to use it properly. Focus exams on judgment and decision-making—not just finding information.

Create new paths to experience. If entry-level jobs disappear, schools should build alternatives: real-world placements where students work alongside AI, learning skills companies actually need.

Invest in human development. Emotional intelligence, ethics, communication—these belong at the center of education, not the edges.


What This Means for You

If you're a student today, the message is clear:

Don't fear AI—learn to work with it. Develop your uniquely human abilities. Stay curious and keep learning. And always think critically, because AI makes mistakes too.

The entry-level job as we knew it may be disappearing. But the need for capable, thoughtful, adaptable people isn't going anywhere (Schram, 2026a).

The future belongs to those who prepare for it.


References

Schram, A. (2025, May 20). Future-proofing education: Navigating AI integration through the Brussels Effect. LinkedIn. https://www.linkedin.com/pulse/future-proofing-education-navigating-ai-integration-through-schram-2ucze/

Schram, A. (2026a, February 7). The entry-level job is disappearing. Here's what universities should do now. LinkedIn. https://www.linkedin.com/pulse/entry-level-job-disappearing-heres-what-universities-should-schram-f2nqf/

Schram, A. (2026b, February 7). The Jarvis moment has arrived. Is your organization ready? LinkedIn. https://www.linkedin.com/pulse/jarvis-moment-vibe-orchestration-radical-work-education-schram-nvwxf/

Wednesday, February 4, 2026

The Missing Middle: Why AI Training Fails and How to Fix It

 

A Wake-Up Call from Redmond, Microsoft's HQ

The numbers are in, and they are both startling and unsurprising. At the end of 2025, Microsoft conducted a study that most people overlooked. They tracked 300,000 employees using their AI assistant, Copilot. For the first three weeks, excitement was palpable. People were experimenting, sharing discoveries, marvelling at what the technology could do. Then came the cliff. Enthusiasm dropped sharply, and most people quietly stopped using AI altogether.

Let that sink in. Microsoft, one of the world's largest technology companies, with presumably some of the most tech-savvy employees on the planet, watched 80% of their workforce abandon their own AI tool after the initial honeymoon period.




The employees who continued using AI discovered something important: AI is not just a tool you learn to operate. It is something you learn to manage. This insight applies to all AI tools—not just Copilot—and it fundamentally changes how we should approach AI training. The challenge is not technical. It is, as I have argued before, psychological and institutional (Schram, 2025).

Sunday, January 25, 2026

From Telegraph to AI: Why Learning the Language of Innovation Still Matters



Introduction: Finding Echoes in History

As a trained economic historian specializing in the large technical systems of the 19th century, such as railways and telegraphs, I am always tempted to find historical parallels with today's emerging technologies—particularly what has come to be called artificial intelligence.

This impulse is not mere academic nostalgia. Understanding how past technological revolutions unfolded, who benefited from them, and why some innovations endured while others faded can offer crucial guidance for leaders, educators, and innovators navigating the current AI landscape. The question I keep returning to is simple but profound: Is AI genuinely transformative, or is it another overhyped technology destined to disappoint? How will we know?


