The strategic side of AI: why strong foundations matter
We live in one of the most exciting and transformative moments in tech history. Generative AI has become widely accessible, not just for big tech firms or engineers, but for everyone. Whether you’re an entrepreneur, a government employee, or simply someone with curiosity, it’s now possible to build your own AI assistant, automate your work, and boost your creativity without writing a single line of code. That’s not just powerful, it’s revolutionary!
This wave of innovation offers massive potential to improve how we work, learn, and connect. But with great opportunity comes great responsibility. It’s essential that we also take a moment to step back: to reflect, to prepare, and to build the right foundations before diving headfirst into the AI revolution. Because while AI is full of promise, it also introduces risks, especially when implemented without the necessary guardrails.
A career spanning the internet revolution
In 1992, I became known as “Mister Internet” while working at Zurel, where I helped employees understand the transformative potential of the online world before most people had email. By 1999, I was general manager at SurfControl, which introduced the world’s first Virtual Image Agent, a groundbreaking system using 22,000 algorithms to analyze digital content, long before “AI” was a household term. I became the youngest Nokia Country Manager in 2002, responsible for security and enterprise devices, and since 2007, I’ve advised leaders and hosted masterclasses in digital transformation, cybersecurity, and AI. During the COVID-19 pandemic, I was appointed national crisis manager for data and ICT security in the Netherlands (2021 to 2024).
This isn’t just nostalgia. It’s pattern recognition.
Because I’ve seen what it looks like when the world races ahead with shiny new tech, while forgetting the basics. And right now, it’s happening again: this time with artificial intelligence.
Building on sand: AI without a foundation
Across boardrooms, LinkedIn feeds, and conference stages, AI is the new oxygen. Leaders talk about ChatGPT, autonomous agents, and AI transformation like it’s already done.
But behind closed doors, a different reality emerges.
Most organizations I speak to have no written AI policy, no data classification standards, no AI risk framework, and no understanding of what employees are already pasting into external tools.
They’re building the future, but with no foundation. And that’s a formula we’ve seen before.
If your cybersecurity, privacy, and behavioral protocols aren’t in place, implementing AI is like installing a smart lock on a door that’s already open.
From dot-com to dot-AI: the bubble we don’t want to repeat
This is not just my opinion. Signals from across the globe are blinking red.
- A recent article from the World Economic Forum draws direct comparisons between the current AI boom and the dot-com bubble of the late 1990s. Huge investment. Little governance. And an overreliance on future promises rather than proven outcomes.
- A Yale research group observes how the current explosion of AI funding mirrors the speculative flows seen in 1999. Only this time, with more complex dependencies and societal risk.
- The Bank of England has formally warned of a “growing risk that the AI bubble could burst,” calling for regulators and markets to assess exposures and systemic vulnerabilities.
Even Sam Altman, CEO of OpenAI, recently acknowledged: “Yes, we’re in a bubble. But some bubbles contain truths.”
The question is: which truths are we acting on?
Hype versus readiness: three gaps I see every day
- Tools before training
Employees are using AI agents with sensitive client data, personal records, and intellectual property. With zero oversight or training. This is not digital innovation. It’s digital negligence.
- Projects before policy
Leaders commission AI “experiments” and pilot tools without policies in place to govern what’s safe, what’s ethical, or even what’s legal. There’s no incident response plan, no red-teaming, and no data usage guidelines. That’s not transformation. That’s exposure.
- Speed before security
Everyone is chasing speed: who launches fastest, who integrates fastest, who creates the next internal GPT. But if you haven’t tested your 2FA, segmented your data, or trained your people, your AI program is built on a vulnerable surface.
Assistants versus agents: why the risks multiply
Most people still think AI tools like ChatGPT are just fancy assistants. But we’re now entering the “agent” phase: autonomous systems that not only draft content but schedule meetings, trigger payments, manage workflows, and make decisions.
And the real danger? These agents are often invisible to leadership. They’re activated by employees without review, paste sensitive content into unregulated systems, and can go “off the rails” without anyone knowing.
That’s why I always advocate for the P-SEP model, ensuring every AI implementation respects the four principles below; after the list, I’ll sketch how such checks could look in practice:
- People – AI must serve people, not replace or manipulate them. Human-centered, understandable, empowering.
- Security – Systems and data must be protected across the full lifecycle (security by design). Privacy-proof, breach-aware, access-controlled.
- Ethics – AI decisions should be fair, explainable, and auditable (ethics by design). Bias-aware, auditable, fair.
- Privacy – Personal data must be treated with respect and transparency (privacy by design). GDPR-compliant, transparent, minimal.
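To make this concrete: below is a minimal sketch, in Python, of what a P-SEP-style guardrail could look like before a prompt ever leaves the organization. The names, patterns, and classification labels (check_prompt, PII_PATTERNS, “public”/“internal”) are illustrative assumptions of my own, not a reference implementation of the model.

```python
# Illustrative sketch only: names, patterns, and labels are assumptions,
# not a reference implementation of the P-SEP model.
import re
from dataclasses import dataclass

# Crude patterns standing in for a real data-classification service (Security / Privacy).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

@dataclass
class PolicyDecision:
    allowed: bool
    reasons: list

def check_prompt(prompt: str, classification: str, user_trained: bool) -> PolicyDecision:
    """Gate an outgoing prompt with simple People / Security / Ethics / Privacy checks."""
    reasons = []
    if not user_trained:                              # People: no AI use without training
        reasons.append("user has not completed AI awareness training")
    if classification not in ("public", "internal"):  # Security: confidential data stays home
        reasons.append(f"classification '{classification}' may not leave the organization")
    for label, pattern in PII_PATTERNS.items():       # Privacy: block obvious personal data
        if pattern.search(prompt):
            reasons.append(f"possible {label} detected in prompt")
    decision = PolicyDecision(allowed=not reasons, reasons=reasons)
    print(f"AUDIT: allowed={decision.allowed} reasons={decision.reasons}")  # Ethics: every decision is logged
    return decision

if __name__ == "__main__":
    check_prompt("Summarise the contract for jan@example.com", "confidential", user_trained=True)
```

Even a crude gate like this makes the four principles testable: who may use the tool, what may leave the organization, and what gets logged.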

And while the West experiments, the East executes
Let’s shift focus from Europe and Silicon Valley to China.
Recently, Howard Yu, professor and innovation strategist at IMD Business School, wrote something that grabbed me:
“OpenAI showed the world you could use Zillow and Figma inside ChatGPT without switching apps. Revolutionary? Maybe.
But 1.4 billion people in China have done this since 2017 with WeChat.”
Tencent’s Hunyuan AI model powers an entire commerce-first ecosystem. You don’t ask for an app; it shows up before you even need it. Booking travel? The bot opens flight options in chat. Discussing dinner? It surfaces nearby tables. That’s not assistant behavior. That’s ecosystem orchestration.
Meanwhile, OpenAI’s ecosystem is conversation-first: you prompt, it answers. China’s model is context-first: it predicts, then enables.
Yu points out: “Winning used to mean users came to your app. Now, it’s about being the app that appears inside someone else’s conversation.”
This shift is tectonic.
China’s trillion-dollar bet: good enough beats best
Howard Yu’s second major insight is even more provocative.
China’s AI ecosystem no longer depends on NVIDIA chips, not because of U.S. restrictions, but because Beijing believes Huawei is now good enough. They’re going all-in on local chips, local algorithms, and local scale.
Let that sink in:
- China files 23,695 AI papers annually versus 6,378 from the U.S.
- China holds 35,000+ AI patents. More than the U.S., Canada, Japan, and South Korea combined!
- Half of all AI researchers globally are Chinese.
- Beijing mandated all companies shift to domestic AI infrastructure. Immediately.
This isn’t a reaction. It’s strategy. And while the West debates over OpenAI board seats or Hollywood script generation, China is building the next digital ecosystem with precision, scale, and sovereign control.
Their strategy isn’t about having the best chips. It’s about making “best” irrelevant.
When good-enough becomes the standard, best becomes a luxury nobody needs.
This is disruption theory 101, as defined by Clay Christensen decades ago, and now extended by thinkers like Howard Yu.
My final warning: are we missing the shift again?
I’ve said it on stage. I’ll say it here again:
“I’ve lived through the dot-com bubble. I’ve lived through the mobile explosion.
This AI wave feels eerily familiar, and dangerously underprepared.”
The dot-com crash didn’t happen because the internet failed. It happened because we assumed technology alone was the value.
We forgot about customer need. We forgot about ethics. We forgot about risk.
Today, many are doing the same: chasing AI because it’s trending, not because it’s needed or even safe.
What I recommend to leaders now
Whether you lead a business, a nonprofit, or a government agency, ask yourself:
✅ Do we have a written AI usage policy?
✅ Have we mapped where sensitive data flows into AI systems?
✅ Are employees trained to use AI securely and ethically?
✅ Is AI part of our cyber resilience strategy, or floating outside it?
If not, you’re not ready. And no expensive assistant or agent can fix that.
Reset before you regret
This is why I continue to give in-house cyber resilience masterclasses, followed (only when the basics are in place) by AI masterclasses grounded in practical use and the P-SEP model. My mission is to make AI tangible, safe, privacy-proof, and empowering for everyone: from 14-year-olds to senior executives.
And yes: AI will transform the workplace.
But transformation without foundation becomes chaos.
And hype without preparation becomes a crisis.
It’s time to reset before the next bubble bursts.
About Erik Jan Koedijk
Erik Jan is a seasoned advisor, keynote speaker, and founder of cybersecurity.vision. Known as “Mister Internet” in 1992, he led innovations at SurfControl and Nokia, and most recently served as national crisis manager for data and ICT security in the Netherlands during the COVID-19 pandemic. He delivers hands-on, accredited masterclasses in AI, cyber resilience, and digital strategy across Europe and the Caribbean.
Request more information or get in contact with Erik Jan


