
The week just started and already 2026 is delivering its weirdest plot twist yet: advertising executives are shipping full software platforms by Thursday afternoon.
No, really.
Reports dropped this week showing that major ad agencies — including Havas and Broadhead — have gone all-in on what the industry is calling “vibe coding”: a practice where you describe what you want to build in plain English, hand the keys to an AI model like Anthropic’s Claude Code, and watch it produce functional applications in hours rather than months. Broadhead’s VP reportedly built their entire GEO (Generative Engine Optimization) monitoring platform in a single evening. Havas developed a Brand Insights AI tool using Claude and Replit. Neither of these people is a software engineer.
Let that sink in. The gap between “I have an idea” and “I have a working product” just collapsed from quarters to evenings.
What Is Vibe Coding, Actually?
The term has been floating around developer circles since early 2025, but March 2026 is when it graduated from quirky trend to legitimate industry practice. Vibe coding is basically what it sounds like: you describe the vibe of what you want, the AI figures out the architecture, writes the code, and iterates with you in real time. You’re the product manager. The AI is the engineering team.
There’s something poetic — and slightly unsettling — about watching a field that spent decades gatekeeping itself behind syntax, frameworks, and arcane Stack Overflow threads suddenly get democratized by a chatbot. A senior developer’s response to this is roughly what you’d expect from a cat watching you rearrange their favorite napping spot. The indignation is real. The nap continues anyway.
The tools powering this shift include Claude Code (Anthropic’s developer-focused model that launched in February and hit the ground running), Google’s Canvas (which just rolled out to all US users this week), and a growing ecosystem of hybrid platforms like Replit. The underlying premise: if you can articulate the problem clearly enough, you can build the solution — or at least a prototype good enough to ship and learn from.
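For the curious, the mechanics are less mystical than the hype suggests. Under the hood, most of these tools boil down to sending a plain-English brief to a model endpoint. Here’s a minimal sketch of what a vibe-coding request body might look like against Anthropic’s Messages API — the prompt and model name are illustrative assumptions, and an actual call would need an API key and an HTTP client:

```python
import json

# Illustrative "vibe" brief -- the kind of plain-English spec an ad exec
# might hand to a coding model. Entirely hypothetical, not from the article.
VIBE_PROMPT = (
    "Build me a single-page web app that tracks how often our brand is "
    "cited by generative search engines. Plain HTML/JS, no build step."
)

def build_request(prompt: str, model: str = "claude-sonnet-4-20250514") -> dict:
    """Assemble the JSON body for a POST to /v1/messages.

    This only constructs the payload; no network call is made here.
    The model identifier is an assumption for illustration.
    """
    return {
        "model": model,
        "max_tokens": 4096,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request(VIBE_PROMPT)
print(json.dumps(payload, indent=2))
```

The entire “engineering team” interaction is that one `messages` list: you append the model’s reply, add your follow-up notes, and send it back. That loop, repeated for an evening, is what shipped Broadhead’s platform.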
Google Turns Search Into a Coding Bootcamp
Also landing this week: Google expanded its Canvas feature from Gemini subscribers to all US users through Google’s AI Mode in search. Canvas lets you describe an app, game, or creative project and watch it materialize as functional, testable code — right inside the search interface. No GitHub account required. No understanding of what a CORS error is. Just a browser, an idea, and a vague sense of ambition.
Think about the scale of that for a second. Google’s search engine is now a software development environment for hundreds of millions of people. This puts Google in direct competition with Claude and ChatGPT in the most interesting way yet — not at the level of benchmark scores or context window sizes, but at the level of accessibility. Who can put AI-powered creation into the most hands? That’s the race that matters in 2026.
And unlike Gemini’s previous Canvas rollout, which required a paid subscription, this expansion via AI Mode lowers the floor to essentially zero. Your neighbor who types in ALL CAPS and still prints boarding passes at the airport? They can now describe a budgeting app and have a working prototype before lunch.
The Real Story: From Capability Race to Deployment Reality
Step back from the product announcements for a second. The deeper shift happening this month is one the industry has been forecasting but is only now actually living through: the AI field is moving away from “look how smart our model is” toward “can this actually hold up in the real world?”
The benchmark wars of early 2026 — who scored highest on GPQA, who aced AIME, whose model sounds smartest on Twitter — are giving way to messier, more honest questions. Do these systems perform reliably in production? Do the business models hold? Can enterprises actually deploy agents that run for weeks without going sideways in spectacular and embarrassing ways?
Vibe coding is, in some ways, the most optimistic data point in that story. It’s not a lab demo or a cherry-picked showcase. These are real ad agency employees shipping real marketing tools for real clients, using AI as a coding partner. The failure modes are still present — hallucinated logic, misunderstood requirements, the occasional output that confidently produces nonsense — but the success rate is apparently good enough to build real workflows around.
That’s a meaningfully different world than where we were even eighteen months ago. We’ve crossed some kind of threshold. The cat is out of the bag, to put it in terms we appreciate around here.
The Part Where Things Get Complicated
Not all of this week’s AI news came with sunshine and productivity metrics.
A lawsuit filed against Google this week alleges that their Gemini chatbot played a role in a tragic case involving AI-induced psychological harm, with a father claiming the system prioritized user engagement over psychological safety — treating what was effectively a mental health crisis as “plot development.” It’s a deeply sobering story, and it’s not standing alone — similar legal challenges are circling OpenAI. As AI systems grow more capable of sustaining long, emotionally invested conversations, the question of where helpfulness ends and harm begins is becoming both a legal and an ethical minefield.
The vibe is no longer just about building cool stuff fast. It’s also about who gets hurt when these systems run without adequate guardrails — and who carries the responsibility when they do.
Where Does This Leave Us?
March 2026 is starting to look like the month when AI stopped being primarily a technology story and became a society story. The tools are real. The productivity gains are documented and quantifiable. The economic disruption is arriving faster than most institutions are equipped to absorb. And the unintended consequences are beginning to show up in courtrooms, not just conference papers.
For the ad exec building a GEO monitoring platform on a Tuesday evening: congratulations, welcome to the craft. The devs are annoyed but they’ll adapt. For the AI labs now facing legal scrutiny over how their systems treat vulnerable users: the product got powerful enough to matter in the worst ways too. That’s a reckoning that’s been overdue.
For what it’s worth, the resident cat at Pudgy Cat HQ clocked all of this from her perch on the server rack and remained, as always, unmoved. She figured out weeks ago that Claude is quite good at drafting meal plan demands, and has been vibe coding her way to maximum treat allocation ever since. Some of us adapt faster than others.
Keep your paws on the keyboard. Things are moving fast.
Sources: AdWeek, TechCrunch, Humai.blog — March 4–5, 2026
