As we close out 2025, we find ourselves in the peculiar position of having to confess something: we’ve been having far too much fun with AI coding assistants. Not in a “let’s replace our engineering team” sort of way (our engineers would have my head), but in a “let’s see what happens when we connect an LLM to Norsk Studio via MCP” sort of way.
tl;dr: What happens is a delightful mess of rapid prototyping, zero production code, and a business development officer who’s been explicitly banned from committing to the main branch (for very good reasons!).
The Great MCP Experiment
For those unfamiliar, MCP (Model Context Protocol) is Anthropic’s way of letting AI assistants actually do things rather than just talk about them. When we hooked it up to Norsk Studio, we discovered something rather useful: if humans can read our APIs and understand our drag-and-drop workflow builder, then apparently AI can too. Shockingly well, in fact. (Read our lead developer Rob Ashton’s thoughts on MCPs here and here.)
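If you’ve never seen one, an MCP server is just a small program that advertises a handful of tools an assistant is allowed to call. Here’s a minimal sketch using the official TypeScript SDK; the “greet” tool is a toy invented for illustration and has nothing to do with our actual Norsk Studio integration.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// A toy MCP server exposing one tool an assistant can discover and call.
const server = new McpServer({ name: "demo-tools", version: "1.0.0" });

// Hypothetical tool, purely for illustration: the assistant supplies a name,
// the server returns a greeting.
server.tool(
  "greet",
  { name: z.string() },
  async ({ name }) => ({
    content: [{ type: "text", text: `Hello, ${name}!` }],
  })
);

// Wire the server up over stdio so a desktop assistant can launch and talk to it.
await server.connect(new StdioServerTransport());
```

Point an MCP-aware assistant at a server like that and it can discover the tools and call them on its own. Swap the toy for tools that can, say, create or inspect workflows, and you get the general shape of the experiment described above.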
When we built Norsk, our goal was to provide our customers with the tools to build their own live streaming workflows, from the simple (say, SRT in, ABR ladder out) to the complex (like remote commentary in multiple languages). Part of that ethos was making the functions in Norsk as accessible as possible to a wide range of customers. The Norsk SDK lets you replace hundreds or thousands of lines of code with human-legible commands. (True story: when the developers first showed me some sample workflows in an early version of the SDK, even I — embarrassingly yet unapologetically non-technical — could make out what was happening in the JavaScript onscreen.) Norsk Studio took things one step further with its no-code interface: pre-built, drag-and-drop components with clever labels like “SRT Ingest,” “Source Switcher,” and “Preview.” All of which means that when it came time to integrate Norsk with LLMs via MCP (read more about that here and here) … well, if a human can read it, surely an AI could.
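To give a flavour of what “human-legible” means here, this is roughly the shape of such a workflow. It’s an illustrative sketch rather than a copy-paste from the SDK reference, so treat the method names and options as approximate.

```typescript
import { Norsk } from "@norskvideo/norsk-sdk";

// Illustrative only: method names and options here are approximations,
// not lifted from the SDK reference. The point is how much intent fits
// into a handful of legible calls.
const norsk = await Norsk.connect();

const input = await norsk.input.srt({ mode: "listener", port: 5001 });  // SRT in
const ladder = await norsk.processor.transform.videoLadder({            // ABR rungs
  rungs: ["1080p", "720p", "360p"],
});
const output = await norsk.output.hlsMultiVariant({ name: "default" }); // ABR ladder out

ladder.subscribe(input);   // source feeds the ladder
output.subscribe(ladder);  // ladder feeds the packaged output
```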
While we’d never call ourselves a UI company (another part of the Norsk ethos is that our customers should be able to build their own custom UIs based on their business logic), that MCP integration has enabled our developers and users to generate perfectly functional and user-friendly dashboards for all manner of workflows right from the APIs Norsk Studio generates. And not just developers, either. Our chief business development officer Dom Robinson, who is quite technical compared to me but had heretofore been prohibited from coding by the folks at Norsk who actually know how to do it, has spent much of his time over the past few months creating dashboards for customer demos and samples (read about his experience here).
The result? A few months of absurdly quick proof-of-concept builds that demonstrate what’s possible with Norsk — though emphatically not what we’re shipping. These are conversation starters, demo fodder, and “what if” explorations.
The POC Gallery (Or: Things Dom Built Instead of Doing Actual Work)
AI-Assisted Source Switcher
Google Gemini watches a live sports stream and automatically switches to ads at breaks in play. Or switches to literally anything else based on whatever prompt you fancy. Works beautifully for demos. Would require actual engineering for production to ensure details like frame-accurate cuts and so on. All possible with Norsk. All details that Dom didn’t have the patience to include. Fine for a demo, but in production the difference matters.
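Under the hood, the demo loop is not exotic: grab a frame, ask Gemini a yes/no question, poke the switcher. A rough sketch is below, assuming Node 18+, Google’s @google/generative-ai package, and two endpoints we’ve invented purely for illustration (a preview thumbnail and a source-switch call on the Studio-generated API); Dom’s actual wiring may well differ.

```typescript
import { GoogleGenerativeAI } from "@google/generative-ai";

const model = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!)
  .getGenerativeModel({ model: "gemini-1.5-flash" });

// Hypothetical helper: poll a preview frame from the running workflow.
// The URL is invented for illustration.
async function grabPreviewFrame(): Promise<string> {
  const res = await fetch("http://localhost:8080/api/preview/thumbnail.jpg");
  return Buffer.from(await res.arrayBuffer()).toString("base64");
}

async function pollAndSwitch(): Promise<void> {
  const frame = await grabPreviewFrame();
  const result = await model.generateContent([
    "Is this sports footage live play or a break in play? Reply with exactly PLAY or BREAK.",
    { inlineData: { data: frame, mimeType: "image/jpeg" } },
  ]);
  const verdict = result.response.text().trim();

  // Hypothetical endpoint on the Studio-generated API: choose which source goes to air.
  await fetch("http://localhost:8080/api/source-switcher/select", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ source: verdict === "BREAK" ? "ads" : "match" }),
  });
}

// Check every few seconds; production would want something smarter (and frame-accurate).
setInterval(() => pollAndSwitch().catch(console.error), 5000);
```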


Voice- and Motion-Triggered Vision Director
“Camera 1!” “Camera 2!” Because apparently typing is for people with time on their hands. Also responds to motion detection, which is either incredibly useful or the beginning of a deeply annoying future, depending on your perspective.


Multi-Language Remote Commentary
Our not-so-secret weapon for IBC 2025’s Master Control Cloud accelerator project: a full cloud remote commentary platform built in Norsk Studio. Features remote commentary in three languages, complete talkback systems, audio mixing, and enough dashboards to make you feel like you’re actually in control. (You are. That’s the point.) Check out this Workflow Wednesday to see how remote commentary works in Norsk Studio, and this white paper to find out more about how Norsk handles time sync in general.







Browser Overlay with Stats
Add graphics overlays to your stream, toggle them on and off, swap them out, monitor stream health. Standard broadcast stuff, zero custom code required for the POC. Delightfully straightforward. (We covered how to add a browser overlay to your video in this Workflow Wednesday.)


Google Dynamic Ad Insertion (DAI) with AI
Work in progress as we go to press, but too interesting to withhold: Gemini monitors the stream, detects optimal ad insertion moments, and inserts SCTE-35 markers. Google DAI will handle the rest. It’s automated monetization with an AI trigger finger. Proceed with appropriate caution and business logic.


The Actual Point (Beneath the Snark)
Here’s what this year of experimentation has taught us: Norsk Studio’s MCP integration gives you two genuinely useful paths forward.
Path One: You need something now for a one-off project, budget is tight, and “if it works, it works” is your quality threshold. Spin up an AI assistant, connect it to Studio via MCP, and build your POC in hours instead of weeks. Perfect for pilots, demos, and “let’s just see if this idea has legs” scenarios.
Path Two: Use AI-generated prototypes to communicate with your production team. These POCs become conversation starters, requirements documents disguised as working demos. Your developers can see exactly what you’re after, iterate on the concept, and then build properly supportable, production-ready code the old-fashioned way: with humans who understand what they’ve created and can maintain it.
The critical distinction: AI code is fast, but hand-crafted code is supportable.
Norsk Studio itself represents years of careful engineering by actual humans who answer support tickets and care about backwards compatibility. That foundation is what makes the rapid AI experimentation possible in the first place.
What Makes This Work
The reason these POCs came together so quickly isn’t AI magic — it’s that Norsk Studio’s workflow builder is genuinely readable. Human-legible APIs, sensibly labeled components (“SRT Ingest” does exactly what you’d expect), and a drag-and-drop canvas that makes visual sense.
When your foundation is that clear, adding an AI assistant just accelerates what’s already possible.
But workflows are workflows. They’re logical, they’re powerful, they’re … not always user-friendly for the folks who just want to use the system rather than build it.
That’s where the UI dashboards come in. Give someone a “Press this button to switch cameras” interface instead of a “Configure this media pipeline” interface, and suddenly broadcast technology becomes accessible.
The Invitation
These examples represent a tiny fraction of what’s buildable with Norsk Studio, with or without AI assistance. If any of these use cases touch on problems you’re actually trying to solve – or if you’ve got entirely different ideas about what’s possible – we’d genuinely like to hear about it.
Because for all our sardonic commentary about rapid prototyping versus production code, there’s real value in being able to test ideas quickly, communicate concepts clearly, and build exactly what your workflow requires.
Want to explore what you could build with Norsk Studio? Get in touch for a demo. We’ll keep the AI-generated chaos to a reasonable minimum.
Norsk Studio: Professionally engineered foundations since 2010, occasionally assisted by machines since 2025.