I got fired. This is what I built instead.
A widget engine built in spare time after work: SSR and CSR, under 15 kB, faster and more capable than the day-job version. It sat on my hard drive for six months. Then next-vibe needed it.
I was about to show it to my team. Then I got fired.
The origin story
My day job had a widget problem. Other sites would import our JavaScript to display a widget - a form, a chat bubble, a dashboard panel. The script was bloated and slow to load, and the third-party sites importing it visibly paid the cost. In my spare time after work, I started building a replacement.
The prototype came out cleaner than I expected. SSR and CSR support - SSR for speed, CSR for interactivity. Under 15kb total. Faster than the day job version. More features. Fully reactive, fully typesafe. A proper postMessage protocol between the iframe and the host page. No shared state. Trigger system, display modes, display frequency controls. I was about to show it to my team. Then I got fired.
The codebase sat on my hard drive for about six months. Then I realized what next-vibe actually needed it for: not just forms - but rendering the full interactive UI of a tool running on a remote server, inside a sandbox, on any page. That is remote tool execution with a live UI.
The problem with script tags
When you embed third-party content with a bare script tag, you pay in two ways. Performance: the script has to load, parse, and execute before anything renders. If the third-party server is slow, your page waits. And security: that script has full access to the page - the DOM, cookies, localStorage, event listeners. If it's buggy, your page breaks. If it's malicious, your users are exposed.
The standard safe alternative is an iframe. But iframes do not communicate with the parent page by default. Resize events do not bubble. Form submissions do not propagate. The host page cannot inject data. You end up with a dumb isolated box that cannot tell its parent anything.
What you actually need is a bridge.
The postMessage API lets the iframe and the host page communicate safely, across origins. You define a protocol. You validate origins. Every message has a type. The iframe can say: ready, height changed, form submitted. The parent can say: here is auth data, switch to dark mode, pre-fill this field. That is VibeFrame.
The postMessage bridge
Every message is prefixed vf:. The parent bridge validates the origin before processing anything. The iframe never executes in the host page context. The communication is fully controlled.
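As a rough sketch of what the parent side of such a bridge has to do (the message shape and field names here are assumptions, not the published VibeFrame protocol), the filter boils down to two checks: the event must come from the expected widget origin, and its type must carry the vf: prefix. Everything else is dropped silently.

```typescript
// Hypothetical message shape; the real VibeFrame protocol is not shown here.
interface VfMessage {
  type: string; // e.g. "vf:ready", "vf:resize", "vf:submit"
  payload?: unknown;
}

// Accept a message only if it comes from the allowed origin and is
// vf:-prefixed; return null for anything else so the caller can ignore it.
function acceptVfMessage(
  eventOrigin: string,
  eventData: unknown,
  allowedOrigin: string,
): VfMessage | null {
  if (eventOrigin !== allowedOrigin) return null; // wrong origin: drop
  if (typeof eventData !== "object" || eventData === null) return null;
  const msg = eventData as VfMessage;
  if (typeof msg.type !== "string" || !msg.type.startsWith("vf:")) return null;
  return msg;
}
```

In a browser this would be wired to `window.addEventListener("message", (e) => { const msg = acceptVfMessage(e.origin, e.data, widgetOrigin); ... })`; keeping the validation in a pure function makes it easy to test without a DOM.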
Display modes and triggers
Four display modes
Inline - embeds directly in a DOM element. Auto-resizes with content.
Modal - centered overlay with backdrop. Appears above the page.
Slide-in - slides in from the right. Good for forms or secondary content.
Bottom sheet - slides up from the bottom. Standard mobile pattern.
Seven trigger types
Immediate - mounts as soon as the page loads
Scroll - fires when the user has scrolled a percentage of the page
Time - fires after N milliseconds
Exit intent - fires when the mouse leaves the viewport through the top
Click - fires when a specific element is clicked
Hover - fires on mouse enter of a selector
Viewport - fires based on screen size
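The scroll trigger is a good example of how small these checks are. A minimal sketch (the function name and signature are mine, not VibeFrame's; `scrollPercent` mirrors the option shown in the embed examples below):

```typescript
// Hypothetical helper: decide whether a scroll trigger should fire.
// In the browser the inputs would come from window.scrollY,
// window.innerHeight and document.documentElement.scrollHeight.
function scrollTriggerFired(
  scrollTop: number,
  viewportHeight: number,
  pageHeight: number,
  thresholdPercent: number,
): boolean {
  const scrollable = pageHeight - viewportHeight;
  if (scrollable <= 0) return true; // nothing to scroll: threshold is reached
  const percent = (scrollTop / scrollable) * 100;
  return percent >= thresholdPercent;
}
```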
Display frequency
always, once-per-session, once-per-day, once-per-week, once-per-user. Enforced client-side with localStorage. No server round-trip.
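The frequency check can be sketched as a pure function over an injected key-value store (in the browser this would be backed by localStorage, with sessionStorage handling the per-session case; the key format and function names are assumptions):

```typescript
type Frequency =
  | "always"
  | "once-per-session"
  | "once-per-day"
  | "once-per-week"
  | "once-per-user";

// Storage abstraction so the logic is testable outside a browser.
interface KV {
  get(key: string): string | null;
  set(key: string, value: string): void;
}

const DAY = 24 * 60 * 60 * 1000;

// Returns true if the widget may be shown now, and records the showing.
function shouldShow(widgetId: string, freq: Frequency, store: KV, now: number): boolean {
  if (freq === "always") return true;
  const key = `vf:shown:${widgetId}`; // hypothetical key format
  const windows: Record<Exclude<Frequency, "always">, number> = {
    "once-per-session": Infinity, // assumes a sessionStorage-backed store
    "once-per-day": DAY,
    "once-per-week": 7 * DAY,
    "once-per-user": Infinity,
  };
  const last = store.get(key);
  if (last !== null && now - Number(last) < windows[freq]) return false;
  store.set(key, String(now));
  return true;
}
```

Because the decision is purely client-side, the server never has to know how often a visitor has seen the widget.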
Two script tags. Done.
Any endpoint becomes embeddable. The tool runs on its own server. The widget renders in a sandbox on your page. Full features, zero shared state.
The complete embed code for a contact form from unbottled.ai
import { VibeFrame } from "next-vibe/vibe-frame";

VibeFrame.mount({
  serverUrl: "https://unbottled.ai",
  endpoint: "contact_POST",
  target: "#contact-form",
  trigger: { type: "immediate", display: "inline" },
});

Or as a plain script tag for any website:
<script>
  window.vibeFrameConfig = {
    serverUrl: "https://unbottled.ai",
    integrations: [{
      endpoint: "contact_POST",
      target: "#contact-form",
      trigger: { type: "immediate", display: "inline" },
    }],
  };
</script>
<script src="https://unbottled.ai/vibe-frame/vibe-frame.js"></script>

The admin panel generates this for you. Pick endpoint, pick display mode, pick trigger. Copy. Paste anywhere.
Not a side effect. The point.
When I ported VibeFrame into next-vibe, the first real use was not embedding a contact form on some external page. It was remote tool execution within the platform itself - rendering any endpoint's full interactive widget UI inside a sandbox, so distributed tools feel like one system.
Real data. Live indicators. A Vibe Sense graph reacts to what is happening on the server. This is not a screenshot or a static export. The tool is running. VibeFrame renders its widget UI in a sandbox wherever you need it - inside the platform, on a dashboard, or on any other page.
That is when the architecture clicked. VibeFrame was not just a way to embed forms on third-party sites. It was the rendering half of remote tool execution - the missing piece that makes a distributed tool system feel like a single coherent platform.
Federated embedding
Each integration in VibeFrame can point to a different serverUrl. That means you can embed widgets from multiple next-vibe instances on the same page. No shared backend. No shared database. Each instance serves its own widgets. The host page is just a composition layer.
Multiple instances, one host page, zero shared infrastructure
import { VibeFrame } from "next-vibe/vibe-frame";

// Chat widget from unbottled.ai
VibeFrame.mount({
  serverUrl: "https://unbottled.ai",
  endpoint: "agent_chat_threads_GET",
  target: "#chat",
});

// Product catalog from a shop instance
VibeFrame.mount({
  serverUrl: "https://shop.example.com",
  endpoint: "products_GET",
  target: "#products",
  trigger: { type: "scroll", scrollPercent: 30, display: "modal" },
});

// Analytics dashboard from yet another instance
VibeFrame.mount({
  serverUrl: "https://analytics.example.com",
  endpoint: "vibe_sense_dashboard_GET",
  target: "#dashboard",
  trigger: { type: "click", clickSelector: "#show-stats", display: "slideIn" },
});

The definition travels with the widget. The server that owns the endpoint owns the render.
The invocation half
VibeFrame handles rendering. The tool registry handles invocation. You connect remote next-vibe instances - each with a name like hermes, thea, or any custom alias. Their endpoints become available alongside your local tools. One unified tool set.
The AI uses tool-help to discover every available endpoint - local and remote. It sees the full registry: names, descriptions, typed inputs, typed outputs. When it calls execute-tool, the platform routes to the right instance. The AI does not care where a tool lives.
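The routing step can be pictured as a small resolver over the instance registry. Everything here is an assumption for illustration: the registry shape, the URL path, and the function name are mine, only the aliases echo the ones mentioned above.

```typescript
// Hypothetical registry: instance aliases mapped to base URLs,
// echoing the "hermes"/"thea" aliases mentioned above.
const instances: Record<string, string> = {
  local: "http://localhost:3000",
  hermes: "https://hermes.example.com",
  thea: "https://thea.example.com",
};

interface ToolCall {
  instance: string;
  endpoint: string;
  input: unknown;
}

// Route an execute-tool call to the instance that owns the endpoint.
// The AI only names the tool; the platform picks the URL.
function resolveToolUrl(call: ToolCall, registry: Record<string, string>): string {
  const base = registry[call.instance];
  if (base === undefined) throw new Error(`unknown instance: ${call.instance}`);
  return `${base}/api/tools/${call.endpoint}`; // path shape is an assumption
}
```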
You stay in control. Pin the tools you want visible, opt out of any you do not need. Same management for local and remote endpoints - no difference in how you configure them.
Connect instances. The AI discovers the tools. You decide which ones to use.
Skills: a persona layer
On top of this, skills add a persona. A skill is a preset - a name, a system prompt, a voice, a personality, and optionally a restricted tool set. The user picks a tutor, a coder, a storyteller. Under the hood, it is the same registry, the same endpoints, the same execute-tool calls.
A skill is a persona: a tutor, a coder, a storyteller. Each has a name, a voice, a personality. You pick one and start talking. Underneath, it is a configuration preset: it can restrict which tools are visible, pin specific endpoints, or leave the full registry open. Same execute-tool interface, just scoped.
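A possible shape for such a preset, inferred from the description above (the field names and the example endpoints are hypothetical, not next-vibe's actual schema):

```typescript
// Hypothetical skill preset: a persona plus an optional scoped tool set.
interface Skill {
  name: string;
  systemPrompt: string;
  voice?: string;
  pinnedTools?: string[]; // endpoints surfaced first
  allowedTools?: string[]; // if set, restricts the visible registry
}

const tutor: Skill = {
  name: "Tutor",
  systemPrompt: "You are a patient tutor. Explain step by step.",
  voice: "calm",
  allowedTools: ["search_GET", "flashcards_POST"], // hypothetical endpoints
};

// Scope the full registry down to what the skill allows;
// a skill with no allowedTools leaves the registry open.
function visibleTools(allEndpoints: string[], skill: Skill): string[] {
  if (!skill.allowedTools) return allEndpoints;
  return allEndpoints.filter((e) => skill.allowedTools!.includes(e));
}
```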
Remote tool execution
Here is the thing that connects VibeFrame and the tool registry. When an AI calls execute-tool with a remote endpoint, next-vibe routes the call to the target instance. That instance executes the tool, returns the result. Standard enough.
But every endpoint in next-vibe also has a widget - a typed, fully-featured UI component that knows how to render that tool's inputs and outputs. If the remote instance is publicly accessible, VibeFrame renders the actual widget in a sandboxed iframe, communicating with the tool's server via postMessage. Full interactivity, live data, real UI.
If the instance is not public - behind a firewall, on a private network - the platform falls back to definition-driven UI. The endpoint definition carries enough metadata (field types, labels, validation rules) to render a functional interface locally, without ever reaching the remote server's frontend. Either way, the tool works. The UI adapts.
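The fallback path can be sketched as a function from definition metadata to markup. The metadata shape and function name here are assumptions; the point is that labels and field types alone are enough to produce a working form.

```typescript
// Hypothetical endpoint-definition metadata, as described above:
// enough field info to render a form without the remote frontend.
interface FieldDef {
  name: string;
  label: string;
  type: "text" | "email" | "number";
  required?: boolean;
}

// Build a plain HTML form from the definition alone - the fallback
// path for instances VibeFrame cannot reach directly.
function renderFallbackForm(endpoint: string, fields: FieldDef[]): string {
  const inputs = fields
    .map(
      (f) =>
        `<label>${f.label}<input name="${f.name}" type="${f.type}"${
          f.required ? " required" : ""
        }></label>`,
    )
    .join("\n");
  return `<form data-endpoint="${endpoint}">\n${inputs}\n<button type="submit">Send</button>\n</form>`;
}
```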
The AI calls the tool; the owning instance executes it and returns the result; VibeFrame renders the full UI in a sandbox.
The server that owns the tool owns the UI. VibeFrame renders it anywhere you need it. This is what remote tool execution looks like with a full frontend.
What they have in common
VibeFrame and the tool registry solve the same problem from opposite ends. The tool registry handles invocation - any endpoint on any instance, callable from any AI. VibeFrame handles rendering - any widget from any instance, embeddable on any page. Remote tool execution is the bridge between them: call the tool, render its UI.
A distributed tool system with a distributed rendering system. That is next-vibe.
View on GitHub
git clone https://github.com/techfreaque/next-vibe
cd next-vibe
cp .env.example .env
bun install
vibe dev

I never got to show it to those colleagues. But I am showing it to you.