This is your platform watching itself.
I abandoned a trading bot years ago. Its architecture became the most interesting part of next-vibe. Every node in the graph is an endpoint - callable from the CLI, discoverable by AI, wired into your platform.
The trading bot I abandoned
A few years ago I forked OctoBot and built something I called Octane. Python backend, React frontend. You could drag technical indicators onto a canvas, chain evaluators together, configure execution rules, set up alerts, fire orders. It had a full visual strategy builder. I still run it for my own portfolio.
I abandoned it as a codebase. The Python was sprawling, the architecture had accumulated enough debt that every change felt expensive, and I had other things to build.
But I kept thinking about it. Specifically about what made it work as a system.
“The trading bot had this architecture right. The thing I got wrong was building it in isolation.”
Every business is a time series
In a trading bot, the pieces are simple. A data source: price data, volume, whatever you're reading. An indicator: moving average, RSI, MACD - takes raw data, produces a derived signal. An evaluator: is the fast MA above the slow MA? Boolean condition. An action: when the evaluator fires, do something.
That structure doesn't describe trading. It describes any business process where you have data over time, conditions you care about, and actions you want to take when those conditions are met.
Customer churn? That's a time series. Is it trending down? That's an evaluator. Send a win-back campaign. That's an action.
Email metrics - open rates, bounce rates, unsubscribe rates. All time series. All conditions you can evaluate. All triggerable.
Your credit economy - spending velocity, burn rate vs. purchase rate. All of it.
Refund rate spikes above 20% in a day? Thea gets notified before you see it in a dashboard.
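The four pieces compose mechanically. A minimal sketch of the pipeline with made-up types and a simple moving average standing in for the indicator - illustrative only, not the next-vibe API:

```typescript
// Hypothetical minimal types - not the real next-vibe definitions.
type Point = { timestamp: number; value: number };
type Series = Point[];

// Data source: returns raw points for a range (here: canned data).
const dataSource = (): Series =>
  [10, 12, 11, 14, 18, 21].map((value, i) => ({ timestamp: i, value }));

// Indicator: pure computation over the raw series.
const sma = (points: Series, period: number): Series =>
  points.map((p, i) => {
    const window = points.slice(Math.max(0, i - period + 1), i + 1);
    const avg = window.reduce((s, q) => s + q.value, 0) / window.length;
    return { timestamp: p.timestamp, value: avg };
  });

// Evaluator: boolean condition over the derived series.
const crossedAbove = (fast: Series, slow: Series): boolean => {
  const last = fast.length - 1;
  return fast[last].value > slow[last].value;
};

// Action: whatever is wired to the evaluator.
const raw = dataSource();
if (crossedAbove(sma(raw, 2), sma(raw, 4))) {
  console.log("signal fired: fast MA above slow MA");
}
```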
The four node types
Three node types you need to understand to read any graph. Plus a fourth that closes the loop.
Data source. A domain-owned endpoint that queries your database and returns { timestamp, value }[] for a given time range and resolution. Lives with its domain. Knows about its own schema.
Indicator. A pure, reusable computation endpoint - EMA, RSI, MACD, Bollinger Bands, clamp, delta, window average. No SQL. No domain knowledge. Call it on any data source.
Evaluator. A threshold or condition. Takes a series and asks a question. Is this value below 0.7? Did this ratio exceed 20%? Outputs a signal - fired or not fired.
Action. When the upstream evaluator fires, a specific endpoint gets called. In-process. No HTTP. Same validation, same auth, same response type as any other call in the system.
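On the config side, the four types read like a discriminated union. The indicator shape below is the one the article shows for EMA ({ type: "indicator", indicatorId: "ema", params: { period: 7 } }); every other field name is an assumption for illustration:

```typescript
// Illustrative node-config shapes. Only the indicator variant is taken
// from the article's example; the rest are assumed field names.
type NodeConfig =
  | { type: "dataSource"; endpointId: string } // domain-owned query
  | { type: "indicator"; indicatorId: string; params: Record<string, number> } // pure computation
  | { type: "evaluator"; condition: "lt" | "gt"; threshold: number } // fired / not fired
  | { type: "action"; endpointId: string }; // called in-process when upstream fires

const emaNode = {
  type: "indicator",
  indicatorId: "ema",
  params: { period: 7 },
} satisfies NodeConfig;
```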
Every node is an endpoint
Here's the thing I want to land right now, before we go any further.
In Octane - and in most pipeline systems - EMA existed only as a node in a graph. You couldn't call it from the CLI. It didn't show up as an AI tool. It was a private implementation detail.
Every Vibe Sense node is a standard endpoint, defined with createEndpoint(), registered in the same endpoint registry as everything else on the platform.
```
$ vibe vibe-sense-ema --period=7

┌─────────────────────────────────────────┐
│ analytics/indicators/ema                │
│ EMA (Exponential Moving Average)        │
├─────────────────────────────────────────┤
│ period   7                              │
│ points   365 input → 365 output         │
│ result   [ { timestamp, value }, ... ]  │
└─────────────────────────────────────────┘
```

The same EMA endpoint that ran as a node in the lead funnel graph - same definition, same validation, same auth - callable standalone from the CLI.
The SAME endpoint that's a node in your lead funnel graph is also a standalone tool on 13 platforms.
The pipeline is just endpoints calling endpoints.
But actions aren't trades
When a signal fires, the engine calls any endpoint. In-process. No HTTP round-trip. An alert. A campaign trigger. An AI escalation with pre-filled context. Thea gets notified. A win-back sequence starts. Whatever is wired to that evaluator.
The platform calls itself.
Call complete-task - Thea picks it up immediately.
Trigger a conversion sequence when lead velocity crosses a threshold.
Fire an AI run with pre-filled context about what signal triggered it.
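What that dispatch might look like, sketched with a toy registry - the endpoint id and handler here are hypothetical, but the shape (look up, call in-process, no HTTP) is the point:

```typescript
// Hypothetical in-process dispatcher: when an evaluator's signal fires,
// look the wired endpoint up in the registry and call it directly - no HTTP.
type Handler = (payload: Record<string, unknown>) => { success: boolean };

const registry = new Map<string, Handler>();
// "tasks/complete-task" is an assumed path; the real endpoint would also
// run validation and auth before its handler, like any other call.
registry.set("tasks/complete-task", () => ({ success: true }));

function dispatch(
  signalFired: boolean,
  endpointId: string,
  payload: Record<string, unknown>,
): { success: boolean } {
  if (!signalFired) return { success: false }; // gate closed, nothing called
  const handler = registry.get(endpointId);
  if (!handler) throw new Error(`unknown endpoint: ${endpointId}`);
  return handler(payload);
}

const result = dispatch(true, "tasks/complete-task", {
  reason: "lead velocity below threshold",
});
```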
Walking the lead funnel graph
This is the Lead Acquisition Funnel. It runs every six hours. Let's trace it top to bottom.
Real endpoints. Each lives at leads/data-sources/. They accept a time range and resolution, run their SQL query, return { timestamp, value }[].
Queries leads by created_at. Sparse - hours with no new leads produce no data point.
Grouped by converted_at, counts leads that reached SIGNED_UP status.
Leads with bounced email per time bucket.
Snapshot indicator at ONE_DAY resolution. Counts total leads not in terminal states.
Pure computation. The EMA endpoint lives at analytics/indicators/ema. Its graph config is just { type: "indicator", indicatorId: "ema", params: { period: 7 } }.
EMA indicator, period=7. Automatically extends the upstream fetch range for warmup.
Transformer: divides leads.converted by leads.created per time bucket. Clamped 0–1.
Threshold conditions. Each outputs a signal - fired or not fired.
EMA(7) < 0.7 at ONE_WEEK resolution. Lead creation velocity smoothed over 7 periods drops below 70%.
leads.created < 1/day. A whole day passes with no new leads at all.
conversion_rate < 5%/week. Funnel conversion falls below 5%.
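The transformer and evaluator steps can be sketched end to end - conversion rate per bucket, clamped to 0–1, checked against the 5% threshold. Helper names are mine, not the platform's:

```typescript
type Point = { timestamp: number; value: number };

// Transformer: converted / created per time bucket, clamped to [0, 1].
function conversionRate(created: Point[], converted: Point[]): Point[] {
  const convByTs = new Map(converted.map((p) => [p.timestamp, p.value]));
  return created.map((p) => ({
    timestamp: p.timestamp,
    value: Math.min(1, Math.max(0, (convByTs.get(p.timestamp) ?? 0) / p.value)),
  }));
}

// Evaluator: fires when the latest rate drops below the threshold.
const belowThreshold = (series: Point[], threshold: number): boolean =>
  series.length > 0 && series[series.length - 1].value < threshold;

const created = [{ timestamp: 1, value: 200 }, { timestamp: 2, value: 250 }];
const converted = [{ timestamp: 1, value: 12 }, { timestamp: 2, value: 10 }];
const rate = conversionRate(created, converted); // bucket 2: 10 / 250 = 0.04
const fired = belowThreshold(rate, 0.05); // 0.04 < 0.05, so the signal fires
```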
Every node is an endpoint
Here it is in full - the same EMA endpoint that ran as a node in the lead funnel graph, same definition, same validation, same auth:
```ts
// analytics/indicators/ema/definition.ts
const { POST } = createEndpoint({
  scopedTranslation,
  aliases: [EMA_ALIAS],
  method: Methods.POST,
  path: ["system", "unified-interface", "vibe-sense", "indicators", "ema"],
  title: "post.title",
  description: "post.description",
  icon: "activity",
  category: "endpointCategories.analyticsIndicators",
  tags: ["tags.vibeSense"],
  allowedRoles: [UserRole.ADMIN],
  fields: objectField(scopedTranslation, {
    usage: { request: "data", response: true },
    children: {
      source: timeSeriesRequestField(scopedTranslation, { ... }),
      resolution: resolutionRequestField(scopedTranslation, { ... }),
      range: rangeRequestField(scopedTranslation, { ... }),
      lookback: lookbackRequestField(scopedTranslation, { ... }),
      period: requestField(scopedTranslation, {
        fieldType: FieldDataType.NUMBER,
        schema: z.number().int().min(1).max(500),
        label: "post.fields.period.label",
        description: "post.fields.period.description",
      }),
      result: timeSeriesResponseField(scopedTranslation, { ... }),
      meta: nodeMetaResponseField(scopedTranslation, { ... }),
    },
  }),
  errorTypes: { /* all 9 required */ },
  successTypes: { title: "post.success.title", ... },
  examples: { requests: { default: { source: [...], period: 3 } }, ... },
});
```

computeEma()
The SAME endpoint that's a node in your lead funnel graph is also a standalone tool on 13 platforms.
```ts
// analytics/indicators/ema/repository.ts
// Pure computation. No DB access.
export class EmaIndicatorRepository {
  static computeEma(points: TimeSeries, period: number): TimeSeries {
    if (points.length === 0) return [];
    const k = 2 / (period + 1);
    const result: TimeSeries = [];
    let ema: number | undefined;
    for (const p of points) {
      ema = ema === undefined ? p.value : p.value * k + ema * (1 - k);
      result.push({ timestamp: p.timestamp, value: ema });
    }
    return result;
  }
}

// route.ts wires it:
// handler: ({ data }) => {
//   const result = EmaIndicatorRepository.computeEma(data.source, data.period);
//   return success({ result, meta: { ... } });
// }
```

Domain-owned data sources
One of the architectural decisions I'm most satisfied with: data sources live with their domain, not in some central vibe-sense/ directory.
leads/data-sources/leads-created knows about the leads table. It imports from leads/db. It uses LeadStatus from leads/enum. If you delete the leads module, the data sources go with it. Nothing orphaned.
Pure computation - EMA, RSI, MACD, Bollinger Bands, clamp, delta, window average. No domain knowledge. Call them on any data source.
At startup, the indicator registry auto-discovers both kinds: data source endpoints and indicator endpoints each register as node definitions. You add a new domain, you add data-sources/ endpoints, you export graphSeeds. They appear.
The domain owns its own observability.
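Stripped of the SQL, a data source like leads-created is "count rows per time bucket over a range, sparsely". A sketch with an in-memory array standing in for the leads table - names and shapes hypothetical:

```typescript
type Point = { timestamp: number; value: number };

// Stand-in for the leads table: one createdAt (ms epoch) per lead.
const leadCreatedAt = [0, 1_000, 1_500, 3_600_000, 3_700_000];

// Sketch of a leads-created data source: count leads per resolution bucket.
// Sparse by design - buckets with no leads produce no data point.
function leadsCreated(
  rangeStart: number,
  rangeEnd: number,
  resolutionMs: number,
): Point[] {
  const buckets = new Map<number, number>();
  for (const t of leadCreatedAt) {
    if (t < rangeStart || t >= rangeEnd) continue;
    const bucket = Math.floor((t - rangeStart) / resolutionMs) * resolutionMs + rangeStart;
    buckets.set(bucket, (buckets.get(bucket) ?? 0) + 1);
  }
  return [...buckets.entries()]
    .sort(([a], [b]) => a - b)
    .map(([timestamp, value]) => ({ timestamp, value }));
}

// Two hourly buckets have data: 3 leads in hour 0, 2 leads in hour 1.
const hourly = leadsCreated(0, 7_200_000, 3_600_000);
```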
Versioning, backtest, persist
Three things that make Vibe Sense safe to run in production.
Graphs are versioned. When you edit a graph, you create a new version - never mutate the active one. The new version is a draft. You promote it explicitly. Rollback is trivial.
Before promoting, you can backtest over a historical time range. Conditions evaluate. Signals record. Endpoints never fire. Gate closed.
Persist modes
Every computed data point is written to the datapoints store. For event-based indicators: leads created per minute, credits spent per minute.
Computed on demand, cached, but not stored to the main table. Daily totals, cumulative counts.
Always recomputed live from inputs. EMA outputs, ratios - no storage cost. Lookback auto-extended for warmup.
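The three modes behave like a storage-strategy switch. A sketch with invented mode names - only the behaviors above are from the article:

```typescript
// Invented mode names mapping to the three described behaviors.
type PersistMode =
  | "stored"   // every computed point written to the datapoints store
  | "cached"   // computed on demand and cached, not written to the main table
  | "virtual"; // always recomputed live from inputs, no storage cost

// Sketch: only "stored" outputs get written to the datapoints store after a run.
function shouldWrite(mode: PersistMode): boolean {
  return mode === "stored";
}
```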
What ships vs. what's coming
Production-ready today
Full engine: data source endpoints, indicator endpoints (EMA, RSI, MACD, Bollinger, clamp, delta, window), threshold evaluators, transformer nodes, endpoint action nodes.
Topological execution via graph walker. Multi-resolution support with automatic scale-up/down. Lookback-aware range extension.
Versioning, backtest mode with full run history, signal persistence as audit trail.
CLI access - vibe vibe-sense-ema, vibe vibe-sense-rsi, any indicator endpoint, callable standalone.
MCP registration - indicator endpoints show up in the tool list. Thea can call indicators directly.
Seed graphs: 29 graphs across 9 domains - leads, credits, users, subscriptions, referrals, newsletters, payments, messenger, and AI chat - plus system health. All run out of the box on vibe dev.
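The "topological execution via graph walker" above is the classic dependency-ordering problem: a node runs only after everything upstream of it has run. A sketch using Kahn's algorithm over hypothetical node ids:

```typescript
// Kahn's algorithm: emit nodes only after all their upstream inputs ran.
function topoOrder(edges: [string, string][]): string[] {
  const indegree = new Map<string, number>();
  const downstream = new Map<string, string[]>();
  for (const [from, to] of edges) {
    if (!indegree.has(from)) indegree.set(from, 0);
    indegree.set(to, (indegree.get(to) ?? 0) + 1);
    downstream.set(from, [...(downstream.get(from) ?? []), to]);
  }
  const ready = [...indegree].filter(([, d]) => d === 0).map(([id]) => id);
  const order: string[] = [];
  while (ready.length > 0) {
    const id = ready.shift()!;
    order.push(id);
    for (const next of downstream.get(id) ?? []) {
      const d = indegree.get(next)! - 1;
      indegree.set(next, d);
      if (d === 0) ready.push(next);
    }
  }
  if (order.length !== indegree.size) throw new Error("cycle in graph");
  return order;
}

// Hypothetical ids: data source feeds EMA, EMA feeds the evaluator,
// the evaluator feeds the action.
const order = topoOrder([
  ["leads-created", "ema"],
  ["ema", "low-velocity"],
  ["low-velocity", "notify-thea"],
]);
```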
Coming next
Visual drag-and-drop graph builder. The engine is fully built. The canvas editor is the next chapter.
Trading endpoints. Price data source endpoints, exchange API endpoints, order execution wired as endpoint nodes. A trading graph is just another graph.
Strategy marketplace. Once you can build graphs visually, you can share them. Import a pre-built lead monitoring strategy. Fork it, modify it.
What this actually is
Every business process that can be described as: given this data, when these conditions are met, do this - that's a Vibe Sense graph. Monitoring, yes. Alerting, yes. But also: automated lead qualification, revenue anomaly detection, credit economy balancing, marketing automation.
The trading bot had this architecture right. Indicators, evaluators, actions, backtest mode. The thing I got wrong was building it in isolation. In Octane, EMA was locked inside the pipeline. In next-vibe, EMA is a first-class endpoint.
You don't build a monitoring system. You build your platform. The monitoring system is already there.
```shell
git clone https://github.com/techfreaque/next-vibe
cd next-vibe
cp .env.example .env
bun install
vibe dev
```

vibe dev starts PostgreSQL in Docker, runs migrations, seeds data, seeds the Vibe Sense graphs, backfills 365 days of historical data, and launches the dev server. Open localhost:3000. The graphs are running.
Define it once. It exists everywhere. The pipeline is just endpoints calling endpoints.