Real-Time Apps Shouldn't Be This Hard

REST API. Cache layer. Message broker. Invalidation logic. What if you replaced all of it with one database?

01

Collapse the Stack

Here's what "real-time" looks like with a traditional database — and what happens when a streaming database replaces the middle layer entirely.

[Diagram: the traditional stack (Client → Server → Redis → Postgres) collapsing into the streaming stack (Client ← push ← RisingWave)]

The traditional real-time stack. Four services, all glue.


02

Why It's So Complex

Your app polls the database. Every request re-scans from scratch. As data grows, queries slow down — so you add a cache, then a broker, then invalidation logic. Each layer exists because the one below it can't keep up.

A streaming database eliminates the root cause. It pre-computes results in a materialized view and maintains them incrementally. Reads are constant-time. No cache needed. No broker. No glue code.
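The gap between re-scanning and incremental maintenance is easy to see in miniature. Here is a toy TypeScript sketch (in-memory, not RisingWave internals): the batch-style read scans every row on each request, while the "view" is updated once per write and then read in constant time.

```typescript
// Toy model: why incremental maintenance turns O(n) scans into O(1) reads.
type Trade = { market: string; price: number };

const trades: Trade[] = [];

// Batch approach: every read re-scans the whole table.
function avgPriceByScan(market: string): number {
  const rows = trades.filter((t) => t.market === market); // O(n) per read
  return rows.reduce((s, t) => s + t.price, 0) / rows.length;
}

// Streaming approach: a "materialized view" kept up to date on every write.
const view = new Map<string, { sum: number; count: number }>();

function insert(trade: Trade): void {
  trades.push(trade);
  const agg = view.get(trade.market) ?? { sum: 0, count: 0 };
  agg.sum += trade.price;
  agg.count += 1;
  view.set(trade.market, agg); // O(1) incremental update on write
}

function avgPriceFromView(market: string): number {
  const agg = view.get(market)!; // O(1) read, no scan
  return agg.sum / agg.count;
}

insert({ market: "rain-tomorrow", price: 0.25 });
insert({ market: "rain-tomorrow", price: 0.75 });
console.log(avgPriceByScan("rain-tomorrow"));   // 0.5
console.log(avgPriceFromView("rain-tomorrow")); // 0.5, same answer, no scan
```

The scan stays correct but gets slower as `trades` grows; the view's read cost never changes because the work moved to write time.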

[Diagram: batch path, the app issues SELECT * against Postgres every 5s and latency climbs from 120ms to 340ms, 890ms, 2.4s as data grows; stream path, the app inserts into RisingWave and updates are pushed in ~1ms, staying fast.]

03

See It Work

The prediction market demo runs two identical React apps against the same database. Same frontend code, same useQuery() calls. One server reads from a materialized view. The other re-queries the raw table.

As trades flow in, the simplified stack stays snappy — constant-time reads, no cache warming, no invalidation. The traditional approach degrades under the same load. Same code, different architecture, visible difference.

[Diagram: trades flow into RisingWave with a materialized view; the MV server on :8081 serves the live app on :3002 with 200ms updates over WebSocket, while the re-query server on :8082 serves the delayed app on :3003 at 5s intervals.]
Same useQuery('marketPrices') call. Same React components. The only difference is how the server resolves the query.

Prediction market demo — MV-backed (left) vs re-query (right)

Benchmark dashboard — live latency and throughput under load (4x speed)

04

Data Flow

Trades stream in and get inserted into RisingWave. A materialized view incrementally maintains a windowed aggregation — average price, total volume, trade count per market — over the last 50 trades. No external workers. No scheduled jobs. The database does it.
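That windowed aggregation can be sketched in miniature. The toy TypeScript model below (names and window mechanics are illustrative, not RisingWave's implementation) keeps per-market aggregates over the last 50 trades by adding each new trade and retracting the one that falls out of the window:

```typescript
// Toy sketch of a per-market sliding window over the last 50 trades:
// each insert adds the new trade and retracts the oldest one, so the
// aggregates never require a full re-scan.
const WINDOW = 50;

type Trade = { price: number; amount: number };
type Agg = { window: Trade[]; sumPrice: number; volume: number; count: number };

const perMarket = new Map<string, Agg>();

function onTrade(market: string, trade: Trade): void {
  const agg =
    perMarket.get(market) ?? { window: [], sumPrice: 0, volume: 0, count: 0 };
  agg.window.push(trade);
  agg.sumPrice += trade.price;
  agg.volume += trade.amount;
  agg.count += 1;
  if (agg.window.length > WINDOW) {
    const old = agg.window.shift()!; // retract the trade that left the window
    agg.sumPrice -= old.price;
    agg.volume -= old.amount;
    agg.count -= 1;
  }
  perMarket.set(market, agg);
}

function stats(market: string) {
  const a = perMarket.get(market)!;
  return { avgPrice: a.sumPrice / a.count, volume: a.volume, trades: a.count };
}

for (let i = 0; i < 120; i++) onTrade("btc-100k", { price: 0.5, amount: 2 });
console.log(stats("btc-100k")); // { avgPrice: 0.5, volume: 100, trades: 50 }
```

Only the affected market's entry is touched on each trade; the other markets' aggregates are untouched, which is the same property the real MV relies on.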

Both servers read from the same pre-computed view. One polls every 200ms. The other every 5 seconds. Same read path, different frequencies — that's the entire gap you see in the demo.

[Diagram: data pipeline. A data source (Polymarket WebSocket or synthetic) inserts trades; an automatically maintained MV computes AVG(price), SUM(amount), and COUNT(*) per market over the last 50 trades; server :8081 reads it every 200ms for the live app on :3002, server :8082 every 5s for the delayed app on :3003.]
The materialized view is incrementally maintained — when a new trade arrives, only the affected market's aggregation updates. No cron jobs, no worker queues. The database handles the computation that used to require a separate service.

05

Under the Hood

Surf is the framework that makes collapsing this stack practical. Declare your schema and queries in TypeScript. Surf compiles them to SQL — CREATE TABLE, CREATE MATERIALIZED VIEW — and applies them to RisingWave on server start.
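As a rough sketch of that compile step (the helper names here are hypothetical, not Surf's actual API), a schema declared as plain TypeScript data can be lowered to the DDL applied on server start:

```typescript
// Hypothetical sketch of schema-to-SQL compilation; Surf's real API differs.
type Column = { name: string; type: "VARCHAR" | "DOUBLE" | "TIMESTAMP" };

function compileTable(table: string, columns: Column[]): string {
  const cols = columns.map((c) => `${c.name} ${c.type}`).join(", ");
  return `CREATE TABLE IF NOT EXISTS ${table} (${cols});`;
}

function compileView(view: string, selectSql: string): string {
  return `CREATE MATERIALIZED VIEW IF NOT EXISTS ${view} AS ${selectSql};`;
}

const ddl = [
  compileTable("trades", [
    { name: "market", type: "VARCHAR" },
    { name: "price", type: "DOUBLE" },
    { name: "amount", type: "DOUBLE" },
  ]),
  compileView(
    "market_prices",
    "SELECT market, AVG(price) AS avg_price, SUM(amount) AS volume, " +
      "COUNT(*) AS trades FROM trades GROUP BY market"
  ),
];

// On server start, each statement would be executed against RisingWave.
console.log(ddl.join("\n"));
```

The point is that the table and view definitions live next to the queries that read them, so there is no separate migration step to keep in sync.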

A subscription manager detects changes and pushes updates through WebSockets. On the client, useQuery() subscribes and re-renders when data changes. No REST endpoints. No cache. No broker. No glue.
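The push path can be sketched as a small pub/sub fan-out. Everything below is illustrative (the class name mirrors the diagram's SubscriptionManager, but the real implementation and API may differ): subscribers register per query name, and a detected change is published to all of them, the way the WebSocket layer re-renders useQuery() clients.

```typescript
// Illustrative pub/sub core of a subscription manager: fan out changed
// query results to every subscriber registered under that query name.
type Listener<T> = (rows: T) => void;

class SubscriptionManager<T> {
  private listeners = new Map<string, Set<Listener<T>>>();

  subscribe(queryName: string, listener: Listener<T>): () => void {
    const set = this.listeners.get(queryName) ?? new Set<Listener<T>>();
    set.add(listener);
    this.listeners.set(queryName, set);
    return () => set.delete(listener); // unsubscribe handle
  }

  // Called when a change to a query's result set is detected.
  publish(queryName: string, rows: T): void {
    for (const l of this.listeners.get(queryName) ?? []) l(rows);
  }
}

type MarketRow = { market: string; avgPrice: number };
const manager = new SubscriptionManager<MarketRow[]>();

// A client's useQuery('marketPrices') boils down to a subscription:
const received: MarketRow[][] = [];
const unsubscribe = manager.subscribe("marketPrices", (rows) =>
  received.push(rows)
);

// A new trade lands, the MV updates, and the server pushes:
manager.publish("marketPrices", [{ market: "btc-100k", avgPrice: 0.62 }]);
unsubscribe();
console.log(received.length); // 1
```

In the real stack the listener side is a WebSocket connection rather than an in-process callback, but the shape of the flow is the same: no polling loop on the client, just a push on change.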

[Diagram: your code declares defineSchema(), query(), and mutation(); Surf compiles them to SQL (CREATE TABLE, CREATE MATERIALIZED VIEW) and applies it to RisingWave; a SubscriptionManager detects changes and a WebSocket server pushes them; React re-renders via useQuery() and useMutation().]
From INSERT to React re-render — one database, one WebSocket. No REST layer, no cache, no message broker. That's the simplification in practice.