Run AI-built apps at C-speed, with no runtime overhead

DataVec gives AI a fast, local runtime to compile to. Your app logic stays high-level, runs like hand-written C, and costs a fraction to operate.

104 KB actors | 300,000 invocations/sec per core | Zero cold‑starts

Why DataVec?

The serverless paradigm has unlocked horizontal scale but sacrificed vertical scale. DataVec addresses this by inventing zero‑overhead abstractions: high‑level interfaces for server and service authoring that enable rapid development while delivering performance that meets or exceeds state‑of‑the‑art specialized servers.

See our deck for a deeper dive.

Current Platforms

  • Heavy cold starts (10–100 ms)
  • High per‑invocation overhead
  • Stateless, hash‑partitioned scaling

DataVec

  • Instant actor dispatch (µs-scale; a single page fault)
  • 8 KB memory per actor + 96 KB per socket (tunable)
  • Ontological locality for state persistence

Our Approach

mnvkd is a C service stack composed of four frameworks:

  • Threading: Novel M:1 micro-process coroutine library (see the sketch after this list).
  • Actor: Deductive poller micro‑kernel operating across isolated huge‑page micro‑heaps.
  • Service: Virtual forking‑server built on micro‑processes.
  • Cloud Function: WinterTC framework maintaining state in micro‑processes.
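
To make the threading model concrete, below is a minimal sketch of the stackless-coroutine technique that M:1 micro-process threading builds on. This is illustrative C only; the CO_* macros and the counter example are hypothetical, not mnvkd's actual interface.

    /* Illustrative only: a Duff's-device stackless coroutine. Each call
     * resumes at the statement after the last yield. */
    #include <stdio.h>

    #define CO_BEGIN(st) switch ((st)->line) { case 0:
    #define CO_YIELD(st) do { (st)->line = __LINE__; return; case __LINE__:; } while (0)
    #define CO_END(st)   } (st)->line = 0

    struct counter_state { int line; int i; };

    static void counter(struct counter_state *st) {
        CO_BEGIN(st);
        for (st->i = 0; st->i < 3; st->i++) {
            printf("tick %d\n", st->i);
            CO_YIELD(st);   /* suspend; state persists in the struct */
        }
        CO_END(st);
    }

    int main(void) {
        struct counter_state st = {0};
        counter(&st);  /* prints "tick 0", then yields */
        counter(&st);  /* resumes the loop, prints "tick 1" */
        counter(&st);  /* prints "tick 2" */
        return 0;
    }

Because each continuation is a plain struct rather than an OS stack, vast numbers of them fit in a few pages, which is what makes micro-process density and microsecond dispatch possible.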

AI-Powered Development

Today’s AI startups promise no‑code app building—but they struggle with platform code, because platforms manage resources in space and time, which requires spatial reasoning—something AI still handles poorly. As a result, AI‑generated apps default to high-level platforms (e.g., serverless, managed runtimes), which are easy for AI to target but deliver unhappy trade-offs: bloated runtime overhead, poor performance, and skyrocketing costs at scale.

DataVec flips that script. We expose familiar, high‑level interfaces (the ones people use in expensive managed runtimes) on a lean, low-level platform built from the ground up for efficiency. Then we let AI translate “slow” platform calls into our fast, resource-local primitives. In other words, we turn a problem AI is bad at (writing low-level platform code) into one it’s actually good at (translating from one API to another). The result: AI-assisted app development that runs like hand-optimized C—without the C-level complexity.

Key Features

Performance

Benchmarks demonstrate over 300,000 full HTTP requests per second per core on a commodity i7, with consistent sub‑millisecond median latency and deterministic memory usage.

Ontological Objects and Locality of Reference

We invented the locality method of ontological objects, a novel paradigm complementing lambda calculus. All framework levels are cache‑aware and designed to leverage modern virtual memory and processor capabilities. While unrestricted C is supported, selective, cache‑aware encapsulation enables extreme performance.
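
As a rough sketch of the locality idea (again illustrative C, not mnvkd's API), the micro-heap below keeps all of an actor's objects in one contiguous region, so traversing the actor's state touches a minimal set of pages and cache lines; the 8 KB size echoes the per-actor figure above but is otherwise an arbitrary assumption.

    /* Hypothetical bump-allocating micro-heap for one actor. */
    #include <stddef.h>
    #include <stdint.h>

    #define MICROHEAP_SIZE (8 * 1024)   /* 8 KB, matching the per-actor figure */

    struct microheap {
        size_t  used;
        uint8_t mem[MICROHEAP_SIZE];
    };

    /* Objects allocated together stay adjacent, so related state shares
     * cache lines; the fixed size makes memory use deterministic. */
    static void *heap_alloc(struct microheap *h, size_t n) {
        n = (n + 15u) & ~(size_t)15u;        /* round up to 16-byte alignment */
        if (h->used + n > MICROHEAP_SIZE)
            return NULL;                     /* hard spatial limit, no growth */
        void *p = &h->mem[h->used];
        h->used += n;
        return p;
    }

    int main(void) {
        static struct microheap h;           /* static: zero-initialized */
        int  *counter = heap_alloc(&h, sizeof *counter);
        char *name    = heap_alloc(&h, 32);  /* lands right after counter */
        return (counter && name) ? 0 : 1;
    }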

Zero‑overhead abstractions deliver high‑level threaded I/O and OTP‑style actor interfaces with exceptional performance. At their core, mnvkd coroutine continuations are three times faster than Go goroutines, and throughput further improves due to application code locality, unencumbered by Go’s work‑stealing scheduler or garbage collection. Smaller cache sizes amplify mnvkd’s performance advantage because it explicitly optimizes for spatial limits.
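
For flavor, an OTP-style actor in C reduces to a private state struct plus a receive function that handles one message at a time, so the state needs no locks. The message and actor types below are hypothetical illustrations, not mnvkd's interface.

    #include <stdio.h>

    /* Hypothetical message and actor types for illustration only. */
    enum msg_type { MSG_INC, MSG_GET };

    struct msg { enum msg_type type; int reply; };

    struct counter_actor { int value; };

    /* OTP-style receive: the actor owns its state and handles exactly
     * one message at a time, so no locking is required. */
    static void counter_receive(struct counter_actor *self, struct msg *m) {
        switch (m->type) {
        case MSG_INC: self->value++;          break;
        case MSG_GET: m->reply = self->value; break;
        }
    }

    int main(void) {
        struct counter_actor a = {0};
        struct msg inc = { MSG_INC, 0 };
        struct msg get = { MSG_GET, 0 };
        counter_receive(&a, &inc);
        counter_receive(&a, &get);
        printf("value = %d\n", get.reply);  /* prints "value = 1" */
        return 0;
    }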

Consider the unexpected benefit of ontological objects: improved lambda calculus. Functional paradigms ease object management, while ontological objects enable more effective function optimization.

mnvkd Virtual Sockets

In mnvkd, virtual sockets (vk_vectoring) use bidirectional tx/rx iovec ring buffers with automatic wrapping. They integrate directly with scheduling and network polling, avoiding low‑water marks and complex flushing logic. Deductive polling occurs with automatic registration and flushing at optimal points, eliminating unnecessary copies and syscalls.
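
The sketch below shows the general technique in illustrative C (not vk_vectoring's actual interface): a ring buffer exposes its readable bytes as at most two iovecs, so even a region that wraps past the end of the buffer can be handed to writev() in one syscall, with no copying.

    #include <sys/uio.h>
    #include <stddef.h>
    #include <stdint.h>

    struct ring {
        uint8_t buf[4096];
        size_t  head;   /* index of the next byte to read */
        size_t  len;    /* number of bytes stored */
    };

    /* Fill iov[0..1] with the readable spans; returns the iovec count.
     * A wrapped region yields two spans, both usable by writev(). */
    static int ring_tx_iov(struct ring *r, struct iovec iov[2]) {
        size_t cap   = sizeof(r->buf);
        size_t first = cap - r->head;        /* bytes before the wrap point */
        if (first > r->len)
            first = r->len;
        iov[0].iov_base = &r->buf[r->head];
        iov[0].iov_len  = first;
        if (first == r->len)
            return r->len ? 1 : 0;
        iov[1].iov_base = r->buf;            /* the wrapped remainder */
        iov[1].iov_len  = r->len - first;
        return 2;
    }

    /* Usage: int n = ring_tx_iov(&r, iov); if (n) writev(fd, iov, n); */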

Despite being lower‑level, these I/O interfaces are higher‑level than standard streams (even Python file objects), enabling easy development of state‑of‑the‑art socket‑based servers. mnvkd is a server authoring toolkit in the same way that SQLite is a database authoring toolkit. In fact, the embedded nature of SQLite pairs naturally with the locality paradigm of mnvkd.

Use Cases

Cloud Function Framework

Build and deploy full cloud & edge applications with a C WinterTC cloud function interface providing isolated yet unbounded compute.
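
WinterTC standardizes the fetch-style request/response handler familiar from edge runtimes. As a purely hypothetical sketch of what that shape could look like in C (the types and names below are assumptions, not mnvkd's published interface):

    /* Hypothetical fetch-style handler; the runtime would dispatch each
     * incoming request to it inside an isolated micro-process. */
    #include <string.h>

    struct request  { const char *method; const char *path; };
    struct response { int status; const char *body; };

    static struct response fetch_handler(const struct request *req) {
        if (strcmp(req->path, "/health") == 0)
            return (struct response){ 200, "ok" };
        return (struct response){ 404, "not found" };
    }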

Service Framework

Build data services with custom protocols using the Super-Server interface.

Actor Framework

Execute soft‑real‑time actors for Edge AI and event‑driven tasks with <2 ms latency.

Threading Framework

Power deterministic, high-throughput coroutines (300K invocations/sec) with message-passing I/O aggregation.

Roadmap

Team

Ben Woolley
Platform Development. A generalist with deep experience across all levels, from web front-ends to operating system internals. Decades of marketing technology experience.
Tony Davydets
Application Development. A prolific application developer who can build novel products on any platform. Decades of government contracting experience.
Shane Kutzer
Business Development. The king of customer loyalty. Decades of experience operating multiple businesses.

Contact Us

Email
info@datavec.com
LinkedIn
DataVec LinkedIn Page.