Cloud Function Framework
Build and deploy full cloud & edge applications with a C WinterTC cloud function interface providing isolated yet unbounded compute.
The serverless paradigm has unlocked horizontal scale but sacrificed vertical scale. DataVec addresses this by inventing zero‑overhead abstractions that enable rapid development with high‑level interfaces for server and service authoring, while delivering performance that meets or exceeds state‑of‑the‑art specialized servers.
mnvkd is a C service stack composed of four frameworks: a cloud function framework, a protocol super-server, a soft-real-time actor framework, and a coroutine framework.
Today’s AI startups promise no‑code app building—but they struggle with platform code, because platforms manage resources in space and time, which requires spatial reasoning—something AI still handles poorly. As a result, AI‑generated apps default to high-level platforms (e.g., serverless, managed runtimes), which are easy for AI to target but deliver unhappy trade-offs: bloated runtime overhead, poor performance, and skyrocketing costs at scale. DataVec flips that script. We expose familiar, high‑level interfaces (the ones people use in expensive managed runtimes) on a lean, low-level platform built from the ground up for efficiency. Then we let AI translate “slow” platform calls into our fast, resource-local primitives. In other words, we turn a problem AI is bad at (writing low-level platform code) into one it’s actually good at (translating from one API to another). The result: AI-assisted app development that runs like hand-optimized C—without the C-level complexity.
Benchmarks demonstrate over 300,000 full HTTP requests per second per core on a commodity i7, with consistent sub‑millisecond median latency and deterministic memory usage.
We invented the locality method of ontological objects, a novel paradigm complementing lambda calculus. All framework levels are cache‑aware and designed to leverage modern virtual memory and processor capabilities. While unrestricted C is supported, selective, cache‑aware encapsulation enables extreme performance.
Zero‑overhead abstractions deliver high‑level threaded I/O and OTP‑style actor interfaces with exceptional performance. At their core, mnvkd coroutine continuations are three times faster than Go goroutines, and throughput improves further due to application code locality, unencumbered by Go's work‑stealing scheduler or garbage collection. Smaller cache sizes amplify mnvkd's performance advantage because it explicitly optimizes for spatial limits.
An unexpected benefit of ontological objects is improved lambda calculus: functional paradigms ease object management, while ontological objects enable more effective function optimization.
mnvkd Virtual Sockets
In mnvkd, virtual sockets (vk_vectoring) use bidirectional tx/rx iovec ring buffers with automatic wrapping. They integrate directly with scheduling and network polling, avoiding low‑water marks and complex flushing logic. Deductive polling occurs with automatic registration and flushing at optimal points, eliminating unnecessary copies and syscalls.
Despite being lower‑level, these I/O interfaces are higher‑level than standard streams, even Python file objects, enabling easy development of state‑of‑the‑art socket‑based servers. mnvkd is a server authoring toolkit in the same way that SQLite is a database authoring toolkit. In fact, the embedded nature of SQLite pairs naturally with the locality paradigm of mnvkd.
Cloud Functions: build and deploy full cloud & edge applications with a C WinterTC cloud function interface providing isolated yet unbounded compute.
Super-Server: build data services with custom protocols using the Super-Server interface.
Actors: execute soft‑real‑time actors for Edge AI and event‑driven tasks with <2 ms latency.
Coroutines: power deterministic, high-throughput coroutines (300K invocations/sec) with message-passing I/O aggregation.
info@datavec.com