Benchmark Results

This page mixes current runtime metrics and deeper R&D measurements. Start with the reproduction commands and table notes before comparing figures directly.

TEST ENVIRONMENT

Role             | Provider      | CPU                 | Memory
Benchmark server | Hetzner AX102 | AMD Ryzen 9 7950X3D | 128 GB DDR5
Demo gateway     | Helsinki      | AMD EPYC 9454       | 128 GB DDR5
Training         | Vast.ai       | NVIDIA A100 80 GB   | --

EXECUTION LATENCY

Operation                 | Requests  | Errors | p50      | p99
Cached execution (.syn)   | 1,000,000 | 0      | 0.004 ms | 0.013 ms
Cold start (.syn)         | --        | --     | 0.063 ms | 0.539 ms
Python transpile (cold)   | --        | --     | 0.678 ms | --
Python transpile (cached) | 1,000     | 0      | 0.005 ms | 0.015 ms

"Cached" means the compiled Wasm module is reused from the moka cache; "cold start" means the module is compiled from source on that request. Real-world AI agent traffic is a mix of both.
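The warm/cold split above can be sketched as a compile cache keyed by a hash of the source (moka is the Rust cache the runtime actually uses; this Python sketch only illustrates the lookup pattern, and `compile_wasm` is a hypothetical stand-in for the real compile step):

```python
import hashlib

_module_cache: dict[str, object] = {}

def compile_wasm(source: str) -> object:
    # Hypothetical stand-in for the real .syn -> Wasm compilation.
    return ("compiled", source)

def get_module(source: str) -> object:
    """Return a compiled module, reusing the cached copy when this
    source has been seen before (the 'cached execution' path)."""
    key = hashlib.sha256(source.encode()).hexdigest()
    if key not in _module_cache:           # cold start: compile from source
        _module_cache[key] = compile_wasm(source)
    return _module_cache[key]              # warm path: reuse compiled Wasm
```

The p50 gap in the table (0.004 ms warm vs 0.063 ms cold) is the cost of the compile step skipped on the warm path.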

PLATFORM COMPARISON

Metric                 | Synapse (measured)     | AWS Lambda   | E2B         | Cloudflare Workers
Cold Start             | <2 ms                  | 200-500 ms * | ~200 ms *   | <5 ms *
Memory / Sandbox       | 64 KB                  | 128 MB min * | ~150 MB *   | 128 MB *
Sandboxes / $60 Server | ~130,000 (theoretical) | N/A          | N/A         | N/A
Isolation              | Wasm Sandbox           | Firecracker  | Firecracker | V8 Isolate

* Competitor figures sourced from their published documentation as of March 2026. Sandbox density is a theoretical calculation based on 64KB arena size — not a tested concurrent load. Cost comparison removed pending independent verification.
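The ~130,000 figure follows from dividing server RAM by the 64 KB arena size. Assuming 8 GiB of RAM for a $60/month server (an assumption; the page does not state that server's RAM) and ignoring all runtime overhead:

```python
ARENA_BYTES = 64 * 1024     # 64 KiB sandbox arena
RAM_BYTES = 8 * 1024**3     # assumed 8 GiB on a $60/mo server

sandboxes = RAM_BYTES // ARENA_BYTES
print(sandboxes)  # 131072, i.e. the ~130K theoretical ceiling
```

This is an upper bound on density, not a measured concurrent load, which is why the table labels it theoretical.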

WORKLOAD COMPARISON

Workload      | Docker | Synapse  | Speedup | Energy reduction
noop          | 233 ms | 0.015 ms | 15,533x | 7.4x
arithmetic    | 246 ms | 0.025 ms | 9,840x  | 7.9x
fibonacci     | 321 ms | 2.261 ms | 142x    | 9.1x
prime_sieve   | 251 ms | 0.721 ms | 348x    | 7.6x
memory_stress | 248 ms | 0.033 ms | 7,515x  | 8.0x

All measurements from real runs on Hetzner AX102 (AMD Ryzen 9 7950X3D, 128 GB DDR5).
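The speedup column is simply the ratio of the two latency columns, rounded to the nearest integer (e.g. for noop, 233 ms / 0.015 ms ≈ 15,533x), which can be checked directly:

```python
# name: (docker_ms, synapse_ms, reported_speedup) from the table above
workloads = {
    "noop":          (233, 0.015, 15_533),
    "arithmetic":    (246, 0.025,  9_840),
    "fibonacci":     (321, 2.261,    142),
    "prime_sieve":   (251, 0.721,    348),
    "memory_stress": (248, 0.033,  7_515),
}
for name, (docker_ms, synapse_ms, reported) in workloads.items():
    speedup = docker_ms / synapse_ms
    assert round(speedup) == reported, (name, speedup)
```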

HEADLINE NUMBERS

9x    — Energy Reduction (RAPL-measured on AMD EPYC)
797x  — Reward Eval Acceleration (88,426 evals/sec on a single CPU core)
47x   — Python Subset Acceleration (1M evaluations, integer arithmetic)
<1 ms — Formal Verification (Z3 theorem prover, included in pipeline)

ENERGY MEASUREMENT

Measured via RAPL (Running Average Power Limit) instrumentation on AMD EPYC.

Synapse | ~1.6 mJ per execution
Docker  | ~12 mJ per execution
Ratio   | 7-9x less energy
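On Linux, RAPL package energy is exposed as a monotonically increasing microjoule counter under the powercap sysfs tree, so a measurement is a before/after read around the workload. A minimal sketch (the `intel-rapl:0` node name and access permissions vary by kernel and machine; the actual study used benchmarks/energy_benchmark.sh, not this snippet):

```python
from pathlib import Path

# Package-level energy counter in microjoules (powercap sysfs ABI).
RAPL = Path("/sys/class/powercap/intel-rapl:0/energy_uj")

def measure_energy_uj(fn) -> int:
    """Energy consumed while fn() runs, in microjoules.
    Ignores counter wraparound; needs read access to the sysfs node."""
    before = int(RAPL.read_text())
    fn()
    after = int(RAPL.read_text())
    return after - before

# At the reported per-execution figures the ratio is:
ratio = 12.0 / 1.6   # Docker mJ / Synapse mJ = 7.5
```

7.5x sits at the low end of the quoted 7-9x range; the per-workload table above spans 7.4x to 9.1x.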

REPRODUCTION

SYNAPSE_START_GATEWAY=1 python3 tests/release_test.py
SYNAPSE_START_GATEWAY=1 python3 tests/test_gateway_verification_http.py
SYNAPSE_START_GATEWAY=1 python3 tests/test_self_hosted_examples.py
python3 benchmarks/eval_only_benchmark.py

# Extended energy study on benchmark hardware
bash benchmarks/energy_benchmark.sh --iterations 1000

The first four commands are the current product proof set. The energy benchmark is a deeper methodology pass for the benchmark server, not a prerequisite for evaluating the runtime locally.

TECHNOLOGY

Self-Hosted Compiler
The .syn compiler is written in .syn itself (~1,685 lines across scanner, parser, codegen, emitter). It compiles to Wasm and runs on the same runtime it targets.
DOM UI from Wasm
A 253-line .syn program (boreal_ui.syn) implements a drag-and-drop node graph using FFI calls to create_element, append_child, and set_text — replacing React for certain UI patterns.
Full Gateway in .syn
gateway.syn (~1,200 lines) implements a complete API gateway including HTTP routing, request parsing, and response formatting — entirely in the .syn language.
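The routing core of a gateway like gateway.syn reduces to a dispatch table from (method, path) to handler. A sketch in Python rather than .syn, with a hypothetical handler name (the actual routes and handlers in gateway.syn are not documented here):

```python
def handle_execute(body: str) -> tuple[int, str]:
    # Hypothetical handler: in the real gateway this would invoke the runtime.
    return 200, f"executed: {body}"

ROUTES = {
    ("POST", "/execute"): handle_execute,   # (method, path) -> handler
}

def route(method: str, path: str, body: str) -> tuple[int, str]:
    """Minimal dispatch: look up the handler for this method+path, else 404."""
    handler = ROUTES.get((method, path))
    if handler is None:
        return 404, "not found"
    return handler(body)
```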

HOW TO READ THE NUMBERS

1. These tables mix current product metrics and deeper R&D measurements. Read the notes on each table before comparing figures directly.
2. The strongest current product metrics are bounded execution, restricted-Python acceleration on documented workloads, and the self-hosted verification boundary.
3. Some figures describe research demonstrations or methodology-specific benchmarks rather than the default buyer workflow.
4. For external evaluation, pair this page with the self-hosted quickstart, technical brief, and security model.

CAVEATS

  • Competitor figures sourced from published documentation as of March 2026, not our own measurements
  • Sandbox density (~130K) is a theoretical calculation based on 64KB arena size, not a tested concurrent load
  • 797x acceleration compares in-process Wasm FFI (cached instances) versus spawning a Python subprocess per evaluation
  • Distributed benchmarks not yet published
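The 797x caveat can be illustrated directly: a subprocess-per-evaluation baseline pays full interpreter startup on every call, while an in-process call (standing in here for a cached Wasm FFI instance) does not. Absolute timings vary by machine; the ordering does not:

```python
import subprocess
import sys
import time

def eval_in_process() -> int:
    return 2 + 2  # stands in for a call into a cached, in-process Wasm instance

def eval_subprocess() -> int:
    # Fresh interpreter per evaluation: pays full startup cost every time.
    out = subprocess.run(
        [sys.executable, "-c", "print(2 + 2)"],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout)

t0 = time.perf_counter(); eval_in_process();  in_proc_s = time.perf_counter() - t0
t0 = time.perf_counter(); eval_subprocess();  spawn_s   = time.perf_counter() - t0
assert spawn_s > in_proc_s  # interpreter startup dominates, hence the large ratio
```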