Synapse removes layers between model intent and execution: from frameworks and containers down to WebAssembly sandboxes, formal verification, receipts, and eventually sovereign bare-metal compute. The self-hosted runtime is available today. Other areas on this page are research, demonstration, or roadmap work and are labeled accordingly. All claims are backed by reproducible tests, journals, and formal proofs where applicable.
Most AI companies build upward — more frameworks, more cloud services, more abstraction. Synapse builds downward — fewer layers, lower overhead, tighter control, and more deterministic execution. The horizon points toward bare metal. The self-hosted runtime is the product available today.
A complete pipeline where every stage carries formal guarantees. No external ML frameworks. No containers. No cloud APIs. Each step has working code and passing tests.
Transformer training orchestrated by .syn code. Self-attention, layer normalization, residual connections, AdamW optimizer.
Z3 proves training correctness: gradient math, loss convergence, structural integrity. 120+ automated tests.
Self-verifying inference. The program checks its own output against formal bounds before returning results.
Verified self-improvement, composition, and fine-tuning — each with Z3 proofs that safety properties are preserved.
Honest scope: The verified AI training pipeline operates on small models (80–160 parameters) at demonstration scale. Compositional verification scales to 50M+ parameters (8.7ms for 12 layers) but produces conservative bounds due to interval arithmetic. The production execution engine (Rust gateway, Wasm sandboxing, code compilation) is real infrastructure. The verified ML lifecycle (training, self-improvement, federated learning) is research demonstrated via Python scripts, not a production gateway feature. The contribution is the architecture and the proofs, not the parameter count.
Each area of investigation addresses a question that could not be resolved through standard engineering practice or reference to publicly available knowledge. Resolution required systematic experimental investigation.
No existing WebAssembly runtime integrates SMT theorem proving at compile time. We investigated whether Z3 proofs could complete within production latency bounds and whether property invariants could be statically verified without full program analysis.
Z3 verification completes in <1ms for property invariants, ~14ms for expression proofs. The verification pipeline runs in the production gateway.
Self-hosting compilers exist (GCC, Rust, Go) but all run on a full operating system with virtual memory and dynamic heap allocation. A self-hosting compiler constrained to Wasm linear memory with a fixed arena allocator and a 64KB address space had no documented precedent.
The .syn compiler (~2,000 lines of .syn) compiles to WebAssembly and runs on the same runtime it targets. Bootstrap verification confirms identical output to the reference compiler.
Standard autograd systems (PyTorch, JAX) operate within a single process with shared memory. We investigated whether reverse-mode differentiation could function when the forward pass is orchestrated by a sandboxed Wasm module calling tensor operations through host FFI, with gradient computation managed entirely by the host.
Gradient-verified training at multiple scales. Convergence proven on transformer architectures with self-attention, layer normalization, and residual connections. Results match reference implementations to f32 precision.
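A minimal sketch of the orchestration pattern, with hypothetical names: the sandboxed "guest" sees only opaque tensor handles and calls host operations through a narrow FFI-style boundary, while the host retains the values the backward pass needs:

```python
# Host-side tensor store: the sandboxed guest only ever sees opaque
# integer handles, never raw memory.
store = {}
counter = [0]

def new_tensor(value):
    counter[0] += 1
    store[counter[0]] = value
    return counter[0]

def host_mul(a, b):
    # An FFI-style op: the guest passes handles, the host does the math
    # and keeps the operands the backward pass will need.
    return new_tensor(store[a] * store[b])

# "Guest" code orchestrates the forward pass via handles only.
x, w = new_tensor(2.0), new_tensor(3.0)
y = host_mul(x, w)

# Host-managed reverse mode: d(x*w)/dw = x and d(x*w)/dx = w.
grad_w, grad_x = store[x], store[w]
```

The design point is that gradients never cross the sandbox boundary: the guest decides *what* to compute, the host decides *how*, and isolation is preserved.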
Existing neural network verification focuses on inference-time properties (robustness, output bounds). We found no prior work combining SMT solvers with step-level training verification — proving properties about loss monotonicity, gradient correctness, and convergence during training.
Z3 proves training loop structure, gradient formula correctness (8/8 operations, 38ms), output bounds for trained models, and safety preservation across fine-tuning. Three-layer verification: compile-time structural, runtime assertions, and offline formal proofs.
When chaining neural networks (Model A output feeds Model B input), output bounds typically compound — each model adds uncertainty. We investigated whether transformer architectures enable compositional verification where bounds contract rather than expand.
Z3 proves composed output bounds for multi-model pipelines. Architectural properties of transformer networks cause bounds to contract at each composition stage, making composed systems formally safer than individual models.
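A pure-Python sketch of interval-style bound propagation (a simplification of what the Z3 proofs establish) shows how bounds contract through stages whose effective gain is below 1:

```python
def propagate(bounds, weight, bias):
    # Interval arithmetic through y = weight * x + bias.
    lo, hi = bounds
    a, b = weight * lo + bias, weight * hi + bias
    return (min(a, b), max(a, b))

bounds = (-1.0, 1.0)
widths = [bounds[1] - bounds[0]]
for weight, bias in [(0.5, 0.1), (0.5, 0.1)]:  # two contractive stages
    bounds = propagate(bounds, weight, bias)
    widths.append(bounds[1] - bounds[0])

# With |weight| < 1 the interval width strictly shrinks at each stage,
# so the composed pipeline carries tighter bounds than its first model.
assert widths[0] > widths[1] > widths[2]
```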
Industry standard isolation (AWS Firecracker, Docker) requires 128MB+ per sandbox with 100-200ms cold starts. We investigated whether Wasm linear memory could provide equivalent isolation at dramatically lower resource cost.
64KB per sandbox, <2ms cold start, memory wiped between requests. No guest filesystem, no guest network access. Theoretical density of ~130,000 concurrent sandboxes per 128GB server.
DP-SGD (gradient clipping + calibrated Gaussian noise) is the standard for private ML training, but existing implementations rely on empirical testing. We investigated whether Z3 can formally prove that privacy noise calibration, composition bounds, and gradient clipping satisfy their mathematical guarantees.
Z3 proves four properties: noise sigma matches the Gaussian mechanism formula, composition budget is correctly computed, noise is sufficient for claimed epsilon, and gradient clipping holds universally. Training converges despite substantial noise injection.
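Two of these properties are easy to state in a stdlib-only sketch: sigma calibration via the classic analytic Gaussian-mechanism formula, and universal gradient clipping. The parameter values below are hypothetical:

```python
import math

def gaussian_sigma(clip_norm, epsilon, delta):
    # Classic Gaussian-mechanism calibration:
    # sigma >= C * sqrt(2 * ln(1.25 / delta)) / epsilon
    return clip_norm * math.sqrt(2 * math.log(1.25 / delta)) / epsilon

def clip_gradient(grad, clip_norm):
    # Rescale so the gradient's L2 norm never exceeds clip_norm.
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    return [g * scale for g in grad]

sigma = gaussian_sigma(clip_norm=1.0, epsilon=1.0, delta=1e-5)
clipped = clip_gradient([3.0, 4.0], clip_norm=1.0)
clipped_norm = math.sqrt(sum(g * g for g in clipped))
```

The difference in the verified pipeline is quantifier depth: Z3 proves the clipping bound for *all* gradients, not just the sampled ones a test suite happens to exercise.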
Each achievement has working code, automated tests, and formal proofs where applicable. Classification distinguishes genuine novel R&D from applied research.
Novel typed DAG substrate with 7 biologically-inspired operations. Outperforms ternary weight matrices by 48.6 percentage points on temporal tasks (99.2% vs 50.7%, N=5 seeds, p<0.001). The substrate IS the invention.
Ternary Mamba (State Space Model) variant outperforms float on Shakespeare character-level LM. 458K params, N=5 seeds. First demonstration that ternary SSMs can exceed float accuracy. Scaling to 1B+ params underway.
World’s first State Space Model (Mamba) training inside a verified, fuel-metered Wasm sandbox. Full forward/backward through selective scan, SiLU gating, RMSNorm, and AdamW. Every weight update is receipted. 254/254 tests passing. Red-team audited (RT8).
Trained .nerve DAG compiled to a deterministic integer-only program. Provably terminates, bounded memory, zero floating-point operations. Fits in 2 TCP packets. Runs on any CPU.
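The integer-only execution style can be sketched with a hypothetical Q8.8 fixed-point encoding (the actual .nerve compilation format is not specified here); because every step is integer arithmetic, results are bit-identical on any CPU:

```python
SHIFT = 8  # 8 fractional bits: Q8.8 fixed point

def q(x):
    # Encode a float as Q8.8 (used only to set up the test inputs).
    return int(round(x * (1 << SHIFT)))

def fixed_mac(acc, w, x):
    # Multiply-accumulate entirely in integers; >> SHIFT renormalizes
    # the doubled fractional bits after the product.
    return acc + ((w * x) >> SHIFT)

acc = fixed_mac(0, q(0.5), q(2.0))  # 0.5 * 2.0 = 1.0 in Q8.8
```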
Scaffold Architecture: ternary convolution + SpikeMIM spiking classification. All weights ∈ {-1,0,+1}. Adversarially audited (RT7: 7/7 PASS, 3 WARN, 0 FAIL).
Servers in Germany and Finland and a client browser independently execute identical computation and produce bit-identical SHA-256 receipts. Live at synapserun.dev/demo. No blockchain required.
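The receipt mechanism can be sketched with stdlib tools: hash a canonical serialization of inputs and outputs, so every node that performs the same deterministic computation emits the identical digest (the receipt schema below is an assumption, not the production format):

```python
import hashlib
import json

def receipt(inputs, output):
    # Canonical JSON (sorted keys, fixed separators) guarantees every
    # node hashes the exact same bytes.
    payload = json.dumps({"inputs": inputs, "output": output},
                         sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(payload.encode()).hexdigest()

# Two nodes running the same deterministic computation agree bit-for-bit,
# regardless of dictionary insertion order on each machine.
a = receipt({"x": 2, "y": 3}, 6)
b = receipt({"y": 3, "x": 2}, 6)
assert a == b
```

Agreement on the digest is the whole trust mechanism: no consensus protocol is needed when the computation itself is deterministic.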
Every differentiable operation in the autograd engine formally verified by Z3. Dual validation with numerical finite differences confirms agreement at machine precision.
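Dual validation of this kind can be sketched without any framework: compare an analytic derivative against a central finite difference. SiLU is used here as an example op since it appears in the Mamba work above:

```python
import math

def silu(x):
    # SiLU / swish activation: x * sigmoid(x)
    return x / (1.0 + math.exp(-x))

def silu_grad(x):
    # Analytic derivative: sigmoid(x) * (1 + x * (1 - sigmoid(x)))
    s = 1.0 / (1.0 + math.exp(-x))
    return s * (1.0 + x * (1.0 - s))

def numeric_grad(f, x, eps=1e-6):
    # Central finite difference, O(eps^2) accurate.
    return (f(x + eps) - f(x - eps)) / (2.0 * eps)

for x in (-2.0, 0.0, 1.5):
    assert abs(silu_grad(x) - numeric_grad(silu, x)) < 1e-6
```

Finite differences catch implementation slips; the Z3 proofs catch formula-level errors the sampled points might miss. The two checks fail independently, which is what makes the validation "dual."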
A compiler that compiles its own source language, constrained to Wasm linear memory with a fixed arena allocator. No OS, no heap, no string libraries. No documented precedent.
Layer-by-layer Z3 verification that scales linearly. Synapse-50M verified across all 12 transformer layers. Bounds stabilize after layer 2.
Hebbian, Correlation-Guided Wiring, and Discrete Target Propagation systematically tested and falsified with mechanistic diagnoses. Narrows the research frontier for future work.
Model generates .syn programs, runs them through verified execution, and accepts programs meeting quality thresholds as new training data. Bootstraps improvement from its own outputs.
Train, self-improve, fine-tune, merge, verify, RL reward, and receipt — all in a single .syn execution. 28 weight tensors, fully deterministic.
Novel reward signal derived from compiler pipeline stages. In-process Wasm evaluation achieves 797x acceleration over CPython for reinforcement learning.
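The reward shaping can be sketched as partial credit per pipeline stage; the stage names and checks below are hypothetical stand-ins, not the real .syn pipeline:

```python
# Hypothetical staged reward: each compiler stage a generated program
# clears earns partial credit, giving RL a dense signal instead of a
# single pass/fail bit at the end.
STAGES = ["lex", "parse", "typecheck", "codegen", "run"]

def staged_reward(program, stage_checks):
    passed = 0
    for stage in STAGES:
        if not stage_checks[stage](program):
            break  # later stages can't run if an earlier one fails
        passed += 1
    return passed / len(STAGES)

# Toy checks: a "program" is a string and each stage is a predicate.
checks = {
    "lex": lambda p: p.isprintable(),
    "parse": lambda p: p.count("(") == p.count(")"),
    "typecheck": lambda p: "let" in p,
    "codegen": lambda p: len(p) < 80,
    "run": lambda p: p.endswith(";"),
}
reward = staged_reward("let x = (1 + 2);", checks)
```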
For deterministic computation workloads, formally verified small models on commodity hardware can achieve equivalent correctness at a fraction of the compute cost.
Comparison applies to deterministic computation tasks, not general-purpose AI. The verified AI pipeline operates at demonstration scale. Production scaling is an active research direction.
AI infrastructure consumes approximately 10 TWh of electricity annually, growing 30%+ year-over-year. For deterministic workloads where correctness can be formally proven, Synapse eliminates the need for GPU hardware entirely by running verified models on commodity CPUs inside Wasm sandboxes.
Measured energy reduction (RAPL instrumentation, AMD EPYC) is 7–9x per execution vs Docker containers on identical hardware. All research is conducted in Canada by Synapse.
All benchmarks are automated and reproducible. Methodology details are documented alongside each measurement.
Each investigation follows a hypothesis-experiment-result cycle documented in dated journal entries. 145+ entries spanning 45+ research phases, with 8 adversarial red team audits.
Reverse-mode backpropagation can function when the forward pass is orchestrated by a sandboxed Wasm module and gradient computation is managed by the host.
Implemented autograd system with gradient-verified training at multiple scales. Tested against reference implementations at f32 precision.
Exact match with reference at 2x4, 4x8, and 4x16 dimensions. Loss convergence verified across all scales.
Z3 can prove meaningful properties about neural network training: gradient correctness, loss monotonicity, convergence, and safety preservation.
Built three-layer verification system (compile-time, runtime, and offline formal proofs). Applied across MLP and transformer architectures.
All 8 backward formulas Z3-proved in 38ms. Training convergence verified. Fine-tuning safety preservation proved with three simultaneous Z3 properties.
Independently trained transformer models can be composed with formal end-to-end safety guarantees.
Trained two transformers on different data, composed their inference, and applied formal bound propagation through the composition.
Z3 proves composed output bounds in 23ms. Architectural properties cause bounds to contract at each composition stage.
Compiler pipeline stages can serve as a structured training signal for reinforcement learning.
Trained Synapse-50M using reward signals from each compiler stage. Measured evaluation throughput via in-process Wasm.
88,000 evals/sec/core. Model generates compilable .syn at >90% first-attempt success rate.
Verified AI addresses a gap in regulated industries: the ability to formally prove that AI models preserve safety properties after adaptation. No existing technology provides this guarantee.
Hospitals fine-tune diagnostic models for local populations. Verified fine-tuning proves original safety properties are preserved after adaptation.
Banks adapt credit scoring models for new products. Formal verification satisfies model risk management requirements.
Military AI adapted for new environments needs formal guarantees that existing capabilities are preserved.
Safe self-modification — the alignment community's core concern — implemented as verified self-improvement with Z3 proofs.
Synapse is developed by Synapse, a company incorporated in British Columbia, Canada. All research and development is conducted in Canada by Canadian residents.
The following are novel Canadian IP developed through systematic R&D:
By targeting open hardware (AMD, open-source drivers) instead of proprietary ecosystems (NVIDIA CUDA), Synapse avoids dependency on a single foreign hardware vendor, aligning with Canadian digital sovereignty objectives.