Verified Execution For AI-Generated Logic

Self-hosted deterministic execution for evaluators, reward functions, policy gates, and other bounded generated logic. The hosted endpoint exists for demos and evaluation. Production deployments run on your own gateway.

Execution Profile

- p50 cached: 0.004 ms
- p50 cold start: 0.063 ms

Your model or application calls the local gateway. No container boot. No per-task Docker startup. The strongest current fit is hot-path logic that benefits from determinism and bounded behavior, not arbitrary general-purpose code execution.

Integration

Python SDK (published)

```shell
pip install synapserun
```

```python
from synapse import Synapse

client = Synapse(
    base_url="http://localhost:8000",
)

result = client.execute_python("""
result = 0
for i in range(1, 11):
    result = result + i
""")

print(result.result)      # 55
print(result.latency_ms)
```

JavaScript SDK (experimental)

In-repo package at sdk/js/. Start with the Python SDK for the most stable integration path.

REST API (evaluation)

```shell
curl -X POST http://localhost:8000/v1/execute/python \
  -H "Content-Type: application/json" \
  -d '{"code":"result = 21 + 21"}'
```
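The same request can be built with the Python standard library. This is a hedged sketch: the endpoint path and JSON body shape come from the curl command above, while the helper name `build_execute_request` is ours; a gateway must be running locally before you actually send the request.

```python
import json
import urllib.request

def build_execute_request(code, base_url="http://localhost:8000"):
    """Build the POST request the curl example sends (sketch, not the SDK)."""
    body = json.dumps({"code": code}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/v1/execute/python",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_execute_request("result = 21 + 21")
# Send with urllib.request.urlopen(req) against a running local gateway.
print(req.full_url)  # http://localhost:8000/v1/execute/python
```

For anything beyond one-off evaluation calls, the published Python SDK above is the simpler path.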

Frameworks

LangChain

```python
from langchain.agents import AgentType, initialize_agent
from synapse.integrations import SynapseTool

tools = [SynapseTool(client)]
agent = initialize_agent(
    tools, llm,
    agent=AgentType.OPENAI_FUNCTIONS,
)
```

CrewAI

```python
from crewai import Agent
from synapse.integrations import SynapseCrewTool

agent = Agent(
    role="compute",
    tools=[SynapseCrewTool(client)],
)
```

Synapse integrations are best used for bounded logic in agent loops, especially evaluators, reward functions, and policy gates.

Framework integrations call the same local gateway through the Python SDK. Start there if you want the simplest integration path.

Current Best Fits

Use Synapse when you need deterministic execution for generated logic.

Good fits right now: evaluators, reward functions, policy gates, bounded transforms, and auditable agent-side business logic.

Not a fit for arbitrary Linux workloads, general sandboxing, or a fully managed cloud execution platform.
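To make the good fits above concrete, here is a hedged sketch of the kind of bounded logic Synapse targets: a reward function expressed as a short, deterministic snippet that, following the SDK convention shown earlier, assigns its output to `result`. The values inside the snippet are hypothetical placeholders.

```python
# Hypothetical reward-function snippet: short, deterministic, bounded.
reward_code = """
answer = 42        # hypothetical value produced by an agent
expected = 42      # hypothetical ground truth
result = 1.0 if answer == expected else 0.0
"""

# Run here as plain Python for illustration; submitted through the gateway,
# the same snippet gains deterministic, bounded, auditable execution.
namespace = {}
exec(reward_code, namespace)
print(namespace["result"])  # 1.0
```

Evaluators and policy gates follow the same shape: pure logic over inputs, no I/O, a single `result`.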

Deployment

Self-Hosted (supported)

- Single 15MB Rust binary
- Your infrastructure, your edge auth
- Deploy behind your firewall
- No external dependencies

Managed API (evaluation)

- Demo and evaluation surface
- Useful for testing the bounded execution model
- For production, deploy the runtime on your own infrastructure

The hosted gateway is for demos and evaluation. Production deployments run on your own infrastructure.

Why Agent Teams Choose Synapse

| Problem | Industry Standard | Synapse |
| --- | --- | --- |
| Cold start latency | 200ms (Docker) | <1ms (Wasm) |
| Memory per sandbox | 128MB (container) | 64KB (arena) |
| Concurrent sandboxes | ~1,000 per server | ~130,000 (theoretical) |
| Execution determinism | Non-deterministic | Same input → same output |
| Result verification | None | Cryptographic receipt |

Synapse is not a generic code sandbox. It is deterministic, auditable execution for generated logic that needs to be replayable and defensible.
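The replayable, defensible property can be illustrated with a small sketch. This is an illustration only: the gateway's actual receipt format is not documented here, and `make_receipt` is a hypothetical stand-in showing the idea of hashing a (code, result) pair so that anyone can re-execute the code and check the receipt still matches.

```python
import hashlib
import json

def make_receipt(code, result):
    """Hypothetical receipt: a digest over the (code, result) pair."""
    blob = json.dumps({"code": code, "result": result}, sort_keys=True)
    return hashlib.sha256(blob.encode("utf-8")).hexdigest()

code = "result = 21 + 21"
original = make_receipt(code, 42)
replayed = make_receipt(code, 42)  # deterministic replay yields 42 again
print(original == replayed)  # True
```

Determinism is what makes this check meaningful: because the same input always produces the same output, a mismatch between receipt and replay is evidence of tampering rather than noise.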

Evaluation Path

1. Start with the self-hosted quickstart and run the example workloads locally.
2. Review the verification boundary, benchmark methodology, and security model.
3. Integrate through the Python SDK if you need the most stable starting point.
4. Use the research and benchmark pages for deeper technical diligence.