We hire people who see what needs to be done and do it — with full ownership and zero quit. No hand-holding. No waiting for specs.
We don't wait for permission. We identify problems, propose solutions, and execute.
We work on hard problems because we genuinely care about the outcome, not just the paycheck.
When the path gets steep, we push through. Resilience is not optional; it's foundational.
If it's broken, it's your problem. We own results, not tasks.
We're looking for exceptional people to help build the trust layer for the AI economy.
We're looking for a hardware engineer to own the silicon-level foundations of our technology. You'll bridge the gap between hardware security features and AI workloads — building verifiable computation pipelines where we can statically derive what a GPU kernel should compute and cryptographically prove that it did. You'll work directly with silicon vendors, design attestation systems for modern accelerators, and build the tamper-proof link between a model's architecture and its execution on hardware.
We are open to hiring both early-career and experienced candidates. For experienced candidates, we offer higher compensation and more senior titles. We will also consider exceptional candidates on a part-time basis.
You'll own our prosumer product experience end-to-end — the product that a technically curious person installs, connects to their AI tools, and actually uses every day.
This is not a "build to spec" role. You'll take high-level outcomes ("a user should be able to chat with their agent and see what it did") and drive them from prototype to alpha to beta to GA, making deliberate scope and quality tradeoffs at each stage. You'll decide what to ship now, what to cut, and what to defer — then ship it, put it in front of users, and iterate.
You'll work closely with the founding team on product direction and with our architecture lead on the underlying platform. You'll inherit a large, architecturally sound monorepo that has accumulated complexity — your job is to navigate it, simplify where needed, and ship a working product without needing to rewrite everything.
AI agents are getting powerful fast. The tooling for making sure they're trustworthy, controllable, and transparent is lagging behind. We're building the product that closes that gap — starting with individuals who want to use AI agents confidently, and expanding to teams and organizations that need governance at scale.
You'd be building something you'd use yourself: an AI assistant that's actually transparent about what it does, with guardrails that protect you without getting in the way.
Send us:
We're always looking for exceptional people. Drop us a note and tell us what you'd build.