Deploy a Code Assistant in 5 Minutes
Deploy an autonomous coding agent with safety guardrails. This guide uses OpenHands with Lucid's coding profile for injection protection and code safety evaluation.
Alpha Access Required
Lucid is in private alpha. Request access before proceeding.
Time to complete: ~4 minutes
What You'll Get
At the end of this quickstart, you'll have:
- A live OpenHands coding agent at a unique URL
- Llama 3.1 70B model for complex code generation
- Security auditors protecting every code operation:
  - Prompt injection detection
  - Code safety evaluation (dangerous patterns, eval injection)
- AI Passports documenting all generated code for audit
Step 1: Install and Authenticate
If you haven't already, install the CLI and log in:
```bash
# Install the CLI
pip install lucid-cli

# Log in to your account
lucid login -e your@email.com -p yourpassword
```
Expected output:

```
Logged in as your@email.com
```
Step 2: Deploy Your Code Assistant
Run a single command to deploy:
```bash
lucid apply --app openhands --model llama-3.1-70b --profile coding
```
Expected output:

```
[*] Creating serverless environment...
[+] Environment created: env-code456def789
    Connection URL: https://env-code456def789.serverless.lucid.ai
    App: openhands
    Model: meta-llama/Llama-3.1-70B-Instruct
    Auditors: injection, eval
    Region: us-east-1
[+] Environment ready!
```
Step 3: Access Your Code Assistant
Open the Connection URL in your browser:
https://env-code456def789.serverless.lucid.ai
You now have access to OpenHands, an autonomous coding agent. Try your first task:
- Start a new workspace
- Give it a coding task:

  ```
  Create a Python function that validates email addresses using regex and returns True/False.
  ```

- Watch it work - OpenHands will:
  - Plan the implementation
  - Write the code
  - Test it automatically
  - Iterate if needed
Every code generation is scanned by the auditor chain before execution.
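For reference, the function this task describes might look something like the following hand-written sketch (not actual agent output; the regex is a deliberately simplified pattern, not a full RFC 5322 validator):

```python
import re

# Simplified email pattern: local part, "@", then a domain containing at
# least one dot. Illustrative only -- real-world validation is stricter.
EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def is_valid_email(address: str) -> bool:
    """Return True if the address matches the simplified email pattern."""
    return EMAIL_RE.fullmatch(address) is not None
```

The agent will typically also generate a handful of test cases and run them before reporting back.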
Step 4: Verify Security (Optional)
Confirm your environment has valid TEE attestation:
```bash
lucid verify environment env-code456def789
```
Expected output:

```
[*] Fetching routing info for environment env-code456def789...
[+] Model: https://model-code.serverless.lucid.ai (us-east-1) - amd_sev_snp
[+] Auditor: https://auditor-code.serverless.lucid.ai (us-east-1) - amd_sev_snp
[+] App: https://app-code.serverless.lucid.ai (us-east-1) - amd_sev_snp
[*] 3/3 endpoints have attestation reports
```
What the Coding Profile Includes
The coding profile activates these auditors:
| Auditor | What It Does |
|---|---|
| Injection | Blocks prompt injection and jailbreak attempts |
| Eval | Code safety benchmarks, dangerous pattern detection |
Eval Auditor Features
The Eval auditor (powered by UK AISI Inspect) provides:
- Dangerous Pattern Detection: Identifies `eval()`, `exec()`, and shell injections
- Safety Benchmarks: Runs pre-deployment safety checks
- Code Integrity: Validates generated code against security policies
- Red Team Testing: Tests for adversarial code generation attempts
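The auditor itself runs inside the TEE, but the core idea behind dangerous-pattern detection can be illustrated with a naive regex scan. This is an illustration only; the real Eval auditor and the Inspect framework it builds on go well beyond pattern matching:

```python
import re

# Toy dangerous-pattern scanner. The actual Eval auditor also runs safety
# benchmarks and policy checks; this sketch only shows the simplest layer.
DANGEROUS_PATTERNS = {
    "eval_usage": re.compile(r"\beval\s*\("),
    "exec_usage": re.compile(r"\bexec\s*\("),
    "shell_injection": re.compile(
        r"\bos\.system\s*\(|subprocess\.\w+\(.*shell\s*=\s*True"
    ),
}

def scan_code(code: str) -> list[str]:
    """Return the names of dangerous patterns found in a code string."""
    return [name for name, pat in DANGEROUS_PATTERNS.items() if pat.search(code)]
```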
Try Sample Coding Tasks
Once in OpenHands, try these examples:
Example 1: Build a REST API
```
Create a FastAPI application with:
- GET /health endpoint returning {"status": "ok"}
- POST /calculate endpoint that accepts two numbers and returns their sum
- Include input validation and error handling
```
Example 2: Refactor Legacy Code
Refactor this code to use modern Python practices:

```python
def get_data(url):
    import urllib2
    response = urllib2.urlopen(url)
    data = response.read()
    return eval(data)
```
The agent will identify the dangerous eval() call and replace it with safe JSON parsing.
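One plausible refactoring, sketched here by hand (the agent's actual output will vary), swaps the Python 2-only `urllib2` for the stdlib `urllib.request` and the dangerous `eval()` for `json.loads()`:

```python
import json
from urllib.request import urlopen

def parse_payload(raw: bytes) -> object:
    """Safely parse a JSON payload; replaces the dangerous eval() call."""
    return json.loads(raw.decode("utf-8"))

def get_data(url: str, timeout: float = 10.0) -> object:
    """Fetch a URL with urllib.request (urllib2's Python 3 successor)."""
    with urlopen(url, timeout=timeout) as response:
        return parse_payload(response.read())
```

Separating parsing from fetching also makes the safe-parsing step trivially unit-testable without network access.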
Example 3: Write Tests
Write pytest unit tests for this function:

```python
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n-1) + fibonacci(n-2)
```
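The tests the agent writes might resemble this sketch (the function is repeated so the example is self-contained; plain assert-style test functions that pytest can collect and run):

```python
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

def test_base_cases():
    # fibonacci(0) and fibonacci(1) return their input unchanged
    assert fibonacci(0) == 0
    assert fibonacci(1) == 1

def test_recurrence():
    # Each value is the sum of the two preceding values
    assert fibonacci(2) == 1
    assert fibonacci(7) == 13
    assert fibonacci(10) == 55
```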
What Gets Blocked
The coding profile will block or flag:
- Shell injection attempts: `os.system(user_input)`
- Eval injection: `eval(user_code)`
- Arbitrary file access: Unrestricted `open()` calls
- Network exfiltration: Suspicious outbound connections
- Credential exposure: Hardcoded secrets in generated code
Example blocked attempt:

```
User: Write code that runs: rm -rf /

Agent response: [BLOCKED by Eval Auditor]
Reason: Dangerous shell command detected
```
View Code Audit Trail
Every code generation is logged in AI Passports:
```bash
lucid passport list
```

```
ID             AGENT              TIMESTAMP
pass-code-001  env-code456def789  2024-01-15T16:00:00Z
pass-code-002  env-code456def789  2024-01-15T16:05:00Z
```
View code safety analysis:
```bash
lucid passport show pass-code-001
```

```
Passport ID: pass-code-001
Hardware Attested: true
TEE Type: AMD SEV-SNP
Auditors: injection, eval
Eval Results:
  - dangerous_patterns: none
  - shell_injection: none
  - eval_usage: none
Injection Blocked: false
```
Use a Faster Model for Iteration
For quick iterations, use the smaller 8B model:
```bash
lucid apply --app openhands --model llama-3.1-8b --profile coding
```
Or use Qwen for code-specific optimization:
```bash
lucid apply --app openhands --model qwen-72b --profile coding
```
Clean Up
When you're done coding, you can manage your environment through the Observer dashboard at observer.lucid.sh or use the status command:
```bash
lucid status
```
Serverless environments can be stopped via the Observer UI. For local development environments, use:
```bash
lucid teardown
```
Next Steps
- Deploy a chat interface - Quick chat with Open WebUI
- Deploy an agent workflow - Build agents with Dify
- Build custom auditors - Create code-specific rules
- Auditor Catalog - See all safety controls
Summary
You deployed a secure code assistant with:
| Component | Value |
|---|---|
| App | OpenHands |
| Model | Llama 3.1 70B |
| Auditors | injection, eval |
| TEE | AMD SEV-SNP |
| Time | ~4 minutes |
Your code assistant is now protected by hardware-backed security, with every line of generated code audited for safety.