
DEV Community

João André Gomes Marques


Add governance to OpenAI Agents SDK in 3 lines

OpenAI Agents SDK has guardrails for input/output validation but no audit trail. Here is how to add tamper-evident signing.

pip install asqav[openai-agents]
from agents import Agent, Runner
from asqav.extras.openai_agents import AsqavGuardrail

guardrail = AsqavGuardrail(api_key="sk_...")

agent = Agent(
    name="research-agent",
    instructions="You help with market research",
    input_guardrails=[guardrail],
)

result = Runner.run_sync(agent, "Analyze competitor pricing")

Every tool call and agent action now gets an ML-DSA-65 signature. The guardrail runs before execution and signs the input. After execution, the output gets signed and chained to the input signature.
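To see why chaining the output signature to the input signature makes the trail tamper-evident, here is a conceptual sketch (not the asqav implementation): each record's signature covers its payload plus the previous signature, so altering any earlier record invalidates every signature after it. HMAC-SHA256 stands in for ML-DSA-65, which has no standard-library implementation; the key and event payloads are illustrative.

```python
import hashlib
import hmac

SECRET = b"demo-signing-key"  # hypothetical symmetric key; ML-DSA uses a real keypair

def sign(payload: bytes, prev_sig: bytes) -> bytes:
    # Each signature commits to the payload AND the previous signature.
    return hmac.new(SECRET, prev_sig + payload, hashlib.sha256).digest()

GENESIS = b"\x00" * 32  # fixed starting value for the chain

chain = []
prev = GENESIS
for event in [b"input: Analyze competitor pricing", b"output: pricing report"]:
    prev = sign(event, prev)
    chain.append((event, prev))

def verify(chain) -> bool:
    # Replay the chain; any modified payload breaks every later signature.
    prev = GENESIS
    for event, sig in chain:
        prev = sign(event, prev)
        if prev != sig:
            return False
    return True
```

Changing the input event after the fact makes `verify` fail even if the attacker keeps the stored signatures, because the recomputed chain no longer matches.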

The audit trail is exportable as JSON or CSV for compliance teams.
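As a rough idea of what that export looks like, here is a minimal sketch; the record fields (`event`, `payload`, `signature`) are illustrative, not asqav's actual schema.

```python
import csv
import io
import json

# Hypothetical audit records, as a compliance team might receive them.
records = [
    {"event": "input", "payload": "Analyze competitor pricing", "signature": "base64..."},
    {"event": "output", "payload": "pricing report", "signature": "base64..."},
]

# JSON export: one array of signed records.
json_export = json.dumps(records, indent=2)

# CSV export: one row per record, header from the field names.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["event", "payload", "signature"])
writer.writeheader()
writer.writerows(records)
csv_export = buf.getvalue()
```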

GitHub: https://github.com/jagmarques/asqav-sdk
Docs: https://asqav.com/docs/integrations.html#openai-agents

Top comments (1)

Maxim Berg

Tamper-evident signing is a nice touch — being able to prove exactly what the agent did (and that the log wasn't altered) is useful for compliance. One thing I've been thinking about: for actions with real cost (purchases, paid API calls), cryptographic proof that the agent did something doesn't help you prevent it. You'd want a pre-execution check too — verify the action is within policy before it runs, not just sign it afterward.