Trigger.dev

Software Development

Build and deploy fully‑managed AI agents and workflows

About us

Trigger.dev is the platform for building AI workflows in TypeScript. Long-running tasks with retries, queues, observability, and elastic scaling.

Website
https://trigger.dev
Industry
Software Development
Company size
2-10 employees
Headquarters
London
Type
Privately Held
Founded
2022

Updates

  • In case you missed it, here are some of the things we shipped in March:
    → Our @vercel integration is live. Push code, tasks deploy automatically. Env vars sync both ways. Atomic deployments gate Vercel's promotion until your tasks are ready, so your app never goes live with a mismatched task version. No `trigger.dev deploy`, no CI/CD workflow.
    → Query. Ask "what are my p95 durations for the chat task?" and the AI writes the TRQL, runs it on ClickHouse, and renders the chart. Or write the SQL yourself.
    → Dashboards. Every project ships with a pre-built one: run volume, success rates, failures, costs, versions. Build custom ones on top with big numbers, charts, and tables. Filters apply across every widget at once.
    → v4.4.2 - v4.4.4:
      • Batch trigger concurrency bumps.
      • `syncSupabaseEnvVars` pulls your Supabase connection strings in so you stop copy-pasting.
      • The Test page generates example payloads with AI (uses your `schemaTask` schema if you have one, infers from past runs if you don't).
      • The dev CLI auto-cancels in-flight runs on exit.
      • 11 new MCP tools, including `query`, `start_dev_server`, and `get_span_details`. Your coding agent can now actually debug your runs.
      • Task-level and global TTL defaults.
      • Multi-provider object storage for large run outputs.
    New on the blog:
    1. We replaced Node.js with Bun in one of our most latency-sensitive services: 2,099 → 10,700 req/s, 5x throughput. We also found a Bun memory leak along the way (which was then swiftly patched by the Bun team). https://lnkd.in/eNpkrWcb
    2. A deep dive on how TRQL works: the SQL-like language behind Query. Users write familiar SQL, but the ANTLR grammar physically can't express `INSERT` or `DELETE`, and every query has its tenant filter injected at compile time. https://lnkd.in/erQkgGqz
    3. Plus 10 Claude Code tips most people miss: `--fork-session` for context pre-warming, `--from-pr` to resume the agent that wrote your PR, `!` for inline shell output, and `Ctrl+G` to compose prompts in your editor. https://lnkd.in/eVXzwbTd
    That's March.
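The compile-time tenant filter described in the TRQL blog post can be illustrated with a toy rewriter. This is a hypothetical sketch, not Trigger.dev's actual compiler (which works on the ANTLR parse tree rather than the raw string), but it shows the two guarantees: non-SELECT statements can't get through, and every query is tenant-scoped before it reaches the database.

```typescript
// Toy illustration of compile-time tenant scoping (hypothetical sketch;
// the real TRQL compiler rewrites the parse tree, not the SQL text).
function injectTenantFilter(trql: string, tenantId: string): string {
  // Reject statements the read-only grammar wouldn't even parse.
  if (!/^\s*SELECT\b/i.test(trql)) {
    throw new Error("TRQL is read-only: only SELECT is allowed");
  }
  const predicate = `tenant_id = '${tenantId}'`;
  // Merge with an existing WHERE clause, or add one.
  return /\bWHERE\b/i.test(trql)
    ? trql.replace(/\bWHERE\b/i, `WHERE ${predicate} AND`)
    : `${trql} WHERE ${predicate}`;
}
```

The point of doing this at compile time rather than in each query is that a user (or an AI assistant writing TRQL) can never forget the filter.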

  • We're heading to AI Engineer Europe in London! April 8-10 | QEII Centre, London UK | Booth G6. Come and chat with us if you're building AI agents, workflow automation, or background jobs. We'll be doing live demos, sharing what we're building, and handing out swag. See you there!

  • Most developers use Claude Code like a chatbot with a shell wrapper. That's barely scratching the surface. After digging deep into the CLI, here are 10 advanced patterns that genuinely change how you orchestrate AI-assisted development:
    1. Session forking (`--fork-session`): create a "master session" loaded with architectural context, then branch it for each feature. Think `git branch` for your LLM context window.
    2. Code review loops (`--from-pr`): resume the exact agent session that wrote the code. It comes back with full awareness of its original decisions. No more cold-start reviews.
    3. `Ctrl+G` to escape the REPL: opens your `$EDITOR` for proper multi-line prompt crafting. Small feature, massive quality improvement.
    4. Inline shell with `!`: run commands directly, and stdout/stderr get injected into context automatically. Run the test, type "fix it", done.
    5. Effort levels: four tiers from Low to Max. Boilerplate doesn't deserve the same compute as debugging a race condition. Your API bill will thank you.
    6. Parallel worktrees (`--worktree`): each agent gets a fully isolated working directory via native `git worktree`. Same repo, zero conflicts.
    7. Structured JSON output (`--json-schema`): turn the LLM into a strictly typed function. Essential for automation pipelines.
    8. Context compaction (`Esc+Esc`): compress failed debugging attempts into dense summaries. Reclaim your token budget without losing the narrative thread.
    9. Dynamic subagents (`--agents`): define session-scoped specialists on the fly with model routing. Opus for architecture, Haiku for repetitive tasks.
    10. Budget-capped CI/CD: combine `--max-turns` and `--max-budget-usd` as circuit breakers. Non-negotiable for putting autonomous agents in production pipelines.
    The gap between "I use Claude Code" and "I orchestrate Claude Code" is wide and getting wider. Full deep dive with code examples in the article. Link in the comments below.

  • Introducing Query and Dashboards: full SQL-powered observability over your background jobs and AI agent runs.
    Under the hood is TRQL, a SQL-style language that compiles to ClickHouse. You write familiar SELECT statements, ClickHouse executes them, and queries over millions of runs come back in milliseconds. Two tables for now: `runs` for status, timing, costs, and tags, with more coming.
    You don't need to memorize the schema. There's an AI assistant built into the editor. Describe what you want in plain English:
    → "Why did failures spike after my last deploy?"
    → "What's the p95 duration for my chat task?"
    → "What are my most expensive runs?"
    It writes the TRQL for you. If a query fails, "Try fix error" diagnoses and corrects it.
    Every project ships with a pre-configured dashboard: run volume, success rates, failures, costs, version breakdowns. You can also build your own with three widget types: big numbers for KPIs, charts for trends, and tables for breakdowns. Drag to reorder, resize to fit, and filter by time, task, queue, or scope.
    TRQL isn't just for humans, either. `query.execute()` lets you embed run data into your own product: power a status page, feed results to an AI agent for debugging, or build custom alerting, all against the same data the dashboard uses.
    Live now for all users. Every project already has a built-in dashboard. Full details in the comments 👇
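For readers unfamiliar with the metric the assistant keeps getting asked for: p95 duration is a nearest-rank percentile over run durations. A plain-TypeScript illustration (not part of the Trigger.dev SDK) of what that aggregation computes:

```typescript
// Nearest-rank percentile: the value that p% of runs finish within.
// (Illustrative helper only; in production this aggregation runs in ClickHouse.)
function percentile(values: number[], p: number): number {
  if (values.length === 0) throw new Error("no values");
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// p95 over 100 run durations (1..100 ms): 95% of runs finish within 95 ms.
const durationsMs = Array.from({ length: 100 }, (_, i) => i + 1);
console.log(percentile(durationsMs, 95)); // 95
```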

  • Just launched: our Vercel integration for Trigger. Push code. Vercel deploys your app. Trigger deploys your tasks. Env vars sync both ways. Your app never goes live with mismatched task versions.
    1. Atomic deployments: Trigger gates your Vercel deployment until the task build completes, sets the correct TRIGGER_VERSION, then triggers a redeployment. Your app always runs against the exact task version it was built with. This used to require a custom GitHub Actions workflow; now it's a toggle.
    2. Env var sync works in both directions. Vercel → Trigger: your Vercel env vars get pulled per environment (production, staging, preview) before each build. Trigger → Vercel: API keys like TRIGGER_SECRET_KEY sync back automatically. No more copy-pasting between dashboards, and you can control sync behavior per variable from your environment variables page.
    3. Deployments reference each other on both sides: Trigger creates deployment checks on your Vercel deployments so you can see task build status without leaving Vercel, and each Trigger deployment links back to the corresponding Vercel deployment. No more tab-switching to figure out which app deploy matches which task deploy.
    4. Fun fact: this was also our most requested feature, with 354 votes.
    Read more in our changelog: https://lnkd.in/epFnchAh

  • Run Cursor's headless CLI agent inside a Trigger task and stream its output live to your app's frontend. This open-source demo: Next.js + Trigger, ~1,000 lines of code in total.
    Trigger tasks run in their own isolated environments. You can install any binary via our build extensions, spawn it as a child process, and stream its stdout. This demo uses Cursor's CLI; the same pattern works for FFmpeg, Playwright, etc.
    The build extension runs `curl -fsSL https://cursor.com/install | bash` at image build time; the official installer, nothing custom. At runtime the task spawns the cursor-agent Node binary.
    Cursor's CLI outputs NDJSON. We parse it line by line, push events into Realtime Streams v2, and render each one as a row in a React terminal component. One CursorEvent type definition flows from task → stream → useRealtimeRunWithStreams hook → React component, so you get full-stack type safety with zero duplication.
    The repo is open source. If you want to run a CLI tool in the cloud and stream its output to a browser, this is a working reference you can fork. https://tgr.dev/XI25EOs
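The NDJSON-to-events step above can be sketched in plain TypeScript. `CursorEvent` here is a hypothetical, simplified shape (the demo's real type is a richer union), and the helper hands back any trailing partial line so the caller can buffer it until the next chunk arrives:

```typescript
// Hypothetical, simplified event shape; the demo's actual CursorEvent
// union carries more fields per event kind.
type CursorEvent = { type: string; [key: string]: unknown };

// Parse a chunk of NDJSON (one JSON object per line) into events.
// The last line may be incomplete, so it is returned as `rest` for the
// caller to prepend to the next chunk.
function parseNdjson(chunk: string): { events: CursorEvent[]; rest: string } {
  const lines = chunk.split("\n");
  const rest = lines.pop() ?? "";
  const events: CursorEvent[] = [];
  for (const line of lines) {
    if (line.trim() === "") continue;
    events.push(JSON.parse(line) as CursorEvent);
  }
  return { events, rest };
}
```

In the demo, each parsed event is then pushed into a Realtime stream and rendered as one row of the terminal component.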

  • We just shipped Vercel AI SDK 6 support in Trigger. That means full compatibility across all major versions of the AI SDK (4, 5, and 6), so you can upgrade on your own terms. Here's what this unlocks when you run AI agents on Trigger:
    * Durable execution: your ToolLoopAgent runs as a long-lived task. If something fails, it retries automatically. No babysitting infrastructure.
    * Real-time streaming: stream agent activity directly to your frontend with Realtime Streams. Your users see what the agent is doing as it happens.
    * Human-in-the-loop: pause execution mid-task for approval using waitpoints, with zero compute cost while waiting for a human decision.
    * Autonomous tool use: agents decide what to do next: call tools, gather context, or return a final answer.
    On the v6-specific side, we've added async validation handling for the new Schema type and made migration seamless: existing jobs keep working without changes.
    Full changelog: https://tgr.dev/mPNJqEG

  • The "Weekend Demo" vs. "Production Reality" in AI development. We've all been there. You hack together an AI agent on a Saturday. You use Vercel's AI SDK, throw in some LangChain, and it works perfectly on localhost. It answers quickly. It handles errors. Then you push to production. Suddenly, reality hits:
    1. Timeouts: your sophisticated reasoning chain takes 75 seconds. Your serverless function kills it at 60. Hard stop.
    2. Flakiness: the OpenAI API hiccups. Your script crashes. The user has to restart the entire process.
    3. Concurrency: 50 users try it at once. Your rate limits explode. Jobs get dropped.
    This is the "production gap". Building reliable AI agents requires more than prompt engineering; it requires reliable infrastructure. At Trigger, we built the infrastructure specifically for this gap. We call it durable execution.
    - No timeouts: run tasks for hours or days. Perfect for deep research agents.
    - Checkpointing: if an API call fails, we retry just that step. We don't restart the whole run.
    - Queueing: heavy load? We queue the jobs and process them as capacity allows. Nothing gets dropped.
    Stop trying to shoehorn long-running AI processes into short-lived serverless functions. Use infrastructure designed for the job. Learn more 🧵
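The "retry just that step" idea can be sketched as a plain-TypeScript helper. This is illustrative only: real durable execution checkpoints state so a retry can survive a process restart, whereas this sketch just loops in-process.

```typescript
// Step-level retry: only the failing step re-runs, not the whole workflow.
// (Illustrative sketch; durable execution additionally persists state
// between attempts so retries survive crashes and restarts.)
function runStep<T>(step: () => T, maxAttempts: number): T {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return step();
    } catch (err) {
      lastError = err; // remember the failure and try the step again
    }
  }
  throw lastError;
}
```

A workflow then composes steps, each guarded independently: a flaky model call retries on its own without re-running the expensive scrape that came before it.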

  • Tierly uses Trigger to orchestrate 10+ AI models for competitive pricing analysis. Each analysis involves dozens of scraping tasks, human review gates, and real-time progress updates. Tierly analyzes SaaS pricing pages, discovers competitors, and generates recommendations. Workflows take 5-15 minutes with multiple AI calls.
    Their initial synchronous API routes hit timeouts and rate limits, and had zero visibility into failures. Moving to Trigger fixed all of it:
    → Two chains run in parallel via batch triggers, which cut analysis time in half.
    → Wait tokens pause execution for human review, with no webhooks needed.
    → A shared queue keeps Firecrawl requests under the limit across all concurrent analyses.
    → Progressive model escalation (gpt-4o-mini → gpt-4o → gpt-4o + markdown fallback), with Trigger handling retries automatically.
    The results:
    → Reliable 10+ AI call workflows
    → Human review gates without webhook complexity
    → Automatic rate limiting
    → Full visibility into every step
    → Workflows in TypeScript alongside their Next.js app
    Read the full story: https://lnkd.in/gV5eyduq
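Progressive model escalation like Tierly's can be sketched as a simple chain walk. This is a hypothetical helper, not Tierly's code; the model names are the ones from the post, and a `null` return signals that the caller should use its final fallback (markdown parsing, in their case).

```typescript
// Escalation chain from the post: start cheap, escalate on failure.
// (Hypothetical sketch of the pattern, not Tierly's implementation.)
const escalationChain: readonly string[] = ["gpt-4o-mini", "gpt-4o"];

// Returns the next model to try after `current`, or null when the chain
// is exhausted and the caller should use its non-LLM fallback.
function nextModel(current: string): string | null {
  const i = escalationChain.indexOf(current);
  if (i === -1 || i === escalationChain.length - 1) return null;
  return escalationChain[i + 1];
}
```

Combined with per-step retries, this keeps the cheap model handling the common case while hard pages escalate automatically.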


Funding

Trigger.dev: 2 total funding rounds

Last round: Pre-seed, US$500.0K

See more info on Crunchbase