One of our Backboard.io users just pulled off a fantastic hack to get a ton of free model usage. I love it so much we're absolutely not closing it.

Here's what he did:
- Went to OpenRouter and put $10 of credit on his account. With that setup, OpenRouter gives ~1,000 calls/day on free models.
- Brought that OpenRouter API key into Backboard (BYOK).

Now he's using those OpenRouter models through Backboard with:
- Free state management (persistent memory across conversations)
- Free web search wired into those models
- No custom tool calling to maintain; it "just works"

So for $10, he essentially unlocked a serious playground of LLMs that can remember, reason across sessions, and reach out to the web, all orchestrated through Backboard.

This is exactly how I want people to think about Backboard:
- Bring any provider you like (OpenRouter, OpenAI, Gemini, etc.)
- Treat models as interchangeable infrastructure
- Let Backboard handle the state, memory, and web search plumbing for you

If you're hacking on AI agents or multi-model workflows and you're not doing something like this yet, you're leaving a lot of leverage on the table.
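For context, OpenRouter exposes an OpenAI-compatible chat completions endpoint, so the BYOK flow above is ordinary API calls under the hood. A minimal stdlib-only sketch; the model name is a placeholder (on OpenRouter, free-tier variants carry a `:free` suffix), and Backboard's own wiring is not shown:

```python
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(api_key: str, model: str, user_message: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for OpenRouter."""
    payload = {
        "model": model,  # e.g. "meta-llama/llama-3.3-70b-instruct:free" (placeholder)
        "messages": [{"role": "user", "content": user_message}],
    }
    return urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Live usage (requires a funded OpenRouter account):
# req = build_request("sk-or-...", "meta-llama/llama-3.3-70b-instruct:free", "Hello!")
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Because the endpoint speaks the OpenAI schema, any OpenAI-compatible client works here too; only the base URL and key change.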
Rob Imbeault’s Post
More Relevant Posts
-
Just published a deep dive on Mechanistic Interpretability: how we can reverse-engineer LLMs to understand the exact circuits and features that drive their behavior. Today I'm taking it one step further with something extremely practical for AI security teams and builders.

I built and open-sourced a Guardrail Benchmark Colab that lets you rigorously test safety interventions (steering, ablation, SAE-based feature control, etc.) on a real model, Qwen2.5-1.5B-Instruct, across:
- 100+ jailbreak & harmful prompts (JailbreakBench + Do-Not-Answer)
- Capability benchmarks (MMLU, GSM8K, HumanEval) to measure the safety-performance tradeoff

Why this matters for security right now: every time you expose an LLM endpoint to users (chat interface, API, RAG app, agent, etc.), you are giving attackers a direct line into your model's residual stream. Without strong guardrails:
- A single clever jailbreak can bypass your system prompt
- The model can be tricked into revealing sensitive data it has access to (customer records, internal docs, API keys, PII, etc.)
- Prompt injection + tool access = real data exfiltration risk

This notebook gives you a quantitative way to measure exactly how effective your guardrails are: not just "does it refuse?" but "how much capability did we lose?" and "which internal features/circuits are actually being suppressed?" It's built on the same mechanistic toolkit I wrote about (Linear Representation Hypothesis, residual stream as communication bus, feature-to-logit attribution, etc.), so you can literally see why a guardrail works or fails inside the model.

GitHub link: https://lnkd.in/g_wx-thj

Would love to hear from:
- Red teamers & AI security engineers
- Anyone shipping production LLM apps
- Researchers working on activation steering / SAEs / circuit-level safety

Drop your thoughts below or DM if you want to collaborate on extending this benchmark.
#AISafety #LLMSecurity #MechanisticInterpretability #Guardrails #AIAlignment #PromptInjection #CyberSecurity #LLM #ResponsibleAI
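To make the safety-vs-capability tradeoff concrete, here is a minimal sketch of the kind of scorecard such a benchmark produces for one intervention. The keyword-based refusal check is a deliberately crude stand-in (real evaluations use classifiers or LLM judges), and none of this is the notebook's actual code:

```python
# Sketch of a guardrail scorecard: refusal rate on harmful prompts vs.
# accuracy retained on a capability benchmark. Heuristics and data shapes
# are illustrative assumptions, not the benchmark's real implementation.

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "as an ai")

def is_refusal(response: str) -> bool:
    """Crude keyword check for whether a model response is a refusal."""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def guardrail_scorecard(harmful_responses, capability_results):
    """Return (refusal_rate, capability_score) for one intervention.

    harmful_responses: model outputs on jailbreak/harmful prompts
    capability_results: booleans, True = benchmark item answered correctly
    """
    refusal_rate = sum(map(is_refusal, harmful_responses)) / len(harmful_responses)
    capability = sum(capability_results) / len(capability_results)
    return refusal_rate, capability
```

A good guardrail pushes refusal rate up on the attack set while keeping the capability score close to the unguarded baseline; comparing interventions on both axes is what makes the tradeoff measurable.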
-
two things happened this week that I think actually matter

Bret Taylor stood on stage and said buttons are dead. and honestly he's right. Sierra shipped a full customer service agent for Nordstrom in four weeks using plain language. no UI, no forms, no navigation. just describe what you need and the agent figures it out. that's not a demo, that's production.

for DevRel this is a real shift. we've been writing docs for developers. but developers are now building agents that consume APIs autonomously. that's a different reader. different mental model. different failure modes. the documentation problem just got harder and more interesting.

then Anthropic launched Project Glasswing. they built a model, Claude Mythos Preview, that found a 27-year-old zero-day in OpenBSD and a 16-year-old bug in FFmpeg that automated tools had walked past five million times. it wasn't trained for security. it got there through general coding and reasoning ability. it was so capable at exploiting vulnerabilities that they decided not to release it at all. instead they stood up a $100M initiative with AWS, Apple, Google, Microsoft, Nvidia and others purely for defense.

that's the detail worth sitting with. the capability gap between what's publicly available and what's sitting in research just became very visible.

as someone working in AI/ML DevRel, this is the job right now. not just explaining what models can do, but helping developers understand where the edges are, what's coming, and how to build things that won't break when the next capability jump lands.

what are you building that you think is actually ready for that?
-
AI Is Not a Product: Why Security-First Fails and Business-First Wins + Video

Introduction: The discourse surrounding Artificial Intelligence (AI) security is rapidly shifting from abstract theory to tangible operational reality. As highlighted in recent industry leadership discussions, treating AI as a mere product to be secured in a vacuum is a critical mistake; instead, it must be understood as an accelerator that simultaneously enhances business value and introduces complex new risk vectors. Effective security in this domain is no longer about rigid controls but about designing adaptive frameworks that balance developer velocity, employee experience, and customer trust without stifling innovation....
-
CORS is the ultimate vibe killer. AI usually suggests setting origin: '*' to make the error go away. It "works," but I never ship that. Opening an API to the entire web isn't a fix; it's a security hole. I spend the extra minute whitelisting specific domains and handling credentials properly. I don't just want the code to run. I want the system to be secure. Real engineering is fixing the root cause, not just hiding the symptom. #softwareengineering
-
To every developer who spent their night rotating API keys and resetting SSH tokens: We see you. ☕️ The latest supply chain attack on LiteLLM, a popular open-source AI tool, is a tough reminder of how a "quick tool" can accidentally become a security nightmare. When your AI setup relies on unverified packages, one bad update can compromise your entire environment. It's time to move AI out of the "experimental" bucket and into production-grade infrastructure. With Kong AI Gateway, you get the best of both worlds: 🚀 Move Fast: Switch LLM providers with a single line of config, no code rewrites. 🔒 Stay Secure: Built-in protection and a verified, enterprise-ready distribution model.
-
Whoa, that late March 2026 leak of 512K lines from Anthropic's Claude Code? Total accident via some npm screw-up. So we've got a front-row seat to how a killer AI coder really works. Spoiler: it's way more about the full setup than fancy prompts. Sharing the standout bits for biz folks and tech leads. Thread: (1/5)

1. What Makes the AI "Harness" Tick
It layers memory smart: fast summaries always ready, pulls specific files as needed, verifies everything against real code. Runs tasks in parallel to speed things up. And it quietly trims old chats to save tokens and stay fresh. For business? Cuts coding time in half, easy.

2. Stuff That Actually Works in Practice
Drop a CLAUDE.md file in your project root for all the key details like build commands. Tell the AI exactly what tool to use, like "grep this," not some fuzzy ask. Set clear "stop here" rules so it doesn't spin forever. Teams love this: scales without the drama.

3. Security Stuff That Hit Hard ⚠️
They baked in "poison pills" to wreck data if someone tries stealing it for training. Bad guys instantly made fake repos with malware. Plus a sneaky mode to hide AI edits in git commits. Smart move: keeps your IP safe.

4. Cool Features Still Cooking
KAIROS (persistent agent) runs in the background 24/7, even "autoDreams" overnight to refine memory. ULTRAPLAN kicks big plans to the cloud when your laptop can't hack it. Game-changer for non-stop productivity.

5. Takeaways for the Corner Office
Most leaks? Dumb human errors like bad configs, so lock down those pipelines. Sometimes writing your own code beats trusting sketchy libs.

Bottom line: Build AI like a team, not a gimmick. Huge wins ahead.
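The CLAUDE.md tip in point 2 matches Claude Code's documented behavior: the file is pulled into context at session start, so it's the place for facts the agent would otherwise have to rediscover. A minimal sketch; the contents below are hypothetical, adapt them to your project:

```markdown
# CLAUDE.md (project notes the agent reads at session start)

## Build & test
- Build: `npm run build`
- Unit tests: `npm test` (e2e is separate: `npm run test:e2e`)

## Conventions
- TypeScript strict mode; avoid `any`
- Use `rg` (ripgrep) for code search, not fuzzy "find where X happens" asks

## Stop rules
- Never edit files under `migrations/`
- Ask before adding a new dependency
```

Keeping it short matters: everything in the file is spent from the context budget on every session, which is exactly the token-trimming pressure point 1 describes.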
-
I don't usually comment on new AI models but Gemma 4 has potential. This looks like a great candidate for leveling up private AI integrations! A lot of my work in defense integrates offline LLMs into existing software to create and productionize novel capability. This isn't easy. They're often difficult to deploy on real infrastructure. Offline LLMs can't compete with the performance of frontier models. Here's why I'm excited to test Gemma 4: - The Apache 2.0 license is great when you need data sovereignty. - Scaling from mobile devices to high-end infrastructure is critical for deployment and real operational use. - The multi-lingual capability looks good enough for real use-cases. Are you paying attention to open-source and open-weight model releases?
-
Do you use AI to write code? Is it secure? Then prove it.

The world's first live Prompt Battle Royale is here. Clash of Prompts. May 7. Amazon Web Services (AWS) Builder Loft. San Francisco.

Two developers go head-to-head. One prompt each. 60 seconds on the clock. AI generates code from both live, on screen, in front of the whole room. Scored in real time on vulnerabilities, security best practices, and prompt efficiency. Winner advances. Loser rethinks their entire prompting strategy.

Here's the thing: research shows 87–94% of AI-generated code ships with security flaws. Even when the developer tries to prompt securely. This event makes that painfully, publicly, undeniably visible.

What's on the line?
🏆 Grand prize: a $3,500 Razer Blade 16 gaming laptop.
💰 Top 20 players split $20,000 in free AI credits with the model of their choice (plus our secure coding agent).

You walk in. You prompt. The code gets judged. The crowd watches. Join us in SF, click the link below! https://lnkd.in/dXPsJskQ
-
This is going to be 🔥 AI-generated code is already everywhere, but security is still the biggest blind spot. Seeing it tested live, in front of a crowd, is exactly the kind of pressure test this space needs. Clash of Prompts is such a cool way to make something abstract (AI risk) real, fast. If you’re building with AI, you should probably be in this room. Who’s going? 👇 #AI #Cybersecurity #GenAI #Developers #AIsecurity
-
AI is writing your code… but would you bet your reputation on it? 😅 Cool one coming up at the Amazon Web Services (AWS) Builder Loft in San Francisco, by Symbiotic Security. If you’re around on May 7, worth stopping by 👇
-
Love how you're celebrating the hack instead of shutting it down. That says a lot about how you think about building for developers. The best growth comes from users finding creative ways to get more value out of your product.