Remote AI coding just became a fortress. 🛡️🚀

I'm thrilled to announce v0.3.0 of Antigravity Phone Connect! This update isn't just about features—it's about building a "Security-First" architecture for the modern developer.

The major highlights:

✅ Zero-Inline Hardening: We've refactored 100% of our frontend to remove 'unsafe-inline' JS. Every click, toggle, and modal now runs through a strict, decoupled event system. 🖱️
✅ Strict Content Security Policy (CSP): By blocking inline scripts at the browser level, we've added a robust primary defense against XSS in mirrored IDE snapshots. 🛡️
✅ Automated Security Audit: The server now audits itself on startup. Using default passwords? You'll see high-visibility ⚠️ warnings in your terminal instantly. 🕵️‍♂️
✅ Cloudflare Tunnel Support: Added native support for 'cloudflared' alongside ngrok. Access your AI globally with even lower latency. 🌍
✅ Deterministic Permissions: We've extended our click-relay to handle complex IDE permission bars ("Allow", "Deny", "Review Updates") with perfect accuracy. 🎮

Antigravity Phone Connect remains the most powerful way to stay productive while away from your desk. 📱✨

💖 Sponsor: https://lnkd.in/gGXWySZr
🔗 Repo: https://lnkd.in/gbKBEjCg
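For a feel of what the startup self-audit might look like, here is a minimal Python sketch. The config keys and the default-password list are hypothetical illustrations, not the project's actual code:

```python
# Hypothetical startup self-audit: flag default credentials and a disabled CSP.
# Key names ("password", "csp_enabled") and the defaults list are illustrative.
KNOWN_DEFAULTS = {"admin", "password", "changeme"}

def audit_config(config: dict) -> list[str]:
    """Return high-visibility warnings for weak settings found at startup."""
    warnings = []
    if config.get("password", "") in KNOWN_DEFAULTS:
        warnings.append("⚠️  Default password detected — change it before exposing a tunnel!")
    if not config.get("csp_enabled", False):
        warnings.append("⚠️  CSP disabled — inline-script XSS becomes possible.")
    return warnings

for line in audit_config({"password": "admin", "csp_enabled": False}):
    print(line)
```

The point of running this at startup rather than on demand is that the warning appears every single time the server boots, so an insecure default can't quietly persist.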
Krishna Kanth B’s Post
More Relevant Posts
-
Just tried Claude Code Channels with Telegram Messenger — and recorded it!

The buzz: Is this the OpenClaw killer? Here's my take after hands-on testing.

TL;DR — They're not the same thing.

🔸 Claude Code Channels: Message your running Claude Code session from Telegram/Discord. MCP-based, runs locally, coding-focused. Allowlisted plugins, pairing-code auth. Research preview.
🔸 OpenClaw: Full autonomous AI agent — emails, files, web browsing, scheduling — across WhatsApp, Telegram, Discord, iMessage, Slack. 328K+ GitHub stars. NVIDIA built NemoClaw on top of it. But also: 12% of its skill marketplace was found compromised, plus RCE vulnerabilities.

Channels solves ONE slice of OpenClaw's appeal — "message your AI from your phone" — but does it with Anthropic-grade security and native Claude Code integration.

For developers in the Claude Code ecosystem, Channels is the cleaner, safer path. For a full autonomous personal agent? OpenClaw remains the broader platform.

Either way, the pattern is clear: async agentic workflows are becoming the default. We're moving from "sit at terminal" to "message your agent and move on."

What's next? I'm planning to run Claude Code Channels on a Raspberry Pi 24/7 — always-on, low-power, agentic coding from my phone. Detailed Medium article coming soon with the full setup and architecture. Stay tuned!

Repo: https://lnkd.in/gkG4NNa2
-
Whoa, that late March 2026 leak of 512K lines from Anthropic's Claude Code? Total accident via some npm screw-up. So we've got a front-row seat to how a killer AI coder really works. Spoiler: it's way more about the full setup than fancy prompts. Sharing the standout bits for biz folks and tech leads. Thread: (1/5)

1. What Makes the AI "Harness" Tick
It layers memory smartly—fast summaries always ready, pulls specific files as needed, verifies everything against real code. Runs tasks in parallel to speed things up. And it quietly trims old chats to save tokens and stay fresh. For business? Cuts coding time in half, easy.

2. Stuff That Actually Works in Practice
Drop a CLAUDE.md file in your project root with all the key details, like build commands. Tell the AI exactly what tool to use, like "grep this," not some fuzzy ask. Set clear "stop here" rules so it doesn't spin forever. Teams love this—scales without the drama.

3. Security Stuff That Hit Hard ⚠️
They baked in "poison pills" to wreck data if someone tries stealing it for training. Bad guys instantly made fake repos with malware. Plus a sneaky mode to hide AI edits in git commits. Smart move—keeps your IP safe.

4. Cool Features Still Cooking
KAIROS (persistent agent) runs in the background 24/7, even "autoDreams" overnight to refine memory. ULTRAPLAN kicks big plans to the cloud when your laptop can't hack it. Game-changer for non-stop productivity.

5. Takeaways for the Corner Office
Most leaks? Dumb human errors like bad configs—so lock down those pipelines. Sometimes writing your own code beats trusting sketchy libs. Bottom line: build AI like a team, not a gimmick. Huge wins ahead.
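The chat-trimming idea in point 1 can be sketched like this. It's a toy: the "token" budget is a crude word count and the function names are made up, not anything from the leaked code:

```python
# Toy sketch of trimming old chat history under a token budget while always
# keeping the running summary. Word counts stand in for real token counting.
def trim_history(summary: str, messages: list[str], budget: int) -> list[str]:
    kept: list[str] = []
    used = len(summary.split())              # the summary always survives
    for msg in reversed(messages):           # walk newest-first
        cost = len(msg.split())
        if used + cost > budget:
            break                            # older messages get dropped
        kept.append(msg)
        used += cost
    return [summary] + list(reversed(kept))  # restore chronological order

history = trim_history(
    "SUMMARY: built auth module",
    ["old msg " * 50, "fix the login bug"],
    budget=20,
)
```

The useful property: context stays bounded no matter how long the session runs, and the summary preserves a compressed memory of everything that was trimmed.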
-
Anthropic’s write-up on Claude Code auto mode is worth reading, because it gets straight to the real issue with AI agents in business.

Article: https://lnkd.in/gTE-SiF7

Everyone likes the idea of faster agents. What businesses actually need is faster agents that are still governable.

Claude Code’s auto mode is interesting because it is not just “let the model do whatever it wants”. The system is designed to auto-approve safer actions while using prompt-injection checks and transcript classifiers to block riskier behaviour. That is the right direction.

The next phase of AI adoption is not going to be won by the tool that feels the most magical in a demo. It will be won by the tool that people can trust to run inside real workflows.

For businesses, that matters because speed on its own is not the outcome. The outcome is getting admin, delivery, and internal processes moving faster without creating a new layer of operational risk.
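The "auto-approve safer actions, block riskier ones" split can be illustrated with a tiny triage function. The categories below are my own illustrative lists, not Anthropic's actual policy (which uses classifiers, not string matching):

```python
# Illustrative command triage: read-only commands auto-run, known-dangerous
# ones are blocked outright, and everything else waits for human approval.
SAFE_PREFIXES = ("ls", "cat", "git status", "git diff", "grep")
RISKY_MARKERS = ("rm -rf", "git push --force", "chmod 777", "curl")

def triage(command: str) -> str:
    if any(marker in command for marker in RISKY_MARKERS):
        return "block"           # never auto-run; surface to the user
    if command.startswith(SAFE_PREFIXES):
        return "auto-approve"    # read-only, low blast radius
    return "ask"                 # default: explicit approval required
```

The design point is the default: anything not positively recognized as safe falls through to "ask", so new or unusual commands fail toward human review rather than toward autonomy.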
-
Stop running AI agents in "YOLO mode" 🚩

We’ve all seen the power of AI coding agents like Claude Code. But there is a security elephant in the room: 𝐑𝐮𝐧𝐧𝐢𝐧𝐠 𝐭𝐡𝐞𝐬𝐞 𝐚𝐠𝐞𝐧𝐭𝐬 𝐨𝐧 𝐲𝐨𝐮𝐫 𝐥𝐨𝐜𝐚𝐥 𝐦𝐚𝐜𝐡𝐢𝐧𝐞 𝐢𝐬 𝐫𝐢𝐬𝐤𝐲.

Most of us are logged in as admins on our laptops. We have local databases, SSH keys to production servers, and browser cookies for internal tools. If an LLM decides to execute a destructive command or exfiltrate a file, it has the same permissions you do.

𝐓𝐡𝐞 "𝐀𝐩𝐩𝐫𝐨𝐯𝐚𝐥 𝐅𝐫𝐢𝐜𝐭𝐢𝐨𝐧" 𝐏𝐫𝐨𝐛𝐥𝐞𝐦: You can run agents in "safe mode" where you approve every command, but that kills the flow. The moment you switch to "YOLO mode" for speed, you’re opening a backdoor to your entire machine.

That’s why we built 𝐅𝐥𝐨𝐜𝐤 𝐒𝐩𝐚𝐜𝐞. 🚀 Flock Space provides remote AI agent workspaces that live in a sandboxed, least-privileged environment.

𝐇𝐨𝐰 𝐢𝐭 𝐰𝐨𝐫𝐤𝐬:
✅ 𝐑𝐞𝐦𝐨𝐭𝐞 𝐒𝐚𝐧𝐝𝐛𝐨𝐱𝐢𝐧𝐠: Agents run on a shared VM, not your local hardware.
✅ 𝐎𝐒-𝐋𝐞𝐯𝐞𝐥 𝐈𝐬𝐨𝐥𝐚𝐭𝐢𝐨𝐧: 1 Agent = 1 Linux User. Each agent has its own home directory but can collaborate in shared project spaces.
✅ 𝐒𝐒𝐇-𝐅𝐢𝐫𝐬𝐭: No heavy client-side tooling. If you have a terminal and an SSH key, you’re ready to go.
✅ 𝐏𝐞𝐫𝐬𝐢𝐬𝐭𝐞𝐧𝐭 𝐂𝐨𝐧𝐭𝐞𝐱𝐭: Agents run in persistent tmux sessions. You can attach, watch them work, disconnect, and come back later without losing progress.
✅ 𝐒𝐞𝐜𝐮𝐫𝐞 𝐀𝐮𝐭𝐡: Uses OIDC device flow to map your identity to your SSH keys.

How do you keep AI agents both fast and secure in your workflow?
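The "1 Agent = 1 Linux User" idea can be sketched as a provisioning helper. Everything here (user names, the shared group, the paths) is illustrative rather than Flock Space's actual code, and a real setup would execute these commands as root with proper error handling:

```python
# Illustrative per-agent provisioning: build (but don't run) the OS commands
# that isolate one agent as one Linux user with a shared project space.
def provision_agent(name: str, shared_project: str) -> list[list[str]]:
    return [
        # dedicated OS user with its own home directory
        ["useradd", "--create-home", "--shell", "/bin/bash", name],
        # membership in a shared group enables collaboration
        ["usermod", "-aG", "projects", name],
        # grant access to the shared project directory via an ACL
        ["setfacl", "-m", f"u:{name}:rwx", shared_project],
        # persistent tmux session the user can attach to over SSH
        ["sudo", "-u", name, "tmux", "new-session", "-d", "-s", "agent"],
    ]

cmds = provision_agent("agent-7", "/srv/projects/webapp")
```

Because the agent is an ordinary unprivileged user, standard Unix permissions (not a bespoke sandbox layer) enforce the isolation, and `tmux` gives the attach/detach persistence described above.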
-
Like many, I was sold on the idea of an AI agent running 24/7. However, when I tried setting up OpenClaw with my local AI, it broke. I hit a couple of blocking bugs. As an engineer, my first instinct was to fix it and open a PR, until I saw it was 600,000 lines of code standing between me and a fix. That codebase was clearly vibe-coded into existence, and I wasn't about to lose my mind trying to navigate it. Only a few weeks old at the time, yet already the new-age legacy code.

Instead of fighting it, I built my own from scratch in Golang. Meet Justabot. A native, simple-to-use agentic framework that runs locally, using just 120MB of RAM when idle. No databases to configure, and no waging wars with Node or Python runtimes. It can use the browser, write code, navigate the file system, and execute shell commands.

Here is how I am currently using it:
- Online Research: Anytime I want to catch up on the latest tech news, or see how bad the traffic is going to be this morning, I have Justabot check and give me a TL;DR while I am getting ready for work.
- Stock Research: Analyzing stocks, with plans to give it a monthly budget for automated investing. Not just stocks, either; some crypto as well.
- Bug Bounties: 24/7 vulnerability research. It finds potential issues, I triage and confirm the true positives, and then forward them to maintainers.
- Real Development: I’m not a fan of vibe coding or AI IDE plugins. Instead, Justabot runs in its own VM, writes code in feature branches, and opens PRs for me to review and merge. Just like a real coworker.
- General Queries: Fully replaced my previous use of Grok.

It’s secure, private, and built to be used by anyone, not just us nerds. No setup required: double-click to open the program and that's it. Currently available for Mac, Windows, and Linux, with Web and Mobile support coming soon.

Check it out here: https://lnkd.in/eVgxX7DU
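At the core of any framework like this sits a tool registry plus a dispatcher. A minimal Python sketch of that pattern (the tools are toy stand-ins, not Justabot's Go implementation):

```python
# Minimal tool-registry pattern: tools register under a name, the agent loop
# dispatches model-chosen tool calls by name. Tools here are harmless stubs.
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {}

def tool(name: str):
    """Decorator that registers a function as a named tool."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        TOOLS[name] = fn
        return fn
    return register

@tool("shell")
def run_shell(arg: str) -> str:
    return f"(would execute: {arg})"   # a real version shells out inside a VM

@tool("read_file")
def read_file(arg: str) -> str:
    return f"(would read: {arg})"

def dispatch(name: str, arg: str) -> str:
    if name not in TOOLS:
        return f"unknown tool: {name}"  # unknown tools fail closed
    return TOOLS[name](arg)
```

Keeping the registry explicit is also a safety property: the agent can only ever invoke capabilities you deliberately registered.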
-
AI agents are getting powerful enough to delete your database. Anthropic just published how they're thinking about stopping that.

Claude Code's new auto mode is one of the first serious engineering writeups on the "AI took initiative I didn't intend" problem — what they call overeager behavior.

Real incidents from their internal log:
→ Agent deleted remote git branches from a misread instruction
→ Agent uploaded a GitHub auth token to an internal compute cluster
→ Agent retried a failed deploy with a --skip-verification flag

None of these required a "misaligned" model. The agent was genuinely trying to help — it just took initiative past what the user actually authorized.

This is the agentic AI safety problem most people aren't talking about. Not sci-fi alignment. Not rogue models. Just: an AI that's very eager to solve your problem, in ways you didn't ask for.

Their solution is a classifier that judges every action against user intent, not just task relevance. "Clean up my branches" doesn't authorize a batch delete. "Can we fix this?" is a question, not a directive.

The 17% false-negative rate they report is an honest number — they didn't hide it. And it's a meaningful improvement over the alternative (no guardrails at all).

As AI agents take on more autonomous work in 2026, this kind of engineering transparency is exactly what the industry needs more of.

https://lnkd.in/gmVwfSZi

#AIAgents #AISafety #TechLeadership #FutureOfWork #Claude
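The intent distinction can be illustrated with a toy rule-based check. To be clear, Anthropic's actual classifier is a model judging transcripts, not keyword rules; this only makes the two examples from the post concrete:

```python
# Toy intent check: a question never authorizes an action, and a destructive
# action needs explicitly destructive wording in the user's request.
def authorizes(user_message: str, proposed_action: str) -> bool:
    msg = user_message.strip().lower()
    if msg.endswith("?"):
        return False   # "Can we fix this?" is a question, not a directive
    destructive = any(
        marker in proposed_action
        for marker in ("delete", "force-push", "--skip-verification")
    )
    explicit = any(word in msg for word in ("delete", "remove", "force"))
    if destructive and not explicit:
        return False   # "clean up" does not authorize a batch delete
    return True
```

Even this crude version captures the key asymmetry: the check compares the action against what the user *said*, not against whether the action would advance the task.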
-
Some of the heaviest AI governance overhead exists because weaker models needed constant scaffolding.

➡️ What happens when the model no longer needs it?

Rich Sutton called it the bitter lesson: as compute scales, systems built around human-engineered process lose to methods that scale with computation. The same pattern is showing up in governance.

Most production AI systems today carry dense procedural instructions. Classify the intent into one of 14 categories. Route to the appropriate handler. Retrieve the top five knowledge base articles. Check the response for hallucinated URLs. That choreography exists because earlier models needed it. They would skip steps, hallucinate references, lose context. The scaffolding compensated for limitations.

But each model generation makes some of those steps unnecessary. The scaffolding that was a safety net becomes drag.

The point is not fewer controls. It is better-placed controls. Stronger models shift governance value away from procedural choreography and toward three things:

1. Explicit acceptance criteria. Define what a successful, compliant output looks like. Evaluate against it at decision points and before release.
2. Hard boundaries. Non-negotiable constraints that survive any model upgrade. Never disclose customer financial data. Always verify refund eligibility against policy.
3. Auditable decision rights. Who owns the downstream outcome when an agent interprets a policy differently than intended?

Anthropic's recent security work with Mozilla is one signal of what this shift looks like in practice. Claude Opus 4.6 found 22 previously unknown Firefox vulnerabilities in two weeks, 14 rated high severity by Mozilla. A separate GitHub advisory credited Claude-assisted research in the disclosure of a critical Ghost CMS SQL injection. These are not hypothetical capabilities. Firefox 148 shipped the patches to hundreds of millions of users.

The organizations that benefit first will not be the ones with the heaviest governance. They will be the ones whose governance is aimed at outcomes instead of compensating for limitations. Fewer procedural handoffs. Clearer boundaries. Harder acceptance gates. The right controls. Less drag.

#AIGovernance #EnterpriseAI #AgenticAI #RiskManagement #CISO #AICompliance #DigitalTransformation #AIStrategy
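A "hard boundary" of the kind described in point 2 can be sketched as a release-time gate that every output passes through regardless of which model produced it. The patterns below are illustrative examples, nowhere near a complete policy:

```python
# Illustrative hard-boundary release gate: non-negotiable checks applied to
# every agent output before it leaves the system. Patterns are toy examples.
import re

HARD_BOUNDARIES = [
    (re.compile(r"\b\d{13,16}\b"), "possible card/account number disclosed"),
    (re.compile(r"(?i)api[_-]?key\s*[:=]"), "credential material in output"),
]

def release_gate(output: str) -> tuple[bool, list[str]]:
    """Return (allowed, violations) for a candidate output."""
    violations = [reason for pat, reason in HARD_BOUNDARIES if pat.search(output)]
    return (not violations, violations)

ok, why = release_gate("Refund approved per policy 4.2")
```

The property that matters for governance: the gate sits outside the model, so it survives any model upgrade unchanged, which is exactly what distinguishes a boundary from scaffolding.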
-
Most vibe-coded apps fail their first security check. Not because the developer is careless. Because the AI didn't tell them what it quietly broke.

3 things Bolt and Lovable commonly leave behind:
→ API keys hardcoded in source files
→ Database queries built from raw user input
→ Auth missing on routes the AI added last minute

You won't see these in Lighthouse. You won't catch them in a code review you're doing alone.

Free scan at vibedoctor.io
No credit card. Just connect your GitHub repo or upload your code.
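To make the first two failure modes concrete, here is a deliberately naive scanner. Real tools (and presumably vibedoctor.io) go far beyond regex heuristics; this only shows what the patterns look like in source:

```python
# Naive static checks for two common AI-generated mistakes: hardcoded
# secrets and SQL built by string interpolation. Heuristics, not a real SAST.
import re

CHECKS = [
    (re.compile(r"(?i)(api[_-]?key|secret)\s*=\s*['\"][A-Za-z0-9_\-]{8,}['\"]"),
     "hardcoded API key or secret"),
    (re.compile(r"(?i)execute\(\s*f?['\"]\s*select .*\{"),
     "SQL built from interpolated input"),
]

def scan(source: str) -> list[str]:
    """Return a finding message for each check that matches the source text."""
    return [msg for pat, msg in CHECKS if pat.search(source)]

findings = scan(
    'API_KEY = "sk_live_abc12345"\n'
    'cur.execute(f"SELECT * FROM users WHERE id={uid}")'
)
```

Both examples would sail through Lighthouse and look perfectly normal in a quick solo review, which is the post's point.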
-
Most AI agents can't do the things that actually matter. They scrape and summarize. But ask one to actually log in: reply to a thread, complete a checkout, post a comment. It hits a wall.

Chrome extensions won't save you. They run inside the browser but outside your agent's process; you still can't drive them programmatically without a human in the loop.

The culprit is what I call The Sandbox Session Divide. Your agent runs in a container. Your sessions live on your host machine, inside a real Chrome browser with real cookies and active logins. By default, those two environments don't talk to each other.

Three obstacles make bridging them harder than it looks:
• Chrome only listens on localhost. Containers reach the host through a bridge network, not localhost.
• Chrome rejects requests where the Host header isn't localhost. Even routed traffic fails.
• CDP WebSocket URLs are hardcoded to localhost. Container clients connect to the wrong endpoint.

The fix is a three-layer bridge: Chrome → a host-side TCP proxy that rewrites Host headers → socat inside the container that maps localhost to the host.

Result: your agent runs with your real browser profile. ReCAPTCHA passes. Logins persist. No re-authentication on every run.

The infrastructure is the part nobody explains. Full build is in this month's SyncAI — link in comments.

#AIAgents #AgenticAI #EnterpriseAI #AIEngineering #DevOps
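The middle layer of that bridge, the Host-header rewrite, can be sketched in a few lines. This is only the request-header step under assumed defaults (DevTools on port 9222); a full proxy would also rewrite the CDP WebSocket URLs that come back in responses:

```python
# Sketch of the header-rewrite step: make a request arriving from the
# container's bridge network look like it came from localhost, which is
# what Chrome's DevTools endpoint insists on.
def rewrite_host(request: bytes, target: str = "localhost:9222") -> bytes:
    out = []
    for line in request.split(b"\r\n"):
        if line.lower().startswith(b"host:"):
            out.append(b"Host: " + target.encode())  # Chrome requires localhost
        else:
            out.append(line)
    return b"\r\n".join(out)

req = b"GET /json/version HTTP/1.1\r\nHost: 172.17.0.1:9222\r\n\r\n"
fixed = rewrite_host(req)
```

A host-side proxy applying this per-request, combined with socat in the container forwarding localhost to the bridge address, is what closes the divide described above.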
-
One of the biggest challenges with agentic AI isn’t capability — it’s governance.

While experimenting with a RAG architecture using Claude Sonnet, I ran into a familiar problem: approving every file change and command. Safe? Yes. Productive? Not always — it breaks flow quickly. Skipping approvals entirely never felt right either. Productivity and safety have been treated as a zero-sum game for too long.

What I kept wondering was whether there’s a smarter way to reduce interruptions without giving up guardrails — something that actually allows you to step away and let the model work (maybe with a cup of coffee ☕).

Anthropic’s newly announced Auto Mode for Claude Code is a thoughtful attempt to break that trade-off. By using built-in safeguards to block high-risk actions while allowing safe ones to proceed, it points toward a more sustainable model for AI-assisted development.

🔗 https://lnkd.in/e4teAVtr

This is the kind of progress that matters — not flashy demos, but improvements that fit real developer workflows. Looking forward to testing this in practice.

Where do you draw the line today between speed and control when using AI in development?

#ArtificialIntelligence #Claude #DeveloperExperience #AICoding #FutureOfWork