Mykola Kondratiuk

I Open-Sourced the AI Agent That Grew My LinkedIn 5x in 30 Days

I've been running an AI social media agent for months. It comments, posts and builds connections on LinkedIn, Twitter, Reddit, Dev.to, and 6 more platforms — as me, in my voice.

Last week I open-sourced the whole thing. Here's the architecture, the hard decisions, and what I learned building it.

GitHub → Open-Twin/opentwins

Why I built this

I'm a tech leader at a gaming company by day. I also run a personal brand across 10 platforms. The math was brutal:

  • 10 platforms × 3-5 meaningful comments/day = 30-50 interactions
  • Each one takes 2-3 minutes if you actually read the post and write something thoughtful
  • That's 1-2.5 hours/day just on engagement

I wasn't willing to sacrifice quality for speed. Template-based tools like Buffer can schedule posts, but they can't read a thread and contribute something useful. LinkedIn automation tools like Expandi get your account banned because they hit APIs directly.

I needed something different: an AI that could think, read context, and engage like I would — using a real browser, not API abuse.

The stack (and why)

Here's what's under the hood:

┌─────────────────────────────────────┐
│           OpenTwins CLI             │
│         (Node.js + Bree)            │
├──────────┬──────────┬───────────────┤
│ Scheduler│Dashboard │  Agent Loop   │
│ (cron)   │ :3847    │               │
├──────────┴──────────┼───────────────┤
│                     │  Claude Code  │
│    Chrome CDP       │  (the brain)  │
│   (real browser)    │               │
├─────────────────────┴───────────────┤
│              SQLite                 │
│     (sessions, logs, metrics)       │
└─────────────────────────────────────┘
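The scheduler layer in the diagram is Bree, which runs each job file in `jobs/<name>.js` on its own worker thread. A minimal configuration sketch (the job names and intervals here are illustrative, not the real OpenTwins config):

```javascript
const Bree = require('bree');

const bree = new Bree({
  jobs: [
    // Each name maps to a file under jobs/, e.g. jobs/linkedin-agent.js
    { name: 'linkedin-agent', interval: 'every 1 hour' },
    { name: 'reddit-agent',   interval: 'every 2 hours' },
    { name: 'dashboard-sync', interval: 'every 15 minutes' },
  ],
});

bree.start();
```

Each platform agent being its own job means one crashed agent can't take down the others.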

Let me walk through the decisions that weren't obvious.

Decision 1: Real browser, not API calls

This was non-negotiable. Every LinkedIn automation tool that hits their API directly gets detected and banned eventually. The detection isn't just about rate limits — it's about how you connect.

OpenTwins launches a real Chrome instance via CDP (Chrome DevTools Protocol). You log in once manually. The agent uses your actual browser session. From LinkedIn's perspective, there's no difference between you and the agent — same browser fingerprint, same cookies, same IP.

The tradeoff: it's slower. Each action takes 5-15 seconds instead of milliseconds. But that's actually a feature — it looks human.

Decision 2: Claude as the brain

I evaluated GPT-4o, Gemini, and Claude for the core agent loop. Claude won for three reasons:

  1. Voice matching. I fed it 50 of my real comments and it nailed my tone. Not generic "Great post! 🔥" energy — actual technical depth with my specific quirks.
  2. Context window. Agents need to read an entire thread + the original post + the commenter's profile before deciding what to say. That's a lot of context.
  3. Tool use reliability. The agent loop involves 8-12 tool calls per session. Claude's function calling was the most consistent.

The agent doesn't just generate text. It thinks: "Is this post worth engaging with? What angle hasn't been covered in the comments? Would my human actually care about this?"

Decision 3: Local-first, no cloud

Everything runs on your machine. Your credentials never leave your computer. There's no SaaS backend, no analytics we collect, no "free tier with data sharing."

This was a product decision as much as a technical one. If someone's going to let an AI post as them on LinkedIn, they need to trust the system completely. Open source + local-only is the maximum trust architecture.

The cost? You need your own Claude Code subscription or Anthropic API key. But at ~$2-5/day for 10 active platforms, it's cheaper than any SaaS alternative.

The agent loop (simplified)

Every hour during your configured active hours, each platform agent runs this loop:

1. DISCOVER  → Find relevant posts/threads to engage with
2. EVALUATE  → Score each one: relevance, engagement potential, recency
3. THINK     → Decide: comment, react, skip, or create original content
4. COMPOSE   → Generate response in your voice with full context
5. ACT       → Execute in the real browser
6. LOG       → Record everything to SQLite for the dashboard

The EVALUATE step is where most of the intelligence lives. A bad agent comments on everything. A good agent is selective — just like a real person scrolling their feed.
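That selectivity can be sketched as a scoring heuristic: combine relevance, engagement potential, and recency, then skip anything below a threshold. The weights and cutoff below are illustrative, not the real OpenTwins scoring:

```javascript
// Score one candidate post. All three inputs are assumed to be in [0, 1]
// except ageHours; recency fades to zero over a day.
function scorePost({ relevance, engagement, ageHours }) {
  const recency = Math.max(0, 1 - ageHours / 24);
  return 0.5 * relevance + 0.3 * engagement + 0.2 * recency;
}

// Keep only posts above the threshold, best first, capped per session —
// a good agent engages with a few things, like a human scrolling a feed.
function selectPosts(posts, threshold = 0.6, maxActions = 3) {
  return posts
    .map((p) => ({ ...p, score: scorePost(p) }))
    .filter((p) => p.score >= threshold)
    .sort((a, b) => b.score - a.score)
    .slice(0, maxActions);
}
```

The `maxActions` cap matters as much as the threshold: even on a feed full of great posts, commenting on everything looks robotic.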

What went wrong along the way

Voice drift. Early versions would slowly drift from my voice over long sessions. The agent would start sounding more "AI-generic" by comment #15. Fix: I now re-inject the voice calibration prompt every 5 actions.
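The fix is mechanical. A sketch of the re-injection cadence (the prompt text and message shape are placeholders, not the real calibration prompt):

```javascript
// Placeholder calibration prompt — the real one is built from sample comments.
const VOICE_PROMPT = 'Write as Mykola: terse, technical, no emoji filler.';
const REINJECT_EVERY = 5;

// Append the calibration prompt to the conversation every N actions,
// so the model never drifts more than a few comments from the target voice.
function withVoiceCalibration(messages, actionCount) {
  if (actionCount > 0 && actionCount % REINJECT_EVERY === 0) {
    return [...messages, { role: 'user', content: VOICE_PROMPT }];
  }
  return messages;
}
```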

Rate limit tuning. My first LinkedIn run did 40 comments in 2 hours. That's... not human. I got a soft warning. Now the defaults are conservative: 8-12 comments/day on LinkedIn, spread across active hours with randomized gaps.
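Those conservative defaults can be generated rather than hard-coded. A sketch of spreading a daily budget across active hours with jitter, so actions never land on a fixed cadence (numbers illustrative):

```javascript
// Plan a day's action times: `budget` slots spread evenly across the
// active window, each jittered by up to ±40% of the base gap.
// Returns minutes since midnight.
function planDay({ budget = 10, startHour = 9, endHour = 18 } = {}) {
  const baseGap = ((endHour - startHour) * 60) / budget;
  const times = [];
  let t = startHour * 60;
  for (let i = 0; i < budget; i++) {
    const jitter = (Math.random() - 0.5) * 0.8 * baseGap;
    const slot = Math.round(t + jitter);
    // Clamp into the active window so jitter never escapes it.
    times.push(Math.min(endHour * 60 - 1, Math.max(startHour * 60, slot)));
    t += baseGap;
  }
  return times;
}
```

Generating the schedule once per morning, rather than deciding "should I act now?" every tick, also makes the gap distribution easy to audit from the logs.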

Platform detection on Reddit. Reddit's anti-bot systems are sophisticated. They look at things like: do you always comment within X seconds of a post going live? Do your comments follow a pattern? The fix was adding randomized delays and mixing in genuine "lurk" sessions where the agent reads but doesn't engage.

The "too helpful" problem. Claude is really good at writing helpful, thorough comments. But sometimes a 3-paragraph response to a simple question looks suspicious. I had to add length calibration: match the depth of your response to the depth of the thread.
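One way to sketch that calibration is to cap the reply length based on how long the existing comments are (the word thresholds here are made up, not OpenTwins' actual values):

```javascript
// Cap reply length from the average comment length in the thread,
// so a one-line question never gets a three-paragraph essay.
function maxReplyWords(thread) {
  const totalWords = thread.reduce((sum, c) => sum + c.split(/\s+/).length, 0);
  const avgWords = totalWords / Math.max(1, thread.length);
  if (avgWords < 15) return 30;  // quick back-and-forth: stay short
  if (avgWords < 60) return 80;  // normal discussion
  return 150;                    // long-form thread: depth fits in
}
```

The cap is then passed into the compose step as a hard constraint on the generated comment.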

Real numbers (30 days)

Since going live with the full 10-platform setup:

| Metric | Before | After | Change |
| --- | --- | --- | --- |
| LinkedIn profile views/week | ~120 | ~450 | +275% |
| New connections/week | 5-8 | 35-40 | +5x |
| Inbound DMs/week | 1-2 | 8-12 | +6x |
| Hours spent on engagement/week | 10-15h | <1h | -93% |
| Platforms actively maintained | 3-4 | 10 | +150% |

The sub-1-hour figure is real. I spend about 30 minutes per week reviewing the activity feed and adjusting the strategy. The agents handle the rest.

What's next

OpenTwins is MIT licensed and I'm actively developing it. The roadmap includes:

  • Multi-language support — agents that can engage in different languages per platform
  • Content pipeline — auto-generate original posts from your existing content (blog posts, repos, talks)
  • Team mode — run agents for multiple team members from one dashboard
  • Analytics dashboard — deeper insights into what types of engagement drive the most results

Try it

⭐ Star on GitHub if this is interesting to you. Issues and PRs are very welcome.


I'm Mykola — I build things with AI and write about the process. Follow me here on DEV.to for more OpenTwins updates and the occasional deep dive into AI agents, automation, and building in public.

Have questions about the architecture or want to share your results? Drop a comment below — I read every one (manually, I promise 😄).
