Another lesson in building reliable systems - not just configuring them.
I thought I had everything set up correctly.
- Backups configured
- S3-compatible storage connected
- Backup triggered via cron jobs during testing
And yet nothing showed up where I expected.
What looked like a simple configuration issue turned out to be a wrong mental model of how S3 actually works.
This post breaks down what went wrong and what fixed it.
The Setup
I was working with:
- An S3-compatible object storage service (not AWS directly)
- A backup system that allows:
  - Configuring a bucket
  - Setting a backup path
  - Defining backup frequency
Everything seemed straightforward.
But the problem started with one assumption:
Buckets can behave like folders.
The First Mistake: Treating Buckets Like Folders
In a traditional file system, you think like this:
```
backups/
  app1/
    db.sql
```
So it felt natural to assume:
- Create a “folder” in object storage
- Then create buckets inside it for different use cases
In my case, I had something like a folder already created in the object storage UI, and I assumed:
That is my base, and I can create buckets under it
So I tried:
- Connecting to that “folder” as a bucket
- Then creating another bucket inside it (for vector DB backups)
This kept failing.
At first, I thought:
- Maybe it is a permission issue
- Maybe my user does not have enough access
But that was not the real problem.
What Was Actually Going Wrong
I was effectively trying to:
- Treat a bucket like a parent directory
- And create another bucket inside it
That is not how S3 works.
In S3:
- Buckets are top-level containers
- You cannot nest buckets inside other buckets
So when I tried to:
- Connect to an existing bucket
- And then create another bucket under it
It failed because the concept itself is invalid.
The Correct Mental Model
This is how S3 actually works:
```
bucket: backups
object key: app1/2026-04-15/db.sql
```
There are only two things:
- Bucket (top-level)
- Object key (full path as a string)
There is no real folder hierarchy.
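The flat model is easy to see with a toy sketch (plain Python, nothing S3-specific): a bucket behaves like a dictionary from key strings to bytes, and the slashes are just characters inside those strings.

```python
# A toy model of one S3 bucket: a flat mapping from key strings to data.
# The "/" characters carry no structural meaning to the store itself.
bucket = {
    "app1/2026-04-15/db.sql": b"-- dump --",
    "app1/2026-04-16/db.sql": b"-- dump --",
}

# There is no separate "app1/" folder object; only the full keys exist.
print("app1/" in bucket)                   # False
print("app1/2026-04-15/db.sql" in bucket)  # True
```

The "folder" `app1/` has no independent existence: delete both objects and it disappears, because it was never anything more than a shared prefix.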
Organizing Data the Right Way
The fix was not about creating folders.
It was about changing how I name objects.
Instead of trying to structure things at the bucket level, I moved that structure into the object key.
For example:
```python
object_name = f"qdrant/{collection_name}/{snapshot_name}"
```
This gives a structure like:
```
bucket: backups
  qdrant/
    collection_1/
      snapshot_001
    collection_2/
      snapshot_002
```
Even though S3 is flat internally, most UIs render this as a folder-like structure.
This is the correct way to organize data.
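That folder-like rendering can be reproduced in a few lines. The sketch below (plain Python, hypothetical key names) mimics what an S3 listing with a delimiter does: everything between the prefix and the next `/` is collapsed into a "common prefix", which the UI draws as a folder.

```python
def common_prefixes(keys, prefix="", delimiter="/"):
    """Group flat keys the way an S3 listing with a delimiter does."""
    folders, objects = set(), []
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            # Everything up to the first delimiter becomes a "folder".
            folders.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            objects.append(key)
    return sorted(folders), objects

keys = [
    "qdrant/collection_1/snapshot_001",
    "qdrant/collection_2/snapshot_002",
]
print(common_prefixes(keys, prefix="qdrant/"))
# (['qdrant/collection_1/', 'qdrant/collection_2/'], [])
```

The storage never changed: the hierarchy is computed at listing time from plain strings.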
The Second Mistake: Mixing Bucket and Path
Another issue was passing paths as part of the bucket name.
For example:
```
bucket = backups/qdrant
```
This is invalid.
Correct approach:
- bucket = `backups`
- object key = `qdrant/collection_name/snapshot`
S3 APIs expect a valid bucket name, not a path.
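A quick sanity check makes the failure obvious. The function below is a simplified subset of AWS's bucket-naming rules (3–63 characters; lowercase letters, digits, dots, and hyphens; the real rules have more cases), which is enough to show why a slash can never appear in a bucket name:

```python
import re

def looks_like_valid_bucket_name(name: str) -> bool:
    """Simplified check against S3 bucket-naming rules.

    Real S3 enforces more (no IP-address-style names, no adjacent
    dots, etc.); this covers the common cases, including the rule
    that "/" is never allowed.
    """
    return re.fullmatch(r"[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]", name) is not None

print(looks_like_valid_bucket_name("backups"))         # True
print(looks_like_valid_bucket_name("backups/qdrant"))  # False: "/" is not a valid character
```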
What Finally Clicked
The breakthrough was realizing:
- I was not dealing with folders at all
- I was dealing with string prefixes inside object keys
Once I stopped trying to create hierarchy at the bucket level and moved everything into object naming, the entire setup started working as expected.
Putting It All Together
Correct configuration:
- Bucket: `backups`
- Object naming: `f"qdrant/{collection_name}/{snapshot_name}"`
This alone was enough to:
- Organize backups cleanly
- Avoid bucket-related errors
- Make the storage layout intuitive in the UI
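In code, the separation looks roughly like this. The `build_snapshot_key` helper and the names in the comment are illustrative, not the exact implementation; the point is only that the bucket name stays flat while the key carries the hierarchy.

```python
def build_snapshot_key(collection_name: str, snapshot_name: str) -> str:
    """All hierarchy lives in the object key; the bucket name stays flat."""
    return f"qdrant/{collection_name}/{snapshot_name}"

# With boto3 (or any S3-compatible SDK), the upload then passes the two
# pieces separately, e.g.:
#   s3.upload_file(Filename=local_path,
#                  Bucket="backups",
#                  Key=build_snapshot_key(collection_name, snapshot_name))

print(build_snapshot_key("collection_1", "snapshot_001"))
# qdrant/collection_1/snapshot_001
```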
Key Takeaways
- Buckets are not folders
- You cannot create a bucket inside another bucket
- S3 is a flat object store
- Folder-like structures come from object key prefixes
- Always keep bucket and path separate
Final Thought
The issue was not with permissions or configuration.
It was a mismatch between how I expected storage to behave and how it actually works.
Once the mental model changed, the implementation became simple.
If something feels unnecessarily complicated in S3, it is often a sign that the model being used is incorrect.
