Will Amazon S3 Vectors kill vector databases or save them?

166 points by Fendy | 87 comments | 9/8/2025, 3:35:46 PM | zilliz.com ↗

Comments (87)

simonw · 9h ago
This is a good article and seems well balanced despite being written by someone with a product that directly competes with Amazon S3 Vectors. I particularly appreciated their attempt to reverse-engineer how S3 Vectors works, including this detail:

> Filtering looks to be applied after coarse retrieval. That keeps the index unified and simple, but it struggles with complex conditions. In our tests, when we deleted 50% of data, TopK queries requesting 20 results returned only 15—classic signs of a post-filter pipeline.

Things like this are why I'd much prefer if Amazon provided detailed documentation of how their stuff works, rather than leaving it to the development community to poke around and derive those details independently.
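
To illustrate the behaviour in that quote, here is a minimal sketch (pure Python, made-up data; not AWS's actual implementation) of how a post-filter pipeline under-fills TopK after deletions, while a pre-filter pipeline does not:

  import random

  random.seed(0)
  corpus = {i: [random.random() for _ in range(4)] for i in range(1000)}
  deleted = set(random.sample(sorted(corpus), k=500))  # "delete" 50% of the data

  def distance(a, b):
      return sum((x - y) ** 2 for x, y in zip(a, b))

  def topk_post_filter(query, k):
      # Coarse retrieval over the whole index, ignoring deletions...
      candidates = sorted(corpus, key=lambda i: distance(corpus[i], query))[:k]
      # ...then the filter runs afterwards and silently drops deleted rows.
      return [i for i in candidates if i not in deleted]

  def topk_pre_filter(query, k):
      # Filter first, then rank: returns k results as long as enough data survives.
      live = [i for i in corpus if i not in deleted]
      return sorted(live, key=lambda i: distance(corpus[i], query))[:k]

  q = [random.random() for _ in range(4)]
  print(len(topk_post_filter(q, 20)))  # typically ~10: under-filled
  print(len(topk_pre_filter(q, 20)))   # 20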

libraryofbabel · 7h ago
> Things like this are why I'd much prefer if Amazon provided detailed documentation of how their stuff works, rather than leaving it to the development community to poke around and derive those details independently.

Absolutely this. So much engineering time has been wasted on reverse-engineering internal details of things in AWS that could be easily documented. I once spent a couple days empirically determining how exactly cross-AZ least-outstanding-requests load balancing worked with AWS's ALB because the docs didn't tell me. Reverse-engineering can be fun (or at least I kinda enjoy it) but it's not a good use of our time and is one of those shadow costs of using the Cloud.

It's not like there's some secret sauce here in most of these implementation details (there aren't that many ways to design a load balancer). If there was, I'd understand not telling us. This is probably less an Apple-style culture of secrecy and more laziness and a belief that the important details have been abstracted away from us users because "The Cloud", when in fact these details really do matter for performance and other design decisions we have to make.

TheSoftwareGuy · 6h ago
>It's not like there's some secret sauce here in most of these implementation details. If there was, I'd understand not telling us. This is probably less an Apple-style culture of secrecy and more laziness and a belief that the important details have been abstracted away from us users because "The Cloud", when in fact these details really do matter for performance and other design decisions we have to make.

Having worked inside AWS I can tell you one big reason is the attitude/fear that anything we put in our public docs may end up getting relied on by customers. If customers rely on the implementation to work in a specific way, then changing that detail requires a LOT more work to prevent breaking customers' workloads, if it is even possible at that point.

wubrr · 4h ago
Right now, it is basically impossible to reliably build full applications with things like DynamoDB (among other AWS products), without relying on internal behaviour which isn't explicitly documented.
cbsmith · 42m ago
I've built several DynamoDB apps, and while you might have some expectations of internal behaviour, you can build apps that are pretty resilient to changes in the internal behaviour while relying heavily on the documented behaviour. I actually find the extent of the opacity a helpful guide to the limitations of the service.
JustExAWS · 3h ago
I am also a former AWS employee. What non-public information did you need for DDB?
tracker1 · 2h ago
Try ingesting a complete WHOIS dump into DDB sometime. This was before autoscaling worked at all when I tried... but it absolutely wasn't anything one could consider fun.

In the end, after multiple implementations, we finally had to use a Java Spring app on a server with a LOT of RAM just to buffer the CSV reads without blowing up on the pushback from DDB. I think the company spent over $20k over those couple of months on different efforts in a couple of languages (C#/.NET, Node.js, Java) across a couple of routes (multiple queues, Lambda, etc.) just to get the initial data ingestion working a first time.

The Node.js implementation was fastest, but would always blow up a few days in, without being able to catch it with a debugger attached. The queue and Lambda experiments had throttling issues similar to the DynamoDB ingestion itself, even with the knobs turned all the way up. I don't recall what the issue with the .NET implementation was at the time, but it blew up differently.

I don't recall all the details, and tbh I shouldn't care, but it would have been nice if there had been some extra guidance on getting a few GB of CSV into DynamoDB at the time. To this day, I still hate ETL work.
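
For anyone facing the same problem today, a minimal sketch of a streaming approach (table name, key schema, CSV columns, and pacing numbers are all made up for illustration). boto3's batch_writer groups puts into 25-item BatchWriteItem calls and retries unprocessed items, so the main thing left to you is client-side pacing:

  import csv
  import time

  import boto3

  # Hypothetical table and schema.
  table = boto3.resource("dynamodb").Table("whois_records")

  def ingest(path, rows_per_second=500):
      with open(path, newline="") as f, table.batch_writer(
          overwrite_by_pkeys=["domain"]
      ) as batch:
          for n, row in enumerate(csv.DictReader(f), start=1):
              batch.put_item(Item={
                  "domain": row["domain"],
                  "registrar": row.get("registrar", ""),
              })
              if n % rows_per_second == 0:
                  time.sleep(1)  # crude pacing to stay under provisioned write throughput

  ingest("whois_dump_part_001.csv")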

JustExAWS · 2h ago
tracker1 · 1h ago
Cool... though that would make it difficult to get the hundred or so CSVs into a single table, since that isn't supported. I guess stitching them together before processing would be easy enough... also, no idea when that feature became available.
JustExAWS · 1h ago
It’s never been a good idea to batch ingest a lot of small individual files using any ETL process on AWS, whether it be DDB, Aurora MySQL/Postgres using “load data from S3…”, Redshift batch import from S3, or just using Athena (yeah, I’ve done all of them).
libraryofbabel · 6h ago
And yet "Hyrum's Law" famously says people will come to rely on features of your system anyway, even if they are undocumented. So I'm not convinced this is really customer-centric, it's more AWS being able to say: hey sorry this change broke things for you, but you were relying on an internal detail. I do think there is a better option here where there are important details that are published but with a "this is subject to change at any time" warning slapped on them. Otherwise, like OP says, customers just have to figure it all out on their own.
lazide · 6h ago
Sure, but a court isn’t going to consider Hyrum’s law in a tort claim, while it might give AWS documentation - even with a disclaimer - more weight.

Rely on undocumented behavior at your own risk.

vlovich123 · 3h ago
Has Amazon ever been taken to court for things like this? I really don't think this is a legal concern.
teaearlgraycold · 2h ago
I don't buy the legal angle. But if I were an overworked Amazon SWE I'd also like to avoid the work of documentation and a proper migration the next time the implementation is changed.
lazide · 3h ago
Amazon is involved in so many lawsuits right now, I honestly can’t tell. I did some google searches and gave up after 5+ pages.
simonw · 2h ago
Thanks for this, that's a really insightful comment.
javier2 · 2h ago
It's likely not specified because they want to keep the right to improve or change it later. Documenting in too much detail makes later changes way harder.
whakim · 6h ago
> It's not like there's some secret sauce here in most of these implementation details.

IME the implementation of ANN + metadata filtering is often the "secret sauce" behind many vector database implementations.

citizenpaul · 7h ago
I have to assume that at this point it's either intentional (increases profits?) or because AWS doesn't truly understand its own systems due to the culture of the company.
messe · 6h ago
> because AWS doesn't truly understand their own systems due to the culture of the company.

This. There's a lot of freedom in how teams operate. Some teams have great internal documentation, others don't, and a lot of it is scattered across the internal Amazon wiki. I recall having to reach out on Slack on multiple occasions to figure out how certain systems worked, when diving through the docs and the relevant issue trackers didn't make it clear.

cyberax · 6h ago
AWS also has a pretty diverse set of hardware, and often several generations of software running in parallel. Usually because the new generation does not quite support 100% of features from the previous generation.
alanwli · 7h ago
The alternative is to find solutions that can reasonably support different requirements, because business needs change all the time, especially in the current state of our industry. From what I’ve seen, OSS Postgres/pgvector can adequately support a wide variety of requirements for millions to low tens of millions of vectors - low latencies, hybrid search, filtered search, the ability to serve out of memory and disk, strong-consistency/transactional semantics with operational data. For further scaling/performance (1B+ vectors and even lower latencies), consider a SOTA Postgres system like AlloyDB with AlloyDB ScaNN.
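
For anyone who hasn't tried it, the Postgres/pgvector setup being described looks roughly like this (a sketch using psycopg2; table name, dimensions, and connection details are hypothetical, and the HNSW index needs pgvector 0.5+):

  import psycopg2

  conn = psycopg2.connect("dbname=app")
  cur = conn.cursor()

  cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
  cur.execute("""
      CREATE TABLE IF NOT EXISTS docs (
          id bigserial PRIMARY KEY,
          body text,
          embedding vector(768)
      )
  """)
  # Approximate-nearest-neighbor index using cosine distance
  cur.execute(
      "CREATE INDEX IF NOT EXISTS docs_embedding_idx "
      "ON docs USING hnsw (embedding vector_cosine_ops)"
  )
  conn.commit()

  query_embedding = [0.0] * 768  # placeholder; comes from your embedding model
  literal = "[" + ",".join(str(x) for x in query_embedding) + "]"
  cur.execute(
      "SELECT id, body FROM docs ORDER BY embedding <=> %s::vector LIMIT 10",
      (literal,),
  )
  print(cur.fetchall())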

Full disclosure: I founded ScaNN in GCP databases and am the lead for AlloyDB Semantic Search. And all these opinions are my own.

speedysurfer · 8h ago
And what if they change their internal implementation and your code depends on the old architecture? It's good practice to clearly think about what to expose to users of your service.
altcognito · 8h ago
Knowing how the service will handle certain workloads is an important aspect of choosing an architecture.
libraryofbabel · 6h ago
If you can truly abstract away an internal detail, then great. But often there are design decisions that you cannot abstract away because they affect e.g. performance in a major way. For example, I don't care whether some AWS service is written in Java or Go or C++. I do care a bit about how its indexing and retrieval works, because I need to know that to plan my query workloads.

I actually think AWS did a reasonably good job of this with DynamoDB. Most of the performance tradeoffs, indexing, etc. are pretty clear if you read enough docs, without exposing a ton of unnecessary internals.

redskyluan · 9h ago
Author of this article.

Yes, I’m the founder and maintainer of the Milvus project, and also a big fan of many AWS projects, including S3, Lambda, and Aurora. Personally, I don’t consider S3Vector to be among the best products in the S3 ecosystem, though I was impressed by its excellent latency control. It’s not particularly fast, nor is it feature-rich, but it seems to embody S3’s design philosophy: being “good enough” for certain scenarios.

In contrast, the products I’ve built usually push for extreme scalability and high performance. Beyond Milvus, I’ve also been deeply involved in the development of HBase and Oracle products. I hope more people will dive into the underlying implementation of S3Vector—this kind of discussion could greatly benefit both the search and storage communities and accelerate their growth.

redskyluan · 9h ago
By the way, if you’re not fully satisfied with S3Vector’s write, query, or recall performance, I’d encourage you to take a look at what we’ve built with Zilliz Cloud. It may not always be the lowest-cost option, but it will definitely meet your expectations when it comes to latency and recall.
Shakahs · 4h ago
While your technical analysis is excellent, making judgements about workload suitability based on a Preview release is premature. Preview services have historically had significantly lower performance quotas than GA releases. Lambda for example was limited to 50 concurrent executions during Preview, raised to 100 at GA, and now the default limit is 1,000.
pradn · 5h ago
Thanks for writing a balanced article - much easier to take your arguments seriously! And a sign of expertise.
qaq · 9h ago
"I recently spoke with the CTO of a popular AI note-taking app who told me something surprising: they spend twice as much on vector search as they do on OpenAI API calls. Think about that for a second. Running the retrieval layer costs them more than paying for the LLM itself. That flips the usual assumption on its head." Hmm well start sending full documents as part of context see it flip back :).
heywoods · 9h ago
Egress costs? I’m really surprised by this. Thanks for sharing.
qaq · 8h ago
Sorry, maybe I should've been more clear that it was a sarcastic remark. The whole point of doing vector DB search is to feed the LLM very targeted context so you can save $ on API calls to the LLM.
infecto · 8h ago
That’s not the whole point; it’s the intersection of reducing the tokens sent and getting search both specific and generic enough to capture the correct context data.
j45 · 5h ago
It's possible to create linking documents between the documents to help smooth things out in some cases.
conradev · 5h ago

  At a glance, it looks like a lightweight vector database running on top of low-cost object storage—at a price point that is clearly attractive compared to many dedicated vector database solutions.
They also didn’t mention LanceDB, which fits this description but with an open source component: https://lancedb.github.io/lancedb/
kjfarm · 4h ago
This may be because LanceDB is the most attractive, with a price point of standard S3 storage ($0.023/GB vs $0.06/GB). I also like that LanceDB works with S3-compatible stores, such as Backblaze B2, which is even cheaper (~70% cheaper).
nickpadge · 4h ago
I love LanceDB. It’s the only way I’ve found to performantly and cheaply serve 50M+ records of 768 dimensions. Running on S3 is a bit too slow, but on EFS it can still be a few hundred millis.
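
If you want to try that pattern, a rough sketch of LanceDB's Python API (bucket and table names are hypothetical; check the docs linked above for current method names):

  import lancedb

  db = lancedb.connect("s3://my-bucket/lance")  # or a local path / EFS mount
  table = db.create_table(
      "docs",
      data=[{"id": i, "vector": [0.1 * i] * 768} for i in range(100)],
  )
  # Nearest-neighbor search against the stored vectors
  hits = table.search([0.5] * 768).limit(10).to_pandas()
  print(hits[["id", "_distance"]])
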
factsaresacred · 35m ago
For low cost, there's also Cloudflare Vectorize ($0.05 per 100 million stored vectors), which nobody seems to know exists: https://www.cloudflare.com/developer-platform/products/vecto...
scosman · 9h ago
Anyone interested in this space should look at https://turbopuffer.com - I think they were first to market with S3 backed vector storage, and a good memory cache in front of it.
nosequel · 6h ago
Turbopuffer was mentioned in the article.
janalsncm · 7h ago
S3 Vectors has a topK limit of 30, and if you add filters it may be less than that. So if you need a higher topK you’ll need to 1) look elsewhere or 2) shard your dataset into N shards to get NxK results, which you query in parallel and merge afterwards.

I also didn’t see any latency info on their docs page https://docs.aws.amazon.com/AmazonS3/latest/API/API_S3Vector...
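
A sketch of that fan-out-and-merge workaround; query_shard is a hypothetical placeholder for whatever per-index query call you use, returning (distance, doc_id) pairs:

  import concurrent.futures
  import heapq

  def query_all_shards(query_shard, num_shards, embedding, k=30, final_k=100):
      # Query every shard in parallel, each capped at k results...
      with concurrent.futures.ThreadPoolExecutor(max_workers=num_shards) as pool:
          futures = [pool.submit(query_shard, s, embedding, k) for s in range(num_shards)]
          candidates = [hit for f in futures for hit in f.result()]
      # ...then merge: the smallest distances across all shards win.
      return heapq.nsmallest(final_k, candidates)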

mediaman · 5h ago
And a topk of 30 also means reranking of any sort is out, except for maybe limited reranking of 30->10, but that seems kind of pointless with today’s LLMs that can handle a bit more context.
janalsncm · 4h ago
Yeah exactly, so you could do something like shard by the first 4 bits of the md5 of the text (gives you 16 buckets), but now you’re adding extra complexity to work around their limitations.
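
The bucketing itself is tiny; a sketch of the idea (first 4 bits of the md5 digest pick one of 16 shards):

  import hashlib

  def shard_for(text: str, bits: int = 4) -> int:
      digest = hashlib.md5(text.encode("utf-8")).digest()
      return digest[0] >> (8 - bits)  # 0..15 for bits=4

  print(shard_for("example document"))
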
cpursley · 7h ago
Postgres has pgvector. Postgres is where all of my data already lives. It’s all open source and runs anywhere. What am I missing with the specialty vector stores?
CuriouslyC · 7h ago
latency, actual retrieval performance, integrated pipelines that do more than just vector search to produce better results, the list goes on.

Postgres for vector search is fine for toy products or stuff that's outside the hot loop of your business but for high performance applications it's just inadequate.

cpursley · 7h ago
For the vast majority of applications, the trade-off of keeping everything in Postgres is worth it vs the operational overhead of some VC-hyped data store that won’t be around in 5 years. Most people learned this lesson with Mongo (Postgres jsonb is now good enough for 90% of scenarios).
whakim · 6h ago
It depends on scale. If you're storing a small number of embeddings (hundreds of thousands, millions) and don't have complicated filters, then absolutely the convenience factor of pgvector will win out. Beyond that, you'll need something more powerful. I do think the dedicated vector stores serve a useful place in the market in that they're extremely "managed" - it is really really easy to just call an API and never worry about pre- or post- filtering or sharding your index across a large cluster. But they also have weaknesses in that they're usually optimized around small(er) scale where the bulk of their customers lie, and they don't really replace an actual search system like ElasticSearch.
CuriouslyC · 7h ago
I'm a legit Postgres fanboy, my comment history will back this up, but the ops overhead and performance implications of trying to run pgvector as your core vector store for everything are just silly; you're going to be doing all sorts of Postgres replication gymnastics to make up for the fact that you're using the wrong tool for the job. It's good for prototyping and small/non-core workloads; use it outside that scope at your own peril.
alastairr · 6h ago
Interested to hear more on this. I've been using Pinecone for ages, but they recently increased the cost floor for serverless. I've been thinking of moving everything to pgvector (1M-ish, so not loads), as all the bigger metadata lives there anyway. But I'd be interested to hear any views on that.
CuriouslyC · 5h ago
It depends on your flow honestly. If you're just using your vectors for where filters on domain objects and you don't have hundreds of millions of vectors PGVec is fine. If you have any sort of workflow where you need low latency access to vectors and reliable random read performance, or where vector work is the bottleneck on performance, PGVec goes tits up.
whakim · 6h ago
At 1M embeddings I'd think pgvector would do just fine assuming a sufficiently powerful database.
cpursley · 6h ago
Guess I'm just not webscale™
j45 · 5h ago
Appreciate the clarification. I have been using it for small / medium things and it's been OK.

The "everything Postgres for as long as reasonably possible" approach is fun, but not something I expect to last forever.

cpursley · 7h ago
Also, no way retrieval performance is going to match pgvector, because you still have to join the external vector results with your domain data in the main database at the application level, which is always going to be less performant.
jitl · 4h ago
I'll take a 100ms turbopuffer vector search plus a 50ms postgres-select-where-id-in over a 500ms all-in-one pgvector + join query.

When you only need to hydrate like 30 search result item IDs from Postgres or memcached, I don't see the join being "too expensive" to do in memory.
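
The hydration step described here is just an ID lookup; a sketch with psycopg2 (table and column names are hypothetical, IDs come from the external vector search):

  import psycopg2

  def hydrate(conn, ids):
      # ids are already ranked by the vector search
      with conn.cursor() as cur:
          cur.execute("SELECT id, title, body FROM docs WHERE id = ANY(%s)", (ids,))
          rows = {r[0]: r for r in cur.fetchall()}
      return [rows[i] for i in ids if i in rows]  # preserve the search ranking

  conn = psycopg2.connect("dbname=app")
  print(hydrate(conn, [42, 7, 19]))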

CuriouslyC · 7h ago
For a large class of applications, the database join is the last step of a very involved pipeline that demands a lot more performance than pgvector can deliver. There is also a large class of applications that don't even interface with the database directly, except to emit logging/traceability artifacts.
hbcondo714 · 5h ago
It would be great to have the vector database run on the edge / on-device for offline-first and be privacy-focused. https://objectbox.io/ does this but i would like to see AWS and others offer this as well.
greenavocado · 5h ago
I am already using Qdrant very heavily for code dev (RAG) and I don't see that changing any time soon, because it's the primary choice for the tools I use and it works well.
storus · 9h ago
Does this support hybrid search (dense + sparse embeddings)? Pure dense embeddings aren't that great for specific search; they only hit meaning reliably. Amazon's own embeddings also aren't SOTA.
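
For context, "hybrid" here means combining a dense (embedding) ranking with a sparse/lexical one such as BM25. One common way to merge the two result lists is reciprocal rank fusion, sketched below with made-up document IDs:

  def rrf(rankings, k=60):
      # Reciprocal rank fusion: each doc scores the sum of 1/(k + rank) over the rankings.
      scores = {}
      for ranked_ids in rankings:
          for rank, doc_id in enumerate(ranked_ids, start=1):
              scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
      return sorted(scores, key=scores.get, reverse=True)

  dense_hits = ["d3", "d1", "d7"]   # from vector search
  sparse_hits = ["d1", "d9", "d3"]  # from keyword/BM25 search
  print(rrf([dense_hits, sparse_hits]))  # d1 and d3 rise to the top
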
danielcampos93 · 8h ago
I think you would be very surprised by the number of customers who don't care whether the embeddings are SOTA. For every Joe who wants to talk GraphRAG + MTEB + CMTEB and adaptive RAG, there are 50 who just want whatever IT/prodsec has approved.
infecto · 9h ago
That’s where my mind was going as well. And if not, can this be used in OpenSearch hybrid search?
rubenvanwyk · 7h ago
I don’t think it’s either-or; this will probably become the default / go-to if you aren’t storing your vectors in your DB like Neon or Turso.

As far as I understand, Milvus is appropriate for very large scale, so will probably continue targeting enterprise.

resters · 9h ago
By hosting the vectors themselves, AWS can meta-optimize its cloud based on content characteristics. It may not seem like a major optimization, but at AWS scale it is billions of dollars per year. It also makes it easier for AWS to comply with censorship requirements.
coredog64 · 9h ago
This comment appears to misunderstand the control plane/data plane distinction in AWS. AWS does have limited access to your control plane, primarily for things like enabling your TAMs to analyze your costs or getting assistance from enterprise support teams. They absolutely do not have access to your data plane unless you specifically grant it. The primary use case for the latter is allowing writes into your storage for things like ALB access logs to S3. If you were deep in a debug session with enterprise support they might request one-off access to something large in S3, but I would be surprised if that were to happen.
resters · 9h ago
If that is the case, why create a separate GovCloud and HIPAA service?
thedougd · 7h ago
HIPAA services are not separate. You only need to establish a Business Associate Addendum (BAA) with AWS and stick to HIPAA-eligible services: https://aws.amazon.com/compliance/hipaa-eligible-services-re...

GovCloud exists so that AWS can sell to the US government and their contractors without impacting other customers who have different or less stringent requirements.

barbazoo · 9h ago
> It also makes it easier for AWS to comply with censorship requirements.

Does it, how? Why would it be the vector store that would make it easier for them to censor the content? Why not censor the documents in S3 directly, or the entries in the relational database. What is different about censoring those vs a vector store?

resters · 9h ago
Once a vector has been generated (and someone has paid for it) it can be searched for and relevant content can be identified without AWS incurring any additional cost to create its own separate censorship-oriented index, etc. AWS can also add additional bits to the vector that benefit its internal goals (scalability, censorship, etc.)

Not to mention there is lock-in once you've gone to the trouble of using a specific embedding model on a bunch of content. Ideally we'd converge on backwards-compatible, open source approaches, but cloud vendors want to offer "value" by offering "better" embedding models that are not open source.

simonw · 9h ago
Why would they do that? Doesn't sound like something that would attract further paying customers.

Are there laws on the books that would force them to apply the technology in this way?

resters · 9h ago
Not official laws that we can read, but things like that are already in place per the Snowden revelations.
whakim · 9h ago
Regardless of the merits of this argument, dedicated vector databases are all running on top of AWS/GCP/Azure infrastructure anyways.
barbazoo · 9h ago
And that doesn't apply to any other database/search technology AWS offers?
resters · 9h ago
It does to some but not to most of it, which is why Azure and GCP offer nearly the exact same core services.
j45 · 5h ago
Also, if it's not encrypted, I'm not sure whether AWS or others "synthesize" customer data with a cursory scrubbing of so-called client-identifying information, and then try to optimize and model for those scenarios at scale.

I do feel more and more that some information in the corpus of AI models was obtained this way. A client's name and private identifiable information might not be in the model, but some patterns of how to do things sure seem to come from such sources.

teaearlgraycold · 3h ago
> Not too long ago, AWS dropped something new: S3 Vectors. It’s their first attempt at a vector storage solution

Nitpick: AWS previously funded pgvector (the slowdown in development indicates to me that they have stopped). Their hosted database solutions supported the extension. That means RDS and Aurora were their first vector storage solutions.

j45 · 5h ago
The cloud is someone else's computer.

If it's this sensitive, there are a lot of companies staying on the sidelines until they can run the compute themselves, or limiting what and how they use it.

giveita · 4h ago
Betteridge can answer No to two questions at once!
Fendy · 10h ago
what do you think?
sharemywin · 10h ago
It's annoying to me that there's not a doc store with vectors. Seems like the vector DBs just store the vectors, I think.
simonw · 9h ago
Elasticsearch and MongoDB Atlas and PostgreSQL and SQLite all have vector indexes these days.
KaoruAoiShiho · 8h ago
> MongoDB Atlas

It took a while, but eventually open source dies.

CuriouslyC · 7h ago
My search service Lens returns exact spans from search, while having the best performance both in terms of latency and precision/recall within a budget. I'm just working on release cleanup and final benchmark validation so hopefully I can get it in your hands soon.
storus · 9h ago
Pinecone allows 40KB of metadata with each vector, which is often enough.
whakim · 9h ago
Elasticsearch and Vespa both fit the bill for this, if your scale grows beyond the purpose-built vector stores.
jeffchuber · 10h ago
chroma stores both
nkozyra · 9h ago
As does Azure's AI search.
intalentive · 9h ago
I just use sqlite