You can get really, really far with this approach. Even 'naive' approaches, like classifying what you're embedding and directing it to different models, or using multiple models and blending their scores, can get you to a point where your results are better than anything you could pay (a lot!) for.
What is especially beneficial about that approach is that you can hang each of the embeddings off of the same rows in the db and tune how their scores are blended at query time.
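Roughly, the query-time blending can look like the sketch below (a minimal illustration only; the weights, the two-model pairing, and all the names in it are made up, not anyone's actual schema):

    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def blended_search(query_vecs, rows, weights=(0.6, 0.4), top_k=10):
        # rows: (doc_id, emb_model_a, emb_model_b) tuples pulled from the db,
        # i.e. two embeddings hanging off the same record.
        # query_vecs: the same query embedded once per model.
        scored = []
        for doc_id, emb_a, emb_b in rows:
            score = (weights[0] * cosine(query_vecs[0], emb_a)
                     + weights[1] * cosine(query_vecs[1], emb_b))
            scored.append((score, doc_id))
        scored.sort(reverse=True)
        return scored[:top_k]

Because the weights only exist at query time, you can tune (or even expose) the blend without re-embedding anything.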
If you haven't tried it yet: because what you're searching is presumably standardized enough that there will be sprawling glossaries of acronyms, taking those and processing them into custom word lists will boost scores. If you go a little further and build little graphs/maps of them all, doubly so, and it will give you 'free' autocomplete and the ability to specify which specific acronym(s) you meant or don't want on the query side.
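A minimal sketch of the word-list side, with an invented glossary (the entries and names here are placeholders, not real data):

    # Curated acronym glossary -> expansions; one acronym can map to several meanings.
    GLOSSARY = {
        "FOB": ["Free On Board"],
        "HS": ["Harmonized System", "High Speed"],
    }

    def expand_query(query, exclude=()):
        variants = [query]
        for token in query.split():
            for expansion in GLOSSARY.get(token.upper(), []):
                if expansion not in exclude:
                    variants.append(query.replace(token, expansion))
        # Search each variant, or surface them as autocomplete/disambiguation hints.
        return variants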
Have recently been playing around with these for some code+prose+extracted prose+records semantic searching stuff; it's a fun rabbit hole.
This is a really cool idea. By “different models” do you mean models fine tuned on different sources? How would you decide how to classify chunks?
I love reading battlefield notes like this for RAG/search systems. Anyone shooting for useful output is going to hit the same pain points but each article like this has a different set of solutions.
I’m leaning on OpenAI for my embedding needs but will be trying llama-server in the future. I stuck with Postgres because it was easy to run it on my Dokku installation. Great to know sqlite is an option there too. My corpus is too small for Postgres to elect to use an index, so it’s running the full table scans that sqlite would. For seeding I use a msgpack file and ship that with the code when deploying.
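The seed file is just packed rows of id/text/vector; the general shape is something like this (a sketch, not the exact code; names are made up):

    import msgpack

    def write_seed(rows, path="seed.msgpack"):
        # rows: (doc_id, text, embedding) tuples; the file ships with the deploy.
        with open(path, "wb") as f:
            for doc_id, text, vec in rows:
                f.write(msgpack.packb({"id": doc_id, "text": text, "vec": list(vec)}))

    def read_seed(path="seed.msgpack"):
        # Stream the records back out at startup and bulk-insert them.
        with open(path, "rb") as f:
            for record in msgpack.Unpacker(f, raw=False):
                yield record["id"], record["text"], record["vec"]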
This is my site: https://customelon.com (niche need of tariff and excise information for shipping to The Bahamas)
It’s built with ASP.NET, Postgres/pgvector, and OpenAI embedding/LLMs. Ingestion is via Textract with a lot of chunking helpers to preserve context layered on top.
Again, great article.
Thanks! Yeah embedding is simple enough and my needs were small enough that I didn’t want to pay. Both llama-server and ollama are great options, and if container size isn’t an issue you get a greater variety running what you want with sentence transformers.
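For reference, the sentence-transformers path is only a few lines (a sketch; the model name and example strings are just illustrations, swap in whatever fits your corpus):

    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")  # example model; pick any from the hub

    docs = ["duty rate for lithium batteries", "excise on imported vehicles"]
    doc_vecs = model.encode(docs, normalize_embeddings=True)
    query_vec = model.encode("battery import duty", normalize_embeddings=True)

    scores = doc_vecs @ query_vec  # normalized vectors, so dot product == cosine similarity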
Cool site :)
Hi there, thanks for writing and sharing your experiences. I'm one of the builders of GoodMem (https://goodmem.ai/), which is infra to simplify end-to-end RAG/agentic memory systems like the one you built.
It's built on Postgres, which I know you said you left behind, but one of the cool features it supports is hybrid search over multiple vector representations of a passage, so you can do a dense (e.g. nomic) and sparse (e.g. splade) search. Reranking is also built in, although it lacks automatic caching (since, in general, the corpus changes over time).
It also deploys to fly.io/railway and costs a few bucks a month to run if you're willing to use cloud-hosted embedding models (otherwise, you can run TEI/vLLM on CPU or GPU for the setup you described).
I hope it's helpful to someone.
This is really cool. How is reranking built in? Is there a model that runs inside the database? If so, how did you choose it?
I didn't know sqlite had a vector extension. I'm also using nomic 1.5 with 256-dimension vectors. After about 44k entries, searching is way too slow. I'm thinking about reducing the size to half. What size are you using?
For text search, I'm using lnx, which is based on Tantivy.
I disabled the vector search feature for now but I will re-enable it after some optimization. The site is at https://stray.video
I use full length vectors (512 dimensions) and have seen very fast lookups with pgvector (HNSW index) and sqlite-vec on 20k vectors. I think any decent vector database should be able to handle 44k entries… which one are you using now?
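Both are only a few lines to set up. A sketch of the sqlite-vec side (table/file names invented; the pgvector equivalent is a one-line "CREATE INDEX ... USING hnsw (embedding vector_cosine_ops)" on the embedding column):

    import sqlite3
    import sqlite_vec
    from sqlite_vec import serialize_float32

    db = sqlite3.connect("search.db")
    db.enable_load_extension(True)
    sqlite_vec.load(db)
    db.enable_load_extension(False)

    # One row per document; rowid links back to the metadata table.
    db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS doc_vecs USING vec0(embedding float[256])")

    query_vec = [0.0] * 256  # replace with the real 256-dim query embedding
    hits = db.execute(
        "SELECT rowid, distance FROM doc_vecs WHERE embedding MATCH ? "
        "ORDER BY distance LIMIT 10",
        [serialize_float32(query_vec)],
    ).fetchall()

At 44k rows of 256-dim vectors, even a brute-force scan should come back in milliseconds, so it may be worth profiling where the time is actually going before shrinking the vectors.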
Sorry, but it's hard not to have some negative sentiment about you working at xAI; Elon is so incredibly toxic.
Thanks for the article though.