Memora is a vector database that offers:

  • Built-in multistage reranking, delivering better search accuracy than basic semantic search;
  • A powerful custom-made embedding model, so you never have to manage embeddings yourself.

With Memora, you can:

  • Store text and related metadata (in JSON format);
  • Execute searches using state-of-the-art multistage reranking;
  • Generate your LLM prompts based on the search results.
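
The store-and-search flow above can be sketched as follows. This is an illustrative, runnable stand-in, not the Memora client: all names are hypothetical, and a naive keyword-overlap score replaces the embedding and multistage reranking that Memora performs server-side.

```python
# Toy in-memory store illustrating the Memora workflow shape:
# add text + JSON-like metadata, then search for relevant records.
# The scoring here is a placeholder for Memora's actual retrieval.
from dataclasses import dataclass, field


@dataclass
class Record:
    text: str
    metadata: dict = field(default_factory=dict)


class ToyStore:
    def __init__(self):
        self.records = []

    def add(self, text, metadata=None):
        """Store a text snippet with optional metadata."""
        self.records.append(Record(text, metadata or {}))

    def search(self, query, top_k=3):
        """Rank records by naive word overlap with the query."""
        q = set(query.lower().split())
        scored = [(len(q & set(r.text.lower().split())), r) for r in self.records]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [r for score, r in scored[:top_k] if score > 0]


store = ToyStore()
store.add("Multistage reranking improves accuracy.", {"topic": "search"})
store.add("Memora stores text with JSON metadata.", {"topic": "storage"})

results = store.search("How does reranking improve accuracy?")
print(results[0].text)
```

With the real service, the embedding, similarity metric, and reranking stages would all happen behind the `add`/`search` calls rather than in your application code.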

Overview of Memora

How Memora can benefit you

If you use retrieval-based methods such as semantic search or keyword matching to generate an LLM prompt, Memora can significantly improve the quality of the context you retrieve, and with it your app's performance.

Memora’s design ethos is to simplify, not complicate. We take pride in keeping things simple:

  1. You provide the data you want to search later;
  2. You submit a search query, and we deliver the most relevant results.
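
Once the most relevant results come back, the last step is turning them into an LLM prompt. A minimal sketch, assuming a hypothetical result format of `{"text": ..., "metadata": ...}` dictionaries (not Memora's actual response schema):

```python
# Assemble an LLM prompt from search results by concatenating the
# retrieved snippets as context above the user's question.
def build_prompt(question, results):
    context = "\n".join(f"- {r['text']}" for r in results)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )


# Hypothetical search results for illustration.
results = [
    {"text": "Multistage reranking improves accuracy.", "metadata": {}},
    {"text": "Memora stores text with JSON metadata.", "metadata": {}},
]

prompt = build_prompt("What improves search accuracy?", results)
print(prompt)
```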

Say goodbye to the overhead of selecting the appropriate similarity metric, wrangling external embedding models, or wiring up tricky search methods like HyDE. With Memora handling these complexities, you can channel your efforts into your business while we ensure optimal search performance and latency.

Where to go next

If you are new to Memora, we recommend starting with the quickstart. Here are some other useful links:
