In a previous post, we wrote about using vectorized search and cosine similarity to quickly query a database of blog posts and retrieve the most relevant content to a natural language query. This is achieved using OpenAI’s embeddings API, Pinecone (a vector database), and OpenAI ChatCompletions. For reference, here’s the rough architecture:

The steps above do the following:
- A console app retrieves blog post URLs from an RSS feed and reads all the posts one by one
- For each post, it creates an embedding with the OpenAI embeddings API, resulting in a vector of 1536 dimensions
- It stores each embedding in a Pinecone index; we created the index from the Pinecone portal
- A web app asks the user for a query (e.g., “How do I create a chat bot?”) and creates an embedding for the query
- It performs a vectorized search, finding the closest post vectors to the query vector with cosine similarity, and keeps the one with the highest score
- It uses the ChatCompletion API and submits the same query, adding the highest-scoring post as context to the user question; the post text is injected into the prompt
ℹ️ See “Pinecone and OpenAI magic: A guide to finding your long lost blog posts with vectorized search and ChatGPT” on baeke.info for more information.
We can replace Pinecone with Redis, a popular open-source, in-memory data store that can be used as a database, cache, and message broker. Redis is well-suited for this task as it can also store vector representations of our blog posts and has the capability to perform vector queries efficiently.
You can easily run Redis with Docker for local development. In addition, Redis is available in Azure, although you will need the Enterprise version. Only Azure Cache for Redis Enterprise supports the RediSearch functionality and that’s what we need here! Note that the Enterprise version is quite costly.
By leveraging Redis for vector storage and querying, we can harness its high performance, flexibility, and reliability in our solution while maintaining the core functionality of quickly querying and retrieving the most relevant blog post content using vectorized search and similarity queries.
ℹ️ The code below shows snippets. Full samples (yes, samples 😀) are on GitHub: check upload_vectors_redis.py to upload posts to a local Redis instance and search_vectors_redis.py to test the query functionality.
Run Redis with Docker
If you have Docker on your machine, use the following command:
docker run --name redis-stack-server -p 6380:6379 redis/redis-stack-server:latest
ℹ️ I already had another instance of Redis running on port 6379 so I mapped port 6380 on localhost to port 6379 of the redis-stack-server container.
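To check that the container responds, you can ping it, either from inside the container or over the mapped port (adjust the port if you used the default mapping):

docker exec -it redis-stack-server redis-cli ping
# or, with redis-cli installed locally:
redis-cli -p 6380 ping
# either command should reply with PONG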
If you want a GUI to explore your Redis instance, install RedisInsight. The screenshot below shows the blog posts after uploading them as Redis hashes.

Let’s look at creating the hashes next!
Storing post data in Redis hashes
We will create several Redis hashes, one for each post. Hashes are records structured as collections of field-value pairs. Each hash we store has the following fields:
- url: the URL of the blog post
- embedding: the embedding of the blog post (a vector), created with the OpenAI embeddings API and the text-embedding-ada-002 model
We need the URL to retrieve the entire post after a closest match has been found. In Pinecone, the URL would be metadata attached to the vector. In Redis, it’s simply a field in the hash, like the vector itself.
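Once posts are uploaded, a quick way to read a stored URL back is with redis-cli (post:0 assumes the post:<n> key scheme used by the upload script below, and the port mapping from the Docker section):

redis-cli -p 6380 HGET post:0 url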
In RedisInsight, a hash is shown as below:

The embedding field in the hash has no special properties. The vector is simply stored as a series of bytes. To store the URLs and embeddings of the posts, we can use the following code:
import redis
import openai
import os
import requests
import numpy as np
from bs4 import BeautifulSoup
import feedparser

# OpenAI API key
openai.api_key = os.getenv('OPENAI_API_KEY')

# Redis connection details
redis_host = os.getenv('REDIS_HOST')
redis_port = os.getenv('REDIS_PORT')
redis_password = os.getenv('REDIS_PASSWORD')

# Connect to the Redis server
conn = redis.Redis(host=redis_host, port=redis_port, password=redis_password, encoding='utf-8', decode_responses=True)

# URL of the RSS feed to parse
url = 'https://blog.baeke.info/feed/'

# Parse the RSS feed with feedparser; process at most 50 entries
feed = feedparser.parse(url)
entries = feed.entries[:50]

# use a pipeline to avoid a network round trip per post
p = conn.pipeline(transaction=False)
for i, entry in enumerate(entries):
    # report progress
    print("Create embedding and save for entry ", i, " of ", len(entries))
    r = requests.get(entry.link)
    soup = BeautifulSoup(r.text, 'html.parser')
    article = soup.find('div', {'class': 'entry-content'}).text

    # vectorize with OpenAI text-embedding-ada-002
    embedding = openai.Embedding.create(
        input=article,
        model="text-embedding-ada-002"
    )

    # retrieve the embedding (length = 1536)
    vector = embedding["data"][0]["embedding"]

    # convert to numpy array and bytes
    vector = np.array(vector).astype(np.float32).tobytes()

    # Create a new hash with url and embedding
    post_hash = {
        "url": entry.link,
        "embedding": vector
    }

    # queue the hash on the pipeline; the key is post: followed by the document number
    p.hset(name=f"post:{i}", mapping=post_hash)

# send all queued commands to Redis in one go
p.execute()
In the above code, note the following:
- The OpenAI embeddings API returns a JSON document that contains the embedding for each post; the embedding is retrieved with vector = embedding["data"][0]["embedding"]
- The resulting vector is converted to bytes with vector = np.array(vector).astype(np.float32).tobytes(); serializing the vector this way is required to store it in the Redis hash (see the round-trip sketch after this list)
- The Redis hset command, issued via a pipeline to avoid one round trip per post, stores the field-value pairs (these pairs are in a Python dictionary called post_hash) with a key that is prefixed with post: followed by the document number; the prefix will be used later by the search index we will create
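To see why the byte conversion is safe, here is a minimal round-trip sketch: the bytes stored in the hash can be turned back into the original float32 vector with numpy. This helper is illustrative and not part of the samples:

import numpy as np

# serialize a small example vector the same way the upload script does
vec = np.array([0.1, 0.2, 0.3], dtype=np.float32)
raw = vec.tobytes()          # these bytes go into the Redis hash

# deserialize: frombuffer restores the float32 array unchanged
restored = np.frombuffer(raw, dtype=np.float32)
print(restored)              # [0.1 0.2 0.3]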
Now that we have our post information in Redis hashes, we want to use RediSearch functionality to match an input query with one or more of our posts. RediSearch supports vector similarity search. For such a search to work, we need to create an index that declares the vector field; on such an index, we can perform vector similarity searches.
Creating an index
To create an index with Python code, check the code below:
import redis
import os
from redis.commands.search.field import VectorField, TextField
from redis.commands.search.indexDefinition import IndexDefinition, IndexType

# Redis connection details
redis_host = os.getenv('REDIS_HOST')
redis_port = os.getenv('REDIS_PORT')
redis_password = os.getenv('REDIS_PASSWORD')

# Connect to the Redis server
conn = redis.Redis(host=redis_host, port=redis_port, password=redis_password, encoding='utf-8', decode_responses=True)

# schema: a text field for the url and a 1536-dimensional float32 vector field
SCHEMA = [
    TextField("url"),
    VectorField("embedding", "HNSW", {"TYPE": "FLOAT32", "DIM": 1536, "DISTANCE_METRIC": "COSINE"}),
]

# Create the index; only hashes with keys prefixed post: are indexed
try:
    conn.ft("posts").create_index(fields=SCHEMA, definition=IndexDefinition(prefix=["post:"], index_type=IndexType.HASH))
except Exception as e:
    print("Index already exists or could not be created:", e)
When creating an index, you define the fields to index based on a schema. Above, we include both the text field (url) and the vector field (embedding). The VectorField class is used to construct the vector field and takes several parameters:
- Name: the name of the field (“embedding” here, but it could be anything)
- Algorithm: “FLAT” or “HNSW”; use “FLAT” when search quality is of high priority and search speed is less important; “HNSW” gives you faster, approximate querying; for more information see this article (a FLAT variant is sketched after this list)
- Attributes: a Python dictionary that specifies the data type, the number of dimensions of the vector (1536 for text-embedding-ada-002), and the distance metric; here we use COSINE for cosine similarity, which is recommended by OpenAI for their embedding model
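For reference, the FLAT variant only differs in the algorithm argument; the field name and attributes stay the same (this line reuses the VectorField import above and is illustrative):

# exact (brute-force) search instead of approximate HNSW
VectorField("embedding", "FLAT", {"TYPE": "FLOAT32", "DIM": 1536, "DISTANCE_METRIC": "COSINE"})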
ℹ️ It’s important to get the dimensions right or your index will fail to build properly. It will not be immediately clear that it failed, unless you run FT.INFO <indexname> with redis-cli.
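You can run the same check from Python, since redis-py exposes FT.INFO on the index object. The dictionary keys below are what recent Redis Stack versions return; treat them as an assumption:

# hash_indexing_failures > 0 usually means wrongly sized vectors
info = conn.ft("posts").info()
print(info["num_docs"], info["hash_indexing_failures"])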
With the schema out of the way, we can now create the index with:
conn.ft("posts").create_index(fields=SCHEMA, definition=IndexDefinition(prefix=["post:"], index_type=IndexType.HASH))
The index we create is called posts. We index the fields defined in SCHEMA and only index hashes with a key prefix of post:. The hashes we created earlier all have this prefix, so the index is populated with them as soon as it is created. Check that you can see this in RedisInsight:

Redis vector queries
With the hashes and the index created, we can now perform a similarity search. We will ask the user for a query string (in natural language) and then find the posts that are most similar to it. The query string needs to be vectorized as well. We will return several posts and rank them.
import numpy as np
from redis.commands.search.query import Query
import redis
import openai
import os

openai.api_key = os.getenv('OPENAI_API_KEY')

def search_vectors(query_vector, client, top_k=5):
    # KNN query: find the top_k nearest neighbors of $vector in the embedding field
    base_query = f"*=>[KNN {top_k} @embedding $vector AS vector_score]"
    query = Query(base_query).return_fields("url", "vector_score").sort_by("vector_score").dialect(2)

    try:
        results = client.ft("posts").search(query, query_params={"vector": query_vector})
    except Exception as e:
        print("Error calling Redis search: ", e)
        return None

    return results

# Redis connection details
redis_host = os.getenv('REDIS_HOST')
redis_port = os.getenv('REDIS_PORT')
redis_password = os.getenv('REDIS_PASSWORD')

# Connect to the Redis server
conn = redis.Redis(host=redis_host, port=redis_port, password=redis_password, encoding='utf-8', decode_responses=True)

if conn.ping():
    print("Connected to Redis")

# Enter a query
query = input("Enter your query: ")

# Vectorize the query using OpenAI's text-embedding-ada-002 model
print("Vectorizing query...")
embedding = openai.Embedding.create(input=query, model="text-embedding-ada-002")
query_vector = embedding["data"][0]["embedding"]

# Convert the vector to a numpy float32 array and then to bytes
query_vector = np.array(query_vector).astype(np.float32).tobytes()

# Perform the similarity search
print("Searching for similar posts...")
results = search_vectors(query_vector, conn)

if results:
    print(f"Found {results.total} results:")
    for i, post in enumerate(results.docs):
        # vector_score is a cosine distance; convert it to a similarity
        score = 1 - float(post.vector_score)
        print(f"\t{i}. {post.url} (Score: {round(score, 3)})")
else:
    print("No results found")
In the above code, the following happens:
- Set the OpenAI API key: needed to create the embedding for the query typed by the user
- Connect to Redis based on the environment variables and check the connection with ping()
- Ask the user for a query
- Create the embedding from the query string and convert the array to bytes
- Call the search_vectors function with the vectorized query string and the Redis connection as parameters
The search_vectors function uses RediSearch capabilities to query over our hashes and find the top_k (default 5) nearest neighbors of our query vector. Querying is explained in detail in the Redis documentation but it can be a bit dense. You start with the base query:

base_query = f"*=>[KNN {top_k} @embedding $vector AS vector_score]"

This is just a string in the query format that Redis expects; we pass it to the Query class in the next step. We are looking for the top_k nearest neighbors of $vector in the embedding fields of the hashes. You use @ to denote the embedding field and $ to denote the vector we will pass in later. That vector is our vectorized query string. With AS vector_score, we add the score so we can later rank the results from high to low.
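As an aside, the * in front of => means “match all documents”. RediSearch also supports hybrid queries that first filter on other indexed fields and then run KNN only on the filtered set. The line below is an illustrative sketch, not part of the samples:

# only run KNN over posts whose url field contains the token 'chatgpt'
base_query = "(@url:chatgpt)=>[KNN 5 @embedding $vector AS vector_score]"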
The actual query is built with the Query class (one line):
query = Query(base_query).return_fields("url", "vector_score").sort_by("vector_score").dialect(2)
We return the url and vector_score fields and sort on this score. The dialect is just the version of the query language; here we use dialect 2, as that matches the query syntax. Using an earlier dialect would not work here.
Of course, this still does not pass the query vector to the query. That only happens when we run the query in Redis with:
results = client.ft("posts").search(query, query_params={"vector": query_vector})
The above code performs a search query on the posts index. In the call to the search method, we pass the query we built earlier and a dictionary of query parameters. We only have one parameter, the vector parameter ($vector in base_query), and the value for this parameter is the embedding created from the user’s query string.
When I query for bot, I get the following results:

The results are ranked with the closest match first. We could use that match to grab the post from the URL and send the query to the OpenAI ChatCompletion API to answer the question more precisely. For better results, use a more specific query like “How do I build a chat bot in Python with OpenAI?”. To get an idea of how to do that, check my previous post; a minimal sketch follows below.
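Here is a minimal sketch of that last step, assuming the results and query variables from the search script above. The system prompt and the naive truncation of the article are my own simplifications, not the code from the previous post:

import requests
from bs4 import BeautifulSoup

# grab the closest matching post and extract its text
top = results.docs[0]
page = requests.get(top.url)
soup = BeautifulSoup(page.text, 'html.parser')
article = soup.find('div', {'class': 'entry-content'}).text

# answer the query with the post injected as context
# (naive truncation to stay within the model's context window)
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Answer the question using only the provided blog post."},
        {"role": "user", "content": f"Blog post:\n{article[:6000]}\n\nQuestion: {query}"}
    ]
)
print(response["choices"][0]["message"]["content"])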
Conclusion
In this post we discussed storing embeddings in Redis and querying embeddings with a similarity search. If you combine this with my previous post, you can use Redis instead of Pinecone as the vector database and query engine. This can be useful for Azure customers because Azure has Azure Cache for Redis Enterprise, a fully managed service that supports the functionality discussed in this post. In addition, it is useful for local development purposes because you can easily run Redis with Docker.