Today's AI, powered by machine learning algorithms, has introduced us to semantic search. Essentially, semantic search interprets queries in a more human way by not just reading the keywords we type but also understanding the context, emotions, and intent behind our queries.
In our first blog in the semantic search series, we discussed how it can transform how we search. We also presented a demo application that we built to experiment with semantic search.
In this blog, we will explore the combination of Pinecone and OpenAI, which is emerging as a key pairing for creating intelligent, user-friendly AI experiences. We'll also unravel the code that powers our demo application and illustrate how AI is revolutionizing data search.
Our approach to building the solution
We are moving away from traditional keyword-based searches towards searches that grasp the purpose and context behind our query. A key element enabling this smarter search is the integration of embedding models within Large Language Models (LLMs).
These models create vector embeddings that represent data in a multi-dimensional way, which helps in understanding content on a deeper level.
Tackling challenges with the right tools
While the potential of semantic search is impressive, there are some challenges in making it work effectively. These include maintaining a robust infrastructure, keeping search latency low, and keeping data up-to-date so that it remains relevant and useful.
However, when we have the right tools in place, these challenges become much easier to handle. An optimized vector database, for example, enhances the user experience by reducing delays and enabling real-time updates to the search results. It means we no longer have to choose between fast query responses and keeping our data current, resulting in a smoother and more efficient search experience.
How vector databases enhance semantic search
In semantic search, a vector database plays a crucial role as a storage hub for data embeddings. These embeddings capture the intricate contextual nuances of data.
When we perform a search query, instead of just matching words, we're looking for vectors that carry similar meanings. This not only sharpens the relevance of search results but also tailors them to fit the context. It proves beneficial in various scenarios, such as:
- Precise information retrieval: It helps teams to precisely locate specific internal data or knowledge without sorting through irrelevant information.
- User-friendly applications: It fine-tunes search results to match individual queries and elevates user engagement.
- Enhanced data integration: It amplifies search capabilities for large user bases by combining insights from diverse data sources.
- Personalized recommendation systems: Users receive suggestions closely aligned with their search history and preferences, making their experience more personal.
- Detecting anomalies in data streams: It assists in identifying and flagging outlier data or information that significantly deviates from established patterns, ensuring data quality and consistency.
- Streamlined content classification: It automates the grouping and labeling of data based on similarities, simplifying data management and utilization.
Why we chose Pinecone as our vector store
While looking for the right solution to seamlessly implement semantic search, we needed something that could easily work with our systems.
Pinecone stood out as a great choice, thanks to its user-friendly REST API. Apart from being easy to integrate, Pinecone provides:
- Extremely fast search results, even when dealing with huge amounts of data.
- Ability to update our existing data points in real-time, ensuring that we always have the latest information at our fingertips.
- A fully managed platform, so we can focus on using it rather than dealing with maintenance.
- Flexibility in terms of hosting options, including platforms like GCP, Azure, and AWS.
Selecting OpenAI as our embedding service
To enhance the efficiency and precision of our semantic search capabilities, selecting the right embedding service was important.
We found OpenAI's Embedding API to be an ideal choice for achieving unmatched contextual understanding in data processing.
Here's why we opted for OpenAI:
- Deep text understanding: OpenAI's embedding service, built on its GPT-3 family of models, is known for its ability to understand text deeply, making it skilled at finding important patterns and connections in large datasets.
- Easy integration: OpenAI's well-documented API seamlessly fits into our existing systems, making it simple to add advanced search features. It's a quick choice for trying out new ideas.
- Continuous improvement: OpenAI is committed to making its services better over time. It means we can expect regular updates and enhancements, ensuring our search capabilities stay at the cutting edge of technology.
Opting for Rust as our backend API
To enhance our backend infrastructure, selecting the right programming language is crucial, but not necessarily restrictive.
Rust stood out as a great option, but it's not a strict requirement. Languages like JavaScript, Python, or any language capable of making cURL requests and handling data can work just fine.
However, there were some compelling reasons that made us choose Rust, especially as we explore Rust-based libraries for near real-time LLM inference on cost-effective hardware, which we'll discuss in a future instalment of this series.
Here's why Rust was an excellent choice for our API development:
- Speed: Rust is known for compiling efficient machine code, delivering performance similar to languages like C and C++. This speed makes it a strong choice for high-throughput APIs, although it's important to mention that our current reliance on a third-party API presents a challenge in achieving response times under 150 ms.
- Concurrency: Rust prevents data races in concurrent code and handles asynchronous tasks well, which is very useful when serving many API requests at the same time.
- Memory Safety: Rust catches common errors, such as accessing data that no longer exists, at compile time, reducing crashes and security vulnerabilities. It also works seamlessly with tools like the rust-analyzer plugin in VS Code, making debugging and development smoother.
- Compatibility: Rust works well with C libraries, making it easy to directly use functions from these libraries. This provides flexibility when integrating with existing systems.
- Community and Ecosystem: Rust benefits from a growing library collection and an active community, and it is becoming a central hub for the tools needed to build web services and APIs. The fact that enthusiasts have creatively made LLMs work in Rust highlights it as a quick and capable option for further experiments.
Streamlining data and AI embeddings on Pinecone
In this section, we'll walk you through the process of collecting data, generating AI embeddings using the OpenAI Embedding API, and conducting semantic search experiments within the Pinecone Vector Database.
OpenAI models, like GPT-3, have been trained on a vast and diverse collection of text data. They excel at capturing complex language patterns and understanding context. These models transform each word or phrase into a high-dimensional vector. This process, known as embedding, captures the meaning of the input in a way that's easy for systems to understand.
For example, the word "lawyer" might be represented as a 1536-dimensional vector (using the 2nd gen OpenAI embedding API model text-embedding-ada-002, which is based on GPT-3). Each dimension in this vector captures a different aspect of the word's meaning.
The example below is from TensorFlow. It uses a completely different model, but the concept remains the same.
These embeddings play a crucial role in semantic search. When a user enters a search query, the AI model creates an embedding of that query.
This embedding is then sent to the vector database, such as Pinecone in our case, which finds and retrieves the most similar vectors, essentially providing the most contextually relevant results.
Think of it as translating human language into a format that machines can easily grasp and work with effectively. By generating these embeddings, OpenAI models enable us to achieve more precise and context-aware search results, a significant advancement over traditional keyword-based search methods.
Let's take an example: imagine a user searching for "rights of a tenant in Illinois." With a traditional keyword-based search, you'd get documents containing those exact words. But when we use an AI model to create embeddings, it understands the real meaning – that the user is looking for information about tenant rights in Illinois.
The system then fetches relevant results, even if they don't use the exact phrasing of the query but discuss the same idea. This could mean providing a comprehensive guide to tenant rights, mentioning a relevant court case in Illinois, or sharing a related law statute. In the end, it gives the user a more detailed and helpful response.
It's the combination of OpenAI's Embedding API and Pinecone's efficient vector search that makes this enhanced, contextually-aware search experience possible.
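To make this concrete, here is a minimal Python sketch (illustrative only, not our production code) that embeds a query and a candidate passage with OpenAI and compares them with cosine similarity. It assumes the pre-1.0 `openai` Python client and an `OPENAI_API_KEY` in the environment; the example passage is invented for illustration.

```python
import numpy as np
import openai  # pre-1.0 openai client assumed; reads OPENAI_API_KEY from the environment

def embed(text: str) -> np.ndarray:
    # text-embedding-ada-002 returns a 1536-dimensional vector
    resp = openai.Embedding.create(input=[text], model="text-embedding-ada-002")
    return np.array(resp["data"][0]["embedding"])

query = embed("rights of a tenant in Illinois")
passage = embed("A guide to what landlords in Illinois may and may not require of renters")

# A cosine similarity close to 1.0 means the two texts are semantically related,
# even though they share almost no keywords.
similarity = np.dot(query, passage) / (np.linalg.norm(query) * np.linalg.norm(passage))
print(round(float(similarity), 3))
```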
Implementation of our solution
Please note that this experiment was tailored for a specific website.
Here's what we did:
Step 1
Data Cleanup and Preparation: Our first step involved cleaning up the data and making sure it was compatible with Pinecone. We used Python for data preparation since it makes handling large datasets simple.
Data Collection: We collected a CSV file with over 1600 rows, all related to legal assistance from the Illinois Legal Aid Online site.
Pinecone Database Requirements: Pinecone needs each record to have three specific fields: an id, the vector values, and optional metadata. We ensured our data met these requirements for seamless integration.
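As a quick illustration of that record shape (the id and metadata values here are made up, not our actual data):

```python
# Shape of a single Pinecone record: a string id, the embedding values,
# and optional metadata. We leave metadata out in this experiment (see the note below).
embedding = [0.0] * 1536  # placeholder for the 1536 floats from text-embedding-ada-002

record = {
    "id": "row-42",                  # unique identifier for the row (hypothetical)
    "values": embedding,             # the vector itself
    "metadata": {"source": "ilao"},  # optional; omitted in our upload
}
```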
Before we proceed to create the vectors, let's organize the data. Column names have been shortened for data privacy.
Note: We are not using metadata in our current experiment. It's primarily used for filtering and faster querying, or as a way to pass data along when querying from a different environment. In our case, the Pinecone Query API performs more efficiently without the inclusion of metadata.
Here's an overview of what the data looks like after we completed the initial cleanup and processing. We applied the functions we created earlier to each row in the dataset.
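As a rough sketch of what that preparation can look like (the filename and the `title` and `content` column names are placeholders, not the actual dataset fields):

```python
import re
import pandas as pd

df = pd.read_csv("illinois_legal_aid.csv")  # hypothetical filename; roughly 1,600 rows

def clean_text(text: str) -> str:
    # Strip leftover HTML tags and collapse whitespace so the embeddings
    # are built from plain prose rather than markup.
    text = re.sub(r"<[^>]+>", " ", str(text))
    return " ".join(text.split())

# Combine the columns we want to search over into a single text field per row,
# and keep a stable string id for Pinecone.
df["text"] = df["title"].map(clean_text) + ". " + df["content"].map(clean_text)
df["id"] = df.index.astype(str)
```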
Next up is the generation of AI embeddings for the vector database. Please note that you'll need OpenAI API keys for this step.
For example, when we input the test string "hello," we receive a single 1536-dimensional embedding vector as output.
We apply this AI embedding function to all the rows in our dataset, which adds a vector column alongside the original data.
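A minimal version of that step might look like this (again assuming the pre-1.0 `openai` client; batching, rate limiting, and retries are left out):

```python
import openai  # OPENAI_API_KEY must be set in the environment

EMBED_MODEL = "text-embedding-ada-002"

def get_embedding(text: str) -> list[float]:
    # Returns a 1536-dimensional vector for the given text.
    resp = openai.Embedding.create(input=[text], model=EMBED_MODEL)
    return resp["data"][0]["embedding"]

print(len(get_embedding("hello")))  # sanity check: 1536

# df is the DataFrame prepared in the cleanup sketch above.
df["vector"] = df["text"].apply(get_embedding)
```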
Now, on to the final step: uploading this data to Pinecone. We use the gRPC interface provided by the Pinecone client library, which speeds up the upload, taking roughly 15-30 seconds in total.
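A sketch of that upload, assuming the 2.x pinecone-client and its gRPC index class (the index name and environment are placeholders):

```python
import pinecone  # pinecone-client[grpc] 2.x assumed

pinecone.init(api_key="YOUR_PINECONE_API_KEY", environment="YOUR_ENVIRONMENT")
index = pinecone.GRPCIndex("semantic-search-demo")  # hypothetical index name

# Upsert in batches; each item is (id, values) since we skip metadata.
BATCH = 100
items = list(zip(df["id"], df["vector"]))
for start in range(0, len(items), BATCH):
    index.upsert(vectors=items[start:start + BATCH])
```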
Once the upload completes, take a peek at the index statistics to confirm everything landed:
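Continuing the same sketch (reusing the `index` handle from the upload step):

```python
# Quick sanity check: the vector count and dimension should match our data
# (roughly 1,600 vectors of dimension 1536 in this experiment).
print(index.describe_index_stats())
```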
Lastly, when querying Pinecone, we need to provide an AI embedding of the query to get relevant results.
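A query, continuing the same sketch (reusing `get_embedding` and `index` from above; the query text echoes the earlier tenant-rights example):

```python
query_vector = get_embedding("rights of a tenant in Illinois")  # same model as the stored vectors

results = index.query(
    vector=query_vector,
    top_k=5,                 # number of nearest neighbours to return
    include_values=False,    # we only need ids and scores
    include_metadata=False,  # metadata is unused in this experiment
)
for match in results.matches:
    print(match.id, match.score)
```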
You might wonder why we're generating AI embeddings before uploading data to Pinecone and prior to querying. The reason is that embeddings can be created using various models, and they are not always compatible with one another.
Different models, such as BERT and simpler Word2Vec models, can be used to perform the same task, but they produce embeddings that are not interchangeable.
This is why it's important to have the embeddings prepared in advance to ensure a smooth and consistent search experience.
Step 2
Next, we'll create an API that can manage both AI embedding and querying processes seamlessly.
Here is an overview of the backend development process.
Here is the code for the Rust backend that couples the Pinecone and OpenAI APIs together.
For our demo, we've implemented a short-term, in-memory cache to prevent unnecessary API calls.
However, in the future, we aim to introduce an on-disk cache and implement local semantic search for queries with similar meanings. This will enhance the efficiency and responsiveness of our system down the line.
Next, we have a helper function ‘fetch_new_data’. This function handles calls to both the OpenAI and Pinecone APIs, ensuring a smooth flow of data retrieval and processing.
Currently, we're experiencing significant delays with both the Pinecone and OpenAI APIs. Our goal is to cut this latency down further by exploring on-premises solutions.
This involves considering options like BERT for AI embedding and Milvus or similar open-source vector databases that can run locally.
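As an example of what the local-embedding side of that could look like (the `sentence-transformers` library and the specific model are our illustrative choices here, not a finalized stack):

```python
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

# A small BERT-family model that runs comfortably on CPU. Its 384-dimensional
# output is not interchangeable with text-embedding-ada-002's 1536 dimensions,
# so the vector index would need to be rebuilt with the same model.
model = SentenceTransformer("all-MiniLM-L6-v2")
local_vector = model.encode("rights of a tenant in Illinois")
print(local_vector.shape)  # (384,)
```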
With the API components in place, take a look at what the demo frontend has to offer.
Here's a simple "useEffect" method within our React app. It's responsible for updating the search results in real-time as the user types in their query.
What’s next
We tried the RAG (retrieval-augmented generation) approach, which involved feeding the results into a ChatGPT-like system, ultimately providing users with more personalized answers without the need to navigate through numerous blog pages. We employed interesting strategies and encountered some challenges with this approach; you can delve deeper into our journey in the next blog in this series.
We are soon planning to launch a platform where you can play with all these experiments. As we move forward, our focus is on matching the right technology with the needs of our customers. The field of semantic search and vector similarity is full of exciting possibilities, such as creating recommendation engines based on similarity.
Right now, we're actively working on several Proof of Concepts (PoCs), where we're balancing speed and accuracy depending on the specific application. We're exploring various models, including BERT-based ones and more, beyond the standard 'text-embedding-ada-002.'
We're also attentive to our customers' preferences for platforms like Azure or GCP. To meet these preferences, we're adjusting our approach to include models recommended by these providers, aiming to create a versatile system that can effectively serve different use cases, budgets, and unique requirements.