Oracle Continues MySQL HeatWave Innovation with Vector Store and New Generative AI Capabilities

By Akash
on 21-09-2023 05:38 AM

Oracle today announced significant enhancements to MySQL HeatWave, including support for vector store, generative AI, new in-database machine learning features, MySQL Autopilot enhancements, new HeatWave Lakehouse capabilities, support for JavaScript, acceleration of JSON queries, and support for new analytic operators. Currently in private preview, the vector store will enable customers to leverage the power of large language models (LLMs) with their proprietary data to get answers that are more accurate than using models which have been trained on public data only. With generative AI and vector store capabilities, customers can interact with MySQL HeatWave in natural language and efficiently search documents in various file formats in HeatWave Lakehouse.

"Today's enhancements to MySQL HeatWave are another significant step on our journey to address pressing customer data, analytics, and AI issues," said Edward Screven, chief corporate architect, Oracle. "We've previously added real-time analytics with the best price-performance in the industry, automated machine learning, lakehouse, and multicloud capabilities to HeatWave. Now vector store and generative AI bring the power of LLMs to customers, providing them with an intuitive way to interact with data in their enterprise and get the accurate answers that they need for their business."

For customers looking to perform analytics, transaction processing, machine learning, and generative AI across a variety of data types and sources, additional capabilities have been added to MySQL HeatWave for both MySQL-compatible and non-MySQL workloads.

The vector store ingests documents in a variety of formats such as PDF and stores them as embeddings generated via an encoder model. For a given user query, the vector store identifies the most similar documents by performing a similarity search over the stored embeddings and the embedded query. These documents are used to augment the prompt given to the LLM so that it provides a more contextual answer.
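The flow described above is the standard retrieval-augmented generation (RAG) pattern. The sketch below illustrates that pattern in plain Python; the embed() function, the in-memory document list, and the prompt template are illustrative placeholders and do not represent HeatWave's actual encoder model or vector store interface.

```python
# Illustrative sketch of the retrieval-augmented flow described above.
# embed() is a placeholder encoder, not HeatWave's encoder model; the
# in-memory list stands in for the vector store.
import math

def embed(text: str, dim: int = 64) -> list[float]:
    # Placeholder encoder: hashes tokens into a fixed-size, unit-length vector.
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already unit-length, so the dot product is the cosine similarity.
    return sum(x * y for x, y in zip(a, b))

# 1. Ingest: store each document alongside its embedding.
documents = [
    "HeatWave Lakehouse queries data in object storage.",
    "MySQL Autopilot automates provisioning and query tuning.",
    "The vector store holds document embeddings for similarity search.",
]
store = [(doc, embed(doc)) for doc in documents]

# 2. Retrieve: embed the user query and rank documents by similarity.
query = "How are documents searched in the vector store?"
q_vec = embed(query)
ranked = sorted(store, key=lambda item: cosine(q_vec, item[1]), reverse=True)
top_docs = [doc for doc, _ in ranked[:2]]

# 3. Augment: prepend the most similar documents to the prompt so the LLM
#    can ground its answer in the proprietary data.
prompt = "Context:\n" + "\n".join(top_docs) + f"\n\nQuestion: {query}\nAnswer:"
print(prompt)
```

The key design point is the final step: rather than fine-tuning the model, the retrieved documents are injected into the prompt at query time, which is how the LLM can produce answers grounded in proprietary data it was never trained on.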