At its annual Inspire conference, Microsoft announced a number of new AI features headed to Azure, perhaps the most notable of which is Vector Search. Available in preview through Azure Cognitive Search, Vector Search uses machine learning to capture the meaning and context of unstructured data, including images and text, to make search faster.
Vectorization, an increasingly popular technique in search, involves converting words or images into vectors — series of numbers that encode their meaning — allowing them to be processed mathematically. Vectors let machines structure and make sense of data, so a system can recognize, for example, that words close together in “vector space,” like “king” and “queen,” are related, and quickly surface them from a database of millions of words.
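The core idea can be sketched in a few lines: words are mapped to vectors, and related words end up close together in vector space, where closeness is commonly measured with cosine similarity. The tiny embeddings below are made-up illustrations, not the output of any real model or of Azure's service.

```python
import math

# Toy 3-dimensional "embeddings" — real models use hundreds or
# thousands of dimensions, but the geometry works the same way.
embeddings = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.88, 0.82, 0.15],
    "car":   [0.10, 0.20, 0.90],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# "king" sits much closer to "queen" than to "car" in this space.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))
print(cosine_similarity(embeddings["king"], embeddings["car"]))
```

A vector search engine applies this comparison at scale, using approximate nearest-neighbor indexes rather than brute-force scans, so semantically related results surface even when no keywords match.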
Microsoft’s flavor of vector search offers “pure” vector search, hybrid retrieval and “sophisticated” reranking. The company notes that it can be used in apps and services to generate personalized responses in natural language, deliver product recommendations and identify data patterns.
“Vector search is integrated with Azure AI, allowing customers to build search-enabled, chat-based apps, convert images into vector representations using Azure AI Vision [and] retrieve relevant information from large data sets to help automate processes and workflows,” the company writes in a blog post. “The integration of Vector search seamlessly extends to other capabilities of Azure Cognitive Search, including faceted navigation, filters and more.”
Elsewhere across Azure, Microsoft is launching what it’s calling the Document Generative AI solution, which integrates Microsoft’s existing AI-powered document processing services, including Azure Form Recognizer, with the Azure OpenAI Service. (Recall that the Azure OpenAI Service is Microsoft’s fully managed, enterprise-focused offering designed to give businesses access to AI tech from OpenAI — with whom Microsoft has a close commercial partnership — with added controls and governance features.)
The Document Generative AI solution — leveraging OpenAI’s latest AI language models — ingests files for tasks like report summarization, value extraction, knowledge mining and generating new types of documents. It essentially lets a company build an app like OpenAI’s ChatGPT that can read documents and use those documents as the basis for its responses.
For example, using the Document Generative AI solution, a customer could upload invoices, bills and contracts, then allow employees to ask questions about service guarantees and specific line items. The solution answers questions drawing on text as well as images and tables, providing citations with links to the source content.
“[Using the Document Generative AI solution, you can] interact with documents using natural language and generate new content from your existing documents, including blog posts, newsletters, summaries and captions … Whether you require intelligent document chat capabilities, writing assistance, query support, comprehensive search functionality or even document translation, Document Generative AI can handle complex and diverse document tasks through models from OpenAI.”
In a related announcement, Microsoft revealed that OpenAI’s Whisper model, an automatic speech recognition model, will soon come to the Azure OpenAI Service as well as Microsoft’s family of AI speech services. Enterprise customers will be able to use Whisper to transcribe and translate audio content as well as produce batch transcriptions “at scale,” Microsoft says.
Rounding out the AI unveilings at Inspire, Microsoft announced the public preview of Real-time Diarization, an AI-driven speech service that can identify, in real time, which of several people is speaking at any given moment. The company also announced the general availability of Custom Neural Voice, which taps AI to closely reproduce an actor’s voice or create an original synthetic voice.
Previously, Custom Neural Voice was in limited access, meaning that customers had to apply and be approved by Microsoft in order to use it.
Lest folks be concerned about the potential for deepfakes, Microsoft says that Custom Neural Voice includes controls to help prevent misuse of the service. When a customer submits a recording, the voice actor — if one is being used — has to make a statement acknowledging that they understand the tech and are aware the customer is having a voice made. The recording is then compared via speaker verification to make sure the voices match before the customer can begin creating a voice.
Microsoft also contractually requires customers to get consent from voice talent, and customers have to agree to a code of conduct before they can begin using Custom Neural Voice. In addition, Microsoft offers watermarking and detection tools aimed at making it easier to identify if a given audio clip was created with Custom Neural Voice.
Those controls, assuming they work as advertised, won’t necessarily solve the licensing and consent controversies around voice cloning tech. But Microsoft’s evidently decided that it isn’t its battle to fight.