Your all-in-one Generative AI platform: Azure AI Studio

Over the last couple of months, generative AI has changed productivity, operations and customer support. Several AI providers have put a lot of emphasis on generative AI, incorporating it into their own products and offering tools that let customers implement it inside their businesses. Besides Azure OpenAI, which leverages OpenAI models such as GPT-4 and Ada, Azure has recently released Azure AI Studio, a solution that offers multiple foundation models for building your generative AI solution, comparable to Amazon Bedrock.

What is Azure AI Studio?

Azure AI Studio combines capabilities from multiple Azure AI services, such as Azure AI infrastructure, machine learning, cognitive services and the Azure OpenAI Service, enabling you to develop your generative AI solution. It lets you build your solution either through the studio UI or in code. It is designed for building solutions collaboratively as a team, providing an environment with shared files and shared access to pretrained models, data and compute.

Foundation models

In response to the flexibility of Amazon Bedrock and its multitude of foundation models, Azure has released Azure AI Studio with a range of foundation models of its own. This way, you can choose the foundation model that best fits each specific use case, rather than having to use the same model for every case. A myriad of foundation models are offered from providers like OpenAI, Meta, Mistral AI and NVIDIA. It is also possible to choose from open-source models. The most popular of these are curated by Azure AI, meaning they are optimized for use in Azure Machine Learning, offering state-of-the-art performance and throughput on Azure hardware along with native support. The other open-source models are offered through the Hugging Face Hub and are accessible for real-time inference via online endpoints. These models cover a wide range of tasks, from speech recognition to image classification and question answering. This variety keeps Azure competitive with AWS, including models from OpenAI, NVIDIA and the newest Mistral AI offerings, although it lacks some state-of-the-art models and embeddings from Amazon, Anthropic and Cohere.

You can explore and evaluate the models available in Azure AI Studio in the Explore tab. An intuitive interface provides detailed information about each model, so you can select the one that best suits your specific use case. Models can be compared across different statistics and scenarios, and sample prompts can be generated to test the final solution.

After selecting the right models for your solution, you can build it using either the UI or code. You can customize your solution by connecting your LLM to tools like Azure OpenAI and Azure AI Content Safety for content filtering. It is also possible to use external data without copying it into the project, via Azure Blob Storage, Azure Data Lake Storage Gen2 or Microsoft OneLake. You can choose your preferred search type; the default is a combination of keyword search, vector search and semantic search, where the vector search requires an embedding model. You can store this RAG pattern in an Index asset, which contains the most important information about your index and is primarily stored in Azure AI Search, a service that supports information retrieval over both your vector and textual data. Alternatively, you can store your index with FAISS (not available in the UI), an open-source library for fast similarity search over vector stores. FAISS scales with the underlying compute loading the index, and you can use it to store your index locally and to build and query it in memory.

Evaluation

After deploying your generative AI solution, you can evaluate your model. This can be done in two ways: manually and automatically. Azure recommends performing manual measurements before the automatic evaluation. Manual evaluation is used to check progress by hand against a small dataset until no harmful output is observed anymore. Automatic measurement is used for large-scale testing with increased coverage to provide comprehensive results. Automatic metrics can be subdivided into two categories:

  1. Traditional ML measurements: can be used when the ground truth and expected answers are known.
  2. AI-assisted measurements: LLMs are used to evaluate the output. This is done by instructing the LLM to quantify different aspects of the AI-generated output. This is mostly used when there is a lack of ground truth and expected answers, such as in open-ended Q&A or creative writing. It can help to measure the quality or safety of the answer.
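The first category can be made concrete with a small sketch: two classic metrics for Q&A evaluation when the expected answers are known, exact match and token-level F1 (the function names and implementation here are illustrative, not an Azure API):

```python
from collections import Counter


def exact_match(prediction: str, reference: str) -> bool:
    """Strictest traditional metric: prediction must equal the known answer."""
    return prediction.strip().lower() == reference.strip().lower()


def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1: partial credit for answers that share words with the truth."""
    pred = prediction.lower().split()
    ref = reference.lower().split()
    if not pred or not ref:
        return float(pred == ref)
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```

For open-ended output with no single correct answer, metrics like these break down, which is exactly where the AI-assisted measurements of the second category come in.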

Securing your application

To safeguard your application from harmful content, Azure AI Studio includes content filtering, powered by Azure AI Content Safety, a content moderation platform that uses AI to keep content safe. The content filtering system detects and takes action on specific categories of potentially harmful content in both input and output prompts. On top of this, you can protect your application from attempts to change the chatbot's initial purpose, so that the chatbot will not change its persona. Furthermore, it is also possible to reject potential IP infringement in the prompts.
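The decision logic behind category-based filtering can be illustrated locally. The real service uses trained classifiers called through its SDK or REST API; the category names, severity scale and thresholds below are only an assumed illustration of the idea:

```python
# Hypothetical per-category severity thresholds (illustrative, not the service's values).
# Azure AI Content Safety scores categories such as hate, violence, sexual and
# self-harm content on a graded severity scale; here we assume 0 (safe) to 7 (severe).
SEVERITY_THRESHOLDS = {"hate": 2, "violence": 2, "sexual": 2, "self_harm": 2}


def is_blocked(severities: dict[str, int]) -> bool:
    """Block the text if any category's severity meets or exceeds its threshold."""
    return any(
        severities.get(category, 0) >= threshold
        for category, threshold in SEVERITY_THRESHOLDS.items()
    )
```

In Azure AI Studio you tune equivalent thresholds per category in the content filter configuration, and the filter is applied to both the user's prompt and the model's completion.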

Prompt flow

The entire development cycle can be streamlined with Prompt flow, a solution that simplifies prototyping, experimenting, iterating and deploying the application. It gives you a clear overview of the different components of your generative AI solution, such as prompt engineering and the chunking of input data, and lets you change them easily.
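One of those components, chunking of input data, is simple to sketch. This is a generic overlapping-chunk splitter, not Prompt flow's own implementation; the overlap keeps context that straddles a chunk boundary available in both neighbouring chunks:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character chunks for embedding and retrieval."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # advance by chunk_size minus the shared overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```

Production pipelines typically split on token counts or sentence boundaries rather than raw characters, but the overlap idea is the same.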

Conclusion

Azure AI Studio offers a great solution for implementing generative AI applications inside your organisation. Thanks to its variety of foundation models, it is a competitive alternative to Amazon Bedrock for flexible solutions across a wide range of applications. You can deploy your solutions intuitively, using either the UI or code. Furthermore, it is easy to customize your application with your own data and to secure it so that no harmful content can be created.

Pieter Verfaillie

Analytics consultant @Aivix

Pieter is a motivated data scientist with a Master of Science in business engineering. He’s dedicated to constructing models that address problems and offer insights. Proficient in Python, Pieter has a wealth of experience from diverse projects, reflecting his enthusiasm for learning. His passion lies in leveraging data to innovate solutions and his solid business background adds a unique dimension to his approach.