Big organizations are always torn between leveraging modern technology like AI to stay relevant and securing the services they already have so they don’t jeopardize their existing revenue streams. The new Azure OpenAI Service helps you walk this tightrope, allowing for the integration of large language models (LLMs) like ChatGPT without compromising security.
This article describes what the Azure OpenAI Service (AOS) brings beyond using the OpenAI API directly.
What Is the Azure OpenAI Service?
AOS, built and maintained by Microsoft, is part of Azure AI Services, formerly known as Azure Cognitive Services and Azure Applied AI Services. It integrates OpenAI services into the Azure cloud platform, letting you implement Azure’s security capabilities with OpenAI while giving you access to the same models as the plain OpenAI API.
With AOS, you can use content generation, summarization, semantic search, and natural-language-to-code translation while also leveraging Azure’s infrastructure features—security, compliance, and regional availability—to build enterprise-ready AI applications.
You should note that access to AOS is currently limited.
How Does the Azure OpenAI Service Work?
AOS allows you to deploy instances of models into your Azure account. These instances then relay your queries to the OpenAI API, so you still get most of the features you already have from using the OpenAI API directly.
Since the AOS instances are resources inside Azure and your Azure account, you get all the permission controls you know from other Azure services. Azure can also check all queries sent to OpenAI, along with the answers you get back.
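To make that difference concrete, here is a minimal sketch comparing the two endpoints a chat-completion request would target; the resource name, deployment name, and API version are hypothetical placeholders, not values from this article:

```python
# Sketch: the same chat-completion request targets different endpoints
# depending on whether you call OpenAI directly or go through an AOS
# deployment. "my-resource" and "my-gpt4-deployment" are placeholders.

OPENAI_URL = "https://api.openai.com/v1/chat/completions"

def aos_url(resource: str, deployment: str, api_version: str) -> str:
    """Build the Azure OpenAI endpoint for a deployed model instance."""
    return (
        f"https://{resource}.openai.azure.com/openai/deployments/"
        f"{deployment}/chat/completions?api-version={api_version}"
    )

url = aos_url("my-resource", "my-gpt4-deployment", "2024-02-01")
# The Azure URL addresses *your* deployment inside *your* Azure account,
# which is what lets Azure apply its permission and auditing machinery
# before the query is relayed onward.
```

Because the deployment lives in your subscription, everything addressed through that URL is subject to your account's access controls.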
What Features Does the Azure OpenAI Service Bring?
With AOS, you can use existing features of the OpenAI API, but you also get Azure-specific extras to ensure safe integration with your current services. Let’s go through these added benefits.
Pre-Trained Generative AI Models
AOS provides REST API access to OpenAI’s powerful language models GPT-4, GPT-3.5-Turbo, and the Embeddings Model series. The GPT-4 and GPT-3.5-Turbo model series have reached general availability.
You can fine-tune AI models like Ada, Babbage, Curie, Cushman, and Davinci. Alternatively, use “Azure OpenAI on your data” to run supported chat models, like GPT-3.5-Turbo and GPT-4, without going through the time-consuming process of fine-tuning at all.
AOS lets you customize these conversational AI models so they avoid generating outdated responses or incorrect information.
Tools to Detect and Mitigate Harmful Use Cases
AOS automatically checks your prompts and completions against Azure’s content management policy, filtering problematic content before it reaches your users.
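One practical consequence is that a completion cut off by the filter is reported in the response itself. As a sketch, a filtered choice can be detected from its `finish_reason` field; the dictionary below is a simplified sample of the chat-completions response shape, not output from a real call:

```python
# Sketch: detecting a completion that was stopped by the content filter.
# The dict mirrors a simplified subset of the chat-completions response;
# in a real application it would come from the API client.

def was_filtered(response: dict) -> bool:
    """Return True if any choice was stopped by the content filter."""
    return any(
        choice.get("finish_reason") == "content_filter"
        for choice in response.get("choices", [])
    )

sample = {
    "choices": [
        {"message": {"content": ""}, "finish_reason": "content_filter"}
    ]
}
print(was_filtered(sample))  # → True
```

Checking for this case lets your application show a friendly message instead of an empty or truncated answer.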
Microsoft Responsible AI
Microsoft takes AI safety seriously and has invested substantial effort in it. For example, its responsible AI program educates product creators on implementing application-level protections that put users in charge.
These include tips and guides like:
- Explaining that text output is AI-generated and letting users approve it
- Filtering content input and output
- Using process and policy protections, including abuse reporting systems and service-level agreements
- Designing guidelines and transparency notes
The Microsoft Responsible AI Principles
Microsoft defined six AI principles that should guide your usage of AI:
- Fairness: AI systems should treat all people fairly.
- Reliability and safety: AI systems should perform reliably and safely.
- Privacy and security: AI systems should be secure and respect privacy.
- Inclusiveness: AI systems should empower everyone and engage people.
- Transparency: AI systems should be understandable.
- Accountability: People should be accountable for AI systems.
Enterprise-Grade Security with Role-Based Access Control and Private Networks
As AOS supports managed identities via Azure Active Directory, you can use role-based access control (RBAC) to manage permissions for your AI models just as with any other Azure service. This means that you can use existing user databases together with this new service.
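As an illustration, granting a user access to an AOS resource works like any other Azure RBAC assignment. The sketch below uses the Azure CLI with the built-in “Cognitive Services OpenAI User” role; the subscription ID, resource group, and resource name are placeholders:

```shell
# Sketch: grant a user the built-in "Cognitive Services OpenAI User" role
# on an AOS resource, so they can call its deployments without a shared
# API key. All names and IDs below are placeholders.
az role assignment create \
  --assignee "user@example.com" \
  --role "Cognitive Services OpenAI User" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.CognitiveServices/accounts/my-aos-resource"
```

Because this is ordinary Azure RBAC, access can be granted or revoked per user, group, or service principal, and audited centrally.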
AOS also works with virtual private networks and private links to ensure your AI-related traffic and workloads are protected from the public internet and your existing private services can securely access AOS.
Note: Private network access is not supported when you use “Azure OpenAI on your data.”
Azure OpenAI Studio
With Azure OpenAI Studio, Microsoft even offers a no-code tool that allows people who can’t program to browse and fine-tune language models.
You can select from all available models and deploy them to your Azure account. Azure OpenAI Studio lets you configure the models via a graphical interface, so no coding skills are required. You can even run your model in a playground environment to ensure it works as expected.
Accessing Azure OpenAI Service
Since access to AOS is currently limited, you have to apply for permission before using it. This process can take up to 10 days, and approval is not guaranteed. While Microsoft states that some features are in general availability (GA), this only means they’re generally available to customers with AOS access. Conversely, features that aren’t in GA may remain unavailable even after you’ve been granted access.
If Microsoft does greenlight your access, you can use AOS via different methods. For each of these, there is a quickstart guide available.
The most general way to access AOS is via the REST API; it works with any programming language that supports HTTP connections, even those without an official Microsoft SDK. The Azure documentation also provides OpenAPI specs that you can feed into an API client generator.
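A chat-completion request against the REST API might look like the following sketch with curl; the resource name, deployment name, and API version are placeholders, and `AZURE_OPENAI_KEY` is assumed to hold one of the keys from your resource:

```shell
# Sketch: calling an AOS deployment over plain HTTPS. All names are
# placeholders; AZURE_OPENAI_KEY must contain one of the resource's keys.
curl "https://my-resource.openai.azure.com/openai/deployments/my-gpt4-deployment/chat/completions?api-version=2024-02-01" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_KEY" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}]}'
```

Note that, unlike the plain OpenAI API, authentication uses an `api-key` header (or an Azure AD token) rather than an `Authorization: Bearer` header.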
The SDKs are more high-level than using the AOS REST API directly.
Officially supported languages are:
- C#
- Go
- Java
- JavaScript
- Python
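Using the Python SDK as an example, the same chat-completion request from above could look like this sketch; the endpoint, key, and deployment name are placeholders, and running it requires a live AOS resource:

```python
# Sketch: a chat completion via the official Python SDK (openai v1+).
# Endpoint, key, and deployment name are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",
    api_key="<your-resource-key>",
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="my-gpt4-deployment",  # the *deployment* name, not the model name
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

A subtle point worth noting: in the Azure variant, the `model` parameter names your deployment, not the underlying OpenAI model.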
Azure OpenAI Studio
With Azure OpenAI Studio, people who can’t program can also deploy, configure, and test OpenAI models on Azure.
Using ChatGPT with Other Cloud Providers
Right now, and probably for the foreseeable future (since Microsoft is a big investor in OpenAI), Azure is the only cloud provider with official OpenAI integration. This makes using the OpenAI API via Azure smoother than with other cloud providers. You can still use it with other providers, but you will need more custom integration code to get things working correctly.
Note: Other cloud providers are either invested in their own AI solutions or those of OpenAI’s competitors. AWS offers its own family of foundation models, Amazon Titan, and officially supports integration with Anthropic’s Claude via Amazon Bedrock. Google has also released its own conversational AI, Bard, but it’s still experimental and has no official GCP integration.
The Azure OpenAI Service brings OpenAI-powered conversational AI to the enterprise, giving you access to new models as soon as possible while letting you manage access via Azure Active Directory and private networks.
Microsoft also brings additional features besides the security of their cloud platform to the mix. These include the Microsoft Responsible AI Standard, an educational resource that teaches how to use AI safely, and Azure OpenAI Studio, a no-code tool that allows for the deployment, configuration, and testing of language models without the need for programming knowledge.
Azure is currently the only cloud provider with an officially supported integration. However, you’re at Microsoft’s mercy in terms of access. If you don’t get access, you’re either left to manual integration of the OpenAI API or should check out the alternatives from AWS and Google.