How to Get Your Azure Model API Endpoint and API Key
Written by Promptitude Team

To use the Azure AI Model Inference API, you’ll need two key values:

  • Endpoint Target URI

  • API Key

This guide will help you collect both.

Prerequisites

  • An active Azure subscription

  • Access to the Azure portal

Step-by-Step Guide

1️⃣ Create your Azure AI Studio hub

Step 1: Navigate to Azure AI Studio

  1. Log in to Azure AI Studio at https://ai.azure.com/.

  2. Select All hubs from the left pane, then select + New hub.

  3. In the Create a new hub dialog, enter a name, select your subscription, and select Next. Leave the default Connect Azure AI Services option selected; a new AI services connection is created for the hub.

  4. Review the information and select Create.

  5. You can view the progress of the hub creation in the wizard.


  6. Once the creation process is complete, you'll be automatically redirected to the newly created hub.


2️⃣ Create your Azure AI Studio project

Step 1: Go to your Azure AI Studio hub

  1. In your Azure AI Studio hub, select Hub overview from the left pane, then select + New project.

  2. In the Create a project dialog, enter a name and select Create a project.

  3. You can view the progress of the project creation in the dialog.

  4. Once the creation process is complete, you'll be automatically redirected to the newly created project.


3️⃣ Find your model in the model catalog

Step 1: Go to your project model catalog

  1. In your Azure AI Studio project, select Model catalog from the left pane and type the name of the model you want to deploy into the search bar (we will use the Jais model as an example).

  2. Select the model, and you'll be redirected to its model card with detailed information.

  3. Click Deploy on the model card to begin the deployment process.

  4. In the deployment dialog, review the deployment details, pricing, and terms. If you're comfortable with them, click the Subscribe and deploy button.

  5. You can view the validation progress in the dialog.

  6. Once validation is complete, a dialog will pop up where you can name your deployment. After naming it, click the Deploy button.

  7. When the deployment is complete, you'll be redirected to the deployment page, where you can see that the creation process is ongoing.


  8. Once the creation process is complete, you'll see the two key values needed to use the model in Promptitude.io: the Endpoint Target URI and the API key.
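Once you have both values, you can sanity-check the deployment outside Promptitude.io. The sketch below is a minimal, hypothetical example of calling the deployed model's chat-completions route with the Target URI and API key; the exact route (`/chat/completions`) and the bearer-style auth header are assumptions based on common serverless inference setups, so check the sample code on your deployment page for the exact values your model expects.

```python
import json
from urllib import request

# Placeholder values — replace with the Endpoint Target URI and API key
# shown on your deployment page.
TARGET_URI = "https://your-deployment.eastus2.models.ai.azure.com"
API_KEY = "your-api-key"

def build_chat_request(target_uri: str, api_key: str, user_message: str):
    """Assemble the URL, headers, and JSON body for a chat-completions call."""
    url = target_uri.rstrip("/") + "/chat/completions"  # route is an assumption
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + api_key,  # auth header is an assumption
    }
    body = {"messages": [{"role": "user", "content": user_message}]}
    return url, headers, json.dumps(body).encode("utf-8")

def call_model(target_uri: str, api_key: str, user_message: str) -> dict:
    """Send the request and return the parsed JSON response."""
    url, headers, data = build_chat_request(target_uri, api_key, user_message)
    req = request.Request(url, data=data, headers=headers, method="POST")
    with request.urlopen(req) as resp:
        return json.load(resp)
```

If the call succeeds, the deployment is reachable and the key is valid, and the same two values can be pasted into Promptitude.io in the next section.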


4️⃣ Configure your Serverless Inference API connection in Promptitude.io

  1. Go to the Serverless Inference API Hub page and click the New Inference API Provider button.

  2. In the slide-over panel, give your new Inference API provider a name and click the Save button.

  3. Once the provider is created, you'll be redirected to the provider's models. From here, click Add Endpoint Model.

  4. Fill in the properties you obtained earlier to add your available model. Once you've verified that the properties are correct, click Save.

  5. If everything went well, you will see your model in the table.

  6. Now you can use your model in your prompts!

📌 Summary

By following these steps, you will have gathered the following information:

  • Endpoint Target URI: The private endpoint URI (Uniform Resource Identifier) of your deployed model.

  • API key: The private API key of your deployed model.

After obtaining this information, you can configure and use your deployed models within your Promptitude.io application.
