What is the Serverless Inference API Hub in Promptitude?
Written by Promptitude Team
Updated over a month ago

Promptitude's Serverless Inference API Hub is your gateway to seamlessly integrating various AI providers into your workflow.

Connect and manage multiple AI models from different cloud providers, all within the Promptitude platform. By leveraging serverless technology, you can enjoy scalable and secure AI inference without the hassle of managing servers yourself.

Note 🔔 Serverless functionality is available with Enterprise plans in Promptitude.

Serverless & Promptitude Hub

Serverless computing is a cloud-native development model that allows you to build and run applications without having to manage the underlying infrastructure. In the context of Promptitude's Serverless Inference API Hub:

  • Deploy Multiple Models: You can deploy all your AI models from various cloud providers, such as Microsoft Azure, directly within Promptitude.

  • Maintain Privacy and Security: The hub ensures that the privacy and security of both your AI cloud providers and Promptitude are maintained.

  • Scalability: Enjoy scalable inference without the need for server management, allowing your applications to handle varying workloads efficiently (see the request sketch after this list).

  • Structured Prompt Management: Use Promptitude's features to create prompts in a structured and organized way, enhancing your overall workflow.
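
To make this concrete, here is a minimal sketch of what a serverless inference call looks like under the hood: a single authenticated HTTPS request to a managed endpoint, with no servers for you to provision or maintain. The example targets an Azure OpenAI deployment; the resource name, deployment name, and API version are placeholders for your own values, and once your provider is configured, Promptitude handles calls like this for you.

```python
# Minimal sketch of a serverless inference call to an Azure OpenAI deployment.
# The resource name, deployment name, and API version are placeholders --
# replace them with the values from your own Azure deployment.
import os

import requests

AZURE_ENDPOINT = "https://<your-resource>.openai.azure.com"  # custom endpoint
DEPLOYMENT = "<your-deployment-name>"                        # deployed model name
API_VERSION = "2024-02-01"                                   # example API version

url = (
    f"{AZURE_ENDPOINT}/openai/deployments/{DEPLOYMENT}"
    f"/chat/completions?api-version={API_VERSION}"
)

response = requests.post(
    url,
    headers={
        "api-key": os.environ["AZURE_OPENAI_API_KEY"],  # keep secrets out of code
        "Content-Type": "application/json",
    },
    json={"messages": [{"role": "user", "content": "Hello!"}]},
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

The same request shape works for any deployment you create; scaling, patching, and availability are the cloud provider's job, which is what makes the model "serverless" from your point of view.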

How to set up and use the Serverless Inference API Hub

Getting started with the Serverless Inference API Hub is straightforward:

1️⃣ Add a New Serverless Configuration

  • Go to the Settings section in your Promptitude account.

  • Navigate to the Serverless tab.

  • Click on the option to add a new Serverless configuration.

  • Choose the cloud provider you want to integrate (e.g., Microsoft Azure).

2️⃣ Configure Your Provider and Deploy Your Models

  • Create your provider by adding the necessary details such as custom Endpoints and API keys for your models.

  • Ensure you meet the Promptitude prerequisites for the selected provider.

  • Once your provider is set up, deploy your AI models according to the specific requirements of each provider.

Model deployment itself depends on the requirements of each provider; the sketch below shows an optional way to verify the details you have collected before entering them in Promptitude.
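
Before saving the configuration in Promptitude, it can help to confirm that the endpoint, API key, and deployment name you collected actually work. The sketch below does that check with the official openai Python package (an optional, illustrative step, not something Promptitude requires); every value shown is a placeholder for your own Azure deployment.

```python
# Optional sanity check: verify the endpoint, API key, and deployment name
# before entering them in Promptitude's Serverless settings.
# All values below are placeholders for your own Azure deployment.
import os

from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # custom endpoint
    api_key=os.environ["AZURE_OPENAI_API_KEY"],                 # API key for the model
    api_version="2024-02-01",                                   # example API version
)

# In Azure OpenAI, the `model` argument is the *deployment name* you created.
completion = client.chat.completions.create(
    model="<your-deployment-name>",
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=5,
)
print("Deployment reachable:", completion.choices[0].message.content)
```

If this call succeeds, the same endpoint, API key, and deployment name can be entered in the Serverless tab.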

3️⃣ Select It in Your Prompts

  • Use it in your chats and prompts just like any other AI Connection you have added.

    You're done!


📌 Supported Cloud AI Providers

Currently, Promptitude's Serverless Inference API Hub supports cloud providers such as Microsoft Azure.

As Promptitude continues to evolve, you can expect more providers to be added, giving you even more flexibility in your AI model deployment.

The Serverless Inference API Hub in Promptitude offers a convenient, secure, and scalable way to manage your AI models from various cloud providers.

By following these simple steps, you can leverage the full potential of your AI models without worrying about server management, and create structured, organized prompts that make your workflow more efficient and effective.


