Databricks Foundation Model APIs

This article provides an overview of the Foundation Model APIs in Azure Databricks. It includes requirements for use, supported models, and limitations.

What are Databricks Foundation Model APIs?

Mosaic AI Model Serving supports Foundation Model APIs, which allow you to access and query state-of-the-art open models from a serving endpoint. With Foundation Model APIs, you can quickly and easily build applications that use a high-quality generative AI model without maintaining your own model deployment. Foundation Model APIs is a Databricks Designated Service, which means it uses Databricks Geos to manage data residency when processing customer content.

The Foundation Model APIs are provided in the following pricing modes:

  • Pay-per-token: This is the easiest way to start accessing foundation models on Databricks and is recommended for getting started with Foundation Model APIs. This mode is not designed for high-throughput applications or performant production workloads.
  • Provisioned throughput: This mode is recommended for all production workloads, especially those that require high throughput, performance guarantees, fine-tuned models, or have additional security requirements. Provisioned throughput endpoints are available with compliance certifications like HIPAA.

See Use Foundation Model APIs for guidance on how to use these modes and the supported models.

Using the Foundation Model APIs, you can do the following:

  • Query a generalized LLM to verify a project’s validity before investing more resources.
  • Query a generalized LLM to create a quick proof-of-concept for an LLM-based application before investing in training and deploying a custom model.
  • Use a foundation model, along with a vector database, to build a chatbot using retrieval augmented generation (RAG).
  • Replace proprietary models with open alternatives to optimize for cost and performance.
  • Efficiently compare LLMs to see which is the best candidate for your use case, or swap a production model with a better performing one.
  • Build an LLM application for development or production on top of a scalable, SLA-backed LLM serving solution that can support your production traffic spikes.

Requirements

Use Foundation Model APIs

You have multiple options for using the Foundation Model APIs.

The APIs are OpenAI-compatible, so you can query them with the OpenAI client. You can also use the UI, the Foundation Model APIs Python SDK, the MLflow Deployments SDK, or the REST API to query supported models. Databricks recommends using the OpenAI client SDK or the REST API for extended interactions and the UI for trying out the feature.

See Query generative AI models for scoring examples.
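As a sketch of what a REST-style query can look like, the following builds an OpenAI-compatible chat request against a pay-per-token endpoint. The workspace URL, token placeholder, and prompt are illustrative; substitute your own workspace host and personal access token.

```python
import json
import urllib.request

# Assumed values for illustration; substitute your own workspace host,
# personal access token, and endpoint name.
WORKSPACE_URL = "https://adb-1234567890123456.7.azuredatabricks.net"
TOKEN = "<your-personal-access-token>"
ENDPOINT = "databricks-meta-llama-3-3-70b-instruct"

# Chat-style request body in the OpenAI-compatible format that
# Foundation Model API chat endpoints accept.
payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what a vector database is."},
    ],
    "max_tokens": 256,
}

request = urllib.request.Request(
    url=f"{WORKSPACE_URL}/serving-endpoints/{ENDPOINT}/invocations",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    },
)

# Uncomment to send the request from a network-enabled environment:
# with urllib.request.urlopen(request) as response:
#     print(json.load(response)["choices"][0]["message"]["content"])
```

The same request body works unchanged with the OpenAI client if you point its `base_url` at `{WORKSPACE_URL}/serving-endpoints`.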

Pay-per-token Foundation Model APIs

You can access pay-per-token models in your Azure Databricks workspace. These models are recommended for getting started. To access them in your workspace, click the Serving tab in the left sidebar. The Foundation Model APIs are located at the top of the Endpoints list view.

Serving endpoints list

The following table summarizes the supported models for pay-per-token. See Supported models for pay-per-token for additional model information.

If you want to test out and chat with these models you can do so using the AI Playground. See Chat with LLMs and prototype GenAI apps using AI Playground.

Important

  • Starting December 11, 2024, Meta-Llama-3.3-70B-Instruct replaces support for Meta-Llama-3.1-70B-Instruct in Foundation Model APIs pay-per-token endpoints.
  • Meta-Llama-3.1-405B-Instruct is the largest openly available state-of-the-art large language model, built and trained by Meta and distributed by Azure Machine Learning using the AzureML Model Catalog.
  • The following models are now retired. See Retired models for recommended replacement models.
    • Llama 2 70B Chat
    • MPT 7B Instruct
    • MPT 30B Instruct
| Model | Task type | Endpoint | Notes |
| --- | --- | --- | --- |
| GTE Large (English) | Embedding | databricks-gte-large-en | Does not generate normalized embeddings. |
| Meta-Llama-3.3-70B-Instruct | Chat | databricks-meta-llama-3-3-70b-instruct | |
| Meta-Llama-3.1-405B-Instruct* | Chat | databricks-meta-llama-3-1-405b-instruct | See Foundation Model APIs limits for region availability. |
| DBRX Instruct | Chat | databricks-dbrx-instruct | See Foundation Model APIs limits for region availability. |
| Mixtral-8x7B Instruct | Chat | databricks-mixtral-8x7b-instruct | See Foundation Model APIs limits for region availability. |
| BGE Large (English) | Embedding | databricks-bge-large-en | See Foundation Model APIs limits for region availability. |

* Reach out to your Databricks account team if you encounter endpoint failures or stabilization errors when using this model.
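Because databricks-gte-large-en does not return normalized embeddings, you may want to L2-normalize vectors yourself so that a plain dot product behaves as cosine similarity. A minimal sketch in pure Python (the sample vector is made up for illustration; real GTE Large embeddings are much higher-dimensional):

```python
import math

def l2_normalize(vector):
    """Scale a vector to unit length so dot products act as cosine similarity."""
    norm = math.sqrt(sum(x * x for x in vector))
    if norm == 0.0:
        return list(vector)  # leave the zero vector unchanged
    return [x / norm for x in vector]

# Made-up 4-dimensional embedding for illustration.
embedding = [3.0, 4.0, 0.0, 0.0]
unit = l2_normalize(embedding)
print(unit)  # → [0.6, 0.8, 0.0, 0.0]
```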

Provisioned throughput Foundation Model APIs

Provisioned throughput provides endpoints with optimized inference for foundation model workloads that require performance guarantees. Databricks recommends provisioned throughput for production workloads. See Provisioned throughput Foundation Model APIs for a step-by-step guide on how to deploy Foundation Model APIs in provisioned throughput mode.

Provisioned throughput support includes:

  • Base models of all sizes. Base models can be accessed using Databricks Marketplace, or you can download them from Hugging Face or another external source and register them in Unity Catalog. The latter approach works with any fine-tuned variant of the supported models, regardless of the fine-tuning method used.
  • Fine-tuned variants of base models, such as models that are fine-tuned on proprietary data.
  • Fully custom weights and tokenizers, such as models trained from scratch, continued pre-trained models, or other variations that use a supported base model architecture (such as CodeLlama).
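As a sketch of what a provisioned throughput deployment request can look like, the following builds the body for creating a serving endpoint through the Databricks serving-endpoints REST API. The endpoint name, Unity Catalog model name, version, and throughput values are placeholders for illustration; confirm the exact field names against the current serving-endpoints API reference.

```python
import json

# Illustrative request body for creating a provisioned throughput
# endpoint. The entity_name is a hypothetical Unity Catalog model
# (catalog.schema.model); the throughput bounds are placeholders.
endpoint_config = {
    "name": "my-llama-endpoint",  # hypothetical endpoint name
    "config": {
        "served_entities": [
            {
                "entity_name": "main.default.my_fine_tuned_llama",
                "entity_version": "1",
                "min_provisioned_throughput": 0,
                "max_provisioned_throughput": 100,
            }
        ]
    },
}

# The body would be POSTed to {workspace-url}/api/2.0/serving-endpoints
# with a bearer token; shown here only as a payload sketch.
print(json.dumps(endpoint_config, indent=2))
```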

The following table summarizes the supported model architectures for provisioned throughput.

Important

Meta Llama 3.3 is licensed under the LLAMA 3.3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved. Customers are responsible for ensuring their compliance with the terms of this license and the Llama 3.3 Acceptable Use Policy.

Meta Llama 3.2 is licensed under the LLAMA 3.2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved. Customers are responsible for ensuring their compliance with the terms of this license and the Llama 3.2 Acceptable Use Policy.

Meta Llama 3.1 is licensed under the LLAMA 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved. Customers are responsible for ensuring their compliance with the terms of this license and the Llama 3.1 Acceptable Use Policy.

| Model architecture | Task types | Notes |
| --- | --- | --- |
| Meta Llama 3.3 | Chat or Completion | See Provisioned throughput limits for the Meta Llama model variants that are supported and their region availability. |
| Meta Llama 3.2 3B | Chat or Completion | |
| Meta Llama 3.2 1B | Chat or Completion | |
| Meta Llama 3.1 | Chat or Completion | |
| Meta Llama 3 | Chat or Completion | |
| Meta Llama 2 | Chat or Completion | |
| DBRX | Chat or Completion | See Provisioned throughput limits for region availability. |
| Mistral | Chat or Completion | |
| Mixtral | Chat or Completion | |
| MPT | Chat or Completion | |
| GTE v1.5 (English) | Embedding | Does not generate normalized embeddings. |
| BGE v1.5 (English) | Embedding | |

Limitations

See Foundation Model APIs limits.

Additional resources