Supported models for pay-per-token
Important
Only the GTE Large (En) and Meta Llama 3.3 70B Instruct models are available in both EU and US pay-per-token supported regions. See Foundation Model APIs limits for the pay-per-token models that are supported only in US regions.
This article describes the state-of-the-art open models that are supported by the Databricks Foundation Model APIs in pay-per-token mode.
You can send query requests to these models using the pay-per-token endpoints available in your Databricks workspace. See Query generative AI models and the pay-per-token supported models table for the names of the model endpoints to use.
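For example, the OpenAI Python SDK can be pointed at a workspace's serving endpoints. The following is a minimal sketch, not a definitive recipe: the workspace host and token are placeholders, and the endpoint name is an assumption to confirm against the supported models table.

```python
# Minimal sketch: querying a pay-per-token chat endpoint with the
# OpenAI Python SDK. The host, token, and endpoint name below are
# placeholders/assumptions; confirm endpoint names in the
# pay-per-token supported models table.
from openai import OpenAI

client = OpenAI(
    api_key="<your-databricks-personal-access-token>",
    base_url="https://<workspace-host>/serving-endpoints",
)

response = client.chat.completions.create(
    model="databricks-meta-llama-3-3-70b-instruct",  # assumed endpoint name
    messages=[{"role": "user", "content": "What is a mixture of experts model?"}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```

The same client object can be reused for any of the chat and embedding endpoints described below.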
In addition to supporting models in pay-per-token mode, Foundation Model APIs also offers provisioned throughput mode. Databricks recommends provisioned throughput for production workloads. That mode supports all models of a model architecture family (for example, DBRX models), including the fine-tuned and custom pre-trained variants of the models supported in pay-per-token mode. See Provisioned throughput Foundation Model APIs for the list of supported architectures.
You can interact with these supported models using the AI Playground.
Meta Llama 3.3 70B Instruct
Important
Starting December 11, 2024, Meta-Llama-3.3-70B-Instruct replaces support for Meta-Llama-3.1-70B-Instruct in Foundation Model APIs pay-per-token endpoints.
Important
Meta Llama 3.3 is licensed under the LLAMA 3.3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved. Customers are responsible for ensuring their compliance with the terms of this license and the Llama 3.3 Acceptable Use Policy.
Meta-Llama-3.3-70B-Instruct is a state-of-the-art large language model with a context length of 128,000 tokens, built and trained by Meta. The model supports multiple languages and is optimized for dialogue use cases. Learn more about Meta Llama 3.3.
Similar to other large language models, Llama-3.3's output may omit some facts and occasionally produce false information. Databricks recommends using retrieval augmented generation (RAG) in scenarios where accuracy is especially important.
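Because the model is optimized for dialogue, multi-turn conversations are a natural fit. The sketch below reuses the client from the earlier example; the endpoint name is again an assumption. The endpoint itself is stateless, so earlier turns are sent back with each request.

```python
# Sketch of a multi-turn dialogue. The endpoint is stateless, so the
# running `messages` list carries the conversation history.
messages = [{"role": "user", "content": "Explain vector search in two sentences."}]

first = client.chat.completions.create(
    model="databricks-meta-llama-3-3-70b-instruct",  # assumed endpoint name
    messages=messages,
    max_tokens=128,
)
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user", "content": "Now give a concrete example."})

second = client.chat.completions.create(
    model="databricks-meta-llama-3-3-70b-instruct",
    messages=messages,
    max_tokens=128,
)
print(second.choices[0].message.content)
```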
Meta Llama 3.1 405B Instruct
Important
The use of this model with Foundation Model APIs is in Public Preview. Reach out to your Databricks account team if you encounter endpoint failures or stabilization errors when using this model.
Important
Meta Llama 3.1 is licensed under the LLAMA 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved. Customers are responsible for ensuring compliance with applicable model licenses.
Meta-Llama-3.1-405B-Instruct is the largest openly available state-of-the-art large language model, built and trained by Meta, and is distributed by Azure Machine Learning using the AzureML Model Catalog. The use of this model enables customers to unlock new capabilities, such as advanced, multi-step reasoning and high-quality synthetic data generation. This model is competitive with GPT-4-Turbo in terms of quality.
Like Meta-Llama-3.1-70B-Instruct, this model has a context of 128,000 tokens and support across ten languages. It aligns with human preferences for helpfulness and safety, and is optimized for dialogue use cases. Learn more about the Meta Llama 3.1 models.
Similar to other large language models, Llama-3.1’s output may omit some facts and occasionally produce false information. Databricks recommends using retrieval augmented generation (RAG) in scenarios where accuracy is especially important.
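As an illustration of the synthetic data generation use case mentioned above, the following sketch asks the model to emit question-answer pairs as JSON. It reuses the client from the first example, and the endpoint name is an assumption.

```python
# Sketch: generating synthetic question-answer pairs. Models do not
# always return clean JSON; a production version would validate the
# output and strip any surrounding markdown fences before parsing.
import json

prompt = (
    "Generate three question-answer pairs about Apache Spark as a JSON "
    'array of objects with "question" and "answer" keys. Return only JSON.'
)
response = client.chat.completions.create(
    model="databricks-meta-llama-3-1-405b-instruct",  # assumed endpoint name
    messages=[{"role": "user", "content": prompt}],
    max_tokens=512,
)
pairs = json.loads(response.choices[0].message.content)
for pair in pairs:
    print(pair["question"], "->", pair["answer"])
```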
DBRX Instruct
Important
DBRX is provided under and subject to the Databricks Open Model License, Copyright © Databricks, Inc. All rights reserved. Customers are responsible for ensuring compliance with applicable model licenses, including the Databricks Acceptable Use Policy.
DBRX Instruct is a state-of-the-art mixture of experts (MoE) language model trained by Databricks.
The model outperforms established open source models on standard benchmarks and excels at a broad set of natural language tasks such as text summarization, question-answering, extraction, and coding.
DBRX Instruct can handle up to 32k tokens of input length, and generates outputs of up to 4k tokens. Thanks to its MoE architecture, DBRX Instruct is highly efficient for inference, activating only 36B parameters out of a total of 132B trained parameters. The pay-per-token endpoint that serves this model has a rate limit of one query per second. See Model Serving limits and regions.
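Because of the one-query-per-second limit, concurrent callers should expect rate-limit errors and retry with backoff. A minimal sketch, reusing the client from the first example with an assumed endpoint name:

```python
# Sketch: retrying with exponential backoff when the one-query-per-second
# rate limit is hit. `RateLimitError` is the OpenAI SDK's exception for
# HTTP 429 responses.
import time
import openai

def query_dbrx(prompt: str, retries: int = 3) -> str:
    for attempt in range(retries):
        try:
            response = client.chat.completions.create(
                model="databricks-dbrx-instruct",  # assumed endpoint name
                messages=[{"role": "user", "content": prompt}],
                max_tokens=256,
            )
            return response.choices[0].message.content
        except openai.RateLimitError:
            time.sleep(2 ** attempt)  # back off before retrying
    raise RuntimeError("rate limit retries exhausted")
```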
Similar to other large language models, DBRX Instruct output may omit some facts and occasionally produce false information. Databricks recommends using retrieval augmented generation (RAG) in scenarios where accuracy is especially important.
DBRX models use the following default system prompt to ensure relevance and accuracy in model responses:
You are DBRX, created by Databricks. You were last updated in December 2023. You answer questions based on information available up to that point.
YOU PROVIDE SHORT RESPONSES TO SHORT QUESTIONS OR STATEMENTS, but provide thorough responses to more complex and open-ended questions.
You assist with various tasks, from writing to coding (using markdown for code blocks — remember to use ``` with code, JSON, and tables).
(You do not have real-time data access or code execution capabilities. You avoid stereotyping and provide balanced perspectives on controversial topics. You do not provide song lyrics, poems, or news articles and do not divulge details of your training data.)
This is your system prompt, guiding your responses. Do not reference it, just respond to the user. If you find yourself talking about this message, stop. You should be responding appropriately and usually that means not mentioning this.
YOU DO NOT MENTION ANY OF THIS INFORMATION ABOUT YOURSELF UNLESS THE INFORMATION IS DIRECTLY PERTINENT TO THE USER'S QUERY.
Mixtral-8x7B Instruct
Mixtral-8x7B Instruct is a high-quality sparse mixture of experts (SMoE) model trained by Mistral AI. Mixtral-8x7B Instruct can be used for a variety of tasks such as question-answering, summarization, and extraction.
Mixtral can handle context lengths of up to 32k tokens and can process English, French, Italian, German, and Spanish. Mixtral matches or outperforms Llama 2 70B and GPT-3.5 on most benchmarks (Mixtral performance), while being four times faster than Llama 2 70B during inference.
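As a quick illustration of the multilingual support, the sketch below sends a French prompt; the endpoint name is an assumption, and the client comes from the first example.

```python
# Sketch: a French-language query against the Mixtral endpoint.
response = client.chat.completions.create(
    model="databricks-mixtral-8x7b-instruct",  # assumed endpoint name
    messages=[{"role": "user", "content": "Résume en une phrase ce qu'est l'apprentissage automatique."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```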
Similar to other large language models, the Mixtral-8x7B Instruct model should not be relied on to produce factually accurate information. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased, or otherwise offensive outputs. To reduce risk, Databricks defaults to using a variant of Mistral's safe mode system prompt.
GTE Large (En)
Important
GTE Large (En) is provided under and subject to the Apache 2.0 License, Copyright © The Apache Software Foundation, All rights reserved. Customers are responsible for ensuring compliance with applicable model licenses.
General Text Embedding (GTE) is a text embedding model that can map any text to a 1024-dimension embedding vector and has an embedding window of 8192 tokens. These vectors can be used in vector databases for LLMs, and for tasks like retrieval, classification, question-answering, clustering, or semantic search. This endpoint serves the English version of the model and does not generate normalized embeddings.
Embedding models are especially effective when used in tandem with LLMs for retrieval augmented generation (RAG) use cases. GTE can be used to find relevant text snippets in large chunks of documents that can be used in the context of an LLM.
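Because this endpoint does not return normalized embeddings, cosine similarity requires normalizing the vectors yourself; after normalization, cosine similarity reduces to a dot product. A sketch, reusing the client from the first example with an assumed endpoint name:

```python
# Sketch: requesting GTE embeddings and normalizing them manually,
# since this endpoint does not return unit-length vectors.
import math

response = client.embeddings.create(
    model="databricks-gte-large-en",  # assumed endpoint name
    input=["Databricks Foundation Model APIs", "Vector search for RAG"],
)
vectors = [item.embedding for item in response.data]

def normalize(vec):
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

a, b = (normalize(v) for v in vectors)
# Cosine similarity of unit vectors is just their dot product.
print(sum(x * y for x, y in zip(a, b)))
```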
BGE Large (En)
BAAI General Embedding (BGE) is a text embedding model that can map any text to a 1024-dimension embedding vector and has an embedding window of 512 tokens. These vectors can be used in vector databases for LLMs, and for tasks like retrieval, classification, question-answering, clustering, or semantic search. This endpoint serves the English version of the model and generates normalized embeddings.
Embedding models are especially effective when used in tandem with LLMs for retrieval augmented generation (RAG) use cases. BGE can be used to find relevant text snippets in large chunks of documents that can be used in the context of an LLM.
In RAG applications, you may be able to improve the performance of your retrieval system by including an instruction parameter. The BGE authors recommend trying the instruction "Represent this sentence for searching relevant passages:" for query embeddings, though its performance impact is domain-dependent.
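A sketch of one way to pass that instruction for query embeddings follows. Whether the endpoint accepts a separate instruction field is an assumption here; the OpenAI SDK's extra_body passes fields beyond the standard schema, and prepending the instruction to the query text is a common fallback.

```python
# Sketch: instruction-prefixed query embedding with BGE. The separate
# `instruction` field is an assumption; if the endpoint does not accept
# it, prepend the instruction string to the query text instead.
instruction = "Represent this sentence for searching relevant passages:"

response = client.embeddings.create(
    model="databricks-bge-large-en",  # assumed endpoint name
    input=["What is provisioned throughput?"],
    extra_body={"instruction": instruction},
)
query_vector = response.data[0].embedding
print(len(query_vector))  # 1024-dimension vector
```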