Endpoint structure
When using the OpenAI-compatible API with Runpod, your requests are directed to a base URL pattern that includes your endpoint ID. Replace `ENDPOINT_ID` with your Serverless endpoint ID.
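As a sketch, the base URL follows this pattern (verify the exact URL on your endpoint's details page in the Runpod console):

```
https://api.runpod.ai/v2/ENDPOINT_ID/openai/v1
```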
Supported APIs
The vLLM worker implements these core OpenAI API endpoints:

| Endpoint | Description | Status |
|---|---|---|
| `/chat/completions` | Generate chat model completions | Fully supported |
| `/completions` | Generate text completions | Fully supported |
| `/models` | List available models | Supported |
Model naming
The `MODEL_NAME` environment variable is essential for all OpenAI-compatible API requests. This variable corresponds to either:
- The Hugging Face model you've deployed (e.g., `mistralai/Mistral-7B-Instruct-v0.2`).
- A custom name if you've set `OPENAI_SERVED_MODEL_NAME_OVERRIDE` as an environment variable.
Initialize the OpenAI client
Before you can send API requests, set up an OpenAI client with your Runpod API key and endpoint URL:
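A minimal sketch using the official `openai` Python SDK (v1 or later); the endpoint ID, API key, and model name below are placeholders for your own values:

```python
from openai import OpenAI

# Point the standard OpenAI client at your Runpod Serverless endpoint.
# Replace ENDPOINT_ID with your endpoint ID and use your Runpod API key.
client = OpenAI(
    api_key="YOUR_RUNPOD_API_KEY",
    base_url="https://api.runpod.ai/v2/ENDPOINT_ID/openai/v1",
)

MODEL_NAME = "mistralai/Mistral-7B-Instruct-v0.2"  # the model you deployed
```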
Send requests
You can use Runpod's OpenAI-compatible API to send requests to your Runpod endpoint, enabling you to use the same client libraries and code that you use with OpenAI's services. You only need to change the base URL to point to your Runpod endpoint.

You can also send requests using Runpod's native API, which provides additional flexibility and control.
Chat completions
The `/chat/completions` endpoint is designed for instruction-tuned LLMs that follow a chat format.
Non-streaming request
Here's how you can make a basic chat completion request:
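A minimal sketch, reusing the `client` and `MODEL_NAME` placeholders defined in the setup above:

```python
# Basic (non-streaming) chat completion request.
response = client.chat.completions.create(
    model=MODEL_NAME,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what a Serverless endpoint is in one sentence."},
    ],
    temperature=0.7,
    max_tokens=256,
)

print(response.choices[0].message.content)
```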
Response format
The API returns responses in this JSON format:
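A representative example following the standard OpenAI chat completion schema; the ID, timestamp, token counts, and content shown here are illustrative placeholders, not actual output:

```json
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1700000000,
  "model": "mistralai/Mistral-7B-Instruct-v0.2",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "A Serverless endpoint runs your model on demand without you managing servers."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 18,
    "total_tokens": 43
  }
}
```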
Streaming request
Streaming allows you to receive the model's output incrementally as it's generated, rather than waiting for the complete response. This real-time delivery enhances responsiveness, making it ideal for interactive applications like chatbots or for monitoring the progress of lengthy generation tasks.
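A minimal streaming sketch, again assuming the `client` and `MODEL_NAME` defined earlier:

```python
# Streaming chat completion: chunks arrive as they are generated.
stream = client.chat.completions.create(
    model=MODEL_NAME,
    messages=[{"role": "user", "content": "Write a short poem about GPUs."}],
    stream=True,
)

for chunk in stream:
    # Each chunk carries an incremental delta; content may be None on some chunks.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```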
Text completions
The `/completions` endpoint is designed for base LLMs and text completion tasks.
Non-streaming request
Here's how you can make a text completion request:
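A minimal sketch, assuming the same `client` and `MODEL_NAME` as above:

```python
# Basic (non-streaming) text completion request.
response = client.completions.create(
    model=MODEL_NAME,
    prompt="The three most important concepts in distributed systems are",
    max_tokens=64,
    temperature=0.7,
)

print(response.choices[0].text)
```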
Response format
The API returns responses in this JSON format:
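A representative example following the standard OpenAI text completion schema; field values are illustrative placeholders:

```json
{
  "id": "cmpl-456",
  "object": "text_completion",
  "created": 1700000000,
  "model": "mistralai/Mistral-7B-Instruct-v0.2",
  "choices": [
    {
      "index": 0,
      "text": " consistency, availability, and partition tolerance.",
      "logprobs": null,
      "finish_reason": "length"
    }
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 16,
    "total_tokens": 26
  }
}
```

Streaming request
Streaming works the same way as for chat completions: pass `stream=True` and iterate over the returned chunks. A minimal sketch, using the same assumed `client` and `MODEL_NAME`:

```python
# Streaming text completion.
stream = client.completions.create(
    model=MODEL_NAME,
    prompt="Once upon a time",
    max_tokens=64,
    stream=True,
)

for chunk in stream:
    if chunk.choices and chunk.choices[0].text:
        print(chunk.choices[0].text, end="", flush=True)
print()
```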
List available models
The `/models` endpoint allows you to get a list of available models on your endpoint:
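A minimal sketch using the assumed `client` from the setup above:

```python
# List the models served by your endpoint.
models = client.models.list()

for model in models.data:
    print(model.id)
```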
Response format
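A representative example following the standard OpenAI model list schema; the `id`, timestamp, and `owned_by` values are illustrative placeholders:

```json
{
  "object": "list",
  "data": [
    {
      "id": "mistralai/Mistral-7B-Instruct-v0.2",
      "object": "model",
      "created": 1700000000,
      "owned_by": "vllm"
    }
  ]
}
```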
Chat completion parameters
Here are all available parameters for the `/chat/completions` endpoint:

| Parameter | Type | Default | Description |
|---|---|---|---|
| `messages` | list[dict[str, str]] | Required | List of messages with `role` and `content` keys. The model's chat template will be applied automatically. |
| `model` | string | Required | The model repo that you've deployed on your Runpod Serverless endpoint. |
| `temperature` | float | 0.7 | Controls the randomness of sampling. Lower values make it more deterministic, higher values make it more random. Zero means greedy sampling. |
| `top_p` | float | 1.0 | Controls the cumulative probability of top tokens to consider. Must be in (0, 1]. Set to 1 to consider all tokens. |
| `n` | int | 1 | Number of output sequences to return for the given prompt. |
| `max_tokens` | int | None | Maximum number of tokens to generate per output sequence. |
| `seed` | int | None | Random seed to use for the generation. |
| `stop` | string or list[str] | list | String(s) that stop generation when produced. The returned output will not contain the stop strings. |
| `stream` | bool | false | Whether to stream the response. |
| `presence_penalty` | float | 0.0 | Penalizes new tokens based on whether they appear in the generated text so far. Values > 0 encourage new tokens, values < 0 encourage repetition. |
| `frequency_penalty` | float | 0.0 | Penalizes new tokens based on their frequency in the generated text so far. Values > 0 encourage new tokens, values < 0 encourage repetition. |
| `logit_bias` | dict[str, float] | None | Unsupported by vLLM. |
| `user` | string | None | Unsupported by vLLM. |
Additional vLLM parameters
vLLM supports additional parameters beyond the standard OpenAI API:

| Parameter | Type | Default | Description |
|---|---|---|---|
| `best_of` | int | None | Number of output sequences generated from the prompt. From these `best_of` sequences, the top `n` sequences are returned. Must be ≥ `n`. Treated as beam width when `use_beam_search` is true. |
| `top_k` | int | -1 | Controls the number of top tokens to consider. Set to -1 to consider all tokens. |
| `ignore_eos` | bool | false | Whether to ignore the EOS token and continue generating tokens after EOS is generated. |
| `use_beam_search` | bool | false | Whether to use beam search instead of sampling. |
| `stop_token_ids` | list[int] | list | List of token IDs that stop generation when produced. The returned output will contain the stop tokens unless they are special tokens. |
| `skip_special_tokens` | bool | true | Whether to skip special tokens in the output. |
| `spaces_between_special_tokens` | bool | true | Whether to add spaces between special tokens in the output. |
| `add_generation_prompt` | bool | true | Whether to add the generation prompt. |
| `echo` | bool | false | Echo back the prompt in addition to the completion. |
| `repetition_penalty` | float | 1.0 | Penalizes new tokens based on whether they appear in the prompt and generated text so far. Values > 1 encourage new tokens, values < 1 encourage repetition. |
| `min_p` | float | 0.0 | Minimum probability for a token to be considered. |
| `length_penalty` | float | 1.0 | Penalizes sequences based on their length. Used in beam search. |
| `include_stop_str_in_output` | bool | false | Whether to include the stop strings in output text. |
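These vLLM-specific fields are not part of the OpenAI SDK's typed parameters, so one way to pass them from the Python client is the SDK's `extra_body` argument. A sketch, assuming the `client` and `MODEL_NAME` from earlier:

```python
# Pass vLLM-specific sampling parameters alongside standard OpenAI ones.
response = client.chat.completions.create(
    model=MODEL_NAME,
    messages=[{"role": "user", "content": "Give me three startup name ideas."}],
    temperature=0.8,
    extra_body={
        "top_k": 40,              # only sample from the 40 most likely tokens
        "min_p": 0.05,            # drop tokens far less likely than the top token
        "repetition_penalty": 1.1,
    },
)

print(response.choices[0].message.content)
```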
Text completion parameters
Here are all available parameters for the `/completions` endpoint:

| Parameter | Type | Default | Description |
|---|---|---|---|
| `prompt` | string or list[str] | Required | The prompt(s) to generate completions for. |
| `model` | string | Required | The model repo that you've deployed on your Runpod Serverless endpoint. |
| `temperature` | float | 0.7 | Controls the randomness of sampling. Lower values make it more deterministic, higher values make it more random. Zero means greedy sampling. |
| `top_p` | float | 1.0 | Controls the cumulative probability of top tokens to consider. Must be in (0, 1]. Set to 1 to consider all tokens. |
| `n` | int | 1 | Number of output sequences to return for the given prompt. |
| `max_tokens` | int | 16 | Maximum number of tokens to generate per output sequence. |
| `seed` | int | None | Random seed to use for the generation. |
| `stop` | string or list[str] | list | String(s) that stop generation when produced. The returned output will not contain the stop strings. |
| `stream` | bool | false | Whether to stream the response. |
| `presence_penalty` | float | 0.0 | Penalizes new tokens based on whether they appear in the generated text so far. Values > 0 encourage new tokens, values < 0 encourage repetition. |
| `frequency_penalty` | float | 0.0 | Penalizes new tokens based on their frequency in the generated text so far. Values > 0 encourage new tokens, values < 0 encourage repetition. |
| `logit_bias` | dict[str, float] | None | Unsupported by vLLM. |
| `user` | string | None | Unsupported by vLLM. |
Environment variables
Use these environment variables to customize the OpenAI compatibility:

| Variable | Default | Description |
|---|---|---|
| `RAW_OPENAI_OUTPUT` | 1 (true) | Enables raw OpenAI SSE format for streaming. |
| `OPENAI_SERVED_MODEL_NAME_OVERRIDE` | None | Override the model name in responses. |
| `OPENAI_RESPONSE_ROLE` | assistant | Role for responses in chat completions. |
Client libraries
The OpenAI-compatible API works with standard OpenAI client libraries, including the official Python and JavaScript SDKs. In each case, point the client's `base_url` at your Runpod endpoint and authenticate with your Runpod API key, as shown in the Python setup above.
Implementation differences
While the vLLM worker aims for high compatibility, there are some differences from OpenAI's implementation:
- Token counting may differ slightly from OpenAI models due to different tokenizers.
- Streaming format follows OpenAI's Server-Sent Events (SSE) format, but the exact chunking of streaming responses may vary.
- Error responses follow a similar but not identical format to OpenAI's error responses.
- Rate limits follow Runpod's endpoint policies rather than OpenAI's rate limiting structure.
Current limitations
The vLLM worker has a few limitations:
- Function and tool calling APIs are not currently supported.
- Some OpenAI-specific features like moderation endpoints are not available.
- Vision models and multimodal capabilities depend on the underlying model support in vLLM.
Troubleshooting
Common issues and their solutions:

| Issue | Solution |
|---|---|
| "Invalid model" error | Verify your model name matches what you deployed. |
| Authentication error | Check that you're using your Runpod API key, not an OpenAI key. |
| Timeout errors | Increase client timeout settings for large models. |
| Incompatible responses | Set `RAW_OPENAI_OUTPUT=1` in your environment variables. |
| Different response format | Some models may have different output formatting; use a chat template. |