Runpod Public Endpoints provide instant access to state-of-the-art AI models through simple API calls, with an API playground available through the Runpod Hub.
Available models
For a list of available models and model-specific parameters, see the Public Endpoint model reference.
Public Endpoint playground
The Public Endpoint playground provides a streamlined way to discover and experiment with AI models.
The playground offers:
- Interactive parameter adjustment: Modify prompts, dimensions, and model settings in real-time.
- Instant preview: Generate images directly in the browser.
- Cost estimation: See estimated costs before running generation.
- API code generation: Create working code examples for your applications.
Access the playground
- Navigate to the Runpod Hub in the console.
- Select the Public Endpoints section.
- Browse the available models and select one that fits your needs.
Test a model
To test a model in the playground:
- Select a model from the Runpod Hub.
- Under Input, enter a prompt in the text box.
- Enter a negative prompt if needed. Negative prompts tell the model what to exclude from the output.
- Under Additional settings, you can adjust the seed, aspect ratio, number of inference steps, guidance scale, and output format.
- Click Run to start generating.
Under Result, you can use the dropdown menu to show either a preview of the output, or the raw JSON.
Create a code example
After inputting parameters using the playground, you can automatically generate an API request to use in your application.
- Select the API tab in the UI (above the Input field).
- Using the dropdown menu, select the programming language (Python, JavaScript, cURL, etc.) and the POST operation you want to use (/run or /runsync).
- Click the Copy icon to copy the code to your clipboard.
Make API requests to Public Endpoints
You can make API requests to Public Endpoints using any HTTP client. The endpoint URL is specific to the model you want to use.
All requests require authentication using your Runpod API key, passed in the Authorization
header. You can find and create API keys in the Runpod console under Settings > API Keys.
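For example, rather than hard-coding the key, you can read it from an environment variable when building request headers (a minimal sketch; the RUNPOD_API_KEY variable name is just a convention):
import os

# Read the API key from the environment instead of hard-coding it in source.
api_key = os.environ["RUNPOD_API_KEY"]

headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}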
To learn more about the difference between synchronous and asynchronous requests, see Endpoint operations.
Synchronous request example
Here’s an example of a synchronous request to Flux Dev using the /runsync endpoint. Because /runsync waits for the job to finish, the generated output is returned directly in the response body:
curl -X POST "https://api.runpod.ai/v2/black-forest-labs-flux-1-dev/runsync" \
  -H "Authorization: Bearer RUNPOD_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "input": {
      "prompt": "A serene mountain landscape at sunset",
      "width": 1024,
      "height": 1024,
      "num_inference_steps": 20,
      "guidance": 7.5
    }
  }'
Asynchronous request example
Here’s an example of an asynchronous request to Flux Dev using the /run
endpoint:
curl -X POST "https://api.runpod.ai/v2/black-forest-labs-flux-1-dev/run" \
  -H "Authorization: Bearer RUNPOD_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "input": {
      "prompt": "A futuristic cityscape with flying cars",
      "width": 1024,
      "height": 1024,
      "num_inference_steps": 50,
      "guidance": 8.0
    }
  }'
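Unlike /runsync, the /run endpoint responds as soon as the job is queued. The initial response contains the job ID and its queue status, roughly like this (illustrative values):
{
  "id": "8f2d6c1a-0b4e-4c9d-9a7f-example",
  "status": "IN_QUEUE"
}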
You can check the status and retrieve results using the /status
endpoint, replacing {job-id}
with the job ID returned from the /run
request:
curl -X GET "https://api.runpod.ai/v2/black-forest-labs-flux-1-dev/status/{job-id}" \
-H "Authorization: Bearer RUNPOD_API_KEY"
All endpoints return a consistent JSON response format:
{
  "delayTime": 17,
  "executionTime": 3986,
  "id": "sync-0965434e-ff63-4a1c-a9f9-5b705f66e176-u2",
  "output": {
    "cost": 0.02097152,
    "image_url": "https://image.runpod.ai/6/6/mCwUZlep6S/453ad7b7-67c6-43a1-8348-3ad3428ef97a.png"
  },
  "status": "COMPLETED",
  "workerId": "oqk7ao1uomckye"
}
Python example
Here is an example Python API request to Flux Dev using the /run
endpoint:
import requests

headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer RUNPOD_API_KEY",
}

data = {
    "input": {
        "prompt": "A serene mountain landscape at sunset",
        "image_format": "png",
        "num_inference_steps": 25,
        "guidance": 7,
        "seed": 50,
        "width": 1024,
        "height": 1024,
    }
}

response = requests.post(
    "https://api.runpod.ai/v2/black-forest-labs-flux-1-dev/run",
    headers=headers,
    json=data,
)
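Because /run only queues the job, the response above contains a job ID rather than the finished image. Here is a minimal polling sketch that continues from that request (the IN_QUEUE and IN_PROGRESS values are assumptions about in-flight job states; only COMPLETED appears in the response example above):
import time

# The /run response contains a job ID; poll /status until the job finishes.
job_id = response.json()["id"]
status_url = f"https://api.runpod.ai/v2/black-forest-labs-flux-1-dev/status/{job_id}"

while True:
    status = requests.get(status_url, headers=headers).json()
    # Keep polling while the job is still queued or running.
    if status["status"] not in ("IN_QUEUE", "IN_PROGRESS"):
        break
    time.sleep(2)  # brief pause between polls

print(status.get("output"))  # includes the image URL once status is COMPLETED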
You can generate Public Endpoints API requests for Python and other programming languages using the Public Endpoints playground.
JavaScript/TypeScript integration with Vercel AI SDK
For JavaScript and TypeScript projects, you can use the @runpod/ai-sdk-provider
package to integrate Runpod’s Public Endpoints with the Vercel AI SDK.
Run this command to install the package:
npm install @runpod/ai-sdk-provider ai
To call a Public Endpoint for text generation:
import { runpod } from '@runpod/ai-sdk-provider';
import { generateText } from 'ai';

const { text } = await generateText({
  model: runpod('qwen3-32b-awq'),
  prompt: 'Write a Python function that sorts a list:',
});
For image generation:
import { runpod } from '@runpod/ai-sdk-provider';
import { experimental_generateImage as generateImage } from 'ai';

const { image } = await generateImage({
  model: runpod.imageModel('flux/flux-dev'),
  prompt: 'A serene mountain landscape at sunset',
  aspectRatio: '4:3',
});
For comprehensive documentation and examples, see the Node package documentation.
Pricing
Public Endpoints use transparent, usage-based pricing. For example:
Model | Price | Billing unit |
---|---|---|
Flux Dev | $0.02 | Per megapixel |
Flux Schnell | $0.0024 | Per megapixel |
WAN 2.5 | $0.5 | Per 5 seconds of video |
Whisper V3 Large | $0.05 | Per 1000 characters of audio transcribed |
Qwen3 32B AWQ | $0.01 | Per 1000 tokens of text generated |
Pricing is calculated based on the actual output resolution. You will not be charged for failed generations.
Here are some pricing examples that demonstrate how you can estimate costs for image generation:
- 512×512 image (≈0.26 megapixels)
  - Flux Dev: (512 * 512 / 1,000,000) * $0.02 = $0.00524288
  - Flux Schnell: (512 * 512 / 1,000,000) * $0.0024 = $0.0006291456
- 1024×1024 image (≈1.05 megapixels)
  - Flux Dev: (1024 * 1024 / 1,000,000) * $0.02 = $0.02097152
  - Flux Schnell: (1024 * 1024 / 1,000,000) * $0.0024 = $0.0025165824
Runpod’s billing system rounds up after the first 10 decimal places.
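The same arithmetic as a small Python helper, for reference (the function is illustrative, not part of the API):
def estimate_image_cost(width, height, price_per_megapixel):
    """Estimate the cost of one generated image, billed on output resolution."""
    return (width * height) / 1_000_000 * price_per_megapixel

print(estimate_image_cost(1024, 1024, 0.02))    # ≈ 0.02097152 (Flux Dev)
print(estimate_image_cost(512, 512, 0.0024))    # ≈ 0.0006291456 (Flux Schnell)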
For complete pricing information for each model, see the Public Endpoint model reference page.
Best practices
Following these best practices will help you achieve better results and optimize performance when working with Public Endpoints.
Prompt engineering
For prompt engineering, be specific with detailed prompts as they generally produce better results. Include style modifiers such as art styles, camera angles, or lighting conditions. For Flux Dev, use negative prompts to exclude unwanted elements from your images.
A good prompt example would be: “A professional portrait of a woman in business attire, studio lighting, high quality, detailed, corporate headshot style.”
For performance optimization, choose the right model for your needs. Use Flux Schnell when you need speed, and Flux Dev when you need higher quality. Standard dimensions like 1024×1024 render fastest, so stick to these unless you need specific aspect ratios. For multiple images, use asynchronous endpoints to batch your requests. Consider caching results by storing generated images to avoid regenerating identical prompts.
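For example, you can queue a batch of generations through /run and collect the job IDs for later polling. Here is a sketch that reuses the requests import and headers from the Python example above (the prompts and dimensions are placeholders):
# Queue several generations without waiting for each one to finish.
prompts = [
    "A serene mountain landscape at sunset",
    "A futuristic cityscape with flying cars",
]

job_ids = []
for prompt in prompts:
    resp = requests.post(
        "https://api.runpod.ai/v2/black-forest-labs-flux-1-dev/run",
        headers=headers,
        json={"input": {"prompt": prompt, "width": 1024, "height": 1024}},
    )
    job_ids.append(resp.json()["id"])  # poll each job via /status as shown earlier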