---
title: "DeepInfra on Hugging Face Inference Providers 🔥"
source_name: "Hugging Face Blog"
original_url: "https://huggingface.co/blog/inference-providers-deepinfra"
canonical_url: "https://www.traeai.com/articles/768e0600-c3ad-42e9-98a0-0437c96b278b"
content_type: "article"
language: "English"
score: 8.5
tags: ["Hugging Face","DeepInfra","AI inference"]
published_at: "2026-04-29T00:00:00+00:00"
created_at: "2026-04-30T02:05:26.722592+00:00"
---

# DeepInfra on Hugging Face Inference Providers 🔥

Canonical URL: https://www.traeai.com/articles/768e0600-c3ad-42e9-98a0-0437c96b278b
Original source: https://huggingface.co/blog/inference-providers-deepinfra

## Summary

Hugging Face announced support for DeepInfra as a new Inference Provider. Users can access a variety of AI capabilities, including conversational and text-generation tasks, directly from model pages.

## Key Takeaways

- DeepInfra is now an Inference Provider on the Hugging Face Hub.
- DeepInfra supports a broad range of model types, such as LLMs and text-to-image models.
- Users can call DeepInfra's services directly by setting their own API key, or route requests through Hugging Face.

## Content


Authors: [Aray Sultanbekova](https://huggingface.co/araikin), [Shang-Pin](https://huggingface.co/shang-pin-deepinfra), [Utemuratov](https://huggingface.co/Pernekhan), [Yessen K](https://huggingface.co/yessenzhar), [Oguz Vuruskaner](https://huggingface.co/ovuruska), [Célina Hanouti](https://huggingface.co/celinah), [Simon Brandeis](https://huggingface.co/sbrandeis), [Lucain Pouget](https://huggingface.co/Wauplin)


![Welcome DeepInfra banner](https://huggingface.co/blog/assets/inference-providers/welcome-deepinfra.jpg)

We're thrilled to share that **DeepInfra** is now a supported Inference Provider on the Hugging Face Hub!

DeepInfra joins our growing ecosystem, enhancing the breadth and capabilities of serverless inference directly on the Hub's model pages. Inference Providers are also seamlessly integrated into our client SDKs (for both JS and Python), making it super easy to use a wide variety of models with your preferred providers.

[DeepInfra](https://deepinfra.com/) is a serverless AI inference platform offering some of the most cost-effective per-token pricing in the industry. With a catalog of over 100 models, DeepInfra makes it easy for developers to integrate a wide range of AI capabilities into their applications with minimal setup.

DeepInfra supports a broad spectrum of model types - from LLMs to text-to-image, text-to-video, embeddings, and more. As part of this initial integration, DeepInfra is launching support for **conversational and text-generation tasks** on Hugging Face, enabling access to popular open-weight LLMs such as [DeepSeek V4](https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro?inference_provider=deepinfra), [Kimi-K2.6](https://huggingface.co/moonshotai/Kimi-K2.6?inference_provider=deepinfra), [GLM-5.1](https://huggingface.co/zai-org/GLM-5.1?inference_provider=deepinfra), and many more. **Support for additional tasks** (text-to-image, text-to-video, embeddings, and more) will roll out soon!

Read more about how to use DeepInfra as an Inference Provider in its dedicated [documentation page](https://huggingface.co/docs/inference-providers/providers/deepinfra).

See the full list of models supported by DeepInfra [here](https://huggingface.co/models?inference_provider=deepinfra&sort=trending).

Follow DeepInfra on Hugging Face: [https://huggingface.co/DeepInfra](https://huggingface.co/DeepInfra).

## How it works

### In the website UI

1. In your user account settings, you can:

*   Set your own API keys for the providers you've signed up with. If no custom key is set, your requests will be routed through HF.
*   Order providers by preference. This applies to the widget and code snippets in the model pages.

![Inference Providers user settings](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/inference-providers/user-setting-v2.png)

2. As mentioned, there are two modes when calling Inference Providers (see the Python sketch after this list):

*   Custom key (calls go directly to the inference provider, using your own API key for that provider)
*   Routed by HF (in that case, you don't need a token from the provider; charges are applied to your HF account rather than to a provider account)

![Inference Providers explainer](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/inference-providers/explainer.png)

3. Model pages showcase third-party inference providers (those compatible with the current model, sorted by user preference)

![Inference Providers model widget](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/inference-providers/model-widget-v2.png)
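
To make the two modes concrete, here is a minimal Python sketch. The routed setup matches the examples later in this post; the direct setup assumes DeepInfra's OpenAI-compatible endpoint (`https://api.deepinfra.com/v1/openai`) and a `DEEPINFRA_API_KEY` environment variable, so check DeepInfra's own docs for the exact values.

```python
import os
from openai import OpenAI

# Mode 1 -- routed by HF: authenticate with your Hugging Face token;
# usage is billed to your HF account at the provider's rates.
routed = OpenAI(
    base_url="https://router.huggingface.co/v1",
    api_key=os.environ["HF_TOKEN"],
)

# Mode 2 -- custom key: call DeepInfra directly with your own key.
# Base URL and env var name here are assumptions; see DeepInfra's docs.
direct = OpenAI(
    base_url="https://api.deepinfra.com/v1/openai",
    api_key=os.environ["DEEPINFRA_API_KEY"],
)

# Both clients accept the same chat.completions.create() calls; when
# going through the router, suffix the model ID with ":deepinfra".
```

Only the billing path differs between the two clients; the request and response shapes are the same.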

### From the client SDKs

DeepInfra is available through the Hugging Face SDKs - `huggingface_hub` (>= 1.11.2) for Python and `@huggingface/inference` for JavaScript.

The following examples show how to use [DeepSeek V4 Pro](https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro) through DeepInfra. Use a [Hugging Face token](https://huggingface.co/settings/tokens) to authenticate - the request will be routed to DeepInfra automatically.

#### From your favorite Agent Harness

Hugging Face Inference Providers are integrated in most Agent Harnesses - including Pi, OpenCode, Hermes Agents, OpenClaw, and more. This means you can plug DeepInfra-hosted models straight into your favorite tools without any extra glue code. Browse the full list of integrations [here](https://huggingface.co/docs/inference-providers/en/integrations/index).

#### From Python

```python
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://router.huggingface.co/v1",
    api_key=os.environ["HF_TOKEN"],
)

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V4-Pro:deepinfra",
    messages=[
        {
            "role": "user",
            "content": "Write a Python function that returns the nth Fibonacci number using memoization."
        }
    ],
)

print(completion.choices[0].message)
```
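
The same call can also be made with the `huggingface_hub` SDK mentioned above. Here is a minimal sketch using its `InferenceClient`; the `provider="deepinfra"` argument pins the provider, and argument names follow the current `huggingface_hub` API, so double-check against the version you have installed.

```python
import os
from huggingface_hub import InferenceClient

# Pin the provider explicitly; the client authenticates with your HF token
# and routes the request to DeepInfra.
client = InferenceClient(
    provider="deepinfra",
    api_key=os.environ["HF_TOKEN"],
)

# Note: with the provider pinned, the model ID takes no ":deepinfra" suffix.
completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V4-Pro",
    messages=[
        {
            "role": "user",
            "content": "Write a Python function that returns the nth Fibonacci number using memoization.",
        }
    ],
)

print(completion.choices[0].message)
```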

#### From JS

```js
import { OpenAI } from "openai";

const client = new OpenAI({
    baseURL: "https://router.huggingface.co/v1",
    apiKey: process.env.HF_TOKEN,
});

const chatCompletion = await client.chat.completions.create({
    model: "deepseek-ai/DeepSeek-V4-Pro:deepinfra",
    messages: [
        {
            role: "user",
            content: "Write a Python function that returns the nth Fibonacci number using memoization.",
        },
    ],
});

console.log(chatCompletion.choices[0].message);
```
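
Because the router speaks the standard OpenAI protocol, features like streaming should work as usual. A hedged Python sketch with the same routed setup as above, assuming the router honors the standard `stream=True` flag:

```python
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://router.huggingface.co/v1",
    api_key=os.environ["HF_TOKEN"],
)

# Stream tokens as they are generated instead of waiting for the full reply.
stream = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V4-Pro:deepinfra",
    messages=[{"role": "user", "content": "Explain memoization in one paragraph."}],
    stream=True,
)

for chunk in stream:
    # Each chunk carries a delta; content can be None on some chunks.
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```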

## Billing

For direct requests, i.e. when you use your own key from an inference provider, you are billed by that provider. For instance, if you use a DeepInfra API key, you're billed on your DeepInfra account.

For routed requests, i.e. when you authenticate via the Hugging Face Hub, you'll only pay the standard provider API rates. There's no additional markup from us; we just pass through the provider costs directly. (In the future, we may establish revenue-sharing agreements with our provider partners.)

**Important Note** ‼️ PRO users get $2 worth of Inference credits every month. You can use them across providers. 🔥

Subscribe to the [Hugging Face PRO plan](https://hf.co/subscribe/pro) to get access to Inference credits, ZeroGPU, Spaces Dev Mode, 20x higher limits, and more.

We also provide free inference with a small quota for our signed-in free users, but please upgrade to PRO if you can!

## Feedback and next steps

We would love to get your feedback! Share your thoughts and/or comments here: [https://huggingface.co/spaces/huggingface/HuggingDiscussions/discussions/49](https://huggingface.co/spaces/huggingface/HuggingDiscussions/discussions/49)
