Hugging Face makes it easier for devs to run AI models on third-party clouds


Hugging Face's new Inference Providers feature is designed to make it easier for devs to run AI models using the hardware of their choice.

In a blog post Tuesday, the company explained that its own focus has shifted to collaboration, storage, and model distribution capabilities. Serverless inference lets developers deploy and scale AI models without configuring or managing any of the underlying hardware. To date, Hugging Face has raised close to $400 million in capital from investors including Salesforce, Google, Amazon, and Nvidia.

Read the full story on TechCrunch.

Read more on:

devs

AI models

Hugging Face

Related news:

Alibaba’s Qwen team releases AI models that can control PCs and phones

Day of the Devs will return to SF in March for GDC 2025 event

Meta's XR plans reportedly include seeding Orion to devs and Oakley smart glasses