Hugging Face makes it easier for devs to run AI models on third-party clouds
Hugging Face's new Inference Providers feature is designed to make it easier for devs to run AI models using the hardware of their choice.
The feature centers on serverless inference, which lets developers deploy and scale AI models without configuring or managing any of the underlying hardware. In a blog post Tuesday, the company explained that its own focus has shifted to collaboration, storage, and model distribution capabilities. To date, Hugging Face has raised close to $400 million in capital from investors including Salesforce, Google, Amazon, and Nvidia.
Read the full story on TechCrunch.