
Show HN: We made our own inference engine for Apple Silicon


uzu (trymirai/uzu on GitHub) is a high-performance inference engine for AI models on Apple Silicon.

uzu's key features:

- Simple, high-level API
- Hybrid architecture, where layers can be computed either as GPU kernels or via MPSGraph (a low-level API beneath CoreML with ANE access)
- Unified model configurations, making it easy to add support for new models
- Traceable computations, to verify correctness against the source-of-truth implementation
- Use of unified memory on Apple devices

To run a model, create an inference Session with a specific model and configuration.
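The code sample that originally followed this sentence did not survive extraction. As a purely illustrative sketch, assuming a `Session` type with a path-based constructor, a `SessionConfig`, and a `run` method (none of these signatures are confirmed from uzu's actual API), creating and using a session might look roughly like:

```rust
// Hypothetical sketch only — uzu's real API may differ.
// Assumed names: Session, SessionConfig, run (not taken from the source).
use std::path::PathBuf;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Point the engine at a local model directory (path is illustrative).
    let model_path = PathBuf::from("models/my-model");

    // Create an inference session for that specific model...
    let mut session = Session::new(model_path)?;

    // ...then run inference with a chosen configuration.
    let output = session.run("Tell me a joke", SessionConfig::default())?;
    println!("{output}");
    Ok(())
}
```

Consult the repository's README for the actual crate name, types, and method signatures before relying on this shape.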


