
ONNX Runtime and CoreML May Silently Convert Your Model to FP16 (And How to Stop It)


Running an ONNX model in ONNX Runtime (ORT) with the CoreMLExecutionProvider may silently change the predictions your model makes: you may observe different outputs than when running the same model with PyTorch on MPS or with ONNX Runtime on CPU. This happens because the default arguments ORT uses when converting your model to CoreML cast the model to FP16.
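To see why an FP16 cast can change predictions rather than just slightly perturb them, consider that FP16 keeps only about 10 bits of mantissa (roughly 3 decimal digits). A minimal NumPy sketch of the round-trip an FP16 execution path imposes (illustration only; the actual cast happens inside the CoreML conversion, not in your code):

```python
import numpy as np

# Simulate the effect of a silent FP32 -> FP16 cast on activations.
rng = np.random.default_rng(0)
logits_fp32 = rng.normal(size=8).astype(np.float32)

# Round-trip through half precision, as an FP16 execution path would.
logits_fp16 = logits_fp32.astype(np.float16).astype(np.float32)

# FP16 has ~3 decimal digits of precision, so values drift.
drift = np.abs(logits_fp32 - logits_fp16)
print("max drift:", drift.max())

# Two logits that are distinct in FP32 can collapse to the same FP16
# value, flipping the argmax -- i.e. changing the predicted class.
a = np.array([0.50001, 0.50004], dtype=np.float32)
b = a.astype(np.float16)
print("fp32 argmax:", a.argmax(), "fp16 argmax:", b.argmax())
```

The same collapse explains why a model can look fine on CPU (FP32) yet disagree with itself under the CoreMLExecutionProvider. A practical check is to run the same inputs through a session pinned to `CPUExecutionProvider` and one using `CoreMLExecutionProvider`, and compare the outputs with `np.allclose` at a tolerance your application can accept.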
