
Fivefold Slower Compared to Go? Optimizing Rust's Protobuf Decoding Performance


While optimizing the write performance of GreptimeDB v0.7, we found that parsing Protobuf data for the Prometheus protocol took nearly five times as long as in comparable products implemented in Go. This prompted us to look at reducing the overhead of the protocol layer. This article introduces several approaches the GreptimeDB team tried in order to reduce the overhead of Protobuf deserialization.
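For context, here is a minimal sketch of the kind of decoding that sits on this hot path: a PROST-generated message parsed from a raw request body. The `Sample`/`TimeSeries`/`WriteRequest` definitions below are simplified stand-ins for the real Prometheus remote write types, not GreptimeDB's actual code.

```rust
use prost::Message;

// Hypothetical, simplified stand-ins for the prost-generated Prometheus
// remote write messages (the real ones carry labels and more fields).
#[derive(Clone, PartialEq, Message)]
pub struct Sample {
    #[prost(double, tag = "1")]
    pub value: f64,
    #[prost(int64, tag = "2")]
    pub timestamp: i64,
}

#[derive(Clone, PartialEq, Message)]
pub struct TimeSeries {
    #[prost(message, repeated, tag = "2")]
    pub samples: Vec<Sample>,
}

#[derive(Clone, PartialEq, Message)]
pub struct WriteRequest {
    #[prost(message, repeated, tag = "1")]
    pub timeseries: Vec<TimeSeries>,
}

fn main() -> Result<(), prost::DecodeError> {
    // Build and encode a request, then decode it the straightforward way.
    // This per-request decode is the cost being discussed in the article.
    let req = WriteRequest {
        timeseries: vec![TimeSeries {
            samples: vec![Sample { value: 1.0, timestamp: 1_700_000_000_000 }],
        }],
    };
    let body = req.encode_to_vec();
    let decoded = WriteRequest::decode(body.as_slice())?;
    assert_eq!(decoded, req);
    Ok(())
}
```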

In Go, a slice can be reused by resetting its len field to 0 while keeping its capacity; the original elements remain reachable and are not garbage collected, so Go's object reuse mechanism effectively avoids repeated memory allocations. On the Rust side, this article adopted a compromise: the reference counting mechanism of Bytes was bypassed through unsafe code, with the input buffer manually guaranteed to remain valid for the entire lifecycle of the output. Along the way, we also saw the overhead of converting between PROST's BytesAdapter and Buf traits, and the dynamic dispatch cost Bytes introduces to accommodate different underlying data sources.
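A minimal sketch of what such an unsafe bypass can look like, assuming the bytes crate's `Bytes::from_static`: the borrowed slice's lifetime is transmuted to `'static` so the resulting `Bytes` neither copies the data nor maintains a reference count. The helper name and surrounding code are illustrative, not GreptimeDB's actual implementation; the safety comment states exactly the manual guarantee described above.

```rust
use bytes::Bytes;

/// Create a `Bytes` view over `buf` without copying and without
/// reference counting.
///
/// SAFETY: the caller must guarantee that `buf` outlives every `Bytes`
/// (and every message decoded from it) produced by this function,
/// because the 'static lifetime is forged and nothing tracks the real
/// owner of the data.
unsafe fn bytes_from_borrowed(buf: &[u8]) -> Bytes {
    // Pretend the borrowed slice lives forever so that `from_static`
    // accepts it; `from_static` builds a `Bytes` with no allocation
    // and no reference count.
    let static_slice: &'static [u8] = unsafe { std::mem::transmute(buf) };
    Bytes::from_static(static_slice)
}

fn main() {
    // The request body owns the actual data; we keep it alive for as
    // long as anything decoded from `view` is in use.
    let request_body: Vec<u8> = vec![0x0a, 0x03, 0x61, 0x62, 0x63];
    let view = unsafe { bytes_from_borrowed(&request_body) };

    // `view` can now be handed to a PROST decoder as a zero-copy,
    // non-refcounted buffer.
    assert_eq!(&view[..], &request_body[..]);
}
```

The trade-off is that the type system no longer enforces the buffer's lifetime: the caller takes on the guarantee that the reference count would otherwise provide.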
