Posted by PaulHoule 7 days ago

Zero-copy protobuf and ConnectRPC for Rust (medium.com)
130 points | 44 comments
ethegwo 3 days ago|
I previously worked at ByteDance, where we've maintained a Rust zero-copy gRPC/Thrift implementation for 4 years: https://github.com/cloudwego/volo. It is based on the Bytes crate (reference-counted bytes, for folks not familiar with the Rust ecosystem). A fun fact: when we measured in our production environment, zero-copy doesn't mean higher performance in lots of scenarios; there are some trade-offs:

1. Zero-copy means bytes are always inlined in the raw message buffer, which means the app must always access them through a reference/pointer

2. You cannot compress the RPC message if you want to fully leverage the advantages of zero serdes/copy

3. Reference counting (RC) itself isn't free
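
For folks who haven't used the Bytes crate, here's a minimal sketch of what points 1 and 3 mean in practice (not Volo's actual code):

    use bytes::Bytes; // bytes = "1"

    fn main() {
        // One heap allocation holds the entire raw message buffer.
        let raw = Bytes::from(vec![0u8; 1024]);

        // Slicing is zero-copy: `field` shares the same allocation and
        // only bumps an atomic reference count (trade-off 3 above).
        let field = raw.slice(16..48);

        // The app must keep accessing the data through this handle
        // (trade-off 1); the buffer lives until every slice is dropped.
        assert_eq!(field.len(), 32);
    }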

nopurpose 3 days ago||
Same thing with io_uring zero-copy in my limited testing: buffer-usage accounting is not free, and copying memory makes things drastically simpler.
stevefan1999 3 days ago||
Speaking of Volo, I'm trying to implement an etcd shim with SurrealKV. I haven't been able to get the OG etcd E2E conformance tests to pass 100% yet, so I'm not releasing it just now.
nopurpose 4 days ago||
True zero-copy is not achievable with Protobuf; you need something like FlatBuffers for that. What is presented here is more like zero allocations.
brancz 4 days ago||
I also find this misleading, and it could be solved so easily by just explaining that varints of course need decoding, and that this presumably happens lazily (I didn't read the code) when fields are read rather than eagerly.
secondcoming 3 days ago||
Is this still true? New versions of protobuf allow codegen of `std::string_view` rather than `const std::string&` (which forces a copy) for `string` and `repeated bytes` fields.

https://protobuf.dev/reference/cpp/string-view/

nopurpose 3 days ago|||
It allows avoiding allocations, but it doesn't allow using the serialised data as the backing memory for an in-language type. Protobuf varints have to be decoded and written out somewhere. They cannot be lazily decoded efficiently either: the order of fields in the serialised message is unspecified, hence the decoder either needs to iterate over the message again and again to find a field on demand, or build a map of offsets, which negates any wins zero-copy strives to achieve.
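
To make that concrete, here is a minimal sketch of the base-128 varint decoding the wire format requires; the u64 exists only as a fresh computation, so it can never alias the input buffer:

    /// Decode a protobuf varint (little-endian base 128) from `buf`.
    /// Each byte carries 7 payload bits; the high bit marks continuation.
    /// Returns the value and the number of bytes consumed.
    fn decode_varint(buf: &[u8]) -> Option<(u64, usize)> {
        let mut value: u64 = 0;
        for (i, &byte) in buf.iter().enumerate().take(10) {
            value |= u64::from(byte & 0x7f) << (7 * i);
            if byte & 0x80 == 0 {
                return Some((value, i + 1));
            }
        }
        None // truncated or over-long encoding
    }

    fn main() {
        // 300 encodes as [0xAC, 0x02]; the decoded value is computed,
        // not referenced from the buffer.
        assert_eq!(decode_varint(&[0xAC, 0x02]), Some((300, 2)));
    }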
eklitzke 3 days ago|||
This is true, but the relative overhead is highly dependent on the structure of one's schema. For example, fixed integer fields don't need to be decoded (including repeated fixed ints), and the main idea of the "zero copy" here is avoiding copies of string and bytes fields. If your protobufs are mostly varints, then yes, they all have to be decoded; if your protobufs contain a lot of string/bytes data, then most of the decoding overhead could be memory copies of that data rather than varint decoding.

In some message schemas, even though this isn't truly zero-copy, it may be close to it in terms of actual overhead and CPU time; in other schemas it doesn't help at all.

nly 3 days ago|||
The win could be decoding only the fields you actually care about, rather than all fields.

It's the same for any other high-performance decoding of TLV formats (FIX in finance, for instance).

jeffbee 3 days ago|||
Those field accessors take and return string_view, but they still copy. The official C++ library always owns the data internally and never aliases, except in one niche use case: the field type is Cord, the input is large and meets some other criteria, and the caller has used kParseWithAliasing, which is undocumented.

To a very close approximation, you can say that the official protobuf C++ library always copies and owns strings.

secondcoming 3 days ago||
Well, that is very disappointing news.

The decoder makes a copy even though it's returning a string_view? What's the point, then?

I can understand encoders having to make copies, but not a decoder.

akshayshah 3 days ago||
This is very cool! I'm most interested in the protobuf runtime - Rust has historically used Prost, which doesn't pass the protobuf conformance test suite and isn't Google-maintained. Google's priority internally is C++ interop, so they use unsafe for protobuf - which the community is understandably not excited about.

(For full disclosure, I started the ConnectRPC project - so of course I’m excited about that part of the announcement too.)

willvarfar 4 days ago||
Exciting!

I have been on a similar odyssey, making a 'zero copy' Java library that supports protobuf, Parquet, Thrift (compact), and (schema'd) JSON. It does allocate a long[] and break out the structure for O(1) access, but it doesn't create a big clump of object wrappers and strings and things; internally it just references a big pool buffer or the original byte[].
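
For readers curious what that looks like, a minimal Rust sketch of the offset-table idea (hypothetical types, not the actual library): scan once to record where each field lives, then serve fields as slices of the original buffer.

    /// Hypothetical lazy record: a single structural scan fills a flat
    /// offset table, and accessors hand out slices of the original
    /// buffer instead of allocating wrapper objects or strings.
    struct LazyRecord<'a> {
        buf: &'a [u8],
        /// offsets[field_id] = (start, len) of that field's payload,
        /// filled by one pass over the message (scan omitted here).
        offsets: Vec<(usize, usize)>,
    }

    impl<'a> LazyRecord<'a> {
        /// O(1) field access; no per-field allocation, no copy.
        fn field(&self, id: usize) -> &'a [u8] {
            let (start, len) = self.offsets[id];
            &self.buf[start..start + len]
        }
    }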

The speed demons use tail calls in Rust and C++ to eat protobuf at 2+ GB/sec: https://blog.reverberate.org/2021/04/21/musttail-efficient-i... In Java I'm super pleased to be getting 4 cycles per touched byte and 500 MB/sec.

Currently looking at how to merge a fast footer parser like this into the Apache Parquet Java project.

arianvanp 4 days ago||
I've been running into _a lot_ of issues with Hyper/Tonic - like literal H2 spec violations. Try hosting a Tonic server behind nginx or ALB: it will literally just not work, as it can't handle GOAWAY retries in an H2 spec-compliant way.

If this fixes that I might consider switching.

However, Google is also working on a new grpc-rust implementation, and I have faith in them getting it right, so I'm holding tight a little bit longer.

gobdovan 3 days ago||
Speaking of protocols in this vicinity, I've been noticing a missing piece in OSS around transport as well. In Python, you often need incompatible dependency sets in one app, and the usual choices are either ad-hoc subprocess RPC that gets messy over time, or HTTP/containers that are overkill and force you to change your deployment strategy.

I ended up building a protocol for my own use around a very strict subprocess boundary for Python (initially at least; the protocol is meant to be universal). It has an explicit payload shape and timeout and error semantics. I already went a little too far beyond my use case with deterministic canonicalization for some common-pitfall data types (I think pickle users would understand, though). It still needs some documentation polish, but if anyone would actually use it, I can document it properly and publish it.

secondcoming 4 days ago||
Google really dropped the ball with protobuf by taking so long to make it zero-copy. There are third-party implementations popping up now, and a real risk of future wire-level incompatibilities across languages.
jeffbee 4 days ago|
"zero copy" in this context just means that the contents of the input buffer are aliased to string fields in the decoded representation. This is a language-level feature and has nothing to do with the wire format.
mgaunard 3 days ago||
It's 2026 and I'm still defining my own messaging and wire protocols.

Plain C structs that fit in a UDP datagram and that you can reinterpret_cast from are still best. You can still provide schemas and UUIDs for that, and dynamically transcode to JSON or whatever.
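
The Rust equivalent of that reinterpret_cast, as a minimal sketch (the message layout is hypothetical; a real version would also check the schema ID before trusting the bytes):

    use std::ptr;

    /// A fixed-layout message that fits in one UDP datagram.
    /// #[repr(C)] pins field order and layout, mirroring a plain C struct.
    #[repr(C)]
    #[derive(Debug, Clone, Copy)]
    struct PriceUpdate {
        schema_id: u32, // stands in for the schema UUID mentioned above
        instrument: u32,
        price: i64,
        qty: u32,
    }

    /// "Cast" a received datagram back into the struct. read_unaligned
    /// copies the fixed-size struct, which sidesteps alignment UB; a
    /// strict zero-copy version would check alignment and cast a pointer.
    fn parse(datagram: &[u8]) -> Option<PriceUpdate> {
        if datagram.len() < std::mem::size_of::<PriceUpdate>() {
            return None;
        }
        // Safety: length checked above, and every bit pattern is a
        // valid value for these field types.
        Some(unsafe { ptr::read_unaligned(datagram.as_ptr() as *const PriceUpdate) })
    }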

bluGill 3 days ago||
Until you have to work with both big- and little-endian systems. There is other weirdness in how different computers represent things as well: UTF-8 vs. UTF-16 strings (or other code pages), and not all floats are IEEE 754. Still, when you can ignore all those issues, what you did is really easy and often works.
codedokode 3 days ago||
I disagree. Big endian is long dead and not worth worrying about, and code pages too. What is more important is dealing with schema changes, when you add new fields to requests and responses.
bluGill 2 days ago||
There are niches where those matter.

But yes, schema changes are what's most likely to get you today.

pjc50 3 days ago|||
Provided that:

    - you agree never to care about endianness (can probably get away with this now)

    - you don't want to represent anything complicated or variable-length, including strings
codedokode 3 days ago||
You can have strings by using relative pointers ("string starts 123 bytes before this").
mgaunard 1 day ago||
You can also just use an array that sets a max capacity, and either a null terminator or a separate size field.

In practice you probably want to have both, and choose what's most practical based on the message.
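
A minimal Rust sketch of the fixed-capacity variant (the layout is hypothetical): the string lives inline, so the containing message remains a single flat, fixed-size block.

    /// Inline string field: fixed capacity plus an explicit length.
    #[repr(C)]
    #[derive(Clone, Copy)]
    struct Symbol {
        len: u8,
        data: [u8; 15], // max capacity, chosen per message
    }

    impl Symbol {
        fn new(s: &str) -> Option<Self> {
            let bytes = s.as_bytes();
            if bytes.len() > 15 {
                return None; // doesn't fit the fixed capacity
            }
            let mut data = [0u8; 15];
            data[..bytes.len()].copy_from_slice(bytes);
            Some(Symbol { len: bytes.len() as u8, data })
        }

        fn as_str(&self) -> &str {
            // A real decoder would reject invalid UTF-8 instead of
            // silently returning an empty string.
            std::str::from_utf8(&self.data[..self.len as usize]).unwrap_or("")
        }
    }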

benterix 3 days ago|||
If you decide to use UDP, do you ignore transmission errors or write the handling layer on your own?
mgaunard 3 days ago||
I handle it in different ways by topic.

For topics which are sending the state of something, a gap naturally self-recovers so long as you keep sending the state even if it doesn't change.

For message buses that need to be incremental, you need to have a separate snapshot system to recover state. That's usually pretty rare outside of things like order books (I work in low-latency trading).

For request/response, I find it's better to tell the requester their request was not received than to transparently re-send it, since by the time you re-send it, it might already be stale. So what I do at the protocol level is just have ack logic, but no retransmit. It's also datagram-oriented rather than byte-oriented, so overall it gives much nicer guarantees than TCP (so long as all your messages fit in one UDP payload).

codedokode 3 days ago||
What you use is perfect for short-range communication (an application and a child process talking over shared memory), but not good for long-range communication (over the Internet), because you can have an old client talking to a new version of a server, so you will have to add version numbers and write code to parse outdated formats. Protobuf, by contrast, has compatibility built in, and you do not need to write anything to support outdated clients. Also, protobuf uses tricks like varints to compress data and use less network traffic. So it is clearly made for long-range communication, which you probably do not have, and you are sending seven zero bytes for every small number.

TL;DR: protobuf has version compatibility and compact number encoding.

mgaunard 1 day ago||
I already said you can use UUIDs and schemas, and even do dynamic conversion between mismatched schemas.

Using plain C structs doesn't prevent any of this.

codedokode 1 day ago||
It requires extra effort to write a conversion algorithm for each older version of the data structure.
codedokode 3 days ago||
I wanted to remind myself what protobuf is, but when I opened the docs I didn't see the binary structure - the most important part - only some code examples, which matter less. If you are making a serialization format, please begin the docs with wire-format diagrams.
nu11ptr 3 days ago|
I'd like to use this, but I don't want to refactor all my services when they change the request/response types. I'm interested to know the timing of 1.x. It seems to be moving pretty fast atm - hopefully that momentum keeps going.
codedokode 3 days ago|
As I understand it, protobuf has compatibility built in (it stores field ids), so a new service can read a request from an older client, and vice versa, so you do not need to refactor anything. But it is made for long-range communication and is inefficient for inter-process or inter-thread messaging.
sa46 3 days ago||
Presumably, OP is referring to the generated Rust types, which depend on the specific protobuf framework.

I had the same issue when looking to adopt ConnectRPC for Go, which uses a custom wrapper type to model requests.
