Add a new type of Serializer for supporting zero allocation scenarios #2177
Right now `ISerializer` and `ISerializerAsync` are forced to allocate a new array for every message that is sent. For a high-throughput application we'd like to avoid allocating large amounts of memory. The API exposed by librdkafka supports buffer reuse, since it accepts a length and offset alongside each byte array, but this is not exposed by the serialization framework in the Kafka dotnet client.
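To make the idea concrete, here is a hypothetical sketch of what a buffer-reusing serializer could look like. The interface name and signature below are illustrative assumptions, not the actual API proposed in this PR:

```csharp
using System;

// Illustrative sketch only: a serializer that writes into a pooled,
// caller-supplied buffer and reports the slice it used, so no new
// byte[] is allocated per message. Names here are assumptions.
public interface IBufferSerializer<T>
{
    // Serialize `data` into `buffer` starting at `offset`, and return
    // the segment actually written. The offset/length pair maps
    // naturally onto what librdkafka accepts alongside the byte array.
    ArraySegment<byte> Serialize(T data, byte[] buffer, int offset);
}
```

The caller would own the buffer (e.g. rented from an array pool) and reuse it across messages, which is where the allocation savings come from.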
The simplest approach here would be to allow `ArraySegment<byte>` to be returned by the serializer rather than `byte[]`; however, changing all serializers to return `ArraySegment<byte>` would be a large breaking change. So instead I decided to implement this as a new type of serializer.

Please give me some feedback and thoughts on this PR.
Thanks