Avro to Protobuf
Convert an Avro schema to a Protocol Buffers proto3 (.proto) definition — instant, no upload
Input
Output
What does Avro to Protobuf do?
You have an Apache Avro schema (.avsc) and you need a Protocol Buffers proto3 (.proto) definition for the same shape — usually because the data is moving from a Kafka pipeline that speaks Avro into a gRPC service that speaks Protobuf. Paste the schema into the left panel, and the right panel returns a proto3 .proto with one message per Avro record, one enum per Avro enum, and field types mapped sensibly: string stays string, long becomes int64, int becomes int32, arrays become repeated, maps become map<string, V>.
The converter walks the schema tree and emits messages depth-first, so nested types are defined before the parents that reference them. Avro unions like ["null", "Address"] resolve to the non-null branch — Address in this case — because proto3 has no exact equivalent of a nullable union (a singular message field already distinguishes set from unset, and scalar fields fall back to their defaults). The output is well-formed proto3 source you can drop straight into protoc; see the proto3 language guide for the syntax rules.
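The mapping rules above can be sketched in a few lines of JavaScript. This is illustrative only — the function and constant names are assumptions, not the tool's actual source:

```javascript
// Scalar mappings described on this page; float/double/boolean/bytes
// follow the standard Avro-to-Protobuf correspondences.
const SCALAR_MAP = {
  string: "string",
  long: "int64",
  int: "int32",
  float: "float",
  double: "double",
  boolean: "bool",
  bytes: "bytes",
};

function avroToProtoType(avroType) {
  // Nullable union ["null", T] resolves to the non-null branch T.
  if (Array.isArray(avroType)) {
    const nonNull = avroType.filter((t) => t !== "null");
    return avroToProtoType(nonNull[0]);
  }
  if (typeof avroType === "object") {
    if (avroType.type === "array") {
      return `repeated ${avroToProtoType(avroType.items)}`;
    }
    if (avroType.type === "map") {
      // Avro maps are always string-keyed.
      return `map<string, ${avroToProtoType(avroType.values)}>`;
    }
    return avroType.name; // named record/enum: reference by name
  }
  return SCALAR_MAP[avroType] ?? avroType;
}
```

A real converter also has to track which named types it has already emitted, which is where the depth-first walk comes in.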
It runs in your browser. No upload, no schema stored, no telemetry.
How to Use Avro to Protobuf
Three quick steps. The buttons described below are the actual buttons on this page.
Paste, Upload, or Load a Sample
Paste an Avro schema into the left Input panel. Click Upload for a .avsc file, or hit Sample to load the realistic Order schema with a nested OrderItem record, a currency enum, and a nullable Address. Quick example of what minified Avro looks like:
{"type":"record","name":"Order","namespace":"com.example.commerce","fields":[{"name":"orderId","type":"string"},{"name":"totalCents","type":"long"}]}

The schema must be valid JSON in .avsc form, per the Avro 1.11 specification. If you have a binary .avro Object Container File, run avro-tools getschema first to extract the JSON schema, then paste it here.
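For that minified Order schema, the output would look roughly like this — exact whitespace and field-name casing may differ from what the tool actually emits:

```proto
syntax = "proto3";

package com.example.commerce;

message Order {
  string orderId = 1;
  int64 totalCents = 2;
}
```

The namespace becomes the package, and the two fields get sequential tags in source order.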
Read the .proto Output
The right Output panel shows a syntax = "proto3" definition with one message per Avro record and one enum per Avro enum. Field tag numbers start at 1 and follow the field order in the source schema. The package line uses the Avro namespace when present; otherwise it falls back to avro.example.
Copy or Download
Hit Copy to grab the .proto for a gRPC service or a Schema Registry Protobuf entry. Hit Download to save as output.proto. Review the field numbers and any heuristic mappings (logical types, unions) before passing it to protoc — see the FAQ below for the gotchas.
When You'd Actually Use This
Migrating Kafka to gRPC
You have an Avro-based Kafka topic feeding a downstream service that you are now exposing over gRPC. Convert the topic's Avro schema to proto3 here, hand-tune the field tags and any logical types, generate code with protoc, and point the gRPC service at the same data shape.
Cross-Format Schema Documentation
Your team owns the Avro schema as the source of truth, but the consumer team works in Protobuf. Generate the .proto mirror so both sides have a definition in their preferred language — same shape, same field names, just different syntax.
Protobuf Schema Skeleton
You are designing a new Protobuf message and the data shape already exists as an Avro schema in another repo. Generate the skeleton from the Avro source, then iterate on it directly — saves you from typing out 30 fields by hand and getting one of them wrong.
Avro-to-Protobuf Bridges
Building or debugging a piece of middleware that translates Avro records to Protobuf messages on the fly. Having the two schemas side-by-side, generated from the same source, makes the field-mapping logic much easier to verify.
Common Questions
What happens to Avro logical types like decimal or timestamp-millis?
They get mapped to the underlying physical type — timestamp-millis on a long becomes int64, decimal on bytes becomes bytes. If you want Protobuf well-known types like google.protobuf.Timestamp or a custom Decimal message, swap them in by hand. The converter does not import timestamp.proto for you because that imposes opinions on your file layout.
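If you do want the well-known type, the hand edit is small. A sketch, assuming a created_at field that was timestamp-millis on long in the Avro schema (the field name and tag number are illustrative):

```proto
import "google/protobuf/timestamp.proto";

message Order {
  // was: int64 created_at = 5;
  google.protobuf.Timestamp created_at = 5;
}
```

Note that swapping int64 for a message type changes the wire format, so make this edit before any serialized data exists, not after.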
How are Avro unions handled?
Nullable unions (["null", T]) resolve to T. For message types, proto3 field presence covers the null case; for scalars, the null collapses into the type's default value unless you add optional by hand. Non-nullable multi-branch unions (["int", "string"]) pick the first branch — Protobuf oneof would be the better fit, but generating oneof automatically requires field-grouping decisions the converter cannot guess. Tweak the output by hand if you need oneof.
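If you need the union preserved, a hand-written oneof is the usual fix. A sketch for the ["int", "string"] case (the message and field names here are made up):

```proto
message Sample {
  oneof value {
    int32 int_value = 1;
    string string_value = 2;
  }
}
```

Exactly one of the branches can be set at a time, which matches Avro's union semantics for the non-null branches.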
Are field tag numbers stable across runs?
They are sequential by field order in the input — the first field gets = 1, the second = 2, and so on. If you reorder fields in the Avro schema, the numbers shift, which would be a wire-incompatible change in real Protobuf. Once you commit the .proto to a repo, freeze the numbers and stop regenerating from Avro for that file.
Does the output compile with protoc as-is?
Usually yes for straightforward record/enum/array schemas. Edge cases that may need a hand-edit: Avro field names that collide with Protobuf reserved words, deeply recursive named types, fixed-size types (fixed) which become bytes without the size constraint, and unions that need to be expressed as oneof. Always run protoc --proto_path=. file.proto as a smoke test.
Is the schema sent to a server?
No. Conversion runs entirely in your browser using JSON.parse() and a recursive walker. Nothing is uploaded, nothing is logged. The page is fully usable offline once it has loaded.
What about Avro maps and arrays?
Avro {"type": "array", "items": T} becomes repeated T field_name = N;. Avro {"type": "map", "values": V} becomes map<string, V> field_name = N; — Avro maps are always string-keyed, which lines up with the most common Protobuf map shape. See the maps section of the proto3 guide for the wire-format implications.
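Concretely, assuming hypothetical fields tags (an array of string) and attributes (a map of string to long), with illustrative tag numbers:

```proto
repeated string tags = 3;
map<string, int64> attributes = 4;
```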
Other Avro and Protobuf Tools
Converting to Protobuf is one piece. These tools handle the rest of the schema lifecycle: