Input (.proto schema)

Output (validation report)

What this tool does

You hit save on a Protocol Buffers file, run protoc or buf, and the build dies with a cryptic line/column error. This validator parses your .proto with the same rules a strict compiler enforces and tells you what is wrong, in plain language, before you commit and the CI pipeline catches it.

Beyond "did it parse", the tool runs a lint pass with checks the spec actually requires: field numbers must be in 1..536870911, the range 19000..19999 is reserved for the Protocol Buffers implementation itself, every field number in a message must be unique, and field names must not repeat. These are the violations that produce real-world build failures, and the validator surfaces all of them in one shot rather than one at a time, as the compiler does.
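For a concrete picture, here is a schema that trips every lint check at once (the message and field names are purely illustrative):

```proto
syntax = "proto3";

message Order {
  int32 id = 1;
  string customer = 19500;  // flagged: 19000..19999 is reserved
  string note = 1;          // flagged: field number 1 already used by `id`
  bool id = 600000000;      // flagged: name `id` repeats, and 600000000 > 536870911
}
```

The compiler would stop at the first of these; the validator reports all four with their `Order.field` paths.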

Everything runs in your browser — your .proto, your message names, your package paths never leave your machine. The parser handles syntax/package/import directives, line and block comments, nested message and enum blocks, oneof, map<K, V>, repeated, optional, services (skipped), and field options. The supported syntax tracks the official proto3 language guide.

How to use it

Three steps, runs as you type. The output editor updates ~300 ms after you stop editing.

1

Paste your .proto schema

Drop the schema into the left editor — single file, however long. syntax = "proto3"; at the top is fine but optional. The parser handles import statements (skipped — cross-file resolution is out of scope here, paste imported messages inline if you need them validated). All comments are stripped before parsing.
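As a starting point, a minimal schema like this (names illustrative) validates cleanly as pasted:

```proto
syntax = "proto3";

package shop.v1;

import "google/protobuf/timestamp.proto";  // recognised, then skipped

message Order {
  string id = 1;
  repeated string items = 2;
  map<string, int64> totals = 3;
}
```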

If your editor adds smart-quote characters when pasting, the validator may flag a tokenization error. Strip them or paste from a plain-text source.

2

Read the report

On the right: a green check if the schema is clean, a list of issues if not. Each issue points at the exact message and field, so you can fix them in your editor without grepping. The report also summarises message count, enum count, and total fields.

3

Fix and re-paste

Apply the fix in your editor, paste the updated schema. The output revalidates in under a second. No reload, no rebuild, no waiting for CI to come back red. When the schema is clean, copy the report into a PR comment if you want a record of the validation.

When this actually saves time

Catching errors before pushing to CI

Your team runs buf lint in CI. Validating locally first means you do not push, wait, see red, fix, and push again — the whole cycle collapses to a single browser tab.

Reviewing a Protobuf PR from a colleague

You are reviewing a teammate's schema change but do not have the protoc toolchain set up locally. Paste the new .proto here, see if it is structurally clean, and leave a focused review instead of "looks fine, ship it".

Migrating from proto2 to proto3

Old schemas often use required (gone in proto3) or have field numbers that look fine until you check them. The validator flags duplicates and out-of-range numbers in one pass, which is faster than reading 800 lines of .proto by hand.
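A typical mid-migration schema looks something like this (names illustrative); the validator flags the duplicate number, while `required` passes through here and is left for protoc to reject under proto3:

```proto
syntax = "proto2";

message Account {
  required string id = 1;     // accepted by this tool; protoc rejects `required` in proto3
  optional string email = 2;
  optional string backup = 2; // flagged: duplicate field number 2
}
```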

Validating a generated .proto from a code-gen tool

Generators (e.g. JSON Schema → Protobuf, OpenAPI → Protobuf) sometimes produce schemas with edge-case errors — duplicate field numbers, reserved-range hits. Pipe the output through the validator before you trust it.

Common questions

Is my .proto schema sent to a server?

No. The parser and the lint checks run entirely in your browser as JavaScript. Open DevTools and watch the Network tab while you paste — zero requests. Useful for schemas that include internal type names, package paths, or anything you would not want to ship to a third-party validator.

What checks does the lint pass actually run?

Field numbers must be in 1..536870911 (that is 2^29 − 1, the largest field number the wire format can encode), excluding the reserved 19000..19999 range; every field number in a message must be unique; and every field name in a message must be unique. Anything that fails one of these is reported with the exact message.field path. The parse step also fails on missing semicolons, mismatched braces, unexpected tokens, etc.

Does it validate against the proto3 spec or proto2?

It accepts both syntaxes (proto3 default-zero, proto2 required/optional qualifiers, etc.). The lint checks are spec-defined and apply to both. Stricter checks, such as "no required in proto3", are not enforced here; protoc itself will catch those.

Why is my field number 19000 flagged?

The range 19000..19999 is reserved for internal Protocol Buffers implementation use. If you assign a field number in that range, protoc will refuse the schema. The validator catches it early, so you find out from this tool rather than from a confusing build error.

Does it follow imports?

No. import statements are recognised and skipped — cross-file message types resolve to "unknown" and are not validated. If you need to validate a message that depends on imported types, paste the imported messages into the same input.
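For example, if your message depends on a type defined in another file (file and type names here are illustrative), paste the dependency inline so it gets validated too:

```proto
syntax = "proto3";

// Normally `import "address.proto";` — pasted inline instead so it is validated.
message Address {
  string street = 1;
  string city = 2;
}

message Order {
  string id = 1;
  Address ship_to = 2;  // resolves because Address is in the same input
}
```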

How big a schema can it handle?

Tens of thousands of lines easily. Validation is local — no upload, no rate limit, no network round-trip. Paste the entire repo if you want; the limit is your browser's memory.

Related tools

If you are working with Protobuf and JSON, these pair well: