Every team I've worked on eventually invents their own API response format. It seems harmless at first —
a little wrapper object here, a custom error shape there — and then six months later you're writing a fourth
version of your error-parsing middleware and arguing in code review about whether data.user or
data.result.user is the "right" path. There's no universal standard that solves all of this, but
there are patterns that hold up in production and anti-patterns that will absolutely come back to bite you.
Here's what I'd actually put in a design doc.
Consistent Success Responses
The first question every team debates: should every response be wrapped in an envelope like
{"status": "ok", "data": {...}}? The honest answer is — probably not by default. Envelopes
made more sense in the early 2000s when HTTP status codes weren't always reliable across proxies and
mobile networks. Today, a flat response that lets the resource speak for itself is almost always cleaner.
Reserve the envelope for endpoints that genuinely return mixed payloads, like a bulk operation that
partially succeeds.
// ✅ Good — flat, direct, the order IS the response
// GET /v1/orders/ord_9kZ2m
{
  "id": "ord_9kZ2m",
  "status": "fulfilled",
  "customer_id": "cus_4xA1p",
  "total_amount": 149.99,
  "currency": "USD",
  "created_at": "2026-03-15T11:42:00Z",
  "line_items": [
    { "sku": "HDPHN-BLK-XM5", "quantity": 1, "unit_price": 149.99 }
  ]
}
// ❌ Avoid — unnecessary envelope adds a layer clients have to unwrap every time
{
  "status": "success",
  "code": 200,
  "data": {
    "order": {
      "id": "ord_9kZ2m"
    }
  }
}
Wrapping makes sense when you need to co-locate metadata that isn't part of the resource itself —
pagination cursors, request IDs for tracing, or partial-failure summaries in bulk endpoints. For a simple
GET /orders/:id, the order is the response. Don't make clients write
response.data.order.id when response.id works just fine. If you want a spec to
reference, JSON:API is an opinionated but well-thought-out standard that commits to one consistent,
envelope-style document structure — worth reading even if you don't adopt it wholesale.
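The unwrapping tax shows up immediately in client code. A minimal TypeScript sketch of both shapes — the Order interface and the function names here mirror the examples above and are illustrative, not from any real SDK:

```typescript
// Hypothetical shape matching the flat order response above
interface Order {
  id: string;
  status: string;
  total_amount: number;
}

// Flat response: the parsed body IS the resource
function readFlat(body: string): Order {
  return JSON.parse(body) as Order;
}

// Enveloped response: every call site pays an unwrapping tax
interface Envelope {
  status: string;
  code: number;
  data: { order: Order };
}

function readEnveloped(body: string): Order {
  return (JSON.parse(body) as Envelope).data.order;
}
```

Every consumer of the enveloped API carries that extra `.data.order` hop forever, which is exactly why the envelope should earn its place rather than be the default.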
Error Responses — Use RFC 7807 Problem Details
Custom error shapes are one of the most common sources of integration pain. Every API ends up with
something slightly different — {"error": "..."}, {"message": "...", "code": 42},
{"errors": [...]} — and every client that consumes your API has to write bespoke error-parsing
logic. The IETF solved this with
RFC 7807 — Problem Details for HTTP APIs (since superseded by RFC 9457, which keeps the same structure and media type).
It's a lightweight standard that defines a consistent JSON structure for errors, served with a
Content-Type of application/problem+json. Adopt it and your error format becomes
something any developer can read without reaching for docs.
// POST /v1/orders — 422 Unprocessable Entity
// Content-Type: application/problem+json
{
  "type": "https://api.example.com/problems/validation-error",
  "title": "Validation Failed",
  "status": 422,
  "detail": "The order could not be created because one or more fields are invalid.",
  "instance": "/v1/orders/requests/req_7bN3k",
  "errors": [
    {
      "field": "line_items[0].quantity",
      "message": "Quantity must be a positive integer."
    },
    {
      "field": "shipping_address.postal_code",
      "message": "Postal code is required for US shipments."
    }
  ]
}
- Predictable parsing: Clients always know where to find the human-readable message (detail), the machine-readable category (type), and the HTTP status mirrored in the body (status).
- Extensible by design: The spec explicitly allows extra fields like errors for field-level validation detail — you're not working around it.
- Tooling support: OpenAPI 3.x supports application/problem+json as a response content type, so your generated docs and client SDKs understand the shape natively.
- The type URI is a document, not just a string: Point it at a real page explaining the error, and you've just replaced a support ticket with a self-service answer.
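On the client, "predictable parsing" can be a single small helper shared across every endpoint. A minimal TypeScript sketch — the ProblemDetails fields come straight from the RFC, while the errors extension member and the helper names are assumptions matching the example above:

```typescript
// Fields defined by RFC 7807 / RFC 9457; "errors" is the kind of
// extension member the spec explicitly permits
interface ProblemDetails {
  type?: string;
  title?: string;
  status?: number;
  detail?: string;
  instance?: string;
  errors?: { field: string; message: string }[];
}

// Detect a problem+json response by its Content-Type header
function isProblemJson(contentType: string | null): boolean {
  return contentType !== null && contentType.includes("application/problem+json");
}

// One parser for every error the API can return
function parseProblem(body: string): ProblemDetails {
  const p = JSON.parse(body) as ProblemDetails;
  // Per the spec, "type" defaults to "about:blank" when absent
  return { type: "about:blank", ...p };
}
```

That's the whole payoff: bespoke error-parsing middleware shrinks to a Content-Type check and one interface.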
HTTP Status Codes + JSON Body Together
The status code and the JSON body are not redundant — they play different roles. The status code tells the HTTP layer (proxies, caches, browsers, monitoring tools) what happened. The JSON body tells your application layer. Both need to be correct.
MDN's HTTP status reference is the fastest way to resolve debates about which code fits. The ones that trip up teams most often are 400 vs 422 (both are client errors, but 422 specifically means the syntax was valid and the server understood it — the semantics were wrong), and 401 vs 403 (401 means "who are you?", 403 means "I know who you are — you can't do this").
// 400 Bad Request — malformed JSON or missing required field at the HTTP level
{
  "type": "https://api.example.com/problems/bad-request",
  "title": "Bad Request",
  "status": 400,
  "detail": "Request body is not valid JSON."
}
// 422 Unprocessable Entity — valid JSON, but business rules rejected it
{
  "type": "https://api.example.com/problems/insufficient-inventory",
  "title": "Insufficient Inventory",
  "status": 422,
  "detail": "HDPHN-BLK-XM5 has 0 units available; requested 2.",
  "instance": "/v1/orders/requests/req_7bN3k"
}
// 404 Not Found — resource doesn't exist (or you don't want to reveal it does)
{
  "type": "https://api.example.com/problems/not-found",
  "title": "Order Not Found",
  "status": 404,
  "detail": "No order with ID ord_XXXXX exists in this account."
}
- 200 OK — successful GET, PUT, PATCH that returns a body
- 201 Created — successful POST that created a resource; include a Location header pointing to the new resource
- 204 No Content — successful DELETE or action with no response body; no JSON needed
- 400 Bad Request — malformed request syntax, the server can't even parse it
- 401 Unauthorized — missing or invalid authentication credentials
- 403 Forbidden — authenticated but not permitted
- 404 Not Found — resource doesn't exist
- 409 Conflict — state conflict (e.g. duplicate order, optimistic lock failure)
- 422 Unprocessable Entity — valid syntax, failed semantic/business validation
- 429 Too Many Requests — rate limit hit; always include a Retry-After header
- 500 Internal Server Error — something broke server-side; never leak stack traces in the body
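One way to keep the 400/422 and 401/403 distinctions honest server-side is to decide the status code in exactly one place. A hedged TypeScript sketch — the failure kinds and function name are hypothetical, not from any particular framework:

```typescript
// A discriminated union of failure categories; each maps to exactly one status
type ApiFailure =
  | { kind: "parse_error" }       // body wasn't valid JSON        -> 400
  | { kind: "validation_error" }  // JSON fine, semantics wrong    -> 422
  | { kind: "unauthenticated" }   // missing/invalid credentials   -> 401
  | { kind: "forbidden" }         // known caller, not permitted   -> 403
  | { kind: "conflict" };         // duplicate or stale state      -> 409

function statusFor(f: ApiFailure): number {
  switch (f.kind) {
    case "parse_error": return 400;
    case "validation_error": return 422;
    case "unauthenticated": return 401;
    case "forbidden": return 403;
    case "conflict": return 409;
  }
}
```

The exhaustive switch means adding a new failure kind forces a status decision at compile time, instead of letting a handler pick 500 by accident.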
Dates and Times — Always ISO 8601
Unix timestamps look clean — just a number. But they're a trap. Is 1710499200 seconds
or milliseconds? (Both are common. JavaScript's Date.now() gives milliseconds, POSIX gives
seconds.) What timezone? They're unreadable in logs without a converter. They can't represent dates before
1970 cleanly. And they'll overflow 32-bit integers in 2038 on systems that haven't migrated yet.
ISO 8601 strings solve
all of this. Use UTC and always include the timezone offset — a bare 2026-03-15T11:42:00
without a trailing Z or +00:00 is ambiguous and will eventually cause a bug in
a client that assumes local time.
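The "assumes local time" bug is easy to demonstrate in JavaScript, where a timestamp without an offset is interpreted in the host's local zone. A short TypeScript sketch — the seconds-vs-milliseconds heuristic is a hypothetical illustration, not a recommended production strategy:

```typescript
// With a trailing "Z" the instant is fixed regardless of where the code runs;
// without it, new Date("2026-03-15T11:42:00") depends on the host timezone.
const fixed = new Date("2026-03-15T11:42:00Z");

// Serializing: toISOString always emits UTC with an explicit "Z"
function serializeTimestamp(d: Date): string {
  return d.toISOString();
}

// Parsing a Unix timestamp forces the seconds-vs-milliseconds guess.
// A crude heuristic (illustrative only): values below ~1e12 can't be
// milliseconds for any date after 2001, so treat them as seconds.
function fromUnixAmbiguous(n: number): Date {
  return new Date(n < 1e12 ? n * 1000 : n);
}
```

The fact that clients even need a function like fromUnixAmbiguous is the argument for ISO 8601 in one line.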
// ✅ Good — unambiguous, human-readable, timezone-explicit
{
  "created_at": "2026-03-15T11:42:00Z",
  "updated_at": "2026-04-01T08:15:33Z",
  "scheduled_delivery": "2026-03-18T00:00:00Z",
  "expires_at": "2026-04-15T23:59:59Z"
}
// ❌ Avoid — ambiguous, unreadable, seconds vs ms confusion
{
  "created_at": 1710499200,
  "updated_at": 1743494133000,
  "scheduled_delivery": "15/03/2026",
  "expires_at": "April 15, 2026"
}
Null vs Omitted Fields
These two are not the same and conflating them creates subtle bugs that only surface in edge cases.
Null means the field exists, the server knows about it, and its current value is "nothing" —
like a fulfilled_at timestamp on an order that hasn't shipped yet.
Omitting a field entirely means it doesn't apply in this context — like a
return_tracking_number on a non-returned order. If a client sees "fulfilled_at": null,
it knows the field is part of this resource's schema and is explicitly unset. If the field is absent, the
client should treat it as outside the scope of this response — which matters when you're doing partial
updates with PATCH. Sending null means "clear this field"; omitting it means "don't touch it".
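These PATCH semantics map naturally onto TypeScript, where undefined means "omitted" and null means "explicitly cleared" — and JSON.stringify already drops undefined properties for you. A minimal sketch with a hypothetical OrderPatch shape:

```typescript
// Each field can be set to a value, cleared with null, or left out entirely
interface OrderPatch {
  status?: string | null;
  scheduled_delivery?: string | null;
  total_amount?: number | null;
}

function buildPatchBody(patch: OrderPatch): string {
  // undefined fields vanish from the payload ("don't touch it");
  // null fields survive as null ("clear this field")
  return JSON.stringify(patch);
}
```

Because the serializer enforces the distinction, a client can never accidentally send "clear this field" when it meant "leave it alone".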
// Order that exists but hasn't shipped yet
// fulfilled_at: null — we know about this field, it's just not set yet
// return_tracking_number: omitted — returns don't apply to this order
{
  "id": "ord_9kZ2m",
  "status": "processing",
  "created_at": "2026-03-15T11:42:00Z",
  "fulfilled_at": null,
  "shipped_at": null,
  "tracking_number": null,
  "total_amount": 149.99
}
// PATCH /v1/orders/ord_9kZ2m — cancel the scheduled delivery
// Only include fields you want to change
{
  "scheduled_delivery": null,
  "status": "cancelled"
}
// "total_amount" is omitted — we're NOT zeroing it out, just not touching it
Pagination — Cursor Over Offset
Offset pagination (?page=3&per_page=20) is intuitive to implement and easy to explain,
but it breaks silently on live data. If a record is inserted while a client is paginating — between page 2
and page 3 — they'll skip an item. If a record is deleted, they'll see a duplicate. For any dataset that
changes frequently (orders, events, notifications), cursor-based pagination is the correct default. You give
the client an opaque cursor (typically a base64-encoded ID or timestamp) that represents their position in
the result set. The next page starts from that exact point, regardless of inserts or deletes. Offset
pagination is fine for admin UIs where the dataset is stable and users genuinely need to jump to page 47.
It's not fine for any mobile client doing infinite scroll.
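The cursor itself is opaque to clients but trivial for the server to produce. A minimal TypeScript sketch, assuming (as the example response below does) that the cursor is just base64-encoded JSON carrying the last-seen ID — the "id" field name is an assumption:

```typescript
// Server side: turn the last row of a page into an opaque cursor
function encodeCursor(lastId: string): string {
  return Buffer.from(JSON.stringify({ id: lastId })).toString("base64");
}

// Server side: recover the position on the next request;
// clients never need to decode this
function decodeCursor(cursor: string): string {
  const parsed = JSON.parse(Buffer.from(cursor, "base64").toString("utf8")) as { id: string };
  return parsed.id;
}
```

Because the server resumes from an exact ID rather than a row offset, inserts and deletes between requests can no longer skip or duplicate items.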
// GET /v1/orders?limit=20&cursor=eyJpZCI6Im9yZF85a1oybSJ9
{
  "orders": [
    { "id": "ord_9kZ2m", "status": "fulfilled", "total_amount": 149.99, "created_at": "2026-03-15T11:42:00Z" },
    { "id": "ord_8jY1l", "status": "processing", "total_amount": 89.00, "created_at": "2026-03-14T09:10:00Z" }
  ],
  "pagination": {
    "next_cursor": "eyJpZCI6Im9yZF84alkxbCJ9",
    "has_more": true,
    "limit": 20
  }
}
// When has_more is false, omit next_cursor entirely (or set to null)
// Clients: fetch next page with ?cursor=<next_cursor> until has_more === false
Field Naming — snake_case vs camelCase
Pick one convention and enforce it with a linter. The actual choice matters less than the
consistency. That said: if your primary consumers are JavaScript/TypeScript clients,
camelCase integrates cleanly with destructuring and object spread.
If your primary consumers are Python or Ruby backends, snake_case feels natural.
If you serve both, the pragmatic solution is to document the convention and let clients use a
transformation layer — JSON.parse
with a reviver, the pyhumps library in Python, or a single serialization config in your framework.
What you should never do is mix conventions in the same API — customerId next to
order_total is a sign that different engineers wrote different endpoints without talking to
each other.
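If you do serve both camps, the client-side boundary transform is only a few lines. A hedged TypeScript sketch — toCamel and camelize are hypothetical helper names, and this recursive approach is one of several reasonable designs:

```typescript
// Convert a single snake_case key to camelCase
function toCamel(key: string): string {
  return key.replace(/_([a-z0-9])/g, (_, c: string) => c.toUpperCase());
}

// Recursively rename keys so nested objects and arrays are covered too
function camelize(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(camelize);
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(
        ([k, v]) => [toCamel(k), camelize(v)]
      )
    );
  }
  return value;
}
```

Run once at the fetch boundary, this keeps the wire format in one convention and application code in another, without either side noticing.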
Versioning
Two schools: URL versioning (/v1/orders, /v2/orders) and header versioning
(Accept: application/vnd.example.v2+json or a custom API-Version: 2026-03-15
header). URL versioning wins in practice almost every time. It's visible in logs without parsing headers,
it works with every HTTP client without configuration, you can test it in a browser, and you can run v1
and v2 side by side in the same gateway with a simple path-prefix rule. Header versioning is theoretically
more RESTful per the
IANA media type
model, but it creates invisible complexity — a request that looks identical in the URL is actually
behaving differently depending on a header most developers don't check first.
Stripe's date-based versioning (Stripe-Version: 2024-06-20) is the best of both worlds for
large platforms, but that's a different problem from picking your first version scheme.
Whatever you choose, version from day one. Retrofitting versioning onto an unversioned API in production
is painful and rarely goes cleanly.
Wrapping Up
None of this is groundbreaking — but that's the point. The teams that struggle most with API design aren't the ones who made technically wrong choices. They're the ones who made different choices in different endpoints and never wrote them down.
Flat success responses. RFC 7807 error bodies. ISO 8601 dates. Cursor pagination on live data. Null for "known and empty", omitted for "doesn't apply". URL versioning from day one. These patterns aren't perfect, but they're predictable — and predictability is what makes an API a pleasure to integrate with rather than a puzzle to reverse-engineer.
The formal JSON specification lives at RFC 8259 if you ever need to settle a spec-level argument. For everything above that layer, the best standard is the one your team actually writes down and follows consistently.