Both JSON and TOON represent structured data. Both are text-based. Both can describe objects, arrays, and nested hierarchies. So if you've stumbled across TOON and wondered whether it's just another JSON alternative trying to solve a problem that doesn't exist — that's a fair question. The answer is: they're optimised for completely different jobs, and once you see it laid out, the choice becomes obvious.

JSON is the universal language of data exchange. It's been the backbone of REST APIs for over a decade, it's supported natively in every modern programming language, and it's what you reach for by default. TOON is a newer format with a narrower mission: minimising token usage when you pass structured data to and from large language models. Same underlying data, radically different footprint.

What Makes JSON Great

JSON's superpower is ubiquity. The grammar defined in RFC 8259 is simple enough to summarise in a few pages, which is why every language from Python to Go to Rust has a first-class JSON parser in its standard library or ecosystem. You don't need to install anything. You don't need to explain the format to your colleagues. And tooling — formatters, linters, schema validators — is universally available.

  • Universally supported. Every language, every runtime, every cloud service speaks JSON natively.
  • Human-readable. Keys are quoted strings, structure is explicit with braces and brackets.
  • Typed primitives. Numbers, booleans, nulls, strings, arrays, and objects — all unambiguously represented.
  • Rich tooling. Linters, formatters, JSON Schema validators, JSONPath query engines — the ecosystem is enormous.
  • REST API standard. Content-Type: application/json is the default expectation across virtually every HTTP API.

What Makes TOON Different

TOON was built with one specific constraint in mind: LLM context windows and token costs. When you call the OpenAI API or any other LLM provider, you pay per token. Tokens aren't characters — they're roughly word-fragments, and JSON's structural punctuation (quotes, colons, commas, braces) consumes a surprising number of them. For tabular data especially, JSON is wasteful: every row repeats every key name.

TOON solves this with a compact notation. For tabular data, keys are declared once in a header, and rows are plain comma-separated values. For single objects and arrays, it uses a lean inline syntax. The OpenAI tokenizer is a useful tool to see the difference in practice — paste the same data in both formats and compare the token count directly.

Side-by-Side: 10-Row Dataset

Here's a product table with 10 rows in JSON — the kind of thing you might pass to an LLM to ask it to analyse pricing or categorise items:

json
[
  {"id": 1,  "name": "Wireless Mouse",      "category": "Electronics", "price": 29.99, "inStock": true},
  {"id": 2,  "name": "USB-C Hub",           "category": "Electronics", "price": 49.99, "inStock": true},
  {"id": 3,  "name": "Mechanical Keyboard", "category": "Electronics", "price": 89.99, "inStock": false},
  {"id": 4,  "name": "Monitor Stand",       "category": "Furniture",   "price": 39.99, "inStock": true},
  {"id": 5,  "name": "Webcam HD",           "category": "Electronics", "price": 69.99, "inStock": true},
  {"id": 6,  "name": "Desk Mat",            "category": "Accessories", "price": 19.99, "inStock": true},
  {"id": 7,  "name": "Laptop Stand",        "category": "Furniture",   "price": 34.99, "inStock": false},
  {"id": 8,  "name": "LED Desk Lamp",       "category": "Furniture",   "price": 44.99, "inStock": true},
  {"id": 9,  "name": "Cable Organiser",     "category": "Accessories", "price": 14.99, "inStock": true},
  {"id": 10, "name": "Headphone Hook",      "category": "Accessories", "price": 12.99, "inStock": false}
]

Now the same data in TOON. Keys are declared once in the header — rows contain only values:

text
products[10]{id,name,category,price,inStock}:
  1,Wireless Mouse,Electronics,29.99,true
  2,USB-C Hub,Electronics,49.99,true
  3,Mechanical Keyboard,Electronics,89.99,false
  4,Monitor Stand,Furniture,39.99,true
  5,Webcam HD,Electronics,69.99,true
  6,Desk Mat,Accessories,19.99,true
  7,Laptop Stand,Furniture,34.99,false
  8,LED Desk Lamp,Furniture,44.99,true
  9,Cable Organiser,Accessories,14.99,true
  10,Headphone Hook,Accessories,12.99,false

Token savings in practice: The JSON version above clocks in at roughly 420 tokens on the OpenAI tokenizer. The TOON version is approximately 195 tokens — less than half. At scale (thousands of API calls a day, or datasets with hundreds of rows), that gap becomes real money.
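Character counts are only a crude proxy for tokens, but they make the same trend visible without a tokenizer. A quick sketch (the inline TOON string here is hand-built for illustration, not produced by a library):

```typescript
// Compare the raw size of the two encodings for a small slice of the table.
// Characters are a rough stand-in for tokens; exact counts need a tokenizer.
const products = [
  { id: 1, name: "Wireless Mouse", category: "Electronics", price: 29.99, inStock: true },
  { id: 2, name: "USB-C Hub", category: "Electronics", price: 49.99, inStock: true },
];

const asJson = JSON.stringify(products);

// TOON-style tabular form: keys declared once, then plain value rows.
const keys = Object.keys(products[0]) as (keyof (typeof products)[number])[];
const asToon = [
  `products[${products.length}]{${keys.join(",")}}:`,
  ...products.map((p) => "  " + keys.map((k) => String(p[k])).join(",")),
].join("\n");

console.log(asJson.length, asToon.length); // the TOON string is markedly shorter
```

The gap widens as rows are added, because JSON pays the per-row key overhead on every record while the TOON header cost is fixed.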

Using TOON in JavaScript / TypeScript

The @toon-format/toon npm package handles encoding and decoding. Install it once and the API is straightforward:

bash
npm install @toon-format/toon

ts
import { encode, decode } from '@toon-format/toon';

const products = [
  { id: 1, name: 'Wireless Mouse', category: 'Electronics', price: 29.99, inStock: true },
  { id: 2, name: 'USB-C Hub',      category: 'Electronics', price: 49.99, inStock: true },
  // ...more rows
];

// Encode to TOON before sending to an LLM
const toonString = encode(products);
// [2]{id,name,category,price,inStock}:
//   1,Wireless Mouse,Electronics,29.99,true
//   2,USB-C Hub,Electronics,49.99,true

// Decode TOON back to a plain JS array when the LLM returns it
const decoded = decode(toonString) as typeof products;
console.log(decoded[0].name); // "Wireless Mouse"

You can also encode a single object inline. TOON's object notation uses curly braces without quoting keys, and its array notation is simply comma-separated values in square brackets:

text
// TOON object
{name:Alice,age:30,role:admin}

// TOON array
[1,2,3,4,5]

When to Use TOON

  • LLM prompts with structured data. Feeding a table, list of records, or product catalogue into a GPT-4 / Claude / Gemini call. The token savings directly reduce your API bill.
  • LLM output parsing. If you instruct an LLM to respond in TOON, the responses are shorter and cheaper on both input and output tokens.
  • Tabular data specifically. TOON's header-once-then-rows structure is dramatically more compact than JSON for anything table-shaped.
  • Batch processing pipelines. Running thousands of records through an LLM daily? Even a 40% token reduction adds up fast.
  • Context window pressure. When you're butting up against a model's context limit, TOON lets you fit more data in the same window.
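To make the batch-processing point concrete, here is a back-of-envelope calculation. The per-call token counts come from the 10-row example above; the daily call volume and the per-million-token price are hypothetical placeholders, not real pricing:

```typescript
// Back-of-envelope: daily savings from the JSON-to-TOON gap.
const tokensPerCallJson = 420;        // rough count from the 10-row example
const tokensPerCallToon = 195;
const callsPerDay = 5_000;            // hypothetical volume
const usdPerMillionInputTokens = 2.5; // hypothetical price

const savedTokensPerDay = (tokensPerCallJson - tokensPerCallToon) * callsPerDay;
const savedUsdPerDay = (savedTokensPerDay / 1_000_000) * usdPerMillionInputTokens;

console.log(savedTokensPerDay, savedUsdPerDay.toFixed(2));
```

Scale the call volume or the row count up and the gap stops being pocket change, and the same multiplier applies again on output tokens if the model responds in TOON.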

When to Stick With JSON

  • REST APIs. Every HTTP client, every server framework, every API gateway speaks JSON. Don't break the convention.
  • Configuration files. package.json, tsconfig.json, settings.json — JSON is the standard for config in most ecosystems.
  • Database storage. PostgreSQL's jsonb, MongoDB documents, DynamoDB — these are JSON-native. TOON doesn't belong here.
  • Inter-service communication. When two services talk to each other, use JSON. It's what your logging, tracing, and monitoring tools understand.
  • Public APIs. If external developers are consuming your API, JSON is the expected contract. TOON is an internal optimisation, not a public interface format.
  • Browser-native parsing. JSON.parse() is built into every browser. TOON requires a library.

Decision Guide

A quick checklist to settle the question:

  • Is the data going into an LLM prompt? Use TOON.
  • Is the LLM expected to return structured data? Use TOON.
  • Is the data tabular (same keys across many rows)? Strongly consider TOON.
  • Is this a REST API request or response? Use JSON.
  • Is this a config file? Use JSON (or YAML).
  • Is this stored in a database? Use JSON.
  • Will non-LLM tooling consume this data? Use JSON.
  • Are you unsure whether token costs matter for your use case? Start with JSON, optimise later.
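The checklist collapses into a small helper. This is a hypothetical function that just codifies the rules of thumb above, not part of any library:

```typescript
type WireFormat = "TOON" | "JSON";

// Codifies the checklist: only LLM-bound data benefits from TOON;
// everything else (APIs, config, storage, general tooling) defaults to JSON.
function chooseFormat(ctx: { llmBound?: boolean; tabular?: boolean }): WireFormat {
  if (ctx.llmBound) return "TOON"; // tabular shape makes the savings larger still
  return "JSON";
}

console.log(chooseFormat({ llmBound: true, tabular: true })); // "TOON"
console.log(chooseFormat({}));                                // "JSON"
```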

Useful Tools

If you're working with both formats, these tools will save you time. Use the JSON to TOON converter to take existing JSON data and produce a compact TOON representation ready for LLM input. The TOON to JSON converter goes the other direction — useful when an LLM returns TOON and you need to feed the result into a JSON-native downstream system. The TOON Formatter will clean up and validate TOON strings, and the JSON Formatter remains the go-to for prettifying raw JSON blobs.

Wrapping Up

JSON and TOON aren't competitors in the traditional sense — they target different parts of your stack. JSON owns the API layer, config files, and data storage. TOON owns the LLM layer, where token count is money and context window space is precious. The good news is you can use both in the same project without friction: store and transfer data as JSON, encode to TOON immediately before any LLM call, and decode back to JSON or a native object on the way out. Once you've set up that pattern, the token savings are automatic.
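As a sketch of that boundary pattern: the encodeTabular and decodeTabular functions below are toy stand-ins for the library's encode and decode, handling only the flat tabular case (and naively, with no quoting for values that contain commas):

```typescript
type Row = Record<string, string | number | boolean>;

// Toy encoder: header once, then indented value rows.
function encodeTabular(name: string, rows: Row[]): string {
  const keys = Object.keys(rows[0]);
  return [
    `${name}[${rows.length}]{${keys.join(",")}}:`,
    ...rows.map((r) => "  " + keys.map((k) => String(r[k])).join(",")),
  ].join("\n");
}

// Toy decoder: read keys from the header, recover booleans and numbers.
function decodeTabular(toon: string): Row[] {
  const [header, ...lines] = toon.split("\n");
  const keys = header.slice(header.indexOf("{") + 1, header.indexOf("}")).split(",");
  return lines.map((line) => {
    const values = line.trim().split(","); // naive: breaks on commas in values
    const row: Row = {};
    keys.forEach((k, i) => {
      const v = values[i];
      row[k] = v === "true" ? true : v === "false" ? false : isNaN(Number(v)) ? v : Number(v);
    });
    return row;
  });
}

// Store and transfer as JSON; encode at the LLM boundary; decode on the way back.
const stored: Row[] = JSON.parse('[{"id":1,"name":"Desk Mat","price":19.99,"inStock":true}]');
const forLlm = encodeTabular("products", stored);
const back = decodeTabular(forLlm);
```

In a real project the pair would come from @toon-format/toon; the pattern itself (JSON everywhere, TOON only at the model boundary) is the part that carries over.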