Paste SQL on the left and click "Convert", and we will turn it into JSON.

What this tool does

If you have ever pulled a bunch of INSERT statements from a migration or a pg_dump output and needed them as JSON for an API seed or a fixture, this does it in one paste. No need to load the dump into a database just to SELECT ... FOR JSON it out again.

The tool reads INSERT INTO table (col, col, ...) VALUES (...) statements, multi-row inserts included, and emits a JSON array where each row is a JSON object keyed by the column names. It supports standard ISO SQL syntax plus the common variants in PostgreSQL, MySQL, SQLite, and SQL Server.

Values are typed the way you would expect: integer and decimal literals stay numbers, quoted strings stay strings, NULL becomes JSON null, TRUE/FALSE become booleans. Date/time literals ('2024-01-15', '2024-01-15 10:30:00') emit as ISO-8601 strings. If you paste multiple INSERTs into different tables, each table gets its own key in the output, with the rows as a JSON array underneath.
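For example, a single row with mixed value types converts like this (table and values are illustrative):

    INSERT INTO users (id, name, active, signup_date, notes)
    VALUES (42, 'Ada', TRUE, '2024-01-15', NULL);

becomes

    [
      {
        "id": 42,
        "name": "Ada",
        "active": true,
        "signup_date": "2024-01-15",
        "notes": null
      }
    ]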

How to use it

Three steps. Works the same whether you paste three rows or three thousand.

1. Paste your SQL (or try the sample)

Drop your INSERT statements into the left editor. A single INSERT, a multi-row INSERT, or multiple INSERTs into different tables — all fine. Click Load Sample to see a realistic orders-and-items example.

Leave the SQL as-is — trailing semicolons, inline comments (-- or /* ... */), and schema prefixes (public.orders) all parse correctly.
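For instance, input like this parses as-is (names and rows are illustrative):

    -- migrated from the legacy schema
    INSERT INTO public.orders (id, customer, total) VALUES
      (1, 'Ada', 19.99),  /* first batch */
      (2, 'Grace', 42.00);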

2. Hit Convert

Click the green Convert button. The tool reads every INSERT, matches values to column names, and builds the JSON in one pass.

3. Copy the JSON

The right panel shows a JSON array (or an object of arrays for multi-table dumps). Drop it straight into an API seed, a Jest fixture, or a static mock server.
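As a sketch, if you save the output as fixtures/orders.json (a path we are assuming here), a Jest test in TypeScript can import it directly:

    // Requires "resolveJsonModule": true in tsconfig.json.
    import orders from "./fixtures/orders.json";

    test("every order has a positive total", () => {
      for (const order of orders) {
        expect(order.total).toBeGreaterThan(0);
      }
    });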

When this actually comes in handy

Seeding from an existing dump

You have a mysqldump or pg_dump file and want JSON seed data for a new app that does not talk directly to the old DB. Paste the INSERTs, keep the JSON. (One note: pg_dump emits COPY blocks by default, so pass --inserts or --column-inserts to get INSERT statements.)

Building test fixtures

Grab a few rows from production as INSERT statements (minus anything sensitive) and convert to JSON fixtures for integration tests or Storybook mocks.

API seed files

A new microservice expects to be seeded from JSON but the data lives as SQL inserts in the monorepo. One paste gives you the seed file.
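A minimal sketch of that seed step, assuming a seed/orders.json file and a local create endpoint (both hypothetical):

    // seed.mts -- compile with tsc or run via ts-node; Node 18+ for global fetch.
    import { readFileSync } from "node:fs";

    const rows: unknown[] = JSON.parse(readFileSync("seed/orders.json", "utf8"));
    for (const row of rows) {
      // The endpoint is an assumption; point this at your service's create route.
      await fetch("http://localhost:3000/api/orders", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(row),
      });
    }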

Handing data to a front-end team

The front-end team wants sample data for a new screen. You have the SQL handy. Convert to JSON and drop it in their repo as a mock response.
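One low-tech way to serve that mock, assuming the file lands in their repo as mocks/orders.json (a sketch, not a prescription):

    // mock-server.mts: serves the converted JSON at http://localhost:4000/
    import { createServer } from "node:http";
    import { readFileSync } from "node:fs";

    const body = readFileSync("mocks/orders.json", "utf8");
    createServer((_req, res) => {
      res.setHeader("Content-Type", "application/json");
      res.end(body);
    }).listen(4000);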

Common questions

Which SQL dialects does it handle?

The common subset: PostgreSQL, MySQL/MariaDB, SQLite, and SQL Server INSERT syntax. Dialect-specific quirks like PostgreSQL E'...' escapes or MySQL backtick-quoted identifiers are handled.
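For example, both of these parse (rows are illustrative):

    INSERT INTO `orders` (`id`, `note`) VALUES (1, 'it''s fine');   -- MySQL backticks
    INSERT INTO orders (id, note) VALUES (2, E'line 1\nline 2');    -- PostgreSQL E-string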

Does it support multi-row INSERTs?

Yes — INSERT INTO orders (id, total) VALUES (1, 9.99), (2, 15.50), (3, 42.00); comes out as a three-element JSON array with one object per row.
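The output for that statement (trailing zeros in decimal literals are not significant in JSON numbers):

    [
      { "id": 1, "total": 9.99 },
      { "id": 2, "total": 15.5 },
      { "id": 3, "total": 42 }
    ]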

What about INSERTs into multiple tables?

Each table ends up as its own key in the output JSON, with the rows as a JSON array. So a migration that inserts into orders and order_items gives you {"orders": [...], "order_items": [...]} — handy for seed files that need to preserve relational structure.
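For instance (rows are illustrative):

    INSERT INTO orders (id, total) VALUES (1, 9.99);
    INSERT INTO order_items (order_id, sku, qty) VALUES (1, 'A-100', 2);

becomes

    {
      "orders": [{ "id": 1, "total": 9.99 }],
      "order_items": [{ "order_id": 1, "sku": "A-100", "qty": 2 }]
    }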

How are NULL, dates, and booleans handled?

NULL becomes JSON null. TRUE/FALSE become JSON booleans; 0/1 stay plain numbers, since the column type cannot be inferred from the INSERT alone. Date literals are emitted as ISO-8601 (RFC 3339) strings.

Does it execute the SQL?

No — nothing is executed against a database. The tool parses the INSERT syntax and emits JSON. Your data never leaves the conversion request.
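To make "parses, never executes" concrete, here is a heavily simplified sketch in TypeScript of the kind of parsing involved. It is illustrative only, not the tool's actual implementation, and its naive comma split breaks on commas inside quoted strings:

    type Value = string | number | boolean | null;

    // Type a single SQL literal roughly the way the converter does.
    function typed(raw: string): Value {
      if (/^NULL$/i.test(raw)) return null;
      if (/^(TRUE|FALSE)$/i.test(raw)) return raw.toLowerCase() === "true";
      if (/^-?\d+(\.\d+)?$/.test(raw)) return Number(raw);
      // Strip outer quotes and unescape doubled single quotes.
      return raw.replace(/^'/, "").replace(/'$/, "").replace(/''/g, "'");
    }

    // Parse one single-table INSERT into { table, rows }.
    function parseInsert(sql: string) {
      const head = sql.match(/INSERT\s+INTO\s+([\w.]+)\s*\(([^)]+)\)\s*VALUES/i);
      if (!head) throw new Error("not a recognized INSERT");
      const columns = head[2].split(",").map((c) => c.trim());
      const tail = sql.slice((head.index ?? 0) + head[0].length);
      const tuples = tail.match(/\(([^)]*)\)/g) ?? []; // one match per row tuple
      const rows = tuples.map((tuple) => {
        const values = tuple.slice(1, -1).split(",").map((v) => typed(v.trim()));
        return Object.fromEntries(columns.map((c, i) => [c, values[i]] as [string, Value]));
      });
      return { table: head[1], rows };
    }

    // parseInsert("INSERT INTO orders (id, total) VALUES (1, 9.99), (2, 15.50);")
    // -> { table: "orders", rows: [{ id: 1, total: 9.99 }, { id: 2, total: 15.5 }] }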

What about SELECT result sets?

Paste a formatted result set (column headers plus rows) and the tool will do its best to emit a JSON array. INSERT syntax is more reliable because the column names are explicit — if possible, prefer INSERTs.

Other tools you may need

SQL to JSON pairs well with the rest of the toolbox: