JSON / Parquet / Avro Converter — Convert Data Formats in Your Browser
Free, client-side data format converter. Convert between JSON, Apache Parquet, and Apache Avro instantly — no file uploads, no server processing, no data leaves your device. Powered by DuckDB-WASM for Parquet operations and avsc for Avro serialization.
Supported Conversion Paths
This tool supports all six conversion directions between the three most common data lake and streaming formats:
JSON → Parquet — Compress JSON arrays into columnar Parquet files for efficient analytics. Ideal for loading into Snowflake external tables, AWS Athena, or Spark.
Parquet → JSON — Inspect and preview Parquet files without a query engine. Read column types, row counts, and actual data as human-readable JSON.
JSON → Avro — Encode JSON records into compact Avro binary with auto-inferred schemas. Perfect for Kafka producers and schema registry workflows.
Avro → JSON — Decode Avro container files (OCF) or raw Avro binary back to readable JSON. Debug Kafka consumer output and inspect Avro payloads.
Parquet → Avro — Convert columnar Parquet files to Avro binary for streaming ingestion. The converter chains Parquet → JSON → Avro internally.
Avro → Parquet — Convert Avro container files to columnar Parquet for analytical queries. The converter chains Avro → JSON → Parquet internally.
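The two binary formats above are easy to tell apart by their magic bytes, which is how a converter can validate input before parsing: Parquet files begin (and end) with the ASCII bytes "PAR1", and Avro Object Container Files begin with "Obj" followed by the byte 0x01. A minimal sketch of that check (detectFormat is a hypothetical helper, not part of this tool's API):

```javascript
// Sketch: detect a file's format from its leading magic bytes.
// Parquet: ASCII "PAR1" at the start (and end) of the file.
// Avro OCF: ASCII "Obj" followed by the version byte 0x01.
function detectFormat(bytes) {
  const startsWith = (sig) => sig.every((b, i) => bytes[i] === b);
  if (startsWith([0x50, 0x41, 0x52, 0x31])) return "parquet"; // "PAR1"
  if (startsWith([0x4f, 0x62, 0x6a, 0x01])) return "avro";    // "Obj\x01"
  // Fall back to JSON if the payload parses as text.
  try {
    JSON.parse(new TextDecoder().decode(bytes));
    return "json";
  } catch {
    return "unknown";
  }
}

console.log(detectFormat(new TextEncoder().encode("PAR1...."))); // → "parquet"
```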
How It Works
Select formats — Choose your source and target formats from the format selector (JSON, Parquet, Avro). Use the swap button to reverse direction instantly.
Provide input — Paste or type JSON directly, or drag-and-drop / browse for binary files (Parquet, Avro). Sample JSON data is preloaded for quick testing.
Convert — Click "Convert" and the tool processes everything in your browser. DuckDB-WASM handles Parquet read/write; avsc handles Avro encode/decode.
Preview and download — View a preview table (up to 50 rows), copy JSON output to clipboard, or download the converted file directly.
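The JSON → Avro path needs a schema before it can encode; avsc infers one from the data itself. A minimal sketch of that idea, much simpler than avsc's real inference (no unions, no optional-field handling):

```javascript
// Sketch: infer an Avro record schema from one sample JSON object.
// avsc's actual inference (avro.Type.forValue) is far more thorough;
// this only illustrates the basic JSON-to-Avro type mapping.
function inferAvroSchema(record, name = "Record") {
  const fieldType = (v) => {
    if (v === null) return "null";
    if (typeof v === "string") return "string";
    if (typeof v === "boolean") return "boolean";
    if (typeof v === "number") return Number.isInteger(v) ? "long" : "double";
    if (Array.isArray(v)) return { type: "array", items: fieldType(v[0]) };
    return inferAvroSchema(v, name + "_nested"); // plain object: nested record
  };
  return {
    type: "record",
    name,
    fields: Object.entries(record).map(([k, v]) => ({ name: k, type: fieldType(v) })),
  };
}

const schema = inferAvroSchema({ id: 1, name: "alice", score: 9.5 });
// field types inferred: long, string, double
console.log(JSON.stringify(schema, null, 2));
```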
Technology
DuckDB-WASM — A full analytical SQL engine compiled to WebAssembly (~2 MB). Handles native Parquet reading (read_parquet) and writing (COPY ... TO ... (FORMAT PARQUET)). The instance is shared with the SQL Playground, so the engine downloads only once.
avsc — A pure JavaScript implementation of the Apache Avro specification (~264 KB). Handles Avro schema inference, binary serialization (toBuffer), deserialization (fromBuffer), and Avro Object Container File (OCF) decoding.
No server — All processing runs in your browser via WebAssembly and JavaScript. Files are read with the FileReader API and never uploaded anywhere.
Format Comparison
Feature | JSON | Parquet | Avro
------- | ---- | ------- | ----
Storage format | Text (row-oriented) | Binary (columnar) | Binary (row-oriented)
Compression | None (apply gzip separately) | Built-in (Snappy, Zstd, Gzip) | Built-in (Deflate, Snappy)
Schema | Schema-less | Embedded in footer | Embedded in header
Best for | APIs, config, debugging | Analytics, data lakes, OLAP | Streaming, Kafka, CDC
Columnar pruning | No | Yes (read only needed columns) | No
Human readable | Yes | No | No
Typical compression ratio | 1x (baseline) | 5-10x vs JSON | 2-4x vs JSON
When to Use Each Format
Choose JSON when:
You need human-readable data for debugging, APIs, or configuration files
Data is small (under 10 MB) and schema flexibility matters
Consuming systems expect text-based input (REST APIs, logging pipelines)
Choose Parquet when:
Data is destined for analytical queries (Snowflake external tables, AWS Athena, Spark, BigQuery)
You need columnar pruning — queries that read a subset of columns are dramatically faster
Storage cost matters — Parquet typically achieves 5-10x compression vs raw JSON
You are building a data lake on S3, GCS, or Azure Blob Storage
Choose Avro when:
Data flows through Apache Kafka or other message brokers with schema registry
You need compact binary encoding for change data capture (CDC) pipelines
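Avro's compactness comes from its binary encoding: ints and longs are zig-zag encoded, then written as variable-length base-128 bytes, so small values (positive or negative) cost a single byte instead of multi-character decimal text. A sketch of that encoding (32-bit only for brevity; real Avro longs need 64-bit handling, e.g. via BigInt):

```javascript
// Sketch: Avro's zig-zag + variable-length integer encoding, the core
// of its compact binary format (per the Avro specification).
function encodeLong(n) {
  let z = (n << 1) ^ (n >> 31); // zig-zag: maps -1→1, 1→2, -2→3, 2→4 ...
  z = z >>> 0;                  // treat the result as unsigned
  const out = [];
  while (z > 0x7f) {
    out.push((z & 0x7f) | 0x80); // low 7 bits, continuation bit set
    z >>>= 7;
  }
  out.push(z);
  return out;
}

console.log(encodeLong(1));   // [2]
console.log(encodeLong(-1));  // [1]
console.log(encodeLong(300)); // [216, 4]
```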
Privacy and Security
All conversions run entirely in your browser. DuckDB-WASM and avsc process files locally via WebAssembly and JavaScript — no data is uploaded to any server, no network requests are made during conversion, and no files are stored. You can safely convert proprietary or sensitive data.
Frequently Asked Questions
Is this converter free?
Yes, completely free with no signup, no limits, and no tracking. DuckDB-WASM and avsc run 100% in your browser.
What is the maximum file size?
The converter runs inside your browser's memory budget — typically 1-4 GB depending on your device. Files under 100 MB, which covers most data engineering workflows, convert quickly. Very large files (500 MB+) may be slow or cause out-of-memory errors in the browser tab.
Does the Parquet output support compression?
Yes. DuckDB-WASM writes Parquet files with Snappy compression by default, which provides a good balance of compression ratio and speed. This matches the default used by Spark, Snowflake, and most data lake tools.
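For reference, the Parquet write described above corresponds to a DuckDB COPY statement. A sketch that builds one (the table and file names here are illustrative, not the tool's actual internals):

```javascript
// Sketch: the DuckDB SQL a JSON → Parquet conversion boils down to.
// Snappy is DuckDB's default Parquet codec; passing COMPRESSION makes
// the choice explicit (ZSTD and GZIP are also supported).
function parquetCopySql(table, outFile, codec = "SNAPPY") {
  return `COPY (SELECT * FROM ${table}) TO '${outFile}' (FORMAT PARQUET, COMPRESSION ${codec});`;
}

console.log(parquetCopySql("input_json", "output.parquet"));
```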
Can I convert Avro files without a schema?
Avro Object Container Files (OCF) embed their schema in the file header — the converter reads it automatically. For raw Avro binary without an embedded schema, you need to provide the schema separately (not currently supported in this tool).
How does Parquet-to-Avro conversion work?
The converter chains two steps internally: first it reads the Parquet file to JSON using DuckDB-WASM, then encodes the JSON to Avro using avsc. This approach works reliably for typical data sizes and avoids the need for a dedicated Parquet-to-Avro library.
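That two-step chain can be sketched as plain function composition; parquetToJson and jsonToAvro below are hypothetical stand-ins for the DuckDB-WASM and avsc steps, not the tool's actual functions:

```javascript
// Sketch: chaining Parquet → Avro through JSON as the intermediate form.
// The two converter functions are injected so the sketch stays
// independent of any particular Parquet or Avro library.
async function parquetToAvro(parquetBytes, { parquetToJson, jsonToAvro }) {
  const rows = await parquetToJson(parquetBytes); // step 1: read Parquet to JSON rows
  return jsonToAvro(rows);                        // step 2: encode the rows as Avro
}

// The reverse direction (Avro → Parquet) composes the same way with the
// two converters swapped.
```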
Can I use this to preview Parquet files?
Yes. Select "Parquet → JSON", drop your .parquet file, and click Convert. The preview table shows the first 50 rows with all columns. You can also copy the full JSON output to clipboard.
Related Tools
SQL Playground — query data with SQL directly in your browser using DuckDB-WASM. Shares the same engine as this converter.
JSON to SQL DDL — generate CREATE TABLE statements from JSON samples for Snowflake, Postgres, BigQuery, and more.