Turn any JSON sample into a CREATE TABLE statement in seconds. The converter infers column types from your data, promotes types across multiple samples (e.g., INT + FLOAT = FLOAT), and tracks nullability when fields are missing from any row. Supports Snowflake, Postgres, BigQuery, and ANSI SQL.
How does type inference work?
For each field, the tool walks every sample, records the observed type (boolean, integer, float, string, object, array, null), then picks the most permissive compatible type. If a field shows up as INT in one sample and FLOAT in another, the output column becomes FLOAT. If any sample is missing the field, the column becomes nullable. Nested objects become VARIANT (Snowflake), JSONB (Postgres), or STRUCT (BigQuery).
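The walk-promote-track loop described above can be sketched as follows. This is a minimal illustration, not the converter's actual implementation; the function names and the exact promotion rules (anything incompatible falls back to STRING) are assumptions.

```python
# Sketch of per-field type inference with promotion and nullability
# tracking. Illustrative only; the promotion lattice is an assumption.

def observed_type(value):
    """Classify one JSON value (bool check must precede int: bool is an int subtype)."""
    if isinstance(value, bool):
        return "BOOLEAN"
    if isinstance(value, int):
        return "INT"
    if isinstance(value, float):
        return "FLOAT"
    if isinstance(value, str):
        return "STRING"
    if isinstance(value, dict):
        return "OBJECT"
    if isinstance(value, list):
        return "ARRAY"
    return "NULL"  # JSON null

def promote(a, b):
    """Pick the most permissive type compatible with both observations."""
    if a == b:
        return a
    if {a, b} == {"INT", "FLOAT"}:
        return "FLOAT"                  # INT + FLOAT = FLOAT
    if "NULL" in (a, b):
        return a if b == "NULL" else b  # null contributes no type info
    return "STRING"                     # incompatible types: fall back to string

def infer_schema(samples):
    """Walk every sample; return {field: {"type": ..., "nullable": ...}}."""
    schema = {}
    all_fields = {key for sample in samples for key in sample}
    for field in all_fields:
        col_type, nullable = "NULL", False
        for sample in samples:
            if field not in sample:
                nullable = True          # missing in at least one row
                continue
            t = observed_type(sample[field])
            if t == "NULL":
                nullable = True
            col_type = promote(col_type, t)
        schema[field] = {"type": col_type, "nullable": nullable}
    return schema
```

For example, samples `{"id": 1, "price": 10}` and `{"id": 2, "price": 9.5, "note": "x"}` yield `price` as FLOAT (promoted from INT) and `note` as a nullable STRING (missing from the first row).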
Supported output dialects
Snowflake: VARCHAR, NUMBER, FLOAT, BOOLEAN, VARIANT for nested, ARRAY for arrays
Postgres: TEXT, BIGINT, DOUBLE PRECISION, BOOLEAN, JSONB for nested/arrays
BigQuery: STRING, INT64, FLOAT64, BOOL, STRUCT<...> for nested, ARRAY<...> for arrays
ANSI SQL: VARCHAR, BIGINT, DOUBLE, BOOLEAN - safe for most databases
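The dialect mappings above amount to a lookup table from the inferred abstract types to concrete column types. A hedged sketch, assuming the abstract type names from the inference step and a simple `{"type", "nullable"}` schema shape; the function and table names are illustrative, not the tool's API:

```python
# Dialect lookup table taken from the mappings listed above.
# ANSI has no native JSON type, so nested values fall back to VARCHAR here.
DIALECT_TYPES = {
    "snowflake": {"STRING": "VARCHAR", "INT": "NUMBER", "FLOAT": "FLOAT",
                  "BOOLEAN": "BOOLEAN", "OBJECT": "VARIANT", "ARRAY": "ARRAY"},
    "postgres":  {"STRING": "TEXT", "INT": "BIGINT", "FLOAT": "DOUBLE PRECISION",
                  "BOOLEAN": "BOOLEAN", "OBJECT": "JSONB", "ARRAY": "JSONB"},
    "bigquery":  {"STRING": "STRING", "INT": "INT64", "FLOAT": "FLOAT64",
                  "BOOLEAN": "BOOL", "OBJECT": "STRUCT", "ARRAY": "ARRAY"},
    "ansi":      {"STRING": "VARCHAR", "INT": "BIGINT", "FLOAT": "DOUBLE",
                  "BOOLEAN": "BOOLEAN", "OBJECT": "VARCHAR", "ARRAY": "VARCHAR"},
}

def render_create_table(table, schema, dialect="postgres"):
    """Render a CREATE TABLE statement from an inferred schema dict."""
    types = DIALECT_TYPES[dialect]
    cols = []
    for name, col in schema.items():
        null_sql = "" if col["nullable"] else " NOT NULL"
        cols.append(f"  {name} {types[col['type']]}{null_sql}")
    return f"CREATE TABLE {table} (\n" + ",\n".join(cols) + "\n);"
```

With a schema of `{"id": {"type": "INT", "nullable": False}}` and the `postgres` dialect, this emits `id BIGINT NOT NULL`.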
Best practices for schema inference
Paste at least 10-20 samples - a single sample misses optional fields and type variation.
Include boundary cases - rows with nulls, empty strings, zero, max-length strings.
Review nullability - the tool marks fields nullable conservatively; tighten to NOT NULL where the business guarantees presence.
Validate against production - for high-cardinality string fields, check the actual maximum length before committing to VARCHAR(255).
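The last practice above is easy to automate: measure the real maximum length of each string field in your samples before committing to a VARCHAR(n) limit. A small illustrative helper (the name is hypothetical):

```python
# Report the longest observed value per string field, as a sanity check
# against an arbitrary VARCHAR(255) limit.
def max_string_lengths(samples):
    lengths = {}
    for sample in samples:
        for field, value in sample.items():
            if isinstance(value, str):
                lengths[field] = max(lengths.get(field, 0), len(value))
    return lengths
```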
Common gotchas
JSON numbers without decimals are treated as integers - if you expect fractional values, include at least one sample with a decimal.
Date strings stay as VARCHAR - JSON has no native date type; cast downstream.
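The integer gotcha is visible directly in how JSON parsers work: Python's standard `json` module, like most parsers, keeps `10` as an integer and `10.0` as a float, so inference sees different types depending on which samples you paste.

```python
import json

# The same field parses to different Python types depending on the literal.
as_int = json.loads('{"price": 10}')["price"]     # int
as_float = json.loads('{"price": 10.0}')["price"]  # float
```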