The `/exp` endpoint executes SQL queries and exports the results as CSV or Parquet files.
## Endpoint
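The endpoint is served over HTTP `GET`. A minimal sketch, assuming QuestDB's default HTTP server at `localhost:9000` and a hypothetical `trades` table:

```bash
# Minimal export: URL-encode the SELECT and save the result locally
curl -G "http://localhost:9000/exp" \
  --data-urlencode "query=SELECT * FROM trades" \
  -o trades.csv
```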
## Query Parameters
**`query`** (required): SQL SELECT query to execute. Must be URL-encoded. Only SELECT queries are supported; INSERT, UPDATE, and DDL statements return an error.

**`limit`**: Limits the number of rows exported. Format: `limit` or `skip,limit`. Examples:

- `limit=1000`: export the first 1000 rows
- `limit=100,500`: skip 100 rows, export the next 500

**Count only**: When `true`, only counts rows without exporting data.

**`nm`** (no metadata): When `true`, excludes column headers from the CSV export.

**Filename**: Name of the downloaded file (without extension). The extension is added automatically based on the format. If not specified, CSV exports use `questdb-query-<timestamp>.csv` and Parquet exports use `questdb-query-<timestamp>.parquet`.

**Delimiter**: CSV field delimiter (CSV format only). Common values:

- `,`: comma (default)
- `\t`: tab (URL-encoded as `%09`)
- `;`: semicolon
- `|`: pipe

**`fmt`**: Export format:

- `csv`: comma-separated values (default)
- `parquet`: Apache Parquet columnar format

**`timeout`**: Query timeout in milliseconds. Overrides the `Statement-Timeout` header.

### Parquet-Specific Parameters
These parameters apply only when `fmt=parquet`:
**Compression codec**: Compression algorithm for the Parquet file:

- `NONE`: no compression
- `SNAPPY`: Snappy (default, good balance)
- `GZIP`: GZIP (better compression, slower)
- `LZ4`: LZ4 (fast compression)
- `ZSTD`: Zstandard (best compression)
- `BROTLI`: Brotli
- `LZO`: LZO

**Compression level**: Codec-dependent; higher values give better compression but are slower. Valid ranges depend on the codec:

- GZIP: 1-9 (default 6)
- ZSTD: 1-22 (default 3)
- BROTLI: 1-11 (default 4)

**Row group size**: Number of rows per Parquet row group. Affects query performance and file size. Typical values: 100,000 to 1,000,000.

**Data page size**: Size of data pages in bytes (default 1 MB). Smaller pages enable more fine-grained reads.

**Statistics**: When `true`, includes column statistics in the Parquet metadata for query optimization.

**Parquet version**: Parquet format version:

- `1`: Parquet 1.0 (default, maximum compatibility)
- `2`: Parquet 2.0 (better encoding, requires newer readers)

**Raw array encoding**: When `true`, uses raw array encoding for better performance with array types.

**`rmode`**: Response mode. Set to `nodelay` to start the file download immediately (streams the first bytes before the query completes).

## Request Headers
**`Statement-Timeout`**: Query timeout in milliseconds. Can be overridden by the `timeout` query parameter. Default: 60000 ms (60 seconds) for Parquet; no timeout for CSV.

**`Authorization`**: HTTP Basic authentication credentials (if authentication is enabled).
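A sketch of setting the timeout through the header rather than the query parameter (hypothetical `trades` table):

```bash
# Allow up to 2 minutes before the server aborts the statement
curl -G "http://localhost:9000/exp" \
  -H "Statement-Timeout: 120000" \
  --data-urlencode "query=SELECT * FROM trades"
```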
## Response

### CSV Response

Returns CSV data with appropriate response headers. Column headers are included in the first row unless `nm=true`.
### Parquet Response

Returns Parquet binary data as a file download.

## Examples
### Export as CSV
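A minimal sketch (hypothetical `trades` table):

```bash
# CSV is the default format, so no fmt parameter is needed
curl -G "http://localhost:9000/exp" \
  --data-urlencode "query=SELECT * FROM trades LIMIT 1000" \
  -o trades.csv
```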
### Export as Parquet
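The same request with `fmt=parquet`:

```bash
# Request Parquet output instead of the default CSV
curl -G "http://localhost:9000/exp" \
  --data-urlencode "query=SELECT * FROM trades" \
  -d "fmt=parquet" \
  -o trades.parquet
```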
### Export with Custom Filename
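A sketch assuming the parameter is spelled `filename` (the exact spelling is not confirmed above, so treat it as hypothetical):

```bash
# 'filename' is an assumed parameter spelling; -OJ lets curl honor the
# server-provided file name from the Content-Disposition header
curl -OJ -G "http://localhost:9000/exp" \
  --data-urlencode "query=SELECT * FROM sensors" \
  -d "filename=sensor_data"
```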
The file downloads as `sensor_data.csv`.
### Export with Tab Delimiter
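A sketch assuming the parameter is spelled `delimiter` (hypothetical spelling); note the tab is passed pre-encoded as `%09`:

```bash
# 'delimiter' is an assumed parameter spelling; -d appends %09 verbatim,
# avoiding the double-encoding that --data-urlencode would introduce
curl -G "http://localhost:9000/exp" \
  --data-urlencode "query=SELECT * FROM trades" \
  -d "delimiter=%09" \
  -o trades.tsv
```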
### Export Without Headers
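Using the `nm` flag documented above:

```bash
# nm=true suppresses the column-header row in the CSV output
curl -G "http://localhost:9000/exp" \
  --data-urlencode "query=SELECT * FROM trades" \
  -d "nm=true" \
  -o trades_no_header.csv
```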
### Parquet with ZSTD Compression
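A sketch using hypothetical spellings `compression_codec` and `compression_level` for the Parquet options described above:

```bash
# 'compression_codec' and 'compression_level' are assumed spellings
curl -G "http://localhost:9000/exp" \
  --data-urlencode "query=SELECT * FROM trades" \
  -d "fmt=parquet" \
  -d "compression_codec=ZSTD" \
  -d "compression_level=9" \
  -o trades.parquet
```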
### Parquet with Custom Row Groups
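A sketch using a hypothetical `row_group_size` spelling:

```bash
# 'row_group_size' is an assumed spelling; 500,000 rows per row group
curl -G "http://localhost:9000/exp" \
  --data-urlencode "query=SELECT * FROM trades" \
  -d "fmt=parquet" \
  -d "row_group_size=500000" \
  -o trades.parquet
```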
### Export with Aggregation
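Any SELECT can be exported, including aggregations (hypothetical schema):

```bash
# Aggregate server-side before exporting; only the query text changes
curl -G "http://localhost:9000/exp" \
  --data-urlencode "query=SELECT symbol, avg(price) FROM trades" \
  -o avg_prices.csv
```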
### Export with Timeout
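Using the `timeout` query parameter documented above:

```bash
# Give the statement up to 5 minutes (300000 ms) to complete
curl -G "http://localhost:9000/exp" \
  --data-urlencode "query=SELECT * FROM trades" \
  -d "fmt=parquet" \
  -d "timeout=300000" \
  -o trades.parquet
```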
### Streaming Export (No Delay)
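Using the `rmode=nodelay` response mode documented above:

```bash
# Stream the first bytes before the query completes
curl -G "http://localhost:9000/exp" \
  --data-urlencode "query=SELECT * FROM trades" \
  -d "fmt=parquet" \
  -d "rmode=nodelay" \
  -o trades.parquet
```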
With `rmode=nodelay`, the download starts immediately with the Parquet magic bytes, enabling progress indication in browsers.
## CSV Format Details

### Data Encoding
- Strings: quoted with double quotes, special characters escaped
- Numbers: raw numeric values
- Timestamps: ISO 8601 format (`2024-01-01T00:00:00.000000Z`)
- NULL values: empty field (consecutive delimiters)
- Booleans: `true` or `false`
### Special Characters

Strings containing the delimiter, newlines, or quotes are automatically quoted and escaped following standard CSV conventions (embedded double quotes are doubled).

## Parquet Format Details
### Schema Mapping

QuestDB types map to Parquet types as follows:

| QuestDB Type | Parquet Type | Physical Type |
|---|---|---|
| BOOLEAN | BOOLEAN | BOOLEAN |
| BYTE | INT8 | INT32 |
| SHORT | INT16 | INT32 |
| INT | INT32 | INT32 |
| LONG | INT64 | INT64 |
| FLOAT | FLOAT | FLOAT |
| DOUBLE | DOUBLE | DOUBLE |
| STRING | STRING | BYTE_ARRAY |
| SYMBOL | STRING | BYTE_ARRAY |
| TIMESTAMP | TIMESTAMP(MICROS) | INT64 |
| DATE | DATE | INT32 |
| UUID | UUID | FIXED_LEN_BYTE_ARRAY(16) |
| IPv4 | INT32 | INT32 |
### Performance Characteristics
**Parquet benefits:**

- Columnar storage enables efficient compression (typically 5-10x smaller than CSV)
- Fast analytical queries (column pruning, predicate pushdown)
- Schema embedded in the file
- Type-safe (preserves numeric precision)

**CSV benefits:**

- Human-readable
- Universal compatibility
- Simple parsing
- No schema required
### Compression Comparison
| Codec | Compression Ratio | Speed | Use Case |
|---|---|---|---|
| NONE | 1.0x | Fastest | Already compressed data |
| LZ4 | 2-3x | Very fast | Real-time streaming |
| SNAPPY | 2-4x | Fast | General purpose (default) |
| ZSTD | 3-7x | Medium | Best balance |
| GZIP | 3-6x | Slow | Maximum compatibility |
| BROTLI | 4-8x | Slowest | Best compression |
## Error Handling

### Invalid Query
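A sketch of a request that fails to parse; the server rejects it with an HTTP error status and a body describing the problem (the exact message format is not reproduced here):

```bash
# The misspelled keyword causes the server to reject the query
curl -G "http://localhost:9000/exp" \
  --data-urlencode "query=SELECTT * FROM trades"
```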
### Table Not Found
### Invalid Parquet Option
## Use Cases

### Data Analysis
Export data for analysis in Python, R, or other tools, for example:
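A sketch pairing the export with pandas (assumes `pandas` and `pyarrow` are installed; hypothetical table name):

```bash
# Export to Parquet, then load the file into a pandas DataFrame
curl -G "http://localhost:9000/exp" \
  --data-urlencode "query=SELECT * FROM trades" \
  -d "fmt=parquet" \
  -o trades.parquet
python3 -c 'import pandas as pd; print(pd.read_parquet("trades.parquet").head())'
```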
### Data Archival

Export compressed Parquet for long-term storage (see the ZSTD compression example above).
### Report Generation

Generate CSV reports for business users (see the custom filename example above).
### Data Pipeline Integration

Export to cloud storage, for example:
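A sketch piping the download straight to Amazon S3 with the AWS CLI (assumes `aws` is installed and configured; bucket name hypothetical):

```bash
# Stream the Parquet export into an S3 object without a temporary file
curl -sS -G "http://localhost:9000/exp" \
  --data-urlencode "query=SELECT * FROM trades WHERE timestamp IN '2024-01'" \
  -d "fmt=parquet" \
  | aws s3 cp - s3://my-bucket/exports/trades-2024-01.parquet
```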
## Best Practices

**Streaming mode**: Use `rmode=nodelay` for Parquet exports to start downloads immediately in web browsers, providing better UX for large queries.

## Limits and Configuration
Export limits are configured in `server.conf`.