QuestDB provides CSV import capabilities through both the Web Console UI and the REST API. The import process includes automatic schema detection, type inference, and data validation.
Import Methods
Web Console (Interactive)
The Web Console provides a drag-and-drop interface for CSV imports.
URL: http://localhost:9000
Features:
- Visual drag-and-drop import
- Interactive schema editor
- Real-time validation
- Import progress tracking
- Error inspection
REST API (Programmatic)
The /imp endpoint provides HTTP-based CSV import.
Endpoint: POST /imp
Port: 9000 (default)
Features:
- Multipart form upload
- Batch processing
- Scriptable imports
- JSON/Text response formats
Web Console Import
Step 1: Access the Import UI
- Navigate to http://localhost:9000
- Click the Import button in the top navigation
Step 2: Upload CSV File
Drag and Drop:
- Drag your CSV file into the import area
- Or click to browse and select a file
Supported Formats:
- CSV (comma-separated)
- TSV (tab-separated)
- Pipe-separated (|)
- Custom delimiters
Step 3: Configure Import
Table Settings:
- Table Name: Name for the new table (defaults to filename)
- Schema Action: Create new, append, or overwrite
- Partition Strategy: NONE, HOUR, DAY, WEEK, MONTH, YEAR
- Designated Timestamp: Select timestamp column
- Timestamp Format: Auto-detect or specify pattern
- Header: First line contains column names
- Skip LEV: Skip lines with extra values (LEV = line extra values)
- Atomicity: skipRow (continue on error) or abort (stop on error)
Step 4: Edit Schema
QuestDB analyzes the first 1000 lines to infer column types.
Column Editor:
- Change column names
- Modify data types
- Set timestamp column
- Configure index settings
Available Types:
- Numeric: BYTE, SHORT, INT, LONG, FLOAT, DOUBLE
- Text: STRING, SYMBOL
- Temporal: TIMESTAMP, DATE
- Other: BOOLEAN, UUID, LONG256, GEOHASH
Step 5: Import
Click Import to begin the import process.
Progress Indicators:
- Rows parsed
- Rows imported
- Parse errors
- Import status
If errors occur:
- View error details for failed rows
- Download error report
- Adjust settings and retry
Step 6: Verify Import
After successful import:
- Table appears in the tables list
- Click table name to view data
- Run queries to validate
CSV File Format
Basic Format
With Header:
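A minimal illustrative sample (column names and values are made up):

```csv
timestamp,symbol,price,amount
2024-01-15T10:00:00Z,BTC-USD,42000.50,0.25
2024-01-15T10:00:01Z,ETH-USD,2500.10,1.50
```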
Delimiters
Comma (default):
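Illustrative samples of the same rows, comma-separated and pipe-separated:

```csv
ts,sensor,value
2024-01-15T10:00:00Z,sensor_01,23.5
```

Pipe-separated:

```text
ts|sensor|value
2024-01-15T10:00:00Z|sensor_01|23.5
```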
Data Types
QuestDB infers column types from the data; see the Type Mapping table under Schema Specification for the full inference rules.
Timestamp Formats
QuestDB supports multiple timestamp formats. ISO 8601:
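A few commonly accepted ISO 8601 forms (illustrative):

```text
2024-01-15T10:00:00.000000Z
2024-01-15T10:00:00Z
2024-01-15 10:00:00
```

Other patterns can be declared explicitly via the schema pattern field (see Schema Specification below).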
Quoted Values
Strings with commas:
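Fields containing the delimiter must be quoted, for example (illustrative data):

```csv
id,description
1,"wireless mouse, black"
2,"usb-c cable, 2m"
```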
REST API Import
Basic Import
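A minimal sketch using Python's requests library, assuming a local QuestDB on port 9000 and a hypothetical trades.csv:

```python
import requests

# Upload the file as a multipart form field named "data".
with open("trades.csv", "rb") as f:
    resp = requests.post(
        "http://localhost:9000/imp",
        params={"name": "trades"},        # target table name
        files={"data": ("trades.csv", f)},
    )
print(resp.text)  # plain-text import summary by default
```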
Import with Options
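A sketch passing the query parameters documented in the table below (values are illustrative):

```python
import requests

params = {
    "name": "trades",
    "timestamp": "ts",        # designated timestamp column
    "partitionBy": "DAY",
    "forceHeader": "true",
    "atomicity": "skipRow",
    "fmt": "json",
}
with open("trades.csv", "rb") as f:
    resp = requests.post(
        "http://localhost:9000/imp",
        params=params,
        files={"data": ("trades.csv", f)},
    )
print(resp.json())
```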
Import with Schema
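A sketch sending an explicit schema as a second form field; QuestDB expects the schema part to precede the data part, and the column definitions here are illustrative:

```python
import json
import requests

schema = json.dumps([
    {"name": "ts", "type": "TIMESTAMP", "pattern": "yyyy-MM-dd HH:mm:ss"},
    {"name": "symbol", "type": "SYMBOL"},
    {"name": "price", "type": "DOUBLE"},
])
with open("trades.csv", "rb") as f:
    resp = requests.post(
        "http://localhost:9000/imp",
        params={"name": "trades", "timestamp": "ts", "fmt": "json"},
        # dicts preserve insertion order, so "schema" is sent before "data"
        files={"schema": (None, schema), "data": ("trades.csv", f)},
    )
print(resp.json())
```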
Query Parameters
| Parameter | Description | Default |
|---|---|---|
| name | Table name | Filename |
| timestamp | Timestamp column name | Auto-detect |
| partitionBy | Partition strategy | DAY |
| overwrite | Replace existing table | false |
| forceHeader | Treat first line as header | Auto-detect |
| skipLev | Skip line-extra-values errors | false |
| atomicity | Error handling: skipRow/abort | abort |
| fmt | Response format: json/text | text |
Response Format
JSON Response:
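A representative fmt=json response (exact field set may vary by version):

```json
{
  "status": "OK",
  "location": "trades",
  "rowsRejected": 0,
  "rowsImported": 10000,
  "header": true,
  "columns": [
    {"name": "ts", "type": "TIMESTAMP", "size": 8, "errors": 0},
    {"name": "symbol", "type": "SYMBOL", "size": 4, "errors": 0},
    {"name": "price", "type": "DOUBLE", "size": 8, "errors": 0}
  ]
}
```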
Schema Specification
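The schema form field is a JSON array of column definitions; pattern applies to temporal types. A sketch with illustrative columns:

```json
[
  {"name": "ts", "type": "TIMESTAMP", "pattern": "yyyy-MM-dd HH:mm:ss"},
  {"name": "symbol", "type": "SYMBOL"},
  {"name": "price", "type": "DOUBLE"}
]
```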
Type Mapping
| CSV Value | Inferred Type | Example |
|---|---|---|
| 123 | LONG | Integer values |
| 123.45 | DOUBLE | Decimal values |
| "text" | STRING | Quoted text |
| text | STRING/SYMBOL | Unquoted text |
| true, false | BOOLEAN | Boolean values |
| 2024-01-15T10:00:00Z | TIMESTAMP | ISO 8601 dates |
| 1705315200000 | LONG/TIMESTAMP | Unix timestamps |
Symbol vs String
Use SYMBOL for:
- Repeated values (e.g., sensor IDs, locations)
- Low cardinality (< 1M unique values)
- Frequent filtering and grouping
Use STRING for:
- Unique values (e.g., descriptions, messages)
- High cardinality
- Full-text content
Notes:
- SYMBOL columns are indexed and memory-efficient
- STRING columns are stored as-is
Advanced Features
Parallel Import
Import multiple files in parallel:
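A sketch using a thread pool, assuming one table per file in a hypothetical data/ directory:

```python
import concurrent.futures
import pathlib
import requests

def import_file(path: pathlib.Path) -> str:
    # Table name derived from the file name (illustrative convention).
    with path.open("rb") as f:
        resp = requests.post(
            "http://localhost:9000/imp",
            params={"name": path.stem, "fmt": "json"},
            files={"data": (path.name, f)},
        )
    return f"{path.name}: {resp.status_code}"

files = sorted(pathlib.Path("data").glob("*.csv"))
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    for result in pool.map(import_file, files):
        print(result)
```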
Incremental Import
Append to existing table:
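With overwrite left at its default of false, rows are appended to the existing table (file name is illustrative):

```python
import requests

with open("trades_day2.csv", "rb") as f:
    resp = requests.post(
        "http://localhost:9000/imp",
        params={"name": "trades", "overwrite": "false"},
        files={"data": ("trades_day2.csv", f)},
    )
print(resp.text)
```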
Large File Import
For files > 1GB, consider:
- Splitting the file into row chunks
- Importing the chunks one at a time (see the sketch below)
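A sketch that streams the file in row chunks, repeating the header so each request is self-describing; chunk size, file, and table names are illustrative:

```python
import itertools
import requests

CHUNK_ROWS = 1_000_000

with open("big.csv", "r", encoding="utf-8") as f:
    header = f.readline()
    for i in itertools.count():
        rows = list(itertools.islice(f, CHUNK_ROWS))
        if not rows:
            break
        # Prepend the header to every chunk so forceHeader=true stays valid.
        chunk = (header + "".join(rows)).encode("utf-8")
        resp = requests.post(
            "http://localhost:9000/imp",
            params={"name": "big_table", "forceHeader": "true"},
            files={"data": (f"chunk_{i}.csv", chunk)},
        )
        print(f"chunk {i}: {resp.status_code}")
```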
Custom Delimiters
QuestDB auto-detects delimiters, but you can specify one explicitly:
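A sketch passing a delimiter query parameter for a pipe-separated file; check parameter support against your server version:

```python
import requests

# Pipe-separated file; the delimiter is passed as a single character.
with open("data.psv", "rb") as f:
    resp = requests.post(
        "http://localhost:9000/imp",
        params={"name": "data", "delimiter": "|"},
        files={"data": ("data.psv", f)},
    )
print(resp.text)
```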
Error Handling
Skip Row on Error
Continue the import despite row-level errors:
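For example (file and table names are illustrative):

```python
import requests

with open("dirty.csv", "rb") as f:
    resp = requests.post(
        "http://localhost:9000/imp",
        params={"name": "dirty", "atomicity": "skipRow", "fmt": "json"},
        files={"data": ("dirty.csv", f)},
    )
body = resp.json()
print(body["rowsImported"], "imported,", body["rowsRejected"], "rejected")
```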
Abort on Error
Stop the import on the first error. This is the default, so no atomicity parameter is needed; it can also be set explicitly with atomicity=abort.
Common Errors
Parse Error: a line exceeds the configured maximum length; increase http.text.max.required.line.length (see Configuration below).
Extra Values: a line contains more values than the table has columns; set skipLev=true or fix the CSV.
Performance Optimization
Import Speed
Typical import rates (hardware-dependent):
- Small rows (< 100 bytes): 1M rows/sec
- Medium rows (100-500 bytes): 500K rows/sec
- Large rows (> 500 bytes): 100K rows/sec
Optimization Tips
- Provide explicit schema - Skip type inference
- Use appropriate types - SYMBOL for repeated values
- Disable header detection - Set forceHeader=true
- Partition appropriately - Match query patterns
- Import to WAL tables - Better concurrent write performance
- Parallel imports - Multiple files simultaneously
Configuration
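Import-related settings live in the server configuration file; the line-length key mentioned above is shown with an illustrative value:

```text
# conf/server.conf - illustrative value, tune to your longest CSV line
http.text.max.required.line.length=4096
```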
Complete Examples
Python Import Script
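A complete sketch combining the options above; the endpoint is the default local instance, and file, table, and column names are placeholders:

```python
#!/usr/bin/env python3
"""Sketch: import a CSV file into QuestDB via /imp and report the outcome."""
import sys
import requests

QUESTDB_URL = "http://localhost:9000/imp"  # assumes a local instance

def import_csv(path: str, table: str, ts_column: str) -> bool:
    with open(path, "rb") as f:
        resp = requests.post(
            QUESTDB_URL,
            params={
                "name": table,
                "timestamp": ts_column,
                "partitionBy": "DAY",
                "atomicity": "skipRow",
                "fmt": "json",
            },
            files={"data": (path, f)},
        )
    resp.raise_for_status()
    body = resp.json()
    print(f"status={body.get('status')} "
          f"imported={body.get('rowsImported')} rejected={body.get('rowsRejected')}")
    return body.get("rowsRejected") == 0

if __name__ == "__main__":
    if len(sys.argv) != 4:
        sys.exit("usage: import_csv.py <file.csv> <table> <timestamp-column>")
    sys.exit(0 if import_csv(sys.argv[1], sys.argv[2], sys.argv[3]) else 1)
```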
Bash Import Script
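A shell equivalent wrapping the same /imp request with curl (same assumptions; argument names are placeholders):

```bash
#!/usr/bin/env bash
# Sketch: import a CSV into QuestDB via /imp.
set -euo pipefail

FILE="${1:?usage: import.sh <file.csv> <table>}"
TABLE="${2:?usage: import.sh <file.csv> <table>}"

# --fail makes curl exit non-zero on HTTP errors, so the script fails loudly.
curl --fail -sS \
  -F "data=@${FILE}" \
  "http://localhost:9000/imp?name=${TABLE}&fmt=json&atomicity=skipRow"
echo
```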
Best Practices
- Validate CSV locally - Check format before import
- Use explicit schema - Faster imports, predictable types
- Test with sample - Import small subset first
- Monitor errors - Check rowsRejected in response
- Choose appropriate partition - Based on query patterns
- Use SYMBOL for tags - Better performance for repeated values
- Specify timestamp column - Don’t rely on auto-detection
- Handle errors gracefully - Use atomicity=skipRow for dirty data
Related Topics
- REST API - Complete HTTP API reference
- InfluxDB Line Protocol - Alternative ingestion method
- SQL Reference - Query imported data
- Table Management - Manage tables after import