Core BatchPipe terms as used in the app and schema.
A company-level account and billing boundary. Pipes, API keys, billing, and events are isolated per workspace.
A human identity (email + password) that can belong to multiple workspaces.
A membership mapping between a user account and a workspace, with a role (owner/admin/member).
A named stream that receives records and delivers batches to configured destinations. Pipes also define enrichment and limits.
One JSON object ingested through the ingestion API (clients may send one record or an array per request).
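Since the ingestion API accepts either a single JSON object or an array of objects per request, a handler typically normalizes both shapes to a list of records before validation. A minimal sketch (the function name and error message are illustrative, not part of BatchPipe):

```python
import json

def normalize_payload(body: str) -> list[dict]:
    """Parse an ingestion request body into a list of records."""
    parsed = json.loads(body)
    if isinstance(parsed, dict):
        return [parsed]  # single record
    if isinstance(parsed, list) and all(isinstance(r, dict) for r in parsed):
        return parsed    # array of records
    raise ValueError("body must be a JSON object or an array of objects")
```

Either way, downstream validation and enrichment can then operate on a uniform list.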
A write-only credential used to ingest data. Keys are stored hashed; the secret is shown only once when created.
A short, non-secret part of the key format so you can tell which key was used without exposing the secret.
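The key-handling pattern described above can be sketched as follows. The `bp_<prefix>_<secret>` format and SHA-256 choice are assumptions for illustration; only the prefix and the hash are persisted, so the full secret can be shown exactly once at creation time:

```python
import hashlib
import secrets

def mint_key() -> tuple[str, str, str]:
    """Return (full_key, prefix, key_hash). Persist only prefix + hash."""
    prefix = secrets.token_hex(4)        # short, non-secret identifier
    secret = secrets.token_urlsafe(24)   # shown once, never stored
    full_key = f"bp_{prefix}_{secret}"   # hypothetical key format
    key_hash = hashlib.sha256(full_key.encode()).hexdigest()
    return full_key, prefix, key_hash

def verify_key(presented: str, stored_hash: str) -> bool:
    """Compare the hash of a presented key against the stored hash."""
    return hashlib.sha256(presented.encode()).hexdigest() == stored_hash
```

The plain-text prefix lets logs and dashboards say which key was used without ever handling the secret.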
Optional browser origins allowed for a pipe. CORS is enforced by browsers; it is not a strong identity check for non-browser clients.
Accepting records into BatchPipe (authenticate, validate, enrich, enforce limits, and buffer/queue). Ingest answers: “How much data did we accept into the pipeline?”
Workers take buffered records, build batches, and write/send them to destinations (DB/object storage/HTTP). Delivery answers: “How much did we successfully push out to destinations?”
A group of records delivered together as a unit, controlled by size/time thresholds.
A configured endpoint where pipe batches are delivered (database, object store, or HTTP endpoint).
JSON settings for the destination (connection details, URL, table name, etc.). Treat as sensitive; it should be encrypted at rest.
For database destinations, how ingested JSON fields map to destination table columns (name, source field, type, nullable, optional semantic role).
Operational state of a destination: active = deliveries permitted, blocked = do not attempt delivery until unblocked (e.g., repeated failures or operator action).
The JSON field name in the ingested record that should be written into a destination database column.
Most values come directly from your ingestion payload (e.g. user_id).
Some values can come from BatchPipe enrichment if enabled on the pipe (e.g. an ingestion timestamp field or client IP field).
For database destinations, destination_column_type stores the JSON value type of the field: string, number, boolean, object, array, or null.
Dates and timestamps are usually ingested as string (e.g. ISO-8601).
Delivery uses this to cast values into the actual SQL column type at the destination.
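A minimal sketch of that casting step, assuming one plausible mapping (ISO-8601 timestamps stay as strings; object and array values are serialized to JSON text); the real delivery code may make different choices:

```python
import json
from typing import Any

def cast_value(value: Any, column_type: str, nullable: bool = True) -> Any:
    """Cast a JSON value toward a SQL-friendly Python value."""
    if value is None:
        if column_type == "null" or nullable:
            return None
        raise ValueError("NULL not allowed for non-nullable column")
    casters = {
        "string": str,
        "number": float,
        "boolean": bool,
        "object": json.dumps,  # stored as JSON text
        "array": json.dumps,
    }
    return casters[column_type](value)
```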
Optional “special meaning” on top of normal column mapping (primarily for database destinations):
max_records_per_sec: maximum records accepted per second (rate limit)
max_records_per_day: maximum records accepted per calendar day (daily cap)
max_batch_size: maximum records delivered in a single batch
max_batch_interval_seconds: maximum time a partial batch may wait before it is flushed
Append-only ingestion usage per time window (per workspace + pipe): records and bytes accepted.
Append-only delivery usage emitted by workers: batches attempted/succeeded/failed and bytes delivered.
Derived daily aggregates used for dashboards and invoicing (computed from the raw tables).
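A sketch of that roll-up for ingestion usage, assuming a hypothetical row shape of `{"workspace_id", "pipe_id", "ts", "records", "bytes"}` with `ts` as epoch seconds; in practice this would likely be a SQL GROUP BY over the raw table:

```python
from collections import defaultdict
from datetime import datetime, timezone

def daily_ingest_totals(rows: list[dict]) -> dict[tuple, dict]:
    """Sum records and bytes per (workspace_id, pipe_id, UTC day)."""
    totals: dict[tuple, dict] = defaultdict(lambda: {"records": 0, "bytes": 0})
    for r in rows:
        day = datetime.fromtimestamp(r["ts"], tz=timezone.utc).date().isoformat()
        key = (r["workspace_id"], r["pipe_id"], day)
        totals[key]["records"] += r["records"]
        totals[key]["bytes"] += r["bytes"]
    return dict(totals)
```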
A stored alert or signal per workspace (for example destination auth failures, slow delivery, schema mismatch, or backlog growth).