GLOSSARY

Core BatchPipe terms as used in the app and schema.


Core concepts

Workspace

A company-level account and billing boundary. Pipes, API keys, billing, and events are isolated per workspace.

User account

A human identity (email + password) that can belong to multiple workspaces.

Workspace user

A membership mapping between a user account and a workspace, with a role (owner/admin/member).

Pipe

A named stream that receives records and delivers batches to configured destinations. Pipes also define enrichment and limits.

Record

One JSON object ingested through the ingestion API (clients may send one record or an array per request).
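A sketch of the two accepted request-body shapes. The field names (user_id, event, ts) are illustrative, not part of any fixed schema:

```python
import json

# A single record is one JSON object; clients may also send an
# array of records in one request.
single = {"user_id": "u_123", "event": "signup", "ts": "2024-05-01T12:00:00Z"}
batch = [single, {"user_id": "u_456", "event": "login", "ts": "2024-05-01T12:01:00Z"}]

# Both shapes serialize to valid ingest request bodies.
body_one = json.dumps(single)
body_many = json.dumps(batch)
```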

Credentials and access

API key

A write-only credential used to ingest data. Keys are stored hashed; the secret is shown only once when created.

API key prefix

A short, non-secret portion of the key that lets you identify which key was used without exposing the secret.
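One way the key/prefix/hash split can work, as a minimal sketch. The "bp_" format, prefix length, and use of SHA-256 are assumptions for illustration, not BatchPipe's actual scheme:

```python
import hashlib
import secrets

def mint_api_key(prefix_len: int = 8) -> tuple[str, str, str]:
    """Generate a key; return (full_key, prefix, key_hash).

    Only the hash and the non-secret prefix are stored server-side;
    the full key is shown to the user exactly once at creation.
    """
    full_key = "bp_" + secrets.token_urlsafe(32)
    prefix = full_key[:prefix_len]  # safe to display in the UI later
    key_hash = hashlib.sha256(full_key.encode()).hexdigest()
    return full_key, prefix, key_hash

def verify(presented: str, stored_hash: str) -> bool:
    """Hash the presented key and compare against the stored hash."""
    return hashlib.sha256(presented.encode()).hexdigest() == stored_hash

full_key, prefix, key_hash = mint_api_key()
```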

Allowed origin (CORS)

Optional browser origins allowed for a pipe. CORS is enforced by browsers; it is not a strong identity check for non-browser clients.

Data flow

Ingest (ingestion)

Accepting records into BatchPipe (authenticate, validate, enrich, enforce limits, and buffer/queue). Ingest answers: “How much data did we accept into the pipeline?”
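The steps above (authenticate, validate, enrich, enforce limits, buffer) can be sketched as one function. Everything here is a simplified stand-in: the key store, the enrichment field name, and the in-memory buffer are all assumptions:

```python
from collections import deque
from datetime import datetime, timezone

BUFFER: deque = deque()           # stand-in for the real buffer/queue
VALID_KEY_HASHES = {"demo-hash"}  # stand-in for stored hashed API keys

def ingest(record: dict, key_hash: str, accepted_this_sec: int,
           max_records_per_sec: int = 100) -> bool:
    """Return True if the record was accepted into the pipeline."""
    if key_hash not in VALID_KEY_HASHES:           # authenticate
        return False
    if not isinstance(record, dict):               # validate
        return False
    record = {**record,                            # enrich
              "received_at": datetime.now(timezone.utc).isoformat()}
    if accepted_this_sec >= max_records_per_sec:   # enforce limits
        return False
    BUFFER.append(record)                          # buffer/queue
    return True
```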

Delivery

Workers take buffered records, build batches, and write/send them to destinations (DB/object storage/HTTP). Delivery answers: “How much did we successfully push out to destinations?”

Batch

A group of records delivered together as a unit, controlled by size/time thresholds.

Destinations

Destination

A configured endpoint where pipe batches are delivered (database, object store, or HTTP endpoint).

Destination config

JSON settings for the destination (connection details, URL, table name, etc.). Treat as sensitive; it should be encrypted at rest.

Destination column mapping

For database destinations, how ingested JSON fields map to destination table columns (name, source field, type, nullable, optional semantic role).

Destination status

Operational state of a destination: active = deliveries permitted, blocked = do not attempt delivery until unblocked (e.g., repeated failures or operator action).

Destination column source field

The JSON field name in the ingested record that should be written into a destination database column. Most values come directly from your ingestion payload (e.g. user_id). Some values can come from BatchPipe enrichment if enabled on the pipe (e.g. an ingestion timestamp field or client IP field).
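A hypothetical mapping configuration showing both kinds of source field. The column names, field names, and the exact config keys are assumptions for illustration:

```python
# Most source fields come straight from the ingestion payload;
# "received_at" here comes from BatchPipe enrichment (if enabled).
column_mappings = [
    {"column": "user_id",     "source_field": "user_id",     "type": "string", "nullable": False},
    {"column": "amount",      "source_field": "amount",      "type": "number", "nullable": True},
    {"column": "received_at", "source_field": "received_at", "type": "string",
     "nullable": False, "semantic_role": "received_at"},
]
```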

Destination column type (JSON)

For database destinations, destination_column_type stores the JSON value type of the field: string, number, boolean, object, array, or null. Dates and timestamps are usually ingested as string (e.g. ISO-8601). Delivery uses this to cast values into the actual SQL column type at the destination.
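A sketch of how delivery might cast a JSON value according to the stored type before writing it to a SQL column. The function name and the exact casting rules are assumptions:

```python
from datetime import datetime

def cast_for_column(value, destination_column_type: str):
    """Cast a JSON value toward a SQL-friendly Python value."""
    if value is None:
        return None
    if destination_column_type == "string":
        return str(value)
    if destination_column_type == "number":
        return float(value)
    if destination_column_type == "boolean":
        return bool(value)
    # object / array pass through as-is (e.g. for JSONB-style columns)
    return value

# Dates ingested as ISO-8601 strings can be parsed at delivery time
# when the SQL column is a timestamp:
ts = datetime.fromisoformat("2024-05-01T12:00:00+00:00")
```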

Destination column semantic role

Optional “special meaning” on top of normal column mapping (primarily for database destinations):

  • dedupe_id: stable identifier used to deduplicate/upsert
  • record_ts: business/event time (“when it happened”)
  • received_at: when BatchPipe received it (“when we saw it”)
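How the dedupe_id and record_ts roles might combine during delivery, as a minimal sketch (not the actual upsert logic): keep one row per dedupe_id, preferring the latest record_ts.

```python
def upsert_batch(existing: dict, batch: list[dict]) -> dict:
    """Deduplicate/upsert by dedupe_id, keeping the row with the
    latest record_ts. ISO-8601 strings in the same zone compare
    correctly as plain strings."""
    for row in batch:
        key = row["dedupe_id"]
        current = existing.get(key)
        if current is None or row["record_ts"] >= current["record_ts"]:
            existing[key] = row
    return existing
```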

Limits

max_records_per_sec: maximum records a pipe accepts per second (rate limit)

max_records_per_day: maximum records a pipe accepts per day (daily cap)

max_batch_size: maximum number of records delivered in one batch

max_batch_interval_seconds: maximum time a batch may wait before it is flushed, even if it is not full
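How these might appear together in a per-pipe configuration; the values below are made up:

```python
pipe_limits = {
    "max_records_per_sec": 100,          # ingest rate limit
    "max_records_per_day": 1_000_000,    # daily cap
    "max_batch_size": 500,               # records per delivered batch
    "max_batch_interval_seconds": 30,    # flush deadline
}
```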

Billing tables

billing_ingest_raw

Append-only ingestion usage per time window (per workspace + pipe): records and bytes accepted.

billing_delivery_raw

Append-only delivery usage emitted by workers: batches attempted/succeeded/failed and bytes delivered.

billing_usage_daily

Derived daily aggregates used for dashboards and invoicing (computed from the raw tables).
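A sketch of the raw-to-daily derivation: group append-only rows by workspace, pipe, and day, then sum. The row schema here (workspace, pipe, day, records, bytes) is an assumption standing in for the real table columns:

```python
from collections import defaultdict

def daily_aggregate(raw_rows: list[dict]) -> dict:
    """Derive billing_usage_daily-style totals from raw usage rows."""
    totals: dict = defaultdict(lambda: {"records": 0, "bytes": 0})
    for row in raw_rows:
        key = (row["workspace"], row["pipe"], row["day"])
        totals[key]["records"] += row["records"]
        totals[key]["bytes"] += row["bytes"]
    return dict(totals)
```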

Operational events

Workspace event

A stored alert or signal per workspace (for example destination auth failures, slow delivery, schema mismatch, or backlog growth).