## What is the Outbound Flow?
The outbound flow lets you push JSON data into FileFeed through the API instead of uploading files via SFTP. It uses the same pipeline infrastructure — schemas, mappings, transforms, webhooks — but the entry point is an HTTP API call rather than a file drop.

**SFTP (Inbound):** the client uploads a file to SFTP → FileFeed detects and processes it.

**API (Outbound):** your backend pushes JSON data via the API → FileFeed processes it.
## When to use
- You already have the data in your backend and want to push it into a pipeline
- You don’t want to manage SFTP connections for a particular data source
- You need programmatic control over when data enters the pipeline
- You’re building an integration that produces data rather than receiving files
## Prerequisites
Before using the outbound flow, you need:

- A Client — with an `awsUserName` (created in the dashboard or via API)
- A Schema — defining the target data structure
- An Outbound Pipeline — with `direction: "outbound"`, linking the client and schema with field mappings
- An API key — a user-type key for authentication
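For orientation, an outbound pipeline tying these pieces together might look roughly like the fragment below. The field names are illustrative assumptions, not the exact API shape:

```json
{
  "direction": "outbound",
  "clientName": "acme-corp",
  "schemaId": "sch_orders",
  "mappings": [
    { "source": "order_id", "target": "orderId" },
    { "source": "qty", "target": "quantity" }
  ]
}
```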
The pipeline must have `direction` set to `"outbound"`. Inbound pipelines are rejected by the upload endpoints.

## Architecture
### How it works
The outbound upload uses a multipart flow similar to AWS S3 multipart uploads.

**Step 1: Initialize.** Create an upload session specifying the client, pipeline, number of parts, and an optional filename.

**Step 2: Upload parts.** Upload each part as a JSON array of objects. Parts are numbered 1 through `totalParts`.
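Steps 1 and 2 can be sketched in TypeScript with `fetch` (Node 18+). The request and response field names (`clientName`, `pipelineId`, `totalParts`, `fileName`, `uploadId`) and the `x-api-key` header are assumptions; check the API reference for the exact shapes:

```typescript
const BASE = "https://api.filefeed.example"; // placeholder base URL
const HEADERS = { "Content-Type": "application/json", "x-api-key": "YOUR_API_KEY" };

// Step 1: create an upload session and get back its uploadId.
async function initUpload(totalParts: number): Promise<string> {
  const res = await fetch(`${BASE}/outbound/uploads`, {
    method: "POST",
    headers: HEADERS,
    body: JSON.stringify({
      clientName: "acme-corp",
      pipelineId: "pl_123",
      totalParts,
      fileName: "orders.json", // optional
    }),
  });
  const { uploadId } = await res.json();
  return uploadId;
}

// Step 2: each part body is a JSON array of objects; parts are numbered 1..totalParts.
async function uploadPart(uploadId: string, partNumber: number, records: object[]) {
  await fetch(`${BASE}/outbound/uploads/${uploadId}/parts/${partNumber}`, {
    method: "PUT",
    headers: HEADERS,
    body: JSON.stringify(records),
  });
}
```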
**Step 3: Complete.** Finalize the upload by listing all parts. FileFeed combines them into one file, stores it in S3, and triggers pipeline processing.
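Step 3 might then look like the following sketch; the `{ parts: [...] }` body shape is an assumption about the request format:

```typescript
const API = "https://api.filefeed.example"; // placeholder base URL

// Step 3: list every part number so FileFeed can combine them in order.
async function completeUpload(uploadId: string, totalParts: number) {
  const parts = Array.from({ length: totalParts }, (_, i) => i + 1); // [1, 2, ..., totalParts]
  const res = await fetch(`${API}/outbound/uploads/${uploadId}/complete`, {
    method: "POST",
    headers: { "Content-Type": "application/json", "x-api-key": "YOUR_API_KEY" },
    body: JSON.stringify({ parts }),
  });
  return res.json();
}
```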
**Step 4: Consume results.** After processing completes, a pipeline run is created. Use the standard pipeline runs API to fetch the results.

## Quick path: `uploadJson` helper
For most use cases, the SDK provides a convenience method that handles chunking, part uploads, and completion in one call.
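Under the hood, such a helper has to split the records into parts before uploading them. A minimal chunking sketch (the SDK's actual part-size logic is not documented here):

```typescript
// Split records into parts of at most maxPerPart items each: the kind of
// chunking a helper like uploadJson() would do before a multipart upload.
function chunkRecords<T>(records: T[], maxPerPart: number): T[][] {
  const parts: T[][] = [];
  for (let i = 0; i < records.length; i += maxPerPart) {
    parts.push(records.slice(i, i + maxPerPart));
  }
  return parts;
}

// chunkRecords([1, 2, 3, 4, 5], 2) → [[1, 2], [3, 4], [5]]
```

A call might then look like `await filefeed.uploadJson({ pipelineId, records })`, though the SDK's import path and option names are assumptions here.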
## Checking upload status

At any point during the upload, you can check progress.
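A minimal status poll might look like this; the response fields shown (`state`, `partsUploaded`) are assumptions about the payload:

```typescript
// GET the upload session to see its current state (initiated, uploading,
// completed, or aborted).
async function getUploadStatus(uploadId: string): Promise<{ state: string }> {
  const res = await fetch(`https://api.filefeed.example/outbound/uploads/${uploadId}`, {
    headers: { "x-api-key": "YOUR_API_KEY" },
  });
  return res.json();
}
```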
## Aborting an upload

Cancel an in-progress upload to clean up temporary parts. When an upload is aborted:

- All temporary part data is deleted from storage
- The session is marked as `aborted`
- Further uploads to this session are rejected
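A matching abort call, as a sketch:

```typescript
// POST to the abort endpoint; FileFeed deletes the temporary parts and
// marks the session aborted.
async function abortUpload(uploadId: string): Promise<void> {
  await fetch(`https://api.filefeed.example/outbound/uploads/${uploadId}/abort`, {
    method: "POST",
    headers: { "x-api-key": "YOUR_API_KEY" },
  });
}
```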
## API endpoints
| Method | Endpoint | Description |
|---|---|---|
| POST | `/outbound/uploads` | Initialize upload session |
| PUT | `/outbound/uploads/:uploadId/parts/:partNumber` | Upload one part |
| POST | `/outbound/uploads/:uploadId/complete` | Complete and trigger processing |
| POST | `/outbound/uploads/:uploadId/abort` | Abort and clean up |
| GET | `/outbound/uploads/:uploadId` | Get upload status |
## Upload states
| State | Meaning |
|---|---|
| `initiated` | Session created, no parts uploaded yet |
| `uploading` | At least one part has been uploaded |
| `completed` | All parts combined, processing triggered |
| `aborted` | Upload cancelled, parts cleaned up |
## Integration checklist
- Create a Client with SFTP credentials (provides the `clientName`/`awsUserName`)
- Define a Schema (target fields and validation)
- Create a Pipeline with `direction: "outbound"` (link client + schema, define mappings)
- Get an API key (Dashboard → My Account → Security Settings)
- Push data using `uploadJson()` or the manual multipart flow
- Poll or listen for pipeline run completion
- Fetch processed data and persist it to your system
- Acknowledge the pipeline run