Usage
The Batch Upload API allows a set of related logs to be grouped together when uploading to Zebrium. Compared to single file uploads or the upload-status APIs, batch uploads provide a more controlled and organized way to send groups of information to Zebrium. Multiple batch uploads can be underway concurrently.
The operational flow for batch uploads is:
1. Make an API call to Zebrium to begin a batch (begin_batch). On success, a unique batch id is returned, which is used in subsequent steps when working with the batch. This call creates the required Zebrium state for a batch and must be the first operation for each new batch.
2. Upload the logs associated with the batch, e.g. using ze or curl. These uploads use the configuration variable ze_batch_id to notify Zebrium that the logs are part of a batch; it must be set to the batch id returned in step 1.
3. When all files have been uploaded, make another API call to Zebrium to end the batch upload phase (end_batch). This tells Zebrium that all files for the batch are uploaded and processing can begin.
4. Check the state of the batch periodically (using the get_batch API) until processing has completed.
See Example for more information.
Additional operations that can be performed are:
- List batches and their states (see the sketch below).
- Get batch metrics.
- Cancel a non-finished batch.
- List incidents associated with a batch.
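As a sketch only, listing batches could be done against the same collection endpoint that begin_batch posts to, assuming it also supports GET (this is an assumption; confirm the list-batches call in the API reference):
# Assumed: GET on the batch collection endpoint lists the batches for this ZAPI token
curl --silent --insecure -H "Authorization: Token <authToken>" -H "Content-Type: application/json" https://<ZapiHost>/api/v2/batch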
Batch Ids and scope of batches
Each batch upload is identified by a unique string, the batch id. This is defined when the begin_batch API is called, and is valid for the lifetime of the batch upload.
Zebrium automatically returns a new batch id from the begin_batch API by default. Alternatively, a user-defined batch id may be supplied on the begin_batch call; note, however, that a user-defined id cannot be reused until the batch has expired and been removed. Batch ids are formed from 1-36 alphanumeric characters, plus '_' (underscore) and '-' (dash).
Batch ids are used as part of ZAPI uploads, along with a ZAPI token. They are associated with that ZAPI token at creation time, and may only be used with the same token in later upload calls.
The lifetime of a batch, or retention period, is specified in hours and defaults to 8 hours. It can be overridden in the begin_batch API if desired. The retention period is also used to extend the lifetime of a batch as it successfully proceeds through each state.
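For illustration, a begin_batch call that supplies a user-defined batch id and a longer retention period might look like the following. The JSON field names batch_id and retention_hours are assumptions for this sketch and may differ from the actual begin_batch schema:
# Hypothetical field names (batch_id, retention_hours) -- consult the begin_batch reference for the exact schema
curl --silent --insecure -H "Authorization: Token <authToken>" -H "Content-Type: application/json" \
     -X POST --data '{ "batch_id" : "nightly_regression-42", "retention_hours" : 24 }' \
     https://<ZapiHost>/api/v2/batch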
Batch States
Each batch upload exists in one of the following states:
| State | Interpretation |
|-------|----------------|
| Uploading | Files are being uploaded to the batch (steps 1 and 2 above). |
| Processing | All files have completed upload and are being processed (triggered by step 3 above). |
| Done | Ingest and bake have completed on all uploads. |
| Failed | The batch could not be uploaded and/or processed. |
| Cancelled | The batch was cancelled by the user prior to step 3. |
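A minimal polling sketch over the get_batch API is shown below. It assumes BATCH_ID holds the id returned by begin_batch, that <authToken> and <ZapiHost> are the ZAPI token and host used in the Example later on this page, and it extracts the state field with grep/sed rather than assuming a particular JSON layout:
# Poll the batch until it reaches a terminal state (Done, Failed or Cancelled)
while true; do
  STATE=$(curl --silent --insecure -H "Authorization: Token <authToken>" \
            https://<ZapiHost>/api/v2/batch/$BATCH_ID \
          | grep -o '"state"[^,}]*' | head -1 | sed 's/.*: *"\([^"]*\)".*/\1/')
  echo "batch $BATCH_ID state: $STATE"
  case "$STATE" in
    Done|Failed|Cancelled) break ;;
  esac
  sleep 30
done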
Opportunistic or Delayed Batch Processing
When starting a new batch, the begin_batch API (step 1) allows the user to specify how the batch is staged and processed: either delayed or opportunistic. The default is delayed processing.
In both cases, the uploaded files for a batch are processed together in one or more bundles, with no other logs included in those bundles.
| Type | Interpretation |
|------|----------------|
| Opportunistic | Zebrium may start processing uploaded files before the final commit (step 3). This can reduce the amount of temporary space needed for a batch and spreads the work out over a longer time. |
| Delayed | Zebrium delays processing uploaded files until the final commit (step 3) occurs. This guarantees the batch is processed as a unit, although it may consume more temporary space and cause a burst of work when the batch ends. |
If batches are typically small, delayed processing is appropriate. If batches are very large, opportunistic processing may be more appropriate.
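As an illustration only, the processing mode might be selected when the batch is created. The JSON field name processing and its value below are assumptions for this sketch, not the documented begin_batch parameter:
# Hypothetical field name ("processing") -- confirm the real begin_batch parameter in the API reference
curl --silent --insecure -H "Authorization: Token <authToken>" -H "Content-Type: application/json" \
     -X POST --data '{ "processing" : "opportunistic" }' https://<ZapiHost>/api/v2/batch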
Example
This example uses curl to get a batch id, uses the ze CLI to upload several files with the same batch id, and then uses curl to advise Zebrium that all data for the upload has been sent. Finally, a check is made to see whether all the data in the upload has been processed.
Begin batch, get a batch id:
curl --silent --insecure -H "Authorization: Token <authToken>" -H "Content-Type: application/json" -X POST https://<ZapiHost>/api/v2/batch
BATCH_ID=<newBatchId>
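Rather than copying the id by hand, it can also be captured directly from the begin_batch response. This is a minimal sketch assuming the jq utility is installed and that the response exposes the id in a top-level batch_id field; that field name is an assumption and may need adjusting:
# Capture the new batch id from the begin_batch response (the .batch_id path is an assumption for this sketch)
BATCH_ID=$(curl --silent --insecure -H "Authorization: Token <authToken>" -H "Content-Type: application/json" \
             -X POST https://<ZapiHost>/api/v2/batch | jq -r '.batch_id')
echo "using batch id: $BATCH_ID"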
Upload logs using the ze CLI:
ze up --url=https://mysite.example.com --auth=<authToken> --file=syslog.syslog.log --log=syslog --ids=ze_deployment_name=case1 --cfgs=ze_batch_id=$BATCH_ID
ze up --url=https://mysite.example.com --auth=<authToken> --file=jira.jira.log --log=jira --ids=ze_deployment_name=case1 --cfgs=ze_batch_id=$BATCH_ID
ze up --url=https://mysite.example.com --auth=<authToken> --file=conflnc.conflnc.log --log=conflnc --ids=ze_deployment_name=case1 --cfgs=ze_batch_id=$BATCH_ID
Indicate end of uploads:
curl --silent --insecure -H "Authorization: Token <authToken>" -H "Content-Type: application/json" -X PUT --data '{ "uploads_complete" : true }' https://<ZapiHost>/api/v2/batch/$BATCH_ID
Check whether the batch has finished processing via the state that is returned in the response payload:
curl --silent --insecure -H "Authorization: Token <authToken>" -H "Content-Type: application/json" https://<ZapiHost>/api/v2/batch/$BATCH_ID | grep state
When the state becomes Done, the batch has been successfully processed. While processing is underway, other information from the get_batch API can be used to monitor progress, for example the number of bundles created for the batch and the number completed so far:
...
"bundles": 8,
"bundles_completed": 3,
...
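For instance, a rough progress readout can be derived from those two counters. This sketch assumes jq is installed and that the counters appear at the top level of the get_batch response; the jq paths may need adjusting to the actual layout:
# Print completed/total bundle counts from the get_batch response (jq paths are assumptions for this sketch)
curl --silent --insecure -H "Authorization: Token <authToken>" -H "Content-Type: application/json" \
  https://<ZapiHost>/api/v2/batch/$BATCH_ID \
| jq -r '"bundles completed: \(.bundles_completed) of \(.bundles)"'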
Note on Cancelled and Failed batches
A batch can be cancelled while uploads are still in progress by using the cancel_batch API. This causes the batch to transition to the Cancelled state, and any uploaded files staged on Zebrium are removed.
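As a hedged sketch only, assuming cancel_batch maps to an HTTP DELETE on the batch resource (an assumption; confirm the actual method and path in the API reference), cancelling an in-progress batch might look like:
# Assumed mapping of cancel_batch to HTTP DELETE -- confirm the real method and path in the API reference
curl --silent --insecure -H "Authorization: Token <authToken>" -H "Content-Type: application/json" \
     -X DELETE https://<ZapiHost>/api/v2/batch/$BATCH_ID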
If a batch fails processing it transitions to the Failed state. The reason for the failure, if known, is available in the reason attribute. For example:
"state": "Failed",
...
"reason": "write bundle files failed"
would indicate insufficient temporary storage to process the batch.
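To surface that reason without reading the whole payload, the state and reason attributes can be pulled out of the get_batch response, mirroring the grep used earlier in the Example:
# Show the state and, for a Failed batch, the reason attribute from the get_batch response
curl --silent --insecure -H "Authorization: Token <authToken>" -H "Content-Type: application/json" \
  https://<ZapiHost>/api/v2/batch/$BATCH_ID | grep -E '"(state|reason)"'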