Data endpoints

Data export

Complete and timely access to messaging data is important for a wide range of tasks, such as data warehousing and analysis of user behavior. To support these tasks, the Layer platform provides several options for exporting messaging data.

Historical export

A historical export provides a complete snapshot of the current state of an application’s messaging data at the time of export. Data is exported using a scatter and gather approach, pulling data from all nodes within the system and assembling it into a single JSON document that includes the current state of all conversations and messages in the application. Historical exports are triggered directly via the Server API and can be requested at most once per day.

Scheduled export

A scheduled export provides an incremental snapshot of your application’s activity on a configurable schedule. Unlike historical exports, scheduled exports include individual events detailing when and how application entities have changed. The schema of the export is very similar to that of Webhooks and is functionally analogous to writing all webhook events out to a file.

See the Exports section for details on export functionality.

Method HTTP request Description
Register public key PUT /apps/:app_uuid/export_security Register public key to encrypt archives with
Get public key GET /apps/:app_uuid/export_security Get the current public key being used
Request historical export POST /apps/:app_uuid/exports Initialize an export
Track export status GET /apps/:app_uuid/exports/:export_uuid/status Get current details for an export
Configure scheduled export PUT /apps/:app_uuid/export_schedule Configure settings for scheduled export
Get export settings GET /apps/:app_uuid/export_schedule Get configured export settings
Get exports GET /apps/:app_uuid/exports Get export outputs
Delete export DELETE /apps/:app_uuid/exports/:export_uuid Delete the export file

Register public key

Parameters:

Name Type Description
public_key string Contents of your public key

HTTP request:

PUT /apps/:app_uuid/export_security

Example:

{
    "public_key": "-----BEGIN PUBLIC KEY-----MII..."
}
curl -X PUT \
     -H 'Accept: application/vnd.layer+json; version=3.0' \
     -H 'Authorization: Bearer <TOKEN>' \
     -H 'Content-Type: application/json' \
     -d '{"public_key":"-----BEGIN PUBLIC KEY-----MII..."}' \
     https://api.layer.com/apps/<app_uuid>/export_security

Possible responses:

Public key set successfully
Status: 200 (OK)
(Empty body)

Discussion

As Layer hosts communications that are private and potentially sensitive, we take security very seriously throughout the platform. All export archives must be secured with RSA encryption using a key size of at least 2048 bits. This key is used to encrypt a copy of the AES key that performs the actual encryption of the archive. This cryptographic protocol provides strong security guarantees and makes key rotation simple.

Before you can export any data, you must supply Layer with the public half of an RSA key pair. A key pair can be generated using OpenSSL from any Unix-like operating system (OS X, Linux, FreeBSD, etc) or via any other cryptographic solution capable of producing PEM encoded RSA keys. If OpenSSL is available on the system, then the following command can be used to generate a 2048-bit private key named layer-export-key.pem in the current working directory and echo its public counterpart to the terminal:

openssl genrsa -out layer-export-key.pem 2048 && openssl rsa -in layer-export-key.pem -pubout

Once the public key has been generated, use this endpoint to send it to Layer for use in encrypting export archives. You can change the public key at any time by issuing another PUT.

Note

Export archives that have already been generated are not re-encrypted when the public key is changed. Retain your old private keys when rotating so that all previously generated archives remain accessible.
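The registration call is a single authenticated PUT. In a scripting language it can be sketched as follows (a minimal Python sketch using only the standard library; the helper name and placeholder values are illustrative, not part of the API):

```python
import json
import urllib.request

API_BASE = "https://api.layer.com"

def build_register_key_request(app_uuid, public_key_pem, token):
    """Build the PUT request that registers a PEM-encoded RSA public key."""
    url = f"{API_BASE}/apps/{app_uuid}/export_security"
    body = json.dumps({"public_key": public_key_pem}).encode("utf-8")
    headers = {
        "Accept": "application/vnd.layer+json; version=3.0",
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    return urllib.request.Request(url, data=body, headers=headers, method="PUT")

# To actually send it (requires a valid bearer token):
# urllib.request.urlopen(build_register_key_request("<app_uuid>", pem, "<TOKEN>"))
```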

Get public key

Parameters:

This method takes no parameters.

HTTP request:

GET /apps/:app_uuid/export_security

Example:

curl -X GET \
     -H 'Accept: application/vnd.layer+json; version=3.0' \
     -H 'Authorization: Bearer <TOKEN>' \
     -H 'Content-Type: application/json' \
     https://api.layer.com/apps/<app_uuid>/export_security

Possible responses:

Got public key successfully | Status: 200 (OK)

{
    "public_key": "-----BEGIN PUBLIC KEY-----MII..."
}

Request historical export

Parameters:

This method takes no parameters.

HTTP request:

POST /apps/:app_uuid/exports

Example:

curl -X POST \
     -H 'Accept: application/vnd.layer+json; version=3.0' \
     -H 'Authorization: Bearer <TOKEN>' \
     -H 'Content-Type: application/json' \
     https://api.layer.com/apps/<app_uuid>/exports

Possible responses:

Request enqueued | Status: 202 (Accepted)

{
    "aes_iv": null,
    "completed_at": null,
    "created_at": "2016-03-11T17:57:46.501Z",
    "download_url": null,
    "download_url_expiration": null,
    "encrypted_aes_key": null,
    "expired_at": null,
    "id": "layer:///exports/c2ea7b50-e7b2-11e5-9e15-0242ac1101de",
    "public_key": "MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAxNGxpyHoXIaCS2PkL53OEvtQ6sUdtoc1unk8rMvCRul9KlBxlwvdUdHp/HpZHW4bumrtU+rFLZZOccU8CawMHMH7cdx/q2vh0sGE39kTD3antdkeLXRRgWg/01X5hzCJIaa1yMa0Pxqu88qo+svDw7mQUHcSoB5PRMC+am+eygiElB3cT656mVhDKyGUpijs0u0s5EKLSX6fFbbjy3zjdYiU8BVcYQeie1x7CB0bT7UbtgzznLHP9bkR97r7tTN1dqYoP0/LCK3SXMV9ol5jw5BsA+/1afjmqo1t7F7Uo55V+m3mNPo7JVI0S773zR1KzF8/oWV81tdSNt5IzGSKoQIDAQAB",
    "started_at": "2016-03-11T17:57:46.566Z",
    "status": "executing",
    "status_url": "https://api.layer.com/apps/c5c08562-3dff-11e4-bd1b-6a99000000e6/exports/c2ea7b50-e7b2-11e5-9e15-0242ac1101de/status",
    "type": "historical"
}

Discussion

Generating a historical export is a slow operation that may take many hours to complete. The duration is proportional to the amount of messaging data stored within Layer. Because of the processing power consumed during an export operation, no more than one per day may be requested.

Track export status

As historical exports can take a long time to complete, a status resource is exposed to allow developers to track progress. The URL is embedded within the body of the export representation on the status_url property. This resource can be polled periodically to track the completion of the export.

Parameters:

Name Type Description
app_uuid string Your Layer App ID
export_uuid string ID of the export

HTTP request:

GET /apps/:app_uuid/exports/:export_uuid/status

Example:

curl -X GET \
     -H 'Accept: application/vnd.layer+json; version=3.0' \
     -H 'Authorization: Bearer <TOKEN>' \
     -H 'Content-Type: application/json' \
     https://api.layer.com/apps/<app_uuid>/exports/<export_uuid>/status

Possible responses:

Got export details | Status: 200 (OK)

{
    "id": "EXPORT_ID",
    "type": "historical",
    "public_key": "-----BEGIN PUBLIC KEY-----\nMII...",
    "status": "executing",
    "created_at": "2016-05-14T22:10:09.177Z",
    "started_at": "2016-05-14T22:10:09.241Z",
    "completed_at": null,
    "expired_at": null,
    "download_url": null,
    "download_url_expires_at": null,
    "encrypted_aes_key": null,
    "aes_iv": null,
    "status_url": "https://api.layer.com/apps/APP_ID/exports/EXPORT_ID/status"
}
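A polling loop over this resource can be sketched as follows (a Python sketch; fetch_status stands in for whatever function GETs the export's status_url with your bearer token, and the interval and attempt limit are arbitrary choices, not API requirements):

```python
import time

def poll_export(fetch_status, interval_seconds=60, max_attempts=120):
    """Poll an export's status resource until it reports completion.

    fetch_status is any callable returning the parsed status JSON, e.g. a
    function that GETs the export's status_url with your bearer token.
    """
    for _ in range(max_attempts):
        status = fetch_status()
        if status["status"] == "completed":
            return status
        time.sleep(interval_seconds)
    raise TimeoutError("export did not complete within the polling window")
```

Because historical exports may take hours, a generous interval (minutes, not seconds) is kinder to both sides of the connection.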

Configure scheduled export

Parameters:

Name Type Description
interval string Frequency for emitting exports. Accepted values are daily or disabled
events string[] Any of message.sent, message.delivered, message.read, message.deleted, conversation.created, conversation.updated.participants, conversation.updated.metadata, conversation.deleted

HTTP request:

PUT /apps/:app_uuid/export_schedule

Example:

{
    "interval": "daily",
    "events": [
        "message.sent",
        "message.delivered",
        "message.read",
        "message.deleted",
        "conversation.created",
        "conversation.updated.participants",
        "conversation.updated.metadata",
        "conversation.deleted"
    ]
}
curl -X PUT \
     -H 'Accept: application/vnd.layer+json; version=3.0' \
     -H 'Authorization: Bearer <TOKEN>' \
     -H 'Content-Type: application/json' \
     -d '{"interval":"daily","events":["message.sent","message.delivered","message.read","message.deleted","conversation.created","conversation.updated.participants","conversation.updated.metadata","conversation.deleted"]}' \
     https://api.layer.com/apps/<app_uuid>/export_schedule

Possible responses:

Exports configured
Status: 200 (OK)
(Empty body)

Discussion

To find out when the next export is scheduled, issue a GET against the same endpoint.
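Because invalid values are easy to mistype, it can help to validate the payload locally before issuing the PUT. A Python sketch (the helper name is illustrative; the accepted values are those listed in the parameters table above):

```python
VALID_INTERVALS = {"daily", "disabled"}
VALID_EVENTS = {
    "message.sent", "message.delivered", "message.read", "message.deleted",
    "conversation.created", "conversation.updated.participants",
    "conversation.updated.metadata", "conversation.deleted",
}

def validate_schedule(settings):
    """Raise ValueError if the schedule payload uses unsupported values."""
    if settings.get("interval") not in VALID_INTERVALS:
        raise ValueError(f"interval must be one of {sorted(VALID_INTERVALS)}")
    unknown = set(settings.get("events", [])) - VALID_EVENTS
    if unknown:
        raise ValueError(f"unsupported events: {sorted(unknown)}")
    return settings
```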

Get export settings

Parameters:

This method takes no parameters.

HTTP request:

GET /apps/:app_uuid/export_schedule

Example:

curl -X GET \
     -H 'Accept: application/vnd.layer+json; version=3.0' \
     -H 'Authorization: Bearer <TOKEN>' \
     -H 'Content-Type: application/json' \
     https://api.layer.com/apps/<app_uuid>/export_schedule

Possible responses:

Got scheduled export settings | Status: 200 (OK)

{
    "interval": "daily",
    "last_export_at": null,
    "last_export_id": null,
    "next_export_at": "2016-04-29T22:01:12+00:00"
}

Discussion

Response fields:

  • interval: How often the export runs. Currently, possible values are “daily” or “disabled”.
  • last_export_at: ISO 8601 timestamp of the last time the export was executed. null indicates that the export hasn’t run yet.
  • last_export_id: ID of the last export generated. null indicates that the export hasn’t run yet.
  • next_export_at: ISO 8601 timestamp of the next time the export will be executed.

Get exports

All available export archives are exposed via the exports resource. Results returned include details about the public key, status, and secure download URLs for retrieving the encrypted archives.

Parameters:

This method takes no parameters.

HTTP request:

GET /apps/:app_uuid/exports

Example:

curl -X GET \
     -H 'Accept: application/vnd.layer+json; version=3.0' \
     -H 'Authorization: Bearer <TOKEN>' \
     -H 'Content-Type: application/json' \
     https://api.layer.com/apps/<app_uuid>/exports

Possible responses:

Got export details | Status: 200 (OK)

[
    {
        "aes_iv": "dvZXo11ZNZcSS3qJ6Vy/cw==",
        "completed_at": "2016-02-22T21:05:40.755Z",
        "created_at": "2016-02-22T21:05:44.471Z",
        "download_url": "https://storage.googleapis.com/export-prod1/scheduled/56b59936-c3ab-11e5-a1d7-f5d113005657/09c97940-d9a8-11e5-921b-0242ac110029-2016022221.tar.gz.enc?GoogleAccessId=749992877108-663m6ofgstp1ihdkejvj7o27oo9e2ln1@developer.gserviceaccount.com&Expires=1456179657&Signature=sGVk5GgKEA%2Bh4AOd5VA2KOU45hoLHavCXnpsMucjdK6M0Y8aXI%2BCAZtixo6l8F3%2FNBimoKza0eBGS7C4%2BUOuM8w7%2B6NkXgFOjZAuHEsJmimphwFUHhqDLjnRujP2q50y6GUl4G33lzA4%2FkpAqxsfNLSDwHsZUyZwDVW%2FX2KSd4s%3D",
        "download_url_expires_at": "2016-02-22T22:20:57.060Z",
        "encrypted_aes_key": "gbwxlNIYLjFmOfWiprfPY+uiiSIA1q2Gpom0zK3ZPdooO4vPz1s0fic8LduiQVsP2lPgHiSCym0Fv2KYiIutgk3bRwPikF7NUcriQLzT80k0Px5iDaGMEHAboMmVL7yDMP+qDkJ5gUsTIOGKPQKML1kjcLTvHc2j15Fhd3RYAcFaJpGJ2ZJW+Q+Ik91mvxsA6jyjO+v1mIEFhWTOTlSLu3OGFCJxj9oxLo0NqLEQVTfOiqwRGsuTiEMTMtgREP70WX4ZoAO1NgEnTaT4r8A430r6JP6Wcz1u84DOgiacA502XiMwpLQDP72ufYpjByip9LtqFSZvr7DVJkVj+cfhyg==",
        "expired_at": null,
        "id": "09c97940-d9a8-11e5-921b-0242ac110029",
        "public_key": "MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAxNGxpyHoXIaCS2PkL53OEvtQ6sUdtoc1unk8rMvCRul9KlBxlwvdUdHp/HpZHW4bumrtU+rFLZZOccU8CawMHMH7cdx/q2vh0sGE39kTD3antdkeLXRRgWg/01X5hzCJIaa1yMa0Pxqu88qo+svDw7mQUHcSoB5PRMC+am+eygiElB3cT656mVhDKyGUpijs0u0s5EKLSX6fFbbjy3zjdYiU8BVcYQeie1x7CB0bT7UbtgzznLHP9bkR97r7tTN1dqYoP0/LCK3SXMV9ol5jw5BsA+/1afjmqo1t7F7Uo55V+m3mNPo7JVI0S773zR1KzF8/oWV81tdSNt5IzGSKoQIDAQAB",
        "started_at": "2016-02-22T21:05:44.471Z",
        "status": "completed",
        "status_url": "https://staging-preview-api.layer.com/apps/56b59936-c3ab-11e5-a1d7-f5d113005657/exports/09c97940-d9a8-11e5-921b-0242ac110029/status",
        "type": "scheduled"
    }
]

Discussion

Note

Data exports are eventually pruned from our system. As a result, they should be consumed and archived promptly.

Response fields:

Name Type Description
aes_iv string A unique initialization vector used to seed the AES encryption process with randomness.
completed_at ISO 8601 timestamp The time that the export was completed.
created_at ISO 8601 timestamp The time that the export was created.
download_url string A secure, expiring URL from which to download the encrypted export archive. Refreshed automatically.
download_url_expires_at ISO 8601 timestamp The time at which the download_url will expire.
encrypted_aes_key string A copy of the AES key used to encrypt the export archive, itself encrypted with the public key.
expired_at ISO 8601 timestamp The time at which the export will expire and be purged.
id string A UUID uniquely identifying the export.
public_key string The RSA public key used to encrypt the copy of the AES key that encrypts the archive.
started_at ISO 8601 timestamp The time that the export operation started.
status string The status of the export. One of “pending”, “executing”, or “completed”.
status_url string The URL for polling export status.
type string The type of export. Either scheduled or historical.
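When processing the list client-side, it is often useful to pick out only the archives that are both completed and still downloadable. A Python sketch of that filtering, based on the response fields documented above (the helper name is illustrative):

```python
from datetime import datetime, timezone

def downloadable_exports(exports, now=None):
    """Select exports that are completed and whose download URL has not expired."""
    now = now or datetime.now(timezone.utc)
    ready = []
    for export in exports:
        if export["status"] != "completed" or not export.get("download_url"):
            continue
        # Timestamps are ISO 8601 with a trailing "Z" for UTC.
        expires = datetime.fromisoformat(
            export["download_url_expires_at"].replace("Z", "+00:00")
        )
        if expires > now:
            ready.append(export)
    return ready
```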

Decrypting export archives

Once an export has been completed and downloaded from the download_url via an HTTP GET, it must be decrypted and unarchived before the contents are accessible. Export archives are distributed as tarballs (.tar.gz) encrypted with AES-256, a symmetric-key algorithm. The AES key and initialization vector (IV) are embedded in the export JSON representation as encrypted_aes_key and aes_iv respectively; the AES key has itself been encrypted using the public key specified by the public_key property. The decryption process looks like this:

  1. Locate the path to the downloaded encrypted archive (defined as ENCRYPTED_TARBALL in the script below).
  2. Locate the RSA private key that corresponds to the public_key in the export JSON (defined as PRIVATE_KEY_PATH).
  3. Extract the encrypted_aes_key and aes_iv from the export JSON (defined as ENCRYPTED_AES_KEY and AES_IV).
  4. Pass the encrypted archive through openssl, supplying the AES-256 key (obtained by decrypting the encrypted key with the private RSA key) and the initialization vector.

The script below performs all of these actions in one shot, producing a decrypted export.tar.gz archive. The archive can then be decompressed by executing tar -xzf export.tar.gz and will produce an export.json file that contains all the exported events.

Extracting larger archives may require GNU tar because of limitations in the base file format (you may see an error such as export.json: Attempt to write to an empty file).

# path to the file you just downloaded
export ENCRYPTED_TARBALL=downloaded.tar.gz.enc
# path for the unencrypted tar
export OUTPUT_TAR=export.tar.gz
# path to the private key
export PRIVATE_KEY_PATH=priv.key
# the encrypted_aes_key from the export json
export ENCRYPTED_AES_KEY=gbwxlNIYLjFmOfWiprfPY+uiiSIA1q2Gpom0zK3ZPdooO4vPz1s0fic8LduiQVsP2lPgHiSCym0Fv2KYiIutgk3bRwPikF7NUcriQLzT80k0Px5iDaGMEHAboMmVL7yDMP+qDkJ5gUsTIOGKPQKML1kjcLTvHc2j15Fhd3RYAcFaJpGJ2ZJW+Q+Ik91mvxsA6jyjO+v1mIEFhWTOTlSLu3OGFCJxj9oxLo0NqLEQVTfOiqwRGsuTiEMTMtgREP70WX4ZoAO1NgEnTaT4r8A430r6JP6Wcz1u84DOgiacA502XiMwpLQDP72ufYpjByip9LtqFSZvr7DVJkVj+cfhyg==
# the aes_iv key from the export json
export AES_IV=dvZXo11ZNZcSS3qJ6Vy/cw==

openssl enc -in "$ENCRYPTED_TARBALL" -out "$OUTPUT_TAR" -d -aes-256-cbc \
    -K "$(echo "$ENCRYPTED_AES_KEY" | base64 --decode | openssl rsautl -decrypt -inkey "$PRIVATE_KEY_PATH" | hexdump -ve '1/1 "%.2x"')" \
    -iv "$(echo "$AES_IV" | base64 --decode | hexdump -ve '1/1 "%.2x"')"
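The -K and -iv arguments must be hex strings. If hexdump is unavailable, the same conversion can be done in a couple of lines of Python (a sketch equivalent to the base64 --decode | hexdump pipeline above):

```python
import base64
import binascii

def b64_to_hex(value):
    """Decode a base64 string and re-encode the raw bytes as lowercase hex,
    matching the output of: base64 --decode | hexdump -ve '1/1 "%.2x"'."""
    return binascii.hexlify(base64.b64decode(value)).decode("ascii")

# b64_to_hex(aes_iv) yields the hex string to pass as the -iv argument;
# apply the same conversion to the RSA-decrypted AES key for -K.
```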

Delete export

Parameters:

This method takes no parameters.

HTTP request:

DELETE /apps/:app_uuid/exports/:export_uuid

Example:

curl -X DELETE \
     -H 'Accept: application/vnd.layer+json; version=3.0' \
     -H 'Authorization: Bearer <TOKEN>' \
     -H 'Content-Type: application/json' \
     https://api.layer.com/apps/<app_uuid>/exports/<export_uuid>

Possible responses:

Export deleted successfully
Status: 204 (No Content)
(Empty body)

Analytics

Currently, Layer doesn’t have a built-in analytics solution. However, we provide access to the raw data, which you can use to implement such a system.

Data sources

  • Webhooks: Real-time stream of data. Great for near-real-time analytics, but requires an always-on server to receive payloads.
  • Scheduled export: JSON array containing all the webhook payloads that would have been sent over the past 24 hours. Great for business analytics and dashboards that update every 24 hours.
  • Historical export: JSON document containing all conversation and message data from the moment your application was created. It does not include fine-grained events, such as deletion timestamps or delivery and read receipts.

Available raw data

All data sources include all fields for conversation, message, and message part resources. Webhooks and scheduled exports also provide visibility into delivery and read receipts, as well as message and conversation deletion events, and changes to conversation participants and metadata.

Common use cases

  • Messages sent per day: Increment a counter every time you encounter a Message whose sent_at falls within the day (or other time range) you’re interested in. You can do this using any of the three data sources.
  • How soon new messages are read: Subscribe to message delivered and message read webhooks. Compute the difference between the timestamp of the former and the latter by matching the message.id property.
  • Agent responses per conversation: You can do this using any of the three data sources, but it may be easiest to use the Server API to get messages in a conversation, select the messages whose sender.user_id matches your agent’s user ID, and count those.
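As an illustration of the first use case, the counting can be sketched in Python. The event shape assumed here (a type field plus a message object carrying a sent_at ISO 8601 timestamp) is modeled on the webhook-style payloads described earlier and may differ from your actual data:

```python
from collections import Counter

def messages_per_day(events):
    """Count message.sent events per UTC calendar day.

    Each event is assumed to have a "type" field and a "message" object
    with a "sent_at" ISO 8601 timestamp, mirroring webhook payloads.
    """
    counts = Counter()
    for event in events:
        if event.get("type") != "message.sent":
            continue
        day = event["message"]["sent_at"][:10]  # "YYYY-MM-DD" prefix
        counts[day] += 1
    return counts
```

The same loop works over a scheduled-export JSON array or a stream of incoming webhook payloads.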

Data storage

For many use cases, you can simply subscribe to webhooks, ignore events that don’t match the event/predicate you’re interested in, and store a counter. For more advanced use cases, or for on-demand analytics, you may want to store data. The simplest way is to store webhook payloads (or unpack scheduled export payloads) into a SQL database, which will allow you to be flexible when designing queries and discovering new use cases. A standard SQL database should suffice for most use cases. For very high-volume apps, you may want to look into a data warehouse solution.