Azure Blob Storage
Sync Expectations
The following table describes the sync behavior of Fullstory events to Azure Blob Storage in parquet files.
Sync Interval | Relevant Timestamps | Setup Guide |
---|---|---|
Approximately processed_time rounded to the next hour + 1 hour | event_time, ingested_time, processed_time, updated_time | Help Doc |
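Once the files have been transformed into a SQL table (see the sections below), a quick way to gauge freshness against this sync interval is to look at the most recent processed_time. This is a minimal sketch; <tablename> is a placeholder for whatever sink table you create during transformation.
-- Most recently packaged events currently available in the sink table.
SELECT
  MAX(processed_time) AS latest_processed_time
FROM
  <tablename>;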
Events Parquet File Schema
The following table describes the schema for the parquet files containing Fullstory events.
Field | Type | Nullable | Description |
---|---|---|---|
event_id | string | N | The unique identifier for the event. |
event_time | string | Y | The time in UTC that the event occurred, formatted as yyyy-MM-ddTHH:mm:ss.SSSSSSZ. |
processed_time | string | N | The time in UTC that the event was packaged in the parquet file, formatted as yyyy-MM-ddTHH:mm:ss.SSSSSSZ. |
updated_time | long | Y | If set, the time when the parquet file was recreated represented by the number of milliseconds since the epoch. |
device_id | long | N | The device ID as defined in the base event model. |
session_id | long | Y | The session ID as defined in the base event model. |
view_id | long | Y | The view ID as defined in the base event model. |
event_type | string | N | The type of this event. |
event_properties | string | N | A JSON string containing the associated event properties. |
source_type | string | N | The source type of the event. |
source_properties | string | N | A JSON string containing the associated source properties. |
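If you plan to land these events in a relational sink (as in the Synapse example below), the parquet schema maps naturally onto a SQL Server table. The following is a minimal sketch only; the table name, string lengths, and the use of NVARCHAR(MAX) for the JSON columns are assumptions to adapt to your environment.
-- Hypothetical sink table mirroring the events parquet schema.
CREATE TABLE fullstory_events (
  event_id          NVARCHAR(128) NOT NULL,  -- unique identifier for the event
  event_time        NVARCHAR(32)  NULL,      -- UTC, yyyy-MM-ddTHH:mm:ss.SSSSSSZ
  processed_time    NVARCHAR(32)  NOT NULL,  -- UTC, yyyy-MM-ddTHH:mm:ss.SSSSSSZ
  updated_time      BIGINT        NULL,      -- milliseconds since the epoch, if set
  device_id         BIGINT        NOT NULL,
  session_id        BIGINT        NULL,
  view_id           BIGINT        NULL,
  event_type        NVARCHAR(64)  NOT NULL,
  event_properties  NVARCHAR(MAX) NOT NULL,  -- JSON string of event properties
  source_type       NVARCHAR(64)  NOT NULL,
  source_properties NVARCHAR(MAX) NOT NULL   -- JSON string of source properties
);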
Events Parquet File Path
The path follows the Hive partitioning layout (key=value, supported by BigQuery and other engines) and contains an ingested_time column:
fullstory_<org_id>/events/ingested_time=yyyy-MM-dd HH:mm:ss/<file_name>.parquet
The ingested_time column is the time in UTC, truncated to seconds, at which the events were ingested by Fullstory's servers.
Note that the ingested time does not bound the range of event times in a parquet file; when querying by event time, use a wider ingested_time range to make sure the target event time range is fully covered.
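To illustrate this, the files can also be queried in place from an Azure Synapse serverless SQL pool with OPENROWSET, filtering on the ingested_time folder via filepath() while applying the actual event_time filter on the data itself. This is a sketch only; the storage account, container, org ID, and date ranges are assumed values to replace with your own.
-- Count events for a six-hour event_time window, scanning a full day of
-- ingested_time partitions so late-arriving events are not missed.
SELECT
  COUNT(*) AS event_count
FROM OPENROWSET(
  BULK 'https://<storage_account>.blob.core.windows.net/<container>/fullstory_<org_id>/events/*/*.parquet',
  FORMAT = 'PARQUET'
) AS e
WHERE e.filepath(1) >= 'ingested_time=2024-06-01 00:00:00'
  AND e.filepath(1) <  'ingested_time=2024-06-02 00:00:00'
  AND e.event_time  >= '2024-06-01T06:00:00'
  AND e.event_time  <  '2024-06-01T12:00:00';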
Transforming Events Data for SQL Queries
Note: Azure provides several methods for transforming data stored in Azure Blob Storage. The following example demonstrates how to transform the data using Azure Synapse Analytics.
Azure Synapse Analytics
Azure Synapse Analytics is a cloud-based analytics and data integration service that lets you transform data stored in Azure Blob Storage by configuring a source (the Azure Blob Storage parquet files) and a sink (a SQL table in this case), as outlined in the steps below.
- Create a new Synapse workspace in the Azure portal.
- Create a new Data Flow in the Synapse workspace.
- Set a source integration dataset pointing to the Azure Blob Storage files.
- Set a sink integration dataset pointing to a desired SQL table.
- Set up a pipeline to run the Data Flow and transform the data to be used in your sink table.
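After the pipeline has run, a quick spot check against the sink table helps confirm the transformation produced the expected rows. This sketch assumes <tablename> is the SQL table chosen as the sink in the steps above.
-- Preview the most recently occurring events in the sink table.
SELECT TOP 10
  event_id,
  event_time,
  event_type,
  source_type
FROM
  <tablename>
ORDER BY
  event_time DESC;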
Querying the Events Table
The following example shows how to count rage clicks broken down by browser and URL path. It uses Microsoft SQL Server syntax, which should be adapted to the specific sink you are using.
SELECT
JSON_VALUE(source_properties, '$.user_agent.browser') AS browser,
JSON_VALUE(source_properties, '$.url.path') AS path,
COUNT(1) AS rage_clicks
FROM
<tablename> -- Update the table reference based on what was provided as sink table during transformation.
WHERE
event_type = 'click'
AND CAST(JSON_VALUE(event_properties, '$.fs_rage_count') AS INT) > 0
GROUP BY
JSON_VALUE(source_properties, '$.user_agent.browser'),
JSON_VALUE(source_properties, '$.url.path')
ORDER BY
rage_clicks DESC;