FullStory to Databricks

This page provides you with instructions on how to extract data from FullStory and load it into Delta Lake on Databricks. (If this manual process sounds onerous, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)

What is FullStory?

The FullStory digital intelligence platform lets you replay customers' website journeys to solve problems, find answers, and optimize customers' experience. It features funnel analytics, click maps, and robust search and segmentation.

What is Delta Lake?

Delta Lake is an open source storage layer that sits on top of existing data lake file storage, such as AWS S3, Azure Data Lake Storage, or HDFS. It uses versioned Apache Parquet files to store data, and a transaction log to keep track of commits, to provide capabilities like ACID transactions, data versioning, and audit history.

Getting data out of FullStory

You can use the FullStory API to get a list of sessions for a particular user. For example, to get information based on a user's email address, you could GET https://www.fullstory.com/api/v1/sessions?email=john@example.com.
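As a concrete starting point, here's a minimal sketch of that call in Python. It assumes you have a FullStory API key and that the key is passed in a Basic Authorization header; the key value is a placeholder, so check FullStory's API documentation for the authentication details that apply to your account.

import requests

FULLSTORY_API_KEY = "your-api-key"  # placeholder; keep real keys out of source control

def get_sessions(email):
    """Return the list of sessions FullStory has recorded for an email address."""
    response = requests.get(
        "https://www.fullstory.com/api/v1/sessions",
        params={"email": email},
        headers={"Authorization": "Basic " + FULLSTORY_API_KEY},
    )
    response.raise_for_status()
    return response.json()  # a list of session objects like the sample below

sessions = get_sessions("john@example.com")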

Sample FullStory data

Here's an example of the kind of response you might see with a query like the one above.

[{
 "UserId": 1234567890,
 "SessionId": 1234567890,
 "CreatedTime": 1411492739,
 "FsUrl": "https://www.fullstory.com/ui/ORG_ID/discover/session/1234567890:1234567890"
}]

Loading data into Delta Lake on Databricks

To create a Delta table, you can use existing Apache Spark SQL code and change the format from parquet, csv, or json to delta. Once you have a Delta table, you can write data into it using Apache Spark's Structured Streaming API. The Delta Lake transaction log guarantees exactly-once processing, even when there are other streams or batch queries running concurrently against the table. By default, streams run in append mode, which adds new records to the table. Databricks provides quickstart documentation that explains the whole process.
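To make the write step concrete, here's a minimal sketch in PySpark. It assumes the code runs in a Databricks notebook, where a SparkSession named spark already exists, and that the session records were pulled with the extraction sketch above; the table name fullstory_sessions is a placeholder.

from pyspark.sql import Row

# Build a DataFrame from the session records returned by the FullStory API.
sessions = get_sessions("john@example.com")  # from the extraction sketch above
df = spark.createDataFrame([Row(**s) for s in sessions])

# Writing with format("delta") is the only change from a parquet/csv/json write.
(df.write
   .format("delta")
   .mode("append")
   .saveAsTable("fullstory_sessions"))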

Keeping FullStory data up to date

Now what? You've built a script that pulls data from FullStory and loads it into your data warehouse, but what happens tomorrow when there are new sessions?

The key is to build your script in such a way that it can identify incremental updates to your data. Thankfully, many of FullStory's API results include fields like CreatedTime that allow you to identify records that are new since your last update (or since the newest record you've copied). Once you've taken new data into account, you can set your script up as a cron job or continuous loop to keep pulling down new data as it appears.
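Here's a minimal sketch of that incremental logic in Python. It assumes you persist the newest CreatedTime you've loaded so far (in a local JSON file, purely for illustration) and filter new sessions against it on the client side; get_sessions() is the helper from the extraction sketch above, and none of these names come from FullStory's API.

import json
import os

STATE_FILE = "fullstory_state.json"  # illustrative placeholder for your state store

def load_high_water_mark():
    """Return the largest CreatedTime seen so far, or 0 on the first run."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)["last_created_time"]
    return 0

def save_high_water_mark(ts):
    with open(STATE_FILE, "w") as f:
        json.dump({"last_created_time": ts}, f)

def fetch_new_sessions(email):
    """Fetch only the sessions created since the last run."""
    last_seen = load_high_water_mark()
    new_sessions = [s for s in get_sessions(email) if s["CreatedTime"] > last_seen]
    if new_sessions:
        save_high_water_mark(max(s["CreatedTime"] for s in new_sessions))
    return new_sessions  # append these to Delta Lake with the write shown earlier

Scheduling fetch_new_sessions() with cron, and appending whatever it returns using the Delta write shown earlier, keeps the table current without reloading history.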

Other data warehouse options

Delta Lake on Databricks is great, but sometimes you need to optimize for different things when you're choosing a data warehouse. Some folks choose to go with Amazon Redshift, Google BigQuery, PostgreSQL, or Snowflake, which are RDBMSes that use similar SQL syntax, or Panoply, which works with Redshift instances. Others choose a data lake, like Amazon S3. If you're interested in seeing the relevant steps for loading data into one of these platforms, check out To Redshift, To BigQuery, To Postgres, To Snowflake, To Panoply, and To S3.

Easier and faster alternatives

If all this sounds a bit overwhelming, don’t be alarmed. If you have all the skills necessary to go through this process, chances are building and maintaining a script like this isn’t a very high-leverage use of your time.

Thankfully, products like Stitch were built to move data from FullStory to Delta Lake on Databricks automatically. With just a few clicks, Stitch starts extracting your FullStory data, structuring it in a way that's optimized for analysis, and inserting that data into your Delta Lake on Databricks data warehouse.