How-To: Loading Eloqua Activity Data into Google BigQuery

Integrating Marketing Automation and Google Cloud Platform #

Previously, we discussed several approaches to building data pipelines that integrate with Marketing Automation Platforms (MAPs) (see: Scaling Marketing Data Pipelines). The end state outlined there puts a Big Data platform at the center of data management, with the MAP becoming a source for email activity data and a sink for contact/segmentation data, as shown below:

Target big data architecture with marketing automation

This also sets the stage for future enhancements, such as advanced data science analyses, including machine learning.

In this article, we’ll walk through an example of importing email send data from Eloqua into Google BigQuery (part of Google Cloud Platform). We’ll use an Apache Beam pipeline deployed on Google Cloud Dataflow, along with the PyEloqua Python package.

Exporting Eloqua data and importing to BigQuery via Dataflow

Why BigQuery? #

GCP offers several cloud-native database solutions, so let’s quickly dig into why we’ve chosen BigQuery.

Compared with GCP’s more transactional and NoSQL options (such as Cloud SQL, Cloud Datastore, and Cloud Bigtable), BigQuery is the closest fit for our needs: it is a serverless analytics warehouse built for running SQL over large, append-heavy datasets, which is exactly the shape of email activity data.

Set up Google Cloud Platform project #

If you do not already have one, you’ll need to set up an account and project on Google Cloud Platform. At the time of this writing, Google offers a $300 free trial of GCP, valid for up to 12 months, which is more than enough for this example.

You’ll also need to enable APIs for Dataflow, BigQuery, Compute Engine, and Google Cloud Storage. Next, create a new Google Cloud Storage bucket, which Dataflow will use for staging and temporary files.
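If you’d rather script the bucket creation than use the console, a quick sketch with the google-cloud-storage client looks like this (the project ID and bucket name below are placeholders):

from google.cloud import storage

# Create a staging bucket for Dataflow; bucket names must be globally unique.
client = storage.Client(project='gcp-project-name')   # placeholder project ID
bucket = client.create_bucket('eloqua-gcp-staging')    # placeholder bucket name
print('Created bucket {}'.format(bucket.name))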

Clone example repo and install dependencies #

You can do this either locally (if you have the gcloud CLI installed and configured) or via Google Cloud Shell:

git clone https://github.com/colemanja91/eloqua-gcp
cd ./eloqua-gcp/etl-activity-email-sends
sudo bash ./install_packages.sh
# Activate resulting virtualenv
source venv/bin/activate

At this point, you may need to authenticate if you have not already:

gcloud auth application-default login
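To confirm that Python can see those application-default credentials, a quick check with the google-auth library (installed alongside the Google Cloud client libraries) is:

import google.auth

# Raises DefaultCredentialsError if no application-default credentials are found.
credentials, project = google.auth.default()
print('Authenticated against project: {}'.format(project))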

Set up BigQuery dataset and table #

Navigate to the BigQuery Console. On the left, next to your project name, click the dropdown, then select “Create new dataset.” Enter “eloqua” as the “Dataset ID” and leave the other parameters at their defaults.

Now open the file ./bq-schema/email_send.json and copy its contents. Back in the BigQuery Console, next to eloqua, click the “+” sign to create a new table. Enter activity_email_send as the table name. Under “Source data,” select “Create empty table.” Under “Schema,” click “Edit as Text” and paste the JSON you copied earlier.

Click “Create Table” and you now have a destination table for our ETL!
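If you prefer to script this step instead of clicking through the console, a sketch using the google-cloud-bigquery client might look like the following. It assumes the schema file is in the standard BigQuery JSON schema format (a list of objects with name, type, and mode keys), and the project ID is a placeholder:

import json
from google.cloud import bigquery

client = bigquery.Client(project='gcp-project-name')   # placeholder project ID

# Create the dataset.
dataset_ref = bigquery.DatasetReference('gcp-project-name', 'eloqua')
client.create_dataset(bigquery.Dataset(dataset_ref))

# Build the schema from the JSON file in the repo, then create the table.
with open('./bq-schema/email_send.json') as schema_file:
    fields = json.load(schema_file)
schema = [bigquery.SchemaField(field['name'], field['type'], mode=field.get('mode', 'NULLABLE'))
          for field in fields]
client.create_table(bigquery.Table(dataset_ref.table('activity_email_send'), schema=schema))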

Configure scripts #

Open the file etl_elq_activity_email_send.py, locate the section shown below, and replace constant definitions with the appropriate values:

PROJECT = 'gcp-project-name'
...
ELOQUA_COMPANY = 'company'
ELOQUA_USER = 'user'
ELOQUA_PASSWORD = 'password'
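As a side note, hard-coding credentials is fine for a quick test, but a safer pattern is to pull them from environment variables, for example:

import os

# Read Eloqua credentials from the environment instead of committing them to the script.
ELOQUA_COMPANY = os.environ['ELOQUA_COMPANY']
ELOQUA_USER = os.environ['ELOQUA_USER']
ELOQUA_PASSWORD = os.environ['ELOQUA_PASSWORD']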

Further down, the following line sets the time range we will export:

'EmailSend', '2017-10-01 00:00:00', '2017-10-02 00:00:00'

This shows we will be exporting all Email Send data for October 1, 2017. You may modify this as you like, but keep in mind that very large send volumes will need to be broken into several smaller data pulls, for example by looping over one-day windows as sketched below.
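Here is a minimal sketch of that approach; daily_windows is a hypothetical helper (not part of the repo) that yields one-day start/end pairs you could plug into the export definition:

from datetime import datetime, timedelta

def daily_windows(start, end, fmt='%Y-%m-%d %H:%M:%S'):
    # Yield (start, end) timestamp strings covering one day at a time.
    current = datetime.strptime(start, fmt)
    stop = datetime.strptime(end, fmt)
    while current < stop:
        nxt = min(current + timedelta(days=1), stop)
        yield current.strftime(fmt), nxt.strftime(fmt)
        current = nxt

for window_start, window_end in daily_windows('2017-10-01 00:00:00', '2017-11-01 00:00:00'):
    print(window_start, window_end)   # one export per day for October 2017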

Examine and run Dataflow pipeline #

Looking at the run() function defined in the Python script, we can see exactly how the pipeline is configured. It runs directly on Cloud Dataflow (as indicated by --runner=DataflowRunner) and ships a requirements.txt file so that PyEloqua gets installed on the worker machines.

The pipeline itself has only two components: a step that pulls the EmailSend activity export from Eloqua via PyEloqua, and a step that writes the resulting rows to the activity_email_send table in BigQuery. A stripped-down sketch of that structure follows.
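To make that structure concrete, here is an illustrative sketch of such a pipeline. It is not the repo’s actual script: fetch_email_sends is a hypothetical placeholder for the PyEloqua export logic, and the project, bucket, and table names are placeholders as well.

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def fetch_email_sends(date_range):
    # Hypothetical placeholder: the real script uses PyEloqua to create an
    # EmailSend activity export for this date range and returns rows as
    # dictionaries matching the BigQuery table schema.
    return []

options = PipelineOptions([
    '--runner=DataflowRunner',
    '--project=gcp-project-name',            # placeholder project ID
    '--temp_location=gs://your-bucket/tmp',  # placeholder GCS bucket
    '--requirements_file=requirements.txt',  # installs PyEloqua on the workers
])

with beam.Pipeline(options=options) as pipeline:
    (pipeline
     | 'date range' >> beam.Create([('2017-10-01 00:00:00', '2017-10-02 00:00:00')])
     | 'pull from Eloqua' >> beam.FlatMap(fetch_email_sends)
     | 'write to BigQuery' >> beam.io.WriteToBigQuery(
           'gcp-project-name:eloqua.activity_email_send',
           # The table was created above, so never try to create it here.
           create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER))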

Now we can run our pipeline!

cd ./etl-activity-email-sends
python etl_elq_activity_email_send.py

In the GCP console, navigate to Dataflow and you should see the following:

Pipeline visualization in Dataflow

It will take a few minutes for the workers to spin up and the pipeline to run.

View data in BigQuery #

Navigate back to the BigQuery console. Run the following query to view a subset of your data:

SELECT * FROM [<PROJECT ID>:eloqua.activity_email_send] LIMIT 10

Once you do a few back-loads of data, you’ll be able to examine your entire email history, and run more advanced SQL queries backed by the magic of BigQuery!
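If you’d rather pull query results into Python (for example, to feed the analyses mentioned earlier), the same query can be run with the google-cloud-bigquery client; a sketch, using legacy SQL to match the query above:

from google.cloud import bigquery

client = bigquery.Client(project='gcp-project-name')   # placeholder project ID

# The bracketed table syntax above is legacy SQL, so flag the job accordingly.
job_config = bigquery.QueryJobConfig(use_legacy_sql=True)
query = 'SELECT * FROM [gcp-project-name:eloqua.activity_email_send] LIMIT 10'

for row in client.query(query, job_config=job_config).result():
    print(dict(row))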

Next Steps #

There are still a few pieces missing that we’ll want to address in future posts. Once those are in place, we can get to the fun part: the advanced data science analyses and machine learning mentioned at the start. But for now, we have a good start!


Feel free to connect with me!

 