
Blis Optimizes Cost Effectiveness and Cloud Experience through GCP Migration

Blis is the global leader in real-world intelligence. The company specializes in understanding real, human behavior by analyzing vast quantities of mobile location data. This gives businesses a uniquely powerful tool: the truth about what people actually do, which can be used to improve consumer engagement and deliver measurable sales uplift.

Its Smart Platform provides unmatched transparency, accuracy, and scale through three proprietary technologies: SmartPin, Smart Scale and Smart Places. This enables more effective planning, activation, and measurement for marketers and business decision makers alike, fueling the next generation of insight-driven marketing.


BUSINESS CHALLENGE

To increase cost effectiveness and improve the level of support, Blis needed to migrate its services and data to Google Cloud Platform (GCP).

PROJECT DESCRIPTION

From a technical point of view, the project introduced two major challenges: high infrastructure complexity and strict migration downtime requirements.

The legacy infrastructure consisted of over 150 instances and services distributed across five regions around the world, with a centralized core infrastructure and regional centers responsible for low-latency processing of incoming requests. Services included custom Java and C++ applications, partly running on containerized infrastructure, OLTP PostgreSQL and MySQL databases, analytics databases in Redshift and Druid, distributed Kafka and Cassandra clusters, and a central Hadoop cluster for Spark-based ETL processing. In total, the data across all repositories exceeded 5 PB.

Due to the nature of its business, Blis needed to ensure zero-downtime migration for client-facing services and to limit central infrastructure downtime to a maximum of 20 minutes.

As the first stage of the project, SoftServe conducted a discovery phase in which Blis' infrastructure was analyzed to identify the technologies in use, service dependencies, and migration requirements. As a result, SoftServe developed a detailed step-by-step migration plan that mapped legacy applications to GCP services, defined the order in which applications would be migrated, and focused heavily on eliminating or minimizing downtime throughout the project.
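For illustration only, the ordering logic that such a plan relies on can be sketched as a topological sort over a service dependency graph; the service names and dependencies below are hypothetical and do not reflect Blis' actual inventory.

```python
# Illustrative sketch: deriving a safe migration order from a service
# dependency graph, as might come out of a discovery phase.
# Service names and dependencies are hypothetical.
from graphlib import TopologicalSorter  # Python 3.9+

# Map each service to the services it depends on.
dependencies = {
    "edge-bidder":    {"kafka", "memcache"},
    "etl-spark-jobs": {"hadoop", "kafka"},
    "reporting-api":  {"postgres", "redshift"},
    "kafka":          set(),
    "memcache":       set(),
    "hadoop":         set(),
    "postgres":       set(),
    "redshift":       set(),
}

# Dependencies must be available in GCP before their consumers are cut over,
# so a topological order yields a safe migration sequence.
migration_order = list(TopologicalSorter(dependencies).static_order())
print(migration_order)
```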

Next, the GCP adaptation phase of the project was executed. SoftServe engineers prepared both the infrastructure and the customer's applications for GCP migration without changing the production solution. Specifically, the GCP project structure, network infrastructure, and a security framework based on Google OAuth2 were created and integrated with the existing solution. Infrastructure deployment was automated with Deployment Manager and Ansible, and all containerized applications were moved to Kubernetes Engine using Helm.
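As an illustration of the Deployment Manager automation, a minimal Python template might look like the sketch below; the resource names, CIDR property, and overall layout are placeholders rather than the actual configuration.

```python
# Minimal Deployment Manager Python template sketch (hypothetical names).
# Deployment Manager calls generate_config() and expects a dict of resources.
def generate_config(context):
    region = context.properties.get("region", "europe-west1")
    resources = [
        {
            "name": "blis-vpc",
            "type": "compute.v1.network",
            "properties": {"autoCreateSubnetworks": False},
        },
        {
            "name": "blis-regional-subnet",
            "type": "compute.v1.subnetwork",
            "properties": {
                "network": "$(ref.blis-vpc.selfLink)",
                "ipCidrRange": context.properties["cidr"],
                "region": region,
            },
        },
    ]
    return {"resources": resources}
```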

In parallel, existing applications were re-engineered to work with the new infrastructure and technologies. For example, the Redshift data warehouse and related ETL pipelines were reworked to use Google BigQuery. Other GCP services in use included Cloud Storage (GCS), Cloud SQL, Cloud Dataproc, Cloud Composer, and Cloud Memorystore.
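A reworked ETL step of this kind would typically interact with BigQuery through its client library; the sketch below assumes placeholder project, dataset, bucket, and table names.

```python
# Sketch of the kind of BigQuery interaction an ETL step might use after the
# Redshift rework. Project, dataset, table, and bucket names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

# Load Avro files staged in GCS into a BigQuery table.
load_job = client.load_table_from_uri(
    "gs://example-bucket/events/2019-05-01/*.avro",
    "example-project.analytics.events",
    job_config=bigquery.LoadJobConfig(source_format=bigquery.SourceFormat.AVRO),
)
load_job.result()  # wait for the load to complete

# Aggregate in SQL instead of pulling raw data into the pipeline.
query = """
    SELECT country, COUNT(*) AS impressions
    FROM `example-project.analytics.events`
    GROUP BY country
"""
for row in client.query(query).result():
    print(row.country, row.impressions)
```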

In addition to service adaptation, a set of solutions was verified or developed at this stage to facilitate low-downtime migration, ranging from standard Kafka and Cassandra replication to fully custom synchronization logic for the OLTP databases and ETL pipelines.
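The custom synchronization logic itself is not described in detail; purely as an illustration, a watermark-based incremental sync between a legacy PostgreSQL instance and Cloud SQL could look roughly like the sketch below. The table, column, and connection names are hypothetical.

```python
# Purely illustrative watermark-based incremental sync between a legacy
# PostgreSQL instance and Cloud SQL. Table and column names are hypothetical;
# the real synchronization logic was custom to each database.
import time
import psycopg2
from psycopg2.extras import execute_values

SRC_DSN = "host=legacy-db dbname=app user=sync"
DST_DSN = "host=cloudsql-proxy dbname=app user=sync"

def sync_batch(src, dst, watermark):
    # Pull rows changed since the last watermark from the legacy database.
    with src.cursor() as cur:
        cur.execute(
            "SELECT id, payload, updated_at FROM events "
            "WHERE updated_at > %s ORDER BY updated_at LIMIT 1000",
            (watermark,),
        )
        rows = cur.fetchall()
    if rows:
        # Upsert the batch into the Cloud SQL replica.
        with dst.cursor() as cur:
            execute_values(
                cur,
                "INSERT INTO events (id, payload, updated_at) VALUES %s "
                "ON CONFLICT (id) DO UPDATE SET payload = EXCLUDED.payload, "
                "updated_at = EXCLUDED.updated_at",
                rows,
            )
        dst.commit()
        watermark = rows[-1][2]  # advance to the newest row copied
    return watermark

src, dst = psycopg2.connect(SRC_DSN), psycopg2.connect(DST_DSN)
watermark = "1970-01-01"
while True:
    watermark = sync_batch(src, dst, watermark)
    time.sleep(5)  # keep the data gap small ahead of the final cutover
```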

When both the GCP infrastructure and the customer's applications were ready, the step-by-step migration phase began. Client-facing services were deployed to the GCP infrastructure and ran in parallel with the legacy systems to guarantee zero-downtime migration. For the core, heavily data-oriented services, an initial data migration was conducted and data synchronization services kept the data gap between production and the new GCP services to a minimum. Finally, the full switchover of the core infrastructure to GCP was executed together with the final data synchronization steps, meeting the customer's 20-minute downtime requirement.
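A simple pre-switchover check of the remaining data gap, of the kind such a cutover typically relies on, might look like the following sketch; the connection strings, table name, and threshold are hypothetical.

```python
# Illustrative pre-switchover check: confirm the remaining data gap is small
# enough to close within the 20-minute window. All names are hypothetical.
import psycopg2

def row_count(dsn, table):
    # Table name comes from a fixed constant below, not user input.
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(f"SELECT count(*) FROM {table}")
        return cur.fetchone()[0]

legacy = row_count("host=legacy-db dbname=app user=sync", "events")
gcp = row_count("host=cloudsql-proxy dbname=app user=sync", "events")
gap = legacy - gcp
print(f"Remaining gap: {gap} rows")
assert gap < 50_000, "Gap too large: keep the sync running before cutover"
```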

KEY TECHNOLOGIES

  • Kubernetes Engine (GKE) - the main container orchestration solution for most of Blis' applications
  • BigQuery - used as a managed, scalable data warehouse, replacing the Redshift cluster from the legacy infrastructure
  • Cloud Dataproc and Cloud Composer - used to implement and orchestrate Blis' ETL pipelines, replacing the EMR and Luigi solutions used previously (see the sketch after this list)
  • Kafka, Cassandra, Cloud Memorystore, Druid, and Cloud SQL databases - used by both client-facing services and ETL pipelines and migrated with no major changes
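As referenced in the Cloud Dataproc and Cloud Composer item above, a minimal Composer (Airflow) DAG for a Dataproc-based ETL step might look like the sketch below; the project, cluster, class, and jar names are placeholders, not Blis' actual pipeline.

```python
# Minimal Cloud Composer (Airflow) DAG sketch for a Dataproc-based ETL step.
# Project, cluster, class, and jar names are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.providers.google.cloud.operators.dataproc import (
    DataprocSubmitJobOperator,
)

SPARK_JOB = {
    "reference": {"project_id": "example-project"},
    "placement": {"cluster_name": "etl-cluster"},
    "spark_job": {
        "main_class": "com.example.etl.DailyAggregation",
        "jar_file_uris": ["gs://example-bucket/jobs/etl.jar"],
    },
}

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2019, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Submit the Spark job to the Dataproc cluster once per day.
    DataprocSubmitJobOperator(
        task_id="spark_daily_aggregation",
        project_id="example-project",
        region="europe-west1",
        job=SPARK_JOB,
    )
```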

VALUE DELIVERED

Blis completed the GCP migration on a tight schedule: the geographically distributed, multi-technology infrastructure was migrated in less than four months. Zero downtime was achieved for critical applications, downtime for the remaining services stayed within the agreed 20-minute window, and the client's monthly infrastructure bill was reduced by 20%.
