Big Data Solutions Decision Tree

Last update: May 30, 2017

New version: Big Data Decision Tree v4 (Jan 14th, 2019).

Currently there are many existing solutions for Big Data storage and analysis. In this article, I will describe a generic decision tree to help you choose the right solution to achieve your goals.

Disclaimer: the process of selecting a solution for a Big Data project is very complex, with many factors involved. Here I present the most important decision points, based on my experience. You may use it as a first approximation before looking more deeply into the solutions described here and others.

Let’s look at the variety of products offered for Big Data by Microsoft on-premises and in the cloud: Analytics Platform System (APS), Apache HBase, Apache Spark, Apache Storm, Azure Data Lake Analytics (ADLA), Azure Data Lake Store (ADLS), Azure Cosmos DB, Azure Stream Analytics (ASA), Azure SQL DB, Azure SQL DW, Hortonworks Data Platform (HDP), HDInsight, Spark Streaming, etc. With so many Big Data solutions, we need a structured method for choosing the right one for a given Big Data problem.

Big Data is often described as a solution to the “three V’s problem”, and how we choose the right solution depends on which of these problems we are trying to solve first:

  • Volume: need to store and query hundreds of terabytes of data or more, and the total volume is growing. Processing systems must be scalable to handle increasing volumes of data, typically by scaling out across multiple machines.
  • Velocity: need to collect data at an increasing rate from many new types of devices, from a fast-growing number of users, and from an increasing number of devices and applications per user. Processing systems must be able to return results within an acceptable timeframe, often almost in real-time.
  • Variety: the situation when data does not match any existing data schema – semi-structured or unstructured data.

Often you will use a combination of solutions from some or all of these areas.

Here is the decision tree, which maps the three types of problems to specific solutions. Below I will provide some comments on each of them.

There are three groups of solutions:

  • Data Warehouses (DWHs) are central relational repositories of integrated data from one or more disparate sources. They store current and historical data and are used for different analytical tasks in organizations. Use a DWH if you have structured relational data with a defined schema.
  • Complex event processing (CEP) is a method of tracking and processing streams of data about events from multiple sources, identifying meaningful events, deriving conclusions from them, and responding to them as quickly as possible. Use CEP if you need to process hundreds of thousands of events per second.
  • NoSQL systems provide a mechanism for storage and retrieval of data without tabular relations. Characteristics of NoSQL: simplicity of design and simpler “horizontal” scaling to clusters of machines. The data structures used by NoSQL databases (e.g. key-value, wide column, graph, or document) are more flexible and therefore difficult to store efficiently in relational databases. Use NoSQL systems if you have non-relational, semi-structured, or unstructured data with no defined schema.

Data Warehouses (DWHs)

  • SQL Server is the industry-leading database for OLTP mission-critical applications, data warehouses, enterprise information management, and analytics. Updatable ColumnStore indexes provide a significant performance increase for analytical workloads (see the sketch after this list). The combination of B-tree, in-memory tables, and ColumnStore indexes allows analytics queries to run concurrently with operational workloads using the same schema. PolyBase for SQL Server 2016 allows you to execute T-SQL queries against semi-structured data in Hadoop or Azure Blob Storage, in addition to relational data in SQL Server. SQL Server also includes components for Enterprise Information Management (SSIS, DQS, MDS), business intelligence (SSAS, SSRS), and Machine Learning (R Services). Advantages: relational store, full transactional support, T-SQL, flexible indexing, security, EIM/Analytics components included, operational analytics on top of OLTP systems. Concerns: on a single node it is hard to achieve high performance for over 100 terabytes of data; sharding and custom code may be used to spread data across multiple instances of SQL Server, but this means much more development and administrative effort.
  • Azure SQL Database (DB) is a relational database service in the cloud based on the Microsoft SQL Server engine. SQL Database delivers predictable performance, scalability with no downtime, business continuity, and data protection. Advantages: relational store, full transactional support, T-SQL, flexible indexing, security. Concerns: the current maximum data volume in Azure SQL DB is 1 TB, so even with sharding and custom code this solution is probably not applicable for large volumes of relational data.
  • Azure SQL Data Warehouse (DW) is the MPP version of SQL Server in Azure. It scales to petabytes of data, allows compute nodes to be resized within minutes, and is integrated with the Azure platform. Advantages: highly scalable, MPP architecture, lower-cost relational storage than Blobs, pausable compute, relational store, T-SQL, flexible indexing, security.
  • Microsoft Analytics Platform System (APS) is a combination of the massively parallel processing (MPP) engine in Microsoft Parallel Data Warehouse (PDW) with Hadoop-based big data technologies. It uses HDP to provide an on-premises solution that contains a region for Hadoop-based processing, together with PolyBase—a connectivity mechanism that integrates the MPP engine with HDP, Cloudera, and remote Hadoop-based services such as HDInsight. It allows data in Hadoop to be queried and combined with on-premises relational data, and data to be moved into and out of Hadoop. Advantages: very cost-effective, fast MPP architecture if constantly and fully used. Concerns: make sure that most of your queries do not initiate data movement between the nodes, which is a much more expensive operation.
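
As a minimal illustration of the operational-analytics scenario above, here is a sketch that creates a clustered ColumnStore index and runs an analytical aggregation over it from Python via pyodbc. The server, database, and table names are hypothetical, and the connection string assumes an ODBC driver for SQL Server is installed.

```python
# Minimal sketch: ColumnStore-backed analytics in SQL Server via pyodbc.
# Server, database, and table names below are hypothetical placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 13 for SQL Server};"
    "SERVER=myserver;DATABASE=SalesDW;Trusted_Connection=yes"
)
cursor = conn.cursor()

# ColumnStore stores data column-by-column with high compression,
# which speeds up the large scans and aggregations typical of analytics.
cursor.execute(
    "CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales ON dbo.FactSales"
)
conn.commit()

# The same table can now serve analytical queries concurrently
# with operational workloads.
cursor.execute(
    "SELECT ProductKey, SUM(SalesAmount) AS Total "
    "FROM dbo.FactSales GROUP BY ProductKey"
)
for row in cursor.fetchall():
    print(row.ProductKey, row.Total)
```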

Complex event processing (CEP)

  • Azure Stream Analytics (ASA) may be used for real-time insights from devices, sensors, infrastructure, and applications. Scenarios: real-time remote management and monitoring. ASA is optimized to get streaming data from Azure Event Hubs and Azure Blob Storage. ASA SQL-like queries run continuously against the stream of incoming events. The results can be stored in Blob Storage, Event Hubs, Azure Tables, and Azure SQL database. If the output is stored in Event Hubs, it can become the input to another ASA job, chaining together multiple real-time queries. Advantages: SQL-like query language; cloud-based, close to globally distributed data.
  • Apache Storm is a distributed stream processing computation framework written predominantly in the Clojure programming language. A Storm application is designed as a “topology” in the shape of a directed acyclic graph (DAG). Edges on the graph are named streams and direct data from one node to another. Together, the topology acts as a data transformation pipeline. Storm topologies run indefinitely until “killed”. Storm uses ZooKeeper to manage its processes. Storm can read and write files to HDFS. Architecture: Storm processes the events one at a time. Performance: millisecond latency. Advantages: complete stream processing engine with micro-batching support. Concerns: supports only streaming data; not integrated with the Azure platform.
  • Spark Streaming is used to build interactive and analytical applications, such as low-latency dashboards and security alert systems, and to optimize operations or prevent specific outcomes. It includes high-level operators to read streaming data from Apache Flume, Apache Kafka, and Twitter, and historical data from HDFS. Architecture: Spark streams events in small batches collected over a short time window before processing them (see the sketch after this list). Development: Scala + DStreams. Performance: hundreds of MB/s with low latency (a few seconds). Concerns: not integrated with the Azure platform.
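
To make the micro-batch model concrete, here is a minimal Spark Streaming sketch. The article mentions Scala + DStreams; the same DStream API is also available from Python, which I use here for illustration. The socket source host and port are placeholders.

```python
# Minimal Spark Streaming sketch: word counts over 5-second micro-batches.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="StreamingWordCount")
ssc = StreamingContext(sc, 5)  # collect events into 5-second batches

# Read a text stream from a socket; host/port are placeholders.
lines = ssc.socketTextStream("localhost", 9999)

# Each micro-batch is processed with the usual Spark operators.
counts = (lines.flatMap(lambda line: line.split(" "))
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
counts.pprint()

ssc.start()             # start receiving and processing
ssc.awaitTermination()  # run until the job is stopped
```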

NoSQL systems

On-premises NoSQL:

  • Hortonworks Data Platform (HDP) is an enterprise-ready Hadoop distribution, supported by Hortonworks for business clients. It contains a collection of open source software frameworks for the distributed storage and processing of Big Data. It is scalable and fault-tolerant, and works on commodity hardware. HDP for Windows is a complete package that you can install on Windows Server to build your own fully configurable big data clusters based on Hadoop. It can be installed on physical on-premises hardware, or in virtual machines in the cloud. Development: Java for MapReduce, Pig, Hive (see the MapReduce sketch below). Advantages: enterprise-ready; full control of managing and running clusters. Concerns: installation on multiple nodes and managing versions is a complex process; you will need Hadoop admin experience.
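
MapReduce jobs on HDP are typically written in Java, but Hadoop Streaming lets any language play the mapper and reducer roles, which makes for a compact illustration. Below is a word-count sketch in Python; the input/output paths and the streaming jar location are placeholders that vary by installation.

```python
# mapper.py — emits "word<TAB>1" for every word read from stdin.
import sys

for line in sys.stdin:
    for word in line.split():
        print("%s\t1" % word)
```

```python
# reducer.py — sums counts per word; Hadoop sorts mapper output by key,
# so all lines for the same word arrive consecutively.
import sys

current_word, count = None, 0
for line in sys.stdin:
    word, n = line.rstrip("\n").split("\t")
    if word != current_word:
        if current_word is not None:
            print("%s\t%d" % (current_word, count))
        current_word, count = word, 0
    count += int(n)
if current_word is not None:
    print("%s\t%d" % (current_word, count))
```

A job like this is submitted via the streaming jar, e.g. hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar -input /data/in -output /data/out -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py (paths here are illustrative).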

NoSQL for storage and querying:

  • Azure Blob Storage easily and cost-effectively stores anywhere from hundreds of objects to hundreds of millions. It offers petabytes of capacity and massive scalability. You pay only for what you use, saving money compared with on-premises storage options. It’s designed for applications that require performance, high availability, and security—offering both client-side and server-side encryption. Blob storage is ideal for storing big data for analysis, whether using an Azure analytics service or your own on-premises solution. Advantages: geo-replication; low storage costs; scalability and data sharing outside of the Hadoop cluster. Administrative tools: PowerShell, AzCopy. Development: .NET, Java, Android, C++, Node.js, PHP, Ruby, and Python; REST API with HTTP/HTTPS requests.
  • Azure Data Lake Store (ADLS) is a distributed, parallel file system in the cloud, performance-tuned and optimized for analytics over different data types. It is supported by leading Hadoop distributions (Hortonworks, Cloudera, MapR) and also works with HDInsight and Azure Data Lake Analytics (ADLA). Development: WebHDFS protocol (behaves like HDFS). Advantages: low-cost data store with high throughput, role-based security, AAD integration, larger storage limits than Blob Storage.
  • Azure Table Storage allows you to store petabytes of semi-structured data while keeping costs down, without manual sharding. Using geo-redundant storage, stored data is replicated 3 times within a region—and an additional 3 times in another region. Development: OData-based queries.
  • Apache HBase is a NoSQL wide-column store for writing large amounts of unstructured or semi-structured application data, which can then be analyzed with Hadoop-based processing (like HDP or HDInsight).
  • Azure Cosmos DB is a globally distributed database service designed to elastically and independently scale throughput and storage across any number of geographical regions with a comprehensive SLA. It supports document, key/value, and graph databases leveraging popular APIs and programming models: DocumentDB API, MongoDB API, Graph API, and Table API (see the sketch after this list). Development: SQL queries and transactions over JSON documents, REST; SDKs: .NET, Node.js, Java, JavaScript, Python, Xamarin, Gremlin. Advantages: different storage formats, global distribution, elastic scale-out, low latency, 5 consistency models, automatic indexing, schema-agnostic, native JSON, stored procedures.
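
To illustrate the schema-agnostic, SQL-over-JSON model, here is a minimal sketch using the Python SDK (pydocumentdb). The endpoint, master key, and database/collection link are placeholders, and the collection is assumed to already exist.

```python
# Minimal sketch: storing and querying JSON documents in Azure Cosmos DB.
# Endpoint, master key, and collection link below are placeholders.
from pydocumentdb import document_client

client = document_client.DocumentClient(
    "https://myaccount.documents.azure.com:443/",
    {"masterKey": "<your-master-key>"},
)
collection_link = "dbs/mydb/colls/orders"  # assumed to already exist

# Documents are schema-agnostic JSON: no table definition is required,
# and properties are automatically indexed by default.
client.CreateDocument(
    collection_link,
    {"id": "1", "customer": "Contoso", "total": 420.0},
)

# SQL-like queries run directly over the JSON documents.
query = "SELECT c.customer, c.total FROM c WHERE c.total > 100"
for doc in client.QueryDocuments(collection_link, query):
    print(doc)
```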

NoSQL for Advanced Analytics:

  • Azure Data Lake Analytics (ADLA) is a distributed analytics service built on Apache YARN. It handles jobs of any scale instantly: you simply set how much compute power is needed. It allows you to run analytics on exabytes of data, while paying only for the cost of each query. ADLA supports Azure Active Directory for access control, roles, and integration with on-premises identity systems. It also includes U-SQL, a language that unifies the benefits of SQL with the expressive power of C#; the runtime processes data across multiple Azure data sources. ADLA allows you to compute on data anywhere and join data from multiple cloud sources, such as Azure SQL DW, Azure SQL DB, ADLS, Azure Storage Blobs, and SQL Server in an Azure VM. Advantages: U-SQL, AAD, security, integrated with the Azure platform and Visual Studio. Concerns: currently only batch mode is supported — you may use HDInsight for other types of workloads.
  • HDInsight is a cloud-hosted service available to Azure subscribers that uses Azure clusters to run HDP and integrates with Azure storage. It supports the MapReduce, Spark, and Storm frameworks. Advantages: cloud-based, which means a cluster can be created in approximately 15 minutes; scale nodes on demand; fully managed by Microsoft (upgrades, patching); some Visual Studio and IntelliJ integration; 99.9% SLA. Concerns: if you use specific Hadoop-based components, make sure they are compatible.
  • Apache Spark is an open source cluster computing framework. It provides an API based on the resilient distributed dataset (RDD), a read-only multiset of data items distributed over a cluster of machines (see the sketch after this list). RDDs function as a working set for distributed programs and offer a form of distributed shared memory. Components built on top of Spark: Spark SQL, Spark Streaming, MLlib, GraphX. It can also work with R Server. Advantages: in-memory, fast (5–7 times faster than MapReduce). Concerns: fewer components compared with those based on MapReduce.
  • Machine Learning technologies will be covered in the next article.
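
Here is a minimal PySpark sketch of the RDD model described above: transformations build a lazy lineage, and caching keeps the working set in memory across actions, which is where Spark's speed-up over disk-based MapReduce comes from. The names and numbers are purely illustrative.

```python
# Minimal PySpark sketch of the RDD programming model.
from pyspark import SparkContext

sc = SparkContext(appName="RDDExample")

# An RDD: a read-only, partitioned collection distributed over the cluster.
rdd = sc.parallelize(range(1, 1001), 8)  # 8 partitions

# Transformations are lazy; they only record the lineage of operations.
even_squares = rdd.map(lambda x: x * x).filter(lambda x: x % 2 == 0)

# cache() keeps the working set in memory for reuse across actions.
even_squares.cache()

# Actions trigger the actual distributed computation.
print(even_squares.count())  # 500
print(even_squares.sum())
sc.stop()
```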

Any comments are appreciated. To be continued…
