Cortana Intelligence Suite: Big Data and Advanced Analytics

In this post we will discuss a reference architecture for Big Data and Advanced Analytics using the Cortana Intelligence Suite. The architecture is relevant for organizations looking to fully manage big data and advanced analytics in order to transform all enterprise information into intelligent action. This allows you to act ahead of your competitors by going beyond looking in the rearview mirror to predicting what’s next.

In general, such solutions combine relational and semi-structured data from business and custom applications with semi-structured or unstructured data from sensors, devices, websites, social networks, and other sources.

Big Data flow

A Big Data flow includes the following steps:

  • Ingestion of data, either in bulk mode or event-based/real-time.
  • Processing data to prepare for storage.
  • Storing data in relational or unstructured storage.
  • Processing data for analytics, such as aggregation, complex calculations, and predictive or statistical modeling.
  • Visualizing data and data discovery using BI tools or custom applications.

[Figure: Big Data flow]

Big Data Reference Architecture

The Big Data reference architecture represents the most important components and data flows, enabling you to do the following:

  • Track Azure data (an Azure Website generating web logs) and store it in Azure Data Lake Store (ADLS)
  • Track real-time data from IoT Suite: collect data from IoT Suite in a permanent store (ADLS)
  • Run machine learning through R Server for HDInsight to find patterns in the data
  • Show results in BI tools (Power BI)

[Figure: Big Data reference architecture]

There are a lot of different options for storing data, processing data, and machine learning. You may use the Big Data and Machine Learning decision trees as a first aid in choosing the most relevant components for your solution. (I will also write about information management components such as Azure Data Factory, Azure Data Catalog, Sqoop, Pig, and Oozie in one of the next posts.)

Example of Big Data Solution

To show a simple example of Big Data architecture, we will use the following artificial scenario.

  • AdventureWorks Travel (AWT) provides concierge services for business travelers. In an increasingly crowded market, they are always looking for ways to differentiate themselves and provide added value to their corporate customers.
  • They are looking to pilot a web app that their internal customer service agents can use to provide additional information useful to the traveler during the flight booking process. They want to enable their agents to enter flight information and get a prediction of whether the departing flight will encounter a delay of 15 minutes or longer, taking into account the weather forecast for the departure hour. (A rough sketch of this feature preparation follows the list.)
  • The data platform team prefers to use open source technologies for data processing tasks.
  • Developers will need an easy way to create prediction experiments.
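Here is a hypothetical sketch of the feature preparation this scenario implies: join booked flights with the weather forecast for the departure airport and hour, and label delays of 15 minutes or more. The file and column names are illustrative assumptions, not AWT's actual schema.

```python
import pandas as pd

# Hypothetical inputs: one row per flight, one row per airport/hour forecast.
flights = pd.read_csv("flights.csv")    # origin, dep_hour, dep_delay_minutes, ...
weather = pd.read_csv("forecast.csv")   # airport, hour, precipitation, wind_speed, ...

# Enrich each flight with the forecast for its departure airport and hour.
features = flights.merge(
    weather,
    left_on=["origin", "dep_hour"],
    right_on=["airport", "hour"],
    how="left",
)

# Label: 1 if the departure delay is 15 minutes or longer.
features["delayed_15_plus"] = (features["dep_delay_minutes"] >= 15).astype(int)
```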

Here is an example of an architecture that solves the scenario described above. The selected components of the Cortana Intelligence Suite are highlighted.

[Figure: Cortana Intelligence Suite example architecture]

A demonstration of the described solution is available in the MTC Studio webcast: 2016-12-08 | Cortana Intelligence Suite: Big Data and Advanced Analytics.

Additional materials

Machine Learning Solutions Decision Tree

Machine learning is a technique of data science that helps computers learn from existing data in order to forecast future behaviors, outcomes, and trends. Currently there are a lot of products that can be used for this, on-premises or in the cloud, based on a single node or multiple nodes, against a relational database or Hadoop-based storage.

This article will help you choose the right machine learning solution based on your specific requirements. We will discuss open source products that can be deployed in the Microsoft Cloud (Azure), as well as Microsoft products that can be deployed on-premises.

Disclaimer. In this article I present the most important decision points based on my experience. You may use them as a first approximation before looking more deeply into the described solutions and others.

The decision on which product to select also depends on the development platform used by the specialists in your organization, and on what Big Data solution is already in use or planned. Key questions here are “Do you already use Hadoop or a Data Warehouse?” (SMP or MPP?), “In the cloud or on-premises?”, and “How much data is needed for machine learning?” (If the storage and ML engine are separated, what will be the cost and latency of data transfer?)

The process of selecting a machine learning solution may influence the selection of the Big Data solution itself. (Please see the decision tree for Big Data solutions in a separate article.)

Also important are the complexity and uniqueness of the machine learning problem, and how much effort the team is ready to invest in developing the ML solution. Some products are much easier to use (Azure Machine Learning), and for some tasks there are standard APIs available (Azure Cognitive Services).

Please note that some products can be deployed on top of the same platform. (For example, MLlib and R Server can both be deployed on top of a Spark cluster.)

So let’s look at the decision tree first. Below I will provide some comments on each of the products. (You may also download a high-resolution printable version of the decision tree.)

[Figure: Machine Learning solutions decision tree]

Azure Machine Learning

Azure Machine Learning (ML) is a cloud-based predictive analytics service that makes it possible to quickly create and deploy predictive models as analytics solutions.

[Figure: Azure Machine Learning]

In Machine Learning Studio, you can create predictive models by dragging, dropping, and connecting modules. Studio also provides a library of algorithms and samples to get you started. You may create a new ML experiment using sample experiments, R and Python packages, standard algorithms (modules), and custom R and Python scripts.

In Cortana Intelligence Gallery, you can try analytics solutions authored by others or contribute your own.

Data Science development: Visual, R language and Python.

Advantages: Graphical experiments representation; easy to study; quick deployment; Excel integration; scalable in terms of multiple experiments.

Concerns: May not be the fastest solution for processing a large amount of data in one single experiment – make sure that all components of your experiment can scale.
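As a quick illustration of the “quick deployment” point above, a trained Studio experiment can be published as a web service and called over REST. This is a hedged sketch: the URL, API key, and column names are placeholders you would copy from your own experiment's dashboard.

```python
import json
import urllib.request

# Placeholders taken from your published experiment's API help page.
url = "https://ussouthcentral.services.azureml.net/workspaces/<workspace-id>/services/<service-id>/execute?api-version=2.0"
api_key = "<your-api-key>"

payload = {
    "Inputs": {
        "input1": {
            "ColumnNames": ["feature1", "feature2"],  # hypothetical schema
            "Values": [["1.0", "2.0"]],
        }
    },
    "GlobalParameters": {},
}

request = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer " + api_key,
    },
)
print(urllib.request.urlopen(request).read().decode("utf-8"))
```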

Cognitive Services

Cognitive Services are a collection of artificial intelligence REST APIs. With Cognitive Services, developers can easily add intelligent features to their applications.

[Figure: Cognitive Services]

Cognitive Services include:

  • Vision: From faces to feelings, allow apps to understand images and video
  • Speech: Hear and speak to users by filtering noise, identifying speakers, and understanding intent
  • Language: Process text and learn how to recognize what users want
  • Knowledge: Tap into rich knowledge amassed from the web, academia, or your own data
  • Search: Access billions of web pages, images, videos, and news with the power of Bing APIs

The collection will continuously improve, adding new APIs and updating existing ones.

Development: REST APIs.

Advantages: Quick to use; platform independent; uses some publicly available data from Bing.

Concerns: Can be used only for a subset of machine learning tasks.
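To make the REST API point concrete, here is a minimal sketch calling the Text Analytics sentiment API (v2.0 of that era) from Python. The region, subscription key, and document text are placeholders.

```python
import json
import urllib.request

endpoint = "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment"
subscription_key = "<your-subscription-key>"  # placeholder

body = {"documents": [
    {"id": "1", "language": "en",
     "text": "The flight was smooth and the crew was great."}
]}

request = urllib.request.Request(
    endpoint,
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Ocp-Apim-Subscription-Key": subscription_key,
    },
)
# Returns a sentiment score between 0 (negative) and 1 (positive).
print(urllib.request.urlopen(request).read().decode("utf-8"))
```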

Microsoft R Server Family

Microsoft R Server is a broadly deployable enterprise-class analytics platform based on R; it is scalable and secure. Supporting a variety of big data statistics, predictive modeling, and machine learning capabilities, R Server supports the full range of analytics: exploration, analysis, visualization, and modeling. It is compatible with the entire collection of open source algorithms, connectors, and visualization tools shared openly via CRAN, Bioconductor, and other shared resources such as GitHub. At the same time, key extensions enable R to tackle big data challenges that exceed the capacity of open source R. Scripts can be developed on the desktop and immediately deployed to an RDBMS (SQL Server), EDW (SQL Server and Teradata), or Hadoop (Microsoft, Cloudera, Hortonworks, and MapR).

[Figure: Microsoft R Server]

Data Science development: R language (Open R, Scale R).

Advantages: Distributes work across cores and nodes (if multiple nodes are available); R scripts built using R Server can easily be run on multiple platforms running R Server, on-premises and in the cloud (important for hybrid scenarios).

SQL Server R Services (R Server for Windows)

SQL Server R Services (also known as R Server for Windows) is the advanced analytics and standalone server capability built into SQL Server Enterprise Edition. It brings the perfect mix of fast querying and in-memory OLTP optimization from SQL Server 2016, together with the data exploration, predictive modeling, scoring, and visualization capabilities of the R Services family of products. It delivers speed and performance for advanced analytics using near-database analytics and parallel threading and processing. It is integrated with SQL Server: T-SQL can call a stored procedure with R code, R scripts can run in SQL Server through the extensibility model, and result sets can be sent through a web API to a database or applications.

Data Science development: R language (Open R, Scale R).

Advantages: Included in SQL Server; distributes work across cores; no data movement out of the database, which means it works much faster; data generated by data scientists using R can be secured and managed by DBAs and queried by data analysts.

Concerns: Uses only the resources of one physical server.

[Figure: SQL Server R Services]

R Server for MapReduce

R Server for MapReduce uses Hadoop MapReduce nodes for R computations.

Using R Server on MapReduce eliminates data movement latency and removes data duplication if your data is already stored in the Hadoop cluster.

Supported platforms: HDInsight Premium, Hortonworks, Cloudera, MapR.

Data Science development: R language (Open R, Scale R)

Advantages: Distributes work across cores and nodes; if you have a lot of MapReduce code and have no plans to move off MapReduce, deploying R Server on top of it eliminates data movement for machine learning.

Concerns: Uses MapReduce, which is slower than Spark.

R Server for Spark

R Server for Spark uses Apache Spark nodes for R computations at in-memory speeds using Spark RDDs. R Server for Spark leverages the Spark DAG (Directed Acyclic Graph, used to distribute work across the cluster) and persistence for computation (a task may be left running, waiting for new requests). In this scenario you can develop models using larger amounts of data with better performance.

Using R Server on Spark eliminates data movement latency and removes data duplication if your data is already stored in the Spark cluster.

Supported platforms: HDInsight Premium with Spark and R Server, Spark on Hortonworks, Spark on Cloudera, Spark on MapR.

Data Science development: R language (Open R, Scale R)

Advantages: Uses Spark, which means fast in-memory computations; distributes work across cores and nodes.

R Server for Teradata DB

R Server for Teradata DB uses Teradata’s MPP architecture for R computations.

Data Science development: R language (Open R, Scale R)

Advantages: Works with Teradata DB; distributes work across cores; no data movement out of the database, which means it works much faster; data generated by data scientists using R can be secured and managed by DBAs and queried by data analysts.

R Server for Linux

Data Science development: R language (Open R, Scale R)

Advantages: Distributes work across cores.

Concerns: Uses only the resources of one physical server; additional time is needed to copy or stream data from HDFS to the Linux machine.

Mahout MapReduce

Mahout MapReduce is a collection of machine learning algorithms based on the Hadoop MapReduce framework.

Platform: Hadoop MapReduce, Java.

Advantages: Mahout MapReduce comes with many ML algorithms to choose from; MapReduce is a much more mature framework than Spark, and therefore more stable.

Concerns: Slow, and does not handle iterative jobs well (constrained by the disk access inherent in MapReduce).

Mahout Samsara

Mahout Samsara is a Scala-based programming environment that runs on different distributed engines (Spark and H2O) and also contains machine learning algorithms. It expresses algebraic operations in an R-like Scala DSL, which means it is readable by R programmers and in general easier to understand.

Platform: Hadoop Spark, Scala.

Advantages: Fast due to use of Spark.

Concerns: Currently under development – unstable.

MLlib

MLlib is Spark’s scalable machine learning library consisting of common learning algorithms and utilities, including classification, regression, clustering, collaborative filtering, dimensionality reduction, as well as underlying optimization primitives.

MLlib ships with Spark as a standard component, so it works seamlessly with SparkSQL, Spark Streaming and Spark GraphX. Additionally, you may deploy R Server on top of Spark cluster.

[Figure: Spark platform components]

Platform: Spark.

Data Science language: Python and Scala/Java.

Advantages: Due to its in-memory capabilities, MLlib runs iterative algorithms 5–10 times faster than Mahout based on Hadoop MapReduce; efficient and interoperable with SparkSQL, Spark Streaming, and Spark GraphX; clear and consistent APIs.

Concerns: Not all algorithms are implemented yet, though MLlib is growing very rapidly.
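For a feel of the API, here is a minimal MLlib classification sketch using the RDD-based API of that era, with toy data; it assumes a running Spark cluster.

```python
from pyspark import SparkContext
from pyspark.mllib.classification import LogisticRegressionWithLBFGS
from pyspark.mllib.regression import LabeledPoint

sc = SparkContext(appName="mllib-sketch")

# Toy training data: a label followed by two numeric features.
data = sc.parallelize([
    LabeledPoint(0.0, [0.0, 1.0]),
    LabeledPoint(1.0, [1.0, 0.0]),
    LabeledPoint(1.0, [2.0, 1.0]),
    LabeledPoint(0.0, [0.5, 2.0]),
])

model = LogisticRegressionWithLBFGS.train(data)
print(model.predict([1.5, 0.5]))  # predicted class: 0 or 1
```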

Additional information

Big Data Solutions Decision Tree

Last update: May 30, 2017

New version: Big Data Decision Tree v4 (Jan 14th, 2019).

Currently there are a lot of existing solutions for Big Data storage and analysis. In this article, I will describe a generic decision tree for choosing the right solution to achieve your goals.

Disclaimer. The process of selecting a solution for Big Data projects is very complex, with a lot of factors. So I present the most important decision points based on my experience. You may use them as a first approximation before looking more deeply into the described solutions and others.

Let’s look at the variety of products proposed for Big Data by Microsoft, on-premises and in the cloud: Analytics Platform System (APS), Apache HBase, Apache Spark, Apache Storm, Azure Data Lake Analytics (ADLA), Azure Data Lake Store (ADLS), Azure Cosmos DB, Azure Stream Analytics (ASA), Azure SQL DB, Azure SQL DW, Hortonworks Data Platform (HDP), HDInsight, Spark Streaming, etc. With so many big data solutions, we need a structured method to choose the right one for a Big Data problem.

Big Data is often described as a solution to the “three V’s problem”, and how we choose the right solution depends on which of these problems we are trying to solve first:

  • Volume: need to store and query hundreds of terabytes of data or more, and the total volume is growing. Processing systems must be scalable to handle increasing volumes of data, typically by scaling out across multiple machines.
  • Velocity: need to collect data at an increasing rate from many new types of devices, from a fast-growing number of users, and from an increasing number of devices and applications per user. Processing systems must be able to return results within an acceptable timeframe, often almost in real-time.
  • Variety: a situation where the data does not match any existing data schema – semi-structured or unstructured data.

Often you will use a combination of solutions from all or some of these areas.

Here is the decision tree, which maps the three types of problems to specific solutions. Below I will provide some comments on each of them.

There are three groups of solutions:

  • Data Warehouses (DWHs) are central relational repositories of integrated data from one or more disparate sources. They store current and historical data and are used for different analytical tasks in organizations. Use a DWH if you have structured relational data with a defined schema.
  • Complex event processing (CEP) is a method of tracking and processing streams of data about events from multiple sources, identifying meaningful events, deriving conclusions from them, and responding to them as quickly as possible. Use CEP if you need to process hundreds of thousands of events per second.
  • NoSQL systems provide a mechanism for storage and retrieval of data without tabular relations. Characteristics of NoSQL include simplicity of design and simpler “horizontal” scaling to clusters of machines. The data structures used by NoSQL databases (e.g. key-value, wide column, graph, or document) are more flexible, and therefore more difficult to store in relational databases. Use NoSQL systems if you have non-relational, semi-structured, or unstructured data with no schema defined.

Data Warehouses (DWHs)

  • SQL Server is the industry-leading database for OLTP mission-critical applications, data warehouses, enterprise information management, and analytics. Updatable ColumnStore indexes allow a significant increase in performance for analytical workloads. The combination of B-tree, in-memory tables, and ColumnStore indexes allows analytics queries to run concurrently with operational workloads using the same schema. PolyBase for SQL Server 2016 allows executing T-SQL queries against semi-structured data in Hadoop or Azure Blob Storage, in addition to relational data in SQL Server. SQL Server also includes components for Enterprise Information Management (SSIS, DQS, MDS), business intelligence (SSAS, SSRS), and machine learning (R Services). Advantages: relational store, full transactional support, T-SQL, flexible indexing, security, EIM/analytics components included, operational analytics on top of OLTP systems. Concerns: on a single node it is hard to achieve high performance for over 100 terabytes of data; sharding and custom code may be used to run multiple instances of SQL Server, which means much more development and administrative effort.
  • Azure SQL Database (DB) is a relational database service in the cloud based on the Microsoft SQL Server DBMS engine. SQL Database delivers predictable performance, scalability with no downtime, business continuity, and data protection. Advantages: relational store, full transactional support, T-SQL, flexible indexing, security. Concerns: the current maximum database size in Azure SQL DB is 1 TB, so even with sharding and custom code this solution is probably not applicable for big volumes of relational data.
  • Azure SQL Data Warehouse (DW) is the MPP version of SQL Server in Azure. It scales to petabytes of data, allows compute nodes to be resized in a minute, and is integrated with the Azure platform. Advantages: highly scalable, MPP architecture, lower-cost relational storage than Blobs, pausable compute, relational store, T-SQL, flexible indexing, security.
  • Microsoft Analytics Platform System (APS) is a combination of the massively parallel processing (MPP) engine in Microsoft Parallel Data Warehouse (PDW) with Hadoop-based big data technologies. It uses HDP to provide an on-premises solution that contains a region for Hadoop-based processing, together with PolyBase—a connectivity mechanism that integrates the MPP engine with HDP, Cloudera, and remote Hadoop-based services such as HDInsight. It allows data in Hadoop to be queried and combined with on-premises relational data, and data to be moved into and out of Hadoop. Advantages: very cost-effective, fast MPP architecture if constantly and fully used. Concerns: make sure that most of your queries do not initiate data movement between the nodes, which is a more expensive operation.

Complex event processing (CEP)

  • Azure Stream Analytics (ASA) may be used for real-time insights from devices, sensors, infrastructure, and applications. Scenarios: real-time remote management and monitoring. ASA is optimized to get streaming data from Azure Event Hubs and Azure Blob Storage. ASA SQL-like queries run continuously against the stream of incoming events. The results can be stored in Blob Storage, Event Hubs, Azure Tables, or Azure SQL Database. If the output is stored in an Event Hub, it can become the input to another ASA job, chaining together multiple real-time queries. Advantages: SQL-like query language; cloud-based, close to globally distributed data.
  • Apache Storm is a distributed stream processing computation framework written predominantly in the Clojure programming language. A Storm application is designed as a “topology” in the shape of a directed acyclic graph (DAG). Edges on the graph are named streams and direct data from one node to another. Together, the topology acts as a data transformation pipeline. Storm topologies run indefinitely until “killed”. Storm uses Zookeeper to manage its processes, and can read and write files to HDFS. Architecture: Storm processes events one at a time. Performance: millisecond latency. Advantages: complete stream processing engine with micro-batching support. Concerns: supports only streaming data; not integrated with the Azure platform.
  • Spark Streaming is used to build interactive and analytical applications, for example to create low-latency dashboards and security alert systems, or to optimize operations or prevent specific outcomes. It includes high-level operators to read streaming data from Apache Flume, Apache Kafka, and Twitter, and historical data from HDFS. Architecture: Spark streams events in small batches that accumulate over a short time window before processing them. Development: Scala + DStreams. Performance: hundreds of MB/s with low latency (a few seconds). Concerns: not integrated with the Azure platform. (A minimal sketch of the micro-batch model follows this list.)
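Here is a minimal sketch of the micro-batch model described above. The bullet notes Scala + DStreams is the usual development choice; PySpark is shown here for consistency with the other examples, and the socket source is a stand-in for Kafka or Flume.

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="streaming-sketch")
ssc = StreamingContext(sc, batchDuration=2)  # 2-second micro-batches

# Hypothetical text source on a local socket.
lines = ssc.socketTextStream("localhost", 9999)

# Count words within each micro-batch.
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
counts.pprint()

ssc.start()
ssc.awaitTermination()
```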

NoSQL systems

On-premises NoSQL:

  • Hortonworks Data Platform (HDP) is an enterprise-ready Hadoop distribution, supported by Hortonworks for business clients. It contains a collection of open source software frameworks for the distributed storage and processing of Big Data. It is scalable and fault-tolerant, and works with commodity hardware. HDP for Windows is a complete package that you can install on Windows Server to build your own fully configurable big data clusters based on Hadoop. It can be installed on physical on-premises hardware, or in virtual machines in the cloud. Development: Java for MapReduce, Pig, Hive. Advantages: enterprise-ready; full control of managing and running clusters. Concerns: installation on multiple nodes and managing versions is a complex process; you will need Hadoop admin experience.

NoSQL for storage and querying:

  • Azure Blob Storage easily and cost-effectively stores hundreds of objects, or hundreds of millions. It offers petabytes of capacity and massive scalability. You pay only for what you use, saving you more than on-premises storage options. It is designed for applications that require performance, high availability, and security, offering both client-side and server-side encryption. Blob storage is ideal for storing big data for analysis, whether using an Azure analytics service or your own on-premises solution. Advantages: geo-replication; low storage costs; scalability and data sharing outside of the Hadoop cluster. Administrative tools: PowerShell, AzCopy. Development: .NET, Java, Android, C++, Node.js, PHP, Ruby, and Python; REST API with HTTP/HTTPS requests. (See the upload sketch after this list.)
  • Azure Data Lake Store (ADLS) is a distributed, parallel file system in the cloud, performance-tuned and optimized for analytics based on different data types. It is supported by leading Hadoop distributions (Hortonworks, Cloudera, MapR) and also by HDInsight and Azure Data Lake Analytics (ADLA). Development: WebHDFS protocol (behaves like HDFS). Advantages: low-cost data store with high throughput, role-based security, AAD integration, larger storage limits than Blob Storage.
  • Azure Table Storage allows you to store petabytes of semi-structured data while keeping costs down, without manual sharding. Using geo-redundant storage, stored data is replicated 3 times within a region—and an additional 3 times in another region. Development: OData-based queries.
  • Apache HBase is a NoSQL wide-column store for writing large amounts of unstructured or semi-structured application data to run analytical processes using Hadoop (like HDP or HDInsight).
  • Azure Cosmos DB is a globally distributed database service designed to elastically and independently scale throughput and storage across any number of geographical regions with a comprehensive SLA. It supports document, key/value, or graph databases leveraging popular APIs and programming models: DocumentDB API, MongoDB API, Graph API, and Table API. Development: SQL query and transactions over JSON documents, REST; SDKs: .NET, Node.js, Java, JavaScript, Python, Xamarin, Gremlin. Advantages: different formats of storage, global distribution, elastic scale-out, low latency, 5 consistency models, automatic indexing, schema agnostic, native JSON, stored procedures.
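As promised above, here is a minimal upload sketch using the Azure Storage SDK for Python of that era (the azure-storage package); the account name, key, container, and file paths are placeholders.

```python
from azure.storage.blob import BlockBlobService

service = BlockBlobService(account_name="<account>", account_key="<key>")

# Create a container (no-op if it already exists) and upload a local file.
service.create_container("bigdata")
service.create_blob_from_path("bigdata", "logs/2017-05-30.log", "local.log")
```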

NoSQL for Advanced Analytics:

  • Azure Data Lake Analytics (ADLA) is a distributed analytics service built on Apache YARN. It handles jobs of any scale instantly by letting you set how much compute power is needed. It allows you to run analytics on exabytes of data, while paying only for the cost of the query. ADLA supports Azure Active Directory for access control, roles, and integration with on-premises identity systems. It also includes U-SQL, a language that unifies the benefits of SQL with the expressive power of C#; the runtime processes data across multiple Azure data sources. ADLA allows you to compute on data anywhere and join data from multiple cloud sources such as Azure SQL DW, Azure SQL DB, ADLS, Azure Storage Blobs, and SQL Server in an Azure VM. Advantages: U-SQL, AAD, security, integration with the Azure platform and Visual Studio. Concerns: currently only batch mode is supported — you may use HDInsight for other types of workloads.
  • HDInsight is a cloud-hosted service available to Azure subscribers that uses Azure clusters to run HDP and integrates with Azure storage. It supports the MapReduce, Spark, and Storm frameworks. Advantages: cloud-based, which means a cluster can be created in approximately 15 minutes; nodes can be scaled on demand; fully managed by Microsoft (upgrades, patching); some Visual Studio and IntelliJ integration; 99.9% SLA. Concerns: if you use some specific Hadoop-based components, make sure they are compatible.
  • Apache Spark is an open source cluster computing framework. It provides an API based on the resilient distributed dataset (RDD), a read-only multiset of data items distributed over a cluster of machines. RDDs function as a working set for distributed programs, offering a form of distributed shared memory. Components built on top of Spark include Spark SQL, Spark Streaming, MLlib, and GraphX. It can also work with R Server. Advantages: in-memory, fast (5–7 times faster than MapReduce). Concerns: fewer components compared with those based on MapReduce.
  • Machine learning technologies will be covered in the next article.

Any comments are appreciated. To be continued…

Additional information:

Vehicle Health & Driving Pattern Analysis using Cortana Analytics with Power BI

Last changes: January 13, 2016

In the following advanced analytics scenario, we will show how car dealers, insurance companies, and automobile manufacturers can use Cortana Analytics, including Power BI, to gain real-time and predictive insights into vehicle health and driving pattern behavior.

The solution can be applied to the following business use cases:

  • Usage-based insurance
  • Vehicle diagnostic
  • Engine emission control
  • Engine performance remapping
  • Eco-driving
  • Roadside assistance calls
  • Fleet management

[Figure: Automotive scenarios by Microsoft]

Starting December 1, 2015, the solution, called the Vehicle Telemetry Analytics template, is available in the Cortana Analytics Gallery. Here is a quick promotional video:

In the following video and the text below, you will see some details of the solution architecture, which includes the following technologies: Event Hubs, Azure Stream Analytics, Azure Machine Learning, Azure Data Factory, HDInsight, Azure Storage, Azure SQL DW, and Power BI.

Let’s look at the data flow and solution components.

[Figure: Connected car solution architecture]

The Event Hub is used to ingest a huge number of events from the vehicles into Azure for real-time and batch analytics.
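As a hedged sketch of that ingestion step, here is how a vehicle event could be sent to an Event Hub with the Python Service Bus SDK of that era (azure-servicebus 0.21); the namespace, key, hub name, and field names are placeholders, not the template's actual schema.

```python
import json
from azure.servicebus import ServiceBusService

sbs = ServiceBusService(
    service_namespace="<namespace>",
    shared_access_key_name="RootManageSharedAccessKey",
    shared_access_key_value="<key>",
)

# A hypothetical telemetry event.
event = {"vin": "VIN0001", "speed": 87, "engineTemperature": 96}
sbs.send_event("vehicletelemetry", json.dumps(event))
```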

The Stream Analytics job performs real-time data ingestion into long-term storage for batch analytics, and prepares data for real-time predictive insights.

Below is a description of the three queries processed in Stream Analytics. (All three queries are enriched with detailed data on each vehicle from Blob Storage.)

Query #1 performs a join with reference data from Azure Blob Storage and accumulates the resulting data into a different container in Blob Storage for rich batch analytics.

Query #2 publishes the data as-is to the output Event Hub so that it can be consumed by the RealtimeDashboard app, which invokes the machine learning request/response endpoint for real-time anomaly detection and pushes the results to the Power BI live dashboard.

Query #3 performs aggregations on the data within a 3-second tumbling window and publishes them to an Azure SQL instance provisioned as part of the deployment.
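The ASA queries themselves are written in its SQL-like language; as a rough illustration of what a 3-second tumbling window aggregation does, here is the same idea in plain Python over a list of (timestamp in seconds, speed) events.

```python
from collections import defaultdict

events = [(0.4, 80), (1.2, 82), (2.9, 85), (3.1, 90), (5.8, 88)]

# Each event belongs to exactly one non-overlapping 3-second window.
windows = defaultdict(list)
for ts, speed in events:
    windows[int(ts // 3)].append(speed)

for window, speeds in sorted(windows.items()):
    print("window start %ds: avg speed %.1f" % (window * 3, sum(speeds) / len(speeds)))
```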

Data Factory is used for:

  • Orchestration, monitoring and management of the batch analytics pipeline
  • Transformation of the data in an on-demand HDInsight cluster for rich insights into driving behavior patterns and vehicle health trends
  • Data movement across the various data stores

[Figure: Data Factory pipeline]

All data in the source datasets is processed using Hive queries, in which we describe data structures based on CSV files. Additionally, we define new tables and calculate aggregations using INSERT statements. (An illustrative PySpark equivalent of this pattern follows the figure below.)

[Figure: Hive queries]
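The demo itself uses Hive; as an illustration of the same pattern (define a table over CSV files, then aggregate into a new table), here is a hedged PySpark equivalent. The paths and column names are hypothetical, not the template's actual schema.

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("vehicle-telemetry-aggregation")
         .enableHiveSupport()
         .getOrCreate())

# Define a DataFrame over raw CSV telemetry (hypothetical path).
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("wasb://telemetry@<account>.blob.core.windows.net/raw/"))
raw.createOrReplaceTempView("vehicle_events")

# Aggregate, then persist the result as a new table.
aggregated = spark.sql("""
    SELECT vin, model, city,
           AVG(speed) AS avg_speed,
           MAX(engine_temperature) AS max_engine_temp
    FROM vehicle_events
    GROUP BY vin, model, city
""")
aggregated.write.mode("overwrite").saveAsTable("vehicle_events_aggregated")
```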

In this solution, we target the following batch insights:

  • Aggressive driving behavior (identifies trends by model, location, driving conditions, and time of year to gain insights into aggressive driving patterns, allowing Contoso Motors to use them for marketing campaigns, driving new personalized features, and usage-based insurance)
  • Fuel-efficient driving behavior (identifies trends by model, location, driving conditions, and time of year to gain insights into fuel-efficient driving patterns, allowing Contoso Motors to use them for marketing campaigns, driving new features, and proactive reporting to drivers for cost-effective and environmentally friendly driving habits)
  • Recall models (identifies models requiring recalls through anomaly detection trends and correlation with driving habits)

An anomaly detection Azure Machine Learning model is used in this demo to detect safety issues for vehicle recalls and to identify vehicles requiring maintenance. The model is published in an existing subscription, and its web service endpoint is leveraged in both request/response and batch mode for operationalization in the real-time and batch processing.

[Figure: Azure Machine Learning anomaly detection model]

Aggregated data from Blob Storage is moved to Azure SQL Data Warehouse for historical storage.

Power BI dashboards combine historical data from Azure SQL DW with real-time data from Azure Stream Analytics and the Event Hub. (A sketch of pushing rows via the Power BI REST API follows the figure below.)

[Figure: Power BI dashboard]
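For the real-time part, rows can be pushed to a Power BI streaming dataset through the REST API. This is a hedged sketch; the dataset id, table name, column names, and the AAD access token are placeholders.

```python
import json
import urllib.request

url = ("https://api.powerbi.com/v1.0/myorg/datasets/<dataset-id>"
       "/tables/RealTimeData/rows")
access_token = "<aad-access-token>"  # placeholder

rows = {"rows": [{"vin": "VIN0001", "avgSpeed": 84.6, "anomaly": 0}]}

request = urllib.request.Request(
    url,
    data=json.dumps(rows).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer " + access_token,
    },
)
urllib.request.urlopen(request)  # 200 OK on success
```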

Special thanks to authors of the demo scenario: Anand Subbaraj, Sanjay Soni, Christoph Schuler, Santosh Waghmare, Shashank Khedikar, and Sam Istephan.

 

Cortana Intelligence Suite

Cortana Intelligence Suite is a fully managed big data and advanced analytics suite that enables organizations to transform data into intelligent action.

Cortana Intelligence Suite includes the following products and capabilities (see the figure below):

[Figure: Cortana Intelligence Suite components]

Details: Official site of Cortana Analytics Suite

Additional materials:

 

Analysis of Big Data for Financial Services Institutions

In this blog post, we will look at the analysis of stock prices and dividends by industry. This task is important to all participants in the stock market, including individual retail investors; institutional investors such as mutual funds, banks, insurance companies, and hedge funds; and publicly traded corporations trading in their own shares.

In this demo, a team at a stock trading company analyzes semi-structured stock data from the New York Stock Exchange (NYSE).

  1. The Data Architect collects data and makes the information accessible to the business. He will use the Hadoop-based distribution on Windows Azure and Hive queries to aggregate stock and dividend data by year.
  2. The Financial Analyst analyzes stock data and prepares ad-hoc reports to support trading and management processes. She will use the Power Query add-in for Excel to join aggregated data from Hadoop with additional information on the top 500 S&P companies from the Azure Marketplace DataMarket. Additionally, she will create ad-hoc reports with Power View for Excel.
  3. The Trading Executive is responsible for understanding key decision makers and suggesting the best product mix of securities. He will make some modifications to the Power View reports provided by the Financial Analyst.

Details on how the Data Architect aggregates data in Hadoop are available in a separate blog post.

Below you can see some screenshots from the demo.

[Screenshots: Data Architect, Financial Analyst, and Trading Executive walkthroughs]

Aggregating Big Data with HDInsight (Hadoop) on Azure

When we are talking about Big Data, we may mean huge amounts of data (high Volume), data in any format (high Variety), and streaming data (appearing with high Velocity). Microsoft provides solutions for all of these “3V” tasks under unified monitoring, management, and security, as well as unified data movement technologies. These workloads are supported, correspondingly, by SQL Server Database and Parallel Data Warehouse, HDInsight (Hadoop for Windows or Azure), and Microsoft SQL Server StreamInsight.

[Figure: Microsoft Big Data technologies]

Let us talk about Microsoft’s Big Data technology for non-relational data.

Microsoft’s adaptation of Hadoop technology can be deployed in a cloud-based environment or on-premises. The Hadoop-based service on the Windows Azure platform offers elastic (in terms of data volume) analytics on Microsoft’s cloud platform. For customers who want to keep the data within their own data centers, Microsoft provides a Hadoop-based distribution on Windows Server.

In this blog post, we will start diving into the Hadoop on Azure technology and Hive queries for analyzing semi-structured data in Hadoop.

In addition to traditional data warehousing, where operational data is stored in special structures in an Enterprise Data Warehouse, we can store all other raw data in a “Store it All” cluster. At any moment, we can run a query against this data to answer some business question. (In addition, we may store the answer in the Data Warehouse if necessary.)

[Figure: “Store it All” data flow]

Let me introduce the first part of the Big Data demonstration, where the Data Architect stores log files with stock prices and dividends in Azure Blob Storage and uses Hive queries to aggregate data by year and stock ticker into a separate file.

[Figure: Store and aggregate]

Here is the video:

Additional materials: Windows Azure Storage Architecture Overview