Data Lake Essentials – Part 1 – Storage and Data Processing

Blog: NASSCOM Official Blog

In this multi-part series, we take you through the architecture of a data lake, exploring it across three dimensions. This first part covers storage and data processing.

Data Lake Storage And Data Processing

In today’s dynamic business environment, new data consumption requirements and use cases emerge extremely rapidly. By the time a requirements document is prepared to reflect requested changes to data stores or schemas, users have often moved on to a new set of schema changes. In contrast, the entire philosophy of a data lake revolves around being ready for an unknown use case. When the source data sits in one central lake, with no single controlling structure or schema embedded within it, supporting a new use case is a much more straightforward exercise.

Data lake architecture is all about storing large amounts of data that can be structured, semi-structured, or unstructured, e.g. web server logs, RDBMS data, NoSQL data, social media, sensor and IoT data, and third-party data. A data lake can store the data in the same format as its source systems or transform it before storing.

The main purpose of a data lake is to make organizational data from different sources accessible to a variety of end users, such as business analysts, data engineers, data scientists, product managers, and executives, so that these personas can leverage insights cost-effectively for improved business performance. Today, many forms of advanced analytics are possible only on data lakes.

When creating a data lake, we should verify data accuracy between the source and target schema. One simple example is matching record counts between the source and destination systems, as sketched below.
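
As a minimal sketch of such a check, the following compares a source table's row count (with sqlite3 standing in for the source RDBMS) against the records landed in the lake; the table and file names here are hypothetical:

    import sqlite3

    # Hypothetical check: reconcile row counts between a source table and the
    # newline-delimited files landed in the lake for the same extract.

    def source_row_count(db_path, table):
        # Count rows in the source system (sqlite3 stands in for the RDBMS).
        with sqlite3.connect(db_path) as conn:
            return conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

    def lake_row_count(landed_files):
        # Count records across the files landed in the lake.
        total = 0
        for path in landed_files:
            with open(path) as f:
                total += sum(1 for _ in f)
        return total

    src = source_row_count("source.db", "orders")
    dst = lake_row_count(["lake/raw/orders/part-0000.jsonl"])
    assert src == dst, f"count mismatch: source={src}, lake={dst}"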

Data Lake Physical Storage

The foundation of any data lake design and implementation is physical storage. The core storage layer is used for the primary data assets. Typically it contains raw and/or lightly processed data. The key considerations while evaluating technologies for cloud-based data lake storage are the following principles and requirements:



High scalability

An enterprise data lake is often intended to store centralized data for an entire division or the company at large; hence, it must be capable of significant scaling without running into arbitrary fixed capacity limits.

High durability

As the primary repository of critical enterprise data, the core storage layer must offer very high durability, which provides excellent data robustness without resorting to extreme high-availability designs.

Unstructured, semi-structured and structured data

One of the primary design considerations for a data lake is the ability to store data of all types in a single repository, e.g. XML, text, JSON, binary, and CSV.

Independence from fixed schema

Schema evolution is common in the big data age. The ability to apply schema on read, as needed for each consumption purpose, can only be accomplished if the underlying core storage layer does not dictate a fixed schema.
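
To illustrate, here is a minimal schema-on-read sketch in Python: raw JSON events are stored exactly as they arrive, and each consumer projects only the fields it needs at read time (the field names are made up for the example):

    import json

    # Raw events are stored exactly as they arrive, one JSON object per line,
    # with no schema enforced on write.
    raw_lines = [
        '{"user": "u1", "amount": 42.5, "country": "IN"}',
        '{"user": "u2", "amount": 13.0}',  # "country" missing: nothing rejects it
    ]

    def read_with_schema(lines, fields):
        # Apply a consumer-specific schema at read time ("schema on read"),
        # filling absent fields with None instead of failing the record.
        for line in lines:
            record = json.loads(line)
            yield {f: record.get(f) for f in fields}

    # Two consumers project two different views over the same raw data.
    print(list(read_with_schema(raw_lines, ["user", "amount"])))
    print(list(read_with_schema(raw_lines, ["user", "country"])))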

Separation from compute resources

The most significant philosophical and practical advantage of cloud-based data lakes over “legacy” big data storage on Hadoop/HDFS is the ability to decouple storage from compute, enabling independent scaling of each.
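
As a simple illustration of that decoupling, the sketch below reads lake data from S3 with boto3; the bucket and key names are hypothetical, and the point is that any number of such compute processes can be spun up or torn down without touching the storage layer:

    import boto3

    # Storage lives in the object store; compute is any process that can reach
    # it. Workers can be added or removed without touching the stored data.
    s3 = boto3.client("s3")

    obj = s3.get_object(Bucket="my-data-lake", Key="raw/events/2024/01/events.jsonl")
    lines = obj["Body"].read().decode("utf-8").splitlines()

    # Process on whatever compute happens to run this code.
    print(f"records read: {len(lines)}")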

Complementary to existing data warehouses

A data warehouse and a data lake can work in conjunction as part of a more integrated data strategy.

Cost effectiveness

Open-source technologies have zero subscription costs, allowing the system to scale quickly as data grows. Tiering data as hot, warm, or cold, together with appropriate data models and compression techniques, helps prevent costs from growing exponentially.
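
On AWS, for example, an S3 lifecycle rule can transition aging data to cheaper storage classes automatically. A minimal boto3 sketch follows; the bucket name, prefix, and tiering thresholds are assumptions for illustration:

    import boto3

    s3 = boto3.client("s3")

    # Move raw data to cheaper tiers as it cools: infrequent access after
    # 30 days, archival storage after 90 days.
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-data-lake",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "tier-raw-data",
                    "Filter": {"Prefix": "raw/"},
                    "Status": "Enabled",
                    "Transitions": [
                        {"Days": 30, "StorageClass": "STANDARD_IA"},
                        {"Days": 90, "StorageClass": "GLACIER"},
                    ],
                }
            ]
        },
    )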

Given these requirements, object stores have become the de facto choice for core data lake storage. AWS, Google, and Azure all offer object storage technologies, e.g. S3, Blob Storage, and ADLS.

Data Lake Data Processing – ETL/ELT

Below are the different types of data processing, classified by SLA:

Real-time

– Refresh on the order of seconds
Real-time processing requires continuous input, constant processing, and a steady output of data. Good examples of real-time processing are data streaming, radar systems, customer service systems, and bank ATMs, where immediate processing is crucial for the system to work properly.
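
As a minimal sketch of this continuous input / constant processing / steady output pattern, the following pure-Python loop handles each event the moment it arrives; the simulated sensor stream stands in for a real socket or message queue:

    import itertools
    import random
    import time

    def sensor_stream():
        # Simulated continuous input; in practice this would be a socket,
        # message queue, or streaming API.
        while True:
            yield {"ts": time.time(), "value": random.gauss(20.0, 2.0)}
            time.sleep(0.1)

    # Constant processing: each event is handled the moment it arrives,
    # producing a steady output with latency on the order of seconds or less.
    for event in itertools.islice(sensor_stream(), 50):
        if event["value"] > 25.0:
            print(f"alert at {event['ts']:.0f}: value={event['value']:.1f}")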

Near real-time

– Refresh on the order of minutes
Near real-time processing is appropriate when speed is important but a processing time measured in minutes, rather than seconds, is acceptable. An example of near real-time processing is the production of operational intelligence, which combines data processing with complex event processing (CEP). CEP combines data from multiple sources to detect patterns, and is useful for identifying both opportunities in the data (such as sales leads) and threats (such as an intruder in the network), as in the sketch below.
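
As a toy version of the intruder-detection example, the following sketch flags a source IP with too many failed logins inside a sliding time window; the event shape, threshold, and window size are made up for illustration:

    from collections import defaultdict, deque

    WINDOW_SECONDS = 60
    THRESHOLD = 3

    # Recent failed-login timestamps per source IP.
    failures = defaultdict(deque)

    def on_event(event):
        # A trivial CEP rule: N failed logins from one IP within the window.
        if event["type"] != "login_failed":
            return
        q = failures[event["ip"]]
        q.append(event["ts"])
        while q and event["ts"] - q[0] > WINDOW_SECONDS:
            q.popleft()
        if len(q) >= THRESHOLD:
            print(f"possible intruder: {event['ip']} "
                  f"({len(q)} failures in {WINDOW_SECONDS}s)")

    events = [{"type": "login_failed", "ip": "10.0.0.7", "ts": t}
              for t in (0, 10, 20, 25)]
    for e in events:
        on_event(e)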

Batch

– Hourly/Daily/Weekly/Monthly refresh
Batch processing is even less time-sensitive than near real-time. It involves three separate steps: first, data is collected, usually over a period of time; second, the data is processed by a separate program; third, the output is another data set. Examples of data collected for analysis include operational data, historical and archived data, data from social media, service data, etc. In general, MapReduce-based solutions are useful for batch processing and for analytics that are not real-time or near real-time. Typical batch use cases include payroll and billing activities, usually run on a monthly cycle, and deep analytics that are essential for the business but not needed for immediate decision making.
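
To make the map/shuffle/reduce shape concrete, here is a toy single-machine sketch in Python that totals billing amounts per customer over a collected batch; the records and field names are invented for the example:

    from collections import defaultdict

    # A batch collected over some period, e.g. a month of billing events.
    batch = [
        {"customer": "acme", "amount": 120.0},
        {"customer": "globex", "amount": 75.5},
        {"customer": "acme", "amount": 30.0},
    ]

    # Map: emit (key, value) pairs.
    mapped = [(rec["customer"], rec["amount"]) for rec in batch]

    # Shuffle: group values by key.
    groups = defaultdict(list)
    for key, value in mapped:
        groups[key].append(value)

    # Reduce: aggregate each group into the output data set.
    totals = {key: sum(values) for key, values in groups.items()}
    print(totals)  # {'acme': 150.0, 'globex': 75.5}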

