
Big Data Characteristics

[Big Data Characteristics - Javapoint]
 


- Overview

Big data is a collection of data from many different sources and is often described by five characteristics: volume, value, variety, velocity, and veracity.

Big data has three main characteristics: volume, velocity, and variety. Other commonly cited characteristics include:
  • Value: The amount of valuable and reliable data
  • Structured data: Data with a dedicated data model, well-defined structure, and consistent order
  • Variability: The number of inconsistencies in the data, or the inconsistent speed at which big data is loaded into a database
  • Validity: How correct the data is for its purpose
  • Veracity: Ensures the quality of the data so the results produced from it will be accurate and trustworthy

Further characteristics sometimes mentioned include visualization, structure, and storage.

Working with big data presents significant challenges, including:
  • Managing and storing massive volumes of data
  • Processing data at high speeds
  • Ensuring the accuracy and quality of data

 

These challenges require specialized tools and technologies.

 

- Types of Big Data Workload

Big data solutions typically involve one or more of the following types of workload:

  • Batch processing of big data sources at rest.
  • Real-time processing of big data in motion.
  • Interactive exploration of big data.
  • Predictive analytics and machine learning.
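As a concrete, framework-agnostic illustration of the first two workload types, the sketch below contrasts batch processing of data at rest with real-time processing of data in motion. The function names and sample readings are invented for illustration and do not come from any particular big data tool.

```python
from typing import Iterable, Iterator

def batch_average(records: list[float]) -> float:
    """Batch processing: the full dataset is at rest before computation starts."""
    return sum(records) / len(records)

def streaming_average(stream: Iterable[float]) -> Iterator[float]:
    """Real-time processing: emit an updated average as each record arrives."""
    total, count = 0.0, 0
    for value in stream:
        total += value
        count += 1
        yield total / count

readings = [10.0, 20.0, 30.0, 40.0]
print(batch_average(readings))            # one answer after all data is in: 25.0
print(list(streaming_average(readings)))  # a running answer per event
```

Real systems replace the list and generator with distributed storage (e.g. HDFS) and a message stream, but the distinction is the same: batch jobs see a bounded dataset, streaming jobs see an unbounded one.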

Consider big data architectures when you need to:

  • Store and process data in volumes too large for a traditional database.
  • Transform unstructured data for analysis and reporting.
  • Capture, process, and analyze unbounded streams of data in real time, or with low latency.

 

- Characteristics of Big Data

Big data is a large set of structured, semi-structured, and unstructured data that is challenging to manage using traditional data processing tools, and that requires additional infrastructure to manage, analyze, and turn into insights.

These essential characteristics of big data are listed below.

  • Volume: Volume is the amount of data, measured in units ranging from gigabytes (GB) up to zettabytes (ZB) and yottabytes (YB). Industry trends predict a significant increase in data volumes in the coming years. Storing and processing such huge volumes was once impractical, but today data collected from many sources can be organized using distributed systems such as Hadoop. Understanding the purpose of data requires understanding its scale, and volume alone is often used to decide whether a dataset qualifies as big data.
  • Velocity: Velocity is the speed at which data is generated and must be processed. Sensors, social media platforms, and application logs all generate data continuously, often in bursts, so any significant data manipulation must run at a high rate. A pipeline that cannot keep up with the incoming flow loses much of the data's value.
  • Variety (diversity): Variety refers to the many types and formats of data gathered from different sources. Because it affects storage and processing performance, it is one of the major issues the big data sector is dealing with right now; organizing your data well is critical to managing its diversity effectively.
  • Veracity (accuracy): Veracity is the correctness and reliability of the data. Poor veracity can seriously compromise your findings, making it one of the most important big data qualities. Since most of the data you encounter is unstructured, it is crucial to filter out unreliable or irrelevant information before processing the remaining data.
  • Value: Value is the benefit the data provides to your organization. Does it reflect your company's goals? Does it help your company grow? It is one of the most important foundations of big data. Data scientists turn raw data into knowledge: the data is cleaned, the useful subset is extracted, and analysis and pattern recognition are applied. The results of this process determine the data's value.
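To make the volume scale concrete, here is a minimal sketch converting between the decimal (SI) storage units mentioned above. The helper name `to_bytes` and the base-1000 convention are assumptions for this example, not part of any standard library.

```python
# Decimal (SI) storage units: each step up is a factor of 1000.
UNITS = {"B": 0, "KB": 1, "MB": 2, "GB": 3, "TB": 4,
         "PB": 5, "EB": 6, "ZB": 7, "YB": 8}

def to_bytes(value: float, unit: str, base: int = 1000) -> float:
    """Convert a quantity in the given unit to bytes (illustrative helper)."""
    return value * base ** UNITS[unit]

print(to_bytes(1, "ZB"))  # 1 zettabyte = 10**21 bytes
print(to_bytes(1, "YB"))  # 1 yottabyte = 10**24 bytes
```

Binary units (KiB, MiB, and so on) would use a base of 1024 instead; the choice of base is a convention, and storage vendors and operating systems do not always agree on it.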

 

- Categories of Big Data

The data is categorized as follows:

  • Structured data: Data with a well-defined schema and all required columns, held in tabular form. Structured data is stored in relational database management systems.
  • Semi-structured data: Data whose schema is not rigidly defined, such as JSON, XML, CSV, TSV, and email. It carries some organizational markers (tags or keys) but does not fit neatly into relational tables.
  • Unstructured data: Files with no predefined structure, such as log files, audio files, and image files. Some organizations have a lot of such data available, but because the data is raw, they don't know how to get the value from it.
  • Quasi-structured data: Text data with an inconsistent format that takes some time and effort to reshape using specialized tools.
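The first three categories can be illustrated with Python's standard library. The table name, record fields, and log line below are invented for illustration; the point is how much schema each form carries.

```python
import json
import sqlite3

# Structured: a fixed schema enforced by an RDBMS table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, name TEXT, amount REAL)")
con.execute("INSERT INTO orders VALUES (1, 'Alice', 9.99)")
name, = con.execute("SELECT name FROM orders WHERE id = 1").fetchone()

# Semi-structured: self-describing keys, but no enforced schema;
# fields can be nested or missing from record to record.
record = json.loads('{"id": 1, "name": "Alice", "address": {"city": "Oslo"}}')

# Unstructured: raw text; any structure must be extracted with extra effort.
log_line = "2023-05-03 12:00:01 INFO user Alice purchased item 42"

print(name, record["address"]["city"], "Alice" in log_line)
```

The trade-off runs through the whole section: the more schema the data carries up front, the less work is needed at analysis time, but the less flexible the format is for capturing varied sources.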

 

- Examples of Unstructured Data

Unstructured data can be thought of as data that is not actively managed in a transactional system; for example, data that does not exist in a relational database management system (RDBMS). Structured data can be thought of as records (or transactions) in a database environment; for example, rows in a SQL database table. 

Neither form is inherently preferable: both have tools that allow users to access the information. Unstructured data, however, tends to be richer in detail than structured data.

Examples of unstructured data are:

  • Rich media. Media and entertainment data, surveillance data, geospatial data, audio, weather data
  • Document collections. Invoices, records, emails, productivity apps
  • Internet of Things (IoT). Sensor data, market data
  • Analytics. Machine Learning, Artificial Intelligence (AI)

Before the advent of object-based storage, most, if not all, unstructured data was stored in file-based systems.

 

 

 

[More to come ...]
