
Big Data Collection

[Rice University - Brandon Martin]
 
 

 

Big Data Collection Methods

 

- What is Data Collection?

Data collection is the process of gathering and measuring information on variables of interest in an established, systematic fashion that enables one to answer stated research questions, test hypotheses, and evaluate outcomes. It focuses on learning as much as possible about a particular subject. The collected data is then subjected to hypothesis testing, which seeks to explain a phenomenon: rather than relying on assumptions, it evaluates propositions on the basis of evidence and reason.

Data is collected for a range of purposes, but the key one is to put a researcher in a position to make predictions about future probabilities and trends. The data collection component of research is common to all fields of study, including the physical and social sciences, the humanities, and business. While methods vary by discipline, the emphasis on ensuring accurate and honest collection remains the same.

 

- The General Steps To Collect Big Data

Today, many companies collect big data to analyze and interpret daily transactions and traffic data, aiming to keep track of operations, forecast needs, or implement new programs. But how do you actually collect big data? With so many data collection methods available, it is easy to feel overwhelmed. The following are the general steps to collect big data:

  • Step 1: Gather data from the sources that match your purpose. 
  • Step 2: Store data by putting it into databases or storage services for further processing. 
  • Step 3: Clean data by sorting, deduplicating, concatenating, and merging it.
  • Step 4: Reorganize data by converting unstructured or semi-structured data into structured formats that frameworks such as Hadoop can process (for example, files stored in HDFS). 
  • Step 5: Verify data to make sure what you have collected is accurate and makes sense. 

 

These are the general steps to collect big data. However, collecting the data, analyzing it, and gleaning market insights from it is not as easy as it seems. 
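
To make these steps concrete, here is a small, single-machine sketch of the pipeline in Python using the pandas library. The file names, fields, and checks are hypothetical examples; a real big data pipeline would run distributed tools over far larger volumes.

    # A miniature, single-machine illustration of the five collection steps.
    # File names, fields, and thresholds are hypothetical examples.
    import pandas as pd
    import sqlite3

    # Step 1: Gather - read raw records exported from two (hypothetical) sources.
    web_logs = pd.read_csv("web_logs.csv")   # semi-structured clickstream export
    sales = pd.read_json("sales.json")       # transactional records

    # Step 2: Store - put the raw data into a local database for further processing.
    conn = sqlite3.connect("collected.db")
    web_logs.to_sql("raw_web_logs", conn, if_exists="replace", index=False)
    sales.to_sql("raw_sales", conn, if_exists="replace", index=False)

    # Step 3: Clean - drop duplicates and rows missing the join key, then merge.
    web_logs = web_logs.drop_duplicates().dropna(subset=["user_id"])
    sales = sales.drop_duplicates().dropna(subset=["user_id"])
    merged = web_logs.merge(sales, on="user_id", how="inner")

    # Step 4: Reorganize - cast a structured schema onto the merged data.
    merged["timestamp"] = pd.to_datetime(merged["timestamp"])
    merged = merged.astype({"user_id": "int64", "amount": "float64"})

    # Step 5: Verify - sanity-check that the result is plausible before use.
    assert len(merged) > 0, "merge produced no rows"
    assert (merged["amount"] >= 0).all(), "negative purchase amounts found"
    print(merged.head())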

 

- Big Data Collection Tools

Thanks to great advances in technology and the Internet of Things (IoT), it is now easier than ever to collect, process, and analyze data. Transactional records, marketing analytics, social media, maps, and loyalty cards are all channels through which data can be collected. It is all about personalization: businesses must be able to analyze the data they collect and then use it to customize their marketing efforts to target specific customers and, in turn, run highly effective campaigns.

Data collection tools like Octoparse help make this process much easier. They allow users to gather clean, structured data automatically, so there is no need to clean it up or reorganize it afterwards. Octoparse is a tool for data extraction (web crawling, data crawling, and data scraping) that turns web pages into structured data. After the data is collected, it can be stored in cloud databases and accessed anytime, from anywhere.
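
Octoparse itself is a point-and-click application rather than a library, but the kind of extraction it automates can be sketched in a few lines of Python with the requests and BeautifulSoup packages. The URL and CSS selectors below are placeholders, not a real site:

    # A minimal sketch of web data extraction: fetch a (hypothetical)
    # product-listing page and turn each listing into a structured record.
    import requests
    from bs4 import BeautifulSoup

    response = requests.get("https://example.com/products", timeout=10)
    response.raise_for_status()

    soup = BeautifulSoup(response.text, "html.parser")

    # Collect each listing as a dict instead of raw HTML.
    records = []
    for item in soup.select("div.product"):        # placeholder selector
        name = item.select_one("h2")
        price = item.select_one("span.price")
        if name and price:
            records.append({
                "name": name.get_text(strip=True),
                "price": price.get_text(strip=True),
            })

    print(records)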

Here are the various techniques and methods to help businesses collect data about their customers:

  • Transactional Data - Transactional data includes multiple variables, such as what, how much, how, and when customers purchased, as well as which promotions or coupons they used.
  • Online Marketing Analytics - Every time a user browses a website, information is collected. For example, Google Analytics can provide extensive demographic insight about each visitor. This information is useful in building marketing campaigns, as well as in analyzing website performance.
  • Social Media - Today, most people use social media in one form or another, and it touches nearly every aspect of our lives. Social media is used frequently and in many ways: networking, procrastinating, gossiping, sharing, educating, gaming, and more.

 

Big Data Processing Methodology


- Overview

Big data processing refers to the approach used when traditional data mining and handling techniques cannot uncover the insights and meaning in the underlying data. Data that is unstructured, time sensitive, or simply very large cannot be processed efficiently by relational database engines. This type of data requires a different approach that applies massive parallelism on readily available hardware.

 

- The Traditional ETL Methodology

Traditional data was normally processed using the Extract, Transform, Load (ETL) methodology: data is collected from outside sources, transformed to fit operational needs, and then loaded into a data storage system for future use. Technologies such as spreadsheets, relational database management systems (RDBMS), and Structured Query Language (SQL) were initially used to carry out these tasks.
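
As a small-scale illustration of this traditional approach, the sketch below extracts records from a CSV file, transforms them in memory, and loads them into a relational table using Python's built-in sqlite3 module. The source file and schema are hypothetical:

    # A toy Extract-Transform-Load (ETL) run against SQLite;
    # the source file and schema are hypothetical.
    import csv
    import sqlite3

    # Extract: read raw rows from an outside source.
    with open("orders.csv", newline="") as f:
        raw_rows = list(csv.DictReader(f))

    # Transform: modify the data to fit the target schema
    # (normalize casing, convert types, drop incomplete rows).
    clean_rows = [
        (row["order_id"], row["customer"].strip().title(), float(row["total"]))
        for row in raw_rows
        if row.get("order_id") and row.get("total")
    ]

    # Load: upload the transformed rows into the storage system.
    conn = sqlite3.connect("warehouse.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id TEXT, customer TEXT, total REAL)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", clean_rows)
    conn.commit()
    conn.close()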

 

- The MAD Process

However, for big data, the traditional methodology is both inefficient and insufficient to meet the demands of modern use. Therefore, the Magnetic, Agile, Deep (MAD) process is used to collect and store data. Such a system must attract all data sources regardless of their quality (magnetic), allow the logical and physical contents of the storage system to adapt to the rapid evolution of big data (agile), and support the complex algorithmic and statistical analysis that big data demands on very short notice (deep). 

 

- The Computing Power to Support the MAD Process

The technology used to store data under the MAD process requires vast amounts of processing power, which is very difficult to assemble in a single physical location or unit for organizations outside government and research that cannot afford supercomputers. Therefore, most big data solutions rely on two major components to store and process data: distributed systems and Massively Parallel Processing (MPP) running on non-relational (in-memory) database systems. 
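
Real MPP systems spread work across many machines, but the core idea, partitioning the data and processing the partitions in parallel before combining the partial results, can be illustrated on a single machine with Python's multiprocessing module:

    # A single-machine sketch of the partition-and-process idea behind
    # massively parallel processing: split the data, work on the chunks
    # in parallel, then combine the partial results.
    from multiprocessing import Pool

    def count_events(chunk):
        """Process one partition: count events per user (a toy aggregation)."""
        counts = {}
        for user_id in chunk:
            counts[user_id] = counts.get(user_id, 0) + 1
        return counts

    def merge(partials):
        """Combine the partial results from every worker."""
        total = {}
        for counts in partials:
            for user_id, n in counts.items():
                total[user_id] = total.get(user_id, 0) + n
        return total

    if __name__ == "__main__":
        # Hypothetical event stream: each entry is the user who triggered it.
        events = ["u1", "u2", "u1", "u3", "u2", "u1"] * 1000
        chunks = [events[i::4] for i in range(4)]   # partition into 4 shards

        with Pool(processes=4) as pool:
            partials = pool.map(count_events, chunks)

        print(merge(partials))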

 

 

[More to come ...]

  
