What is ACID functionality and how does it impact Data Lake storage environments? – Part 5

--

A Quick Comparison

Please see the table below for a comparison of all four frameworks.

Apache Hive

The Apache Hive™ data warehouse software facilitates reading, writing, and managing large datasets residing in distributed storage using SQL syntax.

Built on top of Apache Hadoop™, Hive provides the following features:

  • Tools to enable easy access to data via SQL, thus enabling data warehousing tasks such as extract/transform/load (ETL), reporting, and data analysis.
  • A mechanism to impose structure on a variety of data formats

  • Access to files stored either directly in Apache HDFS or in other data storage systems such as Apache HBase

Hive provides standard SQL functionality, including many of the later SQL:2003, SQL:2011, and SQL:2016 features for analytics.
Hive’s SQL can also be extended with user code via user defined functions (UDFs), user defined aggregates (UDAFs), and user defined table functions (UDTFs).

There is not a single “Hive format” in which data must be stored. Hive comes with built-in connectors for comma- and tab-separated values (CSV/TSV) text files, Apache Parquet, Apache ORC, and other formats. Users can extend Hive with connectors for other formats. Please see File Formats and Hive SerDe in the Developer Guide for details.

Hive is not designed for online transaction processing (OLTP) workloads. It is best used for traditional data warehousing tasks.

Hive is designed to maximize scalability (scale out with more machines added dynamically to the Hadoop cluster), performance, extensibility, fault-tolerance, and loose-coupling with its input formats.

Components of Hive include HCatalog and WebHCat.

  • HCatalog is a table and storage management layer for Hadoop that enables users with different data processing tools — including Pig and MapReduce — to more easily read and write data on the grid.
  • WebHCat provides a service that you can use to run Hadoop MapReduce (or YARN), Pig, or Hive jobs. You can also perform Hive metadata operations using an HTTP (REST-style) interface.

Insert Data

Let’s perform some of the row-level transactions available since Hive 0.14. Before creating a Hive table that supports transactions, the transaction features in Hive need to be turned on, as they are disabled by default.
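A typical set of session-level properties for enabling ACID transactions looks like the following sketch, based on the standard Hive documentation; the exact set can vary by Hive version:

    -- Enable the lock manager and concurrency support
    SET hive.support.concurrency = true;
    -- Required for ACID tables in Hive 0.x/1.x (removed in Hive 2.0, where it is always on)
    SET hive.enforce.bucketing = true;
    -- Allow dynamic partitions without a static partition column
    SET hive.exec.dynamic.partition.mode = nonstrict;
    -- Use the transactional lock manager
    SET hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
    -- Run the compactor on this instance
    SET hive.compactor.initiator.on = true;
    SET hive.compactor.worker.threads = 1;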

If the above properties are not set correctly, the ‘Insert’ operation will still work, but ‘Update’ and ‘Delete’ will fail with the following error:
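The failure typically surfaces as a SemanticException like the one below (the exact wording may vary by Hive version):

    FAILED: SemanticException [Error 10294]: Attempt to do update or delete
    using transaction manager that does not support these operations.

Once the properties are set, a transactional table can be created. The DDL below is a reconstruction from the column list described in this article; the ‘State’ column, the bucket count, and the data types are assumptions:

    CREATE TABLE employee (
      EmpId     INT,
      FirstName STRING,
      LastName  STRING,
      Gender    STRING,
      EMail     STRING,
      Salary    DOUBLE,
      City      STRING,
      Zip       STRING,
      State     STRING   -- assumed column; the article buckets the table by 'State'
    )
    CLUSTERED BY (State) INTO 4 BUCKETS   -- ACID tables must be bucketed in Hive 0.14
    STORED AS ORC
    TBLPROPERTIES ('transactional' = 'true');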

The above statement creates a table named ‘employee’ with the columns ‘EmpId’, ‘FirstName’, ‘LastName’, ‘Gender’, ‘EMail’, ‘Salary’, ‘City’, and ‘Zip’. We are bucketing the table by ‘State’ and storing it in ORC format, and we are enabling transactions on the table by specifying ‘transactional’=’true’ inside the TBLPROPERTIES clause.
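With the table in place, rows can be inserted using the row-level INSERT ... VALUES syntax introduced in Hive 0.14 (the sample values below are illustrative):

    -- Insert a couple of sample rows into the transactional table
    INSERT INTO TABLE employee VALUES
      (1, 'John', 'Doe',   'M', 'john.doe@example.com',   75000.0, 'Dallas', '75001', 'TX'),
      (2, 'Jane', 'Smith', 'F', 'jane.smith@example.com', 82000.0, 'Austin', '73301', 'TX');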

Update Data

The command below updates a row in the Hive table; once it completes, the data has been successfully updated.
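A representative UPDATE statement follows (the column and predicate are illustrative; note that Hive does not allow updating bucketing or partitioning columns, so ‘State’ cannot be changed here):

    -- Raise the salary of a single employee, identified by EmpId
    UPDATE employee
    SET Salary = 90000.0
    WHERE EmpId = 1;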

Delete Data
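A representative DELETE statement (the predicate is illustrative):

    -- Remove a single employee row, identified by EmpId
    DELETE FROM employee
    WHERE EmpId = 2;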

The above command deletes a single row from the Hive table.

Merge Data

Use the SQL MERGE command to insert, update, or delete rows in a target table using data from a source such as a table, view, or sub-query, based on rules specified in the matching condition of the MERGE statement.
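A sketch of a MERGE statement, assuming a hypothetical staging table ‘employee_updates’ with the same schema as ‘employee’ (MERGE requires Hive 2.2 or later):

    MERGE INTO employee AS t
    USING employee_updates AS s
    ON t.EmpId = s.EmpId
    -- Update existing rows (the bucketing column 'State' is left untouched)
    WHEN MATCHED THEN UPDATE SET Salary = s.Salary, City = s.City, Zip = s.Zip
    -- Insert rows that do not yet exist in the target
    WHEN NOT MATCHED THEN INSERT VALUES
      (s.EmpId, s.FirstName, s.LastName, s.Gender, s.EMail, s.Salary, s.City, s.Zip, s.State);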

Conclusion

Hive is a data warehouse system used for querying and analyzing large datasets stored in HDFS. Hive uses a query language called HiveQL, which is similar to SQL. Hadoop uses MapReduce for processing data, and MapReduce required users to write lengthy code; not all users were well versed in Java and other programming languages, which proved to be a disadvantage for them. Hive was developed with a vision to incorporate the concepts of tables and columns, just like SQL. Note that Apache Hive does not offer real-time queries, and the latency of Hive queries is generally very high.

--

Selvam Rangasamy - Senior Data Engineer & Architect

I am a Big Data Engineer & Solution Architect with experience in various Cloud & Big Data distribution systems, primarily Hadoop & AWS Cloud services.