You will learn how to build and maintain reliable, scalable, distributed systems with Apache Hadoop. This book is ideal for programmers looking to analyze datasets of any size, and for administrators who want to set up and run Hadoop clusters. You'll find illuminating case studies that demonstrate how Hadoop is used to solve specific problems. This third edition covers recent changes to Hadoop, including material on the new MapReduce API, as well as MapReduce 2 and its more flexible execution model (YARN). To buy a physical copy, click the image below.
· Store large datasets with the Hadoop Distributed File System (HDFS)
· Run distributed computations with MapReduce (a minimal word-count sketch follows this list)
· Use Hadoop's data and I/O building blocks for compression, data integrity, serialization (including Avro), and persistence
· Discover common pitfalls and advanced features for writing real-world MapReduce programs
· Design, build, and administer a dedicated Hadoop cluster, or run Hadoop in the cloud
· Load data from relational databases into HDFS, using Sqoop
· Perform large-scale data processing with the Pig query language
· Analyze datasets with Hive, Hadoop's data warehousing system
· Take advantage of HBase for structured and semi-structured data, and ZooKeeper for building distributed systems
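To give a flavor of the MapReduce material, here is a minimal word-count job written against the new MapReduce API (the org.apache.hadoop.mapreduce package). This is an illustrative sketch rather than code from the book, and it assumes a Hadoop 2.x (YARN) installation with the Hadoop client libraries on the classpath.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emits (word, 1) for every token in each input line.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reducer (also used as a combiner): sums the counts for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    // args[0] = HDFS input directory, args[1] = HDFS output directory (must not already exist)
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Packaged into a JAR, a job like this would be submitted with something along the lines of hadoop jar wordcount.jar WordCount input output, where input and output are placeholder HDFS paths.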
To download the PDF, click the download image below.
Please like, subscribe, and follow; I would appreciate it.