English | 2016 | MP4 | AVC 1280×720 | AAC 48KHz 2ch | 39 min | 100 MB
Hadoop is the distributed computing framework data scientists use to perform highly parallelized operations on big data. If you’ve explored Hadoop, you’ve probably discovered it has many levels of complexity. Once you’re comfortable with the fundamentals, you’re ready to put additional frameworks and tool sets to use.
In this course, software engineer and data scientist Jack Dintruff goes beyond the basic capabilities of Hadoop. He demonstrates practical, project-based skills for analyzing data, including how to use Pig to analyze large datasets and how to use Hive to manage large datasets in distributed storage. Learn how to configure the Hadoop Distributed File System (HDFS), perform processing and ingestion using MapReduce, copy data from cluster to cluster, create data summarizations, and compose queries.
- Setting up and administering clusters
- Ingesting data
- Working with MapReduce, YARN, Pig, and Hive
- Selecting and aggregating large datasets
- Defining limits, unions, filters, and joins
- Writing custom user-defined functions (UDFs)
- Creating queries and lookups
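To give a feel for the MapReduce pattern covered in the topics above, here is a minimal word-count sketch run locally in Python rather than on a cluster. The function names and sample input are illustrative, not from the course; on a real cluster the same map and reduce logic would run as a Hadoop Streaming job, with Hadoop handling the shuffle and sort between phases.

```python
from itertools import groupby

def mapper(lines):
    """Map phase: emit a (word, 1) pair for every word in the input."""
    for line in lines:
        for word in line.strip().split():
            yield word.lower(), 1

def reducer(pairs):
    """Reduce phase: sum the counts for each word.
    Hadoop delivers pairs grouped by key after the shuffle;
    sorting here simulates that step locally."""
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    # Hypothetical sample input standing in for files stored in HDFS.
    sample = ["Hadoop stores data in HDFS", "Hadoop runs MapReduce jobs"]
    for word, count in reducer(mapper(sample)):
        print(f"{word}\t{count}")
```

The same split into an embarrassingly parallel map step and a grouped reduce step is what lets Hadoop scale this computation across many machines.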