Big data usually includes data sets with sizes beyond the ability of commonly used software tools to capture, curate, manage, and process data within a tolerable elapsed time. Big data “size” is a constantly moving target, ranging from a few dozen terabytes to many petabytes of data. Big data requires a set of techniques and technologies with new forms of integration to reveal insights from datasets that are diverse, complex, and of a massive scale.
In this course we will teach several Big Data technologies, including Apache Hadoop, Hive, Pig, Sqoop, Ambari, HDFS, MapReduce, YARN, and others.
The training program has two tracks: one for administrators and one for developers. Please see the detailed curriculum for each program below.
The course is taught by Prakash Punj, Arvind Subramanium, and Kris Vishwanathan.
Prakash Punj is a Big Data Architect and Administrator with over 17 years of experience. He has worked across a range of technologies, primarily as an Infrastructure Architect, and is highly experienced in delivering Big Data projects.
Arvind Subramanium is a Big Data Architect and Developer with over 20 years of experience in the Healthcare, Pharma, and Insurance industries. He currently works as a Senior Director at Connecture, and in the past he worked on large Big Data implementations with Truven (now part of IBM Watson Health) and GlaxoSmithKline.
Kris Vishwanathan has over 20 years of experience in the IT industry, the last 15 of which he spent as a Cloud Architect and Developer at IBM. He is highly experienced in Cloud technologies including AWS, Azure, IBM Cloud, Big Data, and DevOps. He has executed multimillion-dollar projects for JPMorgan Chase, Pfizer, E&Y, BCBS, Hertz, and many other Fortune 500 companies. During his tenure at IBM he architected and led multi-release product development efforts. He will teach scenarios drawn from many complex real-world projects.
Apache Hadoop Admin Training Curriculum
Big Data Developer Training Curriculum