Big data technologies have grown rapidly over the past few years and have penetrated every domain and industry in software development, making big data a core skill for software engineers. Robust, effective big data pipelines are needed to support the growing volume of data and applications, and these pipelines have become business critical: they help increase revenue and reduce cost.
Quality big data pipelines do not happen by magic. Building and maintaining them requires high-quality designs that are scalable, reliable and cost effective.
How do you build an end-to-end big data pipeline that leverages big data technologies and practices effectively to solve business problems? How do you integrate its components in a scalable and reliable manner? How do you deploy, secure and operate them? How do you see the overall forest and not just the individual trees? This course addresses that skill gap.
What are the topics covered in this course?
We start off by discussing the building blocks of big data pipelines, their functions and challenges.
We introduce a structured design process for building big data pipelines.
We then discuss individual building blocks, focusing on the design patterns available, their advantages, shortcomings, use cases and available technologies.
We recommend several best practices across the course.
Finally, we implement two use cases, one batch and one real time, to illustrate how to apply the course learnings to real-world problems.
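To ground the batch versus real-time distinction, here is a minimal, self-contained Python sketch (with hypothetical data and function names, not code from the course) contrasting batch aggregation over a complete dataset with the incremental, record-at-a-time processing a real-time pipeline performs:

```python
# Batch style: process the complete dataset in one pass after it has landed.
def batch_total(events):
    return sum(e["amount"] for e in events)

# Streaming style: update running state as each event arrives.
class StreamingTotal:
    def __init__(self):
        self.total = 0

    def on_event(self, event):
        self.total += event["amount"]
        return self.total

# Hypothetical sample events.
events = [{"amount": 10}, {"amount": 25}, {"amount": 5}]

# Batch result, computed once over all events.
print(batch_total(events))  # 40

# Streaming result, updated per event; converges to the same total.
agg = StreamingTotal()
for e in events:
    agg.on_event(e)
print(agg.total)  # 40
```

In practice the batch path would be a framework job (for example Apache Spark) over data at rest, and the streaming path would consume from a broker (for example Apache Kafka), but the shape of the computation differs in the same way.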
Who this course is for:
- Big Data Pipeline Designers & Architects
- Big Data Developers looking to move into Design/Architecture roles
- Software Architects looking to gain Big Data Experience
Prerequisites:
- Big Data Technology Concepts
- Familiarity with Big Data Technologies like Apache Spark, Apache Kafka and NoSQL
- Development / Deployment Experience with Big Data Technologies and Pipelines
- Software Design and Development Experience including Cloud & Microservices
Last Updated 4/2023