Data Engineering Essentials – SQL, Python and Spark
About Data Engineering
Data Engineering is the practice of processing data to meet downstream needs. As part of Data Engineering, we build different pipelines such as batch pipelines, streaming pipelines, etc. Roles related to data processing are now consolidated under Data Engineering; conventionally, they are known as ETL Development, Data Warehouse Development, etc.
Database Essentials – SQL using Postgres
Getting Started with Postgres
Basic Database Operations (CRUD – Create, Read, Update, and Delete)
Writing Basic SQL Queries (Filtering, Joins, and Aggregations)
Creating Tables and Indexes
Partitioning Tables and Indexes
Predefined Functions (String Manipulation, Date Manipulation, and other functions)
Writing Advanced SQL Queries
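To give a flavor of the queries covered in this module, here is a minimal sketch of filtering, a join, and an aggregation in one statement. The course uses Postgres, but the syntax shown is standard SQL; the example runs against Python's built-in sqlite3 module purely so it works anywhere, and the customers/orders schema is made up for illustration.

```python
import sqlite3

# In-memory database for illustration only; the course itself uses Postgres,
# but this filtering/join/aggregation syntax is standard SQL.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical customers/orders schema for the example.
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL)")
cur.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "Asha"), (2, "Ravi")])
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, 1, 50.0), (2, 1, 30.0), (3, 2, 20.0)])

# Filtering (WHERE), a join (JOIN ... ON), and an aggregation (GROUP BY)
# combined in a single query.
cur.execute("""
    SELECT c.name, COUNT(*) AS order_count, SUM(o.amount) AS total
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    WHERE o.amount >= 20
    GROUP BY c.name
    ORDER BY total DESC
""")
print(cur.fetchall())  # → [('Asha', 2, 80.0), ('Ravi', 1, 20.0)]
```

The same query works in Postgres unchanged; only the connection setup (psycopg2 instead of sqlite3) would differ.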
Programming Essentials using Python
Perform Database Operations
Getting Started with Python
Basic Programming Constructs
Overview of Collections – list and set
Overview of Collections – dict and tuple
Manipulating Collections using loops
Understanding Map Reduce Libraries
Overview of the Pandas Library
Database Programming – CRUD Operations
Database Programming – Batch Operations
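The collections and map-reduce modules above can be previewed with a short sketch: the same computation written first as a plain loop and then as a map/filter/reduce pipeline. The order records below are made up for illustration.

```python
from functools import reduce

# Hypothetical order records to illustrate manipulating collections
# with loops and with map/filter/reduce.
orders = [
    {"id": 1, "status": "COMPLETE", "amount": 50.0},
    {"id": 2, "status": "PENDING", "amount": 30.0},
    {"id": 3, "status": "COMPLETE", "amount": 20.0},
]

# Loop style: accumulate the total amount of completed orders.
total = 0.0
for order in orders:
    if order["status"] == "COMPLETE":
        total += order["amount"]

# Map/filter/reduce style: the same computation as a pipeline.
completed = filter(lambda o: o["status"] == "COMPLETE", orders)
amounts = map(lambda o: o["amount"], completed)
total_mr = reduce(lambda a, b: a + b, amounts, 0.0)

print(total, total_mr)  # → 70.0 70.0
```

Both styles give the same answer; the pipeline style maps more directly onto the Spark APIs covered later in the course.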
Setting up Single Node Cluster for Practice
Set up a Single Node Hadoop Cluster
Set up Hive and Spark on the Single Node Cluster
Introduction to Hadoop ecosystem
Overview of HDFS Commands
Data Engineering using Spark Data Frame APIs
Data Processing Overview
Processing Column Data
Basic Transformations – Filtering, Aggregations, and Sorting
Joining Data Sets
Windowing Functions – Aggregations, Ranking, and Analytic Functions
Spark Metastore Databases and Tables
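As a preview of the windowing-functions module, here is a ranking-within-groups example. Spark SQL supports the same `RANK() OVER (PARTITION BY ... ORDER BY ...)` construct (in the DataFrame API via `pyspark.sql.Window` and `rank().over(...)`); the sketch below uses Python's built-in sqlite3 module, which also supports window functions, only so it runs without a cluster. The sales table and its columns are illustrative.

```python
import sqlite3

# Rank sales reps within each region by amount. Spark SQL accepts the
# identical window expression; sqlite3 is used here only so the example
# runs without a cluster. Table and column names are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, rep TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("east", "a", 100.0), ("east", "b", 80.0),
    ("west", "c", 90.0), ("west", "d", 120.0),
])

rows = conn.execute("""
    SELECT region, rep, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
    ORDER BY region, rnk
""").fetchall()
print(rows)
# → [('east', 'a', 100.0, 1), ('east', 'b', 80.0, 2),
#    ('west', 'd', 120.0, 1), ('west', 'c', 90.0, 2)]
```

In PySpark the equivalent would be roughly `F.rank().over(Window.partitionBy("region").orderBy(F.desc("amount")))`, which the windowing module covers in detail.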
Here is the intended audience for this course.
College students and entry-level professionals who want hands-on Data Engineering expertise. This course provides enough skills to prepare for entry-level data engineering interviews.
Experienced application developers who want to gain expertise in Data Engineering.
Testers who want to improve their testing capabilities for Data Engineering applications.
A computer with a decent configuration (at least 4 GB RAM; 8 GB is highly desired)
A dual-core CPU is required; a quad-core is highly desired
Here are the details related to the training approach.
It is self-paced with reference material, code snippets, and videos provided as part of Udemy.
One can either use the environment provided by us or set up their own environment using Docker on AWS, GCP, or a platform of their choice.
We would recommend completing 2 modules every week by spending 4 to 5 hours per week.
It is highly recommended to complete the exercises at the end of each module to ensure that you meet all of its key objectives.
Support will be provided through Udemy Q&A.
The course is designed in such a way that one can self-evaluate through the course and confirm whether the skills are acquired.
Here is the approach we recommend for taking this course.
The course is hands-on with thousands of tasks; you should practice as you go through the course.
You should also spend time understanding the concepts. If a concept does not make sense, we recommend moving on and returning to the topic later.
Go through the consolidated exercises and check whether you are able to solve the problems.
Make sure to follow the order we have defined as part of the course.
After each section or module, make sure to solve the exercises. We have provided enough information to validate the output.
Authors: Durga Viswanatha Raju Gadiraju, Annapurna Chinta, Vamsi Penmetsa
Students: 10,422