Data analysis has replaced data acquisition as the bottleneck to evidence-based decision making: we are drowning in data. Extracting knowledge from large, heterogeneous, and noisy datasets requires not only powerful computing resources, but also the programming abstractions to use them effectively. The abstractions that emerged in the last decade blend ideas from parallel databases, distributed systems, and programming languages to create a new class of scalable data analytics platforms that form the foundation for data science at realistic scales.
In this course, you will learn the landscape of relevant systems, the principles on which they rely, their tradeoffs, and how to evaluate their utility against your requirements. You will learn how practical systems were derived from the frontier of research in computer science and what systems are on the horizon. The course covers cloud computing, SQL and NoSQL databases, MapReduce and the ecosystem it spawned, Spark and its contemporaries, and specialized systems for graphs and arrays.
You will also learn the history and context of data science, the skills, challenges, and methodologies the term implies, and how to structure a data science project. At the end of this course, you will be able to:
1. Describe common patterns, challenges, and approaches associated with data science projects, and what makes them different from projects in related fields.
2. Identify and use the programming models associated with scalable data manipulation, including relational algebra, MapReduce, and other data flow models.
3. Use database technology adapted for large-scale analytics, including the concepts driving parallel databases, parallel query processing, and in-database analytics.
4. Evaluate key-value stores and NoSQL systems, describe their tradeoffs with comparable systems, the details of important examples in the space, and future trends.
5. “Think” in MapReduce to write effective algorithms and programs for systems including Hadoop and Spark. You will understand these systems’ limitations, design details, relationship to databases, and associated ecosystem of algorithms, extensions, and languages.
6. Describe the landscape of specialized Big Data systems for graphs, arrays, and streams.
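To make the MapReduce programming model in objectives 2 and 5 concrete, here is a minimal single-process sketch of the map–shuffle–reduce pattern applied to word count, the canonical example. This is an illustration of the model only, not the Hadoop or Spark API; the function and variable names are our own.

```python
from collections import defaultdict

def map_phase(doc_id, text):
    # Map: emit an intermediate (word, 1) pair for every word in the document.
    for word in text.lower().split():
        yield (word, 1)

def reduce_phase(word, counts):
    # Reduce: sum the partial counts collected for a single word.
    yield (word, sum(counts))

def run_mapreduce(documents, mapper, reducer):
    # Shuffle: group intermediate values by key. In a real system this
    # grouping is distributed across machines; here it is a local dict.
    groups = defaultdict(list)
    for doc_id, text in documents.items():
        for key, value in mapper(doc_id, text):
            groups[key].append(value)
    # Each reduce group is independent, which is what makes the model parallel.
    results = {}
    for key, values in groups.items():
        for out_key, out_value in reducer(key, values):
            results[out_key] = out_value
    return results

docs = {"d1": "big data big ideas", "d2": "data science"}
print(run_mapreduce(docs, map_phase, reduce_phase))
# {'big': 2, 'data': 2, 'ideas': 1, 'science': 1}
```

The same mapper/reducer pair ports almost directly to Hadoop or to Spark's `map`/`reduceByKey`, which is the sense in which the course asks you to "think" in MapReduce.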
Week 1 Data Science Context and Concepts
Lesson 1: Examples and the Diversity of Data Science
Lesson 2: Working Definitions of Data Science
Lesson 3: Characterizing this Course
Lesson 4: Related Topics
Lesson 5: Course Logistics
Assignment 1: Twitter Sentiment Analysis
Week 2 Relational Databases and the Relational Algebra
Lesson 6: Principles of Data Manipulation and Management
Lesson 7: Relational Algebra
Lesson 8: SQL for Data Science
Lesson 9: Key Principles of Relational Databases
Assignment 2: SQL for Data Science
Week 3 MapReduce and Parallel Dataflow Programming
Lesson 10: Reasoning about Scale
Lesson 11: The MapReduce Programming Model
Lesson 12: Algorithms in MapReduce
Lesson 13: Parallel Databases vs. MapReduce
Assignment 3: Thinking in MapReduce
Week 4 NoSQL Systems and Concepts; Graph Analytics
Lesson 14: What problems do NoSQL systems aim to solve?
Lesson 15: Early key-value systems and key concepts
Lesson 16: Document Stores and Extensible Record Stores
Lesson 17: Extended NoSQL Systems
Lesson 18: Pig: Programming with Relational Algebra
Lesson 19: Pig Analytics
Lesson 20: Spark
Lesson 21: Structural Tasks
Lesson 22: Traversal Tasks
Lesson 23: Pattern Matching Tasks and Graph Query
Lesson 24: Recursive Queries
Lesson 25: Representations and Algorithms