Leverage built-in datasets with just a few lines of code
Use APIs to control how you split your data
Process all types of unstructured data
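As a small taste of the split-control APIs this course covers, TensorFlow Datasets accepts slicing strings such as `'train[:80%]'`. The sketch below is illustrative only: `pct_slice` is a hypothetical helper defined here to show roughly how a percent slice maps to record indices (real tfds rounding can differ slightly on uneven splits), while the commented `tfds.load` call is the actual API you'd use.

```python
# Illustrating TFDS-style percent slicing with a small helper.
# `pct_slice` is a hypothetical helper defined only for this sketch;
# in practice tfds performs the slicing itself from the split string.

def pct_slice(num_records, start_pct=0, end_pct=100):
    """Return the record index range a split like
    'train[start%:end%]' would roughly select (floor rounding)."""
    start = num_records * start_pct // 100
    end = num_records * end_pct // 100
    return start, end

# A 'train[:80%]' split over the 60,000 MNIST training records:
print(pct_slice(60000, 0, 80))  # (0, 48000)

# With TensorFlow Datasets installed, the equivalent real call would be:
# import tensorflow_datasets as tfds
# train_ds = tfds.load('mnist', split='train[:80%]')
```

The same string syntax composes, e.g. `'train[80%:]'` for a validation remainder, which is what makes splits controllable without reshuffling files yourself.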
Bringing a machine learning model into the real world involves a lot more than just modeling. This Specialization will teach you how to navigate various deployment scenarios and use data more effectively to train your model.
In this third course, you’ll use a suite of TensorFlow tools to work with data more effectively and train your model. You’ll learn how to load built-in datasets with just a few lines of code, use APIs to control how you split your data, and process all types of unstructured data.
This Specialization builds upon our TensorFlow in Practice Specialization. If you are new to TensorFlow, we recommend that you take the TensorFlow in Practice Specialization first. To develop a deeper, foundational understanding of how neural networks work, we recommend that you take the Deep Learning Specialization.
Data Pipelines with TensorFlow Data Services
You'll learn about the types of data you'll typically come across when doing machine learning.
Exporting your data into the training pipeline
This week you’ll start looking at the code for feeding data into your model through input pipelines!
How you load your data into your model can have a huge impact on how efficiently the model trains. You'll learn how to handle your data input to avoid bottlenecks, race conditions, and more!
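To make the bottleneck point concrete, here is a minimal sketch of a `tf.data` input pipeline, assuming TensorFlow 2.4+ (`tf.data.AUTOTUNE`). The in-memory tensors and the `preprocess` function are toy placeholders standing in for real data and real preprocessing.

```python
import tensorflow as tf

# Toy in-memory dataset standing in for real training data.
features = tf.constant([[1.0], [2.0], [3.0], [4.0]])
labels = tf.constant([0, 1, 0, 1])

def preprocess(x, y):
    # Placeholder preprocessing step (e.g. normalization).
    return x / 4.0, y

ds = (
    tf.data.Dataset.from_tensor_slices((features, labels))
    # Run preprocessing on several elements in parallel so the CPU
    # doesn't become a bottleneck while the accelerator trains.
    .map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)
    .batch(2)
    # Overlap producing the next batch with consuming the current one.
    .prefetch(tf.data.AUTOTUNE)
)

for batch_x, batch_y in ds:
    print(batch_x.shape, batch_y.shape)
```

The key idea is that `map(..., num_parallel_calls=AUTOTUNE)` and `prefetch(AUTOTUNE)` let the runtime pipeline preparation and training steps instead of running them strictly one after another.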
Publishing your datasets
This week you’ll learn how to share your data with the world in a way that’s easy for others to consume!