This course offers an opportunity to understand the theory behind neural networks and how deep learning builds on it. Participants will also learn about modern deep learning architectures. In the later stages of the course, students will explore deep learning models with an emphasis on reinforcement learning, other variations of machine learning, and their practical applications. The course also discusses the applications of reinforcement learning in the modern era and its limitations.
Following the completion of this course, the student will be able to:
This course is designed for:
9 Modules – 80 Videos – 9 Readings – 24 Quizzes – 31 App Items – 3 Plugins – 1 Peer Review – Certificate of Completion
This is the first module of this course and spans around three hours. It offers a basic introduction to neural networks and deep learning, their practical applications, and their limitations. The instructor walks students through each concept in detail, especially the theoretical background and defining characteristics of machine learning. The module also provides an in-depth introduction to machine learning algorithms and their types.
Students will also learn about the fundamental characteristics of different modeling techniques, what makes each of them stand out, and how to customize these techniques based on the situation and intended use. The module also contains a special section that offers hands-on experience with neural networks and the essential concepts behind the algorithms. Students will get to apply these concepts in real life to build and develop robust solutions.
This is the second module of this course, and it will require nearly three hours to complete. The module covers the basic math behind the popular backpropagation algorithm and how it is used to optimize neural networks. Students will also get to explore the backpropagation notebook, its use, and its practical applications.
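To make the chain-rule idea behind backpropagation concrete, here is a minimal sketch for a single sigmoid neuron trained with squared error, written in plain NumPy. The variable names, data, and learning rate are illustrative assumptions, not material taken from the course notebook.

```python
import numpy as np

# Minimal illustration of backpropagation for one sigmoid neuron
# trained with squared error. All values here are illustrative.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x, y_true = 0.5, 1.0          # single input and target
w, b, lr = 0.1, 0.0, 0.5      # weight, bias, learning rate

for step in range(100):
    # forward pass
    z = w * x + b
    y_pred = sigmoid(z)
    loss = 0.5 * (y_pred - y_true) ** 2

    # backward pass: chain rule dL/dw = dL/dy * dy/dz * dz/dw
    dL_dy = y_pred - y_true
    dy_dz = y_pred * (1.0 - y_pred)
    dL_dw = dL_dy * dy_dz * x
    dL_db = dL_dy * dy_dz

    # gradient descent update
    w -= lr * dL_dw
    b -= lr * dL_db

print(f"final prediction: {sigmoid(w * x + b):.3f}, loss: {loss:.4f}")
```

The same forward-then-backward pattern generalizes to full networks, where deep learning frameworks compute the gradients automatically.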
Participants will get to use activation functions in different projects. This module focuses on the most common activation functions, introduces them to students, and shows how they add non-linearity to the network, allowing it to learn more complex patterns. Finally, it ends with practical applications of these functions through the Keras API, from building neural networks to loading images.
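As a rough illustration of how activation functions fit into a Keras model, the sketch below stacks a few Dense layers with different non-linearities. The input shape and layer sizes are placeholders, not the course's exact lab code.

```python
from tensorflow import keras
from tensorflow.keras import layers

# The hidden layers use non-linear activations (ReLU, tanh) so the
# network can model non-linear patterns; the final softmax turns the
# output into class probabilities. Sizes here are placeholders.
model = keras.Sequential([
    keras.Input(shape=(784,)),               # e.g. a flattened 28x28 image
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="tanh"),
    layers.Dense(10, activation="softmax"),  # 10 class probabilities
])
model.summary()
```

Swapping the activation strings (for example "relu" for "sigmoid") is all it takes to experiment with different non-linearities in Keras.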
The third module of the course is two hours long and focuses on the different options for organizing and prioritizing training time. This module also explores neural network accuracy and some of the important deep learning models that are commonly used.
Students will also learn how to use these different modules and options, and then prioritize training time accordingly. The focus of this module is not just on building concepts but also on improving understanding and accuracy across neural network and deep learning models. The fundamental topics covered in this module include model training, optimizers, and data shuffling. There is also a section for hands-on practice using Keras, one of the go-to libraries for deep learning.
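The hands-on work revolves around calls like the following. This is a minimal sketch using random stand-in data; it shows an optimizer with an explicit learning rate and per-epoch data shuffling via shuffle=True, and is not the course's actual exercise.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Random stand-in data; in the course this would come from a real dataset.
X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, size=(1000,))

model = keras.Sequential([
    keras.Input(shape=(20,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

# The optimizer and learning rate control how the weights are updated;
# shuffle=True reshuffles the training data at every epoch.
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-3),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
history = model.fit(X, y, epochs=5, batch_size=32,
                    validation_split=0.2, shuffle=True, verbose=0)
```

Trying different optimizers (SGD, RMSprop, Adam) and batch sizes in this loop is a common way to see how such choices affect training time and accuracy.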
The fourth module of this course is all about convolutional neural networks. Students will require nearly five hours to complete this module. This section aims to offer students an in-depth introduction to convolutional neural networks, also known as space-invariant artificial neural networks.
The module includes all the basic concepts related to convolutional neural networks and their real-life applications. By the end of the module, students will also get an introduction to different CNN architectures, their common uses, and how these architectures can be added to a toolkit of deep learning techniques.
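For readers who want a concrete picture, a minimal convolutional network in Keras might look like the sketch below. The 28x28 grayscale input, filter counts, and 10-class output are placeholder assumptions, not anything prescribed by the module.

```python
from tensorflow import keras
from tensorflow.keras import layers

# A minimal convolutional network for 28x28 grayscale images.
# Conv2D layers learn spatial filters; MaxPooling2D downsamples,
# which contributes to the CNN's approximate spatial invariance.
cnn = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Conv2D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])
cnn.summary()
```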
Transfer learning is the fifth module of this course, spanning around four hours. Within this module, the participants will get an in-depth understanding of transfer learning, its mechanism, and its practical applications. The students will get to learn, master, and then implement transfer learning with a step-by-step guide. The module also covers five general methods for using a variety of popular pre-trained CNN architectures.
Two popular pre-trained CNN architectures mentioned in this module are VGG-16 and ResNet-50. The instructor offers an overview of the differences among CNN architectures and how each new design solves the problems of its predecessors. Finally, by the end of this module, students will also master the concept of working with deeper neural networks. It equips them to apply regularization techniques as a simpler alternative to overly complex models and networks.
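A typical transfer-learning pattern with one of the architectures named above, VGG-16, is sketched below: the pre-trained base is frozen and a small new head, with dropout as one possible regularization technique, is trained on top. The five-class head and image size are assumptions for illustration, not the course's exact lab.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Load VGG-16 pre-trained on ImageNet, without its classification head,
# and freeze it so only the new layers added on top are trained.
base = keras.applications.VGG16(include_top=False, weights="imagenet",
                                input_shape=(224, 224, 3))
base.trainable = False

# New head for a hypothetical 5-class problem; Dropout is one example
# of a regularization technique.
model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Swapping VGG16 for ResNet50 in keras.applications follows the same pattern, which is what makes pre-trained architectures so convenient to reuse.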
The sixth module of this training series is nearly three hours long. It walks students through Recurrent Neural Networks (RNNs) and Long Short-Term Memory networks (LSTMs). Students will learn about speech-to-text recognition, how it works, its practical applications, and how to use it. This module focuses on RNNs, their types, and how they are used within AI applications. Students will master supervised learning concepts and practice them with hands-on assignments.
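As a rough sketch of the kind of sequence model this module discusses, the snippet below builds a small LSTM classifier in Keras. The vocabulary size, sequence length, and binary output are illustrative assumptions rather than the course's assignment.

```python
from tensorflow import keras
from tensorflow.keras import layers

# A small sequence classifier: the Embedding layer turns token ids into
# vectors, and the LSTM reads the sequence step by step while keeping
# an internal state. All sizes below are placeholders.
rnn = keras.Sequential([
    keras.Input(shape=(100,), dtype="int32"),   # sequences of 100 token ids
    layers.Embedding(input_dim=10000, output_dim=64),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),      # e.g. one binary label per sequence
])
rnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
rnn.summary()
```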
This section of the training program begins with a detailed introduction to autoencoders, their practical applications, and their use in deep learning and unsupervised learning. As an important neural network architecture, autoencoders learn a lower-dimensional representation of data, commonly images.
The seventh module of the course also covers some basics of deep learning-based techniques for data representation. This will eventually help students use autoencoders in different situations and understand how they work and why they are useful for image applications. These concepts are advanced, so students are encouraged to assess their knowledge with the quizzes as well.
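A minimal autoencoder sketch in Keras, assuming flattened 28x28 images and a 32-dimensional bottleneck (both arbitrary choices for illustration), shows the encode-then-reconstruct idea:

```python
from tensorflow import keras
from tensorflow.keras import layers

# A small fully connected autoencoder for flattened 28x28 images.
# The encoder compresses each image to a 32-dimensional code (the
# lower-dimensional representation); the decoder reconstructs it.
inputs = keras.Input(shape=(784,))
encoded = layers.Dense(128, activation="relu")(inputs)
encoded = layers.Dense(32, activation="relu")(encoded)      # bottleneck
decoded = layers.Dense(128, activation="relu")(encoded)
decoded = layers.Dense(784, activation="sigmoid")(decoded)

autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
# Training would use the images as both inputs and targets, e.g.:
# autoencoder.fit(x_train, x_train, epochs=10, batch_size=256)
```

Because the network is trained to reproduce its own input, no labels are needed, which is why autoencoders sit naturally within unsupervised learning.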
This is the second-to-last module of this series, and it requires three hours to complete. The module contains an in-depth introduction to different types of generative models, their use, and their workings. The students will learn about two different types of generative models: Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs).
This section also covers the theory behind each model and its implementation in Keras for generating artificial images, with the goal of making the generated images as realistic as possible. The module wraps up with some additional deep learning topics, like using Keras in a GPU environment to speed up model training.
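The sketch below outlines the two halves of a GAN in Keras and checks for a visible GPU. The layer sizes are placeholders and the adversarial training loop itself is omitted, so this is only a structural illustration rather than the course's implementation.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Keras uses an available GPU automatically; this line just reports it.
print("GPUs visible:", tf.config.list_physical_devices("GPU"))

# Skeleton of a GAN for 28x28 grayscale images: the generator maps
# random noise to an image, the discriminator scores real vs. fake.
generator = keras.Sequential([
    keras.Input(shape=(100,)),                 # noise vector
    layers.Dense(128, activation="relu"),
    layers.Dense(28 * 28, activation="sigmoid"),
    layers.Reshape((28, 28, 1)),
])

discriminator = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),     # real (1) vs. fake (0)
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")
```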
This is the last module of this course, and it will require another three hours to complete. It covers some important concepts, such as the variety of applications of neural networks. Students will also get an introduction to Generative Adversarial Networks, frequently referred to as GANs, and Reinforcement Learning.
There is also a section dedicated to the practical applications of neural networks for data generation. To wrap up this section, the instructor has added some advanced concepts related to Reinforcement Learning, its impact on AI, and how it trains agents using reward signals. The module also equips students to spot errors, minimize the chance of error, and find ways to ensure quality.
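Reward-driven training can be illustrated with a tiny tabular Q-learning loop on a made-up five-state corridor environment. Everything in this sketch (the environment, hyperparameters, and the reward of 1 at the goal state) is an assumption for illustration and not part of the course material.

```python
import numpy as np

# Minimal tabular Q-learning on a toy 1-D corridor of 5 states:
# the agent moves left/right and gets a reward of 1 only when it
# reaches the last state. All values here are illustrative.
n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.5

for episode in range(200):
    state = 0
    while state != n_states - 1:
        # epsilon-greedy action selection
        if np.random.rand() < epsilon:
            action = np.random.randint(n_actions)
        else:
            action = int(np.argmax(Q[state]))
        next_state = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Q-learning update, driven purely by the reward signal
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

# States 0-3 should learn to move right (action 1) toward the reward.
print("Learned action per state:", np.argmax(Q, axis=1))
```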
This intermediate-level course is designed for people with some basic understanding of deep learning, machine learning, and AI. The students need to be familiar with programming in a Python development environment, data cleaning, exploratory data analysis, unsupervised learning, supervised learning, calculus, linear algebra, probability, and statistics.
The course mainly focuses on machine learning and two of its disciplines: deep learning and reinforcement learning. Alongside deep learning, participants will also cover machine learning more broadly, including its uses and types such as supervised and unsupervised learning.
This course has nine modules. Each module focuses on deep learning and its role in improving AI from different perspectives. These modules include videos, reading material, and quizzes to ensure that students understand each concept properly.
By the end of this course, the students will have enough information about concepts like supervised and unsupervised learning and their practical applications, architectures of deep learning, reinforcement learning, and other relevant topics to apply seamlessly at work.
Grover specializes in creating online content with maximum engagement. He started his career as a professor at Cape Fear Community College in Wilmington, NC, and later switched to IBM as a member of the Data & AI Learning team. As a coordinator in the Information Security program, Grover taught Computer Security, Network Administration, System Administration, and Microsoft Office.
During his early years, he also owned a computer sales and service company for over 13 years. With over 25 years of information technology experience, Grover is keenly interested in machine learning.
Joseph is a data scientist at IBM. He holds a Ph.D. in Electrical Engineering, with a focus on machine learning, signal processing, and computer vision, where his goal was to examine the impact of video on human cognition.
Li is a data scientist at IBM with a keen interest in building and deploying AI models. She has advanced-level skills in machine learning, natural language processing, and data analysis, and she utilizes these skills to solve complex problems across various industries, including insurance and finance. She has worked on several projects, including prediction models for auto insurance and deploying containerized AI applications. She is a certified professional with credentials in Machine Learning with Python and Natural Language Processing.