
Generative AI with Large Language Models Free Course



Introduction

This course is a great opportunity for anyone who wants to deepen their understanding of generative AI. It teaches students about the generative AI project lifecycle, model pre-training, and generative AI use cases. It is an intermediate-level course, perfect for individuals who want to learn how to evaluate and fine-tune large language models. This free AI course will also introduce you to LLM-powered applications and reinforcement learning.

What Will You Gain from This Course

Following the completion of this course, participants will:

  1. Understand the fundamental ideas of generative AI and describe the important phases in the lifecycle of LLM-based generative AI, such as model selection, data collection, deployment, and performance assessment.
  2. Understand the transformer architecture that powers LLMs.
  3. Become familiar with the training procedure and understand how LLMs can be fine-tuned for a variety of specific use cases.
  4. Learn to apply empirical scaling laws to optimize the model's objective function across dataset size, compute budget, and inference requirements.
  5. Learn to apply state-of-the-art techniques for training, tuning, inference, and deployment to maximize model performance within project constraints.
  6. Recognize the opportunities and challenges that generative AI creates for companies, drawing on the experience and insights of researchers and practitioners in the field.

Skills Acquired:

  1. Python Programming
  2. Machine Learning
  3. Large Language Models
  4. Generative AI

Who Is This Course For

This course is designed for:

  1. Blockchain developers looking to incorporate artificial intelligence into their projects.
  2. Individuals interested in learning generative AI.
  3. People with an interest in the practical applications of generative AI.
  4. Specialists seeking to understand the transformer architecture, the LLM lifecycle, and fine-tuning methods.

Course Content

3 Modules – 48 Videos – 17 Readings – 3 Assignments – 4 App Items – Certificate of Completion

Week 1

This introductory module takes you on a thorough exploration of the principles of generative AI and how it can be applied in practical situations. You will start by examining the use cases and tasks of generative AI and LLMs. Students will explore the transformer architecture that revolutionized text generation. The lesson describes the generative AI project lifecycle and covers generating text with transformers. It also covers the fundamental ideas of prompting and prompt engineering.

To give you hands-on experience, you will get an overview of the AWS labs and an in-depth walkthrough of Lab 1, which focuses on dialogue summarization as a generative AI use case. The module also covers the computational challenges of training large language models, including efficient multi-GPU compute techniques and scaling laws for compute-optimal models. To help you understand domain-specific training, you will also learn about pre-training for domain adaptation through case studies.
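
To make the prompting ideas above concrete, here is a minimal sketch of zero-shot dialogue summarization. It assumes the Hugging Face transformers library and the openly available google/flan-t5-base checkpoint; the course labs may use a different model, dataset, or environment.

```python
# Minimal sketch: zero-shot dialogue summarization with a FLAN-T5 checkpoint.
# Library and model choice are assumptions, not the lab's exact setup.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

dialogue = (
    "Alice: Can we move the design review to Thursday?\n"
    "Bob: Thursday works. Same time, 10 am?\n"
    "Alice: Yes, 10 am. I'll update the invite."
)

# Prompt engineering: wrap the raw dialogue in an explicit instruction.
prompt = f"Summarize the following conversation.\n\n{dialogue}\n\nSummary:"

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(inputs.input_ids, max_new_tokens=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```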

  1. 17 Videos
  2. 7 Readings
  3. 1 Assignment
  4. 2 App items

Week 2

In the second module of this course, you will delve further into the basics of optimizing and evaluating generative AI models. Students will learn about instruction fine-tuning, including single-task and multi-task instruction fine-tuning. You will discover how to evaluate model performance efficiently and why benchmarks matter for assessing the capabilities of generative AI.

This module places a strong emphasis on parameter-efficient fine-tuning (PEFT). Several PEFT approaches are covered, such as LoRA (Low-Rank Adaptation) and soft prompts, which allow large language models to be fine-tuned efficiently without requiring large amounts of compute. The module also includes a thorough walkthrough of Lab 2, where you will fine-tune a generative AI model for dialogue summarization, emphasizing practical application.
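
As a hedged illustration of how LoRA keeps fine-tuning cheap, the sketch below wraps a base model with low-rank adapters using the Hugging Face peft library. The library, rank, and target modules are illustrative assumptions rather than the lab's exact configuration.

```python
# Minimal sketch: adding LoRA adapters to a seq2seq model with the `peft` library.
# Only the small adapter matrices are trained; the base weights stay frozen.
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=32,              # scaling applied to the adapter output
    lora_dropout=0.05,
    target_modules=["q", "v"],  # attention projections to adapt in T5-style models
)

peft_model = get_peft_model(base_model, lora_config)
# Typically well under 1% of the parameters end up trainable.
peft_model.print_trainable_parameters()
```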

  1. 10 Videos
  2. 3 Readings
  3. 1 Assignment
  4. 1 App item

Week 3

In the third module of this course, you will examine the vital task of aligning generative AI models with human values, with a particular emphasis on reinforcement learning from human feedback (RLHF). The session starts by outlining why aligning models with human values matters. It then explores the key elements of RLHF, such as obtaining human feedback, training a reward model, and optimizing models with reinforcement learning. You will also discover how to scale human feedback effectively and the difficulties associated with reward hacking.
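
To make the reward-model idea tangible, here is a small sketch that scores candidate summaries with an off-the-shelf sentiment classifier standing in for a learned reward model. The checkpoint choice is an assumption for illustration, and the actual policy update step (for example, PPO) is omitted.

```python
# Minimal sketch: an off-the-shelf classifier used as a stand-in reward model.
# The checkpoint is an illustrative assumption; the RL update itself is not shown.
from transformers import pipeline

reward_model = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

candidates = [
    "The agent was dismissive and hung up on the customer.",
    "The agent calmly resolved the customer's billing issue.",
]

for summary in candidates:
    score = reward_model(summary)[0]  # e.g. {'label': 'POSITIVE', 'score': 0.99}
    # A higher POSITIVE score would translate into a higher reward for the policy.
    print(summary, "->", score)
```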

A major highlight of this module is the detailed walkthrough of Lab 3, where you use reinforcement learning to fine-tune the FLAN-T5 model to produce more favorable summaries. Discussions of model optimizations for deployment, using LLMs in applications, and connecting to external applications supplement this practical experience. Advanced topics in the curriculum include program-aided language models (PAL), chain-of-thought approaches that help LLMs reason and plan, and the ReAct framework for combining reasoning and action.
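
As a small, hedged illustration of chain-of-thought prompting (the wording below is an invented example, not course material), the idea is simply to show the model a worked answer that spells out its reasoning before the final result:

```python
# Minimal sketch of a chain-of-thought prompt: a one-shot example whose answer
# walks through intermediate steps, followed by the new question for the model.
cot_prompt = """Q: A cafeteria had 23 apples. It used 20 for lunch and bought 6 more.
How many apples does it have now?
A: The cafeteria started with 23 apples. It used 20, leaving 23 - 20 = 3.
It bought 6 more, so 3 + 6 = 9. The answer is 9.

Q: A parking lot has 3 cars. 2 more cars arrive and then 4 cars leave.
How many cars are in the lot?
A:"""

# This string would be passed to an LLM's text-generation call; the worked
# example nudges the model to lay out its own reasoning steps first.
print(cot_prompt)
```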

  1. 21 Videos
  2. 7 Readings
  3. 1 Assignment
  4. 1 App item

Description

Learn the principles of generative AI and how to use it practically in real-world applications with the "Generative AI with Large Language Models (LLMs)" course. The goal of this course is to provide you with a thorough grasp of generative AI and how to use it to your advantage.

Developers with a strong foundational grasp of LLMs, and of the best methods for training and deploying them, can quickly create working prototypes and make sound decisions for their organizations. In this course, you will gain a practical understanding of how to make the most of this innovative technology.

Since this is an intermediate-level course, you must have prior Python coding experience. Additionally, you should understand the fundamentals of machine learning, including loss functions, supervised and unsupervised learning, and splitting data into training, validation, and test sets.
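
For instance, the expected familiarity with splitting data into training, validation, and test sets amounts to something like the following scikit-learn snippet (shown purely as a reminder of the prerequisite background; the library and split ratios are illustrative):

```python
# Reminder of the expected ML background: a 60/20/20 train/validation/test split.
from sklearn.model_selection import train_test_split

X = list(range(100))        # toy feature rows
y = [i % 2 for i in X]      # toy labels

# Carve out 20% as the test set, then 25% of the remainder as validation.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=0)

print(len(X_train), len(X_val), len(X_test))  # 60 20 20
```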

Meet the Instructor(s)

  1. Chris Fregly

Chris Fregly works at Amazon Web Services (AWS) in San Francisco, California, as a Principal Solutions Architect for Generative AI. He co-authored the O'Reilly books "Generative AI on AWS" and "Data Science on AWS." In addition, Chris is the creator of the international "Generative AI on AWS" Meetup series and a regular speaker at international AI and machine learning conferences, such as Nvidia GPU Technology Conference (GTC), O'Reilly AI, and Open Data Science Conference (ODSC).

  2. Shelbee Eigenbrode

Shelbee Eigenbrode works for Amazon Web Services (AWS) as a Principal Solutions Architect for Generative AI. She focuses on integrating her DevOps and ML skills to deploy and manage MLOps/FMOps/LLMOps workloads at scale. She has 23 years of experience in a variety of sectors, technologies, and positions, and she holds six AWS certifications. Shelbee is enthusiastic about constant innovation and using data to drive business outcomes. She has over 30 patents in a variety of technological disciplines. She is a co-founder of Women in Big Data in Denver and a published author.

  3. Antje Barth

At Amazon Web Services (AWS), Antje Barth is a Principal Developer Advocate for Generative AI. She co-authored "Generative AI on AWS" and "Data Science on AWS." Antje speaks frequently at AI/ML conferences, events, and meetups around the world. Before joining AWS, she worked in technical evangelism and solutions engineering at Cisco and MapR, specializing in big data, AI applications, and data center technologies. She co-founded the Düsseldorf chapter of Women in Big Data.

  4. Mike Chambers

Mike Chambers works for Amazon Web Services (AWS) as a Developer Advocate for Generative AI. Mike has worked as a solutions and security architect in the ICT sector for more than 20 years, with major corporations, governments, and his own successful start-up. With his distinctive teaching style, Mike has trained and entertained more than a quarter of a million students worldwide, specializing in cloud, serverless, and machine learning technologies.

