March 30, 2022
Chapter 1 covers the key developments that led to transformers: recurrent architectures, the encoder-decoder framework, attention mechanisms, transfer learning in NLP, and the Hugging Face ecosystem.
Chapter 2 covers training a model to classify emotions expressed in Twitter messages.
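As a taste of the workflow in this chapter, here is a minimal inference sketch with the Hugging Face pipeline API. The chapter fine-tunes its own emotion classifier; the default text-classification checkpoint (binary sentiment) stands in here only to show the API.

```python
from transformers import pipeline

# The chapter fine-tunes its own emotion classifier on a Twitter dataset;
# the default text-classification checkpoint (binary sentiment) is used here
# just to illustrate the inference API.
classifier = pipeline("text-classification")

print(classifier("I'm thrilled the package finally arrived!"))
# [{'label': 'POSITIVE', 'score': ...}]
```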
Chapter 3 covers the Transformer architecture and different types of transformer models available on the Hugging Face Hub.
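To ground the architecture discussion, here is a minimal scaled dot-product self-attention sketch in PyTorch; it is illustrative only, not the chapter's full encoder implementation.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(query, key, value):
    # query, key, value: (batch, seq_len, head_dim)
    dim_k = query.size(-1)
    scores = torch.bmm(query, key.transpose(1, 2)) / dim_k ** 0.5
    weights = F.softmax(scores, dim=-1)   # attention weights over the sequence
    return torch.bmm(weights, value)      # weighted sum of the value vectors

# Toy example: batch of 1, sequence of 5 tokens, 64-dimensional vectors
x = torch.randn(1, 5, 64)
out = scaled_dot_product_attention(x, x, x)  # self-attention: q = k = v
print(out.shape)  # torch.Size([1, 5, 64])
```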
Chapter 4 covers fine-tuning a multilingual transformer model to perform named entity recognition.
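A minimal sketch of NER inference with the token-classification pipeline; the chapter fine-tunes a multilingual model, whereas the default English checkpoint is used here just to show the API.

```python
from transformers import pipeline

# The chapter fine-tunes a multilingual model; the default English NER
# checkpoint stands in here only to illustrate the inference API.
ner = pipeline("token-classification", aggregation_strategy="simple")

print(ner("Jeff Dean works at Google in California."))
# [{'entity_group': 'PER', 'word': 'Jeff Dean', ...},
#  {'entity_group': 'ORG', 'word': 'Google', ...}, ...]
```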
Chapter 5 covers different decoding methods for generating text with GPT-2, such as greedy search, beam search, and sampling.
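A brief sketch contrasting two of those decoding strategies with the stock gpt2 checkpoint (greedy decoding vs. nucleus sampling):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Transformers are the", return_tensors="pt")

# Greedy decoding: always pick the highest-probability next token
greedy = model.generate(**inputs, max_new_tokens=20, do_sample=False)

# Nucleus (top-p) sampling: sample from the smallest set of tokens
# whose cumulative probability exceeds top_p
sampled = model.generate(**inputs, max_new_tokens=20, do_sample=True, top_p=0.9)

print(tokenizer.decode(greedy[0], skip_special_tokens=True))
print(tokenizer.decode(sampled[0], skip_special_tokens=True))
```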
Chapter 6 covers building an encoder-decoder model to condense dialogues between several people into a crisp summary.
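A minimal sketch with the summarization pipeline and a made-up dialogue; the chapter fine-tunes its own model on dialogue data, so the default checkpoint here is only for illustration.

```python
from transformers import pipeline

summarizer = pipeline("summarization")  # downloads a default summarization checkpoint

# Made-up dialogue for illustration
dialogue = """Anna: Are we still on for lunch tomorrow?
Ben: Yes, 12:30 at the usual place.
Anna: Perfect, see you then!"""

print(summarizer(dialogue, max_length=30, min_length=5))
```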
Chapter 7 covers building a question-answering model that finds answers to questions in customer reviews.
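A minimal sketch of extractive question answering over a made-up review, using the default question-answering pipeline rather than the chapter's fine-tuned model:

```python
from transformers import pipeline

qa = pipeline("question-answering")  # default extractive QA checkpoint

# Made-up customer review for illustration
review = ("I bought this laptop two weeks ago. The battery easily lasts a full "
          "workday, but the keyboard feels a bit mushy.")

print(qa(question="How long does the battery last?", context=review))
# {'answer': 'a full workday', 'score': ..., 'start': ..., 'end': ...}
```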
Chapter 8 covers different methods to make transformer models more efficient in production.
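One such method is post-training dynamic quantization; a minimal PyTorch sketch applied to an off-the-shelf sequence-classification checkpoint:

```python
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english"
)

# Dynamic quantization: store Linear weights in int8 and dequantize on the fly,
# shrinking the model and speeding up CPU inference with little accuracy loss.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

print(quantized)
```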
Chapter 9 covers how to deal with few to no labels by training a model that automatically tags GitHub issues for the Hugging Face Transformers library.
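For the no-label case, one common approach is zero-shot classification with a natural language inference model; a minimal sketch with made-up issue text and candidate tags:

```python
from transformers import pipeline

# Zero-shot classification reuses an NLI model to score candidate labels
# without any task-specific training data.
classifier = pipeline("zero-shot-classification")

issue = "Loading a saved tokenizer raises a KeyError when the vocab file is missing."
labels = ["bug", "new feature", "documentation"]  # made-up candidate tags

print(classifier(issue, candidate_labels=labels))
```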
Chapter 10 covers how to train a GPT-like model from scratch to generate Python source code.
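"From scratch" means starting from a fresh configuration rather than a pretrained checkpoint; a minimal sketch of initializing a randomly weighted GPT-2-style model (the sizes are illustrative, and the chapter also trains its own tokenizer on Python code):

```python
from transformers import AutoTokenizer, GPT2Config, GPT2LMHeadModel

# Reuse an existing tokenizer for illustration; the chapter trains its own
# tokenizer on a large corpus of Python source code.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Build a GPT-2-style model with randomly initialized weights (illustrative sizes).
config = GPT2Config(vocab_size=len(tokenizer), n_positions=1024,
                    n_embd=768, n_layer=12, n_head=12)
model = GPT2LMHeadModel(config)

print(f"{model.num_parameters() / 1e6:.1f}M parameters")
```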
Chapter 11 explores scaling up transformers, methods to make self-attention more efficient, and multimodal transformers.