Synopsis

Build custom NLP models in record time by adapting pre-trained machine learning models to solve specialized problems.

Summary

In Transfer Learning for Natural Language Processing you will learn:

- Fine-tuning pretrained models with new domain data
- Picking the right model to reduce resource usage
- Transfer learning for neural network architectures
- Generating text with generative pretrained transformers
- Cross-lingual transfer learning with BERT
- Foundations for exploring NLP academic literature

Training deep learning NLP models from scratch is costly, time-consuming, and requires massive amounts of data. In Transfer Learning for Natural Language Processing, DARPA researcher Paul Azunre reveals cutting-edge transfer learning techniques that apply customizable pretrained models to your own NLP architectures. You'll learn how to use transfer learning to deliver state-of-the-art results for language comprehension, even when working with limited labeled data. Best of all, you'll save on training time and computational costs.

Purchase of the print book includes a free eBook in PDF, Kindle, and ePub formats from Manning Publications.

About the technology

Build custom NLP models in record time, even with limited datasets! Transfer learning is a machine learning technique for adapting pretrained machine learning models to solve specialized problems. This powerful approach has revolutionized natural language processing, driving improvements in machine translation, business analytics, and natural language generation.

About the book

Transfer Learning for Natural Language Processing teaches you to create powerful NLP solutions quickly by building on existing pretrained models. This instantly useful book provides crystal-clear explanations of the concepts you need to grok transfer learning, along with hands-on examples so you can practice your new skills immediately. As you go, you'll apply state-of-the-art transfer learning methods to create a spam email classifier, a fact checker, and more real-world applications (a minimal fine-tuning sketch follows the table of contents below).

What's inside

- Fine-tuning pretrained models with new domain data
- Picking the right model to reduce resource use
- Transfer learning for neural network architectures
- Generating text with pretrained transformers

About the reader

For machine learning engineers and data scientists with some experience in NLP.

About the author

Paul Azunre holds a PhD in Computer Science from MIT and has served as a Principal Investigator on several DARPA research programs.

Table of Contents

PART 1 INTRODUCTION AND OVERVIEW
1 What is transfer learning?
2 Getting started with baselines: Data preprocessing
3 Getting started with baselines: Benchmarking and optimization

PART 2 SHALLOW TRANSFER LEARNING AND DEEP TRANSFER LEARNING WITH RECURRENT NEURAL NETWORKS (RNNs)
4 Shallow transfer learning for NLP
5 Preprocessing data for recurrent neural network deep transfer learning experiments
6 Deep transfer learning for NLP with recurrent neural networks

PART 3 DEEP TRANSFER LEARNING WITH TRANSFORMERS AND ADAPTATION STRATEGIES
7 Deep transfer learning for NLP with the transformer and GPT
8 Deep transfer learning for NLP with BERT and multilingual BERT
9 ULMFiT and knowledge distillation adaptation strategies
10 ALBERT, adapters, and multitask adaptation strategies
11 Conclusions
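The fine-tuning workflow named above fits in a few lines of code. The sketch below is a hedged illustration, not the book's own code: it assumes the Hugging Face transformers and PyTorch libraries, and the model checkpoint ("bert-base-uncased"), the two toy emails, and the EmailDataset helper are illustrative placeholders.

```python
# Hedged sketch: fine-tuning a pretrained model on new domain data
# (spam vs. ham emails). Assumes `transformers` and `torch` are installed;
# the checkpoint and the two-example dataset are illustrative only.
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # two classes: ham (0) and spam (1)

# Tiny stand-in for a real, limited labeled dataset.
texts = ["Win a FREE cruise now!!!", "Meeting moved to 3 pm tomorrow."]
labels = [1, 0]

class EmailDataset(torch.utils.data.Dataset):  # hypothetical helper
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="spam-model", num_train_epochs=1),
    train_dataset=EmailDataset(texts, labels),
)
trainer.train()  # updates the pretrained weights on the new domain data
```

Because the model starts from pretrained weights, even a brief pass over a small labeled set can outperform an equivalent network trained from scratch.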
Description

Building and training deep learning models from scratch is costly, time-consuming, and requires massive amounts of data. To address this concern, cutting-edge transfer learning techniques enable you to start with pretrained models that you can tweak to meet your exact needs. In Transfer Learning for Natural Language Processing, DARPA researcher Paul Azunre takes you hands-on with customizing these open source resources for your own NLP architectures. You'll learn how to use transfer learning to deliver state-of-the-art results even when working with limited labeled data, all while saving on training time and computational costs.

About the technology

Transfer learning enables machine learning models to be initialized with existing prior knowledge. Initially pioneered in computer vision, transfer learning techniques have revolutionized natural language processing, driving big reductions in the training time and computing power needed for a model to start delivering results. Emerging pretrained language models such as ELMo and BERT have opened up new possibilities for NLP developers working in machine translation, semantic analysis, business analytics, and natural language generation.

About the book

Transfer Learning for Natural Language Processing is a practical primer to transfer learning techniques capable of delivering huge improvements to your NLP models. Written by DARPA researcher Paul Azunre, this practical book gets you up to speed with the relevant ML concepts before diving into the cutting-edge advances that are defining the future of NLP. You'll learn how to adapt existing state-of-the-art models into real-world applications, including building a spam email classifier, a movie review sentiment analyzer, an automated fact checker, a question-answering system (sketched below), and a translation system for low-resource languages.

What's inside

- Fine-tuning pretrained models with new domain data
- Picking the right model to reduce resource usage
- Transfer learning for neural network architectures
- Foundations for exploring NLP academic literature

About the reader

For machine learning engineers and data scientists with some experience in NLP.

About the author

Paul Azunre holds a PhD in Computer Science from MIT and has served as a Principal Investigator on several DARPA research programs. He founded Algorine Inc., a research lab dedicated to advancing AI/ML and identifying scenarios where these technologies can have significant social impact. Paul also co-founded Ghana NLP, an open source initiative focused on using NLP and transfer learning with Ghanaian and other low-resource languages. He frequently contributes to major peer-reviewed international research journals and serves as a program committee member at top conferences in the field.
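As a taste of how little code the question-answering system mentioned above requires once a pretrained model is in play, here is a hedged sketch using the Hugging Face transformers pipeline API (an assumption; the book's own implementation may differ). The checkpoint, question, and context are illustrative placeholders.

```python
# Hedged sketch: question answering with a pretrained model via the
# `transformers` pipeline API. The checkpoint, question, and context
# are illustrative placeholders.
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")
result = qa(
    question="Who is the author?",
    context=("Transfer Learning for Natural Language Processing was "
             "written by DARPA researcher Paul Azunre."),
)
print(result["answer"], result["score"])  # e.g. "Paul Azunre" plus a confidence score
```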
LC Classification Number: QA76.9.N38