English | 2021 | ISBN: 978-1800200883 | 453 Pages | PDF, EPUB, MOBI | 328 MB
- Implement classical and deep learning generative models through practical examples
- Explore the creative, human-like capabilities of AI and generate impressive results
- Use the latest research to expand your knowledge beyond this book
- Experiment with practical TensorFlow 2.x implementations of state-of-the-art generative models
In recent years, generative artificial intelligence has been instrumental in creating lifelike data (images, voice, video, music, and text) from scratch. In this book, you will unpack how these powerful models are built from relatively simple building blocks, and how you might adapt them to your own use cases.
You will begin by setting up clean, containerized Python environments and getting to grips with the fundamentals of deep neural networks: core concepts such as the perceptron, activation functions, and backpropagation, and how they all tie together. Once you have covered the basics, you will explore deep generative models in depth, including OpenAI's GPT series of news generators, networks for style transfer and deepfakes, and their synergy with reinforcement learning.
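The fundamentals mentioned above, the perceptron, an activation function, and backpropagation, fit in a few lines of plain NumPy. The sketch below is a minimal illustration of those concepts (it is not code from the book): a single sigmoid unit learns the logical OR function by gradient descent, where the `p - y` term is the derivative of the binary cross-entropy loss with respect to the pre-activation.

```python
import numpy as np

# Toy data: the logical OR function, which a single perceptron can learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # weights of the single unit
b = 0.0                  # bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(1000):
    z = X @ w + b
    p = sigmoid(z)                   # forward pass through the activation
    grad_z = p - y                   # d(cross-entropy)/dz for a sigmoid output
    w -= lr * X.T @ grad_z / len(X)  # backpropagation: chain rule to weights
    b -= lr * grad_z.mean()          # ...and to the bias

print((sigmoid(X @ w + b) > 0.5).astype(int))  # → [0 1 1 1]
```

Deep networks repeat this same forward/backward pattern across many stacked layers; frameworks like TensorFlow automate the gradient bookkeeping.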
As you progress, you will use abstractions where helpful while still understanding the nuts and bolts of how the models are composed in code, supported by detailed architecture diagrams. The book concludes with a variety of practical projects that generate music, images, text, and speech using the methods covered in earlier sections, piecing together TensorFlow layers, utility functions, and training loops to uncover links between the different modes of generation.
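The "layers plus training loop" pattern described above can be sketched generically in TensorFlow 2. The example below is a hypothetical illustration, not code from the book: a tiny autoencoder built from two `Dense` layers and trained with an explicit `tf.GradientTape` loop on random data, the same skeleton most generative models in this style share.

```python
import numpy as np
import tensorflow as tf

# Random toy data to reconstruct; any real dataset would slot in here.
data = np.random.default_rng(0).normal(size=(256, 8)).astype("float32")

encoder = tf.keras.layers.Dense(2, activation="relu")  # compress 8 -> 2
decoder = tf.keras.layers.Dense(8)                     # reconstruct 2 -> 8
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-2)

losses = []
for step in range(200):
    with tf.GradientTape() as tape:                # records ops for autodiff
        reconstruction = decoder(encoder(data))
        loss = tf.reduce_mean(tf.square(reconstruction - data))  # MSE
    variables = encoder.trainable_variables + decoder.trainable_variables
    grads = tape.gradient(loss, variables)         # backpropagation
    optimizer.apply_gradients(zip(grads, variables))
    losses.append(float(loss))

print(f"reconstruction loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Swapping the loss and the layer stack turns this same skeleton into a GAN, a VAE, or a sequence model's training loop.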
By the end of this book, you will have the knowledge you need to design and implement your own generative AI models.
What you will learn
- Implement paired and unpaired style transfer with networks like StyleGAN
- Use facial landmarks, autoencoders, and pix2pix GAN to create deepfakes
- Build several text generation pipelines based on LSTMs, BERT, and GPT-2, learning how attention and transformers changed the NLP landscape
- Compose music using LSTM models, simple generative adversarial networks, and the intricate MuseGAN
- Train a deep learning agent to move through a simulated physical environment
- Discover emerging applications of generative AI, such as folding proteins and creating videos from images