Python Artificial Intelligence Training Course

Learn the fundamentals of artificial intelligence in Python and how to apply that knowledge in practice

Pre-requisites

Our Intro To Programming level is required for entry into this course

Who will benefit

  • Python coders who have completed our Python Machine Learning Course and want to move on to deep learning techniques and AI

Certification

  • Attendance : If you have attended 80% of the sessions and completed all the classwork, you qualify for the Attendance Certificate.
  • Competency : If you have also completed all the practical projects, as described in the Outcomes section, you qualify for the Competency Certificate.

What you will learn

    • Be familiar with several training models, including support vector machines, decision trees, random forests, and ensemble methods
    • Be able to use the TensorFlow library to build and train neural nets
    • Be familiar with neural net architectures, including convolutional nets, recurrent nets, and deep reinforcement learning
    • Know techniques for training and scaling deep neural nets

What do I need?

Live Online Training : A laptop and a stable internet connection; the recommended minimum speed is around 10 Mbps.

Classroom Training : A laptop; please notify us if you are not bringing your own.

Please see the calendar below for the schedule.

Day One (Modules 1 and 2)

    I. Neural Networks and Deep Learning

    1. Introduction to Artificial Neural Networks with Keras

    • From Biological to Artificial Neurons
      • Biological Neurons
      • Logical Computations with Neurons
      • The Perceptron
      • The Multilayer Perceptron and Backpropagation
      • Regression MLPs
      • Classification MLPs
    • Implementing MLPs with Keras
      • Installing TensorFlow 2
      • Building an Image Classifier Using the Sequential API
      • Building a Regression MLP Using the Sequential API
      • Building Complex Models Using the Functional API
      • Using the Subclassing API to Build Dynamic Models
      • Saving and Restoring a Model
      • Using Callbacks
      • Using TensorBoard for Visualization
    • Fine-Tuning Neural Network Hyperparameters
      • Number of Hidden Layers
      • Number of Neurons per Hidden Layer
      • Learning Rate, Batch Size, and Other Hyperparameters
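
    To give a feel for this module, here is a minimal sketch of an image classifier built with the Keras Sequential API, assuming TensorFlow 2; the layer sizes and epoch count are illustrative rather than the course's exact code.

        import tensorflow as tf

        # Load Fashion MNIST and scale pixel values to the [0, 1] range.
        (X_train, y_train), (X_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
        X_train, X_test = X_train / 255.0, X_test / 255.0

        # Stack layers: flatten the 28x28 images, two hidden layers, softmax output.
        model = tf.keras.Sequential([
            tf.keras.layers.Flatten(input_shape=(28, 28)),
            tf.keras.layers.Dense(300, activation="relu"),
            tf.keras.layers.Dense(100, activation="relu"),
            tf.keras.layers.Dense(10, activation="softmax"),
        ])
        model.compile(loss="sparse_categorical_crossentropy",
                      optimizer="sgd", metrics=["accuracy"])
        model.fit(X_train, y_train, epochs=5, validation_split=0.1)
        model.evaluate(X_test, y_test)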

    2. Training Deep Neural Networks

    • The Vanishing/Exploding Gradients Problems
      • Glorot and He Initialization
      • Nonsaturating Activation Functions
      • Batch Normalization
      • Gradient Clipping
    • Reusing Pretrained Layers
      • Transfer Learning with Keras
      • Unsupervised Pretraining
      • Pretraining on an Auxiliary Task
    • Faster Optimizers
      • Momentum Optimization
      • Nesterov Accelerated Gradient
      • AdaGrad
      • RMSProp
      • Adam and Nadam Optimization
      • Learning Rate Scheduling
    • Avoiding Overfitting Through Regularization
      • ℓ1 and ℓ2 Regularization
      • Dropout
      • Monte Carlo (MC) Dropout
      • Max-Norm Regularization
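
    A minimal sketch combining several of the techniques in this module: He initialization, a nonsaturating activation (ELU), batch normalization, dropout, and a Nadam optimizer with gradient clipping and a learning rate schedule. The specific values are illustrative.

        import tensorflow as tf

        model = tf.keras.Sequential([
            tf.keras.layers.Flatten(input_shape=(28, 28)),
            # He initialization pairs well with ELU; the bias is redundant
            # when batch normalization follows the layer.
            tf.keras.layers.Dense(300, kernel_initializer="he_normal", use_bias=False),
            tf.keras.layers.BatchNormalization(),
            tf.keras.layers.Activation("elu"),
            tf.keras.layers.Dropout(0.2),  # regularization
            tf.keras.layers.Dense(10, activation="softmax"),
        ])

        # Exponential learning rate decay plus gradient clipping by norm.
        lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
            initial_learning_rate=1e-3, decay_steps=10_000, decay_rate=0.9)
        optimizer = tf.keras.optimizers.Nadam(learning_rate=lr_schedule, clipnorm=1.0)
        model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer)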

Day Two (Modules 3 and 4)

    3. Custom Models and Training with TensorFlow

    • A Quick Tour of TensorFlow
    • Using TensorFlow like NumPy
      • Tensors and Operations
      • Tensors and NumPy
      • Type Conversions
      • Variables
      • Other Data Structures
    • Customizing Models and Training Algorithms
      • Custom Loss Functions
      • Saving and Loading Models That Contain Custom Components
      • Custom Activation Functions, Initializers, Regularizers, and Constraints
      • Custom Metrics
      • Custom Layers
      • Custom Models
      • Losses and Metrics Based on Model Internals
      • Computing Gradients Using Autodiff
      • Custom Training Loops
    • TensorFlow Functions and Graphs
      • AutoGraph and Tracing
      • TF Function Rules
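
    As a preview, a minimal sketch of a custom loss function and a custom training step that computes gradients with tf.GradientTape and is compiled to a graph with tf.function; the Huber-style loss and the SGD settings are illustrative.

        import tensorflow as tf

        # A custom Huber-style loss: quadratic for small errors, linear for large ones.
        def huber_loss(y_true, y_pred, delta=1.0):
            error = y_true - y_pred
            small = tf.abs(error) < delta
            squared = 0.5 * tf.square(error)
            linear = delta * tf.abs(error) - 0.5 * delta ** 2
            return tf.where(small, squared, linear)

        optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2)

        @tf.function  # traced by AutoGraph into a TensorFlow graph
        def train_step(model, X_batch, y_batch):
            with tf.GradientTape() as tape:
                y_pred = model(X_batch, training=True)
                loss = tf.reduce_mean(huber_loss(y_batch, y_pred))
            grads = tape.gradient(loss, model.trainable_variables)
            optimizer.apply_gradients(zip(grads, model.trainable_variables))
            return loss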

    4. Loading and Preprocessing Data with TensorFlow

    • The Data API
      • Chaining Transformations
      • Shuffling the Data
      • Preprocessing the Data
      • Putting Everything Together
      • Prefetching
      • Using the Dataset with tf.keras
    • The TFRecord Format
      • Compressed TFRecord Files
      • A Brief Introduction to Protocol Buffers
      • TensorFlow Protobufs
      • Loading and Parsing Examples
      • Handling Lists of Lists Using the SequenceExample Protobuf
    • Preprocessing the Input Features
      • Encoding Categorical Features Using One-Hot Vectors
      • Encoding Categorical Features Using Embeddings
      • Keras Preprocessing Layers
    • TF Transform
    • The TensorFlow Datasets (TFDS) Project
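
    A minimal sketch of a typical tf.data input pipeline, chaining transformations and prefetching so the GPU never waits for data; tf.data.AUTOTUNE assumes TensorFlow 2.4 or later.

        import tensorflow as tf

        (X_train, y_train), _ = tf.keras.datasets.fashion_mnist.load_data()

        dataset = tf.data.Dataset.from_tensor_slices((X_train, y_train))
        dataset = (dataset
                   .shuffle(buffer_size=10_000)                # randomize order
                   .map(lambda x, y: (tf.cast(x, tf.float32) / 255.0, y),
                        num_parallel_calls=tf.data.AUTOTUNE)   # preprocess in parallel
                   .batch(32)
                   .prefetch(tf.data.AUTOTUNE))                # overlap with training

        # The dataset can be passed straight to tf.keras:
        # model.fit(dataset, epochs=5)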

Day Three (Modules 5 and 6)

    5. Deep Computer Vision Using Convolutional Neural Networks

    • The Architecture of the Visual Cortex
    • Convolutional Layers
      • Filters
      • Stacking Multiple Feature Maps
      • TensorFlow Implementation
      • Memory Requirements
    • Pooling Layers
      • TensorFlow Implementation
    • CNN Architectures
      • LeNet-5
      • AlexNet
      • GoogLeNet
      • VGGNet
      • ResNet
      • Xception
      • SENet
    • Implementing a ResNet-34 CNN Using Keras
    • Using Pretrained Models from Keras
    • Pretrained Models for Transfer Learning
    • Classification and Localization
    • Object Detection
      • Fully Convolutional Networks
      • You Only Look Once (YOLO)
    • Semantic Segmentation
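
    As a preview, a minimal sketch of a small convolutional net, followed by loading a pretrained ImageNet model from keras.applications as a frozen base for transfer learning; the architecture shown is illustrative.

        import tensorflow as tf

        # A small CNN: conv/pool blocks followed by a dense head.
        model = tf.keras.Sequential([
            tf.keras.layers.Conv2D(64, 7, activation="relu", padding="same",
                                   input_shape=(28, 28, 1)),
            tf.keras.layers.MaxPooling2D(2),
            tf.keras.layers.Conv2D(128, 3, activation="relu", padding="same"),
            tf.keras.layers.MaxPooling2D(2),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(10, activation="softmax"),
        ])

        # Reusing a pretrained model for transfer learning: freeze the base
        # and train only a new head on top.
        base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                              input_shape=(224, 224, 3))
        base.trainable = False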

    6. Processing Sequences Using RNNs and CNNs

    • Recurrent Neurons and Layers
      • Memory Cells
      • Input and Output Sequences
    • Training RNNs
    • Forecasting a Time Series
      • Baseline Metrics
      • Implementing a Simple RNN
      • Deep RNNs
      • Forecasting Several Time Steps Ahead
    • Handling Long Sequences
      • Fighting the Unstable Gradients Problem
      • Tackling the Short-Term Memory Problem
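
    A minimal sketch of forecasting a time series with a deep RNN, using a synthetic noisy sine wave as a stand-in for real data.

        import numpy as np
        import tensorflow as tf

        # Toy univariate series: predict the next value from the last 50 steps.
        t = np.linspace(0, 100, 10_000, dtype=np.float32)
        series = np.sin(t) + 0.1 * np.random.randn(len(t)).astype(np.float32)

        window = 50
        X = np.stack([series[i:i + window] for i in range(len(series) - window)])
        y = series[window:]
        X = X[..., np.newaxis]  # shape (samples, time steps, 1 feature)

        # A deep RNN: two recurrent layers, then a dense output.
        model = tf.keras.Sequential([
            tf.keras.layers.SimpleRNN(20, return_sequences=True, input_shape=[None, 1]),
            tf.keras.layers.SimpleRNN(20),
            tf.keras.layers.Dense(1),
        ])
        model.compile(loss="mse", optimizer="adam")
        model.fit(X, y, epochs=3)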

Day Four (Modules 7 and 8)

    7. Natural Language Processing with RNNs and Attention

    • Generating Shakespearean Text Using a Character RNN
      • Creating the Training Dataset
      • How to Split a Sequential Dataset
      • Chopping the Sequential Dataset into Multiple Windows
      • Building and Training the Char-RNN Model
      • Using the Char-RNN Model
      • Generating Fake Shakespearean Text
      • Stateful RNN
    • Sentiment Analysis
      • Masking
      • Reusing Pretrained Embeddings
    • An Encoder–Decoder Network for Neural Machine Translation
      • Bidirectional RNNs
      • Beam Search
    • Attention Mechanisms
      • Visual Attention
      • Attention Is All You Need: The Transformer Architecture
    • Recent Innovations in Language Models
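
    As a preview, a minimal sketch of chopping an integer-encoded text into overlapping windows for training a character RNN; tf.range stands in for the encoded text.

        import tensorflow as tf

        encoded = tf.range(1000)  # stand-in for an integer-encoded corpus
        n_steps = 100

        dataset = tf.data.Dataset.from_tensor_slices(encoded)
        # Overlapping windows of n_steps + 1 characters, shifted by one each time.
        dataset = dataset.window(n_steps + 1, shift=1, drop_remainder=True)
        dataset = dataset.flat_map(lambda w: w.batch(n_steps + 1))
        # Input is the window; target is the same window shifted one step ahead.
        dataset = dataset.map(lambda w: (w[:-1], w[1:]))
        dataset = dataset.shuffle(10_000).batch(32).prefetch(1)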

    8. Representation Learning and Generative Learning Using Autoencoders and GANs

    • Efficient Data Representations
    • Performing PCA with an Undercomplete Linear Autoencoder
    • Stacked Autoencoders
      • Implementing a Stacked Autoencoder Using Keras
      • Visualizing the Reconstructions
      • Visualizing the Fashion MNIST Dataset
      • Unsupervised Pretraining Using Stacked Autoencoders
      • Tying Weights
      • Training One Autoencoder at a Time
    • Convolutional Autoencoders
    • Recurrent Autoencoders
    • Denoising Autoencoders
    • Sparse Autoencoders
    • Variational Autoencoders
      • Generating Fashion MNIST Images
    • Generative Adversarial Networks
      • The Difficulties of Training GANs
      • Deep Convolutional GANs
      • Progressive Growing of GANs
      • StyleGANs
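
    A minimal sketch of a stacked autoencoder for 28x28 images, assuming inputs scaled to [0, 1]; the coding size and layer widths are illustrative.

        import tensorflow as tf

        # Encoder: compress each image to a 30-dimensional coding.
        encoder = tf.keras.Sequential([
            tf.keras.layers.Flatten(input_shape=(28, 28)),
            tf.keras.layers.Dense(100, activation="selu"),
            tf.keras.layers.Dense(30, activation="selu"),
        ])
        # Decoder: reconstruct the image from the coding.
        decoder = tf.keras.Sequential([
            tf.keras.layers.Dense(100, activation="selu", input_shape=(30,)),
            tf.keras.layers.Dense(28 * 28, activation="sigmoid"),
            tf.keras.layers.Reshape([28, 28]),
        ])
        autoencoder = tf.keras.Sequential([encoder, decoder])
        autoencoder.compile(loss="binary_crossentropy", optimizer="adam")
        # Trained to reconstruct its own inputs:
        # autoencoder.fit(X_train, X_train, epochs=10)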

Day Five (Modules 9 and 10)

    9. Reinforcement Learning

    • Learning to Optimize Rewards
    • Policy Search
    • Introduction to OpenAI Gym
    • Neural Network Policies
    • Evaluating Actions: The Credit Assignment Problem
    • Policy Gradients
    • Markov Decision Processes
    • Temporal Difference Learning
    • Q-Learning
      • Exploration Policies
      • Approximate Q-Learning and Deep Q-Learning
    • Implementing Deep Q-Learning
    • Deep Q-Learning Variants
      • Fixed Q-Value Targets
      • Double DQN
      • Prioritized Experience Replay
      • Dueling DQN
    • The TF-Agents Library
    • Installing TF-Agents
    • TF-Agents Environments
    • Environment Specifications
    • Environment Wrappers and Atari Preprocessing
    • Training Architecture
    • Creating the Deep Q-Network
    • Creating the DQN Agent
    • Creating the Replay Buffer and the Corresponding Observer
    • Creating Training Metrics
    • Creating the Collect Driver
    • Creating the Dataset
    • Creating the Training Loop
      • Overview of Some Popular RL Algorithms
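
    As a preview, a minimal sketch of interacting with OpenAI Gym using a hard-coded CartPole policy. It assumes the classic Gym API, where reset() returns the observation and step() returns a 4-tuple; newer Gym and Gymnasium releases changed both signatures.

        import gym
        import numpy as np

        env = gym.make("CartPole-v1")

        def basic_policy(obs):
            # Push the cart in the direction the pole is leaning.
            angle = obs[2]
            return 0 if angle < 0 else 1

        totals = []
        for episode in range(20):
            obs = env.reset()  # classic API: returns the observation only
            episode_reward = 0
            for step in range(500):
                action = basic_policy(obs)
                obs, reward, done, info = env.step(action)  # classic 4-tuple
                episode_reward += reward
                if done:
                    break
            totals.append(episode_reward)
        env.close()

        print(np.mean(totals), np.std(totals))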

    10. Training and Deploying TensorFlow Models at Scale

    • Serving a TensorFlow Model
      • Using TensorFlow Serving
      • Creating a Prediction Service on GCP AI Platform
      • Using the Prediction Service
    • Deploying a Model to a Mobile or Embedded Device
    • Using GPUs to Speed Up Computations
      • Getting Your Own GPU
      • Using a GPU-Equipped Virtual Machine
      • Colaboratory
      • Managing the GPU RAM
      • Placing Operations and Variables on Devices
      • Parallel Execution Across Multiple Devices
    • Training Models Across Multiple Devices
      • Model Parallelism
      • Data Parallelism
      • Training at Scale Using the Distribution Strategies API
      • Training a Model on a TensorFlow Cluster
      • Running Large Training Jobs on Google Cloud AI Platform
      • Black Box Hyperparameter Tuning on AI Platform
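
    Finally, a minimal sketch of exporting a model in the SavedModel format for TensorFlow Serving, and of data-parallel training with the Distribution Strategies API, assuming TensorFlow 2.x; the toy model is illustrative.

        import tensorflow as tf

        # Export in the SavedModel format; TF Serving expects one
        # subdirectory per model version.
        model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(10,))])
        model.compile(loss="mse", optimizer="sgd")
        model.save("my_model/0001")

        # Data parallelism: variables and computation are mirrored
        # across all local GPUs.
        strategy = tf.distribute.MirroredStrategy()
        with strategy.scope():
            mirrored_model = tf.keras.Sequential([
                tf.keras.layers.Dense(1, input_shape=(10,))])
            mirrored_model.compile(loss="mse", optimizer="sgd")
        # mirrored_model.fit(...) then trains on all available GPUs.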