Course Outline
MATLAB Deep Learning Environment & GPU Validation
- Deep Learning Toolbox architecture and workflow overview
- Verifying GPU availability, CUDA/cuDNN compatibility, and driver configuration
- Configuring parallel workers, memory management, and mastering gpuArray basics
- Lab 1: Environment validation and running your first GPU-accelerated deep learning script
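As a preview of Lab 1, a minimal validation sketch (assuming Parallel Computing Toolbox and a CUDA-capable GPU; the matrix size is illustrative):

```matlab
% Minimal GPU validation sketch
gpuDeviceCount("available")                     % how many usable GPUs are visible
d = gpuDevice;                                  % select the default GPU and query its properties
fprintf("Using %s (compute capability %s), %.1f GB free\n", ...
    d.Name, d.ComputeCapability, d.AvailableMemory/1e9);

% gpuArray basics: move work to the GPU, then bring results back to host memory
x = rand(4096, "single", "gpuArray");           % random matrix allocated directly on the GPU
y = x * x';                                     % matrix multiply runs on the GPU
y = gather(y);                                  % copy the result back to the CPU
```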
Core Deep Learning Constructs in MATLAB
- Neural network layers: conv, pooling, batch norm, dropout, residual, and dense layers
- Fundamentals of dlarray, dlnetwork, and custom training loops
- Loss functions, optimizers (Adam, SGD, RMSProp), and learning rate scheduling strategies
- Visualizing architectures, weight distributions, and gradient flow for debugging
- Lab 2: Building a custom dlnetwork from scratch and debugging layer interactions
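A minimal custom-training-loop sketch in this spirit (assuming Deep Learning Toolbox; the network, synthetic data, and iteration count are purely illustrative):

```matlab
% Build a small dlnetwork and train it with a hand-written loop
layers = [
    featureInputLayer(10)
    fullyConnectedLayer(32)
    reluLayer
    fullyConnectedLayer(3)
    softmaxLayer];
net = dlnetwork(layers);

X = dlarray(rand(10, 128, "single"), "CB");            % 10 features x 128 observations
labels = categorical(randi(3, 1, 128));                % random class labels (illustrative)
T = dlarray(single(onehotencode(labels, 1)), "CB");    % one-hot targets

avgGrad = []; avgSqGrad = [];
for iteration = 1:100
    [loss, grad] = dlfeval(@modelLoss, net, X, T);     % evaluate loss and gradients
    [net, avgGrad, avgSqGrad] = adamupdate(net, grad, avgGrad, avgSqGrad, iteration);
end

function [loss, grad] = modelLoss(net, X, T)
    Y = forward(net, X);                               % forward pass in training mode
    loss = crossentropy(Y, T);                         % classification loss
    grad = dlgradient(loss, net.Learnables);           % automatic differentiation
end
```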
Designing CNNs for Image Recognition
- CNN design patterns: feature extraction, spatial hierarchies, and receptive fields
- Transfer learning: leveraging pre-trained networks such as ResNet, EfficientNet, and MobileNet
- Data augmentation pipelines using imageDatastore, augmentedImageDatastore, and custom transforms
- Lab 3: Training a CNN from scratch on a custom image classification dataset with augmentation
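A sketch of the Lab 3 workflow (assuming a folder "imageFolder" with one subfolder per class; all sizes, layer choices, and option values are illustrative):

```matlab
% Augmented, from-scratch CNN training on a folder of labeled images
imds = imageDatastore("imageFolder", "IncludeSubfolders", true, "LabelSource", "foldernames");
[imdsTrain, imdsVal] = splitEachLabel(imds, 0.8, "randomized");

augmenter = imageDataAugmenter("RandXReflection", true, "RandRotation", [-15 15]);
augTrain  = augmentedImageDatastore([128 128 3], imdsTrain, "DataAugmentation", augmenter);
augVal    = augmentedImageDatastore([128 128 3], imdsVal);

numClasses = numel(categories(imdsTrain.Labels));
layers = [
    imageInputLayer([128 128 3])
    convolution2dLayer(3, 16, "Padding", "same")
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2, "Stride", 2)
    convolution2dLayer(3, 32, "Padding", "same")
    batchNormalizationLayer
    reluLayer
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];

opts = trainingOptions("adam", "MaxEpochs", 8, "ValidationData", augVal, ...
    "Shuffle", "every-epoch", "Plots", "training-progress");
net = trainNetwork(augTrain, layers, opts);
```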
Automated Data Labeling & Reproducible Pipelines
- Leveraging MATLAB’s active learning and semi-supervised labeling tools
- Importing and exporting annotations (COCO, Pascal VOC, YOLO, CSV)
- Building version-controlled, parameterized data preparation scripts
- Lab 4: Automating the labeling workflow and integrating it into a training script
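One way to keep data preparation reproducible is to wrap it in a seeded, parameterized function; the sketch below is hypothetical (the function prepareDataset, folder names, and CSV label export are illustrations, not a prescribed format):

```matlab
function prepareDataset(opts)
    % Parameterized, seeded data-preparation step that can be version-controlled
    arguments
        opts.SourceDir (1,1) string = "raw_images"
        opts.OutputDir (1,1) string = "prepared"
        opts.Seed      (1,1) double = 42
    end
    rng(opts.Seed);                              % fix the random seed so splits are repeatable
    if ~isfolder(opts.OutputDir), mkdir(opts.OutputDir); end

    imds = imageDatastore(opts.SourceDir, "IncludeSubfolders", true, "LabelSource", "foldernames");
    [trainSet, valSet] = splitEachLabel(imds, 0.8, "randomized");

    writeLabels(trainSet, fullfile(opts.OutputDir, "train_labels.csv"));
    writeLabels(valSet,   fullfile(opts.OutputDir, "val_labels.csv"));
end

function writeLabels(ds, csvPath)
    % Export file paths and labels to CSV so the split can live alongside the code in Git
    T = table(ds.Files, ds.Labels, 'VariableNames', {'file', 'label'});
    writetable(T, csvPath);
end
```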
Scalable Training: Multi-GPU, Cloud & Clusters
- Multi-GPU training strategies: batch size tuning, gradient accumulation, and data parallelism
- Distributed training with MATLAB Parallel Server and on-premises clusters
- Cloud training workflows (AWS, Azure, GCP) via MATLAB cloud compute profiles
- Training monitoring, checkpointing, and hyperparameter optimization techniques
- Lab 5: Scaling a model to a multi-GPU/cloud setup and profiling training throughput
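A sketch of multi-GPU training options with checkpointing (assuming Parallel Computing Toolbox and more than one local GPU; augTrain and layers refer to the earlier sketch, and the values are illustrative):

```matlab
% Data-parallel training across local GPUs with periodic checkpoints
opts = trainingOptions("adam", ...
    "ExecutionEnvironment", "multi-gpu", ...    % replicate the model across all local GPUs
    "MiniBatchSize", 256, ...                   % scale the batch size with the number of GPUs
    "MaxEpochs", 20, ...
    "CheckpointPath", "checkpoints", ...        % save snapshots so runs can resume (folder must exist)
    "Shuffle", "every-epoch", ...
    "Plots", "training-progress");
net = trainNetwork(augTrain, layers, opts);
```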
Cross-Framework Interoperability & Model Exchange
- Importing pre-trained Caffe and TensorFlow/Keras models into MATLAB
- Validating accuracy parity and adapting architectures for MATLAB workflows
- Exporting models to ONNX, TensorFlow, or Core ML for cross-platform deployment
- Lab 6: Importing a TF-Keras model, fine-tuning it in MATLAB, and exporting to ONNX
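A sketch of the exchange round-trip (assuming the TensorFlow/Keras and ONNX converter support packages are installed; the file names are illustrative):

```matlab
% Import a trained Keras model, inspect it, and export it to ONNX
net = importKerasNetwork("model.h5");           % bring the Keras model into MATLAB
analyzeNetwork(net);                            % inspect layers and check for conversion warnings

% ... fine-tune or retrain in MATLAB here ...

exportONNXNetwork(net, "model.onnx");           % export for deployment in other frameworks
```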
Capstone Project & Production Readiness
- End-to-end pipeline: data ingestion, training, validation, optimization, and deployment
- Model compression: pruning, quantization, and code generation with GPU Coder
- Reproducibility best practices: logging, seeding, and sharing MATLAB deep learning apps
- Capstone: Build, train, optimize, and export a complete image recognition system tailored to your specific domain
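As one example of the compression step, a post-training quantization sketch with dlquantizer (assuming the Deep Learning Toolbox Model Quantization Library and GPU Coder support packages; net, augTrain, and augVal refer to the earlier sketches):

```matlab
% Post-training INT8 quantization targeting GPU deployment
rng(0);                                          % seed for reproducible calibration sampling
quantObj = dlquantizer(net, "ExecutionEnvironment", "GPU");
calibrate(quantObj, augTrain);                   % collect weight/activation ranges on representative data
results = validate(quantObj, augVal);            % check accuracy of the quantized network
```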
To request a customized course outline for this training, please contact us.
Requirements
- Proficiency in MATLAB (syntax, programming workflows, toolbox familiarity)
- No prior data science or deep learning experience required
- Access to a local GPU-enabled workstation (CUDA-compatible) or approved cloud cluster for live labs
Audience
- Developers & Software Engineers
- Research Engineers & Domain Experts
- Teams transitioning from traditional signal/image processing to AI-driven workflows
14 Hours
Testimonials (3)
I really liked the end where we took the time to play around with ChatGPT. The room was not set up the best for this; instead of one large table, a couple of small ones so we could get into small groups and brainstorm would have helped.
Nola - Laramie County Community College
Course - Artificial Intelligence (AI) Overview
Working from first principles in a focused way, and moving to applying case studies within the same day
Maggie Webb - Department of Jobs, Regions, and Precincts
Course - Artificial Neural Networks, Machine Learning, Deep Thinking
It felt like we were going through directly relevant information at a good pace (i.e. no filler material)