CSCA 5322: Deep Learning for Computer Vision

Get a head start on program admission

Preview this course in the non-credit experience today!
Start working toward program admission and requirements right away. Work you complete in the non-credit experience will transfer to the for-credit experience when you upgrade and pay tuition. See How It Works for details.

Course Type: Elective

Specialization: Introduction to Computer Vision

Instructors: Dr. Tom Yeh

Prior knowledge needed:

  • Programming languages: N/A
  • Math: Basic to intermediate Linear Algebra, Trigonometry, Vectors & Matrices
  • Technical requirements: N/A

Course Description

Unlock the power of deep learning to transform visual data into actionable insights. This hands-on course guides you through the foundational and advanced techniques that drive modern computer vision applications—from image classification to generative modeling.

You'll begin with the building blocks of deep learning: understanding how multilayer perceptrons (MLPs) work and exploring normalization techniques that stabilize and accelerate training. You'll then dive into unsupervised learning with autoencoders and discover the magic behind Generative Adversarial Networks (GANs), which can create realistic images from noise. Next, you'll master the architecture that revolutionized computer vision by learning how CNNs extract spatial hierarchies and patterns from images for tasks like object detection and recognition. Finally, you'll explore cutting-edge architectures: ResNet introduces residual learning for deeper networks, while U-Net powers precise image segmentation in medical imaging and beyond.

Whether you're a data scientist, engineer, or AI enthusiast, this course equips you with the skills to build and deploy deep learning models for real-world vision tasks. With practical examples and guided learning, you'll gain both theoretical understanding and hands-on experience.

This course can be taken for academic credit as part of CU Boulder's MS in Computer Science degree offered on the Coursera platform. These fully accredited graduate degrees offer targeted courses, short 8-week sessions, and pay-as-you-go tuition. Admission is based on performance in three preliminary courses, not academic history. CU degrees on Coursera are ideal for recent graduates or working professionals. Learn more: https://coursera.org/degrees/ms-computer-science-boulder.

Learning Outcomes

  • Improve model performance and training stability using multilayer perceptrons (MLPs) and applying normalization techniques.
  • Implement autoencoders for unsupervised feature learning and design Generative Adversarial Networks (GANs) to generate synthetic images.
  • Train convolutional neural networks (CNNs) for image classification tasks, understanding how layers extract spatial features from visual data.
  • Apply advanced architectures like ResNet for deep image recognition and U-Net for image segmentation.

Course Grading Policy

  • Neural Network, Multi-Layer Perceptron, and Normalization: 20% of grade
  • Auto Encoder and GAN: 20% of grade
  • Convolutional Neural Networks: 20% of grade
  • ResNet and U-Net: 20% of grade

Course Content

Duration: 4 hours

Welcome to Deep Learning for Computer Vision, the second course in the Computer Vision specialization. In this first module, you'll be introduced to the principles behind neural networks and their use in visual recognition tasks. You'll begin by learning the basic building blocks (neurons, weights, biases) and progress toward constructing simple multi-layer perceptrons. Then, you'll discover key concepts such as activations, batch processing, and graph-to-matrix conversions. Finally, you will visualize neural networks with an emphasis on classification tasks.
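For a concrete sense of these building blocks, here is a minimal sketch of an MLP with batch normalization, assuming PyTorch; the course does not prescribe a framework, and the 28x28 input and layer sizes are illustrative assumptions only.

    # Minimal MLP sketch with batch normalization (illustrative; PyTorch assumed).
    import torch
    import torch.nn as nn

    class SimpleMLP(nn.Module):
        def __init__(self, in_features=28 * 28, hidden=128, num_classes=10):
            super().__init__()
            self.net = nn.Sequential(
                nn.Flatten(),                    # image -> flat feature vector
                nn.Linear(in_features, hidden),  # weights and biases of layer 1
                nn.BatchNorm1d(hidden),          # normalization stabilizes training
                nn.ReLU(),                       # nonlinear activation
                nn.Linear(hidden, num_classes),  # class scores (logits)
            )

        def forward(self, x):
            return self.net(x)

    # Batch processing: a whole batch of images flows through the network at once,
    # so each layer is effectively one matrix multiplication over the batch.
    model = SimpleMLP()
    batch = torch.randn(32, 1, 28, 28)           # 32 synthetic grayscale images
    logits = model(batch)                        # shape: (32, 10)
    print(logits.shape)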

Duration: 3 hours

In this module, you’ll explore two powerful architectures in deep learning: autoencoders and generative adversarial networks (GANs). You’ll begin by learning how autoencoders compress and reconstruct data using encoder-decoder structures, and how reconstruction loss is minimized through backpropagation and gradient descent. You’ll then examine the role of loss functions and optimization techniques in training these models. In the second half of the module, you’ll dive into GANs, where a generator and discriminator compete to produce realistic synthetic data. You’ll study how adversarial training works, how binary cross-entropy loss is applied, and how GANs are used to model complex data distributions. By the end of this module, you’ll be able to implement and evaluate both autoencoders and GANs for representation learning and data generation.
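As a rough sketch of both ideas, assuming PyTorch (the 784-pixel inputs, 32-dimensional latent code, and layer sizes are illustrative assumptions, not the course's assignment settings):

    # Autoencoder: encoder compresses, decoder reconstructs; training minimizes
    # a reconstruction loss (mean squared error here) via backpropagation.
    import torch
    import torch.nn as nn

    class Autoencoder(nn.Module):
        def __init__(self, dim=784, latent=32):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(dim, latent), nn.ReLU())
            self.decoder = nn.Sequential(nn.Linear(latent, dim), nn.Sigmoid())

        def forward(self, x):
            return self.decoder(self.encoder(x))

    # GAN: the generator maps noise to fake samples; the discriminator scores
    # samples as real or fake; both are trained with binary cross-entropy.
    generator = nn.Sequential(nn.Linear(64, 128), nn.ReLU(),
                              nn.Linear(128, 784), nn.Tanh())
    discriminator = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2),
                                  nn.Linear(128, 1))

    bce = nn.BCEWithLogitsLoss()
    real = torch.rand(16, 784)                   # stand-in batch of "real" data
    fake = generator(torch.randn(16, 64))        # generated from noise

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = (bce(discriminator(real), torch.ones(16, 1)) +
              bce(discriminator(fake.detach()), torch.zeros(16, 1)))

    # Generator step: try to make the discriminator output 1 on fakes.
    g_loss = bce(discriminator(fake), torch.ones(16, 1))

    # Autoencoder reconstruction loss on the same stand-in batch.
    recon_loss = nn.functional.mse_loss(Autoencoder()(real), real)
    print(d_loss.item(), g_loss.item(), recon_loss.item())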

Duration: 2 hours

In this module, you’ll learn how convolutional neural networks extract features from images and perform classification. You’ll begin by building a tiny CNN by hand and in Excel, exploring convolution, max-pooling, and fully connected layers. Then, you’ll scale up to larger CNN architectures and examine how they process data through multiple convolution and pooling stages. You’ll also study how categorical cross-entropy loss and gradients are computed for training. Finally, you’ll walk through backpropagation across all CNN layers to understand how learning occurs.
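The module builds its tiny CNN by hand and in Excel; as a companion, here is a minimal sketch of the same pieces (convolution, max-pooling, a fully connected layer, categorical cross-entropy, and a backpropagation call), assuming PyTorch, with illustrative sizes:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyCNN(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)  # feature extraction
            self.pool = nn.MaxPool2d(2)                            # spatial downsampling
            self.fc = nn.Linear(8 * 14 * 14, num_classes)          # classification head

        def forward(self, x):
            x = self.pool(F.relu(self.conv(x)))  # conv -> ReLU -> max-pool
            x = torch.flatten(x, 1)              # keep the batch dimension
            return self.fc(x)                    # class logits

    model = TinyCNN()
    images = torch.randn(4, 1, 28, 28)           # 4 synthetic 28x28 grayscale images
    labels = torch.tensor([3, 1, 4, 1])          # synthetic class labels
    loss = F.cross_entropy(model(images), labels)  # categorical cross-entropy
    loss.backward()                              # backpropagation through every layer
    print(loss.item())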

Duration: 3 hours

In this module, you’ll explore two influential deep learning architectures: ResNet and U-Net. You’ll begin by learning how ResNet uses skip connections and residual learning to enable the training of very deep networks, addressing challenges like vanishing and exploding gradients. You’ll examine how residual blocks preserve information and support higher-order logic across layers. Then, you’ll shift to U-Net, a powerful architecture for image segmentation, and study its encoder-decoder structure, skip connections, and upsampling techniques like transposed convolution. By the end of this module, you’ll understand how both architectures enhance learning efficiency and performance in complex vision tasks.
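A minimal sketch of a residual block and a U-Net-style upsampling step, assuming PyTorch; channel counts and feature-map sizes are illustrative assumptions, not the exact architectures used in the course:

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        # The skip connection adds the input back to the block's output,
        # helping gradients flow through very deep networks.
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
            self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
            self.relu = nn.ReLU()

        def forward(self, x):
            out = self.relu(self.conv1(x))
            out = self.conv2(out)
            return self.relu(out + x)            # residual (skip) connection

    # U-Net decoder step: a transposed convolution doubles spatial resolution,
    # then the matching encoder feature map is concatenated via a skip connection.
    up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
    decoder_features = torch.randn(1, 64, 16, 16)  # coming up from the bottleneck
    encoder_features = torch.randn(1, 32, 32, 32)  # saved on the way down
    upsampled = up(decoder_features)               # -> (1, 32, 32, 32)
    merged = torch.cat([upsampled, encoder_features], dim=1)  # (1, 64, 32, 32)

    print(ResidualBlock(64)(merged).shape)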

Duration: 90 minutes per attempt - 2 attempts allowed

This module contains materials for the final exam. If you've upgraded to the for-credit version of this course, please make sure you review the additional for-credit materials in the Introductory module and anywhere else they may be found.

The final exam for this course is an in-course assessment with 51 questions. A score of 81% or higher is considered passing.

Notes

  • Cross-listed Courses: Courses that are offered under two or more programs. Considered equivalent when evaluating progress toward degree requirements. You may not earn credit for more than one version of a cross-listed course.
  • Page Updates: This page is periodically updated. Course information on the Coursera platform supersedes the information on this page. Click the View on Coursera button above for the most up-to-date information.