UC Davis course EEC277 introduces the design and analysis of the architecture of computer graphics systems. Topics include the graphics pipeline, general-purpose programmability of modern graphics architectures, exploiting parallelism in graphics, and case studies of noteworthy and modern graphics architectures.
In our final content lecture, we look at how to parallelize the graphics pipeline. What is challenging about parallelizing the GPU? In what ways could we parallelize it? We discuss the sorting taxonomy of parallelism strategies (sort-first, sort-middle, and sort-last), look at different ways to communicate within a multi-node system, and analyze the taxonomy using historical graphics architectures.
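For concreteness, here is a minimal, hedged sketch of the sort-first end of that taxonomy (not code from the lecture): the screen is divided into tiles, each owned by one processor, and a triangle is routed to every processor whose tile its screen-space bounding box overlaps. The types, tile sizes, and the route_to hook are hypothetical.

```
#include <math.h>
#include <stdio.h>

#define TILE_W  64
#define TILE_H  64
#define TILES_X 16    /* a hypothetical 1024x1024 screen */
#define TILES_Y 16

typedef struct { float x[3], y[3]; } Tri;   /* screen-space vertex positions */

/* Stand-in for the interconnect: a real system would send the triangle
   to the processor that owns tile (tx, ty). */
static void route_to(int tx, int ty, const Tri *t) {
    printf("triangle %p -> tile (%d, %d)\n", (const void *)t, tx, ty);
}

void sort_first_distribute(const Tri *t) {
    /* Screen-space bounding box of the triangle. */
    float xmin = fminf(fminf(t->x[0], t->x[1]), t->x[2]);
    float xmax = fmaxf(fmaxf(t->x[0], t->x[1]), t->x[2]);
    float ymin = fminf(fminf(t->y[0], t->y[1]), t->y[2]);
    float ymax = fmaxf(fmaxf(t->y[0], t->y[1]), t->y[2]);

    int tx0 = (int)floorf(xmin / TILE_W), tx1 = (int)floorf(xmax / TILE_W);
    int ty0 = (int)floorf(ymin / TILE_H), ty1 = (int)floorf(ymax / TILE_H);

    /* A triangle spanning several tiles is duplicated to each owner; that
       redundant work and communication is the cost sort-middle and
       sort-last designs trade against. */
    for (int ty = ty0; ty <= ty1; ty++)
        for (int tx = tx0; tx <= tx1; tx++)
            if (tx >= 0 && tx < TILES_X && ty >= 0 && ty < TILES_Y)
                route_to(tx, ty, t);
}
```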
Jeremy Sugerman from Stanford describes GRAMPS, a programming model for graphics pipelines and heterogeneous parallelism.
We turn away from a fixed-function graphics pipeline and explore what we can do with a user-programmable pipeline, where not only pipeline stages but also the structure of the pipeline can be customized. We look at Reyes, delay streams, and the programmable culling unit.
This lecture contains the overflow from the four pipeline lectures, mostly material from the composition/display lecture.
The final stage of the graphics pipeline is composition/display. In this lecture we look at antialiasing algorithms, compositing, the depth buffer, and monitors. [Note: The beginning part of this lecture is the remainder of the rasterization lecture, and this lecture spills into the overflow lecture.]
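As a concrete, hedged illustration of two of those per-fragment operations (not code from the lecture), here is a minimal sketch of the depth test and of "over" compositing with premultiplied alpha. The struct names are assumptions, and real hardware operates on quantized framebuffer formats rather than floats.

```
typedef struct { float r, g, b, a; float z; } Fragment;  /* incoming fragment  */
typedef struct { float r, g, b, a; float z; } Pixel;     /* framebuffer entry  */

/* Depth test: keep the fragment only if it is nearer than the stored depth. */
int depth_test(const Fragment *f, const Pixel *p) {
    return f->z < p->z;
}

/* "Over" compositing with premultiplied alpha: out = src + (1 - src.a) * dst. */
void composite_over(const Fragment *src, Pixel *dst) {
    float k = 1.0f - src->a;
    dst->r = src->r + k * dst->r;
    dst->g = src->g + k * dst->g;
    dst->b = src->b + k * dst->b;
    dst->a = src->a + k * dst->a;
}
```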
John Nickolls, chief compute architect for NVIDIA's GPUs, discusses NVIDIA GPU graphics and compute architecture.
Texturing is the process of applying images to geometry. We look at the role of texture, how texture is filtered, and how graphics hardware has implemented texturing. We also look at texture caching and texture compression. [Note: The first part of this lecture is the remainder of the rasterization lecture, and texturing spills into the next "composition/display" lecture.]
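As one concrete example of the filtering discussed here, the following is a hedged sketch of bilinear filtering of a single texture level, written in plain C. The texel layout, clamp-to-edge addressing, and function names are assumptions; real hardware additionally blends between mipmap levels (trilinear filtering) and supports anisotropic filtering.

```
#include <math.h>

typedef struct { float r, g, b; } Texel;

/* Fetch one texel with clamp-to-edge addressing from a w-by-h image. */
static Texel fetch(const Texel *tex, int w, int h, int x, int y) {
    if (x < 0) x = 0;
    if (x >= w) x = w - 1;
    if (y < 0) y = 0;
    if (y >= h) y = h - 1;
    return tex[y * w + x];
}

/* Bilinear filter at normalized coordinates (u, v). */
Texel bilinear(const Texel *tex, int w, int h, float u, float v) {
    /* Map (u, v) to texel space, centered on texel centers. */
    float x = u * w - 0.5f, y = v * h - 0.5f;
    int x0 = (int)floorf(x), y0 = (int)floorf(y);
    float fx = x - x0, fy = y - y0;

    Texel t00 = fetch(tex, w, h, x0,     y0);
    Texel t10 = fetch(tex, w, h, x0 + 1, y0);
    Texel t01 = fetch(tex, w, h, x0,     y0 + 1);
    Texel t11 = fetch(tex, w, h, x0 + 1, y0 + 1);

    /* Weight the four nearest texels by their fractional distances. */
    Texel out;
    out.r = (1-fx)*(1-fy)*t00.r + fx*(1-fy)*t10.r + (1-fx)*fy*t01.r + fx*fy*t11.r;
    out.g = (1-fx)*(1-fy)*t00.g + fx*(1-fy)*t10.g + (1-fx)*fy*t01.g + fx*fy*t11.g;
    out.b = (1-fx)*(1-fy)*t00.b + fx*(1-fy)*t10.b + (1-fx)*fy*t01.b + fx*fy*t11.b;
    return out;
}
```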
Justin Hensley of AMD/ATI Graphics describes the latest GPUs from AMD's ATI Graphics division.
Rasterization is the GPU stage that produces fragments from screen-space triangles. We look at both pixel-coverage and parameter-interpolation algorithms. We also discuss perspective correction and look at historical SGI machines. [Note: The rasterization material starts in the middle of this lecture and spills over into the texture lecture.]
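Here is a hedged sketch of the coverage half of that problem: the edge-function test that hardware rasterizers use in some form, shown for a counter-clockwise triangle sampled at pixel centers. The names are hypothetical, and fill rules for pixels exactly on shared edges are omitted.

```
#include <stdbool.h>

typedef struct { float x, y; } Vec2;

/* Signed area term for edge a->b and point p: positive if p is to the
   left of the edge. Normalized by the triangle area, these values are
   also the barycentric weights used for parameter interpolation. */
static float edge(Vec2 a, Vec2 b, Vec2 p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

/* True if pixel (px, py) is covered by the CCW triangle v0, v1, v2. */
bool covers(Vec2 v0, Vec2 v1, Vec2 v2, int px, int py) {
    Vec2 p = { px + 0.5f, py + 0.5f };   /* sample at the pixel center */
    float w0 = edge(v1, v2, p);
    float w1 = edge(v2, v0, p);
    float w2 = edge(v0, v1, p);
    return w0 >= 0.0f && w1 >= 0.0f && w2 >= 0.0f;
}
```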
In this lecture, we take a close look at the geometry stage of the graphics pipeline: transformations, homogeneous coordinates, the OpenGL lighting model, primitive assembly, clipping, and culling. We also look at ways to save computation and bandwidth: vertex arrays, vertex caches, and geometry compression. [Note: This lecture spills over into the "rasterization" lecture.]
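To make the transformation step concrete, here is a hedged sketch of a vertex in homogeneous coordinates multiplied by a combined modelview-projection matrix and then divided by w. Clipping, which must happen before the divide, is omitted, and the names and matrix layout are assumptions.

```
typedef struct { float x, y, z, w; } Vec4;

/* Multiply a homogeneous vertex by a 4x4 matrix stored in row-major order. */
Vec4 transform(const float m[16], Vec4 v) {
    Vec4 r;
    r.x = m[0]*v.x  + m[1]*v.y  + m[2]*v.z  + m[3]*v.w;
    r.y = m[4]*v.x  + m[5]*v.y  + m[6]*v.z  + m[7]*v.w;
    r.z = m[8]*v.x  + m[9]*v.y  + m[10]*v.z + m[11]*v.w;
    r.w = m[12]*v.x + m[13]*v.y + m[14]*v.z + m[15]*v.w;
    return r;
}

/* Perspective divide: clip space -> normalized device coordinates. */
Vec4 perspective_divide(Vec4 c) {
    Vec4 n = { c.x / c.w, c.y / c.w, c.z / c.w, 1.0f };
    return n;
}
```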
In this lecture we turn to the technology fundamentals behind the rise of the GPU: what are the technology trends in today's VLSI designs, and how and why do they impact the GPU and its architecture? We also contrast CPUs with GPUs, and task-parallel with time-multiplexed architectures.
The modern GPU can be used as a general-purpose processor. This field of "GPGPU" (general-purpose programmability of graphics hardware) or "GPU computing" is having an increasing impact on GPU architecture, GPU software and programming environments, and the computing industry. These two lectures discuss the fundamentals of GPGPU: the programming model, the hardware, and some fundamental algorithms. We use NVIDIA's CUDA and G80 architecture as a representative example.
The modern GPU can be used as a general-purpose processor. This field of "GPGPU" (general-purpose programmability of graphics hardware) or "GPU computing" is having an increasing impact on GPU architecture, GPU software and programming environments, and the computing industry. These two lectures discuss the fundamentals of GPGPU: the programming model, the hardware, and some fundamental algorithms. We use NVIDIA's CUDA and G80 architecture as a representative example.
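As a taste of the CUDA programming model these lectures cover, here is the canonical SAXPY kernel: a grid of thread blocks, with one lightweight thread per output element. This is a generic illustration rather than an example taken from the lectures; d_x and d_y are assumed to be device pointers allocated with cudaMalloc.

```
#include <cuda_runtime.h>

// y <- a*x + y, computed with one thread per element.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n)                                       // guard the partial last block
        y[i] = a * x[i] + y[i];
}

// Launch with 256-thread blocks and enough blocks to cover n elements:
//
//   int blocks = (n + 255) / 256;
//   saxpy<<<blocks, 256>>>(n, 2.0f, d_x, d_y);
```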
The graphics pipeline has recently added programmable stages. This lecture covers the software and hardware fundamentals of the GPU's programmable stages, in particular the vertex shader and fragment shader.
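To make "fragment shader" concrete, here is a hedged sketch, written in plain C rather than a shading language, of the per-fragment work a simple diffuse shader performs on the rasterizer's interpolated inputs. The names and the lighting model are illustrative assumptions, not the lecture's example.

```
#include <math.h>

typedef struct { float x, y, z; } Vec3;

static Vec3 normalize3(Vec3 v) {
    float len = sqrtf(v.x*v.x + v.y*v.y + v.z*v.z);
    Vec3 n = { v.x/len, v.y/len, v.z/len };
    return n;
}

/* Inputs are the rasterizer's interpolated per-fragment attributes;
   the output is the fragment's color before blending. */
Vec3 shade_fragment(Vec3 normal, Vec3 light_dir, Vec3 albedo) {
    Vec3 n = normalize3(normal);      /* re-normalize after interpolation */
    Vec3 l = normalize3(light_dir);
    float ndotl = fmaxf(0.0f, n.x*l.x + n.y*l.y + n.z*l.z);
    Vec3 color = { albedo.x * ndotl, albedo.y * ndotl, albedo.z * ndotl };
    return color;
}
```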
How do we measure graphics performance? How do we characterize graphics hardware? What are the bottlenecks in a graphics application and how do we detect them? What are benchmarks, what makes a good benchmark, and how do we use benchmarks?
What are the different ways we might consider doing rendering? Why did OpenGL make the decisions it did, and what does the OpenGL pipeline look like?
Introduction to the course: why we should study graphics architecture, history of graphics architecture, overview of the course, administrivia.