Programming Massively Parallel Processors with CUDA


Virtually all semiconductor market domains, including PCs, game consoles, mobile handsets, servers, supercomputers, and networks, are converging to concurrent platforms. There are two important reasons for this trend. First, these concurrent processors can potentially offer more effective use of chi…

Stanford University


    • Latest episode: Jun 9, 2010
    • New episodes: infrequent
    • Average duration: 58m
    • Episodes: 16



    Latest episodes from Programming Massively Parallel Processors with CUDA

    16. Parallel Sorting (April 20, 2010)

    Published Jun 9, 2010 · 66:00


    Michael Garland of NVIDIA Research discusses parallel sorting methods that make searching, categorization, and the construction of data structures easier to carry out in parallel. (April 20, 2010)

    15. Optimizing Parallel GPU Performance (May 20, 2010)

    Published Jun 9, 2010 · 74:15


    John Nickolls of NVIDIA discusses how to optimize parallel GPU performance. (May 20, 2010)

    14. Path Planning System on the GPU (May 18, 2010)

    Published Jun 9, 2010 · 50:15


    Avi Bleiweiss delivers a lecture on the path planning system on the GPU. (May 18, 2010)

    6. Parallel Patterns I (April 15, 2010)

    Published May 27, 2010 · 37:23


    Students are taught how to effectively program massively parallel processors using the CUDA C programming language. Students also develop familiarity with the language itself and are exposed to the architecture of modern GPUs. (April 15, 2010)

    12. NVIDIA OptiX: Ray Tracing on the GPU (May 11, 2010)

    Published May 26, 2010 · 69:27


    Steven Parker, Director of High Performance Computing and Computational Graphics at NVIDIA, speaks about ray tracing. (May 11, 2010)

    13. Future of Throughput (May 13, 2010)

    Published May 26, 2010 · 73:13


    William Dally guest-lectures on the end of denial architecture and the rise of throughput computing. (May 13, 2010)

    11. The Fermi Architecture (May 6, 2010)

    Published May 17, 2010 · 72:34


    Michael C. Shebanow, Principal Research Scientist with NVIDIA Research, talks about the new Fermi architecture. This next-generation CUDA architecture, code-named "Fermi," is NVIDIA's most advanced GPU computing architecture to date. (May 6, 2010)

    10. Solving Partial Differential Equations with CUDA (May 4, 2010)

    Published May 17, 2010 · 64:46


    Jonathan Cohen, a Senior Research Scientist at NVIDIA Research, talks about solving partial differential equations with CUDA. (May 4, 2010)

    9. Sparse Matrix Vector Operations (April 29, 2010)

    Published May 17, 2010 · 38:33


    Nathan Bell from NVIDIA Research talks about sparse matrix-vector multiplication on throughput-oriented processors. (April 29, 2010)
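    The lecture itself is audio-only; as a rough illustration of the topic (not taken from the episode), a minimal scalar CSR (compressed sparse row) sparse matrix-vector kernel assigns one thread per matrix row:

    ```cuda
    // y = A*x for a matrix stored in CSR format, one thread per row.
    // row_ptr[row]..row_ptr[row+1] delimits that row's nonzeros in vals/col_idx.
    __global__ void spmv_csr(int num_rows, const int* row_ptr, const int* col_idx,
                             const float* vals, const float* x, float* y) {
        int row = blockIdx.x * blockDim.x + threadIdx.x;
        if (row < num_rows) {
            float dot = 0.0f;
            for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
                dot += vals[j] * x[col_idx[j]];  // gather from x by column index
            y[row] = dot;
        }
    }
    ```

    The one-thread-per-row mapping is simple but leaves memory accesses uncoalesced for long rows; the lecture's subject, throughput-oriented SpMV, is about choosing layouts and mappings that avoid exactly that.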

    8. Introduction to Thrust (April 27, 2010)

    Published May 17, 2010 · 44:29


    Nathan Bell of NVIDIA Research talks about Thrust, a productivity library for CUDA. (April 27, 2010)
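    As a small taste of what Thrust's STL-like interface looks like (an illustrative sketch, not code from the episode):

    ```cuda
    #include <thrust/host_vector.h>
    #include <thrust/device_vector.h>
    #include <thrust/sort.h>
    #include <thrust/reduce.h>
    #include <cstdio>

    int main() {
        // Build data on the host, then copy to the device with one assignment.
        thrust::host_vector<int> h(4);
        h[0] = 3; h[1] = 1; h[2] = 4; h[3] = 1;
        thrust::device_vector<int> d = h;              // host-to-device copy

        thrust::sort(d.begin(), d.end());              // parallel sort on the GPU
        int sum = thrust::reduce(d.begin(), d.end());  // parallel reduction

        printf("sum = %d\n", sum);
        return 0;
    }
    ```

    The point of the library is exactly this: sorts, reductions, scans, and transforms expressed as one-line algorithm calls, with Thrust choosing the parallel implementation.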

    7. Parallel Patterns II (April 22, 2010)

    Published May 17, 2010 · 41:30


    David Tarjan continues his discussion on parallel patterns. (April 22, 2010)

    5. Performance Considerations (April 13, 2010)

    Published Apr 21, 2010 · 59:46


    Lukas Biewald of Dolores Labs discusses performance considerations, including memory coalescing, shared memory bank conflicts, control-flow divergence, occupancy, and kernel launch overheads. (April 13, 2010)
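    Memory coalescing, the first item on that list, is easy to show in miniature (an illustrative sketch, not code from the episode):

    ```cuda
    // Coalesced: consecutive threads in a warp read consecutive addresses,
    // so the hardware can merge the warp's loads into a few wide transactions.
    __global__ void copy_coalesced(const float* in, float* out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = in[i];
    }

    // Strided: consecutive threads touch addresses `stride` elements apart,
    // forcing many separate memory transactions per warp.
    __global__ void copy_strided(const float* in, float* out, int n, int stride) {
        int i = (blockIdx.x * blockDim.x + threadIdx.x) * stride;
        if (i < n) out[i] = in[i];
    }
    ```

    Both kernels do the same logical work, but the strided version can run many times slower purely because of how its accesses map onto memory transactions.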

    4. CUDA Memories (April 8, 2010)

    Published Apr 21, 2010 · 57:48


    Jared Hoberock of NVIDIA lectures on CUDA memory spaces for CS 193G: Programming Massively Parallel Processors. (April 8, 2010)

    3. CUDA Threads & Atomics (April 6, 2010)

    Published Apr 15, 2010 · 49:49


    Atomic operations in CUDA and the associated hardware are discussed. (April 6, 2010)
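    The canonical small example of CUDA atomics is a histogram, where many threads may increment the same bin concurrently (an illustrative sketch, not code from the episode):

    ```cuda
    // Each thread classifies one input byte and atomically bumps the matching
    // bin. atomicAdd serializes only the updates that actually collide, so the
    // result is correct without any explicit locking.
    __global__ void histogram(const unsigned char* data, int n, unsigned int* bins) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) atomicAdd(&bins[data[i] / 32], 1u);  // 8 bins of width 32
    }
    ```

    Without the atomic, two threads reading, incrementing, and writing the same bin could lose one of the updates; that read-modify-write race is the hardware problem the lecture covers.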

    2. Introduction to CUDA (April 1, 2010)

    Published Apr 14, 2010 · 75:40



    1. Introduction to Massively Parallel Computing (March 30, 2010)

    Published Apr 9, 2010 · 67:34


    Jared Hoberock of NVIDIA gives the introductory lecture to CS 193G: Programming Massively Parallel Processors. (March 30, 2010)
