Podcast appearances and mentions of Ravi Ramamoorthi

  • 3 podcasts
  • 5 episodes
  • 33m average duration
  • Latest episode: May 2, 2021




Latest podcast episodes about Ravi Ramamoorthi

Yannic Kilcher Videos (Audio Only)
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis (ML Research Paper Explained)

May 2, 2021 · 33:55


#nerf #neuralrendering #deeplearning

View synthesis is a tricky problem, especially when only a sparse set of images is given as input. NeRF embeds an entire scene into the weights of a feedforward neural network, trained by backpropagation through a differentiable volume-rendering procedure, and achieves state-of-the-art view synthesis. It models directional dependence and captures fine structural detail, as well as reflection effects and transparency.

OUTLINE:
0:00 - Intro & Overview
4:50 - View Synthesis Task Description
5:50 - The Fundamental Difference to Classic Deep Learning
7:00 - NeRF Core Concept
15:30 - Training the NeRF from Sparse Views
20:50 - Radiance Field Volume Rendering
23:20 - Resulting View Dependence
24:00 - Positional Encoding
28:00 - Hierarchical Volume Sampling
30:15 - Experimental Results
33:30 - Comments & Conclusion

Paper: https://arxiv.org/abs/2003.08934
Website & Code: https://www.matthewtancik.com/nerf
My Video on SIREN: https://youtu.be/Q5g3p9Zwjrk

Abstract: We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location (x,y,z) and viewing direction (θ,ϕ)) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons.

Authors: Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng
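To make the abstract concrete, here is a minimal NumPy sketch of two pieces the episode covers: the sinusoidal positional encoding applied to input coordinates, and the classic volume-rendering quadrature that composites per-sample colors and densities along a ray into a pixel color. This is an illustrative simplification, not the authors' implementation; the function names, the fixed number of frequencies, and the per-ray (rather than batched) compositing are my own choices.

```python
import numpy as np

def positional_encoding(x, num_freqs=10):
    """Lift each coordinate to [x, sin(2^k * pi * x), cos(2^k * pi * x)]
    for k = 0 .. num_freqs-1, letting the MLP represent high frequencies."""
    out = [x]
    for k in range(num_freqs):
        out.append(np.sin(2.0**k * np.pi * x))
        out.append(np.cos(2.0**k * np.pi * x))
    return np.concatenate(out, axis=-1)

def composite(colors, densities, deltas):
    """Volume-rendering quadrature for one ray:
    C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    with transmittance T_i = prod_{j<i} exp(-sigma_j * delta_j).
    colors: (N, 3), densities: (N,), deltas: (N,) sample spacings."""
    alpha = 1.0 - np.exp(-densities * deltas)        # per-sample opacity
    trans = np.cumprod(1.0 - alpha + 1e-10)          # accumulated transmittance
    trans = np.concatenate([[1.0], trans[:-1]])      # first sample is unoccluded
    weights = trans * alpha                          # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)   # (3,) pixel color
```

Because every operation here is differentiable, gradients of an image loss flow back through `composite` into whatever network produced `colors` and `densities`, which is exactly why posed images alone suffice for training.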

Computer Science Channel (Video)
Teaching Computer Science Online

Dec 6, 2016 · 14:24


From bioinformatics, software design, and computer graphics to interaction design, half a dozen professors of computer science and engineering at UC San Diego have become pioneers in online learning, developing some of the most highly enrolled courses on the two largest online-learning platforms, Coursera and edX. They share their perspectives and experiences, and what online learning means for the future of education. Series: "Computer Science Channel" [Science] [Education] [Show ID: 31279]

Computer Science Channel (Audio)
Teaching Computer Science Online

Dec 6, 2016 · 14:24


From bioinformatics, software design, and computer graphics to interaction design, half a dozen professors of computer science and engineering at UC San Diego have become pioneers in online learning, developing some of the most highly enrolled courses on the two largest online-learning platforms, Coursera and edX. They share their perspectives and experiences, and what online learning means for the future of education. Series: "Computer Science Channel" [Science] [Education] [Show ID: 31279]

Computer Science Channel (Video)
Computing Primetime: Visual Computing

Aug 24, 2015 · 52:17


On this edition of Computing Primetime, Ravi Ramamoorthi, director of the new UC San Diego Center for Visual Computing (VisComp), is joined by two other faculty members from the center's interdisciplinary roster of UC San Diego researchers: Cognitive Science professor Zhuowen Tu and Qualcomm Institute research scientist Jurgen Schulze, who also teaches computer graphics in the Computer Science and Engineering department. In a wide-ranging conversation they discuss the three grand research themes that underpin VisComp's activities: mobile visual computing and digital imaging, to capture, process, and display the visual world with smartphones and other devices; interactive digital (augmented) reality, to render and mix real and virtual content seamlessly and realistically in real time; and automated, computer-based visual understanding of the world, from small-scale underwater organisms to large cities. Series: "Computing Primetime" [Science] [Show ID: 29675]

Computer Science Channel (Audio)
Computing Primetime: Visual Computing

Aug 24, 2015 · 52:17


On this edition of Computing Primetime, Ravi Ramamoorthi, director of the new UC San Diego Center for Visual Computing (VisComp), is joined by two other faculty members from the center's interdisciplinary roster of UC San Diego researchers: Cognitive Science professor Zhuowen Tu and Qualcomm Institute research scientist Jurgen Schulze, who also teaches computer graphics in the Computer Science and Engineering department. In a wide-ranging conversation they discuss the three grand research themes that underpin VisComp's activities: mobile visual computing and digital imaging, to capture, process, and display the visual world with smartphones and other devices; interactive digital (augmented) reality, to render and mix real and virtual content seamlessly and realistically in real time; and automated, computer-based visual understanding of the world, from small-scale underwater organisms to large cities. Series: "Computing Primetime" [Science] [Show ID: 29675]