Podcasts about HEVC

  • 64 podcasts
  • 120 episodes
  • 49 min average duration
  • 1 new episode monthly
  • Latest episode: Apr 19, 2025

POPULARITY (2017–2024)


Best podcasts about HEVC

Latest podcast episodes about HEVC

MP3 – mintCast
459 – Travel Digital Hygiene

Apr 19, 2025 · 120:31


First up in the news: Mint Monthly News; Linux 6.16 to add an Asahi UAPI header for Apple Silicon; Switzerland battles privacy intrusions; Firefox adds HEVC playback on Linux; Debian releases APT 3.0; Apple may add Mx GCC core support; Git turns 20; ProtonMail adds advanced features; ArcoLinux ends it all. Then in our Wanderings: Bill is having trouble on the road and won't be here; Joe returns to us; Moss juggles tablets; Majid learns things; and Eric is AWOL. In our Innards section, we talk travel computing. In Bodhi Corner, Robert Wiley releases a script that can be used to install Moksha on any version of Debian, including Trixie.

Voices of VR Podcast – Designing for Virtual Reality
#1557: Apple Immersive Video Behind-the-Scenes & Overcoming Fears with “Adventure” Series Athlete

Apr 8, 2025 · 53:32


Ant Williams is a freediving athlete featured in the third episode of The Adventure Series, titled "Ice Dive," which is co-produced by Apple and Atlantic Studios (formerly Atlantic Productions). In the episode, Williams attempts to swim a world-record distance of 182 meters under ice, and I wanted to get some additional behind-the-scenes context on his experience, as well as what it was like to have the most intense month of his life condensed down into a 15-minute Apple Immersive Video. Williams is a sports psychologist who wanted to put his theories into practice by taking on what he calls "positive, calculated risk-taking challenges" that allow him to deal with overwhelming anxiety and overcome his fears, uncertainty, and self-doubt. I also wanted to get some additional context on the production of the episode, as Apple has otherwise been pretty tight-lipped about the series, which launched with "Highlining" on the same day as the Apple Vision Pro itself, February 2, 2024.

Apple Immersive Video is a different format than spatial video. Apple says "spatial videos are captured in 1080p at 30 frames per second in standard dynamic range"; these are what can be captured by either an iPhone or Apple Vision Pro, and they are displayed in a windowed frame where you see the stereoscopic effects. Apple describes Apple Immersive Video as "a remarkable storytelling format that leverages 3D video recorded in 8K with a 180-degree field of view and Spatial Audio to transport viewers to the center of the action." Apple Immersive Video is much closer to what we've seen from the XR industry and VR 180 filmmakers over the past decade, and Apple's technology is likely derived from its 2020 acquisition of NextVR, which focused on live stereoscopic broadcasts of sports events on VR headsets, starting with the Samsung Gear VR and Oculus Rift in 2014.
A lot of the technical specifications of the Apple Immersive Video format have not been officially confirmed by Apple, but a couple of breadcrumbs give us more details. Thanks to iFixit's teardown of the Apple Vision Pro on February 7, 2024, we know the microOLED display size is reported as "the lit area totals 3660 px by 3200 px." 360 Labs' Mike Rowell wrote a post on March 19, 2024 saying, "Apple Vision Pro's screens are a whopping 3660 x 3200 pixels per eye. Although they haven't made any official claims as to the FOV of the headset, 3rd party developers claim that it looks to be around 100° horizontal. With each screen having 3,660 horizontal pixels, this would mean that a 180° immersive experience would need about 6,000 x 6,000 pixels per eye to saturate the display. Apple's own immersive experiences have been reported at being 4320x4320 per eye at 90fps and in HDR10." That reporting was detailed by Mike Swanson, who announced a spatial video tool on March 7, 2024 that leverages Apple's AVFoundation to properly encode video into the multiview extension of the HEVC codec, known as MV-HEVC. Swanson says in his post, "I receive multiple messages and files every day from people who are trying to find the limits of what the Apple Vision Pro is capable of playing. You can start with the 4320×4320 per-eye 90fps content that Apple is producing and go up from there. I've personally played up to "12K" (11520×5760) per eye 360-degree stereo video at 30fps." Another clue can be found in the Blackmagic URSA Cine Immersive camera, announced on June 10, 2024, whose specifications say, "The sensor delivers 8160 x 7200 resolution per eye with pixel level synchronization and an incredible 16 stops of dynamic range, so cinematographers can shoot 90fps stereoscopic 3D immersive cinema content to a single file."
Incidentally, Currents director Jake Oleson told me that he used Swanson's tool to create his immersive film after shooting it in 8K on the Canon EOS R5 camera body with Canon's Dual Fisheye lens.
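Rowell's back-of-the-envelope estimate quoted above (3,660 horizontal pixels spread across a roughly 100° horizontal FOV, scaled up to a 180° canvas) can be sketched as a quick calculation. This is a minimal sketch assuming the third-party ~100° FOV figure, which Apple has not confirmed:

```python
def pixels_per_degree(display_px_h: float, display_fov_deg: float) -> float:
    """Angular pixel density of the headset display, in pixels per degree."""
    return display_px_h / display_fov_deg

def required_per_eye_width(display_px_h: float, display_fov_deg: float,
                           content_fov_deg: float = 180.0) -> int:
    """Per-eye horizontal resolution a wraparound video needs so that its
    angular pixel density matches (saturates) the display's."""
    return round(pixels_per_degree(display_px_h, display_fov_deg) * content_fov_deg)

# Figures cited above: 3660 px of horizontal resolution, ~100° horizontal FOV (estimate).
print(required_per_eye_width(3660, 100))         # 180° immersive content
print(required_per_eye_width(3660, 100, 360.0))  # full 360° content
```

With these inputs, the 180° case comes out to roughly 6,600 pixels per eye, in the same ballpark as Rowell's "about 6,000 x 6,000" estimate; the gap mostly reflects how sensitive the result is to the unconfirmed FOV figure.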

Linux Weekly Daily Wednesday
Welcome To The Thunderverse

Apr 2, 2025 · 39:22


Thunderbird announces Thundermail and Thunderbird Pro Services, Firefox enables HEVC playback on Linux, Windows 95 on Fedora, and April Fools' tech articles are awesome!

Focus Check
ep38 - FUJIFILM GFX ETERNA Insights | Final Cut Pro 11 Introduced | ZEISS Radiance Zooms | RØDE Wireless Micro

Nov 14, 2024 · 49:18


The filmmaking world turns its attention to Japan as FUJIFILM enters the professional digital film camera market with the FUJIFILM GFX ETERNA filmmaking camera, currently in development. Johnnie traveled to Japan to interview Yuji Igarashi-san from FUJIFILM about the groundbreaking new camera. But that's not all: Nino shares insights on the brand-new updates to Final Cut Pro 11 for Mac, Final Cut Pro for iPad 2.1, and Final Cut Camera 1.1 for iPhone, all of which were just released by Apple. They also talk about the new RØDE Wireless Micro for smartphone shooters and the new super high-end ZEISS Supreme Radiance Zooms. Stay tuned until the end!

Sponsor: This episode is sponsored by FUJIFILM. Check it out at (37:08).

Chapters & articles mentioned in this episode:
(00:00) Intro
(00:51) FUJIFILM Filmmaking Camera Unveiled – "GFX ETERNA", Large Format Sensor, Planned Release in 2025 https://www.cined.com/fujifilm-filmmaking-camera-unvailed-gfx-eterna-large-format-sensor-planned-to-be-released-in-2025/
(11:44) Final Cut Pro 11 Introduced: Magnetic Mask & Transcribe to Captions https://www.cined.com/final-cut-pro-11-introduced-magnetic-mask-transcribe-to-captions-more/
(31:40) Final Cut Pro for iPad 2.1: Enhance Light and Color Added https://www.cined.com/final-cut-pro-for-ipad-2-1-enhance-light-and-color-added/
(34:19) Final Cut Camera 1.1 Released: Log Recording in HEVC and More https://www.cined.com/final-cut-camera-1-1-released-log-recording-in-hevc-and-more/
(38:09) RØDE WIRELESS MICRO Released – Pocket-Sized Wireless Mic for Smartphones https://www.cined.com/rode-wireless-micro-released-pocket-sized-wireless-mic-for-smartphones/
(44:01) ZEISS Supreme Zoom Radiance Zoom Lens Trio Unveiled https://www.cined.com/zeiss-supreme-zoom-radiance-zoom-lens-trio-unveiled/

We hope you enjoyed this episode! Have feedback, comments, or suggestions? Write to us at podcast@cined.com

The Making Of
Cinematographer Nancy Schreiber, ASC on A Life Making Movies

Sep 23, 2024 · 52:51


In this episode, we welcome cinematographer Nancy Schreiber, ASC. Nancy has had a long career working in film, television, and documentaries. Some of her credits include Your Friends and Neighbors, Loverboy, The Nines, November, Path to War, Linda Ronstadt: The Sound of My Voice, Mapplethorpe, and Visions of Light, as well as TV shows such as "P-Valley," "Station 19," "The Family," "Blue," and "The Comeback." In our chat, Nancy talks about her upbringing in Michigan and her early days working in New York City, on through lensing countless feature films and TV shows. She also discusses her workflow, the technologies she uses, and other insights from a life making movies.

"The Making Of" is presented by AJA: Meet the AJA Ki Pro GO2. Easily record up to four channels of simultaneous HEVC or AVC to cost-efficient USB drives and/or network storage with flexible connectivity, including four 3G-SDI and four HDMI digital video inputs, to connect to a wide range of video sources. Learn more here.

OWC Atlas Ultra CFexpress Cards: Experience the unparalleled performance and reliability of Atlas Ultra CFexpress Type B 4.0 cards, purpose-built for professional filmmakers and photographers to capture flawlessly and offload files quickly in the most demanding scenarios. Check it out here.

From our Friends at Broadfield… All-new pricing for RED KOMODO and KOMODO-X unlocks exceptional cinema quality, global shutter performance, and the power of RED to filmmakers at every level. The KOMODO is a compact cinema camera featuring RED's unparalleled image quality, color science, and groundbreaking global shutter sensor technology in a shockingly small and versatile form factor. The KOMODO-X is the next evolution, with all-new sensor technology that multiplies frame rate and dynamic range performance within a new advanced platform. Inquire here.

Featured Event: Cine Gear Atlanta | October 4-5. Thousands of industry professionals will attend this year at Trilith Studios in Fayetteville, Georgia. A focal point of Southern filmmaking, Cine Gear 2023 drew thousands to the studio, which houses productions like Black Adam and Francis Ford Coppola's Megalopolis. Visitors met with equipment exhibitors from across the globe, attended panels and workshops from the International Cinematographer's Guild, the ASC, and numerous tech brands, and partied at the Friday night Southern Cine Soirée. Comp passes here.

Featured Book: Cronenberg on Cronenberg. Cronenberg on Cronenberg charts Cronenberg's development from maker of inexpensive 'exploitation' cinema to internationally renowned director of million-dollar movies, and reveals the concerns and obsessions which continue to dominate his increasingly rich and complex work. Available here.

Podcast Rewind: Sept 2024 - Ep. 48…

"The Making Of" is published by Michael Valinsky. Partner with us and feature your products to 85,000 film, TV, video, and broadcast professionals reading this newsletter. Email us at mvalinsky@me.com.

Get full access to The Making Of at themakingof.substack.com/subscribe

The Making Of
“The Americans” Cinematographer Richard Rutkowski, ASC on His Career, Inspiration, & More

Sep 20, 2024 · 84:42


In this episode, we welcome cinematographer Richard Rutkowski, ASC. Richard has lensed top television shows such as "Masters of the Air," "Sugar," "The Americans," "Tom Clancy's Jack Ryan," "Castle Rock," and "Manhattan," as well as films including "Interview with the Assassin." In our chat, Richard shares his backstory, film education, road to working in television, and approach to crafting various projects. He also provides invaluable advice and tips for emerging filmmakers today.

The Making Of is presented by AJA: Meet the AJA Ki Pro GO2. Easily record up to four channels of simultaneous HEVC or AVC to cost-efficient USB drives and/or network storage with flexible connectivity, including four 3G-SDI and four HDMI digital video inputs, to connect to a wide range of video sources. Find out more here.

From our Friends at Broadfield… All-new pricing for RED KOMODO and KOMODO-X unlocks exceptional cinema quality, global shutter performance, and the power of RED to filmmakers at every level. The KOMODO is a compact cinema camera featuring RED's unparalleled image quality, color science, and groundbreaking global shutter sensor technology in a shockingly small and versatile form factor. The KOMODO-X is the next evolution, with all-new sensor technology that multiplies frame rate and dynamic range performance within a new advanced platform. Inquire here.

Igelkott Studios offers world-class in-camera visual effects: Our expertise lies in delivering end-to-end ICVFX productions, which include 360 plate creation and directing the in-studio operation. We are proud to have been industry leaders in ICVFX, image development, and image workflows since 2018. Visit us here.

Featured Event: Cine Gear Atlanta | October 4-5. Thousands of industry professionals will attend this year at Trilith Studios in Fayetteville, Georgia. A focal point of Southern filmmaking, Cine Gear 2023 drew thousands to the studio, which houses productions like Black Adam and Francis Ford Coppola's Megalopolis. Visitors met with equipment exhibitors from across the globe, attended panels and workshops from the International Cinematographer's Guild, the ASC, and numerous tech brands, and partied at the Friday night Southern Cine Soirée. Get your passes here.

OWC Atlas Ultra CFexpress Cards: Experience the unparalleled performance and reliability of Atlas Ultra CFexpress Type B 4.0 cards, purpose-built for professional filmmakers and photographers to capture flawlessly and offload files quickly in the most demanding scenarios. Check it out here.

Featured Coffee: New York-based Devoción is a game changer as the only true Origin-to-Cup roaster, flying in their direct-trade, single-source beans via FedEx from Bogota to NYC weekly. With year-round harvests, Devoción ensures its coffee's supreme purity, freshness, and integrity. Taste the difference in a single sip. Click here to subscribe today.

Podcast Rewind: Sept 2024 - Ep. 47…

"The Making Of" is published by Michael Valinsky. Partner with us and promote your products to 85,000 film, TV, video, and broadcast professionals reading this newsletter. Simply email us at mvalinsky@me.com.

Get full access to The Making Of at themakingof.substack.com/subscribe

The Making Of
The New York Times' Jonah Kessel on Visual Journalism, 2024 Election Coverage, & More

Sep 16, 2024 · 52:51


In this episode, we welcome Jonah Kessel. Jonah is the Deputy Director of Opinion Video at The New York Times. His work there is a hybrid of explanatory and investigative short-form documentary and other innovative forms of visual journalism. In his career, he's been recognized by a variety of organizations, including two World Press Photo awards, four Multimedia Journalist of the Year awards from Pictures of the Year International, the Robert F. Kennedy Award for Justice and Human Rights Reporting, and the Innovative Storytelling Award from the National Press Foundation. In our chat, Jonah shares his backstory, his path to The New York Times, and his experiences helping run the Opinion Video department. In addition, he talks at length about covering the 2024 U.S. Presidential Election.

The Making Of is presented by AJA: Meet the AJA Ki Pro GO2. Easily record up to four channels of simultaneous HEVC or AVC to cost-efficient USB drives and/or network storage with flexible connectivity, including four 3G-SDI and four HDMI digital video inputs, to connect to a wide range of video sources. Find out more here.

ZEISS Cinema & The Making Of present: A Conversation with Lawrence Sher, ASC. ZEISS Cinema is pleased to host a live interview with Lawrence Sher, ASC. Join Michael Valinsky from the podcast The Making Of as he discusses Lawrence's work on the upcoming feature JOKER: FOLIE à DEUX, as well as his past films and the indispensable filmmaking website ShotDeck. The ZEISS team will be on hand with our lenses and camera technologies as well! Join us for bites, beer, wine, and a conversation not to be missed! Register for free here.

From our Friends at Broadfield… All-new pricing for RED KOMODO and KOMODO-X unlocks exceptional cinema quality, global shutter performance, and the power of RED to filmmakers at every level. The KOMODO is a compact cinema camera featuring RED's unparalleled image quality, color science, and groundbreaking global shutter sensor technology in a shockingly small and versatile form factor. The KOMODO-X is the next evolution, with all-new sensor technology that multiplies frame rate and dynamic range performance within a new advanced platform. Inquire here.

Upcoming Event: Cine Gear Atlanta | October 4-5. Thousands of industry professionals will attend this year at Trilith Studios in Fayetteville, Georgia. A focal point of Southern filmmaking, Cine Gear 2023 drew thousands to the studio, which houses productions like Black Adam and Francis Ford Coppola's Megalopolis. Visitors met with equipment exhibitors from across the globe, attended panels and workshops from the International Cinematographer's Guild, the ASC, and numerous tech brands, and partied at the Friday night Southern Cine Soirée. Get your passes here.

OWC Atlas Ultra CFexpress Cards: Experience the unparalleled performance and reliability of Atlas Ultra CFexpress Type B 4.0 cards, purpose-built for professional filmmakers and photographers to capture flawlessly and offload files quickly in the most demanding scenarios. Check it out here.

Podcast Rewind: Sept 2024 - Ep. 46…

"The Making Of" is published by Michael Valinsky. Partner with us and promote your products to 82,000 film, TV, video, and broadcast professionals reading this newsletter. Simply email us at mvalinsky@me.com.

Get full access to The Making Of at themakingof.substack.com/subscribe

The Making Of
Andy Hutton on Anton/Bauer Power Solutions, Product Innovation, & More

Sep 12, 2024 · 33:46


In this episode, we welcome Andy Hutton. Andy is the Head of Product Management for Batteries at Videndum Production Solutions. His role at Anton/Bauer is to help innovate and oversee the legendary brand, used daily by professionals in film, TV, broadcast, sports production, and beyond. In our chat, Andy shares his backstory from England, his education and career arc, and all about his role running product at Anton/Bauer. He also provides best practices for keeping your power solutions in optimal shape, and other insights from his world of supporting the industry.

The Making Of is presented by AJA: Meet the AJA Ki Pro GO2. Easily record up to four channels of simultaneous HEVC or AVC to cost-efficient USB drives and/or network storage with flexible connectivity, including four 3G-SDI and four HDMI digital video inputs, to connect to a wide range of video sources. Find out more here.

From our Friends at Broadfield… All-new pricing for RED KOMODO and KOMODO-X unlocks exceptional cinema quality, global shutter performance, and the power of RED to filmmakers at every level. The KOMODO is a compact cinema camera featuring RED's unparalleled image quality, color science, and groundbreaking global shutter sensor technology in a shockingly small and versatile form factor. The KOMODO-X is the next evolution, with all-new sensor technology that multiplies frame rate and dynamic range performance within a new advanced platform. Inquire here.

OWC Atlas Ultra CFexpress Cards: Experience the unparalleled performance and reliability of Atlas Ultra CFexpress Type B 4.0 cards, purpose-built for professional filmmakers and photographers to capture flawlessly and offload files quickly in the most demanding scenarios. Check it out here.

Featured Coffee: New York-based Devoción is a game changer as the only true Origin-to-Cup roaster, flying in their direct-trade, single-source beans via FedEx from Bogota to NYC weekly. With year-round harvests, Devoción ensures its coffee's supreme purity, freshness, and integrity. Taste the difference in a single sip. Click here to subscribe today.

Talking Cinematography with Documentarian Jennifer Cox: Jennifer Cox is a director of photography, documentarian, and owner of Moto Films LLC, based in New York. Cox procured one of the first sets of ZEISS Nano Prime lenses and used them on three diverse documentary projects. She tested their unique traits across a Beatles Fan Fest feature film shoot, a short-form promotion for the non-profit Free Arts NYC, and as part of the 2024 Courage Awards from PEN America.

Podcast Rewind: Sept 2024 - Ep. 45…

"The Making Of" is published by Michael Valinsky. Partner with us and promote your products to 78,000 film, TV, video, and broadcast professionals reading this newsletter. Simply email us at mvalinsky@me.com.

Get full access to The Making Of at themakingof.substack.com/subscribe

The Making Of
Tony Denison on his Acting Career, Process & Perspective

Sep 9, 2024 · 58:37


In this episode, we welcome Tony Denison. Tony is a veteran actor with starring roles in Michael Mann's "Crime Story," "The Closer," and "Major Crimes," and in films such as City of Hope, The Amy Fisher Story, and Getting Gotti. He's also played characters in "Wiseguy," "Melrose Place," "JAG," "Prison Break," "Sons of Anarchy," and "Criminal Minds." In our chat, Tony shares his backstory, his early theater work in New York City, and how he landed his seminal role in "Crime Story." From there, he walks us through other key parts and offers invaluable advice for creatives coming up in the business today.

The Making Of is presented by AJA: Meet the AJA Ki Pro GO2. Easily record up to four channels of simultaneous HEVC or AVC to cost-efficient USB drives and/or network storage with flexible connectivity, including four 3G-SDI and four HDMI digital video inputs, to connect to a wide range of video sources. Find out more here.

OWC Atlas Ultra CFexpress Cards: Experience the unparalleled performance and reliability of Atlas Ultra CFexpress Type B 4.0 cards, purpose-built for professional filmmakers and photographers to capture flawlessly and offload files quickly in the most demanding scenarios. Check it out here.

Featured Film Book: Kiss Me Quick Before I Shoot. This film memoir is all about the magic of filmmaking and forging a cinematic personal life in Hollywood. It's full of invaluable experiences and unique industry stories from a celebrated film/TV career, and should be required reading for every film lover. It includes stories of a producer turning out to be the assassin of the Mafia boss who allowed The Godfather to film in NY, shooting the pyramids of Egypt for Battlestar Galactica, directing a grumbling Mr. T on The A-Team, almost decapitating a young Drew Barrymore right after ET, and unwittingly almost delaying James Cameron's career! "Finally, a book for all who love the movies written by a filmmaker who has walked the walk in TV and film. A very entertaining journey of fascinating industry stories providing a true look behind the curtain of filmmaking." — Joe Alves, Production Designer - JAWS, British Academy Award for CLOSE ENCOUNTERS OF THE THIRD KIND. Pick up a copy at KISS ME QUICK BEFORE I SHOOT, Kindle Ebook, Amazon.

From our Friends at Broadfield… All-new pricing for RED KOMODO and KOMODO-X unlocks exceptional cinema quality, global shutter performance, and the power of RED to filmmakers at every level. The KOMODO is a compact cinema camera featuring RED's unparalleled image quality, color science, and groundbreaking global shutter sensor technology in a shockingly small and versatile form factor. The KOMODO-X is the next evolution, with all-new sensor technology that multiplies frame rate and dynamic range performance within a new advanced platform. Inquire here.

Honoring a Pioneer: Nancy Schreiber, ASC Receives the Trailblazer Award. The Manaki Brothers Film Festival introduces the Trailblazer Award, a prestigious honor celebrating individuals who have not only excelled in their craft but have also forged new paths for future generations. The inaugural Trailblazer Award will be presented to the distinguished cinematographer Nancy Schreiber, ASC. Schreiber's illustrious career spans decades, contributing to some of the most visually compelling projects, particularly in the independent filmmaking sector. As one of the first women to join the American Society of Cinematographers (ASC), Schreiber has been a driving force in challenging industry norms and advocating for greater inclusion. More about the festival here.

Talking Cinematography with Documentarian Jennifer Cox: Jennifer Cox is a director of photography, documentarian, and owner of Moto Films LLC, based in New York. Cox procured one of the first sets of ZEISS Nano Prime lenses and used them on three diverse documentary projects. She tested their unique traits across a Beatles Fan Fest feature film shoot, a short-form promotion for the non-profit Free Arts NYC, and as part of the 2024 Courage Awards from PEN America.

Featured Coffee: New York-based Devoción is a game changer as the only true Origin-to-Cup roaster, flying in their direct-trade, single-source beans via FedEx from Bogota to NYC weekly. With year-round harvests, Devoción ensures its coffee's supreme purity, freshness, and integrity. Taste the difference in a single sip. Click here to subscribe today.

Podcast Rewind: August 2024 - Ep. 44…

"The Making Of" is published by Michael Valinsky. Partner with us and promote your products to 78,000 film, TV, video, and broadcast pros reading this newsletter. Simply email us at mvalinsky@me.com.

Get full access to The Making Of at themakingof.substack.com/subscribe

The Making Of
Angenieux's Tim Smith on Cinema Lenses, The Industry, & More

Sep 3, 2024 · 41:20


In this episode, we welcome Tim Smith. Tim is Executive Director for the Americas at Angenieux. In our chat, Tim talks about his upbringing in Maine, his early interest in photography, and stories from his tenure at Canon, where he witnessed the "DSLR Revolution" up close. He also talks about his current role at Angenieux and the company's high-end lenses for cinematographers. In addition, Tim provides insights about the state of the industry and where things may be headed in 2025.

The Making Of is presented by AJA: Meet the AJA Ki Pro GO2. Easily record up to four channels of simultaneous HEVC or AVC to cost-efficient USB drives and/or network storage with flexible connectivity, including four 3G-SDI and four HDMI digital video inputs, to connect to a wide range of video sources. Find out more here.

From our Friends at Broadfield… All-new pricing for RED KOMODO and KOMODO-X unlocks exceptional cinema quality, global shutter performance, and the power of RED to filmmakers at every level. The KOMODO is a compact cinema camera featuring RED's unparalleled image quality, color science, and groundbreaking global shutter sensor technology in a shockingly small and versatile form factor. The KOMODO-X is the next evolution, with all-new sensor technology that multiplies frame rate and dynamic range performance within a new advanced platform. Inquire here.

Featured Film Book: Kiss Me Quick Before I Shoot. This film memoir is all about the magic of filmmaking and forging a cinematic personal life in Hollywood. It's full of invaluable experiences and unique industry stories from a celebrated film/TV career, and should be required reading for every film lover. It includes stories of a producer turning out to be the assassin of the Mafia boss who allowed The Godfather to film in NY, shooting the pyramids of Egypt for Battlestar Galactica, directing a grumbling Mr. T on The A-Team, almost decapitating a young Drew Barrymore right after ET, and unwittingly almost delaying James Cameron's career! "Finally, a book for all who love the movies written by a filmmaker who has walked the walk in TV and film. A very entertaining journey of fascinating industry stories providing a true look behind the curtain of filmmaking." —Joe Alves, Production Designer - JAWS, British Academy Award for CLOSE ENCOUNTERS OF THE THIRD KIND. Pick up a copy at KISS ME QUICK BEFORE I SHOOT, Kindle Ebook, Amazon.

Meet Igelkott Studios: Our expertise lies in delivering end-to-end ICVFX productions, which include 360 plate creation and directing the in-studio operation. We are proud to have been industry leaders in ICVFX, image development, and image workflows since 2018. Learn more here.

Introducing Something New… OWC Atlas Ultra CFexpress Cards: Experience the unparalleled performance and reliability of Atlas Ultra CFexpress Type B 4.0 cards, purpose-built for professional filmmakers and photographers to capture flawlessly and offload files quickly in the most demanding scenarios. Check them out here.

Talking Cinematography with Documentarian Jennifer Cox: Jennifer Cox is a director of photography, documentarian, and owner of Moto Films LLC, based in New York. Cox procured one of the first sets of ZEISS Nano Prime lenses and used them on three diverse documentary projects. She tested their unique traits across a Beatles Fan Fest feature film shoot, a short-form promotion for the non-profit Free Arts NYC, and as part of the 2024 Courage Awards from PEN America.

Podcast Rewind: August 2024 - Ep. 43…

The Making Of is published by Michael Valinsky. To showcase your products to 75,000 filmmakers and industry pros reading this newsletter, please email us at mvalinsky@me.com.

Get full access to The Making Of at themakingof.substack.com/subscribe

The Making Of
Larry O'Connor on OWC's History, Solutions and Future

Aug 28, 2024 · 48:14


In this episode, we welcome Larry O'Connor. Larry is the founder and CEO of OWC, the industry leader in storage solutions. We also welcome Jon Hoeg, OWC's Director of Marketing Communications. In our chat, we hear Larry's backstory, how he built his company from the start, and its evolution over the last thirty-six years. In addition, Larry and Jon share about their solutions for production and post pros, offer best practices for storing and archiving your assets, and provide insights on A.I.

The Making Of is presented by AJA: Meet the AJA Ki Pro GO2. Easily record up to four channels of simultaneous HEVC or AVC to cost-efficient USB drives and/or network storage with flexible connectivity, including four 3G-SDI and four HDMI digital video inputs, to connect to a wide range of video sources. Find out more here.

Talking Cinematography with Documentarian Jennifer Cox: Jennifer Cox is a director of photography, documentarian, and owner of Moto Films LLC, based in New York. Cox procured one of the first sets of ZEISS Nano Prime lenses and used them on three diverse documentary projects. She tested their unique traits across a Beatles Fan Fest feature film shoot, a short-form promotion for the non-profit Free Arts NYC, and as part of the 2024 Courage Awards from PEN America.

From our Friends at Broadfield… V-RAPTOR® [X] 8K VV combines the strengths of RED's two families of cameras into one powerful all-purpose workhorse. Combining the frame rates, lowlight performance, and resolution of the V-RAPTOR® line with the global shutter advancements of KOMODO®, the V-RAPTOR [X] 8K VV sensor is the culmination of the latest advancements in digital cinema image making. Using RED's newest 8K VV sensor, V-RAPTOR [X] leverages the benefits and flexibility of large format, global shutter, high frame rate, 8K acquisition, all inside of a compact and feature-rich body weighing just over 4 lbs. Read more here.

Featured Book: Images: My Life in Film. In this new edition, Ingmar Bergman presents an intimate view of his own unique body of work in film. His career spanned forty years and produced more than fifty films, many of which are considered classics: The Seventh Seal, The Virgin Spring, Persona, Smiles of a Summer Night, Wild Strawberries, and Fanny and Alexander, to name but a few. When he began this book, Bergman had not seen most of his movies since he made them. Resorting to scripts and working notebooks, and especially to memory, he comments brilliantly and always cogently on his failures as well as his successes; on the themes that bind his work together; on his concerns, anxieties, and moments of happiness; and on the relationship between his life and art. Available here.

OWC Atlas Ultra CFexpress Cards: Experience the unparalleled performance and reliability of Atlas Ultra CFexpress Type B 4.0 cards, purpose-built for professional filmmakers and photographers to capture flawlessly and offload files quickly in the most demanding scenarios. Learn more here.

Podcast Rewind: August 2024 - Ep. 42…

The Making Of is published by Michael Valinsky. To promote your products to 70,000 filmmakers and industry pros reading this newsletter, please email us at mvalinsky@me.com.

Get full access to The Making Of at themakingof.substack.com/subscribe

The Making Of
Ed Begley Jr. on A Life and Career in Hollywood

The Making Of

Aug 15, 2024 56:58


In this episode, we welcome Ed Begley Jr. Ed is a legendary actor with roles in films including This Is Spinal Tap, A Mighty Wind, Best in Show, Pineapple Express, Whatever Works, Recount, For Your Consideration, The Accidental Tourist, Paul Schrader's Auto Focus, Blue Collar, and Cat People, as well as shows such as “Better Call Saul,” “Curb Your Enthusiasm,” “Arrested Development,” “Portlandia,” “Six Feet Under,” “The Larry Sanders Show,” “Battlestar Galactica,” and “St. Elsewhere.” In our chat, Ed shares his background, his experience as a camera assistant, and his path from early roles through working on today's top films and TV shows. He also describes how he prepares for each project and offers priceless advice for storytellers today.

Talking Cinematography with Jack Schurman:
Emmy Award-winning cinematographer Jack Schurman sat down with ZEISS Cinema to talk about using the new ZEISS Nano Primes for his upcoming short, We Regret to Inform You.
Watch the conversation here

Featured Book: To the Temple of Tranquility... and Step on It!: A Memoir
Beloved actor and environmental activist Ed Begley Jr. shares hilarious and poignant stories of his improbable life, focusing on his relationship with his legendary father, adventures with Hollywood icons, the origins of his environmental activism, addiction and recovery, and his lifelong search for wisdom and common ground. Ed Begley Jr. is truly one of a kind, a performer known equally for his prolific film and television career and his environmental activism. From an appearance on My Three Sons to a notable role in Mary Hartman, Mary Hartman to starring in St. Elsewhere, as well as films with Jack Nicholson, Meryl Streep, and mockumentarian Christopher Guest, Begley has worked with just about everyone in Hollywood. His "green" bona fides date back to 1970 and have been the topic of two books, a reality show, countless media appearances, and even repeated spoofs on The Simpsons (in one episode, Begley's solar-powered car stalls on train tracks but is saved when the train is revealed to be an "Ed Begley Solar-Powered Train"). Begley's unmistakable voice is honest and revealing in a way that only a comic of his caliber can accomplish. Behind all the stories, Begley has wisdom to impart. This is a book about family, friends, addiction, failure, and redemption.
Pick up a copy here

Upcoming Event:
Celebrate the top talent in entertainment at the HPA Awards, hosted at the Television Academy's Wolf Theatre on November 7, 2024. Since 2006, the HPA Awards have set the standard for creative achievement, exceptional artistry, and engineering excellence in an industry that continues to embrace groundbreaking technologies and expanding creativity.
Tickets now available here

Podcast Rewind:
August 2024 - Ep. 41…

The Making Of
Director Charlotte Brändström on "The Lord of the Rings: The Rings of Power"

The Making Of

Aug 12, 2024 49:33


In this episode, we welcome Charlotte Brändström. Charlotte is an award-winning director working on today's top shows, including “The Lord of the Rings: The Rings of Power,” “Shōgun,” “The Consultant,” “The Outsider,” “The Witcher,” “The Man in the High Castle,” “Madam Secretary,” “Conspiracy of Silence,” and “Outlander.” In our chat, Charlotte shares about her early days in Europe and her path into the industry, on through directing many of the biggest shows streaming today. She also speaks about her creative workflow and collaborating with cinematographers and editors, and offers recommendations for up-and-coming filmmakers.

Filmmakers Call for Change:
Join the dozens of global organizations urging Camerimage to increase its support for women by signing the petition below. These include: Women Behind the Camera, IMAGO Diversity & Inclusion Committee, illuminatrix, fDOP, WIFT-tech, CINEMATOGRAPINNEN, Crew United, Apertura, Primetime, Indian Women of Cinematography, Directoras de Fotografía, DAFB, Lumbre Colectiva, Women and dissidents of Chilean cinematography, Women in Media, and more.
Read more here

Support a Friend of The Making Of…
Unfortunately, our friend Mark Foley has been diagnosed with cancer. He has started a treatment plan that includes both chemotherapy and radiation, and he is facing medical and day-to-day expenses. It would be incredibly helpful if you could support Mark in this battle. Anything you can do is most appreciated.
Please visit here

Podcast Rewind:
August 2024 - Ep. 40…

The Making Of
Eric Hasso on Igelkott Studios, In-Camera VFX, & More

The Making Of

Aug 6, 2024 34:24


In this episode, we welcome Eric Hasso. Eric is the founder of Igelkott Studios, a world-class in-camera visual FX company whose clients include studios such as Netflix, Warner Bros., Amazon Prime, MAX, and Sony Pictures. In our chat, Eric shares about his early days in Sweden, launching Igelkott Studios, and his experience working on shows such as “The Playlist.” Eric also provides insights on the art and science of in-camera visual FX.

Recommended Film Book: A History of Narrative Film
Sophisticated in its analytical content, current in its coverage, and informed throughout by fascinating historical and cultural contexts, A History of Narrative Film is one of the most respected and widely read texts in film studies. This fifth edition features a new chapter on twenty-first-century film and includes refreshed coverage of contemporary digital production, distribution, and consumption of film.
Buy here
Hat Tip to Jay Holben

Podcast Rewind:
July 2024 - Ep. 39…

The Making Of
Neil Matsumoto on Filmmaking Technologies, Cinematography, & More

The Making Of

Jul 25, 2024 57:07


In this episode, we welcome Neil Matsumoto. Neil has a long history in the industry and has worked at Panasonic LUMIX for many years. In our conversation, he shares about his roots, his early independent filmmaking experiences, his path into the industry, and his current role at LUMIX. Neil also offers insights and recommendations for filmmakers, cinematographers, and creatives today.

ZEISS Conversations with Jack Schurman
Join us live on August 1st as Matt Duclos of Duclos Lenses interviews cinematographer Jack Schurman in our showroom. Jack will discuss his most recent work, We Regret to Inform You, shot for Sony on the new ZEISS Nano lenses. This event will include live lens demos and a Q&A about the creative and technical aspects of the lenses. You'll definitely want to join us for this one!
Register for free here

Podcast Rewind:
July 2024 - Ep. 38…

The Making Of
"Bridgerton" Cinematographer Alicia Robbins on Her Career and Crafting The Look for a New Season

The Making Of

Jul 10, 2024 54:57


In this episode, we welcome cinematographer Alicia Robbins. Alicia has worked on top television shows such as “Bridgerton” Season 3, “Grey's Anatomy,” “Quantum Leap,” and “For The People,” as well as feature films including Creed II and Dawn of the Planet of the Apes. In our chat, she shares about her early years, her path to studying at AFI, and her experiences working on low-budget projects, on through shooting one of Netflix's most popular titles of all time. Alicia also discusses her favorite films, what keeps her inspired, and other insights from a life on set.

From our Friends at Videoguys…
Ninja your iPhone 15 Pro or Pro Max into a 1600-nit, 10-bit, 2,000,000:1 contrast ratio, 460-ppi, HDR OLED, ProRes monitor-recorder for any pro HDMI camera. Attach the Ninja Phone to your iPhone 15 Pro or Pro Max, plug in an HDMI-equipped camera, and you've got the best display on the planet with fast, low-latency connectivity.
Take a look here

Explore ZEISS' Nano Prime Lenses
ZEISS Nano Primes are the first high-speed (T1.5 throughout) cine lenses made specifically for mirrorless full-frame cameras, initially available with Sony E-mount. These primes offer a pleasing, versatile look that is adaptable to an extensive range of shooting situations, and a compact, lightweight design makes them easy to use on any set or location. Available in six focal lengths (18mm, 24mm, 35mm, 50mm, 75mm, 100mm), this matched set conveniently covers wide-angle to telephoto.
Learn more here

Featured Book: Rebel Without a Crew
In Rebel Without a Crew, screenwriter and director Robert Rodriguez discloses all the unique strategies and original techniques he used to make his remarkable debut film, El Mariachi, on a shoestring budget. This is both one man's remarkable story and an essential guide for anyone who has a celluloid story to tell and the dreams and determination to see it through.
Get a copy here

Podcast Rewind:
June 2024 - Ep. 37…

The Making Of
"Shōgun" Cinematographer Christopher Ross, BSC On His Path and Creating The Look of a Show

The Making Of

Jun 29, 2024 47:53


In this episode, we welcome cinematographer Christopher Ross, BSC. Chris has worked on critically acclaimed series including “Shōgun,” “Top Boy,” and “Trust,” as well as films such as Yesterday, The Great Escaper, Eden Lake, and Room. In our chat, we hear his backstory, how he started in the industry, and about his process for prepping and shooting various projects. Chris also takes us behind the scenes of “Shōgun,” sharing the techniques and technologies used to create this epic show.

“I of The Lens” Photo Exhibit at Euro Cine Expo 2024 in Munich
A unique exhibit showcasing the external and internal expression of a cinematographer. IMAGO Camera, the world's only analogue, large-format camera designed for life-sized self-portraits, captivated audiences with an extraordinary exhibition featuring stunning black-and-white portraits of cinematographers from across the globe. This collection, curated by producer Vika Safrigina and Susanna Kraus, the visionary artist behind IMAGO Camera, is on display at the Euro Cine Expo in Munich, June 27-29. The IMAGO Camera is a true interactive objet d'art that transcends traditional photography by allowing cinematographers to step into the spotlight and become authors of their own images. This unique walk-in camera was invented by the physicist Werner Kraus and artist Erhard Hoesle in Munich in 1972, twenty years before the IMAGO federation was founded. As photographer and subject converge, each sitter becomes the artist behind the lens, creating captivating self-portraits that reveal a unique perspective of themselves. In partnership with the IMAGO (International Federation of Cinematographers) Diversity and Inclusion Committee, SUMOLIGHT lighting solutions, and Leitz Cine, the exhibition celebrates the diversity of filmmakers who breathe life into the grand canvas of the movie screen.
Learn more here

Explore ZEISS' Nano Prime Lenses
Thanks to the integrated electronic interface, metadata such as focal length, focus distance, and aperture value are transmitted to the camera in real time. Additional lens data for distortion and vignetting is available in the ZEISS CinCraft ecosystem, and thus for post-production (CinCraft Mapper) as well as in the recently introduced CinCraft Scenario camera tracking system. Adding to their versatility, Nano Primes are ready for simple exchange of additional mounts thanks to the proven ZEISS IMS (Interchangeable Mount System).
Learn more here

Tips from theC47:
Beach Read: The JAWS Log
Winner of three Oscars and the highest-grossing film of its time, Jaws was a phenomenon, and this is the only book on how twenty-six-year-old Steven Spielberg transformed Peter Benchley's number-one bestselling novel into the classic film it became. Hired by Spielberg as a screenwriter to work with him on set while the movie was being made, Carl Gottlieb, an actor and writer, was there throughout the production that starred Roy Scheider, Robert Shaw, and Richard Dreyfuss. After filming was over, with Spielberg's cooperation, Gottlieb chronicled the extraordinary yearlong adventure in The Jaws Log, which was first published in 1975 and has sold more than two million copies. This expanded edition includes a photo section, an introduction by Benchley, and an afterword by Gottlieb that gives updates on the people and events involved in the film, ultimately providing a singular portrait of a famous movie and inspired moviemaking.
Get yours here

Podcast Rewind:
June 2024 - Ep. 36…

MacMost - Mac, iPhone and iPad How-To Videos
What Are HEIC Files? (MacMost #3117)

MacMost - Mac, iPhone and iPad How-To Videos

Apr 5, 2024


View in HD at . By default, your iPhone saves photos as HEIC files. These are compressed images, like JPEG files, but use less space on your iPhone, Mac, and iCloud. You can easily convert them to JPEG files when you need to. Your iPhone also saves video files in HEVC format.
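The conversion mentioned in the episode can also be scripted. Below is a minimal sketch that wraps the `sips` image tool built into macOS; the file names are illustrative only, and the helper that builds the command is a hypothetical name, not part of any Apple API.

```python
# Build and optionally run a `sips` command that converts a HEIC photo to JPEG.
# `sips` ships with every copy of macOS; no extra software is needed.
import subprocess
from pathlib import Path

def heic_to_jpeg_cmd(src: str, dest_dir: str = ".") -> list[str]:
    """Return the sips invocation that converts one HEIC file to a JPEG."""
    out = Path(dest_dir) / (Path(src).stem + ".jpg")
    return ["sips", "-s", "format", "jpeg", src, "--out", str(out)]

def convert(src: str, dest_dir: str = ".") -> None:
    """Run the conversion (macOS only)."""
    subprocess.run(heic_to_jpeg_cmd(src, dest_dir), check=True)

if __name__ == "__main__":
    # Show the command that would run for an example file.
    print(heic_to_jpeg_cmd("IMG_0001.HEIC", "/tmp"))
```

To convert a whole folder, you could loop over `Path(folder).glob("*.HEIC")` and call `convert` on each file.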

Voces de Ferrol - RadioVoz
Two days until the shutdown of TV channels that don't broadcast in HD (high definition). What do you need to do?

Voces de Ferrol - RadioVoz

Feb 12, 2024 14:46


In two days, on February 14, SD broadcasts (with maximum resolutions of 576p) will disappear, and all channels must broadcast at 720p or above (known commercially as HD Ready). What equipment do we need for the new HD channels? Really none, as long as our equipment is recent. That said, our devices must meet a few requirements: a television compatible with HD resolutions, 720p or higher, and a decoder compatible with DVB-T2, with HD resolutions, and with the H.265/HEVC codec. Once these requirements are met, it will be as simple as rescanning the channels from that day on, and we will start seeing the traditional channels in higher quality. In fact, the jump may be even bigger: TVE's La1 will begin broadcasting in 4K/UHD in the coming days. In that case, our television and decoder must also support 2,160p resolution.
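The episode's receiver checklist boils down to three conditions, which can be encoded directly. A minimal sketch; the function and field names are illustrative, not from any real API:

```python
# The article's requirements for receiving the new HD channels:
# a DVB-T2-capable tuner, the H.265/HEVC codec, and at least 720p output.
REQUIRED_TUNER = "DVB-T2"
REQUIRED_CODEC = "H.265/HEVC"
MIN_VERTICAL_RES = 720  # "HD Ready"

def can_receive_hd(tuners: set[str], codecs: set[str], max_vertical_res: int) -> bool:
    """True if a TV/decoder combination meets all three requirements."""
    return (
        REQUIRED_TUNER in tuners
        and REQUIRED_CODEC in codecs
        and max_vertical_res >= MIN_VERTICAL_RES
    )

# A recent set-top box is fine; an old SD-only decoder needs replacing.
print(can_receive_hd({"DVB-T", "DVB-T2"}, {"H.264/AVC", "H.265/HEVC"}, 2160))  # True
print(can_receive_hd({"DVB-T"}, {"MPEG-2"}, 576))                              # False
```

Passing `2160` for the resolution also covers the 4K/UHD case mentioned for La1.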

iOS Today (Video HI)
iOS 675: iPhone Camera Tips & Tricks - Apple ProRAW, Apple ProRes, Photographic Styles, HEIF/HEVC

iOS Today (Video HI)

Oct 17, 2023 107:51


As iPhone cameras continue to get more powerful, the settings and UI for the Camera app get more complicated. Rosemary Orchard and Mikah Sargent walk you through every Camera app setting, button, dial, and interface feature, giving you loads of photography tips and tricks along the way. News Apple introduces new Apple Pencil, bringing more value and choice to the lineup Apple developing 'pad-like device' that can update iPhone firmware while still sealed in the box Shortcuts Corner Jake wants to create a Shortcut that turns off an LED light strip when charging an iPhone after 10 p.m. Feedback & Questions Mike wants the iPhone to cease auto-unlocking when using AirPods to take calls. Ken needs help locking in camera orientation whilst taking top-down photos and help in keeping the iPhone from auto-switching camera lenses during photo capture. App Caps Rosemary's App Cap: Hush Nag Blocker Mikah's App Cap: Permission Slip Hosts: Mikah Sargent and Rosemary Orchard Download or subscribe to this show at https://twit.tv/shows/ios-today. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit You can contribute to iOS Today by leaving us a voicemail at 757-504-iPad (757-504-4723) or sending an email to iOSToday@TWiT.tv. Sponsor: mylio.com/TWIT

iOS Today (MP3)
iOS 675: iPhone Camera Tips & Tricks - Apple ProRAW, Apple ProRes, Photographic Styles, HEIF/HEVC

iOS Today (MP3)

Oct 17, 2023 107:51


As iPhone cameras continue to get more powerful, the settings and UI for the Camera app get more complicated. Rosemary Orchard and Mikah Sargent walk you through every Camera app setting, button, dial, and interface feature, giving you loads of photography tips and tricks along the way. News Apple introduces new Apple Pencil, bringing more value and choice to the lineup Apple developing 'pad-like device' that can update iPhone firmware while still sealed in the box Shortcuts Corner Jake wants to create a Shortcut that turns off an LED light strip when charging an iPhone after 10 p.m. Feedback & Questions Mike wants the iPhone to cease auto-unlocking when using AirPods to take calls. Ken needs help locking in camera orientation whilst taking top-down photos and help in keeping the iPhone from auto-switching camera lenses during photo capture. App Caps Rosemary's App Cap: Hush Nag Blocker Mikah's App Cap: Permission Slip Hosts: Mikah Sargent and Rosemary Orchard Download or subscribe to this show at https://twit.tv/shows/ios-today. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit You can contribute to iOS Today by leaving us a voicemail at 757-504-iPad (757-504-4723) or sending an email to iOSToday@TWiT.tv. Sponsor: mylio.com/TWIT

iOS Today (Video)
iOS 675: iPhone Camera Tips & Tricks - Apple ProRAW, Apple ProRes, Photographic Styles, HEIF/HEVC

iOS Today (Video)

Oct 17, 2023 107:51


As iPhone cameras continue to get more powerful, the settings and UI for the Camera app get more complicated. Rosemary Orchard and Mikah Sargent walk you through every Camera app setting, button, dial, and interface feature, giving you loads of photography tips and tricks along the way. News Apple introduces new Apple Pencil, bringing more value and choice to the lineup Apple developing 'pad-like device' that can update iPhone firmware while still sealed in the box Shortcuts Corner Jake wants to create a Shortcut that turns off an LED light strip when charging an iPhone after 10 p.m. Feedback & Questions Mike wants the iPhone to cease auto-unlocking when using AirPods to take calls. Ken needs help locking in camera orientation whilst taking top-down photos and help in keeping the iPhone from auto-switching camera lenses during photo capture. App Caps Rosemary's App Cap: Hush Nag Blocker Mikah's App Cap: Permission Slip Hosts: Mikah Sargent and Rosemary Orchard Download or subscribe to this show at https://twit.tv/shows/ios-today. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit You can contribute to iOS Today by leaving us a voicemail at 757-504-iPad (757-504-4723) or sending an email to iOSToday@TWiT.tv. Sponsor: mylio.com/TWIT

The Dan Rayburn Podcast
Episode 71: No Viewership Transparency Included in The WGA Deal Terms; Court Rules Netflix Infringed on Broadcom's HEVC Patent

The Dan Rayburn Podcast

Oct 3, 2023 32:38


This week, we detail the language of the new terms announced by the WGA in their deal with the studios (AMPTP) that came out of the writers' strike. The new deal provides no real transparency into viewership numbers beyond total hours streamed, and only for self-produced, high-budget streaming programs. We also discuss Amazon's news that, starting in early 2024, Prime Video shows and movies will include limited advertisements. Finally, we provide some details on the recent ruling from a German court that Netflix is infringing on Broadcom's video patent related to HEVC.

nScreenMedia
nScreenNoise – Impact of ATSC 3.0 and HEVC patent disputes

nScreenMedia

Sep 28, 2023 16:31


ATSC 3.0 and HEVC patent disputes could have big impacts on Netflix and NextGen TV. NextGen TV could see adoption slow, and Netflix could see streaming costs grow.

TWiT Bits (MP3)
iOS Clip: iOS Camera Settings You Should Know!

TWiT Bits (MP3)

Sep 6, 2023 10:12


On iOS Today, Mikah Sargent explores Apple's Camera app and how to use it to its fullest potential. Mikah and co-host Dan Moren go over choosing the best frame rate and format, in addition to the Scene Detection and Smart HDR features. Finally, they discuss how to zoom and capture photos/videos. Full episode at https://twit.tv/shows/ios-today/episodes/669 Hosts: Mikah Sargent and Dan Moren You can find more about TWiT and subscribe to our podcasts at https://podcasts.twit.tv/ Sponsor: GO.ACILEARNING.COM/TWIT

TWiT Bits (Video HD)
iOS Clip: iOS Camera Settings You Should Know!

TWiT Bits (Video HD)

Sep 6, 2023 10:12


On iOS Today, Mikah Sargent explores Apple's Camera app and how to use it to its fullest potential. Mikah and co-host Dan Moren go over choosing the best frame rate and format, in addition to the Scene Detection and Smart HDR features. Finally, they discuss how to zoom and capture photos/videos. Full episode at https://twit.tv/shows/ios-today/episodes/669 Hosts: Mikah Sargent and Dan Moren You can find more about TWiT and subscribe to our podcasts at https://podcasts.twit.tv/ Sponsor: GO.ACILEARNING.COM/TWIT

Only In Theaters
OinT 09 - The.Pope's.Exorcist.2023.1080p.10bit.WEBRip.6CH.x265.HEVC-PSA.mp3

Only In Theaters

Play Episode Listen Later May 13, 2023 106:24


greggy, "casey" and someone who isn't houston follow Gabriele Amorth, the Vatican's leading exorcist, as he investigates the possession of a child and uncovers a conspiracy the Vatican has tried to keep secret

iWeek (la semaine Apple)
The (real) power of the Mac mini M2 Pro revealed

iWeek (la semaine Apple)

Play Episode Listen Later Feb 8, 2023 89:12


Welcome to episode 123 of iWeek (la semaine Apple), the podcast: the (real) power of the Mac mini M2 Pro revealed. Recorded on Tuesday, February 7, 2023 at 5:50 p.m. Hosted by Benjamin Vincent, with Elie Abitbol and Fabrice Neuman. Produced by OUATCH Audio. To support iWeek: patreon.com/iweek. This week: the Mac mini M2 Pro looks more and more like the Mac we have always dreamed of. Tiny as it is, it delivers phenomenal power, whose full extent we can now measure a little better thanks to our guest Pierre Chevalier, technical director at Softron Media Services, a publisher of capture, playout, and broadcast software for radio and TV channels. Pierre shares his ultimate ProRes HQ and HEVC encoding tests with us and compares them against the Mac mini M1 and other Macs, and the results are very impressive. It was also a week marked by an acceleration on the AI-powered search front: on Tuesday evening, Microsoft with Bing, ChatGPT, and OpenAI; on Wednesday afternoon, Google with Bard and LaMDA, its conversational model; and next week, Apple, which will hold its annual internal AI summit, an event that remained confidential until just a few days ago. In short, things are heating up and the stakes are enormous: after 25 years of existence and near-monopoly, Google finds itself challenged on its own playing field, internet search. Can Apple, which has shown nothing in this area so far, be more than a spectator of this duel? And is Apple falling back into its old habits? Cupertino is still looking for coherence in its iPhone lineup, whose second model, the iPhone 14 Plus, is turning into a flop. To keep selling more at ever higher prices, there is now talk of a future iPhone 15 or 16 Ultra... After Plus, Pro, and Pro Max, how legible will the future lineup be?
We also look back at our presence on Patreon, with a bright red card shown by Benjamin Vincent to the creator-funding platform which, contrary to what it announces on its site, is still not able to let you switch to a monthly or yearly subscription. So we are temporarily without a solution for delivering all the new features we presented last week, and we are actively looking for a plan B. And don't miss the exclusive bonus reserved for you, dear community: Evans Hankey, who had succeeded Jony Ive at the head of Apple's design team, has left and is not being replaced, and that worries us. Finally, our favorites of the week! Fabrice Neuman recommends Unshaky, the utility that suppresses repeated keystrokes on a Mac (all the more useful on one with a butterfly keyboard). Elie Abitbol is enthusiastic about the release of two (unofficial) Carlinkit boxes for using CarPlay in a Tesla (100 to 130€ on Amazon). As for Pierre Chevalier, he introduces us to SwitchBot, a device with a small arm that presses a button, letting you remotely control an appliance that has no smarts of its own. See you on Wednesday evening, February 15, 2023, for episode 124 of iWeek (la semaine Apple)! You can also find the video version of the podcast as a preview on Patreon and then, from Sunday, on the iWeek YouTube channel. Subscribe to the iWeek YouTube channel. If Apple news is your passion, subscribe for free to "la quotidienne iWeek," the first daily podcast on Apple news: 5 minutes a day, 5 days a week, Monday through Friday, with the essential daily Apple news, now chaptered and illustrated in your favorite podcast app. la quotidienne iWeek on Apple Podcasts: https://apple.co/3lTrLe6. la quotidienne iWeek on Spotify: https://sptfy.be/6reqf (note: new link!)
For the latest iWeek news, follow our Twitter account @iweeknews and our Instagram account @iweek.news!

The Dan Rayburn Podcast
Episode 35: Amazon NFL Viewership Numbers; HEVC Support Added to Chrome; IMAX Buys SSIMWAVE; Twitch Encoding Costs

The Dan Rayburn Podcast

Play Episode Listen Later Sep 26, 2022 40:36


This week we break down the viewership numbers from Amazon's first Thursday Night Football game and the differences between all the numbers reported (11.8M, 13M, 15.3M). We also discuss the news of Google adding HEVC support in Chrome, and some cost details given out by Twitch about how expensive it is to deliver HD, low-latency, always-available live video to nearly every corner of the world. Finally, we highlight IMAX's acquisition of SSIMWAVE and its goal of bringing a specific level of video quality to consumers at home.
Companies and services mentioned: Amazon Prime Video, NFL, NBC Sports, DIRECTV, NFL+, NFL Sunday Ticket, CBS Sports, Bally Sports, Twitch, Chrome, IMAX, SSIMWAVE, Diamond Sports, Nielsen.
Questions or feedback? Contact: dan@danrayburn.com

The Video Insiders
Are We Compressed Yet?

The Video Insiders

Play Episode Listen Later Jul 7, 2022 40:11


Ramzi Khsib LinkedIn profile
AWS Elemental website
---------------------------------------------------
Join our LinkedIn Group so that you can get the latest video insider news and participate in the discussion.
Email thevideoinsiders@beamr.com to be a guest on the show.
Learn more about Beamr

Choses à Savoir TECH
The French internet monopolized by... Netflix?!

Choses à Savoir TECH

Play Episode Listen Later Jul 5, 2022 3:19


In your opinion, which sites and services consume the most internet bandwidth in France? If you are a regular listener of this podcast, you already know that video services and streaming sites largely top the podium... but at the very summit sits Netflix! The American company alone consumes a fifth of French bandwidth. Netflix weighs very heavily on the French internet, according to Arcep. Per the regulator's latest study, the video-on-demand service alone accounted for 20% of the country's bandwidth consumption in 2021. A year ago, the agency had already warned about this excessive consumption, and Netflix was already posting a similar score. The situation is not new, but the main lesson is that in twelve months it has not changed at all. On closer inspection, Netflix's share of French internet traffic keeps growing: six years ago, in 2016, the entertainment giant already represented 8% of total traffic. Just behind, Google comes second with 13% of bandwidth, closely followed by Akamai and its servers, then Facebook and Amazon. To quote Arcep's president: "in 2021, it is video traffic that occupies the majority of our telecommunications networks, with five big providers using 50% of our traffic." Several factors explain this growing consumption. First of all, the French are increasingly turning to video services for entertainment, with demands for very low latency and ever-higher quality. Arcep also makes a very interesting aside in its study about codecs and their role in delivering video streams.
If you don't know what a codec is, it is simply a mechanism that encodes and decodes a data stream while reducing the size of that stream. Today, the majority of internet traffic worldwide is compressed video data. While some codecs are very efficient and can drastically reduce the bandwidth consumed (HEVC, VP9, AV1), they can create incompatibilities, which hosting providers absolutely want to avoid. Of course, France is not the only country affected by this phenomenon: worldwide, 53% of internet traffic is video on YouTube, Netflix, Amazon Prime Video, or Disney+. On the policy side, the European Commission is working on several measures to curb this growth, such as taxing the GAFAM companies and streaming services to help finance and maintain operators' networks. A bill is expected to be presented in Brussels by the end of the year.
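As a rough illustration of the codec-efficiency point above (the savings figures below are ballpark industry estimates and an assumed baseline bitrate, not numbers from the Arcep study):

```python
# Rough sketch: how much a more efficient codec could shrink the same stream.
# Assumptions (not from the study): a 1080p H.264 stream at 5 Mbps, with HEVC
# and VP9 roughly 40% smaller at similar quality, and AV1 roughly 50% smaller.
H264_BASELINE_MBPS = 5.0

CODEC_SAVINGS_VS_H264 = {
    "h264": 0.0,
    "hevc": 0.40,
    "vp9": 0.40,
    "av1": 0.50,
}

def estimated_bitrate_mbps(codec: str) -> float:
    """Estimated bitrate of the same 1080p stream under a given codec."""
    return H264_BASELINE_MBPS * (1.0 - CODEC_SAVINGS_VS_H264[codec])

for codec in ("h264", "hevc", "av1"):
    print(f"{codec}: ~{estimated_bitrate_mbps(codec):.1f} Mbps")
```

Scaled across a service that accounts for a fifth of a country's traffic, even these rough per-stream savings explain why regulators pay attention to codec choice, and why the compatibility cost of newer codecs is weighed so carefully.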

Ideas on Video Communications | Wireless | Cellular | Fiber Optics | IPTV | Video over IP
VISLINK HCAM HEVC/UHD Wireless Camera System with Camera Control [Recorded Download]

Ideas on Video Communications | Wireless | Cellular | Fiber Optics | IPTV | Video over IP

Play Episode Listen Later May 25, 2022 51:48


The HCAM is highly flexible and features configurable mounting options and video interfaces, allowing the unit to be mounted to broadcast cameras for sporting events, ENG cameras for news and even prosumer cameras to broaden the market reach. With user interchangeable RF modules and a range of software options, the HCAM continues the line of innovative, high-performance wireless camera systems from Vislink.

The SEP Couch with Tim Pohlmann
#12 Paul Bawel | How to Manage And Set Up Patent Pools

The SEP Couch with Tim Pohlmann

Play Episode Listen Later Apr 12, 2022 44:40


Paul is Senior Vice President at Access Advance LLC, where he is responsible for business development. He has been involved in multiple patent pools and licensing programs during his 25+ years as an intellectual property attorney, including for MPEG-2, MPEG-4 Part 2, AVC, HD DVD, Blu-ray, HE-AAC, HEVC, and now VVC. Paul started his career in General Electric's licensing department. He went on to work for General Instrument, first as IP Portfolio Law Director and then as Broadband Sector IP Law Director (for Motorola after it purchased GI), managing the IP law department and IP-related matters. After that, Paul worked for Microsoft as a Business Division Patent Counsel, and then for Acacia Research Group, identifying, valuing, and purchasing patent portfolios. Since 2015, Paul has worked for Access Advance, first as Senior VP of Licensing, building their HEVC Advance licensing program, and now as Senior VP of Business Development, developing, launching, and building their VVC Advance licensing program. Paul believes that patent pools are important for facilitating standardized technologies such as HEVC or VVC: patent pools reduce transaction costs for all implementers. Having more than one patent pool (HEVC is subject to three patent pools; VVC currently has two) will, in his view, not hamper standards adoption either, since two or three patent pools still reduce the number of licensors an implementer must deal with. There has been criticism that HEVC was not as successful as AVC, but Paul argues that a lot of data tells a different story and provides evidence of the success and wide adoption of HEVC; in his view, the HEVC patent pool situation supported that success. The recent litigation between Access Advance and Vestel was also no setback for Access Advance, Paul argues; the media did not tell the whole story.
What is true is that, due to a substantial number of overlapping patents between the HEVC Advance patent pool and MPEG LA's HEVC patent pool (to which Vestel was licensed), the Düsseldorf District Court found the Access Advance HEVC license not FRAND. Access Advance therefore revised its policy in March 2022, responding to the court's December 21, 2021 ruling. Importantly, the court once again did not express concerns with any other facet of the HEVC Advance patent pool, including its royalty rates. One reason more than one patent pool was formed for VVC is that not only do the licensing rates and licensing models differ across the pool programs, but the internal revenue-sharing policies can also be very different. At Access Advance, Paul states, the pool counts patents for internal royalty sharing on a patent-family basis, so that licensors have no incentive to file, for example, multiple divisional patent applications covering very minor inventions just to increase their share of the pool. The different rules and licensing rates therefore attract different SEP licensors to either Access Advance or MPEG LA's VVC patent pool.
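The family-based counting described above can be illustrated with a toy calculation (the portfolios here are invented for illustration and this is not Access Advance's actual formula): counting each patent family once removes the payoff from filing extra divisionals within a family.

```python
from collections import Counter

# Toy illustration (invented portfolios): each patent is (licensor, family_id).
# Licensor A files three divisionals in a single family; licensor B holds two
# distinct families. Per-patent counting rewards A's filing strategy, while
# per-family counting does not.
patents = [
    ("A", "fam1"), ("A", "fam1"), ("A", "fam1"),  # one family, three divisionals
    ("B", "fam2"),
    ("B", "fam3"),
]

def shares_by_patent(patents):
    """Revenue shares if every granted patent counts individually."""
    counts = Counter(licensor for licensor, _ in patents)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def shares_by_family(patents):
    """Revenue shares if each patent family counts once, as Paul describes."""
    families = {(licensor, fam) for licensor, fam in patents}
    counts = Counter(licensor for licensor, _ in families)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

print(shares_by_patent(patents))   # A's share inflated by divisionals
print(shares_by_family(patents))   # divisionals no longer pay off
```

Under per-patent counting A takes 3/5 of the pot; under per-family counting A takes only 1/3, so the three divisionals earned nothing extra.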

The Video Insiders
Navigating the Video Codec Landscape

The Video Insiders

Play Episode Listen Later Dec 10, 2020 43:39


Brian Alvarez LinkedIn profile
Vittorio Giovara Blog
---------------------------------------------------
Join our LinkedIn Group so that you can get the latest video insider news and participate in the discussion.
Email thevideoinsiders@beamr.com to be a guest on the show.
Learn more about Beamr

The Video Insiders
HEVC Market Perspectives

The Video Insiders

Play Episode Listen Later Sep 17, 2020 56:39


Thierry Fautier LinkedIn profile, Harmonic website
Ben Mesander LinkedIn profile, Wowza website
Walid Hamri LinkedIn profile, SeaChange website
Wade Wan LinkedIn profile, Broadcom website
Our previous panel on extending the life of H.264 is here
---------------------------------------------------
Join our LinkedIn Group so that you can get the latest video insider news and participate in the discussion.
Email thevideoinsiders@beamr.com to be a guest on the show.
Learn more about Beamr

The Video Insiders
Relieving the bandwidth squeeze with content-adaptive encoding.com

The Video Insiders

Play Episode Listen Later Jun 22, 2020 37:32


Gregg Heil LinkedIn profile
Encoding.com, Gregg's company, is here
Beamr CABR on encoding.com is here
---------------------------------------------------
Join our LinkedIn Group so that you can get the latest video insider news and participate in the discussion.
Email thevideoinsiders@beamr.com to be a guest on the show.
Learn more about Beamr

The Video Insiders
The Super Bowl of HDR.

The Video Insiders

Play Episode Listen Later Feb 17, 2020 35:09


Michael Drazin LinkedIn profile
Related episode: HDR from glass-to-glass
--------------------------------------
The Video Insiders LinkedIn Group is where thousands of your peers are discussing the latest video technology news and sharing best practices. Click here to join
Would you like to be a guest on the show? Email: thevideoinsiders@beamr.com
Learn about Beamr
--------------------------------------

The Video Insiders
The technology behind building value on CTV platforms with advertising.

The Video Insiders

Play Episode Listen Later Feb 3, 2020 37:24


Download the Innovid 2020 State of Connected TV Report
Learn about Innovid
Tal Chalozin LinkedIn profile
Related episode: Direct-to-consumer streaming service launches and first impressions
Listen to Episode 20 for more information on interactive advertising and video monetization technology
--------------------------------------
The Video Insiders LinkedIn Group is where thousands of your peers are discussing the latest video technology news and sharing best practices. Click here to join
Would you like to be a guest on the show? Email: thevideoinsiders@beamr.com
Learn about Beamr
--------------------------------------
TRANSCRIPT: (edited lightly to improve readability)
Tal Chalozin: 00:00 Innovid is what we call a video marketing platform. It's a technology platform sold to marketers, brand executives, and agencies that lets them do three things. First and foremost, what is called an ad server. It's a technology that actually streams the ad to every website. So if a marketer, let's say Chrysler, or Procter & Gamble, or Best Buy, or others, is advertising on YouTube or Hulu or Fox or NBC or the New York Times, there's a centralized platform where you can actually manage the campaign, upload the MP4s, and actually do the streaming and make decisions on which video file to serve. So right now we're very fortunate to be the largest video ad server in the world, in the United States and the many other countries that we operate in. Tal Chalozin: 00:51 A little over a third of all video ads in the United States are being streamed by Innovid. So if you tune into every website and every app, let's say Hulu, one out of three ads (and as a matter of fact on Hulu it's probably even higher than that, almost one of every two ads) would be coming from Innovid. Every day we stream roughly 450 years' worth of ads. And this is just ads content. So we stream a lot of videos. To complete the story of our platform.
At a core it's an ad server. And then on top of that there are two applications. One is around creative and the other one is around measurement. Announcer:          01:31          The video insiders is the show that makes sense of all that is happening in the world of online video as seen through the eyes of a second generation codec nerd and a marketing guy who knows what I-frames and macro blocks are. And here are your hosts, Mark Donnigan and Dror Gill. Dror Gill:          01:51          Today we have a very special guest and an old friend of mine Tal Chalozin who is the CTO of Innovid. Hi Tal. Welcome to The Video Insiders. Tal Chalozin:       01:59          Hello Dror. Hello Mark. Thanks for having me. It's a true honor. Mark Donnigan:      02:03          Yeah, welcome Tal. So tell us about Innovid. Tal Chalozin:       02:07          Innovid is a software company that I had the honor of starting together with my two friends and co founders, Zvika Netter our CEO and Zack Zigdon who runs all of our international business. And myself, it's a company that we started back in 2007. Before I explain what we do, just to take you back almost 13 years ago, this is the time after Google acquired YouTube and Hulu as a streaming site was kind of an inception mode. NBC and News Corp started this operation to bring streaming television into the internet. Tal Chalozin:       02:49          And what we said back then is that we believe that the future of television is over IP and to be streamed. We thought that when this would happen the one thing that we really want to tackle is the viewing experience around the advertising. Because it was clear that marketers and ad dollars take a very, very important part of the experience of television subsidizing content and creating the access to so many different people. 
But it's also clear that sitting through a pretty boring 30 second spot and that every person around the United States in a broadcast time window would see the exact same ad. It's kind of silly. And so we went on a journey to build a software that helps to create a better viewing experience around commercials. Tal Chalozin:       03:44          So we started with the technology, with technology that allows what is called in kind of layman terms virtual product placement. It was a computer vision technology that lets you process videos and reconstruct the 3D. So understanding occlusions and backgrounds and foregrounds and planes and allow you to render a product a 3D product in 3D images into the shot. And it looks like as if it was there while the content was shot while reproducing all the shades and lighting and again, occlusion and, and things like that. This was where we started. We got a bunch of patents. This is how we raised our A round back then. We got so many awards. It was awesome. But then what we learned is that it's amazing, but advertising is a business of scale for marketers to actually play. Tal Chalozin:       04:38          One of the main things that marketers gain out of television is a massive megaphone that lets you tell your story to millions, if not hundreds of millions of people in 30 seconds. So then we went on a journey to better learn this business and expanded more and more capability and fast forward to today. Innovid is what we call a video marketing platform. It's a technology platform sold to marketers, brands executives and agencies that lets them do three things. First and foremost what is called an ad server. It's a technology that actually streams the ad to every website. 
So if a marketer, let's say Chrysler or Procter and Gamble or Best Buy or others is advertising on YouTube or Hulu or Fox or NBC or New York times there's a centralized platform that you can actually manage the campaign, upload the MP4's and actually do the streaming and make decisions on which video file to serve to the individual that is streaming the content. Tal Chalozin:       05:48          So right now we're very fortunate to be the largest video ad server in the world. And in many other countries in the United States, many other countries that we operate in a little over a third of all video ads in the United States are being streamed by Innovid. At a core it's an ad server. And then on top of that, there are two applications. One is around creative and the other one is around measurement. Our headquarters in New York. There's 350 people, a big R&D center in Israel and then offices across the U S and in Europe. And in APAC. If you read the trades, it seems like the future of television has no ads. Disney Plus, Netflix, Amazon, Apple, all of the big services that made a lot of splash in the press toot the horn of no ads. Tal Chalozin:       06:43          This is very nice for marketing, but in reality advertising dollars pays the bills that makes so many pieces of content to be streamed. The subscription services could not really thrive on subscription alone, let alone when you're talking about a massive global service that would like to reach hundreds of millions of subscribers. You cannot do that only with subscribing. With subscription dollars or advertising is a very strong market and in the future will be that. Easy testament is that just last week NBC launched or Comcast launched there foray into that game called Peacock. And the main thing that they said is that, Hey there's so much noise around advertising, about no ads. This cannot work. We will include ads. 
Tal Chalozin: 07:36 And this brings me to the second part of what I wanted to say about the future: they put a lot of emphasis around the ad experience. So it's not that you will see ads in the same way that you're used to watching television. There will still be ad breaks, but it will look and feel very, very different than what it used to be on television. And we play a very big role there and in other places. And we think that yes, the future of television is over the internet, over IP. The future of television is with ads, or at least in some capacity of it, but it would look and feel much different. Dror Gill: 08:14 I want to ask a question regarding the ad server component. These ads go interleaved into content experiences, sometimes before or after or during the actual streaming of the content. So how do you match the resolution and the quality of the ad that you provide to the actual content that is being streamed? Because I don't assume that somebody watching a 4K movie would like to be interrupted by, you know, an SD, low-quality ad. It would probably be quite annoying. Tal Chalozin: 08:52 I have so many things to say about this stuff. First of all, before I answer exactly how we do it, I can tell you that people think that the internet is so advanced in 2020 that all of this is practically solved, and there is no real problem bringing television over the internet. And it's not really true. I'm sure you know very well that the general standard in the video ads industry right now is that we, as the server that generates the files and hosts them, would create an XML template called VAST and put multiple video renditions in a file, and create a manifest that would have different renditions of and actually different encoders as well. Tal Chalozin: 09:44 Of the file. It used to be, we used to put FLV and other stuff. But right now it's all MP4 containers.
But anyway, you put multiple renditions and then the actual player picks the right one and the player, essentially what it's doing is doing playlisting. So picking the right ad at the right time and there is a, in the last, the last few years, but honestly, just in the last year, there is a big change in the way video ads are being streamed. Moving from what used to be called CSAI client side ad insertion, AKA playlisting. So on the client you download some, some type of playlisting and then you just move between different files even if it's the main content - it doesn't matter the rendition, you would still switch between different files that you do progressive downloads for. Tal Chalozin:       10:45          Most of the very large sites and today apps are what is called SSAI server-side ad insertion. Essentially it doesn't matter what file we bring. You convert it into an HLS stream, create TS files, and then do kind of the, the term that everyone is using is manifest manipulation. So just manipulate the M3u8 and swap packets, TS files inside the M3u8. I hope that I don't need to explain everything that I'm just saying, but stop me if you want me to. So essentially let's say on Hulu, this is how it works. You will tune into a stream and you hit play on an episode of a, I don't know, The Good Wife on Hulu. What they will do, they will go, let's say this is 48 minutes of an episode or 21 minutes of an episode with multiple ads that need to be weaved throughout. So what they will do, they will do a server side call to all the different ads and then get either an MP4 and do just in time transcoding for it. Or, if it's pre-prepared, like a lot of the things that we do you would get the actual TS file and then just merge it into a single M3u8 with content TS files in the right rendition and the ads. 
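The multi-rendition VAST response Tal describes can be sketched in a few lines (a rough sketch, not Innovid's implementation; element names follow the general IAB VAST convention, while the ad ID, URLs, and bitrates are invented placeholders):

```python
import xml.etree.ElementTree as ET

# Minimal sketch of a VAST-style ad response carrying several renditions of
# the same creative; the player picks the MediaFile that best matches its
# playback conditions. URLs, sizes, and bitrates are invented placeholders.
def build_vast(renditions):
    vast = ET.Element("VAST", version="3.0")
    ad = ET.SubElement(vast, "Ad", id="example-ad")
    inline = ET.SubElement(ad, "InLine")
    creative = ET.SubElement(ET.SubElement(inline, "Creatives"), "Creative")
    linear = ET.SubElement(creative, "Linear")
    media_files = ET.SubElement(linear, "MediaFiles")
    for url, width, height, bitrate in renditions:
        mf = ET.SubElement(
            media_files, "MediaFile",
            delivery="progressive", type="video/mp4",
            width=str(width), height=str(height), bitrate=str(bitrate),
        )
        mf.text = url
    return ET.tostring(vast, encoding="unicode")

xml_doc = build_vast([
    ("https://ads.example.com/spot_1080.mp4", 1920, 1080, 5000),
    ("https://ads.example.com/spot_720.mp4", 1280, 720, 2500),
])
print(xml_doc)
```

The player (or the server-side stitcher) then selects the MediaFile that best matches its bandwidth and display, which is the rendition-selection step described above.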
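The server-side "manifest manipulation" Tal describes can likewise be sketched as splicing ad segments into an HLS media playlist (a simplified sketch: segment names and durations are invented, and a real SSAI service also handles per-viewer decisioning, timing metadata, encryption, and ad tracking):

```python
# Simplified sketch of SSAI manifest manipulation: ad segments are spliced
# into an HLS media playlist at a break point, bracketed by EXT-X-DISCONTINUITY
# tags so the player resets its decoder between content and ad.
# Segment URIs and durations are invented placeholders.
def splice_ads(content_segments, ad_segments, break_after):
    lines = ["#EXTM3U", "#EXT-X-VERSION:3", "#EXT-X-TARGETDURATION:7"]
    for i, (uri, duration) in enumerate(content_segments):
        lines += [f"#EXTINF:{duration:.3f},", uri]
        if i == break_after:  # splice the ad break after this content segment
            lines.append("#EXT-X-DISCONTINUITY")
            for ad_uri, ad_duration in ad_segments:
                lines += [f"#EXTINF:{ad_duration:.3f},", ad_uri]
            lines.append("#EXT-X-DISCONTINUITY")
    lines.append("#EXT-X-ENDLIST")
    return "\n".join(lines)

playlist = splice_ads(
    content_segments=[("show_000.ts", 6.0), ("show_001.ts", 6.0)],
    ad_segments=[("ad_000.ts", 6.0)],
    break_after=0,
)
print(playlist)
```

Because the splice happens in the playlist the server hands out, the client just plays ordinary HLS, which is what lets the stitcher make per-viewer ad decisions without any special player support.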
Mark Donnigan:      12:09          So Tal, are you actually able to get the, you know, I'll call it the mezzanine file of the ad, and then you can create a high quality or at least the highest quality possible for the, you know, target resolution and bit rate or are you limited by the fact that sometimes, you know, you may get a mezzanine quality and other times it may just be a 1080p in which case Dror's example of like a 4K. You're just limited. I mean, you have the quality you have. So can you tell us, shed some light on that? Tal Chalozin:       12:43          It's a fascinating point. This is an uphill battle for us because we are, we're still an intermediary. We're not the post production shop at that makes the video file, so we're limited to whatever you would get. So yeah, the intention is to get a Pro Res or a mez file, mezzanine file, of the ad that allows us to do transcoding into whatever we want. But, that's not the reality all the time. In many cases we would get to your example, a 1080p is a good case. In some cases we get 720 and sometimes we even need to up convert it, which clearly is not really working. Tal Chalozin:       13:34          And the reality is that the 4K streaming of ad supported content is not a real thing as of right now. But, 1080p is definitely one that is. And again, we're in 2020 right now and you can open whatever app without naming names, but you can open one of the biggest apps out there and I'm sure you would get to an ad break and even an unaided eye can see that it's a totally different rendition of the ad, even different audio, let alone volume normalization. But even just the quality of the encoding is significantly different or lower than the actual content. And this is a common case or the state of the internet right now. Dror Gill:          14:24          But this is something you're trying to avoid? 
Tal Chalozin:       14:26          We're definitely trying to avoid the way that we're doing it is that if you think about it, there are two inputs to our system. One is the ad itself, literally, again the mez file, Pro Res, whatever container that is, an MP4. And then, what is called in ad terms a media plan. Media plan is saying that we are Chrysler, the campaign starts in this date and ends on this date, there is X number of million impressions on YouTube, then on Snapchat, then on Hulu, and then the full list. It's a very complicated meta data of the whole campaign. So those are the two inputs that we're getting. Historically that was just an upload. So in our system, you would go and just upload the files. Tal Chalozin:       15:13          More and more we're trying to get down to the source and create some type of an integration with the, with the DAM, the digital asset manager. Let's say, again, this is a Chrysler commercial, Chrysler 300 commercial. Someone actually did the post for it, and they do have the approved asset at the best quality possible. But those are not our customers. So sometimes we don't get access to that and we need to beg the customer to get that and try to explain what's the outcome if they don't get it. So what we're trying to do is to get down to the source as close as possible. So then that post-production shop would actually have an API to us, or even if they upload, they would upload the source and not have a downsample of it. Mark Donnigan:      16:05          So our audience, are largely encoding engineers, video engineers, and we just hear over and over again incredible frustration about this. Dror and I were just talking to a very large live sports streaming service last week and the person responsible for encoding was lamenting that whenever there's issues with quality, it's because he can't do any better. It's a source issue! The high quality asset exists. 
Why can't we get access to it so that we can provide an incredible advertising experience? And I'm just wondering, how do we fix this? Tal Chalozin:       16:50          How do we fix that? As more hours per day continue to pour into the connected, let's call it the connected television space, and as more and more ad dollars flow in there, and more and more people cut their cord, or shave their cord, or are cord-nevers who haven't even been exposed to traditional television, this becomes the norm and not the new thing. It's essentially a supply chain or a workflow problem, because as you said, the file is there. It's not that someone is shooting on an SD camera and now you're stuck with a shitty file. People are using RED cameras to shoot it. So yeah, it's more of a workflow problem. And this is what we set out to do: just remove the clutter and connect everything in an industry that wasn't connected. Ads on television are still being delivered predominantly through FedEx, with cassette tapes being sent to local TV stations. Tal Chalozin:       17:50          This is still a thing. We're moving from this world, and now talking about getting a mezzanine or 4K file. I'll tell you about one thing that I'm very keen on. Getting the raw asset is one thing. The other thing, if you look at it, is that there are multiple parties on the internet that are getting an asset and transcoding it. So let's say that we get the video file. Probably Facebook got the video file as well, maybe not through Innovid, and they also transcoded the video file. And then YouTube or TikTok got the video file somehow. And sometimes clients would use Innovid; sometimes you would go directly into YouTube and upload the raw file. And maybe NBC would get it through some other distribution channel on the broadcast side. Tal Chalozin:       18:44          And then when they run it online, they would take the broadcast file and transcode it as well.
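One concrete rule hiding in the workflow Tal and Mark are circling around is the rendition ladder: from a given source you can only honestly encode down, never up. A minimal sketch of that constraint (the ladder rungs here are illustrative, not Innovid's actual profiles):

```python
# Hypothetical sketch of the "never up-convert" rule: given a source
# height, emit only ladder rungs at or below it. Up-scaling a 720p
# mezzanine to 1080p adds no detail, so taller rungs are dropped.

LADDER = [2160, 1440, 1080, 720, 480, 360]  # target heights in pixels (illustrative)

def renditions_for(source_height: int) -> list[int]:
    """Return the target heights we can honestly produce from this source."""
    return [h for h in LADDER if h <= source_height]

print(renditions_for(1080))  # a 1080p source: no 1440p or 2160p rungs
print(renditions_for(720))   # a 720p source tops out at 720
```

The same filter is the reason a 720p ad delivered alongside 4K content looks visibly worse: there is simply no honest rung above the source.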
So there were multiple people or organizations that got the raw footage, and each of them is in charge of transcoding. This is pretty stupid. It should be some type of a centralized repository, because there is an ID for every file. There is an initiative called Ad-ID to make sure that there will be a unified numbering system and a catalog, and by virtue of that, metadata and tracking across the ad space, for every ad. And then not only do you have a catalog, you can access all the different resolutions in a centralized place. So then if YouTube wants a downsampled version, you just pick the resolution you want. You don't take the raw and then encode it as well. Tal Chalozin:       19:32          There's an initiative. There are several companies trying to do that. It's kind of a herding-cats type of initiative. But it's almost a necessity, because unless you do that, you will always have those artifacts. Mark Donnigan:      19:46          Yeah, that's right. And that Ad-ID, in your experience, does it travel, I'll use the word seamlessly, between these various systems? Or is even keeping that Ad-ID intact an issue? Tal Chalozin:       19:59          You know, it is metadata, but in reality, again, we are one of the largest platforms that actually accesses files, streams them out, and encodes them. Most people that do encoding do not carry over all the metadata. That's one thing. Second thing is that most platforms don't even look into that metadata, so they don't expose it or do anything with it. Tal Chalozin:       20:22          Several encoders do not put it in there. So right now, yes, it is there, but it's not fully available. So the solution that is used mostly right now, and you would laugh, is putting it in the actual file name. Literally as unstructured text in the file name: before the dot you put an underscore and then the actual ID, which clearly doesn't carry through anywhere.
So that's the reality right now in 2020. It's almost like, Dror, do you remember Yossi Vardi's example of pigeons carrying DVDs in order to transfer large files? Dror Gill:          21:04          He also did another experiment. He took a snail and he stuck a USB drive on its back. And then he had two computers connected with a crossed ethernet cable, and he was trying to see whether the data would go faster through the cable or via the snail moving slowly between the computers with the USB drive on its back. And I'm sorry to say, but the snail won! Tal Chalozin:       21:28          From the outside the industry seems like all problems are solved, but it's far from it. You know, the Super Bowl is coming up very soon, and Fox is going to air the Super Bowl, and like every year you can access it in streaming as well. And it's still a discussion every year: is the internet ready for that? The term for ad serving in real time in the world of television is DAI, dynamic ad insertion. Every broadcaster that gets the rights to stream the Super Bowl is asking: are we ready, are we safe to do DAI for the ads, or, to play it safe, are we going to take the broadcast feed and just retransmit? I can tell you a funny story. Last year we did a really cool experiment. Tal Chalozin:       22:19          CBS had the rights for the Super Bowl, and they use a system that takes the SCTE tone and converts it into an ID3 tag for digital systems. And in the ID3 we put the marker of the ads, the actual Innovid URL of the ad that is about to play. Originally the system was architected for measurement, so you can do measurement from the client side: there is something on the client side that gets the ID3 tag and then fires it, just does an HTTP GET to that URL, in order to track the ads from the client in the most accurate way.
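That client-side flow, a timed-metadata (ID3) event carrying a tracking URL that the device simply GETs, can be sketched roughly as follows. The tag layout and field name are invented for illustration; real players surface timed metadata through their own event APIs:

```python
# Rough sketch of the client-side measurement beacon described above.
from urllib.parse import urlparse

def handle_id3_tag(tag: dict, fire_beacon) -> bool:
    """Fire the measurement beacon if the tag carries a valid ad URL.

    `tag` is a decoded timed-metadata frame, e.g. {"TXXX": "https://..."}
    (field name is a placeholder). `fire_beacon` is injected; in
    production it would be a fire-and-forget HTTP GET.
    """
    url = tag.get("TXXX", "")
    if urlparse(url).scheme not in ("http", "https"):
        return False      # not a tracking URL; ignore the tag
    fire_beacon(url)      # e.g. requests.get(url, timeout=1)
    return True

fired = []
handle_id3_tag({"TXXX": "https://ads.example.com/track?id=123"}, fired.append)
print(fired)  # the URL that would have been requested
```

Injecting `fire_beacon` keeps the sketch testable without a network; the key idea is that measurement happens on the device, at the moment the tag arrives.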
But then what we did last year together with CBS is add the ability to also run overlays on top of the video. Tal Chalozin:       23:09          So that URL was not just for measurement, but also downloaded graphics to be displayed as a kind of transparent layer on top of the content, on the device itself. This is a live stream, not VOD or anything like that: you do a live stream of the Super Bowl. Last year, many devices on CBS Sports had a small SDK that, again, took the SCTE tone, converted it to an ID3 tag, got a URL for a PNG file or whatever, and rendered it in near real time. And then every house in the United States got something else. We did an experiment together with Pringles. The whole commercial was some type of a game with Pringles, so you would get a message that was tailored to you. Tal Chalozin:       24:00          It literally featured the name of your city on it. And then it allowed you to use your remote, let's say on Apple TV, to swipe left and right and play some kind of a funky game as the ad was playing. So, funny thing, this is 2019. You would imagine that we would have that technology available, this is not rocket science, we're talking about a lot more advanced things on the internet, but even that was super revolutionary. And this year this capability will not be available, because the way that Fox works is different. But that counts as super cutting edge. Mark Donnigan:      24:40          Now Tal, I know that you're working very closely with Roku, so why don't you share with us what you're doing with them? Mark Donnigan:      24:49          Share what you can, and tell us about what's happening on the Roku platform, because I think that's very important to all of us in streaming media, streaming video. Tal Chalozin:       25:00          Roku's platform is divided into two parts. One is a device: a streaming stick and a streaming box.
But Roku, first and foremost, is an operating system that runs on that device or is licensed to TV manufacturers, to TV OEMs. And right now there are eleven OEMs that carry it: anything from TCL or Insignia all the way to LG, and some SKUs from Sharp as well. And by numbers, Roku is the largest television operating system right now in the United States. Most TVs purchased in 2019 were Roku-powered, or their streams were powered by Roku. Tal Chalozin:       25:47          So this is larger than Amazon Fire, way larger than Apple TV or Xbox or PlayStation or whatnot. So this is Roku. Back in the early days of Roku, dating back to 2014 or '15, we did the first advertising-oriented deal with Roku to create a small library, an SDK, that would be part of their firmware. Many years later, the name is the Roku Ad Framework, or RAF. It is a set of libraries that lets Roku app developers get access to the stuff they need to run ads inside the app without a lot of work. That allows us to create a technology for, for example, interactive television, something that can be done in a very scalable way, because now every app on Roku has the ability to render ads that can have overlays. Tal Chalozin:       26:47          You can press the remote and you can purchase things or send things to your phone or whatever activity you would like. So this is the first thing we've done with Roku: enabled that technology at mass scale, many, many years before Roku was a big success. But at the end of last year, in September, together with Roku we announced kind of the second act of the innovation on the future of television, which is around measurement. I mentioned at the top of the show that we have three parts to our platform: the ad server, which we talked a lot about; different tools around creative; and the third one would be measurement capabilities.
On the measurement side, we talked a lot about areas of the television industry that require innovation, and measurement is maybe at the top of the list. Right now, measurement on television is dominated by a company called Nielsen, and as I'm sure many people know, the way they measure television, because of the lack of connectivity, is by putting a people meter, a device, in people's homes, in very, very few households in the United States, which act as a sample, or a panel, that presumably should represent every household in the United States. So there are roughly 20,000 families in the United States that represent the television ecosystem, while there are north of 100 million households in the United States, and maybe 80 or 90 million households watching broadcast television, and they're being represented by a 20,000-home panel that essentially measures what people actually watch. Tal Chalozin:       28:42          So we want to change that. We, and, an important point, many other companies, are at it, because it's obvious this needs to be changed. But we teamed up with Roku. Every one of the devices that carries their operating system, so every one of those TVs that have Roku as an operating system, has a small chip called ACR. It stands for automatic content recognition, and it essentially knows what you're watching. It records everything that hits the glass. And it doesn't matter if it hits the glass because it's an app on the Roku platform, let's say Hulu or YouTube or Netflix, or you plugged in your set top box via HDMI, or you plugged an antenna into the TV, or even you have a DVR or VHS plugged into your television. Tal Chalozin:       29:32          If it's rendered on the screen, then Roku knows what it is. They do a second-by-second, or almost frame-by-frame, match to a catalog, and then know exactly what you're watching and at what time code.
We can talk about privacy as well, which is a very important part of it. But this is all opted in. You don't have to contribute this data, but most people do. And then we get this data. We don't care about the individual household, but we can use it, because you don't need a panel anymore when every television is telling you exactly what you're watching. So we're on a mission to reinvent television measurement in a much better way. Dror Gill:          30:15          That's really amazing. So the television is actually watching what you are watching. Even if it's not streamed through the Roku platform, it's watching everything that is projected to the screen, and not just recording the pixels: it's actually using this automatic content recognition system, analyzing and knowing what piece of content this is, whether it's a live broadcast or video on demand. It could be a DVD or a VHS, time-shifted, or it's an ad. Exactly. Mark Donnigan:      30:51          Where is that fingerprinting happening, Tal? Tal Chalozin:       31:01          And by the way, a disclaimer: I don't work for Roku and I don't know any internal data about Roku; we have a strong partnership with them. So Roku has unique technology, and by the way, other TV manufacturers are doing the same thing. This is not limited to Roku. Vizio, who made a lot of noise around that as well, and many others, Sony and Toshiba among them, are using similar technologies. What's on the device is mainly picking up multiple pixels, hashing them together, and sending that to the cloud. The matching to the catalog is not happening on the device; there's clearly no need for that. And there are several companies that create this catalog and do essentially the pattern matching between the temporal data of that set of consecutive frames and a catalog, to know exactly what you're watching. Tal Chalozin:       31:55          Is it, what show? What episode?
Is it an ad? So one thing is to know the catalog. The other is to know what is on right now, everywhere. It's a very complicated problem, because sometimes you may be watching it live, again, tuning into, I don't know, ABC, but because that show is a local show, you would watch it streamed by the Kansas City, Missouri ABC affiliate, and it's not a national show. So you can't really match it to a catalog and know whether it's live or not. And then when it comes to ads, it gets even more complicated, because some of the ads are inserted in real time. So you need to know that an ad was inserted in real time, so it doesn't impact the identification of the stream: you didn't really change the channel, it's just dynamic insertion. Dror Gill:          32:48          So doing all of this measurement, I think it puts a lot of responsibility on your part of the value chain, on the software that you create, on the reports that you generate. Because based on this, I guess, is how the content providers get paid, right? For showing those ads, as you said. Tal Chalozin:       33:12          We are what is called the system of record for billing. So I mentioned that roughly a third of the ads are being transacted by us. This is a very rough number, because the dollars don't go through us; we're just creating the billing. We are the actual counter of something like $5 billion of ad dollars. So again, YouTube and Snapchat and the New York Times and NBC and Fox and TubiTV and many other channels and apps are being paid based on our numbers. And for that, we need to do a lot of filtration, detecting what is fraud, making sure there are no false positives, and many other things like that. And for it, we go through an audit process: Ernst & Young is the auditor, and there's an organization called the Media Rating Council, and we go through an audit every year to make sure that what we say we do, we actually do.
Tal Chalozin:       34:12          And there are no problems in the counting. And yeah, it happens all the time that we are counting, but also, clearly, broadcasters or apps would count for their own use as well. And sometimes, unfortunately, the numbers are not the same. So we would say that P&G ran 10 million ads, and the broadcaster, NBC, Discovery, what have you, would say that actually it's 10 and a half million ads, so they need to get paid more. But the way the contract is written, Innovid's numbers, because we're unbiased, are what will dictate the payment. So you're like the gold standard in measurement. But it's a very interesting world. Tal Chalozin:       35:08          It's an ever-changing world. Counting ads 10 years ago and counting ads today are very, very different businesses. Mark Donnigan:      35:14          There are a lot of studies, and I think you even have one that you can cite if you'd like, that say very clearly that consumers are not opposed to ads. This whole notion that people "hate ads" is actually not true. What they hate is a bad or an irrelevant experience. If the platform happens to know that I'm looking for a new car and I get served a great car ad, guess what? And especially if it piques my interest, that's actually a good experience. Tal Chalozin:       35:48          100%. Yeah. We always use exactly the same term that you mentioned: people don't hate ads, they just hate bad ads. And that's absolutely true. When you read the trades, it looks like ads are a very gloomy thing. Tal Chalozin:       36:06          And then you go to platforms like, in my mind, Instagram, which is the best ad experience ever made. When you see ads on Instagram, it's significantly better, and it's not disruptive at all. You have your thumb there and you can continue scrolling, and then many, many people choose to actually watch. A completely reverse model.
It's not that I'm forced to watch the ad. I literally can continue scrolling the same way that I'm scrolling there. But people are choosing to watch, because they're good ads. Mark Donnigan:      36:43          This has been a really amazing discussion, and you know we have to do a part two. There are a few issues we did not cover, and we must cover them. It's really been fascinating. Yeah, absolutely. Thanks for joining us, Tal. Tal Chalozin:       36:57          I'd love to. Thank you so much. Thanks, Mark. Thanks, Dror. Thanks, everyone that listened. Thanks, Beamr. Announcer:          37:04          Thank you for listening to The Video Insiders podcast, a production of Beamr Limited. To begin using Beamr's codecs today, go to beamr.com/free to receive up to 100 hours of no-cost HEVC and H.264 transcoding every month.

The Video Insiders
Video coding retrospective with codec expert Pankaj Topiwala
Jan 24, 2020 · 54:08
Click to watch: SPIE Future Video Codec Panel Discussion
Related episode with Gary Sullivan at Microsoft: VVC, HEVC & other MPEG codec standards
Interview with MPEG Chairman Leonardo Chiariglione: MPEG Through the Eyes of its Chairman
Learn about fastVDO here
Pankaj Topiwala LinkedIn profile
--------------------------------------
The Video Insiders LinkedIn Group is where thousands of your peers are discussing the latest video technology news and sharing best practices. Click here to join.
Would you like to be a guest on the show? Email: thevideoinsiders@beamr.com
Learn more about Beamr
--------------------------------------
TRANSCRIPT:
Pankaj Topiwala: 00:00 With H.264, and then H.265 (HEVC) in 2013, we were able to do 300-to-one, up to 500-to-one, compression on, let's say, a 4K video. And with VVC we have truly entered a new realm where we can do up to 1000-to-one compression, which is three full orders of magnitude of reduction from the original size. If the original size is, say, 10 gigabits, we can bring that down to 10 megabits. And that's unbelievable. So video compression truly is a remarkable technology, and it's a marvel to look at. Announcer: 00:39 The Video Insiders is the show that makes sense of all that is happening in the world of online video, as seen through the eyes of a second-generation codec nerd and a marketing guy who knows what I-frames and macroblocks are. And here are your hosts, Mark Donnigan and Dror Gill. Dror Gill: 01:11 Today we're going to talk with one of the key figures in the development of video codecs, and a true video insider: Pankaj Topiwala. Hello Pankaj, and welcome to The Video Insiders podcast. Pankaj Topiwala: 01:24 Gentlemen, hello, and thank you very much for this invite. It looks like it's going to be a lot of fun. Mark Donnigan: 01:31 It is. Thank you for joining, Pankaj. Dror Gill: 01:33 Yeah, it sure will be a lot of fun.
So can you start by telling us a little bit about your experience in codec development? Pankaj Topiwala: 01:41 Sure. I should say that, unlike a number of the other people that you have interviewed or may interview, my background is a fair bit different. I really came into this field by a back door, and almost by chance. My PhD degree is actually in mathematical physics, from 1985, and I actually have no engineering, computer science, or even management experience. So naturally I run a small research company working in video compression and analytics; that makes sense, but that's just the way things go in the modern world. The entry point for me was that even though I was working in very, very abstract mathematics, I decided to leave academia. I worked in academia for a few years and then decided to join industry, and at that point they put me into applied mathematical research. Pankaj Topiwala: 02:44 And the topic at that time that was really hot in applied mathematics was the topic of wavelets. I ended up writing and editing a book called Wavelet Image and Video Compression in 1998, which was a lot of fun, along with quite a few other co-authors on that book. Wavelets had their biggest contribution in the compression of images and video. And I noticed that video compression was a far larger field than image compression, by orders of magnitude; it is probably a hundred times bigger in terms of market size. And as a result I said, okay, if the sexiest application of this newfangled mathematics could be in video compression, then I would enter that field, which I did roughly with the book that I mentioned in 1998.
Mark Donnigan: 03:47 So one thing that I noticed, Pankaj, because it's really interesting: your initial writing and research was around wavelet compression, and yet you have been very active in ISO MPEG, all block-based codecs. So tell us about that? Pankaj Topiwala: 04:08 Okay. Well, obviously, when we made that transition, our initial starting point was in doing wavelet-based video compression. When I first founded my company, fastVDO, in the 1998-1999 period, we were working on wavelet-based video compression, and we pushed that about as much as we could. At one point we had what we felt was the world's best video compression, using wavelets in fact, but best overall. One thing that we should tell your listeners is that the value of wavelets, in particular in image coding, is that not only can you do state-of-the-art image coding, but you can make the bitstream what is called embedded, meaning you can chop it off anywhere you like and it's still a decodable stream. Pankaj Topiwala: 05:11 And in fact it is the best quality you can get for that bit rate. And that is a powerful, powerful thing you can do in image coding. Now in video, there is actually no way to do that. Video is just so much more complicated. But we did the best we could to make it, not embedded, but at least scalable. We built a scalable wavelet-based video codec, which at that time was beating the current implementations of MPEG-4. So we were very excited that we could launch a company based on a proprietary codec built on this newfangled mathematics called wavelets, leading us to a state-of-the-art codec. The facts on the ground, though, were that just within the first couple of years of running our company, we found that in fact the block-based transform codecs that everybody else was using, including the implementers of MPEG-4,
Pankaj Topiwala: 06:17 and then later AVC, those quickly surpassed anything we could build with wavelets, in terms of both quality and stability. The wavelet-based codecs were not as powerful or as stable, and I can say quite a bit more about why that's true, if you want? Dror Gill: 06:38 So when you talk about stability, what exactly are you referring to in a video codec? Pankaj Topiwala: 06:42 Right. So let's take our listeners back a bit, to compare image coding and video coding. In image coding, basically, you're given a set of pixels in a rectangular array, and we normally divide that image into sub-blocks, and then do transforms, then quantization, then entropy coding; that's how we typically do image coding. With the wavelet transform, we have a global transform; it's ideally done on the entire image. Pankaj Topiwala: 07:17 And then you can do it multiple times, what are called multiple scales of the wavelet transform. So you take the various sub-bands that you create by doing the wavelet transform, the low-pass and high-pass, and do it again on the low-low pass, for multiple scales, typically about four or five scales in popular image codecs that use wavelets. But now in video, the novelty is that you don't have one frame. You have many, many frames, hundreds or thousands or more. And you have motion. Now, motion is something where you have pieces of the image that float around from one frame to another, and they float randomly. That is, it's not as if all of the motion is in one direction: some things move one way, some things move other ways, some things actually change orientation.
So, wavelet methods that try to deal with motion were not very successful. The best we tried to do was using motion compensated video you know, transformed. So doing wavelet transforms in the time domain as well as the spatial domain along the paths of motion vectors. But that was not very successful. And what I mean by stability is that as soon as you increase the motion, the codec breaks, whereas in video coding using block-based transforms and block-based motion estimation and compensation it doesn't break. It just degrades much more gracefully. Wavelet based codecs do not degrade gracefully in that regard. Pankaj Topiwala: 09:16 And so we of course, as a company we decided, well, if those are the facts on the ground. We're going to go with whichever way video coding is going and drop our initial entry point, namely wavelets, and go with the DCT. Now one important thing we found was that even in the DCT- ideas we learned in wavelets can be applied right to the DCT. And I don't know if you're familiar with this part of the story, but a wavelet transform can be decomposed using bits shifts and ads only using something called the lifting transform, at least a important wavelet transforms can. Now, it turns out that the DCT can also be decomposed using lifting transforms using only bit shifts and ads. And that is something that my company developed way back back in 1998 actually. Pankaj Topiwala: 10:18 And we showed that not only for DCT, but a large class of transforms called lab transforms, which included the block transforms, but in particular included more powerful transforms the importance of that in the story of video coding. Is that up until H.264, all the video codec. So H.261, MPEG-1, MPEG-2, all these video codecs used a floating point implementation of the discrete cosign transform and without requiring anybody to implement you know a full floating point transform to a very large number of decimal places. 
What they required instead was a minimum accuracy to the DCT, and that became something all codecs had to do: if you had an implementation of the DCT, it had to be accurate to the true floating point DCT up to a certain decimal point in the transform accuracy. Pankaj Topiwala: 11:27 With the advent of H.264, we decided right away that we were not going to do a floating point transform; we were going to do an integer transform. That decision was made even before I, before my company, joined the development of H.264/AVC. But they were using 32-bit transforms. We found that we could introduce 16-bit transforms, half the complexity, and that's half the complexity only in the linear dimension; when you think of it in two spatial dimensions, it grows more, so the reduction in complexity is not a factor of two but at least a factor of four, and much more than that; in fact, it's a little closer to exponential. The reality is that we were able to bring down the H.264 codec. Pankaj Topiwala: 12:20 In fact, the transform was the most complicated part of the entire codec. So if you had a 32-bit transform, the entire codec was at 32-bit technology, and it needed 32 bits at every sample to process in hardware or software. By changing the transform to 16 bits, we were able to bring the entire codec to a 16-bit implementation, which dramatically improved the hardware implementability of the entire codec without at all affecting the quality. So that was an important development that happened with AVC, and since then we've been working with only integer transforms. Mark Donnigan: 13:03 This technical history is really amazing to hear. I didn't actually know that. Dror, you probably knew that, but I didn't. Dror Gill: 13:13 Yeah, I mean, I knew about the transform, and the shift from a floating point to an integer transform.
But you know, I didn't know that. That's an incredible contribution, Pankaj. Pankaj Topiwala: 13:27 We like to say that we've saved the world billions of dollars in hardware implementations, and we've taken a small, you know, donation as a result of that, to survive as a small company. Dror Gill: 13:40 Yeah, that's great. And then from AVC you moved on, and you continued your involvement in the other standards that followed, right? Pankaj Topiwala: 13:47 In fact, we've been involved in standardization efforts now for almost 20 years. My first meeting was, I recall, in May of 2000; I went to an MPEG meeting in Geneva, and then shortly after that, in July, I went to an ITU VCEG meeting. VCEG is the Video Coding Experts Group of the ITU, and MPEG is the Moving Picture Experts Group of ISO. These two organizations were separately pursuing their own codecs at that time. Pankaj Topiwala: 14:21 ISO MPEG was working on MPEG-4, and ITU VCEG was working on H.263, and 263+, and 263++. Then finally they started a project called 263L, for long-term. And eventually it became clear to these two organizations that, look, it's silly to work on separate codecs. They had worked together once before, on MPEG-2, to develop a joint standard, and they decided to form a joint team, at that time called the Joint Video Team, JVT, to develop the H.264 AVC video codec, which was finally done in 2003. We participated fully in that, making many contributions, of course in the transform, but also in motion estimation and other aspects. So, for example, it might not be known that we also contributed the fast motion estimation that's now widely used in probably nearly all implementations of 264, and in 265, HEVC, as well.
But one of the important things that we can discuss is these technologies, although they all have the same overall structure, they have become much more complicated in terms of the processing that they do. And we can discuss that to some extent if you want? Dror Gill: 15:59 The compression factors, just keep increasing from generation to generation and you know, we're wondering what's the limit of that? Pankaj Topiwala: 16:07 That's of course a very good question and let me try to answer some of that. And in fact that discussion I don't think came up in the discussion you had with Gary Sullivan, which certainly could have but I don't recall it in that conversation. So let me try to give for your listeners who did not catch that or are not familiar with it. A little bit of the story. Pankaj Topiwala: 16:28 The first international standard was the ITU. H.261 standard dating roughly to 1988 and it was designed to do only about 15 to one to 20 to one compression. And it was used mainly for video conferencing. And at that time you'd be surprised from our point of view today, the size of the video being used was actually incredibly tiny about QCIP or 176 by 144 pixels. Video of that quality that was the best we could conceive. And we thought we were doing great. And doing 20 to one compression, wow! Recall by the way, that if you try to do a lossless compression of any natural signal, whether it's speech or audio or images or video you can't do better than about two to one or at most about two and a half to one. Pankaj Topiwala: 17:25 You cannot do, typically you cannot even do three to one and you definitely cannot do 10 to one. So a video codec that could do 20 to one compression was 10 times better than what you could do lossless, I'm sorry. So this is definitely lossy, but lossy with still a good quality so that you can use it. And so we thought we were really good. 
When MPEG-1 came along, in roughly 1992, we were aiming for 25 to one compression, and the application was the video compact disc, the VCD. With H.262, or MPEG-2, roughly 1994, we were looking to do about 35 to one compression, 30 to 35. And the main application was then DVD, and also broadcast television. At that point, broadcast television was ready to use it, at least in some segments. Pankaj Topiwala: 18:21 To try digital broadcasting in the United States took a while. But in any case, it could be used for broadcast television. And then from that point, with H.264 AVC in 2003, we jumped right away to more than 100 to one compression. This technology, at least on large format video, can be used to shrink the original size of a video by more than two orders of magnitude, which was absolutely stunning. No other natural signal, not speech, not broadband audio, not images, could be compressed that much and still give you high subjective quality. But video can, because it is so redundant, and because we don't fully understand yet how to appreciate video subjectively. We've been trying things, you know, ad hoc. And so the entire development of video coding has really been by ad hoc methods, to see what quality we can get. Pankaj Topiwala: 19:27 And by quality, we've been using two metrics. One is simply a mean square error based metric called peak signal to noise ratio, or PSNR, and that has been the industry standard for the last 35 years. The other method is simply to have people look at the video, what we call subjective rating of the video. Now, it's hard to get a subjective rating that's reliable. You have to do a lot of standardization, get a lot of different people, and take mean opinion scores and things like that. That's expensive. Whereas PSNR is something you can calculate on a computer. And so, in the development of video coding over 35 years, people have mostly relied on one objective quality metric, PSNR. And it is good, but not great.
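As Pankaj notes, PSNR is simple enough to compute on a computer. A minimal sketch of the standard formula, 10 * log10(MAX^2 / MSE), with MAX = 255 for 8-bit video; the tiny "frame" values below are purely illustrative:

```python
import math

def psnr(reference, distorted, max_value=255):
    """Peak signal-to-noise ratio in dB between two equal-length
    sequences of pixel values (e.g. one frame, flattened)."""
    assert len(reference) == len(distorted)
    mse = sum((r - d) ** 2 for r, d in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return float("inf")  # identical frames
    return 10 * math.log10(max_value ** 2 / mse)

# Toy 8-bit "frame" and a lightly distorted copy of it
ref = [16, 32, 64, 128, 200, 255, 90, 45]
dec = [17, 30, 66, 126, 201, 254, 88, 47]
print(round(psnr(ref, dec), 2))  # about 43.5 dB for this toy pair
```

The formula's weakness, which the conversation goes on to discuss, is that it weights every pixel error equally, while human vision does not.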
And it's been known right from the beginning that it was not perfect, not perfectly correlated to video quality, and yet we didn't have anything better anyway. Pankaj Topiwala: 20:32 To finish the story of the video codecs: with H.265 HEVC in 2013, we were now able to do 300 to one, up to 500 to one compression on, let's say, 4K. And with VVC, we have truly entered a new realm where we can do up to 1000 to one compression, which is three full orders of magnitude reduction of the original size. If the original size is, say, 10 gigabits, we can bring that down to 10 megabits. And that's unbelievable. And so video compression truly is a remarkable technology, and it's a marvel to look at. Of course, it's not magic. It comes with an awful lot of processing, and an awful lot of smarts have gone into it. That's right. Mark Donnigan: 21:24 You know, Pankaj, that is an amazing overview, and to hear that VVC is going to be a thousand to one compression benefit. Wow. That's incredible! Pankaj Topiwala: 21:37 I think we should of course temper that with, you know, what people will use in applications. Correct. They may not use the full power of VVC and may not crank it to that level. Sure, sure. I can certainly tell you that we and many other companies have created bitstreams with 1000 to one or more compression and seen video quality that we thought was usable. Mark Donnigan: 22:07 One of the topics that has come to light recently and been talked about quite a bit was initially raised by Dave Ronca, who used to lead encoding at Netflix for like 10 years. In fact, I think he really built that department, the encoding team there, and he is now at Facebook. And he wrote a LinkedIn post that was really fascinating.
And what he was pointing out in this post was that, as each generation of codec gets more efficient, as you just explained and gave us an overview of, there's a problem coming with that: each generation of codec is also getting even more complex. And, you know, in some settings, and I suppose Netflix is maybe an example, it's probably not accurate to say they have unlimited compute, but their application is obviously very different in terms of how they can operate their encoding function compared to someone who's doing live streaming, for example, or live broadcast. Maybe you can share with us as well: through the generational growth of these codecs, how have the compute requirements also grown? Have they grown in sort of a linear way along with the compression efficiency? Or are you seeing some issues where, you know, yes, we can get a thousand to one, but our compute efficiency is getting to where we could be hitting a wall? Pankaj Topiwala: 23:46 You asked a good question. Has the complexity only scaled linearly with the compression ratio? And the answer is no, not at all. Complexity has outpaced the compression ratio. Even though the compression ratio is tremendous, the complexity is much, much higher, and always has been, at every step. First of all, there's a big difference in doing the research, the research phase in the development of a technology like VVC, where we were using a standardized reference model that the committee develops along the way, which is not at all optimized. But that's what we all use, because we share a common code base, and we make any new proposals based on modifying that code base. Now, that code base, along the entire development chain, has always been very, very slow.
Pankaj Topiwala: 24:42 And true implementations are anywhere from 100 to 500 times more efficient in complexity than the reference software. So right away, you can have the reference software for, say, VVC, and somebody developing an implementation that's a real product can be at least 100 times more efficient than the reference software, maybe even more. So there's a big difference. You know, when we're developing a technology, it is very hard to predict what implementers will actually come up with later. Of course, the only way they can do that is if companies actually invest the time and energy, right as they're developing the standard, to build prototype software and hardware, and get a good idea of what it is really going to cost when they finish. So just to give you an idea, between H.264 and Pankaj Topiwala: 25:38 H.265: H.264 only had two transforms, of size four by four and eight by eight. And these were integer transforms, which are only bit shifts and adds; they took no multiplies and no divides. The division, in fact, got incorporated into the quantizer, and as a result, it was very, very fast. Moreover, if you had to make decisions such as inter versus intra mode, there were only about eight or 10 intra modes in H.264. By contrast, in H.265 we have not two transform sizes, four by four and eight by eight, but in fact sizes of four, eight, 16 and 32. So we have much larger transforms, and instead of eight or 10 intra modes, we jumped up to 35 intra modes.
It's going to be finished in July 2020. When the dust finally settles, maybe four or five years from now, it will prove to be at least three or four times more complex than HEVC for the encoder; the decoder, not that much. Luckily, we're able to build decoders that are much more linear than the encoder. Pankaj Topiwala: 27:37 So I guess I should qualify this discussion by saying the complexity growth has mostly been in the encoder. The decoder has been much more reasonable. Remember, we are always relying on this principle of ever-increasing compute capability, you know, a factor of two every 18 months. We've long heard about all of this, and it is true: Moore's law. If we did not have that, none of this could have happened. None of these high complexity codecs would ever have been developed, because nobody would ever be able to implement them. But because of Moore's law, we can confidently say that even if we put out this very highly complex VVC standard, someday, in the not too distant future, people will be able to implement it in hardware. Now, you also asked a very good question earlier: is there a limit to how much we can compress? Pankaj Topiwala: 28:34 And relatedly, one can ask, is there a limit to Moore's law? We've heard a lot about that. Maybe, finally, after decades of Moore's law actually being realized, we are now coming to quantum mechanical limits on how much we can miniaturize electronics before we actually have to go to quantum computing, which is a totally different approach to computing, because trying to go to smaller die sizes will make them unstable quantum mechanically. Now, it appears that we may be hitting a wall eventually; we haven't hit it yet, but we may be close to a physical limit in die size.
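For a concrete sense of scale on these compression limits, the generational ratios quoted earlier in the conversation translate into bitrates roughly as follows. This is a back-of-the-envelope sketch assuming uncompressed 8-bit 4:4:4 4K at 60 fps, applied to every generation purely for illustration (the early codecs of course never ran on 4K):

```python
# Raw bitrate of 4K video: width x height x bits-per-pixel x frames-per-second
width, height = 3840, 2160
bits_per_pixel = 24          # 8 bits per channel, three channels
fps = 60

raw_bps = width * height * bits_per_pixel * fps
print(f"raw: {raw_bps / 1e9:.1f} Gbit/s")  # roughly 12 Gbit/s

# Rough per-generation compression ratios mentioned in the conversation
for codec, ratio in [("H.261", 20), ("AVC", 100), ("HEVC", 500), ("VVC", 1000)]:
    print(f"{codec}: {raw_bps / ratio / 1e6:.0f} Mbit/s")
```

At 1000 to one, the roughly 12 Gbit/s raw signal lands near 12 Mbit/s, which matches Pankaj's "10 gigabits down to 10 megabits" figure.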
And from the observations that I've been making, at least, it seems possible to me that, even without a complexity limit, we are also reaching a limit to how much we can compress video and still obtain reasonable, or rather high, quality. Pankaj Topiwala: 29:46 But we don't know the answer to that. And in fact, there are many, many aspects of this that we simply don't know. For example, the only real arbiter of video quality is subjective testing. Nobody has come up with an objective video quality metric that we can rely on. PSNR is not it. When push comes to shove, nobody in this industry actually relies on PSNR; they actually do subjective testing. So in that scenario, we don't know the limits of visual quality, because we don't understand human vision. You know, we try, but human vision is so complicated that nobody can understand its impact on video quality to any very significant extent. Now, in fact, the first baby steps to capture, not explicitly but implicitly, subjective human video quality assessment in a neural model are just now being taken, in the last couple of years. In fact, my company has been involved in getting into that, because I think that's a very exciting area. Dror Gill: 30:57 I tend to agree that modeling human perception with a neural network seems more natural than, you know, just regular formulas and algorithms, which are linear. Now, I wanted to ask you about this process of creating the codecs. It's very important to have standards, so you encode a video once, and then you can play it anywhere, anytime, on any device. And for this, the encoder and decoder need to agree on exactly the format of the video. And traditionally, as you pointed out with the history of development, video codecs have been developed by standardization bodies, MPEG and ITU, first separately.
And then they joined forces to develop the newest video standards. But recently, we're seeing another approach to developing codecs, which is by open sourcing them. Dror Gill: 31:58 Google started with an open source codec called VP9, which they first developed internally. Then they open sourced it, and they use it widely across their services, especially in YouTube. And then they joined forces with, I think, the largest companies in the world, not just in video but in general, you know, those large internet giants such as Amazon and Facebook and Netflix, and even Microsoft, Apple and Intel, who have joined together in the Alliance for Open Media to jointly create another open codec called AV1. And this is a completely parallel process to the MPEG codec development process. And the question is, do you think that this was kind of a one time effort to try and develop a royalty free codec, or is this something that will continue? And how do you think the adoption of the open source codecs versus the committee defined codecs will play out in the market? Pankaj Topiwala: 33:17 That's of course a large topic on its own. And I should mention that there have been a number of discussions about that topic, in particular at the SPIE conference last summer in San Diego, where we had a panel discussion of experts in video compression to discuss exactly that. And one of the things we should provide to your listeners is a link to the captured video of that panel discussion, where that topic is discussed to some significant extent. It's on YouTube, so we can provide a link to that. My answer? Of course, none of us knows the future, but we're going to take our best guesses. I believe that this trend will continue and is a new factor in the landscape of video compression development.
Pankaj Topiwala: 34:10 But we should also point out that the domain of preponderant use of these codecs is going to be different than for our traditional codecs. Our traditional codecs, such as H.264 and 265, were initially developed primarily for the broadcast market, or for DVD and Blu-ray, whereas these new codecs from AOM are primarily being developed for the streaming media industry: for the likes of Netflix and Amazon, and for YouTube, where they put up billions of user generated videos. For the streaming application, the decoder is almost always a software decoder. That means they can update that decoder any time they do a software update, so they're not limited by a hardware development cycle. Of course, hardware companies are also building AV1. Pankaj Topiwala: 35:13 And the point of that would be to try to put it into handheld devices like laptops, tablets, and especially smartphones. But to try to get AV1, not only as a decoder but also as an encoder, into a smartphone is going to be quite complicated. And the first few codecs that come out in hardware will be of much lower quality, for example, comparable to AVC, and not even the quality of HEVC, when they first start out. So for the hardware implementations of AV1 that work in real time, it's going to take a while for them to catch up to the quality that AV1 can offer. But for streaming, we can decode these streams reasonably well in software or in firmware, or in GPU for example, and the net result is that these companies can already start streaming. Pankaj Topiwala: 36:14 So in fact, Google is already streaming some test streams, maybe more now, in its cloud-based YouTube application, and companies like Cisco are testing it already, even for their WebEx video communication platform.
Although the quality will not then be anything like the full capability of AV1, it'll be at a much reduced level, but it'll be this open source and notionally, you know, royalty free video codec. Dror Gill: 36:50 Notionally, yeah. Because they always try to do this dance, and every algorithm that they try to put into the standard is being scrutinized, and they check if there are any patents around it, so they can try and keep this notion of royalty-free around the codec. Because definitely the codec is open source and royalty free. Dror Gill: 37:14 I think that is a big question. So much IP has gone into the development of the different MPEG standards, and we know it has caused issues. It went pretty smoothly with AVC, with MPEG-LA that had kind of a single point of contact for licensing all the essential patents, but with HEVC, it hasn't gone very well from the beginning. But still, there is a lot of IP there. So the question is, is it even possible to have a truly royalty free codec that can be competitive in compression efficiency and performance with a codec developed by the standards committee? Pankaj Topiwala: 37:50 I'll give you a two part answer. One: the landscape of patents in the field of video compression I would describe as being, you know, very, very spaghetti like, and patents date back to other patents. Pankaj Topiwala: 38:09 And they cover most of the topics and most of the tools used in video compression. And by the way, we've looked at AV1, and AV1 is not that different from all the other standards that we have, H.265 or VVC. There are some things that are different, but by and large, it resembles the existing standards. So can it be that this animal is totally patent free? No, it cannot be patent free. But patent free is not the same as royalty free. There's no question that AV1 has many, many patents, probably hundreds of patents, that reach into it.
The question is whether the people developing and practicing AV1 own all of those patents. That is, of course, a much larger question. Pankaj Topiwala: 39:07 And in fact, there has been a recent challenge to that: a group has even stood up to proclaim that they have essential IP in AV1. The net reaction from the AOM has been to develop a legal defense fund, so that they're not going to budge in terms of their royalty free model. If they do, it would kill the whole project, because their main thesis is that this is a royalty free thing: use it and go ahead. Now, the legal defense fund protects the members of that Alliance jointly. It's not as if the Alliance is going to indemnify you against any possible attack on IP. They can't do that, because nobody can predict, you know, where somebody's IP is. The world is so large, and there are so many patents, that we're talking not even hundreds and thousands, but tens of thousands of patents at least. Pankaj Topiwala: 40:08 So nobody in the world has ever reviewed all of those patents; it's not possible. And the net result is that nobody can know for sure what technology might have been patented by third parties. But the point is that such a large number of powerful companies are also the main users of this technology, companies like Google and Apple and Microsoft and Netflix and Amazon and Facebook and whatnot. And Samsung, by the way, has joined the Alliance. These companies are so powerful that, you know, it would be hard to challenge them. And so, in practice, the point is they can project a royalty-free technology, because it would be hard for anybody to challenge it. And so that's the reality on the ground. Pankaj Topiwala: 41:03 So at the moment, it is succeeding as a royalty free project. I should also point out: if you want to use this, not join the Alliance, but just want to be a user.
Even just to use it, you already have to offer any IP you have in this technology to the Alliance. So if tens of thousands, and eventually millions, of users around the world, including tens of thousands of companies, start to use this technology, they will all have automatically yielded any IP they have in AV1 to the Alliance. Dror Gill: 41:44 Wow. That's really fascinating. I mean, first, the distinction you made between royalty free and patent free: the AOM can keep this technology royalty free even if it's not patent free, because they don't charge royalties, and they can help with the legal defense fund against patent claims and still keep it royalty free. And second is the fact that when you use this technology, you are giving up any IP claims against the creators of the technology, which means that any party who wants to make IP claims against the AV1 encoder cannot use it in any form or shape. Pankaj Topiwala: 42:25 That's at least my understanding. And I've tried to look at it, but of course, I'm not a lawyer, and you have to take that as just the opinion of a video coding expert rather than a lawyer dissecting the legalities of this. But be that as it may, my understanding is that any user would have to yield any IP they have in the standard to the Alliance. And the net result will be, if this technology truly does get widely used, more IP than just that from the Alliance members will have been folded into it, so that eventually it would be hard for anybody to challenge this. Mark Donnigan: 43:09 Pankaj, what does this mean for the development of video technology? So much of the technology has been enabled by the financial incentive of small groups of people, you know, or medium sized groups of people forming together, building a company usually, hiring other experts, and being able to derive some economic benefit from the research and the work and, you know, the effort that's put in.
If all of this sort of consolidates to a handful, or a couple of handfuls, of, you know, very, very large companies, does that... I guess I'm asking, from your view, will video encoding technology development and advancement proliferate? Will it sort of stay static? Because basically all these companies will hire or acquire, you know, all the experts, and now everybody works for Google and Facebook and Netflix. Or do you think it will ultimately decline? Because that's something that comes to mind here: you know, if the economic incentives sort of go away, well, people aren't going to work for free! Pankaj Topiwala: 44:29 So that's, of course, another question, and one relevant, in fact, to many of us working in video compression right now, including my company. And I faced this directly back in the days of MPEG-2. There was a two and a half dollar ($2.50) per unit license fee for using MPEG-2. That created billions of dollars in licensing. In fact, the patent pool, MPEG-LA, itself made billions of dollars, even though they took only 10% of the proceeds, you know, huge amounts of money. With the advent of H.264 AVC, the patent license went from two and a half dollars down to 25 cents a unit. And now with HEVC, it's a little bit less than that per unit. Of course, the number of units has grown exponentially, but then the big companies don't continue to pay per unit anymore. Pankaj Topiwala: 45:29 They just pay a yearly cap, for example, 5 million or 10 million, which to these big companies is peanuts. So there's a yearly cap for the big companies that have, you know, hundreds of millions of units. Imagine the number of Microsoft Windows installations that are out there, or the number of Google Chrome browsers. And if you have a codec embedded in the browser, there are hundreds of millions of them, if not billions.
And so they just pay a cap and they're done with it. But even then, there was, up till now, an incentive for smart engineers to develop exciting new ideas in future video coding. That has been the story up till now. But if it happens that this AOM model, with AV1 and then AV2, really becomes the dominant codec and takes over the market, then there will be no incentive for researchers to devote any time and energy. Pankaj Topiwala: 46:32 Certainly my company, for example, can't afford to, you know, just twiddle thumbs and create technologies for which there is absolutely no possibility of a royalty stream. So we cannot be in the business of developing video coding when video coding doesn't pay. The only thing that makes money is applications, for example, a streaming application or some other such thing. And so Netflix and Google and Amazon will be streaming video, and they'll charge you per stream, but not for the codec. So that's an interesting thing, and it certainly affects the future development of video. It's clear to me it has a negative impact on the research that we've got going. I can't expect that Google and Amazon and Microsoft are going to continue to devote the same energy to developing future compression technologies in their royalty free environment that companies have in the open standards development environment. Pankaj Topiwala: 47:34 It's hard for me to believe that they will devote that much energy. They'll devote energy, but it will not be at the same level. For example, developing a video standard such as HEVC took up to 10 years of development by on the order of, let's say, four to 500 experts from around the world, meeting four times a year for 10 years. Mark Donnigan: 48:03 That is so critical. I want you to repeat that again. Pankaj Topiwala: 48:07 Well, I mean, very clearly, we've been putting out a video codec roughly on a schedule of once every 10 years. MPEG-2 was 1994.
AVC was 2003 and 2004, and then HEVC in 2013. Those were roughly 10 years apart. But with VVC, we've accelerated the schedule, to put one out in seven years instead of 10. But even then, you should realize that we had been working right since HEVC was done. Pankaj Topiwala: 48:39 We've been working all this time to develop VVC, and so on the order of 500 experts from around the world have met four times a year, at international locations, spending on the order of $100 million per meeting. You know, billions of dollars have been spent by industry to create these standards, many billions, and it can't happen without that. It's hard for me to believe that companies like Microsoft, Google, and whatnot are going to devote billions to develop their next incremental, you know, AV1, AV2, AV3s. But maybe they will. It's just that if there's no royalty stream coming from the codec itself, only the application, then the incentive to create even better technology, supposing they start dominating, will not be there. So there really is a financial issue in this, and it's at play right now. Dror Gill: 49:36 Yeah, I find it really fascinating. And of course, Mark and I are not lawyers, but all of this, you know, royalty free versus committee developed, open source versus a standard, those large companies whose dominance some people fear, and not only in video codec development but in many other areas, versus dozens of companies and hundreds of engineers working for seven or 10 years on a codec. These are really different approaches, different methods of development, eventually to approach the exact same problem of video compression. And how this turns out, I mean, we cannot forecast for sure, but it will be very interesting, especially next year, in 2020, when VVC is ratified, and at around the same time EVC is ratified, another codec from the MPEG committee.
Dror Gill: 50:43 And then, once AV1 starts hitting the market, we'll hear all the discussions of AV2. So it's going to be really interesting and fascinating to follow, and we promise to bring you all the updates here on The Video Insiders. So Pankaj, I really want to thank you. This has been a fascinating discussion, with very interesting insights into the world of codec development and compression and wavelets and DCT and all of those topics, and the history and the future. So thank you very much for joining us today on The Video Insiders. Pankaj Topiwala: 51:25 It's been my pleasure, Mark and Dror, and I look forward to interacting in the future. Hope this is useful for your audience. If I can give you one parting thought, let me give this... Pankaj Topiwala: 51:40 H.264 AVC was developed in 2003 and 2004. That is, you know, some 16 or 17 years ago, and it is now close to being nearly royalty-free itself. And if you look at the market share of video codecs currently being used in the market, for example even in streaming, AVC dominates that market completely. Even though VP8 and VP9 and VP10 were introduced, and now AV1, none of those have any sizeable market share. AVC currently holds 70 to 80% of that marketplace right now, and it fully dominates broadcast, where those other codecs are not even in play. So there, 16, 17 years later, it is still the dominant codec, even over HEVC, which, by the way, is also taking an uptick in the last several years. So the standardized codecs developed by ITU and MPEG are not dead. They may just take a little longer to emerge as dominant forces. Mark Donnigan: 52:51 That's a great parting thought. Thanks for sharing that. What an engaging episode, Dror. Yeah. Yeah. Really interesting. I learned so much. I got a DCT primer. I mean, that in and of itself was amazing. Dror Gill: 53:08 Yeah. Yeah. Thank you. Mark Donnigan: 53:11 Yeah, amazing, Pankaj. Okay, well, good.
Well, thanks again for listening to The Video Insiders, and as always, if you would like to come on the show, we would love to have you. Just send us an email. The email address is thevideoinsiders@beamr.com, and Dror or myself will follow up with you; we'd love to hear what you're doing. We're always interested in talking to video experts who are involved in really every area of video distribution. So it's not only encoding and not only codecs; whatever you're doing, tell us about it. And until next time, what do we say, Dror? Happy encoding! Thanks everyone.

The Video Insiders
Overcoming innovation hurdles: a conversation with Unified Patents.

Play Episode Listen Later Jan 8, 2020 34:50


Learn about Unified Patents here
Check out Unified Patents' Objective Patent Landscape (OPAL) tool
Read the independent economic study for HEVC royalties
Shawn Ambwani LinkedIn profile
Related episode: VVC, HEVC & other MPEG codec standards
The Video Insiders LinkedIn Group is where thousands of your peers are discussing the latest video technology news and sharing best practices. Click here to join
--------------------------------------
Would you like to be a guest on the show? Email: thevideoinsiders@beamr.com
Learn more about Beamr
--------------------------------------
TRANSCRIPT (edited slightly for readability)
Narrator: 00:00 The Video Insiders is the show that makes sense of all that is happening in the world of online video, as seen through the eyes of a second generation codec nerd and a marketing guy who knows what I-frames and macroblocks are. Here are your hosts, Mark Donnigan and Dror Gill. Mark Donnigan: 00:19 Well, welcome back to The Video Insiders. Dror, how are you doing today? I'm doing great. How are you, Mark? I am doing awesome, as always. I am super pleased to welcome Shawn Ambwani, who is co-founder of Unified Patents. Shawn is going to tell us all about what Unified Patents does, and we're going to dive into, you know, just a really tremendous discussion. But Shawn, welcome to the podcast! Shawn Ambwani: 00:46 Hey guys. Thanks, Mark. Thanks for allowing me to participate on your wonderful podcast. I look at this as similar to 'All Things Considered' and 'How I Built This', two of my favorite podcasts. Mark Donnigan: 01:01 Those are awesome podcasts, by the way. What an honor! Yeah. Wow. That's the level I expect you guys to be at in traffic very shortly. That's right. Well, we hope so too. Well, why don't you introduce yourself, you know, and give us a quick snapshot of your background, and then let's hear about what Unified Patents is doing.
Shawn Ambwani: 01:23 It's kind of a, I have an interesting, I mean, some might say not so interesting, but I think it's an interesting background related to this area, since, you know, the first startup that I did and the second one were all related to MPEG4. So I co-founded a company called Envivio, which way back when was actually one of the original MPEG4 companies, when they just had simple profile, actually out there doing encoders and decoders. And then I went to a Korean company called NexStreaming, which actually still exists, which is doing encoders as well, but more for the mobile space, and decoders. So it's an area I'm quite familiar with. I wasn't really being an attorney back then. Now I'm kind of more of an attorney than I was back then, but I tried to avoid being an attorney as much as possible in general.

Shawn Ambwani: 02:16 And basically I helped co-found a company called Unified Patents. And what Unified Patents does is it gets contributions from member companies, as well as it allows small companies to join for free, and they participate in joining what we call zones. And these different zones are intended to protect against what we consider unsubstantiated or invalid patent assertions. And the goal of these zones is to deter those from occurring in the first place. So if you imagine the kind of a technology area, let it be content or let it be video codec in this case or other things, as having a bunch of companies that have a common interest in maintaining, you know, patents, and ensuring patents that are asserted in that space are valid, which means that no one invented the idea beforehand. And also that it's fairly priced, and, you know, people are explaining the rationale behind what they're doing, and they're not basically just attempting to get people to settle out, not because the assertions are valid or good, but simply because the cost of litigation is so high when it comes to patents.
Shawn Ambwani: 03:35 And we want to deter that type of activity, because there's been a lot of investment in that activity so far. In fact, most litigations are by NPEs. And so Unified started by doing those zones, and, and we have a bunch of them now. We just launched an open source zone, in fact, with, you know, the Linux Foundation and OIN and IBM and others, and the video codec zone was something that we were thinking about for a long time. It's something that I'm very familiar with from my past dealings with MPEG LA and other pools. And it was a big issue, I think. And it has been a constant issue, which is: how do you deal with multiple pools or multiple people asking for money in a standard? How do you deal with the pricing of it? Especially if you're smaller entities and you don't have the information that maybe larger companies might have.

Shawn Ambwani: 04:26 How do you deal with that, and how do you deal with all the invalid assertions that are being done, or declarations that occur in this area? How do you figure out who you have to pay, and how much you have to pay? All these add a level of complexity to deploying these standards, which makes adoption harder and creates the uncertainty that causes people to go to proprietary solutions, which I think is a negative in the end. So that's why Unified Patents really created this area and created the video codec zone. And basically we've been pretty, I think, successful so far, now actually going through and doing each one of the things we said we were going to do.

Dror Gill: 05:06 So when you set up a zone and want to start finding those patents that may be invalid, how do you go about doing that? What is the process?

Shawn Ambwani: 05:18 Yeah, so I mean, there's two major things in the SEP zones. It's not just about finding invalid patents, although I can tell you it's relatively easy to find invalid patents in any of these zones.
That's not a difficulty. The hard part is figuring out which ones to go to, or which ones are going to be the most interesting to go after. And that takes a lot of art, essentially: identifying them, finding out good prior art that we feel comfortable with, hiring good counsel. There's all kinds of weighing mechanisms that go into it: who the entity is, how it came about, how old the patents are, where it came from. All of these variables go into that kind of equation when we decide. What's kind of unique about the way we work is we work independently of our members, so our members are funding these activities, and some join for free.

Shawn Ambwani: 06:11 So we have a number of members in the video codec zone, and we use all this money and information in our activities to basically go back and decide what to do, with the objective of deterring, you know, what we consider bad assertions in this space. And then that's one part of it. The other part of it is that SEPs are all about, you know, an area called FRAND: fair, reasonable, non-discriminatory. And part of all of that involves negotiation. And so what we provide are tools to allow companies to negotiate, we think, in a fair and more transparent way with licensors as to pricing, but also explaining why the pricing is the way it is. Because one of the problems that we've had in the big picture is that a lot of these licensors have been asking for money, whether in their own pools, whether outside of pools, whatever.

Shawn Ambwani: 07:12 But no one can really explain why the price is what it is. And I think that leads a lot of people to just stop paying, or stop wanting to get into licensing discussions. And that's not beneficial for the market. And so by explaining how the price comes out the way it is, and providing a very, we consider, solid methodology for it, it allows our members, but also licensors, to better understand who owns what and how much value is in the standard.
So, what they should reasonably expect to get for that technology, and how much licensees reasonably should expect to pay in order to deploy the technology.

Mark Donnigan: 07:55 Now my question, you know, Shawn, is when you are getting into these conversations with the parties or party that, you know, owns this IP, and I'm speaking more around sort of the pricing and the model and that sort of thing. Are you then... is that information available to your members, or is it more that you're sort of helping facilitate, helping bring some rationality, you know, so that then that body can turn around and make public: "Hey, great news! We've decided that all digitally distributed content doesn't carry, you know, a royalty cost." I guess what my question is, what exactly is, is your role then in informing the market?

Shawn Ambwani: 08:40 I think that, well, I mean, there's a number of things to talk about, but what's I think most important is that we, you know, we don't know necessarily what the right price is. We hired an outside economist to look into that, and he came back with a pricing range in, you know, a report that we gave the highlights to, and there was some press over it, and it's on our website, but you can also look at it through a number of articles. And basically he came back with a price of between 8 cents and 28 cents, I believe, if I'm accurate, is what he believed the estimate to be for the value of the technology, including everything. And it ranged based on, I think, the device, and, like, other factors and stuff like that. Now, that high-level information we provided publicly, and in fact we provided the information on who made the report, when it was created, and what it was based on.

Shawn Ambwani: 09:38 And we even provided kind of the overall methodology of how it was done, which is basically, at a very high level:
They used MPEG LA's AVC license as the starting point, or the foundation, for deciding what HEVC (in this case, which is what he was looking at) pricing should be, based on his expert analysis. And then he modified that based on switching costs, based on the cost of bandwidth, the cost of storage, and quality, and other factors, basically, that are valuable. So, that's where we went. Now, what's important to understand is that we published that information, so anyone could take a look at it at a high level. And the methodology pretty much tells you the roadmap of where we started and how we ended up where we are. The other part is: how do you decide who you have to pay, and how much each person gets? Even assuming that you figure out that, let's say, it's 25 cents that you think the royalty rate should be. And I'm not saying that's the number; everyone can decide on whatever number they feel comfortable with. Our expert created this report and we published it. Other people can create other reports, and I'm sure they have their own kind of versions. But what's important for us is that, you know, people should explain why they came out with their pricing. And unfortunately, in pools and licensing organizations in general, that just doesn't happen.

Dror Gill: 11:05 So basically you're finding an economic rationale behind a certain price for this technology, in this case HEVC. And now companies who want to use HEVC, how do they use this number? Because they have your number, which is the total, and then they have the royalty rates that, you know, certain patent pools are asking, and they add up to a different number that could be higher. So do they just, you know, divide the number that they think is the right one among the different patent pools and pay them the amount they think they should pay, or do they just use it as a negotiating tool when they talk to them, and negotiate the actual rates that they will have to pay?
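The methodology Shawn describes above, starting from MPEG LA's AVC rate and adjusting for switching costs, bandwidth, storage, and quality, can be sketched as simple arithmetic. Everything below (the function, the factor names, and every number) is a hypothetical illustration, not the economist's actual model or figures:

```python
# Rough sketch of the pricing methodology described in the interview: start
# from an established AVC per-unit royalty and scale it by the estimated extra
# value HEVC delivers (bandwidth/storage savings, quality), minus a discount
# for switching costs. All numbers are hypothetical placeholders.

def adjusted_royalty(avc_rate, value_multipliers, switching_cost_discount):
    """Estimate a per-unit HEVC royalty from an AVC baseline.

    avc_rate: baseline per-unit royalty (dollars) for the prior codec
    value_multipliers: dict of factor name -> relative value uplift
        (e.g. 1.3 means that factor makes the new codec worth 30% more)
    switching_cost_discount: fraction (0..1) shaved off for adoption friction
    """
    rate = avc_rate
    for multiplier in value_multipliers.values():
        rate *= multiplier
    return rate * (1.0 - switching_cost_discount)

# Hypothetical inputs: a $0.20 AVC baseline, uplifts for bandwidth and
# storage savings and for quality, and a 20% discount for switching costs.
estimate = adjusted_royalty(
    avc_rate=0.20,
    value_multipliers={"bandwidth": 1.25, "storage": 1.10, "quality": 1.05},
    switching_cost_discount=0.20,
)
print(f"${estimate:.3f} per unit")  # about $0.231 with these placeholder inputs
```

The point of the sketch is only the shape of the reasoning: a defensible price is a baseline plus named, quantified adjustments, so each input can be argued about separately in a FRAND negotiation.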
Shawn Ambwani: 11:52 By providing a lot of this information, some of it publicly, like the economic report in some format, the hope is that smaller entities, instead of rolling over when licensing people come by and say, "Hey, take it or leave it," really have an ability to make a fair response, a good-faith response, with information that allows them to then basically justify why they came up with a price, and really push back and say, "Listen, you know, this is what my methodology came out to." Now, it could be right, could be wrong. You know, in the end, in FRAND negotiations, I have to make a good-faith offer. That's really the intent. So that's an important aspect of pushing back on this kind of, we think, lack of information that is occurring in the marketplace, and more fragmentation. And I think they're all interrelated, because the less information you have, the more fragmentation. 'Cause if everyone could agree on a price, and everyone agreed that this is the fair value for the technology, there really wouldn't be multiple pools, in my opinion, or multiple licensors, because everyone would know what the number is. And so why would you separate?

Dror Gill: 13:07 But basically you're saying that even if a patent pool sets the royalty rates, and those royalty rates in some cases are public, at least for some of the patent pools, this is not what a licensee would pay. This is just kind of a starting point for a negotiation, and you're providing tools for this type of negotiation.

Shawn Ambwani: 13:23 We also think that validity is a big issue, because none of these entities look at validity when they're incorporating patents into their pools or into their licensing. It's really up to the licensees, or the people who are potentially taking the license, to have the responsibility to go out and figure that out, which can be very, very costly.

Dror Gill: 13:44 You assume they're valid, right? If they're licensing patents to you, you assume that they're licensing valid patents. Right?
These are kind of, you know, respectable patent holders and patent pools. Why would they license something that's not valid?

Shawn Ambwani: 13:57 I mean, it's a great point. I mean, the argument would be that they want to license patents.

Mark Donnigan: 14:03 That's their business at the end of the day. Yeah.

Shawn Ambwani: 14:08 Right. So, you know, if you had a car and you're trying to sell a car, you're going to accentuate the good things about the car, not that it's a rebuilt one or something like that, or, you know, that it's been in a crash or accident. Like, you're going to show what you want to show, right? And that's natural in any of these cases. The unfortunate fact is that it's very costly to figure out that stuff, and there's really no organization... You'd think a licensing organization like MPEG LA or others, and I'm not saying MPEG LA is doing a bad job necessarily, I'm just pointing them out as an example, would do a better job of vetting to some degree on that type of activity. But they don't, and I think there's a number of reasons.

Mark Donnigan: 14:53 Why would they want to do that? I almost liken this to the 500-channel cable bundle, of which there's about 15 high-quality channels, and there's 485 that are anywhere from just, you know, not relevant, not interesting, to, you know, even lower quality than that. But, you know, hey, I got a 500-channel bundle, right? So I feel like, wow, it must be worth $100 a month, you know, or whatever.

Shawn Ambwani: 15:23 The idea that licensing organizations like MPEG LA or (HEVC) Advance or other ones like that aren't doing it to the benefit of their licensors just seems ridiculous to me. I mean, the people on their management, and the people who are actually owning that organization... typically it's managed and owned, and administrative fees are paid, to licensors. And traditionally the money flows one way, from licensees to licensors.
It's for the benefit of the licensors, and the rules that they put in are essentially to make sure that those guys are protected. They have no incentive, in general, in saying people's patents are invalid. That's just a bad fact pattern for them, if basically they get back and say, "Hey, listen, this patent... yeah, no, it's bad."

Mark Donnigan: 16:16 Exactly. So, in that context, it completely makes sense that they don't vet, you know, at the level that you are, and why, you know, Unified Patents needs to exist: because we need this sort of independent third party, I guess, that's out there doing this work. Now, Shawn, one of the things that I noticed is you're acting both against NPEs, so non-practicing entities, and against SEPs, standard essential patents. What are the issues with SEPs?

Shawn Ambwani: 16:51 Well, I mean, the general assumption has been, and I don't know where this assumption came from, that standard essential patents, or patents that people declare to be standard essential, are more likely to be valid than other patents. And in the real world, where there's litigation and there's challenges and things get checked out or vetted, essentially adversarially, the reality is that standard essential patents, in all the studies that have been done, fare far worse than normal patents do on average. And you know, it's not shocking, actually, when you think about it. Obviously there's a lot of self-selection here, but part of the reason why is, you know, when you're submitting into pools, or when you're getting these patents as part of a standardization body or doing other activities, there's a lot of other people involved, and it's usually built on other ideas that people have had in the past.

Shawn Ambwani: 17:59 And it's not surprising that a lot of these patents have underlying ideas that had been done in the past, or that other people had brought up previously.
Sometimes they weren't accepted, sometimes they were, or sometimes they were put on hold. Who knows? But there's a lot of prior art, oftentimes, in these areas. These aren't open fields; these aren't brand-new innovations that typically come up. And so that's not surprising. Now, you know, there's also a general belief that standard essential patents are more valuable. And I think, you know, that's a pretty, I would say, you know, I dunno if it's absolutely valid, but it's not unreasonable to believe that if you declare a patent as standard essential, and you compare the average patent to that patent, it's probably more valuable at that point. Because you basically said it's part of a standard that people are probably going to adopt, versus a patent in general, which, most of the time, you never know whether anyone's going to use.

Shawn Ambwani: 19:02 I mean, the vast majority of patents are never actually used in any way whatsoever. They're not enforceable because they're just ideas that people have, most of the time, and these patents are arguably more likely than not to be in a standard, and that standard might or might not actually get used in the end. Inherently, they're more valuable. The problem is, there's tons of over-declaration that occurs in this area. There's very little incentive. I mean, some places there's more of an incentive than others, but the way MPEG works specifically is that you can do blanket declarations, and so you don't have to declare specific patents. In other standards, you have to basically declare each individual patent that you have. So, I mean, there's all kinds of trade-offs in all these different things, but the reality is that no one really knows exactly how many patents need to be licensed. And that just creates a lot of uncertainty.
And you know, a lot of companies who are trying to make money not off products, but off of doing licensing, thrive on uncertainty, because that's where they can make money: basically by, you know, saying, "Okay, well, who knows what can happen, but if you take care of me now, I can make sure that I'm not going to cause you issues."

Dror Gill: 20:23 Right. And that's why uncertainty is in the middle of FUD: fear, uncertainty, doubt, which is one of those tactics, and uncertainty is definitely a big part of that.

Shawn Ambwani: 20:33 Yeah. I mean, the other thing is that companies in general, it seems like a one-way street a lot of the time, which is pretty unfortunate, in that, although I'm not sure if I have a good solution, you know, a lot of companies, the licensors, have a way of getting together, agreeing on a price, and then licensing through an organization like MPEG LA or others, or Velos (Media), or whatever it is, to do that type of activity. They can select a price, they can work together, agree on a price. And the reason why they can do that, according to the DOJ, is because it's a different product than what was available before. So it decreases uncertainty by making it easier for people to take a license of convenience for that specific technology area.

Dror Gill: 21:21 Otherwise, it might have been considered price setting.

Shawn Ambwani: 21:24 Right? Yeah. It would be considered price setting. But in this case, the argument is always that you can always go to each individual company and get a license or negotiate a separate license. This is a license of convenience for this technology area, from all these companies, for one price. And that makes it a lot easier for people on both sides to know exactly how much they're going to be getting and how much they're going to have to pay for clearing this risk. Which makes sense. I fundamentally have no problem with pools and what they do.
The, the issue comes up is that a lot of these pools, A) don't talk about the pricing, they don't look at the validity, they don't really have great checking on top of it, essentially. And they're very much incentivized to help out the licensors, not the licensees, figure stuff out. And what ends up happening over time is you also have companies that are not interested in making products, which is unfortunate. They're just interested in making money off of their licensing. Which is unfortunate, because there's a lot of games that can be played in the standardization world to get your stuff in, and then get your patents in, basically.

Mark Donnigan: 22:44 Well, ultimately it stops innovation, I mean, at the end of the day. And one thing, and Dror and I have talked about this on episodes, and we've certainly talked about this a lot privately within Beamr, is, you know, it's a little bit mystifying as well. Because, okay, so HEVC clearly was set back as a result of many issues, largely what we've been talking about for the last 35 minutes, and the adoption of HEVC. And yet these people, as you point out, the licensors, they don't make money if nobody's using the technology. So what's mystifying to us is that it's not like somehow they're getting paid still, you know, even though the adoption of the technology is not in place or it's not being used. They're not getting paid. And so it seems like at some point, you know, a rational actor would stand up and say, "Wait a second, I'd rather get something rather than nothing!" But it's almost like they're not acting that way.

Dror Gill: 23:46 But, but it did happen. They did reduce the royalty rate. Yes, yes, yes, certainly. And they did come to their senses, and they did put a cap, where initially it was uncapped, and they did remove royalties from content.
And you know, they did a lot of things in the right direction after the pressure from the market, when they realized they're not going to get anything, and when AV1 started to happen, you know, and they were pressured by a competing codec that was supposedly royalty-free and didn't have these issues. So I think the situation has improved. But you've launched a specific zone. It's called the video codec zone, but basically right now it deals only with HEVC.

Shawn Ambwani: 24:33 A lot of these patents that we've challenged relate not just to HEVC but potentially to AV1 and other codecs, like AVC, as well, because there's such overlap between these things. That's why we generically call it a video codec zone. So, obviously, a lot of the things that we've looked at, like the economic report and everything else and the landscape, a lot of the focus has been on HEVC.

Dror Gill: 24:59 So you examined HEVC, and you saw this situation where you have three patent pools, one of them hasn't even announced its royalty rates, and you have a lot of independent patent holders who claim to have standard essential patents for HEVC. And this is kind of your opening situation. So what was the first thing that you did? How did you start to approach the HEVC patent topic, and what actions did you take?

Shawn Ambwani: 25:34 Like I said, we've done a bunch of different stuff. We had a submission repo called Open, where we collated all the prior art, not prior art, but submissions into the standard, for HEVC and AVC and other standards from MPEG, so people can make it easily searchable. In fact, 50% of the prior art that we got for our patent challenges came from the submission repo, which is great, which is basically, you know, previous submissions to the same standard. We have OPAL, which is our landscape tool. And then, you know, obviously we have OPEN, which is our evaluation report that I mentioned for HEVC.
And then we did a bunch of reviews of validity, and challenged a bunch of patents in different licensing entities. I mean, Velos, I think they don't consider themselves a pool, just to be clear.

Dror Gill: 26:29 Because they actually own the patents? They've licensed those patents on their own?

Shawn Ambwani: 26:34 Well, I think they just don't consider themselves a patent pool in the way that MPEG LA and HEVC Advance do, simply because that would throw them into a different bucket, and they would have all kinds of requirements on them that they don't want, basically. So, you know, when the DOJ kind of made the rules, or kind of the lawyers decided what the right rules are to make it work, you know, like: you've got to show your stuff, basically. You've got to show your price, you've got to make sure it's reasonable, or, you know, there's no most favored nation clause. I mean, there is a most favored nation (MFN), let me rephrase this. So, all these things to make sure that everything is very transparent, in order to allow these kinds of companies to get together and set a price for how much they want to license for, which typically would have huge anti-competitive or antitrust issues, right? They made all these rules, and Velos, I think, would not consider themselves technically a patent pool like those guys, because that would make them have similar requirements.

Dror Gill: 27:40 So they're like an independent patent holder?

Shawn Ambwani: 27:42 I don't really know what they call themselves. I've definitely never heard them say that they're a patent pool. I've heard other people call them a patent pool. I probably have at some point, but I don't really know if they actually consider themselves a patent pool.

Dror Gill: 27:55 Because I noticed that your litigation was against the patent holders, companies like GE and KBS, and against Velos Media itself, yeah.

Shawn Ambwani: 28:07 Yeah.
Well, Velos is, you know, an unusual beast, in that it owns a number of patents that got transferred to it, as well as it provides licenses to the people who participated. You know, the other patent holders in general are much more traditional in their patent pool type activity, in that the patent holders are different from the people who are doing the licensing.

Dror Gill: 28:28 And you're not suing the patent pools, like MPEG LA and HEVC Advance? They're not your targets?

Shawn Ambwani: 28:32 Well, they don't own patents directly, so there's really nothing to do, as far as I know. I mean, you could say, you know, part of it is we're challenging them to a certain degree on their pricing, and kind of their whole model of not looking at validity, by challenging some of their patents, as well as, you know, putting them on notice that as they get more patents in, we might challenge further patents for validity. So why don't they do it ahead of time? I mean, the idea that not checking for validity is a victimless crime, that it doesn't hurt anyone? It's just not true, in my opinion. It's just not true, because you are hurting the people who actually innovated. There's a set amount of money that goes to everyone. If you're checking for essentiality before you allow a patent in, you should check for validity, because there's a bunch of patents that just aren't valid, that they should not be making money off of. It just incentivizes people to get more invalid patents in the same space that they can stick into a pool, to get a bigger share of it, like a giant game, right?

Mark Donnigan: 29:43 Yeah, that's a really good point. I'm wondering, what is the cost to test for essentiality? Is some of this just sort of practical, like it's just either too time-consuming or costly to test?

Shawn Ambwani: 29:58 Essentiality is oftentimes more expensive than validity in some cases, but I mean, they do test for essentiality.
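The "bigger share of it, like a giant game" dynamic Shawn describes, where a fixed royalty pot is split pro rata, so every invalid patent admitted dilutes the payout to the genuinely valid ones, can be sketched with toy numbers. Everything here is a hypothetical illustration, not how any actual pool allocates royalties:

```python
# Sketch of pro-rata dilution: a pool collects a fixed pot of royalties and
# splits it by patent count, so each invalid patent admitted shrinks the
# share paid to valid ones. All numbers are hypothetical.

def per_patent_share(total_royalties, valid_patents, invalid_patents):
    """Each patent's share of a fixed royalty pot, split pro rata by count."""
    return total_royalties / (valid_patents + invalid_patents)

pot = 1_000_000.0  # hypothetical annual royalty pot in dollars

clean_pool = per_patent_share(pot, valid_patents=100, invalid_patents=0)
diluted_pool = per_patent_share(pot, valid_patents=100, invalid_patents=25)

print(f"per-patent payout, no invalid patents:  ${clean_pool:,.0f}")   # $10,000
print(f"per-patent payout, 25 invalid patents: ${diluted_pool:,.0f}")  # $8,000
```

With these placeholder figures, admitting 25 invalid patents transfers 20% of the pot away from the innovators who hold valid ones, which is the crowding-out incentive problem being described.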
The companies pay to have their own patents tested, oftentimes, for essentiality, but there is no test for validity that they enforce, so no one actually does it. You know, if they did ask for it, I'm sure people in some cases would pay for it, but more importantly, people who didn't think their patents would be found valid probably wouldn't submit them in the first place. There'd be a huge disincentive for people who had that risk of that happening. They just wouldn't submit it, which, you know, obviously is going to hurt the pool, because they get fewer patents. And at the same time, the hope is that people will think twice before they submit stuff they know is crap, anyway.

Mark Donnigan: 30:43 So what is your bar for determining low quality? I mean, what does that process look like?

Shawn Ambwani: 30:52 We have a bunch of patents that come into our hopper that we're constantly looking at, in every single zone that we're in, and we're constantly looking and seeing if it's a valid patent or not. And there's multiple ways of doing that. We have crowdsourcing that we do for that. We just pay people, prior art experts, for example, to do prior art searches. You can prior art search infinitely long; there's no stopping, you know, what you can do. But, you know, in the end there's only so much you can reasonably expect to find. And so, from my perspective, you know, there's definitely been situations where we've looked at patents and we've said, "Okay, we don't think we have good prior art. We're just not going to do anything about it." And that's okay. In fact, I mean, it's okay if a licensing entity or licensor has a valid patent. That's perfectly fine with me.

Shawn Ambwani: 31:49 I think if they have a valid patent, they should be able to make money off of it.
I have no problem with it. It should be a fair amount if it's in a standard, based on FRAND principles, but in general, people should be able to make money off of a valid patent. The problem is that a lot of people are making a lot of money, in my opinion, off of a lot of bad patents and invalid patents, which hurts the people who actually do have good patents, because they're getting crowded out. Which is sad, because that really is the disincentive for innovation: when the people who actually are innovating aren't making money off of it, because they're getting crowded out by the people who are just playing a good game.

Dror Gill: 32:25 You described earlier the process with standard setting bodies such as MPEG, where you declare your patents, but you can declare them as a pool, or as a bunch of patents, and not specifically.

Dror Gill: 32:40 You know, you can create a pool and charge as much as you want, if it's under the FRAND principles. Do you think there's anything broken in the standard setting process itself? Do those committees need to do something else in order to make sure that when they create a standard, the royalty situation, the situation of the IP which is essential to that standard, is better known, and that you have less uncertainty in that IP?

Shawn Ambwani: 33:09 Yeah, I'm not sure. I mean, there's always ways of tweaking the system. Every standards body has different ways of managing it. I mean, the only really clean way of doing it is saying it's royalty-free, and having anyone who participates in the standard agree that it's royalty-free. Anything above that, you know, you can play all these different types of rules and machinations, and 3GPP has their own, and other organizations have their own. But in the end it ends up being the same issue of, you know, under-declaring, over-declaring, issues with essentiality, validity, all kinds of other things.
So I'm not sure, unless you go to that binary level, how much, you know, changing that up is going to change things fundamentally. I think the more fundamental thing is that, you know, the fundamental reason why you have these patent pools and other things like that was to clear risk and decrease uncertainty. Unfortunately, I'd say uncertainty is actually increasing in some of these cases, not decreasing, with all these different groups asking for money at this point, which is unfortunate.

Dror Gill: 34:17 No, that's a very interesting insight, really.

Mark Donnigan: 34:19 Hey, thanks for joining us, Shawn. This was really an amazing discussion, and we definitely have to have a part two.

Shawn Ambwani: 34:26 All right, well, thanks for your time, gentlemen. I really appreciate it.

Dror Gill: 34:28 Thank you.

Narrator: 34:30 Thank you for listening to The Video Insiders podcast, a production of Beamr Imaging Limited. To begin using Beamr's codecs today, go to beamr.com/free to receive up to 100 hours of no-cost HEVC and H.264 transcoding every month.

The Video Insiders
Direct-to-consumer streaming service launches and first impressions.


Dec 24, 2019 · 43:23


The NAB Streaming Experience website can be found here
Learn about NAB Streaming Summit here
Dan Rayburn LinkedIn profile
Related episode: What happens when content owners go direct
The Video Insiders LinkedIn Group is where over 1,600 of your peers are discussing the latest news and sharing information of interest. Click here to join
Would you like to be a guest on the show? Send an email to: thevideoinsiders@beamr.com
Learn more about Beamr

TRANSCRIPTION (Note: This is machine generated and may have been lightly edited)

Dan Rayburn: 00:00 Seven, eight years ago, when we were all playing in this arena and trying to really figure out the business model. Today, this is big business. We have tens of billions of dollars at stake. This stuff has to work. It has to be right, and there is a lot of pressure on these new conglomerates to make sure that the video workflows they're building out work properly, because it is truly the future of their business. And I think the great way to really drive that point home is just remember all the services that Dan Rayburn: 00:25 launched, say, five years ago in the market. When the services came out, one, there was no investor day, because investors didn't care what you were launching, because at the time you weren't spending that much money and it was still a newish experience from a quality standpoint. Today, every single service that's launching is having an investor day where, before the service is even out, they're projecting to investors when these services will become profitable. Talk about a shift in our industry. Announcer: 00:53 The Video Insiders is the show that makes sense of all that is happening in the world of online video, as seen through the eyes of a second-generation codec nerd and a marketing guy who knows what I-frames and macroblocks are. And here are your hosts, Mark Donnigan and Dror Gill. Mark Donnigan: 01:07 Welcome to another super exciting episode of The Video Insiders. We have Dan Rayburn with us again. 
Uh, this is part two. Yeah, it's amazing. And his first interview was one of the most popular ones on our podcast. We have to say it's the most downloaded, and we have a lot to talk about, because since the last podcast episode where Dan was interviewed here on The Video Insiders, a lot has happened in the OTT space. I mean, really a lot. Yeah, a lot. So Dan, you know, welcome back to The Video Insiders. Thanks guys. Thanks for having me. You know me, I always have lots to talk about, so I love chatting about the industry. Well you do, you are an easy guest to host, that's for sure. Dan Rayburn: 01:53 I've always got stuff to say, right? I have an opinion on everything, but uh, it's an exciting time in the space. And since we last talked, to your point, we've got Disney Plus out, we've got Apple TV Plus out, we've got some new announcements from NBC regarding Peacock. We've got a lot going on in the industry right now. A lot of confusion as well, though. Dror Gill: 02:11 So let me ask you, when you watch some TV in the evening, can you really focus on the content, or are you always looking for kind of artifacts, HDR levels, you know, stuff like that? Dan Rayburn: 02:26 I really don't want to think about the business, because I do so much of that when I am reviewing the services from a business or content standpoint. You know, to your point, in terms of, yeah, I am constantly looking at bitrates. I am looking at, okay, what's coming through my router, because I want to see what the maximum stream is that I'm getting from the Mandalorian. You know, I probably have 40 different streaming services here at home, and I've got anywhere between 10 and 12 TVs set up, just sort of a lab environment, plus all the iPads, iPhones, MacBooks. Like, it's ridiculous, like a Best Buy here. I'm not the average consumer obviously, but, uh, I think like the average consumer in many cases, we are all looking at where the content is. 
So I've got some friends who are huge Rick and Morty fans, and the new Rick and Morty season is out and this and that. Dan Rayburn: 03:10 And I said to them, you know, hey, next year you're going to be able to stream this. And they're like, yeah, but I can't figure out where. Well, that's a great point. They can't figure out where, because where it currently is right now is going to be removed, because AT&T has said that this is going to be exclusively under the new HBO Max brand. The average consumer isn't going to know that. So we're still going to have content fragmentation problems. So as a consumer, I think the biggest thing that we look at is just, what content do we want to watch? Mark Donnigan: 03:39 You bring up something really interesting, Dan, and this is a huge hole that I see in reporting on all the new services. It seems like so much of the press is writing about, you know, this service killing the next service. Dan Rayburn: 03:53 The problem is, look, the problem is the vast majority of people who are writing about our industry don't actually use the product. Mark Donnigan: 04:00 Yeah. They don't have 12 TVs set up in their house, you know, like you do. Dan Rayburn: 04:03 They don't have one. All of these major platforms that are either telcos, carriers, wireless operators, content owners, distributors, whatever you want to call them, they're all creating brand new digital platforms for the future. And by that I mean this: when you think of what's taking place in the market right now with mergers, Viacom, CBS, Pluto, right, CBS All Access, CBS Sports, CBS News, they are now all going to be converging and building out a new platform for all of these different products and services. That's one. Now throw in NBC Sports, NBC News, Playmaker, Peacock, and, what do they also own? New England Sports Network, one of the other sports things. Throw all those guys in. That's now a brand new stack in the ecosystem. 
Now let's move on to AT&T. AT&T, Warner, Turner, HBO Max. That's now a whole system. Oh, and I forgot Sky. When you're talking about NBC, you've got to throw Sky in there too. So think about some of the largest companies that we have out there that are now creating a brand new stack, end to end, to fuel all these different new properties that they have. The biggest thing that you need there when you do that is what? Expertise. Mark Donnigan: 05:22 There might be some clues here, because of course news just came out literally a couple of days ago: Fox signed a very large deal with AWS. Now, um, I was reading some analysis on this, and you know, it's because 21st Century Fox, when they were acquired by Disney, you know, so there was a split, right? So the studio was acquired by Disney, and all of those technical services actually went with, you know, 21st Century Fox, of course then being a part of Disney. And then with BAM, now you've got this huge, you know, service organization that's available. And then here was Fox, the TV studio, the sports, you know, the sports side of Fox, that needed a complete, you know, service provider. And it appears that they have selected AWS for even more than just, you know, on-demand instances. So it's even more than a data center play. Um, and so that would seem to give credence to what you're saying: that, you know, BAM is far more than just a streaming service, that, you know, there's a lot of technical expertise and services they're providing Disney. Dan Rayburn: 06:36 Yeah, there's a huge amount, and people, you know, really don't understand. I think a lot of people, even in our industry, don't understand what goes into all these services. Just the amount of beacons that are deployed, right. Just the amount of APIs you have to check. All the QoS and QoE reporting that has to come in, and the analytics. And that's before you're doing any advertising. 
So anything advertising based obviously has more complexity, tying into all the ad flows. And if you're doing live, okay, now you're talking about stream stitching for inserting ads into a live stream; that adds complexity. You have to think about latency and different ways to do chunked encoding. There are things you can tweak with HLS. There's just so much going on with these workflows and platforms that you really have to have that expertise. And some companies, you know, think of Discovery, right? Dan Rayburn: 07:19 We heard from Discovery six, seven months ago when they announced they were going to hire 200 people to build a new streaming department to run all of Discovery's properties. So in some cases you have companies like that go, we want to own this, we want to build it, we'll bring it in house, and it'll take them some time to get to market with that expertise. But they'll get there. And then you have other companies like Fox here, where they signed that deal with Amazon, and you know, what they're really using AWS for is a couple of different things. On the CloudFront side, it's to deliver Thursday Night Football. Amazon already does live football. They kicked off the Premier League, what, two days ago? Three days ago from when we're talking now. So Amazon obviously has expertise in live streaming. The Premier League went off well with no major hitches. Dan Rayburn: 08:01 You did have some users complaining about latency, but that wasn't a problem Amazon was trying to fix. Just like we saw with the past Super Bowl, that wasn't something where they were like, okay, we want to get latency to the same as broadcast. That was not the goal. So I don't see that as a problem. So they're using AWS for video workflows, editing and graphic storage, but also for this new product AWS calls Local Zones that puts cloud computing hardware closer to the edge, and the edge is a broad term. 
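[Editor's note: the server-side ad "stream stitching" Dan describes is commonly signaled in an HLS media playlist with discontinuity tags at the splice points. The sketch below is for illustration only; the segment names and durations are invented, not taken from any service discussed here.]

```
#EXTM3U
#EXT-X-VERSION:6
#EXT-X-TARGETDURATION:6
#EXT-X-MEDIA-SEQUENCE:100
#EXTINF:6.0,
live_seg100.ts
#EXTINF:6.0,
live_seg101.ts
#EXT-X-DISCONTINUITY
#EXTINF:6.0,
ad_break1_seg0.ts
#EXTINF:6.0,
ad_break1_seg1.ts
#EXT-X-DISCONTINUITY
#EXTINF:6.0,
live_seg102.ts
```

The EXT-X-DISCONTINUITY tag tells the player that timestamps, and possibly encoding parameters, change at the boundary, which is what allows ad segments produced by a different encoder to be spliced into the live stream.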
Netflix has also signed on to be one of the first customers for this new AWS Local Zones service as well. So it's super important. You know, we as consumers, we all want a good quality service, and we expect it, and now we're paying for it. So today this is big business. We have tens of billions of dollars at stake. This stuff has to work. It has to be right. And there is a lot of pressure on these new conglomerates to make sure that the video workflows they're building out work properly, because it is truly the future of their business. Dror Gill: 08:58 We're done with experimenting. Now we need to show the money. Dan Rayburn: 09:01 That's right. And you're spending a lot of money to do this. Look at how much money Disney's lost so far just on Hulu, and then the acquisition of BAMTech. But they've already said to investors: here's when we're going to make it back, here's where we're going to become profitable. So you saw AT&T do that in their HBO Dan Rayburn: 09:17 Max day. And NBC just announced they're going to have an investor day in January for Peacock. We're in a different era. Mark Donnigan: 09:22 Now, for, you know, almost the first time, what is being done in engineering and R&D can actually move a stock price. You know, meaning that the decisions that are made, whether that's technology choices, um, you know, codecs, certain stacks, architectures... if it doesn't work, like, the stock is gonna move. And when the stock moves, it has the attention of everyone. Up until this point, you know, the tech blogs would "dis a service" for an outage or for, you know, poor quality, and so yes, it would get coverage, but it never moved a stock price. You know? Or maybe there was a one-day blip, but basically it was kind of a non-event. Now that is no longer the case. Right? Dan Rayburn: 10:13 The bottom line is you have to think about profitability. 
And it's interesting that we're talking about this at a time when, if you think about Uber and WeWork and some of these other services, what are investors clamoring for now? Profits. Forget all this Amazon model of getting big fast and burning as much money as possible. Thank God we seem to be getting out of that from an investment standpoint right now, and in the streaming space even more so. Also, look who's getting into the space. AT&T, I think, is the most heavily indebted US company right now. I mean, it's insane how much debt they have. So you also have companies, some of these already very deeply in debt, where investors want to see that anything new they get into, where they're spending billions of dollars to do it, better turn a profit pretty quickly. Dror Gill: 11:03 But, uh, Dan, let's look at the other side of the coin. A company that has tons of money, um, in the bank, and now they need to find some creative ways to use it in order to get those profits, uh, coming in again. And of course we're talking about Apple. Um, after selling, you know, so many devices, now they've realized that services would be a much larger part of their revenues moving forward. So they really are in a spending mode, and uh, the real question is, will they be successful in catching up to the existing services and competing with all this new stuff that is coming out? Dan Rayburn: 11:47 Well, see, I don't think they have to catch up, though. That's the difference, cause their business model is different. That's the other thing: people don't look at the business model of these services. You know, if you think about Apple services revenue, it was twelve and a half billion dollars, um, in the last quarter, which is pretty amazing. Their services business grew 13% year over year, so they're certainly doing a good job there. 
And Apple TV Plus, you know, the whole deal of that is just to drive more usage of Apple's platform and services. But the unique thing with Apple, of course, is, well, they own the hardware as we know, but they also own the OS. They own the browser, they own the store, they own the entire ecosystem. What does Netflix own? They don't own anything except content, right? So it's two different business models, and everybody throws these folks in together and people go, Apple didn't have a successful launch. Dan Rayburn: 12:38 Well, they did. They weren't trying to license back catalog. They weren't trying to launch with a hundred shows. That wasn't the goal of their platform, because they're driving revenue in different ways. So it's the same way right now that Roku doesn't make a lot of money on their hardware; they're seeding it out in the market to obviously drive the advertising business and the Roku Channel, you know, the platform business. And Amazon pushing out Amazon Fire TVs, it's what, $20 on Black Friday for those sticks? They're not making much money on that either. So I think it's always bad when you see all these services compared to one another in the media, and this horrible term "streaming war," because it's not a war. Hate that term. Uh, and a lot of these services are not competing with one another. They don't see each other as competition. Apple is not trying to do the same business model as a Netflix, nor do they need to, because it's a different type of company. Mark Donnigan: 13:31 It's excellent you brought up Roku. I'm looking at their Q3 numbers. They just came out like three weeks ago, um, or early November, I believe. And their advertising revenue for the period was just under 180 million, 179.3 million. It was up 79% from the previous year's quarter, almost double, and their device revenue was up 11%, so that's good. But it was 81 million. 
So the point is their advertising was more than double their device revenue, you know, and their numbers are showing, on the advertising platform side, you know, just tremendous growth. And of course that's ultimately what they're really reporting around. I mean, yes, their device revenues are significant enough, you know, they're reporting that. Dan Rayburn: 14:22 Yeah, they shifted their business model. Right. I mean, Anthony was smart. Keep in mind, Anthony came out of Netflix; that's where Roku was born. Yeah, it was incubated there initially. Right. And that's where they got some of the money from. And they realized longterm, I'd say, two things were really smart. In the beginning, Netflix realized they didn't want to be invested in any one hardware company, because then they couldn't be Switzerland. They couldn't be neutral. So that was smart, to diversify from the Roku investment that they had. But then Roku also realized, they were smart to see the writing on the wall here: you're not going to compete longterm on the hardware side. Hardware pricing always gets pushed down, and back then, if you remember all the different devices, I mean, at one point we had 20 different streaming players in the market. Dan Rayburn: 15:03 It was ridiculous how many were out there. Even Vizio had one. Uh, but then really, I think what changed was when Amazon came into the market. Because we all know Amazon pushes pricing down on everything, and we're at a point soon where, and this isn't official, Amazon hasn't told me this, but I will pretty much bet anything that at some point you're going to sign up for Prime and you're going to get a stick for free, because at $19 now on Black Friday, this thing is getting close to being free. And if you're in the hardware business, do you want to be competing with Amazon on something like that? Absolutely not. 
So Roku realized that they had to become not a hardware device, but a platform. And the key thing there was obviously them getting their platform into smart TVs, and especially a lot of smart TVs that are not the high-end ones. Not that TCL doesn't make some good high-end TVs. Dan Rayburn: 15:54 But you know, the average Roku-enabled TV that's being sold is probably $300. Hisense, TCL, some of the others. So they're getting more of them out there. And that's really what Roku has become, that platform. And their latest acquisition of Dataxu, you know, that's interesting, because that is a platform that basically will allow Roku advertisers to better plan and optimize their ad spend across TV and OTT providers. And that's really smart of Roku, uh, because this is the future of the company. You're talking about a company that's doing over a billion dollars a year now, in 2019, if I remember that number correctly. So you have to think about how Roku can capture a larger share of the market, because as well known a brand as Roku is, they still have a very small percentage of total households in the US when you look at the numbers. They don't call it consumers anymore, devices. Dan Rayburn: 16:55 Um, you know, which is good, because, like, I have a bunch of devices in my house, but I'm one person. So they're growing, but that's something that they have to continue to do. Their monthly active users have to continue to go up. But yeah, Roku is in a really interesting spot in that regard. Their stock is incredible in terms of how much volatility it has in any given day or week, sometimes. Uh, I think the Roku Channel is an interesting thing, where, you know, they go out and they start offering content for free, just like Tubi and Pluto and, you know, IMDb TV by Amazon, and that market is getting very crowded. And frankly, I don't quite understand that market, because the content on those platforms is just so old and outdated. 
I really don't know who's clamoring to see Gilligan's Island. Mark Donnigan: 17:37 Well, Dan, so how should services be measured, you know, from a QoS standpoint? Dan Rayburn: 17:43 Uh, boy, that's a great question. Uh, I think first and foremost you have to look at what the methodology is. Methodology is the key for anything. So, you know, as an analyst, I frankly don't care about opinions so much. I care about data. I think companies should base how a service is doing, whether that's financially, whether it's technically, whether it's how it's scaling, on data, because data can't really be argued with in most cases. Uh, so I think first and foremost is the methodology. And I think what you have to understand there is different companies have different ways of measuring performance. When I go out and do surveys of CDN customers on how they measure, some go, I care all about time to first frame or startup time. Others go, no, I only care about rebuffering. Some go, well, to me latency is most important. Dan Rayburn: 18:25 Well, none of those are more important than the others. It depends on who the customer is and what their business model is. So as an industry we have to continue to think about these services as isolated services, as opposed to throwing everybody in this group of, oh, you're a video service, you should measure your video quality this way. Not necessarily. So I think methodology first and foremost is most important. I think sharing that methodology is key as well. Uh, but I think you should always value a service based on quality over quantity. And we hear the opposite of that a lot on the advertising side, where everybody talks about how many ads were delivered. But the question I always then ask a brand is: would you rather deliver fewer ads and have a better viewing experience, or do you just care about how many ads you pushed out there? 
Dan Rayburn: 19:16 And we have to think about that the same way on services that are not ad based. So I think that's what we obviously know from consumers from all these reports that we've seen, and frankly I don't think we need any more. I don't know why people keep pushing out more reports saying that if the video doesn't start up quickly, consumers are unhappy. Yeah, thanks. We know that. I think measuring quality has to first and foremost come down to: what is the experience that you want a consumer to have with your content? That's the first thing. Once you define that experience, now how do you actually decide how to achieve it? Well, there are different ways to do that. We know that one of the basic ones is startup time. We know that customers get frustrated when something takes long to start. We also know rebuffering is a huge issue as well, which is obviously why we use adaptive bitrate encoding, hopefully to relieve those issues. Dan Rayburn: 20:03 But it's interesting, when you look online you honestly don't see a lot of complaints around rebuffering; you see more with just initial startup time. But the biggest complaint you see actually doesn't have to do with the video. It has to do with just getting to the video. So you're having all these other issues in the stack before it actually gets to delivering the video bits, and those are the things that really have to be solved. Those are the things that really have to be scaled, because scaling the video is not that hard for someone like Disney Plus. Disney Plus launches that day, let's say it was 10 million actual individual subscribers, and let's say they were all watching at the same time: 10 million streams across the five CDNs that Disney was using. That's not a big deal at all. It's 2 million streams per CDN, that's nothing. That's not hard, so people always think it's the CDN. 
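[Editor's note: Dan's back-of-envelope math above can be sketched as follows. The 10 million concurrent viewers and five CDNs are his hypothetical numbers; the 5 Mbps average stream bitrate is our own assumption for illustration, not a figure from the conversation.]

```python
# Back-of-envelope load model for a multi-CDN launch day.

def per_cdn_load(concurrent_viewers: int, num_cdns: int, avg_mbps: float):
    """Return (streams per CDN, egress per CDN in Tbps), assuming an
    even split of concurrent viewers across the CDNs."""
    streams = concurrent_viewers // num_cdns
    egress_tbps = streams * avg_mbps / 1_000_000  # Mbps -> Tbps
    return streams, egress_tbps

# 10M concurrent streams split across 5 CDNs at an assumed 5 Mbps each.
streams, egress = per_cdn_load(10_000_000, 5, 5.0)
print(streams)  # -> 2000000 streams per CDN
print(egress)   # -> 10.0 Tbps of egress per CDN
```

Even 2 million concurrent streams per CDN is routine territory for a large CDN, which is Dan's point: the hard scaling problems live elsewhere in the stack.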
Dan Rayburn: 20:56 I think when you're determining quality, first and foremost you have to have a good understanding internally at your company of what you think good quality is for your service, based on your business model, based on your consumers, and also based on the type of device they're watching on. Is the vast majority of your content on mobile? And the reason I say that is, as an example, when Quibi comes out next year, it's a hundred percent mobile focused. Do you think their methodology to measure quality should be the same as a Netflix? Because we know everything's going to be viewed on a small screen in short-form content for Quibi. It's a different way to measure. I think there's lots of good services out there to help you measure. There are newer ones coming to the market in terms of what's being measured. You've got services that are measuring how well APIs are doing versus how well streaming servers are doing versus ad servers and ad platforms and exchanges. Dan Rayburn: 21:43 And then you think of the traditional stuff that's been out there in terms of telcos and carriers, last-mile providers, how they're doing, transit providers. When you put all that together, it gives you a much better holistic view of what QoS looks like across the internet from end to end, from glass to delivery. Uh, but we still have a ways to go in terms of really showcasing that. And unfortunately none of these companies after the fact ever share any sort of methodology, and they don't ever share any kind of numbers. You know, I worked on the Super Bowl with CBS this year and I can't talk about, you know, the numbers I know. But you know, it's too bad CBS doesn't put out, from their Conviva dashboard and Mux and all the other services being used, what the rebuffering rate was, because you know what, it was really, really, really low. Like, why not put that out? It shows a great quality service. 
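[Editor's note: two of the QoE metrics discussed here, time to first frame and rebuffering rate, can be computed from simple playback-session data. The event schema below is invented for illustration; commercial analytics services such as Conviva and Mux define their own, much richer models.]

```python
from dataclasses import dataclass

@dataclass
class Session:
    play_requested: float    # epoch seconds when the user pressed play
    first_frame: float       # when the first video frame rendered
    watch_seconds: float     # total time spent in the player
    rebuffer_seconds: float  # time stalled waiting for the buffer

def time_to_first_frame(s: Session) -> float:
    """Startup delay: how long the viewer waited before video appeared."""
    return s.first_frame - s.play_requested

def rebuffer_ratio(s: Session) -> float:
    """Fraction of player time spent stalled (0.0 means no stalls)."""
    return s.rebuffer_seconds / s.watch_seconds if s.watch_seconds else 0.0

# A ten-minute session that started in 1.8s and stalled for 3s total.
s = Session(play_requested=0.0, first_frame=1.8,
            watch_seconds=600.0, rebuffer_seconds=3.0)
print(time_to_first_frame(s))  # -> 1.8
print(rebuffer_ratio(s))       # -> 0.005
```

Which of these numbers matters most is exactly the methodology question Dan raises: a live sports service may weight latency and rebuffering, while an on-demand service may care most about startup time.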
Mark Donnigan: 22:32 You made a good point earlier that it's very interesting that now all these big companies are actually staging investor days, or investor conferences, around their services, which has never happened previously. I wonder if this methodology is going to begin to make it into, you know, some of the public disclosures in some way? Dan Rayburn: 22:55 Sounds great. But, come on, if you deal with investors, you know that you start talking even bitrate calculations with them and they can't figure it out. Right? I mean, so no, investors aren't worried about that stuff. They don't understand it. Um, I mean, it's amazing how many people, just on LinkedIn alone, let alone the media, were comparing the success of Disney Plus based on the metric of when Netflix launched, and it just boggles your mind, right? Because I stuck this up on LinkedIn real quickly, and this is all factual information you can easily look up, which, you know, the media doesn't want to do. The year Netflix launched, there were only 34 million iPhones in the market. That's it. Now, smart TVs didn't exist at all. And two years later, in 2012, only 12 million were connected to the internet. And at the end of the first year of Netflix, Apple had sold 7.5 million tablets. So now you're going to compare Disney Plus launching in an era with over a billion iPhones alone, and I don't know how many Apple iPads and smart TVs, and you're going to go, we've now deemed this a success because it's beaten something that launched nine years earlier. Yeah. The methodology is flawed. And forget bandwidth. I mean, bandwidth back then compared to now? It's night and day. Mark Donnigan: 24:22 I was there. I was there in 2007; we were just launching VUDU, you know, on a dedicated set-top box, because that was the way that we could bring a guaranteed experience to the home. You know, it wasn't because, you know, VUDU wanted to be in the hardware business. 
Uh, and ultimately, you know, the company of course pivoted, you know, to an app on devices. But um, I can remember that the average broadband capacity in the US in most markets was around two megabits. Dan Rayburn: 24:57 It was a different time, comparing something that long ago. But here's the biggest thing. The media doesn't write for accuracy, like we talked about before. They write for one thing: headlines. So the moment you say this kills Netflix, and this crushes Netflix, or this did better than Netflix, what happens? People click on it, because everybody's heard of Netflix. Cause the only way these guys make money is page views. So that's a whole different discussion we're not going to get into, cause that's a whole different podcast. But the entire model for news on the internet is broken, and has been broken for years, when it's based on just, here's how many page views you have. So let's cram out more articles that are 800 words or less instead of actually telling us. Mark Donnigan: 25:38 So I think it's interesting, you know, to talk about devices, and since we are talking a little bit about history now, you know, there was a time where it was really critical that you got your service on a device, and I'm kind of, you know, using "air quotes" there. Um, because if you were on a device that was widely sold, then, you know, you had an ecosystem you were a part of. Now, with SDKs and APIs, it's far more ubiquitous, you know, HTML5 apps and things like that, with the app stores being clearly defined. Um, you know, basically you need to be in the, uh, Apple App Store. You know, you need to be in the Google store, you know, for Android. Um, you need to be on about half a dozen connected TV platforms, and then Roku, and you've covered like 99% of the market. Right. Um, so what's your perspective on, you know, even like Nvidia launching, you know, the Shield TV? 
And, you know, just the role of devices. What are you, um, you know, what are you seeing there? Dan Rayburn: 26:43 Well, you know, I think over time devices play less of an important role. And the only reason I say that is, to your point, it's really about the platform now and it's about ecosystems, and people pick certain devices or services because, I'm already in the Apple ecosystem, or the Android ecosystem; I already have, you know, an Xbox One. Typically, people who have an Xbox One are not going to then go out and buy a PS4 just because of a new service. So what we've seen over the years is services no longer launch with exclusives on platforms, like we saw when HBO Now launched, when it was only available for the first 90 days on Apple TV. That's actually a disservice to the service. Dan Rayburn: 27:28 It's getting in fewer people's hands. So I think the devices we have in the market, I don't see that changing at all. Right. I think you have the major devices between Xbox, PlayStation, Chromecast, Apple TV, Roku, Amazon. Uh, who am I missing? Those are the major ones. I look at something like the Shield TV, which now has two new models from Nvidia, which I've tested and played with. Yeah, it's a good device, cause it's super fast. And the fact that it's built on Android, you know, you can go in there and you can install a Plex server on it, which works really well. It's a great device for Plex Media Server. Uh, but who's Nvidia really targeting with the device? It's $200. Dror Gill: 28:09 People who like a nice design. I mean, look at the Shield TV. It's a cylinder shape. It looks exactly like the Roku SoundBridge, come to think of it. Dan Rayburn: 28:18 So the lower-end model does; that's the one that's $149. The $199 model has storage in it and two USB ports. The original one you're talking about has no USB port, so you can't add additional storage, which is kind of a problem. Uh, you know, $200. 
You're really targeting the person who wants to build something at home. The enthusiasts, right? That's who you're targeting. I think that's great. Like, there's nothing wrong with that, but I, you know, I question, like, is that Nvidia's core business? No, it's not. But since they're making the chip inside, I get it. Their cost to produce that hardware is probably much cheaper than others, because they're not paying for the chips, since they own them. Um, but I don't think the hardware changes going forward. I do think we've seen an amazing amount of progress with smart TVs over the last five or six years. Dan Rayburn: 29:04 They actually work. Um, if you remember, five or six years ago you never wanted to launch an app on your smart TV, cause you didn't know how long it would take to load. Now they work really well. They're pretty seamless. I mean, on the new LG device that I just got, the remote's really well thought out. It's smart. Uh, it's clean and simple. There's not a lot of bloatware on it. That's the other thing: a lot of these smart TVs used to have so much bloatware, especially Samsung. They've gotten much better at reducing that, with removing what used to be mandatory ads. So I think the smart TV has gotten much better there. And I think for a lot of people that continues to be a device that grows down the line, because it's all integrated into one. And that's also part of the reason Amazon came out with the Cube, and now the second-generation Cube, you know, a really cool device that is voice-based and will automatically, when you say turn on Hulu, know how to change your input and know how to turn on your TV. It can also control your lights. We're starting to see more streaming services on these platforms that are being combined into the connected home. Dror Gill: 30:05 Right. And you see this with the Nvidia Shield TV, right? It connects to your Nest, to the Philips Hue, to Netgear, all of that. 
Dan Rayburn: 30:14 I think that that's the future, where some of this is going: these companies and platforms are no longer looking at a streaming service as an isolated service. It's one of multiple services in your house that provides entertainment or lighting or something of that nature. And the Cube is a really cool device. I've spent a lot of time with the Cube. Um, we recently, at the NAB Streaming Summit in October, had one of the executives on stage doing a fireside chat with me, really talking about the technology that went into it. And audio is really hard, and I don't think people understand on the audio side just how hard it is to do things on the voice side and actually have it work on the back end, and have it work quickly and in real time. Uh, I would say right now Amazon is by far leading the market when it comes to the technology that they have for voice-enabled applications. And you see that with the Cube, especially from first gen to second gen, and on Black Friday the price was down to $90. What do you think it's going to be next year? Right, it's probably going to be 70 bucks, you know, it'll just keep dropping. So yeah, I think that's pretty neat to see in our industry, just how streaming is now thought of as one of many things in the home that we're using for entertainment. Yeah.
Dror Gill: 31:24 And people are using voice? They actually got used to talking to their devices?
Dan Rayburn: 31:28 Well, from what we're hearing and the data we've been given.
Dan Rayburn: 31:30 Hulu at the show said that, uh, people who were using voice to find content, tied into Amazon's products, were watching 40% more Hulu. And it makes sense, because people know how to use their voice and they know what to say. When you're doing a search in, um, one of these services, do you put in the title? If what you put in is not the perfect title, do you still get the right results? Many times, no.
Whereas with your voice, it's much more natural in terms of how you're going to search for content.
Dror Gill: 32:00 The LG remote you mentioned earlier, it has like a single button. Then you talk to the remote and it automatically searches all the applications that you have installed on the TV and finds the content. Very simple.
Dan Rayburn: 32:12 Also, if you don't want to do that, the pointer system's very simple. If you don't instead want to have to type stuff in, they give you flexible options, which I like. As consumers, we all want options, and I think options are good. The downside to options, obviously, is too much choice, too much confusion, not being sure what the business model is. And that's why a lot of consumers are going to jump amongst these services in 2020, because when you can try them for a week or 30 days, why wouldn't you?
Mark Donnigan: 32:38 Well, Dan, I know you were telling us before we started recording about something really exciting you're doing at the NAB Show, um, around devices. So, um, why don't you tell us, you know, what you've got planned.
Dan Rayburn: 32:52 Yeah. So this, this is pretty cool. Um, and we're going to have some information on the website up pretty soon, and you'll see me announce it sort of everywhere.
Dan Rayburn: 32:59 But one of the problems I've always seen at conferences talking about our industry is, we're all there talking about video, but nobody is showing it. We're talking about devices, but nobody's getting hands-on with them. Nobody can see these platforms in action. And the three of us on the phone, we eat, sleep and breathe this industry. So we see all this stuff. We use all this stuff. But we're not the average consumer. We're not the average industry participant. So my idea here was: the NAB Show is the largest collection of people in the video world.
Maybe not all streaming, obviously a lot of traditional broadcast, but those are the people we actually have to educate even more than people in our industry. So what we're going to do in April is, for anybody who walked into the North Hall lobby, if you remember, there wasn't really much in the North Hall lobby.
Dan Rayburn: 33:45 There were some little booths and some other things. Well, we're going to take over the North Hall lobby and we're going to call it the Streaming Experience. And we're building out 12 living-room-style seating areas with large-screen TVs. And every single TV in all 12 locations is going to have Xbox, PS4, Apple TV, Chromecast, Roku, uh, what did I forget? It's basically going to be every hardware device in the market today of the seven that we talked about earlier. And then on each one of those, there are going to be 50 different OTT platforms that you can test, and these will be pay services, these will be AVOD services, these will be authenticated services. Think like a CBS Sports or something like that. And any attendee of the NAB Show can walk right in and say, you know, I really want to see what Netflix HDR looks like here compared to, you know, Amazon HDR, or I want to see what bundling of content looks like.
Dan Rayburn: 34:41 I want to see what UI and UX are like across these services. I want to see how the ad-supported services are doing pre-roll. I want to see what live sports personalization looks like. I want to actually test an Amazon Cube and see how good it is in terms of understanding voice recognition. So we're calling it the Streaming Experience. We're going to have it out for three days. It's going to be a place where people can also just come to get questions answered about these platforms. I'm going to personally have my folks manning every single one of the stations. Uh, and in addition, we're going to be giving away every single piece of hardware that we are installing during the event.
We're going to be giving that away after. So it's about $10,000 in gear, not including the TVs; those are rentals, but everything else, uh, that we're buying, we're going to be giving away.
Dan Rayburn: 35:33 So you're going to be able to get into some amazing raffles, some really good gear. And then in addition to that, we are also going to have a location in the middle of that area, the streaming pavilion, Oh, sorry, Streaming Experience, where you're going to be able to also test these streaming services on phones and tablets. Oh, that is awesome. And because we have to bring that experience in as well, we can't only think large screen, and, if all works out, hopefully we might even have 5G demos. So, these services working across 5G. So think of every service in the market, you know, all the live linear services, the on-demand services, the free services, the authenticated services. I basically challenge people to come to the Streaming Experience and find a service that we don't have on those devices. And we will have services from other countries.
Dan Rayburn: 36:26 It's not just going to be the US. I won't have everything. Obviously, there are some of these services that only work based on certain geo-fencing and certain locations. But we also already have some OTT providers who are saying, Hey, we're going to give you special accounts so that the services work for you as a demo, even if it's not available in that region. So we have a lot of OTT companies that are working with us. We've got some that are partnering with us on a sponsorship level to really promote the service. And the other thing we're going to do, for the companies that really want some feedback: we're going to have an attendee who comes up, and let's say they use Hulu's service for a couple of minutes and then they walk away. Before they walk away, we're going to say, Hey, fill out this quick card that has five questions on it.
Dan Rayburn: 37:09 Would you buy this feature or functionality? And then we're going to dump all that data back to the OTT platforms, because now they're going to collect thousands, hopefully, of real-world feedback points from customers who are using the service or thinking about using the service. So we want this to become a focal point for the show, where people can come and just talk about these services, see them, compare them, test them, win some of this product, uh, get their questions answered, and then also use it as a way to collect data for the industry, to share with the platform providers what is actually taking place. So I don't know of any other show that's doing it. It's something that I've been wanting to do for quite some time at this size and scale. And when you have the NAB behind it, and once they start promoting it, and we've got dedicated bandwidth for it...
Dan Rayburn: 37:55 So we're making sure the experience is really good, and I'm curating the entire thing, so I am going to make sure everything works beforehand. We're there days in advance. I've already bought all the devices for the event, months prior, right, where we had them... it's about 2,600 accounts you have to set up across all the devices. It's a big undertaking. This is, this is serious, but it's going to be a good, as we're calling it, experience. So whether you're in the advertising market and you want to see what ads look like, or you're in the compression business and you want to look at artifacting from one service to another, or you want to look at 4K and lighting and HDR, you want to come. I think UI and UX are super important, so all those people that come to the NAB Show that are doing design or creative UI and UX will come compare how they work between mobile and the larger screen.
So really, whatever industry you're in, and the NAB gets a lot of different people from different verticals and industries and regions of the world, this is going to be relevant to you in some way, shape or form, and you're going to be able to see it free of charge.
Dror Gill: 38:59 This really sounds amazing, Dan. It's kind of a combination of a playground that everybody wants to play with and also a way to experience, uh, all of these tests, right? And a way to experience a lot of things that you don't have access to, because nobody can buy all of that gear and get access to all of those services at the same time. So you can really come in and experiment and see video quality, as you said, UX, advertising integration, everything. And also be able to talk to people who are experts in this and can walk you through it. And the fact that you're feeding the information and the comments from, uh, from the visitors, you know, back to the services is really a great service to the industry, because then you can finally get those comments and, uh, that information back.
Dan Rayburn: 39:49 And we're also going to share it with the industry as a whole. We're definitely going to share, here are some of the highlights we've seen from what consumers have been saying. And the other way I'm looking at this too is, it educates two other portions of the market that are really important. It educates the media, because what's going to happen now is, when somebody wants to do an interview with Hulu, who's speaking at the show, and, you know, wants to talk about the platform, somebody from Hulu is going to be able to walk them to the Streaming Experience and actually show it to them, which means hopefully they actually get the coverage accurate. So it's really important that the media sees this stuff. And second, the other market that we have at the show is investors. There are a lot of investors at the NAB Show, institutional investors, and they don't get to see this stuff.
Dan Rayburn: 40:29 So when they're making predictions about stock and about revenue and loss and CapEx and OpEx and all these other things that they use to determine success or failure of companies, the best way to do that is to actually see the product in action. So now you're also going to have investors who are going to be able to get hands-on with this stuff, even from a high level, which is going to benefit them. So I think overall it just benefits the industry. It benefits the platform providers, the consumers, the media, the investors. Those are really the five vertical markets that I'm trying to target.
Dror Gill: 40:57 We need something like this, um, you know, as a permanent installation somewhere.
Dan Rayburn: 41:02 Yeah, maybe. I mean, I'm doing this with the NAB and that's, that's the exclusive, you know, group I'm working with now. I'm certainly not going to bring this to other conferences, but this is something that you're going to see now moving forward at the NAB Show in Vegas, for sure. New York is much more difficult to do this in, only because of unions and some other rules around that. But, uh, in Vegas, this is, you know, this is the NAB also planting a stake in the ground, going, listen, you know, last year you walked into the North Hall lobby and there was still so much of a focus on broadcast and traditional TV. Well, users are in for a, you know, wake-up when they walk in this time and go, wow, what is all this streaming stuff?
Mark Donnigan: 41:38 This is an amazing service that you're providing, Dan. Uh, and we're gonna promote it and encourage everyone, uh, you know, our customers and those that are in our sphere of influence, uh, to check it out, you know, really, cause this is, this is amazing.
Dan Rayburn: 41:52 I'm excited for it. It's a lot of work and it's a huge undertaking. It is a lot of work. Yeah. It scares me at times, just cause to do it right, it's, it's a lot of work.
Um, but I'm going to have a good, I'm going to have a good team. I'm going to be flying in some, uh, some of my buddies from the special operations community who are, who are tech guys, and they're going to come help me in the booth and whatnot. And, uh, it's going to be a good three days.
Mark Donnigan: 42:18 Well, Dan, uh, this has been yet another amazing interview. Thank you so much for coming on The Video Insiders.
Dan Rayburn: 42:26 Thank you for having me again. As you know, I can talk all day about this stuff, so it's a good thing you have to edit this down into something shorter.
Mark Donnigan: 42:30 The next time we have you on, uh, I think the timing will be good with some new, uh, things you have going.
Dan Rayburn: 42:41 There'll be some other new things in the new year that I can't talk about now, but yeah. The, the idea of wanting to inform the market more and providing more resources for the community, that's something that's coming up.
Dror Gill: 42:51 Great. So thanks again. Thanks again for joining us today.
Dan Rayburn: 42:54 Thank you guys.
Announcer: 42:55 Thank you for listening to The Video Insiders podcast, a production of Beamr Imaging, Ltd. To begin using Beamr's codecs today, go to beamr.com/free to receive up to 100 hours of no-cost HEVC and H.264 transcoding every month.

The Video Insiders
How Beamr scales subjective measurement using crowd sourcing.
Dec 14, 2019 · 39:22


Download: Rethinking Lossy Compression: The Rate-Distortion-Perception Tradeoff, published by the Technion–Israel Institute of Technology, Haifa, Israel. Authors: Yochai Blau, Tomer Michaeli.
Today's guest: Tamar Shoham
Related episode: E32 - Objectionable Uses of Objective Quality Metrics
The Video Insiders LinkedIn Group is where we host engaging conversations with over 1,500 of your peers. Click here to join.
Like to be a guest on the show? We want to hear from you! Send an email to: thevideoinsiders@beamr.com
Learn more about Beamr's technology

TRANSCRIPTION (Note: This is machine generated and may have been lightly edited)

Tamar Shoham: 00:00 No matter which application or which area of image compression or video compression we're looking at.
Tamar Shoham: 00:08 There is finally a growing awareness that without subjective testing you cannot validate your results. You cannot be sure of the quality of your video, because at least for now it's still people watching the video at the end of the day. We don't have our machines watching, at least not quite yet.
Dror Gill: 00:42 Hello everybody, and welcome to another episode of The Video Insiders. With me is my co-host Mark Donnigan. I'm really excited today because we have a guest that was on our podcast before, and today she's coming back for more. So I would like to welcome Beamr's own VP of Technology, Tamar Shoham, to the show. Hi Tamar, welcome to The Video Insiders again.
Tamar Shoham: 01:06 Hi Dror, hi Mark, great to be here again.
Dror Gill: 01:08 And today we're going to discuss with Tamar a topic which has been very hot lately, and this is the topic of video quality measurement. I think it's something that's very important to anybody in video. And we have various ways to measure quality: we can look at the video, or we can compute some formula that will tell us how good that video is. And this is exactly what we're going to discuss today.
We're going to discuss objective quality measurement and subjective quality measurement. So let's start with the objective metrics. Tamar, can you give us an overview of what an objective metric is, and what are the most common ones?
Tamar Shoham: 01:55 Fortunately, the world of video compression has come a long way in the last decade or so. It used to be very common to find video compression evaluated using only PSNR. So that's peak signal-to-noise ratio, which basically is just looking at how much distortion, in terms of MSE (mean square error), there is between a reconstructed compressed video and the source. And while this is, you know, a very easy-to-compute metric, and it does give some indication of the distortion introduced, its correlation with subjective or perceptual quality is very, very low. And everybody knew it: most papers, I'd say up till about a decade ago, started with, you know, "PSNR is a very bad metric, but it's what we have, so we're going to show our results on a PSNR scale." I mean, everybody knew it wasn't a good way to do it, but it was sort of the only way available.
Tamar Shoham: 02:58 Then objective metrics started coming in. So there was SSIM, the structural similarity index, which said, Hey, you know, a picture isn't just a group of pixels; it has structures, and those are very important perceptually. So it attempts to measure the preservation of the structure as well as just the pixel-by-pixel difference. Then multi-scale SSIM came into the arena, and it said, well, it's not only the structure at a particular scale; we want to see how this behaves at different scales of the image. So that's multi-scale SSIM, and it's actually not a bad metric for getting an impression of how distorted your video is. Netflix did a huge service to the industry when they developed and open-sourced their VMAF metric a few years back. And this was a breakthrough for two reasons.
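The pixel-level metrics Tamar describes here can be sketched in a few lines of numpy. This is a minimal illustration, not any production implementation: PSNR follows the standard definition, while `global_ssim` collapses SSIM to a single whole-frame window (real SSIM averages the same formula over small local windows, and multi-scale SSIM repeats it across resolutions).

```python
import numpy as np

def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means less pixel distortion."""
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

def global_ssim(ref, dist, peak=255.0):
    """Simplified single-window SSIM over the whole frame.

    Keeps the luminance/contrast/structure terms of SSIM but skips the
    local-window averaging that the real metric performs.
    """
    x = ref.astype(np.float64)
    y = dist.astype(np.float64)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2  # standard stabilizers
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

ref = np.arange(16, dtype=np.float64).reshape(4, 4) * 10.0
dist = ref + 10.0  # uniform shift of 10 levels: MSE is exactly 100
print(round(psnr(ref, dist), 2))        # 28.13 dB for MSE=100 at 8-bit peak
print(round(global_ssim(ref, ref), 4))  # 1.0 for identical frames
```

As the discussion notes, both numbers track pixel distortion; neither says much by itself about how the video actually looks to a viewer.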
The first is that almost all the metrics used before to evaluate the quality of video were image metrics, and they were actually only measuring the per-image quality. We're not looking at a collection of images; we're looking at video. And while there were a few other attempts, there was WAVE, I think, by Alan Bovik's group, and a few other attempts.
Dror Gill: 04:28 So VMAF basically takes several different objective metrics and combines them together. And this combination is controlled by some machine learning process?
Tamar Shoham: 04:40 VMAF was a measure that incorporated a temporal component from day one, so that's one place that really helped. The second place is, when you're Netflix, you can do a lot of subjective testing as a part of the process of developing the metric and verifying it and calibrating it. And essentially the way they did it was by using existing powerful metrics such as VIF, adding, as we said, the temporal component and additional components, but then fusing them all together. That's where the F in VMAF comes from: fusing them together using a sophisticated machine learning, neural-network-based model. So that was a big step forward, and we now do have an objective measure that can tell us, you know, what the quality of the video is across a large spectrum of distortion. They did a large round of subjective testing and graded the quality of distorted videos using actual users. And then they took the results of a few existing metrics.
Some of them were shaped slightly for their needs, and they added a pretty simple temporal component, and then took, for each distorted video, the results of these metrics and essentially learned how to fuse them to get as close as possible to the subjective MOS score for that data.
Mark Donnigan: 06:19 One of the questions I have, Tamar, is about the Netflix library of content that they used to train VMAF. You know, it's entertainment focused, kind of, you know, major Hollywood movies, but there are things like live sports. Does that mean that VMAF works equally well, you know, with something like live sports? Which I actually don't know, maybe Netflix trained on it, but that's certainly not a part of their regular catalog. Or do we know if there's some content that, you know, maybe it needs some more training on, or it's not optimized for?
Tamar Shoham: 06:54 Yeah. So Netflix has been very upfront about the fact that VMAF was developed for their purposes, using clips from their catalogs and using AVC encoding with the encoder that they commonly use, to create these clips that were distorted and evaluated subjectively and used to create the models. Which means that it may not apply as well across the board for all codecs, or all configurations, and all types of content. That's something that we actually hope to discuss with Netflix in the immediate future, and maybe work together to make VMAF even better for the entire industry. Another issue with VMAF, and it's interesting that you mentioned live and sports, is that its computational complexity is very high. If you are Netflix and you're doing offline optimization and you've got all the compute power that you need, that's not a limitation.
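The fusion step Tamar describes, learning a mapping from several metric scores to a subjective MOS score, can be sketched with made-up numbers. VMAF itself fuses its features with a trained support-vector regressor; in this toy sketch, plain least squares stands in for the learned model, and every score below is hypothetical:

```python
import numpy as np

# Hypothetical training data: per-clip scores from three objective metrics
# (columns) and the mean opinion score (MOS) collected from viewers (1-5).
features = np.array([
    [0.92, 0.60, 0.95],
    [0.75, 0.80, 0.40],
    [0.55, 0.30, 0.70],
    [0.35, 0.50, 0.20],
])
mos = np.array([4.6, 3.8, 2.9, 1.7])

# Fit per-metric weights plus a bias by least squares: the simplest
# possible "fusion" of individual metrics into one MOS-like score.
X = np.hstack([features, np.ones((len(features), 1))])
w, *_ = np.linalg.lstsq(X, mos, rcond=None)

def fused_score(metric_scores):
    """Predict a MOS-like score from a list of individual metric scores."""
    return float(np.append(metric_scores, 1.0) @ w)

print(round(fused_score([0.80, 0.70, 0.60]), 2))
```

The real model is far more sophisticated (non-linear, trained on large subjective datasets, with temporal features), but the shape of the problem is the same: regress viewer opinion from metric outputs.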
In any case these are all objective metrics and as I said, you know, they go from the bottom of the scale in both performance required to compute them and reliability or correlation with subjective opinions and up to VMAF, which is probably today top of the scale for a correlation with subjective quality. But it's also very heavy to compute. But all of these metrics have one thing in common. Unfortunately, there are a number, they measure distortion. They're not a subjective estimation or evaluation of perceptual quality. That's a good point. Yeah. So I recently had the pleasure of hearing a very interesting PhD dissertation by Yochai Blau at the Technion under the supervision of professor Tomer Michaeli.Tamar Shoham: 09:10 And the title of his work is the perception distortion tradeoff. And what he shows there is he shows both experimentally with two sets of extensive experiments that they performed and mathematically using modeling of perceptual, indication of quality and statistical representations for that versus the mathematical model of various distortion metrics. And he shows in that work that it's sort of mutually exclusive. So if you're optimizing your solutions specifically, for example, a neural net based image processing solution. If you're optimizing for distortion, you're going to have a less acceptable perceptual result. And if you're optimizing for perception, you're inherently going to have to slightly increase the distortion that you introduce. And there's like a convex hall curve, which finds this trade off. So mathematical distortion, you know, a minus B, no matter how sophisticated your distance metric is, is inherently opposing in some ways to perception. Because our HVS, our human visual system is so sophisticated and does so much mathematical operation on the data that the distance between the points or some transform or wavelit done on these points can never fully represent what our human visual system does to analyze it. 
And, and that's, I mean, a fascinating work. I think it's the first time that it's been proven mathematically that this convex hall exists and there is a bound to how well you're going to do perceptually if you're optimizing for distortion and vice versa.Dror Gill: 11:07 And I think we also see this in in, in video compression. For example, in the open source x264 codec and also other codecs. You can tune the codec to give you better PSNR results or better SSIM results, you can use the tune PSNR or SSIM flag to to actually optimize or configure the encoder to make decisions which maximize those objective metrics. But, it is well known that when you use those flags, subjective quality suffers.Tamar Shoham: 11:43 Yup. Yup. That's an excellent point. And, and continuingly that x264 and most other codecs generally have a PSY or a psycho-visual rate distortion mode. And as you said, it's well known that if you're going to turn that on, you're going to drop in your PSNR and your objective metrics. So it's something that has been known. You know what the reason I, I'm, I'm very vocal about this work at the Technion is it's the first time I'm aware of that it's being proven mathematically, that there's like a real model to back it up. And I think that's very exciting because it's something, you know, we've known for a while and, and now it's actually been proven. So we've known it, but now we know why! For the nerds among us, we can prove that if it's even mathematical, it's not just a known fact.Tamar Shoham: 12:28 This is coming up everywhere. And there's growing awareness that, you know, the objective metrics and perception are not necessarily well correlated. But I think in the last month I've probably heard it more times than I've heard it in the five years before that it's like there's really an awareness. 
Just the other day when, when Google was presenting Stadia, Khaled (Abdel Rahman), one of the leads on the Stadia project at Google specifically said that, you know, they were testing quality and they were doing subjective testing and they had some modifications that every single synthetic or objective measure they tried to test said there was no difference and yet every single player could see the difference in quality.Dror Gill: 13:21 Hmm. Wow.Tamar Shoham: 13:23 No matter which application or which area of video compression and you know, image compression, video compression that we're looking at. There is finally a growing awareness that without subjective testing, you cannot validate your results. You cannot be sure of your quality of the video because at least for now, it's still people watching the video. At the end of the day, we don't have our machines watching it for us quite yet.Dror Gill: 13:49 And Tamar, I think this would be a good point to discuss how user testing is done subjective testing, but actually it's, it's user opinion testing is done. I know there are some there are some standards for that?Tamar Shoham: 14:05 Right. So I think one of the reasons we're seeing more acceptance of subjective testing in recent years is that, originally there were quite strict standards about how to perform a subjective testing or visual testing of video. And it started with the standard or the recommendation by ITU, which is BT.500 which was the gold basis for all of this. And it defines very strict viewing conditions, including the luminance of the display, maximum observation angle, the background chromosome chromaticity. So you have to do these evaluations against a wall that's painted gray in a specific shade of gray. You need specific room illumination, monitor resolution, monitor, contrast. 
So it was like, if you wanted to really comply with this subjective testing standard, there was so much overhead and it was so expensive that, although there were companies, you know, that specialized in offering these services, it wasn't something where a video coding developer would say, Oh, okay,
Tamar Shoham: 15:16 you know, I'm going to test now and see what the subjective opinion, or what the subjective video quality, of my video is. And I think two things happened that helped this move along. One is that more standards came out, or more recommendations, which isn't always a good thing, but in this case the newer documents were less normative, less constraining, and allowed easier subjective user testing. In parallel, I think people started to realize that, okay, it doesn't have to be all or nothing. Okay, if I'm not going to do a rigorous BT.500 test, that doesn't mean I don't want to collect subjective opinions about what my video looks like, and that I won't be able to learn and evolve from that. At Beamr, we have a very convenient tool which we developed called Beamr View, which allows you to compare two videos side by side, played back in sync, and really get a feel for how the two videos compare.
Tamar Shoham: 16:23 So while the older methods were very, very rigorous in their conditions, and it was very difficult to do subjective testing and conform with these standards, at some point we all started realizing that it doesn't have to be black and white. It doesn't have to be either you are doing BT.500 subjective testing by all those definitions, or, you know, you're just not doing any subjective testing. And using our tool Beamr View, which I presume many of you in the industry use, we often compare videos side by side and try to form a subjective opinion, you know, comparing two video coding configurations, or checking our perceptually optimized video to make sure it's perceptually identical to the source, et cetera.
And then the idea came along: okay, what if we took this Beamr View tool, added a bit of API and some backend, and made this into a subjective test that was trivial for an average user to do, in their own time, on their own computer?
Tamar Shoham: 17:27 Okay. Because if it's really easy, and you're just looking at two videos playing back side by side, and someone else is taking care of opening the files and doing the blind testing, so you don't know which video is on which side, and you just have to, you know, press a button and say, Oh, A looks better, the left looks better, or the left looks worse, that makes the testing process very, very easy to do. So at this point we developed what we nicknamed VISTA. VISTA basically takes Beamr View, which is a side-by-side video viewer, and it corresponds with a backend that says, okay, these are the files I want to compare. And the user just has to look at it and say, Hmm, I don't know yet, replay. Hmm, yeah, A definitely, you know, the left definitely looks worse. So I'm going to press that button, and then you get fed the next pair.
Tamar Shoham: 18:25 So we're making this visual side-by-side comparison really, really easy to do. And that was the first step to making large-scale subjective testing a reality that we could actually use. And if before, you know, it was, Oh gee, I've got two configurations and I want to know which looks better, and you'd have to go and pay a company to do BT.500 testing and get results two weeks later, well, now at least we had a tool that we could use internally in the company: get a few of our colleagues together and say, okay, you know, run this test session, let me know what you think. And while it's true that this wasn't scaling yet, you know, so we would collect five or 10 opinions over our set of 10 or 20 clips that we were testing, we could always complement these evaluations with objective measures.
But no matter how many objective measures you measure, okay, you're always going to get, for example, going back to the example we mentioned before: if you're turning on psy-RD, you can run a thousand tests; PSNR is going to be lower.
Dror Gill: 19:35 Yeah. And that's the problem. I mean, the advantage of objective metrics is that they can scale infinitely, right? You can run thousands of servers on AWS that would just compute the objective metrics all day, but they're not very accurate; they don't reflect real user opinions. And on the other hand, if you want to run subjective testing, or user testing, at scale, you have a problem, because either it costs very much, as you need to go to those dedicated labs, or, if you do it internally, you know, with a few people in the company, it's not a large enough sample. And another problem with doing it with your colleagues is that they are not average users. Most of them are either golden eyes, or, after working for a few years in a video company, they become golden eyes. And you want people that don't just compare videos all day, people who are really average users.
Tamar Shoham: 20:30 Exactly. So you highlighted the exact three problems that we set out to solve. So we have this tool that, you know, allowed for very easy comparison of video, but how are we going to scale it? How will we be able to do that cheaply? And how would we get an average user? Because average users are very slippery beings. Even someone that is an average user today, after they've watched hours and hours and days of video and comparisons, starts to pick up on the nuances between the compressed or processed video and the input, and then they're broken: they're not an average user anymore. But at some point you just want to know, okay, what will the average user think? So we took this VISTA and, you know, how do you solve today a problem of "I need lots of average users"? Crowdsourcing.
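The blind side-by-side mechanic described above can be sketched as follows. All names here are hypothetical and this is not Beamr's actual implementation; it only illustrates the idea that the backend randomizes which file plays on which side, and that each recorded vote is mapped back to the underlying file before analysis:

```python
import random

def make_trial(file_a, file_b, rng):
    """Assign the two files to left/right at random so the viewer is blind."""
    left_is_a = rng.random() < 0.5
    return {
        "left": file_a if left_is_a else file_b,
        "right": file_b if left_is_a else file_a,
        "left_is_a": left_is_a,
    }

def record_vote(trial, side):
    """side is 'left' or 'right'; returns which underlying file won ('A'/'B')."""
    picked_left = side == "left"
    return "A" if picked_left == trial["left_is_a"] else "B"

rng = random.Random(7)  # fixed seed so the sketch is reproducible
trial = make_trial("clip_source.mp4", "clip_optimized.mp4", rng)
print(record_vote(trial, "left"))
```

The point of the random left/right assignment is that a viewer with a side bias (always favoring the left window, say) contributes noise rather than a systematic error against one of the files.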
And we specifically went with Mechanical Turk, which gives you access to practically an endless supply of average users.

Dror Gill: 21:35 Amazon Mechanical Turk. If somebody doesn't know this platform: basically, in the same way that you can launch computer servers on the internet, you can launch actual users, right? You can set up a task, and real people from all over the world can bid on this task and perform it for you.

Tamar Shoham: 21:54 Yeah. And it's really amazing to see the feedback we get from some of these users, because there are all kinds of tasks on Amazon Mechanical Turk and some of them, you know, might not be as entertaining; here, we're just paying people to look at videos and express an opinion. So we also try, where possible, where we have control over the content selected, to choose videos that, you know, are visually pleasing or interesting. And people tend to really enjoy these tasks and give quite good feedback. We also have a problem with repeat users, workers that want to do our tasks again and again, and it's actually interesting to watch the curve of how they become more and more professional and can detect more and more mild artifacts as they repeat the process. So we're actually adding now some screening process that, you know, understands whether we're looking at a user who already has very practiced opinions or if they are still an average user. But we do need to verify our users.

Tamar Shoham: 23:08 Because, I mean, this is Mechanical Turk, so you know, how do you know if the user is doing the test thoroughly, or if they're just choosing randomly? They do have to go through the entire process of the test, so they have to play the videos at least once. But, you know, what if they're just choosing randomly, or have even managed to configure some bots to take this test for them?
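To make the Mechanical Turk mechanics concrete: a task is published as a "HIT" (Human Intelligence Task) with a handful of parameters such as the reward, the number of distinct workers wanted, and a question pointing at the task UI. Below is a hedged sketch of assembling those parameters for a video-comparison task. The dictionary keys follow the real MTurk CreateHIT API, but the helper function, the URL, and the specific values are invented for illustration; this is not Beamr's code:

```python
def build_comparison_hit(pair_url, reward_usd=0.50, max_assignments=25):
    """Assemble keyword arguments for one side-by-side comparison HIT.

    pair_url is a hypothetical page hosting the synchronized A/B player.
    The resulting dict could be passed to boto3's MTurk client as
    client.create_hit(**kwargs).
    """
    question_xml = (
        '<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/'
        'AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">'
        f"<ExternalURL>{pair_url}</ExternalURL>"
        "<FrameHeight>720</FrameHeight>"
        "</ExternalQuestion>"
    )
    return {
        "Title": "Compare two short video clips",
        "Description": "Watch two videos side by side and say which looks better.",
        "Reward": f"{reward_usd:.2f}",          # MTurk expects a string amount in USD
        "MaxAssignments": max_assignments,       # how many distinct workers we want
        "LifetimeInSeconds": 24 * 3600,          # how long the HIT stays listed
        "AssignmentDurationInSeconds": 30 * 60,  # time a worker has to finish
        "Question": question_xml,
    }
```

One HIT per comparison pair, with MaxAssignments controlling how many independent opinions are collected, is one natural way to map the test sets described here onto the platform.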
So we prevent that by interspersing user validation tests. These are sort of like control questions, where we know there is a visible degradation on one of the sides, but it's not obvious, not something that only PSNR would pick up on, and only users that get those answers right will be included in the pool, and only their answers will be incorporated. So, you know, what we do is we launch these HITs, which is the name of a task on Amazon Mechanical Turk, and people register through the HITs, complete them, get paid, and we start collecting statistics. So first we weed out any session where either there were problems with the application and they didn't complete, or they just chose not to complete, or they didn't answer the user validation questions correctly. And then we have our set for statistical analysis, and then we can start collecting the opinions and, very cheaply and very quickly, get a reliable subjective indication of what the average user thought of our pairs of videos.

Mark Donnigan: 24:51 This is really interesting, Tamar. I'm wondering, do we have some data on how these average user test results correlate? Do they correlate pretty closely to what a "golden eye", you know, would also pick up on? I mean, you did mention that some of these people have become quite proficient, so they're almost becoming trained just through completing these tasks. But, you know, I'm curious: if someone is listening and saying, okay, this sounds really interesting, but my requirements are, you know, for someone who fits more of a golden eye profile, are we finding that these quote-unquote average users' results line up pretty closely with what a golden eye might see?

Tamar Shoham: 25:46 So it depends on the question you're posing. When we start a round like this of testing, the first thing you need to do is pose a question that you want to answer.
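The weeding-out step described above reduces to a small filter: keep only sessions whose answers on the hidden control pairs all match the known ground truth, then strip those control pairs before analysis. A sketch, where the data shapes are assumptions rather than the actual VISTA schema:

```python
def screen_sessions(sessions, validation_truth):
    """Keep only sessions that answered every hidden validation pair correctly.

    sessions: {worker_id: {pair_id: vote}}
    validation_truth: {pair_id: correct_vote} for the control pairs.
    Returns the surviving sessions with the control pairs removed, so only
    real comparisons enter the statistics.
    """
    valid = {}
    for worker, votes in sessions.items():
        if all(votes.get(pair) == answer for pair, answer in validation_truth.items()):
            valid[worker] = {p: v for p, v in votes.items() if p not in validation_truth}
    return valid

# Worker w1 spots the planted degradation on pair v1; worker w2 does not.
sessions = {"w1": {"v1": "A", "p1": "B"}, "w2": {"v1": "B", "p1": "A"}}
kept = screen_sessions(sessions, {"v1": "A"})  # only w1's real votes survive
```

Tuning how subtle the planted degradations are is then exactly the "shade of gold" knob discussed next: harder control pairs admit only more sensitive viewers.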
For example, I have configuration A of the encoder and configuration B. Configuration B is a bit faster, okay, but I want to make sure it doesn't compromise perceptual quality. So that's one type of question. And in that type of question, what you're trying to verify is: are the videos in set A perceptually equivalent to the videos in set B? And in that case, you may not want the opinion of a golden eye, because even if a golden eye can see the difference, you might be better off as a streaming content provider to go with a faster encode that 95% of your viewers won't be able to distinguish.

Tamar Shoham: 26:44 So, sometimes you really don't want to know what the golden eye thinks; you want to know what the average viewer is going to do. But we actually can control the level of, I guess, how professional, or what shade of gold, our users are. And the way we can do that is by changing the level of degradation in the user validation pairs. So if we have a test where we really only want to include results from people who have very high sensitivity to degradation in video, we can use user validation pairs where the degradation is subtle, and if they pick up on all of those user validation pairs, then we know that the opinion they're offering us, you know, is valid. I need to emphasize, maybe I didn't make it clear: these user validations are randomly inserted along the test session.

Tamar Shoham: 27:38 The user has no idea that there is anything special about these pairs. Do we know of any other solution that works like this? Have you come across anything? So, we've come across another similar solution. It's called Subjectify.us, coming out of MSU, Moscow State University. And I presume everyone in the field, you know, has heard of the MSU video comparison reports that they give out annually to compare different implementations of video codecs.
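The "is set A perceptually equivalent to set B" question above maps naturally onto a paired-vote significance test: if neither side wins significantly more forced-choice votes than chance would explain, the two configurations are treated as equivalent. A rough sketch using an exact two-sided binomial sign test, with ties dropped; this is one standard way to frame the decision, not necessarily the statistic Beamr uses:

```python
from math import comb

def sign_test_p(a_wins, b_wins):
    """Exact two-sided binomial sign test: the probability of a vote split at
    least this lopsided if viewers were really choosing at random (p = 0.5)."""
    n = a_wins + b_wins
    k = max(a_wins, b_wins)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

def verdict(a_wins, b_wins, alpha=0.05):
    """Translate vote counts into the two answers the episode describes:
    'A looks like B' (equivalence) or 'one side looks better'."""
    if sign_test_p(a_wins, b_wins) >= alpha:
        return "perceptually equivalent (no significant preference)"
    return "A preferred" if a_wins > b_wins else "B preferred"
```

For instance, a 6-to-4 split is well within chance, while 9-to-1 is a significant preference at the conventional 5% level.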
And it seems that they went through the same path that we did, saying, okay, you know, we've got metrics, but we really need subjective opinions. And they have a solution that is actually a service that you can apply to and pay for, where you give them your images or a video that you want to compare and they perform similar kinds of tests for you.

Tamar Shoham: 28:45 In our solution, we have many components that are specific to the kinds of questions that we want to answer, which might be a bit different. But it's actually very encouraging to see similar solutions coming out of other institutions or companies, because, you know, it means that this understanding is finally dawning on everyone: A) you do not have to do BT.500-compliant testing to get interesting subjective feedback on what you do; B) this should be incorporated as part of codec development and codec optimization. And, you know, we're not in the days where you can publish a paper and say, I brought the PSNR down and therefore it is by definition good. No, it has to look good as well.

Dror Gill: 29:42 And I think the MSU, the latest report, I'm not sure about the previous ones, they have two reports comparing codecs: one of them with objective metrics and one of them with subjective. So I guess they developed this tool, Subjectify.us, first for internal use, so they could use it when comparing codecs in the tests they do, and then they decided to make it available to the industry as well.

Tamar Shoham: 30:05 Yeah. And, you know, I don't see it as competition at all. I see it as synergy, all of us figuring out how to work correctly, you know, in this field of video compression and video streaming, and a recognition that, sure, we want to make better objective or numerical metrics, because that's an indispensable tool, but it can never be the only ruler that we measure by. It's just not enough.
You need the subjective element, and the more, you know, solutions out there to do this, the better I think it is for the video streaming community.

Mark Donnigan: 30:47 What if somebody wanted to build their own system? Because this isn't a commercial offer, although many of our customers have suggested that we should offer it that way; at this time, you know, we're not planning to do that. So how would someone get started, you know, if they listen and say, wow, that's a brilliant idea, Mechanical Turk, but how would I build this?

Tamar Shoham: 31:13 Okay. So I think the answer is in two parts. The important part, if you're going to do video comparison, is the player and the client. And that's something that, if you're starting from scratch, is going to be a bit challenging, because over the years we've invested a lot of effort in our Beamr View player, and, you know, there aren't a lot of equivalent tools like that. You need a very reliable client that can accurately show the video frames side by side in motion, you know, if that's what you're testing, frame-synchronized and aligned. I mean, we originally did our first round of subjective testing, which actually was BT.500 compliant, as we did it in a facility that had all the required gray-painted walls and calibrated monitors. And we did that for images.

Tamar Shoham: 32:13 That was for Beamr's JPEGmini product, and building a client that compares two images is quite easy and straightforward. But building a client that reliably displays side-by-side video in synchronized playback might be the biggest obstacle to, you know, some company saying, that's cool, I want to do this. Then you have the second part, which is, you know, the backend: creating the test sets.
We put a fair bit of thought into how to create test sets such that, you know, we can easily rule out unreliable users and get good coverage over the video comparisons that we want to make, to be able to collect reliable statistics. So that's like a coding task.

Mark Donnigan: 33:03 But the point is there's logic that has to be built around that, and you have to really put thought into, you know, how you are going to either weight or screen out someone's result.

Tamar Shoham: 33:15 Definitely, definitely. So you have the part of, you know, having a client; you have the design of the test sets on the backend, building those test sets so that you get a good result; and then you have the third part, which is collecting all the data and making sense of it, doing a statistical analysis, understanding, you know, what the confidence intervals are for the results that you've collected, and whether maybe you need to collect more results in order to be happy with your confidence level. So there are, you know, elements here; some of them are design, understanding how to build it, and some of them are coding challenges. And then you have the client, which, you know, you need to create. So it's not a trivial thing to build from scratch. Given the components and the understanding that we had, it was quite doable with reasonable investment, and, you know, now we're reaping the benefits.

Dror Gill: 34:17 And it's really amazing. You know, for me, each time we want to test something, to check some of our codec parameters, which one is better, or to compare two versions of an encoder, etc.,
you know, you can launch this test and basically overnight, I mean, the next morning, you can come in and you'll have 100, 200 user opinions, whatever your budget is, averaged, that give you a real answer, based on user opinions, of which one is better.

Tamar Shoham: 34:53 It's an invaluable tool. So literally, if before, you know, we would be able to look at two or three clips and say, yeah, I think this is a good algorithm and, you know, this makes it look better, now, as you said, overnight you can collect data on dozens of clips over dozens of users, get an opinion, and really integrate it into your development cycles. So it really is very, very useful.

Mark Donnigan: 35:21 And you know, there's an application that comes to mind. I'm curious if we have used the tool for this, or if we know someone who has, and that is for determining optimal ABR ladders. Is that an application for VISTA?

Tamar Shoham: 35:38 So, I mean, as I said before, basically it's a matter of selecting your question before you start designing your test. And what we have built, and maybe haven't mentioned yet, we call it auto VISTA. It says that if I have a question, I can go from question to answer basically by, you know, pulling the big lever on the machine, because we have a fully automated system that says: this is the encoder I want to test with this configuration, that's A; the second encoder or configuration I want to test is B. You know, take these configuration files, take these inputs I want to work on, and do the rest. And it will set up EC2 instances on Amazon AWS, perform the encodes, create the pairs, send that to the backend, create the test sessions, start a launch round of the testing, and enable, you know, access to the database to collect the results.

Tamar Shoham: 36:51 So with that, it's basically just about posing the question.
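The confidence-interval analysis Tamar described a moment ago can be illustrated with a Wilson score interval on the preference proportion, a common choice for binomial vote data (the episode doesn't say which interval Beamr actually computes):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (z=1.96 gives ~95%).

    E.g. successes = votes saying 'A looks better', n = total valid votes
    after screening. Returns (lower, upper) bounds on the true proportion.
    """
    if n == 0:
        return (0.0, 1.0)  # no data yet: the proportion is unconstrained
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return (center - half, center + half)
```

With 60 of 100 votes preferring A, the interval is roughly (0.50, 0.69): wide enough that you might want another overnight round of HITs before declaring a winner, which is exactly the "collect more results" decision mentioned above.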
So if the question you want to answer is, I can either get this layer or I can get that layer, which of them looks better, then yes, you can use VISTA: you know, create a set that corresponds to one ABR ladder, create a set that corresponds to another, and you would need to build the pairs correctly, what you consider a pair for this comparison, but that's again just technicalities. Basically, it works for any task that says: I want to compare this pair, does set A look like set B, or does set A look better than set B? Those are the two kinds of questions that we can answer. And, you know, we've invested a fair bit of effort in making it as easy to use as possible, so that it's practical to use it in answering our development questions.

Mark Donnigan: 37:52 Well, I think we just exposed to the entire industry what our secret weapon is!

Tamar Shoham: 37:59 You know better than that, Mark. It's just one of our secret weapons!

Mark Donnigan: 38:03 And you know, I think we should give you an opportunity to extend an invitation, because I think you are wanting to pull together your own episode and interview?

Tamar Shoham: 38:15 This is a shout out to all you women video insiders, and we know you're out there. If you'd like to come on for a regular podcast interview on the amazing things you are doing in streaming media, we're very, very happy for you to reach out to either Dror, Mark, or myself, so we can arrange an interview. And if some of you don't feel comfortable, or are not allowed to expose your trade secrets on the air, then we're also thinking of looking into a special episode on what it means to be a woman in the video insiders' world.

Thank you, Tamar, for joining us on this really engaging episode.

Tamar Shoham: Thanks so much for having me again.

Narrator: 39:01 Thank you for listening to The Video Insiders podcast, a production of Beamr Imaging Ltd. To begin using Beamr's codecs today:
Go to beamr.com/free to receive up to 100 hours of no cost HEVC and H.264 transcoding every month.

The Video Insiders
VVC, HEVC & other MPEG codec standards.

The Video Insiders

Play Episode Listen Later Dec 6, 2019 59:19


Resources:
Download HEVC deployment statistics document here: JCTVC-AK0020
Related episode: E08 with MPEG Chairman Leonardo Chiariglione
The Video Insiders LinkedIn Group is where we host engaging conversations with over 1,500 of your peers. Click here to join
Like to be a guest on the show? We want to hear from you! Send an email to: thevideoinsiders@beamr.com
Learn more about Beamr's technology

The Video Insiders
Live Encoding Beyond 32 Million Pixels for VR

The Video Insiders

Play Episode Listen Later Oct 4, 2019 50:16


Rob Koenen, Co-Founder of Tiledmedia, discusses the latest advancements in HEVC VR encoding with The Video Insiders. You will learn about video encoding issues relating to HEVC tile encoding, 8K, MP4 metadata optimization for high-resolution files, and more.
Join the conversation by jumping into The Video Insiders LinkedIn Group.
If you would like to be a guest on the show, send an email to thevideoinsiders@beamr.com.
For more podcast episodes, click here.
Learn more about Beamr's technology.
Today's guest: Rob Koenen.

This Week in Computer Hardware (Video HI)
TWiCH 534: AMD's EPYC CPU & Gaming on Intel's Ice Lake - The 1st CPU to encode real-time 8K HEVC video

This Week in Computer Hardware (Video HI)

Play Episode Listen Later Sep 19, 2019 65:05


Patrick Norton is joined by Jim Tanous, Managing Editor at PC Perspective, to chat about AMD's EPYC 7742 server processor, the encoding company Beamr, Corsair's VENGEANCE LPX DDR4 Memory, Intel's Ice Lake graphics leap, the Core i9-9900KS TDP leak, Huawei's new Mate 30 Pro phone, Facebook's 2nd-gen Portal products, and more!
Host: Patrick Norton
Guest: Jim Tanous
Download or subscribe to this show at https://twit.tv/shows/this-week-in-computer-hardware.
Send your computer hardware questions to twich@twit.tv.
Sponsor: plex.tv/twit code TWIT10