Device or software for encoding or decoding a digital data stream
Victoria Snowfields - https://www.visitvictoria.com/see-and-do/outdoor-and-adventure/snow-and-skiing/melbournes-snowfields
Australian Shark Attacks - https://cosmosmagazine.com/nature/marine-life/australia-a-hotspot-as-shark-attack-deaths-rise/
Zone Rouge (France) - https://en.wikipedia.org/wiki/Zone_rouge
British Cloud Seeding - https://www.yourweather.co.uk/news/science/cloud-seeding-an-experiment-with-unpredictable-consequences-weather-manipulation-climate.html
NATO - https://www.nato.int/
Devon Flood - https://en.wikipedia.org/wiki/Lynmouth_Flood
Veganism - https://en.wikipedia.org/wiki/Veganism
Beyond Meat - https://www.beyondmeat.com/en-US/
Guinness - https://www.guinness.com/en-us
Veganuary Challenge - https://veganuary.com/en-us/
Isinglass - https://en.wikipedia.org/wiki/Isinglass
Apiary - https://bestbees.com/2022/12/06/what-is-an-apiary/
Vegan Kit-Kat - https://veganessentials.com/products/kit-kat-vegan-chocolate-bar
Nestle Chocolate - https://www.nestle.com/
Microplastics - https://oceanservice.noaa.gov/facts/microplastics.html
Teflon - https://thegoodhuman.com/what-teflon-is-and-why-you-should-avoid-it/
Dark Waters (film with Mark Ruffalo) - https://www.rogerebert.com/reviews/dark-waters-movie-review-2019
Dupont - https://www.dupont.com/
Erin Brockovich - https://abcnews.go.com/US/erin-brockovich-real-story-town-decades/story?id=78180219
Jalapeno - https://pepperscale.com/jalapeno-peppers/
Hawaiian Pizza - https://thecozycook.com/hawaiian-pizza/
Why Tomatoes Were Thought To Be Poisonous - https://www.smithsonianmag.com/arts-culture/why-the-tomato-was-feared-in-europe-for-more-than-200-years-863735/
Nightshade Vegetables - https://www.realsimple.com/health/nutrition-diet/what-are-nightshade-vegetables
Queue (British) - https://www.thoughtco.com/cue-and-queue-1689358
Abel Tasman - https://en.wikipedia.org/wiki/Abel_Tasman
Ivanhoe (Town in New South Wales) - https://www.abc.net.au/news/2019-02-09/outback-town-ivanhoe-fights-for-survival/10723852
British Vs. American Car Terminology - https://www.hemmings.com/stories/2014/03/12/a-conversion-guide-to-british-auto-terminology
British Vs. American Spelling - https://blog.collinsdictionary.com/language-lovers/9-spelling-differences-between-british-and-american-english/
Thongs Vs Flip Flops - https://threadcurve.com/flip-flops-vs-thongs/
Thongs Vs G-strings - https://www.thelist.com/614429/when-should-you-wear-a-g-string-or-a-thong/
Long Nose Vs Cab Over Trucks - https://www.carscoops.com/2023/04/this-is-why-america-stopped-building-european-style-cab-over-trucks/
Road Train (truck) - https://blog.smartsense.co/road-trains
Kangaroo Road Accidents - https://www.dinggo.com.au/blog/kangaroo-accident-statistics
Coyotes In Boston, Mass - https://www.nbcboston.com/news/local/more-coyotes-are-creeping-into-boston-neighborhoods-heres-why/2301397/ https://www.nbcboston.com/news/local/coyote-sightings-in-jamaica-plain-have-residents-on-edge/3106545/ https://www.cbsnews.com/boston/news/cohasset-coyotes-video-pack/
Blue Heeler - https://www.thesprucepets.com/blue-heeler-4176567
Dingo - https://australian.museum/learn/animals/mammals/dingo/
Rabies - https://www.who.int/news-room/fact-sheets/detail/rabies
Beaver Restoration - https://www.beaverinstitute.org/library_category/stream-restoration/
Various Species of Carp - https://eatingthewild.com/types-of-carp/
Silencers (Suppressors UK) for firearms - https://www.americanrifleman.org/content/suppressors-what-we-can-learn-from-the-uk-experience/
Calicivirus - https://www.merck-animal-health-usa.com/condition/feline-calicivirus
Snowshoe Hare - https://www.nwf.org/Educational-Resources/Wildlife-Guide/Mammals/Snowshoe-Hare
Weasels - https://www.mentalfloss.com/article/64193/7-fierce-facts-about-weasels
Wolverine - https://www.nwf.org/Educational-Resources/Wildlife-Guide/Mammals/Wolverine
Ferret - https://www.ferret-world.com/ferret-facts/types-of-ferrets/
Polecat - https://www.wildlifetrusts.org/wildlife-explorer/mammals/polecat
Bobcat - https://a-z-animals.com/animals/bobcat/
Lynx - https://animalcorner.org/animals/lynx/
Mountain Lion - https://www.nwf.org/Educational-Resources/Wildlife-Guide/Mammals/Mountain-Lion
Black Angus Beef - https://www.thespruceeats.com/what-to-know-about-angus-beef-333745
Heating Pad For Screen Repair - https://www.ifixit.com/products/cpb-heating-pad-for-screen-and-battery-replacement
Raspberry Pi 400 - https://www.raspberrypi.com/products/raspberry-pi-400/
Micro HDMI - https://www.howtogeek.com/745530/hdmi-vs-mini-hdmi-vs-micro-hdmi-whats-the-difference/
MX Linux - https://mxlinux.org/
Raspberry Pi 5 - https://www.raspberrypi.com/products/raspberry-pi-5/
Orange Pi - http://www.orangepi.org/
Transcoding - https://corp.kaltura.com/blog/what-is-transcoding/
CODEC - https://www.howtogeek.com/763274/what-is-a-codec/
Geekworm Raspberry Pi Cases - https://geekworm.com/collections/raspberry-pi
Seeed Studio - https://www.seeedstudio.com/
Argon One Pi Case - https://argon40.com/collections/raspberry-pi-cases
Big Dipper and Meatball catch up about their travels in Boise and New York, while trying to understand the psychology of hot guys. Then they are joined by Gem and Julio from Haus of Codec to talk about the good work they are doing in Providence, Rhode Island. Plus, they listen to some of your unhinged voicemails. And congrats to former guest of the pod, Tia Kofi on her Drag Race win!!
Get your tickets to Sloppy Seconds Live on April 9th in Brooklyn with Special Guests: Macy Rodman and Theda Hammel from Nymphowars! www.thesultanroom.com
If you are able, DONATE to the Haus of Codec at www.hausofcodec.org
And if you are in Rhode Island check out “modiste: Anything But Modest” hausofcodec.org/events
Listen to Sloppy Seconds Ad-Free AND One Day Early on MOM Plus
Call us with your sex stories at 213-536-9180! Or e-mail us at sloppysecondspod@gmail.com
FOLLOW SLOPPY SECONDS
FOLLOW BIG DIPPER
FOLLOW MEATBALL
SLOPPY SECONDS IS A FOREVER DOG AND MOGULS OF MEDIA (M.O.M.) PODCAST
Learn more about your ad choices. Visit megaphone.fm/adchoices
Another Alexandre has arrived in issue 3 of POD: Alexandre Héraud. Le Pod Village and POD 2 were very well received, so we told ourselves we had to keep going… but… covid forced us all to change our habits… we thought it wouldn't last… and it lasted. But we held on. Alexandre Héraud had come to the 2019 salon with his LEM in hand (a super-solid French microphone) and his Zoom recorder to produce a piece on the salon for his podcast « en roues libres ». I found his way of interviewing so natural that I wanted to know more about him. He had the presence of a little mouse: there without being there, getting us to talk in a simple, humble way. I loved his universe so much that I asked him to produce vox-pop segments for the Salon de la Radio 2020, and he did it in exemplary fashion. I'll play one of them for you. I have followed his path since, and lockdown led us to record a conversation remotely. I told him about a free application that radio stations were using for their live broadcasts and that was ideal for remote podcasts: Cleanfeed. If you don't know it, it's Cleanfeed.net, C L E A N F E E D, a free codec that records multitrack WAV in Chrome… for more than two people it's paid… and here is what we talked about… listen closely… I had told him I would publish this interview one day, but that I didn't know when… it was time… Discover all the bonuses and the genesis of POD and Podcast Magazine by subscribing to Podcast Magazine and its bonuses: https://www.podcastmagazine.fr/boutique/ Join our WhatsApp group and tell us about yourself: https://chat.whatsapp.com/Lm3msD6VCmfGzLr3Adidpn
Let's head into tactical espionage action in Hideo Kojima's masterpiece, Metal Gear Solid! Galinha puts together a crack squad with Marcelo from Multipop and Kate from Gamer Como a Gente to infiltrate Shadow Moses and talk a bit about this video game revolution! Metal Gear Solid is a landmark for several reasons: its game design and gameplay practically inaugurated a whole new genre of games, and its audiovisual side, with cinematic cutscenes and unforgettable sound effects, set a new standard. Tune your Codec to our frequency and listen to this episode on the sly.
GALINHA LINKS
Support Galinha on Catarse: catarse.me/galinhaviajante
Follow Galinha on Instagram and Twitter: @galinhaviajante
Galinha's website: galinhaviajante.com.br
Galinha's Twitch: twitch.tv/galinhaviajante
Galinha's shop: loja.galinhaviajante.com.br
Contact Galinha: cast@galinhaviajante.com.br
♪ SOUNDTRACK
Snake Eater (8-Bit Big Band)
Main Theme, VR Training, Discovery, Encounter, The Best Is Yet To Come - Metal Gear Solid OST (Harry Gregson-Williams)
Even Flow (Pearl Jam)
Sandstorm (Darude)
Never Gonna Give You Up (Rick Astley)
Chega (Gabriel O Pensador)
Shadow Moses (Bring Me The Horizon)
Clash On The Jazzy Bridge (Quasar)
EDITING: Samuel R. Auras @samucarohling
COVER ART: Samuel R. Auras @samucarohling
00:00:00 - Episode Opening
00:04:41 - Hideo Kojima and the Beginnings of Metal Gear
00:23:50 - Tactical Espionage Action
00:55:32 - Audiovisual and Crazy Lore
01:29:57 - Episode Closing
Galinha goes on air every week thanks to the Escudeiros da Galinha Viajante, our dear supporters on Catarse: Wagner Schiffler, Fausto Guimarães, Carlos Kopperschmidt, Thiago Esgalha, Matheus Menuci, Eduardo de Castro, Fábio Queiroz, Alexsandro Schneider, Marcela Versiani, Yan Amorim, Camila Candomil, Renan Ramos, Felipe Fernandes, Cecília Schiffler, Mario Buzete, Claudia Marcarini, André Gomes, Lucas Nicholas, Daniel Barboza, Evandro Poppi Júnior, Ágata Sofia, Cleiton Oliveira, Jair Cerqueira, Matheus Carvalho, Ernesto Melo, Eduardo Pontes, Guilherme Garcia, Henrique de Souza, Marcelo Delgado, Adriel Pizetti, Paulo Nagahara, Maria Luisa Puyol, Lucas Toso, André Montibeller, Bruno Techeira, José Antônio, Amanda Evangelista, Schneider de Souza, Vinícius Bento, Leandro Rodrigues, Kate Schmitt, Lucas Nicolau, Adriano Ramos, Vitor Estevan, Lucas Lanza, Leo Izidoro, Gabriel Menino, Ludmam Alves, Ivan Machado, Rafael Parrini and Vortex Indie Games.
Contact: cast@galinhaviajante.com.br
Support the Show.
In this feature Drew Morahan, Head of Business Change and Design at Codec, one of Ireland's leading IT companies, explores the importance of understanding the human factor in digital transformation projects and how change management could be the key to unlocking success. Digital transformation is a complex and sometimes fraught undertaking. With so many elements to consider, it can be a mammoth task. When rolling out a new technology or way of working, one element that is often underestimated is the human factor. If those undertaking the transformation project are not fully bought into it from the outset, digital transformation will not be successful. In fact, a study by McKinsey in 2021 reported that 70% of digital transformation projects fail, with one of the key contributing factors being organisations failing to get employees on board.
How to achieve successful Digital Transformation
A great example of this oversight was a project I worked on a few years ago as part of a team tasked with implementing a new company-wide technology solution for a major retail bank, which included updating all their computer systems. However, a change management consultation with staff, and with customers, revealed that the biggest challenge for the bank was customer wait times. No amount of technology implementations or upgrades to systems was going to help address the queues within branches, or the frustration of customers waiting to access a kiosk to undertake their banking requirements. On the back of this consultation, it was concluded that having handheld tablets that staff could use around the branch would resolve much of this problem. They could attend to customers swiftly and move around rather than having a long queue of customers on their feet, waiting for prolonged periods of time. Without consulting the people at the heart of the business, the new technology roll-out would have had limited success. Upon implementation, staff and customers alike would have found that their pain points still existed and the investment in the new technology would have been largely wasted. It's easy to forget that humans are at the centre of technology: they design it, they pay for it, and they are the ones that use it. But so often, organisations fail to consult the people involved and blindly implement fantastic, state-of-the-art solutions that still fall short of their intended outcome by underestimating the human factor in digital transformation.
Stakeholder Analysis
Humans are hard-wired to resist change. It's coded into our DNA as a safety net and, for many people, it's hard to override. When it comes to digital transformation, we encounter different groups of people, all with their own approach. In these instances, stakeholder engagement is key. To be able to engage with stakeholders, it's vital to fully understand who your stakeholders are within a project, and the various categories that they fall under. Working out who within the business are your 'allies' and who are the 'lifers' (those dedicated to the current state of play) is crucial; involving both groups in the design process, including those who are reticent to change, can help when going through change management. Workshops are crucial to better understand their needs and concerns and to have an open dialogue on ways to overcome these.
A stakeholder analysis sounds very formal, but it's simply an opportunity to look at the human side of the project; getting to know those people who might be impacted by the project to gain a better understanding of their nuances can be the difference between a successful adoption and a failed project. Decision-makers are vital for every part of the transition. Rather than having one senior member of staff that needs to approve all decisions or be part of every conversation, having 'boots on the ground' team members who are empowered to make decisions that will impact their part of the business is invaluable and increases the likelihood of a smooth and on-schedule ...
This week, Mike is joined by Evan and Codec to discuss the Geonosians' debut, as well as examples of building lists to deal with armor and horde-style armies. We then roundtable on the best and worst non-Force-using commanders in the game.
Elias Garcia, from Miami, is a DJ, producer, and the owner of Sceptre Records. With an Infra Boston residency, Elias continues to captivate audiences with his signature sound. His music is a compelling fusion of raw, deep, hypnotic techno, with influences drawing from the realms of sci-fi and minimalism. Central to his DJ sets is the iconic TR-09 drum machine, infusing his performances with a unique and captivating sonic character. Sharing the stage with legends such as Jeff Mills, DVS1, Richie Hawtin, and Robert Hood, Elias has demonstrated his remarkable talent and versatility. His discography boasts releases on esteemed labels like Modularz, Subsist, Analog Solutions, Codec, and Ominidsic, further solidifying his reputation as a producer to watch. Elias Garcia's journey is a testament to his dedication to pushing the boundaries of techno, and his future in the electronic music landscape promises to be as exhilarating as his music. Tracklist via -Spotify: http://bit.ly/SRonSpotify -Reddit: www.reddit.com/r/Slam_Radio/ -Facebook: bit.ly/SlamRadioGroup Archive on Mixcloud: www.mixcloud.com/slam/ Subscribe to our podcast on -iTunes: apple.co/2RQ1xdh -Amazon Music: amzn.to/2RPYnX3 -Google Podcasts: bit.ly/SRGooglePodcasts -Deezer: bit.ly/SlamRadioDeezer Keep up with SLAM: fanlink.to/Slam Keep up with Soma Records: fanlink.to/SomaRecords For syndication or radio queries: harry@somarecords.com & conor@glowcast.co.uk Slam Radio is produced at www.glowcast.co.uk
We reach the emotional and mind-bending finale of MGS 2: a fourth-wall-breaking treatise on video games, free will and human nature that shaped who Luke is. But before that we have to meet, rescue and talk about Otacon's cute step-sister. We talk about: Thirsty Suitors, Huey, Star Wars, G -1, Emma, Snake Is Bad At Computers, Lil Liquid, The Artistic Need To Show Hog, The Colonel Talks To Us, We Get A Katana, Geoff Keighley is Raiden, Swag Solidus, Ocelot's Snake Harem, 9/11, Got The Zoomies, We Don't Like The Romance, Stew, Early Internet
The Golem.de editors Sebastian Grüner, Oliver Nickel and Martin Wolf dive into the history of Bluetooth.
Christian was on site at Connect 2023. Ben sat in front of the livestream. Was the event a swan song for VR? And why does Ben think a Lex Fridman podcast was the better Connect? Both tried out the Quest 3: great or fail? How good is the mixed reality, and can Assassin's Creed Nexus convince? Are the Ray-Ban | Meta smart glasses any good? Questions upon questions that only the big MIXEDCAST talk about Connect week can answer! Note: Since Christian was in the USA during the recording, the delay got in our way, which is why we unfortunately keep talking over each other. We apologise for the inconvenience.
This week, we are joined by Anthony Nelson, a seasoned audio forensic expert from Codec Forensics, known for his expertise in unravelling audio mysteries and more. It was discovered that we didn't have the best version of the 911 call, so many sounds and noises are considered to be uncertain. One thing is certain, however: a woman's voice can be heard. Why has no one mentioned a woman throughout this investigation? Why only "three men"? Join us as we peel back more layers of this intricate onion to hopefully expose what really happened to Grant Solomon.
DISCLAIMER:
The following podcast episode features commentary on a 911 call, breaking down the background noise of the call. The guest commentator is considered to be an expert in audio forensics; however, it was concluded that the audio version he was sent wasn't of the best quality to confidently assess each question.
Accuracy of Information: The information presented in this podcast episode is based on the available evidence, reports, and public records. While efforts have been made to ensure accuracy, details may be subject to errors or omissions. Listeners are encouraged to conduct their own research as well.
Opinion and Speculation: Throughout the podcast, there may be instances where opinions and/or speculation are expressed regarding certain events, individuals, or circumstances. These are the personal perspectives of the podcast hosts or guests and should not be taken as conclusive or factual statements.
EPISODE NOTES:
http://www.justiceforgrantsolomon.com/
https://www.codecforensics.com/
https://www.linkedin.com/in/anthonyfnelson/
We are now almost 3 years into the current gaming hardware generation! On this first episode back, the Codec Call crew discusses where each platform is at this far into the generation. You can subscribe to Codec Call on Spotify, Apple Podcasts, and Google Podcasts! You can follow us on Twitter: @ZTargeting2016 @lukedolla23 @FoxDye89 @braugaming @crazzero Soundcloud: ztargeting IG: ztargeting2016 Email us at: ZTargeting2016@gmail.com Music: soundcloud.com/apextony
The new trial could bring 4K to digital terrestrial TV. Some lucky users are receiving Google's new Pixel Fold a month early. This week Mister Gadget Daily is produced in collaboration with iRobot. Learn more about your ad choices. Visit megaphone.fm/adchoices
Original Recording Date: Wednesday Dec 18, 2019 In this interview from the SHIRO! ARCHIVE, we sit down with Dr. Eric Ameres, the former CTO of Duck Corporation (later On2 Technologies) to discuss the TrueMotion video codec, used in various Saturn & Dreamcast games and beyond... https://segasaturnshiro.podbean.com/e/season-4-ep-3-duck-tales-with-dr-eric-ameres-of-duck-corporation Dr. Eric Ameres' YouTube Channels: https://www.youtube.com/@UCzhVmOq43r5prl7u9M72wuA https://www.youtube.com/@UCkeZ4_KPp_bj7jlciHr9IVw https://www.youtube.com/@UCi7TsMas2UEJ6EqohW53zcg https://www.youtube.com/@UCxgygbwe0JFl9KC-dM1YU9g Twitter: https://twitter.com/eameres https://twitter.com/erasermice https://twitter.com/learningmax RPI: https://faculty.rpi.edu/eric-ameres Linkedin: https://www.linkedin.com/in/eameres Facebook: https://www.facebook.com/eameres
Tieline: The Codec Company is an entity many radio and television industry owners and executives are likely familiar with. But they may not even know what a Codec is. What's the quick explanation as to how this Australia-based company's products aid the production and technical side of a broadcast media property's operations? We asked Jacob Daniluck, who is an Indianapolis-based Technical Sales Specialist for Tieline, serving the Americas alongside Doug Ferber. In this InFOCUS Podcast, presented by dot.FM, Daniluck shares more about how Tieline works with audio content creation and distribution companies, how its new MPX product received an NAB Product of the Year honor, and how, in layman's terms, native Livewire+ in high-density Gateway codecs can streamline integration into a Livewire system.
In this week's episode I speak to Lianre Robinson, CEO at Codec.
Join us whilst we talk about the key things you need to set you up to have an honest conversation and why there isn't only one way to get to your goal.
Lianre also shares her thoughts about what it's like to become a CEO and the importance of having a support network.
Key takeaways include:
● Why we must have honest conversations
● Choosing to stay mainly remote working and the learnings from this
● What it feels like to be a CEO
Lianre is an award-winning digital marketer, who is passionate about technology, culture, communities, and their impact on brands.
With 20+ years' experience across global companies like LADbible, Livity and Jack Morton Worldwide, she has a wealth of experience in designing and delivering new business and diversification strategies to drive growth and optimise business performance.
Useful links:
website: https://www.thechangecreators.com
linkedin: https://www.linkedin.com/in/joannahowes/
For Leadership and team coaching and training, you can message me at joanna.howes@thechangecreators.com and we can book a call.
website: https://www.thechangecreators.com
linkedin: https://www.linkedin.com/in/joannahowes/
youtube: https://www.youtube.com/channel/UC2kZ-x8fDHKEVb222qpQ_NQ
In today's #One2One we explain the four factors you should take into account when making your selection:
• Factor #1: Price
• Factor #2: Storage
• Factor #3: Finishing
• Factor #4: Your editing hardware
Remember, if there's a term or process you'd like to understand, you can send a DM to our Instagram: @wrapitup.mexico. I'm Marco Cabrera and we'll hear each other next Tuesday on Wrap It Up! #ONE2ONE, our basic guide to understanding the audiovisual ecosystem. Thank you for supporting this project; we promise to keep creating quality content for all of you. While our podcast is free, you can buy us a coffee or a beer by subscribing to WIU!plus https://plus.acast.com/s/wrap-it-up-one2one. Hosted on Acast. See acast.com/privacy for more information.
©️ 2022 Gain Records | Gain Plus www.gainrecords.com #WeAreWhatWePlay #Dreamtechno
Jeff Gerstmann returns to discuss installing a new graphics card and how putting a 4090 in his machine revealed an insidious software issue. Then we'll talk monitors, RTX Remix, The Game Awards, the latest in the ongoing merger wars, and the very idea of NINE Bayonetta games??? That seems like a lot of Bayonetta games.
Jeremy Parish, Kat Bailey, and Shane Bettenhausen connect via Codec to debate the ultimate truth of the entire Metal Gear saga: Which Metal Gear was best? Spanning the full history of the franchise, the canonical answer lies within! Retronauts is made possible by listener support through Patreon! Support the show to enjoy ad-free early access, better audio quality, and great exclusive content. Learn more at http://www.patreon.com/retronauts
My guests this week are Mike Dickey and Russ Gavin, of JackTrip Labs. The JackTrip Foundation is a non-profit collaboration between Stanford University's Center for Computer Research in Music and Acoustics (CCRMA) and Silicon Valley software entrepreneurs. JackTrip Labs provides a low-latency collaboration tool that makes it easy for musicians to perform together online. Mike Dickey, who's the CEO of JackTrip Labs, founded his first company in high school, and dropped out of Carnegie Mellon to become a full-time entrepreneur. Since then, Mike has built and sold three startup companies. His latest venture was Cloudmeter, which Splunk acquired in 2013. Prior to co-founding JackTrip, Mike held various leadership roles at Splunk focusing on Engineering, Architecture, Infrastructure and Product Management. Besides being the co-founder and COO of JackTrip Labs, Dr. Russ Gavin is also the Director of Bands at Stanford University. A lifelong advocate for accessible music education, Russ has taught music in K-12 and collegiate environments, continuously seeking and creating opportunities to utilize technology in the music classroom. His research publications have appeared in the Journal of Research in Music Education, the International Journal of Music Education, Psychology of Music, the Journal of Music Teacher Education, and the International Journal of Clinical and Experimental Hypnosis. During the interview we spoke about how JackTrip Labs started, the things that contribute to latency in online collaboration, the musicians who are the most sensitive to latency, why zero latency can actually be disconcerting to some players, why conference calling apps won't cut it for music, and much more. I spoke with Russ and Mike via zoom. On the intro I'll take a look at what TikTok's decreasing revenue target means, and Facebook's new audio data compression codec.
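Since the episode turns on why latency matters so much for networked music, here is a rough, back-of-the-envelope sketch of a one-way latency budget. It is not from the interview and is not how JackTrip computes anything; the buffer sizes, the network figure and the 10 m comparison are illustrative assumptions only.

```python
# Illustrative latency-budget arithmetic for networked music (assumed figures,
# not JackTrip's). Small audio buffers plus network time are compared with the
# acoustic delay two musicians already tolerate standing about 10 m apart.
SPEED_OF_SOUND_M_PER_S = 343.0  # approximate speed of sound in air

def buffer_latency_ms(frames_per_buffer: int, sample_rate_hz: int) -> float:
    """Delay contributed by one audio buffer, in milliseconds."""
    return 1000.0 * frames_per_buffer / sample_rate_hz

def one_way_budget_ms(buffers: int, frames: int, rate_hz: int, network_ms: float) -> float:
    """Capture/playback buffering plus one-way network time."""
    return buffers * buffer_latency_ms(frames, rate_hz) + network_ms

if __name__ == "__main__":
    # Assumption: 64-frame buffers at 48 kHz (~1.3 ms each), one per end,
    # plus 12 ms of one-way network time.
    budget = one_way_budget_ms(buffers=2, frames=64, rate_hz=48_000, network_ms=12.0)
    same_room = 1000.0 * 10.0 / SPEED_OF_SOUND_M_PER_S  # ~29 ms for 10 m in air
    print(f"one-way budget: {budget:.1f} ms vs. 10 m of air: {same_room:.1f} ms")
```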
Links
Shotcut video editor website
Useful shortcut keys for the Shotcut video editor
C = copy
V = paste
A = duplicate
X = ripple delete
Ctrl + X = ripple delete but send to clipboard
S = split
Tip not covered in my podcast
Splits are not fixed and can be adjusted. Once you've split up clips and put them in the right order on the timeline you can still adjust the cut point even though you previously split the clip, because the clip is referenced to the original file in the playlist.
Introduction
Hello and welcome Hacker Public Radio audience, my name is Mr X, welcome to this podcast. As per usual I'd like to start by thanking the people at HPR for making this podcast possible. HPR is a community-led podcast provided by the community for the community; that means you can contribute too. The HPR team have gone to a great deal of effort to simplify and streamline the process of providing podcasts. There are many ways to record an episode these days using phones, tablets, PCs and the like. The hardest barrier is sending in your first show. Don't get too hung up about quality, it's more important just to send something in. The sound quality of some of my early shows wasn't very good. If I can do it anyone can, and you might just get hooked in the process.
Well it's been almost a year since I've sent in a show. Looking at the HPR site my last episode was back in November 2021. I suspect, like many others, life has become more complicated and I find I have much less spare time, and because I have much less spare time I have much less time to pursue my hobbies, and because of this I have less to speak about, and because of this I have less time to record what I've been doing, and it all turns into a vicious circle. Fortunately I recently had some time off work and had a lovely holiday. During the holiday I ended up recording some video which I decided I wanted to edit. I've done some video editing in the past using various video editing packages, the best and most recent of which is Shotcut.
Specific details and equipment
Video resolution 1920 x 1080, Codec h264 mpeg-4, Frame rate 30 frames per second.
Computer Dell Optiplex 780, fitted with 4 GB of internal RAM and onboard video graphics card.
Shotcut version 22.06.23
Shotcut is a free open-source cross-platform video editor licensed under the GNU General Public Licence version 3.0.
This episode will only cover basic Shotcut video editing techniques. Shotcut contains many advanced features and effects that will not be covered in this episode. A lot of the workflow I'll share with you today is intended to get around limitations imposed by my low-spec PC. I'll try my best to cover the video editing process in this podcast using words alone; however I am conscious that an accompanying video would make it easier to follow along.
Shotcut workflow
Start by creating a folder to hold all the required media files. Audio tracks and sound effects can be added to this folder later. Make sure all your video files are using the same frame rate, in my case 30 frames per second (a quick way to check this is sketched after these notes). Open each video file in VLC one at a time, going through each video file looking for the best portions of video. Make a note of where the best portions of the video are by writing down the start and end points in minutes and seconds. I do this because the interface of VLC is more responsive than Shotcut and the resolution of displayed video is far greater than the preview in Shotcut. This makes it quicker and easier to find the best portions of video.
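Before importing anything, it can save a re-edit later to confirm that every clip really does share the same frame rate. The following is a minimal sketch, not from the episode: it assumes Python 3 and that ffprobe (part of FFmpeg) is installed and on the PATH, and the file names are hypothetical.

```python
# Check that all source clips report the same average frame rate with ffprobe.
import json
import subprocess
import sys

def frame_rate(path: str) -> float:
    """Return the average frame rate of the first video stream in `path`."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=avg_frame_rate", "-of", "json", path],
        capture_output=True, text=True, check=True,
    )
    num, den = json.loads(result.stdout)["streams"][0]["avg_frame_rate"].split("/")
    return float(num) / float(den)

if __name__ == "__main__":
    # Usage: python check_fps.py holiday1.mp4 holiday2.mp4 holiday3.mp4
    rates = {path: frame_rate(path) for path in sys.argv[1:]}
    for path, fps in rates.items():
        print(f"{path}: {fps:.2f} fps")
    if len({round(fps, 2) for fps in rates.values()}) > 1:
        print("Warning: clips use different frame rates; conform them before editing.")
```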
Open Shotcut and make sure the new project is set to the same frames per second as the media files you're working with, in my case 30 frames per second. You can check the frame rate of your project by looking at the selected video mode in the new project window. If you select automatic it will ensure the project resolution and frame rate automatically match that of your media files.
Start by adding all the video files to the playlist. This can be done in a number of ways, for example by clicking on the open file button in the top toolbar or within the open files menu. Alternatively you can drag and drop files into the playlist. I find this to be the easiest way to add media files to a project. Once this is done save your project.
Drag the first file from the playlist to the timeline, making sure that the start of the video starts at 0 seconds. Click on the timeline in the position where the first start point of interest is needed. Use the S key to split the video at this point. Don't worry about being too accurate as this can be moved at a later stage. Repeat this process for the end point of interest. Repeat this again for all the other sections of start and end points of interest.
Remove the unwanted sections of video by clicking on a section then hitting the delete key. This will remove the unwanted section leaving an empty space behind. Once all the unwanted sections are removed, click on the sections of video and pull them to the left to close the gaps up. I find it useful to leave some space between the good sections of video as it makes it easier to see where splits are and makes it easier later on to rearrange the order of the individual clips.
Check the start and end points of the remaining sections of video to see that the start and end points stop in the correct place. You can do this by clicking the play button on the preview window. The video start and end points can be adjusted by dragging the section left or right in the timeline section; this is where leaving spaces between each section of video can be handy as it allows for fine tuning.
Add a new blank video track to the timeline to hold the next video. Note this wasn't required when adding the first video track but it is needed for each subsequent track. A video track can be added by right clicking on an empty portion of the timeline and selecting add video track. Alternatively use the Ctrl + I key.
Drag your second video from the playlist onto the newly created blank video track in the timeline. As before make sure that the start of the video starts at 0 seconds. Before previewing any section of the second video track, click the small eye-shaped hide icon in the left section of the first video track labelled output. This will prevent previewing both video tracks at the same time.
Repeat the process above of chopping the second video track into sections using the S key to split the video up. Remove the unwanted sections. Finally adjust the start and end points of the remaining sections.
Repeat the steps above to add the remaining video files one at a time from the playlist to the timeline. When complete you end up with separate video tracks in the timeline, each containing good sections of video.
At this stage I can't be too specific about how to continue as there are a number of different options depending on your particular project.
You can for example start by combining the good sections of video into one video track by dragging them from one track to another, then add an audio track if required. Or you can add the audio track first and then try to sync things up to the audio track, moving bits and pieces of video into one video track, remembering to hide the unwanted sections of video by clicking on the small hide eye icons. Don't do too much editing without saving the project. If you get a message about low memory, save the project then reopen it.
To export the final video click on the export button in the toolbar. I pick the default option, which creates an H.264/AAC MP4 file suitable for most users and purposes. You can check the frame rate is the same as your original media files by clicking on the advanced tab. Click the export file button and give it a file name. It may take some time to create the export file. This will be dependent on the speed of your computer and the length and resolution of your project.
While Shotcut is far from perfect, on my puny PC it is surprisingly usable and stable and is the best option I've found so far. Finally here are some general Shotcut tips I have when doing video editing on a puny PC with limited RAM, a slow processor and a built-in graphics card such as mine.
General tips when working with a low-powered PC
Close all open applications leaving only Shotcut open; this helps with RAM usage.
Shotcut is surprisingly stable with a feeble PC such as mine. I would still recommend saving your project regularly as it is quick and very easy to do. If you get a message about running out of RAM then try not to do too much more editing before saving the project. Once saved, close Shotcut and then reopen it.
The longer your project is and the higher your project resolution, the more RAM you will need.
When you are about to export your final video, save the project, close Shotcut, reopen Shotcut and immediately export your project, as any previous editing may be taking up precious RAM.
Be patient when clicking on the timeline to reposition the play head. Always wait for the preview window to update. This can sometimes take a few seconds.
When trying to sync video to audio you need to zoom in quite a long way before getting an audio preview. When doing this and moving the play head you'll get a choppy version of the audio; with this it is still perfectly possible to find the beat of the music, allowing you to sync your video to the music. If this doesn't seem to work for you then try zooming in closer.
Ok that's about it for this podcast. Hope it wasn't too boring and it made some sense. If you want to contact me I can be contacted at mrxathpr at googlemail. Thank you and goodbye.
In this fascinating episode, Jane and Martin discuss how to build a relationship between the community and a brand and why this is so important in our current climate of social media marketing. Martin shares some of the biggest challenges he has had to overcome and how he has done so, going on to achieve a number of exits, £40m in investment and £100m in revenue in the last 10 years. Martin Adams is the Founder of Codec.ai, a cultural intelligence platform that helps build community-driven brands: fusing AI with human imagination to tap in to pockets of culture that fuel growth. Do you have systems in place to help you understand your audience? ABOUT THE HOST: Jane Bayler is a serial entrepreneur, investor, speaker, event host and business scale up expert. She had a 20 year history in global media and advertising, before becoming a serial entrepreneur herself, with multiple businesses in real estate, marketing and education. Having grown and sold a £6M brand identity business to US communications group Interpublic, today she is most passionate about and committed to serving other entrepreneurs – helping them grow their businesses and achieve their best lives. Enquire about working 1:1 with Jane, book a call here: https://bit.ly/2Z07DML Join Jane's free Masterclass to discover her Triple C HyperGrowth system - to scale up your business and attract your ideal clients, here: https://idealclientsuccess.com/masterclass
Lianre is CEO at Codec, a cultural intelligence platform. She is an award-winning digital marketer, who is passionate about technology, culture, communities and their impact on brands. With 20+ years experience across global companies like LADbible, Livity and Jack Morton Worldwide, she has a wealth of experience in designing and delivering new business and diversification strategies to drive growth and optimise business performance. She is truly passionate about talent development, DE&I and increasing team happiness and performance. She is a member of leading industry organisation WACL (Women Advertising & Communications Leadership) and a proud mentor for the She Says ‘Who's Your Momma' program. Follow Us INSTAGRAM - www.instagram.com/activeintworld TWITTER - twitter.com/ActiveIntlUK KARIM - twitter.com/karimkanji PODCAST WEBSITE - www.thewhatsnextpodcast.com The podcast is brought to you by Active International, a global leader in Corporate Trade within the Media & Advertising industry.
Tom's bio: Based in London, prior to Metaphysic, Tom founded OmniSci (previously MapD), the world's fastest database and first GPU in-memory analytics engine, backed by Tiger Global, NEA, In-Q-Tel, NVIDIA and Google. He is co-founder of Codec.ai, a content marketing analytics tool used by Redbull, Unilever, L'Oreal, Nestle and more. --- Support this podcast: https://anchor.fm/crypto-hipster-podcast/support
If you're reading this, odds are good you've used VLC before. The most capable video player out there got its start in surprising ways, and on this ep we're joined by project founder Jean-Baptiste Kempf to talk about both VLC's origins and everything else, from '90s MPEG2 decoder hardware to the French Minitel system, the state of modern DRM and upcoming video codecs, VideoLAN's business model, friction with Apple on the App Store, and plenty more.The FOSS Pod is brought to you by Google Open Source. Find out more at https://opensource.google
The 16:9 PODCAST IS SPONSORED BY SCREENFEED – DIGITAL SIGNAGE CONTENT If a company wants to hang its business hat on the proposition that it is very good at visualizing real-time data to screens, it helps to have a big, very familiar client that heavily uses that sort of thing. A small New York City start-up called Zignage has that in the New York Stock Exchange - providing and maintaining a platform that shows the numbers and trends charting on screens around the hyper-kinetic trading floor in Wall Street. The company grew out of an NYU media lab and spent its first few years working mostly behind the curtains, developing signage and data-handling capabilities to software firms and end-user clients. But a few years ago, the company made the decision to develop a brand and start selling its data-centric capabilities directly to end-users. I had a great chat with Alex Epshteyn, the CEO and Founder of the company, about how it got started, where its headed, who it all serves, and how there can be a huge gulf between software shops that can take a number from a shared data table somewhere, and running mission-critical, hyper-secure visualizations on a stock exchange floor. Subscribe to this podcast: iTunes * Google Play * RSS TRANSCRIPT Alex, thank you for joining me. Can you give me a rundown on what Zignage is all about, how they started and how long you've been around? Alex Epshteyn: Absolutely. Thank you for having me, Dave. Zignage started in 2009 formally, and we started at the NYU Incubator while I was doing my graduate work at the Media Lab in NYU and suffice to say the company was more interesting than the graduate works. So I started doing that, even though I'm from the east coast and this doesn't typically happen, it kinda happened here. So initially, conceptually, we were gonna get into the digital out home space and we were gonna build an auction backend that people can bid for spots on digital signs. So kinda a slightly novel idea, especially in digital signage and we couldn't do a big enough raise, and then we found a number of these sort of remnant advertising platforms coming into the market and we decided, since I have a pretty good little black book of enterprise clients, and we built the platform to about 50% at that point, in mid to end of 2009, let's try our hand at some enterprise folks, and what ended up happening is a trajectory that basically pushed us for about eight years, which is we built a middleware and a toolkit, essentially our own toolkit, that enabled us to build very quickly CMSs and builds and anything related to that, data bindings for third party systems like CRM systems and CRP systems, a variety of backends essentially, and we essentially entered OEM space. So we built products for other companies. Some of them were large, some of them were small. We had a tremendous amount of NDAs and non-competes, as you can imagine. These companies would not like you to advertise your own stuff while you were building it for them and typically we would have maybe one or two of these customers at the same time. So from 2009 to about 2017, maybe a little bit later even, we basically did work for third parties and we built a lot of different solutions, and around 2018, we decided that we were gonna attempt to productize. That means, essentially build our own, front facing, become a brand, and move away from a pure sort of project solution, even though we had a product in there. 
But it was a product for us, not so much for the end customer and to get into the market and so we did, and in the meanwhile we had two direct customers during almost all the time. NYU was one. We had a number of schools at NYU that we were able to pitch, and successfully had running, so NYU Law School, NYU Engineering School, where I was a student and then NYSE, where we initially partnered with Thomson Reuters. So Thomson Reuters did the data and most of the application stack actually, and what we provided is a device management framework and advanced players to run the WebGL and all the other things that they needed to run for the New York Stock Exchange. This was under the NYSE-Euronext regime, which has since been bought by the Intercontinental Exchange. This was in 2017, which was a formative year for us. As I mentioned, NYSE under the new ownership came to us and said, "Look, Thomson Reuters is relatively expensive and essentially they're reselling us their data, how about you guys take on their responsibility?" You get nine months to replicate and you get this support contract that basically takes over for them, at a discount for them but it was a nice option for us. We took on the challenge. Because in those intervening years we were able to build up so much experience and know-how dealing with realtime sources, realtime data sources, and WebGL specifically to make things pretty bulletproof, whereas perhaps some other HTML5 technology that is fairly popular in digital signage is maybe not as robust or maybe not as performant. So we took that toolkit and applied it to the takeover at the New York Stock Exchange and took the contract over, and we did that successfully. So at the New York Stock Exchange today, they're actually running two separate solutions from us. They have our more standard on-prem solution for their marketing group and then they have a much more customized, almost like an OEM version for their trading real time data, which are now classed as a number of financial data widgets.
So if I'm at the NYSE and I'm looking down on the floor, or I'm walking around the floor with all the guys with the funny jackets and everything, those various dashboard screens that I see with all the pricing indicators and everything else, that's all being driven by you?
Alex Epshteyn: That is correct. So everything essentially above the workstation level, everywhere above the trader level, if you just look up above the 5'8" level from the ground, you'll basically be looking at our solutions. It actually is the full gamut of our capabilities. We have synchronized video, real time widgets for financial data consumption, charting types of things and a lot of different ticker technologies that we've custom built and some of our generic ones, and streaming as well. The only other company that works with us at the site is Haivision, so they provide the backend system and supplementary streaming solutions. So we consume their feeds and also feed them.
They're a video distribution company?
Alex Epshteyn: That's right. So we're actually partnered with them. So they're one of our partners in the space. We like working with them, they are a nice Canadian company to say the least and I know some of the original folks that sort of constituted the company and they have grown as a company tremendously through the years. So we really like working with them.
Yeah, this must have been a really big holy shit moment for you guys when you got that deal, because it's not like winning a hundred-location QSR chain or something, this is the New York Stock Exchange. It's on the TV every day with endless photos and everything else, and it's mission critical. Like you can't say, oh, we're just doing a software update and we'll be back in 10 minutes?
Alex Epshteyn: Indeed, and the escalations we get are pretty hardcore. We have just a few minutes to get things going, and philosophically, we try to blend some aspects of redundancy with a lot of resiliency because redundancy itself, for some folks who deal with these sorts of mission critical situations, could itself present its own set of problems, right? So you want the system, the platform itself, to be as resilient or high availability as possible, to use a term out of the server space. So yeah, it was a huge thing for us and ultimately, we specialized in a lot of financial services and non-retail banking is a more generic category or an area we do very well in, and we work with some integrators in the space that are known for it as well in terms of channel. Currently our CTO is actually the chief architect of the Thomson Reuters solution. He came on board with us a year ago, a year and a half ago as a full time hire. He was a consultant for many years after Thomson Reuters got out of the customization space, and he worked with us for a long time, and then finally our CTO left to do other stuff, and Steve came on board. So we're very well positioned for this work.
So for your company, if you had to do an elevator pitch saying what all you do, what do you rattle off for them?
Alex Epshteyn: I think what we would do is, as you mentioned, mission critical type of usages, whichever vertical, right? We've done things with SCADA. We've done things in transportation that I wish I was at liberty to say, maybe soon, and it doesn't have to just be financial data. It could be sports feeds. It could be building services, things of that sort that are critical for the use. That's one of our specialty points. The other is, I would say, while we're very happy to have relationships with a number of hardware companies, we still have really some high end hardware that we field. So what we do is, for very demanding applications, not necessarily mission critical ones, but those overlap obviously, we provide a full-stack solution, and these players, we're getting into the realm of show control type of players, really beefy and professional level graphics capabilities. So we do sell those. Those are fully our stack, and this way we can guarantee basically the solution as opposed to having us do a certain portion system integrated to another and so forth. The last thing I would say is while we still support some level of OEM work, we currently have two customers that we work with. Our business model changed a bit in the last three years of supporting them. We have our standard SaaS business and in some cases we modified it for on-prem. So it's already flexible, but we also have a platform as a service offering to really support those OEM customers. So it's a lot less expensive in volume, very scalable, and I would say those are the things that really make us stand out. It's real time data, data visualization, full-stack solution with hardware to do very difficult things often, and finally, configuration where people assume real, ad-hoc customization. There's an assumption there, right?
If you're doing something very bespoke, the assumption there is that it's gonna be insanely expensive and take a long time to build and that's true if you haven't built two dozen variants of it and you don't have a toolkit to basically assemble it from parts like a LEGO set, which we do. I would assume that your calling card when you go in to talk to opportunities, when you can say, yeah, we do the New York Stock Exchange, we do all the data handling on that, and you could imagine it's more than a little bit secure and mission critically oriented. I suspect that makes the target customers feel pretty comfy? Alex Epshteyn: It does, and even before them, it makes consultants who put us on the bid lists and generally are interested in finding parties that can actually fulfill the scope, call us. So we don't really advertise much, and that's gonna change, I think, maybe next year. We're gonna do maybe a marketing splash at some point next year. Right now, it's all word of mouth, and we do get a lot of calls. There's a lot of projects we actually pass on because they're not in our sweet spot and they're distractions, but the projects that we do take on are often difficult. We even do work in retail, as I mentioned to you, and the types of deals we take in are always really heavy data integration, visualization, where they are very automated workflows, there's almost no humans involved where the humans are basically special events, and then the system essentially corrects for automation again. Yeah, I've been writing about data visualization for 6-7 years now, and when I started writing about it, it was pretty rare and beyond FIDS displays and things like that but it's now pretty standard. I'm curious because you guys are obviously super deep and experienced in that area, when you see all the other software companies saying, yeah, we do real time data, we can do realtime data handling, we can integrate, we have APIs and this and that. When you get into a conversation with a prospect, how do you distinguish what you do versus other companies who say, yeah we do all that too, cuz I suspect it's different? Alex Epshteyn: It is. One of the first things we've put on a table is that we can mostly guarantee our resolution time SLA, nobody else can pretty much. Most people will be aggressive, pick up the phone and work the problem, but the way that our stuff is built, we can fix the problem. We can guarantee fixing the problem within a certain period of time. Now it's not inexpensive, sometimes it's actually affordable for a lot of types of businesses where a fully custom solution would not be. The other one is that most data visualization takes a lot of shortcuts, it really leverages, not to get too deep in technicalities unless you want me to, basically JavaScript and CSS, the mainstay of HTML5. But all of our data visualizations are built in WebGL. It's like the difference between driving a car on the road and driving a bullet train on tracks, right? There's no interruptions to the bullet train. It'll just go and it'll be on schedule. There's no interruptions. There's no jitter. There's no movement. That sort of paradigm. So we like to guarantee behavior of our data visualization, especially dynamic like charting or graphing libraries that we use and implement. It's actually extremely difficult to build something that you would think is easy like a ticker or crawler. 
Whatever data that's feeding it, I'm sure we both have probably seen a lot of instances where it stutters, it has problems, it doesn't refresh on time and doesn't deal well with different fonts and whatnot. That's just not true of our solution. Our solution is, I would say, cutting edge on dynamic data visualization. So for an end user or for an integrator, they have to educate themselves that just because a company says they can do real time data doesn't mean they can really do it. That means they might be able to reflect a number that's in a data table and show it on a screen, and that's quite a bit different from what you're talking about. Alex Epshteyn: It is and maybe the third aspect is most of the companies we work with already have accounts with the big data warehouse places like Refinitive, IBS, and a number of others, so we already are super familiar with these back ends. In fact, we have things that monitor the APIs. We routinely do a lot of monitoring of real time or just dynamic sources. So this is a huge value add in the industry, and I wish more providers would do that because ultimately, if you are a data fed platform, it's up to you to tell the customer something's failing on the back end because they won't know, they'll assume all sorts of things, but you need to critically have the tools inside to tell what's going on, and if you build it out in a smart way, you can also alert the right people at the right time that something's happening and to look into it. So you can be proactive about it. That's the third item, I'd say. They also change like the schemas and everything without telling people, right? Alex Epshteyn: That's true. But it's a super exciting space. Once you have the core technology built out. You could really do a lot, in terms of, consuming this kind of data and I think generally, signage, we're in a slightly privileged position regarding this, but I think there's a move into industry towards generative and procedural content away from more Codec-heavy content. Although, there's obviously gonna be overlap for many years for both. We certainly support Codec playback in a variety of ways, synchronized, on different players and so forth, and there's nice innovations like AV1 coming onto the market nowadays. But you could do so much more with generative dynamic content, it's a big difference. For instance, we had a client that wanted us to expose much more of the controllability of a layout, standard design tool inside of our platform. Now, typically we would not wanna do that because there's some nice tools on the market like Premiere, like After Effects, real tools that they generally use. But the problem that certain customers power users I would say are having is they don't wanna have to export an After Effects file and have it encoded in something, that's time, that's sometimes money because they do it externally because they don't have a kit on-prem, or in the cloud. So what we've done is basically have a simpler version of something like Adobe Premiere or After Effects that lets them make quick changes in some key framing or some transitory effects and they don't have to put the whole thing into a codec. So that seemed to really resonate with certain power users that we have and directionally, it's the area that we'd like to innovate in. Is it important to make a distinction between generative data for business applications and generative data for artwork? Because I see a lot of video walls out there that are set and forget. 
They're driven by generative data and it's just these abstract visuals that are swirl and kind of bloom and everything else, but that's very different from, I think what you're talking about, which is what on the screen in terms of charting or what appears is based on what the data is influencing, it's it's shaping what appears? Alex Epshteyn: That's correct. A lot of general data is canned, right? It's almost like a video basically, and some experts, some design shops typically would change it for you, and it becomes evergreen content, day two, three, and day four. What we try to do is something a little bit different and we work with some really nice design companies as well. So just to be completely clear, we don't do the design ourselves. We typically either partner with a company that's really good at it. Sometimes the company brings us into the opportunity, right? The consultant can also spec us to partner with somebody or the end client may have relationships with companies that do this very well. But, I would say the formulation, the recipe for this kind of thing, to make it dynamic is a few things, and that's where this sort of generative content becomes more like a Zignage type of problem, as opposed to something that you could hire a design house to basically build for you, right? One is that you could update content even if the filters or the generative piece is running. Separately you might be able to in CMS have the tools to change the filters of the generative option, just as I explained prior, and finally have trigger conditions. We do mostly casting, right? There are some great companies in space. I think they're very good at that kinda stuff. They do a lot of smart interactive signage. We do a little bit of that, but we mostly do narrowcasting. So in our world trigger conditions come from some sort of backend system. It could be a calendaring system, it could be something smarter, right? Where it's not just a boolean condition. It could be a multivariable that basically has to click off a list of things that can happen. And that's really where we can add a lot of value and it overlaps with the kinda work we do with the New York Stock Exchange. We generally term it as business logic So we really do some smart business logic and I think it's actually, there's a lot of growth in that area once we apply modern sort of machine learning to it to make it extensible to go further. But with that kind of approach you have an ability to modify a piece of content continuously, right? It's a living piece of generative content, even if it's not dynamically fed with financial data sources, or sports data sources. I haven't seen your user experience, but I'm guessing people listening to this are thinking, this is really interesting, but I'd be terrified to try to use this software. What's it actually like? Alex Epshteyn: You're not gonna be terrified because we are one of the proponents of nearly or fully automated systems. So often what we do for non-power users is to give a build out to the software that our customers use, and then everything is essentially this business logic that I'm describing to you. It's kinda like a headless CMS? Alex Epshteyn: It's like a headless CMS for the non-power users. For the power users that really like their tools like Adobe, or you could just use a Dropbox or some sort of hotfolder mechanism. We're also partnered with a number of DAM solutions. There's a lot of workflow that happens in digital asset management solutions, including tag based workflows. 
We do a lot of tag based workflows nowadays, where we consume the tags that are done in a DAM, and essentially they find their way onto the right players at the right time, and on the flip side, we do have a standard suite. It's actually going through a major overhaul at the end of the year, what we call Z Cast 6. It does have a number of these power tools. But our CMS generally follows a certain idea. It was popular for a while and it's hard to execute unless you have our kinds of customers, which is what we call an additive UX. So it's the opposite of something like Microsoft Office, right where you have a billion features and there's a long learning curve if you wanna learn everything. What we do is really try to identify the user story behind what needs to be done. We create the access controls that really expose certain parts of the CMS, and even within the same context, add or remove tools as needed. That creates a situation where there's almost really minimal training. I think one of the biggest problems we're trying to solve for our direct customers, or channel customers is the attrition that happens in major enterprises for users of digital signage, right? Like one of the biggest problems we face even in huge banks is the fact that digital signage is consigned to a webmaster subcategory. Like they manage the CMS that's published on their portal, and then somebody in that team or a few people in that team handles digital signage as well. So that's historically been a problem for our whole industry, and what we're trying to tackle is kinda remove both the friction of adoption and also try to give them the tools that they need, and if they use tools, bridge those tools, that's our philosophy on that end. So what's the structure of your company? Are you a private company? Alex Epshteyn: We are a private company. We're an LLC in New York, and we're about 20 people. Most of our development used to take place until very recently in Ukraine because one of my partners and I from there originally. So as this topic is in the news, unfortunately, forget about our team. The fact is cities in the eastern part of Ukraine are partially destroyed but luckily a lot of the folks that we would use are in the Western part of Ukraine now, and we continue to use them but not all of them unfortunately. So you're having to manage your way through that along with other things, right? Alex Epshteyn: We did, and they're very talented folks. We have worked on so many projects. Yeah, it's interesting. I was trading LinkedIn messages with another company and he was talking about operating out of Odessa and they're still like opening QSRs and things like that and putting in menu boards. Alex Epshteyn: Good for them. That's exactly what they should do. Yeah, and I was thinking, boy, all the other challenges you have out there, like supply chain and everything else, layer in a hot war on top of that. Good lord. Alex Epshteyn: Our problems are very small compared to the real problems in Ukraine and the world. But it's a small world. You sort of face these things as they come. Well, hopefully someway or other, it gets resolved. I'm not quite sure how, but this was great. Can you let people know where they can find your company online? Alex Epshteyn: Sure. It's Zignage.com So signage with a Z on the front? Alex Epshteyn: Correct. The last word is Zignage. You find me on LinkedIn, Alex Epshteyn. That's where mostly we do our sort of minimum branding that we do. 
All right, but we'll be looking for more later in the year, right? Alex Epshteyn: Absolutely. We're excited to make some announcements in the transportation space, some more in the financial industry and some more in retail. All right. Great to hear it's going well for you. Thanks so much for spending the time with me. Alex Epshteyn: Thank you, Dave. My pleasure.
On today's episode of the Entrepreneur Evolution Podcast, we are joined by Martin Adams, Founder of Codec.ai. He speaks internationally on innovation, creativity, Artificial Intelligence, and digital transformation in the private and public sectors. Recent talks include keynotes to the European Commission, Deloitte, Unilever, Arabian Business and Social Media Week. As an entrepreneur, Martin has brought his own ideas to life and as an investor, advisor, and consultant he has helped hundreds of other people do the same. He has experience in business strategy, fundraising, go-to-market, growth (paid and organic), and commercial strategy. However, he's also collaborated with companies across areas including blockchain, consumer brands, decentralized finance, education, entertainment, FinTech, media, and mental health. In the last 10 years Martin has worked with companies to achieve a number of exits, £40m in investment and £100m in revenue. He knows that creativity and out-of-the-box thinking leads to wildly efficient growth and so he's keen to pass his hard-fought learnings on through podcasting. He is also eager to help people earn freedom to take more risks to build original products and services, pursue projects that matter to them, and create innovative businesses that succeed. Martin believes that making your own unique contribution to the world leads to a more interesting life, and a happier and richer world. To learn more about Codec.ai, visit https://www.codec.ai/ We would love to hear from you, and it would be awesome if you left us a 5-star review. Your feedback means the world to us, and we will be sure to send you a special thank you for your kind words. Don't forget to hit “subscribe” to automatically be notified when guest interviews and Express Tips drop every Tuesday and Friday. Interested in joining our monthly entrepreneur membership? Email Annette directly at yourock@ievolveconsulting.com to learn more. Ready to invest in yourself? Book your free session with Annette HERE. Keep evolving, entrepreneur. We are SO proud of you! --- Support this podcast: https://anchor.fm/annette-walter/support
We chat with Martin Adams about being an entrepreneur and advisor. Martin is currently working on a machine learning business called Codec, which helps brands to discover and tap into the pockets of culture that drive growth. This is a must-listen for anyone thinking of going down the entrepreneurial route. Martin discusses his thoughts on university and how he approached it, as well as his first few years out in the world of work. From lawyer to entrepreneur, from London to New York and back, it's quite the journey. Martin also shares his thoughts on what he looks for when hiring.
Every once in a while, producers will provide video elements as a separate file for the RGB video and another for the Alpha video. To merge them together into a single 32-bit video, use the XPression Video Coder. Audio can also be added together if required. Living Live! with Ross Video www.rossvideo.com/XPression-U
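For readers without XPression at hand, the underlying operation is simply interleaving the fill (RGB) frames and the key (alpha) frames into 32-bit RGBA frames. The sketch below is a generic illustration using OpenCV and NumPy, not the XPression Video Coder itself; the file names are placeholders.

```python
import cv2
import numpy as np

# Hypothetical inputs: one clip carrying the RGB "fill", one carrying the alpha "key".
fill = cv2.VideoCapture("fill_rgb.mov")
key = cv2.VideoCapture("key_alpha.mov")

frame_idx = 0
while True:
    ok_f, rgb = fill.read()      # OpenCV returns 8-bit BGR frames
    ok_k, alpha = key.read()
    if not (ok_f and ok_k):
        break                    # stop at the end of the shorter clip

    # Reduce the key video to a single 8-bit channel.
    alpha_gray = cv2.cvtColor(alpha, cv2.COLOR_BGR2GRAY)

    # Stack into a 32-bit (8 bits x 4 channels) BGRA frame.
    bgra = np.dstack((rgb, alpha_gray))

    # Most container/writer combinations can't store alpha, so this sketch writes a
    # PNG sequence; a real pipeline would hand the frames to a codec that supports alpha.
    cv2.imwrite(f"merged_{frame_idx:05d}.png", bgra)
    frame_idx += 1

fill.release()
key.release()
```

Mixing the two audio tracks together, if required, would be a separate step in the same spirit.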
When users share XPression Video Codec files for review with people who don't have XPression, playing the files on a Windows 10 PC seems to befuddle some folks. Installing the XPression Video Codec is pretty easy, but some users prefer VideoLAN's VLC media player over the native tools in Windows. Ross Video has developed a VLC plugin for the XPression Video Codec, so VLC users with the XPression Video Codec installed can now view XPression Video assets and clips. Living Live! with Ross Video www.rossvideo.com/XPression-U
This week Tayla is joined by Rush from Youth Pride Inc. and Gem from Haus of Codec for this year's pride episode. They discussed the work each of their organizations does for the queer community in RI. They also discussed Stranger Things and Bob's Burgers. During The Last Chapter they discuss the question: If you were to write a book, what book would you want it to be like? Like what you hear? Rate and review Down Time on Apple Podcasts or your podcast player of choice! If you'd like to submit a topic for The Last Chapter you can send your topic suggestions to downtime@cranstonlibrary.org. Our theme music is Day Trips by Ketsa and our ad music is Happy Ukulele by Scott Holmes. Thanks for listening! Books The Hunger Games by Suzanne Collins The Memory Librarian by Janelle Monáe Come As You Are by Emily Nagoski Ph.D. The Invention of Hugo Cabret by Brian Selznick AV Stranger Things (2016- ) Abbott Elementary (2021- ) Bob's Burgers (2011- ) The Bob's Burgers Movie (2022) Other Youth Pride Inc., Providence, RI Support Youth Pride Inc. (GoFundMe) Haus of Codec, Providence, RI Support Haus of Codec (Patreon)
We talk about UploadVR's correspondents going inside the very first Meta Store in Burlingame, California, venues moving inside Horizon Worlds, codec avatars and the chips being built to drive them, new ultra thin holographic glasses research, and Meta's neural wristband input for future glasses. Note: there's some echoing in this week's audio, sorry about that, and we will try to get it fixed next week.
We discuss new research into using VR controllers to track body movements and the latest look at Meta's 'Codec' avatars as well as the prospect of VR calling. We also talk about Meta's growing expenses and the details we learned about Project Cambria and the company's next VR headsets.
Marcian Ted Hoff 130 Creating the Microprocessor and beyond with Marcian "Ted" Hoff BIOGRAPHY OF MARCIAN E. HOFF Dr. Marcian Edward "Ted" Hoff was born in Rochester, New York. His degrees include a Bachelor of Electrical Engineering from Rensselaer Polytechnic Institute, Troy, New York (1958), and an MS (1959) and a Ph.D. (1962), both in Electrical Engineering, from Stanford University, Stanford, California. In the 1959-1960 time frame he and his professor, Bernard Widrow, co-developed the LMS adaptive algorithm, which is used in many modern communication systems, e.g. adaptive equalizers and noise-cancelling systems. In 1968 he joined Intel Corporation as Manager of Applications Research and in 1969 proposed the architecture for the first monolithic microprocessor or computer central processor on a single chip, the Intel 4004, which was announced in 1971. He contributed to several other microprocessor designs, and then in 1975 started a group at Intel to develop products for telecommunications. His group produced the first commercially-available monolithic telephone CODEC, the first commercially-available switched-capacitor filter and one of the earliest digital signal processing chips, the Intel 2920. He became the first Intel Fellow when the position was created in 1980. In 1983 he joined Atari as Vice President of Corporate Research and Development. In 1984 he left Atari to become an independent consultant. In 1986 he joined Teklicon, a company specializing in assistance to attorneys dealing with intellectual property litigation, as Chief Technologist, where he remained until he retired in 2007. He has been recognized with numerous awards, primarily for his microprocessor contributions. Those awards include the Kyoto Prize, the Stuart Ballantine Medal and Certificate of Merit from the Franklin Institute, induction into the National Inventors Hall of Fame and the Silicon Valley Engineering Hall of Fame, the George R. Stibitz Computer Pioneer Award, the Semiconductor Industry 50th Anniversary Award, the Eduard Rhein Foundation Technology Award, the Ron Brown Innovation Award, the Davies Medal and induction into their Hall of Fame from Rensselaer Polytechnic Institute, and the National Medal of Technology and Innovation. He has been recognized with several IEEE awards including the Cledo Brunetti Award (1980), the Centennial Medal (1984), and the James Clerk Maxwell Award (2011). He was made a Fellow of the IEEE in 1982 "for the conception and development of the microprocessor" and is now a Life Fellow. He is a named inventor or co-inventor on 17 United States patents and author or co-author of more than 40 technical papers and articles. We talk about: How do you see the value of IP? What should investors be thinking when they are studying a company's IP? What technologies were developed long ago that we are just now, as a society, starting to adopt? What was it like being one of the inventors of the microprocessor? How did Intel grow after the invention of the 4004? How has "innovation" in Silicon Valley changed over the decades? And much more... Connect with Marcian "Ted" Hoff: best to connect through Mike Trainor, President of Intel Alumni (Mike Trainor | LinkedIn)
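The LMS (least mean squares) adaptive algorithm mentioned in Hoff's biography is simple enough to sketch in a few lines. This is a minimal, generic illustration of the weight-update rule w ← w + μ·e·x used in adaptive equalizers and noise cancellers, not code from Hoff or Widrow; the signals, filter length and step size are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown system we want the adaptive filter to learn (e.g. an echo path).
true_weights = np.array([0.5, -0.3, 0.1])
n_taps = len(true_weights)

x = rng.standard_normal(5000)                      # input signal
d = np.convolve(x, true_weights, mode="full")[:len(x)]
d += 0.01 * rng.standard_normal(len(x))            # desired signal = system output + noise

w = np.zeros(n_taps)                               # adaptive filter weights
mu = 0.01                                          # step size: trades convergence speed vs. stability

for n in range(n_taps, len(x)):
    x_vec = x[n - n_taps + 1:n + 1][::-1]          # most recent n_taps samples, newest first
    y = w @ x_vec                                  # filter output
    e = d[n] - y                                   # error against the desired signal
    w = w + mu * e * x_vec                         # LMS update

print("learned:", np.round(w, 3), "target:", true_weights)
```

Run long enough, the learned weights converge toward the unknown system, which is the whole trick behind adaptive equalization and noise cancellation.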
Seven years after its launch, we are still talking at length about the best game of the decade. A (very) special episode in which we revisit Yharnam and its streets 7 years later with my friend Nacho Cerrato (you can hear him on Codec) to talk about Bloodborne, one of the most influential games, at least of our lives. Inspirations, its significance within the industry and, in short, a love letter in podcast format to Hidetaka Miyazaki's masterpiece. Become a patron and support me on Patreon. Follow @a_marquino on Twitter.
Tom's bio: Based in London, prior to Metaphysic, Tom founded OmniSci (previously MapD), the world's fastest database and first GPU in-memory analytics engine, backed by Tiger Global, NEA, In-Q-Tel, NVIDIA and Google. He is co-founder of Codec.ai, a content marketing analytics tool used by Red Bull, Unilever, L'Oreal, Nestle and more. Managing Hyper-Real Likeness in the Metaverse, Tom Graham. About Metaphysic: Our mission is to empower individuals by putting them at the center of the immersive content economies that will define how we use the internet in the future. By building AI content generation tools and infrastructure that lets users own and control their biometric data, we are building towards an ethical web3 economy where every internet user can access the limitless potential of the hyperreal metaverse. Jamil Hasan is a crypto and blockchain focused podcast host at the Irish Tech News and spearheads our weekend content "The Crypto Corner" where he interviews founders, entrepreneurs and global thought leaders. Prior to his endeavors into the crypto-verse in July 2017, Jamil built an impressive career as a data, operations, financial, technology and business analyst and manager in Corporate America, including twelve years at American International Group and its related companies. Since entering the crypto universe, Jamil has been an advisor, entrepreneur, investor and author. His books "Blockchain Ethics: A Bridge to Abundance" (2018) and "Re-Generation X" (2020) not only discuss the benefits of blockchain technology, but also capture Jamil's experience of how he has transitioned from being a loyal yet downsized former corporate employee to a self sovereign individual. With over one hundred podcasts under his belt since he joined our team in February 2021, and with four years of experience both managing his own crypto portfolio and providing crypto guidance and counsel to select clients, Jamil continues to seek opportunities to help others navigate this still nascent industry. Jamil's primary focus outside of podcast hosting is helping former corporate employees gain the necessary skills and vision to build their own crypto portfolios and create wealth for the long-term. See more podcasts here. More about Irish Tech News: Irish Tech News are Ireland's No. 1 online tech publication and often Ireland's No. 1 tech podcast too. You can find hundreds of fantastic previous episodes and subscribe using whatever platform you like via our Anchor.fm page. If you'd like to be featured in an upcoming podcast, email us at Simon@IrishTechNews.ie now to discuss. Irish Tech News have a range of services available to help promote your business. Why not drop us a line at Info@IrishTechNews.ie to find out more about how we can help you reach our audience. You can also find and follow us on Twitter, LinkedIn, Facebook, Instagram, TikTok and Snapchat.
In today's special Who To Watch episode, we are joined by George Evans Marley (GEM), the Treasurer and all around operations person at Haus of Codec (along with Julio E. Berroa, Haley Johnson, Alexander Ruiz, and Charlotte Gagnon). GEM, Nick, and Sascha discuss their story of growing up in our “weird” state, their own experience with homelessness, the many obstacles of starting a shelter in RI, Pokemon GO, and more. Take a listen to learn more about GEM and their work with Haus of Codec to bring safe spaces to LGBTQQIA+ youth in PVD and statewide.
Welcome back ya'll! We decided to quit show notes. If you REALLY relied on the show notes, let us know, otherwise we thought it was a useless and time consuming step to uploading. In this episode we talked about much of the news that has happened since we've been gone including cops not getting fired for being unvaccinated, the sleep-in outside the state house, student walk-outs, and redistricting. As always, if you like what you hear and want to hear more and get merch please support what we do by becoming a patreon member at www.patreon.com/plrpodcast The intro for PLR and the intermission provided by Ryan Jackson Music. This episode's end music provided by Dusknight -----> https://dusknight.bandcamp.com/
Wilfried Van Baelen is back with more! This episode he talks CODECs and how you can take a 24-bit PCM stereo audio file, make it mono, and take it back to stereo! No loss in audio quality! He also explains to Michael how to mount heights in his room with a tricky layout. And so much more! --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app --- Send in a voice message: https://anchor.fm/dailyhifi/message Support this podcast: https://anchor.fm/dailyhifi/support
This week on Business Without Bullsh-t, we take a look back at the month's episodes in conversation with Martin Adams from Codec, Jonathan Wood from C2 Cyber and our Budget Special hosted by Oury Clark's very own Richard Oury, Ian Phipps, Simon Walsh, Jeremy Coker alongside Andrew Oury and Dominic Frisby.You'll hear new, unheard parts of our conversations that take a deeper look at the topics we discussed. There's more from Martin Adams telling us about the top 3 business trends he's got his eye on at the moment, as well as giving holistic advice for up-and-coming entrepreneurs. Jonathan Wood talks about the realities of transitioning from military life back to normal living, how the entrepreneurial community can work closely with the military to facilitate veteran's transitions back into work and how Jonathan likes to approach managing his company's workforce and productivity. Plus there's more from Oury Clarks' very own Ian Phipps and Richard Oury talking about the new UK budget, in particular about the new requirements for EMI (Enterprise Management Incentive) schemes as well as Richard explaining what the UK needs to do to get Free Ports working.Another fully loaded barrel of business talk with experts, talking like people. No bullsh-t. So pull up that chair and press play (and always have a notepad handy)Business Without Bullsh-t is powered by Oury Clark
Martin Adams, Founder of Codec, joins Andy and Dominic this week on Business Without Bullsh-t to talk about how Codec's cutting-edge AI and Deep Learning technology is enabling them to help companies gain a better understanding of their audience's cultural behaviour through the content they interact with. Fascinating and smart to say the least. But the focus wasn't just on plugging Codec. The three went deeper and spoke about the nefarious side of Ad Tech fraud (those cheeky Russian gangsters!), the ever-evolving alliance between government and Big Tech; will we even need governments as public service providers in the future when the tech industry is doing a much better job? And the truth behind the GameStop fiasco, an incident which clearly exemplified the power of harnessing online communities to create seismic change in the "real" world. In fact, so much valuable ground was covered in this conversation that we thought why not deliver the goodness in digestible portions? So expect to hear more from Martin about his top 3 business trends, plus some sage advice for future entrepreneurs, later this month. In the meantime, pull up a chair, press play, and get stuck into this week's serving. Business Without Bullsh-t is powered by Oury Clark
Recorded February 19, 2021 (The audio quality gets pretty spotty at times on this recording. Sorry, listeners.) Years ago, the only way to do video conferencing was via proprietary hardware. They had limited interoperability (remember the old People + Content model?), but you could always tell when you were connecting to a different system. Then we started exploring the idea of using PCs as the center of a video conferencing system. Software and hardware had caught up to where the quality of the call was roughly equivalent to the hardware appliances, and PCs are more flexible and can have features added during a call. Now we have systems that want to take control of the PC, essentially turning them back into dedicated appliances. And at CES this year, Logitech announced a new line of VC appliances that work only with their cameras and their microphones. Are we going back to the proprietary model of appliances?
Brian Alvarez LinkedIn profile. Vittorio Giovara Blog. --------------------------------------------------- Join our LinkedIn Group so that you can get the latest video insider news and participate in the discussion. Email thevideoinsiders@beamr.com to be a guest on the show. Learn more about Beamr
The Last Of Us 2 Review, H.266, Thunderbolt 4, Smart Trash Cans, Jarron's wallet hurts, $70 for new games, New BR from Ubisoft, Quibi is DOA, Jameses Gameses
Click to watch SPIE Future Video Codec Panel Discussion. Related episode with Gary Sullivan at Microsoft: VVC, HEVC & other MPEG codec standards. Interview with MPEG Chairman Leonardo Chiariglione: MPEG Through the Eyes of its Chairman. Learn about FastVDO here. Pankaj Topiwala LinkedIn profile. -------------------------------------- The Video Insiders LinkedIn Group is where thousands of your peers are discussing the latest video technology news and sharing best practices. Click here to join. Would you like to be a guest on the show? Email: thevideoinsiders@beamr.com. Learn more about Beamr. -------------------------------------- TRANSCRIPT: Pankaj Topiwala: 00:00 With H.265 HEVC in 2013, we were now able to do from 300 to one up to 500 to one compression on, let's say, a 4K video. And with VVC we have truly entered a new realm where we can do up to 1000 to one compression, which is three full orders of magnitude reduction of the original size. If the original size is say 10 gigabits, we can bring that down to 10 megabits. And that's unbelievable. And so video compression truly is a remarkable technology and you know, it's a, it's a marvel to look at. Announcer: 00:39 The Video Insiders is the show that makes sense of all that is happening in the world of online video as seen through the eyes of a second generation codec nerd and a marketing guy who knows what I-frames and macro blocks are. And here are your hosts, Mark Donnigan and Dror Gill. Dror Gill: 01:11 Today we're going to talk with one of the key figures in the development of video codecs and a true video insider, Pankaj Topiwala. Hello Pankaj and welcome to The Video Insiders podcast. Pankaj Topiwala: 01:24 Gentlemen, hello, and thank you very much for this invite. It looks like it's going to be a lot of fun. Mark Donnigan: 01:31 It is. Thank you for joining, Pankaj. Dror Gill: 01:33 Yeah, it sure will be a lot of fun. So can you start by telling us a little bit about your experience in codec development? Pankaj Topiwala: 01:41 Sure, so, I should say that unlike a number of the other people that you have interviewed or may interview, my background is a fair bit different. I really came into this field by a back door and almost by chance. My PhD degree is actually in mathematical physics from 1985. And I actually have no engineering, computer science or even management experience. So naturally I run a small research company working in video compression and analytics, and that makes sense, but that's just the way things go in the modern world. But the effect for me, and the entry point, was that even though I was working in very, very abstract mathematics, I decided to leave. I worked in academia for a few years and then I decided to join industry. And at that point they were putting me into applied mathematical research. Pankaj Topiwala: 02:44 And the topic at that time that was really hot in applied mathematics was the topic of wavelets. And I ended up writing and editing a book called Wavelet Image and Video Compression in 1998, which was a lot of fun, along with quite a few other co-authors on that book. But wavelets had their biggest contribution in the compression of image and video. And so that led me finally to enter into it, and I noticed that video compression was a far larger field than image compression. I mean, by orders of magnitude. It is probably a hundred times bigger in terms of market size than image compression.
And as a result I said, okay, if the sexiest application of this new fangled mathematics could be in video compression, I entered that field roughly with the book that I mentioned in 1998. Mark Donnigan: 03:47 So one thing that I noticed Pankaj, cause it's really interesting, is your, your initial writing and, you know, research was around wavelet compression, and yet you have been very active in ISO MPEG, all block-based codecs. So, so tell us about that? Pankaj Topiwala: 04:08 Okay. Well obviously, you know, when you make the transition from working on the wavelets, our initial starting point was in doing wavelet based video compression. When I first founded my company FastVDO in the 1998-1999 period we were working on wavelet based video compression and we, we pushed that about as much as we could. And at one point we had what we felt was the world's best video compression using wavelets in fact, but best overall. And it had the feature that, you know, one thing that we should tell your viewers or listeners is that the, the value of wavelets in particular in image coding is that not only can you do state of the art image coding, but you can make the bitstream what is called embedded, meaning you can chop it off anywhere you like, and it's still a decodable stream. Pankaj Topiwala: 05:11 And in fact it is the best quality you can get for that bit rate. And that is a powerful, powerful thing you can do in image coding. Now in video, there is actually no way to do that. Video is just so much more complicated, but we did the best we could to make it not embedded, but at least scalable. And we, we built a scalable wavelet based video codec, which at that time was beating the current implementations of MPEG-4. So we were very excited that we could launch a company based on a proprietary codec that was based on this new fangled mathematics called wavelets. And it led us to a state of the art codec. The facts on the ground though were that just within the first couple of years of running our company, we found that in fact the block-based transform codecs that everybody else was using, including the implementers of MPEG-4. Pankaj Topiwala: 06:17 And then later AVC, those quickly surpassed anything we could build with wavelets in terms of both quality and stability. The wavelet based codecs were not as powerful or as stable. And I can say quite a bit more about why that's true. If you want? Dror Gill: 06:38 So when you talk about stability, what exactly are you referring to in, in a video codec? Pankaj Topiwala: 06:42 Right. So let's, let's take our listeners back a bit to compare image coding and video coding. Image coding is basically, you're given a set of pixels in a rectangular array and we normally divide that into blocks or sub blocks of that image. And then do transforms and then quantization and then entropy coding, that's how we typically do image coding. With the wavelet transform, we have a global transform. It's a, it's ideally done on the entire image. Pankaj Topiwala: 07:17 And then you could do it multiple times, what are called multiple scales of the wavelet transform. So you could take the various sub blocks that you create by doing the wavelet transform, the low pass and high pass, and do that again to the low-low pass for multiple scales, typically about four or five scales that are used in popular image codecs that use wavelets. But now in video, the novelty is that you don't have one frame.
You have many, many frames, hundreds or thousands or more. And you have motion. Now, motion is something where you have pieces of the image that float around from one frame to another and they float randomly. That is, it's not as if all of the motion is in one direction. Some things move one way, some things move other ways, some things actually change orientations. Pankaj Topiwala: 08:12 And they really move, of course, in three dimensional space, not in our two dimensional space that we capture. That complicates video compression enormously over image compression. And it particularly complicates all the wavelet methods to do video compression. So, wavelet methods that try to deal with motion were not very successful. The best we tried to do was using motion compensated, you know, transforms. So doing wavelet transforms in the time domain as well as the spatial domain, along the paths of motion vectors. But that was not very successful. And what I mean by stability is that as soon as you increase the motion, the codec breaks, whereas in video coding using block-based transforms and block-based motion estimation and compensation, it doesn't break. It just degrades much more gracefully. Wavelet based codecs do not degrade gracefully in that regard. Pankaj Topiwala: 09:16 And so we of course, as a company, decided, well, if those are the facts on the ground, we're going to go with whichever way video coding is going and drop our initial entry point, namely wavelets, and go with the DCT. Now one important thing we found was that even in the DCT, ideas we learned in wavelets can be applied right to the DCT. And I don't know if you're familiar with this part of the story, but a wavelet transform can be decomposed using bit shifts and adds only, using something called the lifting transform, at least the important wavelet transforms can. Now, it turns out that the DCT can also be decomposed using lifting transforms, using only bit shifts and adds. And that is something that my company developed way back in 1998 actually. Pankaj Topiwala: 10:18 And we showed that not only for the DCT, but for a large class of transforms called lapped transforms, which included the block transforms, but in particular included more powerful transforms. The importance of that in the story of video coding is that up until H.264, all the video codecs, so H.261, MPEG-1, MPEG-2, used a floating point implementation of the discrete cosine transform, without requiring anybody to implement, you know, a full floating point transform to a very large number of decimal places. What they required was a minimum accuracy to the DCT, and that became something that all codecs had to do. If you had an implementation of the DCT, it had to be accurate to the true floating point DCT up to a certain decimal point, a certain transform accuracy.
And so the reduction in complexity is not a factor of two, but at least a factor of four and much more than that. In fact, it's a little closer to exponential. The reality is that we were able to bring the H.264 codec down. Pankaj Topiwala: 12:20 So in fact, the transform was the most complicated part of the entire codec. So if you had a 32 point transform, the entire codec was at 32 point technology and it needed 32 bits at every sample to process in hardware or software. By changing the transform to 16 bits, we were able to bring the entire codec to a 16 bit implementation, which dramatically improved the hardware implementability of this transform, and of the entire codec, without at all affecting the quality. So that was an important development that happened with AVC. And since then, we've been working with only integer transforms. Mark Donnigan: 13:03 This technical history is really amazing to hear. I, I didn't actually know that. Dror, you probably knew that, but I didn't. Dror Gill: 13:13 Yeah, I mean, I knew about the transform and shifting from a floating point to a fixed point, integer transform. But you know, I didn't know that's an incredible contribution, Pankaj. Pankaj Topiwala: 13:27 We like to say that we've saved the world billions of dollars in hardware implementations. And we've taken a small, a small, you know, donation as a result of that to survive as a small company. Dror Gill: 13:40 Yeah, that's great. And then from AVC you moved on and you continued your involvement in, in the other standards that followed, right? Pankaj Topiwala: 13:47 In fact, we've been involved in standardization efforts now for almost 20 years. My first meeting was, I recall, in May of 2000, when I went to an MPEG meeting in Geneva. And then shortly after that in July I went to an ITU VCEG meeting. VCEG is the Video Coding Experts Group of the ITU. And MPEG is the Moving Picture Experts Group of ISO. These two organizations were separately pursuing their own codecs at that time.
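Pankaj's point about decomposing transforms into bit shifts and adds is easiest to see with the simplest wavelet of all. The sketch below is a generic integer Haar (S-transform) lifting step, not FastVDO's transform or the H.264 integer DCT; it only illustrates how a lifting structure gives perfect reconstruction using nothing but integer additions, subtractions and a bit shift.

```python
def haar_lifting_forward(x0, x1):
    """One integer Haar lifting step: only subtraction, addition and a bit shift."""
    d = x1 - x0          # detail (high-pass) coefficient
    a = x0 + (d >> 1)    # approximation (low-pass) coefficient
    return a, d

def haar_lifting_inverse(a, d):
    """Exactly undoes the forward step in integer arithmetic."""
    x0 = a - (d >> 1)
    x1 = x0 + d
    return x0, x1

# Perfect reconstruction holds for any integer pair, e.g. two neighbouring pixels.
for pair in [(5, 9), (200, 13), (-7, 42)]:
    a, d = haar_lifting_forward(*pair)
    assert haar_lifting_inverse(a, d) == pair
print("lossless integer round trip confirmed")
```

The same idea, worked out for larger lapped and DCT-like transforms, is what the conversation credits with letting H.264 move to a 16-bit integer transform path.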
And in fact that discussion I don't think came up in the discussion you had with Gary Sullivan, which certainly could have, but I don't recall it in that conversation. So let me try to give your listeners who did not catch that, or are not familiar with it, a little bit of the story. Pankaj Topiwala: 16:28 The first international standard was the ITU H.261 standard, dating roughly to 1988, and it was designed to do only about 15 to one to 20 to one compression. And it was used mainly for video conferencing. And at that time, you'd be surprised from our point of view today, the size of the video being used was actually incredibly tiny, about QCIF or 176 by 144 pixels. Video of that quality was the best we could conceive. And we thought we were doing great. And doing 20 to one compression, wow! Recall, by the way, that if you try to do a lossless compression of any natural signal, whether it's speech or audio or images or video, you can't do better than about two to one or at most about two and a half to one. Pankaj Topiwala: 17:25 You cannot do, typically you cannot even do three to one and you definitely cannot do 10 to one. So a video codec that could do 20 to one compression was 10 times better than what you could do lossless, I'm sorry. So this is definitely lossy, but lossy with still a good quality so that you can use it. And so we thought we were really good. When MPEG-1 came along in, in roughly 1992 we were aiming for 25 to one compression and the application was the video compact disc, the VCD. With H.262 or MPEG-2, roughly 1994, we were looking to do about 35 to one compression, 30 to 35. And the main application was then DVD or also broadcast television. At that point, broadcast television was ready to use, at least in some, some segments. Pankaj Topiwala: 18:21 Try digital broadcasting. In the United States, that took a while. But in any case it could be used for broadcast television. And then from that point, H.264 AVC in 2003, we jumped right away to more than 100 to one compression. This technology, at least on large format video, can be used to shrink the original size of a video by more than two orders of magnitude, which was absolutely stunning. You know, no other natural signal, not speech, not broadband audio, not images, could be compressed that much and still give you high subjective quality. But video can, because it is so redundant. And because we don't understand fully yet how to appreciate video subjectively, we've been trying things, you know, ad hoc. And so the entire development of video coding has been really by ad hoc methods to see what quality we can get. Pankaj Topiwala: 19:27 And by quality we've been using two metrics. One is simply a mean square error based metric called peak signal to noise ratio or PSNR. And that has been the industry standard for the last 35 years. But the other method is simply to have people look at the video, what we call subjective rating of the video. Now it's hard to get a subjective rating that's reliable. You have to do a lot of standardization, get a lot of different people and take mean opinion scores and things like that. That's expensive. Whereas PSNR is something you can calculate on a computer. And so people have mostly, in the development of video coding for 35 years, relied on one objective quality metric called PSNR. And it is good but not great. And it's been known right from the beginning that it was not perfect, not perfectly correlated to video quality, and yet we didn't have anything better anyway.
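Since PSNR is the workhorse objective metric in this part of the conversation, here is a minimal sketch of how it is typically computed for 8-bit frames, PSNR = 10·log10(MAX² / MSE). The random "frames" below are placeholders, not real encoder or decoder output.

```python
import numpy as np

def psnr(reference, reconstructed, max_value=255.0):
    """Peak signal-to-noise ratio in dB between two frames of the same shape."""
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")   # identical frames
    return 10.0 * np.log10((max_value ** 2) / mse)

# Placeholder 8-bit luma frames standing in for an original and a decoded picture.
rng = np.random.default_rng(1)
original = rng.integers(0, 256, size=(1080, 1920), dtype=np.uint8)
decoded = np.clip(original.astype(np.int16) + rng.integers(-3, 4, size=original.shape), 0, 255)

print(f"PSNR: {psnr(original, decoded):.2f} dB")
```

The ease of computing this one number, frame after frame, is exactly why it became the default despite its imperfect correlation with what viewers actually see.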
Pankaj Topiwala: 20:32 To finish the story of the video codecs with H.265 HEVC in 2013, we were now able to do up to 300 to one to up to 500 to one compression on let's say a 4K. And with VVC we have truly entered a new realm where we can do up to 1000 to one compression, which is three full orders of magnitude reduction of the original size. If the original size is say, 10 gigabits, we can bring that down to 10 megabits. And that's unbelievable. And so video compression truly is a remarkable technology. And you know, it's a, it's a marvel to look at. Of course it does not, it's not magic. It comes with an awful lot of processing and an awful lot of smarts have gone into it. That's right. Mark Donnigan: 21:24 You know Pankaj, that, is an amazing overview and to hear that that VVC is going to be a thousand to one. You know, compression benefit. Wow. That's incredible! Pankaj Topiwala: 21:37 I think we should of course we should of course temper that with you know, what people will use in applications. Correct. They may not use the full power of a VVC and may not crank it to that level. Sure, sure. I can certainly tell you that that we and many other companies have created bitstreams with 1000 to one or more compression and seeing video quality that we thought was usable. Mark Donnigan: 22:07 One of the topics that has come to light recently and been talked about quite a bit. And it was initially raised by Dave Ronca who used to lead encoding at Netflix for like 10 years. In fact you know, I think he really built that department, the encoding team there and is now at Facebook. And he wrote a LinkedIn article post that was really fascinating. And what he was pointing out in this post was, was that with compression efficiency and as each generation of codec is getting more efficient as you just explained and gave us an overview. There's a, there's a problem that's coming with that in that each generation of codec is also getting even more complex and you know, in some settings and, and I suppose you know, Netflix is maybe an example where you know, it's probably not accurate to say they have unlimited compute, but their application is obviously very different in terms of how they can operate their, their encoding function compared to someone who's doing live, live streaming for example, or live broadcast. Maybe you can share with us as well. You know, through the generation generational growth of these codecs, how has the, how has the compute requirements also grown and has it grown in sort of a linear way along with the compression efficiency? Or are you seeing, you know, some issues with you know, yes, we can get a thousand to one, but our compute efficiency is getting to the, where we could be hitting a wall. Pankaj Topiwala: 23:46 You asked a good question. Has the complexity only scaled linearly with the compression ratio? And the answer is no. Not at all. Complexity has outpaced the compression ratio. Even though the compression ratio is, is a tremendous, the complexity is much, much higher. And has always been at every step. First of all there's a big difference in doing the research, the research phase in development of the, of a technology like VVC where we were using a standardized reference model that the committee develops along the way, which is not at all optimized. But that's what we all use because we share a common code base. And make any new proposals based on modifying that code base. 
Now that code base, all along the entire development chain, has always been very, very slow. Pankaj Topiwala: 24:42 And true implementations are anywhere from 100 to 500 times more efficient in complexity than the reference software. So right away you can have the reference software for, say, VVC, and somebody developing an implementation that's a real product can be at least 100 times more efficient than the reference software, maybe even more. So there's a big difference. You know, when we're developing a technology, it is very hard to predict what implementers will actually come up with later. Of course, the only way they can do that is that companies actually invest the time and energy right away, as they're developing the standard, to build prototypes, both software and hardware, and have a good idea that when they finish this, you know, what is it going to really cost? So just to give you an idea, between H.264 and Pankaj Topiwala: 25:38 H.265: H.264 only had two transforms, of size four by four and eight by eight. And these were integer transforms, which are only bit shifts and adds, took no multiplies and no divides. The division in fact got incorporated into the quantizer and as a result, it was very, very fast. Moreover, if you had to make decisions such as inter versus intra mode, there were only about eight or 10 intra modes in H.264. By contrast, in H.265 we have not two transform sizes, four by four and eight by eight, but in fact sizes of four, eight, 16 and 32. So we have much larger sized transforms, and instead of eight or 10 intra modes, we jumped up to 35 intra modes. Pankaj Topiwala: 26:36 And then with VVC we jumped up to 67 intra modes and we just, it just became so much more complex. The compression ratio between HEVC and VVC is not quite two to one, but let's say, you know, 40% better. But the, the complexity is not just 40% more. On the ground, nobody has yet, to my knowledge, built a fully compliant and powerful either software or hardware video codec for VVC, because it's not even finished yet. It's going to be finished in July 2020. When the dust finally settles, maybe four or five years from now, it will prove to be at least three or four times more complex than HEVC, the encoder that is; the decoder, not that much. The decoder, luckily we're able to build decoders that are much more linear than the encoder. Pankaj Topiwala: 27:37 So I guess I should qualify this discussion by saying the complexity growth has all mostly been in the encoder. The decoder has been much more reasonable. Remember, we are always relying on this principle of ever-increasing compute capability. You know, a factor of two every 18 months. We've long heard about all of this, you know, and it is true, Moore's law. If we did not have that, none of this could have happened. None of these high complexity codecs would ever have been developed, because nobody would ever be able to implement them. But because of Moore's law we can confidently say that even if we put out this very highly complex VVC standard, someday, and in the not too distant future, people will be able to implement this in hardware. Now you also asked a very good question earlier, is there a limit to how much we can compress? Pankaj Topiwala: 28:34 And also one can ask, related to this issue, is there a limit to Moore's law? And we've heard a lot about that.
That may be finally, after decades of the success of Moore's law actually being realized, maybe we are now finally coming to quantum mechanical limits to, you know, how much we can miniaturize in electronics before we actually have to go to quantum computing, which is a totally different, you know, approach to doing computing, because trying to go to smaller die sizes will make it unstable quantum mechanically. Now, it appears that we may be hitting a wall eventually; we haven't hit it yet, but we may be close to a, a physical limit in die size. And in the observations that I've been making, at least, it seems possible to me that we are also reaching a limit to how much we can compress video, even without a complexity limit, how much we can compress video and still obtain reasonable or rather high quality. Pankaj Topiwala: 29:46 But we don't know the answer to that. And in fact there are many, many aspects of this that we simply don't know. For example, the only real arbiter of video quality is subjective testing. Nobody has come up with an objective video quality metric that we can rely on. PSNR is not it. When, when push comes to shove, nobody in this industry actually relies on PSNR. They actually do subjective testing. Well, so in that scenario, we don't know what the limits of visual quality are, because we don't understand human vision. You know, we try, but human vision is so complicated. Nobody can understand the impact of that on video quality to any very significant extent. Now in fact, the first baby steps to try to understand, not explicitly but implicitly capture, subjective human video quality assessment into a neural model, those steps are just now being taken in the last couple of years. In fact, we've been involved, my company has been involved in, in getting into that, because I think that's a very exciting area. Dror Gill: 30:57 I tend to agree that modeling human perception with a neural network seems more natural than, you know, just regular formulas and algorithms which are, which are linear. Now I, I wanted to ask you about this process of, of creating the codecs. It's, it's very important to have standards. So you encode a video once and then you can play it anywhere and anytime and on any device. And for this, the encoder and decoder need to agree on exactly the format of the video. And traditionally, you know, as you pointed out with all the history of, of development, video codecs have been developed by standardization bodies, MPEG and ITU, first separately. And then they joined forces to develop the newest video standards. But recently we're seeing another approach to developing codecs, which is by open sourcing them. Dror Gill: 31:58 Google started with an open source codec they called VP9, which they first developed internally. Then they open sourced it and, and they use it widely across their services, especially in YouTube. And then they joined forces with, I think, the largest companies in the world, not just in video but in general. You know, those large internet giants such as Amazon and Facebook and, and Netflix, and even Microsoft, Apple, Intel have joined together with the Alliance for Open Media to jointly create another open codec called AV1. And this is a completely parallel process to the MPEG codec development process. And the question is, do you think that this was kind of a one time effort to, to try and develop a royalty free codec, or is this something that will continue?
And how do you think the adoption of the open source codecs versus the committee defined codecs, how would that adoption play out in the market? Pankaj Topiwala: 33:17 That's of course a large topic on its own. And I should mention that there have been a number of discussions about that topic. In particular at the SPIE conference last summer in San Diego, we had a panel discussion of experts in video compression to discuss exactly that. And one of the things we should provide to your listeners is a link to that captured video of the panel discussion where that topic is discussed to some significant extent. And it's on YouTube so we can provide a link to that. My answer. And of course none of us knows the future. Right. But we're going to take our best guesses. I believe that this trend will continue and is a new factor in the landscape of video compression development. Pankaj Topiwala: 34:10 But we should also point out that the domain of preponderance use preponderant use of these codecs is going to be different than in our traditional codecs. Our traditional codecs such as H.264 265, were initially developed for primarily for the broadcast market or for DVD and Blu-ray. Whereas these new codecs from AOM are primarily being developed for the streaming media industry. So the likes of Netflix and Amazon and for YouTube where they put up billions of user generated videos. So, for the streaming application, the decoder is almost always a software decoder. That means they can update that decoder anytime they do a software update. So they're not limited by a hardware development cycle. Of course, hardware companies are also building AV1. Pankaj Topiwala: 35:13 And the point of that would be to try to put it into handheld devices like laptops, tablets, and especially smartphones. But to try to get AV1 not only as a decoder but also as an encoder in a smartphone is going to be quite complicated. And the first few codecs that come out in hardware will be of much lower quality, for example, comparable to AVC and not even the quality of HEVC when they first start out. So that's... the hardware implementations of AV1 that work in real time are not going to be, it's going to take a while for them to catch up to the quality that AV1 can offer. But for streaming we, we can decode these streams reasonably well in software or in firmware. And the net result is that, or in GPU for example, and the net result is that these companies can already start streaming. Pankaj Topiwala: 36:14 So in fact Google is already streaming some test streams maybe one now. And it's cloud-based YouTube application and companies like Cisco are testing it already, even for for their WebEx video communication platform. Although the quality will not be then anything like the full capability of AV1, it'll be at a much reduced level, but it'll be this open source and notionally, you know, royalty free video codec. Dror Gill: 36:50 Notionally. Yeah. Because they always tried to do this, this dance and every algorithm that they try to put into the standard is being scrutinized and, and, and they check if there are any patents around it so they can try and keep this notion of of royalty-free around the codec because definitely the codec is open source and royalty free. Dror Gill: 37:14 I think that is, is, is a big question. So much IP has gone into the development of the different MPEG standards and we know it has caused issues. 
It went pretty smoothly with AVC, where MPEG-LA provided kind of a single point of contact for licensing all the essential patents, but with HEVC that didn't go very well in the beginning. Still, there is a lot of IP there. So the question is, is it even possible to have a truly royalty-free codec that can be competitive in compression efficiency and performance with the codecs developed by the standards committee?

Pankaj Topiwala: 37:50 I'll give you a two-part answer. One: the landscape of patents in the field of video compression is what I would describe as very, very spaghetti-like, with patents dating back to other patents.

Pankaj Topiwala: 38:09 And they cover most of the topics and most of the tools used in video compression. By the way, we've looked at AV1, and AV1 is not that different from all the other standards that we have, H.265 or VVC. There are some things that are different, but by and large it resembles the existing standards. So can it be that this animal is totally patent free? No, it cannot be patent free. But patent free is not the same as royalty free. There's no question that AV1 has many, many patents, probably hundreds of patents, that reach into it. The question is whether the people developing and practicing AV1 own all of those patents. That is of course a much larger question.

Pankaj Topiwala: 39:07 And in fact, there has been a recent challenge to that: a group has even stood up to proclaim that they have essential IP in AV1. The reaction from the AOM has been to develop a legal defense fund, so that they're not going to budge in terms of their royalty-free model. If they did, it would kill the whole project, because their main thesis is that this is a royalty-free thing: use it and go ahead. Now, the legal defense fund protects the members of that Alliance jointly. It's not as if the Alliance is going to indemnify you against any possible attack on IP. They can't do that, because nobody can predict where somebody's IP is. The world is so large, and there are so many patents, that we're talking not even hundreds or thousands but tens of thousands of patents at least.

Pankaj Topiwala: 40:08 So nobody in the world has ever reviewed all of those patents. It's not possible. And the net result is that nobody can know for sure what technology might have been patented by third parties. But the point is that a very large number of powerful companies, which are also the main users of this technology, companies like Google and Apple and Microsoft and Netflix and Amazon and Facebook, are behind it. And Samsung, by the way, has joined the Alliance. These companies are so powerful that it would be hard to challenge them. So in practice, they can project a royalty-free technology because it would be hard for anybody to challenge it. That's the reality on the ground.

Pankaj Topiwala: 41:03 So at the moment it is succeeding as a royalty-free project. I should also point out that if you want to use this, not join the Alliance but just be a user, even just to use it you already have to offer any IP you have in this technology to the Alliance.
So if tens of thousands, and eventually millions, of users around the world, including tens of thousands of companies, start to use this technology, they will all have automatically yielded any IP they have in AV1 to the Alliance.

Dror Gill: 41:44 Wow. That's really fascinating. I mean, first, the distinction you made between royalty free and patent free: the AOM can keep this technology royalty free even if it's not patent free, because they don't charge royalties, and they can help with the legal defense fund against patent claims and still keep it royalty free. And second is the fact that when you use this technology, you are giving up any IP claims against the creators of the technology, which means that any party who wants to assert IP claims against the AV1 encoder cannot use it in any form or shape.

Pankaj Topiwala: 42:25 That's at least my understanding. I've tried to look at it, but of course I'm not a lawyer, and you have to take that as just the opinion of a video coding expert rather than a lawyer dissecting the legalities of this. But be that as it may, my understanding is that any user would have to yield any IP they have in the standard to the Alliance. And the net result will be that, if this technology truly does get widely used, more IP than just that of the Alliance members will have been folded into it, so that eventually it would be hard for anybody to challenge this.

Mark Donnigan: 43:09 Pankaj, what does this mean for future development? So much of the technology has been enabled by the financial incentive of small or medium-sized groups of people forming together, building a company usually, hiring other experts, and being able to derive some economic benefit from the research and the work and the effort that's put in. If all of this consolidates to a handful, or a couple of handfuls, of very large companies, I guess I'm asking, from your view, will video coding technology development and advancements proliferate? Will it stay static, because basically all these companies will hire or acquire all the experts, and now everybody works for Google and Facebook and Netflix? Or do you think it will ultimately decline? Because that's something that comes to mind here: if the economic incentives go away, well, people aren't going to work for free!

Pankaj Topiwala: 44:29 So that's of course another question, and one relevant, in fact, to many of us working in video compression right now, including my company. I faced this directly back in the days of MPEG-2. There was a two and a half dollar ($2.50) per unit license fee for using MPEG-2. That created billions of dollars in licensing; in fact, the patent pool administrator, MPEG-LA, itself made billions of dollars even though they took only 10% of the proceeds. Huge amounts of money. With the advent of H.264 AVC, the patent license went from two and a half dollars down to 25 cents a unit, and now with HEVC it's a little bit less than that per unit. Of course the number of units has grown exponentially, but the big companies don't continue to pay per unit anymore.

Pankaj Topiwala: 45:29 They just pay a yearly cap, for example 5 million or 10 million dollars, which to these big companies is peanuts.
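To put those per-unit and cap figures side by side, here is a quick back-of-the-envelope comparison using the numbers Pankaj cites (25 cents per AVC unit and a cap in the 5 to 10 million dollar range); the 500-million-unit volume is a hypothetical figure chosen purely for illustration.

```python
# Illustrative only: per-unit royalties vs. a yearly cap, using the figures
# mentioned above. The shipment volume is a made-up example, not a statistic.
PER_UNIT_FEE = 0.25           # dollars per AVC unit, as cited in the discussion
YEARLY_CAP = 10_000_000       # dollars, the upper end of the cap mentioned
units_shipped = 500_000_000   # hypothetical volume for a very large licensee

uncapped = units_shipped * PER_UNIT_FEE
effective = min(uncapped, YEARLY_CAP)
print(f"Uncapped royalty: ${uncapped:,.0f}")   # $125,000,000
print(f"With yearly cap:  ${effective:,.0f}")  # $10,000,000
```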
So there's a yearly cap for the big companies that have hundreds of millions of units. Imagine the number of Microsoft Windows installations that are out there, or the number of Google Chrome browsers; if you have a codec embedded in the browser, there are hundreds of millions of them, if not billions. So they just pay a cap and they're done with it. But even then, up till now there has been an incentive for smart engineers to develop exciting new ideas for future video coding. That has been the story up till now. But if it happens that this AOM model, with AV1 and then AV2, really becomes the dominant codec and takes over the market, then there will be no incentive for researchers to devote any time and energy.

Pankaj Topiwala: 46:32 Certainly my company, for example, can't afford to just twiddle thumbs and create technologies for which there is absolutely no possibility of a royalty stream. We cannot be in the business of developing video coding when video coding doesn't pay. So the only thing that makes money is applications, for example a streaming application or some other such thing. And so Netflix and Google and Amazon will be streaming video, and they'll charge you per stream, but not on the codec. So that's an interesting thing, and it certainly affects the future development of video. It's clear to me it has a negative impact on the research we have going. I can't expect that Google and Amazon and Microsoft are going to continue to devote the same energy to developing future compression technologies in their royalty-free environment that companies have in the open standards development environment.

Pankaj Topiwala: 47:34 It's hard for me to believe that they will devote that much energy. They'll devote energy, but it will not be at the same level. For example, developing a video standard such as HEVC took up to 10 years of development by on the order of 500 to 600 experts, well, let's say four to five hundred experts, from around the world, meeting four times a year for 10 years.

Mark Donnigan: 48:03 That is so critical. I want you to repeat that again.

Pankaj Topiwala: 48:07 Well, very clearly we've been putting out a video codec roughly on a schedule of once every 10 years. MPEG-2 was 1994, AVC was 2003 and 2004, and then HEVC in 2013. Those were roughly 10 years apart. But with VVC we've accelerated the schedule to put one out in seven years instead of 10. Even then, you should realize that we have been working on it ever since HEVC was done.

Pankaj Topiwala: 48:39 We've been working all this time to develop VVC, and so on the order of 500 experts from around the world have met four times a year at international locations, spending on the order of $100 million per meeting. So billions of dollars have been spent by industry to create these standards, many billions, and it can't happen without that. It's hard for me to believe that companies like Microsoft, Google, and whatnot are going to devote billions to develop their next incremental AV1, AV2, and AV3. But maybe they will. It's just that if there's no royalty stream coming from the codec itself, only from the application, then, supposing they start dominating, the incentive to create even better technology will not be there. So there really is a financial issue in this, and it's at play right now.

Dror Gill: 49:36 Yeah, I find it really fascinating.
And of course, Mark and I are not lawyers, but all of this, royalty free versus committee developed, open source versus a standard, those large companies whose dominance some people fear, and not only in video codec development but in many other areas, versus dozens of companies and hundreds of engineers working for seven or ten years on a codec: these are really different approaches, different methods of development, that eventually address the exact same problem of video compression. And how this turns out we cannot forecast for sure, but it will be very interesting, especially next year in 2020, when VVC is ratified and, at around the same time, EVC, another codec from the MPEG committee, is ratified.

Dror Gill: 50:43 And then AV1: once AV1 starts hitting the market, we'll hear all the discussions of AV2. So it's going to be really interesting and fascinating to follow, and we promise to bring you all the updates here on The Video Insiders. So Pankaj, I really want to thank you. This has been a fascinating discussion with very interesting insights into the world of codec development and compression, and wavelets and DCT and all of those topics, and the history and the future. So thank you very much for joining us today on The Video Insiders.

Pankaj Topiwala: 51:25 It's been my pleasure, Mark and Dror, and I look forward to interacting in the future. I hope this is useful for your audience. If I can give you one parting thought, let me give this...

Pankaj Topiwala: 51:40 H.264 AVC was developed in 2003 and 2004. That is some 16 or 17 years ago, and it is now close to being nearly royalty-free itself. And if you look at the market share of video codecs currently being used, even in streaming, AVC dominates that market completely. Even though VP8 and VP9 and VP10 were introduced, and now AV1, none of those have any sizeable market share. AVC currently holds 70 to 80% of that marketplace right now, and it fully dominates broadcast, where those other codecs are not even in play. So there, 16 or 17 years later, it is still the dominant codec, even well ahead of HEVC, which by the way has also been taking an uptick in the last several years. So the standardized codecs developed by ITU and MPEG are not dead. They may just take a little longer to emerge as dominant forces.

Mark Donnigan: 52:51 That's a great parting thought. Thanks for sharing that. What an engaging episode, Dror. Really interesting. I learned so much. I got a DCT primer; that in and of itself was amazing.

Dror Gill: 53:08 Yeah. Thank you.

Mark Donnigan: 53:11 Yeah, amazing, Pankaj. Okay, well, good. Thanks again for listening to The Video Insiders, and as always, if you would like to come on the show, we would love to have you. Just send us an email; the address is thevideoinsiders@beamr.com, and Dror or myself will follow up with you. We'd love to hear what you're doing. We're always interested in talking to video experts who are involved in really every area of video distribution, so it's not only encoding and not only codecs; whatever you're doing, tell us about it. And until next time, what do we say, Dror? Happy encoding! Thanks everyone.
Resources:
Download the HEVC deployment statistics document here: JCTVC-AK0020
Related episode: E08 with MPEG Chairman Leonardo Chiariglione
The Video Insiders LinkedIn Group is where we host engaging conversations with over 1,500 of your peers. Click here to join.
Like to be a guest on the show? We want to hear from you! Send an email to: thevideoinsiders@beamr.com
Learn more about Beamr's technology
We didn't see that coming! In this episode, The Video Insiders learn something shocking about HEVC support on devices running the THEOplayer SDK. Pieter-Jan Speelmans, CTO of THEOplayer, educates The Video Insiders about the state of modern video players and codec interchangeability.