Device or software for encoding or decoding a digital data stream
Jeremy Parish, Shane Bettenhausen, and Tomm Hulett dial in their Codecs and resume their transmission about the story of the Metal Gear series from where it left off. Don't expect things to suddenly start making sense, though... Retronauts is made possible by listener support through Patreon! Support the show to enjoy ad-free early access, better audio quality, and great exclusive content. Learn more at http://www.patreon.com/retronauts
This week, Mark Donnigan and I recap news from IBC about using cloud-based compute services for video workflows, encoding optimization, AV1/codec adoption, live encoding use cases, and AI's potential impact on encoding platforms. We also give updates on the Venu/Fubo lawsuit, price increases for NBA League Pass, the upgraded Roku Ultra device, Google rebranding their FAST channels, and some viewership numbers from Peacock. Finally, we discuss some interesting statements from Netflix regarding their ad tier, the NFL games on Christmas and how they view one-off sporting events versus full seasons. Companies mentioned include Akamai, Amagi, AOM, ATEME, AWS, Bitmovin, Brightcove, CDN77, Disney+ Hotstar, Dolby, Edgio, Harmonic, iSIZE, Nielsen, Netflix, NETINT, NVIDIA, THEO Technologies, Visionular, WaveOne, YouTube. Thanks to this week's podcast sponsors: Integrated Digital Solutions and Netskrt Systems. Podcast produced by Security Halt Media
Do pen and paper have any role in your productivity system these days? If not, you might be missing out on something very special. You can subscribe to this podcast on: Podbean | Apple Podcasts | Stitcher | Spotify | TUNEIN Links: Email Me | Twitter | Facebook | Website | Linkedin The Working With… Weekly Newsletter The FREE Beginners Guide To Building Your Own COD System Carl Pullein Learning Centre Carl's YouTube Channel Carl Pullein Coaching Programmes The Working With… Podcast Previous episodes page Script | 311 Hello, and welcome to episode 311 of the Working With Podcast. A podcast to answer all your questions about productivity, time management, self-development and goal planning. My name is Carl Pullein, and I am your host for this show. A few weeks ago, I posted a video on YouTube that demonstrated how I have gone back to using a pen—or rather, a few of my old fountain pens—and some paper to start planning a project. I've since added doing my weekly planning on paper too. This video and a subsequent follow-up video garnered a lot of interest and some fantastic questions. It also goes back to a question I was asked on this podcast last year on whether it was possible to create an analogue version of the Time Sector System. This week's question is a follow-up to that question, and I hope my answer will encourage you to explore some of the unique ways the humble pen and paper can aid in your productivity journey. So, with that said, let me hand you over to the Mystery Podcast Voice for this week's question. This week's question comes from Tom. Tom asks, hi, Carl, I recently saw your video on going back to pen and paper. What was your thinking behind that decision? Hi Tom, thank you for your question. In many ways, the reason for the "experiment" was something I tried when I was flying over to Ireland for the Christmas break. I decided to take a pen and notebook with me to see if my planning and thoughts would flow better on paper rather than how I usually do it through a keyboard. The idea came from a video I had seen with Tim Ferriss, where he discussed how he finds his ideas flow better when he puts pen to paper. Plus, I have seen Robin Sharma, Tony Robbins and Andrew Huberman, and read about many historical figures such as Presidents Kennedy and Nixon as well as Winston Churchill, Ian Fleming and Charles Darwin, all take copious notes on paper. I wondered if there was something in it. When you think about it, the chances are you spend far too long in front of a screen these days. If it's not your computer, it's going to be your phone or TV. We just don't seem to be able to get away from them. When you pick up a pen and a pad of paper, you are no longer looking at a screen. The whole effect on your eyesight is going to change. This is certainly something I was beginning to feel. Pretty much everything I do involves a screen. There's even a heads-up display in my car! I just don't seem to be able to get away from them. Then there's the type. I was recently looking through some of my old planners from 2009 and 2010 and found myself being transported back fifteen years to what I was thinking back then. It was a wonderful, nostalgic journey. My handwriting was unique; I could tell which pen I used and even the ink I was using back then. I can look at a digital document I created ten years ago, and it's boring Helvetica in black. It pretty much looks the same as any document I create today. There's nothing nostalgic.
There's a wonderful video on YouTube by Adam Savage (yes, the Adam Savage formerly of MythBusters) where he shows an exact copy of one of Leonardo Da Vinci's Codices. WOW! I was blown away. It looked gorgeous—even though Da Vinci wrote backwards. The aged paper, the diagrams, the pen strokes. Everything looked so beautiful. So, as I was thinking about how I could bring pen and paper back into my system, I realised the one area where paper, for me, always works better than digital is in planning—well, certainly the initial planning stages. I also find that, despite Apple's attempts at creating quick notes using the Apple Pencil, it's still not faster than having a notebook next to you on your desk with a pen. Now, one problem many people face with using pen and paper is you end up with a load of half-empty notebooks all over the place. I can assure you, if you think there are too many productivity apps around, wait until you begin going down the notebook rabbit hole. There are thousands of different styles, colours and papers. You'll learn about the incredible quality of Japanese paper and what constitutes fountain pen-friendly papers. You'll learn about dot grids, grids, graph and lined paper. Then there are the covers—leather bound, ring bound, sewn, bonded and WOW! So many decisions. You've been warned. And if you start investigating fountain pens, you'll find yourself in serious trouble. YouTube is full of videos on what constitutes the best pens for all kinds of writers. You'll learn about grail pens—pens everyone wants in their collection. I confess I have a soft spot for the Namiki Urushi and a Montblanc 149. Anyway, don't say I didn't warn you. Now, back to how I am utilising pen and paper in my system. I have two notebooks. The main one is my planning book. This is an A4 lined notebook where I will begin any planning session. I write the title of what I am planning at the top and then brainstorm in one colour—usually blue. Now, I find the best place to do this is at the dining table, not at my desk. There are no screens on the dining table. So all I have is my notebook and my blue-inked fountain pen. This is what I call my first pass. Now, the trick here is to write whatever comes into your head and write anywhere on the page. Remember, this is the first pass. It doesn't matter how good or bad any idea is. Just get it out of your head. Even the craziest ideas may contain a seed of something special. Once you've finished and can think of nothing else, close your planning book and leave it for twenty-four hours. Let your subconscious brain do its thing. After twenty-four hours or so, come back to your note and, with a different colour pen, expand your initial thoughts. You could also bring your highlighters to the table if you prefer. One reason I use royal blue as an ink colour for my first session is that a simple pencil looks great next to royal blue. But I do like to use black and green inks too. What you will find is you'll begin adding more ideas, and the initial ideas you had will sprout new, better ideas. This is what I call the second pass. If there is time pressure, I will move on to the next step now. However, I prefer to have time to run a second and third pass just to get all my ideas out. So, what is the next step? This is where I scan the paper note into my notes app. From here, I can pull out the key points and ideas and begin developing the project or video idea. There's often research to be done at this stage and also to decide what action steps I need to take.
All of which will likely require a computer. The second notebook I have is my scratch pad. Now, this could be down to my age, but even at school, I always had a pad of paper and pen next to me for jotting down quick notes and random thoughts. There's something comforting about having it next to you. I could, for instance, be writing this script and suddenly have an idea, and I can quickly write it down on my scratch pad for later. Once it's written down, it's not going to be forgotten. I can deal with it later. This notebook is an A5 ring-bound notebook. It's a perfect size for scratching down ideas, and the ring binding allows me to lay the book flat on my desk. At the end of the day, I will go through the captured notes to see if anything needs to be transferred to my task manager. Anything I have dealt with previously, I will simply cross out. However, the most important thing here is stepping away from the screen and all the distractions a computer will throw at you, and just focusing on thinking about the project, goal or whatever you need to think about. There's something about the feel of a pen on paper that no digital tool can replicate. I've tried things like the reMarkable 2 and many of the other so-called "paper replacements". Sorry, but they cannot replicate that exquisite feel of a fountain pen nib flowing across paper. I suspect this is why fountain pens are still popular among so many writers today. Handwriting is in our DNA - from cave paintings thousands of years old to ancient Egyptian hieroglyphics, we've been writing for thousands of years. Keyboards and typing are relatively modern, and anything you type looks the same—after all, we generally use the same fonts for everything. With handwriting, you're creating art. It's unique. Each new note is going to look different from a previous note. You can choose different pens and colours and take them anywhere and just sit and write. It is such a different experience from sitting at a computer screen and typing. That difference will give you different ideas and thoughts. Funnily enough, I have returned to writing my journal by hand again after five years in the digital journaling world. While it was very convenient to be able to add a photo to each new journal entry, I realised when I was reading through my old planners and handwritten journals there was something so different about what I was reading. I rarely read through my old typed journal entries, but I was captivated by my old handwritten journals. I could have sat there for hours reliving my life through a handwritten page. So, there you go, Tom. That is why I have returned to the analogue world. I would also add that I have started doing my weekly planning on paper too. If you are familiar with my Weekly Planning Matrix, you can draw out the four squares in your planning notebook and give yourself twenty minutes to think about what needs to be done next week. It feels like you are tapping into a different way of thinking which is clearer, more focused on the bigger picture and, in a way, more emotional than trying to do this digitally. I hope that has inspired many of you to go out and get yourself a notebook and pen. Have a go at it. See what happens. You might just fall in love with pen and paper all over again. Just be careful, there's a whole world out there of notebooks and pens. For me, my trusty old fountain pens and some Rhodia notebooks do the trick.
(Although, I confess, I've ordered some of the famous Japanese paper to test.) Thank you, Tom, for your question and thank you, to you too, for listening. It just remains for me now to wish you all a very, very productive week.
Happy New Year! We wish you plenty of good reading, and above all surprising discoveries that take you off the beaten path. That's exactly what we start doing in this episode, where, against all expectations, Guillaume brings us a documentary, Pié! puts out an autobiographical doorstopper, Tio blasts off into space, and Thierry has us revisiting the classics. Happy listening! Download the episode (52 MB) – Watch on YouTube. Subscribe to the One Eye Club – Subscribe to all our shows. Reviews: [02:47] Alice au Pays des Cryptos – Daniel Villa Monteiro, Nicolas Balas; [14:16] Replay – Jordan Mechner; [22:36] Prima Spatia n°1 – Denis-Pierre Filippi, Silvio Camboni, Francesco Segala; [29:23] Crénom, Baudelaire ! n°1 – Dominique & Tino Gelli, after Jean Teulé. An Eye On… [41:27] Pluto, the Netflix series, based on the manga by Naoki Urasawa, itself based on the manga by Osamu Tezuka. Legend: Script – Art – Colour – Favourite – Review copy – Patrons' vote. Theme and jingles: Spanish Samba (Oursvince) / Dialup (Jlew) / backstraight (Heigh-hoo)
Nobara Linux is an easy-to-use Fedora variant for anyone who needs proprietary graphics drivers and codecs - for example for streaming and gaming. Together with Maximilian, we look at how well the project achieves its goal and which pitfalls remain.
The 16:9 PODCAST IS SPONSORED BY SCREENFEED – DIGITAL SIGNAGE CONTENT Everybody who is active and experienced in the digital signage space knows the big evergreen challenge for solutions providers and end-users is content production - keeping programming on screens fresh and relevant, but also attractive. A lot of companies in the ecosystem - and not just the software guys - have some degree of template libraries and finished content that can be updated or pushed straight to screens. That's a piece of the solution. But there's also a demand for tools that make it easy and efficient to produce good-looking material for screens. In looking over the exhibitor list for the upcoming DSE trade show, I came across Design Huddle, and wondered, "Who is that and what do they do?" It's a small West Coast US startup that has B2B graphic design software that allows brands, agencies, and other platforms to create what it describes as lockable digital, video, print, and presentation templates for their users. There are some similarities to solutions like Canva, but also a lot of distinctions. The one that would particularly interest a lot of tech companies in this industry is the ability to fully integrate and white label the Design Huddle toolset inside something like a CMS. I had a great chat with CEO and co-founder Dave Stewart, who is based (I'm jealous) in Huntington Beach, California. Yeah, there's LA traffic, but it's lovely by the water ... Subscribe from wherever you pick up new podcasts. TRANSCRIPT Dave, thank you for joining me. Can you tell me what Design Huddle is all about? Because it's unfamiliar to me. Dave Stewart: Yeah, absolutely. Ultimately, we are an enterprise-focused software-as-a-service platform that focuses on templating and content creation in an easy and accessible way. We're definitely API-first, so we have a big focus on platform integrations where our customers are programmatically creating content, but then we're also really focused on end-user experience so people who are actually designing, whether that's static content or motion content in a browser, are able to really easily fill in pieces of a video template or create content really for any purpose. What kind of content would they be creating in the context of digital signage, which is obviously what I'm interested in? Dave Stewart: Yeah, absolutely. So yeah, we were actually really surprised. We're relatively new to digital signage, and within the last year, we had to get up to speed ultimately because a couple of players in this industry came to us and really expressed, “Hey, content is a big issue for us, right? We can sell these really expensive screens and they're great, but our customers are just really struggling with what are we going to put on them and how's that going to look good, right? We can have a great-looking screen, but without good-looking content there's a problem.” So, we've been educating ourselves on this very recently, and it's really a combination of things like static content where it's like, I'm just displaying basic information that might be somewhat real-time or just informational, then also, motion content for things like, imagine the signs that are up on a football stadium or in a basketball gym, where you want to show basic animated content that's talking about whatever the context is for that sport or things like that. So it's been a little bit of everything, but imagine anything that can be shown on a sign, someone's creating that somewhere, right? Right.
Is the core idea that the end user, the operator, would be selecting from a template library, or are they creating stuff from scratch, or how does it work? Dave Stewart: Yeah, absolutely. We are actually just the software. We're not actually playing in the content game ourselves. We just make it really easy to create content on our platform, and generally, that's going to mean importing from existing design files and animation files that you've created elsewhere. We can import PDFs and maintain all the layers. So any static content that's generated in any Adobe product or Figma, we can essentially just import it in and maintain that. In After Effects, you can now export to a format called Lottie files, and Lottie files can be imported into our system, and now essentially we can have really rich animations generated in After Effects that are really easily customizable by an end user and also programmatically via API. So the starting point for most of our customers is generating their content on their side, whether they're contracting with an agency or they have a team internally that's building these things. The main thing they're focused on is, we just don't want to have to do these customs per customer. I was super surprised to find out, from some of the initial interest in us, that these hardware companies have content teams that are literally generating content individually for their customers, and to me, that was crazy. But they had to, because that was the way they were going to sell their hardware. So we're just changing that a bit where it's like, just do that once, right? Generate some templates for them and then give them the power, empower them to actually make the changes for themselves, or, again, do it programmatically for them. So I'm curious. Is this the sort of thing that is best suited to somebody who's already a motion graphics designer, an animator, somebody with quite a deep set of creative skills or maybe technical skills? Dave Stewart: I would say a big focus of ours, when it comes to who we are going to sell to, is definitely software companies. They're high up on that list - companies who have a general system that's trying to do a lot of things, and specifically in digital signage, that might be a CMS or any of these other acronyms that we've come to find out exist here, where they're trying to do a lot of things. We're just the content piece, and we feel like we can really stand out by creating a best-in-breed, seamlessly integrated white-labeled product that can fit into their platform in a way that feels proprietary but adds best-of-breed, innovative content creation ability. Now, when it comes to who's creating that content, whether they have an internal design team with some expertise or whether they contract an agency just to initially create them a set of templates, it can work either way. I will also say, though, that we do work with brands directly, where brands are creating branded content that might be shown on lots of screens but they want to empower regular users to be able to make changes to those templates while still adhering to brand consistency and their brand guidelines, and so our locking feature is big in that situation because someone creates a template but then anybody can actually make basic adjustments to it.
So it sounds like it's a little reminiscent of what I've been hearing in the last year about AI and how generative AI isn't going to really replace designers, but it does add a considerable layer of efficiency in that you can remove some of the drudgery and some of the building block stuff and automate that or streamline that, but it's not meant to just take designers out of the equation. Dave Stewart: No, definitely not. I feel like we're really excited about AI and everyone says that, but I'll get more specific for you. I think, for us right now, we actually just sent out an AI survey to our customers to try to prioritize the main things that they're really interested in. For us, the basic stuff, like background removal, like removing background from images, which we already do, and background from videos. You have things like speech-to-text to provide auto captioning and things like that. Obviously, generative AI, where you're prompting via text to say, “Hey, I want an image that shows this, or I want to alter this one image to include this”, all those things fit in really well with what we do, but where we want to take that even further is, okay, let me generate a whole bunch of template ideas for you that are basic iteration changes from a set of templates that we may train a model on. So we're actually gonna take all your content you've made, and the holy grail for us is, let us shoot out and show you a bunch of previews of a bunch of similar-looking templates that follow the same kind of styles, maybe themes or layouts. But in a new way: you're still starting with the designer that needs to set the standard, but you're able to generate content in a much quicker way and remove a lot of the monotonous activity that's usually involved there. So what would be involved in using it? Dave Stewart: Yeah, absolutely. So typically what will happen is, again, two sides of our business. We have a platform side where we're going to be very hands-on with our customers and integrate this into some platform that they already have, where there are already users where they need to add on templating or improve some existing content creation suite that they have inside that. So, we would inherit those users and they seamlessly become part of that platform. The other side of the business is the turnkey solution, where we might work with an agency or brand directly. We white label it and they log into a portal that we create, but it's white-labeled for you on your domain, and the idea is that a user is just signing up and accessing a template in a way where you are just a distribution mechanism to provide them content that way. Either way, it's going to be in the context of a browser, whether that's on desktop or mobile, and generally it's going to be filling out a template that someone has gotten you, let's call it, 80 percent of the way there. Okay. So like you were saying earlier, it's not really that you would go in and say, I want to do a 15-second promotional spot for a car dealer, and I would go find a template that seems to be about retail or car dealers or whatever it may be and I can monkey around with that. This is more about importing what you already have and automating and making it much more efficient to do that sort of thing. Dave Stewart: Honestly, I think it's both.
We have some customers that definitely fall more in the former, for sure, where they have more generic content that they're trying to reach a lot of people with and they're creating more generic content that could be used for different purposes while still allowing the user to really personalize it for themselves. But then, we also have customers that are doing things programmatically. So, let's walk through the car dealer one then. If I'm Bob's Shovels in Fairbanks, Alaska or whatever it may be, and I want to create five ads for our fall clearance event and I don't have a motion graphics animator on my team or anything like that. What would I do? Dave Stewart: Yeah. No, absolutely. So in that situation, again, someone that small isn't necessarily going to be our customer directly. We're going to inherit them from the fact that they work with some other company, whether that's an agency or they have digital signage. Let's imagine they bought a digital sign, and with that came a subscription to some sort of content creation suite, and it just so happens that we power that content creation suite. That would be the scenario where we might be involved with a small business like that. In that situation, that would entail that the agency or the hardware company that is providing that software suite has created some basic templates for this type of customer, which is exactly what we're seeing happen, by the way. And again, I was very surprised about this, that these hardware companies would actually have content teams doing this, but that's exactly what's happening. And so, the content teams are just really excited that they don't have to do super personalized custom graphics, both motion and static, for the customer anymore. They can just create templates and let the customers handle them themselves. So one of the main reasons that end users and solutions providers to some degree struggle with all of this in terms of content is cost. Agency costs are higher and everything else, and the idea of these kinds of tools is attractive for a number of reasons. But one of them is, this will lower my costs of producing content. I assume you guys have done some sort of calculations to say to your potential customers that if you use our stuff, you can potentially save this kind of money. Dave Stewart: Yeah. Ultimately, not that we're in the business of replacing designers that you might already have on staff, but most of the time we're getting brought into a situation where there's a design team and currently what they're super focused on is super monotonous, non-creative work where they're taking a Photoshop file and making basic text changes and dropping in images. And think about the salary of someone like that and what you're paying for. We would say, we're not trying to replace that person, but let's focus some of that person's time on something actually creative that's going to move the needle for your business, not on this monotonous work that could absolutely be done by the user themselves in a simple templating solution. So, that's how we'd approach it, and so when we talk about cost savings, again, you could think about the fact that, hey, this salary is gone, but ultimately we'd say, no, let's just repurpose that salary for something useful. Okay. So I want to go back to skill sets. What realistically do you need to use this? Do you need to be a designer or something already? Dave Stewart: Yeah.
I would say, look, Canva is a really interesting thing to look at because Canva came on the scene and showed everyone that a platform like this in the browser can be really easy to use and we can remove a lot of the friction and difficulty that's been associated with static and motion content in the past. And so Canva has really educated the market on what's possible and that anybody can kind of design following templates, and ultimately, I would say, while we're not trying to be Canva whatsoever, there's clearly a lot of overlap in what we do in terms of a simple user interface, a really easy-to-use templating solution. The big differentiation there is clearly that we're fully white-label and we're embedding this into some proprietary solution, typically in a way that really well fits into that ecosystem, whatever it might be, in a seamless way. So, how did the company get started? Dave Stewart: Yeah, absolutely. So as I mentioned, digital signage is relatively new for us and we're really excited about it, but ultimately, we operate in other verticals, so the opportunity originally was more in what we do in terms of media types; we support print, even large-format print. For instance, we were at ISA earlier this year and our focus going to that was actually more on non-digital billboards and things like that. That was actually really interesting, by the way, as an aside, because, on the plane ride there, some people behind me were talking about one of our larger customers who's actually a major player in digital signage, and it opened my eyes to, wow, this is a much bigger company than I even realized. And they're having content issues. There must be lots of additional opportunities here. So, going into that show, again we shifted and pivoted. It's like, hey, you know what? Digital signage is actually a bigger opportunity than we thought. But to answer your question, again, starting in some of those other media types, we just saw the need for really simple, white-labeled digital content creation, whether that be for ads, whether that's just basic social media graphics and posts and basic print collateral. There are lots of sites that are just offering content, whether that's a printing website or an agency just providing content to their users. Content is content at the end of the day, and it can be all sorts of things. We've really just focused on how do we create a really consistent experience for both motion and static? How can we really seamlessly tie together even print and digital content in a really simple, easy-to-use editor? That has ended up applying to lots of industries, and it's been really exciting to find that out. In terms of the business itself, what would be the breakdown roughly of what you're doing for print, what you're doing for online, what you're doing for digital display like digital signage? Is digital signage a big component of it, or is it just something you're trying to educate the market on and grow? Dave Stewart: Yeah. Honestly, like I mentioned, we've just gotten into digital signage recently, so clearly it's not a huge piece of the pie yet. We do have very large goals in digital signage, though. We actually do see digital signage being a pretty decent slice of the pie within the next two years, but as of right now, I'd say that it's hard to gauge because of the number of customers versus actual revenue. A lot of our revenue is tied to digital, for sure.
So, there are a lot of use cases for ads, social media graphics, things like that, which were our bread and butter. We have a lot of print-focused customers. The revenue is not as high there. There's just more of them, quantity-wise. But I would say that client counts for both of those are fairly evenly split. It's definitely skewed more revenue-wise toward digital, and what's been really interesting is that a lot of these digital-focused customers, even with social media, are the ones that push us into video, right? So, like motion content, as it pertains to digital signage, we were already creating HD-quality video just to try to serve that digital market, prior to even knowing about digital signage. So it's been really interesting to see that a lot of the things that we've done can apply in other areas, and it's really just about how we can make a better mousetrap when it comes to end-user simplicity of content customization and then programmatic, API-first control of a platform like this. Are you constrained at all in terms of formats and resolutions and things like that, which are obviously day-to-day things in digital signage? Dave Stewart: Yeah, what's really cool is that from the beginning, we've made it really easy to do basic resizing, so that an end user can actually resize. So, if there's a slightly different aspect ratio, as we can obviously find very frequently in digital signage, our algorithm will automatically move things around for you and try to keep the design integrity maintained. Now that doesn't work perfectly when you have huge aspect ratio shifts; clearly if you're going portrait to landscape, it's not going to necessarily work as well. But yeah, it is a big component of this. I would say that on the other side of it, on the programmatic side, we will have customers that will create different templates for slightly different aspect ratios, and then ultimately they'll use our API to populate all of them at once with the same data. So you're now spitting out a whole bunch of creative at one time, leveraging the same data - images, text, colors, all of it. Now you've just generated a whole pack for users that might have signs of different sizes. So in terms of outputs, you can do HD video. Dave Stewart: Absolutely. Yeah. Now we haven't gotten into 4K yet. There hasn't been demand because typically 4K is going to be created on professional desktop software. We can do it and we are thinking we're going to get pushed into that. And honestly, it's just going to take one customer that just tells us they really need it to pull the trigger on it, but absolutely, 1080p video we've been doing from the beginning. And are there any other issues around the output files? Like the video is 30 frames per second, that sort of thing? Dave Stewart: Oh yeah, absolutely. Yeah. So we're trying to follow all the industry standards there, and honestly, even if a client has very specific requirements when it comes to codecs and to specific quality of specific items, we're a very customizable platform - we have settings for all of those things, so we can match what you need. One of the bigger things has been transparent video. So, we actually are one of the few browser tools that actually supports transparent video, which is difficult because it's not cross-browser. There's not one format that works cross-browser on that, and so importing transparent video files and maintaining them is obviously huge for things like background removal and things like that.
But that's been a big one, because you can combine that with our support for Lottie files - I mentioned Lottie files earlier, and what you can do with them is really exciting - plus just bits of motion clips that you've either pulled from our stock libraries or that you've shot yourself. Putting all that together, there's a lot of really cool things you can do, and they're now attainable by a user who's not a professional motion graphic artist. So yeah, it's really cool what's possible now. So I'm very curious about the programmatic piece, and I think for people listening, it's important to understand we're not talking about programmatic advertising here; we're talking about programmatic content creation. Dave Stewart: Yeah, and I will say the overlap there is we do have some clients that are in ads, and they will actually use our template platform to do A/B testing on those ads where we'll pass in slightly different colors, slightly different copy, to generate a bunch of creatives at once. That's our overlap in the ad space, but yes, when we talk about programmatic, I really just talk about programmatic content creation and the fact that with our API, you can generate all sorts of variations of content very quickly, including videos. We have some clients that don't even show our editor to the user. It's really just about, hey, I want to generate a video that's 15 seconds from this template where it incorporates the customer's brand, their colors and their tagline and their company name. So, spit this out and show them this. It's that easy, right? You don't even have to have them open the editor and do it themselves. Can you give me a good example of how you could use APIs and data tables and everything else to automate the production of a whole bunch of media pretty quickly? Dave Stewart: Yeah, absolutely. So if you have a campaign that you're pushing, where you're really just trying to get out consistent messaging - and again, I won't even limit this to digital signage, because a lot of our clients will choose us because of the fact that we can operate there and across their other marketing collateral at the same time. But the idea would be, if the messaging is the same and you already have branded templates that are the starting point for a lot of different content you might be creating, great. Pass in the specific messaging, pass in specific keywords to generate images, or pass in the specific images directly. Let us fill all of those in at once and generate a whole campaign pack for you in one shot. What about for scale? Let's say you have, I don't know, a retailer that has 800 locations across North America and they want to be hyperlocal about the marketing or messaging or, "Here's our store manager for this location" or whatever. They have a template. They want to knock out 800 unique versions of this, with some variations on it. What kind of time is involved in doing that? Dave Stewart: It's a great question.
I'm glad you brought up that model, because we were actually operating in the franchise space before we even looked into digital signage at all, because franchises specifically that have these locations all over the place had this issue with print, had this issue with social media, and it's been around for a very long time, and so they would come to us because what will happen is those store managers or locations are either, one, requesting individual personalized graphics from the corporate design team on a very regular basis and completely taking up all their time doing that, or, two, they're going rogue and building off-brand content and it looks terrible and the marketing manager is finding it online and is just pissed off. So one of those two things is happening, and where we would come in is: look, the only way that you're going to solve that is if you make it easy for them, because if it's not easy, they're going to try and do it themselves, or if they have to wait for you to do it for them, they will do it themselves. So the only way to do it is, hey, how do we make this such an easy process that anybody can come in and feel like this is going to be the fastest way anyway and it's also going to look great. Why not use that? So ultimately what will happen is, again, the brand manager, corporate team, or whoever is going to create the template. Ultimately, that franchise, franchisee, that store manager, whoever it is, is going to log into the system and they're going to find the template. Most of the time, these are super locked down. So I have this template and ultimately, I just want to let the store put their store hours right here and maybe some sort of sale information on a specific percentage discount on something, whatever that thing might be, and so literally, the user is just going to click that, change the text and then export it, right? It doesn't take any time when you've really focused on the template. So yeah, they can't go in and change it to Comic Sans or put in a picture of their dog or whatever. Dave Stewart: No, our locking feature is something we spent a lot of time on. You can take it very far. Most of our clients will lock down almost everything, but we've made it to where you have full control over exactly what users can and can't do. You were talking earlier about Canva, and there are a few kinds of platforms out there that are variations on this, or do some of what you're doing. I'm also thinking about Promo and Shaker Media over in Korea. When you get asked about your company versus those kinds of companies, particularly Canva, what do you say? Dave Stewart: Yeah, no, absolutely. It's really interesting because, again, we don't really compete with Canva. Even with Canva's Enterprise solution, we don't really compete with them because ultimately, customers are coming to us because they want this white-labeled and embeddable into their own platform to make it seem proprietary. They want to have control. Right now, when you go to Canva, you have no control, right? They control the interface. They control the layout. They control the flow. You have zero say in terms of what the user then can do and where they can go, and they can go off crazy and get lost inside the Canva ecosystem. We're like the opposite, right? The whole goal of this is you make it what you want. You show exactly what you want. You lock down what you want and it looks like it's yours, and that's why people are going to come to us.
There's a lot of overlap in functionality, like you said, when it comes to content creation features and things like that, but we definitely have focused on some of the more niche-specific things that Canva hasn't. For instance, for print, we have full CMYK capability; Canva doesn't really. It's a conversion process for them, but we started from the ground up. For large format prints, we support really large format printing for things like large banners. That's not something you're really going to do on Canva. For video, there's this idea that we can support these Lottie files and transparent video; Canva just launched Lottie files, but their implementation is really simple, where you can only really use basic, almost GIF-type content. We've taken it way further. We just go deeper on the more professional aspects and then, again, are more focused on the white-label, embeddable nature of it. You have a booth of some kind at Digital Signage Experience. I assume you're there to start building partnerships and creating awareness that you exist. If I'm a CMS software company, that is probably the best example, what kind of work is involved if I say, "This is awesome, I'd love to integrate this into my overall solutions offer and have it white-labeled." Is that a three-month journey, a twelve-month journey, or allocating five people to work on it for a month, or just how does all that come together? Dave Stewart: Yeah, that's a great question. Now we're really excited about DSE coming up. This is the first time we're even attending, and we're really excited to exhibit based on, again, what we've heard and who's going to be there. So super excited about that, and I'd say that when it comes to who we're trying to reach there and understanding how it would work to work with us, for a CMS company, honestly, our messaging, and you'll see this in our booth, is all about the fact that we feel like you've probably already tried to do some level of content creation as part of your platform, and so our messaging is mostly, "Hey, let's upgrade that. Let's make that a little better. Let's improve that inside of your system because we can do that and make it still feel like it's yours." So that is our focus in terms of messaging to them, and I would say that in terms of the actual implementation for a company like that, we have a lot of walk-before-you-run type solutions when it comes to integrations. So a lot of our customers will actually start by initially just using our kind of turnkey portal that we have out of the box and then getting their initial customer buy-in on there and starting to create the templates that way, before actually doing the deep integration. While they're doing that, they're slowly starting to build the integration in, and they could do a really basic integration where they're mostly just embedding all of our components in a simple way and then facilitating fairly basic workflows, and then that's like a starting point. Then we would say that the next step is, okay, how do we incorporate some of the other data that you have in your CMS to do the automatic population of content, where we can take event-specific information or location-specific information and start injecting it automatically, leveraging our API. So that would be like a second step, and then how do we make sure that this feels seamless at every part of your workflow, maybe that's a third step.
So we would say that a really basic integration takes a team one or two months, typically, just to get started, and then we would say that if you're doing something really deep, maybe a few months after that, over time, starting to get it ingrained more and more. And what are the commercial aspects of this? If I am a CMS software company, I think this is really intriguing. What's it going to cost me to work with Design Huddle? Dave Stewart: Yeah. So again, being enterprise-focused, we've found that no two customers are alike. We actually assign what we call personas to their end users, and we say, we have some customers where their users come into the system once a year, and we have some customers where they're using the system every day. We can't price that the same; it's going to be a little bit different. When we talk about fully API-driven use cases where there's no end-user or direct interaction with our editor, that's a little simpler because we can just price it based on API activity and it's fairly straightforward. But when we talk about end users, no users are the same, so we actually do a pretty custom proposal process for customers and we dig into their specific use cases and try to assign a persona to these users. Still, ultimately, the idea would be that, in a user-based kind of pricing proposal like that, the more users you bundle, the bigger discount there is, and then we have overage tiers where the cost per user gets cheaper as they grow. The idea is that we're scaling together and things get cheaper and you get more profitable over time. But for the purposes of referencing this, I'm sure there are people listening, thinking this is really interesting, but is this going to cost me like a quarter million dollars or something? Dave Stewart: Oh, no! It's $500 for Starter, $750 a month for Pro, and then you've got Enterprise, and as you said, that depends on all kinds of variables. Dave Stewart: Yeah, and each one of those, just to be clear, includes a certain number of users, right? And the number of users that's included, again, gets into what I was just trying to describe, as it can vary a little bit. But yes, we're definitely not a quarter-million-dollar product as a starting point, right? We have a basic setup fee, which is usually in the low thousands, and then in the hundreds typically for most initial engagements, or low thousands. For that setup, that's because you're going to spend all this time working with your partner companies to sort out how to do this. Dave Stewart: We are very hands-on. I know a lot of companies say that, but honestly, for us, it's a huge waste of our time to spend a lot of time with you upfront, try to get it going, and then have it not succeed. So we do everything we possibly can at the beginning of the engagement to make sure that you have the tools you need. We actually create custom documentation for every customer that lays out exactly what they need to do, based on a consultation session where we talk through the specific platform, what they need to do, what they're trying to accomplish, give them tips and tricks and advice based on what we've seen be successful for other customers. That's all part of it. In addition, obviously, training for content creation, like getting your templates in the system. All of that is very front-loaded, and so that's where our setup fee is really focusing on that initial time we're going to spend with you to make sure that it's successful.
Yeah, I've certainly seen some setup fees from software companies where I thought, okay, that's just a cash grab. But that definitely doesn't sound like the case here. Dave Stewart: No, it really isn't. Honestly, we're probably doing that at cost, to be honest, and then the idea is that once you're in, it's a great thing. As much as we make our team available around the clock and always be around support-wise, we hear, as you can imagine, less and less from clients over time, right? So if we can make them successful at the beginning, they're really easy long term and we're just growing together and they're happy, and then all of our support costs are front-loaded for that reason. You're a virtual company, West Coast. How many people are in the company? Dave Stewart: So yeah, the latest count is, I'm about to hire another one, so around 12-13. We're relatively small. Canva's got 3,500 or something like that? Dave Stewart: Oh yeah, and it's fun, right? We're a really nimble team. You know, this is my second go-around. My last company, I took it to about 150 employees before I exited, so we're still pretty early on our journey here, and that's really exciting for us because we see so much opportunity in this. I do expect this to grow a lot in the next two years. But we are a lean team of seasoned software professionals, and we're able to do a lot with a fairly small team right now. And is this bootstrapped or venture-backed? Dave Stewart: Yeah, great question. My previous company actually started in the 2009 timeframe when everything crashed and there was really no money going around. The way that it was capitalized ended up biting me in the end, and it left a bad taste in my mouth. So going into this, my partners and I were really trying to bootstrap this from the beginning. I wanted full control over how this is going to work. That said, very early on, we had a large company come to us and say, "Hey, we really want to use you guys, but we're too worried about whether you're going to be around next year." That company is Smartsheet, right? They own a company called Brandfolder, which was the one interested in us. Smartsheet is a public company, they're very large, so they ended up becoming a small minority partner. They did basically a strategic round with us. That's a very small percentage, but ultimately it gives a lot of people a little bit more comfort working with us because they're our backstop. The only reason that they invested was really just to make sure that we were going to stick around, because they were going to be so invested in us. So they're there for that reason. That said, we are fully sustainable and profitable at this point, so we, actually, are currently setting our own course. Of course, we're in a really good position and we're excited about that, coming from my previous experience. If people are going to DSE, they'll be able to find you on the exhibit floor and I know you're coming to the mixer; and if they want to find you online, how do they do that? Dave Stewart: Yeah, absolutely. They can definitely check out our website, designhuddle.com. You can reach out to schedule some time with us. We are doing some of the DSE kind of promotional material. You may have just seen an email about us there where you can schedule some time with us at the show. But yeah, we would love to hear from you. We'd love to talk with everyone.
As I mentioned, we're excited to learn more about this industry and get deeper into it and we'd love to have all the conversations we need to figure that out. Great. All right, thank you, Dave. Much appreciated. Dave Stewart: Awesome. Thanks so much, Dave. It was a pleasure.
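For listeners trying to picture the programmatic content-creation workflow Dave describes (pass a template plus a table of location data to an API and get a pack of rendered variants back), here is a minimal sketch of what that loop could look like. The endpoint, auth scheme, field names, and template ID below are placeholders invented for illustration, not Design Huddle's actual API; their own documentation would define the real calls.

```python
# Hypothetical sketch of a "data table in, campaign pack out" workflow.
# Everything API-specific here (URL, token, payload shape) is an assumption.
import csv
import requests

API_BASE = "https://api.example-templating-service.com/v1"  # placeholder URL
API_TOKEN = "YOUR_API_TOKEN"                                 # placeholder credential
TEMPLATE_ID = "fall-clearance-1080p"                         # placeholder template

def render_variant(store: dict) -> str:
    """Ask the (hypothetical) rendering API for one variant for a single store."""
    payload = {
        "template_id": TEMPLATE_ID,
        "variables": {                      # values injected into locked template fields
            "store_name": store["name"],
            "store_hours": store["hours"],
            "discount": store["discount"],
        },
        "output": {"format": "mp4", "width": 1920, "height": 1080},
    }
    resp = requests.post(
        f"{API_BASE}/renders",
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["render_id"]         # job id to poll for the finished asset

if __name__ == "__main__":
    # One CSV row per location: name, hours, discount
    with open("stores.csv", newline="") as f:
        for store in csv.DictReader(f):
            print(store["name"], "->", render_variant(store))
```

The point of the sketch is the shape of the workflow, not the specific calls: the template and its locked fields are designed once, and scale comes from iterating over a data table rather than opening an editor 800 times.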
Our last few experiments with playing video+audio on the ESP32-S3 involved converting an MP4 to MJpeg + MP3, where MJpeg is just a bunch of jpegs glued together in a file, and MP3 is what you expect. This works, but we maxed out at 10 or 12 fps on a 480x480 display. You must manage two files, and the FPS must be hardcoded. With this demo https://github.com/moononournation/aviPlayer we are using avi files with Cinepak https://en.wikipedia.org/wiki/Cinepak and MP3 encoding - a big throwback to when we played quicktime clips on our Centris 650. The decoding speed is much better, and 30 FPS is easily handled, so the tearing is not as visible. The decoder keeps up with the SD card file, so you can use long files. This makes the board a good option for props and projects where you want to play ~480p video clips with near-instant startup and want to avoid the complexity of running Linux on a Raspberry Pi + monitor + audio amp. The only downside right now is the ffmpeg cinepak encoder is reaaaaallly slooooow. Visit the Adafruit shop online - http://www.adafruit.com ----------------------------------------- LIVE CHAT IS HERE! http://adafru.it/discord Subscribe to Adafruit on YouTube: http://adafru.it/subscribe New tutorials on the Adafruit Learning System: http://learn.adafruit.com/ ----------------------------------------- #adafruit #startrek #voyager #startrekday #espressif #esp32 #espfriends #display #videocodecs #esp32s3 #mjpeg #aviplayer #cinepak #retrotech #videoplayback #quicktime #decoder #techinnovation
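For anyone who wants to try the MP4-to-AVI conversion described above, here is a minimal Python wrapper around ffmpeg's built-in (and, as noted, slow) Cinepak encoder. The 480x480 size, 30 fps, and mono MP3 settings are assumptions taken from this post; check the aviPlayer repository's README for the exact parameters the demo expects, and note that your ffmpeg build needs the libmp3lame encoder.

```python
# Sketch: convert an MP4 into a Cinepak + MP3 AVI for the ESP32-S3 aviPlayer demo.
import subprocess
import sys

def convert_to_cinepak_avi(src: str, dst: str, size: str = "480:480", fps: int = 30) -> None:
    cmd = [
        "ffmpeg", "-y",
        "-i", src,
        "-vf", f"scale={size},fps={fps}",   # resize and pin the frame rate
        "-c:v", "cinepak",                  # Cinepak video (cheap to decode on the ESP32-S3)
        "-c:a", "libmp3lame",               # MP3 audio track
        "-ac", "1", "-ar", "44100", "-b:a", "128k",
        dst,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # e.g. python convert.py clip.mp4 clip.avi
    convert_to_cinepak_avi(sys.argv[1], sys.argv[2])
```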
This week Samsung is bringing an 89” Micro LED TV to market and it will only cost you $102,000 USD. We also take a look at AV receivers that we don't typically talk about. And as usual we read your emails and look at the week's news stories. News: How DIY Is Blurring The Lines In Smart Home Security LG's most popular OLED TVs just fixed a big brightness problem with a new update LG's ‘wireless' and wildly expensive 97-inch OLED TV sees first global release Sony's 2023 A95L QD-OLED TV up for preorder in August starting at $2,800 Other: LEICA CINE 1 THE ART OF HOME CINEMA. Samsung's Stunning 89-Inch MicroLED TV Could Be What's Next After OLED When it comes to the best TVs, one of the biggest challenges is figuring out which display technology is truly the best. One of the contenders? MicroLED, which promises a serious upgrade over existing MiniLED technologies. Full article here… Receivers We Don't Typically Talk About We get questions about receiver recommendations and we typically stay with brands we have experience with. We have never had someone come back at us to complain about a Denon, Yamaha, or Marantz receiver. We run these in our homes every day so we feel confident recommending them to you. But there are other brands with loyal followings that you may want to consider. These brands are typically more expensive and full-featured. Below are the lowest-cost receivers from brands we don't typically talk about. Arcam AVR10 7.2-channel home theater receiver with Bluetooth® and Apple AirPlay® 2 The AVR10 is a high-performance audio/visual receiver that delivers stunning realism for the ultimate home cinema experience. With an impressive 12-channel surround solution and featuring all the latest CODECs from Dolby, DTS, Auro-3D and IMAX Enhanced, the AVR10 exemplifies sound quality and engineering excellence. Audiophile listening experiences are optimised with full 12-channel Dirac calibration on board as well as simple streaming with a mobile device using the native app of choice via Apple AirPlay2 or Google Chromecast. You can find the Arcam AVR10 at Crutchfield for $2200 NAD T 758 V3i A/V Surround Sound Receiver A performance update to our award-winning T 758 A/V Surround Sound Receiver, the T 758 V3i continues NAD's ‘simple is better' design philosophy by delivering a fluid user-friendly experience. From lifelike surround sound performance to heart-thumping power, the T 758 V3i is a true treat for the senses. Employing NAD's proprietary MDC technology, the T 758 V3i is ready for future upgrades and features. With 4K UltraHD video, the T 758 V3i offers a vivid and engaging presentation when it comes to the latest in digital video technology. Complete with AV presets that are yours to customise, the T 758 V3i gives you total control of what you hear and how you see it. The NAD AV Remote iOS app to make your smartphone a remote control is available as a free download. Available at NAD's website for $1699. Emotiva BasX MR1L 9.2 Channel Dolby Atmos® & DTS:X™ Cinema Receiver How long have you been waiting for a receiver that can actually deliver the superb uncompromising performance of separate components? The BasX MR1L cinema receiver combines a high-performance 13.2-channel immersive surround sound processor, and an audiophile-quality 9-channel amplifier, in a single chassis. The processor section of the MR1L supports 4K UHD video, including HDR and Dolby Vision, enhanced ARC (eARC), and the latest Dolby Atmos® and DTS:X™ immersive surround sound formats.
The MR1L features six HDMI 2.0b video inputs, all of which support 4k UHD HDR video, and includes support for enhanced ARC (eARC). Included with the MR1L is a measurement microphone and the latest version of EmoQ, our well-regarded automatic room correction system. The MR1L also offers multiple analog and digital audio inputs, and an integrated Bluetooth receiver with aptX. Available at Emotiva's website for $1599. Anthem MRX 540 8K 5.2-channel home theater receiver with Dolby Atmos®, Wi-Fi®, Bluetooth®, and Apple AirPlay® 2 Anthem's MRX 540 8K receiver is an excellent option for creating a high-performance home theater in a smaller room. It has everything you need — fantastic A/V processing, robust amplification, and exceptional room calibration — without extra channels of power that would go unused. This upscale receiver is an especially good choice if you plan to play premium content through it — like 4K Blu-ray discs or uncompressed music files from a high-resolution library. It even has the latest HDMI technology for 8K video sources, including premium gaming consoles. The MRX 540 8K is engineered to squeeze every drop of detail out of these high-res formats, and that's why it's worth considering over more modestly priced 5.1-channel receivers. Available at Anthem's website for $1900.
I comment on the following headlines: - Nvidia removes some video encoding limitations from consumer GPUs - Image of the massive Intel LGA7529 socket leaks online - Gigabyte claims next-generation Ryzen CPUs will arrive on AM5 this year - Huawei develops design tools for 14 nm chips amid the US ban - Retro TechTuber adds an old PC ISA slot via the modern TPM header SOURCES - https://www.tomshardware.com/news/nvidia-increases-concurrent-nvenc-sessions-on-consumer-gpus - https://www.tomshardware.com/news/detailed-image-of-intels-lga7529-socket-leaks-online - https://www.tomshardware.com/news/amd-ryzen-zen4-next-gen-2023-gigabyte - https://www.tomshardware.com/news/huawei-develops-tools-for-14nm-chips - https://www.tomshardware.com/news/isa-slot-tpm-soundblaster-header-pc --- Send in a voice message: https://podcasters.spotify.com/pod/show/infogonzalez/message
XR Today's Demond Cureton hosts Robert Green, Senior Manager of Pro AV, Broadcast & Consumer, AMD. In our interview, we discuss the following: the latest AMD updates after the Integrated Systems Europe (ISE) 2023 event; ongoing work with Canon and the Versal AI Core chipset for pro-AV sports broadcasts; AMD's High-Throughput JPEG 2000 video codec and democratising solutions for lower-tier clients; and AMD's IPMX partnerships and their contributions to XR. Thanks for watching, and if you'd like to see more content like this, kindly follow us on our Twitter and LinkedIn pages.
Ep. 21 ND Filters and Fuji Codecs This week we discuss Lucas' ND filter research, Fuji codecs, and debut a new segment. Links below to product sites are affiliate links and may result in a commission to the Camera Gear Podcast Pre-show: The Oscars The Batman Cinematography Oscar Noms ND Filters Haida Variable ND Moment Variable ND Polar Pro Variable ND Gerald Undone Variable ND explainer Lucas' Legendary Lens....Labyrinth?...Layer?...Living Room? History of the Noct Nikkor Noct (new) Nikkor 58mm 1.2 review (vintage) Fuji Codecs Supported Film Simulations
SHOW NOTES ►► https://tuxdigital.com/podcasts/this-week-in-linux/twil-217/
On this episode of This Week in Linux: we've got a new release from the WINE project, elementary OS 7 is out, we'll get some previews for the next releases of GNOME and KDE Plasma. Plus we've got some news from openSUSE & Ubuntu and so much more on Your Weekly Source for Linux GNews! […]
Ryan and his team found a quick way of reducing the compute resources spent on encoding videos for Instagram by 94%, but that was actually the easy part. Tune in to learn what the fix was and how you roll out changes that can affect the user experience of billions of users. Got feedback? Send it to us on Twitter (https://twitter.com/metatechpod), Instagram (https://instagram.com/metatechpod) and don't forget to follow our host @passy (https://twitter.com/passy and https://mastodon.social/@passy). Fancy working with us? Check out https://www.metacareers.com/. Links: Reducing Instagram's basic video compute time by 94 percent - Meta Engineering Blog: https://engineering.fb.com/2022/11/04/video-engineering/instagram-video-processing-encoding-reduction/ The Diff: https://thediffpodcast.com/ Unix Signals in Production - Dangers and Pitfalls: https://developers.facebook.com/blog/post/2022/09/27/signals-in-prod-dangers-and-pitfalls/ Introducing Velox: An open source unified execution engine - https://engineering.fb.com/2022/08/31/open-source/velox/ Timestamps: Intro 0:06 Intro Ryan 1:40 Transcoding Video at Instagram 2:52 Codecs and Tradeoffs 5:33 Client Support 7:13 Where did the compute go? 9:15 ABR 10:59 Progressive/Non-ABR Encodings 12:31 Saving Encoding Time 13:10 Testing the Changes 17:39 Results 26:43 Popularity Predictions 28:32 Outro 36:31
With Python 3.11 and Fedora 37 we discuss two long-awaited version updates. The latter was postponed several times because of the infamous CVE-2022-3602 and CVE-2022-3786. A new company around Gitea brings us a fork, while a discussion about blockchain technology and LibreOffice stirs up emotions. Microsoft presents its new Teams PWA, and Lennart Poettering presents Unified Kernel Images, an approach to further secure the boot process.
Feedback / Announcements
Reddit post on Ubuntu ARM64: https://www.reddit.com/r/thinkpad/comments/y9t4ns/thinkpad_x13s_running_arm64_ubuntu/
Tweet about OpenBSD: https://twitter.com/mlarkin2012/status/1541760799533944832
Linaro installation documentation: https://docs.google.com/document/d/1WuxE-42ZeOkKAft5FuUk6C2fonkQ8sqNZ56ZmZ49hGI/mobilebasic#heading=h.d1689esafsky
Twitter is going great: https://twitterisgoinggreat.com/
LNP445 Das Goldkettchen des Internets: https://logbuch-netzpolitik.de/lnp445-das-goldkettchen-des-internets
Proxmox wiki on migrations: https://pve.proxmox.com/wiki/Migration_of_servers_to_Proxmox_VE
Your questions / feedback / voice messages for the Ask Us Anything special in January: podcast@sva.de
CVE-2022-3602 / 3786
https://github.com/colmmacc/CVE-2022-3602
https://linuxnews.de/2022/11/02/openssl-luecke-nicht-so-gravierend-wie-angenommen/
Python 3.11
Phoronix article: https://www.phoronix.com/news/Python-3.11-Released
Release notes: https://www.python.org/downloads/release/python-3110/
Fedora 37
https://linuxnews.de/2022/11/15/fedora-linux-37-mit-gnome-43-und-plasma-5-26/
https://fedoramagazine.org/announcing-fedora-37/
https://gnulinux.ch/mesa-mit-h264-h265-und-vc1-decoding-unter-fedora-37
https://www.heise.de/news/Linux-Distribution-Fedora-37-Viele-Varianten-weniger-Codecs-7340921.html
https://fedoramagazine.org/fedora-linux-37-update/
SHAttered vulnerability: http://shattered.io/
LibreOffice thought about blockchain: https://blog.documentfoundation.org/blog/2022/11/15/libreoffice-and-blockchain-what-cool-things-are-possible/
Gitea Limited / Gitea fork
Announcement from Gitea: https://blog.gitea.io/2022/10/open-source-sustainment-and-the-future-of-gitea/
Open letter to Gitea Limited: https://gitea-open-letter.coding.social/
Response from Lunny Xiao: https://blog.gitea.io/2022/10/a-message-from-lunny-on-gitea-ltd.-and-the-gitea-project/
Gitea fork: https://codeberg.org/forgejo/forgejo
Naming discussion: https://codeberg.org/Forgejo/meta/issues/1
Microsoft Teams PWA announced: https://techcommunity.microsoft.com/t5/microsoft-teams-blog/microsoft-teams-progressive-web-app-now-available-on-linux/ba-p/3669846
Unified Kernel Image
https://linuxnews.de/2022/10/29/ausblick-auf-fedora-linux-38-uki-und-ostree-native-container/
https://wiki.archlinux.org/title/Unified_kernel_image
Short news
Firefox turns 18: https://www.reddit.com/r/linux/comments/yqewhh/long_live_firefox/
Thunderbird Supernova: https://9to5linux.com/thunderbirds-supernova-release-promises-revamped-calendar-ui-firefox-sync-support
AlmaLinux 9.1 released: https://linuxnews.de/2022/11/17/almalinux-os-9-1-mit-php-8-1-veroeffentlicht/
RHEL 9.1 released: https://www.redhat.com/en/blog/rhel-91-now-available
RHEL 8.7 release notes: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/8.7_release_notes/index
Ubuntu 23.04: https://linuxnews.de/2022/11/14/ubuntu-23-04-heisst-lunar-lobster/
Tool tips
Mastodon instance finder: https://instances.social/
Fedifinder: https://fedifinder.glitch.me
twitter-archive-parser: https://github.com/timhutton/twitter-archive-parser
pytest: https://docs.pytest.org/en/7.2.x/
acme.sh: https://acme.sh
Have you ever wondered why your video won't play on your computer, or stutters after you work on it in After Effects? The reason may be your codecs, and beyond that, why should you pay attention to your video's bitrate? That's what I talk about with Anderson Gaveta in this episode, let's go … Continue reading "Codecs e Bitrate – Layers ponto tech #107"
We tried Fedora 37 on the Pi 4, the Google surprise this week, and our thoughts on the WSL 1.0 release.
Why this latest release of Fedora misses the mark, and Ubuntu's quiet backing away from ZFS.
Felix fulfils another educational mission for our editorial team. Canonical releases Ubuntu 22.10 and at the same time annoys the community with advertising for what is actually an interesting new offering. Red Hat publishes two new betas, 8.7 and 9.1, while SUSE gives a first glimpse of the upcoming SLE-next with ALP. Debian decides in favour of its users, and a few anniversaries want to be celebrated. Links for this episode:
Feedback
PCLinuxOS: http://www.pclinuxos.com/
22° is the optimal screen rotation for developers: https://sprocketfox.io/xssfox/2021/12/02/xrandr/
AWK eBook: https://opensource.com/article/20/9/awk-ebook
SUSE ALP
Announcement: https://www.suse.com/c/the-first-prototype-of-adaptable-linux-platform-is-live/
Heise article: https://www.heise.de/news/SUSEs-Server-Zukunft-ALP-Enterprise-Linux-radikal-neu-gedacht-7285958.html
Uyuni 2022.10
Release notes: https://www.uyuni-project.org/doc/2022.10/release-notes-uyuni-server.html
Critical patch: https://lists.opensuse.org/archives/list/announce@lists.uyuni-project.org/thread/K3OOSVLANV3XFMWQV3TGA3EY5VANLJUN/
Patch repository: https://www.uyuni-project.org/pages/patches.html
Ubuntu 22.10
Canonical blog: https://ubuntu.com/blog/canonical-releases-ubuntu-22-10-kinetic-kudu
LinuxNews article: https://www.omgubuntu.co.uk/2022/08/ubuntu-22-10-release-new-features
Debian decides the firmware discussion: https://linuxnews.de/2022/10/firmware-debian-hat-im-sinne-der-anwender-entschieden/
TUXEDO OS
TUXEDO OS 1 released: https://linuxnews.de/2022/09/tuxedo-os-1-zum-download-verfuegbar/
TUXEDO Tomte: https://www.tuxedocomputers.com/de/Infos/Hilfe-Support/Haeufig-gestellte-Fragen/Was-ist-eigentlich-TUXEDO-Tomte-.tuxedo
System76 is working on the new UI framework iced
News article: https://www.phoronix.com/news/COSMIC-Desktop-Iced-Toolkit
Reddit post: https://www.reddit.com/r/pop_os/comments/xs87ed/is_iced_replacing_gtk_apps_for_the_new_cosmic/
GNOME developers make disparaging remarks: https://twitter.com/jeremy_soller/status/1577061838910390272
Criticism of System76: https://blogs.gnome.org/christopherdavis/2021/11/10/system76-how-not-to-collaborate/
RHEL 8.7 and 9.1 beta: https://www.redhat.com/en/blog/top-new-features-red-hat-enterprise-linux-87-and-91-beta
Red Hat's storage teams move to IBM
IBM announcement: https://newsroom.ibm.com/2022-10-04-IBM-Redefines-Hybrid-Cloud-Application-and-Data-Storage-Adding-Red-Hat-Storage-to-IBM-Offerings
Heise article: https://www.heise.de/news/IBM-schnappt-sich-das-Storage-Team-Filetiert-Big-Blue-jetzt-Red-Hat-7286138.html
Ubuntu Pro
Announcement: https://ubuntu.com/blog/ubuntu-pro-beta-release
LinuxNews article: https://linuxnews.de/2022/10/canonical-erweitert-ubuntu-pro/
StackExchange thread about apt advertising: https://askubuntu.com/questions/1434512/how-to-get-rid-of-ubuntu-pro-advertisement-when-updating-apt
decanonical role: https://galaxy.ansible.com/stdevel/decanonical
Framework releases a Chromebook Edition: https://www.phoronix.com/news/Framework-Laptop-Chromebook
Linux distributions remove codecs: https://www.golem.de/news/video-codecs-fedora-deaktiviert-hardware-beschleunigung-wegen-patenten-2209-168588.html
OSAD 2022
Slides: https://osad-munich.org/archiv/eindruecke-vom-osad-2022/
YouTube playlist: https://www.youtube.com/playlist?list=PLMHkNniqa2bhtUDfqbuUUlCSDWl2Qymyx
TuxCare: https://tuxcare.com/
Enterprise Linux Security podcast: https://podcasts.apple.com/ca/podcast/enterprise-linux-security/id1586386560
VirtualBox 7.0
Changelog: https://www.virtualbox.org/wiki/Changelog
News article: https://9to5linux.com/virtualbox-7-0-released-with-dxvk-and-secure-boot-support-full-encryption-and-more
Short news
The CD-ROM turns 40: https://edition.cnn.com/2012/09/28/tech/innovation/compact-disc-turns-30/index.html
4 years of SerenityOS: https://serenityos.org/happy/4th/
5 years of Pop!_OS: https://blog.system76.com/post/celebrating-5-years-of-pop_os
SerenityOS episode: https://ageofdevops.de/index.php/podcast/serenityos/
Asahi Linux comments on Manjaro's Apple Silicon support: https://twitter.com/AsahiLinux/status/1576356115746459648
Tool tips
Hugo: https://gohugo.io
Hugo Themes: https://themes.gohugo.io
Wordpress XML to Markdown: https://github.com/ytechie/wordpress-to-markdown
direnv: https://direnv.net/
Linus Tech Tips blows it again, and we clean up. Plus, we push System76's updated Thelio Workstation to the breaking point.
What the heck is going on? Fedora is dropping features, GNOME is getting Iced, and the mistake we'll never make again. We've got a lot to sort out.
There has been so much crazy stuff going on in the Linux space, from Fedora dropping support for H.264 in its Mesa package to Asahi Lina finally getting actual working GPU drivers running on Linux on the M1 systems, and much, much more. ==========Support The Show========== ► Patreon: https://www.patreon.com/brodierobertson ► Paypal: https://www.paypal.me/BrodieRobertsonVideo ► Amazon USA: https://amzn.to/3d5gykF ► Other Methods: https://cointr.ee/brodierobertson =========Video Platforms==========
Tieline manufactures STL, audio distribution and remote broadcast digital audio codecs. For those outside of the engineering department, that may put a glazed look on the faces of important C-Suite executives. Enter Doug Ferber. Many RBR+TVBR readers know Doug for his role as a media broker, heading Dallas-based DEFcom Advisors, and for his work alongside now-retired Foster Garvey attorney Erwin Krasnow. Since February 2020, radio industry engineers and C-Suiters charged with approving expenses have gotten to know Ferber for his role as VP of Sales for the Americas at Tieline. In this role, Ferber explains in easy terminology the vital role these codecs play for radio in this InFOCUS Podcast, presented by dot.FM, hosted by Editor-in-Chief Adam R Jacobson.
The Faultline Podcast is an audio companion to Rethink Technology Research's Faultline service, a weekly news service that examines the video market – focused on Pay TV, OTT, SVoD, and the technology that supports them. Occasionally, our Rethink TV research wing stops by, to talk about upcoming forecasts and macroeconomic trends we're seeing. Hosted by Alex Davies, Tommy Flanagan, and Rafi Cohen, The Faultline Podcast hits the most important points from the last week's news. If you're in the business world and deal with video content, Faultline is a service you'll want to pay attention to. Find out more at: https://rethinkresearch.biz/product/faultline/ We're on Twitter too: https://twitter.com/_Faultline_ And LinkedIn: https://www.linkedin.com/showcase/faultline/ And YouTube! - https://www.youtube.com/channel/UCgGzAgB9b1I4KiWIQ4gcvAQ
Yurong Jiang LinkedIn profile
LinkedIn Engineering blog post about video conferencing
---------------------------------------------------
Join our LinkedIn Group so that you can get the latest video insider news and participate in the discussion.
Email thevideoinsiders@beamr.com to be a guest on the show.
Learn more about Beamr
Jan Ozer LinkedIn profile
Streaming Learning Center website
Jan Ozer on Streaming Media Magazine
Jan Ozer on OTTVerse
---------------------------------------------------
Join our LinkedIn Group so that you can get the latest video insider news and participate in the discussion.
Email thevideoinsiders@beamr.com to be a guest on the show.
Learn more about Beamr
I'm Dr. Krishna Rao Vijayanagar, and I have worked on Video Compression (AVC, HEVC, MultiView Plus Depth), ABR streaming, and Video Analytics (QoE, ...).
Basic Steps to HLS Packaging using FFmpeg
Resize a Video to Multiple Resolutions using FFmpeg:
ffmpeg -i brooklynsfinest_clip_1080p.mp4 -filter_complex "[0:v]split=3[v1][v2][v3]; [v1]copy[v1out]; [v2]scale=w=1280:h=720[v2out]; [v3]scale=w=640:h=360[v3out]"
Transcode a Video to Multiple Bitrates for HLS Packaging using FFmpeg:
-map [v1out] -c:v:0 libx264 -x264-params "nal-hrd=cbr:force-cfr=1" -b:v:0 5M -maxrate:v:0 5M -minrate:v:0 5M -bufsize:v:0 10M -preset slow -g 48 -sc_threshold 0 -keyint_min 48
-map [v2out] -c:v:1 libx264 -x264-params "nal-hrd=cbr:force-cfr=1" -b:v:1 3M -maxrate:v:1 3M -minrate:v:1 3M -bufsize:v:1 3M -preset slow -g 48 -sc_threshold 0 -keyint_min 48
-map [v3out] -c:v:2 libx264 -x264-params "nal-hrd=cbr:force-cfr=1" -b:v:2 1M -maxrate:v:2 1M -minrate:v:2 1M -bufsize:v:2 1M -preset slow -g 48 -sc_threshold 0 -keyint_min 48
-map a:0 -c:a:0 aac -b:a:0 96k -ac 2 -map a:0 -c:a:1 aac -b:a:1 96k -ac 2 -map a:0 -c:a:2 aac -b:a:2 48k -ac 2
Creating HLS Playlists (m3u8) using FFmpeg:
-f hls -hls_time 2 -hls_playlist_type vod -hls_flags independent_segments -hls_segment_type mpegts -hls_segment_filename stream_%v/data%02d.ts -var_stream_map "v:0,a:0 v:1,a:1 v:2,a:2" stream_%v/stream.m3u8
Create an HLS Master Playlist (m3u8) using FFmpeg:
-master_pl_name master.m3u8
Final Script for HLS Packaging using FFmpeg – VOD:
ffmpeg -i brooklynsfinest_clip_1080p.mp4 -filter_complex "[0:v]split=3[v1][v2][v3]; [v1]copy[v1out]; [v2]scale=w=1280:h=720[v2out]; [v3]scale=w=640:h=360[v3out]" -map [v1out] -c:v:0 libx264 -x264-params "nal-hrd=cbr:force-cfr=1" -b:v:0 5M -maxrate:v:0 5M -minrate:v:0 5M -bufsize:v:0 10M -preset slow -g 48 -sc_threshold 0 -keyint_min 48 -map [v2out] -c:v:1 libx264 -x264-params "nal-hrd=cbr:force-cfr=1" -b:v:1 3M -maxrate:v:1 3M -minrate:v:1 3M -bufsize:v:1 3M -preset slow -g 48 -sc_threshold 0 -keyint_min 48 -map [v3out] -c:v:2 libx264 -x264-params "nal-hrd=cbr:force-cfr=1" -b:v:2 1M -maxrate:v:2 1M -minrate:v:2 1M -bufsize:v:2 1M -preset slow -g 48 -sc_threshold 0 -keyint_min 48 -map a:0 -c:a:0 aac -b:a:0 96k -ac 2 -map a:0 -c:a:1 aac -b:a:1 96k -ac 2 -map a:0 -c:a:2 aac -b:a:2 48k -ac 2 -f hls -hls_time 2 -hls_playlist_type vod -hls_flags independent_segments -hls_segment_type mpegts -hls_segment_filename stream_%v/data%02d.ts -master_pl_name master.m3u8 -var_stream_map "v:0,a:0 v:1,a:1 v:2,a:2" stream_%v.m3u8
Resulting master playlist (first variant shown):
#EXTM3U
#EXT-X-VERSION:6
#EXT-X-STREAM-INF:BANDWIDTH=5605600,RESOLUTION=1920x1080,CODECS="avc1.640032,mp4a.40.2"
stream_0.m3u8
Resulting media playlist (VOD):
#EXTM3U
#EXT-X-VERSION:6
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-INDEPENDENT-SEGMENTS
#EXTINF:2.002000,
data00.ts
#EXTINF:2.002000,
data01.ts
#EXTINF:2.002011,
data02.ts
#EXTINF:2.002000,
data03.ts
#EXTINF:2.002000,
data04.ts
#EXTINF:2.002000,
data05.ts
#EXTINF:2.002000,
data06.ts
#EXTINF:2.002000,
data07.ts
#EXTINF:2.002011,
data08.ts
#EXTINF:2.002000,
data09.ts
#EXTINF:0.041711,
data10.ts
#EXT-X-ENDLIST
Live HLS Packaging using FFmpeg (the media playlist rolls forward as new segments are produced):
#EXTM3U
#EXT-X-VERSION:6
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:1
#EXT-X-INDEPENDENT-SEGMENTS
#EXTINF:2.002000,
data01.ts
#EXTINF:2.002011,
data02.ts
#EXTM3U
#EXT-X-VERSION:6
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:2
#EXT-X-INDEPENDENT-SEGMENTS
#EXTINF:2.002011,
data02.ts
#EXTINF:2.002000,
data03.ts
Other useful HLS Packaging options in FFmpeg. Conclusion.
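If you run the VOD script above, one quick, hedged way to sanity-check the result (assuming ffprobe and ffplay from the same FFmpeg install, and the master.m3u8 the script writes) is:
ffprobe -hide_banner master.m3u8
which should report a video and an audio stream for each variant, or simply:
ffplay master.m3u8
to confirm the ladder actually plays.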
In this audio seminar we devote ourselves entirely to the topic of headsets and everything that goes with them. Which headsets are out there? Which are worth recommending? What are codecs? Which one is the best? What should you look out for with the streaming providers, and what actually makes for good sound? We want to involve you more in the podcast in the future. You have questions for us? Then just ask us. You have questions about our test candidates? You want to share your opinion on one of our topics? Send it over, and if you have suggestions for topics, you can propose them too. The email address is: podcast (a) mobi-test.de
Watch the full video here. Subscribe to the Akitando channel on YouTube! Originally published on 01/10/2021
After the announcement of Apple Music HiFi, there are many doubts about the quality tiers and which devices may or may not be compatible. We decided to do a special on two sound quality tiers, HiFi and Master, and we brought along our colleagues Julio Cesar and Oliver Nabani, plus two friends who also know a great deal about the subject, our friend Luis Cervantes and Diego Villavicencio. We touch on many aspects of audio and a lot of hardware and software: codecs, Bluetooth, compression, and much, much more. It's an episode that audiophiles will surely love, and those who aren't will surely learn a lot when it comes to choosing headphones, a DAC, or some external device. We hope you really enjoy this episode, and we thank all the colleagues and friends for taking part. Greetings, Applelianos. // Links of interest https://drive.google.com/file/d/1MbJkvtwloKoh5z6ZiP_PXtx-2bXUdHel/view https://www.youtube.com/channel/UCJs7N6RMoqx-aTKnq4wpuBw/videos https://drive.google.com/drive/folders/10cfy3PLV_Yvb3knr5OLiphuB18Gp8puC https://t.me/deezer2drivebot https://t.me/tidal_dumps https://t.me/HiResAudiobyPeluche/4420 http://www.2l.no/hires/index.html CLA Mixing Vocals https://youtu.be/HrXy9w6GnDA?t=17 Mastering Pharrell Williams https://youtu.be/Pkgz3pdJBNI Script: https://docs.google.com/document/d/1-5IyZWoOhq7fFlx6EzFO9gbQ44sF5IP1VFL7VIO--18/edit?usp=sharing https://apps.apple.com/es/app/amarra-play/id1061144320 https://www.aliexpress.com/item/1005001407979845.html?spm=a2g0s.8937460.0.0.371a2e0evothKV https://www.aliexpress.com/item/1005001353687655.html?spm=a2g0s.8937460.0.0.1a182e0ewPC4Mw https://www.aliexpress.com/item/32910725335.html?spm=a2g0s.8937460.0.0.1a182e0ewPC4Mw https://www.aliexpress.com/item/4000328408705.html?spm=a2g0s.8937460.0.0.69162e0eodRK1W https://es.creative.com/p/super-x-fi/creative-sxfi-amp https://www.bhphotovideo.com/c/product/1635820-REG/ifi_audio_0311002_n00004_zen_dac_v2.html https://store.hiby.com/products/hiby-fc3 https://www.aliexpress.com/item/32900967118.html?spm=a2g0s.8937460.0.0.69162e0eodRK1W https://www.aliexpress.com/item/4001208562098.html?spm=a2g0s.8937460.0.0.1fc62e0eXLvgqn https://www.aliexpress.com/item/32812032366.html?spm=a2g0s.8937460.0.0.88d42e0eaLYZMT https://mh-maker-help.ueniweb.com/ https://productividaddigitalblogsite.com https://t.me/joinchat/AAAAAExVGdcSnbHeIgLocA // Where to find us 👇🏻 Official Twitch channel https://www.twitch.tv/applelianosdirectos Amazon affiliate http://amzn.to/303LRYD Official Applelianos Telegram group https://t.me/ApplelianosPodcast Official Twitter https://twitter.com/ApplelianosPod Apple Podcasts https://podcasts.apple.com/es/podcast/applelianos-podcast/id993909563 Ivoox https://www.ivoox.com/podcast-applelianos-podcast_sq_f1170563_1.html Spotify https://open.spotify.com/show/2P1alAORWd9CaW7Fws2Fyd?si=6Lj9RFMyTlK8VFwr9LgoOw Youtube https://www.youtube.com/c/ApplelianosApplelianos/featured
Apple have some musical announcements and questions about some of Amazon's delivery tactics. If you're listening on the go, check out munchtech.tv/mobile to find out more about our mobile applications. Enjoy the show? We'd appreciate if you could leave an iTunes rating or review to let us know!
I often read that a video cannot be imported into Cubase. There are various formats and codecs. The tool of my choice is XMedia Recode: https://www.xmedia-recode.de/ Watch the video here: https://youtu.be/y9ejXtVQVnM Questions and suggestions to sounthcast@sounth.de Subscribe to the newsletter: http://eepurl.com/dJaBMD Spotify: https://open.spotify.com/show/5VSIxSdavASHfETN64Ow8I Facebook group: https://www.facebook.com/groups/309751689699537/ If I was able to help you, I'd be happy about a virtual coffee ;-) https://ko-fi.com/timheinrich
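If you would rather do the conversion on the command line instead of in XMedia Recode, a rough ffmpeg equivalent might look like this (a sketch, not the episode's method; the file names are placeholders, and it assumes an ffmpeg build with libx264 and AAC, producing an H.264/AAC MP4 that Cubase's video engine generally accepts):
ffmpeg -i input.avi -c:v libx264 -pix_fmt yuv420p -c:a aac -b:a 192k output.mp4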
Welcome to a brand new replay of our Thursday YouTube live show! This episode is about the diluted market of streaming tv content! New episodes are released every Monday, Wednesday and Friday. The beginning of the month we discuss our staple “What the Hell” tech moments that happened the month prior. We continue through each month discussing tech events and interviewing other content creators or people that are influential in many different tech related fields. Beyond the Streams originated from 2 YouTube content creators NxTLvLTech and Rohas Reviews. After working on a countless number of live streams on the Youtube platform they began to realize that there were many great conversations that would take place “Beyond the Streams". Get Free Stonks with Webull: Webull Canada's best commission free trading platform: Wealthsimple Check out our sponsor here!: https://buzztvglobal.com/ IPVanish VPN Strong VPN Ohmnilabs Contact us: rohasentertainment@gmail.com New Beyond the Streams YouTube channel: Check out the YouTube channel here! Follow NxTLvL here: YouTube - Live Streams Every Friday 3PM EST Twitter Instagram Donate VPN Follow Rohas here: YouTube #1 - Live Streams Every Thursday 7PM EST YouTube #2 YouTube #3 Facebook Instagram Donate VPN
Here's a question: how do you listen to music? Maybe through speakers, or more likely headphones? Do you use a music streaming service, or do you prefer having your own music files? Well, we're not gonna give you FLAC on any of those choices, but maybe we'll clear up what an opus is. (as in, the file format- not the magnum) Sources can be found on our website, as always. -- Website: https://techthoughts.gay Instagram: https://instagram.com/techthoughtspodcast/ Opening Music: Another World by BETTOGH
This week I talk about codecs and why they are important. I talk about the differences between two codecs: Apple helped create one, and Google created the other. So basically it turned into a Google vs Apple war over codecs. I also talk about why I am rooting for one codec over the other.
Links and the full description at santamaedoisoalto.com.br
In today's podcast we cover four crucial cyber and technology topics, including: 1. Liquid cryptocurrency exchange breached, passwords accessed, but assets "accounted for" 2. Malsmoke campaign ramps up against pornography viewers with Adobe pop-up attacks 3. Thousands of fingerprints and ID cards exposed in cloud misconfiguration 4. Newly uncovered Chinese actors dubbed FunnyDream target South East Asia I'd love feedback, feel free to send your comments and feedback to cyberandtechwithmike@gmail.com As a side note/correction: Yesterday was TUESDAY, and I mistakenly recorded as if it were Wednesday. Apologies, as TODAY, the 18th, is actually the MID-point of the work week.
miniBill/elm-codec - JSON codec library
MartinSStewart/elm-codec-bytes
Backwards compatibility
elm-codec's FAQ section on backwards compatibility
Mario Rogic's talk on Elm Evergreen
Mario Rogic's talk on Lamdera
Keeping data in sync
Elm Radio episode on elm-graphql
Two Generals Problem
Elm codec API - string, Bool, object
MartinSStewart/elm-serialize/latest/, encodes in a format that is optimized for compactness
The documentation and links for this show are at http://cpu.pm/0147 . This release is part of the "Digital radio" series. In this release: I take a signal, I filter it high and low, I blow it apart, I scatter it, I fan it out… Our series on digital radio continues. Chapters: Hello to you, Child of the Immediate Future: Too big, it won't go through — (1:28) ♪ Tara King…
Let's go with some tips on what to do after installing Kubuntu 20.04. COMMANDS - RAR support: sudo apt install rar unrar p7zip-full p7zip-rar - Audio and video codecs: sudo apt install kubuntu-restricted-addons kubuntu-restricted-extras ---$$$--- LATEST VARTROY ALBUMS ---$$$--- https://syl.vartroy.com https://whatif.vartroy.com https://daydreams.vartroy.com ---!!!--- NEW ---!!!--- The #Vartroy #Tecnologia #Podcast is available on the main podcast platforms. Search for "Vartroy Cast" on #Spotify, #Google Podcasts, #Apple Podcasts, RadioPublic, Breaker, Stitcher and PocketCasts. --------------- VARTROY TECNOLOGIA --------------- Innovative solutions with Linux, free software and open source - Consulting - IT outsourcing - Support and maintenance - Website development - Translation, revision and adaptation - Interpreting - http://tecnologia.vartroy.com - tecnologia@vartroy.com - Ribeirão Preto / SP --------------- ALSO CHECK OUT --------------- Vartroy Music Project - http://music.vartroy.com - https://www.youtube.com/user/vartroy - https://soundcloud.com/vartroy - https://vartroy.bandcamp.com - https://www.facebook.com/vartroyband Our technology blog - http://tec.vartroy.com.br --------------- FOLLOW US --------------- - Facebook: https://www.facebook.com/vartroytec/ - Twitter: https://twitter.com/GarciaVartroy
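One small, hedged addition to the commands above: on a fresh Kubuntu install it usually helps to refresh the package index before installing anything, for example:
sudo apt update
and only then run the sudo apt install lines from the episode notes.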
Katie McMurran is a sound engineer with over 10 years of experience recording, editing, mixing and mastering audio for public radio and podcasts. She has also provided sound design and original music for theater, film, corporate conferences and art installations. In this episode, we discuss: NPR KQED Podcasts Interviewing for radio gigs CODECS Staying Calm Under Pressure Rates Links and Show Notes: Information is Beautiful https://bit.ly/2WpsWWS Coronavirus COVID-19 Global Cases https://bit.ly/2WkqGR4 Michael Osterholm Interview: https://youtu.be/E3URhJx0NSw Comrex: https://www.comrex.com/ Katie on Linkedin: https://www.linkedin.com/in/katiemcmurran/ Support WCA - Go Ad-Free! https://glow.fm/workingclassaudio/ Connect with Matt on Linkedin: https://www.linkedin.com/in/mattboudreau/ Current sponsors & promos: https://bit.ly/2WmKbFw Working Class Audio Journal: https://amzn.to/2GN67TP Credits: Guest: Katie McMurran Host: Matt Boudreau WCA Theme Music: Cliff Truesdell Announcer: Chuck Smith Editing: Anne-Marie Pleau & Matt Boudreau Additional Music: The License Lab
Join Scott, Damian and special guest Michael Kammes this week for a deep dive into the world of Codecs! In addition to talking about the basics, the guys answer questions submitted from Twitter to help demystify codecs. Enjoy! Make sure to check out ProvideoCoalition.com for more in-depth articles on the topics talked about today. Also, make sure to subscribe to the podcast so you don't miss future episodes!
Home Cinema Streaming We were alerted to an article at Home Theater Review by one of our listeners, Tom Green, regarding home movie streaming. We link to it here: Home Cinema's Streaming Future Is Now. The author states that for reference material they still use UHD Blu-rays, "But for day-to-day viewing, many of us here on staff have migrated almost entirely to streaming consumption." They note that this fact infuriates their readers: And it only takes a quick glance at the comments section here and on our accompanying Facebook page to see that this fact infuriates the most vocal amongst our commentariat. One commenter stated that any given standard-definition DVD from 20 years ago is far superior to Netflix 4K HDR today. The author doesn't even bother replying to this, but we will. This is just plain wrong. Codecs are better, which means you get more effective picture quality out of lower data rates. Streaming data rates can be higher than the highest DVD data rate of 9.5 Mbps, and with better codecs that alone should be enough, but there is more! There is higher resolution; DVD is only 480p. Plus you can get better color and HDR via streaming, which is not even an option on DVD. So no, DVD is not better than Netflix 4K HDR. Then there is the "Well, UHD Blu-ray delivers generally 80 to 100 Mbps or more, and streaming is only 16 Mbps, so that makes UHD Blu-ray six times better." To refute the idea that more bits are always better than fewer bits, the author of the article asks: What's the difference between "1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1" and "1 + 3 x 4"? In reality, nothing. But functionally, the second equation is more efficient. It's also, interestingly, more prone to error. The issue here is that if you are not good at math, you may say the second expression equals 16 rather than 13, because you don't know that multiplication comes before addition. So while it is more efficient, it is also more prone to error. Today we have more powerful processors, so we can use more efficient, and more complicated, codecs. That means with better compression and 10- or 12-bit color, the compressed video of today is vastly superior to that of DVD and approaching the quality of some Blu-rays. "But what about the crappy low-bitrate Dolby Digital+ audio?!" When we talked to Dolby more than 10 years ago, we brought up that many people were saying Dolby Digital+ is indistinguishable from Dolby TrueHD. They would not confirm that this was the case. Their statement was, and we are paraphrasing here because it was so long ago, "Isn't it a good thing that our Dolby Digital+ compression is so good!" In this article, Dolby now goes on record saying that: Dolby has done extensive listening panels and firmly established that, at the bitrates now employed by Netflix, Vudu, and Disney+ (up to 768 kbps), Dolby Digital+ is perceptually transparent. So the audio is as good as UHD Blu-ray! Conclusion Streaming will only get better with advancements in compression. And unlike discs or over-the-air TV, providers like Netflix can roll out changes on their own terms. As new devices come on the market, completely new compression schemes become possible. And the beauty of this system is that the servers can detect what capabilities your streaming box supports and stream the appropriately compressed content. There will come a day, and it will be here sooner rather than later, when you won't be able to tell the difference between streamed and physical content. And that has us very excited.
Click to watch SPIE Future Video Codec Panel Discussion
Related episode with Gary Sullivan at Microsoft: VVC, HEVC & other MPEG codec standards
Interview with MPEG Chairman Leonardo Chiariglione: MPEG Through the Eyes of its Chairman
Learn about FastVDO here
Pankaj Topiwala LinkedIn profile
--------------------------------------
The Video Insiders LinkedIn Group is where thousands of your peers are discussing the latest video technology news and sharing best practices. Click here to join
Would you like to be a guest on the show? Email: thevideoinsiders@beamr.com
Learn more about Beamr
--------------------------------------
TRANSCRIPT:
Pankaj Topiwala: 00:00 With H.265 HEVC in 2013, we were able to do up to 300 to one to 500 to one compression on, let's say, a 4K video. And with VVC we have truly entered a new realm where we can do up to 1000 to one compression, which is three full orders of magnitude reduction of the original size. If the original size is, say, 10 gigabits, we can bring that down to 10 megabits. And that's unbelievable. And so video compression truly is a remarkable technology, and, you know, it's a marvel to look at.
Announcer: 00:39 The Video Insiders is the show that makes sense of all that is happening in the world of online video as seen through the eyes of a second generation codec nerd and a marketing guy who knows what I-frames and macroblocks are. And here are your hosts, Mark Donnigan and Dror Gill.
Dror Gill: 01:11 Today we're going to talk with one of the key figures in the development of video codecs and a true video insider, Pankaj Topiwala. Hello Pankaj, and welcome to The Video Insiders podcast.
Pankaj Topiwala: 01:24 Gentlemen, hello, and thank you very much for this invite. It looks like it's going to be a lot of fun.
Mark Donnigan: 01:31 It is. Thank you for joining, Pankaj.
Dror Gill: 01:33 Yeah, it sure will be a lot of fun. So can you start by telling us a little bit about your experience in codec development?
Pankaj Topiwala: 01:41 Sure. I should say that, unlike a number of the other people that you have interviewed or may interview, my background is a fair bit different. I really came into this field by a back door and almost by chance. My PhD degree is actually in mathematical physics, from 1985, and I actually have no engineering, computer science or even management experience. So naturally I run a small research company working in video compression and analytics, and that makes sense, but that's just the way things go in the modern world. The entry point for me was that even though I was working in very, very abstract mathematics, I decided to leave. I worked in academia for a few years and then I decided to join industry. And at that point they were putting me into applied mathematical research.
Pankaj Topiwala: 02:44 And the topic at that time that was really hot in applied mathematics was the topic of wavelets. And I ended up writing and editing a book called Wavelet Image and Video Compression in 1998, which was a lot of fun, along with quite a few other co-authors on that book. But wavelets had their biggest contribution in the compression of images and video. And so that led me finally to enter this area, and I noticed that video compression was a far larger field than image compression. I mean, by orders of magnitude. It is probably a hundred times bigger in terms of market size than image compression.
And as a result I said, okay, if the sexiest application of this new-fangled mathematics could be in video compression, I entered that field, roughly with the book that I mentioned, in 1998.
Mark Donnigan: 03:47 So one thing that I noticed, Pankaj, because it's really interesting, is that your initial writing and, you know, research was around wavelet compression, and yet you have been very active in ISO MPEG, all block-based codecs. So tell us about that?
Pankaj Topiwala: 04:08 Okay. Well, obviously, you know, when you make the transition from working on wavelets: our initial starting point was in doing wavelet based video compression. When I first founded my company fastVDO in the 1998, 1999 period, we were working on wavelet based video compression, and we pushed that about as much as we could. And at one point we had what we felt was the world's best video compression using wavelets, in fact, but best overall. And one thing that we should tell your listeners is that the value of wavelets, in particular in image coding, is that not only can you do state of the art image coding, but you can make the bitstream what is called embedded, meaning you can chop it off anywhere you like, and it's still a decodable stream.
Pankaj Topiwala: 05:11 And in fact it is the best quality you can get for that bit rate. And that is a powerful, powerful thing you can do in image coding. Now in video, there is actually no way to do that. Video is just so much more complicated, but we did the best we could to make it not embedded, but at least scalable. And we built a scalable wavelet based video codec, which at that time was beating the current implementations of MPEG-4. So we were very excited that we could launch a company based on a proprietary codec that was built on this new-fangled mathematics called wavelets, and it led us to a state of the art codec. The facts on the ground, though, were that just within the first couple of years of running our company, we found that in fact the block-based transform codecs that everybody else was using, including the implementers of MPEG-4...
Pankaj Topiwala: 06:17 And then later AVC, those quickly surpassed anything we could build with wavelets in terms of both quality and stability. The wavelet based codecs were not as powerful or as stable. And I can say quite a bit more about why that's true, if you want?
Dror Gill: 06:38 So when you talk about stability, what exactly are you referring to in a video codec?
Pankaj Topiwala: 06:42 Right. So let's take our listeners back a bit to compare image coding and video coding. Image coding is basically: you're given a set of pixels in a rectangular array, and we normally divide that into sub-blocks of the image, and then do transforms, and then quantization, and then entropy coding; that's how we typically do image coding. With the wavelet transform, we have a global transform; it's ideally done on the entire image.
Pankaj Topiwala: 07:17 And then you could do it multiple times, what are called multiple scales of the wavelet transform. So you could take the various sub-blocks that you create by doing the wavelet transform and the low-pass/high-pass split, and do that again to the low-low pass for multiple scales, typically about four or five scales in the popular image codecs that use wavelets. But now in video, the novelty is that you don't have one frame.
You have many, many frames, hundreds or thousands or more. And you have motion. Now, motion is something where you have pieces of the image that float around from one frame to another, and they float randomly. That is, it's not as if all of the motion is in one direction. Some things move one way, some things move other ways, some things actually change orientation.
Pankaj Topiwala: 08:12 And they really move, of course, in three dimensional space, not in the two dimensional space that we capture. That complicates video compression enormously over image compression, and it particularly complicates all the wavelet methods for video compression. So wavelet methods that try to deal with motion were not very successful. The best we tried to do was using motion compensated transforms, doing wavelet transforms in the time domain as well as the spatial domain, along the paths of motion vectors. But that was not very successful. And what I mean by stability is that as soon as you increase the motion, the codec breaks, whereas in video coding using block-based transforms and block-based motion estimation and compensation, it doesn't break. It just degrades much more gracefully. Wavelet based codecs do not degrade gracefully in that regard.
Pankaj Topiwala: 09:16 And so of course, as a company, we decided, well, if those are the facts on the ground, we're going to go with whichever way video coding is going, drop our initial entry point, namely wavelets, and go with the DCT. Now, one important thing we found was that even in the DCT, ideas we learned in wavelets can be applied right to the DCT. And I don't know if you're familiar with this part of the story, but a wavelet transform can be decomposed using bit shifts and adds only, using something called the lifting transform; at least the important wavelet transforms can. Now, it turns out that the DCT can also be decomposed into lifting transforms using only bit shifts and adds. And that is something that my company developed way back in 1998, actually.
Pankaj Topiwala: 10:18 And we showed that not only for the DCT, but for a large class of transforms called lapped transforms, which included the block transforms but in particular included more powerful transforms. The importance of that in the story of video coding is that up until H.264, all the video codecs, so H.261, MPEG-1, MPEG-2, used a floating point implementation of the discrete cosine transform, and, without requiring anybody to implement a full floating point transform to a very large number of decimal places, what they required was a minimum accuracy relative to the DCT, and that became something that all codecs had to do. If you had an implementation of the DCT, it had to be accurate to the true floating point DCT up to a certain decimal point in the transform accuracy.
Pankaj Topiwala: 11:27 With the advent of H.264, we decided right away that we were not going to do a floating point transform. We were going to do an integer transform. That decision was made even before I joined, before my company joined, the development of H.264 AVC. But they were using 32-point transforms. We found that we could introduce 16-point transforms, half the complexity. And that's half the complexity only in the linear dimension, when you think of it as a spatial dimension. With two spatial dimensions, it actually grows more.
And so the reduction in complexity is not a factor of two, but at least a factor of four, and much more than that; in fact, it's a little closer to exponential. The reality is that we were able to bring the H.264 codec...
Pankaj Topiwala: 12:20 So in fact, the transform was the most complicated part of the entire codec. If you had a 32-point transform, the entire codec was at 32-point technology, and it needed 32 bits at every sample to process in hardware or software. By changing the transform to 16 bits, we were able to bring the entire codec to a 16-bit implementation, which dramatically improved the hardware implementability of this transform, of this entire codec, without at all affecting the quality. So that was an important development that happened with AVC. And since then, we've been working with only integer transforms.
Mark Donnigan: 13:03 This technical history is really amazing to hear. I didn't actually know that. Dror, you probably knew that, but I didn't.
Dror Gill: 13:13 Yeah, I mean, I knew about the transform and the shift from a floating point to an integer transform. But you know, I didn't know that's an incredible contribution, Pankaj.
Pankaj Topiwala: 13:27 We like to say that we've saved the world billions of dollars in hardware implementations. And we've taken a small, you know, a donation as a result of that to survive as a small company.
Dror Gill: 13:40 Yeah, that's great. And then from AVC you moved on and you continued your involvement in the other standards that followed, right?
Pankaj Topiwala: 13:47 In fact, we've been involved in standardization efforts now for almost 20 years. My first meeting was, I recall, in May of 2000; I went to an MPEG meeting in Geneva. And then shortly after that, in July, I went to an ITU VCEG meeting. VCEG is the Video Coding Experts Group of the ITU, and MPEG is the Moving Picture Experts Group of ISO. These two organizations were separately pursuing their own codecs at that time.
Pankaj Topiwala: 14:21 ISO MPEG was working on MPEG-4, and ITU VCEG was working on H.263, and H.263+ and H.263++. And then finally they started a project called H.263L, for "long term." And eventually it became clear to these two organizations that, look, it's silly to work on separate codecs. They had worked together once before, on MPEG-2, to develop a joint standard, and they decided to form a joint team, at that time called the Joint Video Team, JVT, to develop the H.264 AVC video codec, which was finally done in 2003. We participated, you know, fully in that, making many contributions, of course in the transform but also in motion estimation and other aspects. So, for example, it might not be known that we also contributed the fast motion estimation that's now widely used in probably nearly all implementations of 264, but in 265 HEVC as well.
Pankaj Topiwala: 15:38 And we participated in VVC. But one of the important things that we can discuss is that these technologies, although they all have the same overall structure, have become much more complicated in terms of the processing that they do. And we can discuss that to some extent if you want?
Dror Gill: 15:59 The compression factors just keep increasing from generation to generation, and you know, we're wondering, what's the limit of that?
Pankaj Topiwala: 16:07 That's of course a very good question, and let me try to answer some of that.
And in fact that discussion, I don't think, came up in the conversation you had with Gary Sullivan, which it certainly could have, but I don't recall it in that conversation. So let me try to give a little bit of the story for your listeners who did not catch that or are not familiar with it.
Pankaj Topiwala: 16:28 The first international standard was the ITU H.261 standard, dating roughly to 1988, and it was designed to do only about 15 to one to 20 to one compression. And it was used mainly for video conferencing. And at that time, you'd be surprised from our point of view today, the size of the video being used was actually incredibly tiny, about QCIF, or 176 by 144 pixels. Video of that quality was the best we could conceive, and we thought we were doing great. And doing 20 to one compression, wow! Recall, by the way, that if you try to do a lossless compression of any natural signal, whether it's speech or audio or images or video, you can't do better than about two to one, or at most about two and a half to one.
Pankaj Topiwala: 17:25 Typically you cannot even do three to one, and you definitely cannot do 10 to one. So a video codec that could do 20 to one compression was 10 times better than what you could do losslessly. So this is definitely lossy, but lossy with still good enough quality that you can use it. And so we thought we were really good. When MPEG-1 came along in roughly 1992, we were aiming for 25 to one compression, and the application was the video compact disc, the VCD. With H.262, or MPEG-2, roughly 1994, we were looking to do about 30 to 35 to one compression, and the main application was then DVD, or also broadcast television. At that point, broadcast television was ready, at least in some segments,
Pankaj Topiwala: 18:21 to try digital broadcasting. In the United States, that took a while, but in any case it could be used for broadcast television. And then from that point, H.264 AVC in 2003: we jumped right away to more than 100 to one compression. This technology, at least on large format video, can be used to shrink the original size of a video by more than two orders of magnitude, which was absolutely stunning. You know, no other natural signal, not speech, not broadband audio, not images, could be compressed that much and still give you high subjective quality. But video can, because it is so redundant, and because we don't yet fully understand how to appreciate video subjectively. We've been trying things, you know, ad hoc, and so the entire development of video coding has really been by ad hoc methods, to see what quality we can get.
Pankaj Topiwala: 19:27 And by quality we've been using two metrics. One is simply a mean square error based metric called peak signal to noise ratio, or PSNR, and that has been the industry standard for the last 35 years. But the other method is simply to have people look at the video, what we call subjective rating of the video. Now, it's hard to get a subjective rating that's reliable. You have to do a lot of standardization, get a lot of different people, and take mean opinion scores and things like that. That's expensive. Whereas PSNR is something you can calculate on a computer. And so, in the development of video coding over 35 years, people have mostly relied on one objective quality metric called PSNR. And it is good but not great. And it's been known right from the beginning that it was not perfect, not perfectly correlated to video quality, and yet we didn't have anything better anyway.
Pankaj Topiwala: 20:32 To finish the story of the video codecs: with H.265 HEVC in 2013, we were able to do 300 to one, even up to 500 to one, compression on, let's say, 4K content. And with VVC we have truly entered a new realm, where we can do up to 1000 to one compression, which is three full orders of magnitude reduction of the original size. If the original size is, say, 10 gigabits, we can bring that down to 10 megabits. And that's unbelievable. So video compression truly is a remarkable technology, and it's a marvel to look at. Of course, it's not magic. It comes with an awful lot of processing, and an awful lot of smarts have gone into it.

Mark Donnigan: 21:24 That's right. You know, Pankaj, that is an amazing overview, and to hear that VVC is going to give a thousand-to-one compression benefit. Wow. That's incredible!

Pankaj Topiwala: 21:37 I think we should, of course, temper that with what people will actually use in applications. They may not use the full power of VVC and may not crank it to that level. But I can certainly tell you that we, and many other companies, have created bitstreams with 1000 to one or more compression and seen video quality that we thought was usable.

Mark Donnigan: 22:07 One of the topics that has come to light recently and been talked about quite a bit was initially raised by Dave Ronca, who used to lead encoding at Netflix for about 10 years. In fact, I think he really built that department, the encoding team there, and he is now at Facebook. He wrote a LinkedIn post that was really fascinating. What he was pointing out in this post was that, with compression efficiency, as each generation of codec gets more efficient, as you just explained, there's a problem coming with it: each generation of codec is also getting even more complex. In some settings, and I suppose Netflix is maybe an example, it's probably not accurate to say they have unlimited compute, but their application is obviously very different in terms of how they can operate their encoding function compared to someone who's doing live streaming, for example, or live broadcast. Maybe you can share with us, through the generational growth of these codecs, how have the compute requirements also grown? Have they grown in a linear way along with the compression efficiency? Or are you seeing some issues where, yes, we can get a thousand to one, but our compute requirements are getting to where we could be hitting a wall?

Pankaj Topiwala: 23:46 You asked a good question. Has the complexity only scaled linearly with the compression ratio? And the answer is no, not at all. Complexity has outpaced the compression ratio. Even though the compression gain is tremendous, the complexity is much, much higher, and has always been, at every step. First of all, there's a big difference between real products and the research phase in the development of a technology like VVC, where we were using a standardized reference model that the committee develops along the way, which is not at all optimized, but that's what we all use because we share a common code base and make any new proposals based on modifying that code base.
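A rough back-of-the-envelope check on those ratios, using assumed (illustrative) parameters for uncompressed 4K video, shows why figures like "10 gigabits down to 10 megabits" are plausible:

```python
# Assumed parameters, for illustration only: 3840x2160, 8-bit 4:2:0, 60 fps.
width, height, fps = 3840, 2160, 60
bits_per_pixel = 12            # 8-bit 4:2:0 sampling averages 1.5 bytes per pixel
raw_bps = width * height * bits_per_pixel * fps   # roughly 6 Gbps uncompressed

for ratio in (100, 300, 1000):
    print(f"{ratio:>5}:1  ->  {raw_bps / ratio / 1e6:7.1f} Mbps")
# 100:1 lands near 60 Mbps, 300:1 near 20 Mbps, and 1000:1 near 6 Mbps,
# the same ballpark as the figures quoted above.
```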
Now, that code base, along the entire development chain, has always been very, very slow.

Pankaj Topiwala: 24:42 And true implementations are anywhere from 100 to 500 times more efficient in complexity than the reference software. So right away, you can have the reference software for, say, VVC, and somebody developing an implementation that's a real product; it can be at least 100 times more efficient than the reference software, maybe even more. So there's a big difference. When we're developing a technology, it is very hard to predict what implementers will actually come up with later. Of course, the only way they can do that is that companies actually invest the time and energy, right as they're developing the standard, to build prototypes, both software and hardware, and have a good idea that when they finish, they know what it is really going to cost. So just to give you an idea of the difference between H.264 and H.265:

Pankaj Topiwala: 25:38 H.264 only had two transforms, of size four by four and eight by eight. And these were integer transforms, which take only bit shifts and adds, no multiplies and no divides. The division in fact got incorporated into the quantizer, and as a result it was very, very fast. Moreover, if you had to make decisions such as inter versus intra mode, there were only about eight or ten intra modes in H.264. By contrast, in H.265 we have not two transform sizes, four by four and eight by eight, but in fact sizes of four, eight, 16 and 32. So we have much larger transforms, and instead of eight or ten intra modes, we jumped up to 35 intra modes.

Pankaj Topiwala: 26:36 And then with VVC we jumped up to 67 intra modes, and it just became so much more complex. The compression gain between HEVC and VVC is not quite two to one, but let's say 40% better. The complexity increase, however, is not just 40%. Nobody has yet, to my knowledge, built a fully compliant and performant software or hardware video codec for VVC, because it's not even finished; it's going to be finished in July 2020. When the dust finally settles, maybe four or five years from now, it will prove to be at least three or four times more complex than HEVC on the encoder side. The decoder, not that much; luckily, we're able to build decoders whose complexity grows much more linearly than the encoder's.

Pankaj Topiwala: 27:37 So I should qualify this discussion by saying the complexity growth has mostly been in the encoder. The decoder has been much more reasonable. Remember, we are always relying on this principle of ever-increasing compute capability, a factor of two every 18 months. We've long heard about all of this, and it is true: Moore's law. If we did not have that, none of this could have happened. None of these high-complexity codecs would ever have been developed, because nobody would ever be able to implement them. But because of Moore's law, we can confidently say that even if we put out this very highly complex VVC standard, someday, in the not too distant future, people will be able to implement it in hardware. Now, you also asked a very good question earlier: is there a limit to how much we can compress?

Pankaj Topiwala: 28:34 And relatedly, one can ask, is there a limit to Moore's law? And we've heard a lot about that.
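One way to ground the Moore's-law remark: if compute capability doubles roughly every 18 months (the commonly quoted assumption, not a measured figure), then a four-fold jump in encoder complexity is absorbed in about three years. A tiny sketch of that arithmetic:

```python
import math

DOUBLING_PERIOD_YEARS = 1.5   # the "factor of two every 18 months" assumption

def years_to_absorb(complexity_factor):
    """Years of doubling-every-18-months growth needed to absorb a complexity increase."""
    return DOUBLING_PERIOD_YEARS * math.log2(complexity_factor)

for factor in (2, 4, 10):
    print(f"{factor:>3}x more complex  ->  ~{years_to_absorb(factor):.1f} years")
# 2x -> ~1.5 years, 4x -> ~3.0 years, 10x -> ~5.0 years
```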
That may be because, finally, after decades of Moore's law actually being realized, we are now coming to quantum mechanical limits on how much we can miniaturize electronics before we have to go to quantum computing, which is a totally different approach to doing computing, because trying to go to a smaller die size will make it unstable quantum mechanically. It appears that we may eventually be hitting a wall; we haven't hit it yet, but we may be close to a physical limit in die size. And from the observations I've been making, at least, it seems possible to me that we are also reaching a limit to how much we can compress video, even without a complexity limit, and still obtain reasonable or rather high quality.

Pankaj Topiwala: 29:46 But we don't know the answer to that. And in fact, there are many aspects of this that we simply don't know. For example, the only real arbiter of video quality is subjective testing. Nobody has come up with an objective video quality metric that we can rely on. PSNR is not it. When push comes to shove, nobody in this industry actually relies on PSNR; they actually do subjective testing. So in that scenario, we don't know what the limits of visual quality are, because we don't understand human vision. We try, but human vision is so complicated that nobody can understand its impact on video quality to any very significant extent. Now, in fact, the first baby steps to capture, not explicitly but implicitly, subjective human video quality assessment in a neural model are just now being taken, in the last couple of years. We've been involved, my company has been involved, in getting into that, because I think that's a very exciting area.

Dror Gill: 30:57 I tend to agree that modeling human perception with a neural network seems more natural than, you know, regular formulas and algorithms, which are linear. Now, I wanted to ask you about this process of creating the codecs. It's very important to have standards, so you encode a video once and then you can play it anywhere, anytime, on any device. For this, the encoder and decoder need to agree on exactly the format of the video. Traditionally, as you pointed out with all the history of development, video codecs have been developed by standardization bodies, MPEG and ITU, first separately, and then they joined forces to develop the newest video standards. But recently we're seeing another approach to developing codecs, which is open sourcing them.

Dror Gill: 31:58 Google started with an open-source codec they called VP9, which they first developed internally. Then they open sourced it, and they use it widely across their services, especially in YouTube. And then they joined forces with, I think, the largest companies in the world, not just in video but in general, those large internet giants such as Amazon and Facebook and Netflix, and even Microsoft, Apple and Intel, who have joined together in the Alliance for Open Media to jointly create another open codec called AV1. And this is a completely parallel process to the MPEG codec development process. And the question is, do you think this was a one-time effort to try and develop a royalty-free codec, or is this something that will continue?
And how do you think the adoption of the open-source codecs versus the committee-defined codecs will play out in the market?

Pankaj Topiwala: 33:17 That's of course a large topic on its own. And I should mention that there have been a number of discussions about that topic, in particular at the SPIE conference last summer in San Diego, where we had a panel discussion of experts in video compression to discuss exactly that. One of the things we should provide to your listeners is a link to the captured video of that panel discussion, where this topic is discussed to some significant extent; it's on YouTube, so we can provide a link. My answer: of course, none of us knows the future, but we're going to take our best guesses. I believe that this trend will continue, and it is a new factor in the landscape of video compression development.

Pankaj Topiwala: 34:10 But we should also point out that the domain of preponderant use of these codecs is going to be different than for our traditional codecs. Our traditional codecs, such as H.264 and H.265, were initially developed primarily for the broadcast market, or for DVD and Blu-ray, whereas these new codecs from AOM are primarily being developed for the streaming media industry: for the likes of Netflix and Amazon, and for YouTube, where they put up billions of user-generated videos. For the streaming application, the decoder is almost always a software decoder. That means they can update that decoder any time they do a software update, so they're not limited by a hardware development cycle. Of course, hardware companies are also building AV1 implementations.

Pankaj Topiwala: 35:13 And the point of that would be to put it into handheld devices like laptops, tablets and especially smartphones. But to get AV1 not only as a decoder but also as an encoder into a smartphone is going to be quite complicated. The first few codecs that come out in hardware will be of much lower quality, for example comparable to AVC, and not even at the quality of HEVC, when they first start out. So the hardware implementations of AV1 that work in real time are going to take a while to catch up to the quality that AV1 can offer. But for streaming, we can decode these streams reasonably well in software or in firmware, or in GPU for example, and the net result is that these companies can already start streaming.

Pankaj Topiwala: 36:14 So in fact, Google is already streaming some test streams in its cloud-based YouTube application, and companies like Cisco are testing it already, even for their WebEx video communication platform. Although the quality will not be anything like the full capability of AV1, it'll be at a much reduced level, it'll be this open-source and, notionally, royalty-free video codec.

Dror Gill: 36:50 Notionally, yeah. Because they always try to do this dance: every algorithm that they try to put into the standard is scrutinized, and they check if there are any patents around it, so they can try to keep this notion of royalty-free around the codec, because the codec is definitely open source and royalty free.

Dror Gill: 37:14 I think that is a big question. So much IP has gone into the development of the different MPEG standards, and we know it has caused issues.
It went pretty smoothly with AVC, where MPEG LA provided a single point of contact for licensing all the essential patents; with HEVC, that did not go very well in the beginning. But still, there is a lot of IP there. So the question is, is it even possible to have a truly royalty-free codec that can be competitive in compression efficiency and performance with a codec developed by the standards committee?

Pankaj Topiwala: 37:50 I'll give you a two-part answer. One, because of the landscape of patents in the field of video compression, which I would describe as very, very spaghetti-like, patents date back to other patents.

Pankaj Topiwala: 38:09 And they cover most of the topics and most of the tools used in video compression. By the way, we've looked at AV1, and AV1 is not that different from the other standards we have, H.265 or VVC. There are some things that are different, but by and large it resembles the existing standards. So can it be that this animal is totally patent free? No, it cannot be patent free. But patent free is not the same as royalty free. There's no question that AV1 has many, many patents, probably hundreds of patents, that reach into it. The question is whether the people developing and practicing AV1 own all of those patents. That is, of course, a much larger question.

Pankaj Topiwala: 39:07 And in fact, there has been a recent challenge to that: a group has even stood up to proclaim that they have essential IP in AV1. The reaction from the AOM has been to develop a legal defense fund, so that they are not going to budge in terms of their royalty-free model. If they did, it would kill the whole project, because their main thesis is that this is a royalty-free thing: use it and go ahead. Now, the legal defense fund protects the members of that Alliance jointly. It's not as if the Alliance is going to indemnify you against any possible attack on IP; they can't do that, because nobody can predict where somebody's IP is. The world is so large, and there are so many patents, that we're talking not even hundreds and thousands, but tens of thousands of patents at least.

Pankaj Topiwala: 40:08 Nobody in the world has ever reviewed all of those patents; it's not possible. And the net result is that nobody can know for sure what technology might have been patented by third parties. But the point is that such a large number of powerful companies, which are also the main users of this technology, companies like Google and Apple and Microsoft and Netflix and Amazon and Facebook and whatnot, and Samsung, by the way, has joined the Alliance, are so powerful that it would be hard to challenge them. And so, in practice, they can project a royalty-free technology because it would be hard for anybody to challenge it. That's the reality on the ground.

Pankaj Topiwala: 41:03 So at the moment it is succeeding as a royalty-free project. I should also point out that if you want to use this, not join the Alliance, but just be a user, even just to use it, you already have to offer any IP you have in this technology to the Alliance.
So if tens of thousands, and eventually millions, of users around the world, including tens of thousands of companies, start to use this technology, they will all have automatically yielded any IP they have in AV1 to the Alliance.

Dror Gill: 41:44 Wow. That's really fascinating. I mean, first, the distinction you made between royalty free and patent free: the AOM can keep this technology royalty free, even if it's not patent free, because they don't charge royalties, and they can help with the legal defense fund against patent claims and still keep it royalty free. And second is the fact that when you use this technology, you are giving up any IP claims against the creators of the technology, which means that any party who wants to make IP claims against the AV1 encoder cannot use it in any form or shape.

Pankaj Topiwala: 42:25 That's at least my understanding. Of course, I'm not a lawyer, and you have to take that as just the opinion of a video coding expert rather than a lawyer dissecting the legalities of this. But be that as it may, my understanding is that any user would have to yield any IP they have in the standard to the Alliance. And the net result will be that, if this technology truly does get widely used, more IP than just that of the Alliance members will have been folded into it, so that eventually it would be hard for anybody to challenge this.

Mark Donnigan: 43:09 Pankaj, what does this mean for future development? So much of this technology has been enabled by the financial incentive of small, or medium-sized, groups of people forming together, building a company usually, hiring other experts, and being able to derive some economic benefit from the research and the work and the effort that's put in. If all of this consolidates to a handful, or a couple of handfuls, of very, very large companies, I guess I'm asking, from your view, will video encoding technology development and advancement proliferate? Will it stay static, because basically all these companies will hire or acquire all the experts, and now everybody works for Google and Facebook and Netflix? Or do you think it will ultimately decline? Because that's something that comes to mind here: if the economic incentives go away, well, people aren't going to work for free!

Pankaj Topiwala: 44:29 So that's of course another question, and one relevant, in fact, to many of us working in video compression right now, including my company. I faced this directly back in the days of MPEG-2. There was a two-and-a-half-dollar ($2.50) per-unit license fee for using MPEG-2. That created billions of dollars in licensing; in fact, the patent pool administrator, MPEG LA, itself made billions of dollars, even though they took only 10% of the proceeds. Huge amounts of money. With the advent of H.264 AVC, the patent license went from two and a half dollars down to 25 cents a unit. And now, with HEVC, it's a little bit less than that per unit. Of course, the number of units has grown exponentially, but then the big companies don't continue to pay per unit anymore.

Pankaj Topiwala: 45:29 They just pay a yearly cap, for example 5 million or 10 million, which to these big companies is peanuts.
So there's a yearly cap for the big companies that have hundreds of millions of units. Imagine the number of Microsoft Windows installations that are out there, or the number of Google Chrome browsers; if you have a codec embedded in the browser, there are hundreds of millions of them, if not billions. So they just pay a cap and they're done with it. But even then, there was, up till now, an incentive for smart engineers to develop exciting new ideas for future video coding. That has been the story up till now. But if it happens that this AOM model, with AV1 and then AV2, really becomes the dominant codec and takes over the market, then there will be no incentive for researchers to devote any time and energy.

Pankaj Topiwala: 46:32 Certainly my company, for example, can't afford to just twiddle thumbs and create technologies for which there is absolutely no possibility of a royalty stream. We cannot be in the business of developing video coding when video coding doesn't pay. So the only thing that makes money is applications, for example a streaming application or some other such thing. And so Netflix and Google and Amazon will be streaming video, and they'll charge you per stream, but not for the codec. So that's an interesting thing, and it certainly affects the future development of video. It's clear to me it has a negative impact on the research that's been going on. I can't expect that Google and Amazon and Microsoft are going to devote the same energy to developing future compression technologies in their royalty-free environment that companies have in the open standards development environment.

Pankaj Topiwala: 47:34 It's hard for me to believe that they will devote that much energy. They'll devote energy, but it will not be at the same level. For example, developing a video standard such as HEVC took up to 10 years of development by on the order of 500 to 600, let's say 400 to 500, experts from around the world, meeting four times a year for 10 years.

Mark Donnigan: 48:03 That is so critical. I want you to repeat that again.

Pankaj Topiwala: 48:07 Well, very clearly, we've been putting out a video codec roughly on a schedule of once every 10 years. MPEG-2 was 1994. AVC was 2003 and also 2004. And then HEVC in 2013. Those were roughly 10 years apart. With VVC, we've accelerated the schedule to put one out in seven years instead of 10. But even then, you should realize that we have been working on it right since HEVC was done.

Pankaj Topiwala: 48:39 We've been working all this time to develop VVC, and so on the order of 500 experts from around the world have met four times a year, at international locations, spending on the order of $100 million per meeting. So billions of dollars have been spent by industry to create these standards, many billions, and it can't happen without that. It's hard for me to believe that companies like Microsoft, Google and whatnot are going to devote billions to develop their next incremental AV1, AV2 and AV3. But maybe they will. It's just that if there's no royalty stream coming from the codec itself, only from the application, then the incentive, once they start dominating, to create even better technology will not be there. So there really is a financial issue in this, and it's at play right now.

Dror Gill: 49:36 Yeah, I find it really fascinating.
And of course, Mark and I are not lawyers, but all of this: royalty-free versus committee-developed, open source versus a standard, those large companies whose dominance some people fear, not only in video codec development but in many other areas, versus dozens of companies and hundreds of engineers working for seven or ten years on a codec. These are really different approaches, different methods of development, aimed at the exact same problem of video compression. How this turns out we cannot forecast for sure, but it will be very interesting, especially next year, in 2020, when VVC is ratified and, at around the same time, EVC, another codec from the MPEG committee, is ratified.

Dror Gill: 50:43 And then once AV1 starts hitting the market, we'll hear all the discussions of AV2. So it's going to be really interesting and fascinating to follow, and we promise to bring you all the updates here on The Video Insiders. So, Pankaj, I really want to thank you. This has been a fascinating discussion, with very interesting insights into the world of codec development and compression, and wavelets and DCT and all of those topics, and the history and the future. So thank you very much for joining us today on The Video Insiders.

Pankaj Topiwala: 51:25 It's been my pleasure, Mark and Dror, and I look forward to interacting in the future. I hope this is useful for your audience. If I can give you one parting thought, let me give this...

Pankaj Topiwala: 51:40 H.264 AVC was developed in 2003 and also 2004. That is some 16 or 17 years ago, and it is now close to being nearly royalty-free itself. And if you look at the market share of video codecs currently being used, for example even in streaming, AVC dominates that market completely. Even though VP8 and VP9 and VP10 were introduced, and now AV1, none of those have any sizeable market share. AVC currently accounts for 70 to 80% of that marketplace, and it fully dominates broadcast, where those other codecs are not even in play. So there, 16 or 17 years later, it is still the dominant codec, even well ahead of HEVC, which, by the way, has also taken an uptick in the last several years. So the standardized codecs developed by ITU and MPEG are not dead. They may just take a little longer to emerge as dominant forces.

Mark Donnigan: 52:51 That's a great parting thought. Thanks for sharing that. What an engaging episode, Dror. Yeah. Really interesting. I learned so much. I got a DCT primer; that in and of itself was amazing.

Dror Gill: 53:08 Yeah. Yeah. Thank you.

Mark Donnigan: 53:11 Yeah, amazing, Pankaj. Okay, well, good. Well, thanks again for listening to The Video Insiders, and as always, if you would like to come on the show, we would love to have you; just send us an email. The email address is thevideoinsiders@beamr.com, and Dror or myself will follow up with you, and we'd love to hear what you're doing. We're always interested in talking to video experts who are involved in really every area of video distribution, so it's not only encoding and not only codecs; whatever you're doing, tell us about it. And until next time, what do we say, Dror? Happy encoding! Thanks everyone.
Codecs: the elements that let us enjoy video and audio, both on physical digital formats such as DVD or Blu-ray and over the internet. We review the history of the most important codecs, look for the explanation of why iOS does not play 4K video from YouTube (not even on the Apple TV 4K), and trace their evolution up to the present day. We also look to the future and at how the new AV-1 codec is coming to bring peace to the whole industry and create one format to rule them all, even Apple. Check out our listener offers: "Concurrencia en iOS con Swift" on Udemy for $20.99/20.99€. "Swift de lado servidor con Vapor" on Udemy for $69.99/69.99€. "Desarrollo Seguro en iOS con Swift" on Udemy for $124.99/124.99€. "Aprendiendo Swift 5.2" on Udemy for $74.99/74.99€. Apple Coding Academy. Subscribe to Apple Coding on our Patreon. Swift Telegram channel: access the channel. --------------- Get the official Apple Coding t-shirts with the Swift and Apple Coding logos. Apple Coding logo (black shirt, white logo). Swift logo (black shirt, white logo). Swift logo (white shirt, original Swift-color logo). Apple Coding logo (white shirt, black logo). --------------- Follow our channel on YouTube: the Apple Coding YouTube channel. Theme music: "For the Win" by "Two Steps from Hell", composed by Thomas Bergersen. Used under fair use. Listen to it on Apple Music or Spotify.
Food safety is a very important international issue, and few people have been as engaged on this topic as our guest today, Awilo Ochieng Pernet, a senior advisor on international matters related to food safety, nutrition, water and veterinary issues at the Swiss Federal Food Safety and Veterinary Office. About Awilo Ochieng Pernet: Awilo Ochieng Pernet studied law, human nutrition and international food regulatory affairs. She is now a senior advisor on international matters related to food safety, nutrition, water and veterinary issues at the Swiss Federal Food Safety and Veterinary Office. She has also had a distinguished international policy career, particularly with the Codex Alimentarius Commission, the international food standards-setting body that is part of the Food and Agriculture Organization and the World Health Organization of the United Nations. Interview Summary: So you have been involved in the development of international food safety and quality standards for nearly 20 years now. I know that you're very strongly committed to food safety advocacy work to protect the health of consumers and have carried out food safety awareness-raising missions in numerous countries. What are the main challenges that you see related to unsafe food, and who should be involved to ensure that food is safe, thinking all the way from primary production to consumption? Thank you very much for the introduction and for that question. Food safety is an issue of great concern to the entire world's consumers, because we all need to have access to safe and nutritious food every day. According to the Codex Alimentarius Commission, food safety is the assurance that food will not cause harm to the consumer when it is prepared and/or eaten according to its intended use. Now, the issue is that today's food supply system is extremely complex, and it involves a range of different stages, including on-farm production, slaughtering, harvesting, processing, storage, transport and distribution, until the food reaches the consumer. There are so many possibilities for contamination to take place along the entire food chain. According to the World Health Organization, more than 200 diseases are spread by contaminated food. It's incredible. One in 10 people all over the world will fall sick each year after eating contaminated food. The WHO also estimates that 420,000 people die each year as a result of contaminated foods. This is really critical, and it's very serious. It is also estimated that children under five years of age are at particularly high risk. Considering the food chain that I mentioned earlier, we have to ensure that food is safe along the entire food chain. So all the stakeholders, from primary production to consumption, including the consumers ourselves, have to take measures to ensure that food is not contaminated. That's where the challenge is, because there are so many stakeholders and so many opportunities for food to be contaminated along the entire food chain. So everybody has to remain vigilant. They have to apply good hygiene principles and practices to ensure that food is not contaminated. Then, at the end of the chain, we consumers also have to take measures to ensure that we do not contaminate food which we bought safe from the shops or from the market but, due to poor handling practices at home, render unsafe so that it makes us fall sick.
Are there particular countries or regions of the world where there's a special vulnerability to food safety issues, and why might that be? Yes. According to the WHO study that I mentioned earlier, the FERG report, which came out in 2015, there is no area in the world which is, I would say, immune or protected from foodborne diseases. Both developing and developed countries are exposed to contaminated food. However, in that report we clearly stated that the African region and also the South East Asian region bear the heaviest burden of foodborne diseases. But developed countries, for example even here in Switzerland, we also have our challenges. I know that even in the United States of America and other European countries, it is a priority to ensure that food that is sold to consumers is safe. So you see, unsafe food prepared in the home setting, but also in mass catering, can really have very dramatic consequences for the consumer. In Africa and Asia, they really have to improve their hygiene and raise awareness among food handlers and street food vendors. But we also have challenges in developed countries. With food supply chains being so global, has that made the problem more complex? Oh yes, it has. Because food which is produced in one part of the world is shipped and sold all over the world in such a short period. So if it's contaminated at the source, all the areas where it's distributed will be affected. We saw that recently, two or three years ago, with the melamine crisis with the milk from China. We realized with that recall that every region of the world was affected, you see? So the globalization of the food chain and the food supply has really made it more difficult, and it's even faster. But the good thing is that there are measures being taken. There's a lot of collaboration at the international level, information networks and rapid exchange of information, so that food can be recalled before it gets to the plate of the consumer, and in that way they can do some harm reduction. So countries will have their own food safety standards and try to monitor those as carefully as possible. You mentioned a little bit some of the international things that can be done. Who are the major players on this? Codex, certainly, but who are the other players that are important? Indeed, the Codex Alimentarius Commission develops the food standards, which are approved by the 189 members, which I would say is almost the entire world. But from that stage, the members themselves have to take it upon themselves to translate these international standards into national legislation. Then the authorities have to ensure that all the stakeholders along the food chain implement these standards. At that level, it's really the national authorities who have to ensure that the national standards, which now reflect or take into account the Codex standards, are implemented. At the international level, the organizations which coordinate, for example, cases of recall and of food contamination across borders are the Food and Agriculture Organization and the World Health Organization, who have a network called INFOSAN. Codex members and many countries are members of that group.
So if an issue is brought to their attention, they disseminate the information through that network, so that the authorities can then take the necessary measures to prevent the contaminated foods from reaching the consumer. So, during your time as chairperson of the Codex Alimentarius Commission, in 2015, you called for the establishment of a World Food Safety Day. Then, three years later, in 2018, the United Nations General Assembly adopted a resolution designating June 7th as World Food Safety Day. Can you talk about what actions you recommend in order to ensure the goal of raising awareness about food safety is achieved and that food safety becomes a priority around the world? Well, first of all, thank you very much. The adoption of that resolution is a tremendous achievement for the entire world and, above all, for the entire world's consumers, because at the end of the day, we are the ones who are going to consume the food, and we do not want to endanger our health. I would like to say that the main goal is to raise awareness, to raise awareness about the importance of food safety. I'd like all relevant stakeholders to engage, to really take concrete measures to ensure food safety and, above all, to be committed to food safety. We should try to achieve a food safety culture. You see, since there are so many players, we have to address all the players, including the consumers, of course. Many of the things are not in our hands as consumers, because it is the food producers who have to ensure that they produce the food in a safe manner, and then the processors and the transporters and food handlers. So there are many, many stakeholders. Each one has a specific role to play to ensure that the food that they are handling is safe, so that ultimately the consumer's health will not be endangered by the food that the consumer will eat. Now, World Food Safety Day should also help us to maybe demystify food safety. We have achieved the World Food Safety Day, but I would like to say that we are at the beginning of the work now. You know? We need to see how we can spread the knowledge and reach people in their homes, reach people in their kitchens, so that street food vendors know food safety is important and ask, "Am I doing the right thing?" You know? So that we have a food safety culture that is really part of our being. This is what we have to do now: to ensure that, beyond World Food Safety Day each year, we really live food safety and have that food safety culture in us. So you mentioned consumers several times in that previous comment, and it's obvious that they have an important role to play in preventing foodborne illnesses, as well as the producers and suppliers. What steps can consumers take to make sure that they protect themselves and their families from food poisoning? I know from talking to so many consumers that they're interested in food safety, but sometimes they don't have the knowledge or the information to take the necessary steps. Sometimes they're also repeating behaviors that maybe they saw or learned, which are not necessarily the right behaviors to protect themselves. Now, talking about the international level: at the World Health Organization level and also the Codex level, we promote the five keys to safer food. Of course, the first key is cleanliness: keep clean, hygiene, washing your hands before handling food and, of course, often during preparation, and protecting your kitchen areas and food from insects, pests and other animals.
The second key is separating raw from cooked, especially separating raw meat, poultry and seafood from other foods, of course, to avoid cross-contamination. The third key is to cook thoroughly, because we know that undercooked products, especially meat, poultry, eggs and seafood, will cause foodborne diseases. The fourth key is to keep food at safe temperatures. Consumers should not leave cooked food at room temperature for more than two hours; the bacteria, the microorganisms, will grow and multiply, and then the food will be unsafe. We should refrigerate food promptly, and if we remove food from the fridge, we should heat it properly to kill any microorganisms which may be in the food. The fifth key is to use safe water and safe ingredients and raw materials, to ensure that once we have followed all those five steps, we enjoy food which will not endanger our health. So those are the five keys to safer food from the World Health Organization, but I know that different countries have different approaches to consumer education and information. I think what is very crucial is that whatever information is given respects the cultural setup of the target group or the target audience, and also that the materials which are used correspond to the cultural understanding and setup of the target group. What might be appropriate and relevant to a European audience would not be the same for Africa. I'll give you an example. There's a case in Africa where they used a chart illustrating the five keys to safer food. The first key was illustrated simply by a tap with running water and hands under the tap. You see, just that picture. For people in Europe, for people in the US, in developed countries, I mean, this is evident, that is clear. We see taps all over the place. But in a setting in an African village where they do not have running water, they do not have taps, they get water from the borehole, this image does not evoke anything in their minds. These are the things where you have to adapt the resources, the messages, to your target group. Now, of course, washing your hands with soap, they will understand that, but don't illustrate it with the water coming from a tap, because they don't know what that is. The second example was also from Africa, where on one of the charts they said, "Keep food at safe temperatures," and they used a thermometer. But for many people in the village, [inaudible 00:13:03] they only see a thermometer when they go to the hospital, when they have to take their temperature to see whether they have a fever. So this group of people said, "Oh, but if we do all this, we're going to fall sick." They do not understand the message. Well, those are fascinating examples. When people think about food safety issues, they're mainly thinking about things like bacteria and viruses, but food allergies are also an interesting issue. How are these taken into account by the work at Codex, and what's being done around the world? Oh yes. It is an issue which is also very dear to me, because I believe that we need to do more. We need to do more to protect consumers against suffering from these allergies. Sometimes people are aware of the allergy that they have, and sometimes they are not aware of it, and they only learn that they're allergic to a certain food when they're in a very critical condition.
In the Codex Alimentarius, the Codex Committee on Food Labelling developed a list of foods which are acknowledged internationally to cause allergic reactions. However, right now this list is being reviewed, because the international community believes that maybe it is not sufficient to really protect consumers. Sometimes it can be difficult, especially when we talk about international trade, because certain allergies are not taken into account when foods are shipped from one area to another. So that is an area that really has to be improved on. Just one final question; thank you for that description as well. Can you briefly explain how food fraud can affect food safety? Oh, yes. Food fraud, of course, is a criminal activity. We have a very recent example here in Europe. Authorities in Italy and Serbia came across a criminal group which was using contaminated food and rotten raw materials to make food that was being labeled as organic. This is really a clear example of how food fraud can threaten our safety, because a consumer will find a product which is labeled "fresh organic orange juice," but this product, first of all, is not organic, and it's also unsafe, because in this case they used decomposed apples, which were found to be contaminated with mycotoxins and other toxic chemical substances, unsuitable for human consumption and dangerous to public health. So that's how food fraud really impacts our health. As consumers, it's difficult for us to act on that; we have to rely on our authorities.
In today's episode we devote ourselves wholeheartedly to video codecs. We give an overview of the video codecs used in video streaming today and of those lurking around the corner. More players are joining the streaming arena: the German joint venture Joyn is launching internationally, and Japan's Rakuten is entering the game. In this week's streaming news from the past week, we also hear about a new streaming service coming to Sweden in December. Participants: Jonas Rydholm Birmé and Magnus Svensson from Eyevinn Technology. Producer: Jonas Rydholm Birmé. The program is produced by Eyevinn Technology, vendor-independent specialists in video technology and media distribution.
Codecs, PowerApps, and previews. https://auphonic.com/blog/2018/06/01/codec2-podcast-on-floppy-disk/
Is there really any advantage to building your software vs installing the package? We discuss when and why you might want to consider building it yourself. Plus some useful things Mozilla is working on and Cassidy joins us to tell us about elementary OS' big choice. Special Guests: Brent Gervais, Cassidy James Blaede, and Martin Wimpress.
E07: The Video Insiders talk with a pioneering software development company that is at the center of the microservices trend in modern video workflows. Featuring Dom Robinson & Adrian Roe from id3as. Beamr blog: https://blog.beamr.com/2019/02/04/microservices-good-on-a-bad-day-podcast/ Following is an unedited transcript of the episode. Enjoy, but please look past mistakes.

Mark & Dror Intro: 00:00 The Video Insiders is the show that makes sense of all that is happening in the world of online video, as seen through the eyes of a second-generation codec nerd and a marketing guy who knows what I-frames and macroblocks are. And here are your hosts, Mark Donnigan and Dror Gill.

Mark Donnigan: 00:22 Well, welcome back to The Video Insiders. It's so great to be here. Dror, how are you doing?

Dror Gill: 00:29 I'm doing great, and I'm really excited to do another episode of The Video Insiders. I would say this is probably the best part of my day now, doing the podcast. Although watching video all day isn't bad at all.

Mark Donnigan: 00:45 That's not a bad job. I mean, hey, what do you tell your kids?

Dror Gill: 00:49 So, exactly, this is [crosstalk 00:00:52]. I work part-time out of my home office, and my daughter comes in after school and she sees me watching those videos and she says, "Dad, what are you doing?" So I said, I'm watching videos, it's part of my work. I'm checking quality, stuff like that. Then she says, "What? That's your work? You mean they pay you to do that? Where can I get a job like that? You get paid to watch TV."

Dror Gill: 01:18 Now, of course, I'm not laid back on a sofa with some popcorn watching a full-length movie, no. I'm watching the same boring video clip again and again, the same 20- or 30-second segments, and I'm watching it with our player tool, with Beamr View, and typically one half is flipped over, like butterfly mode. And then you're pausing on a frame and you're looking for these tiny differences in artifacts. So it's not exactly like watching TV in the evening, but you get to see stuff, you get to watch content. It's nice, but it can get tiring after a while. But I don't think I'll ever get tired of this podcast, Mark.

Mark Donnigan: 02:04 No, no. I know I won't. And I think our listeners can relate to what you do in your day job watching video. It's a little bit of a curse, because here you are on a Friday night, you want to relax, you just want to enjoy the movie, and what do you see? All of the freaking artifacts. And you're thinking that ABR recipe sure could have been better, because I can see it just switched and it shouldn't have. Anyway, I think we can all relate to that. Enough about us, let's launch into this episode, and I know that we're both super excited. I was thinking about the intro here, and one of the challenges is that all of our guests are awesome, and yet it feels like each guest is the best yet.

Dror Gill: 02:56 Yeah. Really, today we have two of the leading experts on video delivery. I've been running into these guys at various industry events and conferences; they also organize conferences, moderate panels and chair sessions, and really lead the industry in over-the-top delivery and CDNs and all of that. So it's a real pleasure for me to welcome to today's podcast Dom and Adrian from id3as. Hi there!

Adrian Roe: 03:26 Hey, thank you very much.

Dom Robinson: 03:27 Hey guys.

Adrian Roe: 03:27 It's great to be on.

Dom Robinson: 03:28 How are you doing?

Dror Gill: 03:29 Okay.
So, can you tell us a little bit about id3as and the stuff you do there?

Adrian Roe: 03:34 Sure. So, id3as is a specialist media workflow creation company. We build large-scale media systems, almost always dealing with live video, so live events, be that sporting events or financial-services-type announcements, and we specialize in doing so on a very, very large scale and with extremely high service levels. And both of those, I guess, are really crucial in a live arena. You only get one shot at doing a live announcement of any sort, so if you missed the goal because the stream glitched at that point, that's something that's pretty hard to recover from.

Adrian Roe: 04:14 We're passionate about the cloud and how that can help you build some interesting workflows and deliver some interesting levels of scale, and we're primarily constructors. Yeah, we're a software company first and foremost; a couple of the founders have a software background. Dom is one of the original streamers ever, so Dom knows everything there is to know about streaming and the rest of us hang on his coattails, but we have some of the skills to turn that into software that works for our customers.

Dror Gill: 04:46 Really, Dom? So how far back do you go in your streaming history?

Dom Robinson: 04:50 Well, anecdotally I sometimes like to count myself as the second or third webcaster in Europe. And interestingly, actually, one of the people who's slightly ahead of me in the queue is Steve Clee, who works with you guys. So I did the dance around Steve Clee in the mid '90s. So, yeah, it's a good 20, 23 years now I've been streaming [inaudible 00:05:12].

Dror Gill: 05:11 Actually, I mean, we've come a long way, and probably we'll talk a bit about this in today's episode. But first, there's something that really puzzles me, and that is your tagline. The tagline of id3as is "good on a bad day." So, can you tell us a bit more about this? What do you mean by good on a bad day?

Adrian Roe: 05:33 We think it's probably the most important single facet of how your systems behave, especially, again, in a live context. There are hundreds or possibly even thousands of companies out there who can do perfectly good A-to-B video encoding and transcoding and delivery when they're running in the lab. And there are some great tools, open source tools, to enable you to do that, things like FFmpeg and so on. What differentiates a great service from a merely good service, though, is what happens when things go wrong. And especially when you're working at scale, we think it's really important to embrace the fact that things will go wrong. If you have a thousand servers running your x hundred events at any one particular time, every now and then, yeah, one of those servers is going to go up in a puff of smoke. Your network's going to fail, or a power supply is going to blow up, or whatever else it may be.

Adrian Roe: 06:31 And so, what we think differentiates a great service from a merely good one is how well it behaves when things are going wrong or awry. And partly because of the technology we use, and partly because of the background we come from (when we entered the media space as a company, that was about eight years ago; obviously Dom's been in the space forever, but as a company it's been eight years or so), we came to it from exactly that angle of how can we ...
So, our first customer was Nasdaq, delivering financial announcements on a purely cloud-based system, and they needed to be able to deliver SLAs to their customers that were vastly higher than the SLAs you could get from any one particular cloud service or cloud server. And so, how you can deliver a fantastic end-to-end user experience even when things inside your infrastructure are going wrong is, we think, much more important than merely, can you do an A-to-B media chain?

Mark Donnigan: 07:27 That's interesting, Adrian. I know you guys are really focused on microservices, and maybe you can comment about what you've built and why you're so invested in that architecture.

Adrian Roe: 07:39 With both things, there's nothing new in technology. So microservices as a phrase, I guess, has been particularly hot the last, I don't know, three, four years.

Mark Donnigan: 07:49 Oh, it's the buzzy, it's the buzzy word. Dror loves buzzy words.

Dror Gill: 07:54 Microservices, buzz, buzz.

Mark Donnigan: 07:54 There we go. I'm afraid you have to hear the rap, you have to hear his rap. I'm telling you, it's going to be number one on the radio, number one on the charts. It's going to be a hit, it's going to be viral, it's going to be [inaudible 00:08:08].

Adrian Roe: 08:09 So, our approach to microservices, I'm afraid, is grounded in the 1980s, so if we're going to do a rap at that point, I'd need to have big bouffant hair or something in order to do my microservices...

Mark Donnigan: 08:18 And new eyes.

Dom Robinson: 08:21 You left your flares in my house, dude.

Adrian Roe: 08:23 Oh, no, my spare pairs are on, it's okay. Actually, a lot of that thinking comes from the telco space. In a past life I used to build online banks and big-scale systems like that, but one of the things that was interesting when we came to media is, actually, if you've got 500 live events running, that's a big system. The amount of data flowing through it, with all the different bit rates and so on and so forth, is extremely high. Those 500 events might be running on a thousand servers plus in order to give you full-scale redundancy and so on and so forth, and those servers might well be spread across three, four, five different data centers on three, four, five different continents.

Adrian Roe: 09:14 And there are some properly difficult problems to solve in the wider space, rather than specifically in the narrow space of any single element of that workflow. We did some research a while back; we said, actually, other people must have faced some of these challenges before. And in particular, the telco space has faced some of these challenges for a long time, and people got so used to just being able to pick up the phone and have the call go from A to B; the technology by and large works so well that you don't really notice it's there, which is actually another good strapline, I think: technology so good you ignore it. That's what we aspire to.

Adrian Roe: 09:51 So, we came across a technology called Erlang, which takes a whole approach to how you build systems that is different to the traditional one.
As I say, Erlang in itself is not a new technology, and that's one of the things we like about it. The problems that Erlang was trying to solve when it was created back in the '80s were specifically for things like mobile phones, where a mobile phone switch would be a whole bunch of proprietary boards, each of which could handle maybe, I don't know, five or 10 calls or something, and they'd be stuck together in a great big rack with some kind of backplane joining them all together. And the boards themselves were not very reliable, and in order for the telcos to be able to deliver a reliable service using this kind of infrastructure, if any one particular board blew up, the service itself had to continue; it was really important that other calls weren't impacted, and so on and so forth.

Adrian Roe: 10:48 So this language, Erlang, was invented specifically to try and solve that class of problem. Now, what is interesting is, if you then wind the clock forward 20 or 30 years from that particular point and you consider something like the cloud: the cloud is lots and lots of individual computers that on their own aren't particularly powerful and on their own aren't particularly reliable, but they're probably connected together with some kind of LAN or WAN that actually is in pretty good shape.

Adrian Roe: 11:17 And the challenges that back then were particular to the mobile and network space suddenly become incredibly good patterns of behavior for how you can build high-scale, super reliable cloud systems. And, as is always the case with these new shiny technologies, Erlang, for example, had its moment in the sun a while back when WhatsApp was bought by Facebook, because when WhatsApp was bought by Facebook for $18,000,000,000 or whatever it was, I believe that WhatsApp had a total of 30 technical staff, of which only 10 were developers, and they built all of their systems on top of Erlang and got some major advantage from that.

Adrian Roe: 11:57 And so, when we came into the whole media space, we thought that there were some very interesting opportunities that would be presented by adopting those kinds of strategies. And what's nice, then, is where microservices come into that. In Erlang, or the way we build systems, you have lots of single-responsibility, small bits of function, and you gather those bits of function together to make bigger, more complex bits of function, and then you gather those together to make progressively larger-scale and more complex workflows. And what's really nice about that as a strategy is that people are increasingly comfortable with using microservices, where I'll have this to do my packaging and this to do my encoding, and then I'll plug these together and so on and so forth.

Adrian Roe: 12:46 But when your language itself is built in those kinds of terms, it gives you a very consistent way of describing the user experience all the way through your stack. And the sorts of strategies you have for dealing with challenges or problems that are very low level are exactly the same as the strategies you have for dealing with server outages, and so on and so forth. So, it gives you a very consistent way to think about the kinds of problems you're trying to solve and how to go about them.
So basically, we're talking about building a very reliable system out of components where not all of those components are reliable all the time, and inside, those components are made out of further sub-components, which may fail. Adrian Roe: 13:28 Correct, yeah. Dror Gill: 13:29 And then, when you employ a strategy of handling those failures and failing over to different components, you can apply that strategy at all levels of your system, from the very small components to the large servers that do large chunks of work. Adrian Roe: 13:45 I could not have put it better myself, that is exactly right. And, you get some secondary benefits. So, one is, I am strongly of the opinion that when you have systems as large and as complex as the media workflows that we all deal in, there will be issues. Things will go wrong, either because of physical infrastructure failure or just because of the straight complexity of the kinds of challenges you're looking to meet. So, Erlang takes an approach that says let's treat errors as a first-class citizen; let's not try and pretend they're never going to happen, but let's instead have a very, very clear pattern of behavior about how you go about dealing with them, so you can deal with them in a very systematic way. And, if those errors are at a very, very micro level, then the system will probably replace the thing that's gone bad, and do so in well under a fraction of a millisecond. So, you literally don't notice. Adrian Roe: 14:41 We had one particular customer where they had a component that allowed them to patch audio into a live media workflow, and they upgraded their end of that particular system without telling us or going through a test cycle or something, which was kind of disappointing. And, a week or so after their upgrade, we were looking at just some logs from an event somewhere, and they seemed a bit noisier than usual. We couldn't work out why, and the event had been perfect, nothing had gone wrong, and we discovered that they had started to send us messages that broke part of the protocol — they were just incorrectly sending us messages as part of this audio integration that they'd done, and they were just sending us junk. Adrian Roe: 15:24 And, the handler for it at our end was doing what it ought to do in those particular cases, which was crashing and getting itself replaced. But, because we designed the system really well, the handler and the logic for it got replaced; the actual underlying TCP connection, for example, stayed up and there wasn't a problem. And, actually we were having to restart the handler several times a second on a live two-way audio connection, and you literally couldn't hear that it was happening. Mark Donnigan: 15:49 Wow. Adrian Roe: 15:49 Yeah. So yeah, you can get ... But, what's nice is exactly the same strategy and way of thinking about things works right at the other level, where I've got seven data centers and 1,000 or 1,500 servers running and so on and so forth, and it gives you a common and consistent strategy for how you reason about how you're going to behave in order to deliver a service that just keeps on running and running and running even when things go bad.
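A minimal sketch of the "treat errors as first-class citizens" pattern Adrian describes, written in Python rather than Erlang (Erlang's supervisors do this natively); the handler name and message format are invented for illustration, not taken from id3as's system. The point is only that the crashing piece gets replaced while the long-lived resource, standing in for the TCP connection in the anecdote, stays up.

```python
import random

class Connection:
    """Stands in for the underlying TCP connection that survives handler crashes."""
    def __init__(self):
        self.open = True

def audio_handler(message: bytes) -> None:
    # Hypothetical handler: malformed protocol messages make it crash,
    # as in the audio-patching anecdote above.
    if message == b"junk":
        raise ValueError("malformed protocol message")
    # ... process a valid audio packet ...

def supervise(conn: Connection, messages) -> None:
    restarts = 0
    for msg in messages:
        try:
            audio_handler(msg)
        except Exception:
            restarts += 1  # "let it crash": replace the handler state and carry on
    print(f"connection still open: {conn.open}, handler restarts: {restarts}")

if __name__ == "__main__":
    feed = [b"junk" if random.random() < 0.3 else b"audio" for _ in range(1000)]
    supervise(Connection(), feed)
```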
I will give one example, then I'll probably let Dom share some of his views for a second. There was a reasonably famous incident a few years back when Amazon in US East just disappeared off the map for about four days, and a number of very large companies had some really big challenges with that — frankly, some were just offline for four days. Adrian Roe: 16:36 We had 168 servers running in US East at the time for Nasdaq, one of our customers, and we did not get a support call. All of the events that were running on there failed over to other servers that were running, typically, in US West. About five minutes later we were back in a fully resilient setup, because we'd created infrastructure in Tokyo and Dublin and various other data centers, so that if US West had disappeared off the face of the earth as well ... again, we might've got a support call the second time around, but we literally read about it in the papers the next day. Mark Donnigan: 17:06 That's pretty incredible. Are there any other video systems platforms that are architected on Erlang, or are you guys the only ones? Adrian Roe: 17:15 The only other one I am aware of out of the box is a company that specializes more in the CDN and final content delivery side of things, so we're not quite unique, but we are certainly highly, highly unusual. Mark Donnigan: 17:28 Yeah. Yeah. I did want to go to Dom — Dom, with your experience in the industry, I'm curious what you're seeing in terms of how companies are architecting their workflows. Are you getting involved in, I guess, evolutionary projects, that is, you're extending existing platforms and you're in some cases probably shoehorning legacy approaches, solutions, technologies, et cetera, to try and maybe bring them to the cloud or provide some sort of scale or redundancy that they need? Or, are people just re-architecting and building from the ground up? What are people doing out there, and what specifically are your clients doing in terms of- Dom Robinson: 18:20 So, it's interesting. I did a big review of the microservices space for Streaming Media Magazine, which came out I think in the October edition this year, and that generated quite a lot of conversations and panel sessions and so on. We've been approached by broadcasters who have established, working workflows, and they're sometimes quite testy because they've spent a lot of time and they're emotionally quite invested in what they might have spent a decade building and so on. So, they often come with quite testy challenges: what advantages would this bring me? And quite often, there's very little advantage in just making the change for the sake of making the change. The value really comes when you're trying to scale up or take benefit from scaling down. So, with a lot of our financial clients, the cycle of webcasts is, if you like, strongly quarterly — it's all about financial reporting at the end of financial quarters. So, they often want to scale down their infrastructure during the quiet weeks or quiet months because it saves them costs. Dom Robinson: 19:25 Now, if you're doing 24/7 linear broadcasting, the opportunity to scale down may simply never present itself; you just don't have the opportunity to scale down. Scaling up is a different question, but scaling down, if it's 24/7, there's no real advantage to scaling down, and this is true of cloud as much as it is of microservices specifically.
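The region-level version of the same strategy, again only as a hedged sketch and not the actual Nasdaq deployment: when the preferred region stops passing health checks, the event's work moves to the next healthy region on the list. The region names and the check_health() probe are placeholders.

```python
REGIONS = ["us-east-1", "us-west-2", "eu-west-1", "ap-northeast-1"]

def check_health(region: str) -> bool:
    """Placeholder: in practice this would probe the encoders/servers in the region."""
    return region != "us-east-1"          # simulate US East disappearing off the map

def pick_region(preferred: str) -> str:
    candidates = [preferred] + [r for r in REGIONS if r != preferred]
    for region in candidates:
        if check_health(region):
            return region
    raise RuntimeError("no healthy region available")

if __name__ == "__main__":
    print(f"event now running in {pick_region('us-east-1')}")   # -> us-west-2
```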
But, when people come to us and say, right, we really want to make that migration, they sometimes start with the premise that they'd like to take tiny little pieces of the workflow and just migrate those in little tiny incremental steps. In some cases we may do that, but we tend to try to convince them to actually build a microservice architecture or virtualized architecture to run in parallel. So, quite often we might start with the client by proposing that they look at a virtualized disaster recovery strategy in the first instance. And then, what happens is, after the first disaster, they never go back to their old infrastructure. Mark Donnigan: 20:21 I'm sure, yeah. Dom Robinson: 20:22 And after that, they suddenly see they have all the benefits, and it is reliable, and despite the fact that they have no idea where on earth this is physically happening, it's working and it works really reliably. And, when it goes wrong, they can conjure up another one in a matter of seconds or minutes. These benefits are not apparent until the broadcaster actually puts them into use. I spent 20 years trying to convince the broadcast industry that IP was going to be a thing, and then overnight they suddenly embraced it fully; people do have epiphanies, and they suddenly understand the value. Dom Robinson: 20:56 Disaster recovery has been a nice way to make people feel comfortable, because it's not a suggestion of: one day we're going to turn off your trusted, familiar, nailed-down tin and move it all into something you have no idea where it is, what it's running on, how it's running and so on. People are naturally risk averse in taking that type of leap of faith, but once they've done it, they almost invariably see the benefits and so on. So, it's about waiting for the culture in the larger broadcasters to actually place that confidence in, if you like, the internet era — which generally means, if I'm being cynical ... I used to make testy comments on panel sessions about the over-50s or over-60s, I don't know where you want to put your peg in there. Once those guys finally let internet natives take control, that's when the migration happens. Mark Donnigan: 21:48 Yeah, that's interesting. I can remember going back, oh, 10 years or more, and sitting in the Cable Show, which no longer exists, in certain sessions there where Cisco was presenting virtualized network functions. And, the room would always be packed, and you'd have a sense, sitting in these sessions, like, this is really happening. This is, wow, this is really happening — all the biggest MSOs were there, all the people were there, right? And then, you'd come back the next year, it'd be the same talk, the same people in the room, then come back the next year after that and nobody was [crosstalk 00:22:25], because it's the future. Dom Robinson: 22:23 Yeah, absolutely. Dror Gill: 22:28 It was always the future, that's what I was making fun of. Mark Donnigan: 22:30 Now, the switch has absolutely flipped, and we're seeing that even on the codec side, because there was a time where, unless you were internet native as you said, you needed a full solution, a black box. It had to go on a rack, it had to ... That's what they bought. And so, selling a codec alone was a little bit of a challenge, but now they can't use black boxes, and they're ... So. Dom Robinson: 22:58 Sometimes I liken it to the era of hi-fi as digital audio and MP3 started to arrive; I was quite involved in MP3 as it emerged in the mid '90s.
And, over the last two decades I have flip-flopped from being the musician's worst enemy to best friend to worst enemy to best friend, and it just depends on the mood of the day. I was reflecting — and this is a bit of a tangent — but I was reflecting when you guys were talking about watching for artifacts in videos. I've spent so long watching 56K blocky video that Adrian, Nick and Steven, the rest of the team, never ever let me give any opinion on the quality of video, because I'm quite happy watching a 56K video projected on my wall three meters wide and it doesn't bother me, but I'm sure Dror would be banging his head against the wall if he [inaudible 00:23:47] videos. Dror Gill: 23:49 No, I also started with 56K video and RealVideo, and all of those players back in the '90s, but I managed to upgrade myself to SD and then to HD, and now if it's not HDR, it's difficult to view. But in any case, if we look at this transition that is happening, there are several levels to this transition. I mean, first of all, you make the transition from hardware to software, then from software to the cloud, and then from regular software running in the cloud in VMs to this kind of microservices architecture with Docker containers. And, when I talk to customers they say, yeah, we need it as a Docker container, we're going to do everything in Docker. But then, as Mark said, you're not always sure if they're talking about the far future, the near future, the present, and of course it changes if you're talking to the R&D department or you're talking with the people who are actually doing the day-to-day production. Adrian Roe: 24:51 There were some interesting ... And, this may be a slightly unpopular thing to say, but I think Docker is fantastic, and yeah, we use it on a daily basis in development, and it's great — on my laptop I can simulate a cluster of eight servers doing stuff and failing over between them and so on and so forth, and it's fantastic. And, we've had Docker-based solutions in production for four years, five years, certainly a long time, and actually we're starting to move away from Docker as a delivery platform. Dror Gill: 25:22 Really? That's interesting. So, you're in the post-Docker era? Adrian Roe: 25:26 Yes. I think, just as other people are getting very excited that their software can run on Docker — which I always get confused by, announcements like that, because Docker is essentially another layer of virtualization, and strangely enough people first all got excited because their software would run not on a machine but on a virtual machine, and it takes quite a strange software requirement before the software can really even tell the difference between those. And then, you move from a virtual machine to a Docker-type environment. Adrian Roe: 25:52 Yeah. Docker of course is conceptually nothing new — it's a wrapper around something the Linux kernel has been able to do for 10 years or so. And, it gives you certain guarantees about isolation, that one sandbox isn't going to interfere with another sandbox and so on and so forth. And, if those things are useful to you, then absolutely use Docker to solve those business problems.
Adrian Roe: 26:13 And another thing that Docker can do, that again solves a business problem for me when I'm developing, is I can spin up a machine, I can instantiate a whole bunch of stuff, I can create virtual networks between them, and then when I rip it all down my laptop's back in pretty much the same state as it was before I started, and I have some guarantees around that. But especially in a cloud environment, where I've got a premium job coming in of some sort, I'll spin up a server to do that, and probably two servers in different locations to be able to do that. And, they'll do whatever they need to do, and yeah, there'll be some complex network flows and so on and so forth to deliver that. Adrian Roe: 26:48 And then, when that event's finished, what I do is I throw that server in the bin. And so, actually, Docker there typically is just adding an extra abstraction layer, and that abstraction layer comes at a cost — in particular in terms of disk I/O and network I/O — that for high-quality video workflows you want to go into with your eyes open. And so, when it's solving a business problem for you, I think Docker is a fantastic technology, and some very clever people are involved and so on and so forth. I think there's a massive amount of Kool-Aid being drunk around uses of Docker where it's actually adding complexity and essentially no value. Dror Gill: 27:25 So, I would say that if you have, as I said, if you have a business problem — for example, you have Linux and Windows servers, it's a given, you can't change that infrastructure, and then you want to deploy a certain task to certain servers and you want it to work across them seamlessly with those standard interfaces that you mentioned — then Docker could be a good solution. On the other hand, what you're saying is that if I know that my cluster is fully Linux, a certain version of Ubuntu, whatever, because that's how I set it up, there's no advantage in using Docker, because I can plan the workflow or the workload on each one of those servers, and at the level of cloud instances launch and terminate them, and then I don't need Docker. And on the issue of overhead, we haven't seen a very large overhead for Docker — we always compare it to running natively. However, we did find that if your software is structured in a certain way, it can increase the overhead of Docker beyond the average. Dom Robinson: 28:31 Something important that came up in some of the panels at Streaming Media West and Content Delivery World recently on this topic: at the moment people talk synonymously about microservices and Docker, and that's not true. Just because something's running in Docker does not mean you're running a microservices architecture. In fact, if you dig under the ... All too often- Dror Gill: 28:50 Right, it could be one huge, thick server that's just running in Docker. Dom Robinson: 28:54 Exactly. All too often people have simply dropped their monolith into a Docker container and called it a microservice, and that's a ... Well, I won't say it on your podcast, but that's not true. And, I think that's very important, hence we very much describe our own Erlang-based architecture as a microservices architecture. Docker, as Adrian was explaining, is nice to have in certain circumstances — in some it's essential — but in other circumstances it's just not relevant to us. So, it is important to understand that Docker is a type of virtualization and has nothing to do with microservices architecture; it's a very different thing.
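If you want to see the disk I/O cost Adrian mentions for yourself, a rough way to check it is to run the same small benchmark once natively and once inside a container (for example via a plain `docker run` with the script mounted in) and compare the timings. This is only a sanity-check sketch; results depend heavily on the storage driver, volume configuration, and host.

```python
import os
import time

def write_read_benchmark(path: str, size_mb: int = 256) -> float:
    """Write then read back size_mb of data, returning elapsed seconds."""
    chunk = os.urandom(1024 * 1024)             # 1 MiB of random bytes
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())
    with open(path, "rb") as f:
        while f.read(1024 * 1024):
            pass
    os.remove(path)
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"wrote+read 256 MiB in {write_read_benchmark('/tmp/io_test.bin'):.2f}s")
```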
So, well, Adrian might kick me under the virtual table. Adrian Roe: 29:27 No, no, that's all ... Yeah, there's a lot of people who will say if you take an application and you turn it into ... You take a monolithic application and microservice-ize it, and what you have is a monolithic application that's now distributed. So, you've taken a hard problem and made it slightly harder. Dom Robinson: 29:44 Exactly. Adrian Roe: 29:45 So, what's probably more important is that you have good tools and skills and understanding to deal with the kinds of challenges you get in distributed environments. And, actually, understanding your own limitations is interesting there. I think if you look at how one coordinates stuff within a particular OS application, then microservices are a great way of structuring individual applications, and they can cooperate, and they're all in the same space, and you can replace bits of them, and that's cool. And then, if you look at one particular server, again, your microservices architecture there might go, okay, this component is in an unhealthy state, I'm going to replace it with a clean version, and yeah, you can do that in very, very quick time and that's all fantastic. Adrian Roe: 30:33 And then, maybe even if I'm running in some kind of local cluster, I can make similar decisions, but as soon as that cluster spans a network, you have to ask the question: what happens if the network fails? What's the probability of the network failing? And if it does, what impact is that going to have on my service? Because yeah, it's typically just as bad to have two servers trying to deliver the same instance of the same live service as it is to have none, because there'll probably be network floods and all sorts of bad things can happen as a result, so. Adrian Roe: 31:08 And then, if you look at a system that's distributed over more than one data center, then absolutely, just going, oh, I can't see that other service — that microservice that's part of my overall delivery — and making decisions based on that is something you need to do extremely carefully. There's an awful lot of academic work done around consensus algorithms in the presence of network splits and so on and so forth, and it's not until you understand the problem quite well that you actually understand how damned hard the problem is. Your naive understanding of it is, oh, how hard can it be just to have three servers agree on which of them should currently be doing the x, y, z job? It turns out it's really, really, really hard, so you stand on the shoulders of giants, because there's some amazing work done by the academic community over the last few decades — go and leverage the kind of solutions that they've put together to help facilitate that. Dom Robinson: 31:59 I think one of the upsides of Docker, though, is it has subtly changed how dev teams are thinking, and I think it's because it represents the ability to build these isolated processes and think about passing data between processes rather than just sharing data in the way a monolith might have done. I think that has started people architecting in a microservices way. I think people think that that's a Docker thing, but it's not. Docker is more of a catalyst to it than actually bringing about the microservices architecture. Mark Donnigan: 32:33 That's interesting Dom. I was literally just about to make the point, or ask the question even.
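To make the "three servers agreeing on who does the job" point concrete, here is a deliberately naive, for-illustration-only lease sketch. It only works if the lease store itself is strongly consistent and reachable, which is exactly the hard part Adrian is pointing at; real systems lean on consensus algorithms such as Raft or Paxos (for example via etcd or ZooKeeper) rather than rolling their own.

```python
import time

class LeaseStore:
    """Stand-in for a strongly consistent store offering compare-and-set semantics."""
    def __init__(self):
        self.holder = None
        self.expires = 0.0

    def try_acquire(self, node: str, ttl: float) -> bool:
        now = time.time()
        # Grant the lease if it is free, expired, or already held by this node.
        if self.holder is None or now > self.expires or self.holder == node:
            self.holder, self.expires = node, now + ttl
            return True
        return False

if __name__ == "__main__":
    store = LeaseStore()
    winners = [n for n in ("server-a", "server-b", "server-c")
               if store.try_acquire(n, ttl=5.0)]
    print(f"holding the job lease: {winners}")    # only the first node succeeds
```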
I wonder if Docker is the first step towards a truly microservices architecture for a lot of these organizations, and I think Adrian did a great job of breaking down the fact that a lot of maybe what is getting sold or assumed to be microservices really isn't, but in reality it's kind of that next step towards a microservices architecture. And, it sounds like you agree with that. Dom Robinson: 33:09 Yeah, yeah, yeah. I think it's part of the path, but it's a- Mark Donnigan: 33:12 That's right. Dom Robinson: 33:13 Going back to my original statement Doc- Adrian Roe: 33:13 I'm not even sure I'd put it that strongly — it's an available tool in this space. Mark Donnigan: 33:18 It's an available tool, yeah. Adrian Roe: 33:18 You can absolutely build microservices without Docker anywhere near them. Yeah. Mark Donnigan: 33:24 Sure. Absolutely. Yeah. I wasn't saying that Docker's a part of that, but I'm saying if you come from this completely black-box environment, where everything's in a rack, it's in a physical location, the leap to a truly microservices architecture is massive. I mean, it's disruptive on every level. Adrian Roe: 33:46 And, it's a great tool, it's part of that journey. I completely do agree with that. Mark Donnigan: 33:48 Yeah, exactly. Exactly. Well, this leads into a conversation, or a topic, that's really hot in the industry right now, and that's low latency. I was chuckling, I was walking around Streaming Media West just a couple of weeks ago, and I don't think there was one booth — maybe there was one, I just didn't see it, maybe the Panasonic camera booth — that didn't have low latency plastered all over it. But every booth: low latency, low latency. Adrian Roe: 34:16 There's some interesting stuff around low latency, because there's a beautiful reinvention of the wheel happening because, [crosstalk 00:34:28]. Mark Donnigan: 34:29 Well, let's talk about this, because maybe we can pull back a little bit of the, I don't know, the myths that are out there right now. And also, I'd like to have a brief, real honest conversation about what low latency actually means. I think that's one of the things that, again, everybody's head nods: low latency, oh yeah, yeah, yeah, yeah, we want that too. But then you're like, what does it mean? Dror Gill: 34:57 Yeah, everybody wants it. Why do they want it is an interesting question. And, I heard a very interesting theory today, because all the time you hear about this effect of, if you're watching a soccer game and you have a lot of latency because you're viewing it over the internet, and somebody else has cable or satellite and they view it before you, then you hear all those roars of the goal from around the neighborhood, and this annoys the viewer. Dror Gill: 35:25 So, today I heard another theory, that that's not the real problem of latency, because to block those roars you can just isolate your house and put on headphones or whatever. The real problem that I heard today is that, if there's a large latency between when the game actually happens and when you see it, then you cannot affect the result of the game. Okay? So, the theory goes like this: you're sitting at home, you're wearing your team's shirt, you're a fan, and you're sitting in a lucky position that will help your team. So, if the latency is high, then anything you do cannot affect the game because it's too late, but if the latency is low, you'll have some effect over the result of the game.
Adrian Roe: 36:13 When TiVo was brand new and the first personal digital video recorders were a thing, they had this fantastic advert where somebody was watching an American football game. They're in sudden-death overtime and the kicker is just about to attempt a 45-yard kick — if it goes over, they win the game, and if it doesn't, they lose the game. The kicker's just running up towards it, and he hits pause on the live stream, runs off to the church, prays for half an hour, comes back, and it's really good. Dror Gill: 36:47 Oh, so that's the reason for having a high latency. Adrian Roe: 36:55 It's interesting. Our primary business is in broadcast distribution, as in over-the-air type distribution, but we do a bunch of the hybrid TV services, and as part of that we actually have to do the direct hand-off to a bunch of the TVs and set-top boxes and so on and so forth, principally because the TVs and set-top boxes are so appallingly behaved in terms of the extent to which they deal with and follow standards and so on. So, in order to deliver the streams to a Freeview Plus HD TV in the UK, we just deliver them a broadcast-quality transport stream as a progressive download, and this has been live in the field for, I don't know, seven years or something. And entirely without trying to, we have an end-to-end latency of around two seconds between when the viewer in the home sees it on the TV and the original signal coming off the satellite. And nowadays, that would be called super low latency and actually clever and remarkable and so on and so forth. And actually, it's primarily created by the lack of segmentation. Mark Donnigan: 38:01 That's right. Adrian Roe: 38:03 What's happened? You used to have RTMP streams. It depended a little bit on how much buffering you had in the player and so on, but a video workflow based around RTMP would typically have an end-to-end latency of five, six seconds — that was normal, and nobody would really comment on it. And now, suddenly, you have segment-oriented distribution mechanisms like HLS and DASH and all these kinds of things, and people talk about low latency and suddenly they mean five to 10 seconds and so on and so forth. And, that's actually all been driven by the fact that, I think, by and large CDNs hate media, and they want to pretend that all media assets are in fact JPEGs or JavaScript files and so on and so forth. Dror Gill: 38:48 Or webpages. Adrian Roe: 38:49 Exactly. Dror Gill: 38:50 Yeah, small chunks of data, that's what they know how to handle best. Adrian Roe: 38:52 Exactly. And so, the people distributing the content like to treat them as static assets, and they all have their infrastructures built around the very, very efficient delivery of static assets, and that creates high latency. So, you then get technologies like WebRTC, which is emerging, which we use heavily in production. So, one of our customers is a sports broadcaster; their customers can deliver their own live commentary on a stream over WebRTC, and it basically doesn't add any latency to the process, because we'll hand off a low-latency encode of the feed over WebRTC to wherever the commentator is, and the commentator will view the stream and commentate. Adrian Roe: 39:34 In the meantime, we're doing a really high-quality encode. In fact, this might be a mutual customer, but I probably won't say their name on air.
We're doing a really high-quality encode of that same content in the meantime, and by the time we get the audio back from the commentator, we just mix that in with the crowd noise, add it to the video that we've already encoded at that point, and away you go. And, you're pretty much getting live commentary in the system for free in terms of end-to-end latency. Yeah, so then for sports, we should be using WebRTC, we should be in this ... Adrian Roe: 40:05 The problem is, CDNs don't like WebRTC, not least because it's a connection-oriented protocol. You can't just serve the same thing to everybody — you've got to have separate encryption keys, and it's all peer-to-peer and so on and so forth. And so, it doesn't scale using their standard models. And so, most of the discussion around low latency, as far as I can tell, is the extent to which you can pretend that your segmented assets are in fact live streams, and so Akamai has this thing where they'll start playing a segment before it's finished and so on and so forth. Well, actually, it starts to look an awful lot like a progressive download at that point. Mark Donnigan: 40:41 That's a great point, absolutely. And, what I find as I've walked around — like I said, walking around Streaming Media West, and looking at websites, reading marketing material from everybody who has a low-latency solution — with a few exceptions, nobody's addressing the end-to-end factor of it. So, it cracks me up when I see an encoding vendor really touting low latency, low latency, and I'm sitting here thinking, I mean Dror, what are we, like 20 milliseconds? How much lower latency can you get than that? Dror Gill: 41:19 Yeah, at the codec level it is very low. Mark Donnigan: 41:21 Yeah, at the codec level. And then, when you begin to abstract out, of course the process adds time, right? But still, I mean, the point is ... I guess part of what I am reacting to, and what I'm looking for even in your response, is that end to end, yes — but addressing latency end to end is really complicated, because now, just as you said, Adrian, you have to look at the CDN, and you have to look at what you're doing on packaging, and you have to look at even your player architecture, like progressive download: some players can deal with that, great, other players can't. So, what do you do? Dom Robinson: 42:04 So, one of the things, just stepping back and having a reasonably long-game view of the evolution of the industry over here in the UK, and in Europe generally: low latency has been a thing for 15, 20 years. And, the big thing that's changed, and why low latency is all over the global, US-driven press, is the deregulation of the gambling market — that's why everyone's interested in low latency. Over here in the UK, we've had gambling online for live sports for 15, 20 years. And, for everyone ... I used to run a CDN from 2001 to the end of the 2000s, and all the clients were interested in was fast start for advertising and VOD assets, and low latency for betting delivery. And obviously, low latency is important because the lower the latency, the later you can shut your betting gates. And, if you've got ten-second segments and two or three segments to wait, you've got to shut your betting gates maybe a minute or half a minute before the race finishes or before the race starts, whichever way you're doing the betting. Dom Robinson: 43:14 And, that was very important over here.
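Dom's betting-gate arithmetic and Adrian's progressive-download numbers both fall out of the same back-of-envelope sum: the encode itself is tens of milliseconds, so end-to-end latency is dominated by how much content the player has to buffer before it starts. The figures below are illustrative assumptions, not measurements.

```python
def segmented_latency(segment_s: float, buffered_segments: int,
                      encode_s: float = 0.02, network_s: float = 1.0) -> float:
    """Segmented delivery: the player typically waits for several whole segments."""
    return encode_s + segment_s * buffered_segments + network_s

def progressive_latency(player_buffer_s: float,
                        encode_s: float = 0.02, network_s: float = 1.0) -> float:
    """Progressive download / RTMP-style: only a small player buffer to fill."""
    return encode_s + player_buffer_s + network_s

print(f"HLS, 10s segments x3 buffered  : {segmented_latency(10, 3):5.1f} s")
print(f"HLS,  2s segments x3 buffered  : {segmented_latency(2, 3):5.1f} s")
print(f"Progressive download, 1s buffer: {progressive_latency(1.0):5.1f} s")
```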
You didn't have an online gambling market in the States until last year, I believe. And so, low latency just really wasn't very interesting. People were really only interested in 'can I actually deliver reliably to a big audience' rather than 'can I deliver this, even to small audiences, but with low latency, because I've got a betting market going on'. And, as that betting deregulation has come in, suddenly all the US-centric companies have become really fascinated by whether they can shorten that latency and so on and so forth. And, that's why companies 15, 20 years ago over here, some of the big sports broadcasters and so on, were using RTMP extensively, so that they could run their betting gates until the last second, and it really ramps up the amount of betting in those few seconds before the race starts. Dom Robinson: 44:03 So, that's why it's important. It's not for any other reason. In fact, I sometimes rather sourly ask audiences if they have really ever heard their neighbors cheering at a football game before they've seen it themselves, because in a scenario where people are socially gathering around the TV for an important game like that, where your neighbors might have their TV on loud enough, you've frankly got a TV and it's on as well. Dom Robinson: 44:28 The real benchmark of the whole thing is: can you beat the tweet? That's the measurable thing, and there's absurdly little data in a tweet, and a lot of tweets are machine-generated — a goal is scored and it doesn't even take a fan in the stadium to type it and send it to his friends, it's just instantly updated. Trying to beat a few packets of data across the world, compared to trying to compress video, get it buffered, get it distributed across probably two or three stages of workflow, decoded in the player and rendered — you're never going to beat the tweet at that level. So, really the excitement is about betting, the deregulation of the betting market and gambling market. Dror Gill: 45:06 So, that's interesting. Today you don't measure the latency between an over-the-air broadcast and an over-the-internet broadcast; you want to beat another over-the-internet broadcast, which is the very small packets of the tweet. So. Adrian Roe: 45:22 Exactly right. Dror Gill: 45:23 Actually, you're competing with the social networks rather than other broadcast networks. Dom Robinson: 45:26 Exactly. Adrian Roe: 45:28 I can remember — tongue in cheek — when WhatsApp were bought, they were boasting about the number of messages that they dealt with a day, and yeah, it was a very large number, billions of messages a day. And, I remember a little back-of-an-envelope calculation: based on the adage that a picture is worth a thousand words, across all the various different events and channels and live sports and stuff like that we cover, if you counted a thousand words for every frame of video that we delivered, we were two orders of magnitude higher than WhatsApp. Dror Gill: 46:07 So, yeah. So, you had more traffic — in your small company, you had more traffic than WhatsApp. Adrian Roe: 46:11 Yeah. Dror Gill: 46:13 A picture is worth a thousand words, and then you have 25 or 50 pictures every second. And, this is across all of your channels. So, yeah [crosstalk 00:46:22]. Mark Donnigan: 46:21 That's a lot of words. That made me chuckle. Well, this is- Dror Gill: 46:27 We always say video is complicated, and now we know why. Mark Donnigan: 46:32 Exactly.
Well, this has been an amazing discussion, and I think we should bring it to a close with — I'd really like your perspective, Adrian and Dom; you're working with broadcasters and presumably sitting right in the middle of this OTT transition. Dom, I know you mentioned that for 20 years you'd been evangelizing IP, and now finally it's a thing, everybody gets it. But, just curious, maybe you can share with the listeners some trends that you're seeing. How is a traditional broadcaster, or someone who's operating a little more of your traditional infrastructure, et cetera — how are they adopting OTT into their workflows? Are they building parallel workflows? Are some forklifting and making the full IP transition? I think this is a great conversation to end with. Adrian Roe: 47:25 I think we're right at the cusp of exactly that. So, none of our customers are doing it side by side if they are full-blown traditional broadcasters. I think, increasingly, a lot of our customers who maybe deliver exclusively over the internet would also consider themselves broadcasters, and so I think the parlance is perhaps slightly out of date. But one of the things that I think is really interesting is some of the cultural challenges that come out of this. So, one of our customers is a full-blown traditional broadcaster. When you're dealing with fault-tolerant, large-scale systems of the sort that we build, one of the things that's a given is that it's going to be a computer that decides which server is going to be responsible for which particular thing — this is BBC One's encoder, this is ITV's encoder, or whatever. It's going to be a computer that makes those decisions, because a computer can react in milliseconds if one of those services is no longer available and reroute it somewhere else. Adrian Roe: 48:28 And, this wasn't a public cloud implementation, it was a private cloud implementation — they had a couple of racks of servers and data management infrastructure on top that was doing all of the dynamic allocation and fault tolerance and all this clever stuff. And they said, so when we're showing our customers around, if Channel 4 comes around, how can we tell them which is their encoder? And we said, you can't. There isn't a Channel 4 encoder; there's an encoder that might be doing the job. Adrian Roe: 48:55 And, one of the features we had to add to the product, just to get over the cultural hurdle with them, was the concept of a preferred encoder. So, if everything was in its normal happy state, then yeah, this particular encoder, halfway down on the right-hand side of rack three, was going to be the one doing Channel 4. It's just those simple things — people do still think in terms of appliances and raw iron and so on and so forth, and some of the challenge is to move away from that into cloud thinking. Whether it's actually on the cloud or not, cloud thinking still applies. It's funny where people trip up. Dom Robinson: 49:36 One of my bugbears in the industry — I'm a bit of a pedant with some of the terminology that gets used and so on — one of my bugbears is the term OTT. So, having spent a good long while playing with video and audio distribution over IP networks and so on, I struggle to think of any broadcast technology which doesn't use IP at some point in either its production or distribution workflow; there just isn't any now.
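A hedged sketch of the "preferred encoder" idea Adrian describes — assignment stays dynamic, but when the preferred machine is healthy it wins, so the broadcaster can still point visitors at "Channel 4's encoder". This is not the actual product feature; the encoder names and the is_healthy() check are invented for illustration.

```python
def assign_channel(channel, capacity, preferred, is_healthy):
    """capacity: encoder name -> free slots; preferred: channel -> encoder name."""
    hint = preferred.get(channel)
    if hint and is_healthy(hint) and capacity.get(hint, 0) > 0:
        return hint                              # normal happy state: honor the hint
    for name, slots in capacity.items():         # otherwise any healthy encoder will do
        if slots > 0 and is_healthy(name):
            return name
    raise RuntimeError("no healthy encoder available")

if __name__ == "__main__":
    capacity = {"rack3-right-04": 1, "rack1-left-02": 2}
    preferred = {"channel4": "rack3-right-04"}
    healthy = lambda name: name != "rack3-right-04"   # simulate the preferred box failing
    print(assign_channel("channel4", capacity, preferred, healthy))  # -> rack1-left-02
```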
And so, if you're watching live news, the contribution feeds are coming over cell phones, which are contributing over some sort of streaming protocol; or for film or TV program production, people are emailing files, or they're Dropboxing files, or they're sending them through digital asset management systems, or however it may be. Dom Robinson: 50:20 But, the programs are being created using IP, and have been for quite a while, and increasingly nobody replaces technology with some sort of proprietary non-IP-based tool these days at any level in the broadcast industry. I do everything I can to try to avoid using the word OTT. And, being a pedant about it, OTT simply means the paywall is outside of the last-mile access network. That's all it means. It has nothing whatsoever to do with video distribution or streaming or anything like that. It's simply to do with where you take your payment from somebody. Dom Robinson: 50:57 So, Netflix has a hybridized side, but Netflix you generally access through an ISP, and when you make your payment, you pay Netflix directly. You don't pay through your ISP; that is an OTT service. Skype is an OTT service. Again, you connect through your phone service, your cable service, whatever it may be, but you actually subscribe directly with Skype; that is a true OTT service, and that's what OTT means. It's become, in the last eight years, synonymous with streaming, and I can't think of a broadcast network which doesn't at some point use IP — either streaming or file-transfer-based technologies — to compose the program. Dom Robinson: 51:37 So, broadcast is streaming, streaming is broadcast. They have been synonymous for over a decade. It is how you collect the payment which defines something as OTT, and it may well be that you can receive a video stream outside of one particular ISP's network, but that doesn't really mean anything. So, this battle between broadcast and OTT — for me, it's a meaningless distinction about where you're collecting payments. It really doesn't have any bearing on the technologies that we all work with, which are video compression and distribution and so on. So. Mark Donnigan: 52:11 That's brilliant. That is really, really a smart observation and analysis there, Dom. Well, I think we should wrap it up here. We definitely need to do a part two. I think we will have you guys back — there's so much more we could be talking about — but I want to thank our amazing audience; without you, the Video Insiders Podcast would just be Dror and me talking to ourselves. Dror Gill: 52:38 Buzzing to ourselves some buzzy words. Mark Donnigan: 52:40 Buzzy words, buzzing, buzzing, taking up bits on a server somewhere. This has been a production of Beamr Imaging Limited. You can subscribe at thevideoinsiders.com, where you can listen to us on Spotify, on iTunes, on Google Play, and more platforms coming soon. And, if you'd like to try out Beamr codecs in your lab or production environment, we're actually giving away up to 100 hours of HEVC and H.264 encoding every month. Just go to beamr.com/free, that's F-R-E-E, to get started. And until next time, thank you, and have an awesome day encoding video. Speaker 1: 53:30 Thank you for listening to the Video Insiders Podcast, a production of Beamr Limited. To begin using Beamr's codecs today, go to https://beamr.com/free to receive up to 100 hours of no-cost HEVC and H.264 transcoding every month.
In this episode, we catch up with the President & CEO of the company that won the 2018 Frost & Sullivan Global Content Protection Entrepreneurial Company of the Year Award, and we talk about DRM and its ever-changing role in the video industry. Mark Donnigan: 00:00 This episode on DRM was so meaty that we decided to jump right into a nine-minute segment of Dror talking with Christopher Levy, who is President and CEO of BuyDRM, about how DRM technology fragmentation came to be, and the strategies behind DRM as a platform lock-in. After this extended clip, we resume the rest of the interview. You will definitely want to keep listening. Here's Dror and Christopher Levy. Dror Gill: 00:35 This is really an interesting trend you're talking about. On one hand, you have these silos, and the silos include the software platforms, the hardware devices, the content, and the DRM mechanism, which is made by a certain, by a specific, company. Now, some of these companies have interest only in parts of this type of ecosystem. For example, Samsung have devices, they have a software platform, they don't have their own DRM, and they don't have much content of their own. So, now this collaboration with Apple is bringing more content, a lot more content, to Samsung devices, and bringing a lot more devices to Apple's content. We all know, all of you know, the rumors about Apple expanding their content service to be much wider than it is today, so it really makes sense. Dror Gill: 01:37 The topic you raise, of which DRM will be used to enable this collaboration or cross-streaming of content between platforms, is really a very interesting issue. Another point you mentioned, which, you know, I can really resonate with, is the fact that standardization has happened across the video ecosystem in components such as codecs, packaging, container mechanisms, manifests, things like that. And, DRM — although there have been attempts to standardize DRM, there has always been some internal component of that DRM that remained proprietary, that remained part of a closed, siloed ecosystem such as PlayReady and Widevine, and this always struck me as kind of odd: that everything else is standardized, even the, you know, mechanisms of exchanging keys in DRMs and of defining DRM protocols. Dror Gill: 03:08 Everything is standardized but, finally, the key. Those very large companies do not want to give up the key. The key is what they control, and it is the key to opening the content, but also the key to the whole ecosystem and platform, which enables their own platforms to grow. Dror Gill: 03:31 My question is, referring to the fact that you also said that more and more layers or components of DRM are being standardized: do you see, somewhere in our near future, that finally this content protection component will also be fully standardized, and in the same way that we're now having the harmonization of HLS and DASH with CMAF, we'll have harmonization of different DRM systems, and no single company would control this key to the industry? Christopher Levy: 04:10 You make a really good point that, you know, in essence DRM and codecs have had similar kinds of evolutions over time.
If you look specifically at the DRM industry — and not to make a short story long, but to kind of paint a picture of why we're at where we're at — you've got an odd mix of singularities that, it would seem, would leave almost no possibility that there would be a marketplace for DRM where companies would have to pay for it, or that companies would continue to invest in it. Christopher Levy: 04:46 I mean, if you go way back to the beginning of the invention of DRM per se, as we know it, you go back to a meeting between Intertrust and Microsoft in, I think, late 1999, where they agreed they were going to collaborate on some stuff. But then, at some point when Reciprocal launched and decided that they were gonna partner closely with Microsoft, Intertrust made an offer to Microsoft: "Hey, give us two hundred and fifty million dollars and license our technology," and a certain gentleman at Microsoft made the decision with his team to say no. Only to later lose a multi-billion dollar lawsuit to Intertrust, and Bill Gates wrote them a check that then allowed them to go pursue every single company in the world that uses DRM. And so now, you've got Intertrust, who has a DRM, Marlin, that nobody uses in the U.S. — it's only used in China — but Intertrust doesn't have a browser or an operating system. But, they own all the intellectual property around DRM, and so Apple, Microsoft, Google, Samsung, Sony, anyone in the world who touches DRM has had to take a license from Intertrust. Christopher Levy: 06:00 But then Intertrust wasn't able to be successful with their own DRM technology because, as I mentioned, they're locked out when it comes to having a browser or an operating system. So, they have actually somewhat abandoned Marlin and moved to support Google's, Apple's, and Microsoft's DRMs. But then, you look at them and you say, "Okay, what would drive these companies to integrate such that they can be interoperable?" Because that's kind of what we're talking about here: how are Samsung and Apple gonna interop, and how is that gonna help everyone, including HEVC? And what you find out is that, you know, DRM was clearly created — when I say created, I mean when it was commercialized by Apple, Google, and Microsoft — on two kinda bifurcated paths. Christopher Levy: 06:46 One, to satisfy potential looming lawsuits related to record labels, and studios, and artists, and creators, and content owners pointing a finger at these large companies, saying your technology platforms are massive piracy platforms. Secondly, it was done as a platform play, to get you to use the platform. I mean, if we look back at PlayReady, PlayReady was a technology that was completely driven to lock you in to using Windows-based technologies and Microsoft-based technologies. Christopher Levy: 07:16 Now, if you pull that out — if you pull Intertrust and Microsoft completely out of the DRM discussion — and you just look at Apple and Google, who really are driving the entire industry now: they both have been using DRM to date on both those paths, to satisfy the lawyers and to satisfy the lock-in, and that is just where we're at. But now the market has gotten so saturated. Christopher Levy: 07:42 Google has not been successful selling devices. The Google Chromebook is a disaster. The Google Pixel phones are not selling as well as Google would expect they would sell, as the inventor and owner of Android.
So, now you get down to: okay, DRM previously was a legal thing, it was a lock-in thing, but now, what is it? And I think what we're starting to see come to light is that with the movement to Common Encryption by, you know, various different parties, the movement towards CMAF, the movement away from the AES-CTR encryption that was the design in PlayReady to CBC encryption, we're really close to having a CMAF file that, using Common Encryption, would have decryptors for FairPlay, PlayReady, and Widevine. Christopher Levy: 08:37 So, we're getting very close to that. A deal like this, that Apple and Amazon have struck — it really could be the gas to the match. I sense that there's gonna be a push through here; the technology, Apple's FairPlay, has gotten a lot of deployment experience now, so there's a big community contributing back to Apple. Christopher Levy: 08:57 Apple has a very small team — if you knew the number of people working on DRM at Google and Apple, you would be shocked — and yet, they're converging. And, I think the reason they're converging is that, you know, the consumer, in the end, is dictating what they want, and consumers have made it very clear what they want: you know, Samsung smart TVs. They want Apple TVs. They want Android tablets. They want Apple iPhones. Christopher Levy: 09:24 I think both of them now are gonna take a little play out of Steve Jobs' DRM playbook and probably find a way to cross-pollinate their businesses, because Apple's not in the search business, you know. They try and interact in the home marketplace, but Google already owns the home, outside of Alexa. So, it's interesting. You know, to just clearly take one stab at it, I would say that we are headed towards complete interoperability, and that has a lot of benefits. Christopher Levy: 09:57 It benefits operators in cost reductions. It benefits consumers in less confusion and fewer playback stops. But mostly, it's gonna give Google a shot at, you know, exposing their offering to Apple's audience and vice versa. Announcer: 10:15 The Video Insiders is the show that makes sense of all that is happening in the world of online video, as seen through the eyes of a second-generation codec nerd and a marketing guy who knows what I-frames and macroblocks are. Here are your hosts, Mark Donnigan and Dror Gill. Mark Donnigan: 10:36 Let's rejoin the interview with Christopher Levy from BuyDRM. Christopher Levy: 10:41 To kind of just give a quick summary, the company is one of the dark horses of the content protection and DRM business. We have a pretty well-known brand as a company. We have extended our platform out pretty widely in the business. So we have a multi-DRM platform called KeyOS, and we have a couple of different components of it. Christopher Levy: 11:03 We have the encryption tools, we have the licensing tools, and we have the player tools, and we're integrated with about fifty different encoder, server, and player companies in the marketplace. We service some of the major brands that you might be familiar with, like BBC iPlayer, BBC Sounds, Sony Crackle, Showtime OTT, Blizzard, Warner Brothers, and we do a lot of work that we're not really at liberty to discuss. Christopher Levy: 11:30 But we do a lot of pre-release work as well. So, a lot of the focus in the business is on consumer media, but we also have a pretty significant business that's, you know, pre-release. So, digital dailies, screeners, Academy voters. We are very active in the Academy voter space.
We currently host Apple FairPlay certificates for the five largest media companies in the world today, some of which you're familiar with, I'm sure. Christopher Levy: 11:57 To kind of fast-forward, the company is privately owned. We are profitable. We own the company — myself, and the Chairman, Ron Baker, who is my partner in the business — and we have different development teams based around the world. We've got our core team in Riga, Latvia. We have a team in Moscow and a couple of people in St. Petersburg, and then we also have some people in Paris that work on our Android and iOS SDKs, and our CTO is in Vancouver, and the company and myself and the sales, marketing, and management teams are all based in Austin, Texas, and yeah. Christopher Levy: 12:34 Just to fast-forward: late last year, we appeared for the first time ever in the Frost and Sullivan Global Content Protection Report. This report is, you know, kind of a bigger-picture report — what they call content protection includes CAS and DRM — so we are listed in the report with some of the heavyweights like, you know, NAGRA or Irdeto. But we were included in that report, and we ultimately were selected as the entrepreneurial company of the year for our variety of different business models. Christopher Levy: 13:07 You know, we pride ourselves on having a very strong core DRM platform. But, we also now license our technology, so we've expanded into India and all over Europe. We have several large major gaming companies, media companies, that now run our software in their own data center, in their own cloud. So, that kind of vision shift in the company, I think, is what got us over the goal line with the award. Christopher Levy: 13:30 But, we're just, you know, wrapping up one of our best years ever, if not our best year ever, last year. And, we're just kind of waiting to see all the different crazy announcements that come out of CES — you mentioned our team is there on site. But, I'm closely watching the announcements that Apple made about partnering with Samsung and LG, because it creates some very interesting possible synergies that all of us can benefit from. Mark Donnigan: 13:56 Definitely. We're tracking that very closely as well. I mean, let's start there. Christopher Levy: 14:03 Well, you know, the DRM industry at large is very interesting, because it has become a bit of the political third rail of digital media, as I'm sure you all know. At this point, each DRM technology is siloed into a global technology company. So, if you start left to right based on the kind of market availability of the product, you had Microsoft with PlayReady. PlayReady runs in IE and Edge, and on Windows natively. You've got Google with Widevine, which runs in Chrome, primarily on Windows and Android, but also runs on iOS — it's the one technology that runs on all three platforms — and you've got Apple's FairPlay DRM, which really only works in Safari on macOS and Safari on iOS, and it works for tvOS. It will also possibly work on other products, we may find out here soon. Christopher Levy: 15:11 I have to be careful what I say, but to kind of track what's going on: you know, this announcement that Apple made about being able to move their business offering over to other platforms, I think, was largely driven by the tipping point of iPhone sales over the past couple of years.
It's no secret that Apple's last couple of iPhone product lines have not sold that well, so that's created kind of a tipping point in the company where now they're trying to figure out, okay, where do we go next? And clearly Apple has a massive media empire. Christopher Levy: 15:45 They're one of the first companies to ever have a license to just about every song, and movie, and TV show that consumers in America are familiar with. And, they obviously have a globally strong brand. But, because DRM has been a political silo, today, you know, iTunes doesn't appear on Android. It doesn't appear on Tizen. It's not on Roku. It's not on smart TVs. But, that is going to change, and the question is, how will it change? Christopher Levy: 16:13 And to kind of give an example, take a look at Roku, who has gone through a similar transition: they were a streaming puck company, they were a streaming stick company. Then Amazon entered the streaming stick business with Amazon Prime, and Roku then suddenly decides, now it's a content company. But it also wants to get eyeballs and users onto its platform, regardless of the direction it's going, and so Roku had to go, to support YouTube, and work with Google to implement Widevine DRM on the Roku platform — which previously was a PlayReady and Verimatrix platform, natively and solely. And so that model, where Roku kind of stepped over the fence and implemented Google's DRM to get YouTube, is an interesting example of maybe what's going on with Samsung. We don't totally know yet what Samsung and LG are doing, and we have our feelers out, and of course we've talked to Apple pretty extensively about it, because we have a very close relationship with Apple as one of their frontline partners in the industry. Christopher Levy: 17:18 But, I think it plays out one of two ways, and it is somewhat DRM-dependent and codec-dependent, because Apple is either going to allow Samsung to distribute iTunes on their platform — or really, Apple is gonna distribute it, I should say, because it's an open app marketplace. But, Apple has a decision to make, and it's: do they deploy it using Widevine, and reformat their application platform to use Widevine DRM instead of FairPlay, or does Samsung jump the shark and implement FairPlay? Christopher Levy: 17:54 Because at the core of all these DRMs, the encryption and decryption components are almost identical. At this point, all three DRMs use AES-128 encryption. There are various different tweaks there with regards to the encryption mode — CBC versus CTR — but we're starting to see some standardization, as I'm sure you're familiar with, around formats. I personally believe it could go either way, or it could go both ways, because if Samsung were to implement FairPlay on their newer platforms, that would create a whole new synergy between Samsung and Apple that, oddly enough, hasn't been destroyed by the multi-billion dollar IP lawsuits that have gone back and forth between the two of them as vendors and competitors. Christopher Levy: 18:40 But on the other hand, I could see, you know, Apple just wanting to push it out through Widevine, because if they got iTunes to work with Widevine — and I mean iTunes video is what I'm focused on — then the majority of the relatively recently shipped Samsung TVs, more than likely, can all support iTunes. Which would be a kind of cosmic shift in these siloed offerings that all fall back to DRM. Am I right? I mean, Apple's got iTunes on FairPlay.
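A toy illustration — not a DRM implementation — of Christopher's point that the three DRMs share the same AES-128 core and mainly differ in mode: the 'cenc' scheme uses AES-CTR while 'cbcs' uses AES-CBC. It uses the third-party `cryptography` package, and the key, IV, and sample payload are made up.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)                   # a random AES-128 content key
iv = os.urandom(16)
sample = b"0123456789abcdef" * 4       # 64 bytes, a multiple of the AES block size

ctr = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()   # 'cenc'-style mode
cbc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()   # 'cbcs'-style mode

print("CTR ciphertext (first block):", ctr.update(sample)[:16].hex())
print("CBC ciphertext (first block):", cbc.update(sample)[:16].hex())
```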
Google's got Google Play on Widevine. So, it's an interesting thing that's gonna happen. I am very curious myself. Mark Donnigan: 19:21 It does sound like really good news ultimately. It's interesting your observation about, you know, the platform lock in. I'm thinking back to when I was active in the DECE, which became the ultraviolet, you know, which, was really revolutionary at the time. Because, you know, back then, you consume content from a particular store, if that was Vudu, for example. You were locked into Vudu, right. You know if Vudu wasn't on a particular device, then I was also locked in to the devices I could watch it on. Mark Donnigan: 20:00 So,the consumer now is going to enjoy the benefit of this truly, any content, anywhere, on any device, at any time. You know, so, that's all very good things. You know Christopher, I was reading your blog and by the way, listeners should definitely go to the blog, why don't you tell them again, I just I don't recall the actual url. Tell them the address of your blog. Christopher Levy: 20:29 Yeah, it's really simple to remember it's: thedrmblog.com. Mark Donnigan: 20:35 That's it. Thedrmblog.com that's awesome. Yeah, kinda like thevideoinsiders.com, that's right. No, Christopher, I want to get your comment on, I think it's your latest post, where you're talking about HTML5, kind of the App-less approach, and you know, I appreciated the article. Mark Donnigan: 21:01 It was presenting a little bit of the pro's and the con's of, and I think you were doing it in the context of inflight entertainment. And, I know that people, if you're running a video service, if you're Amazon, if you're Netflix, you know, even if you're Vudu, Hulu, whatever, you know, they have to maintain up, hundred, you know, multiple hundreds of different player SDK's. You know, it's incredibly complex. So, the idea that you could perhaps, just scale that way back, and perhaps just go to an HTML5 app, is interesting. So, maybe you can share with the listeners, both, your thoughts, and the pro's and the con's, and give kind of a recap of that blog post. Christopher Levy: 21:48 You bet. And, I mean clearly, that obviously, is also effected by the evolution of Codec, and HEVC and others, but there's this trend, and the in-flight entertainment space is an interesting creature. I've spent the past two years researching this space because previously BuyDRM had a bunch of clients in the space, but they were through third parties. So, you know, we had a business with Lufthansa, Technology Solutions, where they were deploying our technology in Virgin Airlines, LL Airlines, Lufthansa Airlines. Christopher Levy: 22:24 They put the technology on Greyhound buses. Post Bus, which is the largest bus company in Germany, and we also have a little bit of business with companies like Global Eagle, and some others, and we started to look at, you know what's the opportunity for us to enter the space directly. Christopher Levy: 22:41 So, we started going, attending shows, and doing research, talking to people. So, the way that in-flight entertainment systems make it on airplanes is different than you might expect. The airline industry has about four conglomerates that all, kind of, control what you call, you know, in-flight experience. Now, the in- flight experience, you know, the video piece is what we're focused on, but it includes interiors. It includes catering. It includes environment. It includes wifi. It includes being green. 
Entertainment's one component of it, but it's locked in with all these other kind of aspects of the business, and so therefore it's treated in a very, what I would say, very institutional manner. Christopher Levy: 23:19 To date, in-flight systems have been wired, and they're in your headrest, or it's a fold-up screen — if you're in business or first class, it extends out of the little booth you're in — and you're limited to watching videos that are in a dedicated platform that's hardwired onto the plane, and that was the experience. Christopher Levy: 23:38 Then along came satellite. Then along came in-flight wifi, and IFEC — in-flight entertainment and connectivity. The connected version with wires suddenly pivoted to wireless in-flight entertainment overnight, and then DRM became a big topic. But what you started to see DRM really drive was the issue of: do airlines want to maintain premium content apps for their clients so they can watch content? Or do they just want them to open their browser, get on the wifi network, sign in, and then have access to all the content through a browser? Christopher Levy: 24:12 There's this trend in the business where a lot of companies have gone the direction of the browser. So, like, if you get on a Southwest Airlines flight and you want to watch Dish TV live, you know, the implementation is there, on the plane. There's a dish receiver on the top of the plane that's got multiple different LNBs. Each channel is switchable. They've got an encoder on the plane that takes the MPEG transport stream coming down over the dish, converts it, encrypts it, and shoots it out of a server on the plane to your browser. And that's easy, and it's fun. And it works, and it's especially effective for live TV. Christopher Levy: 24:47 Stepping away from that, when you start to talk about doing things that are more efficient — and I think where consumers are headed, which is downloads, offline playback, bring your own device — the browser kind of starts to die, because it doesn't work offline well. It doesn't do downloads well, because each browser has a protected, limited amount of storage on the device it's running on, for security reasons. And in browsers, the implementation — most players in them — is not that efficient, and so what you find is that the browser is quicker, it's faster, it's dirtier, it's cheaper, but it opens up the door for a bunch of fails on the consumer side. Which is: decreased battery life; being forced to use streaming, which uses the wifi radio, which again means decreased battery life; and increased overhead on the aircraft. Christopher Levy: 25:43 You don't get offline playback or download, so you can't download a stream and play it in a browser effectively offline. And lastly, consumers are very comfortable with their devices. Like, if I'm given the option of watching my 10.5-inch iPad Pro with my Bose QC35 II headphones, I'm gonna pick that every time over plugging some crappy, hand-wiped headphones that hardly fit, that sound terrible, into a jack that's crackling, so I can watch a screen that has a four-inch-thick screen protector on it. Christopher Levy: 26:20 The airlines are trying to figure out, okay, well, what do we do? Because we're not OTT operators, but how do we make clients happy? And so they're caught in a dilemma right now. Now, you know, I see it going two ways. I frankly think live TV will continue to be in the browser.
Remember, DRM adds some overhead because you've got to decrypt the content, and that adds some CPU overhead and therefore decreases battery life. Christopher Levy: 26:44 When you move to an app — I think apps are gonna be a lot more prevalent for VOD content, time-shifted viewing, and TV viewing. The last thing that's going on, that the airlines don't totally understand, and I've spent a lot of time trying to educate them about — and this is kind of a tangential issue, but I'm sure you can understand — is that every single passenger that's on an airplane, more than likely, and I'd say with ninety percent or higher likelihood, especially on international and business flights, has a Netflix, iTunes, Google Play, or Hulu account. And now, with DRM, they can download all the movies and TV shows they want to their device, and just get on the plane, and have every blockbuster, every TV show, every highlight, every documentary, every podcast that they want to access on their own device, and use it in their own way, in their own time, in their own comfort. Christopher Levy: 27:42 So, that's kind of the big divide right now: companies are trying to figure out, well, we can save time and money on not having to build IFE apps and just go to the browser, but we lose a bunch of things that consumers might want. There are a couple of other things which are also driving that, and those are accessibility issues, which I think will force a lot of companies to maintain apps. Christopher Levy: 28:03 First is accessibility on devices. You know, iPhone and Android have different functions for people that have disabilities or motor challenges and aren't, you know, able to use the device the same way they would use an IFE platform, where they've got to touch the screen in front of them, reach up, et cetera, et cetera. Christopher Levy: 28:24 Secondly is multi-track audio. Thirdly is multi-language caption support. I think those three issues are more gracefully handled within applications. Christopher Levy: 28:34 Lastly, I think applications are more likely to support advanced codecs, like HEVC, sooner. Because the applications are running on devices that are being modernized, updated, and purchased more widely across a wider range of markets. And so the people that design the player SDKs and apps, and the operating systems in the devices, are much more likely to embrace newer codecs like HEVC than browser vendors are. Because browsers update at a crawl. Christopher Levy: 29:09 So, I mean, Google is the fastest browser updater in the business. But then, if you look at Safari, and IE and Edge, it's like, you know, waiting for your grandmother to mail your birthday present. You get it like four months later. But you're happy you got it. So I think that's the last kind of hidden thing: within premium apps on devices, in a bring-your-own-device model, there's a greater chance that you're gonna get higher quality content sooner, with DRM, than you would in a set-top box or in a seatback implementation. Mark Donnigan: 29:40 Yeah, this is a really important discussion, I think, for any of our listeners who are planning video services, and maybe haven't been able to do that next level of research, and are thinking, "hey, you know, I can just go to HTML5, it will reduce complexity, it will get me to market faster." Those are all true, but you have to know what you're also not gonna be able to deliver to your customer.
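A side note on the storage limits Christopher mentions: a browser-based (app-less) player can at least ask the browser how much origin storage it will be granted before attempting an offline download. The sketch below is only illustrative — it is not how any particular IFE system works, and the 20% headroom factor is an arbitrary assumption.

```typescript
// Minimal sketch: check how much origin storage a browser will grant before
// attempting an offline download in a browser-based (app-less) player.
async function canDownloadForOffline(movieSizeBytes: number): Promise<boolean> {
  if (!navigator.storage || !navigator.storage.estimate) {
    // Older browsers expose no quota API at all; treat downloads as unsafe.
    return false;
  }
  const { usage = 0, quota = 0 } = await navigator.storage.estimate();
  const remaining = quota - usage;
  console.log(`Origin storage: ${usage} of ${quota} bytes used`);
  // Only proceed if the movie fits with some headroom left for other data (arbitrary 20%).
  return remaining > movieSizeBytes * 1.2;
}

// Usage: a 4 GB download will fail on many browsers, which cap origin storage
// well below what a feature-length HD title needs.
canDownloadForOffline(4 * 1024 ** 3).then((ok) => console.log("ok to download:", ok));
```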
Mark Donnigan: 30:07 One of the other things, that I didn't hear you say, maybe I missed it, but I know one hundred percent, you know to be true, is that content licensing in some cases prohibits for example, HD in a browser, or certain browsers or in certain configurations. So yeah, you may be able to deliver in to that browser, but you're limited to SD, you know? 480p or maybe 720p, but not 1080p, so you're not able to deliver even the full quality. Mark Donnigan: 30:41 Now, in-flight entertainment, the bandwidths are so low that you know, I think 1080p is not very common anyway, but the point is, is that those are even things that you have to think about. Christopher Levy: 30:53 Well, a researcher David McCannon, he's pretty famous, he's a young guy over in the UK who previously, was responsible for a pretty significant kind of white hat hack that started to turn ugly. He's a pretty brilliant guy. He published some stuff on Monday of last week that indicated that he had breached Widevine's level three DRM. Which is the lowest level of DRM, mostly used in the Chrome browser, now it appears that what he breached, wasn't exactly Google's technology, but a third party[inaudible 00:31:30] technology that Google was using to wrap up their content decryption module that sits inside Chrome. Christopher Levy: 31:37 But, it's a good example of where, devices, especially Android devices, you know, they have hardware in them that allows hardware assisted key management. So, they have a hardware manage black box that sits on the device that is basically impenetrable. And so, that's another benefit of using devices. Christopher Levy: 32:02 Apple has the same thing, so Fairplay on IOS, taps into a trusted computing module that's on the chip that's in the iPhone or iPad. Same thing with Android with Google's DRM, you can get level one Widevine playback for HD, and 4K content on the device and then you can cast that out to a much bigger screen if you want over Chromecast, or over Airplay for example. So that's, that's another example where, you know, apps are much more secure than play back in the browser. Christopher Levy: 32:34 So, what has to happen now is Google's gotta go modify, and what they're in the process of, from what I understand, of updating their content decryption module for Widevine and Chrome, so that their level three use, which is what most of the operators use, is safe. Christopher Levy: 32:51 But again, they're operating on a non-native platform to them. Windows, in the most cases. Widevine also runs in Chrome on, on, on MacOS, but in those browser models, browsers are sitting on top of operating systems that the operator doesn't always own, and so that's again another benefit to using Premium apps. Mark Donnigan: 33:15 This is an awesome lead in to a discussion about AV1, and DRM support. I don't know, have you had the chance to do some research around you know DRM support for AV1? Christopher Levy: 33:30 Yeah, I mean, we've been following it pretty closely. We are really closely aligned with some companies that are working pretty seriously on it, I mean. We're very aligned with Google, and Bitmovin, and Amazon and Intel, and some of the other people that are involved in it. Christopher Levy: 33:47 But again, the big question is, at what point does AV1 start to appear in content in browsers with DRM's? And I guess, the problem that we kinda have right now, is that that hasn't really happened, and they've done some kind of stuff playing around with Firefox, to play AV1 content. 
But really, it's gonna be up to — again, it's gonna be up to Apple, Google, and Microsoft. Right? Because they are the ones that own the DRM and the browser, and so again, you've got a weird situation. It's not a simple economy of supply and demand; there's this third, you know, hidden hand that's influencing who is gonna implement what. Christopher Levy: 34:43 You've got HEVC, which is widely deployed, heavily proven in the marketplace. It's gone through some royalty and licensing politics that are pretty consistent with what all codecs go through. I kinda wish sometimes the encoding business had the same oligarchy that DRM has, where Intertrust can just license everybody and be done with it. Christopher Levy: 35:07 But HEVC, in comparison to AV1 — with HEVC there are tons of documents on, you know, Apple's developer page, Google's developer portal, Microsoft's developer portal, showing how to use their DRM with HEVC on different platforms, and there are numerous, numerous chipset manufacturers, as you well know, and which we provided you a list of, that support it. And also, it's supported in a lot of the browsers already, if not all of them. Christopher Levy: 35:36 AV1, on the other hand, is kind of nowhere with any of that yet, but it seems to be, you know, a little less encumbered with the intellectual property issues. But frankly, I feel like as it gets closer to being deployed, and people start to really get their hooks into it, we're probably going to see the same thing happen to AV1 that happened to HEVC. But I think it's gonna happen before it ever gets widely deployed, in my opinion. Dror Gill: 36:06 When you say "the thing that's gonna happen," are you referring to patent accusations or patent infringement? Christopher Levy: 36:21 Yeah, I try not to pick a side, because, you know, let's face it: if you look at our entire industry, the two most research-dollar-intensive things are codecs and DRM. You could build a codec and, at the end of spending millions, throw it in the trash because it didn't scale. You could build a DRM and, in the end, because you weren't doing a freedom-to-operate analysis ahead of time, find out that you built a great technology, but it's never gonna see the light of day in the market because you are infringing on someone else's IP. Christopher Levy: 36:55 I think what's going on with HEVC is kind of normal, right? Like, all these companies invested in it. And clearly, they intend to see their return on the investment, and they're looking at what happened with H.264, the patent pool stuff, and the kind of facts that we all know — that there are quite a few companies in the business that aren't reporting royalties properly, and have kind of jumped the shark there. Christopher Levy: 37:20 So, I think HEVC has a better chance than AV1, if I were to weigh the two. Just because of, you know, all the points I've mentioned: much more widely deployed, chip support, browser support, DRM support. AV1 doesn't have any of that, and it doesn't have the encumbrances of potential legal battles, yet. But I don't know. What do you guys think is gonna happen when it comes time to walk the aisle with AV1? Dror Gill: 37:46 Indeed, nobody is giving you indemnification against any patent lawsuits for AV1. The companies involved in developing the codec itself have signed agreements that they will not sue each other, or the users of AV1, but this doesn't mean that somebody else will not claim IP rights on algorithms used in AV1.
Dror Gill: 38:18 And, on the other hand, the conclusion that we reached is that — it is well known that AV1, right now, is much more computationally complex than HEVC. Right now, it's like a hundred times more complex, and even the people involved in AV1 development have told us that in the end, when everything is optimized, it will still be five to ten times more complex than HEVC. Dror Gill: 38:49 And we think that one of the reasons for that is all of that sidestepping of patents. All of these techniques need to be as efficient, in terms of bitrate, as HEVC, but cannot use the same tools, and therefore have to go in very weird ways around those protected methods in order to achieve the same result. And this is part of the problem, and why it is so computationally complex. Dror Gill: 39:26 Recently, I've come up with yet another conspiracy theory after hearing — somebody wrote this in a blog post — that a lot of the decisions that were made during the development of AV1 were driven by the hardware companies that were members of the AOM. Christopher Levy: 39:47 I was just gonna say, Dror, that, A, there's no free lunch — whether it's physics, mathematics (which is, you know, part of physics), technology, relationships, or religion — and that doesn't surprise me. Christopher Levy: 40:02 But what I was gonna point out was, Occam's razor says the simplest answer is more than likely the correct one. I would say that is what's driving it, because, let's face it, I mean, there's not a person working on it that doesn't benefit from that. I'm pretty sure that Google, Amazon, Microsoft, Apple, Intel, and all the other companies involved sell computing software, and technology, and silicon, so I can't imagine why that wouldn't be the case. Christopher Levy: 40:32 But you make a good point. Regardless of the fact that they're trying to ignore the laws of thermodynamics, I imagine they have a strategy for how they're going to sort that out, but the question is, will it really work? And the other thing, too, is if they don't adopt DRM into their messaging pretty soon, and start showing examples of AV1 content with DRM, it's just gonna be another UltraViolet. It's gonna be shiny. It's gonna sparkle. It's gonna have all the right looks and feels. It's got a cool logo. The stuff on the side is really cool, but will people use it, or is it just gonna be another augmented reality, virtual reality, 3D a year from now? Mark Donnigan: 41:14 You know, I sometimes find myself feeling a little agitated sitting in a conference, and I'm listening to a panel, and I'm hearing either a panelist or even Mozilla, you know, saying, "it's coming, player support is coming. It's just months away. It's gonna be in the browser." And I'm going, so, really? So, Sony Pictures and Warner Brothers are gonna allow you to play their movies inside a browser without DRM? Yeah. Let's see how that works. You know? Then you've got speakers up on the stage, usually, and they're throwing out big service names — and Netflix is heavily behind AV1, so I am not naïve, Netflix is having these discussions, I'm sure. Mark Donnigan: 42:04 But the point is that DRM is DRM. It has to be implemented. It has to work with the standards the content owners accept. But the fact that you don't hear DRM mentioned — it's sort of just, it's almost like, "oh yeah, yeah. It's gonna be in the browser. It's gonna be supported."
I'm like, that's just not how it works. "It will come later. It's coming, don't worry about it." [inaudible 00:42:29] Christopher Levy: 42:28 I mean, never mind the battle that was fought at the W3C by all the media companies just named, and a hundred more, along with Google, and Apple, and Microsoft, to implement DRM in the browser, because they know that's where people want to view content on their computers, whether it be desktop or laptop. Christopher Levy: 42:46 But they didn't do all the work and engineering to get EME and CDMs working to just all of a sudden say, "see, we're gonna throw it out the window because there's this new codec in town." Mark Donnigan: 43:00 Yeah, exactly. Exactly. So, wow. Well, I'm looking at our time here. This has been an amazing discussion, Christopher, and we absolutely need to have you back, because we didn't get to talk about players, and I know you guys are also active in player development. So, I think — Dror, what do you think? I think a part two should be players. Dror Gill: 43:24 You know, Mark, Christopher did make this analogy between codecs and DRM. In one of the first episodes we told the story of the codecs, how they've been developed, and DRM is also really a fascinating story — even more so, because it's not standards that span dozens of companies. It's really a few companies holding the power, holding the key — and that's also the DRM key — in the whole industry. How it's gonna develop in the future, I think, would be really interesting to see: whether we are finally going toward true standards, and a much easier life for consumers to play their content anywhere, or do we still have a few years of struggling? So, really, thank you very much, Christopher. Mark Donnigan: 44:16 Christopher, your website is BuyDRM.com, correct? Christopher Levy: 44:23 That's correct, and the blog is thedrmblog.com, and once you guys get this podcast up and done, we'll go ahead and feature it on the blog. And I just wanted to quickly mention that in the next couple of days, we're gonna have a new blog post come out about deploying secure SDKs. And we tackle a lot of the issues we talked about here in a generalist way. We do talk about our own SDK players, but I'll notify you when that blog is up. I think your readers will find it interesting. Christopher Levy: 44:53 We also have an HEVC update coming on our blog, but after today, once you post the final edited blog, then we'll go ahead and roll out our update that I provided you, with regards to kinda where the market's at as well. Mark Donnigan: 45:09 Awesome. Awesome. Sounds good. Okay, well, we want to thank you again for listening to this incredibly engaging episode of The Video Insiders, and until next time, what do we say, Dror? Encode on? Is that our new...? Dror Gill: 45:29 Encode on! Encode happily! Mark Donnigan: 45:30 Encode happily — we've got to come up with something. Dror Gill: 45:32 Yeah, we need to invent something like, you can never compress too much. Mark Donnigan: 45:36 That's right, you can never compress too much, but you must preserve all the original quality. Alright, have a great day everyone. Thank you for listening. Christopher Levy: 45:45 Thank you. Announcer: 45:47 Thank you for listening to the Video Insiders podcast, a production of Beamr Limited. To begin using Beamr's codecs today, go to Beamr.com/free to receive up to one hundred hours of no-cost HEVC and H.264 transcoding every month.
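As a footnote to the EME and hardware-backed key management discussion above, here is a minimal sketch of how a web player can probe what the W3C EME API exposes. It assumes the conventional Widevine key-system string ('com.widevine.alpha') and the robustness labels Widevine documents for software (L3) and hardware (L1) protection; a production player would of course also probe PlayReady and FairPlay.

```typescript
// Minimal sketch: ask the browser (via the W3C EME API) whether a Widevine CDM
// is available, and at what robustness level.
async function probeWidevine(): Promise<string | null> {
  const robustnessLevels = ["HW_SECURE_ALL", "SW_SECURE_CRYPTO"]; // L1 first, then L3
  for (const robustness of robustnessLevels) {
    try {
      await navigator.requestMediaKeySystemAccess("com.widevine.alpha", [
        {
          initDataTypes: ["cenc"],
          videoCapabilities: [
            { contentType: 'video/mp4; codecs="avc1.42E01E"', robustness },
          ],
        },
      ]);
      return robustness; // The strongest level the platform supports.
    } catch {
      // This robustness level is not available; try the next one.
    }
  }
  return null; // No Widevine CDM at all, e.g. Safari, which exposes FairPlay instead.
}

probeWidevine().then((level) =>
  console.log(level ? `Widevine available at ${level}` : "Widevine not supported")
);
```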
Open source codec pioneer, Tom Vaughan, talks about the advantages & disadvantages of proprietary & open source technology. What he says may surprise you – despite which side of the fence you are on. The following blog post first appeared on the Beamr blog at: https://blog.beamr.com/2019/01/24/in-the-battle-between-open-source-proprietary-technology-does-video-win-podcast/ Video engineers dedicated to engineering encoding technologies are highly skilled and hyper-focused on developing the foundation for future online media content. Such a limited pool of experts in this field creates a lot of opportunity for growth and development, it also means there must be a level of camaraderie and cooperation between different methodologies. In past episodes, you've seen The Video Insiders compare codecs head-to-head and debate over their strengths and weaknesses. Today, they are tackling a deeper debate between encoding experts: the advantages and disadvantages of proprietary technology vs. community-driven open source. In Episode 05, Tom Vaughan surprises The Video Insiders as he talks through his take on open source vs. proprietary technology. Press play to hear a snippet from Episode 05, or click here for the full episode. Want to join the conversation? Reach out to TheVideoInsiders@beamr.com TRANSCRIPTION (lightly edited to improve readability only) Mark Donnigan: 00:00 In this episode, we talk with a video pioneer who drove a popular open source codec project before joining a commercial codec company. Trust me, you want to hear what he told us about proprietary technology, open source, IP licensing, and royalties. Announcer: 00:18 The Video Insiders is the show that makes sense of all that is happening in the world of online video, as seen through the eyes of a second generation codec nerd and a marketing guy who knows what iframes and macroblocks are. Here are your hosts, Mark Donnigan and Dror Gill. Mark Donnigan: 00:35 Okay. Mark Donnigan: 00:35 Well, welcome back everyone to this very special edition. Every edition is special, isn't it, Dror? Dror Gill: 00:43 That's right. Especially the first editions where everybody's so excited to see what's going to happen and how it would evolve. Mark Donnigan: 00:49 You know what's amazing, Dror, we had in the first 48 hours, more than 180 download. Dror Gill: 00:55 Wow. Mark Donnigan: 00:56 You know, we're like encoding geeks. I mean, are there even 180 of us in the world? Dror Gill: 01:01 I don't know. I think you should count the number of people who come to Ben Wagoner's compressionist breakfast at NAB, that's about the whole industry, right? Mark Donnigan: 01:09 Yeah. That's the whole industry. Mark Donnigan: 01:11 Hey, we want to thank, seriously in all seriousness, all the listeners who have been supporting us and we just really appreciate it. We have an amazing guest lined up for today. This is a little personal for me. It was IBC 2017, I had said something about a product that he was representing, driving, developing at the time. In fact, it was factually true. He didn't like it so much and we exchanged some words. Here's the ironic thing, this guy now works for us. Isn't that amazing, Dror? Click to view x265 vs. Beamr 5 speed and performance test. Dror Gill: 01:49 Yeah, isn't that amazing? Mark Donnigan: 01:52 You know what, and we love each other. The story ended well, talk about a good Hollywood ending. Mark Donnigan: 01:58 Well, we are talking today with Tom Vaughn. I'm going to let you introduce yourself. 
Tell the listeners about yourself. Tom Vaughn: 02:10 Hey Mark, hey Dror. Good to be here. Tom Vaughn: 02:12 As Mark mentioned, I'm Beamr's VP of strategy. Joined Beamr in January this year. Before that I was Beamr's, probably, primary competitor, the person who started and led the x265 project at MulticoreWare. We were fierce competitors, but we were always friendly and always friends. Got to know the Beamr team when Beamr first brought their image compression science from the photo industry to the video industry, which was three or four years ago. Really enjoyed collaborating with them and brainstorming and working with them, and we've always been allies in the fight to make new formats successful and deal with some of the structural issues in the industry. Dror Gill: 03:02 Let me translate. New formats, that means HEVC. Structural issues, that means patent royalties. Tom Vaughn: 03:08 Yes. Dror Gill: 03:09 Okay, you can continue. Tom Vaughn: 03:11 No need to be subtle here. Tom Vaughn: 03:13 Yeah, we had many discussions over the years about how to deal with the challenging macro environment in the codec space. I decided to join the winning team at Beamr this year, and it's been fantastic. Mark Donnigan: 03:28 Well, we're so happy to have you aboard, Tom. Mark Donnigan: 03:32 I'd like to just really jump in. You have a lot of expertise in the area of open source, and in the industry, there's a lot of discussion and debate, and some would even say there's religion, around open source versus proprietary technology, but you've been on both sides and I'd really like to jump into the conversation and have you give us a real quick primer as to what is open source. Tom Vaughn: 04:01 Well, open source is kind of basic what it says is that you can get the full source code to that software. Now, there isn't just one flavor of open source in terms of the software license that you get, there are many different open source licenses. Some have more restrictions and some have less restrictions on what you can do. There are some well known open source software programs and platforms, Linux is probably the most well known in the multimedia space, there's FFmpeg and Libav. There's VLC, the multimedia player. In the codec space, x264, x265, VP9, AV1, et cetera. Dror Gill: 04:50 I think the main attraction of open source, I think, the main feature is that people from all over the world join together, collaborate, each one contributes their own piece, then somehow this is managed together. Every bug that is discovered, anyone can fix it, because the source is open. This creates kind of a community and together a piece of software is created that is much larger and more robust than anything that a single developer could do on his own. Tom Vaughn: 05:23 Yeah, ideally the fact that the source code is open means that you have many sets of eyes, not only trying the program, but able to go through the source code and see exactly how it was written and therefore more code review can happen. On the collaboration side, you're looking for volunteers, and if you can find and energize many, many people worldwide to become enthusiastic and devote time or get their companies motivated to allocate developers full- or part-time to a particular open source project, you get that collaboration from many different types of people with different individual use cases and motivations. There are patches submitted from many different people, but someone has to decide, does that patch get committed or are there problems with that? 
Should it be changed? Tom Vaughn: 06:17 Designed by a committee isn't always the optimal, so someone or some small group has to decide what should be included, what should be left out. Dror Gill: 06:27 It's interesting to see, actually, the difference between x264 and x265 in this respect, because x264, the open source implementation of x264 was led by a group of developers, really independent developers, and no single company was owning or leading the development of that open source project. However, with x265, which is the open source implementation of HEVC, your previous company, MulticoreWare, has taken the lead and devoted, I assume, most of the development resources that have gone into the open source development, most of the contributions came from that company, but it is still an open source project. Tom Vaughn: 07:06 That's right. x264 was started by some students at a French university, and when they were graduating, leaving the university, they convinced the university to enable them to take the code with them, essentially under an open source license. It was very much grassroots open source beginnings and execution where developers may come and go, but it was a community collaboration. Tom Vaughn: 07:31 I started x265 at MulticoreWare with a couple of other individuals, and the way we started it was finding some commercial companies who expressed a strong interest in such a thing coming to life and who were early backers commercially. It was quite different. Then, because there's a small team of full-time developers on it working 40 hours plus a week, that team is moving very fast, it's organized, it's within a company. There was less of a need for a community. While we did everything we could to attract more external contributors, attracting contributors is always a challenge of open source projects. Mark Donnigan: 08:14 What I hear you saying, Tom, is it sounds like compared to the x264 project, the x265 project didn't have as large of a independent group of contributors. Is that …? Tom Vaughn: 08:29 Well, x264 was all independent contributors. Mark Donnigan: 08:32 That's right. Tom Vaughn: 08:33 And still is, essentially. There are many companies that fund x264 developers explicitly. Chip companies will fund individual developers to optimize popular open source software projects for their instruction set. AVX, AVX2, AVX512, essentially, things like that. Tom Vaughn: 08:58 HEVC is significantly more complex than AVC, and I think, if I recall correctly, x265 already has three times the number of commits than x264, even though it's only been in existence for one third of the life. Dror Gill: 09:12 So Tom, what's interesting to me is everybody's talking about open source software being almost synonymous with free software. Is open source really free? Is it the same? Tom Vaughn: 09:23 It can be at times. One part depends on the license and the other part depends on how you're using the software. For example, if it's a very open license like Apache, or BSD, or UIUC, that's an attribution only license, and you're pretty much free to create modifications, incorporate the software in your own works and distribute the resulting system. Tom Vaughn: 09:49 Software programs like x264 and x265 are licensed under the GNU GPL V2, that is an open source license that has a copyleft requirement. That means if you incorporate that in a larger work and distribute that larger work, you have to open source not only your modifications, but you have to open source the larger work. 
Most commercial companies don't want to incorporate some open source software in their commercial product, and then have to open source the commercial product. The owners of the copyright of the GPL V2 code, x264 LLC or MulticoreWare, also offer a commercial license, meaning you get access to that software, not under the GNU GPL V2, but under a separate, different license, in which case for you, it's not open source anymore. Your commercial license dictates what you can and can't do. Generally that commercial license doesn't include the copyleft requirement, so you can incorporate it in some commercial product and distribute that commercial product without open sourcing your commercial product. Dror Gill: 10:54 Then you're actually licensing that software as you would license it from a commercial company. Tom Vaughn: 10:59 Exactly. In that case it's not open source at all, it's a commercial license. Dror Gill: 11:04 It's interesting what you said about the GPL, the fact that anything that you compile with it, create derivatives of, incorporate into your software, you need to open source those components that you integrate with as well. I think this is what triggered Steve Ballmer to say in 2001, he said something like, “Open source is a cancer that spreads throughout your company and eats your IP.” That was very interesting. I think he meant mostly GPL because of that requirement, but the interesting thing is that he said that in 2001, and in 2016 in an interview, he said, “I was wrong and I really love Linux.” Today Microsoft itself open sources a lot of its own development. Mark Donnigan: 11:48 That's right. Yeah, that's right. Mark Donnigan: 11:50 Well Tom, let's … This has been an awesome discussion. Let's bring it to a conclusion. When is proprietary technology the right choice and when is open source maybe the correct choice? Can you give the listeners some guidelines? Tom Vaughn: 12:08 Sure, people are trying to solve problems. Engineers, companies are trying to build products and services, and they have to compete in their own business environment. Let's say you're a video service and you run a video business. The quality of that video and the efficiency that you can deliver that video matters a lot. We know what those advantages of open source are, and all things being equal, people gravitate towards open source a lot because engineers feel comfortable actually seeing the source code, being able to read through it, find bugs themselves if pushed to the limit. Tom Vaughn: 12:45 At the end of the day, if an open source project can't produce the winning implementation of something, you shouldn't necessarily use it just because it's open source. At the end of the day you have a business to run and what you want is the most performant libraries and platforms to build your business around. If you find that a proprietary implementation in the long run is more cost effective, more efficient, higher performance, and the company that is behind that proprietary implementation is solid and is going to be there for you and provide a contractual commitment to support you, there's no reason to not choose some proprietary code to incorporate into your product or service. Tom Vaughn: 13:32 When we're talking about codecs, there are particular qualities I'm looking for, performance, how fast does it run? How efficiently does it utilize compute resources? How many cores do I need in my server to run this in real time? 
And compression efficiency, what kind of video quality can I get at a given bit rate under a given set of conditions? I don't want the second best implementation, I want the best implementation of that standard, because at scale, I can save a lot of money if I have a more efficient implementation of that standard. Mark Donnigan: 14:01 Those are excellent pointers. It just really comes back to we're solving problems, right? It's easy to get sucked into religious debates about some of these things, but at the end of the day we all have an obligation to do what's right and what's best for our companies, which includes selecting the best technology, what is going to do the best job at solving the problems. Mark Donnigan: 14:24 Thank you again for joining us. Tom Vaughn: 14:25 My pleasure, thank you. Dror Gill: 14:26 I would also like to thank you for joining us, not only joining us on this podcast, but also joining Beamr. Mark Donnigan: 14:32 Absolutely. Mark Donnigan: 14:33 Well, we want to thank you the listener for, again, joining The Video Insiders. We hope you will subscribe. You can go to thevideoinsiders.com and you can stream from your browser, you can subscribe on iTunes. We're on Spotify. We are on Google Play. We're expanding every day. Announcer: 14:57 Thank you for listening to The Video Insiders podcast, a production of Beamr Limited. To begin using Beamr's codecs today, go to Beamr.com/free to receive up to 100 hours of no-cost HEVC and H.264 transcoding every month.
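To put a rough number on Tom's point that a more efficient implementation saves real money at scale, here is a back-of-envelope sketch. Every figure in it (hours streamed, average bitrate, CDN price, the 30% efficiency gain) is hypothetical and only there to show the shape of the calculation.

```typescript
// Back-of-envelope sketch of "at scale, efficiency saves money."
// All numbers here are hypothetical, for illustration only.
interface Scenario {
  hoursStreamedPerMonth: number; // total viewing hours delivered
  avgBitrateMbps: number;        // average ladder bitrate
  cdnCostPerGB: number;          // blended CDN price in dollars
}

function monthlyDeliveryCost(s: Scenario): number {
  // Mbps -> MB/s -> MB/hour -> MB/month -> GB/month
  const gigabytes = ((s.avgBitrateMbps / 8) * 3600 * s.hoursStreamedPerMonth) / 1024;
  return gigabytes * s.cdnCostPerGB;
}

const baseline: Scenario = { hoursStreamedPerMonth: 1_000_000, avgBitrateMbps: 4.5, cdnCostPerGB: 0.02 };
// A codec (or encoder implementation) that is 30% more bitrate-efficient.
const optimized: Scenario = { ...baseline, avgBitrateMbps: baseline.avgBitrateMbps * 0.7 };

console.log("baseline $/month:", monthlyDeliveryCost(baseline).toFixed(0));
console.log("optimized $/month:", monthlyDeliveryCost(optimized).toFixed(0));
console.log("savings $/month:", (monthlyDeliveryCost(baseline) - monthlyDeliveryCost(optimized)).toFixed(0));
```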
David Kay, Digital Archivist for Optimity Advisors and the founder of Digital Archivy (http://www.digitalarchivy.com/), tells the story of his fall into archives (particularly digital archives), his work as an archivist for an animated television program, and his efforts to help design the Society of American Archivist's Digital Archives Specialist certification program. This is the first of our episodes to end with a limerick.
E03: What does the future hold for video codecs? This week, The Video Insiders look at the past and present to assess the future landscape of video encoding as they discuss where AVC, VP9, and VVC fit into the codec stew. The following blog post first appeared on the Beamr blog at: https://blog.beamr.com/2018/12/15/the-future-of-3-character-codecs-avc-vp9-vvc/ Anyone familiar with the streaming video industry knows that we love our acronyms. You would be hard-pressed to have a conversation about the online video industry without bringing one up… In today's episode, The Video Insiders focus on the future of three-character codecs: AVC, VP9, and VVC. But before we can look at the future, we have to take a moment to revisit the past. The year 2018 marks the 15-year anniversary of AVC and in this episode, we visit the process and lifecycle of standardization to adoption and what that means for the future of these codecs. Want to join the conversation? Reach out to TheVideoInsiders@beamr.com. TRANSCRIPTION (lightly edited for improved readability) Mark Donnigan: 00:49 Well, hi, Dror! Dror Gill: 00:50 Is this really episode three? Mark Donnigan: 00:52 It is, it is episode three. So, today we have a really exciting discussion as we consider the future of codecs named with three characters. Dror Gill: 01:03 Three character codecs, okay, let's see. Mark Donnigan: 01:06 Three character codecs. Dror Gill: 01:09 I can think of … Mark Donnigan: 01:09 How many can you name? Dror Gill: 01:10 Let's see, that's today's trivia question. I can think of AVC, VP9, AV1, and VVC? Mark Donnigan: 01:21 Well, you just named three that I was thinking about and we're gonna discuss today! We've already covered AV1. Yeah, yeah, you answered correctly, but we haven't really considered where AVC, VP9, and VVC fit into the codec stew. So when I think about AVC, I'm almost tempted to just skip it, because isn't this codec standard old news? I mean, c'mon. The entire video infrastructure of the internet is enabled by AVC, so what is there to discuss? Dror Gill: 01:57 Yeah. You're right. It's like the default, but in fact, the interesting thing is that today, we're in 2018, and this is the twenty-year anniversary of the beginning of AVC. I mean, the ITU's Video Coding Experts Group issued the call for proposals for a project that at the time was called H.26L, and their target was to double the coding efficiency, which effectively means halving the bit rate necessary for a given level of fidelity. And that's why it was called H.26L — it was supposed to be low bit rate. Mark Donnigan: 02:33 Ah! That's an interesting trivia question. Dror Gill: 02:35 That's where the L came from! Mark Donnigan: 02:36 I wonder how many of our listeners knew that? That's kind of cool. H.26L. Dror Gill: 02:42 But they didn't go it alone. In 2001 they joined forces with ISO MPEG — that's the same Moving Picture Experts Group, you know, that we discussed in the first episode. Mark Donnigan: 02:56 That's right. Dror Gill: 02:57 And they came together, they joined forces, and they created JVT, that was the Joint Video Team, and I think it's a great example of collaboration between ITU, which is a standards body dealing with video communication standards, and ISO MPEG, which is a standards body dealing with video entertainment standards.
So, finally they understood that there's no point in developing video standards for these two different types of applications, so they got all the experts together in the JVT and this group developed what was the best video compression standard at the time. It was launched May 30, 2003. Mark Donnigan: 03:35 Wow. Dror Gill: 03:36 There was one drawback with this collaboration in that the video standard was known by two names. There was the ITU name which is H.264. And then there's the ISO MPEG name which is AVC, so these created some confusion at the start. I think by now, most of our listeners know that H.264 and AVC are two of the same. Mark Donnigan: 03:57 Yeah, definitely. So, AVC was developed 15 years ago and it's still around today. Dror Gill: 04:02 Yeah, yeah. I mean, that's really impressive and it's not only around, it's the most popular video compression standard in the world today. I mean, AVC is used to deliver video over the internet, to computers, televisions, mobile devices, cable, satellite, broadcast, and even blu-ray disks. This just shows you how long it takes from standardization to adoption, right? 15 years until we get this mass market adoption market dominance of H.264, AVC as we have today. Dror Gill: 04:31 And the reason it takes so long is that, we discussed it in our first episode, first you need to develop the standard. Then, you need to develop the chips that support the standard, then you need to develop devices that incorporate the chip. Even when initial implementation of the codec got released, they are still not as efficient as they can be, and it takes codec developers more time to refine it and improve the performance and the quality. You need to develop the tools, all of that takes time. Mark Donnigan: 04:59 It does. Yeah, I have a background in consumer electronics and because of that I know for certainty that AVC is gonna be with us for a while and I'll explain why. It's really simple. Decoding of H.264 is fully supported in every chip set on the market. I mean literally every chip set. There is not a device that supports video which does not also support AVC today. It just doesn't exist, you can't find it anywhere. Mark Donnigan: 05:26 And then when you look at in coding technologies for AVC, H.264, (they) have advanced to the point where you can really achieve state of the art for very low cost. There's just too much market momentum where the encode and decode ecosystems are just massive. When you think about entertainment applications and consumer electronics, for a lot of us, that's our primary market (that) we play in. Mark Donnigan: 05:51 But, if you consider the surveillance and the industrial markets, which are absolutely massive, and all of these security cameras you see literally everywhere. Drone cameras, they all have AVC encoders in them. Bottom line, AVC isn't going anywhere fast. Dror Gill: 06:09 You're right, I totally agree with that. It's dominant, but it's still here to stay. The problem is that, we talked about this, video delivery over the internet. The big problem is the bandwidth bottleneck. With so much video being delivered over the internet, and then the demand for quality is growing. People want higher resolution, they want HDR which is high dynamic range, they want higher frame rate. And all this means you need more and more bit rate to represent the video. 
The bit rate efficiency that is required today is beyond the standard in coding in AVC and that's where you need external technologies such as content adaptive encoding perceptual optimization that will really help you push AVC to its limits. Mark Donnigan: 06:54 Yeah. And Dror, I know you're one of the inventors of a perceptual optimization technique based on a really unique quality measure, which I've heard some in the industry believe could even extend the life of AVC from a bit rate efficiency perspective. Tell us about what you developed and what you worked on. Dror Gill: 07:13 Yeah, that's right. I did have some part in this. We developed a quality measure and a whole application around it, and this is a solution that can reduce the bit rate of AVC by 30%, sometimes even 40%. It doesn't get us exactly to where HEVC starts, 50% is pretty difficult and not for every content (type). But content distributors that recognize AVC will still be part of their codec mix for at least five years, I think what we've been able to do can really be helpful and a welcome relief to this bandwidth bottleneck issue. Mark Donnigan: 07:52 It sounds like we're in agreement that for at least the midterm horizon, the medium horizon, AVC is gonna stay with us. Dror Gill: 08:01 Yeah, yeah. I definitely think so. For some applications and services and certain regions of the world where the device penetration of the latest, high end models is not as high as in other parts, AVC will be the primary codec for some time to come. Dror Gill: 08:21 Okay, that's AVC. Now, let's talk about VP9. Mark Donnigan: 08:24 Yes, let's do that. Dror Gill: 08:25 It's interesting to me, essentially, it's mostly a YouTube codec. It's not a bad coded, it has some efficiency advantages over AVC, but outside of Google, you don't see any large scale deployments. By the way, if you look at Wikipedia, you read about the section that says where is VP9 used, it says VP9 is used mostly by YouTube, some uses by Netflix, and it's being used by Wikipedia. Mark Donnigan: 08:50 VP9 is supported fairly well in devices. Though, it's obviously hard to say exactly what the penetration is, I think there is support in hardware for decode for VP9. Certainly it's ubiquitous on Android, and it's in many of the UHD TV chip sets as well. So, it's not always enabled, but again, from my background on the hardware side, I know that many of those SOCs, they do have a VP9 decoder built into them. Mark Donnigan: 09:23 I guess the question in my mind is, it's talked about. Certainly Google is a notable both developer and user, but why hasn't it been adopted? Dror Gill: 09:33 Well, I think there are several issues here. One of them is compression efficiency. VP9 brings maybe 20, 30% improvement in compression efficiency over AVC, but it's not 50%. So, you're not doubling your compression efficiency. If you want to replace the codec, that's really a big deal. That's really a huge investment. You need to invest in coding infrastructure, new players. You need to do compatibility testing. You need to make sure that your packaging and your DRM work correctly and all of that. Dror Gill: 10:04 You really want to get a huge benefit to offset this investment. I think people are really looking for that 50% improvement, to double the efficiency, which is what you get with HEVC but not quite with VP9. I think the second point is that VP9, even though it's an open source coder, it's developed and the standard is maintained by Google. 
And some industry players are kind of afraid of the dominance of Google. Google has taken over the advertising market online. Mark Donnigan: 10:32 Yes, that's a good point. Dror Gill: 10:34 You know, and search, and mobile operating systems — except Apple, it's all Android. So, those industry players might be thinking, I don't want to depend on Google for my video compression format. I think this is especially true for traditional broadcasters: cable companies, satellite companies, TV channels that broadcast over the air. These companies traditionally like to go with established, international standards — compression technologies that are standardized, that have the seal of approval of ITU and ISO. Dror Gill: 11:05 They're typically following that traditional codec development path: MPEG-2, now AVC, and they're starting to move to HEVC. What's coming next? Mark Donnigan: 11:16 Well, our next three letter codec is VVC. Tell us about VVC, Dror. Dror Gill: 11:21 Yeah, yeah, VVC. I think this is another great example of collaboration between ITU and ISO. Again, they formed a joint video experts team. This time it's called JVET. Dror Gill: 12:10 So, JVET has launched a project to develop a new video coding standard. And you know, we had AVC, that was Advanced Video Coding. Then we had HEVC, which is High Efficiency Video Coding. So, they thought, what would be the next generation? It's already advanced, it's high efficiency. So, the next one, they called it VVC, which is Versatile Video Coding. The objective of VVC is obviously to provide a significant improvement in compression efficiency over the existing HEVC standard. Development has already started. The JVET group is meeting every few months in some exotic place in the world, and this process will continue. They plan to complete it before the end of 2020. So, essentially in the next two years they are gonna complete the standard. Dror Gill: 13:01 Today, already, even though VVC is in early development and they haven't implemented all the tools, they already report a 30% better compression efficiency than HEVC. So, we have high hopes that we'll be able to fight the video tsunami that is coming upon us with a much improved standard video codec, which is VVC. I mean, it's improved at least on the technical side, and I understand that they also want to improve the process, right? Mark Donnigan: 13:29 That's right, that's right. Well, technical capabilities are certainly important, and we're tracking VVC, of course. 30% better efficiency this early in the game is promising. I wonder if the JVET will bring any learnings from the famous HEVC royalty debacles to VVC, because I think what's in everybody's mind is, okay, great, this can be much more efficient, technically better. But if we have to go round and round on royalties again, it's just gonna kill it. So, what do you think? Dror Gill: 14:02 Yeah, that's right. I think it's absolutely true, and many people in the industry have realized this, that you can't just develop a video standard and then handle the patent and royalty issues later. Luckily, some companies have come together and they formed an industry group called The Media Coding Industry Forum, or MC-IF. They held their first meeting a few weeks ago in Macau, during MPEG meeting 124. Their purpose statement — let me quote this from their website, and I'll give you my interpretation of it.
They say the Media Coding Industry Forum (MC-IF) is an open industry forum with a purpose of furthering the adoption of standards, initially focusing on VVC, by establishing them as well-accepted and widely used standards for the benefit of consumers and the industry. Dror Gill: 14:47 My interpretation is that the group was formed in an effort for companies with an interest in this next generation video codec to come together and attempt to influence the licensing policy of VVC, and try to agree on a reasonable patent licensing policy in advance, to prevent history from repeating itself. We don't want that whole Hollywood story, with the tragedy that took a few years until they reached the happy ending. So, what are they even talking about? This is very interesting. They're talking about having a modular structure for the codec. The tools of the codec, the features, can be plugged in and out very easily. Dror Gill: 15:23 So, if some company doesn't agree to reasonable licensing terms, this group can just decide not to support that feature, and it will be very easily removed from the standard, or at least from the way that companies implement that standard. Mark Donnigan: 15:37 That's an interesting approach. I wonder how technically feasible it is. I think we'll get into that in some other episodes. Dror Gill: 15:46 Yeah. That may have some effect on performance. Mark Donnigan: 15:49 Exactly. And again, are we back in the situation that the Alliance for Open Media is in with AV1, where part of the issue of the slow performance is trying to work around patents? At the end of the day you end up with a solution that is hobbled technically. Dror Gill: 16:10 Yeah. I hope it doesn't go there. Mark Donnigan: 16:13 Yeah, I hope we're not there. I think you heard this too — hasn't Apple joined the consortium recently? Dror Gill: 16:21 Yeah, yeah, they did. They joined silently, as they always do. Silently means that one day somebody discovers their logo… They don't make any announcement or anything. You just see a logo on the website, and then, oh, okay. Mark Donnigan: 16:34 Apple is in the building. Mark Donnigan: 16:41 You know, maybe it's good to kind of bring this discussion back to Earth and close out our three part series by giving the listeners some pointers about how they should be thinking about the next codec that they adopt. I've been giving this some thought as we've been doing these episodes. I think I'll kick it off here, Dror, if you don't mind, and I'll share some of my thoughts. You can jump in. Mark Donnigan: 17:11 These are complex decisions, of course. I completely agree, billing this as codec wars and codec battles is not helpful at the end of the day. Maybe it makes for a catchy headline, but it's not helpful. There are real business decisions to be made. There are technical decisions. I think a good place to start is for somebody who's listening and saying "okay, great, I now have a better understanding of the lay of the land of HEVC, of AV1, I can understand VP9, I can understand AVC, and what some of my options are to even further reduce bit rate. But now, what do I do?" Mark Donnigan: 17:54 And I think a good place to start is to just look at your customers. Do they lean towards early adopters? Are you in a strong economic environment, which is to say, quite frankly, do most of your customers carry around the latest devices? Like an iPhone X, or a Galaxy S9.
If your customers largely lean towards early adopters and they're carrying around the latest devices, then you have an obligation to serve them with the highest quality and the best performance possible.
Dror Gill: 18:26 Right. If your customers can receive HEVC, and it's half the bit rate, then why not deliver it to them? You get better quality, or you save on delivery cost with this more efficient codec, and everybody is happy.
Mark Donnigan: 18:37 Absolutely, and again, I think just using pure logic: if somebody can afford a device that costs more than $1,000 in their pocket, then probably the TV hanging on the wall is a very new, UHD-capable one, and they probably have a game console in the house. The point is that you can make a pretty strong argument, and an assumption, that you can go what I like to think of as all in on HEVC, including even standard-definition, plain SDR content.
Mark Donnigan: 19:11 So the industry has really lost sight, in my mind, of the benefits of HEVC as they apply across the board to all resolutions. All of the major consumer streaming services are delivering 4K using HEVC, but I'm still shocked at how many seem to forget that the same advantages of bit rate efficiency that work at 4K apply at 480p. Obviously, the absolute numbers are smaller because the file sizes are smaller, etc.
Mark Donnigan: 19:41 But the point is, a 30, 40, 50% saving applies at 4K just as it does at 480p. I understand there are different applications and use cases, right? But would you agree with that?
Dror Gill: 19:55 Yeah, yeah, I surely agree with that. I mean, for 4K, HEVC is really an enabler.
Mark Donnigan: 20:00 That's right.
Dror Gill: 20:01 Without HEVC, you would need something like 30 or 40 megabits for 4K video. Nobody can stream that to the home, but bring it down to 10 or 15 and that's reasonable; you must use HEVC for 4K, otherwise it won't even fit the pipe. But for all other resolutions, you get the bandwidth advantage, or you can trade it off for a quality advantage and deliver higher quality to your users, or a higher frame rate, or enable HDR. All of these are possibilities with HD and even SD content: give your users a better experience using HEVC while still being able to stream to devices they already have. So yeah, I agree, I think it's an excellent analysis. Obviously, if you're in an emerging market, or your consumers don't have high-end devices, then AVC is a good solution. The same goes if there are network constraints; there are many places in the world where network connectivity isn't that great, or rural areas where large parts of the population are spread out. In these cases bandwidth is low and you will hit a bottleneck even with HD.
Mark Donnigan: 21:05 That's right.
Dror Gill: 21:06 That's where perceptual optimization can help you reduce the bit rate even for AVC and keep within the constraints that you have. When your consumers can upgrade their devices, and when the cycle comes in a few years when every device has HEVC support, then obviously you upgrade your capability and support HEVC across the board.
Mark Donnigan: 21:30 Yeah, that's a very important point, Dror: this HEVC adoption curve, in terms of silicon on devices, is in full motion. Just consider the planning life cycles. If you look at what goes into hardware, and especially on the silicon side, it doesn't happen overnight. Once these technologies are in the designs, once they are on the dies, once the codec is in silicon, it doesn't get arbitrarily turned on and off like a light switch.
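To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The ladder bitrates and the 40% figure are illustrative assumptions drawn from the ranges mentioned in the conversation, not measurements from the episode.

```python
# Back-of-the-envelope sketch: what an assumed 40% HEVC bitrate saving
# means per rendition and in delivered gigabytes. All numbers are
# illustrative placeholders.

AVC_LADDER_KBPS = {      # hypothetical AVC ABR ladder
    "2160p": 32000,      # in the 30-40 Mbps range mentioned for 4K
    "1080p": 6000,
    "720p": 3500,
    "480p": 1500,
}

HEVC_SAVING = 0.40       # assumed efficiency gain over AVC


def delivered_gb(bitrate_kbps: float, hours: float) -> float:
    """Gigabytes delivered to one viewer watching `hours` of video."""
    return bitrate_kbps * 1000 / 8 * hours * 3600 / 1e9


if __name__ == "__main__":
    hours = 2.0  # one movie-length session
    for name, avc_kbps in AVC_LADDER_KBPS.items():
        hevc_kbps = avc_kbps * (1 - HEVC_SAVING)
        saving_gb = delivered_gb(avc_kbps, hours) - delivered_gb(hevc_kbps, hours)
        print(f"{name:>6}: AVC {avc_kbps / 1000:5.1f} Mbps -> "
              f"HEVC {hevc_kbps / 1000:5.1f} Mbps, "
              f"saves {saving_gb:.2f} GB per {hours:.0f}h viewer")
```

Running it shows the point being made above: the percentage saving is the same at every rung of the ladder, while the absolute saving is naturally largest at 4K.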
Mark Donnigan: 22:04 How should somebody be looking at VP9, VVC, and AV1?
Dror Gill: 22:13 Well, VP9 is an easy one. Unless you're Google, you're very likely gonna skip over this codec. It's not that VP9 isn't a viable choice; it simply doesn't go as far as HEVC in terms of bit rate efficiency and quality. Maybe two years back we would have considered it as an option for reducing bit rate, but now, with the HEVC support that you have, there's no point in going to VP9; you might as well go to HEVC. If you talk about VVC, the standard is still a few years from being ratified, so we actually don't have anything to talk about yet.
Dror Gill: 22:49 The important point, again, is to remember that even when VVC launches, it will still be another two to three years after the standard is ratified before you have even a very basic playback ecosystem in place. So I would tell our listeners: if you're thinking, "Why should I adopt HEVC when VVC is just around the corner?", well, that corner is very far. It's more like the corner of the Earth than the corner of the next block.
Mark Donnigan: 23:15 That's right.
Dror Gill: 23:18 So, HEVC today, and VVC will be the next step in a few years. And then there's AV1. You know, we talked a lot about AV1. No doubt, AV1 has support from huge companies, I mean Google, Facebook, Intel, Netflix, Microsoft, and those engineers know what they're doing. But by now it's quite clear that its compression efficiency is the same as HEVC's. Meanwhile, HEVC Advance has removed the royalty on content delivery, so the licensing situation is much clearer now. Add to this the fact that, even two to three years from now, you're gonna need five to ten times more compute power to encode AV1, reaching effectively the same result. Now, Google again: Google may have unlimited compute resources, and they will use them; they developed the codec.
Dror Gill: 24:13 But for the smaller content providers, all the other ones, the non-Googles of the world, and for broadcasters, with the growing device support for HEVC that we expect over the next few years, I think it's obvious: they're gonna support HEVC, and then a few years later, when VVC is ratified and supported in devices, they're gonna move to VVC, because that codec does have the required compression efficiency improvement over HEVC.
Mark Donnigan: 24:39 Yeah, that's an excellent summary, Dror. Thank you for breaking this all down for our listeners so succinctly. I'm sure this is really gonna provide massive value. I want to thank our amazing audience, because without you, the Video Insiders Podcast would just be Dror and me taking up bits on a server somewhere.
Dror Gill: 24:59 Yeah, talking to ourselves.
Mark Donnigan: 25:01 As you can tell, video is really exciting to us, so we're very happy that you've joined us to listen. And again, this has been a production of Beamr Imaging Limited. Please subscribe on iTunes, and if you would like to try out Beamr codecs in your lab or your production environment, we are giving away up to $100 of HEVC and H.264 encoding every month. That's each and every month. Just go to https://beamer.com/free and get started immediately.
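For listeners who want that takeaway in a more structured form, here is a small, hypothetical decision sketch in Python that mirrors the reasoning above. The Audience fields and the thresholds are invented for illustration; a real evaluation would also weigh licensing, encoding cost, and content mix.

```python
# A rough codec-selection heuristic distilled from the discussion above.
# Thresholds are illustrative assumptions, not recommendations from the show.

from dataclasses import dataclass


@dataclass
class Audience:
    hevc_decode_share: float       # fraction of devices with HEVC decode support
    typical_bandwidth_mbps: float  # typical last-mile bandwidth
    serves_4k: bool                # does the service offer 4K content?


def pick_codec(a: Audience) -> str:
    """Very rough codec choice following the reasoning in the episode."""
    if a.serves_4k and a.hevc_decode_share > 0.5:
        return "HEVC"  # 4K barely fits the pipe without it
    if a.hevc_decode_share > 0.7:
        return "HEVC"  # go 'all in' on HEVC, down to SD
    if a.typical_bandwidth_mbps < 3:
        return "AVC + perceptual optimization"  # constrained networks, older devices
    return "AVC today; re-evaluate as HEVC device share grows"


if __name__ == "__main__":
    print(pick_codec(Audience(hevc_decode_share=0.8, typical_bandwidth_mbps=25, serves_4k=True)))
    print(pick_codec(Audience(hevc_decode_share=0.3, typical_bandwidth_mbps=2, serves_4k=False)))
```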
When shopping for a nice set of headphones for yourself or someone on your holiday list, you might run into codecs, and you might not be sure what to do with them. What are they? Do they matter? Help us, SoundGuys! Fortunately, we’re here to do just that. With a special appearance by Gary Sims, we break it all down for you. We have that and a few more surprises along the way in this episode of the SoundGuys podcast! Full transcript available at http://www.soundguys.com/podcast SoundGuys is: Chris Thomas - @CThomasTech Lily Katz - @KatzGame Adam Molina - @AdamLukas17 Special appearance by: Gary Sims - @GarySims from @GaryExplains Produced by: Adam Doud Make sure to subscribe to the SoundGuys Podcast on iTunes and visit www.soundguys.com and YouTube for reviews, news, and everything you ever wanted to know about sound and audio equipment.
Temple Grandin, an animal scientist and autism advocate, describes how she uses sound to make cattle slaughterhouses more humane. Journalist Bella Bathurst describes how she lost her hearing while conducting interviews with the last generation of Scottish lighthouse keepers and then how it felt, twelve years later, to regain it. Along the way, we'll listen deeply to ABBA and the Beach Boys and hear an excerpt from Alexander Provan's experimental essay/soundscape/bildungsroman Measuring Device with Organs, which explores sound connoisseurship and the tones that high-fidelity eliminates as unsavory. Provan's work is available as both an LP and MP3 from Triple Canopy.
FCPRadio 070 Gary Adcock RAW! In this episode we talk with tech wiz Gary Adcock, and be forewarned: Gary uses some colorful language. We talk all about HDR, ProRes RAW, BMD RAW, the new Mac Pro, codecs, Thunderbolt 3, USB-C cables, the FCPX Creative Summit, the future of the Mac platform and, of course, Final Cut Pro. Final Cut Pro Radio is sponsored by Lumaforge.com Twitter @fcpradio1 FCPRadio.com Facebook facebook.com/groups/FinalCutProRadio/
Do you know what a codec is? Well, codecs have a direct impact on people's lives, and most people have no idea. Find out everything about codecs in this unmissable podcast! Continue reading at: Ultrageek 325 – O maravilhoso mundo dos codecs
Codecs exist not only for video but also for audio and for images. When a piece of software needs to create a video file, for example, it uses one codec for the video and one for the audio and combines them into a single video file.
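As a minimal illustration of that idea, here is a sketch that encodes one video stream and one audio stream and writes them into a single MP4 container. It assumes the ffmpeg command-line tool is installed; the file names are placeholders.

```python
# Illustrative sketch: one video codec plus one audio codec are combined
# ("muxed") into a single container file. Assumes ffmpeg is installed and
# that input.mov exists in the working directory.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "input.mov",   # source file
        "-c:v", "libx264",   # video codec: H.264/AVC
        "-c:a", "aac",       # audio codec: AAC
        "output.mp4",        # MP4 container holding both streams
    ],
    check=True,
)
```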
After the briefest of hiatuses we are BACK! Did you miss us? This episode Josh and Nick get into the topic of codecs: when to use which ones, how to optimize your encoding settings for faster exports and higher-quality videos, how to establish delivery specs with clients early on, and some other goodies for you. There are also a number of links you should click on this week. Nick is trying Audio Network's new plugin for Adobe Premiere CC that makes it incredibly easy and fast to find the right music for your edit. It includes a panel right in the Premiere interface so you can search, select and edit with stock music tracks without ever leaving Premiere. They are looking for beta testers to sign up and try it out. (http://us.audionetwork.com/content/whats-new/adobe-premiere-pro-plugin? If you're looking for a basic tutorial on getting started in Premiere...this ain't it, but it'll give you a laugh. Some people are just not in the right emotional and mental state for making tutorials (https://www.youtube.com/watch?time_continue=2&v=eIMwx56aJEo). Some Premiere editors complain about its lack of native motion blur...but it's actually in there. Find out how to utilize motion blur for your project's titles and graphics (http://premierepro.net/editing/motion-blur/). Some additional reading on codecs, including really tech-talk explanations of how they work, over on Premium Beat (http://www.premiumbeat.com/blog/everything-you-need-to-know-about-codecs/), as well as a slightly more condensed explanation if you're short on time (http://www.premiumbeat.com/blog/a-compressed-explanation-of-video-compression/) --- Follow us on Twitter: www.twitter.com/CommandEdit Join our Facebook Group: https://www.facebook.com/groups/CommandEdit/ Get more of the podcast at http://www.CommandEditPodcast.com
This week we dive into the introduction to the teen-noir world of Veronica Mars. Fanboying over Darran Norris, while one of us bemoans the terrible third season of this great show. Alan hurt himself. New microphones. Tumblr interludes. General gushing about how good Veronica Mars was. General gushing about Darran Norris and his many talents. How much more successful the Kane Software is than YouTube. Codecs and super-fast uploads. T.S. Eliot references and shilling for the fake ads. Hotel room towel placement. Alan dies. Guessing what video we filmed the day of the Veronica Mars Kickstarter. Humanity's Greatest Achievement. Far too many references to Eagles songs.
There is a way to be immortal thanks to the internet: create a mental clone of yourself based on the data you can contribute on the website lifenbaut.com. Bina 48 is the first transferred consciousness to have a physical form, a robotic bust. We talk about how it was created and the possibilities it opens up. In our internet alphabet we continue with the letter C and talk about video Codecs and Cookies, and we define Cyberbullying. A weekly micro-segment about the internet, social networks and technology that Caín Santamaría, of Innovanity | Diseño, web y comunicación, presents on Radio Castilla, Cadena SER Burgos, together with Francho Pedrosa. Aired on 03/11/2015. More information and all the podcasts at http://cainsan.com. Ask me on Twitter: https://twitter.com/cainsan
Sean and Derek circle back on HTTP before diving into unsafe Rust, and finally the merits of a small standard library. Links: HTTP2 implementation status; libffmpeg; unsafe rust; uninitialized memory in Rust; stdx - the missing batteries of Rust; NPM 3.0.0; NPM Shrinkwrap
What is a codec? What do I need to be aware of when choosing a codec? What codec should I edit with? How do I convert from one codec to another? How do I get the best look for _______ ? The post 5 THINGS: on Demystifying Video Codecs appeared first on 5 THINGS - Simplifying Film, TV, and Media Technology.
5 THINGS - Simplifying Film, TV, and Media Technology - Audio Only
What is a codec? What do I need to be aware of when choosing a codec? What codec should I edit with? How do I convert from one codec to another? How do I get the best look for _______ ? The post Demystifying Video Codecs appeared first on 5 THINGS - Simplifying Film, TV, and Media Technology.
In this program we explore the technology behind sharing your videos over the internet. We'll look at a primer on video compression and gain an understanding of codecs and container formats. Lastly, Chet shares some specific websites for sharing videos. Specific info mentioned in the program relates to YouTube, Facebook, Picasa, Flickr, and SmugMug. Show notes are available at http://www.YourTechnologyTutor.com/12
This animation comes from the course Kommunikationsnetze 2 in the online distance-learning degree programme Medieninformatik. More info: http://oncampus.de/index.php?id=320 A mixer is used, for example, to adapt data streams in audio conferences when they arrive in different codecs. This makes it possible for every participant in a conference to keep using their own codec.
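As a rough illustration of that mixing idea, here is a small, self-contained Python sketch. The pass-through "codecs" and the tiny sample buffers are stand-ins; a real conference mixer would call actual decoder and encoder libraries (Opus, G.711, AAC, and so on) for each participant's codec.

```python
# Sketch: each participant sends audio in their own codec; the mixer decodes
# everything to a common PCM form, sums the *other* participants' signals,
# and re-encodes the mix in the listener's own codec.

def passthrough_decode(frames):
    return list(frames)

def passthrough_encode(samples):
    return list(samples)

PARTICIPANTS = {
    # name: (decoder, encoder, received audio as PCM-like sample lists)
    "alice": (passthrough_decode, passthrough_encode, [0.1, 0.2, 0.1]),
    "bob":   (passthrough_decode, passthrough_encode, [0.0, 0.3, -0.1]),
    "carol": (passthrough_decode, passthrough_encode, [0.2, -0.2, 0.0]),
}

def mix_for(listener):
    """Mixed signal sent back to `listener`, excluding their own audio."""
    pcm = {name: dec(frames) for name, (dec, _enc, frames) in PARTICIPANTS.items()}
    length = len(pcm[listener])
    mixed = [sum(pcm[name][i] for name in pcm if name != listener)
             for i in range(length)]
    _dec, enc, _frames = PARTICIPANTS[listener]
    return enc(mixed)  # re-encoded in the listener's own codec

for name in PARTICIPANTS:
    print(name, mix_for(name))
```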
An all-you-can-eat historical and technical look at web video. What's up with HTML5 and these competing codecs? How can you use video today? Where are things going in the future? Videoblog innovator Michael Verdi joins host Jen Simmons for a double-length show on video, video, video.
In episode 44 of the Hackerfunk we discuss a topic that still has a bit of a whiff of rocket science about it: video compression! Codecs, containers and quality levels, and not only for the sake of the alliteration. XTaran and Venty's guest this time is Christof Bürgi. Tracklist: One Dice – Wake up, Sleeper; Timbral – Elektronika Shox; Lukhash – We ain’t finished yet; Kraftfuttermischwerk – Das alte Schulhaus. Next show on Saturday, 2 April 2011, 19:00. Intro :: Wallisers advertise LC1! Handbrake :: Very user-friendly video transcoder for Linux, Windows and Mac Mplayer & Mencoder :: Universal media player and encoder for Linux FFMPeg :: Complex video encoder for the command line, mostly used as a backend VLC :: VideoLAN player and transcoder for Linux, Windows and Mac WebM :: WebM specifications and FAQ MKV :: Matroska project specifications MP4 :: Wikipedia on MP4 MP4 :: MP4 format description MPEG :: Wikipedia with links to the various MPEG versions MPEG :: Description of the MPEG video stream MPEG :: MPEG homepage File Download (59:38 min / 75 MB)
This week we take an in-depth look at compression, what all those crazy settings mean, and how best to distribute to your end user. PLUS we announce the Great Game Show giveaway, get some sweet resources from the stockpile and give you a glimpse at the state of the motion design industry. All this and more on this week's CMD TV.
PIP 39 – Recorded on January 4, 2011. 02:53 – Predictions for PIP in 2011 07:05 – Nikon D7000 review 13:24 – 64GB and 128GB SD (SDXC) cards 25:18 – Video compression and codecs 27:29 – A home-made method for measuring noise 36:56 – Zoom lenses are bad at the extremes 41:05 […]
Free apps for Windows, the TecnoCasters' thongs, and more!!
Audio and video streaming on the internet has been around for a long time, but it is one of those technologies that established themselves comparatively slowly, because formats and protocols were fought over for years. Recently, however, new trends have emerged that could make real-time transmission of picture and sound on the net more easily accessible to potential broadcasters as well as receivers. In conversation with Tim Pritlove, Nikolai Longolius explains the history and technology of streaming protocols and formats. Among the topics covered: streaming vs. progressive download, server-based direct access to media files, containers and codecs, the structure of a streamable file, the importance of key frames, recording encrypted streams, proprietary and free streaming servers, the trend towards HTTP-based streaming, restrictive licenses vs. free codecs, livestreaming services, and more.
Tim Pritlove and Michael Feiri talk about multimedia data formats, with a focus on video formats and the multimedia frameworks in operating systems. They discuss in detail the history and development of container and video formats, the technical particularities and differences between these solutions, and questions such as patents and license payments.
Here we begin our second season, with new sections, more production and more enthusiasm. On this occasion we will talk about the "Green Ray" and recommend a novel by Jules Verne. In the new section "Guia del Internauta Galactico" we will learn what codecs are and how to install them on our computer so we can play every multimedia format out there. We will also learn what fan films are and enjoy some of those productions. The music this time is by Hans Zimmer, from the last two "Pirates of the Caribbean" films, along with a few occasional themes from "The Abyss" and "2010". You can find all the content and links mentioned in this podcast on our blog: http://megacosmos.blogspot.com/2007/06/contenidos-del-podcast-22.html Remember that I look forward to your criticism, suggestions and suggested content at megacosmos72@gmail.com Also bear in mind that demand for the podcast is such that we keep blowing through the bandwidth of our provider Podomatic, so if you can't download the podcast from the Podomatic page, you can alternatively download it in MP3 format from our Megacosmos portal: www.mega-cosmos.com