Francois Daoust is a W3C staff member and co-chair of the Web Developer Experience Community Group. We discuss the W3C's role and what it's like to go through the browser standardization process.

Related links
W3C
TC39
Internet Engineering Task Force
Web Hypertext Application Technology Working Group (WHATWG)
Horizontal Groups
Alliance for Open Media
What is MPEG-DASH? | HLS vs. DASH
Information about W3C and Encrypted Media Extensions (EME)
Widevine
PlayReady
Media Source API
Encrypted Media Extensions API
requestVideoFrameCallback()
Business Benefits of the W3C Patent Policy
web.dev Baseline
Portable Network Graphics Specification
Internet Explorer 6
CSS Vendor Prefix
WebRTC

Transcript
You can help correct transcripts on GitHub.

Intro [00:00:00] Jeremy: Today I'm talking to Francois Daoust. He's a staff member at the W3C. And we're gonna talk about the W3C and the recommendation process and discuss Francois's experience with, with how these features end up in our browsers. [00:00:16] Jeremy: So, Francois, welcome. [00:00:18] Francois: Thank you Jeremy and uh, many thanks for the invitation. I'm really thrilled to be part of this podcast. What's the W3C? [00:00:26] Jeremy: I think many of our listeners will have heard about the W3C, but they may not actually know what it is. So could you start by explaining what it is? [00:00:37] Francois: Sure. So W3C stands for the World Wide Web Consortium. It's a standardization organization. I guess that's how people should think about W3C. It was created in 1994 by, uh, Tim Berners-Lee, who was the inventor of the web. Tim Berners-Lee was the director of W3C for a long, long time. [00:01:00] Francois: He retired not long ago, a few years back. And W3C has, uh, a number of, uh, properties, let's say. First, the goal is to produce royalty free standards, and that's very important. Uh, we want to make sure that, uh, the standards that get produced can be used and implemented without having to pay fees to anyone. [00:01:23] Francois: We do web standards. I didn't mention it, but it's in the name. Standards that you find in your web browsers. But not only that, there are a number of other, uh, standards that got developed at W3C including, for example, XML. Data related standards. W3C as an organization is a consortium. [00:01:43] Francois: The, the C stands for consortium. Legally speaking, it's a, it's a 501c3, meaning it's a US based, uh, legal entity, not for profit. And the, the little three is important because it means it's public interest. That means we are a consortium, that means we have members, but at the same time, the goal, the mission is to serve the public. [00:02:05] Francois: So we're not only just, you know, doing what our members want. We are also making sure that what our members want is aligned with what end users, in the end, need. And the W3C has a small team. And so I'm part of this, uh, of this team worldwide. Uh, 45 to 55 people, depending on how you count, mostly technical people and some, uh, admin, uh, as well, overseeing the, uh, the work that we do, uh, at the W3C. Funding through membership fees [00:02:39] Jeremy: So you mentioned there's 45 to 55 people. How is this funded? Is this from governments or commercial companies? [00:02:47] Francois: The main source comes from membership fees. So the W3C has, uh, members, uh, roughly 350 members, uh, at the W3C. And, in order to become a member, an organization needs to pay, uh, an annual membership fee. That's pretty common among, uh, standardization, uh, organizations.
[00:03:07] Francois: And, we only have, uh, I guess three levels of membership fees. Uh, well, you may find, uh, additional small levels, but three main ones. The goal is to make sure that a big player, a large company, will not have more rights than, uh, anyone else. So we try to make sure that, you know, all members have equal rights. [00:03:30] Francois: It's not perfect, but, uh, uh, that's how things are, are set. So that's the main source of income for the W3C. And then we try to diversify just a little bit to get, uh, for example, we go to governments. We may go to governments in the EU. We may, uh, take some, uh, grant for EU research projects that allow us, you know, to study, explore topics. [00:03:54] Francois: Uh, in the US there, there used to be some, uh, some funding coming from the government as well. So that, that's, uh, also, uh, a source. But the main one is, uh, membership fees. Relations to TC39, IETF, and WHATWG [00:04:04] Jeremy: And you mentioned that a lot of the W3C's work is related to web standards. There's other groups like TC 39, which works on the JavaScript spec, and the IETF, which I believe worked with your group on WebRTC. I wonder if you could explain W3C's connection to other groups like that. [00:04:28] Francois: Sure. We try to collaborate with a, a number of, uh, other standardization organizations. So in general, everything goes well because you, you have a clear separation of concerns. So you mentioned TC 39. Indeed, they are the ones who standardize JavaScript. The proper name of JavaScript is ECMAScript. [00:04:47] Francois: So TC 39 is the technical committee at Ecma. And so we have indeed interactions with them because their work directly impacts the JavaScript that you're going to find, uh, run in your, in your web browser. And we develop a number of JavaScript APIs, uh, actually in W3C. [00:05:05] Francois: So we need to make sure that the way we develop, uh, you know, these APIs aligns with the, the language itself. With IETF, the, the, the boundary is, uh, uh, is clear as well. It's protocols, network protocols, for the IETF, and the application level for W3C. That's usually how the distinction is made. [00:05:28] Francois: The boundaries are always a bit fuzzy, but that's how things work. And usually, uh, things work pretty well. Uh, there's also the WHATWG, uh, and the WHATWG, the, the history was more complicated because of, uh, a fork of the, uh, HTML specification, uh, at the time when it was developed by W3C, a long time ago. [00:05:49] Francois: And there had been some, uh, well, disagreement on the way things should have been done, and the WHATWG got created, took, took the HTML spec and did it a different way. Went in another, another direction, and that other, other direction actually ended up being the direction. [00:06:06] Francois: So, that's a success, uh, from there. And so, W3C no longer owns the, uh, HTML spec and the WHATWG has, uh, taken, uh, taken up a number of, uh, of different core specifications for the web. Uh, doing a lot of work on the, uh, on interoperability and making sure that, uh, the algorithms specified by the spec were correct, which, which was something that historically we haven't been very good at at W3C.
[00:06:35] Francois: And the way they've been working has, has a lot of influence on the way we develop now, uh, the APIs, uh, from a W3C perspective. [00:06:44] Jeremy: So, just to make sure I understand correctly, you have TC 39, which is focused on the JavaScript or ECMAScript language itself, and you have APIs that are going to use JavaScript and interact with JavaScript. So you need to coordinate there. The, the WHATWG have the specification for HTML. Then the IETF, they are, I'm not sure if the right term would be, they, they would be one level lower perhaps, than the W3C. [00:07:17] Francois: That's how you, you can formulate it. Yes. The, the one layer, one layer lower in the ISO stack, at the network level. How WebRTC spans the IETF and W3C [00:07:30] Jeremy: And so in that case, one place I've heard it mentioned is that WebRTC, to, to use it, there is an IETF specification, and then perhaps there's a W3C recommendation and [00:07:43] Francois: Yes. So when we created the WebRTC working group, that was in 2011, I think, it was created with a dual head. There was one RTCWEB group that got created at IETF and a WebRTC group that got created at W3C. And that was done on purpose. Of course, the goal was not to compete on the, on the solution, but actually to have the two sides of the, uh, solution be developed in parallel, the API, uh, the application front and the network front. [00:08:15] Francois: And there was a, and there's still a lot of overlap in, uh, participation between both groups, and that's what keeps things successful in the end. It's not, uh, you know, process or organization to organization, uh, relationships, coordination at the organization level. It's really the fact that you have participants that are essentially the same on both sides of the equation. [00:08:36] Francois: That helps, uh, move things forward. Now, WebRTC is, uh, is more complex than just one group at IETF. I mean, WebRTC is a very complex set of, uh, of technologies, stack of technologies. So when you, when you pull a little, uh, protocol from the IETF, suddenly you have the whole IETF that comes with it. [00:08:56] Francois: So you, you have the feeling that WebRTC needs all of the, uh, internet protocols that got, uh, created to work Recommendations [00:09:04] Jeremy: And I think probably a lot of web developers, they may hear words like specification or standard, but I believe the, the official term, at least at the W3C, is this recommendation. And so I wonder if you can explain what that means. [00:09:24] Francois: Well, it means, it means standard in the end. And that came from, that comes from a time where, as with many standardization organizations, W3C was created not to be a standardization organization. It was felt that standard was not the right term because we were not a standardization organization. [00:09:45] Francois: So, recommendation. IETF has the same thing. They call it RFC, request for comments, which, you know, stands for nothing, and yet it's a standard. So W3C was created with the same kind of, uh, thing. We needed some other terminology and we call that recommendation. But in the end, that's a standard. It's really, uh, how you should see it. [00:10:08] Francois: And one thing I didn't mention when I, uh, introduced the W3C is there are two types of standards in the end, two main categories. There are the de jure standards and de facto standards, two families. The de jure standards are the ones that are imposed by some kind of regulation.
so it's really usually a standard you see imposed by governments, for example. [00:10:29] Francois: So when you look at your electric plug at home, there's some regulation there that says, this plug needs to have these properties. And that's a standard that gets imposed. It's a de jure standard. and then there are defacto standards which are really, uh, specifications that are out there and people agree to use it to implement it. [00:10:49] Francois: And by virtue of being used and implemented and used by everyone, they become standards. the, W3C really is in the, uh, second part. It's a defacto standard. IETF is the same thing. some of our standards are used in, uh, are referenced in regulations now, but, just a, a minority of them, most of them are defacto standards. [00:11:10] Francois: and that's important because that's in the end, it doesn't matter what the specific specification says, even though it's a bit confusing. What matters is that the, what the specifications says matches what implementations actually implement, and that these implementations are used, and are used interoperably across, you know, across browsers, for example, or across, uh, implementations, across users, across usages. [00:11:36] Francois: So, uh, standardization is a, is a lengthy process. The recommendation is the final stage in that, lengthy process. More and more we don't really reach recommendation anymore. If you look at, uh, at groups, uh, because we have another path, let's say we kind of, uh, we can stop at candidate recommendation, which is in theoretically a step before that. [00:12:02] Francois: But then you, you can stay there and, uh, stay there forever and publish new candidate recommendations. Um, uh, later on. What matters again is that, you know, you get this, virtuous feedback loop, uh, with implementers, and usage. [00:12:18] Jeremy: So if the candidate recommendation ends up being implemented by all the browsers, what's ends up being the distinction between a candidate and one that's a normal recommendation. [00:12:31] Francois: So, today it's mostly a process thing. Some groups actually decide to go to rec Some groups decide to stay at candidate rec and there's no formal difference between the, the two. we've made sure we've adopted, adjusted the process so that the important bits that, applied at the recommendation level now apply at the candidate rec level. Royalty free patent access [00:13:00] Francois: And by important things, I mean the patent commitments typically, uh, the patent policy fully applies at the candidate recommendation level so that you get your, protection, the royalty free patent protection that we, we were aiming at. [00:13:14] Francois: Some people do not care, you know, but most of the world still works with, uh, with patents, uh, for good, uh, or bad reasons. But, uh, uh, that's how things work. So we need to make, we're trying to make sure that we, we secure the right set of, um, of patent commitments from the right set of stakeholders. [00:13:35] Jeremy: Oh, so when someone implements a W3C recommendation or a candidate recommendation, the patent holders related to that recommendation, they basically agree to allow royalty-free use of that patent. [00:13:54] Francois: They do the one that were involved in the working group, of course, I mean, we can't say anything about the companies out there that may have patents and uh, are not part of this standardization process. So there's always, It's a remaining risk. 
but part of the goal when we create a working group is to make sure that people understand the scope. [00:14:17] Francois: Lawyers look into it, and the, the legal teams that exist at all the large companies basically gave a green light saying, yeah, we, we're pretty confident that we, we know where the patents are on this particular, this particular area. And we are fine also, uh, letting go of the, the patents we own ourselves. Implementations are built in parallel with standardization [00:14:39] Jeremy: And I think you had mentioned what ends up being the most important is that the browser creators implement these recommendations. So it sounds like maybe the distinction between candidate recommendation and recommendation almost doesn't matter as long as you get the end result you want. [00:15:03] Francois: So, I mean, people will have different opinions, uh, in the, in standardization circles. And I mentioned also W3C is working on other kinds of, uh, standards. So, uh, in some other areas, the nuance may be more important. But when, when you look at specifications that target web browsers, we've switched from a model where specs were developed first and then implemented to a model where specs and implementations are being worked on in parallel. [00:15:35] Francois: This actually relates to the evolution I was mentioning with the WHATWG taking over HTML and, uh, focusing on the interoperability issues, because the starting point was, yeah, we have an HTML 4.01 spec, uh, but it's not interoperable because it, it's not fully specified. There are a number of areas that are gray areas, you can implement them differently. [00:15:59] Francois: And so there are interoperability issues. Back to candidate rec, actually, the, the stage was created, if I remember correctly, uh, if I'm, if I'm not wrong, the stage was created following the, uh, IE problem. In the CSS working group, IE6, uh, shipped with some version of CSS that was as specified, you know, the spec was saying, you know, do that for the CSS box model. [00:16:27] Francois: And IE6 was following that. And then the group decided to change the box model and suddenly IE6 was no longer compliant. And that created a, a huge mess in the history of, uh, of the web in a way. And so the, the, the candidate recommendation stage was introduced following that to try to catch this kind of problem. [00:16:52] Francois: But nowadays, again, we, we switched to another model where it's more live. And so you, you'll find a number of specs that are not even at candidate rec level. They are at what we call a working draft, and they, they are being implemented, and if all goes well, the standardization process follows the implementation, and then you end up in a situation where you have your candidate rec when the, uh, spec ships. [00:17:18] Francois: A recent example would be WebGPU, for example. It, uh, it has shipped in, uh, in, in Chrome shortly before it transitioned to candidate rec. But the, the, the spec was already stable. And now it's shipping, uh, in, uh, in different browsers, uh, Safari, uh, and, uh, Firefox. And so that's, uh, and that's a good example of something that follows, uh, things, uh, along pretty well.
But then you have other specs such as, uh, in the media space, uh, request video frame back, uh, frame, call back, uh, requestVideoFrameCallback() is a short API that allows you to get, you know, a call back whenever the, the browser renders a video frame, essentially. [00:18:01] Francois: And that spec is implemented across browsers. But from a W3C specific, perspective, it does not even exist. It's not on the standardization track. It's still being incubated in what we call a community group, which is, you know, some something that, uh, usually exists before. we move to the, the standardization process. [00:18:21] Francois: So there, there are examples of things where some things fell through the cracks. All the standardization process, uh, is either too early or too late and things that are in spec are not exactly what what got implemented or implementations are too early in the process. We we're doing a better job, at, Not falling into a trap where someone ships, uh, you know, an implementation and then suddenly everything is frozen. You can no longer, change it because it's too late, it shipped. we've tried, different, path there. Um, mentioned CSS, the, there was this kind of vendor prefixed, uh, properties that used to be, uh, the way, uh, browsers were deploying new features without, you know, taking the final name. [00:19:06] Francois: We are trying also to move away from it because same thing. Then in the end, you end up with, uh, applications that have, uh, to duplicate all the properties, the CSS properties in the style sheets with, uh, the vendor prefixes and nuances in the, in what it does in, in the end. [00:19:23] Jeremy: Yeah, I, I think, is that in CSS where you'll see --mozilla or things like that? Why requestVideoFrameCallback doesn't have a formal specification [00:19:30] Jeremy: The example of the request video frame callback. I, I wonder if you have an opinion or, or, or know why that ended up the way it did, where the browsers all implemented it, even though it was still in the incubation stage. [00:19:49] Francois: On this one, I don't have a particular, uh, insights on whether there was a, you know, a strong reason to implement it,without doing the standardization work. [00:19:58] Francois: I mean, there are, it's not, uh, an IPR (Intellectual Property Rights) issue. It's not, uh, something that, uh, I don't think the, the, the spec triggers, uh, you know, problems that, uh, would be controversial or whatever. [00:20:10] Francois: Uh, so it's just a matter of, uh, there was no one's priority, and in the end, you end up with a, everyone's happy. it's, it has shipped. And so now doing the spec work is a bit,why spend time on something that's already shipped and so on, but the, it may still come back at some point with try to, you know, improve the situation. [00:20:26] Jeremy: Yeah, that's, that's interesting. It's a little counterintuitive because it sounds like you have the, the working group and it, it sounds like perhaps the companies or organizations involved, they maybe agreed on how it should work, and maybe that agreement almost made it so that they felt like they didn't need to move forward with the specification because they came to consensus even before going through that. [00:20:53] Francois: In this particular case, it's probably because it's really, again, it's a small, spec. It's just one function call, you know? I mean, they will definitely want a working group, uh, for larger specifications. 
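For readers who have not used it, requestVideoFrameCallback() is small enough to show in full. Here is a minimal sketch, assuming a <video id="player"> element on the page; the element id is just an illustration, not something from the conversation.

```javascript
// Minimal sketch of requestVideoFrameCallback(): run a callback each time the
// browser presents a new video frame. Assumes a <video id="player"> element.
const video = document.getElementById('player');

function onFrame(now, metadata) {
  // metadata carries presentation details such as mediaTime and presentedFrames.
  console.log(`frame presented at mediaTime=${metadata.mediaTime}`);
  // Re-register to keep receiving a callback for every rendered frame.
  video.requestVideoFrameCallback(onFrame);
}

if ('requestVideoFrameCallback' in HTMLVideoElement.prototype) {
  video.requestVideoFrameCallback(onFrame);
} else {
  console.warn('requestVideoFrameCallback is not supported in this browser');
}
```

As the conversation notes, the callback fires once per rendered frame and has to be re-registered each time, which is essentially the whole API surface.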
By the way, actually now I know, regarding requestVideoFrameCallback: it's because the, the, the final goal, now that it's, uh, shipped, is to merge it into, uh, HTML, uh, the HTML spec. [00:21:17] Francois: So there's a, there's an ongoing issue on the, the WHATWG side to integrate requestVideoFrameCallback. And it's taking some time, but see, it's, it's being, it, it got caught up and, uh, someone is doing the, the work to, to do it. I had forgotten about this one. Um. Tension from specification review (horizontal review) [00:21:33] Francois: So with larger specifications, organizations will want this kind of IPR regime. They will want, uh, commitments from, uh, others, on the scope, on the process, on everything. So they will want, uh, a larger, a, a more formal setting, because that's part of how you ensure that things, uh, will get done properly. [00:21:53] Francois: I didn't mention it, but, uh, something we're really, uh, pushy on at W3C: I mentioned we have principles, we have priorities, and we have, uh, several specific, uh, properties at W3C. And one of them is that we, we're very strong on horizontal reviews of our specs. We really want them to be reviewed from an accessibility perspective, from an internationalization perspective, from a privacy and security, uh, perspective, and, and, and a technical architecture perspective as well. [00:22:23] Francois: And these reviews are part of the formal process. So all specs need to undergo these reviews. And from time to time, that creates tension. Uh, from time to time, it just works, you know, goes without problem. A recurring issue is that privacy and security are hard. I mean, it's not an easy problem, something that can be, uh, solved, uh, easily. [00:22:48] Francois: Uh, so there's a, an ongoing tension and no easy way to resolve it, but there's an ongoing tension between specifying powerful APIs and preserving privacy, meaning not exposing too much information to applications. In the media space, you can think of the Media Capabilities API. So the media space is a complicated space, [00:23:13] complicated because of codecs. Codecs are typically not royalty free. And so browsers decide which codecs they're going to support, which audio and video codecs they, they're going to support, and doing that, that creates additional fragmentation, not in the sense that they're not interoperable, but in the sense that applications need to choose which codec they're going to stream to the end user. [00:23:39] Francois: And, uh, it's all the more complicated because some codecs are going to be hardware supported. So you will have a hardware decoder in your, in your, in your laptop or smartphone. And so that's going to be efficient to decode some, uh, some stream, whereas some codecs are not, are going to be software based, software supported. [00:23:56] Francois: Uh, and that may consume a lot of CPU and a lot of power and a lot of energy in the end. So you, you want to avoid that if you can, uh, select another thing. Even more complex than that, codecs have different profiles, uh, lower end profiles, higher end profiles, with different capabilities, different features, uh, depending on whether you're going to use this or that color space, for example, this or that resolution, whatever. [00:24:22] Francois: And so you want to surface that to web applications because otherwise they can't select, they can't choose, the right codec and the right stream that they're going to send to the, uh, client devices.
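As a concrete illustration of the kind of question the Media Capabilities API just mentioned lets an application ask, here is a rough sketch; the codec string, resolution, bitrate and framerate are illustrative values, not anything prescribed in the conversation.

```javascript
// Rough sketch of querying the Media Capabilities API for decoding support.
// The codec string, resolution, bitrate and framerate are illustrative values.
const configuration = {
  type: 'media-source',                              // stream will be fed through MSE
  video: {
    contentType: 'video/mp4; codecs="avc1.640028"',  // H.264 High profile (example)
    width: 1920,
    height: 1080,
    bitrate: 5_000_000,                              // 5 Mbps
    framerate: 30,
  },
};

navigator.mediaCapabilities.decodingInfo(configuration).then((result) => {
  // The browser reports whether the stream is supported, and whether decoding it
  // is expected to be smooth and power efficient (e.g. hardware accelerated).
  console.log(result.supported, result.smooth, result.powerEfficient);
});
```

The powerEfficient flag is exactly the hardware-versus-software signal discussed here, and also part of what makes the fingerprinting trade-off raised next so delicate.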
And so they're not going to provide an efficient user experience first, and even a sustainable one in terms of energy because they, they're going to waste energy if they don't send the right stream. [00:24:45] Francois: So you want to surface that to application. That's what the media, media capabilities, APIs, provides. Privacy concerns [00:24:51] Francois: Uh, but at the same time, if you expose that information, you end up with ways to fingerprint the end user's device. And that in turn is often used to track users across, across sites, which is exactly what we don't want to have, uh, for privacy reasons, for obvious privacy reasons. [00:25:09] Francois: So you have to balance that and find ways to, uh, you know, to expose. Capabilities without, without necessarily exposing them too much. Uh, [00:25:21] Jeremy: Can you give an example of how some of those discussions went? Like within the working group? Who are the companies or who are the organizations that are arguing for We shouldn't have this capability because of the privacy concerns, or [00:25:40] Francois: In a way all of the companies, have a vision of, uh, of privacy. I mean, the, you will have a hard time finding, you know, members saying, I don't care about privacy. I just want the feature. Uh, they all have privacy in mind, but they may have a different approach to privacy. [00:25:57] Francois: so if you take, uh, let's say, uh, apple and Google would be the, the, I guess the perfect examples in that, uh, in that space, uh, Google will have a, an approach that is more open-ended thing. The, the user agents has this, uh, should check what the, the, uh, given site is doing. And then if it goes beyond, you know, some kind of threshold, they're going to say, well, okay, well, we'll stop exposing data to that, to that, uh, to that site. [00:26:25] Francois: So that application. So monitor and react in a way. apple has a more, uh, you know, has a stricter view on, uh, on privacy, let's say. And they will say, no, we, the, the, the feature must not exist in the first place. Or, but that's, I mean, I guess, um, it's not always that extreme. And, uh, from time to time it's the opposite. [00:26:45] Francois: You will have, uh, you know, apple arguing in one way, uh, which is more open-ended than the, uh, than, uh, than Google, for example. And they are not the only ones. So in working groups, uh, you will find the, usually the implementers. Uh, so when we talk about APIs that get implemented in browsers, you want the core browsers to be involved. [00:27:04] Francois: Uh, otherwise it's usually not a good sign for, uh, the success of the, uh, of the technology. So in practice, that means Apple, uh, Microsoft, Mozilla which one did I forget? [00:27:15] Jeremy: Google. [00:27:16] Francois: I forgot Google. Of course. Thank you. that's, uh, that the, the core, uh, list of participants you want to have in any, uh, group that develops web standards targeted at web browsers. Who participates in working groups and how much power do they have? [00:27:28] Francois: And then on top of that, you want, organizations and people who are directly going to use it, either because they, well the content providers. So in media, for example, if you look at the media working group, you'll see, uh, so browser vendors, the ones I mentioned, uh, content providers such as the BBC or Netflix. [00:27:46] Francois: Chip set vendors would, uh, would be there as well. Intel, uh, Nvidia again, because you know, there's a hardware decoding in there and encoding. 
So media touches on, on, uh, on hardware, uh, device manufacturers in general. You may, uh, I think, uh, I think Sony is involved in the, in the media working group, for example. [00:28:04] Francois: And these companies are usually less active in the spec development. It depends on the groups, but they're usually less active because the ones developing the specs are usually the browser vendors, again, because as I mentioned, we develop the specs in parallel to browsers implementing it. So they have the [00:28:21] Francois: the feedback on how to formulate the, the algorithms. And so that's this collection of people who are going to discuss first within themselves. W3C pushes for consensus decisions. So we hardly take any votes in the working groups, but from time to time, that's not enough. [00:28:41] Francois: And there may be disagreements, but let's say there's agreement in the group, uh, when the spec matures. Horizontal review groups will look at the specs. So these are the groups I mentioned, the accessibility one, uh, privacy, internationalization. And these groups, usually the participants are, it depends. [00:29:00] Francois: It can be anything. It can be, uh, the same companies. It can be, but usually different people from the same companies. But it may be organizations that come from a very, a very different angle. And that's a good thing because that means the, you know, you enlarge the, the perspectives on your, uh, on the, on the technology. [00:29:19] Francois: And you, that's when you have a discussion between groups that takes place. And from time to time it goes well, from time to time, again, it can trigger issues that are hard to solve. And the W3C has a, an escalation process in case, uh, you know, in case things degenerate. Uh, starting with, uh, the notion of formal objection. [00:29:42] Jeremy: It makes sense that you would have the, the browser vendors and you have all the different companies that would use that browser. All the different horizontal groups like you mentioned, the internationalization, accessibility. I would imagine that you were talking about consensus and there are certain groups or certain companies that maybe have more say or more sway. [00:30:09] Jeremy: For example, if you're a browser manufacturer, you're Google. I'm kind of curious how that works out within the working group. [00:30:15] Francois: Yes, it's, I guess I would be lying if I were saying that, uh, you know, all companies are strictly equal in a, in a, in a group. They are from a process perspective. I mentioned, you know, the different membership fees, which were designed with a specific ethos so that no one could say, I'm, I'm putting in a lot of money, so you, you need to re you need to respect me, uh, and you need to follow what I, what I want to, what I want to do. [00:30:41] Francois: At the same time, if you take a company like, uh, like Google for example, they send hundreds of engineers to do standardization work. That's absolutely fantastic because that means work progresses and it's, uh, extremely smart people. So that's, uh, that's really a pleasure to work with, uh, with these, uh, people. [00:30:58] Francois: But you need to take a step back and say, well, the problem is, de facto, that gives them more power just by virtue of, uh, injecting more resources into it. So having always someone who can respond to an issue, having always someone, uh, editing a spec, de facto that gives them more, uh, um, more say on the, on the directions that things move forward.
[00:31:22] Francois: And on top of that, of course, they have the, uh, I guess not surprisingly, the, the browser that is, uh, used the most, currently, on the market. So there's a little bit of a, the, the, we, we try very hard to make sure that, uh, things are balanced. It's not a perfect world. [00:31:38] Francois: The, the role of the team, I mean, I didn't talk about the role of the team, but part of it is to make sure that, again, all perspectives are represented and that there's not such a big imbalance that, uh, that something is wrong and that we really need to look into it. So making sure that anyone, if they have something to say, making sure that they are heard by the rest of the group and not dismissed. [00:32:05] Francois: That usually goes well. There's no problem with that. And again, the escalation process I mentioned here doesn't make any, uh, it doesn't make any difference between, uh, a small player, a large player, a big player, and we have small companies raising formal objections against some of our specs, that happens, uh, and large ones, but, uh, that happens too. There's no magical solution, I guess you can tell it by the way I, uh, I don't know how to formulate the, the process more. It's a human process, and that's very important that it remains a human process as well. [00:32:41] Jeremy: I suppose the role of, of staff and someone in your position, for example, is to try and ensure that these different groups are, are heard and it isn't just one group taking control of it. [00:32:55] Francois: That's part of the role, again, is to make sure that, uh, the, the process is followed. So the, I, I mean, I don't want to give the impression that the process controls everything in the groups. I mean, the, the, the groups are bound by the process, but the process is there to catch problems when they arise. [00:33:14] Francois: Most of the time there are no problems. It's just, you know, again, participants talking to each other, talking with the rest of the community. Most of the work happens in public nowadays, in any case. So the groups work in public essentially through asynchronous, uh, discussions on GitHub repositories. [00:33:32] Francois: There are contributions from, you know, non group participants and everything goes well. And so the process doesn't kick in. You just never say, eh, no, you didn't respect the process there, you, you closed the issue, you shouldn't have. It's pretty rare that you have to do that. Uh, things just proceed naturally because they all, everyone understands where they are, what they're doing, and why they're doing it. [00:33:55] Francois: We still have a role, I guess, in the, in the sense that from time to time that doesn't work and you have to intervene and you have to make sure that the, uh, exception is caught and, uh, and processed, uh, in the right way. Discussions are public on GitHub [00:34:10] Jeremy: And you said this process is asynchronous in public, so it sounds like someone, I, I mean, is this in GitHub issues or how, how would somebody go and, and see what the results of [00:34:22] Francois: Yes, there, there are basically a gazillion of, uh, GitHub repositories under the, uh, W3C, uh, organization on GitHub. Most groups are using GitHub. I mean, there's no, it's not mandatory. We don't manage any, uh, any tooling. But the fact is that most do; we, we've been transitioning to GitHub, uh, for a number of years already.
[00:34:45] Francois: Uh, so that's where the work, most of the work, happens, through issues, through pull requests. Uh, that's where people can go and raise issues against specifications. Uh, we usually, uh, also, from time to time, get feedback from developers encountering, uh, a bug in a particular implementation, which we try to gently redirect to, uh, the actual bug trackers, because we're not responsible for the implementations of the specs unless the spec is not clear. [00:35:14] Francois: We are responsible for the spec itself, making sure that the spec is clear and that implementers, well, understand how they should implement something. Why the W3C doesn't specify a video or audio codec [00:35:25] Jeremy: I can see how people would make that mistake because they, they see it's the feature, but that's not the responsibility of the, the W3C to implement any of the specifications. Something you had mentioned, there's the issue of intellectual property rights and how when you have a recommendation, you require the different organizations involved to make their patents available to use freely. [00:35:54] Jeremy: I wonder why there was never any kind of recommendation for audio or video codecs in browsers since you have certain ones that are considered royalty free. But, I believe that's never been specified. [00:36:11] Francois: At W3C you mean? Yes. We, we've tried, I mean, it's not for lack of trying. Um, uh, we've had a number of discussions with, uh, various stakeholders saying, hey, we, we really need an audio or video codec for, for the web. The, uh, PNG is an example of a, um, an image format which got standardized at W3C, and it got standardized at W3C for similar reasons. There had to be a royalty free image format for the web, and there was none at the time. Of course, nowadays, uh, JPEG, uh, and GIF or gif, whatever you call it, are, well, you know, no problem with them. But, uh, um, at the time PNG was really, uh, meant to address this issue, and it worked for PNG. For audio and video, [00:37:01] Francois: we haven't managed to secure commitments from stakeholders. So, willingness to do it, it's not, it's not lack of willingness. We would've loved to, uh, get, uh, a royalty free, uh, audio codec, a royalty free video codec. Again, audio and video codecs are extremely complicated because of this, [00:37:20] Francois: not only because of patents, but also because of the entire business ecosystem that exists around them, for good reasons. You, in order for a, a codec to be supported, deployed, effective, it really needs, uh, it needs to mature a lot. It needs to be, uh, added, at a hardware level, to a number of devices, capturing devices, but also, um, uh, uh, of course players. [00:37:46] Francois: And that takes a hell of a lot of time, and that's why you also enter a number of business considerations, with business contracts between entities. So I'm personally, on a personal level, I'm, I'm pleased to see, for example, the Alliance for Open Media working on, uh, uh, AV1, uh, which is, at least they, uh, they wanted it to be royalty free, and they've been adopting actually the W3C patent policy to do this work. [00:38:11] Francois: So, uh, we're pleased to see that, you know, they've been adopting the same process and same thing. AV1 is not yet at the same support stage as other codecs, in the world. Yeah, I mean in devices.
There's an open question as what, what are we going to do, uh, in the future uh, with that, it's, it's, it's doubtful that, uh, the W3C will be able to work on a, on a royalty free audio, codec or royalty free video codec itself because, uh, probably it's too late now in any case. [00:38:43] Francois: but It's one of these angles in the, in the web platform where we wish we had the, uh, the technology available for, for free. And, uh, it's not exactly, uh, how things work in practice.I mean, the way codecs are developed remains really patent oriented. [00:38:57] Francois: and you will find more codecs being developed. and that's where geopolitics can even enter the, the, uh, the play. Because, uh, if you go to China, you will find new codecs emerging, uh, that get developed within China also, because, the other codecs come mostly from the US so it's a bit of a problem and so on. [00:39:17] Francois: I'm not going to enter details and uh, I would probably say stupid things in any case. Uh, but that, uh, so we continue to see, uh, emerging codecs that are not royalty free, and it's probably going to remain the case for a number of years. unfortunately, unfortunately, from a W3C perspective and my perspective of course. [00:39:38] Jeremy: There's always these new, formats coming out and the, rate at which they get supported in the browser, even on a per browser basis is, is very, there can be a long time between, for example, WebP being released and a browser supporting it. So, seems like maybe we're gonna be in that situation for a while where the codecs will come out and maybe the browsers will support them. Maybe they won't, but the, the timeline is very uncertain. Digital Rights Management (DRM) and Media Source Extensions [00:40:08] Jeremy: Something you had, mentioned, maybe this was in your, email to me earlier, but you had mentioned that some of these specifications, there's, there's business considerations like with, digital rights management and, media source extensions. I wonder if you could talk a little bit about maybe what media source extensions is and encrypted media extensions and, and what the, the considerations or challenges are there. [00:40:33] Francois: I'm going to go very, very quickly over the history of a, video and audio support on the web. Initially it was supported through plugins. you are maybe too young to, remember that. But, uh, we had extensions, added to, uh, a realplayer. [00:40:46] Francois: This kind of things flash as well, uh, supporting, uh, uh, videos, in web pages, but it was not provided by the web browsers themselves. Uh, then HTML5 changed the, the situation. Adding these new tags, audio and video, but that these tags on this, by default, support, uh, you give them a resources, a resource, like an image as it's an audio or a video file. [00:41:10] Francois: They're going to download this, uh, uh, video file or audio file, and they're going to play it. That works well. But as soon as you want to do any kind of real streaming, files are too large and to stream, to, to get, you know, to get just a single fetch on, uh, on them. So you really want to stream them chunk by chunk, and you want to adapt the resolution at which you send the stream based on real time conditions of the user's network. [00:41:37] Francois: If there's plenty of bandwidth you want to send the user, the highest possible resolution. If there's a, some kind of hiccup temporary in the, in the network, you really want to lower the resolution, and that's called adaptive streaming. 
And to get adaptive streaming on the web, well, there are a number of protocols that exist. [00:41:54] Francois: Same thing. Some many of them are proprietary and actually they remain proprietary, uh, to some extent. and, uh, some of them are over http and they are the ones that are primarily used in, uh, in web contexts. So DASH comes to mind, DASH for Dynamic Adaptive streaming over http. HLS is another one. Uh, initially developed by Apple, I believe, and it's, uh, HTTP live streaming probably. Exactly. And, so there are different protocols that you can, uh, you can use. Uh, so the goal was not to standardize these protocols because again, there were some proprietary aspects to them. And, uh, same thing as with codecs. [00:42:32] Francois: There was no, well, at least people wanted to have the, uh, flexibility to tweak parameters, adaptive streaming parameters the way they wanted for different scenarios. You may want to tweak the parameters differently. So they, they needed to be more flexibility on top of protocols not being truly available for use directly and for implementation directly in browsers. [00:42:53] Francois: It was also about providing applications with, uh, the flexibility they would need to tweak parameters. So media source extensions comes into play for exactly that. Media source extensions is really about you. The application fetches chunks of its audio and video stream the way it wants, and with the parameters it wants, and it adjusts whatever it wants. [00:43:15] Francois: And then it feeds that into the, uh, video or audio tag. and the browser takes care of the rest. So it's really about, doing, you know, the adaptive streaming. let applications do it, and then, uh, let the user agent, uh, the browser takes, take care of the rendering itself. That's media source extensions. [00:43:32] Francois: Initially it was pushed by, uh, Netflix. They were not the only ones of course, but there, there was a, a ma, a major, uh, proponent of this, uh, technical solution, because they wanted, uh, they, uh, they were, expanding all over the world, uh, with, uh, plenty of native, applications on all sorts of, uh, of, uh, devices. [00:43:52] Francois: And they wanted to have a way to stream content on the web as well. both for both, I guess, to expand to, um, a new, um, ecosystem, the web, uh, providing new opportunities, let's say. But at the same time also to have a fallback, in case they, because for native support on different platforms, they sometimes had to enter business agreements with, uh, you know, the hardware manufacturers, the whatever, the, uh, service provider or whatever. [00:44:19] Francois: and so that was a way to have a full back. That kind of work is more open, in case, uh, things take some time and so on. So, and they probably had other reasons. I mean, I'm not, I can't speak on behalf of Netflix, uh, on others, but they were not the only ones of course, uh, supporting this, uh, me, uh, media source extension, uh, uh, specification. [00:44:42] Francois: and that went kind of, well, I think it was creating 2011. I mean, the, the work started in 2011 and the recommendation was published in 2016, which is not too bad from a standardization perspective. It means only five years, you know, it's a very short amount of time. 
Encrypted Media Extensions [00:44:59] Francois: At the same time, and in parallel and as a complement to the Media Source Extensions specification, uh, there was work on the Encrypted Media Extensions, and here it was pushed by the same proponents in a way, because they wanted to get premium content on the web. [00:45:14] Francois: And by premium content, you think of movies and, uh, these kinds of beasts. And the problem with the, I guess the basic issue with, uh, digital assets such as movies, is that they cost hundreds of millions to produce. I mean, some cost less of course. And yet it's super easy to copy them if you have access to the digital, uh, file. [00:45:35] Francois: You just copy and, uh, and that's it. Piracy, uh, is super easy, uh, to achieve. It's illegal of course, but it's super easy to do. And so that's where the different legislations come into play with digital rights management. The fact is most countries allow systems that can encrypt content, uh, through what we call DRM systems. [00:45:59] Francois: So content providers, uh, the, the ones that have movies, so the studios here, more, more and more, and Netflix is one, uh, one of the studios nowadays. Um, but not only, not only them, all major studios would, uh, push for, wanted to have something that would allow them to stream encrypted content, encrypted audio and video, uh, mostly video, to, uh, to web applications so that, uh, you [00:46:25] Francois: can provide the movies. Otherwise, they, they are just basically saying, sorry, but, uh, this premium content will never make it to the web because there's no way we're gonna, uh, send it in the clear to, uh, to the end user. So Encrypted Media Extensions is, uh, is an API that allows you to interface with, uh, what's called the content decryption module, CDM, uh, which itself interacts with, uh, the DRM systems that, uh, the browser may, may or may not support. [00:46:52] Francois: And so it provides a way for an application to receive encrypted content, pass it over to get the, the right keys, the right license keys from whatever system, actually pass that logic over to the user agent, which passes, passes it over to, uh, the CDM system, which is a kind of black box, uh, that does its magic to get the right, uh, decryption key and then to decrypt the content so it can be rendered. [00:47:21] Francois: The Encrypted Media Extensions triggered a, a hell of a lot of, uh, controversy, because it's DRM, and DRM systems, uh, many people, uh, think, should be banned, uh, especially on the web, because the, the premise of the web is that the, the user trusts a user agent. The, the web browser is called the user agent in all our, all our specifications. [00:47:44] Francois: And that's, uh, that's the trust relationship. And then they interact with a, a content provider. And so whatever they do with the content is their, I guess, actually their problem. And DRM introduces a third party, which is, uh, there's, uh, the, the end user no longer has control over the content. [00:48:03] Francois: It has to rely on something else that restricts what it can achieve with the content. So it's, uh, it's not only a trust relationship with its, uh, user agent, it's also with, uh, with something else, which is the content provider, uh, in the end, the one that has the, uh, the license, or provides the license.
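Before the conversation turns to how that controversy played out, here is a very rough sketch of the EME API surface an application sees. The key system name and license server URL are placeholders chosen for illustration, and a real player handles many more events, key statuses, and error paths.

```javascript
// Very rough sketch of the Encrypted Media Extensions flow. The key system name
// ('com.widevine.alpha') and the license server URL are illustrative placeholders.
const video = document.querySelector('video');

async function setUpKeys() {
  const access = await navigator.requestMediaKeySystemAccess('com.widevine.alpha', [{
    initDataTypes: ['cenc'],
    videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.640028"' }],
  }]);
  const mediaKeys = await access.createMediaKeys();
  await video.setMediaKeys(mediaKeys);

  // When the stream signals that it is encrypted, create a session and exchange
  // license messages with the application's license server.
  video.addEventListener('encrypted', async (event) => {
    const session = mediaKeys.createSession();
    session.addEventListener('message', async (messageEvent) => {
      const response = await fetch('https://license.example.com/get-license', {
        method: 'POST',
        body: messageEvent.message,
      });
      await session.update(await response.arrayBuffer());
    });
    await session.generateRequest(event.initDataType, event.initData);
  });
}

setUpKeys();
```

Note that the application only shuttles opaque messages between the CDM and the license server; the actual decryption happens inside the CDM, which is the part of the design the discussion below keeps coming back to.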
[00:48:22] Francois: And so that's, that triggered, uh, a hell of a lot of, uh, of discussions in the W3C that degenerated, uh, uh, into, uh, formal objections being raised against the specification. And that escalated to, to the, I mean, to all levels. It's, it's the, the story in, uh, W3C that, um, really, uh, divided the membership into opposed camps, in a way. Well, it was not really 50-50, in the sense that it was not just huge fights, but, that's, that triggered a hell of a lot of discussions and a lot of, a lot of, uh, of formal objections at the time. [00:49:00] Francois: Uh, we were still, from a governance perspective, interestingly, um, the W3C used to be a dictatorship. It's not how you should formulate it, of course, and I hope it's not going to be public, this podcast. Uh, but the, uh, it was a benevolent dictatorship. You could see it this way in the sense that, uh, the whole process escalated to one single person, who was Tim Berners-Lee, who had the final say, on when, when none of the other layers had managed to catch and to resolve a conflict. [00:49:32] Francois: Uh, that has hardly ever happened in, uh, the history of the W3C, but that happened, uh, for EME, for Encrypted Media Extensions. It had to go to the, uh, director level, who, uh, after due consideration, uh, decided to allow EME to proceed. And that's why we have a, an EME, uh, uh, standard right now, but still, it remains something on the side. [00:49:56] Francois: EME, we're still, uh, it's still in the scope of the media working group, for example. But the scope, if you look at the charter of the working group, we try to scope the, the updates we can make to the specification, uh, to make sure that we don't reopen, reopen, uh, a can of worms, because, well, it's really a, a topic that triggers friction, for good and bad reasons again. [00:50:20] Jeremy: And when you talk about the media source extensions, that is the ability to write custom code to stream video in whatever way you want. You mentioned MPEG-DASH and HTTP Live Streaming. So in that case, would that be the developer gets to write that code in JavaScript that's executed by the browser? [00:50:43] Francois: Yep, that's, uh, that would be it. And then typically, I guess the approach nowadays is more and more to develop low level APIs in W3C, or the web in, in general, I guess, and to let, uh, libraries emerge that are going to make the lives of, a, a developer, uh, easier. So for MPEG-DASH, we have dash.js, which does a fantastic job at, uh, at implementing the complexity of, uh, of adaptive streaming. [00:51:13] Francois: And you just, you just hook it into your, your workflow. And that's, uh, and that's it. Encrypted Media Extensions are closed source [00:51:20] Jeremy: And with the encrypted media extensions I'm trying to picture how those work and how they work differently. [00:51:28] Francois: Well, it's because the, the key architecture is that the, the stream that you, the stream that you may assemble with Media Source Extensions, for example, 'cause typically they, they're used in collaboration. When you hook the, hook it into the video tag, you also call EME, and actually the stream goes to EME. [00:51:49] Francois: And when it goes to EME, actually the user agent hands over the encrypted stream. It's still encrypted at this time. Uh, the encrypted, uh, stream goes to the CDM, the content decryption module, and that's a black box, well, it has some, uh, black box logic.
So it's not, uh, even if you look at the Chromium source code, for example, you won't see the implementation of the CDM, because it's a, it's a black box, so it's not part of the browser source, it's, it's sandboxed, its execution is sandboxed. [00:52:17] Francois: That's, uh, the, the EME is kind of unique in, in this way, where the, the CDM is not allowed to make network requests, for example, again, for privacy reasons. So anyway, the, the CDM box has the logic to decrypt the content and it hands it over, and then it depends, it depends on the level of protection you [00:52:37] Francois: need, or that the system supports. It can be a software based protection, in which case actually a highly motivated, uh, uh, uh, attacker could, uh, actually get access to the decoded stream, or it can be more hardware protected, in which case actually the, it goes to the, uh, to your final screen. [00:52:58] Francois: But it goes, it, it goes through the hardware in a, in a mode that the OS supports, in a mode that even the user agent doesn't have access to. So it doesn't, it can't even see the pixels that, uh, get rendered on the screen. There are, uh, several other, uh, APIs that you could use, for example, to take a screenshot of your, of your application and so on. [00:53:16] Francois: And you cannot apply them to, uh, such content because they're just gonna return a black box, again, because the user agent itself does not see the, uh, the pixels, which is exactly what you want with encrypted content. [00:53:29] Jeremy: And the, the content decryption module, it's, if I understand correctly, it's something that's shipped with the browsers, but you were saying if you were to look at the public source code of Chromium or of Firefox, you would not see that implementation. Content Decryption Module (Widevine, PlayReady) [00:53:47] Francois: True. I mean, the, the, um, the typical examples are, uh, uh, Widevine, so Widevine. So interestingly, uh, speaking in theory, these, uh, systems could have been provided by anyone. In practice, they've been provided by the browser vendors themselves. So Google has Widevine. Uh, Microsoft has something called PlayReady. Apple, uh, the name, uh, escapes me, uh, sorry, I don't have it on the top of my mind. So they, that's basically what they support. So they, they also own that code, but in a way they don't have to. And Firefox actually, uh, they, uh, I don't, don't remember which one they support among these three, but, uh, they, they don't own that code typically. [00:54:29] Francois: They provide a wrapper around, around it. Yeah, that's, that's exactly the, the crux of the, uh, issue that people have with, uh, with DRMs, right? It's, uh, the fact that, uh, suddenly you have a bit of code running there that is, uh, that, okay, you can sandbox, but, uh, you cannot inspect and you don't have, uh, access to its, uh, source code. [00:54:52] Jeremy: That's interesting. So the, almost the entire browser is open source, but if you wanna watch a Netflix movie for example, then you, you need to run this, this CDM, in addition to just the browser code. I, I think, you know, we've kind of covered a lot. Documenting what's available in browsers for developers [00:55:13] Jeremy: I wonder if there's any other examples or anything else you thought would be important to mention in, in the context of the W3C. [00:55:23] Francois: There, there's one thing which, uh, relates to, uh, activities I'm doing also at W3C. Um.
Here, we've been talking a lot about standards and implementations in browsers, but there's also adoption of these technology standards by developers in general, and making sure that developers are aware of what exists and understand what exists. One of the key pain points that people [00:55:54] Francois: keep raising about the web platform is that the web platform is unique in the sense that there are different implementations. [00:56:03] Francois: In other contexts, other runtimes, there's just one implementation, provided by the company that owns the system. The web platform is implemented by different organizations, and so you end up with a system where what's in the specs is not necessarily supported. [00:56:22] Francois: And of course, MDN tries to document what's supported thoroughly, but for MDN to work, there's a hell of a lot of need for data that tracks browser support. This data typically lives in a project called Browser Compat Data, BCD, owned by MDN as well, but the Open Web Docs collective is the one maintaining that data under the hood. [00:56:50] Francois: Anyway, all of that to say that we need to track things beyond work on technical specifications, because if you look at it from the W3C perspective, life ends when the spec reaches Candidate Recommendation or Recommendation; you could just say, oh, done with my work. But that's not how things work. [00:57:10] Francois: You need the feedback loop, to make sure that developers get the information and can provide the feedback that standardization and browser vendors can benefit from. We've been working on a project called Web Features, with browser vendors mainly, a few folks from MDN and Can I Use, and different people, to catalog the web in terms of features that speak to developers. [00:57:40] Francois: It's a set of feature IDs, each with a feature name and a feature description, framed the way developers would understand them, instead of going too fine-grained, as in "there's this one function call that does this", because that's the kind of support data you get from BCD and MDN initially. It's a coarser-grained structure that says these are the features that make sense. [00:58:09] Francois: They talk to developers; that's what developers talk about, and that's the information we need. We need data on these particular features because that's how developers are going to approach the specs. And from that we've derived the notion of Baseline badges, which are now shown on MDN and Can I Use and integrated into IDE tools such as Visual Studio; some libraries and linters have started to integrate that data as well. [00:58:41] Francois: The way it works is that we've been mapping these coarser-grained features to BCD's finer-grained support data, and from there we've been deriving a kind of badge that says, for example, this feature has limited availability, because it's only implemented in one or two browsers. [00:59:07] Francois: Or it's newly available, because it was implemented
across the main browsers that people use, but it's recent. Or it's widely available, and there's been lots of discussion in the group to come up with a definition, which essentially ends up being 30 months after a feature became newly available. [00:59:34] Francois: That's the time it takes for the different versions of the browsers to propagate, because it's not the case that as soon as there's a new version of a browser, people immediately get it. It takes a while to propagate across the user base. [00:59:56] Francois: So the goal is to have a signal that developers can rely on, saying, okay, it's widely available, so I can really use that feature. And of course, if that doesn't work, then we need to know about it. So we are also working with people doing developer surveys, such as State of CSS, State of HTML, and State of JavaScript. [01:00:15] Francois: Those are, I guess, the main ones. We are also running short MDN surveys with the MDN people to gather feedback on these same features, to feed the loop and complete the loop. This data is also used internally by browser vendors to inform their prioritization process, typically as part of the Interop project that they're also running on the side. [01:00:43] Francois: So I've mentioned a number of different projects coming along together, but the goal is to create links across all of these ongoing projects, with a view to integrating developers more, gathering feedback as early as possible, and informing the decisions [01:01:04] Francois: we take at the standardization level that can affect the lives of developers, making sure they're affected in a positive way. [01:01:14] Jeremy: Just trying to understand, because you had mentioned that there are the web features and the Baseline, and I was trying to picture where developers would actually see these things. It sounds like, from what you're saying, the W3C comes up with what stage some of these features are at, and then developers end up seeing it on MDN or some other site. [01:01:37] Francois: So, I'm working on it, but that doesn't mean it's a W3C thing. Again, we have different types of groups. It's a community group, the WebDX Community Group at W3C, which means it's a community-owned thing. That's why I mention working with representatives from browser vendors and people from MDN and Open Web Docs. [01:02:05] Francois: So that's the first point. The second point is that, indeed, this data is now being integrated. You'll see it at the top of most MDN pages: if you look at any kind of feature, you'll see a few logos, a Baseline banner. And on Can I Use it's the same thing. [01:02:24] Francois: You're going to get a Baseline banner. It's meant to capture whether the feature is widely available or whether you may need to pay attention to it.
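The catalog Francois describes is published as machine-readable data. As a rough sketch, assuming the shape the web-features npm package documented at the time of writing (an object of feature IDs whose status.baseline field is "high", "low", or false; the exact fields may since have changed, and the types are loosened here for brevity), a tool could look up a Baseline status like this:

```typescript
// Rough sketch: reading the WebDX Community Group's feature catalog from the
// "web-features" npm package to check a Baseline status. Field names follow
// the package's published shape at the time of writing and should be treated
// as assumptions; check the current docs if they have changed.
import { features } from "web-features";

// Loosen the types so the sketch stays short; real code would use the package's own types.
const catalog = features as Record<string, any>;

function describeBaseline(featureId: string): string {
  const feature = catalog[featureId];
  if (!feature) return `${featureId}: not in the catalog`;

  // status.baseline: "high" = widely available, "low" = newly available,
  // false = limited availability (only in one or two browsers).
  switch (feature.status.baseline) {
    case "high":
      return `${feature.name}: widely available`;
    case "low":
      return `${feature.name}: newly available since ${feature.status.baseline_low_date}`;
    default:
      return `${feature.name}: limited availability`;
  }
}

// "grid" is a plausible feature ID (CSS Grid); IDs are coarser-grained than
// BCD's per-API entries, which is exactly the point being made above.
console.log(describeBaseline("grid"));
```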
Of course, it's a simplification, and the messaging to developers is meant to capture the fact that they may want to look into more than just this Baseline status, because [01:02:54] Francois: if you take a look at web-platform-tests, for example, and you were to base your assessment of whether a feature is supported on test results, you'd end up saying the web platform has no supported technology, because there is practically no API where browsers pass 100% of the test suite. [01:03:18] Francois: There may be a few of them, I don't know. But there's a simplification in the process when a feature is set to be Baseline. There may be more things to look at nevertheless, but it's meant to provide a signal that developers can still rely on in their day-to-day lives, [01:03:36] Francois: if they use the feature, let's say, as reasonably intended and without pushing the logic too far. [01:03:48] Jeremy: I see. Yeah. I'm looking at one of the pages on MDN right now, and I can see at the top there's the Baseline banner. It mentions that this feature works across many browsers and devices, and then it says how long it's been available. So that's a way people can tell at a glance which APIs they can use. [01:04:08] Francois: It also started out of a desire to summarize the browser compatibility table that you see at the bottom of the page on MDN. Developers were saying, well, it's fine, but it goes into too much detail, so in the end we don't know: can we use that feature or can we not use that feature? [01:04:28] Francois: So it's meant as an informed summary of that; it relies on the same data again. More importantly, beyond MDN, we're working with tools providers to integrate that as well. I mentioned Visual Studio is one of them. Recently they shipped a new version where, when you use a feature, you get a contextual [01:04:53] Francois: menu that tells you, yeah, this CSS property is fine, you can use it, it's widely available, or be aware, this one is limited availability, only available in Firefox or Chrome or Safari/WebKit, whatever. [01:05:08] Jeremy: I think that's a good place to wrap it up. If people want to learn more about the work you're doing, or learn more about this whole recommendations process, where should they head? [01:05:23] Francois: Generally speaking, we're extremely open to people contributing to the W3C. Where should they go? It depends on what they want. Usually, how things start for someone getting involved in the W3C is that they have some
“Press 1 is dead. If you haven't integrated AI into your core telephony stack, you're on the path to obsolescence.” — Andy Abramson, Founder & CEO, Comunicano In this conversation with Doug Green, Publisher of Technology Reseller News, Andy Abramson—32 years into leading Comunicano—explains why legacy, menu-tree IVRs are being displaced by SIP-native AI and real-time voice agents. The result: faster resolution, lower latency, and human-like interactions that finally match the urgency of today's callers. What's changing SIP ↔ AI interconnect: Direct SIP trunking into AI (e.g., OpenAI) turns agents into callable endpoints—simplifying deployment much like early CPaaS did. Network path matters: Zero-hop/HD direct connectivity (e.g., CarrierX/Found/freeconferencecall.com) and Cloudflare's global edge for WebRTC cut jitter, packet loss, and delay—feeding cleaner “robot food” to AI. Voice that sounds human: Advances in neural voices (e.g., ElevenLabs) raise comprehension and comfort, improving CX outcomes. Tool orchestration made simple: MCP/agent frameworks (e.g., Anthropic-style tool calling) connect CRM/ERP and data sources without brittle middleware. Who wins, who loses Winners: UCaaS/CPaaS and AI-forward CCaaS that treat AI agents as first-class endpoints; telcos bundling AI with SIP routing and data plans; high-volume enterprises offloading Tier-1 to real-time AI. At risk: IVR-only vendors, low-end CCaaS, and speech-to-text middleware that don't adopt AI—“adopt or die.” Why it matters for MSPs & channel partners The migration path is here now: swap tree-based IVR for NLP-driven, real-time voice agents, integrate with existing stacks via SIP, and monetize AI minutes + memories. Business impact: shorter handle times, higher first-contact resolution, lower OpEx, and fewer abandoned calls—especially for customers calling with urgent needs. This episode includes a slide presentation outlining the end of menu trees, the SIP-AI architecture, and four go-to-market “wins” for carriers, UC/CPaaS, CCaaS, and large enterprises. Learn more about Andy's work at comunicano.com (one “m”) and his commentary at AndyAbramson.com and on LinkedIn.
Everything wrong with our homelabs, and how we're finally fixing them. Plus: two self-hosted apps you didn't know you needed. Sponsored By: Managed Nebula: Meet Managed Nebula from Defined Networking. A decentralized VPN built on the open-source Nebula platform that we love. Unraid: A powerful, easy operating system for servers and storage. Maximize your hardware with unmatched flexibility. 1Password Extended Access Management: 1Password Extended Access Management is a device trust solution for companies with Okta, and they ensure that if a device isn't trusted and secure, it can't log into your cloud apps. Support LINUX Unplugged.
OpenAI has a new API in beta called the Realtime API, which enables speech conversations with LLMs. Real-time text and audio processing means users can have conversations with voice agents and voice-enabled apps, and OpenAI makes it simple to connect via WebRTC or WebSockets (a rough WebSocket connection sketch follows these notes).
Several months ago, The Browser Company abruptly announced they were stopping work on the Arc browser in favor of building a completely new, AI-first browser called Dia. Now the company's CEO has released a letter detailing why these decisions were made and what the future holds for The Browser Company. Details on Dia are vague, but the team is optimistic they can build “a true successor to the browser”.
AI-enabled coding IDE Cursor hit v1.0 this week with a bunch of notable features. Highlights in this release include: BugBot, which automatically reviews PRs and identifies bugs and issues, Background Agent (Cursor's remote coding agent) for all users, Memories to help Cursor remember facts and conversations within projects for future use, and one-click MCP install and OAuth support.
Timestamps:
1:10 - OpenAI's realtime agent API
10:03 - The Browser Company Kills Arc
16:53 - Cursor 1.0
28:04 - Codex gets internet access
31:09 - React Router's open governance
34:18 - Fire Starter
39:55 - What's making us happy
News:
Paige - Cursor 1.0
Jack - OpenAI's Realtime Agent for AI chat
TJ - The Browser Company stops developing Arc
Lightning News:
React Router Open Governance
Codex gets access to the internet
Fire Starter:
console.log formatting strings
What Makes Us Happy this Week:
Paige - Clarkson's Farm season 4
Jack - Shokz OpenRun Pro 2 headphones
TJ - Rotary cheese grater
Thanks as always to our sponsor, the Blue Collar Coder channel on YouTube. You can join us in our Discord channel, explore our website and reach us via email, or talk to us on X, Bluesky, or YouTube.
Front-end Fire website
Blue Collar Coder on YouTube
Blue Collar Coder on Discord
Reach out via email
Tweet at us on X @front_end_fire
Follow us on Bluesky @front-end-fire.com
Subscribe to our YouTube channel @Front-EndFirePodcast
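On the Realtime API item above, here is a minimal sketch of the WebSocket side of that connection from Node. The endpoint URL, the OpenAI-Beta header, the model name, and the event types follow OpenAI's beta documentation at the time and are assumptions that may have changed since; the browser-oriented WebRTC path is not shown.

```typescript
// Minimal sketch of talking to the Realtime API over WebSockets from Node.
// URL, headers, model name, and event types below follow OpenAI's beta docs
// at the time of writing and should be treated as assumptions; the WebRTC
// path used in browsers (with ephemeral keys) is not covered here.
import WebSocket from "ws";

const url = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview";
const ws = new WebSocket(url, {
  headers: {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    "OpenAI-Beta": "realtime=v1",
  },
});

ws.on("open", () => {
  // Ask the model to produce a spoken and text response to a simple prompt.
  ws.send(JSON.stringify({
    type: "response.create",
    response: {
      modalities: ["text", "audio"],
      instructions: "Say hello to the listeners of this recap.",
    },
  }));
});

ws.on("message", (data) => {
  // The server streams JSON events back (text deltas, audio chunks, etc.).
  const event = JSON.parse(data.toString());
  console.log("event:", event.type);
});
```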
LiveKit is a platform that provides developers with tools to build real-time audio and video applications at scale. It offers an open-source WebRTC stack for creation of live, interactive experiences like video conferencing, streaming, and virtual events. LiveKit has gained significant attention for its partnership with OpenAI for the Advanced Voice feature. Russ d'Sa is The post LiveKit and OpenAI with Russ d'Sa appeared first on Software Engineering Daily.
Hi friends, today I'm kicking off a series talking about the good/bad/ugly of hosting security services. Today I talk specifically about transfer.zip. By self-hosting your own instance of transfer.zip, you can send and receive HUGE files that are end-to-end encrypted using WebRTC. Sweet! I also supplemented today's episode with a short live video over at 7MinSec.club.
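transfer.zip's own code isn't shown here, but the underlying technique is a WebRTC data channel carrying file chunks. Below is a minimal, self-contained sketch under that assumption: both peers live in the same page, so signaling is faked locally; a real deployment (including transfer.zip, whose implementation this is not) uses a signaling channel and adds backpressure handling, chunk reassembly, and its own encryption on top of WebRTC's built-in DTLS.

```typescript
// Generic sketch of the technique: streaming a file in chunks over a WebRTC
// data channel. Both peers are local so the SDP/ICE exchange happens in-page;
// a real app replaces that with a signaling server.
async function sendFileOverDataChannel(file: File): Promise<void> {
  const sender = new RTCPeerConnection();
  const receiver = new RTCPeerConnection();

  // Trickle ICE candidates straight across, since both ends are in this page.
  sender.onicecandidate = (e) => { if (e.candidate) receiver.addIceCandidate(e.candidate); };
  receiver.onicecandidate = (e) => { if (e.candidate) sender.addIceCandidate(e.candidate); };

  receiver.ondatachannel = ({ channel }) => {
    let received = 0;
    channel.binaryType = "arraybuffer";
    channel.onmessage = (msg) => {
      received += (msg.data as ArrayBuffer).byteLength;
      console.log(`received ${received}/${file.size} bytes`);
      // A real receiver buffers the chunks (and decrypts them) before saving.
    };
  };

  const channel = sender.createDataChannel("file");
  channel.onopen = async () => {
    const chunkSize = 16 * 1024; // small chunks to stay within SCTP message limits
    for (let offset = 0; offset < file.size; offset += chunkSize) {
      const chunk = await file.slice(offset, offset + chunkSize).arrayBuffer();
      channel.send(chunk);
    }
  };

  // Local offer/answer exchange in place of a signaling server.
  const offer = await sender.createOffer();
  await sender.setLocalDescription(offer);
  await receiver.setRemoteDescription(offer);
  const answer = await receiver.createAnswer();
  await receiver.setLocalDescription(answer);
  await sender.setRemoteDescription(answer);
}
```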
For the full show notes and links visit https://sub.thursdai.news
Dan Anderson Oculum has a video conferencing solution for Service Providers. The solution is 100% white label. OmniUC™ is a white label video conferencing and unified communications platform designed specifically for the telecom industry. OmniUC is a turnkey solution that easily layers into existing voice applications with minimal development. Fully developed on WebRTC, OmniUC requires no apps, plug-ins or downloads and is compatible across all modern browsers and mobile devices. Customizable branding enables telecom providers to set the appropriate look and feel for their customers while a rich feature set offers options such as conferencing, collaboration, messaging, encryption, among many others. To learn more about Oculum's OmniUC™ white-label video conferencing and unified communications platform solutions, click here. Oculum is a PaaS (Platform as a Service) modeled technology leader and innovator of video conferencing for Telecom Service Providers and select technology industry verticals. With the support of 19 years of proven unified communications (UC) platform development, 110+ patents, and millions of subscribers, Oculum is uniquely positioned to deliver world-class virtual experiences on any type of user's device (mobile or browser).
Today Trey Ford and RSnake sit down with MirrorTab, which aims to thwart client-side malicious plugins from reading data meant for banks and other sensitive websites, using clever WebRTC and video tricks.
News includes the archiving of the “Phoenix Sync” project, a major update to Gettext that enhances compilation efficiency, the release of ErrorTracker v0.2.6 with new features like error pruning and ignoring, and José Valim highlighting UX issues with ChatGPT's new UI. We were also joined by Alistair Woodman, a board member of the EEF (Erlang Ecosystem Foundation), who explained the EEF's recent efforts to stay ahead of legislation and technical regulatory shifts that may impact developers soon. Alistair discussed the changing regulatory landscape in the US and the EU due to high-profile exploits, outages, and nation-state supply chain attacks. We learned how the EEF supports Elixir and BEAM developers and what they need from the community now, and more! Show Notes online - http://podcast.thinkingelixir.com/220 (http://podcast.thinkingelixir.com/220) Elixir Community News - https://github.com/josevalim/sync (https://github.com/josevalim/sync?utm_source=thinkingelixir&utm_medium=shownotes) – The "Phoenix Sync" project has been archived with no immediate explanation yet. - https://github.com/elixir-gettext/gettext/blob/main/CHANGELOG.md#v0260 (https://github.com/elixir-gettext/gettext/blob/main/CHANGELOG.md#v0260?utm_source=thinkingelixir&utm_medium=shownotes) – Gettext has a big update to version 0.26.0 which includes a more efficient compilation. - https://github.com/elixir-cldr/cldr (https://github.com/elixir-cldr/cldr?utm_source=thinkingelixir&utm_medium=shownotes) – Gettext feels similar to how ExCldr allows defining a custom backend. - https://elixirstatus.com/p/TvydI-errortracker-v026-has-been-released (https://elixirstatus.com/p/TvydI-errortracker-v026-has-been-released?utm_source=thinkingelixir&utm_medium=shownotes) – ErrorTracker v0.2.6 has been released with key improvements like a global error tracking disable flag, automatic resolved error pruning, and error ignorer. - https://github.com/mimiquate/tower (https://github.com/mimiquate/tower?utm_source=thinkingelixir&utm_medium=shownotes) – Tower is a flexible error tracker for Elixir applications that listens for errors and reports them to configured reporters like email, Rollbar, or Slack. - https://x.com/josevalim/status/1832509464240374127 (https://x.com/josevalim/status/1832509464240374127?utm_source=thinkingelixir&utm_medium=shownotes) – José highlighted some UX issues with ChatGPT's new UI, mentioning struggles with concurrent updates. - https://x.com/josevalim/status/1833176754090897665 (https://x.com/josevalim/status/1833176754090897665?utm_source=thinkingelixir&utm_medium=shownotes) – José postponed publishing a video on optimistic updates with LiveView due to an Apple announcement. - https://github.com/wojtekmach/mixinstallexamples (https://github.com/wojtekmach/mix_install_examples?utm_source=thinkingelixir&utm_medium=shownotes) – A new WebRTC example was added to the "Mix Install Examples" project. - https://github.com/wojtekmach/mixinstallexamples/pull/42 (https://github.com/wojtekmach/mix_install_examples/pull/42?utm_source=thinkingelixir&utm_medium=shownotes) – The WebRTC example shows how to use the ex_webrtc Elixir package in a small script, compatible with Mix.install/2. - https://github.com/elixir-webrtc/ex_webrtc (https://github.com/elixir-webrtc/ex_webrtc?utm_source=thinkingelixir&utm_medium=shownotes) – The Elixir package used for the WebRTC example. 
- https://x.com/taylorotwell/status/1831668872732180697 (https://x.com/taylorotwell/status/1831668872732180697?utm_source=thinkingelixir&utm_medium=shownotes) – Laravel raised a $57M Series A in partnership with Accel, likely related to their Laravel Cloud hosting platform. Do you have some Elixir news to share? Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at show@thinkingelixir.com (mailto:show@thinkingelixir.com) Discussion Resources - https://en.wikipedia.org/wiki/CyberResilienceAct (https://en.wikipedia.org/wiki/Cyber_Resilience_Act?utm_source=thinkingelixir&utm_medium=shownotes) - https://news.apache.org/foundation/entry/open-source-community-unites-to-build-cra-compliant-cybersecurity-processes (https://news.apache.org/foundation/entry/open-source-community-unites-to-build-cra-compliant-cybersecurity-processes?utm_source=thinkingelixir&utm_medium=shownotes) - https://www.cisa.gov/sites/default/files/2024-05/CISA%20Secure%20by%20Design%20Pledge_508c.pdf (https://www.cisa.gov/sites/default/files/2024-05/CISA%20Secure%20by%20Design%20Pledge_508c.pdf?utm_source=thinkingelixir&utm_medium=shownotes) - https://www.whitehouse.gov/wp-content/uploads/2024/02/Final-ONCD-Technical-Report.pdf (https://www.whitehouse.gov/wp-content/uploads/2024/02/Final-ONCD-Technical-Report.pdf?utm_source=thinkingelixir&utm_medium=shownotes) - https://www.infoworld.com/article/2336216/white-house-urges-developers-to-dump-c-and-c.html (https://www.infoworld.com/article/2336216/white-house-urges-developers-to-dump-c-and-c.html?utm_source=thinkingelixir&utm_medium=shownotes) - https://en.m.wikipedia.org/wiki/CE_marking (https://en.m.wikipedia.org/wiki/CE_marking?utm_source=thinkingelixir&utm_medium=shownotes) - https://www.cisco.com/c/en/us/services/acquisitions/tail-f.html (https://www.cisco.com/c/en/us/services/acquisitions/tail-f.html?utm_source=thinkingelixir&utm_medium=shownotes) - https://digital-strategy.ec.europa.eu/en/policies/cyber-resilience-act (https://digital-strategy.ec.europa.eu/en/policies/cyber-resilience-act?utm_source=thinkingelixir&utm_medium=shownotes) - https://www.nist.gov/ (https://www.nist.gov/?utm_source=thinkingelixir&utm_medium=shownotes) - https://en.wikipedia.org/wiki/XZUtilsbackdoor (https://en.wikipedia.org/wiki/XZ_Utils_backdoor?utm_source=thinkingelixir&utm_medium=shownotes) - https://en.wikipedia.org/wiki/Log4j (https://en.wikipedia.org/wiki/Log4j?utm_source=thinkingelixir&utm_medium=shownotes) - https://en.wikipedia.org/wiki/Heartbleed (https://en.wikipedia.org/wiki/Heartbleed?utm_source=thinkingelixir&utm_medium=shownotes) - https://en.wikipedia.org/wiki/2024CrowdStrikeincident (https://en.wikipedia.org/wiki/2024_CrowdStrike_incident?utm_source=thinkingelixir&utm_medium=shownotes) - https://news.stanford.edu/stories/2024/06/stanfords-deborah-sivas-on-scotus-loper-decision-overturning-chevrons-40-years-of-precedent-and-its-impact-on-environmental-law (https://news.stanford.edu/stories/2024/06/stanfords-deborah-sivas-on-scotus-loper-decision-overturning-chevrons-40-years-of-precedent-and-its-impact-on-environmental-law?utm_source=thinkingelixir&utm_medium=shownotes) - https://openssf.org/ (https://openssf.org/?utm_source=thinkingelixir&utm_medium=shownotes) - https://www.fcc.gov/broadbandlabels (https://www.fcc.gov/broadbandlabels?utm_source=thinkingelixir&utm_medium=shownotes) - https://www.cve.org/ (https://www.cve.org/?utm_source=thinkingelixir&utm_medium=shownotes) - https://erlef.org/wg/security 
(https://erlef.org/wg/security?utm_source=thinkingelixir&utm_medium=shownotes) Guest Information - https://www.linkedin.com/in/alistair-woodman-51934433 (https://www.linkedin.com/in/alistair-woodman-51934433?utm_source=thinkingelixir&utm_medium=shownotes) – Alistair Woodman on LinkedIn - awoodman@erlef.org - http://erlef.org/ (http://erlef.org/?utm_source=thinkingelixir&utm_medium=shownotes) – Erlang Ecosystem Foundation Website Find us online - Message the show - @ThinkingElixir (https://twitter.com/ThinkingElixir) - Message the show on Fediverse - @ThinkingElixir@genserver.social (https://genserver.social/ThinkingElixir) - Email the show - show@thinkingelixir.com (mailto:show@thinkingelixir.com) - Mark Ericksen - @brainlid (https://twitter.com/brainlid) - Mark Ericksen on Fediverse - @brainlid@genserver.social (https://genserver.social/brainlid) - David Bernheisel - @bernheisel (https://twitter.com/bernheisel) - David Bernheisel on Fediverse - @dbern@genserver.social (https://genserver.social/dbern)
Justin Uberti, creator of WebRTC and now Founder of Fixie.ai, shares insights into the development of AI.town, a platform for engaging with AI personalities through voice, and the potential impact of conversational AI on various industries. Please support this podcast on Patreon! http://www.patreon.com/aiinsideshow
INTERVIEW TOPICS
- Introduction to Justin's background (WebRTC, Hangouts Video, Duo, Stadia)
- The shift towards conversational AI and voice interactions
- Fixie.ai and AI.town - enabling voice conversations with AI characters
- Transitioning from text-based to voice-based AI interactions, potential use cases
- Creating AI characters, enabling role-play conversations
- Ethical considerations and voice cloning technology
- The nature of human conversation (filler words, turn-taking protocols)
- Incorporating human conversational quirks into AI speech
- V1 vs V2 voice technology (speech recognition - text - speech vs. direct speech-to-speech)
- Open-source speech AI model Ultravox.ai, leaderboard for fastest AI models (thefastest.ai)
Hosted on Acast. See acast.com/privacy for more information.
We try Omakub, a new opinionated Ubuntu desktop for power users and macOS expats. Sponsored By: Core Contributor Membership: Take $1 a month off your membership for a lifetime! Tailscale: Tailscale is programmable networking software that is private and secure by default - get it free on up to 100 devices! Kolide: Kolide is a device trust solution for companies with Okta, and they ensure that if a device isn't trusted and secure, it can't log into your cloud apps. Support LINUX Unplugged.
This is a recap of the top 10 posts on Hacker News on March 18th, 2024. This podcast was generated by wondercraft.ai.
(00:33): YouTube now requires to label their realistic-looking videos made using AI. Original post: https://news.ycombinator.com/item?id=39746468&utm_source=wondercraft_ai
(02:08): 900 Sites, 125M accounts, 1 Vulnerability. Original post: https://news.ycombinator.com/item?id=39742422&utm_source=wondercraft_ai
(04:07): Dear Paul Graham, there is no cookie banner law. Original post: https://news.ycombinator.com/item?id=39742578&utm_source=wondercraft_ai
(05:43): Stability.ai – Introducing Stable Video 3D. Original post: https://news.ycombinator.com/item?id=39749312&utm_source=wondercraft_ai
(07:38): Cranelift code generation comes to Rust. Original post: https://news.ycombinator.com/item?id=39742692&utm_source=wondercraft_ai
(09:19): EPA bans asbestos, a deadly carcinogen still in use decades after partial ban. Original post: https://news.ycombinator.com/item?id=39746806&utm_source=wondercraft_ai
(11:05): WebSockets vs. Server-Sent-Events vs. Long-Polling vs. WebRTC vs. WebTransport. Original post: https://news.ycombinator.com/item?id=39745993&utm_source=wondercraft_ai
(12:42): Nvidia CEO Jensen Huang announces new AI chips: ‘We need bigger GPUs'. Original post: https://news.ycombinator.com/item?id=39749646&utm_source=wondercraft_ai
(14:35): Elegant open source project tracking, Trello like but self-hosted. Original post: https://news.ycombinator.com/item?id=39742114&utm_source=wondercraft_ai
(15:59): Paris cycling numbers double in one year thanks to investment. Original post: https://news.ycombinator.com/item?id=39744932&utm_source=wondercraft_ai
This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai
In this special episode, we kick off a brand-new series that dives into the world of Elixir—but with a twist. We're exploring the systems surrounding the language and what it takes to support and run a company or team that uses Elixir. Join us as we engage in insightful conversations with various industry voices, starting with Tyler Young, about the practical systems and solutions used by businesses like Felt.com and SleepEasy.app. This series promises to be an enlightening journey for anyone curious about the behind-the-scenes workings of an Elixir-based product. Tune in to hear the unique challenges and successes experienced by others in the field and more! Show Notes online - http://podcast.thinkingelixir.com/191 (http://podcast.thinkingelixir.com/191) Elixir Community News - https://github.com/erlang/otp/pull/8111 (https://github.com/erlang/otp/pull/8111?utm_source=thinkingelixir&utm_medium=shownotes) – Erlang's potential new OTP json module is showing significant performance improvements in recent benchmarks. - https://twitter.com/michalmuskala/status/1759932700624912832 (https://twitter.com/michalmuskala/status/1759932700624912832?utm_source=thinkingelixir&utm_medium=shownotes) – Michał Muskała shares insights online about future Elixir idiomatic wrapper around the new OTP json module. - https://www.erlang.org/news/167 (https://www.erlang.org/news/167?utm_source=thinkingelixir&utm_medium=shownotes) – OTP 27-RC1 was released with new features like the maybe expression and Triple-Quoted Strings. - https://github.com/erlang/otp/ (https://github.com/erlang/otp/?utm_source=thinkingelixir&utm_medium=shownotes) – Official repository for Erlang/OTP where the 27-RC1 release can be found. - https://twitter.com/uwucocoa/status/1758878453309505958 (https://twitter.com/_uwu_cocoa/status/1758878453309505958?utm_source=thinkingelixir&utm_medium=shownotes) – Tweet mentioning that Erlang 27.0-rc1 runs natively on ARM64 Windows. - https://fly.io/blog/tigris-public-beta/ (https://fly.io/blog/tigris-public-beta/?utm_source=thinkingelixir&utm_medium=shownotes) – Fly.io announces a new globally distributed object storage solution that supports the S3 API. - https://github.com/elixir-webrtc/ex_webrtc (https://github.com/elixir-webrtc/ex_webrtc?utm_source=thinkingelixir&utm_medium=shownotes) – New WebRTC library for Elixir called exwebrtc is introduced. - https://blog.swmansion.com/introducing-elixir-webrtc-a37ece4bfca1 (https://blog.swmansion.com/introducing-elixir-webrtc-a37ece4bfca1?utm_source=thinkingelixir&utm_medium=shownotes) – Blog post introducing exwebrtc, detailing the motivation and development of the new WebRTC library for Elixir. - https://membrane.stream/ (https://membrane.stream/?utm_source=thinkingelixir&utm_medium=shownotes) – Membrane Framework site; although exwebrtc was created due to certain challenges with Membrane, Membrane is noted for its pipeline model. - https://www.w3.org/TR/webrtc/ (https://www.w3.org/TR/webrtc/?utm_source=thinkingelixir&utm_medium=shownotes) – The W3C WebRTC specification, which exwebrtc implements in Elixir, is more JS focused. - The Erlang Ecosystem Foundation recently celebrated their 5 year anniversary, highlighting the community's achievements. - https://github.com/gleam-lang/gleam/releases/tag/v1.0.0-rc2 (https://github.com/gleam-lang/gleam/releases/tag/v1.0.0-rc2?utm_source=thinkingelixir&utm_medium=shownotes) – Release of Gleam v1.0.0-rc2 which includes a bug fix for the compiler. 
- Announcement about ElixirConf US, with a call for training classes and upcoming call for talks. Do you have some Elixir news to share? Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at show@thinkingelixir.com (mailto:show@thinkingelixir.com) Discussion Resources - https://felt.com/ (https://felt.com/?utm_source=thinkingelixir&utm_medium=shownotes) - https://sleepeasy.app/ (https://sleepeasy.app/?utm_source=thinkingelixir&utm_medium=shownotes) - https://twitter.com/TylerAYoung/status/1730253716073148470 (https://twitter.com/TylerAYoung/status/1730253716073148470?utm_source=thinkingelixir&utm_medium=shownotes) – Tyler shared on X when he bought his physical hardware - https://sentry.io/for/elixir/ (https://sentry.io/for/elixir/?utm_source=thinkingelixir&utm_medium=shownotes) - https://www.appsignal.com/elixir (https://www.appsignal.com/elixir?utm_source=thinkingelixir&utm_medium=shownotes) - https://felt.com/blog/startup-and-shutdown-for-phoenix-applications (https://felt.com/blog/startup-and-shutdown-for-phoenix-applications?utm_source=thinkingelixir&utm_medium=shownotes) - https://retool.com (https://retool.com?utm_source=thinkingelixir&utm_medium=shownotes) - https://www.heap.io/ (https://www.heap.io/?utm_source=thinkingelixir&utm_medium=shownotes) Guest Information - https://twitter.com/TylerAYoung (https://twitter.com/TylerAYoung?utm_source=thinkingelixir&utm_medium=shownotes) – on Twitter - https://github.com/s3cur3 (https://github.com/s3cur3?utm_source=thinkingelixir&utm_medium=shownotes) – on Github - https://fosstodon.org/@tylerayoung (https://fosstodon.org/@tylerayoung?utm_source=thinkingelixir&utm_medium=shownotes) – on Fediverse - https://tylerayoung.com/ (https://tylerayoung.com/?utm_source=thinkingelixir&utm_medium=shownotes) – Blog Find us online - Message the show - @ThinkingElixir (https://twitter.com/ThinkingElixir) - Message the show on Fediverse - @ThinkingElixir@genserver.social (https://genserver.social/ThinkingElixir) - Email the show - show@thinkingelixir.com (mailto:show@thinkingelixir.com) - Mark Ericksen - @brainlid (https://twitter.com/brainlid) - Mark Ericksen on Fediverse - @brainlid@genserver.social (https://genserver.social/brainlid) - David Bernheisel - @bernheisel (https://twitter.com/bernheisel) - David Bernheisel on Fediverse - @dbern@genserver.social (https://genserver.social/dbern) - Cade Ward - @cadebward (https://twitter.com/cadebward) - Cade Ward on Fediverse - @cadebward@genserver.social (https://genserver.social/cadebward)
Russ d'Sa is Founder of LiveKit, the real-time streaming audio, video, and data infrastructure platform for developers. Their open source project, also called livekit, provides the end-to-end stack for WebRTC and has over 6K stars on GitHub. In this episode, we dig into LiveKit's unique founding story with the idea coming from the founders' experience building a Clubhouse competitor and using Agora, early interest from companies like Pinterest that gave indications that there was a need for an open source alternative to Agora and Twilio, why Conversational AI will be a big driver for LiveKit & more!
Elon wants your Tesla's GPU / Spy planes to hunt for treasure / ESA + SpaceLab / Air taxis in New York / Nvidia surprises with the new H200. Sponsor: If you're self-employed or a small business, you already know you need to count on the best. That's why Vodafone Business presents the new Negocio a Medida plan: 600 Mbps fiber, two 5G mobile lines, and a device repair and replacement service for just €54.54/month plus VAT.
Ayush Ranjan is the Co-Founder and CEO at Huddle01, whose mission is to democratize WebRTC and make it more private, secure, and scalable. Why you should listen Huddle01 is building a decentralized real-time communication network. Developers can leverage their suite of developer-friendly SDKs to enable powerful audio/video experiences on your app with just a quick plug in. Ayush talks about privacy concerns with video communication platforms like Zoom and how that led him to create Huddle01, a peer-to-peer video calling platform with added layers of security and scalability. Ayush explains how WebRTC works and its evolution from peer-to-peer to server-based communication. He highlights the challenges of centralization, such as lack of content encryption and latency issues caused by distant servers. Ayush discusses the concept of a people-powered communication network, where individuals can become nodes and power calls without relying on centralized servers. This leads to increased privacy, reduced latency, and the creation of a new bottom-up economy. Ayush also highlights features of Huddle01 that differentiate it from platforms like Zoom, such as seamless integration with popular streaming sites, enhanced privacy options, crypto primitives for avatars, and cheaper call recordings using filecoin storage. Andy acknowledges the importance of building user-friendly platforms regardless of users' familiarity with web3 technology. The conversation then shifts to the concept of "DePIN," which refers to leveraging blockchain technology for decentralized physical infrastructures. Examples include Helium's approach to creating a decentralized 5G mobile network and Filecoin's role in enabling individuals to become storage providers. The discussion concludes by emphasizing the need for more affordable communication infrastructure in order to reduce costs currently associated with centralized systems. Ayush describes the different layers of Huddle01, including the app layer for consumers, the SDK layer for developers, and the protocol layer for miners and validators. Ayush envisions a future where Huddle powers all communication platforms, making calls more immersive and solving global problems more effectively. Looking ahead, Ayush sees a utopian future where humans explore outer space and achieve higher levels of energy consumption through advanced infrastructure systems like Dyson Spheres. Supporting links Bitget Bitget Academy Bitget Research Bitget Wallet Huddle01 Andy on Twitter Brave New Coin on Twitter Brave New Coin If you enjoyed the show please subscribe to the Crypto Conversation and give us a 5-star rating and a positive review in whatever podcast app you are using.
Ubuntu 23.10 ‘Mantic Minotaur' is out! Focusrite helps out a Linux developer, Debian Bookworm powered PiOS, and self-hosting your own WebRTC server.
In this podcast, Christian Stredicke, CEO and President of Vodia, and Doug Green, Publisher of TR, discuss how in-office and intra-company video calling is making the post-Covid workplace more spontaneous, more human and perhaps even more fun. Christian and Doug discuss how video calling can help build trust in hybrid/remote environments, and how the real-time combination of voice, text and video will ultimately become the new norm for office cultures. "If you think about it," Christian says, "having a video call in your private bedroom would have been unthinkable, but today it's normal, so a lot of things have changed and I believe it will go on." Christian and Doug also discuss the challenge of translating between SIP and WebRTC via a cloud PBX, so everyone in a company can talk to everyone else, no matter their device of choice. Visit www.vodia.com
Join us for an inspiring episode of the Degree Free Podcast, where Sean Dubois shares his personal journey of finding tech success without a degree. He discusses how working at Amazon opened doors for him and made job interviews easier. Sean encourages listeners to prioritize personal growth and happiness over societal expectations. Key Discussion Points: - Sean and Ryan talk about programming and open-source projects, emphasizing the importance of solving problems you're passionate about and saying yes to opportunities. - They discuss the benefits of open source projects, such as having control over your destiny and contributing to projects you care about. They also mention the challenges and rewards of creating your own projects. - The importance of self-discovery and self-actualization in motivating oneself to work hard and succeed is discussed. They highlight the inner motivation to persevere through failures as being more important than specific programming languages or projects. - Sean talks about finding a niche in the WebRTC space and the concept of doing things in public to create opportunities and connections. They also challenge the significance of job titles in the software industry. - The conversation tackles the relationship between money and happiness, acknowledging that while money doesn't guarantee happiness, it can provide financial stability and freedom. Personal anecdotes about the impact of financial stability are shared. - The episode ends with Sean explaining why he helps others, emphasizing paying it forward and the mutual benefit of helping motivated individuals. Ryan reflects on his own experiences and expresses a desire to do more. - Sean advises that solving personal problems, finding self-confidence, and enjoying what you do can lead to career success and fulfillment. Don't miss this enlightening episode of the Degree Free Podcast, where Sean Dubois and Ryan Maruyama share their experiences and insights on success, personal growth, and finding happiness. Tune in and be inspired! To keep up with everything Degree Free check out our website: degreefree.co Join the Degree Free Network and get the support you need to get hired, get that promotion, and achieve your career goals! You'll get access to our free resources such as the 5 Degree Free Pathways and 7 Day Get Hired Challenge Course: degreefree.co/network Learn job hunting skills and learn how to land your dream job in 7 days: degreefree.co/gethired/ Starting your degree free journey but don't know where to start? Check out our free ‘5 Degree Free Pathways' Course: degreefree.co/pathways
In episode 131 of Jamstack Radio, Brian speaks with Christian Stuff and James Hush of Daily. This talk explores virtual meetings, WebRTC, and video technology. Christian and James share insights on building a video call experience in the browser and rising to the demand in recent years for worldwide virtual events.
The 16:9 PODCAST IS SPONSORED BY SCREENFEED – DIGITAL SIGNAGE CONTENT Using existing network infrastructure has long been talked up as an efficient way to manage and deliver digital signage solutions in large companies, but the concept has been clouded by concerns - like the cost of additional AV hardware and the impact of all that video on the company network. But we now live in a world where companies support countless video conferencing sessions with piles of users, with little or no latency. Other technologies have also caught up, and computing just keeps getting more powerful. Which is why I was interested in chatting with Shane Vega, VP of Marketing for the Silicon Valley software firm Userful, about his company's AV over IP solutions. The company has its roots in Calgary, Alberta and still does a lot of the R&D work there. Userful first showed up in digital signage circles talking about a different way, using software and endpoints, to drive video walls. But in the last few years it has been much more focused on a broader IP-driven solution that tends to start with control rooms and operations centers, but can also drive things like meeting room displays and digital signage around corporate campuses. There's been a lot of discussion about AV needs converging with IT interests, but from Vega's perspective, that convergence is already firmly in place. Subscribe from wherever you pick up new podcasts. TRANSCRIPT Shane, thank you for joining me. Where are you today? Shane Vega: I am in sunny Tampa, Florida, where although it's not all that sunny today, we've got some rain, but that's per the norm now. Now, Userful is in Silicon Valley, but a lot of the developers are in Calgary, right? Shane Vega: Yeah, that's correct. All of our R&D, engineering team, and the like, they're all up in Calgary, Canada. So you're missing the Calgary Stampede this week? Shane Vega: I am missing the Stampede. But you know what, I believe they deserve a bit of some good time because they spend the majority of the time avoiding the minus 30-degree weather. Yeah, I spent a number of years in Calgary, and it's an interesting weather city. Shane Vega: Yeah. You know it's bad when they've developed an entire infrastructure of walkways between buildings to avoid having to go outside. Yeah, just like Minneapolis. Shane Vega: Exactly. All right, so we had a quick chat in the LG booth at Infocomm, and you explained what Userful was up to with its Infinity platform and AV over IP and AV as a Service and so on, and I've seen that. I will wholeheartedly admit I don't totally get it, but how you explained it to me was very interesting, and I thought this would be useful for a lot of people to understand the infrastructure and distribution side of digital signage. We spend so much time talking about the content and business strategy and all those sorts of things, but behind-the-scenes stuff is awfully important, and maybe we could start out by just explaining what Userful is and does and where you came from because when Userful first came out, it was presented to me as video wall software, and I had a hell of a time wrapping my brain around what it was all about. But I know you guys have evolved quite a bit. Shane Vega: Yeah. I appreciate that, Dave. To answer your question, Userful has grown exponentially in the last 5+ years. John Marshall, our CEO came on board about 7 years or so ago. 
My timing might be a little bit off, and when he came into the organization, we were a perpetual software company, so we weren't software as a service, we weren't selling subscriptions. We were selling perpetual software… You'd buy a license and then get that supported? Shane Vega: Yeah, you'd buy a license and then we'd support it for however long you wanted to use it, and the license for the software was pretty siloed, right? It was, "Hey, you can buy this operations center license." Where, to your point, we were just managing content on a video wall. And it was mostly control rooms, right? Shane Vega: Mostly control rooms, almost exclusively for a time, and then we evolved into the digital signage world, and it was cloud-based digital signage exclusively. So what most folks are familiar with is hosting up in AWS, giving you some access to dynamic tools for creating templates and the like. As for what we launched during Infocomm: from the time I just mentioned until about two and a half or three years ago, we pivoted the company from perpetual to subscription-based software as a service, and that's who Userful is. We are a software company, and we've been a software company tailored to the needs of the AV industry. Most recently, we've just released our newest platform, and that's really been the biggest evolution, which is moving away from application-specific deployments into more of a platform approach for AV over IP, and that is really the biggest breakthrough development that we've had here, because in the older version of our software, we were a monolithic code base. Again, we were just selling either the operations center software or we were selling some digital signage. Everything was monolithic. It was difficult for our engineering team to manage updates, firmware, bug fixes, and the like. We've now moved to a distributed code base that has given us exceptional flexibility with how we develop our software for the various use cases and applications in the AV industry. So if you think about what you've seen in the conversations you and I have had, essentially, and you hit the nail right on the head, this isn't just about fancy software managing content on a video wall. Can we do that? Of course, we've got feature sets for various different use cases, but there's also the infrastructure piece, and this was my "aha moment" through a different lens at Infocomm. AV over IP has matured through the years from IP addressable matrix switchers, where everything was still very much centralized, into IP addressable nodes, encoders, decoders, transmitters, receivers, and all the different AV manufacturers out there have now standardized on this proprietary hardware version of AV over IP, and I started to ask myself the question: what is their value proposition in doing that? I overheard quite a few folks during this past Infocomm talk about the value of this distributed architecture: enabling flexibility, scalability, augmenting workflows, the total cost of ownership being lower, and I sat there a little bit baffled, because these are all the same things that we talk about at Userful, and so it really opened up an area where I feel like we do need to evangelize a little bit more about how Userful does AV over IP differently, and that we don't require all of the hardware infrastructure. We truly are a software platform, but because of the IT protocols that currently exist, that's how we developed our software.
So when you think about Userful, I've actually positioned us a little bit more as an IT solution than an AV solution, even though our entire solution is built around the AV industry and its needs. The reason I say that is because we're literally a server, non-proprietary, and an endpoint, and that endpoint is software, our uClient application. In between the two is network infrastructure. There are no encoders, decoders, transmitters, receivers, and the list goes on. We are able to transmit and aggregate content, meaning we can pull sources of visual and audio information into a data library or data store that we manage on our server and distribute that information to any destination or any screen, and we do all of that with IP protocols. The same IP protocols, by the way, and this is how I usually get people to have the "aha moment": if we were having this over a Teams meeting, Dave, or a Zoom meeting, we would be transmitting video two ways. In many cases, multiple participants from multiple regions of the world share two-way audio and video. We would be able to share content from our local computers into that meeting, and nobody would have to go out and buy a proprietary encoder and decoder to make that happen. So using that same infrastructure, those IP protocols that are currently at work, like WebRTC for instance, we're able to build a solution that leverages those same advancements for the purposes of AV over IP. It's a bit of a mouthful, but that's what we're doing. So you wouldn't have been able to do some of that 10-15 years ago, because the network infrastructure in a lot of larger corporations hadn't really caught up with that, so you would flood a network if you were using a lot of video and so on, but things have changed. Shane Vega: Things have changed substantially, and I would even say it's not even 10-15 years ago, just 5-10 years ago, and the reason I say that is because there are laws of engineering and physics, like Butters' Law, Kryder's Law, and Moore's Law, which talk about how rapidly these advancements are happening: the amount of bandwidth you can get through a fiber optic network, for example, doubling every nine months, or the amount of processing speed you can get out of a CPU. What we're doing and the way that we're doing it taxes the CPU of that server. It also taxes the GPU of that server, the graphics card, because those are the two major components that we use for our solution. If you think about just two years ago, Dave, the servers that we were deploying in the field had 8 processor cores. Right now, I have a server that we've certified that has 192 processor cores, so we're able to do exceedingly more on a single server, which is why we've actually built our solution to be a data center solution by and large, where you take a big beefy server, you put it in your data center, and you're virtualizing all of the traditional hardware that you would need, and you're managing a wide range of AV endpoints, whether it's digital signage, meeting rooms, operations centers, or what have you. Is there a baseline for what you need in terms of the network infrastructure? I'm definitely not an IT architect, but do you need CAT6E, or can you do this over Wi-Fi? I don't know, and I suspect a lot of people don't know. Shane Vega: Yeah, so it's a good question.
So again, because we're optimizing for IT protocols, we're able to do a lot, right? From the screen to the switch, we're just really looking for that one-gigabit uplink, which is standard. Most folks are going to have that. From the sources to the server, and all that infrastructure pulling into the server, we're looking for a 10-gigabit uplink. So there are some requirements for the network, but nothing that is outside the realm of standard network topology. The real intricacies, the real areas where we get into some deeper discussions, are when they have multiple networks that we have to traverse. When you start getting into DOD environments where things have to be air-gapped and there's no internet connectivity, and when networks start to get a little bit more complex, that's where we have to begin to get a little bit more intentional about how we design it. Now that said, we haven't yet met a deployment that we couldn't meet the network requirements for, even though some of those were complex ones. There were two things that particularly interested me. The first was, as you laid out earlier, that you don't need all these encoders and other bits of hardware to layer into a network to make this happen. So you're cutting out conceivably a lot of capital costs and a lot of potential fail points. And I guess the other thing that intrigues me, and you can talk about that next or after the first question, would be the idea that you can use this for multiple aspects. I suspect there are control room data dashboards and software platforms out there, but one of the things you talked about at Infocomm is that you can cascade this out to do all kinds of different things, from operation centers to experience centers, off of the same platform. Shane Vega: Yeah, exactly, Dave, and to answer the first question, you hit the nail on the head with one of my areas of confusion when I was at Infocomm, when I heard people talking about the low total cost of ownership and tying it to these encoders and decoders. We don't require those things. So when I think about the total cost of ownership, I think about the hardware upfront costs that you don't need to have, and the additional BTU output from all of that hardware that you would normally need, which is no longer going to be there driving up your HVAC costs, right? You don't have all the power consumption. And for companies looking at green initiatives, and this is a big one moving forward, folks want to get green initiatives going, like lower carbon emissions; lowering power consumption by not having all that hardware is yet another total cost of ownership benefit for Userful. Again, our encoding happens on the one server that we require, in that Nvidia graphics card. The decoding is done by a piece of software we developed called the uClient application. Now, where that uClient application resides, we give you a ton of flexibility. We have integrated it into certain endpoints like webOS or Tizen or Android. And that gives us the flexibility to be able to load that client application in various different environments and use cases, depending on the display type, whether it's an LCD or a direct view LED, and how we manage that.
In some cases, we do have a small appliance that you might need at the edge, and that would be one additional piece of hardware per display, depending on the display type. That's an Android box that we load our uClient application onto if the display doesn't have the ability to integrate with our software. So if it's a smart display that already has a system on a chip on it, conceivably you don't need that Android box? Shane Vega: Correct. So now what you're left with, as I said, is just a server, software at the edge, and network infrastructure in between. So ongoing maintenance costs are substantially lower. Initial hardware costs are lower. Your total cost of ownership around all the things I mentioned earlier is going to be lower. Therefore, your refresh costs are going to be lower, because with hardware, every three to five years, in some cases five to seven years, you're having to do a hardware refresh. It's always tied to CapEx because it's usually proprietary; they have to budget for CapEx renewals of all this hardware. Because of Userful's deployment model, we can take on an OpEx model for those folks who would benefit from that, since your hardware refresh can be built into your standard IT refreshes because you own the hardware. In as many cases as we can possibly push for, we don't provide the server; we want the end user to provide the server, and that way it gets built into your traditional OpEx refresh, and the only recurring cost is the software. To your next question about what we spoke about and the benefits of the platform: this is where our software really begins to shine, right? Our platform is accessible through a web browser, so no proprietary software needs to be downloaded for a user to access it. You access our software through a traditional HTML5 web browser. Once you access the software through a web browser, the first thing you're going to notice is we have six applications that any user can take advantage of. In most cases, folks aren't trying to eat the elephant whole, right? They'll have a use case like digital signage, or they'll have a use case like meeting rooms or experiential centers or what have you, and that's one of the reasons why we are licensing the server. We're licensing the CPU cores and the number of graphics cards that you need on that server, so that if you have a smaller use case, your out-of-pocket costs are gonna be lower because you need a smaller server. But when you log in for the first time, you're gonna see, "Oh, I got this for digital signage, but I didn't know I could run my meeting room here," or, "I didn't realize that I can do these artistic video walls," or "I didn't realize I can incorporate these data dashboards from Power BI or Tableau as a native source and share those to any display that Userful is managing." The value is seen almost immediately, and so what we do is try to help people understand the peripheral or parallel use cases. So I use digital signage quite a bit, and I gave you this analogy regarding airports at InfoComm, Dave, where at least half a dozen times in the last six to eight months, I've had conversations with various airports, and most of them are pulling us in because they have an operation center.
An airport operations center, or a security operations center, or what have you, and they'll say, "Hey, we want the Userful software to run the content on these displays and video walls in the operation center," and when we have these discovery calls, I'll typically ask, "Hey, have you guys thought about the advantages of using our platform to help you with the signage?" And I'm usually shot down rather immediately. Most folks know airports are convoluted in the way that they deploy their technology. They've got various different groups. They're typically siloed. But specifically with the airport operations centers, I'll just say, "Hey, look, I get that, but let me just throw this use case out there and see if it lands and hits you as showing value." You're in an airport operations center. Wouldn't you want to be able to manage the entire network of screens that are currently being used to show baggage, arrivals, departures, signage, and all your wayfinding screens? Would it not be valuable to be able to manage those as part of your airport operations? Also, I've noticed in many cases, they'll incorporate security into their AOC. Some of them have independent security operations centers, but in either event, I would tell them: what happens if you have an incident at the airport? Wouldn't you want to be able to take over those screens from the command center that's responsible for monitoring and sending strategic messages to people, depending on what the situation is? If there's a fire, "evacuate." If, God forbid, there's an active shooter, "take shelter in place," and be able to send strategic messages to various screens all from within your operation center? Well, you can't currently do that because you've got multiple systems driving all of these different AV endpoints. A single platform doesn't just give you the ability to scale your deployment, it gives you the ability to scale your workflow and become more flexible, to augment those workflows so I can send strategic messages to screens, and I can manage arrivals and baggage from my AOC, if that's something I need. In addition, we could help you with your meeting rooms. You can walk into a meeting room, and I can help you cast some content and have an impromptu meeting at the drop of a dime. Those are just a few use cases of what our platform can do. Sometimes, when you have these platforms that say they can do, in your case, at least six different things, there can be compromises. In other words, "Yeah, we can do all these things. It's just that none of them are particularly deep, or maybe one of them is deep, and the other ones are so-so." Do you get that question at all? Shane Vega: Ironically, no. We don't get that question. But it's a question most people should be asking, David, and I'll tell you that when it does come up, and it's only come up a handful of times, I'm always very candid about what we can't do as well as what we can do. And there is truth in the fact that we are software as a service, and so there are certain applications that still have roadmap features, candidly, that we're going to continue to augment and build out. As you could probably imagine, the top three or four of our use cases would be operations centers, digital signage, meeting rooms, and data dashboards. We do those very well. With experiential environments, we manage those artistic video walls very well.
Now, when you talk about experiential environments, there are some things that some folks might want to get involved with, but we might have to have some deeper conversations, right? And that really is around interactivity. Do you want multi-touch video walls, like in a museum for kids or something like that? That's where we have some roadmap items to help ensure that multi-touch is what people would expect, where you don't want to have the lag, you don't want to have any of those issues when people are trying to have that fun experience as a child or what have you. So there are certain features that are still roadmap items, but what I will bookend that with is, before coming over to Userful, I worked with one of the larger AV firms globally, and while I worked there, part of my interaction with customers was, "Man, I wish I could do more of these things with a single solution; I have to farm it out to so many folks." But more than that, I would have feature requests for the stuff that was out there, and it was always in one ear, out the other. I don't care which manufacturer it was. If I went to some of these larger manufacturers and said, hey, you really would benefit if you did this or this, it just didn't go anywhere. And then I had a similar conversation with Userful back in about 2018 at a trade show. I said, look, your software is good, but it really needs these four or five things to really be a competitor in the space that you're looking to deploy in, which at the time was operation centers. I'd say it was six months at the most. Within six months, I got a call from the then VP of Sales who said, "Hey, I want to have a meeting with you, Shane. We've incorporated all of your requests into our software," and that really pivoted my approach to looking at Userful as, alright, these guys are the future of AV, and, little FYI, we actually got that award at InfoComm, the Future of AV award. But the reason for that was, look, if we're going to be software as a service, then we have to prioritize feature requests from our customers above our own market research or our own gut check. That's part of my role here at Userful as VP of Marketing: I'm also over Product Marketing, which owns the roadmap, so I get involved in customer calls quite a bit, and I'll hear some of these features that, to your initial question, come down to "Hey, how do you go deeper with these applications?" I look for that feedback, and then I get to go back to the roadmap and go, "Hey, we need to prioritize this, this, and this feature. Push out the other features to the next release. Let's get these done because it's revenue dependent. We've got customers who would value this. Let's get it done!" We take that very seriously here at Userful, and we're at four releases a year, so you'll never have to wait all that long. So you referenced airports. I'm curious, in the context of third-party software development, if there's a software company that works in the airport realm but isn't doing digital signage or some of the things you do, but they want to visualize information on displays, is there an API or something that they could develop against to work with Userful, or does it have to be Userful development to add that capability on? Shane Vega: We have an entire program around the API. So we do have our own API; currently, it's a REST API, so we can receive tons of different messages and calls to trigger certain reactions within our software. But additionally, that's got its own roadmap in and of itself.
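As a rough illustration of what "calls to trigger certain reactions" can look like from a third party's side, here is a hypothetical TypeScript sketch of a REST request that pushes content to a managed display. The host, endpoint path, payload fields, and bearer token are invented for illustration only and are not Userful's documented API.

```typescript
// Hypothetical illustration only: the host, endpoint path, payload shape, and
// auth header below are invented for this sketch and are NOT Userful's
// documented API. It simply shows the general pattern of triggering an action
// on an AV-over-IP platform from third-party software via REST.

const BASE_URL = "https://av-platform.example.com/api/v1"; // placeholder host
const API_TOKEN = "replace-with-a-real-token";             // placeholder credential

interface PushContentRequest {
  displayId: string;    // which managed screen or video wall to target
  sourceUrl: string;    // the content to show, e.g. a dashboard or alert page
  durationSec?: number; // optionally revert to the default layout afterwards
}

async function pushContentToDisplay(req: PushContentRequest): Promise<void> {
  const res = await fetch(`${BASE_URL}/displays/${req.displayId}/content`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_TOKEN}`,
    },
    body: JSON.stringify(req),
  });
  if (!res.ok) {
    throw new Error(`Request rejected: ${res.status} ${res.statusText}`);
  }
}

// Example: an airport operations tool pushing an alert page to a wayfinding screen.
pushContentToDisplay({
  displayId: "terminal-b-wayfinding-03",
  sourceUrl: "https://intranet.example.com/alerts/evacuation",
  durationSec: 600,
}).catch(console.error);
```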
So we have our software application roadmap, and then we have our API roadmap, where we're going to be developing even deeper integrations and capabilities, including, but not limited to, possibly creating easy configuration tools for customers who can use our API to do whatever they want on-site. Are control rooms and operations centers the gateway, the initial point of contact, the thing that gets people interested, and then other things cascade out of that? Shane Vega: That has been our experience. We call that our land. So we're land and expand through our platform. Let's find the use case. Let's land where it makes sense, and then let's show the power of our expansion. Just because of how the company has evolved, operation centers have been kind of the tip of our spear, and it makes sense, because operation centers will use two or three of our applications out of the gate, right? They'll use the operation center software, they'll use meeting rooms for war rooms or situation rooms, and they'll also use our trends for dashboards and Power BI integrations, depending on what type of operation center it is, so they usually get value from several of our use cases and applications out of the gate. And if it's a large enough organization, and we're typically targeting LDOs (large distributed organizations), they'll have multiple operations centers, which gives us multiple points of connection, interaction, and engagement to open up opportunities to talk about the meeting rooms beyond your war room and situation room. Or some operation centers are fishbowls, where they want to bring folks into their data center and just use it as a showpiece to show their customers how well they manage their data, so they might have welcome screens outside, and we'll let them know, "Hey, we can manage those welcome screens for you as well," and that evolves into a larger digital signage strategy, corporate communications, and so on and so forth. These large organizations, do they have separate AV and IT departments, or are they pretty much folding into IT now? Shane Vega: So more and more, IT is taking over, but what's happening is it used to be that they had AV specialists on staff, and by and large, it was for the meeting rooms, and in some cases, the digital signage, where they had AV technicians or AV specialists on-site, and those guys were the gatekeepers who decided what technology gets deployed. Yeah, and get everything working before the meeting starts somehow. Shane Vega: Exactly. "Who's got HDMI? Who's got DVI?" So to that point, people keep talking about the convergence of AV and IT, and I don't know why. That convergence happened years ago. People are now starting to realize that because of that convergence, the IT organization or the IT departments within these larger organizations are going to be the ones holding the budget and the ones responsible for managing any AV resources on the network. And so, we have intentionally built our product to cater to those IT stakeholders in the organization. When you say things like, "Hey, you can centrally monitor the entire platform from a web browser," they really get that, right? When you say, "We're an IT solution, we're not an AV solution, which means we're not going to put all this IP-addressable hardware on your network," a lot of the walls come down from their security concerns. You then begin to tell them that, look, you can augment your role-based access control and integrate with LDAP.
Plus, we give you tools that are IT-specific to help you monitor things like: what is the impact on my network? What is my current CPU utilization, or what's my current GPU utilization on the server that we're licensing? We give them all of those tools built into our software. So it's not just AV end-user tools that we're giving. We're also giving those IT tools that help the IT stakeholders manage deployments, because we recognize these are going to be larger in scale. They're going to be responsible for a lot. Let's make it easy for them. When you talk about AV as a service, it's a term I've heard for a while, but you guys go at it quite a bit differently from what you're saying. Shane Vega: Yeah, we do, and Dave, I struggle with that, because we were flirting with the term AV as a service, and we started to use it quite a bit. But I know, coming from the integration world, that AV as a service historically meant we're going to just finance this stuff, right? We're going to get a leasing program, and we're going to build the hardware, the software, the services, whatever we can, into a monthly payment that makes it nice and easy for you guys. We approach it differently by saying we are software as a service that's for the AV industry. Therefore, we are AV as a service, meaning we don't have all that hardware that you have to purchase. You're truly able to deploy all of these AV use cases and manage an entire host of AV applications from within our platform. And we are software that you pay for on subscription, typically three-year plans. That's what we mean when we say AV as a service. It's exactly that: software as a service, which is the actual term, for the AV world. This strikes me as something that probably has a learning curve, as every software platform does, but is it almost something you kind of have to ease your way into? Shane Vega: Believe it or not, not really, and I think that would be more pertinent if somebody was wanting to say, "Hey, I want to use your entire platform right now." But as I said earlier, most folks are saying, "Hey, I want this operation center," and they're familiar with operations center software. They know what they want. They know they want to be able to build custom layouts. They want to manage big, beautiful video walls. They want to be able to interact with sources with soft KVM functionality, so that they're not just visualizing the sources but can engage with them, because they've got tools, right? They've got video management tools, and they've got access control, what have you, and so the software that we're providing isn't going to look and feel a whole lot different than a lot of the other software they're used to using. Now, we do it differently. So the real benefit, rewinding all the way to the beginning of this conversation, is, yes, we're giving you all these software applications and features, but it's the infrastructure that really differentiates us. Along with removing different hardware components from this kind of a network, you're also removing potentially different software applications that you'd otherwise need, because you've got this stack of different things you can do? Shane Vega: Yeah, exactly. To that point, Dave, when I showed this at InfoComm, when I gave my demos there: typically, when you deploy an AV solution, let's call it digital signage, since that's the background that you're most familiar with.
In digital signage, let's say you use it for corporate communications; you'll have screens all over the office. In some cases, they'll want to be able to integrate that digital signage into their meeting rooms as well, and when the screens are in standby mode, they want to be able to have some of those corporate communications as part of the digital signage strategy managing those meeting rooms. But when you go into the meeting room, they'll typically need some type of infrastructure to support those meetings and local collaboration. Usually, it's a network of AV infrastructure: HDMI cables, or what have you, going into some form of matrix switch, and some type of tablet controller that gives you the ability to manage which laptop is being viewed on which screen. With Userful, because the software does so much, the screens that we manage are not tied to any one specific application, and that's really the beauty of it. So I can walk into a room where they're showing corporate communications. I can sit down, open my laptop, and immediately start a meeting by screencasting whatever's on my laptop onto the screen in that room without connecting a single AV cable. I could then open up my operations center software on that same screen and turn it into an impromptu war room or situation room, where I'm pulling in multiple sources, building out customized layouts, and navigating through a crisis. So there are a lot of things that we can do, and it's not dependent on the screen, and, to your point, we've reduced not just the hardware need but the software as well. All right, Shane, that was super interesting. I know much more about this space than I did half an hour ago. Shane Vega: It's been great talking to you, Dave. I appreciate it.
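For the curious, the cable-free screencasting described in that conversation can be sketched with nothing more than the browser's standard WebRTC APIs. The example below is a minimal, generic illustration, not Userful's implementation; the sendOffer signaling helper is an assumption standing in for whatever channel, typically a WebSocket, carries the offer/answer exchange with the device driving the room's display.

```typescript
// A minimal, generic sketch of cable-free screencasting with standard browser
// WebRTC APIs (getDisplayMedia + RTCPeerConnection). This is NOT Userful's
// implementation; sendOffer() is an assumed signaling helper.

declare function sendOffer(
  offer: RTCSessionDescriptionInit
): Promise<RTCSessionDescriptionInit>;

async function castMyScreen(): Promise<RTCPeerConnection> {
  // Ask the browser to capture the local screen; no capture card or encoder
  // appliance is involved.
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });

  // The browser's built-in encoder handles compression; we just hand it tracks.
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // Standard WebRTC offer/answer negotiation with the receiving endpoint.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  const answer = await sendOffer(offer); // assumed signaling helper
  await pc.setRemoteDescription(answer);

  return pc;
}
```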
Allen Wyma talks with Kwindla Hultman Kramer, Founder and CEO of Daily, and João Neves, Staff Engineer at Daily. Daily provides SDKs for building video applications on top of the WebRTC standard using Rust. Contributing to Rustacean Station: Rustacean Station is a community project; get in touch with us if you'd like to suggest an idea for an episode or offer your services as a host or audio editor! Twitter: @rustaceanfm Discord: Rustacean Station GitHub: @rustacean-station Email: hello@rustacean-station.org Timestamps [@00:00] - Introduction to Daily [@05:00] - WebRTC implementation and sharing across different platforms [@10:31] - The challenges of integrating C++ with WebRTC [@19:16] - Signaling in WebRTC - Session setup and initial configuration [@22:45] - Challenges in implementing WebRTC standards [@27:21] - Handling and working around platform and browser differences when implementing WebRTC [@30:51] - Daily's mono repo approach for code sharing [@33:30] - The process of building and releasing code in relation to different platforms and dependencies [@35:57] - Integrating Rust, C, Objective-C, and Swift for iOS development [@37:20] - Daily's automated testing processes [@42:24] - Daily's network simulation layer in their testing process [@44:00] - The use of Rust in implementing network simulation for testing purposes [@49:15] - Using WebAssembly alongside native code in an application, and the potential obstacles to consider [@50:52] - Crates that are being used by Daily [@52:44] - What would differentiate Daily compared to other solutions? [@55:48] - Daily vs Zoom [@56:38] - Other open-source projects from Daily [@1:01:20] - Parting thoughts and how to get in touch with Daily Credits Intro Theme: Aerocity Audio Editing: Plangora Hosting Infrastructure: Jon Gjengset Show Notes: Plangora Hosts: Allen Wyma
Kwindla Kramer has always been interested in building things, and his parents gave him lots of opportunity. He spent his early days learning to program on the Commodore 64 and playing old games like Lode Runner. He was super interested in the internet while in college, and in 1996, he got the opportunity to be a part of the MIT Media Lab. Outside of tech, he is a quasi-vegan and enjoys the foggy beach living on the western edge of San Francisco. After exiting his last startup, Kwin took some time to figure out what he wanted to do next. During that time, he just started coding on projects and came across WebRTC, which allowed real-time communication for the web. This tech catalyzed a tipping point in his mind and led him to build video experiments on top of it. This is the creation story of Daily. Sponsors: Cipherstash, Treblle, CAST AI, Firefly, Turso, Memberstack. Links: Website: https://daily.co | LinkedIn: https://www.linkedin.com/in/kwkramer/ | Support this podcast at — https://redcircle.com/code-story/donations | Advertising Inquiries: https://redcircle.com/brands | Privacy & Opt-Out: https://redcircle.com/privacy
Serene is a hacker in the truest sense of the word. She's applied a hacker mindset to learning coding and piano, and to blending art and engineering in fascinating ways. You'll find her collaborating on-stage with Grimes one night and coding censorship-resistant technologies the next day. As a self-taught coder, she was the first engineer hired into Google Ideas when she was just a teenager. At Google she pioneered work on WebRTC proxies that she continued as a fellow at the Open Tech Fund and that was eventually released as a Tor-enabling tool called Snowflake. Serene took a hiatus from working as a full-time engineer to pursue a career as a concert pianist, where she quickly gained recognition for her incredible talent. She became one of the few self-taught concert pianists to perform Rachmaninoff's Piano Concerto No. 3 (which I highly recommend checking out on YouTube). Serene is also known for the audiovisual artistry of her shows, which draws on her own experience of synesthesia that results in her seeing music as colors. As the conflict in Ukraine started, Snowflake saw exponential usage patterns as Russian citizens looked to circumvent state censorship, and Serene decided to build a company around the technology to enhance development and build independent deployment models. That company is called Snowstorm. With Snowstorm, Serene is focused on saving cyberspace from balkanization and censorship and ensuring that all global citizens have unfiltered access to the Internet. In this OODAcast, we explore Serene's career and then dive into ways we can preserve the original intent of the Internet with censorship-resistant and privacy-enhancing technology stacks that can be easily deployed and scaled. Official Bio: SERENE is a concert pianist from a most unexpected trajectory. Though she never attended conservatory, her solo performances have been described by The Paris Review as a "spectacle to match the New York Philharmonic", and today Serene has become one of the most talked-about young talents in classical music, and beyond. Beyond concertizing, Serene enjoys other collaborations such as her role as composer for Kanye West's opera, premiered at Lincoln Center & Art Basel, as well as pianist and technologist with Blue Man Group's founder, bringing futuristic innovations at the intersection of music and technology while also highlighting her own audiovisual synesthesia. Previously, Serene was a computer scientist, Google engineer, and senior research fellow on various projects, before leaving to fully focus on the piano. In the brief years since, she has cultivated a disciplined, personal, and spiritual approach to her music. With her intersections of many disciplines, plus the "ability to enthrall audiences", she has grown an international following. Serene is one of very few self-taught pianists who've performed Rachmaninoff's Piano Concerto No. 3, which was described as "unprecedented" by the Liszt Academy. Serene loves sharing the beauty and power of classical music with all audiences, everywhere, in all venues ranging from the Vienna Musikverein, to a full orchestra in Golden Gate Park, to a decommissioned Boeing 747. Additional Links: Official Website Snowstorm Serene on Instagram Serene Rachmaninoff Concerto Book Recommendations: A Thousand Years of Nonlinear History The Making of the Atomic Bomb The Metamorphosis of Prime Intellect: a novel of the singularity Accelerando
Cloud Connections 2023 CPaaS Showcase "What's most compelling are the real-world stories about how CPaaS is being used," says Evan Kirstel, a leading thought leader, writer, podcaster, and influencer whose B2B practice has paid a lot of attention to the communications sector of technology. In this podcast, Evan Kirstel joins Kevin Nethercott, Managing Partner of the CPaaSAA, as they discuss what they will be looking for as judges in the upcoming CPaaS Showcase to be held at the inaugural Cloud Connections event next week in Fort Lauderdale. The event is being hosted by the Cloud Communications Alliance. Kirstel describes CPaaS as a technology that can deliver the kind of breakthrough solutions that AI is beginning to deliver across the technology field. We learn that the judges of the showcase are looking for real-world applications. "I think what you're looking to do is to empower innovators and developers to build things we don't have yet," says Kirstel, noting how apps such as Uber bring together several different technologies, such as location, payment, and SMS, into a seamless customer experience. Kirstel notes that WebRTC began as a novel idea, while now it has become "baked in," a path we might see with CPaaS. In this podcast series and at the CPaaS Showcase, we will not only learn about the importance of this technology but also the larger transformative picture of how it is set to revolutionize the way people communicate by voice, by video, and much more. Visit Cloud Communications Alliance Visit Cloud Connections 2023 Visit CPaaSAA Visit Evan Kirstel
XFCE 4.18 is out! OBS adds support for WebRTC, Pine64 talks about the PineTab 2, and no Raspi 5 in 2023.
TalkingHeadz is an interview format series featuring the movers and shakers of enterprise communications - we also have great guests. In this episode, we check in with Sid Rao, the GM of Amazon Chime SDK. I fondly remember Biba, a WebRTC video/messaging startup. There was a lot of excitement over WebRTC about ten years ago. Biba was acquired by Amazon and relaunched as the Amazon Chime app in 2017. Chime was a lot like Webex and Zoom, but worked entirely in a browser. Both Vonage and CenturyLink (now Lumen) were early resellers of the meeting application. The app worked well, but wasn't a big hit. Vonage moved to its own video solution, and CenturyLink turned to apps such as Zoom and Teams. AWS discovered that its customers were more interested in Chime's underlying WebRTC than the meeting app. Amazon created WebRTC APIs, effectively making WebRTC a service rather than a tech stack. This model was a better fit for AWS, so at its re:Invent conference in 2018, AWS launched the Chime SDK. Then came the pandemic and a boom in most things video. AWS customers and partners used Chime to create all kinds of solutions. K-12 education, for example, needed video for remote education. Chime obliged, and did so with simple hardware requirements for end users. Blackboard presented its success story at re:Invent 2021. As for the original Chime app, it's been getting better thanks to new capabilities of the Chime SDK. The Chime app isn't particularly popular, nor much marketed. It does well within Amazon, both as an internal meeting and chat app and with many of AWS's large customers.
What makes Google's new OS so secure, a critical Wi-Fi vulnerability in the kernel, and why Linus is tapping the hype brakes for Linux 6.1.
Tsahi Levent-Levi LinkedIn profile | BlogGeek.me website | Tweaking WebRTC Video Quality Blog Post --------------------------------------------------- Join our LinkedIn Group so that you can get the latest video insider news and participate in the discussion. Email thevideoinsiders@beamr.com to be a guest on the show. Learn more about Beamr
Mike Ryan (bluetooth.expert) joins us once again to talk SDRs, Bluetooth, and more! If you need some consulting help, you can find him at ice9.us. Here are some links to things we talked about: Episode with Jiska Episode with Michael Ossmann Toorcon Toorcon 13 Badge Ice9 Consulting Web of Make Believe on Netflix Caltrain MTVRE Hacking Electric Skateboards Video @ DEFCON23 Inspectrum Rapid Radio Reversing Talk by Michael Ossmann NRF24 Ubertooth CC2400 Yardstick One Waterfall display/plot OOK FSK URH Baudline GNU Radio Companion Fcc.io Alvaro's Quadcopter Reversing (github) SMC Connector RF Attenuator RF Splitter Natalie's WebRTC talk where the fuzzer "Fred" is mentioned WirelessUSB BLE Coded PHY HOGP (HID over GATT Profile) You Can Lose in So Many Colors HackRF BladeRF USRP Polyphase channelizer Wireshark Wireshark's extcap Kismet Dragorn Other Mike Ryans: Michael W. Ryan - Murderer Dr. Michael J. Ryan - Epidemiologist Dr. Michael J. Ryan - Paleontologist Have comments or suggestions for us? Find us on Twitter @unnamed_show, or email us at show@unnamedre.com. Music by TeknoAxe (http://www.youtube.com/user/teknoaxe)
What is more annoying than watching the big game, texting your friends about it, and then realizing they're ahead of you by 30 seconds? More than just being annoying, latency actually prevents certain business transactions, like remote betting or live virtual voting, from occurring. As an avid sports fan and former NHL Board of Governors member, Roy Reichbach, CEO of Phenix Real Time Solutions, immediately saw the massive potential of this company. Learn more about how the future for Phenix includes unlocking incredible experiences in the metaverse, as wild as you can imagine, including dancing on stage with Kanye. This is a special part of our Communications Series on #ITVisionaries. Tune in to learn more in this series about the frontier of connectivity, streaming, and communications from experts and executives at companies including Comcast, Zayo, and Axis Communications! Tune in to learn: How Phenix is solving latency issues in video streaming (04:45) | What re-analyzing and seeking to improve WebRTC looked like (09:23) | How Phenix is built to handle the wide range of video quality, up to 16K (14:02) | How Roy became involved with Phenix (24:29) | That Phenix has the speed to be able to run real-time virtual auctions with in-person bidders (29:24) | How Phenix is keeping up with the aggressive pace of innovation (32:56) IT Visionaries is brought to you by Salesforce Platform. Did you know, streaming now on Salesforce+, you can watch "Legends of Low Code"? In this series, three teams of Trailblazers race to build the best low-code app. Check this out... Mission.org is a media studio producing content for world-class clients. Learn more at mission.org.
What is WebRTC, and why do you want to use it? While at NDC London, Carl and Richard talk to Liz Moy about WebRTC, the open-source library that is used by many of your favorite video chat applications. Liz talks about taking advantage of the hard work already done to control video and audio devices through the browser, as well as the various strategies for actually connecting to other people through firewalls and NAT routers. The conversation also explores where and when you would want to have integrated video, audio, screensharing, and data transfer capabilities.
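Both themes in that conversation, controlling cameras and microphones from the browser and finding a path through firewalls and NAT routers, map directly onto standard WebRTC APIs. The sketch below is a minimal illustration and not code from the episode; the STUN/TURN URLs and credentials are placeholders, and the offer/answer signaling exchange is omitted.

```typescript
// Minimal sketch: getUserMedia handles device control, and ICE with STUN/TURN
// servers handles firewall/NAT traversal. TURN details are placeholders.

async function startCall(): Promise<RTCPeerConnection> {
  // 1. Device control: prompt the user for camera and microphone access.
  const local = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });

  // 2. Connectivity: STUN discovers the public address; TURN relays media when
  //    a direct path through the NAT or firewall cannot be established.
  const pc = new RTCPeerConnection({
    iceServers: [
      { urls: "stun:stun.l.google.com:19302" },
      { urls: "turn:turn.example.com:3478", username: "demo", credential: "demo" }, // placeholder
    ],
  });

  local.getTracks().forEach((track) => pc.addTrack(track, local));

  // Remote media arrives as tracks once signaling (not shown) completes.
  pc.ontrack = (event) => {
    const video = document.querySelector<HTMLVideoElement>("#remote");
    if (video) video.srcObject = event.streams[0];
  };

  return pc;
}
```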
Jonas Birmé LinkedIn profile | Eyevinn website | WHIP protocol | WHPP protocol | Streaming Media Sweden 2022 Presentation on WHIP and WHPP --------------------------------------------------- Join our LinkedIn Group so that you can get the latest video insider news and participate in the discussion. Email thevideoinsiders@beamr.com to be a guest on the show. Learn more about Beamr
Jeni Barcelos is the Co-Founder and CEO of Marvelous, the world's most beautiful all-in-one teaching platform. Chad talks with Jeni about what makes Marvelous different from other teaching platforms out there, the importance of elevating women to leadership positions, and why applying for and getting accepted into an accelerator program was the right path for the company. Marvelous (https://www.heymarvelous.com/) Follow Marvelous on YouTube (https://www.youtube.com/channel/UCJZ_kWoBcXJuSKP1iofXsAw), Instagram (https://www.instagram.com/heymarvelous/), or LinkedIn (https://www.linkedin.com/company/marvelous-software/). Follow Jeni on LinkedIn (https://www.linkedin.com/in/jenibarcelos/). Follow thoughtbot on Twitter (https://twitter.com/thoughtbot) or LinkedIn (https://www.linkedin.com/company/150727/). Become a Sponsor (https://thoughtbot.com/sponsorship) of Giant Robots! Transcript: CHAD: This is the Giant Robots Smashing Into Other Giant Robots Podcast, where we explore the design, development, and business of great products. I'm your host, Chad Pytel. And with me today is Jeni Barcelos, the Co-Founder, and CEO of Marvelous, the world's most beautiful all-in-one teaching platform. Jeni, thank you so much for joining me. JENI: Thank you for having me, Chad. I'm excited to be here and excited for our conversation. CHAD: So I'm really excited to dig into more about Marvelous as well. So why don't we start there? What makes Marvelous different than other teaching platforms that are out there? JENI: Marvelous is different in that we prioritize design, I would say more than any other competitive platform. And we also prioritize live events in a way that I think is pretty unique. So we started in the wellness space. We primarily are serving wellness creators, although all kinds of other creators use our tool as well. So we specifically built Marvelous with the goal of serving their unique needs, which involves a lot of teaching live classes and having a really great community space where students and clients can build relationships with each other. And then, because our audience has a particular design aesthetic and is non-technical, we've created the tool in a way that makes it really easy to make something look beautiful very quickly and simply. CHAD: So what caused you to differentiate yourself based on design? JENI: I think just personal preference and aesthetic, to be honest. As we were building the platform, I realized very quickly that people were choosing us...one of the big reasons people were choosing us was because of the simple nature of the user interface and because of the design that it produced. And so we decided really early to prioritize that. And I would say it's also just I care deeply about design, and I don't like the idea of using tools that make that an afterthought. And so I thought if I'm going to use it, and I do, I mean, we definitely dogfood our own platform and teach our own courses, we run our own communities there, I want it to look beautiful. [laughs] I want it to be a place that people enjoy spending time. We all spend more time, I think, than we want looking at screens. And so when you are going to engage in that practice and engage with other people on the internet, I think it's really nice to do it in a space that feels welcoming, and gentle, and beautiful. CHAD: So you have a co-founder, Sandy. Are either of you designers or have a background in design? JENI: Nope, not at all. Although I was one course away from minoring in art history in school. 
[laughter] No, I'm a lawyer. So I'm the opposite of a designer, although I think there's a part of me that thinks of myself as an artist. I wish that were my identity. CHAD: So, given the importance of design that you discovered, how did you go about executing on that? JENI: Hiring really great people, I would say, and having a really critical eye. And so, there's a tremendous amount of feedback that goes into our process. And now we have a head of brand in our company, and she can hold space for that design across both marketing and within product. So that hire, I think, has been critical for us to be able to maintain that as a priority. CHAD: Where were you in the product life-cycle and business stage where you were able to hire really great people? JENI: I would say within the last two years. So we are one of these startups that was in the right place at the right time when COVID hit. So luckily and unluckily, maybe we grew really fast in the wake of the pandemic, the beginning of the pandemic. And so that positioned us to hire pretty rapidly over the last two years. And that's when we really had the resources and the capacity to bring in that level of talent. I'll say our creative director was working with us for many years before that but just in a part-time capacity so, you know, running her own agency. And we were hiring her out as we could because we were bootstrapped. And so it wasn't until we reached a certain level of growth where we could bring her on as a permanent fixture in the company. CHAD: Yeah, that's often a way that I hear from founders to help get things off the grounds, particularly if you know of someone and would love to work with them and you know what they can produce, but you just can't afford to bring them on full-time as a member of the team at that point. Contracting with them, working with them part-time can be a great way to get that started. So let's rewind even further and tell me about the fateful day where you and Sandy first met each other. JENI: Yeah, so we first met in Colorado Springs at an entrepreneurship event. And it was for an online program we had both been in that was teaching us how to start a SaaS company. And we are two of the only people that had actually done it and gotten to paying customers within six months, which was a pretty audacious goal, I would say to go from being non-technical and having no experience in the startup world to having a product or at least an MVP with paying customers within six months and no funding. So we were 2 of maybe 5 or 6 people out of 550 in the program who did that. So we automatically kind of gravitated towards each other. And we were two of the only women also in the program. So we met at that event and got to know each other over the course of a number of days and retreated together. And then we just started really being accountability partners to one another as we were each building our own companies independently. And then that went on for another year, year and a half before Sandy joined my company. CHAD: Okay, so you were already working on Marvelous? JENI: Yeah. It had a different name at the time, but yes, I was six months into it when I met her. CHAD: How did you convince her to do that? JENI: We were actually growing...for a solo founder, we were growing to the point that I couldn't manage the company really on my own anymore. And so I applied to an accelerator just because I felt like I had other kinds of career experience but had, again, zero background in tech startups. 
And so, I came from teaching at a law school and building and scaling a nonprofit. And my background was in politics prior to that. So I had no idea what I was doing. Like, I didn't know what I didn't know, and so that thought scared me. And I just wanted to go immerse myself in an environment where I could ask a lot of questions and have access to resources and mentorships. So anyway, I applied to an accelerator and got accepted contingent on having a team and having co-founders. And I was like, [laughs] well, that's why I need to come here because I don't know how to do that. Like, I don't know how I would do that. And so I reached out to Sandy because I mean, she had been more closely involved in the company than anyone else because we were constantly working together online and going on Zoom and helping each other build our companies. She knew more than anyone else really what was involved. And she was always commenting how she wished she had started [chuckles] a company like this because it's just in the sector. Her background is in clinical wellness, and Marvelous really was serving yoga and wellness teachers. And so I said, "Well, why don't you come on board? I need to have at least one co-founder." Well, I was told I needed to have two, but I convinced them actually to accept me into the program with just one. So Sandy and I went into the accelerator together. CHAD: I feel like that's a great sign that you were able to convince them to bend the rule. [laughs] JENI: Yeah, I mean, I think that's actually my MO in life. So I also applied and got into graduate school at Yale without taking the GRE. So I have a history of these kinds of convincing arguments, I guess. CHAD: [laughs] JENI: And I'm a lawyer, right? So I was made for this. [chuckles] CHAD: Yes. You sound like a very enjoyable person, though. So I find it hard to believe that you're... [laughs] No, I'm kidding. JENI: I'm a human rights lawyer. So the only person I've ever represented in court was a pod of killer whales. So I'm a human rights and environmental attorney. So I'm not a corporate attorney by any means. CHAD: So some people might describe going after things, bending the rules as ambition. And I was reading some of the things that you've written, so I'm not pulling this out of thin air. But I know that you talked before about how sometimes ambition, particularly from female-identifying people, can be seen as a problem. Why is that? JENI: I mean, the short answer to that is the patriarchy of which we're all a part. Both men and women and non-binary people are all impacted greatly by the patriarchy. I mean, I think it's how girls are socialized. So that's a whole, I think, a whole other podcast conversation to have. But I mean, just even recently, I have a young daughter, and she was told not to raise her hand as much in school because she was so eager and raises her hand for every question that's asked. And that's unacceptable to me. But I was also told those things. And I think just men and women are judged very differently in our culture, and that's just a fact of life. I mean, just look at, I mean, this is maybe opening up a can of worms. But if you just look at the way the Elizabeth Holmes trial played out versus so many other startup stories, and yes, there are differences, but it's really common in our culture to villainize female ambition and to look at it as problematic. CHAD: Yeah, you're absolutely right that this is a whole podcast topic in and of itself, but I think it's an important one. 
But I'm curious; it can feel angering and powerless when something like that happens at school or in a system where it's very hard to control it or change it. But when it comes to your own company, you are in charge. So what have you done to try to make this different at Marvelous? JENI: Well, I would say elevating women to leadership positions to the extent that we've been able, I mean, we're definitely a female-run company. We make decisions. Also, just even the way we provide benefits and salaries, it's in my mind done from a more holistic standpoint than I would say a lot of other small startups are doing. We prioritize people and their families and try to treat people like human beings versus just kind of pawns in our scheme to build a company, I would say. It's not perfect. But I really think that so much of what goes on around kind of the women in tech stories so much of that and the women in fundraising stories has to do with women or non-binary people really having to prove themselves to a degree that is unrealistic in order to have the same treatment or the same opportunities as white men. So we are obviously very acutely aware of that. And so, in our own company, we're still very small but always trying to elevate the opportunities that women and people of color have in our company. CHAD: And as you said, this permeates. It's systemic. And so, what might you do when you have a male manager with the best of intentions in a female-led company? I'm of the opinion that it's not enough to just assume that, oh, well, in that environment, this stuff won't happen because it is so ingrained. So are there other things that we can do as founders, as people, and leaders, as a company to create an environment where it's better for everybody? JENI: I definitely don't have all the answers for this. But I would say we've put a lot of attention into coming up with core values that we really strongly affirm and reaffirm in the company and make sure that everyone is aware of those. I also just I'm constantly watching what's going on and noticing subtle cues when people start to pull back from contributing or some voices are much louder than others; just trying to notice that and not wait for something to be brought to my attention. I think so much of it is also the culture, and it's hard in a situation like ours where we're a fully remote team with people across the world with their own different, you know, they're bringing their own cultures and their own values to the company. I mean, it's definitely hard. It's harder than I ever would have expected to get people on the same page. And, I don't know, I don't have really good advice other than to say the founders should really agree on the core values. And then, those core values should be shared constantly. And I think it starts with the founder or the co-founder and the leadership team holding everyone to those values and standards. CHAD: So as someone who, like you said earlier, you're not a software developer, you're not a designer, yet you are working on this idea and bringing it out into the world. How did you manage to do that? What did the very early steps look like for you? JENI: So I started a company when I was essentially on maternity leave. I naively thought that that would be like a fun, little hobby project for me to start a tech company. Partly because I was spinning off what had really been a research project that was funded and incubated at a major university. 
I was spinning that off into a nonprofit with another co-founder, another lawyer. And I spent a great deal of time and energy fundraising and was constantly going and having meetings or drinks or dinner with people or even flying out to different foundations and meeting with donors. And I was like, this is for me who...I have a body of work and have developed expertise as a climate change expert. And that word is problematic but, you know, someone who knows quite a lot about climate change in the law, edited one of the major textbooks in the area, taught some of the first curriculum on energy and the environment. I was constantly having to just go out and raise money all the time and mostly from people who had cashed out of tech companies. So I was in Seattle at the time, you know, Microsoft and Amazon. There are a lot of people with significant wealth, and those are the people that are donating to organizations. And so I just thought I'm as smart and capable as these people. Like, why don't I have a revenue source that helps to fund the work that I want to do in the world? Which was a lot of human rights law and environmental law that's really underfunded really at that intersection of climate change and human rights. And so I thought, well, I'm on maternity leave. I'm really interested in the wellness industry. I see it really being broken. I had gone through yoga teacher training like right at the tail end of law school just for my own mental health and wellbeing. And I just saw all of these friends and colleagues of mine struggling, and I just thought something wasn't quite right. So the first thing I did was I decided I'm going to try to build something to help them. And I set out to interview 75 yoga studio owners or managers in North America and did some research on the biggest markets at the time. CHAD: Why 75? JENI: Because I just thought that was a good number. If I talked to 75 people, I'd be able to have some good information. And I will say I had just come off of a couple of major projects where I had put together a big international conference in my field in climate justice. And I had also put together sort of a retreat of leaders in the field of scenario planning around that. So I had really learned a lot and elevated my real career at the time by reaching out to people who I thought were thought leaders and experts in the field across different disciplines and having really honest, frank conversations and interviews with them. And I had been able to essentially tease out an entire field of work for myself from doing that. So I brought my research and academic skills to bear, which was like, if I talk to enough people, I'm going to start to find some patterns. And I was curious, like, I couldn't quite understand. This wellness industry had been growing for over a decade at that point. It was this massive industry, and yet no one I knew who worked in the industry made any money. And I thought that was really weird. And I'm like, why is all the money going to apparel companies or a couple of big brands? Something is really broken in this model. I don't think the technology...tech hadn't really arrived to wellness at that point. I came up with the idea to start doing this at the end of 2013, and so it was a long time ago. So I was just like, I'm going to talk to as many people as I can who are running businesses in the major metropolitan areas that are big yoga centers and just see what I can figure out. And so that's what really started it. 
And then, the idea for what was then called Namastream and is now called Marvelous came from those conversations. And it wasn't long; it was maybe six weeks into the research that I really started to...like, there were three or four ideas that I thought, well, here are product ideas that could really make a difference. And so what I did is I sent out about 200 cold emails, and I had 74 interviews from that. And then I agreed to create a report like a white paper because, again, this is what I knew how to do as an academic is like I'm going to do a bunch of interviews, and then I'm going to write a report about it. And I'm going to share it with people. So I agreed to share the research with everyone who agreed to an interview. And so I think that's part of why they agreed to talk to me. So yeah, so that's where the idea came from. And then again, I had no background in tech. I watched some trainings on how to do UX design, I think, like YouTube videos and stuff, and then I just did it. And I made the first prototype like a clickable prototype in Keynote because I knew how to use Keynote. CHAD: Yeah, that's so great. Talking to people, using whatever tool you're comfortable with, Keynote, PowerPoint, that kind of thing to do clickable prototypes that's the exact kind of thing I encourage, we encourage our thoughtbot early-stage founders to do. So you were spot on. I don't know if you realized it at the time. But that's really great. What problem did those 74 or some strong subset of them have that streaming helped them with? JENI: It was really interesting because there were a couple of studios. There were two studios at the time in Southern California that were doing this, and the bigger studios in these other major cities knew that. And so because there were so few, they were very well known way back. I mean, most of those conversations were in 2014 that I was having. Some of the studios, I mean, one of them is still a major company now. You know, most studios had like 2,000 members. Like, a studio that I would interview had 2,000 customers on their list, like, possible customers. Some of those were people who were drop-ins or punch cards or whatever. And then the studios that were streaming had like 30,000 customers. And so that was starting to be known. And people had no idea how to do any of that themselves. And so the problem that we are solving was when I would interview studios in the Boston area because that was one of the metropolitan areas I targeted; there were certain days out of the year that for snow closures like the studio would just lose all their revenue that day. In the south, there were studios that were impacted by hurricanes that were trying to figure out...and I'm a climate change lawyer, so I see this trend. I was looking at it from a disaster response scenario planning lens which was this is only going to get worse. I had no idea about the pandemic. CHAD: Little did you know. [laughs] JENI: Right? [laughs] I just thought like, wow, okay, the hurricanes are increasing in severity and duration. That's not going to change. Sea level rise is happening. Storms are becoming more unpredictable. Like, places that are cold are getting colder and have more snow on average. So all these people were complaining about lost revenue for these cataclysmic weather events. I just thought that as being a huge opportunity for a solution. So that was one reason why this idea really stood out to me. And then also just knowing that...this actually goes back again to my own story. 
I used to work for Al Gore. I was one of the people that led his environmental outreach on his presidential campaign when I was a teenager. And then I ended up being one of the first people trained to give the climate presentation that he made famous in An Inconvenient Truth. And so, I had been developing presentation materials in my legal and academic career that I was sharing with that organization. And I had to figure out how to record myself and then try to get it on a thumb drive and then send it to Nashville so they could watch it and learn how to present the slides that I was making. And so I actually had this very different use case where I was like, it was really hard. I was on the board of another nonprofit that was bringing together environmental leaders once a year to learn new training materials and then go back out across the world to disseminate them. Again, the same thing. I was like; there is going to be such a need for some kind of streaming tool that's accessible by whoever wants to use it to be able to share knowledge and information. So both as a business tool, but it also kind of scratched this other itch that I had seen in my previous career, like, the career I thought I was taking a short break from. And so I just was like, this is the future. And I had moved during this time across the country for my husband's job and had a new baby. And I missed my own yoga teacher. And so I thought, like, wow, I'm in North Carolina in this small town, and I really miss the Seattle community. And I miss my teachers there. I wish I could take these classes. So for all of those reasons, I saw this as being a trend that wasn't going away, and that was only going to be more in need. And it was really early adopters at that point, like definitely not 74 studios telling me they needed this. But it was a big enough chunk of early adopters that I thought this is when you get to make something new that changes the industry. Mid-Roll Ad: I wanted to tell you all about something I've been working on quietly for the past year or so, and that's AgencyU. AgencyU is a membership-based program where I work one-on-one with a small group of agency founders and leaders toward their business goals. We do one-on-one coaching sessions and also monthly group meetings. We start with goal setting, advice, and problem-solving based on my experiences over the last 18 years of running thoughtbot. As we progress as a group, we all get to know each other more. And many of the AgencyU members are now working on client projects together and even referring work to each other. Whether you're struggling to grow an agency, taking it to the next level and having growing pains, or a solo founder who just needs someone to talk to, in my 18 years of leading and growing thoughtbot, I've seen and learned from a lot of different situations, and I'd be happy to work with you. Learn more and sign up today at thoughtbot.com/agencyu. That's A-G-E-N-C-Y, the letter U. CHAD: So you have the clickable prototype, and you feel like this is something here. What was the next step for you? JENI: So I went back to everyone that had validated that idea, and I tried to sell it to them. [laughs] I had people PayPal.Me money in order to build it. I said, "You can become a founding customer, and you'll get access for a year included and with a payment." I think I was charging people; I don't know, a couple of thousand dollars. I don't even remember. It was a long time ago. It was under $2,000. "So you can get in now. 
You'll have a 20% discount on anything we make forever. If you want to participate, you can be in this founding member circle with me. And as it's getting developed, you can provide feedback and help to shape exactly what it becomes." So I had enough people throw some money into the pot PayPal.Me money. I also had no idea how to take money from anyone. It was sort of like pre-Stripe being a normal thing. So I just had random people PayPal.Me money. And I took the money, and I hired a developer to build a prototype. CHAD: How did you find that developer? JENI: That was hard. So through this entrepreneurship course that I ended up meeting Sandy in, there was somebody who was a classmate, sort of like a mentor-level classmate who had done the course before who was an engineer at Microsoft. So I asked him to help me. I reached out and asked him to. And if I hadn't reached out to him, I would have reached out to other developers that I knew, just people in my extended circle of friends and stuff. CHAD: So how long did it take from that point to get a product that people could actually use for their classes? JENI: Early use, I would say like four months. So it was very, very, very beta. It's humiliating what it was. CHAD: It should be. It should be humiliating. JENI: Yeah, it should be. So it was like kind of a WordPress plugin. It was basically a glorified WordPress plugin, and that took about four months. And so I onboarded our early adopters who had given me the money, the checks, the PayPal money. That was the beginning of the summer that year. And I said, "You're the only people that are going to have access to it for the first three months," and that was part of the deal. So I really worked around the clock helping them, working with my developer to solve any problems that were coming up, making changes. And then, in September, about three months later, I just figured out how to run Facebook ads. [laughs] So I just made up Facebook ads that ran to a one-on-one demo and let people book one-on-one demos with me through Facebook ads. CHAD: So in those early days, if you had to do it over again, is there a lesson you learned that you might do it differently? JENI: I don't think so, honestly. I feel like that first year; I feel pretty good about everything that I did. I mean, obviously, it would have been great to have someone like Sandy come in early on. But, I mean, I needed to figure some stuff out that I didn't need another person around to figure out, I guess. And I guess now, in like 2022, we're having this conversation. I wish I had dumped money into Facebook ads to have more demos because they were so cheap. [laughter] CHAD: Right. Right. JENI: It was so cheap for me to run ads. It was the golden days of online advertising, I guess. So I was probably paying like 40 cents a lead or something. [laughs] So yeah, maybe that. Because I was very much not wanting to put my own money into it. Like, once I raised the money to fund the prototype, I think I maybe in that first year put in like seven grand or something, but by the end of the year, I paid myself back. CHAD: That's great. JENI: I could have maybe put in a little more money. But I didn't know if it was going to work. Like, I wasn't really willing for...again, I wanted something that was very validated. And I expected fully to be going back to my career as soon as I got this thing launched. I was like, oh, this is just like a side hustle. This is going to be passive income. 
I did not understand that building a software company was not at all passive. I think I really bought into this idea that, like, oh, it's the internet. It will be passive income. CHAD: When did it become clear to you that it wasn't? JENI: Oh, I mean, I would say, you know, so I had a moment to go back to work, not my exact same job but to do work in my field between one month and two months after I launched and started running ads. I turned down a really incredible job offer, and I think that's when I knew it. I was like, if I take a job, if I go back to work full-time...and I have the kind of career that's all-encompassing, and I sort of, whatever, I'm going to give 150% to whatever I'm doing. And so I knew that this thing would kind of die, but it was taking off. And so, I think I knew at that point I had to make a choice. CHAD: And is that when you decided to apply to the accelerator? JENI: No, it was like another nine months of growing it on my own before I applied to the accelerator. And I just kept doing what I was doing, and it was working. But I was doing things that didn't scale, so that was the problem. And so I didn't know, I mean, there's only so many one-on-one demos that the founder can do before you start to realize [laughs] you need to make some changes because I was doing them around the clock. And then, at some point, I switched to webinars. I taught myself how to do webinars. And so then I was trying to do demos to multiple people at a time, but also, I didn't understand email marketing. I didn't understand copywriting. I was figuring everything out myself as I went, and I was burning out. So that's when I decided, like, oh, not everyone does this. I can't grow Microsoft or Amazon by doing this. I can't become that company. So obviously, I need to figure out how to scale. So that was when I decided to apply to the accelerator. CHAD: Why apply to an accelerator as opposed to start fundraising, for example? JENI: Oh my God. Well, so the whole reason I did this was so that I didn't have to go out for drinks with people and ask them for money. [laughter] I mean, I was not interested in that at all. And then I soon realized that that's what happens when you join an accelerator [laughs] is you basically just start learning to fundraise. But I didn't know that. I knew nothing. And so, I knew nothing about startups. I knew nothing about anything like this. I literally had no idea. So the idea of going and sending emails to wealthy people to go have a drink with them was actually the last thing I was willing to do. And I don't think any VC fund would have met with me. I didn't even really know what that was. Like, I had no idea. So it didn't strike me as something that there was a lot of money that somebody would pour into it. That, honestly, wasn't even an option to me in my mind at that time. CHAD: How did the accelerator help you? JENI: It helped me bring on a co-founder, which I would say was invaluable. [chuckles] And I learned a lot. I mean, I'm a person who's super curious and asks lots of questions. And so there was always somebody I could ask questions to, which was great, saved me a lot of time. And I also got to be in a cohort with other founders and see how they were growing their companies. So if you've never been around that stuff before, it's super helpful, I think, to just learn what other people are doing, like what other models there are, what other teams look like. And I also realized, like, we were one of the only companies that had any revenue. 
I had no idea how we compared to anything else. And so I realized, oh, we're growing, and we're making money, and we're profitable. And it's really different than what a lot of other people are doing. So I knew that there was something to it also. I knew that we were really onto something. And then I will also say that fast forward a number of years, and our leader, one of the directors of our accelerator, I ended up hiring him to be our Chief Product Officer. So that was also very fortuitous [chuckles] and really an amazing story and outcome as well. CHAD: Did you end up raising money coming out of the accelerator? JENI: Nope. We soft circled around and had an opportunity to take an additional tranche from the accelerator, and we walked away from that at the time. And it was a really hard decision. Mostly because Sandy is Canadian, I don't know if that was made obvious, and I'm American. And we never envisioned wanting...like, building a remote company still in 2016 was not normal. And there was no way we were going to be in the same place. And the potential investors we were talking to, one of them in particular, was pretty adamant that we needed to be in person and have an actual office set up. And that was not negotiable for us. And so we had been doing this fine. I mean, we were fine building a company together. And our first developer was in Asia, and then our designer at the time was in another part of the United States. So I was like, why would we do that? Why would we spend money and have to buy things like a fax machine and chairs? Why would we do that? CHAD: [laughs] JENI: That doesn't make any sense. And so that was one kind of red flag for me. And then also in the accelerator, I pitched to tons of people because you're sort of pitching, but also it's kind of practice. And I don't know how much of that was actual pitching. And I don't know how many of those investors were actually considering making investments, or they were just being nice and giving you their time and feedback. But I pitched a ton, and the only people that we had soft circled were women. And I just had some negative experiences with some of the investors that we had pitched to, which that's also another podcast episode. And I was really bothered by, in particular, one conversation that I had. It was like a situation where someone said something really inappropriate to me, and I just absolutely did not want to do that. So that all factored in. CHAD: Have you ever taken any investment? JENI: Yeah. So this year, we have...because we have a situation where it makes sense to pour fuel on our...it made sense from a marketing standpoint to pour some money in. So we've just taken a small investment from angels, and we may take a little more as well. We're open-minded, I would say, right now about fundraising. I have, in the last two years, taken a lot of meetings. So I've talked to lots of firms and lots of angels and get emails every day and so take a number of those meetings. So I've just tried to be really open-minded about it. So yeah, I would say I don't have such a negative association as what I had before. But I also would say my company is in a really different position now, and fundraising means something else to us. CHAD: It sounds like you're in a little bit more control over the situation. And by working with individual angels probably, you're able to maintain that, I would guess. JENI: Yeah. 
And it's definitely something where I think that there are...you know, I don't think it's helpful to be closed off to fundraising because I see that there are absolutely opportunities especially to go into new markets where being bootstrapped isn't practical because of the cost to go into those markets. And so, if it's something that's heavily regulated, for example, it's not a feasible option. So allowing us to have options and actually to be able to think through those options is important to me. CHAD: So now that you've done that, what's next for Marvelous? What's the next challenge you're ready to tackle? JENI: I would say we had this tremendous growth early in the pandemic, which really kind of unearthed, not really unearthed, I mean, I knew it was there [laughs] but really publicly unearthed a lot of technical debt. And that's, I think, normal for bootstrapped companies as well who are growing slowly and up to a point that they're not anymore. And so we spent a solid 18 months, I would say. Up until the end of 2021, there was a solid 18 months of really rebuilding the platform from the ground up. And so we've done that, and now we're in growth mode. We're focusing on letting people know that we exist because I think we're quite well known in the wellness industry and in the yoga space in particular, but we're not as well known outside of that in other creator niches. And so it's about brand awareness. It's about really showing up as thought leaders in the space as well. We do a lot of writing and a lot of blogging and podcasting. And in particular, we serve women and non-binary creators in a way that I think no one else does. And so it's about disseminating the information that we have and the teachings that we have and letting people know we exist, and we're a resource for them. CHAD: Yeah. Well, like you said, you have a strong reputation, and you have those roots in the wellness space, but you've expanded beyond that. If someone's out there listening, what would make them a potential customer or an ideal customer for Marvelous? JENI: So anyone who's teaching, training, or coaching online, the software is really industry agnostic. And so we're just, again, not as well known yet in those other spaces. But especially someone who's integrating any cohort-based learning, or really heavily integrating coaching and live streaming, or group programs, or one-on-ones into an online course or a membership, for example, Marvelous is really second to none with all of that. Because again, live streaming and the integration of live teaching with on-demand content was what we started with and what we are known for. And so it's not an afterthought the way that I think a lot of online teaching platforms and edtech companies have slapped on live streaming as like, oh, now you can integrate with Zoom or whatever. And for us, we have an integration with Zoom that's not like anything else. And then we have other WebRTC-based live streaming options, and everything is very well thought out and makes it really easy for the end-user so for the students and clients to be able to use the tool, which I think our audience really cares about that it's easy for their clients. CHAD: I'd be remiss, and since this is a podcast, if I didn't mention that you and Sandy have a podcast. JENI: We sure do, yeah. Thank you so much. [laughs] CHAD: It's called the And She Spoke Podcast, and where can folks find it? JENI: So obviously anywhere that they listen to podcasts, but our website for the podcast is andshe.co.
So I would love it if you're interested in conversations about women in tech, female founders, women, money and power, online business resources, and training. And that's mostly what we talk about. We're doing a crypto series right now, sort of like exploring crypto and the intersection of women and crypto, so that's going to be coming out shortly as well. CHAD: Cool. If folks want to try out Marvelous or find out more or get in touch with you, where are all the places that they can do that? JENI: So our website is heymarvelous.com. And we are @heymarvelous on Instagram. That's where we hang out the most. But we're also on TikTok, and Pinterest, and Facebook, and pretty much everywhere else as well. But Instagram, I would say, is the best place. CHAD: That's great. Jeni, thanks so much for joining me and sharing your story and your advice. I'm sure people will really appreciate it. JENI: Yeah, thank you so much for your time, Chad. I appreciate you having me. CHAD: And you can subscribe to the show and find notes for this episode along with a complete transcript at giantrobots.fm. If you have questions or comments, email us at hosts@giantrobots.fm. And you can find me on Twitter at @cpytel. This podcast is brought to you by thoughtbot and produced and edited by Mandy Moore. Thanks for listening, and see you next time. ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success. Special Guest: Jeni Barcelos.
Craig and James beg your forgiveness, as the podcast has been on hiatus for a while. But with guests from Bluewave Technology Group and Vodia, they are back, like Die Hard 3, with a vengeance. Well-known channel figures Curt Allen and Eric Brooker are new to Bluewave, but not to the channel. These gents talk with the guys about the unique business model the company offers and the flood of investment money coming into the industry. Next up is Christian Stredicke, president and CEO of Vodia Networks. You might know him best as the founder of Snom. Stredicke will discuss WebRTC and other technologies impacting the IP PBX market. And it wouldn't be a podcast without a discussion of Channel Partners events. We'll have a preview of Channel Evolution Europe, less than two weeks away. Moreover, the MSP Summit in September will be here before we know it. And we're introducing the inaugural Channel Partners Leadership Summit, which we are co-locating with that event. All that, plus the 2022 MSP 501 reveal is almost here. We'll get you excited for that and tell you about a webinar where you can get a sneak peek of the top winners.
NVIDIA is open-sourcing their GPU drivers, but there are a few things you need to know. Plus, we get some exclusive insights into Tailscale from one of its co-founders. Special Guests: Avery Pennarun and Christian F.K. Schaller.
When you are teaching someone web development skills, when is the right time to start teaching code quality and testing practices? Karl Stolley believes it's never too early. Let's hear how he incorporates code quality into his courses. Our discussion includes: starting people off with good dev practices and tools; linting; HTML and CSS validation; visual regression testing; using local dev servers, including HTTPS; incorporating testing with git hooks; testing to aid in CSS optimization and refactoring; Backstop; Nightwatch; BrowserStack; and the three-legged stool of learning and progressing as a developer: testing, version control, and documentation. Karl is also writing a book on WebRTC, so we jump into that a bit too. Special Guest: Karl Stolley.
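As a concrete illustration of the git hook idea from that episode, here is a minimal sketch of a pre-commit script, written in TypeScript for Node. It assumes a hypothetical project that already has ESLint and Stylelint installed; wiring it into .git/hooks/pre-commit (or a hook manager such as Husky) is left to the project.

```typescript
// pre-commit.ts: a sketch of running linters before every commit.
// Assumes eslint and stylelint are available in the project (hypothetical setup).
import { execSync } from "node:child_process";

const checks: Array<[string, string]> = [
  ["ESLint", "npx eslint ."],
  ["Stylelint", 'npx stylelint "**/*.css"'],
];

for (const [name, command] of checks) {
  try {
    console.log(`Running ${name}...`);
    // Inherit stdio so lint output shows up directly in the terminal.
    execSync(command, { stdio: "inherit" });
  } catch {
    console.error(`${name} reported problems; aborting the commit.`);
    process.exit(1);
  }
}
console.log("All checks passed.");
```

Failing the commit on lint errors keeps feedback as early as possible, which is the spirit of starting learners off with good practices.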
Lorenzo Miniero of Meetecho and the Janus WebRTC project speaks with Jonathan Bennett and Doc Searls on FLOSS Weekly about securing WebRTC streams with encryption and Secure RTP (SRTP). For more, check out FLOSS Weekly: https://twit.tv/floss/674 Hosts: Doc Searls and Jonathan Bennett Guest: Lorenzo Miniero You can find more about TWiT and subscribe to our podcasts at https://podcasts.twit.tv/
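For a sense of what that encryption looks like from the application side, here is a small sketch (not from the episode): browsers negotiate DTLS-SRTP automatically for every RTCPeerConnection, and the negotiated ciphers are exposed through the stats API. The dtlsCipher and srtpCipher fields come from the spec's RTCTransportStats dictionary; the cast is only there because TypeScript's bundled DOM typings may not declare them.

```typescript
// Log the DTLS and SRTP ciphers negotiated for an already-connected
// RTCPeerConnection. WebRTC media is always encrypted; this just surfaces
// which ciphers the DTLS-SRTP handshake agreed on.
async function logNegotiatedCiphers(pc: RTCPeerConnection): Promise<void> {
  const stats = await pc.getStats();
  stats.forEach((report) => {
    if (report.type === "transport") {
      const transport = report as any; // fields may be missing from lib.dom typings
      console.log("DTLS state:", transport.dtlsState);
      console.log("DTLS cipher:", transport.dtlsCipher);
      console.log("SRTP cipher:", transport.srtpCipher);
    }
  });
}
```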
Lorenzo Miniero of Meetecho, author of the Janus open source WebRTC server, joins Doc Searls and Jonathan Bennett for an informative conversation regarding live streaming and broadcasting, new and better ways to bring in network devices, OBS, virtual and hybrid events, WHIP, the open source future of real-time communication, and much more. Hosts: Doc Searls and Jonathan Bennett Guest: Lorenzo Miniero Download or subscribe to this show at https://twit.tv/shows/floss-weekly Think your open source project should be on FLOSS Weekly? Email floss@twit.tv. Thanks to Lullabot's Jeff Robbins, web designer and musician, for our theme music. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: kolide.com/floss Compiler - FLOSS
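WHIP, one of the topics in this conversation, reduces WebRTC publishing to a single HTTP exchange: POST an SDP offer to an ingestion endpoint and apply the SDP answer that comes back. A minimal browser-side sketch might look like the following; the endpoint URL is hypothetical, and trickle ICE, authentication, and teardown via the returned Location header are omitted for brevity.

```typescript
// Publish the local camera and microphone to a WHIP endpoint.
// Simplified: the offer is sent as soon as it is created, with no bearer
// token and no error handling.
async function publishViaWhip(endpoint: string): Promise<RTCPeerConnection> {
  const media = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  const pc = new RTCPeerConnection();
  media.getTracks().forEach((track) => pc.addTrack(track, media));

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  // WHIP: one POST with the SDP offer; the response body is the SDP answer.
  const response = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/sdp" },
    body: pc.localDescription!.sdp,
  });
  await pc.setRemoteDescription({ type: "answer", sdp: await response.text() });
  return pc;
}

// Usage, assuming a WHIP-capable server at a hypothetical URL:
// publishViaWhip("https://example.com/whip/room1");
```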
When it comes to personal influence, writing a book is always near the top of the list. Among the "Three Imperishables" (三不朽), the classic expression of traditional Chinese values, establishing one's words (立言) is treated as both profoundly important and profoundly difficult. As a key component of a personal brand, publishing a book is a topic that cannot be avoided, so we decided to talk about it seriously. Today we invite Li Chao, author of 《WebRTC音视频实时互动技术》 (WebRTC Real-Time Audio and Video Interaction Technology), and editor Zhu Jie to share what goes into writing and publishing a book. In this episode you will hear: the guests' experience and lessons learned from publishing; publishing advice from an editor's perspective; which publishing paths suit different kinds of authors; and how the book publishing process works. About the guests: Li Chao is chief architect at Beijing Yinshi Tiaodong Technology (北京音视跳动科技有限公司), author of 《WebRTC音视频实时互动技术——原理、实战与源码分析》 (WebRTC Real-Time Audio and Video Interaction: Principles, Practice, and Source Code Analysis), and an Agora developer community MVP. He has more than ten years of experience building real-time audio/video interaction and live-streaming systems, along with years of team management experience, and has designed several high-load, highly concurrent server architectures. At Quanshi (全时云会议) he was R&D manager for the "Tang" platform, leading the team that built Quanshi's proprietary audio/video conferencing platform, which can run tens of thousands of concurrent meetings with up to a thousand participants interacting in real time in each. At the end of 2015 he joined Genshuixue (跟谁学) as senior manager of live-streaming R&D, leading development of an online education live-streaming platform that supports tens of thousands of people in a single classroom. He later served as senior architect at Hujiang (沪江网) and as an audio/video technology expert at New Oriental (新东方). Zhu Jie has more than 25 years of work experience spanning well-known multinationals (HP, Agilent, and EMC), state-owned enterprises (电信科学研究院, the China Academy of Telecommunications Technology), and private companies. Books he has brought to publication include 《生成对抗网络入门指南》 (An Introductory Guide to Generative Adversarial Networks), 《手机安全和可信应用开发指南：TrustZone与OP-TEE技术详解》 (Mobile Security and Trusted Application Development: TrustZone and OP-TEE in Detail), 《5G NR标准：下一代无线通信技术》 (The 5G NR Standard: Next-Generation Wireless Communication Technology), and 《MATLAB与机器学习》 (MATLAB and Machine Learning). Wei Xing is a former journalist with more than five years in IT community tech media and five years in B2B marketing, now a senior technical content evangelist at Agora (声网); he loves the internet industry, is an enthusiast of network security and open source, and believes in the hacker spirit and the power of technology. Production team: host / Bai Huancheng (白宦成); guests / Li Chao, Zhu Jie, Wei Xing; post-production / Juanquan (卷圈); cover art / Dingding (丁丁); production coordination / bobo; recording studio / China Growth Capital (华创资本). About "Coding Voices" (编码人声): a podcast focused on programmers' growth, co-hosted by veteran programmers and younger developers. Each episode invites programmers from different backgrounds to share career and personal growth experience and answer listeners' questions. The show is co-produced by the Jinjin Ledao (津津乐道) podcast network and Agora (声网). Interested in audio and video technology? You can learn more at the Agora RTC developer community (声网Agora RTC开发者社区) | WeChat official account: 声网Agora开发者 | Jinjin Ledao podcast website (津津乐道播客官网)
Why Dirty Pipe is a dirty dog, the explosive adoption of Linux at AMD, and an important update on elementary OS.
Every business owner faces the challenge of getting clients at one point or another. See how Gilsi went from 0 to 60,000 users with this method. "Go after the thing that makes you irritated at work and try to fix that problem for others." Check out the Clicks and Bricks Academy resources: https://clicksandbricksacademy.com/ Sponsor: https://mygosite.com/ About CrankWheel: CrankWheel is a bootstrapped startup based in Iceland. It was founded in 2015 by Jói Sigurdsson and Gilsi Sigvaldason, childhood friends from Akureyri in Northern Iceland. Jói worked for 10 years at Google's Montréal office before moving back to Iceland to found CrankWheel with Gilsi, who had a wealth of experience in sales of insurance and finance solutions. Their different backgrounds are the pillars of CrankWheel: Jói was a tech lead at Google working on technologies such as the Chrome browser and WebRTC, whereas Gilsi had on multiple occasions driven for hours simply to show a customer his laptop screen. The result was an application that allows salespeople to share their screens in real time with customers during a sales call. Contact the Team: Web URL: http://www.crankwheel.com/ LinkedIn: https://www.linkedin.com/company/crankwheel Facebook: https://www.facebook.com/crankwheel --- Support this podcast: https://anchor.fm/clicksandbricks/support Learn more about your ad choices. Visit megaphone.fm/adchoices
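The screen sharing CrankWheel describes is built on standard browser capabilities. As a rough sketch (not CrankWheel's actual code), capturing the presenter's screen and sending it over an existing RTCPeerConnection looks roughly like this; the names are illustrative only.

```typescript
// Capture the user's screen and send it to a viewer over an existing
// RTCPeerConnection. Adding the track fires negotiationneeded, so the
// application's signaling layer would renegotiate with the viewer here.
async function shareScreen(pc: RTCPeerConnection): Promise<MediaStream> {
  const screen = await navigator.mediaDevices.getDisplayMedia({ video: true });
  screen.getVideoTracks().forEach((track) => pc.addTrack(track, screen));

  // Tear down when the presenter stops sharing from the browser UI.
  screen.getVideoTracks()[0].addEventListener("ended", () => pc.close());
  return screen;
}
```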