Podcasts about IETF

Open Internet standards organization

  • 139 podcasts
  • 376 episodes
  • 44m average duration
  • 1 episode every other week
  • Latest episode: Sep 22, 2025


Latest podcast episodes about IETF

Telecom Reseller
VCONIC, Frontline Group, and United Way 2-1-1: Empathy at Scale with vCon, Podcast

Sep 22, 2025


"You can have the best program in the world, but if nobody knows about it, it won't make a difference," says Todd Jordan, who leads United Way of Greater Kansas City's 2-1-1. "That's why we run a 24/7/365 contact center—to guide people to real help with a kind, empathetic voice." In this special Technology Reseller News podcast, Publisher Doug Green brings together Todd Jordan (United Way 2-1-1, Kansas City), Jill Blankenship (CEO, Frontline Group), and Thomas McCarthy-Howe (CTO, VCONIC) to explore "Empathy at Scale": how vCon data and AI—implemented with strict privacy and security—are transforming community helplines and complex, multi-agency referrals.

The Scale and the Strain
United Way's 2-1-1 covers 23 counties and roughly 2.5 million people across the Greater Kansas City region. Demand has surged since the pandemic: 155,000+ calls last year and nearly 500,000 total contacts (calls, web, email, even USPS), with average call times around 7.5 minutes—well over a million minutes of conversations. The mix spans urban, suburban, and rural needs, multiple languages, and highly sensitive situations, from rent and utilities to domestic violence and mental health crises. Protecting privacy is paramount.

From Corridor Conversation to Pilot
Blankenship describes how a hallway conversation about vCon—a new IETF-developed file format for conversations—sparked a collaboration. Frontline Group packaged the idea inside Frontline Quest, its agent-enablement and professional services program, while VCONIC, a spin-out dedicated to vCon technology, provided the protocol and secure data handling. The trio launched a live pilot with United Way 2-1-1 to transcribe calls, structure insights, and surface actionable "signals" for quality, safety, and service improvement—without compromising caller confidentiality. "vCon is designed to feed AI and protect people," says Thomas McCarthy-Howe. "Bringing IETF-grade security and openness to conversational data lets us see the dark operational signals—safely—and use them to help people faster."

What Changed for 2-1-1
  • Quality & Care Signals: Real-time indicators help supervisors coach empathy, spotting where agents can lean in—and where secondary trauma support is needed for frontline staff.
  • Searchable Conversations (Not Just Dispositions): Instead of relying on checkboxes and notes, leaders can now query full conversations to answer urgent policy questions. Jordan asked the system to compare eviction-prevention resources across Kansas vs. Missouri; the synthesized, data-grounded view matched the team's lived experience and revealed precise gaps.
  • Multilingual & Multichannel Reality: With 70–80 languages in some school districts, vCon-backed transcription and analysis improve consistency across interpreters and channels—phone, web, email, and more.

Why It Matters
For a nonprofit with finite resources, the team needed technology that is secure, lean, and humane—helping callers in crisis without forcing agents to split attention between empathy and note-taking. The pilot is doing exactly that: safeguarding sensitive data while unlocking insights that mobilize funding, target interventions, and strengthen outcomes. "We're at the tip of something transformative," Jordan says. "Real-time data from our community voices helps us advocate better—and care better."

About the participants
United Way of Greater Kansas City 2-1-1 serves 23 counties and ~2.5M people, fielding 155k+ calls annually. 2-1-1 is a North American network covering ~99% of the U.S. and much of Canada. Frontline Group is a contact center BPO and professional services firm; its Frontline Quest program integrates vCon to enhance agent experience and operational insight. VCONIC specializes in vCon technology—a conversation file format being developed in the IETF, the internet standards body behind protocols like TLS and OAuth. Learn more: United Way 2-1-1 (Kansas City),
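For readers curious what a vCon actually contains, the sketch below is a rough approximation of such a conversation container, loosely following the top-level structure described in the IETF vCon drafts (parties, dialog, analysis, attachments). The field names and values here are illustrative assumptions, not the authoritative schema; consult the current draft for that.

```typescript
// Illustrative only: an approximate vCon-style conversation container.
// Field names loosely follow the IETF vCon drafts; this is NOT a conforming
// implementation, and real deployments add signing/encryption on top.
const exampleConversation = {
  vcon: "0.0.1",                           // draft version identifier (approximate)
  uuid: "8a6dbd2c-0000-0000-0000-000000000000",
  parties: [
    { name: "2-1-1 Agent", role: "agent" },
    { name: "Caller", role: "caller" },
  ],
  dialog: [
    {
      type: "recording",                   // e.g. an audio recording of the call
      start: "2025-09-22T14:03:00Z",
      parties: [0, 1],                     // indexes into `parties`
      mediatype: "audio/wav",
      url: "https://example.org/calls/1234.wav", // hypothetical storage URL
    },
  ],
  analysis: [
    {
      type: "transcript",                  // machine transcription of the dialog
      dialog: 0,                           // which dialog entry was analyzed
      vendor: "example-asr",               // hypothetical vendor name
      body: "Caller: I need help with rent assistance ...",
    },
  ],
  attachments: [],                         // referral forms, consent records, etc.
};

console.log(JSON.stringify(exampleConversation, null, 2));
```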

Software Engineering Radio - The Podcast for Professional Software Developers

François Daoust, W3C staff member and co-chair of the Web Developer Experience Community Group, discusses the origins of the W3C, the browser standardization process, and how it relates to other organizations like TC39, WHATWG, and IETF. This episode covers a lot of ground, including funding through memberships, royalty-free patent access for implementations, why implementations are built in parallel with the specifications, why requestVideoFrameCallback doesn't have a formal specification, balancing functionality with privacy, working group participants, and how certain organizations have more power. François explains why the W3C hasn't specified a video or audio codec, and discusses Media Source Extensions, Encrypted Media Extensions and Digital Rights Management (DRM), closed source content decryption modules such as Widevine and PlayReady, which ship with browsers, and informing developers about which features are available in browsers. Brought to you by IEEE Computer Society and IEEE Software magazine.

CERIAS Security Seminar Podcast
Rolf Oppliger, E2EE Messaging: State of the Art and Future Challenges

Sep 17, 2025 · 65:05


End-to-end encrypted (E2EE) messaging on the Internet allows encrypted messages to be sent from one sender to one or multiple recipients in a way that cannot be decrypted by anybody else - arguably not even the messaging service provider itself. The protocol of choice is Signal, which combines several cryptographic primitives in new and ingenious ways. Besides the messenger of the same name, the Signal protocol is also used by WhatsApp, Facebook Messenger, Wire, and many more. As such, it marks the gold standard and state of the art for E2EE messaging on the Internet. To make E2EE messaging scalable and useful for large groups, the IETF has also standardized a complementary protocol named Messaging Layer Security (MLS). In this talk, we outline the history of development and mode of operation of both the Signal and MLS protocols, and we elaborate on the challenges that lie ahead.

About the speaker: Rolf Oppliger studied computer science, mathematics, and economics at the University of Bern, Switzerland, where he received M.Sc. (1991) and Ph.D. (1993) degrees in computer science. In 1994-95, he was a post-doctoral researcher at the International Computer Science Institute (ICSI) of UC Berkeley, USA. In 1999, he received the venia legendi for computer science from the University of Zurich, Switzerland, where he was appointed adjunct professor in 2007. The focus of his professional activities is on technical information security and privacy. In these areas, he has published 18 books and many scientific articles and papers, regularly participates in conferences and workshops, has served on the editorial boards of several leading magazines and journals, and has been the editor of the Artech House information security and privacy book series since its beginning in 2000. He is the founder and owner of eSECURITY Technologies Rolf Oppliger, works for the Swiss National Cyber Security Centre NCSC, and teaches at the University of Zurich. He was a senior member of the ACM and the IEEE, as well as a member of the IEEE Computer Society and the IACR, and served as vice-chair of the IFIP TC 11 working group on network security.
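The abstract above notes that Signal combines cryptographic primitives in new ways. One of its core building blocks is a key-derivation chain, or "ratchet": each message key is derived from a chain key, and the chain key is then advanced so that compromising a later key does not reveal earlier message keys. The sketch below is a toy illustration of that single idea using Web Crypto's HKDF; it is not the Signal protocol (no X3DH handshake, no Diffie-Hellman ratchet, no message encryption), and the salt and info labels are arbitrary choices made for the example.

```typescript
// Toy symmetric KDF chain ("ratchet step") using Web Crypto HKDF.
// Illustrates the key-derivation idea behind ratcheting only; it is NOT the
// Signal protocol and must not be used as one.

const toHex = (b: Uint8Array): string =>
  Array.from(b, (x) => x.toString(16).padStart(2, "0")).join("");

async function ratchetStep(
  chainKey: Uint8Array, // current 32-byte chain key
): Promise<{ nextChainKey: Uint8Array; messageKey: Uint8Array }> {
  const baseKey = await crypto.subtle.importKey("raw", chainKey, "HKDF", false, [
    "deriveBits",
  ]);
  // Derive 64 bytes: first half becomes the next chain key,
  // second half becomes the per-message key.
  const bits = await crypto.subtle.deriveBits(
    {
      name: "HKDF",
      hash: "SHA-256",
      salt: new Uint8Array(32),                         // fixed salt: fine for a toy example
      info: new TextEncoder().encode("toy-ratchet-step"), // arbitrary label
    },
    baseKey,
    64 * 8,
  );
  const out = new Uint8Array(bits);
  return { nextChainKey: out.slice(0, 32), messageKey: out.slice(32) };
}

// Usage: advance the chain twice; because HKDF is one-way, earlier message
// keys cannot be recomputed from later chain keys.
async function demo(): Promise<void> {
  let chainKey = crypto.getRandomValues(new Uint8Array(32)); // toy root key
  for (let i = 0; i < 2; i++) {
    const { nextChainKey, messageKey } = await ratchetStep(chainKey);
    console.log(`message key ${i}: ${toHex(messageKey)}`);
    chainKey = nextChainKey;
  }
}

demo().catch(console.error);
```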

Software Sessions
François Daoust on the W3C

Sep 16, 2025 · 67:56


François Daoust is a W3C staff member and co-chair of the Web Developer Experience Community Group. We discuss the W3C's role and what it's like to go through the browser standardization process.

Related links:
  • W3C
  • TC39
  • Internet Engineering Task Force
  • Web Hypertext Application Technology Working Group (WHATWG)
  • Horizontal Groups
  • Alliance for Open Media
  • What is MPEG-DASH? | HLS vs. DASH
  • Information about W3C and Encrypted Media Extensions (EME)
  • Widevine
  • PlayReady
  • Media Source API
  • Encrypted Media Extensions API
  • requestVideoFrameCallback()
  • Business Benefits of the W3C Patent Policy
  • web.dev Baseline
  • Portable Network Graphics Specification
  • Internet Explorer 6
  • CSS Vendor Prefix
  • WebRTC

Transcript (you can help correct transcripts on GitHub) Intro [00:00:00] Jeremy: Today I'm talking to François Daoust. He's a staff member at the W3C. And we're gonna talk about the W3C and the recommendation process and discuss Francois's experience with, with how these features end up in our browsers. [00:00:16] Jeremy: So, Francois, welcome. [00:00:18] Francois: Thank you Jeremy and uh, many thanks for the invitation. I'm really thrilled to be part of this podcast. What's the W3C? [00:00:26] Jeremy: I think many of our listeners will have heard about the W3C, but they may not actually know what it is. So could you start by explaining what it is? [00:00:37] Francois: Sure. So W3C stands for the World Wide Web Consortium. It's a standardization organization. I guess that's how people should think about W3C. It was created in 1994, uh, by Tim Berners-Lee, who was the inventor of the web. Tim Berners-Lee was the director of W3C for a long, long time. [00:01:00] Francois: He retired not long ago, a few years back. And W3C has, uh, a number of, uh, properties, let's say. First, the goal is to produce royalty free standards, and that's very important. Uh, we want to make sure that, uh, the standards that get produced can be used and implemented without having to pay fees to anyone. [00:01:23] Francois: We do web standards. I didn't mention it, but it's from the name. Standards that you find in your web browsers. But not only that, there are a number of other, uh, standards that got developed at W3C including, for example, XML. Data related standards. W3C as an organization is a consortium. [00:01:43] Francois: The, the C stands for consortium. Legally speaking, it's a, it's a 501(c)(3), meaning it's a US based, uh, legal entity, not for profit. And the, the little three is important because it means it's public interest. That means we are a consortium, that means we have members, but at the same time, the goal, the mission is to the public. [00:02:05] Francois: So we're not only just, you know, doing what our members want. We are also making sure that what our members want is aligned with what end users in the end need. And the W3C has a small team. And so I'm part of this, uh, of this team worldwide. Uh, 45 to 55 people, depending on how you count, mostly technical people and some, uh, admin, uh, as well, overseeing the, uh, the work that we do, uh, at the W3C. Funding through membership fees [00:02:39] Jeremy: So you mentioned there's 45 to 55 people. How is this funded? Is this from governments or commercial companies? [00:02:47] Francois: The main source comes from membership fees. So the W3C has, uh, roughly 350 members, uh, at the W3C. And, in order to become a member, an organization needs to pay, uh, an annual membership fee. That's pretty common among, uh, standardization, uh, organizations.
[00:03:07] Francois: And, we only have, uh, I guess three levels of membership, fees. Uh, well, you may find, uh, additional small levels, but three main ones. the goal is to make sure that, A big player will, not a big player or large company, will not have more rights than, uh, anything, anyone else. So we try to make sure that a member has the, you know, all members have equal, right? [00:03:30] Francois: if it's not perfect, but, uh, uh, that's how things are, are are set. So that's the main source of income for the W3C. And then we try to diversify just a little bit to get, uh, for example, we go to governments. We may go to governments in the u EU. We may, uh, take some, uh, grant for EU research projects that allow us, you know, to, study, explore topics. [00:03:54] Francois: Uh, in the US there, there used to be some, uh, some funding from coming from the government as well. So that, that's, uh, also, uh, a source. But the main one is, uh, membership fees. Relations to TC39, IETF, and WHATWG [00:04:04] Jeremy: And you mentioned that a lot of the W3C'S work is related to web standards. There's other groups like TC 39, which works on the JavaScript spec and the IETF, which I believe worked, with your group on WebRTC, I wonder if you could explain W3C'S connection to other groups like that. [00:04:28] Francois: sure. we try to collaborate with a, a number of, uh, standard other standardization organizations. So in general, everything goes well because you, you have, a clear separation of concerns. So you mentioned TC 39. Indeed. they are the ones who standardize, JavaScript. Proper name of JavaScript is the EcmaScript. [00:04:47] Francois: So that's tc. TC 39 is the technical committee at ecma. and so we have indeed interactions with them because their work directly impact the JavaScript that you're going to find in your, uh, run in your, in your web browser. And we develop a number of JavaScript APIs, uh, actually in W3C. [00:05:05] Francois: So we need to make sure that, the way we develop, uh, you know, these APIs align with the, the language itself. with IETF, the, the, the boundary is, uh, uh, is clear as well. It's a protocol and protocol for our network protocols for our, the IETF and application level. For W3C, that's usually how the distinction is made. [00:05:28] Francois: The boundaries are always a bit fuzzy, but that's how things work. And usually, uh, things work pretty well. Uh, there's also the WHATWG, uh, and the WHATWG is more the, the, the history was more complicated because, uh, t of a fork of the, uh, HTML specification, uh, at the time when it was developed by W3C, a long time ago. [00:05:49] Francois: And there was been some, uh, Well disagreement on the way things should have been done, and the WHATWG took over got created, took, took this the HTML spec and did it a different way. Went in another, another direction, and that other, other direction actually ended up being the direction. [00:06:06] Francois: So, that's a success, uh, from there. And so, W3C no longer works, no longer owns the, uh, HTML spec and the WHATWG has, uh, taken, uh, taken up a number of, uh, of different, core specifications for the web. Uh, doing a lot of work on the, uh, on interopoerability and making sure that, uh, the algorithm specified by the spec, were correct, which, which was something that historically we haven't been very good at at W3C. 
[00:06:35] Francois: And the way they've been working has a lot of influence on the way we develop now, uh, the APIs, uh, from a W3C perspective. [00:06:44] Jeremy: So, just to make sure I understand correctly, you have TC 39, which is focused on the JavaScript or ECMAScript language itself, and you have APIs that are going to use JavaScript and interact with JavaScript. So you need to coordinate there. The, the WHATWG have the specification for HTML. Then the IETF, they are, I'm not sure if the right term would be, they, they would be one level lower perhaps, than the W3C. [00:07:17] Francois: That's how you, you can formulate it. Yes. The, the one layer, one layer lower in the ISO network stack, at the network level. How WebRTC spans the IETF and W3C [00:07:30] Jeremy: And so in that case, one place I've heard it mentioned is that WebRTC, to, to use it, there is an IETF specification, and then perhaps there's a W3C recommendation. [00:07:43] Francois: Yes. So when we created the WebRTC working group, that was in 2011, I think, it was created with a dual head. There was one RTCWEB group that got created at IETF and a WebRTC group that got created at W3C. And that was done on purpose. Of course, the goal was not to compete on the, on the solution, but actually to have the two sides of the, uh, solution be developed in parallel, the API, uh, the application front and the network front. [00:08:15] Francois: And there was a, and there's still a lot of overlap in, uh, participation between both groups, and that's what keeps things successful in the end. It's not, uh, you know, process or organization to organization, uh, relationships, coordination at the organization level. It's really the fact that you have participants that are essentially the same, on both sides of the equation. [00:08:36] Francois: That helps, uh, move things forward. Now, WebRTC is, uh, is more complex than just one group at IETF. I mean, WebRTC is a very complex set of, uh, of technologies, stack of technologies. So when you, when you pull a little, uh, protocol from IETF, suddenly you have the whole IETF that comes with it. [00:08:56] Francois: So you, it's the, you have the feeling that WebRTC needs all of the, uh, internet protocols that got, uh, created to work. Recommendations [00:09:04] Jeremy: And I think probably a lot of web developers, they may hear words like specification or standard, but I believe the, the official term, at least at the W3C, is this recommendation. And so I wonder if you can explain what that means. [00:09:24] Francois: Well, it means, it means standard in the end. And that came from industry. That comes from a time where, as many standardization organizations, W3C was created not to be a standardization organization. It was felt that standard was not the right term because we were not a standardization organization. [00:09:45] Francois: So, recommendation. IETF has the same thing. They call it RFC, request for comments, which, you know, stands for nothing, and yet it's a standard. So W3C was created with the same kind of, uh, thing. We needed some other terminology and we call that recommendation. But in the end, that's standard. It's really, uh, how you should see it. [00:10:08] Francois: And one thing I didn't mention when I, uh, introduced the W3C is there are two types of standards in the end, two main categories. There are the de jure standards and defacto standards, two families. The de jure standards are the ones that are imposed by some kind of regulation.
so it's really usually a standard you see imposed by governments, for example. [00:10:29] Francois: So when you look at your electric plug at home, there's some regulation there that says, this plug needs to have these properties. And that's a standard that gets imposed. It's a de jure standard. and then there are defacto standards which are really, uh, specifications that are out there and people agree to use it to implement it. [00:10:49] Francois: And by virtue of being used and implemented and used by everyone, they become standards. the, W3C really is in the, uh, second part. It's a defacto standard. IETF is the same thing. some of our standards are used in, uh, are referenced in regulations now, but, just a, a minority of them, most of them are defacto standards. [00:11:10] Francois: and that's important because that's in the end, it doesn't matter what the specific specification says, even though it's a bit confusing. What matters is that the, what the specifications says matches what implementations actually implement, and that these implementations are used, and are used interoperably across, you know, across browsers, for example, or across, uh, implementations, across users, across usages. [00:11:36] Francois: So, uh, standardization is a, is a lengthy process. The recommendation is the final stage in that, lengthy process. More and more we don't really reach recommendation anymore. If you look at, uh, at groups, uh, because we have another path, let's say we kind of, uh, we can stop at candidate recommendation, which is in theoretically a step before that. [00:12:02] Francois: But then you, you can stay there and, uh, stay there forever and publish new candidate recommendations. Um, uh, later on. What matters again is that, you know, you get this, virtuous feedback loop, uh, with implementers, and usage. [00:12:18] Jeremy: So if the candidate recommendation ends up being implemented by all the browsers, what's ends up being the distinction between a candidate and one that's a normal recommendation. [00:12:31] Francois: So, today it's mostly a process thing. Some groups actually decide to go to rec Some groups decide to stay at candidate rec and there's no formal difference between the, the two. we've made sure we've adopted, adjusted the process so that the important bits that, applied at the recommendation level now apply at the candidate rec level. Royalty free patent access [00:13:00] Francois: And by important things, I mean the patent commitments typically, uh, the patent policy fully applies at the candidate recommendation level so that you get your, protection, the royalty free patent protection that we, we were aiming at. [00:13:14] Francois: Some people do not care, you know, but most of the world still works with, uh, with patents, uh, for good, uh, or bad reasons. But, uh, uh, that's how things work. So we need to make, we're trying to make sure that we, we secure the right set of, um, of patent commitments from the right set of stakeholders. [00:13:35] Jeremy: Oh, so when someone implements a W3C recommendation or a candidate recommendation, the patent holders related to that recommendation, they basically agree to allow royalty-free use of that patent. [00:13:54] Francois: They do the one that were involved in the working group, of course, I mean, we can't say anything about the companies out there that may have patents and uh, are not part of this standardization process. So there's always, It's a remaining risk. 
but part of the goal when we create a working group is to make sure that people understand the scope. [00:14:17] Francois: Lawyers look into it, and the, the legal teams that exist at all the large companies basically gave a green light saying, yeah, we, we're pretty confident that we, we know where the patents are on this particular, this particular area. And we are fine also, uh, letting go of the, the patents we own ourselves. Implementations are built in parallel with standardization [00:14:39] Jeremy: And I think you had mentioned, what ends up being the most important is that the browser creators implement these recommendations. So it sounds like maybe the distinction between candidate recommendation and recommendation almost doesn't matter as long as you get the end result you want. [00:15:03] Francois: So, I mean, people will have different opinions, uh, in the, in standardization circles. And I mentioned also W3C is working on other kinds of, uh, standards. So, uh, in some other areas, the nuance may be more important. But when, when you look at specifications that target web browsers, we've switched from a model where specs were developed first and then implemented to a model where specs and implementations are being worked in parallel. [00:15:35] Francois: This actually relates to the evolution I was mentioning with the WHATWG taking over the HTML and, uh, focusing on the interoperability issues, because the starting point was, yeah, we have an HTML 4.01 spec, uh, but it's not interoperable because it, it's not specified, there are a number of areas that are gray areas, you can implement them differently. [00:15:59] Francois: And so there are interoperability issues. Back to candidate rec actually, the, the stage was created, if I remember correctly, uh, if I'm, if I'm not wrong, the stage was created following the, uh, IE problem. In the CSS working group, IE6, uh, shipped with some version of CSS that was as specified, you know, the spec was saying, you know, do that for the CSS box model. [00:16:27] Francois: And IE6 was following that. And then the group decided to change the box model and suddenly IE6 was no longer compliant. And that created a, a huge mess in the history of, uh, of the web in a way. And so the, we, the, the candidate recommendation stage was introduced following that, to try to catch this kind of problem. [00:16:52] Francois: But nowadays, again, we, we switched to another model where it's more live. And so we, you, you'll find a number of specs that are not even at candidate rec level. They are at the, what we call a working draft, and they, they are being implemented, and if all goes well, the standardization process follows the implementation, and then you end up in a situation where you have your candidate rec when the, uh, spec ships. [00:17:18] Francois: A recent example would be WebGPU, for example. It, uh, it has shipped in, uh, in, in Chrome shortly before it transitioned to a candidate rec. But the, the, the spec was already stable. And now it's shipping, uh, in, uh, in different browsers, uh, Safari, uh, and, uh, Firefox. And so that's, uh, and that's a good example of something that follows, uh, things, uh, along pretty well.
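WebGPU, the example given here of a spec shipping around the time it reached Candidate Recommendation, is exposed through navigator.gpu. Below is a minimal, hedged feature-detection sketch; the cast is only there so the snippet compiles without the separate WebGPU type definitions.

```typescript
// Minimal WebGPU availability probe. At runtime navigator.gpu is the real API;
// the cast just avoids needing the @webgpu/types package to compile.
async function probeWebGPU(): Promise<void> {
  const gpu = (navigator as { gpu?: any }).gpu;
  if (!gpu) {
    console.log("WebGPU is not available in this browser");
    return;
  }
  const adapter = await gpu.requestAdapter();   // may be null on unsupported hardware
  if (!adapter) {
    console.log("No suitable GPU adapter found");
    return;
  }
  const device = await adapter.requestDevice(); // logical device for rendering/compute
  console.log("WebGPU device obtained; maxBufferSize =", device.limits.maxBufferSize);
}

probeWebGPU().catch(console.error);
```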
But then you have other specs such as, uh, in the media space, uh, request video frame back, uh, frame, call back, uh, requestVideoFrameCallback() is a short API that allows you to get, you know, a call back whenever the, the browser renders a video frame, essentially. [00:18:01] Francois: And that spec is implemented across browsers. But from a W3C specific, perspective, it does not even exist. It's not on the standardization track. It's still being incubated in what we call a community group, which is, you know, some something that, uh, usually exists before. we move to the, the standardization process. [00:18:21] Francois: So there, there are examples of things where some things fell through the cracks. All the standardization process, uh, is either too early or too late and things that are in spec are not exactly what what got implemented or implementations are too early in the process. We we're doing a better job, at, Not falling into a trap where someone ships, uh, you know, an implementation and then suddenly everything is frozen. You can no longer, change it because it's too late, it shipped. we've tried, different, path there. Um, mentioned CSS, the, there was this kind of vendor prefixed, uh, properties that used to be, uh, the way, uh, browsers were deploying new features without, you know, taking the final name. [00:19:06] Francois: We are trying also to move away from it because same thing. Then in the end, you end up with, uh, applications that have, uh, to duplicate all the properties, the CSS properties in the style sheets with, uh, the vendor prefixes and nuances in the, in what it does in, in the end. [00:19:23] Jeremy: Yeah, I, I think, is that in CSS where you'll see --mozilla or things like that? Why requestVideoFrameCallback doesn't have a formal specification [00:19:30] Jeremy: The example of the request video frame callback. I, I wonder if you have an opinion or, or, or know why that ended up the way it did, where the browsers all implemented it, even though it was still in the incubation stage. [00:19:49] Francois: On this one, I don't have a particular, uh, insights on whether there was a, you know, a strong reason to implement it,without doing the standardization work. [00:19:58] Francois: I mean, there are, it's not, uh, an IPR (Intellectual Property Rights) issue. It's not, uh, something that, uh, I don't think the, the, the spec triggers, uh, you know, problems that, uh, would be controversial or whatever. [00:20:10] Francois: Uh, so it's just a matter of, uh, there was no one's priority, and in the end, you end up with a, everyone's happy. it's, it has shipped. And so now doing the spec work is a bit,why spend time on something that's already shipped and so on, but the, it may still come back at some point with try to, you know, improve the situation. [00:20:26] Jeremy: Yeah, that's, that's interesting. It's a little counterintuitive because it sounds like you have the, the working group and it, it sounds like perhaps the companies or organizations involved, they maybe agreed on how it should work, and maybe that agreement almost made it so that they felt like they didn't need to move forward with the specification because they came to consensus even before going through that. [00:20:53] Francois: In this particular case, it's probably because it's really, again, it's a small, spec. It's just one function call, you know? I mean, they will definitely want a working group, uh, for larger specifications. 
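For readers who have not used the API under discussion: requestVideoFrameCallback runs a callback each time the browser presents a video frame and, like requestAnimationFrame, must be re-registered from inside the callback. A minimal sketch follows, typed loosely so it compiles even against older DOM type definitions.

```typescript
// requestVideoFrameCallback fires its callback once per presented video frame.
type FrameMetadata = { mediaTime: number; presentedFrames: number };
type RVFCVideo = HTMLVideoElement & {
  requestVideoFrameCallback?: (cb: (now: number, metadata: FrameMetadata) => void) => number;
};

const video = document.querySelector("video") as RVFCVideo | null;

if (video?.requestVideoFrameCallback) {
  const onFrame = (now: number, metadata: FrameMetadata) => {
    console.log(`frame #${metadata.presentedFrames} at mediaTime=${metadata.mediaTime.toFixed(3)}s`);
    // Re-register from inside the callback, mirroring requestAnimationFrame.
    video.requestVideoFrameCallback!(onFrame);
  };
  video.requestVideoFrameCallback(onFrame);
} else {
  console.log("requestVideoFrameCallback is not supported here");
}
```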
By the way, actually now I remember, re requestVideoFrameCallback: it's because the, the, the final goal, now that it's, uh, shipped, is to merge it into, uh, HTML, uh, the HTML spec. [00:21:17] Francois: So there's a, there's an ongoing issue on the, the WHATWG side to integrate requestVideoFrameCallback. And it's taking some time, but see, it's, it's being, it, it caught up and, uh, someone is doing the, the work to, to do it. I had forgotten about this one. Um, [00:21:33] Jeremy: Tension from specification review (horizontal review) [00:21:33] Francois: So with larger specifications, organizations will want this kind of IPR regime, they will want commitments from, uh, others, on the scope, on the process, on everything. So they will want, uh, a larger, a, a more formal setting, because that's part of how you ensure that things, uh, will get done properly. [00:21:53] Francois: I didn't mention it, but, uh, something we're really, uh, pushy on at, uh, W3C. I mentioned we have principles, we have priorities, and we have, uh, several specific, uh, properties at W3C. And one of them is that we, we're very strong on horizontal reviews of our specs. We really want them to be reviewed from an accessibility perspective, from an internationalization perspective, from a privacy and security, uh, perspective, and, and, and a technical architecture perspective as well. [00:22:23] Francois: And these reviews are part of the formal process. So all specs need to undergo these reviews. And from time to time, that creates tension. Uh, from time to time, it just works, you know, goes without problem. A recurring issue is that privacy and security are hard. I mean, it's not an easy problem, something that can be, uh, solved, uh, easily. [00:22:48] Francois: Uh, so there's a, an ongoing tension and no easy way to resolve it, but there's an ongoing tension between specifying powerful APIs and preserving privacy, meaning not exposing too much information to applications. In the media space, you can think of the Media Capabilities API. So the media space is a complicated space. [00:23:13] Francois: Complicated because of codecs. Codecs are typically not royalty free. And so browsers decide which codecs they're going to support, which audio and video codecs they, they're going to support, and doing that, that creates additional fragmentation, not in the sense that they're not interoperable, but in the sense that applications need to choose which codec they're going to use to stream to the end user. [00:23:39] Francois: And, uh, it's all the more complicated that some codecs are going to be hardware supported. So you will have a hardware decoder in your, in your, in your laptop or smartphone. And so that's going to be efficient to decode some, uh, some stream, whereas some codecs are going to be software supported. [00:23:56] Francois: Uh, and that may consume a lot of CPU and a lot of power and a lot of energy in the end. So you, you want to avoid that if you can, uh, select another thing. Even more complex than that, codecs have different profiles, uh, lower end profiles, higher end profiles, with different capabilities, different features, uh, depending on whether you're going to use this or that color space, for example, this or that resolution, whatever. [00:24:22] Francois: And so you want to surface that to web applications because otherwise they can't select, they can't choose, the right codec and the right stream that they're going to send to the, uh, client devices.
And so they're not going to provide an efficient user experience first, and even a sustainable one in terms of energy because they, they're going to waste energy if they don't send the right stream. [00:24:45] Francois: So you want to surface that to application. That's what the media, media capabilities, APIs, provides. Privacy concerns [00:24:51] Francois: Uh, but at the same time, if you expose that information, you end up with ways to fingerprint the end user's device. And that in turn is often used to track users across, across sites, which is exactly what we don't want to have, uh, for privacy reasons, for obvious privacy reasons. [00:25:09] Francois: So you have to balance that and find ways to, uh, you know, to expose. Capabilities without, without necessarily exposing them too much. Uh, [00:25:21] Jeremy: Can you give an example of how some of those discussions went? Like within the working group? Who are the companies or who are the organizations that are arguing for We shouldn't have this capability because of the privacy concerns, or [00:25:40] Francois: In a way all of the companies, have a vision of, uh, of privacy. I mean, the, you will have a hard time finding, you know, members saying, I don't care about privacy. I just want the feature. Uh, they all have privacy in mind, but they may have a different approach to privacy. [00:25:57] Francois: so if you take, uh, let's say, uh, apple and Google would be the, the, I guess the perfect examples in that, uh, in that space, uh, Google will have a, an approach that is more open-ended thing. The, the user agents has this, uh, should check what the, the, uh, given site is doing. And then if it goes beyond, you know, some kind of threshold, they're going to say, well, okay, well, we'll stop exposing data to that, to that, uh, to that site. [00:26:25] Francois: So that application. So monitor and react in a way. apple has a more, uh, you know, has a stricter view on, uh, on privacy, let's say. And they will say, no, we, the, the, the feature must not exist in the first place. Or, but that's, I mean, I guess, um, it's not always that extreme. And, uh, from time to time it's the opposite. [00:26:45] Francois: You will have, uh, you know, apple arguing in one way, uh, which is more open-ended than the, uh, than, uh, than Google, for example. And they are not the only ones. So in working groups, uh, you will find the, usually the implementers. Uh, so when we talk about APIs that get implemented in browsers, you want the core browsers to be involved. [00:27:04] Francois: Uh, otherwise it's usually not a good sign for, uh, the success of the, uh, of the technology. So in practice, that means Apple, uh, Microsoft, Mozilla which one did I forget? [00:27:15] Jeremy: Google. [00:27:16] Francois: I forgot Google. Of course. Thank you. that's, uh, that the, the core, uh, list of participants you want to have in any, uh, group that develops web standards targeted at web browsers. Who participates in working groups and how much power do they have? [00:27:28] Francois: And then on top of that, you want, organizations and people who are directly going to use it, either because they, well the content providers. So in media, for example, if you look at the media working group, you'll see, uh, so browser vendors, the ones I mentioned, uh, content providers such as the BBC or Netflix. [00:27:46] Francois: Chip set vendors would, uh, would be there as well. Intel, uh, Nvidia again, because you know, there's a hardware decoding in there and encoding. 
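The Media Capabilities API mentioned a moment earlier lets an application ask, for a specific codec configuration, whether decoding is supported, smooth, and power-efficient. A hedged sketch of such a query follows; the codec string and numbers are arbitrary example values, not recommendations.

```typescript
// Ask the browser whether a specific VP9 configuration can be decoded
// smoothly and power-efficiently before choosing which stream to fetch.
async function checkDecodeSupport(): Promise<void> {
  if (!("mediaCapabilities" in navigator)) {
    console.log("Media Capabilities API not available");
    return;
  }
  const result = await navigator.mediaCapabilities.decodingInfo({
    type: "media-source",                                  // content fed via MSE
    video: {
      contentType: 'video/webm; codecs="vp09.00.10.08"',   // example codec string
      width: 1920,
      height: 1080,
      bitrate: 4_000_000,                                  // bits per second (example value)
      framerate: 30,
    },
  });
  console.log(
    `supported=${result.supported} smooth=${result.smooth} powerEfficient=${result.powerEfficient}`,
  );
}

checkDecodeSupport().catch(console.error);
```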
So media is, touches on, on, uh, on hardware, uh, device manufacturer in general. You may, uh, I think, uh, I think Sony is involved in the, in the media working group, for example. [00:28:04] Francois: and these companies are usually less active in the spec development. It depends on the groups, but they're usually less active because the ones developing the specs are usually the browser again, because as I mentioned, we develop the specs in parallel to browsers implementing it. So they have the. [00:28:21] Francois: The feedback on how to formulate the, the algorithms. and so that's this collection of people who are going to discuss first within themselves. W3C pushes for consensual dis decisions. So we hardly take any votes in the working groups, but from time to time, that's not enough. [00:28:41] Francois: And there may be disagreements, but let's say there's agreement in the group, uh, when the spec matches. horizontal review groups will look at the specs. So these are groups I mentioned, accessibility one, uh, privacy, internationalization. And these groups, usually the participants are, it depends. [00:29:00] Francois: It can be anything. It can be, uh, the same companies. It can be, but usually different people from the same companies. But it the, maybe organizations with a that come from very, a very different angle. And that's a good thing because that means the, you know, you enlarge the, the perspectives on your, uh, on the, on the technology. [00:29:19] Francois: and you, that's when you have a discussion between groups, that takes place. And from time to time it goes well from time to time. Again, it can trigger issues that are hard to solve. and the W3C has a, an escalation process in case, uh, you know, in case things degenerate. Uh, starting with, uh, the notion of formal objection. [00:29:42] Jeremy: It makes sense that you would have the, the browser. Vendors and you have all the different companies that would use that browser. All the different horizontal groups like you mentioned, the internationalization, accessibility. I would imagine that you were talking about consensus and there are certain groups or certain companies that maybe have more say or more sway. [00:30:09] Jeremy: For example, if you're a browser, manufacturer, your Google. I'm kind of curious how that works out within the working group. [00:30:15] Francois: Yes, it's, I guess I would be lying if I were saying that, uh, you know, all companies are strictly equal in a, in a, in a group. they are from a process perspective, I mentioned, you know, different membership fees with were design, special specific ethos so that no one could say, I'm, I'm putting in a lot of money, so you, you need to re you need to respect me, uh, and you need to follow what I, what I want to, what I want to do. [00:30:41] Francois: at the same time, if you take a company like, uh, like Google for example, they send, hundreds of engineers to do standardization work. That's absolutely fantastic because that means work progresses and it's, uh, extremely smart people. So that's, uh, that's really a pleasure to work with, uh, with these, uh, people. [00:30:58] Francois: But you need to take a step back and say, well, the problem is. Defacto that gives them more power just by virtue of, uh, injecting more resources into it. So having always someone who can respond to an issue, having always someone, uh, editing a spec defacto that give them more, uh, um, more say on the, on the directions that, get forward. 
[00:31:22] Francois: And on top of that, of course, they have the, uh, I guess not surprisingly, the, the browser that is, uh, used the most, currently, on the market so there's a little bit of a, the, the, we, we, we, we try very hard to make sure that, uh, things are balanced. it's not a perfect world. [00:31:38] Francois: the the role of the team. I mean, I didn't talk about the role of the team, but part of it is to make sure that. Again, all perspectives are represented and that there's not, such a, such big imbalance that, uh, that something is wrong and that we really need to look into it. so making sure that anyone, if they have something to say, make making sure that they are heard by the rest of the group and not dismissed. [00:32:05] Francois: That usually goes well. There's no problem with that. And again, the escalation process I mentioned here doesn't make any, uh, it doesn't make any difference between, uh, a small player, a large player, a big player, and we have small companies raising formal objections against some of our aspects that happens, uh, all large ones. [00:32:24] Francois: But, uh, that happens too. There's no magical solution, I guess you can tell it by the way. I, uh, I don't know how to formulate the, the process more. It's a human process, and that's very important that it remains a human process as well. [00:32:41] Jeremy: I suppose the role of, of staff and someone in your position, for example, is to try and ensure that these different groups are, are heard and it isn't just one group taking control of it. [00:32:55] Francois: That's part of the role, again, is to make sure that, uh, the, the process is followed. So the, I, I mean, I don't want to give the impression that the process controls everything in the groups. I mean, the, the, the groups are bound by the process, but the process is there to catch problems when they arise. [00:33:14] Francois: most of the time there are no problems. It's just, you know, again, participants talking to each other, talking with the rest of the community. Most of the work happens in public nowadays, in any case. So the groups work in public essentially through asynchronous, uh, discussions on GitHub repositories. [00:33:32] Francois: There are contributions from, you know, non group participants and everything goes well. And so the process doesn't kick in. You just never say, eh, no, you didn't respect the process there. You, you closed the issue. You shouldn't have a, it's pretty rare that you have to do that. Uh, things just proceed naturally because they all, everyone understands where they are, why, what they're doing, and why they're doing it. [00:33:55] Francois: we still have a role, I guess in the, in the sense that from time to time that doesn't work and you have to intervene and you have to make sure that,the, uh, exception is caught and, uh, and processed, uh, in the right way. Discussions are public on github [00:34:10] Jeremy: And you said this process is asynchronous in public, so it sounds like someone, I, I mean, is this in GitHub issues or how, how would somebody go and, and see what the results of [00:34:22] Francois: Yes, there, there are basically a gazillion of, uh, GitHub repositories under the, uh, W3C, uh, organization on GitHub. Most groups are using GitHub. I mean, there's no, it's not mandatory. We don't manage any, uh, any tooling. But the factors that most, we, we've been transitioning to GitHub, uh, for a number of years already. 
[00:34:45] Francois: Uh, so that's where most of the work happens, through issues, through pull requests. Uh, that's where people can go and raise issues against specifications. Uh, we usually, uh, also, from time to time, get feedback from developers encountering, uh, a bug in a particular implementation, which we try to gently redirect to, uh, the actual bug trackers, because we're not responsible for the implementations of the specs unless the spec is not clear. [00:35:14] Francois: We are responsible for the spec itself, making sure that the spec is clear and that implementers, well, understand how they should implement something. Why the W3C doesn't specify a video or audio codec [00:35:25] Jeremy: I can see how people would make that mistake because they, they see it's the feature, but that's not the responsibility of the, the W3C to implement any of the specifications. Something you had mentioned: there's the issue of intellectual property rights and how, when you have a recommendation, you require the different organizations involved to make their patents available to use freely. [00:35:54] Jeremy: I wonder why there was never any kind of recommendation for audio or video codecs in browsers, since you have certain ones that are considered royalty free. But, I believe that's never been specified. [00:36:11] Francois: At W3C you mean? Yes. We, we've tried, I mean, it's not for lack of trying. Um, uh, we've had a number of discussions with, uh, various stakeholders saying, hey, we, we really need an audio or video codec for, for the web. The, uh, PNG is an example of a, um, an image format which got standardized at W3C, and it got standardized at W3C for similar reasons. There had to be a royalty free image format for the web, and there was none at the time. Of course, nowadays, uh, JPEG, uh, and GIF, or gif, whatever you call it, are, well, you know, no problem with them. But, uh, um, at the time PNG was really, uh, meant to address this issue, and it worked for PNG. For audio and video,
There's an open question as what, what are we going to do, uh, in the future uh, with that, it's, it's, it's doubtful that, uh, the W3C will be able to work on a, on a royalty free audio, codec or royalty free video codec itself because, uh, probably it's too late now in any case. [00:38:43] Francois: but It's one of these angles in the, in the web platform where we wish we had the, uh, the technology available for, for free. And, uh, it's not exactly, uh, how things work in practice.I mean, the way codecs are developed remains really patent oriented. [00:38:57] Francois: and you will find more codecs being developed. and that's where geopolitics can even enter the, the, uh, the play. Because, uh, if you go to China, you will find new codecs emerging, uh, that get developed within China also, because, the other codecs come mostly from the US so it's a bit of a problem and so on. [00:39:17] Francois: I'm not going to enter details and uh, I would probably say stupid things in any case. Uh, but that, uh, so we continue to see, uh, emerging codecs that are not royalty free, and it's probably going to remain the case for a number of years. unfortunately, unfortunately, from a W3C perspective and my perspective of course. [00:39:38] Jeremy: There's always these new, formats coming out and the, rate at which they get supported in the browser, even on a per browser basis is, is very, there can be a long time between, for example, WebP being released and a browser supporting it. So, seems like maybe we're gonna be in that situation for a while where the codecs will come out and maybe the browsers will support them. Maybe they won't, but the, the timeline is very uncertain. Digital Rights Management (DRM) and Media Source Extensions [00:40:08] Jeremy: Something you had, mentioned, maybe this was in your, email to me earlier, but you had mentioned that some of these specifications, there's, there's business considerations like with, digital rights management and, media source extensions. I wonder if you could talk a little bit about maybe what media source extensions is and encrypted media extensions and, and what the, the considerations or challenges are there. [00:40:33] Francois: I'm going to go very, very quickly over the history of a, video and audio support on the web. Initially it was supported through plugins. you are maybe too young to, remember that. But, uh, we had extensions, added to, uh, a realplayer. [00:40:46] Francois: This kind of things flash as well, uh, supporting, uh, uh, videos, in web pages, but it was not provided by the web browsers themselves. Uh, then HTML5 changed the, the situation. Adding these new tags, audio and video, but that these tags on this, by default, support, uh, you give them a resources, a resource, like an image as it's an audio or a video file. [00:41:10] Francois: They're going to download this, uh, uh, video file or audio file, and they're going to play it. That works well. But as soon as you want to do any kind of real streaming, files are too large and to stream, to, to get, you know, to get just a single fetch on, uh, on them. So you really want to stream them chunk by chunk, and you want to adapt the resolution at which you send the stream based on real time conditions of the user's network. [00:41:37] Francois: If there's plenty of bandwidth you want to send the user, the highest possible resolution. If there's a, some kind of hiccup temporary in the, in the network, you really want to lower the resolution, and that's called adaptive streaming. 
And to get adaptive streaming on the web, well, there are a number of protocols that exist. [00:41:54] Francois: Same thing. Some many of them are proprietary and actually they remain proprietary, uh, to some extent. and, uh, some of them are over http and they are the ones that are primarily used in, uh, in web contexts. So DASH comes to mind, DASH for Dynamic Adaptive streaming over http. HLS is another one. Uh, initially developed by Apple, I believe, and it's, uh, HTTP live streaming probably. Exactly. And, so there are different protocols that you can, uh, you can use. Uh, so the goal was not to standardize these protocols because again, there were some proprietary aspects to them. And, uh, same thing as with codecs. [00:42:32] Francois: There was no, well, at least people wanted to have the, uh, flexibility to tweak parameters, adaptive streaming parameters the way they wanted for different scenarios. You may want to tweak the parameters differently. So they, they needed to be more flexibility on top of protocols not being truly available for use directly and for implementation directly in browsers. [00:42:53] Francois: It was also about providing applications with, uh, the flexibility they would need to tweak parameters. So media source extensions comes into play for exactly that. Media source extensions is really about you. The application fetches chunks of its audio and video stream the way it wants, and with the parameters it wants, and it adjusts whatever it wants. [00:43:15] Francois: And then it feeds that into the, uh, video or audio tag. and the browser takes care of the rest. So it's really about, doing, you know, the adaptive streaming. let applications do it, and then, uh, let the user agent, uh, the browser takes, take care of the rendering itself. That's media source extensions. [00:43:32] Francois: Initially it was pushed by, uh, Netflix. They were not the only ones of course, but there, there was a, a ma, a major, uh, proponent of this, uh, technical solution, because they wanted, uh, they, uh, they were, expanding all over the world, uh, with, uh, plenty of native, applications on all sorts of, uh, of, uh, devices. [00:43:52] Francois: And they wanted to have a way to stream content on the web as well. both for both, I guess, to expand to, um, a new, um, ecosystem, the web, uh, providing new opportunities, let's say. But at the same time also to have a fallback, in case they, because for native support on different platforms, they sometimes had to enter business agreements with, uh, you know, the hardware manufacturers, the whatever, the, uh, service provider or whatever. [00:44:19] Francois: and so that was a way to have a full back. That kind of work is more open, in case, uh, things take some time and so on. So, and they probably had other reasons. I mean, I'm not, I can't speak on behalf of Netflix, uh, on others, but they were not the only ones of course, uh, supporting this, uh, me, uh, media source extension, uh, uh, specification. [00:44:42] Francois: and that went kind of, well, I think it was creating 2011. I mean, the, the work started in 2011 and the recommendation was published in 2016, which is not too bad from a standardization perspective. It means only five years, you know, it's a very short amount of time. 
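As a concrete companion to the description above, here is a hedged sketch of the core Media Source Extensions calls: the application creates a MediaSource, attaches it to a video element, and appends fetched segments to a SourceBuffer. The MIME/codec string and segment URLs are placeholders, and a real adaptive player (for example DASH.js, mentioned later) layers bitrate selection on top of this.

```typescript
// Minimal Media Source Extensions flow: create a MediaSource, attach it to a
// <video>, create a SourceBuffer, and append fetched segments to it.
const video = document.querySelector("video") as HTMLVideoElement; // assumes a <video> exists
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener("sourceopen", async () => {
  // Codec string is an example; a real player picks it per manifest/capabilities.
  const sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');

  // Placeholder segment list; an adaptive player would choose these per bandwidth.
  const segments = ["init.mp4", "seg-1.m4s", "seg-2.m4s"];
  for (const url of segments) {
    const data = await (await fetch(url)).arrayBuffer();
    sourceBuffer.appendBuffer(data);
    // Wait for the buffer to finish processing before appending the next chunk.
    await new Promise<void>((resolve) =>
      sourceBuffer.addEventListener("updateend", () => resolve(), { once: true }),
    );
  }
  mediaSource.endOfStream();
});
```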
Encrypted Media Extensions [00:44:59] Francois: At the same time, and in parallel and complement to the Media Source Extensions specification, uh, there was work on the Encrypted Media Extensions, and here it was pushed by the same proponents in a way, because they wanted to get premium content on the web. [00:45:14] Francois: And by premium content, you think of movies and, uh, these kinds of beasts. And the problem with the, I guess the basic issue with, uh, digital assets such as movies, is that they cost hundreds of millions to produce. I mean, some cost less of course. And yet it's super easy to copy them if you have access to the digital, uh, file. [00:45:35] Francois: You just copy and, uh, and that's it. Piracy, uh, is super easy, uh, to achieve. It's illegal of course, but it's super easy to do. And so that's where the different legislations come into play with digital rights management. The fact is most countries allow systems that can encrypt content, uh, through what we call DRM systems. [00:45:59] Francois: So content providers, uh, the, the ones that have movies, so the studios here, more, more and more, and Netflix is one, uh, one of the studios nowadays. Um, but not only, not only them, all major studios would, uh, push for, wanted to have something that would allow them to stream encrypted content, encrypted audio and video, uh, mostly video, to, uh, to web applications so that, uh, you [00:46:25] Francois: provide the movies. Otherwise, they, they are just basically saying, sorry, but, uh, this premium content will never make it to the web, because there's no way we're gonna, uh, send it in the clear, to, uh, to the end user. So Encrypted Media Extensions is, uh, is an API that allows to interface with, uh, what's called the content decryption module, CDM, uh, which itself interacts with, uh, the DRM systems that, uh, the browser may, may or may not support. [00:46:52] Francois: And so it provides a way for an application to receive encrypted content, pass it over, get the, the, the right keys, the right license keys from whatever system, actually pass that logic over to the, to the user agent, which passes, passes it over to, uh, the CDM system, which is kind of a black box, uh, that does its magic to get the right, uh, decryption key, and then to decrypt the content so that it can be rendered. [00:47:21] Francois: The Encrypted Media Extensions triggered a, a hell of a lot of, uh, controversy, because it's DRM, and DRM systems, uh, many people think, uh, should be banned, uh, especially on the web, because the, the premise of the web is that the, the user trusts a user agent. The, the web browser is called the user agent in all our, all our specifications. [00:47:44] Francois: And that's, uh, that's the trust relationship. And then they interact with a, a content provider. And so whatever they do with the content is their, I guess, actually their problem. And DRM introduces a third party, which is, uh, there's, uh, the, the end user no longer has the control on the content. [00:48:03] Francois: It has to rely on something else that restricts what it can achieve with the content. So it's, uh, it's not only a trust relationship with its, uh, user agent, it's also with, uh, with something else, which is the content provider, uh, in the end, the one that has the, uh, the license, or provides the license. [00:48:22] Francois: And so that's, that triggered, uh, a hell of a lot of, uh, of discussions in the W3C, which degenerated, uh, uh, into, uh, formal objections being raised against the specification. And that escalated, I mean, at all levels. It's, it's the, the story in, uh, W3C that, um, really, uh, divided the membership into opposed camps in a way. It was not really 50-50, and not just a huge fight, but that, that triggered a hell of a lot of discussions and a lot of, a lot of, uh, of formal objections at the time. [00:49:00] Francois: Uh, we were still, from a governance perspective, interestingly, um, the W3C used to be a dictatorship. It's not how you should formulate it, of course, and I hope it's not going to be public, this podcast. Uh, but the, uh, it was a benevolent dictatorship. You could see it this way in the sense that, uh, the whole process escalated to one single person, who was Tim Berners-Lee, who had the final say when none of the other layers had managed to catch and to resolve a conflict. [00:49:32] Francois: Uh, that has hardly ever happened in, uh, the history of the W3C, but that happened for EME, for Encrypted Media Extensions. It had to go to the, uh, director level, who, uh, after due consideration, uh, decided to allow the EME to proceed. And that's why we have a, an EME, uh, uh, standard right now, but still, it remains something on the side. [00:49:56] Francois: EME, we're still, uh, it's still in the scope of the media working group, for example. But the scope, if you look at the charter of the working group, we try to scope the, the, the updates we can make to the specification, uh, to make sure that we don't reopen, reopen, uh, a can of worms, because, well, it's really a, a topic that triggers friction for good and bad reasons again.
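Before the conversation returns to Media Source Extensions below, here is a compressed, hedged sketch of the EME flow just described: request access to a key system, attach MediaKeys to the video element, and exchange license messages when the browser reports encrypted initialization data. The key-system string, codec string, and license-server URL are illustrative placeholders.

```typescript
// Compressed Encrypted Media Extensions flow (illustrative placeholders only).
async function setUpEme(video: HTMLVideoElement): Promise<void> {
  // Example key system; real apps probe several (Widevine, PlayReady, FairPlay).
  const keySystemAccess = await navigator.requestMediaKeySystemAccess("com.widevine.alpha", [
    {
      initDataTypes: ["cenc"],
      videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
    },
  ]);
  const mediaKeys = await keySystemAccess.createMediaKeys();
  await video.setMediaKeys(mediaKeys);

  // The browser fires "encrypted" when it encounters protected initialization data.
  video.addEventListener("encrypted", async (event) => {
    const session = mediaKeys.createSession();
    // The CDM produces a license request; the app forwards it to its license server.
    session.addEventListener("message", async (msg) => {
      const licenseResponse = await fetch("https://license.example.com", { // placeholder URL
        method: "POST",
        body: msg.message,
      });
      await session.update(await licenseResponse.arrayBuffer());
    });
    await session.generateRequest(event.initDataType, event.initData!);
  });
}
```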
[00:48:22] Francois: And so that's, that triggers, uh, a hell of a lot of, uh, of discussions in the W3C degenerated, uh, uh, into, uh, formal objections being raised against the specification. and that escalated to, to the, I mean, at all leverage it. It's, it's the, the story in, uh, W3C that, um, really, uh, divided the membership into, opposed camps in a way, if you, that's was not only year, it was not really 50 50 in the sense that not just a huge fights, but the, that's, that triggered a hell of a lot of discussions and a lot of, a lot of, uh, of formal objections at the time. [00:49:00] Francois: Uh, we were still, From a governance perspective, interestingly, um, the W3C used to be a dictatorship. It's not how you should formulate it, of course, and I hope it's not going to be public, this podcast. Uh, but the, uh, it was a benevolent dictatorship. You could see it this way in the sense that, uh, the whole process escalated to one single person was, Tim Burners Lee, who had the final say, on when, when none of the other layers, had managed to catch and to resolve, a conflict. [00:49:32] Francois: Uh, that has hardly ever happened in, uh, the history of the W3C, but that happened to the two for EME, for encrypted media extensions. It had to go to the, uh, director level who, uh, after due consideration, uh, decided to, allow the EME to proceed. and that's why we have a, an EME, uh, uh, standard right now, but still re it remains something on the side. [00:49:56] Francois: EME we're still, uh, it's still in the scope of the media working group, for example. but the scope, if you look at the charter of the working group, we try to scope the, the, the, the, the updates we can make to the specification, uh, to make sure that we don't reopen, reopen, uh, a can of worms, because, well, it's really a, a topic that triggers friction for good and bad reasons again. [00:50:20] Jeremy: And when you talk about the media source extensions, that is the ability to write custom code to stream video in whatever way you want. You mentioned, the MPEG-DASH and http live streaming. So in that case, would that be the developer gets to write that code in JavaScript that's executed by the browser? [00:50:43] Francois: Yep, that's, uh, that would be it. and then typically, I guess the approach nowadays is more and more to develop low level APIs into W3C or web in, in general, I guess. And to let, uh. Libraries emerge that are going to make lives of a, a developer, uh, easier. So for MPEG DASH, we have the DASH.js, which does a fantastic job at, uh, at implementing the complexity of, uh, of adaptive streaming. [00:51:13] Francois: And you just, you just hook it into your, your workflow. And that's, uh, and that's it. Encrypted Media Extensions are closed source [00:51:20] Jeremy: And with the encrypted media extensions I'm trying to picture how those work and how they work differently. [00:51:28] Francois: Well, it's because the, the, the, the key architecture is that the, the stream that you, the stream that you may assemble with a media source extensions, for example. 'cause typically they, they're used in collaboration. When you hook the, hook it into the video tag, you also. Call EME and actually the stream goes to EME. [00:51:49] Francois: And when it goes to EME, actually the user agent hands the encrypted stream. You're still encrypted at this time. Uh, encrypted, uh, stream goes to the CDM content decryption module, and that's a black box well, it has some black, black, uh, black box logic. 
Encrypted Media Extensions are closed source
[00:51:20] Jeremy: And with the Encrypted Media Extensions, I'm trying to picture how those work and how they work differently.
[00:51:28] Francois: Well, the key architecture is that the stream you assemble with Media Source Extensions, for example (because typically the two are used together), when you hook it into the video tag, you also call EME, and the stream actually goes to EME.
[00:51:49] Francois: And when it goes to EME, the user agent hands the encrypted stream (it's still encrypted at this time) to the CDM, the Content Decryption Module, and that's a black box, well, it has some black-box logic. Even if you look at the Chromium source code, for example, you won't see the implementation of the CDM, because it's a black box. It's not part of the browser itself; its execution is sandboxed.
[00:52:17] Francois: EME is kind of unique in this way, where the CDM is not allowed to make network requests, for example, again for privacy reasons. So anyway, the CDM box has the logic to decrypt the content and it hands it over, and then it depends on the level of protection
[00:52:37] Francois: you need or that the system supports. It can be software-based protection, in which case a highly motivated attacker could actually get access to the decoded stream, or it can be more hardware protected, in which case it goes to your final screen.
[00:52:58] Francois: But it goes through the hardware in a mode that the OS supports, a mode that even the user agent doesn't have access to, so it can't even see the pixels that get rendered on the screen. There are several other APIs that you could use, for example, to take a screenshot of your application, and so on.
[00:53:16] Francois: And you cannot apply them to such content, because they're just going to return a black frame, again because the user agent itself does not see the pixels, which is exactly what you want with encrypted content.
[00:53:29] Jeremy: And the Content Decryption Module, if I understand correctly, it's something that's shipped with the browsers, but you were saying that if you were to look at the public source code of Chromium or of Firefox, you would not see that implementation.
Content Decryption Module (Widevine, PlayReady)
[00:53:47] Francois: True. The typical example is Widevine. Interestingly, in theory these systems could have been provided by anyone; in practice, they've been provided by the browser vendors themselves. So Google has Widevine, Microsoft has something called PlayReady, and Apple has its own, whose name escapes me right now. That's basically what each of them supports, so they also own that code, but in a way they don't have to. And Firefox, I don't remember which of these three they support, but they typically don't own that code.
[00:54:29] Francois: They provide a wrapper around it. And that's exactly the crux of the issue that people have with DRM, right? Suddenly you have a bit of code running there that, okay, you can sandbox, but that you cannot inspect and whose source code you don't have access to.
[00:54:52] Jeremy: That's interesting. So almost the entire browser is open source, but if you want to watch a Netflix movie, for example, then you need to run this CDM in addition to just the browser code. I think we've kind of covered a lot.
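As a rough illustration of the flow Francois describes, a page reacting to encrypted media might look something like the sketch below (again, not code from the episode). The key system string, codec, and license server URL are placeholder assumptions; Widevine, PlayReady, and Apple's system each have their own configuration details.

```js
// Minimal EME sketch: react to encrypted init data, set up a CDM session,
// and relay the CDM's opaque license request to a license server.
const video = document.querySelector("video");

video.addEventListener("encrypted", async (event) => {
  const access = await navigator.requestMediaKeySystemAccess("com.widevine.alpha", [{
    initDataTypes: [event.initDataType],
    videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
  }]);
  const mediaKeys = await access.createMediaKeys();
  await video.setMediaKeys(mediaKeys);

  const session = mediaKeys.createSession();
  session.addEventListener("message", async (msg) => {
    // The license request is opaque to the page: forward it to the license
    // server (placeholder URL) and hand the response straight back to the CDM.
    const res = await fetch("https://license.example.com/widevine", {
      method: "POST",
      body: msg.message,
    });
    await session.update(await res.arrayBuffer());
  });
  await session.generateRequest(event.initDataType, event.initData);
});
```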
Documenting what's available in browsers for developers
[00:55:13] Jeremy: I wonder if there are any other examples or anything else you thought would be important to mention in the context of the W3C.
[00:55:23] Francois: There's one thing which relates to activities I'm also doing at W3C. Here we've been talking a lot about standards and implementations in browsers, but there's also the adoption of these technology standards by developers in general, and making sure that developers are aware of what exists and understand what exists. And one of the key pain points that people
[00:55:54] Francois: keep raising about the web platform is, first, that the web platform is unique in the sense that there are different implementations.
[00:56:03] Francois: In other contexts, other runtimes, there's just one implementation, provided by the company that owns the system. The web platform is implemented by different organizations, and so you end up with a system where what's in the specs is not necessarily supported.
[00:56:22] Francois: And of course MDN tries to document what's supported, thoroughly, but for MDN to work there's a hell of a lot of need for data that tracks browser support. This data typically lives in a project called Browser Compat Data, BCD, owned by MDN as well, although the Open Web Docs collective is the one maintaining that data under the hood.
[00:56:50] Francois: Anyway, all of that to say that we need to track things beyond the work on technical specifications, because if you look at it from a W3C perspective, life ends when the spec reaches Candidate Recommendation or Recommendation; you could just say, oh, I'm done with my work. But that's not how things work.
[00:57:10] Francois: You need the feedback loop, to make sure that developers get the information and can provide the feedback that standardization and browser vendors can benefit from. So we've been working on a project called web features, mainly with browser vendors, plus a few folks from MDN and Can I Use and various other people, to catalog the web in terms of features that speak to developers, and from that catalog.
[00:57:40] So it's a set of feature IDs, each with a feature name and a feature description, phrased the way developers would understand them, instead of going too fine-grained ("there's this one function call that does this"), which is the kind of support data you get from Browser Compat Data and MDN. It's a coarser-grained structure that captures the features that make sense to developers.
[00:58:09] They speak to developers; that's what developers talk about, so that's the level at which we need data on these particular features, because that's how developers are going to approach the specs. From that we've derived the notion of Baseline badges, which are now shown on MDN and on Can I Use and integrated into IDE tools such as Visual Studio, and some libraries and linters have started to integrate that data too.
[00:58:41] Francois: The way it works is that we've been mapping these coarser-grained features to BCD's finer-grained support data, and from there we've been deriving a kind of badge that says, for example, this feature has limited availability because it's only implemented in one or two browsers.
Or it's newly available, because it's implemented across the main browsers that people use, but only recently. Or it's widely available, which, after lots of discussion in the group, we ended up defining as essentially 30 months after a feature became newly available.
[00:59:34] Francois: That's roughly the time it takes for the different versions of the browsers to propagate, because it's not the case that people immediately get a new version of a browser the moment it ships. It takes a while to propagate across the user base.
[00:59:56] Francois: So the goal is to have a signal that developers can rely on, saying, okay, it's widely available, so I can really use that feature. And of course, if that doesn't work, then we need to know about it. So we're also working with the people running developer surveys such as State of CSS, State of HTML, and State of JavaScript, those are the main ones I guess.
[01:00:15] Francois: We're also running MDN short surveys with the MDN people to gather feedback on these same features and complete the loop. This data is also used internally by browser vendors to inform their prioritization process, typically as part of the Interop project that they're also running on the side.
[01:00:43] Francois: So I've mentioned a number of different projects coming along together, but the goal is to create links across all of these ongoing projects, with a view to involving developers more, gathering feedback as early as possible, and informing the decisions
[01:01:04] Francois: we take at the standardization level that can affect the lives of developers, making sure they're affected in a positive way.
[01:01:14] Jeremy: I'm just trying to understand, because you had mentioned web features and Baseline, and I was trying to picture where developers would actually see these things. It sounds like the W3C comes up with what stage some of these features are at, and then developers end up seeing it on MDN or some other site.
[01:01:37] Francois: So, I'm working on it, but that doesn't make it a W3C thing. Again, we have different types of groups; this is a community group, the WebDX Community Group at W3C, which means it's a community-owned thing. That's why I mentioned working with representatives from browser vendors and people from MDN and from Open Web Docs. So that's the first point. The second point is that this data is indeed now being integrated. You'll see it on top of most MDN pages: if you look at pretty much any feature, you'll see a few logos and a Baseline banner. And on Can I Use it's the same thing.
[01:02:24] Francois: You're going to get a Baseline banner. It's meant to capture whether the feature is widely available or whether you may need to pay attention to it.
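For readers who want to see what the feature catalog behind these banners looks like, here is a small sketch. The package name and field names are assumptions based on the WebDX Community Group's published web-features data, not something stated in the episode.

```js
// Sketch: reading Baseline status from the web-features data that backs the badges.
import { features } from "web-features"; // assumed package and export name

const feature = features["container-queries"]; // assumed feature ID
console.log(feature.name);
// status.baseline is false (limited availability), "low" (newly available),
// or "high" (widely available, roughly 30 months after becoming "low").
console.log(feature.status.baseline);
console.log(feature.status.baseline_low_date);
```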
Of course, it's a simplification, and the way the messaging is done to developers is meant to capture the fact that they may want to look into more than just this Baseline status, because
[01:02:54] Francois: if you take a look at web-platform-tests, for example, and you were to base your assessment of whether a feature is supported on test results, you would end up saying the web platform has no supported technology at all, because there is essentially no API where browsers pass 100% of the test suite.
[01:03:18] Francois: There may be a few of them, I don't know. So there's a simplification in the process when a feature is said to be Baseline; there may be more things to look at nevertheless, but it's meant to provide a signal that developers can still rely on in their day-to-day lives,
[01:03:36] Francois: if they use the feature as reasonably intended and don't push the logic too far into advanced cases.
[01:03:48] Jeremy: I see. Yeah. I'm looking at one of the pages on MDN right now, and I can see at the top there's the Baseline banner. It mentions that this feature works across many browsers and devices, and it says how long it's been available. So that's a way people can tell at a glance which APIs they can use.
[01:04:08] Francois: It also started out of a desire to summarize the browser compatibility table that you see at the bottom of the page on MDN. Developers were saying, well, it's fine, but it goes into too much detail, so in the end we don't know: can we use that feature or can we not? So it's meant as an informed summary of that, and it relies on the same data again. More importantly, beyond MDN, we're working with tools providers to integrate it as well. I mentioned Visual Studio as one of them: recently they shipped a new version where, when you use a feature, you can have a contextual
[01:04:53] Francois: menu that tells you, yeah, that's fine, you can use this CSS property, it's widely available, or be aware, this one has limited availability, it's only available in Firefox or Chrome or Safari/WebKit, whatever.
[01:05:08] Jeremy: I think that's a good place to wrap it up. If people want to learn more about the work you're doing, or learn more about this whole recommendations process, where should they head?
[01:05:23] Francois: Generally speaking, we're extremely open to people contributing to the W3C, and where they should go depends on what they want. Usually, the way things start for someone getting involved in the W3C is that they have some

Telecom Reseller
CPaaSAA's Amsterdam Summit: From APIs to Intelligent Engagement, Podcast

Telecom Reseller

Play Episode Listen Later Sep 15, 2025


“Voice is back—and with AI, network APIs, and VCons, we're moving from channels to intelligent engagement.” — Kevin Nethercott & Rob Kurver, CPaaS Acceleration Alliance Kevin Nethercott and Rob Kurver of the CPaaS Acceleration Alliance (CPaaSAA) joined Doug Green, Publisher of Technology Reseller News, to preview their Member Summit in Amsterdam, September 22–24 and to chart where programmable communications is headed next. Born from messaging (SMS/A2P), CPaaS now spans voice, video, UCaaS/CCaaS integrations, and carrier network APIs. With AI and the emerging VCon standard (an IETF effort to containerize conversational data across voice, chat, email, and web), CPaaSAA frames the industry's North Star as “intelligent engagement”—outcomes-focused solutions that unify channels, data, and automation. Alliance momentum & event focus 120+ member companies across platforms and operators; ~50 speakers from 20+ countries; curated, senior-level audience. Launch of a Case Directory (120+ commercially available use cases) organized by vertical and region, reflecting where buyers are actually seeing ROI. Publication of the State of CPaaS insights and formation of a VCon working group to accelerate standards adoption and go-to-market patterns. Partnerships highlighted with GSMA and the VCon Foundation. Why this matters now With pandemic-era “Zoom times” behind us, the market is prioritizing profitability and stickiness. CPaaS winners are moving beyond horizontal APIs to verticalized, regulated, and region-specific applications. Example: a Redisys operator solution that uses AI in the core network to improve call intelligibility for people who are hard of hearing—a high-value, retention-friendly use case affecting ~15–18% of users. Takeaways for enterprises and partners Monetize voice again: AI + VCons make conversations machine-usable, improving CX and analytics. Differentiate with network APIs: Security, identity, and authentication services move CPaaS beyond messaging. Build for outcomes: Package solutions by industry and locality; not everything works everywhere the same way. Standardize the data layer: VCons are poised to do for conversations what SIP did for signaling. For membership and summit details, visit cpaasaa.com

Telecom Reseller
Frontline Group & Strolid: Redefining the Contact Center with vCons, Podcast

Telecom Reseller

Play Episode Listen Later Aug 20, 2025


In this Technology Reseller News podcast, Doug Green interviews Jill Blankenship, CEO of Frontline Group, and Thomas McCarthy-Howe, CTO of Strolid, about their collaboration on vCons (Virtualized Conversations)—a new file format that could transform how conversations are captured, stored, and analyzed in the contact center. A vCon is a standardized file (currently under IETF review) that stores the full content of a conversation—recording, transcript, participants, and metadata. Unlike traditional call recordings or after-call notes, vCons provide secure, portable, and queryable data that can be easily integrated into AI systems. For Frontline Group, this means agents no longer need to spend time typing summaries after calls. “vCon captures every part of that conversation,” Blankenship explains. This allows agents to focus on empathy and listening, while supervisors and customers benefit from richer, more accurate insights. For Strolid, which manages high-volume conversations in the automotive sector, vCons provide new visibility into customer frustrations and operational challenges. McCarthy-Howe notes: “Because vCons capture everything, it's easier to bring all the data together so the blindness gets cured.” The applications extend beyond sales and support. In critical services such as 2-1-1, where people call for help with food, housing, or emergencies, vCons can ensure every call is captured, flagged for urgent needs, and analyzed for emerging trends—all while prioritizing data privacy and portability. Blankenship emphasizes that AI should not replace people, but empower them: “We're training our staff to be AI managers—coaching, tweaking, and escalating when needed. It's the people behind the AI that bring the true value.” This partnership demonstrates how AI, human expertise, and open standards can combine to make conversations more accurate, secure, and impactful across industries. Learn more at frontline.group and strolid.com.

PING
The Inevitability of Centrality

PING

Play Episode Listen Later Aug 20, 2025 60:32


In this episode of PING, APNIC's Chief Scientist, Geoff Huston, discusses the economic inevitability of centrality in the modern Internet. Despite our best intentions, and a lot of long-standing belief amongst IETF technologists, no amount of open standards and end-to-end protocol design prevents large players at all levels of the network (from the physical infrastructure right up to the applications and the data centres which house them) from seeking to acquire smaller competitors and avoid sharing the space with anyone else. Some of this is a consequence of the drive for efficiency. Part of it has been fuelled by the effects of Moore's law, and the cost of capital investment against the time available to recover the costs. In an unexpected outcome, networking has become (to all intents and purposes) "free", and instead of end-to-end, we now routinely expect to get data through highly localised, replicated sources. The main cost these days is land, electric power, and air-conditioning. This causes a tendency to concentration, and networks and protocols play very little part in the decision about who acquires these assets and operates them. The network still exists of course, but increasingly data flows over private links, and is not subject to open protocol design imperatives. A quote from Peter Thiel highlights how the modern venture capitalist in our space does not actively seek to operate in a competitive market. As Peter says, "competition is for losers". It can be hard to avoid the "good" and "bad" labels when talking about this, but Geoff is clear he isn't here to argue what is right or wrong, simply to observe the behaviour and the consequences. Geoff presented on centrality to the Decentralised Internet Research Group (DINRG) at the recent IETF meeting held in Madrid, and as he observes, "distributed" is not the same as "decentralised": we've managed to achieve the first one, but the second eludes us.

Packet Pushers - Full Podcast Feed
TNO037: The Next Era of Network Management and Operations

Packet Pushers - Full Podcast Feed

Play Episode Listen Later Aug 1, 2025 46:31


What's the next era of network management and operations? Total Network Operations talks to Mahesh Jethanandani, Chair of NETCONF Working Group and Distinguished Engineer at Arrcus. Mahesh describes a workshop from December of 2024 that sought to investigate the past, present, and future of network management and operations. He talks about the IETF's role in... Read more »

Packet Pushers - Fat Pipe
TNO037: The Next Era of Network Management and Operations

Packet Pushers - Fat Pipe

Play Episode Listen Later Aug 1, 2025 46:31


What's the next era of network management and operations? Total Network Operations talks to Mahesh Jethanandani, Chair of NETCONF Working Group and Distinguished Engineer at Arrcus. Mahesh describes a workshop from December of 2024 that sought to investigate the past, present, and future of network management and operations. He talks about the IETF's role in... Read more »

Passwort - der Podcast von heise security
DNSSEC, die DNS Security Extensions

Passwort - der Podcast von heise security

Play Episode Listen Later Jul 30, 2025 106:56


The Domain Name System, DNS for short, is one of the cornerstones of the modern Internet. All the more important, then, that it delivers reliable and unforgeable information. That is where DNSSEC, the DNS Security Extensions, helps. What it is, what it can do, how to enable it, and what you gain from it is explained to the hosts in this episode by a guest: DNSSEC expert Peter Thomassen has been working for years on the front lines of various standards bodies and continues to develop the security features of DNS. He focuses especially on automation, an area where DNSSEC still lags behind other large ecosystems such as the CA world. - https://desec.io/ - Malware in TXT records: https://arstechnica.com/security/2025/07/hackers-exploit-a-blind-spot-by-hiding-malware-inside-dns-records/ - Post-quantum DNSSEC testbed & field study: https://pq-dnssec.dedyn.io/ - DS automation: RFC 7344, 8078, 9615 - IETF draft: "Dry run DNSSEC" - ICANN SSAC report on DS automation (SAC126): https://itp.cdn.icann.org/en/files/security-and-stability-advisory-committee-ssac-reports/sac-126-16-08-2024-en.pdf - Automation guidelines for registrars (draft): https://datatracker.ietf.org/doc/draft-shetho-dnsop-ds-automation/ - Follow us in the Fediverse: @christopherkunz@chaos.social @syt@social.heise.de Members of our security community on heise security PRO get every episode two days early. More info: https://pro.heise.de/passwort

linkmeup. Подкаст про IT и про людей
До нас дошло S02E07. Протокол на двух салфетках

linkmeup. Подкаст про IT и про людей

Play Episode Listen Later Jun 13, 2025


What do the Internet and a cafeteria napkin have in common? The Border Gateway Protocol, or simply BGP, is the protocol that has kept the Internet afloat for more than 30 years. In this episode we find out how it came to be, how it decides where to carry your cat videos, and what can go wrong if you make just one mistake in its configuration. Sources: https://blog.apnic.net/2019/06/10/happy-birthday-bgp - a post on the APNIC blog marking BGP's 30th anniversary https://datatracker.ietf.org/doc/html/rfc827 - the RFC for the EGP protocol https://datatracker.ietf.org/doc/html/rfc1105 - the RFC for BGP-1 https://www.rfc-editor.org/info/rfc7908 - the RFC describing and classifying the problem of route leaks in BGP https://datatracker.ietf.org/doc/html/rfc1366 - the RFC proposing the creation of regional registries for IP networks https://www.rfc-editor.org/rfc/rfc1519 - the RFC describing Classless Inter-Domain Routing (CIDR) https://www.ietf.org/proceedings/12.pdf - proceedings of the 12th IETF meeting, the one where BGP was born during a lunch break https://www.ietf.org/proceedings/13.pdf - proceedings of the 13th IETF meeting, where BGP was already being actively discussed https://datatracker.ietf.org/wg/bgp/about/ - the IETF working group that worked on the early versions of BGP https://datatracker.ietf.org/wg/idr/about/ - the IETF working group that continued developing BGP from version 4 onwards https://www.washingtonpost.com/sf/business/2015/05/31/net-of-insecurity-part-2 - a Washington Post article on Internet security problems in the context of BGP https://newsroom.cisco.com/c/r/newsroom/en/us/a/y2024/m12/cisco-employee-no-4-looks-back-and-forward.html - an interview with Cisco's Kirk Lougheed https://computerhistory.org/blog/the-two-napkin-protocol/ - a post on the Computer History Museum blog https://www.rfc-editor.org/rfc-index2.html - the index of all RFCs, where the keyword "BGP" currently yields a full 201 matches! https://habr.com/ru/companies/rt-dc/articles/532292/ - a Habr article explaining what RPKI is in BGP https://rpki-monitor.antd.nist.gov/ROV - monitoring of the percentage of RPKI adoption in BGP https://linkmeup.ru/blog/713/ - an article about the biggest BGP outages https://lists.ucc.gu.uwa.edu.au/pipermail/lore/2006-August/000040.html - the AS7007 incident https://web.archive.org/web/20040314224307/http://www.merit.edu/mail.archives/nanog/1997-04/msg00444.html - an apology post from a representative of the upstream provider whose customer was AS7007 https://web.archive.org/web/20040803141940/http://www.merit.edu/mail.archives/nanog/1997-04/msg00340.html - a thread discussing the AS7007 incident as it unfolded, from 25 April 1997 https://habr.com/ru/companies/flant/articles/581560/ - the Facebook* outage incident * Meta, as well as its product Facebook, is designated as an extremist organization in the Russian Federation

Telecom Reseller
“The Impact Is Now”: TeleCloud's Damon Finaldi on vCons, AI, and the Future of Telecom, Podcast

Telecom Reseller

Play Episode Listen Later Jun 3, 2025


“The impact is now,” says Damon Finaldi, President of TeleCloud. “In under a year, this will become more commonplace in the telecom business.” In this episode of Technology Reseller News, Publisher Doug Green welcomes back Damon Finaldi of TeleCloud, a Cloud Communications Alliance (CCA) member, for a live demonstration and deep dive into one of telecom's most talked-about innovations: vCons (Virtual Conversations). TeleCloud, a New Jersey-based cloud service provider, has deep roots in direct client interaction across industries like healthcare, auto, legal, and service trades. That close engagement has led to a key insight: data, not just service, is the most valuable business asset—and vCons are the vehicle to unlock it. Powered by conversational AI, vCons capture and enrich business communications (calls, emails, texts), then deliver actionable insights through TeleCloud's dashboard interface. Finaldi explains how this real-time analysis reveals call trends, sentiment, tone, and key phrases, offering triggers and alerts to preempt issues before they escalate. “We're not just recording calls—we're making the data useful,” Finaldi says. Through integration with vertical-specific applications, such as appointment systems for urgent care clinics, TeleCloud is enabling real-time business intelligence across customer interactions. Key takeaways from the podcast: vCons turn voice and text data into actionable insight with AI TeleCloud's platform now delivers real-time sentiment and trend analysis Insights dashboards trigger alerts and stitch together multi-channel data Service providers must adopt this or risk losing relevance—and valuation The vCon format is being approved by the IETF as a global standard As AI adoption accelerates, TeleCloud is positioning itself not just as a cloud provider—but as a business intelligence partner. The future of telecom, Finaldi argues, lies in offering clients more than connectivity—it's about delivering clarity. Learn more at: https://telecloud.net

PING
DELEG: Changing the DNS engine in flight again

PING

Play Episode Listen Later May 28, 2025 59:27


In this episode of PING, APNIC's Chief Scientist, Geoff Huston, revisits changes underway in how the Domain Name System (DNS) delegates authority over a given zone and how resolvers discover the new authoritative sources. We last explored this in March 2024.  In DNS, the word ‘domain' refers to a scope of authority. Within a domain, everything is governed by its delegated authority. While that authority may only directly manage its immediate subdomains (children), its control implicitly extends to all subordinate levels (grandchildren and beyond). If a parent domain withdraws delegation from a child, everything beneath that child disappears. Think of it like a Venn diagram of nested circles — being a subdomain means being entirely within the parent's scope. The issue lies in how this delegation is handled. It's by way of nameserver (NS) records. These are both part of the child zone (where they are defined) and the parent zone (which must reference them). This becomes especially tricky with DNSSEC. The parent can't authoritatively sign the child's NS records because they are technically owned by the child. But if the child signs them, it breaks the trust chain from the parent. Another complication is the emergence of third parties to the delegate, who actually operate the machinery of the DNS. We need mechanisms to give them permission to make changes to operational aspects of delegation, but not to hold all the keys a delegate has regarding their domain name. A new activity has been spun up in the IETF to discuss how to alter this delegation problem by creating a new kind of DNS record, the DELEG record. This is proposed to follow the Service Binding model defined in RFC 9460. Exactly how this works and what it means for the DNS is still up in the air. DELEG could fundamentally change how authoritative answers are discovered, how DNS messages are transported, and how intermediaries interact with the DNS ecosystem. In the future, significant portions of DNS traffic might flow over new protocols, introducing novel behaviours in the relationships between resolvers and authoritative servers.

Root Causes: A PKI and Security Podcast
Root Causes 497: PQC Update with Sofia Celi

Root Causes: A PKI and Security Podcast

Play Episode Listen Later May 21, 2025 19:50


Guest Sofia Celi (IETF, Brave) returns to talk about important developments in post quantum cryptography. Sofia tells us about her candidate algorithm MAYO and what is happening with the NIST PQC onramp. We learn about KEM TLS and the status of PQC initiatives in IETF.

Telecom Reseller
CPaaSAA Launches Service Provider Executive Forum to Drive Growth and Innovation, Podcast

Telecom Reseller

Play Episode Listen Later May 13, 2025


"We're at an exciting intersection — a convergence of legacy customer bases with emerging technologies like vCons and AI," says Kevin Nethercott, Managing Partner of the CPaaS Acceleration Alliance (CPaaSAA), in this special podcast with Technology Reseller News publisher Doug Green. In this episode, Nethercott announces the launch of the Service Provider Executive Forum, a new initiative designed to connect and empower CSP and MSP leaders, especially in North America. Built around CPaaSAA's extensive global ecosystem — now over 100 members strong — the Forum aims to provide business owners and executives with curated insights, networking opportunities, and access to the Alliance's global research and advisory marketplace. The Forum will include: Monthly newsletters tailored for executive decision-makers In-person meetups and masterminds for networking and idea exchange Exclusive research content led by UK-based analyst Andrew Collinson Access to AI and data working groups, and early insight into transformative technologies like vCons Nethercott emphasizes that the Alliance is focused not only on cutting-edge innovation but also on practical enablement. CPaaSAA's commitment to industry standards, such as IETF's vCon, reflects its push to make AI implementation more effective and actionable across telecom operations. Learn more about the CPaaS Acceleration Alliance at: https://cpaasaa.com/ Read the Press Release at: https://telecomreseller.com/2025/05/13/cpaasaa-launches-the-service-provider-executive-forum-spef-to-empower-csp-msp-executives/

Watch This Space Podcast
Spotlight on Innovation with AI and Communications Technology

Watch This Space Podcast

Play Episode Listen Later May 6, 2025 34:13


April was a busy month for industry events, and the main focus for this episode was Jeff Pulver's vCon event, held in Hyannis, MA. Chris spoke at the event, with the main takeaway being that vCon is a “watch this space” initiative, especially for using AI to derive new value from conversations, including unstructured data. With vCon being early stage, the focus was mainly on laying the groundwork to make this an IETF standard, and proof of concept interop testing. Chris explained how this was a different conference experience, with the participants trying to set the foundation for vCon before it gets on the radar of the hyperscalers. Following this, Jon added his thoughts on other recent events, namely 8x8's analyst event, speaking at the Cloud Communications Alliance event, and Vector Institute's Remarkable conference in Toronto.  

Telecom Reseller
We thought we were listening to the customer—until we actually did, Strolid vCon Podcast

Telecom Reseller

Play Episode Listen Later Apr 25, 2025 13:20


Strolid's vCon Revolution: Bringing Automotive Insight to the Future of Digital Conversations HYANNISPORT, MA - “We thought we were listening to the customer—until we actually did,” said Thomas McCarthy-Howe, CTO, Strolid. The first-ever vCon Conference wrapped up in Cape Cod with a surprising yet visionary host: Strolid, a company known for advancing the automotive sales process. But as CTO Thomas McCarthy-Howe explained, Strolid's role in the conference reflects something much deeper — a transformational shift in how businesses truly hear and act on the voice of the customer. “Once you're able to actually capture the conversations in this format,” McCarthy-Howe said, “you always hear all the things your customers say — all the time.” Why vCon, and Why Now? Strolid specializes in helping automotive dealerships convert leads into in-person visits, operating at the front lines of high-stakes customer interaction. Yet their interest in vCon — a standardized container format for digital conversations — has taken the company beyond automotive and into the heart of digital transformation. “At first, we thought we were collecting feedback,” said McCarthy-Howe. “But we were getting an estimate, a filtered sliver. With vCons, we realized we had been missing most of what customers were actually saying.” Surfacing Operational Blind Spots Strolid's use of vCons revealed what McCarthy-Howe called “dark operational data.” In one example, he described how customers were often frustrated not because of poor service, but because they drove long distances to view cars that had already been sold — a disconnect caused by inaccurate online listings. “That kind of insight doesn't come from hold-time metrics,” he noted. “It comes from capturing and analyzing the full customer conversation.” Enabling Ethical, Scalable Customer Understanding In addition to insight, vCons offer a scalable way to ensure ethical data handling. “Because we can now see everything, it becomes even more urgent to manage consent, protect privacy, and respect customer data,” McCarthy-Howe said. The vCon standard, supported by the IETF working group and open-source ecosystem, enables organizations to share, analyze, and protect conversational data in a consistent, privacy-respecting manner. From Car Lots to Cross-Industry Change Although Strolid is rooted in the automotive world, the lessons apply broadly. “A dealership is just a proxy for any store,” McCarthy-Howe said. “There's a sales cycle, a customer journey, and a need for trust and transparency. We designed this not just for automotive, but for the market.” Learn More Company site: strolid.com Tech insights: strolid.ai vCon standard: ietf.org  

Telecom Reseller
Jeff Pulver: “If you're looking for the future today, it's all about the vCon”, Podcast

Telecom Reseller

Play Episode Listen Later Apr 21, 2025


Tech pioneer previews the world's first VCon-focused conference and the protocols reshaping AI, business conversations, and communications strategy ST. PETERSBURG, FL - At Cloud Connections 2025, Jeff Pulver, internet telephony pioneer and CEO of the newly launched vCon Foundation, joined Technology Reseller News to preview the first-ever VCon Conference, taking place April 22–24 in Hyannis, Massachusetts. Pulver described vCon—short for "virtualized conversation"—as a new file format standard, backed by the IETF, that captures and structures conversations across voice, chat, email, and messaging platforms. More than just a storage format, vCon is the key, he says, to unlocking insights, building memory into AI systems, and enabling truly intelligent, context-aware communications. “For anyone trying to manage unstructured data, better understand support calls, or just improve customer engagement—this is it,” Pulver said. “If you're asking yourself what you can actually do with AI in your business, the answer is two words: virtualized conversations.” The upcoming VCon event will focus on three core themes: Theory and Protocols – Understanding VCon and SCITT (Supply Chain Integrity, Transparency, and Trust) Industry Activation – Product and service announcements from companies integrating the standard Interop Testing – The first public interoperability event for VCon-compatible platforms Pulver, who famously launched Free World Dialup and co-founded Vonage, emphasized the disruptive potential of this new standard: “We've never had a universal file format for conversations before. With VCon, any AI tool, from any vendor, can now understand and analyze that data.” He also announced a related initiative called TAFI (Trust Agent Framework for AI), which incorporates VCon for memory and SCITT for trust—a new model for AI transparency and reliability. Pulver, who now refers to himself as Chief Evangelist Officer of the vCon Foundation, promised attendees real value: “If you show up and don't learn something new, I'll refund your registration. That's how confident I am.” With rapid enterprise AI adoption underway, Pulver sees VCon as the missing link. “Conversations matter. Memory matters. And VCon brings them together.” Learn more and register: www.vonevolution.com/spring25-vcon

Search Off the Record
How are web standards made?

Search Off the Record

Play Episode Listen Later Apr 17, 2025 44:56


Ever wondered how web standards are made? Martin and Gary from Google Search take you behind the scenes of the internet's governing bodies. From the IETF to the W3C, learn about the consensus-driven processes that shape the web. Find out why these standards are crucial for ensuring a consistent and reliable online experience. Resources: Episode transcript →  https://goo.gle/sotr089-transcript    Listen to more Search Off the Record → https://goo.gle/sotr-yt Subscribe to Google Search Channel → https://goo.gle/SearchCentral   Search Off the Record is a podcast series that takes you behind the scenes of Google Search with the Search Relations team.   #SOTRpodcast #SEO   Speakers: Lizzi Sassman, John Mueller, Martin Splitt, Gary Illyes Products Mentioned: Search Console - General  

PING
Night of the BGP Zombies

PING

Play Episode Listen Later Mar 5, 2025 58:52


In this episode of PING, APNIC's Chief Scientist, Geoff Huston, explores BGP "zombies": routes which should have been removed but are still there. They're the living dead of routes. How does this happen? Back in the early 2000s, Gert Döring in the RIPE NCC region was collating a state of BGP for IPv6 report, and knew each of the 300 or so IPv6 announcements directly. He understood what should be seen, and what was not being routed. He discovered in this early stage of IPv6 that some routes he knew had been withdrawn in BGP still existed when he looked into the repositories of known routing state. This is some of the first evidence of a failure mode in BGP where withdrawal of information fails to propagate, and some number of BGP speakers do not learn a route has been taken down. They hang on to it. BGP only sends differences to the current routing state as and when they emerge (if you start afresh you get a LOT of differences, because it has to send everything from a ground state of nothing, but after that you're only told when new things come and old things go away), so it can go a long time without saying anything about a particular route: if it's stable and up, there's nothing to say, and once you've passed on a withdrawal, you don't keep telling people it's gone. So if somehow in the middle of this conversation a BGP speaker misses that something is gone, as long as it doesn't have to tell anyone it exists, nobody is going to know it missed the news. In more recent times, there has been a concern this may be caused by a problem in how BGP sits inside TCP messages, and this has even led to an RFC in the IETF process to define a new way to close things out. Geoff isn't convinced this diagnosis is actually correct, or that the remediation proposed is the right one. Following a recent NANOG presentation, Geoff has been thinking about the problem, and what to do. He has a simpler approach which may work better.

IoT For All Podcast
The State of LoRaWAN in 2025 | LoRa Alliance's Alper Yegin | IoT For All Podcast

IoT For All Podcast

Play Episode Listen Later Feb 25, 2025 27:21


In this episode of the IoT For All Podcast, Alper Yegin, President and CEO of the LoRa Alliance, joins Ryan Chacon to discuss the state of LoRaWAN in 2025. The conversation covers LoRaWAN adoption, LoRaWAN use cases, the role of satellite IoT, edge, and AI, LoRaWAN certification and interoperability, misconceptions about LoRaWAN, and the future of LoRaWAN. Alper Yegin is the President and CEO of the LoRa Alliance. He oversees the organization's strategic direction and supports the development and global adoption of LoRaWAN, a key standard for low-power wide-area networks (LPWAN) in the Internet of Things (IoT). Before becoming CEO, he chaired the LoRa Alliance Technical Committee for eight years and served as Vice-Chair of the board for seven years. With over 25 years of experience in the IoT, mobile, and wireless communication industries, Yegin has held senior roles, including CTO at Actility, and various positions at Samsung Electronics, DoCoMo, and Sun Microsystems. He has contributed to global standards development in organizations such as IETF, 3GPP, ETSI, Zigbee Alliance, WiMAX Forum, and IPv6 Forum. Yegin holds 16 patents and has authored numerous technical standards and papers. The LoRa Alliance is an open, non-profit association that has grown into one of the largest and fastest-growing alliances in the technology industry since its inception in 2015. Its members work closely together and share knowledge to develop and disseminate the LoRaWAN standard, the de facto global standard for secure, quality IoT LPWAN bearer connectivity. Discover more about IoT at https://www.iotforall.com Find IoT solutions: https://marketplace.iotforall.com More about LoRa Alliance: https://lora-alliance.org Connect with Alper: https://www.linkedin.com/in/alperyegin/ (00:00) Intro (00:18) Alper Yegin and LoRa Alliance (02:58) Current state of LoRaWAN adoption (04:17) The role of LoRaWAN in the IoT ecosystem (07:19) Certification and interoperability (09:48) LoRaWAN use cases (15:03) Impact of AI and edge computing (18:09) Misconceptions about LoRaWAN (21:14) Future of LoRaWAN and challenges (24:14) Upcoming initiatives and events Subscribe to the Channel: https://bit.ly/2NlcEwm Join Our Newsletter: https://newsletter.iotforall.com Follow Us on Social: https://linktr.ee/iot4all

Telecom Reseller
From Automotive to AI: How vCon is Transforming Customer Experience, Strolid Podcast

Telecom Reseller

Play Episode Listen Later Feb 18, 2025


“When you start using vCons and actually listen to what your customers say, you realize you never really listened to them before.” – Thomas McCarthy-Howe, Co-Author of VCon and CEO of Strolid At IT Expo, Technology Reseller News publisher Doug Green sat down with Thomas McCarthy-Howe, co-inventor of the vCon standard, to discuss how structured conversation data is revolutionizing industries—starting with automotive sales. McCarthy-Howe, alongside Dan Petrie, developed the vCon standard, now adopted by the IETF, to address a fundamental gap in digital communications: there was no standardized way to structure and analyze conversations. Initially developed to help Strolid, his outsourced sales firm, better understand customer interactions, vCon has since evolved into a critical tool for enhancing both customer (CX) and employee experience (EX). vCon's First Use Case: Automotive Sales In the high-stakes world of car sales, where Strolid processes over 9,000 daily inquiries and converts 5,000-6,000 into sales, conversation intelligence is a game-changer. By structuring and analyzing millions of calls, vCon enables dealerships to: Identify and resolve customer pain points, like inaccurate inventory listings Improve lead response times to meet manufacturer SLAs Uncover hidden operational inefficiencies affecting margins Reduce customer frustration, boosting dealership reputation and loyalty “In our industry, a customer might drive an hour to see a car—only to find it's not on the lot. That's a serious problem. With vCons, we capture these issues and fix them before they escalate,” McCarthy-Howe explained. Beyond Sales: The Future of vCon VCon isn't just about efficiency—it's about data integrity and security. As McCarthy-Howe emphasized, biometric data like voice and face recordings are highly sensitive, making secure, structured storage essential in the age of deepfakes and digital fraud. Learn More Strolid: strolid.com VCon & Structured Conversations: conserver.io Next IETF Meeting (March in Bangkok): Get involved in shaping the future of vCon #vCon #AI #CustomerExperience #AutomotiveSales #TechInnovation #ITExpo

Telecom Reseller
vCon: The Next Evolution in Communication, Podcast

Telecom Reseller

Play Episode Listen Later Feb 18, 2025


"The conversation doesn't end when you hang up the phone. That's when it begins." – Jeff Pulver At the recent TMC event, Technology Reseller News publisher Doug Green sat down with Jeff Pulver, a pioneering voice in the VoIP industry and now the driving force behind vCon. Their conversation, much like the technology it centered on, was a glimpse into the future of communication—one that redefines how we capture, analyze, and leverage conversations. 30 Years of VoIP: From VocalTech to vCon For Pulver, February 13th, 2025, marked a milestone—30 years since VocalTech introduced the first consumer VoIP application, a moment that proved voice could travel over the internet, not just phone lines. While voice-over-IP technology has been in development since 1969, Pulver sees 1995 as the true launch of the VoIP industry. Since then, the landscape has evolved in ways few could have predicted. Now, Pulver is championing vCon, an IETF standard that could revolutionize digital communication. "If you're familiar with SIP (Session Initiation Protocol), then you'll appreciate that the same people behind SIP are bringing you vCon," he said. vCon is more than just a file format—it's a way to store, analyze, and extract value from conversations across voice, email, and text. A Game Changer for Businesses of All Sizes Until recently, only large enterprises could afford advanced AI-driven conversation analysis, using sentiment and metadata to gain insights. But vCon democratizes this capability, allowing small and medium-sized businesses—or even individuals—to capture and analyze conversations effortlessly. Pulver envisions a world where conversations are no longer lost but instead serve as valuable data points. "We've been letting metadata fall on the cutting room floor. Now, we can take conversations, apply AI, and gain insights without millions in R&D costs," he explained. The First-Ever vCon Interop Event To further the adoption of this groundbreaking standard, Pulver announced the first-ever vCon Interop event, set for April 22-24, 2025, in Cape Cod. The event will feature a workshop, hands-on application development, and an industry showcase where companies can test interoperability. Additionally, he is spearheading Vinevolution, an event in Bentonville, Arkansas, on April 9, focusing on the intersection of AI, telecom, and supply chain, as well as AI Com in New York City on April 4. The Future of Conversations Pulver, who coined the term Voice on the Net (VON) in 1995, sees vCon as a natural progression of his work. "We're living in a world where AI is pervasive, and communication is evolving again," he said. "This is more than just telecom—it's about transforming how we do business." With vCon, conversations become assets, not just fleeting moments. For those who want to be part of this next wave of innovation, the opportunity starts now. As Pulver put it, "If you're an early adopter, come join us." For more details, visit pulver.com or join the vCon Foundation at pulver.com/join. (This podcast summary was done by vCon)

PING
RISKY BIZ-ness

PING

Play Episode Listen Later Jan 22, 2025 44:04


Welcome back to PING, at the start of 2025. In this episode, Gautam Akiwate, (now with Apple, but at the time of recording with Stanford University) talks about the 2021 Advanced Network Research Prize winning paper, co-authored with Stefan Savage, Geoffrey Voelker and Kimberly Claffy which was titled "Risky BIZness: Risks Derived from Registrar Name Management". The paper explores a situation which emerged inside the supply chain behind DNS name delegation, in the use of an IETF protocol called Extensible Provisioning Protocol or EPP. EPP is implemented in XML over the SOAP mechanism, and is how registry-registrar communications take place, on behalf of a given domain name holder (the delegate) to record which DNS nameservers have the authority to publish the delegated zone. The problem doesn't lie in the DNS itself, but in the operational practices which emerged in some registrars, to remove dangling dependencies in the systems when domain names were de-registered. In effect they used an EPP feature to rename the dependency, so they could move on with selling the domain name to somebody else. The problem is that feature created valid names, which could themselves then be purchased. For some number of DNS consumers, those new valid nameservers would then be permitted to serve the domain, and enable attacks on the integrity of the DNS and the web. Gautam and his co-authors explored a very interesting quirk of the back end systems and in the process helped improve the security of the DNS and identified weaknesses in a long-standing "daily dump" process to provide audit and historical data.

Root Causes: A PKI and Security Podcast
Root Causes 455: PQC Standardization in IETF

Root Causes: A PKI and Security Podcast

Play Episode Listen Later Jan 8, 2025 35:54


We talk with guest Sofia Celi of Brave Browser, who leads the IETF PQC standardization effort, about the process of setting standards for PQC-compatible digital certificates. We learn about expected timelines, hybrid strategies, the NIST PQC onramp's role, and more.

ImpacTech
Steady Advisors for Steadiwear

ImpacTech

Play Episode Listen Later Dec 20, 2024 25:27


Host: Dr. Mary Goldberg, Co-Director of the IMPACT Center at the University of Pittsburgh Co-Host: Dr. Michelle Zorrilla, Senior Research Scientist and Associate Director of Technology Translation, IMPACT Center at the University of Pittsburgh Guest: Emilie Maamary, CMO of Steadiwear. Use the code "IETF" to get an additional $50 off the pre-order of the Steadiwear Three device, which is currently available at a 40% discount through the end of the year 2024. IMPACT Center | Website, Facebook, LinkedIn, Twitter Transcript | PDF Timestamps: 1:43 Steadiwear's Origin and Personal Inspiration 3:47 Challenges and Initial Steps 4:45 User Testing and Product Development 8:40 Navigating the FDA Process 16:53 Seeking Consultants and Mentors 18:37 Collaborations and Connections 21:40 Promotions and Final Thoughts

Environment Variables
Green Networking with Carlos Pignataro

Environment Variables

Play Episode Listen Later Dec 12, 2024 39:45


In this episode of Environment Variables, Anne Currie welcomes Carlos Pignataro, a leading expert in sustainable network architecture, to explore how networks can balance energy efficiency with performance and resilience. Carlos shares insights from his career at Cisco and beyond, including strategies for reducing emissions through dynamic software principles, energy-aware networking, and leveraging technologies like IoT and Content Delivery Networks (CDNs). They discuss practical applications, the alignment of green practices with business interests, and the role of multidisciplinary collaboration in driving innovation. Tune in for actionable advice and forward-thinking perspectives on making networks greener while enhancing their capabilities.

The Brave Marketer
Privacy and Digital Identity on a Blockchain

The Brave Marketer

Play Episode Listen Later Dec 4, 2024 30:38


Kyle Den Hartog, Security Engineer at Brave Software, discusses emerging use cases for crypto, and their respective privacy implications. He emphasizes the urgency for innovative solutions to safeguard personal information in our digital financial systems given current privacy gaps on blockchains.  Key Takeaways:  The delicate balance between transparency and privacy in the blockchain era The importance of community engagement in driving technological advancements and shaping the future of decentralized commerce The role of the browser in powering the evolving standards of Web3 Guest Bio:  Kyle Den Hartog, Security Engineer at Brave Software, is helping to promote a world where the Web can be more private and secure for everyone. This vision led him to be an eager contributor to the design and development of standards in W3C and IETF. With a background in security and cryptography, he has worked in domain verticals such as digital identity and Web3, and now works on browsers at Brave. His long-term focus remains on improving our symbiotic relationship with technology, and he's active in communities related to these topics. ---------------------------------------------------------------------------------------- About this Show: The Brave Technologist is here to shed light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all! Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you're a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together. The Brave Technologist Podcast is hosted by Luke Mulks, VP Business Operations at Brave Software—makers of the privacy-respecting Brave browser and Search engine, and now powering AI everywhere with the Brave Search API. Music by: Ari Dvorin Produced by: Sam Laliberte

Packet Pushers - Heavy Networking
HN 759: Deploying the BGP Monitoring Protocol (BMP) at ISP Scale

Packet Pushers - Heavy Networking

Play Episode Listen Later Nov 22, 2024 56:32


The BGP Monitoring Protocol, or BMP, is an IETF standard. With BMP you can send BGP prefixes and updates from a router to a collector before any policy filters are applied. Once collected, you can analyze this routing data without any impact on the router itself. On today's Heavy Networking, we talk with Bart Dorlandt,... Read more »

Packet Pushers - Full Podcast Feed
HN 759: Deploying the BGP Monitoring Protocol (BMP) at ISP Scale

Packet Pushers - Full Podcast Feed

Play Episode Listen Later Nov 22, 2024 56:32


The BGP Monitoring Protocol, or BMP, is an IETF standard. With BMP you can send BGP prefixes and updates from a router to a collector before any policy filters are applied. Once collected, you can analyze this routing data without any impact on the router itself. On today's Heavy Networking, we talk with Bart Dorlandt,... Read more »

Packet Pushers - Fat Pipe
HN 759: Deploying the BGP Monitoring Protocol (BMP) at ISP Scale

Packet Pushers - Fat Pipe

Play Episode Listen Later Nov 22, 2024 56:32


The BGP Monitoring Protocol, or BMP, is an IETF standard. With BMP you can send BGP prefixes and updates from a router to a collector before any policy filters are applied. Once collected, you can analyze this routing data without any impact on the router itself. On today's Heavy Networking, we talk with Bart Dorlandt,... Read more »

PING
A student-led IPv6 deployment at NITK Karnataka

PING

Play Episode Listen Later Oct 30, 2024 27:48


In this episode of PING, Vanessa Fernandez and Kavya Bhat, two students from the National Institute of Technology Karnataka (NITK), discuss the student-led, multi-year project to deploy IPv6 on their campus. Kavya and Vanessa have just graduated and are moving into their next stages of work and study in computer science and network engineering. Across 2023 and 2024 they were able to attend IETF 118 and IETF 119 and present their project and its experiences to the IPv6 working groups and off-working-group meetings, funded in part by the APNIC ISIF project and the APNIC Foundation. The multi-year project is supervised by the NITK Centre for Open-source Software and Hardware (COSH) and has outside review from Dhruv Dhody (ISOC) and Nalini Elkins (Inside Products, Inc.). Former students remain involved as alumni as the project progresses. We often focus on IPv6 deployment at scale in the telco sector, or on experiences with small deployments in labs, but another side of the IPv6 story is the large campus network: comparable in scale to a significant factory or government department deployment, yet in this case undertaken by student volunteers with little or no prior experience of networking technology. Vanessa and Kavya talk about their time on the project and what they got to present at the IETF.

Packet Pushers - IPv6 Buzz
IPB160: The Making of RFC 9637 – IPv6 Documentation Prefix

Packet Pushers - IPv6 Buzz

Play Episode Listen Later Sep 19, 2024 35:04


IPv6 Buzz welcomes back Nick Buraglio, a frequent guest, to discuss RFC 9637. We get into the details of RFC 9637, which describes the new documentation prefix space for IPv6. We also explore the process of how RFCs go from idea to standard in the IETF. (Cue the “I’m Just a Bill” song from Schoolhouse... Read more »
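
Since the episode is about the documentation prefix itself, a tiny illustration: the sketch below assumes only the two well-known documentation blocks (2001:db8::/32 from RFC 3849 and the larger 3fff::/20 space added by RFC 9637) and checks whether an address belongs to either, the kind of guard a lab script or linter might use.

```python
# Minimal sketch: check whether an IPv6 address falls inside the
# documentation space -- 2001:db8::/32 (RFC 3849) or 3fff::/20 (RFC 9637).
import ipaddress

DOCUMENTATION_PREFIXES = [
    ipaddress.ip_network("2001:db8::/32"),  # original documentation prefix
    ipaddress.ip_network("3fff::/20"),      # larger block added by RFC 9637
]

def is_documentation(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in DOCUMENTATION_PREFIXES)

print(is_documentation("3fff:0:1::1"))     # True  (new RFC 9637 space)
print(is_documentation("2001:db8::42"))    # True  (classic documentation prefix)
print(is_documentation("2001:4860::1"))    # False (real, routable space)
```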

Search Off the Record
Crawling smarter, not harder

Search Off the Record

Play Episode Listen Later Aug 8, 2024 40:11 Transcription Available


In this episode of SOTR, John Mueller, Lizzi Sassman, and Gary Illyes talk about misconceptions around crawl frequency and site quality, what's challenging about crawling the web nowadays, and how search engines could crawl more efficiently.
Resources:
Episode transcript → https://goo.gle/sotr079-transcript
Gary's post on LinkedIn → https://goo.gle/3YAT55q
Crawling episode with Dave Smart → https://goo.gle/3WShUsf
If-Modified-Since → https://goo.gle/3ywXvja
About the IETF → https://goo.gle/3SGVVlo
Robots Exclusion Protocol → https://goo.gle/4dgmBSg
Proposal for new kind of chunked transfer → https://goo.gle/3AgMF1c
Listen to more Search Off the Record → https://goo.gle/sotr-yt
Subscribe to Google Search Channel → https://goo.gle/SearchCentral
Search Off the Record is a podcast series that takes you behind the scenes of Google Search with the Search Relations team. #SOTRpodcast
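
One of the linked topics, If-Modified-Since, is straightforward to demonstrate. Here is a minimal standard-library Python sketch against a hypothetical URL: it revalidates a previously fetched resource and treats a 304 Not Modified response as "reuse the cached copy", which is how a crawler avoids re-downloading unchanged pages.

```python
# Minimal sketch: conditional GET with If-Modified-Since (HTTP revalidation).
# A 304 Not Modified response means the cached copy can be reused.
import urllib.request
import urllib.error

URL = "https://example.com/sitemap.xml"          # hypothetical resource
LAST_FETCH = "Wed, 01 Jan 2025 00:00:00 GMT"     # Last-Modified value saved earlier

req = urllib.request.Request(URL, headers={"If-Modified-Since": LAST_FETCH})
try:
    with urllib.request.urlopen(req, timeout=10) as resp:
        body = resp.read()
        print(f"200 OK: content changed, re-crawled {len(body)} bytes")
        print("new Last-Modified:", resp.headers.get("Last-Modified"))
except urllib.error.HTTPError as err:
    if err.code == 304:
        print("304 Not Modified: reuse the cached copy, nothing to re-crawl")
    else:
        raise
```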

The Hedge
The Hedge 235: Copyrights and Centralization

The Hedge

Play Episode Listen Later Jul 19, 2024 40:26 Transcription Available


Join us as Tom, Eyvonne, and Russ hang out for another roundtable. We start the show talking about Tom's plant (is it real or ... ??). What does copyright have to do with Internet Service Providers? Should the two topics be related at all? What can the IETF do about Internet centralization?

Telemetry Now
How the Internet Society Helps Maintain an Open Internet with Andrew Sullivan

Telemetry Now

Play Episode Listen Later Jul 3, 2024 50:51


Hosts Phil Gervasi and Doug Madory talk with Andrew Sullivan, President of the Internet Society, about the organization's crucial role in maintaining an open and accessible internet for all. They dive into Andrew's extensive background with the IETF and the Internet Architecture Board, as well as his work with major networking vendors. Learn about the technical and policy challenges of keeping the internet globally connected and secure, the impact of government regulations, and the importance of ensuring that the internet remains a force for good in society.

Packet Pushers - Heavy Networking
HN740: IETF's Network Management Operations (NMOP) Working Group

Packet Pushers - Heavy Networking

Play Episode Listen Later Jun 28, 2024 47:36


When you think of the IETF, you probably just think of defining protocols, but its new NMOP working group is all about helping network operators identify issues and deploy solutions, including those that pop up around automation. Mahesh Jethanandani is an NMOP leader and joins the show today to tell us what they are working on... Read more »

Identity At The Center
#291 - Identity Bubbles with Justin Richer

Identity At The Center

Play Episode Listen Later Jun 24, 2024 56:38


In this lively episode of the Identity at the Center podcast, hosts Jim McDonald and Jeff Steadman kick things off with a humorous mishap involving Jim's tech setup before diving into the latest happenings. They discuss the sweltering summer heat, Jim's recent "Greatest Dad of All Time" award, and their upcoming plans for Identity Week in Washington, DC.
The highlight of the episode is a deep dive into the concept of "Federation Bubbles" with special guest Justin Richer, Security and Standards Architect and Founder of Bespoke Engineering. Justin explains the idea behind federation bubbles, a dynamic system designed to handle identity management in disconnected or disadvantaged environments. They explore real-world applications, such as military operations and disaster recovery scenarios, where traditional identity systems fall short. Justin also shares updates on his recent work, including the GNAP protocol and HTTP Message Signatures, and his involvement with the IETF's new working group, WIMSE (Workload Identity in Multi System Environments). The conversation touches on the challenges and potential of these emerging identity standards, as well as the importance of context and trust in identity management. The episode wraps up on a lighter note with a discussion about Justin's board game project, "Natturuval," and the latest edition of "Cards Against Identity."
Connect with Justin: https://www.linkedin.com/in/justinricher/
Learn more about Bespoke Engineering: https://bspk.io/
Workload Identity in Multi System Environments (WIMSE): https://datatracker.ietf.org/wg/wimse/about/
SPIFFE: https://spiffe.io
Natturuval: https://gamefound.com/en/projects/bespoke-games/natturuval
Cards Against Identity: https://bspk.io/games/cards/
Attending Identity Week in Europe, America, or Asia? Use our discount code IDAC30 for 30% off your registration fee! Learn more at:
Europe: https://www.terrapinn.com/exhibition/identity-week/
America: https://www.terrapinn.com/exhibition/identity-week-america
Asia: https://www.terrapinn.com/exhibition/identity-week-asia/
Connect with us on LinkedIn:
Jim McDonald: https://www.linkedin.com/in/jimmcdonaldpmp/
Jeff Steadman: https://www.linkedin.com/in/jeffsteadman/
Visit the show on the web at idacpodcast.com and follow @IDACPodcast on Twitter.
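
Justin's HTTP Message Signatures work (RFC 9421) can be illustrated in miniature. The sketch below, using a hypothetical shared key and key id, builds the signature base over a few derived components and signs it with the registered hmac-sha256 algorithm; it is a deliberate simplification (no component parameters, no expiry, no verification side), not a full implementation of the spec.

```python
# Minimal sketch: build an RFC 9421 (HTTP Message Signatures) signature base
# for a GET request and sign it with the registered hmac-sha256 algorithm.
# The key, key id, and request details here are hypothetical.
import base64
import hashlib
import hmac
import time

SHARED_KEY = b"hypothetical-shared-secret"
KEY_ID = "demo-hmac-key"

def signature_base(method: str, authority: str, path: str, created: int):
    """Return (signature base, signature params) for the covered components."""
    params = (f'("@method" "@authority" "@path");'
              f'created={created};keyid="{KEY_ID}";alg="hmac-sha256"')
    lines = [
        f'"@method": {method}',
        f'"@authority": {authority}',
        f'"@path": {path}',
        f'"@signature-params": {params}',
    ]
    return "\n".join(lines), params

created = int(time.time())
base, params = signature_base("GET", "api.example.com", "/resource", created)
tag = base64.b64encode(hmac.new(SHARED_KEY, base.encode(), hashlib.sha256).digest()).decode()

# Headers the client would attach to the signed request:
print(f"Signature-Input: sig1={params}")
print(f"Signature: sig1=:{tag}:")
```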

The Hedge
Hedge 223: The Political Side of Standards with Geoff Huston

The Hedge

Play Episode Listen Later Apr 27, 2024 59:55 Transcription Available


Listen in as Geoff Huston, Tom, and Russ discuss how the IETF, governments, and political movements interact when creating standards and guiding the future of the Internet.

Android Faithful
Consistently Inconsistent Assistant

Android Faithful

Play Episode Listen Later Mar 13, 2024 95:27


Ron, Huyen, Jason, and Mishaal have a hard time sifting through the sheer volume of big Android news this week, but have no fear: There's an exclusive piece of news that Mishaal breaks on the show complete with a shiny new breaking news bumper and the gang celebrates a little too hard.
NEWS
@MishaalRahman: Here is what's new in Android 14 QPR3 Beta 2
Android 15 might let you send text messages via satellite
Google finally enables display output on the Pixel 8, here's what it could mean for a DeX-like mode
Patron News Story Pick: SpaceX says its satellite service aced tests on Android and iPhone
Apple is working to make it easier to switch from iPhone to Android because of the EU
@MishaalRahman: Google also announced yesterday that they will begin showing "additional choice screens" for users setting up an Android device in the EEA.
@MishaalRahman: Google has shared details on its new external offers program, the program they created to comply with the EU's Digital Markets Act.
To comply with DMA, WhatsApp and Messenger will become interoperable via Signal protocol
HARDWARE
Pixel 8a will be more expensive: colors, prices, memory of the new Google phone
Google confirms Pixel 8a is coming with Android's new battery stats
@mishaal_rahman: Gemini Nano won't be coming to the Pixel 8 because of "some hardware limitations" but will be coming to more high-end devices in the near future according to Terence Zhang, a Developer Relations engineer at Google, during #TheAndroidShow.
Samsung's Galaxy A55 and A35 are official with 6.6" OLED screens, focus on security
No April Fools' joke: OnePlus will launch a new Nord on April 1
Exclusive Leak: Motorola Edge 50 Pro
Jason's Boox Palma hands on
APPS
[Exclusive] Google says the IETF's ongoing work on the unwanted tracker detection spec doesn't impact its launch timeline for the Find My Device network
@MishaalRahman: Google has announced that movie tickets and boarding passes will automatically be added to Google Wallet when you get a confirmation email in Gmail!
COMMUNITY
Timothy, Hilton, and Robert have PLENTY to say about Assistant routines
Michael says we've seen Circle to Search before
Chuck is having Samsung RCS issues
Hosted on Acast. See acast.com/privacy for more information.

Packet Pushers - IPv6 Buzz
IPB144: AWS Adds New Charge for IPv4, Governments Push toward IPv6

Packet Pushers - IPv6 Buzz

Play Episode Listen Later Feb 8, 2024 28:08


A round-up of IP address news to start the new year: Eric Vyncke of the IETF has created an RFC 6724 website that is an excellent time-saving tool for working through source and destination address selection. AWS announces more IPv6 features and support, and adds a new charge for public IPv4 use. State actors, including... Read more »
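
For context on what the RFC 6724 site helps with: address selection is driven by a policy table of prefixes, precedences, and labels. The Python sketch below loads the default policy table from RFC 6724 and orders candidate destination addresses by precedence only (Rule 6 of destination ordering), ignoring the other tie-breaking rules, so treat it as an illustration rather than a complete implementation.

```python
# Minimal sketch: order candidate destination addresses by precedence using
# the default RFC 6724 policy table (longest-prefix match into the table).
# Scope comparisons, label matching, and the other rules are ignored here.
import ipaddress

# (prefix, precedence, label) rows from the default policy table in RFC 6724.
POLICY_TABLE = [
    ("::1/128",        50,  0),
    ("::/0",           40,  1),
    ("::ffff:0:0/96",  35,  4),
    ("2002::/16",      30,  2),
    ("2001::/32",       5,  5),
    ("fc00::/7",        3, 13),
    ("::/96",           1,  3),
    ("fec0::/10",       1, 11),
    ("3ffe::/16",       1, 12),
]
POLICY_TABLE = [(ipaddress.ip_network(p), prec, label) for p, prec, label in POLICY_TABLE]

def lookup(addr: str):
    """Longest-prefix match of addr against the policy table."""
    ip = ipaddress.ip_address(addr)
    matches = [(net.prefixlen, prec, label) for net, prec, label in POLICY_TABLE if ip in net]
    return max(matches)  # longest prefix wins

def order_destinations(addrs):
    """Sort candidate destinations by precedence, highest first."""
    return sorted(addrs, key=lambda a: lookup(a)[1], reverse=True)

# A native IPv6 destination is preferred over 6to4 and IPv4-mapped candidates.
print(order_destinations(["2002:c633:6401::1", "::ffff:198.51.100.1", "2001:db8::1"]))
```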

Packet Pushers - Full Podcast Feed
IPB141: IPv6 End Of Year Wrap-Up 

Packet Pushers - Full Podcast Feed

Play Episode Listen Later Dec 14, 2023 28:25


In this episode Ed, Scott, and Tom talk about 2023 and what stood out to us as important for IPv6. Topics discussed include:
Overall levels of IPv6 adoption
IPv6 security in 2023
IETF efforts with IPv6
IPv6-only in the enterprise
Thanks for listening! Show Links: IPv6 Deployment Status (RFC 9386), April 2023 – RFC Editor Four... Read more »

Syntax - Tasty Web Development Treats
702: New + Proposed JS APIs for 2024

Syntax - Tasty Web Development Treats

Play Episode Listen Later Dec 6, 2023 55:52


In this episode of Syntax, Wes and Scott talk through new and proposed JavaScript APIs including ones related to regex, sourcemaps, structured clone, temporal, JSON modules, and more!
Show Notes
00:10 Welcome
01:26 Syntax Brought to you by Sentry
02:55 RegExp Escaping Proposal - tc39/proposal-regex-escaping: Proposal for investigating RegExp escaping for the ECMAScript standard
05:25 Intl.DurationFormat - tc39/proposal-intl-duration-format
07:55 Standardized Sourcemaps - tc39/source-map-rfc: RFCs for the source map debug format.
10:43 Structured Clone - structuredClone() global function - Web APIs | MDN
12:54 Temporal - Hasty Treat - Temporal Date Objects in JavaScript; Tracking issue for syncing with IETF standardization work (req'd before implementers can ship unflagged) · Issue #1450 · tc39/proposal-temporal
20:59 FindLast and findLastIndex - tc39/proposal-array-find-from-last: Proposal for Array.prototype.findLast and Array.prototype.findLastIndex.
22:27 JSON modules - tc39/proposal-json-modules: Proposal to import JSON files as modules
24:46 Regex Modifiers - RegExp Modifiers - June 2022.pptx - Microsoft PowerPoint Online
26:50 Array Grouping - tc39/proposal-array-grouping: A proposal to make grouping of array items easier
30:48 Array Methods - tc39/proposal-change-array-by-copy: Provides additional methods on Array.prototype and TypedArray.prototype to enable changes on the array by returning a new copy of it with the change.
6 or so New Approved and Proposed JavaScript APIs
32:12 Promise.withResolvers
35:08 Function.prototype.memo - tc39/proposal-function-memo: A TC39 proposal for function memoization in the JavaScript language.
37:48 Node has a Proposed ESM Detection flag
39:54 Node has navigator.userAgent
41:29 Built in .env support
42:52 Permissions model & test runner continues to be worked on
44:06 HTML Web charts - Proposal: Web Charts · Issue #9295 · whatwg/html
45:39 autopause - Add autopause attribute to media elements to allow automatic pausing of media · Issue #9793 · whatwg/html
46:30 Meta Tag for AI generated content - Proposal: Meta Tag for AI Generated Content · Issue #9479 · whatwg/html; Schema.org
Syntax × Sentry Swag Store – Syntax × Sentry Shop
Syntax - A Tasty Treats Podcast for Web Developers.
50:13 Poster frame - HTML Video Element: Proposal for adding [srcset] + [posterset] + [sizes] on video element as well [posterset] on source elements · Issue #9812 · whatwg/html
50:57 Popover invoker - Popover does not know what triggered it · Issue #9111 · whatwg/html
51:25 Autocomplete on 'contenteditable' Elements - Autocomplete on 'contenteditable' Elements · Issue #9065 · whatwg/html
52:17 Sick Picks
Sick Picks
Scott: Escaping Twin Flames cult documentary
Wes: Lao Gan Ma spicy Chili Oil
Shameless Plugs
Scott: Sentry
Wes: Wes Bos Courses
Hit us up on Socials!
Syntax: X Instagram Tiktok LinkedIn Threads
Wes: X Instagram Tiktok LinkedIn Threads
Scott: X Instagram Tiktok LinkedIn Threads

IoT For All Podcast
Building Resilient Smart Cities | Wi-SUN Alliance's Phil Beecher | Internet of Things Podcast

IoT For All Podcast

Play Episode Listen Later Dec 5, 2023 20:36


Smart cities and utilities require resilient infrastructure that can withstand unexpected disruptions and challenges. From weather and natural disasters to technology advancements and cyber threats, infrastructure needs to be adaptable and scalable. Phil Beecher, President and CEO of the Wi-SUN Alliance, joins Ryan Chacon on the IoT For All Podcast to discuss building resilient smart cities and utilities with IoT and the Wi-SUN standard.
Phil Beecher is the President and CEO of the Wi-SUN Alliance. Since 1997, Phil has played a key role in the development of communications standards, including Bluetooth, Wi-Fi, IETF, IEEE, and cellular, and in the specification of test plans for a number of Smart Utilities Network standards, including Advanced Metering Infrastructure (AMI) and Home Energy Management Systems. He is a graduate of the University of Sussex with a degree in Electronic Engineering and holds patents in communications and networking technology.
Wi-SUN Alliance is a global association of industry-leading companies driving the adoption of interoperable wireless solutions for use in smart utilities and smart cities. Wi-SUN® specifications bring Smart Ubiquitous Networks to service providers, utilities, municipalities/local government, and other enterprises by enabling interoperable, multi-service, and secure wireless mesh networks. Wi-SUN can be used for large-scale outdoor IoT wireless communication networks in a wide range of applications.
Discover more about smart cities and IoT at https://www.iotforall.com
More about Wi-SUN Alliance: https://wi-sun.org
Connect with Phil: https://www.linkedin.com/in/phil-beecher/
Our sponsor: https://www.routethis.com
(00:00) Sponsor
(00:40) Intro
(01:10) Phil Beecher and Wi-SUN Alliance
(02:13) Understanding the Wi-SUN standard
(02:32) Origins and applications of Wi-SUN
(04:05) Benefits and use cases of Wi-SUN
(07:21) Challenges in large-scale outdoor IoT deployments
(12:20) Importance of interoperability in IoT solutions
(15:18) Future of Wi-SUN technology and IoT adoption
(19:52) Learn more and follow up
SUBSCRIBE TO THE CHANNEL: https://bit.ly/2NlcEwm
Join Our Newsletter: https://www.iotforall.com/iot-newsletter
Follow Us on Social: https://linktr.ee/iot4all
Check out the IoT For All Media Network: https://www.iotforall.com/podcast-overview