Podcasts about W3C

Main international standards organization for the World Wide Web

  • 306 PODCASTS
  • 552 EPISODES
  • 50m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • Sep 17, 2025 LATEST


Best podcasts about W3C

Latest podcast episodes about W3C

Software Engineering Radio - The Podcast for Professional Software Developers

François Daoust, W3C staff member and co-chair of the Web Developer Experience Community Group, discusses the origins of the W3C, the browser standardization process, and how it relates to other organizations like TC39, WHATWG, and IETF. This episode covers a lot of ground, including funding through memberships, royalty-free patent access for implementations, why implementations are built in parallel with the specifications, why requestVideoFrameCallback doesn't have a formal specification, balancing functionality with privacy, working group participants, and how certain organizations have more power. François explains why the W3C hasn't specified a video or audio codec, and discusses Media Source Extensions, Encrypted Media Extensions and Digital Rights Management (DRM), closed source content decryption modules such as Widevine and PlayReady, which ship with browsers, and informing developers about which features are available in browsers. Brought to you by IEEE Computer Society and IEEE Software magazine.

Software Sessions
François Daoust on the W3C

Sep 16, 2025 • 67:56


François Daoust is a W3C staff member and co-chair of the Web Developer Experience Community Group. We discuss the W3C's role and what it's like to go through the browser standardization process. Related links: W3C, TC39, Internet Engineering Task Force, Web Hypertext Application Technology Working Group (WHATWG), Horizontal Groups, Alliance for Open Media, What is MPEG-DASH? | HLS vs. DASH, Information about W3C and Encrypted Media Extensions (EME), Widevine, PlayReady, Media Source API, Encrypted Media Extensions API, requestVideoFrameCallback(), Business Benefits of the W3C Patent Policy, web.dev Baseline, Portable Network Graphics Specification, Internet Explorer 6, CSS Vendor Prefix, WebRTC. Transcript: You can help correct transcripts on GitHub. Intro [00:00:00] Jeremy: Today I'm talking to François Daoust. He's a staff member at the W3C, and we're going to talk about the W3C, the recommendation process, and François's experience with how these features end up in our browsers. [00:00:16] Jeremy: So, François, welcome. [00:00:18] Francois: Thank you, Jeremy, and many thanks for the invitation. I'm really thrilled to be part of this podcast. What's the W3C? [00:00:26] Jeremy: I think many of our listeners will have heard of the W3C, but they may not actually know what it is. So could you start by explaining what it is? [00:00:37] Francois: Sure. W3C stands for the World Wide Web Consortium. It's a standardization organization; I guess that's how people should think about W3C. It was created in 1994 by Tim Berners-Lee, who was the inventor of the web. Tim Berners-Lee was the director of W3C for a long, long time. [00:01:00] Francois: He retired not long ago, a few years back. W3C has a number of properties. First, the goal is to produce royalty-free standards, and that's very important: we want to make sure that the standards that get produced can be used and implemented without having to pay fees to anyone. [00:01:23] Francois: We do web standards; it's in the name. Standards that you find in your web browsers. But not only that: a number of other standards got developed at W3C, including, for example, XML and data-related standards. W3C as an organization is a consortium. [00:01:43] Francois: The C stands for consortium. Legally speaking, it's a 501(c)(3), a US-based, not-for-profit legal entity. And the little 3 is important because it means it's public interest. We are a consortium, which means we have members, but at the same time the mission is to the public. [00:02:05] Francois: So we're not only doing what our members want; we are also making sure that what our members want is aligned with what end users, in the end, need. And the W3C has a small team, and I'm part of this team: worldwide, 45 to 55 people depending on how you count, mostly technical people and some admin as well, overseeing the work that we do at the W3C. Funding through membership fees [00:02:39] Jeremy: So you mentioned there are 45 to 55 people. How is this funded? Is this from governments or commercial companies? [00:02:47] Francois: The main source comes from membership fees. The W3C has roughly 350 members, and in order to become a member, an organization needs to pay an annual membership fee. That's pretty common among standardization organizations.
[00:03:07] Francois: And we only have, I guess, three levels of membership fees. Well, you may find additional small levels, but three main ones. The goal is to make sure that a big player, a large company, will not have more rights than anyone else. So we try to make sure that all members have equal rights. [00:03:30] Francois: It's not perfect, but that's how things are set. So that's the main source of income for the W3C. And then we try to diversify just a little bit. For example, we may go to governments; in the EU we may take grants for EU research projects that allow us to study and explore topics. [00:03:54] Francois: In the US there used to be some funding coming from the government as well, so that's also a source. But the main one is membership fees. Relations to TC39, IETF, and WHATWG [00:04:04] Jeremy: And you mentioned that a lot of the W3C's work is related to web standards. There are other groups like TC39, which works on the JavaScript spec, and the IETF, which I believe worked with your group on WebRTC. I wonder if you could explain the W3C's connection to groups like that. [00:04:28] Francois: Sure. We try to collaborate with a number of other standardization organizations. In general everything goes well, because you have a clear separation of concerns. You mentioned TC39; indeed, they are the ones who standardize JavaScript. The proper name of JavaScript is ECMAScript. [00:04:47] Francois: TC39 is the technical committee at Ecma. And we have interactions with them because their work directly impacts the JavaScript that runs in your web browser, and we develop a number of JavaScript APIs at W3C. [00:05:05] Francois: So we need to make sure that the way we develop these APIs aligns with the language itself. With the IETF, the boundary is clear as well: network protocols for the IETF, and the application level for W3C. That's usually how the distinction is made. [00:05:28] Francois: The boundaries are always a bit fuzzy, but that's how things work, and usually things work pretty well. There's also the WHATWG, and there the history was more complicated, because it came out of a fork of the HTML specification at the time when it was developed by W3C, a long time ago. [00:05:49] Francois: There was some disagreement on the way things should have been done, and the WHATWG got created, took the HTML spec, and went in another direction. And that other direction actually ended up being the direction. [00:06:06] Francois: So that's a success from there. W3C no longer owns the HTML spec, and the WHATWG has taken up a number of different core specifications for the web, doing a lot of work on interoperability and making sure that the algorithms specified by the specs were correct, which was something that historically we hadn't been very good at at W3C.
[00:06:35] Francois: And the way they've been working has a lot of influence on the way we now develop the APIs, from a W3C perspective. [00:06:44] Jeremy: So, just to make sure I understand correctly: you have TC39, which is focused on the JavaScript or ECMAScript language itself, and you have APIs that are going to use and interact with JavaScript, so you need to coordinate there. The WHATWG has the specification for HTML. And then the IETF, they would be, I'm not sure if this is the right term, one level lower, perhaps, than the W3C. [00:07:17] Francois: That's how you can formulate it, yes. One layer lower in the ISO stack, at the network level. How WebRTC spans the IETF and W3C [00:07:30] Jeremy: And so in that case, one place I've heard it mentioned is WebRTC: to use it, there is an IETF specification, and then there's a W3C recommendation. [00:07:43] Francois: Yes. When we created the WebRTC Working Group, that was in 2011, I think, it was created with a dual head. There was an RTCWEB group that got created at IETF and a WebRTC group that got created at W3C, and that was done on purpose. The goal was not to compete on the solution but to have the two sides of the solution be developed in parallel: the API, the application front, and the network front. [00:08:15] Francois: And there was, and still is, a lot of overlap in participation between both groups, and that's what keeps things successful in the end. It's not process, or organization-to-organization coordination; it's really the fact that you have participants that are essentially the same on both sides of the equation. [00:08:36] Francois: That helps move things forward. Now, WebRTC is more complex than just one group at IETF. WebRTC is a very complex stack of technologies, so when you pull a little protocol from the IETF, suddenly the whole IETF comes with it. [00:08:56] Francois: You get the feeling that WebRTC needs all of the internet protocols that got created to work. Recommendations [00:09:04] Jeremy: And I think a lot of web developers may hear words like specification or standard, but I believe the official term, at least at the W3C, is recommendation. I wonder if you can explain what that means. [00:09:24] Francois: Well, it means standard in the end. It comes from a time where, as with many standardization organizations, W3C was created claiming not to be a standardization organization. It was felt that standard was not the right term. [00:09:45] Francois: IETF has the same thing: they call it an RFC, a request for comments, which doesn't say "standard" at all, and yet it's a standard. So W3C was created with the same kind of thing: we needed some other terminology, and we called it recommendation. But in the end, it's a standard. That's really how you should see it. [00:10:08] Francois: And one thing I didn't mention when I introduced the W3C: there are two types of standards in the end, two main families. There are de jure standards and de facto standards. The de jure standards are the ones that are imposed by some kind of regulation,
so it's really usually a standard imposed by governments, for example. [00:10:29] Francois: When you look at your electric plug at home, there's some regulation there that says this plug needs to have these properties. That's a standard that gets imposed; it's a de jure standard. And then there are de facto standards, which are really specifications that are out there, and people agree to use and implement them. [00:10:49] Francois: And by virtue of being implemented and used by everyone, they become standards. W3C is really in the second family: its standards are de facto standards. IETF is the same thing. Some of our standards are referenced in regulations now, but just a minority of them; most are de facto standards. [00:11:10] Francois: And that's important because, in the end, it doesn't matter so much what the specification says, even though that sounds a bit confusing. What matters is that what the specification says matches what implementations actually implement, and that these implementations are used interoperably: across browsers, for example, or across implementations, across users, across usages. [00:11:36] Francois: So standardization is a lengthy process, and the recommendation is the final stage in that lengthy process. More and more, we don't really reach recommendation anymore, if you look at groups, because we have another path: we can stop at candidate recommendation, which is theoretically a step before that. [00:12:02] Francois: But then you can stay there forever and publish new candidate recommendations later on. What matters, again, is that you get this virtuous feedback loop with implementers and usage. [00:12:18] Jeremy: So if the candidate recommendation ends up being implemented by all the browsers, what ends up being the distinction between a candidate recommendation and a normal recommendation? [00:12:31] Francois: Today it's mostly a process thing. Some groups actually decide to go to Recommendation; some groups decide to stay at Candidate Recommendation, and there's no formal difference between the two. We've adjusted the process so that the important bits that applied at the Recommendation level now apply at the Candidate Recommendation level. Royalty free patent access [00:13:00] Francois: And by important bits, I mean the patent commitments, typically. The patent policy fully applies at the Candidate Recommendation level, so that you get the royalty-free patent protection that we were aiming at. [00:13:14] Francois: Some people do not care, you know, but most of the world still works with patents, for good or bad reasons. That's how things work. So we're trying to make sure that we secure the right set of patent commitments from the right set of stakeholders. [00:13:35] Jeremy: Oh, so when someone implements a W3C recommendation or a candidate recommendation, the patent holders related to that recommendation basically agree to allow royalty-free use of those patents. [00:13:54] Francois: They do, the ones that were involved in the working group, of course. I mean, we can't say anything about companies out there that may have patents and are not part of this standardization process. So there's always a remaining risk,
but part of the goal when we create a working group is to make sure that people understand the scope. [00:14:17] Francois: Lawyers look into it, and the legal teams that exist at all the large companies basically give a green light, saying: we're pretty confident that we know where the patents are in this particular area, and we are also fine letting go of the patents we own ourselves. Implementations are built in parallel with standardization [00:14:39] Jeremy: And I think you had mentioned that what ends up being the most important is that the browser creators implement these recommendations. So it sounds like maybe the distinction between candidate recommendation and recommendation almost doesn't matter, as long as you get the end result you want. [00:15:03] Francois: People will have different opinions in standardization circles, and I mentioned W3C is working on other kinds of standards too, so in some other areas the nuance may be more important. But when you look at specifications that target web browsers, we've switched from a model where specs were developed first and then implemented, to a model where specs and implementations are worked on in parallel. [00:15:35] Francois: This actually relates to the evolution I was mentioning, with the WHATWG taking over HTML and focusing on the interoperability issues, because the starting point was: we have an HTML 4.01 spec, but it's not interoperable, because there are a number of gray areas that are not specified and you can implement them differently. [00:15:59] Francois: And so there are interoperability issues. Back to candidate rec: the stage was created, if I remember correctly, following the IE problem in the CSS Working Group. IE6 shipped with a version of CSS that was as specified: the spec was saying, do that for the CSS box model, [00:16:27] Francois: and IE6 was following that. And then the group decided to change the box model, and suddenly IE6 was no longer compliant. That created a huge mess in the history of the web, in a way. And so the candidate recommendation stage was introduced following that, to try to catch this kind of problem. [00:16:52] Francois: But nowadays, again, we've switched to another model where it's more live. So you'll find a number of specs that are not even at candidate rec level, they are at what we call a working draft, and they are being implemented. If all goes well, the standardization process follows the implementation, and then you end up in a situation where you have your candidate rec when the spec ships. [00:17:18] Francois: A recent example would be WebGPU. It shipped in Chrome shortly before it transitioned to candidate rec, but the spec was already stable, and now it's shipping in different browsers, Safari and Firefox. That's a good example of something that follows along pretty well.
But then you have other specs, such as, in the media space, requestVideoFrameCallback(), a short API that allows you to get a callback whenever the browser renders a video frame, essentially. [00:18:01] Francois: And that spec is implemented across browsers, but from a W3C perspective it does not even exist. It's not on the standardization track. It's still being incubated in what we call a community group, which is something that usually exists before we move to the standardization process. [00:18:21] Francois: So there are examples of things that fell through the cracks, where the standardization process is either too early or too late, where what's in the spec is not exactly what got implemented, or implementations are too early in the process. We're doing a better job at not falling into the trap where someone ships an implementation and then suddenly everything is frozen, you can no longer change it because it's too late, it shipped. We've tried different paths there. I mentioned CSS: there were these vendor-prefixed properties that used to be the way browsers deployed new features without taking the final name. [00:19:06] Francois: We are trying to move away from that too, because, same thing: in the end you end up with applications that have to duplicate all the CSS properties in their style sheets with the vendor prefixes, and with nuances in what each one does in the end. [00:19:23] Jeremy: Yeah, is that in CSS where you'll see -moz- prefixes or things like that? Why requestVideoFrameCallback doesn't have a formal specification [00:19:30] Jeremy: On the example of requestVideoFrameCallback, I wonder if you have an opinion on, or know, why that ended up the way it did, where the browsers all implemented it even though it was still in the incubation stage. [00:19:49] Francois: On this one, I don't have particular insight into whether there was a strong reason to implement it without doing the standardization work. [00:19:58] Francois: It's not an IPR (Intellectual Property Rights) issue, and I don't think the spec triggers problems that would be controversial or whatever. [00:20:10] Francois: It was just no one's priority, and in the end everyone's happy: it has shipped. So why spend time on spec work for something that's already shipped, and so on. But it may still come back at some point, to try to improve the situation. [00:20:26] Jeremy: Yeah, that's interesting. It's a little counterintuitive, because it sounds like the companies or organizations involved maybe agreed on how it should work, and that agreement almost made it so that they felt they didn't need to move forward with the specification, because they came to consensus even before going through that. [00:20:53] Francois: In this particular case, it's probably because, again, it's a small spec. It's just one function call, you know. They will definitely want a working group for larger specifications.
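For readers who haven't used it, here is a minimal sketch of that one function call; the canvas-painting use is illustrative, not from the episode:

  // requestVideoFrameCallback fires once per presented video frame and,
  // like requestAnimationFrame, must be re-registered inside the callback.
  const video = document.querySelector('video');
  const canvas = document.querySelector('canvas');
  const ctx = canvas.getContext('2d');

  function onFrame(now, metadata) {
    // metadata.mediaTime is the frame's presentation time in seconds.
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    video.requestVideoFrameCallback(onFrame);
  }

  if ('requestVideoFrameCallback' in HTMLVideoElement.prototype) {
    video.requestVideoFrameCallback(onFrame);
  }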
By the way, now I remember about requestVideoFrameCallback: the final goal, now that it has shipped, is to merge it into the HTML spec. [00:21:17] Francois: So there's an ongoing issue on the WHATWG side to integrate requestVideoFrameCallback. It's taking some time, but see, it caught up, and someone is doing the work to do it. I had forgotten about this one. Tension from specification review (horizontal review) [00:21:33] Francois: So with larger specifications, organizations will want this kind of IPR regime. They will want commitments from others on the scope, on the process, on everything. So they will want a larger, more formal setting, because that's part of how you ensure that things will get done properly. [00:21:53] Francois: I didn't mention it, but there's something we're really pushy on at W3C. I mentioned we have principles, we have priorities, and we have several specific properties at W3C, and one of them is that we are very strong on horizontal reviews of our specs. We really want them to be reviewed from an accessibility perspective, from an internationalization perspective, from a privacy and security perspective, and from a technical architecture perspective as well. [00:22:23] Francois: These reviews are part of the formal process, so all specs need to undergo them. From time to time that creates tension; from time to time it just works and goes without problem. A recurring issue is that privacy and security are hard. It's not an easy problem, not something that can be solved easily. [00:22:48] Francois: So there's an ongoing tension, with no easy way to resolve it, between specifying powerful APIs and preserving privacy, meaning not exposing too much information to applications. In the media space, you can think of the Media Capabilities API. The media space is a complicated [00:23:13] space because of codecs. Codecs are typically not royalty free, and so browsers decide which audio and video codecs they're going to support, and doing that creates additional fragmentation. Not in the sense that browsers are not interoperable, but in the sense that applications need to choose which codec they're going to use to stream to the end user. [00:23:39] Francois: And it's all the more complicated because some codecs are going to be hardware supported. You will have a hardware decoder in your laptop or smartphone, so that's going to be efficient for decoding some stream, whereas some codecs are going to be software supported. [00:23:56] Francois: And that may consume a lot of CPU, a lot of power, a lot of energy in the end. So you want to avoid that if you can select something else. Even more complex: codecs have different profiles, lower-end profiles and higher-end profiles, with different capabilities and features, depending on whether you're going to use this or that color space, for example, this or that resolution, whatever. [00:24:22] Francois: And so you want to surface that to web applications, because otherwise they can't choose the right codec and the right stream to send to the client devices. And then they're not going to provide an efficient user experience, or even a sustainable one in terms of energy, because they're going to waste energy if they don't send the right stream. [00:24:45] Francois: So you want to surface that to applications. That's what the Media Capabilities API provides.
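A minimal sketch of the kind of query the Media Capabilities API answers; the codec string and numbers are illustrative:

  // Ask the browser whether a decode configuration would be supported,
  // smooth, and power-efficient before choosing which stream to fetch.
  async function checkDecode() {
    const info = await navigator.mediaCapabilities.decodingInfo({
      type: 'media-source',  // content fed through MSE; use 'file' for plain playback
      video: {
        contentType: 'video/mp4; codecs="avc1.640028"',  // illustrative codec string
        width: 1920,
        height: 1080,
        bitrate: 5_000_000,  // bits per second
        framerate: 30,
      },
    });
    console.log(info.supported, info.smooth, info.powerEfficient);
  }

The three booleans in the answer reveal exactly the kind of device detail that feeds the fingerprinting tension discussed next.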
Privacy concerns [00:24:51] Francois: But at the same time, if you expose that information, you end up with ways to fingerprint the end user's device, and that in turn is often used to track users across sites, which is exactly what we don't want to have, for obvious privacy reasons. [00:25:09] Francois: So you have to balance that and find ways to expose capabilities without exposing them too much. [00:25:21] Jeremy: Can you give an example of how some of those discussions went within the working group? Who are the companies or organizations arguing that we shouldn't have this capability because of the privacy concerns? [00:25:40] Francois: In a way, all of the companies have a vision of privacy. You will have a hard time finding members saying, I don't care about privacy, I just want the feature. They all have privacy in mind, but they may have different approaches to privacy. [00:25:57] Francois: Apple and Google would be, I guess, the perfect examples in that space. Google will have an approach that is more open-ended: the user agent should check what a given site is doing, and then, if it goes beyond some kind of threshold, say, well, okay, we'll stop exposing data to that site, [00:26:25] Francois: to that application. So: monitor and react, in a way. Apple has a stricter view on privacy, let's say, and they will say, no, the feature must not exist in the first place. But I guess it's not always that extreme, and from time to time it's the opposite: [00:26:45] Francois: you will have Apple arguing in a way which is more open-ended than Google, for example. And they are not the only ones. So in working groups you will usually find the implementers. When we talk about APIs that get implemented in browsers, you want the core browsers to be involved. [00:27:04] Francois: Otherwise it's usually not a good sign for the success of the technology. So in practice, that means Apple, Microsoft, Mozilla... which one did I forget? [00:27:15] Jeremy: Google. [00:27:16] Francois: I forgot Google, of course. Thank you. That's the core list of participants you want to have in any group that develops web standards targeted at web browsers. Who participates in working groups and how much power do they have? [00:27:28] Francois: And then on top of that, you want the organizations and people who are directly going to use it: the content providers. In media, for example, if you look at the Media Working Group, you'll see the browser vendors, the ones I mentioned, and content providers such as the BBC or Netflix. [00:27:46] Francois: Chipset vendors, Intel, Nvidia, would be there as well, again because there's hardware decoding and encoding in there.
So media touches on hardware, and on device manufacturers in general. I think Sony is involved in the Media Working Group, for example. [00:28:04] Francois: And these companies are usually less active in the spec development. It depends on the group, but they're usually less active, because the ones developing the specs are usually the browser vendors, again because, as I mentioned, we develop the specs in parallel with browsers implementing them. So they have the [00:28:21] Francois: feedback on how to formulate the algorithms. And so that's this collection of people who are going to discuss, first within themselves. W3C pushes for consensus-based decisions, so we hardly ever take votes in the working groups. But from time to time that's not enough, [00:28:41] Francois: and there may be disagreements. But let's say there's agreement in the group: when the spec matures, horizontal review groups will look at the spec. These are the groups I mentioned: accessibility, privacy, internationalization. And in these groups, the participants, it depends. [00:29:00] Francois: It can be anything. It can be the same companies, but usually different people from those companies, and maybe organizations that come from a very different angle. And that's a good thing, because it means you enlarge the perspectives on the technology. [00:29:19] Francois: And that's when a discussion between groups takes place. From time to time it goes well; from time to time, again, it can trigger issues that are hard to solve. And the W3C has an escalation process in case things degenerate, starting with the notion of formal objection. [00:29:42] Jeremy: It makes sense that you have the browser vendors, all the different companies that would use that browser, and all the different horizontal groups like you mentioned, internationalization, accessibility. You were talking about consensus, and I would imagine that there are certain groups or certain companies that maybe have more say or more sway. [00:30:09] Jeremy: For example, if you're a browser manufacturer, you're Google. I'm kind of curious how that works out within the working group. [00:30:15] Francois: Yes. I guess I would be lying if I said that all companies are strictly equal in a group. They are from a process perspective: I mentioned the membership fees were designed with a specific ethos, so that no one could say, I'm putting in a lot of money, so you need to respect me and you need to follow what I want to do. [00:30:41] Francois: At the same time, if you take a company like Google, for example, they send hundreds of engineers to do standardization work. That's absolutely fantastic, because it means work progresses, and they're extremely smart people, so it's really a pleasure to work with them. [00:30:58] Francois: But you need to take a step back and say, well, the problem is that de facto this gives them more power, just by virtue of injecting more resources into it. Always having someone who can respond to an issue, always having someone editing a spec: de facto, that gives them more say on the directions things take.
[00:31:22] Francois: And on top of that, of course, they have, I guess not surprisingly, the browser that is currently used the most on the market. So we try very hard to make sure that things are balanced. It's not a perfect world. [00:31:38] Francois: That's the role of the team. I didn't talk about the role of the team, but part of it is to make sure that, again, all perspectives are represented, and that there's not such a big imbalance that something is wrong and we really need to look into it. So: making sure that anyone, if they have something to say, is heard by the rest of the group and not dismissed. [00:32:05] Francois: That usually goes well; there's no problem with that. And again, the escalation process I mentioned doesn't make any difference between a small player and a big player. We have small companies raising formal objections against some of our specs; that happens. [00:32:24] And large ones too; that happens as well. There's no magical solution. I guess you can tell by the way I struggle to formulate the process: it's a human process, and it's very important that it remains a human process. [00:32:41] Jeremy: I suppose the role of staff, and someone in your position, for example, is to try to ensure that these different groups are heard, and that it isn't just one group taking control. [00:32:55] Francois: That's part of the role, again: to make sure that the process is followed. I don't want to give the impression that the process controls everything in the groups. The groups are bound by the process, but the process is there to catch problems when they arise. [00:33:14] Francois: Most of the time there are no problems. It's just, again, participants talking to each other, talking with the rest of the community. Most of the work happens in public nowadays, in any case. The groups work in public, essentially through asynchronous discussions on GitHub repositories. [00:33:32] Francois: There are contributions from non-group participants, and everything goes well, and so the process doesn't kick in. You hardly ever have to say: no, you didn't respect the process there, you closed an issue you shouldn't have. It's pretty rare that you have to do that. Things just proceed naturally, because everyone understands what they're doing and why they're doing it. [00:33:55] Francois: We still have a role, I guess, in the sense that from time to time that doesn't work, and you have to intervene and make sure that the exception is caught and processed in the right way. Discussions are public on GitHub [00:34:10] Jeremy: And you said this process is asynchronous and in public. Is this in GitHub issues? How would somebody go and see the results? [00:34:22] Francois: Yes, there are basically a gazillion GitHub repositories under the W3C organization on GitHub. Most groups are using GitHub. It's not mandatory, and we don't mandate any tooling, but the fact is that most groups have been transitioning to GitHub for a number of years already.
[00:34:45] Francois: So that's where most of the work happens, through issues, through pull requests. That's where people can go and raise issues against specifications. We also, from time to time, get feedback from developers encountering a bug in a particular implementation, which we try to gently redirect to the actual bug trackers, because we're not responsible for the implementations of the specs, unless the spec is not clear. [00:35:14] Francois: We are responsible for the spec itself: making sure that the spec is clear and that implementers understand how they should implement something. Why the W3C doesn't specify a video or audio codec [00:35:25] Jeremy: I can see how people would make that mistake, because they see the feature, but it's not the responsibility of the W3C to implement any of the specifications. Something you had mentioned is the issue of intellectual property rights, and how, when you have a recommendation, you require the different organizations involved to make their patents available to use freely. [00:35:54] Jeremy: I wonder why there was never any kind of recommendation for audio or video codecs in browsers, since you have certain ones that are considered royalty free. But I believe that's never been specified. [00:36:11] Francois: At W3C, you mean? Yes. We've tried; it's not for lack of trying. We've had a number of discussions with various stakeholders saying, hey, we really need an audio or video codec for the web. PNG is an example of an image format which got standardized at W3C, and it got standardized at W3C for similar reasons: there had to be a royalty-free image format for the web, and there was none at the time. Of course, nowadays JPEG and GIF (or JIF, whatever you call it) are no problem, but at the time PNG was really meant to address this issue, and it worked for PNG. For audio and video, [00:37:01] Francois: we haven't managed to secure commitments by stakeholders. So it's not lack of willingness: we would have loved to get a royalty-free audio codec and a royalty-free video codec. Again, audio and video codecs are extremely complicated, [00:37:20] Francois: not only because of patents, but also because of the entire business ecosystem that exists around them, for good reasons. In order for a codec to be supported, deployed, effective, it really needs to mature a lot. It needs to be added at a hardware level to a number of devices: capturing devices, but also, of course, players. [00:37:46] Francois: And that takes a hell of a lot of time, and that's why you also enter a number of business considerations, with business contracts between entities. So on a personal level, I'm pleased to see, for example, the Alliance for Open Media working on AV1, which, at least, they want to be royalty free, and they've actually been adopting the W3C patent policy to do this work. [00:38:11] Francois: So we're pleased to see that they've been adopting the same process. And same thing: AV1 is not yet at the same support stage as other codecs. In devices, I mean.
There's an open question as to what we are going to do in the future with that. It's doubtful that the W3C will be able to work on a royalty-free audio codec or a royalty-free video codec itself, because it's probably too late now in any case. [00:38:43] Francois: But it's one of these angles in the web platform where we wish we had the technology available for free, and that's not exactly how things work in practice. The way codecs are developed remains really patent oriented. [00:38:57] Francois: And you will find more codecs being developed, and that's where geopolitics can even enter the play. Because if you go to China, you will find new codecs emerging that get developed within China, also because the other codecs come mostly from the US, so it's a bit of a problem, and so on. [00:39:17] Francois: I'm not going to enter into details, and I would probably say stupid things in any case. But we continue to see emerging codecs that are not royalty free, and it's probably going to remain the case for a number of years. Unfortunately, from a W3C perspective and my perspective, of course. [00:39:38] Jeremy: There are always these new formats coming out, and the rate at which they get supported, even on a per-browser basis, varies a lot. There can be a long time between, for example, WebP being released and a browser supporting it. So it seems like maybe we're going to be in that situation for a while, where the codecs will come out and maybe the browsers will support them, maybe they won't, but the timeline is very uncertain. Digital Rights Management (DRM) and Media Source Extensions [00:40:08] Jeremy: Something you had mentioned, maybe this was in your email to me earlier, is that with some of these specifications there are business considerations, like with digital rights management and media source extensions. I wonder if you could talk a little bit about what Media Source Extensions is, and Encrypted Media Extensions, and what the considerations or challenges are there. [00:40:33] Francois: I'm going to go very, very quickly over the history of video and audio support on the web. Initially it was supported through plugins. You are maybe too young to remember that, but we had extensions added: RealPlayer, [00:40:46] Francois: this kind of thing, Flash as well, supporting videos in web pages, but it was not provided by the web browsers themselves. Then HTML5 changed the situation, adding these new tags, audio and video. By default, you give them a resource, like you do an image, except it's an audio or a video file: they're going to download this video file or audio file, and they're going to play it. That works well. But as soon as you want to do any kind of real streaming, files are too large to get with just a single fetch. You really want to stream them chunk by chunk, and you want to adapt the resolution at which you send the stream based on real-time conditions of the user's network. [00:41:37] If there's plenty of bandwidth, you want to send the user the highest possible resolution. If there's some kind of temporary hiccup in the network, you really want to lower the resolution. That's called adaptive streaming. And to get adaptive streaming on the web, well, there are a number of protocols that exist.
And to get adaptive streaming on the web, well, there are a number of protocols that exist. [00:41:54] Francois: Same thing. Some many of them are proprietary and actually they remain proprietary, uh, to some extent. and, uh, some of them are over http and they are the ones that are primarily used in, uh, in web contexts. So DASH comes to mind, DASH for Dynamic Adaptive streaming over http. HLS is another one. Uh, initially developed by Apple, I believe, and it's, uh, HTTP live streaming probably. Exactly. And, so there are different protocols that you can, uh, you can use. Uh, so the goal was not to standardize these protocols because again, there were some proprietary aspects to them. And, uh, same thing as with codecs. [00:42:32] Francois: There was no, well, at least people wanted to have the, uh, flexibility to tweak parameters, adaptive streaming parameters the way they wanted for different scenarios. You may want to tweak the parameters differently. So they, they needed to be more flexibility on top of protocols not being truly available for use directly and for implementation directly in browsers. [00:42:53] Francois: It was also about providing applications with, uh, the flexibility they would need to tweak parameters. So media source extensions comes into play for exactly that. Media source extensions is really about you. The application fetches chunks of its audio and video stream the way it wants, and with the parameters it wants, and it adjusts whatever it wants. [00:43:15] Francois: And then it feeds that into the, uh, video or audio tag. and the browser takes care of the rest. So it's really about, doing, you know, the adaptive streaming. let applications do it, and then, uh, let the user agent, uh, the browser takes, take care of the rendering itself. That's media source extensions. [00:43:32] Francois: Initially it was pushed by, uh, Netflix. They were not the only ones of course, but there, there was a, a ma, a major, uh, proponent of this, uh, technical solution, because they wanted, uh, they, uh, they were, expanding all over the world, uh, with, uh, plenty of native, applications on all sorts of, uh, of, uh, devices. [00:43:52] Francois: And they wanted to have a way to stream content on the web as well. both for both, I guess, to expand to, um, a new, um, ecosystem, the web, uh, providing new opportunities, let's say. But at the same time also to have a fallback, in case they, because for native support on different platforms, they sometimes had to enter business agreements with, uh, you know, the hardware manufacturers, the whatever, the, uh, service provider or whatever. [00:44:19] Francois: and so that was a way to have a full back. That kind of work is more open, in case, uh, things take some time and so on. So, and they probably had other reasons. I mean, I'm not, I can't speak on behalf of Netflix, uh, on others, but they were not the only ones of course, uh, supporting this, uh, me, uh, media source extension, uh, uh, specification. [00:44:42] Francois: and that went kind of, well, I think it was creating 2011. I mean, the, the work started in 2011 and the recommendation was published in 2016, which is not too bad from a standardization perspective. It means only five years, you know, it's a very short amount of time. 
Encrypted Media Extensions [00:44:59] Francois: At the same time, in parallel and in complement to the Media Source Extensions specification, there was work on Encrypted Media Extensions, and here it was pushed by the same proponents, in a way, because they wanted to get premium content on the web. [00:45:14] Francois: And by premium content, think of movies and these kinds of beasts. The basic issue with digital assets such as movies is that they cost hundreds of millions to produce, I mean, some cost less, of course, and yet it's super easy to copy them if you have access to the digital file. [00:45:35] Francois: You just copy, and that's it. Piracy is super easy to achieve. It's illegal, of course, but it's super easy to do. And so that's where the different legislations come into play with digital rights management: most countries allow systems that can encrypt content, through what we call DRM systems. [00:45:59] Francois: So content providers, the ones that have movies, the studios (and Netflix is one of the studios nowadays, but not only them, all major studios), wanted to have something that would allow them to stream encrypted content, encrypted audio and video, mostly video, to web applications. [00:46:25] Francois: Otherwise they were basically saying: sorry, but this premium content will never make it to the web, because there's no way we're going to send it in the clear to the end user. So Encrypted Media Extensions is an API that allows you to interface with what's called the Content Decryption Module, CDM, which itself interacts with the DRM systems that the browser may or may not support. [00:46:52] Francois: And so it provides a way for an application to receive encrypted content and pass it over to the user agent, which gets the right license keys from whatever system and passes that logic over to the CDM system, which is a kind of black box that does its magic to get the right decryption key, and then decrypts the content so it can be rendered. [00:47:21] Francois: Encrypted Media Extensions triggered a hell of a lot of controversy, because it's DRM, and many people think DRM systems should be banned, especially on the web, because the premise of the web is that the user trusts a user agent. The web browser is called the user agent in all our specifications. [00:47:44] Francois: And that's the trust relationship. Then they interact with a content provider, and whatever they do with the content is, I guess, actually their problem. And DRM introduces a third party: the end user no longer has control over the content. [00:48:03] Francois: They have to rely on something else that restricts what they can do with the content. So it's not only a trust relationship with the user agent; it's also with something else, the content provider, in the end, the one that provides the license.
[00:48:22] Francois: And so that triggered a hell of a lot of discussion at the W3C, which degenerated into formal objections being raised against the specification, and that escalated all the way up. It's the story at W3C that really divided the membership into opposed camps, in a way. It was not really 50/50, not just one huge fight, but it triggered a hell of a lot of discussion and a lot of formal objections at the time. [00:49:00] Francois: From a governance perspective, interestingly, the W3C used to be a dictatorship. That's not how you should formulate it, of course, and I hope this is not going to be public, this podcast. But it was a benevolent dictatorship, you could see it this way, in the sense that the whole process escalated to one single person, Tim Berners-Lee, who had the final say when none of the other layers had managed to catch and resolve a conflict. [00:49:32] Francois: That has hardly ever happened in the history of the W3C, but it happened for EME, for Encrypted Media Extensions. It had to go to the director level, who, after due consideration, decided to allow EME to proceed. And that's why we have an EME standard right now. But it remains something on the side. EME is still in the scope of the Media Working Group, for example, but if you look at the charter of the working group, we try to scope the updates we can make to the specification, to make sure that we don't reopen a can of worms, because it's really a topic that triggers friction, for good and bad reasons, again. [00:50:20] Jeremy: And when you talk about Media Source Extensions, that is the ability to write custom code to stream video in whatever way you want. You mentioned MPEG-DASH and HTTP Live Streaming. So in that case, would the developer write that code in JavaScript that's executed by the browser? [00:50:43] Francois: Yep, that would be it. And the approach nowadays, at W3C and on the web in general, is more and more to develop low-level APIs and to let libraries emerge that make developers' lives easier. So for MPEG-DASH we have dash.js, which does a fantastic job at implementing the complexity of adaptive streaming. [00:51:13] Francois: You just hook it into your workflow, and that's it.
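Hooking dash.js into a page is about this small; the manifest URL is hypothetical:

  // dash.js implements the DASH adaptive-streaming logic on top of MSE.
  const player = dashjs.MediaPlayer().create();
  player.initialize(
    document.querySelector('video'),
    'https://cdn.example.com/stream/manifest.mpd',  // hypothetical manifest
    true  // autoplay
  );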
Encrypted Media Extensions are closed source [00:51:20] Jeremy: And with the Encrypted Media Extensions, I'm trying to picture how those work, and how they work differently. [00:51:28] Francois: Well, the key architecture is this: take the stream that you assemble with Media Source Extensions, for example, because typically they're used in collaboration. When you hook it into the video tag, you also call EME, and actually the stream goes to EME. [00:51:49] Francois: And when it goes to EME, the user agent hands over the stream, still encrypted at this time. The encrypted stream goes to the CDM, the Content Decryption Module, and that's a black box, well, it has some black-box logic. So even if you look at the Chromium source code, for example, you won't see the implementation of the CDM, because it's a black box. It's not part of the browser per se; its execution is sandboxed. [00:52:17] Francois: EME is kind of unique in this way. The CDM is not allowed to make network requests, for example, again for privacy reasons. So anyway, the CDM box has the logic to decrypt the content, and it hands it over, and then it depends on the level of protection you need, or that the system supports. [00:52:37] Francois: It can be software-based protection, in which case a highly motivated attacker could actually get access to the decoded stream. Or it can be more hardware protected, in which case [00:52:58] Francois: it goes through the hardware to your final screen, in a mode that the OS supports but that even the user agent doesn't have access to. So the browser can't even see the pixels that get rendered on the screen. There are several other APIs that you could use, for example, to take a screenshot of your application, [00:53:16] Francois: and you cannot apply them to such content, because they're just going to return a black box, again because the user agent itself does not see the pixels, which is exactly what you want with encrypted content. [00:53:29] Jeremy: And the Content Decryption Module, if I understand correctly, is something that's shipped with the browsers. But you were saying that if you were to look at the public source code of Chromium or of Firefox, you would not see that implementation. Content Decryption Module (Widevine, PlayReady) [00:53:47] Francois: True. The typical examples are Widevine... so, interestingly, in theory these systems could have been provided by anyone. In practice, they've been provided by the browser vendors themselves. Google has Widevine. Microsoft has something called PlayReady. Apple's, sorry, the name escapes me; I don't have it on the top of my mind. That's basically what each supports, so they also own that code, but in a way they don't have to. And Firefox, actually, I don't remember which one they support among these three, but they don't own that code; typically [00:54:29] Francois: they provide a wrapper around it. And that's exactly the crux of the issue that people have with DRM, right? Suddenly you have a bit of code running there that, okay, you can sandbox, but you cannot inspect, and you don't have access to its source code. [00:54:52] Jeremy: That's interesting. So almost the entire browser is open source, but if you want to watch a Netflix movie, for example, then you need to run this CDM in addition to just the browser code. I think we've kind of covered a lot.
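To make the flow described above concrete, here is a heavily simplified sketch of the EME handshake. The key system is Widevine, as discussed; the license-server URL is hypothetical, and real players negotiate configurations and handle errors:

  // Heavily simplified EME handshake; a sketch, not a production player.
  async function setUpEme(video) {
    // 1. Ask the browser for a CDM that supports the key system.
    const access = await navigator.requestMediaKeySystemAccess('com.widevine.alpha', [{
      initDataTypes: ['cenc'],
      videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.640028"' }],
    }]);
    const mediaKeys = await access.createMediaKeys();
    await video.setMediaKeys(mediaKeys);

    // 2. When encrypted media is detected, open a session; the CDM emits
    //    a license request that the application relays to a license server.
    video.addEventListener('encrypted', async (event) => {
      const session = mediaKeys.createSession();
      session.addEventListener('message', async (msg) => {
        const res = await fetch('https://license.example.com', {  // hypothetical URL
          method: 'POST',
          body: msg.message,
        });
        // 3. Hand the license back to the CDM; decryption happens inside it.
        await session.update(await res.arrayBuffer());
      });
      await session.generateRequest(event.initDataType, event.initData);
    });
  }

Note how the application only ferries opaque messages back and forth; the decryption itself stays inside the closed-source CDM, which is the point of contention discussed in the episode.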
Documenting what's available in browsers for developers

[00:55:13] Jeremy: I wonder if there's any other examples or anything else you thought would be important to mention in the context of the W3C.

[00:55:23] Francois: There's one thing which relates to activities I'm also doing at W3C. Here, we've been talking a lot about standards and implementations in browsers, but there's also adoption of these technology standards by developers in general, and making sure that developers are aware of what exists and understand what exists. And one of the key pain points that people

[00:55:54] Francois: keep raising on the web platform is, first, well, the web platform is unique in the sense that there are different implementations.

[00:56:03] Francois: In other contexts, other runtimes, there's just one implementation, provided by the company that owns the system. The web platform is implemented by different organizations, and so you end up with a system where what's in the specs is not necessarily supported.

[00:56:22] Francois: And of course MDN tries to document what's supported, thoroughly. But for MDN to work, there's a hell of a lot of need for data that tracks browser support. And this data is typically in a project called browser-compat-data, BCD, owned by MDN as well, but the Open Web Docs collective is the one maintaining that data under the hood.

[00:56:50] Francois: Anyway, all of that to say that we need to track things beyond the work on technical specifications, because if you look at it from the W3C perspective, life ends when the spec reaches candidate recommendation or recommendation; you could just say, oh, done with my work. But that's not how things work.

[00:57:10] Francois: You need the feedback loop, to make sure that developers get the information and can provide the feedback that standardization can benefit from and browser vendors can benefit from. So we've been working on a project called web-features, with browser vendors mainly, and a few of the folks at MDN and Can I Use and different people, to catalog the web in terms of features that speak to developers.

[00:57:40] Francois: So it's a set of feature IDs, with a feature name and a feature description, framed the way developers would understand them, instead of going too fine-grained in terms of "there's this one function call that does this," because that's the kind of support data you get from browser-compat-data and MDN. It's a coarser-grained structure that says: these are the features that make sense, that's what developers talk about, and that's the level we need data on, because that's how developers are going to approach the specs.

[00:58:09] Francois: And from that we've derived the notion of baseline badges, which are now shown on MDN and Can I Use, and integrated in IDE tools such as Visual Studio Code, and some linters have started to integrate that data as well.

[00:58:41] Francois: The way it works is that we've been mapping these coarser-grained features to BCD's finer-grained support data, and from there we've been deriving a kind of badge that says: this feature has limited availability, because it's only implemented in one or two browsers, for example;

[00:59:07] Francois: it's newly available, because it's implemented across the main browsers that people use, but it's recent; or it's widely available, which, well, there's been lots of discussion in the group to come up with a definition, and it essentially ends up being 30 months after a feature became newly available.
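(The catalog François describes is published as machine-readable data that tools can consume. As a rough sketch, assuming the shape of the web-features npm package at the time of writing, with 'grid' as an illustrative feature ID, a tool could look up a feature's baseline status like this:)

import { features } from 'web-features'; // package shape is an assumption

const grid = features['grid'];
console.log(grid.name);                      // human-readable feature name
console.log(grid.status.baseline);           // 'high' (widely), 'low' (newly), or false (limited)
console.log(grid.status.baseline_high_date); // when it crossed into widely available, if it has
console.log(grid.status.support);            // first supporting version per browser

(This is the same data that feeds the baseline banners on MDN and Can I Use that come up below.)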
[00:59:34] Francois: And that's the time it takes for the different versions of the browser to propagate, because it's not because there's a new version of a browser that people immediately get it. It takes a while to propagate across the user base.

[00:59:56] Francois: And so the goal is to have a signal that developers can rely on, saying: okay, it's widely available, so I can really use that feature. And of course, if that doesn't work, then we need to know about it. So we're also working with people doing developer surveys, such as State of CSS, State of HTML, State of JavaScript; those are, I guess, the main ones.

[01:00:15] Francois: We're also running MDN short surveys with the MDN people, to gather feedback on these same features and to complete the loop. And this data is also used internally by browser vendors to inform their prioritization process, typically as part of the Interop project that they're also running on the side.

[01:00:43] Francois: So I've mentioned a number of different projects coming together. But the goal is to create links across all of these ongoing projects, with a view to integrating developers more and gathering feedback as early as possible, informing the decisions

[01:01:04] Francois: we take at the standardization level that can affect the lives of developers, and making sure it affects them in a positive way.

[01:01:14] Jeremy: I'm just trying to understand, because you had mentioned that there's web-features and baseline, and I was trying to picture where developers would actually see these things. And it sounds like, from what you're saying, the W3C comes up with what stage some of these features are at, and then developers would end up seeing it on MDN or some other site.

[01:01:37] Francois: So, I'm working on it, but that doesn't mean it's a W3C thing. Again, we have different types of groups. It's a community group, the WebDX Community Group at W3C, which means it's a community-owned thing. That's why I mentioned working with representatives and people from MDN, people from Open Web Docs.

[01:02:05] Francois: So that's the first point. The second point is that this data is indeed now being integrated. You'll see it on top of the MDN pages, on most of them. If you look at any kind of feature, you'll see a few logos, a baseline banner. And then Can I Use, it's the same thing.

[01:02:24] Francois: You're going to get a baseline banner. It's meant to capture the fact that the feature is widely available, or that you may need to pay attention to it.
Of course, it's a simplification. The way the messaging is done to developers is meant to capture the fact that they may want to look into more than just this baseline status, because

[01:02:54] Francois: if you take a look at web-platform-tests, for example, and you were to base your assessment of whether a feature is supported on test results, you'll end up saying the web platform has no supported technology, because there is absolutely no API where browsers pass 100% of the test suite.

[01:03:18] Francois: There may be a few of them, I don't know. But there's a simplification in the process. When a feature is said to be baseline, there may be more things to look at nevertheless, but it's meant to provide a signal that developers can rely on in their day-to-day lives,

[01:03:36] Francois: if they use the feature as reasonably intended and don't push the logic too far.

[01:03:48] Jeremy: I see. Yeah. I'm looking at one of the pages on MDN right now, and I can see at the top there's the baseline, and it mentions that this feature works across many browsers and devices, and then they say how long it's been available. And so that's a way that people can tell at a glance which APIs they can use.

[01:04:08] Francois: It also started out of a desire to summarize the browser compatibility table that you see at the bottom of the page on MDN, where developers were saying: well, it's fine, but it goes too much into detail, so we don't know, in the end, can we use that feature or can we not?

[01:04:28] Francois: So baseline is meant as an informed summary of that; it relies on the same data again. And more importantly, beyond MDN we're working with tools providers to integrate that as well. So I mentioned Visual Studio Code is one of them. Recently they shipped a new version where, when you use a feature, you can have a contextual

[01:04:53] Francois: menu that tells you: yeah, this CSS property is fine, you can use it, it's widely available; or: be aware, this one is limited availability, only available in Firefox or Chrome or Safari/WebKit, whatever.

[01:05:08] Jeremy: I think that's a good place to wrap it up. If people want to learn more about the work you're doing, or learn more about this whole recommendations process, where should they head?

[01:05:23] Francois: Generally speaking, we're extremely open to people contributing to the W3C. Where should they go? It depends on what they want. I guess the way things usually start for someone getting involved in the W3C is that they have some

Igalia
Wishes, Signals, and the Shape of Standards

Igalia

Play Episode Listen Later Sep 16, 2025 61:01


Igalia's Brian Kardell and Eric Meyer chat with the W3C's Dominique Hazaël-Massieux about ways that we try to involve developers and developer input into standards. Mentioned links: Eric's post about XSLT drama, recent episode on XSLT drama, Interop 2025 Dashboard, Interop Project / Call for Proposals, W3C Developers, Web Platform DX Features Project

Igalia
Standards, Inc: Inside the new, old organization with Seth Dobbs

Igalia

Play Episode Listen Later Aug 5, 2025 59:58


Brian and Eric sit down to chat with W3C's new CEO Seth Dobbs about the W3C and standards

Les Cast Codeurs Podcast
LCC 327 - Mon ami de 30 ans

Les Cast Codeurs Podcast

Play Episode Listen Later Jun 16, 2025 103:18


In this episode, Katia and Antonio are back. Les Cast Codeurs explore WebAssembly 2.0, Java's 30th birthday, Swift-Java interoperability, and the latest Kotlin news. They dive into the evolution of AI with Claude 4 and GPT-4.1, debate artificial consciousness, and share their hands-on experience integrating AI into development. Between virtualization, infrastructure challenges, and open source security concerns, a discussion rich in technical and practical insights. Recorded June 13, 2025. Download the episode: LesCastCodeurs-Episode-327.mp3, or watch the video on YouTube.

News

Languages

Wasm 2.0 finally official! https://webassembly.org/news/2025-03-20-wasm-2.0/ The Wasm 2.0 specification officially shipped last December; consensus on the specification had been reached earlier, in 2022, and major implementations have supported Wasm 2.0 for some time. The W3C process took a while to reach Candidate Recommendation status for non-technical reasons. Future Wasm versions will adopt an "evergreen" model where the Candidate Recommendation is updated in place; the latest version of the specification is considered the current standard (Candidate Recommendation Draft), and the most up-to-date version is available on the GitHub page. Wasm 2.0 includes: vector instructions for 128-bit SIMD; bulk memory instructions for faster copies and initialization; multiple results for instructions, blocks, and functions; reference types for references to functions or external objects; non-trapping float-to-integer conversions; sign-extension instructions for signed integers. Wasm 2.0 is fully backwards compatible with Wasm 1.0.

Paul Sandoz announces that the JDK will soon include a minimalist API for reading and writing JSON https://mail.openjdk.org/pipermail/core-libs-dev/2025-May/145905.html

Java turns 30: what was impressive at the start? https://blog.jetbrains.com/idea/2025/05/do-you-really-know-java/ Code name Oak, but the trademark was taken. Write Once Run Anywhere. Automatic garbage collection. Multithreading at the heart of the platform, even if Java went through green threads for a while. Security model: applet sandbox, security manager, bytecode verifier, classloader.

Progress on Swift/Java interoperability mentioned at Apple's WWDC 2025 conference https://www.youtube.com/watch?v=QSHO-GUGidA Swift-Java interoperability: use Swift in Java apps and vice versa. History: Swift interoperability already existed with C and C++. Two directions of interoperability: Java from Swift and Swift from Java. JNI is Java's API for native code, but it's verbose. Swift-Java is a project for more flexible, safe, and performant Swift-Java interaction. Practical examples: using Java libraries from Swift and making Swift libraries available to Java. Memory management: Swift-Java uses Java's new FFM API to manage the memory of Swift objects. The Swift-Java project is open source and welcomes contributions.

KotlinConf recap https://www.sfeir.dev/tendances/kotlinconf25-quelles-sont-les-annonces-a-retenir/ by Adelin from Sfeir. "1 in 10 developers" uses Kotlin. Kotlin 2.2 in RC: $$ multi-dollar interpolation to avoid over-interpolation; non-local break/continue (a change in Kotlin's consistency); guards in pattern matching. Other announced features: ecosystem versions aligned with Kotlin; JVM as the default target; a new build tool, Amper. Lots of AI announcements: Koog, a declarative agentic framework; a new version of JetBrains' LLM, Mellum (focused on code). Kotlin and Compose Multiplatform (now stable on iOS); Hot Reload in Compose in alpha; a strategic partnership with Spring to integrate Kotlin well into Spring.

Libraries

A Java version of ADK, the AI agent framework launched by Google, is out https://glaforge.dev/posts/2025/05/20/writing-java-ai-agents-with-adk-for-java-getting-started/ Guillaume worked on the launch of this framework! (API improvements, sample code, docs...)

How to build an MCP server in Java with Quarkus and deploy it on Google Cloud Run https://glaforge.dev/posts/2025/06/09/building-an-mcp-server-with-quarkus-and-deploying-on-google-cloud-run/ Even Guillaume is doing Quarkus now! It uses the MCP support developed by the Quarkus team. It's easy: just annotate a method with @Tool and its arguments with @ToolArg and off you go! The MCP Inspector tool is very handy for manually inspecting how your MCP servers behave. Deploying to Cloud Run is easy thanks to the Dockerfiles provided by Quarkus. As a bonus, Guillaume shows how to configure an MCP server as a tool in the ADK for Java framework, to build AI agents.

Jilt 1.8 is out, an annotation processor for the builder pattern https://www.endoflineblog.com/jilt-1_8-and-1_8_1-released Incremental processing for Gradle; better coverage of your code (so code generated by the annotation processor isn't counted); a fix for an issue with recursive generic types (like Node)

Hibernate Search 8 is out https://in.relation.to/2025/06/06/hibernate-search-8-0-0-Final/ Metrics aggregation; compatibility with the latest OpenSearch and Elasticsearch; Lucene 10 as backend; a preview of compile-time-validated queries.

Hibernate 7 is out https://in.relation.to/2025/05/20/hibernate-orm-seven/ ASL 2.0; Hibernate Validator 9; Jakarta Persistence 3.2 and Jakarta Validation 3.1; saveOrUpdate (entity reattachment) is no longer supported; stateless sessions are more capable: unit operations and not just batch, access to the second-level cache, better APIs for batches (insertMultiple etc.); a new simple, type-safe criteria API that can extend a base query.

An article describing the Quarkus Dev UI https://www.sfeir.dev/back/quarkus-dev-ui-linterface-ultime-pour-booster-votre-productivite-en-developpement-java/ Beyond a quick test or demo, it's a detailed article, and the Quarkus docs aren't great on this topic.

Vert.x 5 is out https://vertx.io/blog/eclipse-vert-x-5-released/ We talked about it at the end of last year or the start of this one. Futures-only model: Vert.x 5 drops the callback model and keeps only Futures, with a new VerticleBase base class better suited to this asynchronous model. Java module (JPMS) support: Vert.x 5 supports the Java Platform Module System with explicit modules, enabling better application modularity. Major gRPC improvements: native support for gRPC Web and gRPC transcoding (HTTP/JSON and gRPC), JSON format in addition to Protobuf, timeout and deadline handling, reflection and health services. io_uring support: native integration of Linux's io_uring (previously in incubation) for better I/O performance on compatible systems. Client-side load balancing: new load-balancing capabilities for HTTP and gRPC clients with various distribution policies. Service Resolver: a new component for dynamic resolution of service addresses, extending load-balancing capabilities to a broader set of resolvers. HTTP proxy improvements: new out-of-the-box transformations, WebSocket upgrade interception, and an SPI for caching with extended spec support. Removals and replacements: several components are deprecated (gRPC Netty, JDBC API, Service Discovery) or removed (Vert.x Sync, RxJava 1), replaced by more modern alternatives such as virtual threads and Mutiny.

Spring AI 1.0 is out https://spring.io/blog/2025/05/20/spring-ai-1-0-GA-released Multi-model ChatClient: a unified API to interact with 20 different AI models, with multi-modal support and structured JSON responses. A complete RAG ecosystem: support for 20 vector databases, an ETL pipeline, and automatic prompt enrichment via advisors. Enterprise features: persistent conversational memory, MCP support, Micrometer observability, and automated evaluators. Agents and workflows: predefined patterns (routing, orchestration, chaining) and autonomous agents for complex AI applications.

Infrastructure

AI models refuse to be shut down, resort to blackmail to avoid it, and even try to sabotage the shutdown https://www.thealgorithmicbridge.com/p/ai-companies-have-lost-controland?utm_source=substac[…]aign=email-restack-comment&r=2qoalf&triedRedirect=true Anthropic researchers showed how Opus 4 blackmailed engineers who wanted to shut it down to put a new version online. A research firm showed the same thing about OpenAI o3: not only does it not want to be shut down, it actively tries to prevent the shutdown.

Apple announces virtualization/containerization support in macOS at WWDC https://github.com/apple/containerization It's open source. It can also launch lightweight VMs. Technical documentation: https://apple.github.io/containerization/documentation/

A large internet outage following a problem at GCP. Cloudflare's postmortem: https://blog.cloudflare.com/cloudflare-service-outage-june-12-2025/ Their storage system (a major dependency) depends exclusively on GCP, but they have plans to move away from this exclusive dependency. Google's first analysis: https://status.cloud.google.com/incidents/ow5i3PPK96RduMcb1SsW An automatically updated quota went wrong; they bypassed the quota in code, but the quota service in us-central1 was overloaded. Upcoming improvements: no propagation of corrupted data, and no global deployment without a rolling upgrade with monitoring that can cut it off (failover). Some other cloud providers also had some (load) issues - unverified.

Data and Artificial Intelligence

Claude 4 is out https://www.anthropic.com/news/claude-4 Two new models launched: Claude Opus 4 (billed as the best coding model in the world) and Claude Sonnet 4 (a significant improvement over Sonnet 3.7). Claude Opus 4 reaches 72.5% on SWE-bench and can sustain performance on long tasks lasting several hours. Claude Sonnet 4 gets 72.7% on SWE-bench while balancing performance and efficiency for daily use. A new "extended thinking with tool use" capability lets Claude alternate between reasoning and tool use. The models can now use multiple tools in parallel and follow instructions more precisely. Improved memory: Claude can extract and save key information to maintain continuity over the long term. Claude Code becomes generally available, with native VS Code and JetBrains integrations for pair programming. Four new API capabilities: a code execution tool, an MCP connector, a Files API, and prompt caching. The hybrid models offer two modes: near-instant responses, and extended thinking for deeper reasoning in "agentic" mode.

Integrating AI beyond chatbots and sparkle buttons https://glaforge.dev/posts/2025/05/23/beyond-the-chatbot-or-ai-sparkle-a-seamless-ai-integration/ A plea for AI integrated transparently and intuitively, beyond chatbots. Chatbots are not always the most intuitive or least disruptive way to use an LLM. Recommendation: put AI directly inside applications for more natural intelligence and usefulness. Examples of seamless integration: Gmail conversation and chat summaries, the Obsidian web clipper that summarizes and tags, LLM code completion. The best AI UX is integrated and contextual, without dedicated "AI buttons" or chat windows. Guillaume's conclusion: successful AI integrations are a natural part of the system, improving workflows without disruption; the developer or user stays in the flow.

Keeping your vector database up to date with Debezium https://debezium.io/blog/2025/05/19/debezium-as-part-of-your-ai-solution/ No need to go into detail, but the idea is to keep the index up to date with changes.

Tooling

A practical guide for choosing the right AI model to use with GitHub Copilot, depending on your software development needs https://github.blog/ai-and-ml/github-copilot/which-ai-model-should-i-use-with-github-copilot/ - Cost/performance balance: GPT-4.1, GPT-4o, or Claude 3.5 Sonnet for general and multilingual tasks. - Fast tasks: o4-mini or Claude 3.5 Sonnet for prototyping or quick learning. - Complex needs: Claude 3.7 Sonnet, GPT-4.5, or o3 for refactoring or software planning. - Multimodal input: Gemini 2.0 Flash or GPT-4o to analyze images, UIs, or diagrams. - Technical/scientific projects: Gemini 2.5 Pro for advanced reasoning and large data volumes.

UV, a package manager that brings Pythonistas some sanity and speed http://blog.ippon.fr/2025/05/12/uv-un-package-manager-python-adapte-a-la-data-partie-1-theorie-et-fonctionnalites/ For Pythonistas: a faster, simpler package manager, though it is only semi-open (license).

IntelliJ IDEA 2025.1 adds an MCP client mode to the AI assistant https://blog.jetbrains.com/idea/2025/05/intellij-idea-2025-1-model-context-protocol/ For example, run an MCP server that accesses the database.

Methodologies

Development of an open source OAuth 2.1 library by Cloudflare, largely generated by the Claude AI: - Prompts embedded in commits: each commit contains the prompt used, which makes the intent behind the code easier to understand. - Prompting by example: the first prompt showed an example of the desired API usage, which helped the AI better understand expectations. - Structured prompts: the most effective prompts followed a clear pattern: current state, justification for the change, and a precise directive. - Treat prompts like source code: including them in commits helps maintenance. - Accept iteration: every feature took several tries. - Human intervention remains indispensable: some tasks are still faster to do by hand. https://www.maxemitchell.com/writings/i-read-all-of-cloudflares-claude-generated-commits/

Security

A malicious npm package goes through Cursor AI to infect users https://thehackernews.com/2025/05/malicious-npm-packages-infect-3200.html Three malicious npm packages were discovered specifically targeting the Cursor code editor on macOS, downloaded more than 3,200 times in total. The packages masquerade as development tools promising "the cheapest Cursor API" to attract developers interested in affordable AI solutions. Sophisticated attack technique: the packages steal user credentials, fetch an encrypted payload from attacker-controlled servers, then replace Cursor's main.js file. Persistence is ensured by disabling Cursor's automatic updates and restarting the application with the malicious code embedded. A new compromise method: instead of injecting malware directly, the attackers publish packages that modify legitimate software already installed on the system. Persistence even after removal: the malware stays active even if the malicious npm packages are removed, requiring a full reinstallation of Cursor. Exploiting trust: by running in the context of a legitimate application (an IDE), the malicious code inherits all of its privileges and access. Compromised "rand-user-agent" package: a popular legitimate package was infiltrated to deploy a remote access trojan (RAT) in certain versions. Security recommendations: monitor packages that run post-install scripts, modify files outside node_modules, or initiate unexpected network calls, with file integrity monitoring.

Law, society, and organization

The OpenRewrite drama (automated refactoring over large codebases): it has gone proprietary https://medium.com/@jonathan.leitschuh/when-open-source-isnt-how-openrewrite-lost-its-way-642053be287d Key facts: Moderne, Inc. silently re-licensed OpenRewrite code (including rewrite-java-security) from the Apache 2.0 license to a proprietary license (MPL) without consulting contributors. This re-licensing makes the code inaccessible and unmodifiable for the original contributors. Moderne withdrew from the Commonhaus Foundation (dedicated to open source) just before these changes. Moderne's justification is the fear that large companies would use OpenRewrite without contributing, creating competition. Significant community contributions (VMware, Alibaba) under Apache 2.0 were re-licensed without their consent. The legality of this re-licensing is uncertain without contributor CLAs. This move sets a dangerous precedent for future contributors and damages trust in the OpenRewrite ecosystem. Moderne's corrections (following the backlash): the original Apache repositories were restored and archived; major versions were used to signal the license changes; distinct namespaces (org.openrewrite vs. io.moderne) were created to differentiate the modules. The author's suggested fixes: revert the license changes on all community recipes; engage in dialogue and publicly communicate major changes; respect semantic versioning (major versions for license changes).

Apple's former design guru Jony Ive will take an expansive role at OpenAI. OpenAI will acquire Ive's startup for $6.5 billion, while Ive and CEO Sam Altman work on a new generation of devices and other AI products https://www.wsj.com/tech/ai/former-apple-design-guru-jony-ive-to-take-expansive-role-at-openai-5787f7da

Beginners' corner

An article for beginners on the link between source code, bytecode, and debugging https://blog.jetbrains.com/idea/2025/05/sources-bytecode-debugging/ The debugger sees the bytecode, and the link to the source line or method is potentially lost. javac can add the lines and offsets of operations so the debugger can display them. Argument names can also be added to the .class file. When you point at the wrong version of the source file, you get shifted lines; that's why. There are few reasons not to enable these compilation options, even though they make the file a bit bigger.

Conferences

The list of conferences comes from Developers Conferences Agenda/List by Aurélie Vache and contributors:
June 11-13, 2025: Devoxx Poland - Krakow (Poland)
June 12-13, 2025: Agile Tour Toulouse - Toulouse (France)
June 12-13, 2025: DevLille - Lille (France)
June 13, 2025: Tech F'Est 2025 - Nancy (France)
June 17, 2025: Mobilis In Mobile - Nantes (France)
June 19-21, 2025: Drupal Barcamp Perpignan 2025 - Perpignan (France)
June 24, 2025: WAX 2025 - Aix-en-Provence (France)
June 25, 2025: Rust Paris 2025 - Paris (France)
June 25-26, 2025: Agi'Lille 2025 - Lille (France)
June 25-27, 2025: BreizhCamp 2025 - Rennes (France)
June 26-27, 2025: Sunny Tech - Montpellier (France)
July 1-4, 2025: Open edX Conference 2025 - Palaiseau (France)
July 7-9, 2025: Riviera DEV 2025 - Sophia Antipolis (France)
September 5, 2025: JUG Summer Camp 2025 - La Rochelle (France)
September 12, 2025: Agile Pays Basque 2025 - Bidart (France)
September 18-19, 2025: API Platform Conference - Lille (France) & Online
September 23, 2025: OWASP AppSec France 2025 - Paris (France)
September 25-26, 2025: Paris Web 2025 - Paris (France)
October 2-3, 2025: Volcamp - Clermont-Ferrand (France)
October 3, 2025: DevFest Perros-Guirec 2025 - Perros-Guirec (France)
October 6-7, 2025: Swift Connection 2025 - Paris (France)
October 6-10, 2025: Devoxx Belgium - Antwerp (Belgium)
October 7, 2025: BSides Mulhouse - Mulhouse (France)
October 9, 2025: DevCon #25: quantum computing - Paris (France)
October 9-10, 2025: Forum PHP 2025 - Marne-la-Vallée (France)
October 9-10, 2025: EuroRust 2025 - Paris (France)
October 16, 2025: PlatformCon25 Live Day Paris - Paris (France)
October 16, 2025: Power 365 - 2025 - Lille (France)
October 16-17, 2025: DevFest Nantes - Nantes (France)
October 30-31, 2025: Agile Tour Bordeaux 2025 - Bordeaux (France)
October 30-31, 2025: Agile Tour Nantais 2025 - Nantes (France)
October 30 - November 2, 2025: PyConFR 2025 - Lyon (France)
November 4-7, 2025: NewCrafts 2025 - Paris (France)
November 5-6, 2025: Tech Show Paris - Paris (France)
November 6, 2025: dotAI 2025 - Paris (France)
November 7, 2025: BDX I/O - Bordeaux (France)
November 12-14, 2025: Devoxx Morocco - Marrakech (Morocco)
November 13, 2025: DevFest Toulouse - Toulouse (France)
November 15-16, 2025: Capitole du Libre - Toulouse (France)
November 19, 2025: SREday Paris 2025 Q4 - Paris (France)
November 20, 2025: OVHcloud Summit - Paris (France)
November 21, 2025: DevFest Paris 2025 - Paris (France)
November 27, 2025: DevFest Strasbourg 2025 - Strasbourg (France)
November 28, 2025: DevFest Lyon - Lyon (France)
December 5, 2025: DevFest Dijon 2025 - Dijon (France)
December 10-11, 2025: Devops REX - Paris (France)
December 10-11, 2025: Open Source Experience - Paris (France)
January 28-31, 2026: SnowCamp 2026 - Grenoble (France)
February 2-6, 2026: Web Days Convention - Aix-en-Provence (France)
February 3, 2026: Cloud Native Days France 2026 - Paris (France)
April 23-25, 2026: Devoxx Greece - Athens (Greece)
June 17, 2026: Devoxx Poland - Krakow (Poland)

Contact us

To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs Contact us via X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com Do a crowdcast or a crowdquestion. Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs All episodes and all the info at https://lescastcodeurs.com/

Igalia
What Even is a Verifiable Credential?

Igalia

Play Episode Listen Later Jun 6, 2025 59:42


Eric and Brian chat with Dmitri Zagidulin and Juan Caballero of the W3C's Verifiable Credentials Working Group to learn about Verifiable Credentials

HearSay
ADA Title II and Government Readiness with Don Torrez

HearSay

Play Episode Listen Later May 2, 2025 44:56


In the season finale of HearSay, we're joined by Don Torrez, Director of Partnerships at CivicPlus, who discussed the upcoming ADA Title II compliance deadline and how government municipalities are proactively preparing. Mike and Don also discussed best practices for enhancing accessibility and how the partnership between AudioEye and CivicPlus simplifies the whole process.
Below are links to the resources mentioned during the show:
ADA fact sheet on the new rule for websites and mobile apps provided by state and local governments: https://www.ada.gov/resources/2024-03-08-web-rule/
W3C's WCAG Guidelines: https://www.w3.org/WAI/standards-guidelines/wcag/
AudioEye's ADA Compliance Checklist: https://www.audioeye.com/post/ada-reasonable-accommodation-checklist
View Transcript: https://aeurl.xyz/hearsay-podcast-with-done-torrez
HearSay is a podcast focusing on the advocates, heroes, and leaders making the web more accessible. We're interviewing these change makers to hear what they have to say, to set the record straight, and offer their perspectives on how we can all work to make the web accessible to all.

Thinking Elixir Podcast
251: SSH Vulnerability and Cookies are Changing

Thinking Elixir Podcast

Play Episode Listen Later Apr 29, 2025 41:51


News includes a critical Unauthenticated Remote Code Execution vulnerability in Erlang/OTP SSH, José Valim teasing a new project, Oban Pro v1.6's impressive new "Cascade Mode" feature, Semaphore CI/CD platform being open-sourced as a primarily Elixir application, new sandboxing options for Elixir code with Dune and Mini Elixir, BeaconCMS development slowing due to DockYard cuts, and a look at the upcoming W3C Device Bound Session Credentials standard that will impact all web applications, and more! Show Notes online - http://podcast.thinkingelixir.com/251

Elixir Community News
https://paraxial.io/ – Paraxial.io is sponsoring today's show! Sign up for a free trial of Paraxial.io today and mention Thinking Elixir when you schedule a demo for a limited time offer.
https://x.com/ErlangDiscu/status/1914259474937753747 – Unauthenticated Remote Code Execution vulnerability discovered in Erlang/OTP SSH.
https://github.com/erlang/otp/security/advisories/GHSA-37cp-fgq5-7wc2 – Official security advisory for the Erlang/OTP SSH vulnerability.
https://paraxial.io/blog/erlang-ssh – Paraxial.io's detailed blog post addressing how the SSH vulnerability impacts typical Elixir systems.
https://elixirforum.com/t/updated-nerves-systems-available-with-cve-2025-32433-ssh-fix/70539 – Updated Nerves systems available with SSH vulnerability fix.
https://bsky.app/profile/oban.pro/post/3lndzg72r2k2g – Announcement of Oban Pro v1.6's new "Cascade Mode" feature.
https://oban.pro/articles/weaving-stories-with-cascading-workflows – Blog post demonstrating Oban Pro's new Cascading Workflows feature used to create children's stories with AI.
https://bsky.app/profile/josevalim.bsky.social/post/3lmw5fvnyvc2k – José Valim teasing a new logo with "Soon" message.
https://tidewave.ai/ – New site mentioned in José Valim's teasers, not loading to anything yet.
https://github.com/tidewave-ai – New GitHub organization related to José Valim's upcoming announcement.
https://github.com/tidewave-ai/mcp_proxy_elixir – The only public project in the tidewave-ai organization - an Elixir MCP server for STDIO.
https://x.com/chris_mccord/status/1913073561561858229 – Chris McCord teasing AI development with Phoenix applications.
https://ashweekly.substack.com/p/ash-weekly-issue-13 – Zach Daniel teasing upcoming Ash news to be announced at ElixirConf EU.
https://elixirforum.com/t/dune-sandbox-for-elixir/42480 – Dune - a sandbox for Elixir created by a Phoenix maintainer.
https://github.com/functional-rewire/dune – GitHub repository for Dune, an Elixir code sandbox.
https://blog.sequinstream.com/why-we-built-mini-elixir/ – Blog post explaining Mini Elixir, another Elixir code sandbox solution.
https://github.com/sequinstream/sequin/tree/main/lib/sequin/transforms/minielixir – GitHub repository that contains Mini Elixir, an Elixir AST interpreter.
https://www.reddit.com/r/elixir/comments/1k27ekg/we_built_a_custom_elixir_ast_interpreter_for/ – Reddit discussion about the Mini Elixir AST interpreter.
https://github.com/semaphoreio/semaphore – Semaphore CI/CD platform open-sourced under Apache 2.0 license - primarily an Elixir application.
https://semaphore.io/ – Official website for the Semaphore CI/CD platform.
https://docs.semaphoreci.com/CE/getting-started/install – Installation guide for Semaphore Community Edition.
https://bsky.app/profile/markoanastasov.bsky.social/post/3lj5o5h5z7k2t – Announcement from Marko Anastasov, co-founder of Semaphore CI, about open-sourcing their platform.
https://github.com/elixir-dbvisor/sql – GitHub repository for SQL parser and sigil with impressive benchmarks.
https://groups.google.com/g/elixir-ecto/c/8MOkRFAdLZc?pli=1 – Discussion about the SQL parser being 400-650x faster than Ecto for generating SQL.
https://bsky.app/profile/bcardarella.bsky.social/post/3lndymobsak2p – Announcement about BeaconCMS reducing development due to DockYard cuts.
https://bsky.app/profile/did:plc:vnywtpvzgdgetnwea3fs3y6w – Related profile for the BeaconCMS announcement.
https://beaconcms.org/ – BeaconCMS official website.
https://github.com/BeaconCMS/beacon – GitHub repository for BeaconCMS.
Do you have some Elixir news to share? Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at show@thinkingelixir.com

Discussion Resources
Discussion about Device Bound Session Credentials, a W3C initiative being built into major browsers that will require minor changes to Phoenix for implementation.
https://w3c.github.io/webappsec-dbsc/ – W3C - Device Bound Session Credentials proposal
https://github.com/w3c/webappsec-dbsc/ – Device Bound Session Credentials explainer
https://developer.chrome.com/docs/web-platform/device-bound-session-credentials – Device Bound Session Credentials (DBSC) on the Google Chrome developer blog
https://en.wikipedia.org/wiki/Trusted_Platform_Module – Wikipedia article on the Trusted Platform Module, relevant to the Device Bound Session Credentials discussion.
https://www.grc.com/sn/sn-1021-notes.pdf – Other podcast show notes discussing Device Bound Session Credentials (DBSC).
https://twit.tv/shows/security-now/episodes/1021?autostart=false – Security Now podcast episode covering Device Bound Session Credentials (time coded link to discussion).

Find us online
- Message the show - Bluesky (https://bsky.app/profile/thinkingelixir.com)
- Message the show - X (https://x.com/ThinkingElixir)
- Message the show on Fediverse - @ThinkingElixir@genserver.social (https://genserver.social/ThinkingElixir)
- Email the show - show@thinkingelixir.com
- Mark Ericksen on X - @brainlid (https://x.com/brainlid)
- Mark Ericksen on Bluesky - @brainlid.bsky.social (https://bsky.app/profile/brainlid.bsky.social)
- Mark Ericksen on Fediverse - @brainlid@genserver.social (https://genserver.social/brainlid)
- David Bernheisel on Bluesky - @david.bernheisel.com (https://bsky.app/profile/david.bernheisel.com)
- David Bernheisel on Fediverse - @dbern@genserver.social (https://genserver.social/dbern)

Search Off the Record
How are web standards made?

Search Off the Record

Play Episode Listen Later Apr 17, 2025 44:56


Ever wondered how web standards are made? Martin and Gary from Google Search take you behind the scenes of the internet's governing bodies. From the IETF to the W3C, learn about the consensus-driven processes that shape the web. Find out why these standards are crucial for ensuring a consistent and reliable online experience.
Resources:
Episode transcript → https://goo.gle/sotr089-transcript
Listen to more Search Off the Record → https://goo.gle/sotr-yt
Subscribe to Google Search Channel → https://goo.gle/SearchCentral
Search Off the Record is a podcast series that takes you behind the scenes of Google Search with the Search Relations team. #SOTRpodcast #SEO
Speakers: Lizzi Sassman, John Mueller, Martin Splitt, Gary Illyes
Products Mentioned: Search Console - General

Igalia
History of the Web: Chris Lilley

Igalia

Play Episode Listen Later Apr 8, 2025 47:27


Igalia's Brian Kardell chats with Chris Lilley, Technical Director at the W3C, about his long history with the Web and the W3C, ranging from line-mode browsers to CSS to SVG and more.

Brave UX with Brendan Jarvis
Meryl Evans - Progress Over Perfection in Accessibility

Brave UX with Brendan Jarvis

Play Episode Listen Later Apr 1, 2025 75:46


Meryl Evans shares why accessibility is everyone's responsibility, how captions can create connection—not just convenience—and why progress over perfection is the mindset that drives lasting inclusion. Highlights include: 11:30 – How did publishing your first video in 2018 change everything? 20:28 – Why progress over perfection is essential in accessibility 29:06 – How do you carry the responsibility of being an advocate? 49:34 – What does it take to build a culture of accessibility? 59:15 – Why equitable design matters for dignity and inclusion ====== Who is Meryl Evans? Meryl is a Certified Professional in Accessibility Core Competencies (CPACC), a sought-after speaker, trainer, and accessibility marketing consultant who has spent over two decades championing inclusive digital experiences. Named one of LinkedIn's Top 12 Voices for Accessibility Advocacy, Meryl's work has been recognised by organisations such as the North Texas Disability Chamber and featured in The Wall Street Journal, MarketingProfs, and The Dallas Morning News. She's the author of The Brilliant Outlook Pocketbook and co-author of Adapting to Web Standards: CSS and Ajax for Big Sites, and her speaking credits include TEDx, SXSW, and the American Marketing Association. Beyond her consulting and speaking, Meryl is a passionate volunteer, contributing to the W3C's Immersive Captions community group and XR Access, helping ensure emerging technologies are accessible to everyone. === Find Meryl here: LinkedIn: https://www.linkedin.com/in/meryl/ Website: https://meryl.net/ YouTube: https://www.youtube.com/merylkevans/videos  X: https://x.com/merylkevans ====== Subscribe to Brave UX Liked what you heard and want to hear more? Subscribe and support the show by leaving a review on Apple Podcasts (or wherever you listen). Apple Podcast Spotify YouTube Podbean Follow us on our other social channels for more great Brave UX content! LinkedIn TikTok Instagram Brendan Jarvis hosts the Show, and you can find him here: Brendan Jarvis on LinkedIn The Space InBetween Website

PurePerformance
The History & Power of Distributed Tracing with Christoph Neumueller & Thomas Rothschaedl

PurePerformance

Play Episode Listen Later Mar 31, 2025 56:19


So you think Distributed Tracing is the new thing? Well, it's not! But it's never been as exciting as it is today! In this episode we combine 50 years of distributed tracing experience across our guests and hosts. We invited Christoph Neumueller and Thomas Rothschaedl, who have seen the early days of agent-based instrumentation, how global standards like the W3C Trace Context allowed tracing to connect large enterprise systems, and how OpenTelemetry is commoditizing data collection across all tech stacks. Tune in and learn about the difference between spans and traces, why collecting the data is only part of the story, how to combat the challenge of dealing with too much data, and how traces relate and connect to logs, metrics, and events.
Links we discussed:
YouTube with Christoph: LINK WILL FOLLOW ONCE VIDEO IS POSTED
Christoph's LinkedIn: https://www.linkedin.com/in/christophneumueller/
Thomas's LinkedIn: https://www.linkedin.com/in/rothschaedl/

Emílias Podcast
Daniele Pishinin: Inspirando Mulheres na Tecnologia e no Front-end

Emílias Podcast

Play Episode Listen Later Mar 27, 2025 33:30


Daniele Pishinin, a software engineer specializing in front-end technologies, shared her journey on the Emílias Podcast. She works as a Frontend Engineer. She is also a Google Women Techmakers ambassador, a mentor at Reprograma, and a collaborator at community events such as DevOps Day Campinas 2024. During the episode, Daniele discussed topics such as her motivation for entering computing, highlighting the impact of video games in her childhood and her choice of a career focused on front-end development. She shared good practices for accessibility and security in application development, including the use of guidelines such as those from the W3C. She also discussed the tools she uses at work, such as Visual Studio Code and React Query. Daniele also recounted challenges faced by women in technology, including harassment early in her career and difficulties for women changing careers. As a mentor at Reprograma, she highlighted the importance of creating opportunities for women to enter the job market. About her role as a Google Women Techmakers ambassador, she emphasized the program's role in connecting women and strengthening support networks. Finally, Daniele encouraged women to persist in computing and recommended the films Estrelas Além do Tempo (Hidden Figures) and Batalhão 6888 (The Six Triple Eight), as well as the book Os Irmãos Karamázov (The Brothers Karamazov). She closed the episode by giving thanks for the opportunity and reiterating her availability for professional connections via LinkedIn.
Daniele's links:
https://www.instagram.com/danipishinin/
https://www.danipishinin.com/
https://www.linkedin.com/in/danipishinin/
Daniele's recommendations:
Estrelas Além do Tempo https://www.imdb.com/pt/title/tt4846340
Batalhão 6888 https://www.imdb.com/pt/title/tt24458622
Irmãos Karamázov https://www.goodreads.com/book/show/43176794-os-irm-os-karamazov
Interviewers:
Adolfo Neto - Professor at UTFPR https://adolfont.github.io/
Nathálya Chaves - Information Systems student at UTFPR and Emílias Podcast scholarship holder.
Editor: Allax Almeida
Episode 122 of the Emílias Podcast. The Emílias Podcast is an outreach project of UTFPR Curitiba that is part of the Rede Emílias de Podcasts https://fronteirases.github.io/redeemilias. Learn all about the Emílias - Armação em Bits program at https://linktr.ee/Emilias #podcast #EMILIAS

Software Sessions
Hong Minhee on ActivityPub

Software Sessions

Play Episode Listen Later Feb 28, 2025 45:39


Hong Minhee is an open source developer and the creator of the Fedify ActivityPub server framework. We talk about how applications like Mastodon and Misskey communicate with one another using ActivityPub. This includes discussions on built-in activities, extending the specification in a backwards compatible way, difficulties implementing JSON-LD, the inbox model, and his experience implementing the specification. Hong Minhee: ActivityPub profile, fedify, hollo. Specifications: ActivityPub W3C specification, JSON Linked Data, Resource Description Framework, W3C Semantic Web Standards, ActivityPub and WebFinger, ActivityPub and HTTP Signatures. ActivityPub implementations: Mastodon, Misskey, Akkoma, Pleroma, Pixelfed, Lemmy, Loops, GoToSocial, ActivityPub support in Ghost, Threads has entered the Fediverse. ActivityPub tools: ActivityPub Academy, BrowserPub, fedify CLI -- Transcript: You can help correct transcripts on GitHub.

What's ActivityPub?

[00:00:00] Jeremy: Today, I'm talking to Hong Minhee. He is the developer of Fedify, a TypeScript library for building ActivityPub server applications. The first thing I think we should start with is defining ActivityPub. What is ActivityPub?

[00:00:16] Hong: ActivityPub is the protocol that lets social networks talk to each other, and it's officially recommended by the W3C. It's what powers this thing we call the Fediverse, which is basically a way for different social media platforms to work together.

Users of ActivityPub

[00:00:39] Jeremy: Can you give some examples that people might have heard of, either users of ActivityPub or things that are a part of this Fediverse?

[00:00:50] Hong: Mastodon is probably the biggest one out there. And you know what's interesting? Meta's Threads has actually started implementing ActivityPub this summer, though it's still pretty much a one-way street right now. In East Asia, especially Japan, there's this really popular microblogging platform called Misskey. It's got so many forks that people actually joke around and call them forkeys. But it's not just about Twitter-style microblogging: there's Pixelfed, which is kind of like Instagram, but for the Fediverse. And those same folks recently launched Loops, which is basically doing what TikTok does, but in the Fediverse. Then you've got stuff like Lemmy, which is doing the Reddit thing in the Fediverse.

[00:02:00] Jeremy: Oh, like Reddit.

[00:02:01] Hong: Yeah. There is so much more out there that I haven't even mentioned. Most of it is open source, which is pretty cool.

[00:02:13] Jeremy: So the first few examples you gave, Mastodon and Meta's Threads, they're very similar to Twitter, right? So that's what you were calling the microblogging applications. And I think what you had said, which is a little bit interesting, is you had said Meta's Threads is only one way. So could you describe what you mean by that?

[00:02:37] Hong: Currently, Meta Threads (accounts) can be followed by other ActivityPub applications, but you cannot follow other people in the Fediverse (from Threads).

[00:02:55] Jeremy: People who are using another microblogging platform like Mastodon can follow someone on Meta's Threads platform, but the other way is not true: if you're on Threads, you can't follow someone on Mastodon.

[00:03:07] Hong: Yes, that's right.

[00:03:09] Jeremy: And that's not a limitation of the protocol itself. That's a design decision, or a decision made by Meta.

[00:03:17] Hong: Yeah. They are slowly implementing ActivityPub, and I hope they will implement the complete spec in the future.
Interoperability through Activities

[00:03:27] Jeremy: And then the other examples you gave: one, I believe, was Pixelfed, which is very similar to Instagram, and the last example you gave was Lemmy, which you said is similar to Reddit. You mentioned the term Fediverse before, and you mentioned that these all use ActivityPub, and since these seem like different kinds of applications, what does it mean for them to interact? With Mastodon and Threads I can kind of understand, because they're both similar to Twitter, so you're posting messages and replying. But what does it mean, for example, for someone on Mastodon to interact with someone on Lemmy, which is like Reddit, because they seem very different?

[00:04:16] Hong: People on Lemmy and Mastodon are called actors and can follow each other. They have interactions between them called activities, and there are several types of activities, like Create and Follow and Undo and Like and so on. So ActivityPub applications tend to use this vocabulary to implement their features. For example, Lemmy uses Like activities for upvoting and Dislike activities for downvoting, and that's translated to likes in Mastodon. So you can submit a post on Lemmy and it shows up on your Mastodon timeline, and if you like that post, it counts as an upvote on Lemmy.

[00:05:36] Jeremy: And probably similarly with Pixelfed, which you said is like Instagram: if you follow someone's Pixelfed account in Mastodon and they post a photo in Pixelfed, you would see it as a post in Mastodon natively, and you could give it a like there.

Adding activities or properties

[00:05:56] Jeremy: And these activities that you mentioned, the like and the dislike, are those part of ActivityPub itself?

[00:06:05] Hong: Yes, and this vocabulary can be extended.

[00:06:10] Jeremy: So you can add additional actions (activities), or are you adding information (properties) to the existing actions?

[00:06:37] Hong: It is called the Activity Vocabulary, and there are things like Accept, Add, Arrive, Block, Create, Delete, Dislike, Flag, Follow, Ignore, Invite, Join, and so on. So basically almost everything you need to build social media is already there in the vocabulary, but if you want to extend it some more, you can define your own vocabulary.

[00:06:56] Jeremy: Most of the things that an Instagram or a Twitter or a Reddit would need are already there. But you're saying that you can have your own vocabulary. So if there's an action or an activity that is not covered by the specification, you can create one yourself.

[00:07:13] Hong: Yes. For example, Misskey and Pleroma defined an emoji react activity to represent emoji reactions.

[00:07:25] Jeremy: Because the systems can extend the vocabulary, what are some other examples of cases where Mastodon or any other of these systems has found that the existing vocabulary is not enough? What are some other examples of applications extending it?

[00:07:45] Hong: For example, Mastodon defined a suspended property. These are not activities; they are properties. ActivityPub consists of several types of objects: there are activities and normal objects, like Article. They can have properties, and there are several existing properties, but those can also be extended. So Mastodon extended some properties they need. For example, they define suspended or discoverable.

[00:08:44] Hong: Suspended tells you whether an actor is suspended by moderators. Discoverable tells you whether an actor wants to be searched and indexed, and there are many more properties Mastodon extended.
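(As a concrete illustration of the mapping Hong describes, with hypothetical domains and IDs, the upvote that Lemmy federates is just an ActivityStreams Like activity, which Mastodon then renders as a favourite:)

{
  "@context": "https://www.w3.org/ns/activitystreams",
  "id": "https://lemmy.example/activities/like/123",
  "type": "Like",
  "actor": "https://lemmy.example/u/alice",
  "object": "https://mastodon.example/users/bob/statuses/1"
}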
Discoverable tells whether an actor wants to be searched and indexed, and there are many more properties Mastodon extended. Actors [00:09:12] Jeremy: And these are properties of the actor. These are properties of the user? [00:09:19] Hong: Yes. Actors. [00:09:21] Jeremy: Cause I think earlier you mentioned that the concept of a user is an actor, and it sounds like what you're saying is an actor can have all these properties. There's probably a username and things like that, but Mastodon has extended the properties so that you can have a property on whether you want to be searched or indexed, and you can have a property that says you're suspended. So I guess your account is still there, but can't be used anymore. Something we should probably talk about then is -- so you have these actors, you have these activities that I'm assuming the actors are performing on one another. What does that data look like, and what does the communication look like? [00:10:09] Hong: Actors have their own dereferenceable URI, and when you look up that URI, you get all the info about the actor in JSON-LD format. [00:10:22] Jeremy: JSON-LD? [00:10:23] Hong: Yeah, JSON-LD -- linked data. The actor has all the stuff you expect to find on a social account: name, bio, URL to the profile page, profile picture, header image, and more. And there are five main types of actors: Application, Group, Organization, Person, and Service. And you know how sometimes on Mastodon you will see an account marked as a bot? [00:10:58] Jeremy: A bot? [00:10:59] Hong: Yeah, bot. That's what an actor of type Service looks like. And the ActivityPub spec actually lets you create other types beyond these five, but I haven't seen anyone actually do that yet. JSON-LD [00:11:15] Jeremy: And you mentioned that these are all JSON objects, but the LD part, the linked data part, I'm not familiar with. So what's different about the linked data part of the JSON? [00:11:31] Hong: So JSON-LD is a special way of writing RDF, which was originally used in the semantic web. Usually RDF uses a format called triples. [00:11:48] Jeremy: Triples? [00:11:49] Hong: Yeah: subject, predicate, and object. [00:11:55] Jeremy: Subject, predicate, object. Can you give an example of what those three would be? [00:12:00] Hong: For example, "John is a person" is a triple. John is the subject, and "is" is the predicate. [00:12:11] Jeremy: "Is" is the predicate. [00:12:12] Hong: Okay. And person is the object. That's great for showing how things are connected, but it is pretty different from how we usually handle data in REST APIs and stuff. Normally we say a person object has properties like name, DOB, bio, and so on, rather than a bunch of subject-predicate-object triples. That's where JSON-LD comes in: it is designed to look more like the JSON we are used to working with, while still being able to represent RDF graphs. An RDF graph is an ontology. It's a way to represent factual data, but it's quite different from how we represent data in a relational database. It's a bunch of triples: each subject and object is a node, and predicates connect these nodes. Semantic Web [00:13:30] Jeremy: You mentioned the Semantic web. What does that mean? What is the semantic web? [00:13:35] Hong: It's a way to represent the web in a structured, machine-readable way, so that you can scan the data on the web using scrapers or crawlers. [00:13:52] Jeremy: Scrapers -- or what was the second one? Crawling. [00:13:59] Hong: Yeah.
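As a rough illustration of the triples and actor documents Hong describes here, consider this TypeScript sketch. All names and URLs are hypothetical, and the fields shown are a simplified subset of a real actor document:

    // The triple "John is a person", as subject / predicate / object:
    //   subject:   https://example.com/users/john
    //   predicate: rdf:type
    //   object:    Person (from the ActivityStreams vocabulary)
    //
    // The same fact inside a simplified JSON-LD actor document, shaped
    // like what dereferencing the actor URI might return:
    const actor = {
      "@context": "https://www.w3.org/ns/activitystreams",
      id: "https://example.com/users/john", // dereferenceable actor URI
      type: "Person",                       // one of the five actor types
      name: "John",
      summary: "Bio goes here",
      inbox: "https://example.com/users/john/inbox",
    };

Under a proper JSON-LD reading, each property expands to a full predicate URI, which is how this familiar-looking JSON stays equivalent to the triples above.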
Then you can have graph data of the web, and you can query information about things from the data. [00:14:14] Jeremy: So is the web as it exists now, is that the Semantic web, or is it something different? [00:14:24] Hong: I think it is partially the semantic web. You have various metadata in your HTML. For example, there are several specifications for the semantic web, like OpenGraph metadata. [00:14:32] Jeremy: Cause when I think about OpenGraph, I think about the metadata on a webpage that tells other applications or websites that if you link to this page: show this image or show this title and description. You're saying that specifically, you consider that part of the semantic web? [00:15:05] Hong: That's the semantic web. To make your website part of the semantic web, your website should provide structured data, and other people can make scrapers to scan the structured data from your website. There are a bunch of attributes and tags in HTML to represent metadata. For example, you have the relation attribute rel, so if you have a link with rel=me to another of your social profiles, then other people can tell the two web pages represent the same person. [00:16:10] Jeremy: Oh, I see. So you could have more than one website. Maybe one is your blog and maybe one is about your favorite birds or something like that. But you could put a rel tag with information about you as a person, so that someone who scrapes both websites could look at that tag and see that both of these websites are by Hong, by this person. JSON-LD is difficult to implement and not used as intended [00:16:43] Hong: Yeah. I think JSON-LD is designed for the semantic web, but in reality, most ActivityPub implementations are not aware of the semantic web. [00:17:01] Jeremy: The choice of JSON Linked Data, the JSON-LD, by the people who made the specification -- they had this idea that things that implemented ActivityPub would be a part of this semantic web, but the actual implementations, a Mastodon or a Pixelfed, use JSON-LD because it's part of the specification, and the way they use it, it ends up not really being a part of this semantic web. [00:17:34] Hong: Yeah, that's exactly it. [00:17:37] Jeremy: You've mentioned that implementing it is difficult. What makes implementing JSON-LD particularly hard? [00:17:48] Hong: JSON-LD is quite complex, which is why a lot of programming languages don't even have JSON-LD implementations, and it's pretty slow compared to just working with regular JSON. So what happens is a lot of ActivityPub implementations just treat JSON-LD like it is regular JSON, without using a proper JSON-LD processor. You can do that, but it creates a source of headaches. In JSON-LD there are weird equivalences: if a property is missing or if it's an empty array, that means the same thing. Or if a property has one value versus an array with just that one value in it -- same thing. So when you are writing code to parse JSON-LD, you've got to keep checking if something's an array, how long it is, and all that, which is super easy to mess up. It's not just reading JSON-LD that's tricky; creating it is just as bad. Like, you might forget to include the right context metadata for a vocabulary and end up with a JSON-LD document that's either invalid or means something totally different from what you wanted. Even the big ActivityPub implementations mess this up pretty often. With Fedify we've got a JSON-LD processor built in, and we keep running into issues where major ActivityPub implementations create invalid JSON-LD.
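One practical consequence of the equivalences Hong mentions (missing property vs. empty array, single value vs. one-element array) is that consumers normalize everything before use. A minimal TypeScript sketch of the kind of helper implementations end up writing -- this is illustrative, not Fedify's actual API:

    // Normalize a JSON-LD-ish value: treat a missing property, a single
    // value, and an array of values uniformly.
    function asArray<T>(value: T | T[] | undefined | null): T[] {
      if (value === undefined || value === null) return [];
      return Array.isArray(value) ? value : [value];
    }

    // "to" may be absent, a single URI, or an array of URIs; all three
    // cases are handled the same way after normalization.
    const activity: { to?: string | string[] } = {
      to: "https://example.com/users/john",
    };
    const recipients = asArray(activity.to); // ["https://example.com/users/john"]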
We've had to create workarounds for all of them, but it's not pretty and causes kind of a mess. [00:19:52] Jeremy: Even though there is a specification for JSON-LD, it sounds like the implementers don't necessarily follow it. So you are kind of parsing JSON-LD, but not really. You're parsing something that looks like JSON-LD, but isn't quite it. [00:20:12] Hong: Yes, that's right. [00:20:14] Jeremy: And is that true in the biggest implementations? Mastodon, for example -- are there things that it sends in its activities that aren't valid JSON-LD? [00:20:26] Hong: Those implementations that had bad JSON-LD tend to fix it as soon as possible, but regressions happen quite often. Yeah. [00:20:45] Jeremy: Even within Mastodon, which is probably one of the largest implementers of ActivityPub, there are cases where it's not valid JSON-LD and somebody fixes it. But then later on there are other messages or other activities that were valid, but aren't valid anymore. And so it's this back and forth of fixing them and causing new issues, it sounds... [00:21:15] Hong: Yeah. Yeah. Right. [00:21:17] Jeremy: Yeah. That sounds very difficult to deal with. How instances communicate (Inbox) [00:21:20] Jeremy: We've been talking about how the messages themselves are this special format of JSON that's very particular. But how do these instances communicate with one another? [00:21:32] Hong: Most of the time, it all starts with a follow. Like, when John follows Alice, Alice adds both John and John's inbox URI to her followers list, and after John follows Alice, whenever Alice posts something new, that activity gets sent to John's inbox behind the scenes. This is just one HTTP POST request. Even though ActivityPub is built on HTTP, it doesn't really care about the HTTP response beyond whether it worked or not. If you want to reply to an activity, you need to figure out the sender's inbox URI and send your reply activity there. [00:22:27] Jeremy: If we define all the terms: there's the actor, which is the person; each actor can send different activities; and those activities are in the form of JSON linked data. [00:22:40] Hong: Yeah. [00:22:42] Jeremy: And everybody has an inbox. And an inbox is an HTTP URL that other servers POST to. [00:22:50] Hong: Right. [00:22:52] Jeremy: And so when you think about that -- you had mentioned that if you have a list of followers, let's say you have a hundred followers -- would that mean that you have the URLs to all hundred of those followers' inboxes, and that you would send one HTTP POST to each inbox every time you had a new message? [00:23:16] Hong: Pretty much all ActivityPub implementations have a thing called a shared inbox. It's exactly what it sounds like: one inbox that all actors on a server share. Private stuff like DMs doesn't go there; it is just for public posts and the like. [00:23:36] Jeremy: I think we haven't really talked about the fact that when you have multiple users, usually they're on a server, right, that somebody chooses. So you could have tens of thousands -- I don't know how many people can fit on the same server. But rather than you having to post to each user individually, you can post to the shared inbox on this server. So let's say, of your 100 followers, 50 of them are on the same server, and you have a new post: you only need to post to the shared inbox once. [00:24:16] Hong: Yes, that's right. [00:24:18] Jeremy: And in that message you would, I assume, have links to each of the profiles or actors that you wanted to send that message to.
[00:24:30] Hong: Yeah. Scaling challenges [00:24:31] Jeremy: Something that I've seen in the past is there are people who have challenges with scaling their Mastodon instance or their implementations of ActivityPub as the number of followers grows. I've seen a post about Ghost, one of the companies you work with, mentioning that they've had challenges there. What are the challenges there, and how do you think those can be resolved? [00:25:04] Hong: To put this in context, when Ghost mentioned the scaling, they were not using a message queue yet. I'm pretty sure using a message queue would help a lot with their scaling problems. That said, it is definitely true that a lot of ActivityPub software has trouble with scaling right now. I think part of the problem is that everyone's using this purely event-driven approach to sending activities around. One of the big issues is that when delivery fails, it's the sender who has to retry, not the receiver. Plus, there's all this overhead because the sender has to authenticate itself with HTTP Signatures every time. Actually, the ActivityPub spec suggests using polling too, so I'd love to see more ActivityPub software try using both approaches together. [00:26:16] Jeremy: You mean the followers would poll who they're following, instead of the person posting the messages having to send their posts to everyone's inboxes. [00:26:29] Hong: Yeah. [00:26:29] Jeremy: I see. So that's a part of the ActivityPub specification, but not implemented in a lot of ActivityPub implementations. And so it sounds like maybe that puts a lot of burden on the servers that have people with a lot of followers, because they have to post to every single follower's server, and maybe the server is slow or they can't reach it. And like you said, they have to just keep trying and trying. There could be a lot of challenges there. [00:27:09] Hong: Right. Account migration [00:27:10] Jeremy: We've talked a little bit about the fact that each person, each actor, is hosted by a server, and those servers can host multiple actors. But if you want to move to another server, either because your server is shutting down or you just would like to change servers, what are some of the challenges there? [00:27:38] Hong: ActivityPub and the Fediverse already have a specification for account moves. It's called FEP-7628 Move Actor. The first thing you need to do when moving an account is prove that both the old and new accounts belong to the same person. You do this by adding the old account's URI to the new account's alsoKnownAs property. And then the old account tells all the other instances it's moving by sending out a Move activity. When a server gets this Move activity, it checks that both accounts really do belong to the same person, and then it makes all the accounts that were following the old account start following the new one instead. That's how the new account gets to keep all the old account's followers. Pretty much all the major ActivityPub software has this feature built in -- Mastodon, Misskey, you name it. [00:29:04] Jeremy: This is very similar to posting, where when you execute a move, the server that originally hosted that actor needs to somehow tell every single other server that was following that account that you've moved. And so if there are any issues with communicating with one of those servers, or you miss one, then it just won't recognize that you've moved. You have to make sure that you talk to every single server.
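A rough sketch in TypeScript of the migration flow just described, with hypothetical URLs: the Move activity itself, and its delivery as a single HTTP POST to a follower's inbox (real deliveries are also signed with HTTP Signatures, which is omitted here):

    // A simplified Move activity: the old account announces its new home.
    // The new account must already list the old URI in its alsoKnownAs.
    const move = {
      "@context": "https://www.w3.org/ns/activitystreams",
      type: "Move",
      actor: "https://old.example/users/john",
      object: "https://old.example/users/john",
      target: "https://new.example/users/john",
    };

    // Delivery is one HTTP POST per follower (or shared) inbox.
    await fetch("https://follower.example/inbox", {
      method: "POST",
      headers: { "Content-Type": "application/activity+json" },
      body: JSON.stringify(move),
    });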
[00:29:36] Hong: That's right. [00:29:38] Jeremy: I could see how that could be a difficult problem sometimes if you have a lot of followers. [00:29:45] Hong: Yeah. Fedify [00:29:46] Jeremy: You've created a TypeScript library, Fedify, for building ActivityPub-powered applications. What was the reason you decided to create Fedify? [00:29:58] Hong: Fedify is an ActivityPub server framework I built for TypeScript. It basically takes away a lot of the headaches you'd get trying to implement an ActivityPub server from scratch. The whole thing started because I wanted to build Hollo -- a single-user microblogging platform I built. But when I tried to implement ActivityPub from the ground up, it was kind of a nightmare. Imagine trying to write a CGI program in Perl or C back in the late nineties, where you are manually printing HTTP headers and HTML. There just wasn't any good abstraction layer to build on. There were already some libraries and frameworks for ActivityPub out there, but none of them really hit the sweet spot I was looking for. They were either too high-level and rigid -- like you could only build a Mastodon clone -- or they barely did anything at all. Or they were written in languages I didn't really know. Ghost and Fedify [00:31:24] Jeremy: I saw that you are doing some work with Ghost. How is Ghost using Fedify? [00:31:30] Hong: Ghost is an open source publishing platform. They have put some money into Fedify, which is why I get to work on it full time now. Their ActivityPub feature is still in private beta, but it should be available to everyone pretty soon. We work together to improve Fedify. Basically, they are a user of Fedify: they report bugs and request new features, and then I fix or implement them first. [00:32:16] Jeremy: Ghost, to my understanding, is a blogging platform and a newsletter platform. So what does it mean for them to implement ActivityPub? What would somebody using Mastodon, for example, get when they follow somebody using Ghost? [00:32:38] Hong: Ghost will have a fediverse handle for each blog. If you follow one from your Mastodon or something similar, then when a new post is published, that post will show up in your timeline in Mastodon, and you can like it or share it. And in the dashboard of Ghost, you can see who liked or shared their posts, and so on. It is like how Mastodon works, but in Ghost. [00:33:26] Jeremy: I see. So if you are writing a Ghost blog and somebody follows your blog from Mastodon -- sort of like we were talking about earlier -- they can like your post, and on the blog itself you could show, oh, I have 200 likes. And those aren't necessarily people who were on your Ghost website; they could be people that were liking your post from Mastodon. [00:33:58] Hong: Yes. Misskey / Forkey development in Asia [00:34:00] Jeremy: Something you mentioned at the beginning was there is a community of developers in Asia making forks of, I believe, Mastodon, right? [00:34:13] Hong: Yeah. [00:34:14] Jeremy: Do you have experience working in that development community? What's different about it compared to the more Western-centric community? [00:34:24] Hong: They are very similar in most ways. The key difference is language, of course. They communicate in Japanese primarily. They also accept pull requests in English, but there are tons of comments in Japanese in their code, so you need to translate them into English or your first language to understand what the code does. So I think that makes a barrier for Western developers.
In fact, many Western developers who contribute to Misskey or the forkeys are able to speak a little Japanese. And many of the developers of Misskey and the forkeys are kind of otaku. [00:35:31] Jeremy: Oh, otaku, okay. [00:35:33] Hong: It's not a big deal, but you can see the difference at a glance. [00:35:41] Jeremy: Yeah. You mentioned one of the things that I believe Misskey implemented was the emoji reactions, and maybe one of the reasons they wanted that was so that they could react to each other's posts with, you know, anime pictures or things like that. [00:35:58] Hong: Yeah, that's right. [00:36:01] Jeremy: You've mentioned Misskey and forkeys. So is Misskey a fork of Mastodon, and then is a forkey a fork of Misskey? [00:36:10] Hong: No, Misskey is not a fork of Mastodon. It is built from scratch; it's its own implementation. And forkeys are forks of Misskey. [00:36:22] Jeremy: Oh, I see. But both of those are primarily built by Japanese developers. [00:36:30] Hong: Yes. Whereas Mastodon is written in Ruby -- Ruby on Rails -- Misskey is built in TypeScript. [00:36:40] Jeremy: And because of ActivityPub -- they all implement it -- you can communicate with people between Mastodon and Misskey, because they all understand the same activities. [00:36:56] Hong: Yes. Backwards compatible activity implementations [00:36:57] Jeremy: You did mention, since there are extensions -- like Misskey has the emoji reactions -- when there is an activity that an implementation doesn't support, what happens between the two servers? Do you send it to a server's inbox and then the server just doesn't do anything with it? [00:37:16] Hong: Some implementers consider backwards compatibility, so they design it to work with other implementations that don't support that activity. For example, Misskey uses the Like activity for emoji reactions. So if you put an emoji reaction on a Mastodon post, then in Mastodon you get one like. It's intended behavior by the Misskey developers that reactions fall back to normal likes. But sometimes ActivityPub implementers introduce entirely new activity types. For example, Pleroma introduced EmojiReact. And if you put an emoji reaction on a Mastodon post from Pleroma, in Mastodon you see nothing, because Mastodon just ignores it. [00:38:37] Jeremy: If I understand correctly, both Misskey and Pleroma are independent implementations of ActivityPub, but with Misskey, their messages are backwards compatible: if you don't understand the emoji reaction, it's embedded inside of a Like message. Whereas with Pleroma, they send an activity that Mastodon can't understand at all, so it just doesn't do anything. [00:39:11] Hong: Yes, right. But Misskey also understands the EmojiReact activity, so between Pleroma and Misskey they can exchange emoji reactions with no problem. [00:39:27] Jeremy: Oh, I see. So they both understand that activity. They both implement it the same way, but then when Misskey communicates with Mastodon, or with an instance that it knows doesn't understand it, it sends something different. [00:39:45] Hong: Yeah, that's right. [00:39:47] Jeremy: The servers -- can they query one another to know which activities they support? [00:39:53] Hong: Usually ActivityPub implementations also implement the NodeInfo specification. It's like a user agent in the Fediverse: implementations tell other instances whether they are Mastodon or something else. You can query the type of server.
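A sketch of what that NodeInfo query looks like in practice, assuming a hypothetical domain: a well-known endpoint advertises links to the actual NodeInfo document, which names the server software.

    // Step 1: fetch the well-known discovery document.
    const discovery = await fetch(
      "https://mastodon.example/.well-known/nodeinfo",
    ).then((res) => res.json());

    // Step 2: follow one of the advertised links to the NodeInfo itself.
    const nodeInfo = await fetch(discovery.links[0].href).then((res) =>
      res.json(),
    );

    // The software field is how servers tell Mastodon from Misskey, etc.
    console.log(nodeInfo.software.name, nodeInfo.software.version);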
[00:40:20] Jeremy: Okay, so within ActivityPub -- is the term node the word they use for each server? [00:40:31] Hong: Yes. Right. [00:40:32] Jeremy: You have the nodes, which can have any number of actors, and the servers send activities to one another, to each other's inboxes. And so that's the way they all communicate. [00:40:49] Hong: Yeah. Building an ActivityPub implementation [00:40:50] Jeremy: You've implemented ActivityPub with Fedify because you found there weren't good enough implementations or resources already. Did you implement it based off of the specification, or did you look at existing implementations while you were building your implementation? [00:41:12] Hong: To be honest, instead of just diving into the spec, I usually start by looking at actual ActivityPub software code first. The ActivityPub spec is so vague that you can't really build something just from reading it. So when we talk about ActivityPub, we are actually talking about a whole bunch of other technical standards too: WebFinger, HTTP Signatures, and more. So you need to understand all of these as well. [00:41:47] Jeremy: With the specification alone, you were saying it's too vague. And so what ends up being -- I'm not sure if it's right to call it a spec, but -- looking at the implementations that people have already made, that collectively becomes the spec, because trying to follow the spec just by itself is maybe too difficult. [00:42:12] Hong: Yes. [00:42:14] Jeremy: Maybe that brings up the issues you were talking about before, where you have specifications like JSON-LD that are so complicated that even the biggest implementations aren't quite following them exactly. [00:42:28] Hong: Yeah. [00:42:29] Jeremy: If somebody wanted to get started with understanding a little bit more about ActivityPub, or building something with it, where would you recommend they start? [00:42:44] Hong: I recommend digging into a lot of code from actual implementations first: Mastodon, Misskey, Akkoma, and so on. There are some really cool tools that have been so helpful. For example, ActivityPub Academy is this awesome Mastodon server for debugging ActivityPub. It makes it super easy to create a temporary account and see what activities are going back and forth. There is also BrowserPub. BrowserPub is this neat tool for looking up and browsing ActivityPub objects. It's really handy when you want to see how different ActivityPub software handles various features. I also recommend using Fedify. I've got to mention the fedify CLI, which comes with some really useful tools. [00:43:46] Jeremy: So if someone uses Fedify, they're writing an application in TypeScript, and it sounds like they have to know the high-level concepts: they have to know what the different activities are, what is inside of an actor. But the actual implementation of how do I create and parse JSON linked data, those kinds of things, are taken care of by the library. [00:44:13] Hong: Yes, right. [00:44:16] Jeremy: So in some ways it seems like it might be good to, like you were saying, use the tools you mentioned to create a test Mastodon account, look at the messages being sent back and forth, and then when you're trying to implement it, starting with something like Fedify might be good, because then you can really just focus on the concepts and not worry so much about the implementation details. [00:44:43] Hong: Yes, that's right.
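Under the hood, tools like BrowserPub boil down to content negotiation: request an object's URI with an ActivityPub media type and the server returns the JSON-LD instead of an HTML profile page. A minimal sketch with a hypothetical URL:

    // Dereference an actor (or any ActivityPub object) as JSON-LD.
    const res = await fetch("https://mastodon.example/users/john", {
      headers: { Accept: "application/activity+json" },
    });
    const object = await res.json();
    console.log(object.type, object.inbox); // e.g. "Person" and its inbox URI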
[00:44:45] Jeremy: Is there anything else you wanted to mention or thought we should have talked about? [00:44:52] Hong: Mm. I want to talk about a lot of stuff about ActivityPub, but it's difficult for me to speak in English, so it's a shame to only talk about it a little. [00:45:15] Jeremy: We need everybody to learn Korean, right? [00:45:23] Hong: Yes, please. (laughs) [00:45:23] Jeremy: Yeah. Well, I want to thank you for taking the time. I know it must have been really challenging to give an interview in, you know, a language that's not your native one. So thank you for spending the time to talk with me. [00:45:38] Hong: Thank you for having me.

Software Engineering Radio - The Podcast for Professional Software Developers
SE Radio 657: Hong Minhee on ActivityPub and the Fediverse

Software Engineering Radio - The Podcast for Professional Software Developers

Play Episode Listen Later Feb 27, 2025 40:09


Hong Minhee, an open source developer and creator of the Fedify ActivityPub library, discusses the ActivityPub protocol and the fediverse with SE Radio's Jeremy Jung. They explore ActivityPub use cases, including microblogging applications such as Mastodon and Misskey, as well as activities built into the specification such as Like, Follow, and Accept. They also discuss extending the specification to include properties like Discoverable and Suspended, how different implementations communicate when they don't implement the same extensions, and the use of JSON-LD and why it is challenging to implement. Finally, they consider the HTTP-based inbox communication model, difficulties with scaling when using a push rather than a pull model, account migration, and resources for implementing the ActivityPub specification. Brought to you by IEEE Computer Society and IEEE Software magazine.

HearSay
Why Accessibility is Never One and Done with Janina Sajka

HearSay

Play Episode Listen Later Feb 20, 2025 46:44


Many businesses believe accessibility is a one-time fix — but that couldn't be further from the truth. Janina Sajka, an active contributor at the W3C, Amazon, and the American Foundation for the Blind, explores common misconceptions around accessibility. She also discusses the importance of ongoing accessibility audits and how the new W3C Accessibility Maturity Model can help organizations better track their accessibility progress. HearSay is produced by Sojin Rank, Mike Barton, Mariella Paulino, and Missy Jensen. Edited by Alex Dorrier. View Transcript: https://aeurl.xyz/hearsay-podcast-with-janina-sajka HearSay is a podcast focusing on the advocates, heroes, and leaders making the web more accessible. We're interviewing these change makers to hear what they have to say, to set the record straight, and offer their perspectives on how we can all work to make the web accessible to all.

Software Process and Measurement Cast
Accessibility Is An Ethical Issue, A Conversation With Mike Paciello, SPaMCAST 851

Software Process and Measurement Cast

Play Episode Listen Later Feb 16, 2025 37:45


SPaMCAST 851 will feature our interview with Mike Paciello. Accessibility, which should be a basic human right, has always been a struggle, and now seems even farther from reality. Mike and I talk about the definition of accessibility, the issues we face, and why this is an ethical issue, not just a compliance problem. Mike Paciello is the Chief Accessibility Officer at AudioEye, Inc., where he drives advancements in digital accessibility. He previously founded WebABLE/WebABLE.TV, delivering insights on disability and accessibility technology, and authored the pioneering book “Web Accessibility for People with Disabilities.” Recognized by President Bill Clinton in 1997 for his role in the W3C's Web Accessibility Initiative, Mike has advised the US Access Board and other federal agencies since 1992. An international leader in accessibility and usability, he also founded The Paciello Group (TPG), a leading software accessibility consultancy acquired by Vispero in 2017. Contact Information: Email: LinkedIn: https://www.linkedin.com/in/mike-paciello-1231741/ Mastering Work Intake sponsors SPaMCAST! Look at your to-do list and tell me your work intake process is perfectly balanced. Whether you are reacting to your work or personal backlog, it's time to learn to take control! Buy a copy of Mastering Work Intake (your work-life balance will improve). Amazon (US) — JRoss — Do you want to test the water before spending part of your hard-earned paycheck? and I offer free 30-minute “office hours” sessions. In these sessions, we'll help identify and create a plan to tackle one of your work intake challenges. Book time with us here: Re-read Saturday News In Chapter 4 of , Sen states that value is generated based on the “freedoms a person enjoys that allows them to lead the life they have reason to value.” This leads to the postulate that “poverty must be seen as the deprivation of basic capabilities rather than merely as lowness of incomes.” The idea of observing and understanding poverty as a function of capability deprivation refocuses the reader on the process rather than just the outcome. As we noted in the last chapter, the journey matters. The argument is that without capabilities (i.e., access to education, health care, and transportation), low income is the outcome. While there is covariance (low income reduces access to capabilities), the relationship is obvious. Couple that with the point that poverty is relative to context, and the data gets harder to compare region to region. Finally, without understanding which capabilities are in deficit, it will be difficult to make policy decisions. Previous installments of : Week 1: Week 2: Week 3: Week 5: Week 6: Next SPaMCAST SPaMCAST 852 will feature an essay on scaling attention. Unlike many things, attention doesn't scale, no matter how hard you try. So why do people try so hard? We will also have a visit from the Evolutionary Agilist, .

字谈字畅
#249: Flanked on Both Sides: Double-Sided Layout (左右夹击双面装)

字谈字畅

Play Episode Listen Later Feb 11, 2025 82:21


Today's topic comes from a listener's letter. Centering on the "double-sided" (双面装) technique in Chinese typesetting, we discuss how proper-name marks, interlinear book-title marks, and other interlinear punctuation are set, as well as how they can be applied and implemented with today's typesetting technology. Reference links: Imposition, a printing term, rendered as 拼大版 in Chinese national standard GB/T 9851.9; The proper-name mark (专名号), one of the Chinese punctuation marks; 字谈字畅 146: the book-title mark turns out to be a child of the '90s; The sections of the W3C Requirements for Chinese Text Layout (中文排版需求) related to 双面装; The Kaiming Bookstore edition of 《星空的巡禮》; The Commercial Press edition of 《國史大綱》; CTeX; 字谈字畅 147: Kerning Panic · Type Strings (10), a grandmaster programmer's typesetting solution; Li Xiangchen, in his early years, added an interlinear-punctuation feature to 聚珍仿宋 (later renamed 文悦古体仿宋), a typeface he revived and developed; The CSS text-decoration-skip property can be configured so underlines don't run together -- this feature is still in testing and not yet stable; The CSS text-underline-position property configures underline position; The CSS text-emphasis-position property configures the position of emphasis marks; The Type has compiled and published scans of parts of 《艺文印刷月刊》 (including the typesetting details discussed in this episode). Hosts: Eric, typography researcher, translator, and executive editor of The Type; 蒸鱼 (Zhengyu), designer and editor at The Type. We welcome your comments and feedback; write to us at podcast@thetype.com. If you enjoy this episode, you're also welcome to donate via Alipay: hello@thetype.com.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

If you're in SF, join us tomorrow for a fun meetup at CodeGen Night! If you're in NYC, join us for AI Engineer Summit! The Agent Engineering track is now sold out, but 25 tickets remain for AI Leadership and 5 tickets for the workshops. You can see the full schedule of speakers and workshops at https://ai.engineer! It's exceedingly hard to introduce someone like Bret Taylor. We could recite his Wikipedia page, or his extensive work history through Silicon Valley's greatest companies, but everyone else already does that. As a podcast by AI engineers for AI engineers, we had the opportunity to do something a little different. We wanted to dig into what Bret sees from his vantage point at the top of our industry for the last 2 decades, and how that explains the rise of the AI Architect at Sierra, the leading conversational AI/CX platform. “Across our customer base, we are seeing a new role emerge - the role of the AI architect. These leaders are responsible for helping define, manage and evolve their company's AI agent over time. They come from a variety of both technical and business backgrounds, and we think that every company will have one or many AI architects managing their AI agent and related experience.” In our conversation, Bret Taylor confirms the Paul Buchheit legend that he rewrote Google Maps in a weekend, armed with only the help of a then-nascent Google Closure Compiler and no other modern tooling. But what we find remarkable is that he was the PM of Maps, not an engineer, though of course he still identifies as one. We find this theme recurring throughout Bret's career and worldview. We think it is plain as day that AI leadership will have to be hands-on and technical, especially when the ground is shifting as quickly as it is today: “There's a lot of power in combining product and engineering into as few people as possible… few great things have been created by committee.” “If engineering is an order taking organization for product you can sometimes make meaningful things, but rarely will you create extremely well crafted breakthrough products. Those tend to be small teams who deeply understand the customer need that they're solving, who have a maniacal focus on outcomes.” “And I think the reason why is if you look at like software as a service five years ago, maybe you can have a separation of product and engineering because most software as a service created five years ago. I wouldn't say there's like a lot of technological breakthroughs required for most business applications. And if you're making expense reporting software or whatever, it's useful… You kind of know how databases work, how to build auto scaling with your AWS cluster, whatever, you know, it's just, you're just applying best practices to yet another problem.” “When you have areas like the early days of mobile development or the early days of interactive web applications, which I think Google Maps and Gmail represent, or now AI agents, you're in this constant conversation with what the requirements of your customers and stakeholders are and all the different people interacting with it and the capabilities of the technology. And it's almost impossible to specify the requirements of a product when you're not sure of the limitations of the technology itself.” This is the first time the difference between technical leadership for “normal” software and for “AI” software was articulated this clearly for us, and we'll be thinking a lot about this going forward.
We left a lot of nuggets in the conversation, so we hope you'll just dive in with us (and thank Bret for joining the pod!)

Timestamps

* 00:00:02 Introductions and Bret Taylor's background
* 00:01:23 Bret's experience at Stanford and the dot-com era
* 00:04:04 The story of rewriting Google Maps backend
* 00:11:06 Early days of interactive web applications at Google
* 00:15:26 Discussion on product management and engineering roles
* 00:21:00 AI and the future of software development
* 00:26:42 Bret's approach to identifying customer needs and building AI companies
* 00:32:09 The evolution of business models in the AI era
* 00:41:00 The future of programming languages and software development
* 00:49:38 Challenges in precisely communicating human intent to machines
* 00:56:44 Discussion on Artificial General Intelligence (AGI) and its impact
* 01:08:51 The future of agent-to-agent communication
* 01:14:03 Bret's involvement in the OpenAI leadership crisis
* 01:22:11 OpenAI's relationship with Microsoft
* 01:23:23 OpenAI's mission and priorities
* 01:27:40 Bret's guiding principles for career choices
* 01:29:12 Brief discussion on pasta-making
* 01:30:47 How Bret keeps up with AI developments
* 01:32:15 Exciting research directions in AI
* 01:35:19 Closing remarks and hiring at Sierra

Transcript

[00:02:05] Introduction and Guest Welcome [00:02:05] Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host swyx, founder of smol.ai. [00:02:17] swyx: Hey, and today we're super excited to have Bret Taylor join us. Welcome. Thanks for having me. It's a little unreal to have you in the studio. [00:02:25] swyx: I've read about you so much over the years, like even before OpenAI, effectively. I mean, I use Google Maps to get here. So, like, thank you for everything that you've done. Like your storied history, you know, I think people can find out what your greatest hits have been. [00:02:40] Bret Taylor's Early Career and Education [00:02:40] swyx: How do you usually like to introduce yourself when, you know, you summarize your career? Like, how do you look at yourself? [00:02:47] Bret: Yeah, it's a great question. You know, before we went on the mics here, we were talking about the audience for this podcast being more engineering. And I do think, depending on the audience, I'll introduce myself differently, because I've had a lot of [00:03:00] corporate and board roles. I probably self-identify as an engineer more than anything else, though. [00:03:04] Bret: So even when I was at Salesforce, I was coding on the weekends. So I think of myself as an engineer, and then all the roles that I do in my career sort of start with that, just because I do feel like engineering is sort of a mindset and how I approach most of my life. So I'm an engineer first, and that's how I describe myself. [00:03:24] swyx: You majored in computer science, like 1998. And I was in high school... [00:03:28] Bret: Actually, my college degree was oh-two undergrad, oh-three masters. Right. That old. [00:03:33] swyx: Yeah. I mean, no, I was going like 1998 to 2003, but engineering wasn't a thing back then. Like, we didn't have the title of senior engineer, you know; it was just, you were a programmer, you were a developer, maybe. What was it like at Stanford? Like, what was that feeling like? You know, were you feeling like on the cusp of a great computer revolution?
Or was it just like a niche, you know, interest at the time? [00:03:57] Stanford and the Dot-Com Bubble [00:03:57] Bret: Well, I was at Stanford, as you said, from 1998 to [00:04:00] 2002. [00:04:02] Bret: 1998 was near the peak of the dot-com bubble. So this is back in the day where most people, they're coding in the computer lab, just because there were these Sun Microsystems Unix boxes there that most of us had to do our assignments on. And every single day there was a .com, like, buying pizza for everybody. [00:04:20] Bret: I got free food, like, my first two years of university, and then the dot-com bubble burst in the middle of my college career. And so by the end there was like tumbleweed going to the job fair, you know. It was hard to describe unless you were there at the time, the level of hype -- being a computer science major at Stanford was like a thousand opportunities. [00:04:45] Bret: And then, and then when I left, it was like Microsoft, IBM. [00:04:49] Joining Google and Early Projects [00:04:49] Bret: And then the two startups that I applied to were VMware and Google. And I ended up going to Google in large part because a woman named Marissa Mayer, who had been a teaching [00:05:00] assistant when I was what was called a section leader, which was like a junior teaching assistant, kind of, for one of the big intro [00:05:05] classes. She had gone there, and she was recruiting me, and I knew her, and it sort of felt safe, you know. Like, I don't know that I thought about it much, but it turned out to be a real blessing. I realized, like, you know, you always want to think you'd pick Google if given the option, but no one knew at the time. [00:05:20] Bret: And I wonder, if I'd graduated in like 1999, whether I'd have been like, mom, I just got a job at pets.com. It's good. But you know, at the end I just didn't have many options. So I was like, do I want to go make kernel software at VMware? Do I want to go build search at Google? And I chose Google. 50-50 ball. [00:05:36] Bret: It was really a 50-50 ball. So I feel very fortunate in retrospect that the economy collapsed, because in some ways it forced me into one of the greatest companies of all time, but I kind of lucked into it, I think. [00:05:47] The Google Maps Rewrite Story [00:05:47] Alessio: So the famous story about Google is that you rewrote the Google Maps backend in one week after the maps acquisition. What was the story there? [00:05:57] Alessio: Is it actually true? Is it [00:06:00] being glorified? Like, how did that come to be? And is there any detail that maybe Paul hasn't shared before? [00:06:06] Bret: It's largely true, but I'll give the color commentary. So it was actually the front end, not the back end, but it turns out for Google Maps, the front end was sort of the hard part, just because Google Maps was [00:06:17] largely the first-ish kind of really interactive web application -- say first-ish. I think Gmail certainly was, though with Gmail, probably a lot of people then who weren't engineers didn't appreciate its level of interactivity. It was just fast. But Google Maps, because you could drag the map and it was sort of graphical, really put it in the mainstream, I think. [00:06:41] swyx: Was it MapQuest back then that had the arrows up and down? [00:06:44] Bret: It was up and down arrows.
Each map was a single image, and you just clicked left and then waited a few seconds for the new map to load. It was really small too, because generating a big image was kind of expensive on the computers of that day. [00:06:57] Bret: So Google Maps was truly innovative in that [00:07:00] regard. The story on it: there was a small company called Where 2 Technologies, started by two Danish brothers, Lars and Jens Rasmussen, who are two of my closest friends now. They had made a Windows app called Expedition, which had beautiful maps. Even in 2000, [00:07:18] or whenever we acquired, or sort of acquired, their company, Windows software was not particularly fashionable, but they were really passionate about mapping, and we had made a local search product that was kind of middling in terms of popularity, sort of like a yellow-pages search product. So we wanted to really go into mapping. [00:07:36] Bret: We'd started working on it. Their small team seemed passionate about it. So we're like, come join us. We can build this together. [00:07:42] Technical Challenges and Innovations [00:07:42] Bret: It turned out to be a great blessing that they had built a Windows app, because you're less technically constrained when you're doing native code than you are building in a web browser, particularly back then when there weren't really interactive web apps, and it ended up [00:07:56] changing the level of quality that we [00:08:00] wanted to hit with the app, because we were shooting for something that felt like a native Windows application. So it was really good fortune that, you know, their unusual technical choices turned out to be the greatest blessing. So we spent a lot of time basically saying, how can you make an interactive, draggable map in a web browser? [00:08:18] Bret: How do you progressively load, you know, new map tiles as you're dragging? Even things down in the weeds of the browser: at the time, most browsers, like Internet Explorer, which was dominant at the time, would only load two images at a time from the same domain. So we ended up making our map tile servers have like [00:08:37] forty different subdomains so we could load maps in parallel. Like, lots of hacks. I'm happy to go into as much as... [00:08:44] swyx: HTTP connections and stuff. [00:08:46] Bret: There was just maximum parallelism of two. And so if you had a set of map tiles, like eight of them... so we were just down in the weeds of the browser anyway. [00:08:56] Bret: So it was lots of plumbing. I know a lot more about browsers than [00:09:00] most people, but by the end of it, there was a lot of duct tape on that code. If you've ever done an engineering project where you're not really sure of the path from point A to point B, it's almost like building a house one room at a time. [00:09:14] Bret: There's not a lot of architectural cohesion at the end. And then we acquired a company called Keyhole, which became Google Earth. It was a native Windows app as well -- separate app, great app -- but with that, we got licenses to all this satellite imagery. And so in August of 2005, we added [00:09:33] satellite imagery to Google Maps, which added even more complexity in the code base. And then we decided we wanted to support Safari. There were no mobile phones yet, so Safari was this nascent browser on the Mac.
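A rough TypeScript sketch of the tile-sharding trick Bret describes, with hypothetical hostnames and URL scheme: because browsers of that era capped parallel downloads at two per hostname, spreading tiles across many subdomains raised total parallelism.

    // Map a tile to one of N subdomains. Deriving the shard from the tile
    // coordinates keeps each tile on a stable hostname, so browser caches
    // stay effective across pans.
    function tileUrl(x: number, y: number, zoom: number): string {
      const shards = 40;
      const shard = (x + y) % shards;
      return `https://mt${shard}.example.com/tiles/${zoom}/${x}/${y}.png`;
    }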
And it turns out there's like a lot of decisions behind the scenes, sort of inspired by this windows app, like heavy use of XML and XSLT and all these like.[00:09:54] Bret: Technologies that were like briefly fashionable in the early two thousands and everyone hates now for good [00:10:00] reason. And it turns out that all of the XML functionality and Internet Explorer wasn't supporting Safari. So people are like re implementing like XML parsers. And it was just like this like pile of s**t.[00:10:11] Bret: And I had to say a s**t on your part. Yeah, of[00:10:12] Alessio: course.[00:10:13] Bret: So. It went from this like beautifully elegant application that everyone was proud of to something that probably had hundreds of K of JavaScript, which sounds like nothing. Now we're talking like people have modems, you know, not all modems, but it was a big deal.[00:10:29] Bret: So it was like slow. It took a while to load and just, it wasn't like a great code base. Like everything was fragile. So I just got. Super frustrated by it. And then one weekend I did rewrite all of it. And at the time the word JSON hadn't been coined yet too, just to give you a sense. So it's all XML.[00:10:47] swyx: Yeah.[00:10:47] Bret: So we used what is now you would call JSON, but I just said like, let's use eval so that we can parse the data fast. And, and again, that's, it would literally as JSON, but at the time there was no name for it. So we [00:11:00] just said, let's. Pass on JavaScript from the server and eval it. And then somebody just refactored the whole thing.[00:11:05] Bret: And, and it wasn't like I was some genius. It was just like, you know, if you knew everything you wished you had known at the beginning and I knew all the functionality, cause I was the primary, one of the primary authors of the JavaScript. And I just like, I just drank a lot of coffee and just stayed up all weekend.[00:11:22] Bret: And then I, I guess I developed a bit of reputation and no one knew about this for a long time. And then Paul who created Gmail and I ended up starting a company with him too, after all of this told this on a podcast and now it's large, but it's largely true. I did rewrite it and it, my proudest thing.[00:11:38] Bret: And I think JavaScript people appreciate this. Like the un G zipped bundle size for all of Google maps. When I rewrote, it was 20 K G zipped. It was like much smaller for the entire application. It went down by like 10 X. So. What happened on Google? Google is a pretty mainstream company. And so like our usage is shot up because it turns out like it's faster.[00:11:57] Bret: Just being faster is worth a lot of [00:12:00] percentage points of growth at a scale of Google. So how[00:12:03] swyx: much modern tooling did you have? Like test suites no compilers.[00:12:07] Bret: Actually, that's not true. We did it one thing. So I actually think Google, I, you can. Download it. There's a, Google has a closure compiler, a closure compiler.[00:12:15] Bret: I don't know if anyone still uses it. It's gone. Yeah. Yeah. It's sort of gone out of favor. Yeah. Well, even until recently it was better than most JavaScript minifiers because it was more like it did a lot more renaming of variables and things. Most people use ES build now just cause it's fast and closure compilers built on Java and super slow and stuff like that.[00:12:37] Bret: But, so we did have that, that was it. 
Okay. [00:12:39] The Evolution of Web Applications [00:12:39] Bret: So that was all internal, you know. It was a really interesting time at Google, because there were a lot of teams working on fairly advanced JavaScript when no one else was. So Google Suggest, which Kevin Gibbs was the tech lead for, was the first kind of type-ahead autocomplete, I believe, in a web browser, and now it's just pervasive in search boxes that you sort of [00:13:00] see a type-ahead there. [00:13:01] swyx: I mean, ChatGPT just added it. It's kind of like a round trip. [00:13:03] Bret: Totally. No, it's now pervasive as a UI affordance, but that was like Kevin's 20 percent project. And then Gmail -- Paul, you know, he tells the story better than anyone, but he basically was scratching his own itch. What was really neat about it is email, because it's such a productivity tool, just needed to be faster. [00:13:21] Bret: So, you know, he was scratching his own itch of just making more stuff work on the client side. And then we, because of Lars and Jens sort of setting the bar with this Windows app -- like, we need our maps to be draggable -- ended up not only innovating in terms of having what would be called a single-page application today, but also all the graphical stuff. You know, we were crashing Firefox like it was going out of style, because, you know, when you make a document object model with the idea that it's a document, and then you layer on some JavaScript, and then we're essentially abusing all of this, it just was running into code paths that were not [00:13:56] well trodden, you know, at this time. And so it was [00:14:00] super fun. And, you know, in the building you had compiler people helping minify JavaScript, just practically -- there was a great engineering team. That's why Closure Compiler is so good: it was a person who actually knew about programming languages doing it, not just, you know, writing regular expressions. [00:14:17] Bret: And then the team that is now the Chrome team -- and I don't know this for a fact, but I'm pretty sure -- Google was the main contributor to Firefox for a long time in terms of code. And a lot of browser people were there. So every time we would crash Firefox, we'd walk up two floors and say, like, what the hell is going on here? [00:14:35] Bret: And they would load their browser in a debugger, and we could figure out exactly what was breaking. And you can't change the code, right? Cause it's the browser. It's slow, right? I mean, slow to update. But we could figure out exactly where the bug was and then work around it in our JavaScript. [00:14:52] Bret: So it was just new territory. Like, a super, super fun time, just a lot of great engineers figuring out [00:15:00] new things. And now, you know, this term is no longer in fashion, but the word Ajax, which was asynchronous JavaScript and XML -- cause, I'm telling you, XML; you see the word XML there -- to be fair, the way you made HTTP requests from a client to a server was this [00:15:18] object called XMLHttpRequest, because Microsoft, in making Outlook Web Access back in the day, made this, and it turns out to have nothing to do with XML. It's just a way of making HTTP requests, because XML was the fashionable thing. It was like, that was the way you did it.
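For readers who never wrote pre-fetch code, a minimal sketch of the XMLHttpRequest pattern Bret references, with a hypothetical endpoint -- despite the XML in the name, nothing here requires XML:

    // The classic Ajax request: callback-based, long before fetch().
    const xhr = new XMLHttpRequest();
    xhr.open("GET", "https://example.com/data");
    xhr.onload = () => {
      // Most services of the era ended up sending JSON, not XML.
      const data = JSON.parse(xhr.responseText);
      console.log(data);
    };
    xhr.send();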
But JSON came out of that, you know, and then a lot of the best practices around building JavaScript applications -- this is pre-React. [00:15:44] Bret: I think React was probably the big conceptual step forward that we needed. Even at my first social network after Google, we used a lot of HTML injection, and making real-time updates was still very hand-coded. And it's really neat when you [00:16:00] see conceptual breakthroughs like React, because I just love those things where it's obvious once you see it, but it's so not obvious until you do. [00:16:07] Bret: And actually, well, I'm sure we'll get into AI, but I sort of feel like we'll go through that evolution with AI agents as well. I feel like we're missing a lot of the core abstractions, and I think in 10 years we'll be like, gosh, how'd you make agents before that? You know, but it was kind of like those early days of web applications. [00:16:22] swyx: There are a lot of contenders for the React of AI, but no clear winner yet. I would say one thing I was there for -- I mean, there's so much we can go into there. You just covered so much. [00:16:32] Product Management and Engineering Synergy [00:16:32] swyx: One thing I just observe is that I think the early Google days had this interesting mix of PM and engineer, which I think you are -- you didn't wait for a PM to tell you, this is my PRD, these are my requirements. [00:16:44] Bret: Oh, okay. [00:16:45] Bret: I wasn't technically a software engineer. I mean, [00:16:48] swyx: By title, obviously. Right, right, right. [00:16:51] swyx: It's like a blend. And I feel like these days, product is its own discipline, with its own lore and its own industry, and engineering is its own thing. And there's this process [00:17:00] that happens, and they're kind of separated, but you don't produce as good of a product as if they were the same person. [00:17:06] swyx: And I'm curious, you know, if that sort of resonates in terms of comparing early Google versus modern startups that you see out there. [00:17:16] Bret: I certainly wear a lot of hats, so, you know, I'm sort of biased in this, but I really agree that there's a lot of power in combining product, design, and engineering into as few people as possible, because, you know, few great things have been created by committee. [00:17:33] Bret: If engineering is an order-taking organization for product, you can sometimes make meaningful things, but rarely will you create extremely well-crafted breakthrough products. Those tend to be small teams who deeply understand the customer need that they're solving, who have a maniacal focus on outcomes. [00:17:53] Bret: And I think the reason why is, for some areas -- if you look at software as a service five years ago, maybe you can have a [00:18:00] separation of product and engineering, because for most software as a service created five years ago, I wouldn't say there were a lot of technological breakthroughs required for most, you know, business applications. [00:18:11] Bret: And if you're making expense reporting software or whatever, it's useful. I don't mean to be dismissive of expense reporting software, but you probably just want to understand, like, what are the requirements of the finance department? What are the requirements of an individual filing an expense report? Okay. [00:18:25] Bret: Go implement that. And you kind of know how web applications are implemented. You kind of know how
databases work, how to build auto-scaling with your AWS cluster, whatever. You know, you're just applying best practices to yet another problem. When you have areas like the early days of mobile development, or the early days of interactive web applications, which I think Google Maps and Gmail represent, or now AI agents, you're in this constant conversation with what the requirements of your customers and stakeholders are, and all the different people interacting with it, [00:18:58] Bret: and the capabilities of the [00:19:00] technology. And it's almost impossible to specify the requirements of a product when you're not sure of the limitations of the technology itself. And that's why I use the word conversation. It's not literal -- that's sort of funny, to use that word in the age of conversational AI. [00:19:15] Bret: You're constantly sort of saying, like, ideally you could sprinkle some magic AI pixie dust and solve all the world's problems, but it's not the way it works. And it turns out that actually -- I'll just give an interesting example. [00:19:26] AI Agents and Modern Tooling [00:19:26] Bret: I think most people listening probably use copilots to code, like Cursor or Devin or Microsoft Copilot or whatever. [00:19:34] Bret: Most of those tools are remarkable. I couldn't, you know, imagine development without them now, but they're not autonomous yet. Like, I wouldn't let one just write most code without my interactively inspecting it. We are just somewhere between it's an amazing copilot and it's an autonomous software engineer. [00:19:53] Bret: As a product manager, your aspirations for what the product is are kind of meaningless. [00:20:00] If you're a product person, yeah, of course you'd say it should be autonomous: you should click a button and a program should come out the other side. The requirements are meaningless. What matters is, based on the very nuanced limitations of the technology, what is it capable of? [00:20:14] Bret: And then how do you maximize the leverage it gives a software engineering team, given those very nuanced trade-offs? Coupled with the fact that those nuanced trade-offs are changing more rapidly than any technology in my memory, meaning every few months you'll have new models with new capabilities. [00:20:34] Bret: So how do you construct a product that can absorb those new capabilities as rapidly as possible as well? That requires such a combination of technical depth and understanding the customer that you really need more integration of product, design, and engineering. And so I think that's why, with these big technology waves, startups have a bit of a leg up relative to incumbents, because they [00:21:00] tend to be sort of more self-actualized in terms of just bringing those disciplines closer together. [00:21:06] Bret: And in particular, I think entrepreneurs, the proverbial full-stack engineers, you know, have a leg up as well, because I think most breakthroughs happen when you have someone who can understand those extremely nuanced technical trade-offs, have a vision for a product, and then, in the process of building it, have that, as I said, metaphorical conversation with the technology, right? [00:21:30] Bret: Gosh, I ran into a technical limit that I didn't expect. It's not just changing that feature -- you might need to refactor the whole product based on that. And I think that's particularly important right now.
So I don't, you know, if you, if you're building a big ERP system, probably there's a great reason to have product and engineering separate.[00:21:51] Bret: I think in general, the disciplines are there for a reason. I think when you're dealing with something as nuanced as, like, the technologies, like large language models today, there's a ton of [00:22:00] advantage of having individuals or organizations that integrate the disciplines more formally.[00:22:05] Alessio: That makes a lot of sense.[00:22:06] Alessio: I've run a lot of engineering teams in the past, and I think the product versus engineering tension has always been more about effort than like whether or not the feature is buildable. But I think, yeah, today you see a lot more of like, models actually cannot do that. And I think the most interesting thing is on the startup side, people don't yet know where a lot of the AI value is going to accrue.[00:22:26] Alessio: So you have this rush of people building frameworks, building infrastructure, layered things, but we don't really know the shape of the compute. I'm curious at Sierra, like how you thought about building in house a lot of the tooling for evals or like just, you know, building the agents and all of that,[00:22:41] Alessio: versus how you see some of the startup opportunities that is maybe still out there.[00:22:46] Bret: We build most of our tooling in house at Sierra, not all. It's, we don't, it's not like not invented here syndrome necessarily, though, maybe slightly guilty of that in some ways, but because we're trying to build a platform [00:23:00] that's enduring, you know, we really want to have control over our own destiny.[00:23:03] Bret: And you had made a comment earlier that, like, we're still trying to figure out what the React of agents is, and the jury is still out. I would argue it hasn't been created yet. I don't think the jury is still out, to go use that metaphor. We're sort of in the jQuery era of agents, not the React era.[00:23:19] Bret: And, and that's like a throwback for people listening,[00:23:22] swyx: we shouldn't rush it. You know?[00:23:23] Bret: No, yeah, that's my point is. And so, because we're trying to create an enduring company at Sierra that outlives us, you know, I'm not sure we want to, like, attach our cart to a horse where it's not clear that, like, we've figured it out. And I actually want, as a company, we're trying to enable, just at a high level, and I'll, I'll quickly go back to tech at Sierra, we help consumer brands build customer facing AI agents.[00:23:48] Bret: So everyone from Sonos to ADT home security to Sirius XM, you know, if you call them on the phone, an AI will pick up. You can, you know, chat with them on the Sirius XM homepage. It's an AI agent called Harmony [00:24:00] that they've built on our platform. We're asking, what are the contours of what it means for someone to build an end to end complete customer experience with AI, with conversational AI.[00:24:09] Bret: You know, we really want to dive into the deep end of, of all the trade offs to do it. You know, where do you use fine tuning? Where do you string models together? You know, where do you use reasoning? Where do you use generation? How do you use reasoning? How do you express the guardrails of an agentic process?[00:24:25] Bret: How do you impose determinism on a fundamentally non deterministic technology? There's just a lot there. It's a really important design space. And I could sit here and tell you, we have the best approach.
Every entrepreneur will, you know. But I hope that in two years, we look back at our platform and laugh at how naive we were, because that's the pace of change broadly.[00:24:45] Bret: If you talk about like the startup opportunities, I'm not wholly skeptical of tools companies, but I'm fairly skeptical. There's always an exception to every rule, but I believe that certainly there's a big market for [00:25:00] frontier models, but largely for companies with huge CapEx budgets. So, OpenAI and Microsoft, Anthropic and Amazon Web Services, Google Cloud, xAI, which is very well capitalized now. But I think the, the idea that a company can make money sort of pre training a foundation model is probably not true.[00:25:20] Bret: It's hard to, you're competing with just, you know, unreasonably large CapEx budgets. And, just like the cloud infrastructure market, I think it will largely be there. I also really believe in the applications of AI. And I define that not as like building agents or things like that. I define it much more as like, you're actually solving a problem for a business.[00:25:40] Bret: So it's what Harvey is doing in the legal profession or what Cursor is doing for software engineering or what we're doing for customer experience and customer service. The reason I believe in that is I do think that in the age of AI, what's really interesting about software is it can actually complete a task.[00:25:56] Bret: It can actually do a job, which is very different than what the value proposition of [00:26:00] software was in ancient history, two years ago. And as a consequence, I think the way you build a solution for a domain is very different than you would have before, which means that it's not obvious, like the incumbents have like a leg up, you know, necessarily. They certainly have some advantages, but there's just such a different form factor, you know, for providing a solution and it's just really valuable.[00:26:23] Bret: You know, it's, like, just think of how much money Cursor is saving software engineering teams, or, the alternative, how much revenue it can produce. Tool making is really challenging. If you look at the cloud market, just as an analog, there are a lot of interesting tools companies, you know. Confluent monetized Kafka; Snowflake, Hortonworks, you know, there's a, there's a bunch of them.[00:26:48] Bret: A lot of them, you know, have that mix, sort of like Confluent, of the open source or open core or whatever you call it. I, I, I'm not an expert in this area. You know, I do think [00:27:00] that developers are fickle. I think that in the tool space, I probably, like, default towards open source being the area that will win.[00:27:09] Bret: It's hard to build a company around this, and then you end up with companies sort of built around open source too. That can work, don't get me wrong, but I just think that nowadays the tools are changing so rapidly that I'm, like, not totally skeptical of tool makers, but I just think that open source will broadly win, but I think that the CapEx required for building frontier models is such that it will go to a handful of big companies.[00:27:33] Bret: And then I really believe in agents for specific domains, which I think will, it's sort of the analog to software as a service in this new era. You know, it's like, if you just think of the cloud, you can lease a server.
It's just a low level primitive, or you can buy an app like, you know, Shopify or whatever.[00:27:51] Bret: And most people building a storefront would prefer Shopify over hand rolling their e commerce storefront. I think the same thing will be true of AI. So [00:28:00] I've, I tend to, like, if I have an entrepreneur ask me for advice, I'm like, you know, move up the stack as far as you can towards a customer need.[00:28:09] Bret: Broadly, but it doesn't reduce my excitement about the what is the React of building agents kind of thing, just because it is, it is the right question to ask. But I think it'll probably play out in the open source space more than anything else.[00:28:21] swyx: Yeah, and it's not a priority for you. There's a lot in there.[00:28:24] swyx: I'm kind of curious about your idea maze towards, there are many customer needs. You happen to identify customer experience as yours, but it could equally have been coding assistance or whatever. I think for some, I'm just kind of curious at the top down, how do you look at the world in terms of the potential problem space?[00:28:44] swyx: Because there are many people out there who are very smart and pick the wrong problem.[00:28:47] Bret: Yeah, that's a great question.[00:28:48] Future of Software Development[00:28:48] Bret: By the way, I would love to talk about the future of software, too, because despite the fact I didn't pick coding, I have a lot on that. But I can, I can answer your question, though. You know, I think when a technology is as [00:29:00] cool as large language models,[00:29:02] Bret: you just see a lot of people starting from the technology and searching for a problem to solve. And I think it's why you see a lot of tools companies, because as a software engineer, you start building an app or a demo and you, you encounter some pain points. You're like,[00:29:17] swyx: a lot of[00:29:17] Bret: people are experiencing the same pain point. What if I make it? It's just very incremental. And you know, I always like to use the metaphor, like you can sell coffee beans, roasted coffee beans. You can add some value. You took coffee beans and you roasted them, and roasted coffee beans largely, you know, are priced relative to the cost of the beans.[00:29:39] Bret: Or you can sell a latte. And a latte is rarely priced directly as a percentage of coffee bean prices. In fact, if you buy a latte at the airport, it's a captive audience. So it's a really expensive latte. And there's just a lot that goes into, like, how much does a latte cost? And I bring it up because there's a supply chain from growing [00:30:00] coffee beans to roasting coffee beans to, like, you know, you could make one at home or you could be in the airport and buy one, and the margins of the company selling lattes in the airport is a lot higher than the, you know, people roasting the coffee beans, and it's because you've actually solved a much more acute human problem in the airport.[00:30:19] Bret: And, and it's just worth a lot more to that person in that moment. It's kind of the way I think about technology too.
It sounds funny to liken it to coffee beans, but if you're selling tools on top of a large language model, in some ways your market is big, but you're probably going to, like, be price compressed just because you're sort of a piece of infrastructure, and then you have open source and all these other things competing with you naturally.[00:30:43] Bret: If you go and solve a really big business problem for somebody, that's actually like a meaningful business problem that AI facilitates, they will value it according to the value of that business problem. And so I actually feel like people should just stop. You're like, no, that's, that's [00:31:00] unfair. If you're searching for an idea, you know, I, I love people trying things, even if, I mean, most of the, a lot of the greatest ideas have been things no one believed in.[00:31:07] Bret: So I like, if you're passionate about something, go do it. Like who am I to say, yeah, a hundred percent. Or Gmail, like Paul's, as far as I, I mean, some of it's lore at this point, but like Gmail was Paul's own email for a long time. And then, amusingly, and Paul can't correct me, I'm pretty sure he sent around a link and like the first comment was like, this is really neat. It would be great if it was not your email, but my own. I don't know if it's a true story. I'm pretty sure it's, yeah, I've read that before. So scratch your own itch. Fine. Like it depends on what your goal is. If you wanna do like a venture backed company, if it's a passion project, f*****g passion, do it, like don't listen to anybody.[00:31:41] Bret: In fact, but if you're trying to start, you know, an enduring company, solve an important business problem. And I, and I do think that in the world of agents, the software industry has shifted, where you're not just helping people be more productive, but you're actually accomplishing tasks autonomously.[00:31:58] Bret: And as a consequence, I think the [00:32:00] addressable market has just greatly expanded, just because software can actually do things now and actually accomplish tasks. And how much is coding autocomplete worth? A fair amount. How much is the eventual, I'm certain we'll have it, the software agent that actually writes the code and delivers it to you? That's worth a lot.[00:32:20] Bret: And so, you know, I would just maybe look up from the large language models and start thinking about the economy and, you know, think from first principles. I don't wanna get too far afield, but just think about which parts of the economy will benefit most from this intelligence and which parts can absorb it most easily.[00:32:38] Bret: And what would an agent in this space look like? Who's the customer of it? Is the technology feasible? And I would just start with these business problems more. And I think, you know, the best companies tend to have great engineers who happen to have great insight into a market. And it's that last part that I think some people,[00:32:56] Bret: whether or not they have it, it's like people start so much in the technology, they [00:33:00] lose the forest for the trees a little bit.
I feel like when people are selling the packaged labor, it's almost more stateless, you know, like it's easier to swap out if you're just putting in an input and getting an output.[00:33:16] Alessio: If you think about coding, if there's no IDE, you're just putting in a prompt and getting back an app, it doesn't really matter who generates the app, you know. You have less of a buy in, versus the platform you're building, I'm sure on the backend customers have to, like, put in their documentation and they have, you know, different workflows that they can tie in. What's kind of like the line to draw there, versus like going full, we're a managed customer support team as a service, outsourced, versus this is the Sierra platform that you can build on? What was that decision?[00:33:44] Bret: I'll sort of like decouple the question in some ways, which is when you have something that's an agent, who is the person using it and what do they want to do with it? So let's just take your coding agent for a second. I will talk about Sierra as well.[00:33:59] Bret: Who's the [00:34:00] customer of an agent that actually produces software? Is it a software engineering manager? Is it a software engineer? And is it their, you know, intern, so to speak? I don't know. I mean, we'll figure this out over the next few years. Like what is that? And is it generating code that you then review?[00:34:16] Bret: Is it generating code with a set of unit tests that pass? What is the actual, for lack of a better word, contract? Like, how do you know that it did what you wanted it to do? And then I would say the product and the pricing, the packaging model sort of emerged from that. And I don't think the world's figured it out.[00:34:33] Bret: I think it'll be different for every agent. You know, in our customer base, we do what's called outcome based pricing. So essentially every time the AI agent solves the problem or saves a customer or whatever it might be, there's a pre negotiated rate for that. We do that 'cause we think that that's sort of the correct way agents, you know, should be packaged.[00:34:53] Bret: I look back at the history of like cloud software and notably the introduction of the browser, which led to [00:35:00] software being delivered in a browser, like Salesforce, who famously invented sort of software as a service, which is both a technical delivery model through the browser, but also a business model, which is you subscribe to it rather than pay for a perpetual license.[00:35:13] Bret: Those two things are somewhat orthogonal, but not really. If you think about the idea of software running in a browser, hosted in a data center that you don't own, you sort of needed to change the business model, because you don't, you can't really buy a perpetual license or something, otherwise, like, how do you afford making changes to it?[00:35:31] Bret: So it only worked when you were buying like a new version every year or whatever. So to some degree, but then the business model shift actually changed business as we know it, because now, like, things like Adobe Photoshop you now subscribe to rather than purchase.
So it ended up where you had a technical shift and a business model shift that were very logically intertwined, and actually the business model shift turned out to be as significant as the technical shift.[00:35:59] Bret: And I think with [00:36:00] agents, because they actually accomplish a job, I do think that it doesn't make sense to me that you'd pay for the privilege of, like, using the software. Like that coding agent, like if it writes really bad code, like fire it, you know. I don't know what the right metaphor is, but you should pay for a job well done, in my opinion. I mean, that's how you pay your software engineers, right?[00:36:20] swyx: And, well, not really. We pay to put them on salary and give them options and they vest over time. That's fair.[00:36:26] Bret: But my point is that you don't pay them for how many characters they write, which is sort of the token based, you know, whatever. Like, there's that famous Apple story where they were asking for a report of how many lines of code you wrote.[00:36:40] Bret: And one of the engineers showed up with like a negative number 'cause he had just like done a big refactoring. It was like a big F you to management who didn't understand how software is written. You know, my sense is like the traditional usage based or seat based thing is just going to look really antiquated,[00:36:55] Bret: 'cause it's like asking your software engineer, how many lines of code did you write today? Like who cares? 'Cause there's [00:37:00] absolutely no correlation. So my whole view is, I do think it'll be different in every category, but I do think that if an agent is doing a job, I think it properly incentivizes the maker of that agent and the customer if you're paying for the job well done.[00:37:16] Bret: It's not always perfect to measure. It's hard to measure engineering productivity, but you can, you should do something other than how many keys you typed, you know. Talk about perverse incentives for AI, right? Like I can write really long functions to do the same thing, right? So broadly speaking, you know, I do think that we're going to see a change in business models of software towards outcomes.[00:37:36] Bret: And I think you'll see a change in delivery models too. And, and, you know, in our customer base, you know, we empower our customers to really have their hands on the steering wheel of what the agent does. They, they want and need that. But the role is different. You know, at a lot of our customers, the customer experience operations folks have renamed themselves the AI architects, which I think is really cool.[00:37:55] Bret: And, you know, it's like in the early days of the Internet, there's the role of the webmaster. [00:38:00] And I don't know, webmaster is not a fashionable, you know, term, nor is it a job anymore. I just, I don't know. Will they, will our tech stand the test of time? Maybe, maybe not. But I do think that again, I like, you know, because everyone listening right now is a software engineer,[00:38:14] Bret: like what is the form factor of a coding agent? And actually I'll, I'll take a breath, 'cause actually I have a bunch of opinions on this. Like I wrote a blog post right before Christmas, just on the future of software development. And one of the things that's interesting is like, if you look at the way I use Cursor today, as an example, it's inside of a repackaged Visual Studio Code environment.
I sometimes use the sort of agentic parts of it, but it's largely, you know, I've sort of gotten a good routine of making it auto complete code in the way I want through tuning it properly, when it actually can write. I do wonder what, like, the future of development environments will look like.[00:38:55] Bret: And to your point on what is a software product, I think it's going to change a lot in [00:39:00] ways that will surprise us. But I always use, I use the metaphor in my blog post of, have you all driven around in a Waymo around here? Yeah, everyone has. And there are these Jaguars, really nice cars, but it's funny because it still has a steering wheel, even though there's no one sitting there and the steering wheel is, like, turning and stuff. Clearly in the future,[00:39:16] Bret: once we get to that being more ubiquitous, like why have the steering wheel, and also why have all the seats facing forward? Maybe just for car sickness. I don't know, but you could totally rearrange the car. I mean, so much of the car is oriented around the driver. So it stands to reason to me that, like, well, autonomous agents for software engineering running through Visual Studio Code, that seems a little bit silly, because having a single source code file open one at a time is kind of a goofy form factor for when, like, the code isn't being written primarily by you. But it begs the question of what's your relationship with that agent. And I think the same is true in our industry of customer experience, which is like,[00:39:55] Bret: who are the people managing this agent? What tools do they need? And they definitely need [00:40:00] tools, but it's probably pretty different than the tools we had before. It's certainly different than training a contact center team. And as software engineers, I think that I would like to see, particularly like on the passion project side or research side,[00:40:14] Bret: more innovation in programming languages. I think that we're bringing the cost of writing code down to zero. So the fact that we're still writing Python with AI cracks me up, just cause it, like, literally was designed to be ergonomic to write, not safe to run or fast to run. I would love to see more innovation in how we verify program correctness.[00:40:37] Bret: I studied formal verification in college a little bit, and it's not very fashionable because it's really, like, tedious and slow and doesn't work very well. If a lot of code is being written by a machine, you know, one of the primary values we can provide is verifying that it actually does what we intend that it does.[00:40:56] Bret: I think there should be lots of interesting things in the software development life cycle, like how [00:41:00] we think of testing and everything else. Because if you think about it, if we have to manually read every line of code that's coming out of machines, it will just rate limit how much the machines can do. The alternative is totally unsafe.[00:41:13] So I wouldn't want to put code in production that didn't go through proper code review and inspection. So my whole view is like, I actually think there's like an AI native, I don't think the coding agents work well enough to do this yet, but once they do, what is sort of an AI native software development life cycle, and how do you actually[00:41:31] enable the creators of software to produce the highest quality, most robust, fastest software and know that it's correct? And I think that's an incredible opportunity.
I mean, how much C code can we rewrite in Rust and make safe so that there are fewer security vulnerabilities? Can we, like, have more efficient, safer code than ever before?[00:41:53] Bret: And can you have someone who's like that guy in the Matrix, you know, like staring at the little green things, like where could you have an operator [00:42:00] of a code generating machine be, like, superhuman? I think that's a cool vision. And I think too many people are focused on, like, autocomplete, you know, right now. I'm not, I'm not even, I'm guilty as charged, I guess, in some ways, but I just, like, I'd like to see some bolder ideas. And that's why when you were joking, you know, talking about what's the React of whatever, I think we're clearly in a local maximum, you know, metaphor, like sort of conceptual local maximum. Obviously it's moving really fast. I think we're moving out of it.[00:42:26] Alessio: Yeah. At the end of '23, I read this blog post, from syntax to semantics. Like if you think about Python, it's taking C and making it more semantic, and LLMs are like the ultimate semantic program, right? You can just talk to them and they can generate any type of syntax from your language. But again, the languages that they have to use were made for us, not for them.[00:42:46] Alessio: But the problem is like, as long as you will ever need a human to intervene, you cannot change the language under it. You know what I mean? So I'm curious at what point of automation we'll need to get to before we're going to be okay making changes to the underlying languages, [00:43:00] like the programming languages, versus just saying, hey, you just got to write Python because I understand Python and I'm more important at the end of the day than the model.[00:43:08] Alessio: But I think that will change, but I don't know if it's like two years or five years. I think it's more nuanced actually.[00:43:13] Bret: So I think there's a, some of the more interesting programming languages bring semantics into syntax. So let me, that's a little reductive, but like Rust as an example, Rust is memory safe,[00:43:25] Bret: statically, and that was a really interesting conceptual step, but it's why it's hard to write Rust. It's why most people write Python instead of Rust. I think Rust programs are safer and faster than Python, probably slower to compile. But like broadly speaking, like given the option, if you didn't have to care about the labor that went into it,[00:43:45] you should prefer a program written in Rust over a program written in Python, just because it will run more efficiently. It's almost certainly safer, et cetera, et cetera, depending on how you define safe, but most people don't write Rust because it's kind of a pain in the ass. And [00:44:00] the audience of people who can is smaller, but it's sort of better in most, most ways.[00:44:05] Bret: And again, let's say you're making a web service and you didn't have to care about how hard it was to write. If you just got the output of the web service, the Rust one would be cheaper to operate. It's certainly cheaper and probably more correct, just because there's so much static analysis implied by the Rust programming language that it probably will have fewer runtime errors and things like that as well.[00:44:25] Bret: So I just give that as an example because, so Rust, at least my understanding is that it came out of the Mozilla team because there's lots of security vulnerabilities in the browser and it needs to be really fast.
They said, okay, we want to put more of a burden at the authorship time to have fewer issues at runtime.[00:44:43] Bret: And we need the constraint that it has to be done statically, because browsers need to be really fast. My sense is, if you just think about like the, the needs of a programming language today, where the role of a software engineer is [00:45:00] to use an AI to generate functionality and audit that it does in fact work as intended, maybe functionally, maybe from like a correctness standpoint, some combination thereof, how would you create a programming system that facilitated that?[00:45:15] Bret: And, you know, I bring up Rust because I think it's a good example of, like, I think given a choice of writing in C or Rust, you should choose Rust today. I think most people would say that, even C aficionados, just because C is largely less safe for very similar, you know, trade offs, you know, for the, the system. And now with AI, it's like, okay, well, that just changes the game on writing these things.[00:45:36] Bret: And so, like, I just wonder if a combination of programming languages that are more structurally oriented towards the values that we need from an AI generated program, verifiable correctness and all of that. If it's tedious to produce for a person, that maybe doesn't matter. But one thing, like if I asked you, is this Rust program memory safe?[00:45:58] Bret: You wouldn't have to read it, you just have [00:46:00] to compile it. So that's interesting. I mean, that's like an, that's one example of a very modest form of formal verification. So I bring that up because I do think you can have AI inspect AI, you can have AI review, do AI code reviews. It would disappoint me if the best we could get was AI reviewing Python, and, having scaled a few very large[00:46:21] Bret: websites that were written on Python, it's just like, you know, expensive. And it's like, trust me, every team who's written a big web service in Python has experimented with like PyPy and all these things just to make it slightly more efficient than it naturally is. You don't really have true multi threading anyway.[00:46:36] Bret: It's just, like, clearly you do it just because it's convenient to write. And I just feel like we're, I don't want to say it's insane, I just mean, I do think we're at a local maximum. And I would hope that we create a programming system, a combination of programming languages, formal verification, testing, automated code reviews, where you can use AI to generate software in a high scale way and trust it.[00:46:59] Bret: And you're [00:47:00] not limited by your ability to read it necessarily. I don't know exactly what form that would take, but I feel like that would be a pretty cool world to live in.[00:47:08] Alessio: Yeah. We had Chris Lattner on the podcast. He's doing great work with Modular. I mean, I love LLVM. Yeah. Basically merging Rust and Python.[00:47:15] Alessio: That's kind of the idea. Should be, but I'm curious, like, for them a big use case was making it compatible with Python, same APIs, so that Python developers could use it. Yeah. And so I, I wonder at what point, well, yeah.[00:47:26] Bret: At least my understanding is they're targeting the data science, yeah, machine learning crowd, which is all written in Python, so it still feels like a local maximum.[00:47:34] Bret: Yeah.[00:47:34] swyx: Yeah, exactly. I'll force you to make a prediction. You know, Python's roughly 30 years old.
In 30 years from now, is Rust going to be bigger than Python?[00:47:42] Bret: I don't know this, but just, I don't even know this is a prediction. I just am sort of, like, saying stuff I hope is true. I would like to see an AI native programming language and programming system, and I use language because I'm not sure language is even the right thing, but I hope in 30 years, there's an AI native way we make [00:48:00] software that is wholly uncorrelated with the current set of programming languages.[00:48:04] Bret: Or not uncorrelated, but I think most programming languages today were designed to be efficiently authored by people, and some have different trade offs.[00:48:15] Evolution of Programming Languages[00:48:15] Bret: You know, you have Haskell and others that were designed for abstractions for parallelism and things like that. You have programming languages like Python, which are designed to be very easily written, sort of like the Perl and Python lineage, which is why data scientists use it.[00:48:31] Bret: It's, it can, it has an interactive mode, things like that. And I love, I'm a huge Python fan. So despite all my Python trash talk, I'm a huge Python fan. At least two of my three companies were exclusively written in Python. And then C came out of the birth of Unix, and it wasn't the first, but certainly the most prominent first step after assembly language, right?[00:48:54] Bret: Where you had higher level abstractions rather than, going beyond goto, to like abstractions, [00:49:00] like the for loop and the while loop.[00:49:01] The Future of Software Engineering[00:49:01] Bret: So I just think that if the act of writing code is no longer a meaningful human exercise, maybe it will be, I don't know, I'm just saying it sort of feels like maybe it's one of those parts of history that just will sort of like go away, but there's still the role of the software engineer, like the person actually building the system.[00:49:20] Bret: Right. And what does a programming system for that form factor look like?[00:49:25] React and Front-End Development[00:49:25] Bret: And I, I just have a, I hope to be, just like I mentioned, I remember I was at Facebook in the very early days when, when, what is now React was being created. And I remember when it was, like, released open source, I had left by that time, and I was just like, this is so f*****g cool.[00:49:42] Bret: Like, you know, to basically model your app independent of the data flowing through it just made everything easier. And then now, you know, I can create, like, there's a lot of, the front end software gameplay is like a little chaotic for me, to be honest with you. It is like, it's sort of like [00:50:00] abstraction soup right now for me, but like some of those core ideas felt really ergonomic.[00:50:04] Bret: I just wanna, I'm just looking forward to the day when someone comes up with a programming system that feels both really like an aha moment, but completely foreign to me at the same time. Because they created it, sort of, like, from first principles, recognizing that, like, authoring code in an editor is maybe not, like, the primary reason why a programming system exists anymore.[00:50:26] Bret: And I think that's, like, that would be a very exciting day for me.[00:50:28] The Role of AI in Programming[00:50:28] swyx: Yeah, I would say, like, various versions of this discussion have happened. At the end of the day, you still need to precisely communicate what you want.
As a manager of people, as someone who has done many, many legal contracts, you know how hard that is.[00:50:42] swyx: And then now we have to talk to machines doing that, and AIs interpreting what we mean and reading our minds effectively. I don't know how to get across that barrier of translating human intent to instructions. And yes, it can be more declarative, but I don't know if it'll ever cross over from being [00:51:00] a programming language to something more than that.[00:51:02] Bret: I agree with you. And I actually do think if you look at like a legal contract, you know, the imprecision of the English language, it's like a flaw in the system. How many[00:51:12] swyx: holes there are.[00:51:13] Bret: And I do think that when you're making a mission critical software system, I don't think it should be English language prompts.[00:51:19] Bret: I think that is silly, because you want the precision of a, a programming language. My point was less about that and more about the actual act of authoring it, like if you,[00:51:32] Formal Verification in Software[00:51:32] Bret: I think some embedded systems do use formal verification. I know it's very common in like security protocols now, so that you can, because the importance of correctness is so great.[00:51:41] Bret: My intellectual exercise is like, why not do that for all software? I mean, probably that's silly, just literally to do what we literally do for these low level security protocols, but the only reason we don't is because it's hard and tedious, and hard and tedious are no longer factors. So, like, if I could, I mean, [00:52:00] just think of, like, the silliest app on your phone right now, the idea that that app should be, like, formally verified for its correctness feels laughable right now because, like, God, why would you spend the time on it?[00:52:10] Bret: But if it's zero cost, like, yeah, I guess so. I mean, it never crashes. That's probably good. You know, why not? I just want to, like, set our bars really high. Like, we should make, software has been amazing. Like there's a Marc Andreessen blog post, Software Is Eating the World. And you know, our whole life is, is mediated digitally.[00:52:26] Bret: And that's just increasing with AI. And now we'll have our personal agents talking to the agents on the CRM platform and it's agents all the way down, you know, our core infrastructure is running on these digital systems. We now have, like, and we've had a shortage of software developers for my entire life.[00:52:45] Bret: And as a consequence, you know, if you look, remember healthcare.gov, that fiasco, security vulnerabilities leading to state actors getting access to critical infrastructure. I'm like, we now have, like, created this, like, amazing system that can, [00:53:00] like, we can fix this, you know. And I, I just want to, I'm both excited about the productivity gains in the economy, but I just think as software engineers, we should be bolder.[00:53:08] Bret: Like we should have aspirations to fix these systems, so that, like, in general, as you said, as precise as we want to be in the specification of the system, we can make it work correctly now. And I'm being a little bit hand wavy, and I think we need some systems, I think that's where we should set the bar, especially when so much of our life depends on this critical digital infrastructure.[00:53:28] Bret: So I'm, I'm just like super optimistic about it. But actually, let's go to w

Software Sessions
Paul Frazee on Bluesky and ATProto

Software Sessions

Play Episode Listen Later Jan 16, 2025 67:11


Paul Frazee is the CTO of Bluesky. He previously worked on the Beaker browser and the peer-to-peer social media protocol Secure Scuttlebutt. Paul discusses how Bluesky and ATProto got started, scaling up a social media site, what makes ATProto decentralized, lessons ATProto learned from previous peer-to-peer projects, and the challenges of content moderation. Episode transcript available here. My Bluesky profile. -- Related Links Bluesky ATProtocol ATProto for distributed systems engineers Bluesky and the AT Protocol: Usable Decentralized Social Media Decentralized Identifiers (DIDs) ActivityPub Webfinger Beaker web browser Secure Scuttlebutt -- Transcript You can help correct transcripts on GitHub. [00:00:00] Jeremy: Today I am talking to Paul Frazee. He's the current CTO of bluesky, and he previously worked on other decentralized applications like Beaker and Secure Scuttlebutt. [00:00:15] Paul: Thanks for having me. What's bluesky [00:00:16] Jeremy: For people who aren't familiar with bluesky, what is it? [00:00:20] Paul: So bluesky is an open social network, simplest way to put it, designed in particular for high scale. That's kind of one of the big requirements that we had when we were moving into it. and it is really geared towards making sure that the operation of the social network is open amongst multiple different organizations. [00:00:44] So we're one of the operators, but other folks can come in, spin up the software, all the open source software, and essentially have a full node with a full copy of the network active users and have their users join into our network. And they all work functionally as one shared application. [00:01:03] Jeremy: So it, it sounds like it's similar to Twitter but instead of there being one Twitter, there could be any number and there is part of the underlying protocol that allows them to all connect to one another and act as one system. [00:01:21] Paul: That's exactly right. And there's a metaphor we use a lot, which is comparing to the web and search engines, which actually kind of matches really well. Like when you use Bing or Google, you're searching the same web. So on the AT protocol on bluesky, you use bluesky, you use some alternative client or application, all the same, what we're we call it, the atmosphere, all one shared network, [00:01:41] Jeremy: And more than just the, the client. 'cause I think sometimes when people think of a client, they'll think of, I use a web browser. I could use Chrome or Firefox, but ultimately I'm connecting to the same thing. But it's not just people running alternate clients, right? [00:01:57] Paul: Their own full backend to it. That's right. Yeah. Yeah. The anchoring point on that being the fire hose of data that runs the entire thing is open as well. And so you start up your own application, you spin up a service that just pipes into that fire hose and taps into all the activity. History of AT Protocol [00:02:18] Jeremy: Talking about this underlying protocol maybe we could start where this all began so people get some context for where this all came from. [00:02:28] Paul: For sure. All right, so let's wind the clock back here in my brain. We started out 2022, right at the beginning of the year. We were formed as a, essentially a consulting company outside of Twitter with a contract with Twitter. And, uh, our goal was to build a protocol that could run, uh, Twitter, much like the way that we just described, which set us up with a couple of pretty specific requirements. 
[00:02:55] For one, we had to make sure that it could scale. And so that ended up being a really important first requirement. And we wanted to make sure that there were strong kinds of guarantees that the network doesn't ever get captured by any one operator. The idea was that Twitter would become the first, uh, adopter of the technology. [00:03:19] Other applications, other services would begin to take advantage of it, and users would be able to smoothly migrate their accounts in between one or the other at any time. Um, and it's really, really anchored in a particular goal of just deconstructing monopolies. Getting rid of those moats that make it so that there's a kind of a lack of competition, uh, between these things. [00:03:44] And making sure that, if there was some kind of reason that you decided you're just not happy with what direction this service has been going, you move over to another one. You're still in touch with all the folks you were in touch with before. You don't lose your data. You don't lose your, your, your follows. Those were the kind of initial requirements that we set out with. The team by and large came from, the decentralized web movement, which is actually a pretty large community that's been around since, I wanna say around 2012 is when we first kind of started to form. It got really made more specifically into a community somewhere around 2015 or 16, I wanna say, [00:04:23] when the Internet Archive started to host conferences for us. And so that gave us kind of a meeting point where we all started to meet up. There's kind of three schools of thought within that movement. There was the blockchain community, the, federation community, and the peer-to-peer community. [00:04:43] And so blockchain, you don't need to explain that one. You got federation, which was largely ActivityPub, Mastodon. And then peer-to-peer was IPFS, Dat protocol, um, Secure Scuttlebutt. But, those kinds of BitTorrent style of technologies, really, they were all kind of inspired by that. [00:05:02] So these three different kinds of sub communities were all working independently on different ways to attack how to make these open applications. How do you get something that's a high scale web application without one corporation being the only operator? When this team came together in 2022, we largely sourced from the peer-to-peer group of the decentralized community. Scaling limitations of peer-to-peer [00:05:30] Paul: Personally, I've been working in the space and on those kinds of technologies for about 10 years at that stage. And, the other folks that were in there, you know, 5-10 each respectively. So we all had a fair amount of time working on that. And we had really kind of hit some of the limitations of doing things entirely using client devices. We were running into challenges about reliability of connections. Punching holes to the individual device is very hard. Synchronizing keys between the devices is very hard. Maintaining strong availability of the data because people's devices are going off and on, things like that. Even when you're using the kind of BitTorrent style of shared distribution, that becomes a challenge. [00:06:15] But probably the worst challenge was quite simply scale. You need to be able to create aggregations of a lot of behavior even when you're trying to model your application as largely peer wise interactions like messaging. You might need an aggregation of what accounts even exist, how do you do notifications reliably? [00:06:37] Things like that.
Really challenging. And what I was starting to say to myself by the end of that kind of pure peer-to-peer stint was that it can't be rocket science to do a comment section. You know, like at some point you just ask yourself like, how, how hard are we willing to work to, to make these ideas work? [00:06:56] But, there were some pretty good pieces of tech that did come out of the peer-to-peer world. A lot of it had to do with what I might call cryptographic structure, things like Merkle trees and advances within Merkle trees, ways to take data sets and reduce them down to hashes so that you can then create nice signatures and have signed data sets at rest at larger scales. [00:07:22] And so our basic thought was, well, all right, we got some pretty good tech out of this, but let's drop that requirement that it all run off of devices. And let's get some servers in there. And instead think of the entire network as a peer-to-peer mesh of servers. That's gonna solve your scale problem, [00:07:38] 'cause you can throw big databases at it. It's gonna solve your availability problems, it's gonna solve your device sync problems. But you get a lot of the same properties of being able to move data sets between services, much like you could move them between devices in the peer-to-peer network, without losing their identifiers, because you're doing this indirection from cryptographic identifiers to the current host. [00:08:02] That's what peer-to-peer is always doing. You're taking like a public key or hash and then you're asking the network, hey, who has this? Well, if you just move that into the server, you get the same thing, that dynamic resolution of who's your active host. So you're getting that portability that we wanted real bad. [00:08:17] And then you're also getting that kind of enmeshing of the different services where each of them is producing these data sets that they can sync from each other. So take peer-to-peer and apply it to the server stack. And that was our kind of initial thought of like, hey, you know what? This might work. [00:08:31] This might solve the problems that we have. And a lot of the design fell out from that basic mentality. Cryptographic identifiers and domain names [00:08:37] Jeremy: When you talk about these cryptographic identifiers, is the idea that anybody could have data about a person, like a message or a comment, and that could be hosted in different places, but you would still know which person it originally came from. Is that, is that the goal there? [00:08:57] Paul: That's exactly it. Yeah. Yeah. You wanna create identification that supersedes servers, right? So when you think about like, if I'm using Twitter and I wanna know what your posts are, I go to twitter.com/jeremy, right? I'm asking Twitter and your ID is consequently always bound to Twitter. You're always kind of a second class identifier. [00:09:21] We wanted to boost up the user identifier to be kind of a thing freestanding on its own. I wanna just know what Jeremy's posts are. And then once you get into the technical system it'll be designed to figure out, okay, who knows that, who can answer that for you? And we use cryptographic identifiers internally. [00:09:41] So like all the data sets use these kind of long URLs to identify things. But in the application, the user facing part, we used domain names for people. Which I think gives the picture of how this all operates. It really moves the user accounts up into a free standing first class identifier within the system.
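To make the "hashes plus signatures" idea concrete, here is a minimal TypeScript (Node.js) sketch. It is only the shape of the idea Paul describes, signed content that stays verifiable no matter which host serves it; it is not ATProto's actual repository format, which uses Merkle Search Trees and CID-style identifiers, and the record fields here are made up.

```typescript
import { createHash, generateKeyPairSync, sign, verify } from "node:crypto";

// A toy record, standing in for the JSON documents a repository holds.
const record = JSON.stringify({
  text: "hello atmosphere",
  createdAt: new Date().toISOString(),
});

// Reduce the record to a hash, a content-derived identifier for this data.
const digest = createHash("sha256").update(record).digest();

// Sign the digest so anyone can verify the data regardless of which server
// handed it over.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");
const signature = sign(null, digest, privateKey);

// A consumer that fetched `record` from *any* host re-derives the digest and
// checks the signature against the identity's public key.
const rederived = createHash("sha256").update(record).digest();
console.log(verify(null, rederived, publicKey, signature)); // true
```

The design point is that the consumer never has to trust the host it fetched from, only the key bound to the identity, which is what makes moving data sets between servers safe.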
[00:10:04] And then consequently, any application, whatever application you're using, it's really about whatever data is getting put into your account. And then that just exchanges between any application that anybody else is using. [00:10:14] Jeremy: So in this case, it sounds like the identifier is some long string that, I'm not sure if it's necessarily human readable or not. You're shaking your head no. [00:10:25] Paul: No. [00:10:26] Jeremy: But if you have that string, you know it's for a specific person. And since it's not really human readable, what you do is you put a layer on top of it, which in this case is a domain that somebody can use to look up and find the identifier. [00:10:45] Paul: Yeah, yeah, yeah. So we just use DNS. Put a TXT record in there, map it to that long string, or you could do a .well-known file on a web server if that's more convenient for you. And then the ID that's behind that, the non-human readable one, those are called DIDs, which is actually a W3C spec. Those then map to a kind of a certificate, what you call a DID document, that kind of confirms the binding by declaring what that domain name should be. So you get this bi-directional binding. And then that certificate also includes signing keys and active servers. So you pull down that certificate, and that's how the discovery of the active server happens, through the DID system. What's stored on a PDS [00:11:29] Jeremy: So when you refer to an active server, what is that server and what is that server storing? [00:11:35] Paul: It's kinda like a web server, but instead of hosting HTML, it's hosting a bunch of JSON records. Every user has their own document store of JSON documents. It's bucketed into collections. Whenever you're looking up somebody on the network you're gonna get access to that repository of data, jump into a collection. [00:11:58] This collection is their post collection. Get the rkey (Record Key), and then you're pulling out JSON at the end of it, which is just a structured piece of stuff saying here's the CreatedAt, here's the text, here's the type, things like that. One way you could look at the whole system is it's a giant, giant database network. Servers can change, signing keys change, but not DID [00:12:18] Jeremy: So if someone's going to look up someone's identifier, let's say they have the user's domain, they have to go to some source, right? To find the user's data. You've mentioned, I think before, the idea that this is decentralized, and by default I would, I would picture some kind of centralized resource where I send somebody a domain and then they give me back the identifier and the links to the servers. [00:12:46] So, so how does that work in practice where it actually can be decentralized? [00:12:51] Paul: I mentioned that your DID, that non-human readable identifier, has that certificate attached to it that lists servers and signing keys and things like that. [00:13:00] So you're just gonna look up inside that DID document what that server is, your data repository host. And then you contact that guy and say, all right, I'm told you're hosting this thing. Here's the person I'm looking for, hand over the, hand over the data. It's really, you know, pretty straightforward. [00:13:18] The way that gets decentralized is then by the fact that I could swap out that active server that's in my certificate, and probably wanna rotate the signing keys 'cause I've just changed the, you know.
I don't want to keep using the same signing keys as I was using previously because I just changed the authority. [00:13:36] So that's the migration change: change the hosting server, change out the signing keys. Somebody that's looking for me now, they're gonna load up my document, my DID document. They're gonna say, okay, new server, new keys. Pull down the data. Looks good, right? Matches up with the DID doc. [00:13:50] So that's how you get that level of portability. But when those changes happen, the DID doesn't change, right? The DID document changes. So there's the level of indirection there, and that's pretty important, because if you don't have a persistent identifier, whenever you're trying to change out servers, all those backlinks are gonna break. [00:14:09] That's the kind of stuff that stops you from being able to do clean migrations on things like web-based services. The only real option is to go out and ask everybody to update their data. And when you're talking about like interactions on the social network, like people replying to each other, there's no chance, right? [00:14:25] Every time somebody moves you're gonna go back and modify all those records. You don't even control all the records from the top down 'cause they're hosted all over the web. So it's just, you can't do it. Generally we call this account portability. It's kinda like phone number portability, that you can change your host, so that part's portable, but the ID stays the same. [00:14:45] And keeping that ID the same is the real key to making sure that this can happen without breaking the whole system. [00:14:52] Jeremy: And so it, it sounds like there's the decentralized ID, then there's the decentralized ID document that's associated with it, that points you to the actual location of your, your data, your posts, your pictures and whatnot. But then you also mentioned that they could change servers. [00:15:13] So let's say somebody changes where their data is, is stored, that would change the servers, I guess, in their document. But [00:15:23] then how do all of these systems know, okay, I need to change all these references to your old server, to these new servers? [00:15:32] Paul: Yeah. Well, the good news is that you only have to, you, you got the public data set of all the user's activity, and then you have like internal caches of where the current server is. You just gotta update those internal caches when you're trying to contact their server. Um, so it's actually a pretty minimal thing to just, like, update, like, oh, they moved, just start talking to, update my, my table, my Redis that's holding onto that kind of temporary information, put a TTL on it, that sort of thing. Most communication won't be between servers, it will be from event streams [00:16:01] Paul: And, honestly, in practice, a fair amount of the system for scalability reasons doesn't necessarily work by servers directly contacting each other. It's actually a little bit more like how, I told you before, I'm gonna use this metaphor a lot, the search engines with the web, right? What we do is we actually end up crawling the repositories that are out in the world and funneling them into event streams like a Kafka. And that allows the entire system to act like a data processing pipeline where you're just tapping into these event streams and then pushing those logs into databases that produce these large scale aggregations. [00:16:47] So a lot of the application behavior ends up working off of these event logs.
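Here is a hedged TypeScript sketch of that lookup-plus-cache flow. The `_atproto` TXT record name, the `.well-known/atproto-did` fallback, the `plc.directory` resolver, the `#atproto_pds` service entry, and the `com.atproto.repo.getRecord` XRPC call reflect deployed ATProto conventions as I understand them, not details stated in this conversation; the handle, rkey, and five-minute TTL are invented for illustration.

```typescript
import { promises as dns } from "node:dns";

// Resolve a handle (domain name) to a DID, per the "TXT record or
// .well-known file" options Paul describes.
async function resolveHandle(handle: string): Promise<string> {
  try {
    const records = await dns.resolveTxt(`_atproto.${handle}`);
    const entry = records.flat().find((v) => v.startsWith("did="));
    if (entry) return entry.slice("did=".length); // e.g. "did:plc:..."
  } catch {
    // fall through to the .well-known option
  }
  const res = await fetch(`https://${handle}/.well-known/atproto-did`);
  return (await res.text()).trim();
}

// The DID document binds the DID to its signing keys and active server.
// When an account migrates, this document changes but the DID does not.
async function currentPds(did: string): Promise<string> {
  const doc = await (await fetch(`https://plc.directory/${did}`)).json();
  const svc = doc.service?.find((s: any) => s.id.endsWith("#atproto_pds"));
  if (!svc) throw new Error(`no PDS listed for ${did}`);
  return svc.serviceEndpoint;
}

// The "internal cache" Paul mentions: remember the active host, with a TTL
// so a migrated account gets re-resolved instead of breaking.
const pdsCache = new Map<string, { pds: string; expiresAt: number }>();
const TTL_MS = 5 * 60 * 1000;

async function pdsFor(did: string): Promise<string> {
  const hit = pdsCache.get(did);
  if (hit && Date.now() < hit.expiresAt) return hit.pds;
  const pds = await currentPds(did);
  pdsCache.set(did, { pds, expiresAt: Date.now() + TTL_MS });
  return pds;
}

// Usage: fetch one JSON record from whoever currently hosts the repository.
async function getPost(handle: string, rkey: string) {
  const did = await resolveHandle(handle); // handle is hypothetical
  const pds = await pdsFor(did);
  const url = `${pds}/xrpc/com.atproto.repo.getRecord` +
    `?repo=${did}&collection=app.bsky.feed.post&rkey=${rkey}`;
  return (await fetch(url)).json();
}
```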
If I reply to somebody, for instance, I don't necessarily, it's not that my server has to, like, talk to your server and say, hey, I'm replying to you. What I do is I just publish a reply in my repository, that gets shot out into the event logs, and then these aggregators pick up that the reply got created and just update their database with it.[00:17:11] So it's not that our hosting servers are constantly having to send messages with each other, you actually use these aggregators to pull together the picture of what's happening on the network. [00:17:22] Jeremy: Okay, so like you were saying, it's an event stream model where everybody publishes the events, the things that they're doing, whether that's making a new post, making a reply, that's all being posted to this event stream. And then everybody who provides, I'm not sure if instances is the right term, but an implementation of the atmosphere protocol (Authenticated Transfer protocol), [00:17:53] they are listening for all those changes, and they don't necessarily have to know that you moved servers because they're just listening for the events and you still have the same identifier. [00:18:10] Paul: Generally speaking, yeah. 'Cause like if you're listening to one of these event streams, what you end up looking for is just the signature on it and making sure that the signature matches up. Because you're not actually having to talk to their live server. You're just listening to this relay that's doing this aggregation for you. [00:18:27] But I think actually to kind of give a little more clarity to what you're talking about, it might be a good idea to refocus how we're talking about the system here. I mentioned before that our goal was to make a high scale system, right? We need to handle a lot of data. If you're thinking about this in the way that Mastodon does it, the ActivityPub model, that's actually gonna give you the wrong intuition. Designing the protocol to match distributed systems practices (Event sourcing / Stream processing) [00:18:45] Paul: 'cause we chose a dramatically different system. What we did instead was we picked up essentially the same practices you're gonna use for a data center, a high scale application data center, and said, all right, how do you tend to build these sorts of things? Well, what you're gonna do is you're gonna have multiple different services running for different purposes. [00:19:04] It gets pretty close to a microservices approach. You're gonna have a set of databases, and then you're going to, generally speaking for high scale, you're gonna have some kind of a Kafka, some kind of an event log that you are tossing changes about the state of these databases into. And then you have a bunch of secondary systems that are tapping into the event log and processing that into the large scale databases, like your search index, your nice Postgres of user profiles. [00:19:35] And that makes sure that you can get each of these different systems to perform really well at their particular task, and then you can detach them in their design. For instance, your primary storage can be just a key value store that scales horizontally. And then on the event log, you, you're using a Kafka that's designed to handle particular semantics of making sure that the messages don't get dropped, that they come through at a particular throughput. And then you're using, for us, we're using like ScyllaDB for the big scale indexes, that scales horizontally really well.
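As a sketch of that pipeline shape, one event log fanned out to independent indexers, here is a toy TypeScript version. The event shape and both indexers are invented for illustration; in production this is a Kafka (or similar) consumer group per index, as Paul describes.

```typescript
// One event from the shared log: a record that changed in some repository.
type RepoEvent = { did: string; collection: string; rkey: string; record: any };

type Indexer = (evt: RepoEvent) => Promise<void>;

// Each downstream system taps the same log and maintains its own database.
const searchIndexer: Indexer = async (evt) => {
  // write evt.record.text into the search index (stubbed)
};
const profileIndexer: Indexer = async (evt) => {
  // update the profiles table when evt.collection is a profile record (stubbed)
};

async function runPipeline(log: AsyncIterable<RepoEvent>, indexers: Indexer[]) {
  for await (const evt of log) {
    // Fan each event out. Every indexer is decoupled, so it can be scaled,
    // rebuilt, or replaced without touching the primary data hosts.
    await Promise.all(indexers.map((ix) => ix(evt)));
  }
}
```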
So it's just different kinds of profiles for different pieces. [00:20:13] If you read Martin Kleppmann's book, Designing Data-Intensive Applications I think it's called, a lot of it gets captured there. He talks a lot about this kind of thing, and it's sometimes called a kappa architecture, is one way this is described; event sourcing is a similar term for it as well. [00:20:30] Stream processing. That's pretty standard practice for how you would build a traditional high scale service. So if you take this kind of microservice architecture and essentially say, okay, now imagine that each of the services that are a part of your data center could be hosted by anybody, not just within our data center but outside of our data center as well, and should be able to all work together, [00:20:57] that's basically how AT Proto is designed. We were talking about the data repository hosts. Those are just the primary data stores: they hold onto the user keys and they hold onto those JSON records. And then we have another service category we call the relay, that just crawls those data repositories and sucks in that fire hose of data we were talking about, that event log. App views pull data from the relay and produce indexes and threads [00:21:21] Paul: And then we have what we call app views that sit there and tail the index and tail the log, excuse me, and produce indexes off of it. They're listening to those events and then, like, making threads: okay, that guy posted, that guy replied, that guy replied. [00:21:37] That's a thread. They assemble it into that form. So when you're running an application, you're talking to the app view to read the network, and you're talking to the hosts to write to the network, and each of these different pieces sync up together in this open mesh. So we really took a traditional sort of data center model and just turned it inside out, where each piece is a part of the protocol and communicates with the others, and therefore anybody can join into that mesh. [00:22:07] Jeremy: And to just make sure I am tracking, the data repository is the data about the user. So it has your decentralized identifier, it has your replies, your posts. And then you have a relay, whose responsibility is to somehow find all of those data repositories and collect them as they happen so that it can publish them to some kind of event stream. [00:22:41] And then you have the app view, which is receiving messages from the relay as they happen, and then it can have its own store and index that for search. It can collect them in a way so that it can present them onto a UI. That's the sort of thing that's the user facing part, I suppose. [00:23:00] Paul: Yeah, that's exactly it. And again, it's actually quite similar to how the web works. If you combine together the relay and the app view, you got all these different, you know, the web works where you got all these different websites, they're hosting their stuff, and then the search engine is going around, aggregating all that data and turning it into a search experience. [00:23:19] Totally the same model. It's just being applied to more varieties of data, like structured data, like posts and replies, follows, likes, all that kind of stuff. And then instead of producing a search application at the end, I mean, it does that too, but it also produces, you know, timelines and threads and people's profiles and stuff like that.
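A rough sketch of the tail-the-log pattern just described, as an app view might implement it. The endpoint and JSON event shape below are hypothetical; the real firehose (com.atproto.sync.subscribeRepos) frames events as DAG-CBOR inside CAR files, which the official SDKs decode, so this compresses things to keep the indexing logic visible:

```typescript
// Sketch: consume the relay's event stream and fold it into an index.
// Hosts announce themselves to a relay (e.g. via requestCrawl) so the
// relay knows to pull them; consumers then just tail the stream.

import WebSocket from 'ws';

type RepoEvent = {
  did: string;        // author of the change
  collection: string; // e.g. "app.bsky.feed.post"
  uri: string;        // at:// URI of the new record
  record: { text?: string; reply?: { parent: { uri: string } } };
};

// Stand-in for the big horizontally scaled index (ScyllaDB, Postgres, ...).
const replies = new Map<string, string[]>(); // parent post URI -> reply URIs

const ws = new WebSocket('wss://relay.example/stream'); // hypothetical endpoint
ws.on('message', (data) => {
  const evt = JSON.parse(data.toString()) as RepoEvent;
  if (evt.collection !== 'app.bsky.feed.post') return;
  const parent = evt.record.reply?.parent.uri;
  if (parent) {
    // Threads get assembled incrementally: no server-to-server chatter,
    // the aggregator just folds events into its own database.
    const thread = replies.get(parent) ?? [];
    thread.push(evt.uri);
    replies.set(parent, thread);
  }
});
```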
[00:23:41] So it's actually a pretty bog standard way of doing it. That's one of the models that we've seen work for large scale decentralized systems, and so we're just transposing it onto something that is more focused towards social applications. [00:23:58] Jeremy: So I think I'm tracking that the data repository itself, since it has your decentralized identifier and because the data is cryptographically signed, you know it's from a specific user. I think the part that I am still not quite sure about is the relays. I understand if you run all the data repositories, you know where they are, so you know how to collect the data from them. [00:24:22] But if someone's running another system outside of your organization, how do they find your data repositories? Or do they have to connect to your relay? What's the intention for that? Data hosts request relays to pull their data [00:24:35] Paul: That logic runs, again, really similar to how search engines find out about websites. So there is actually a way for one of these data hosts to contact a relay and say, hey, I exist. You know, go ahead and get my stuff. And then it'll be up to the relay to decide if they want it or not. [00:24:52] Right now, generally we're just like, yeah, you know, we want it. But as you can imagine, as the thing matures and gets to higher scale, there might be some trust kind of things to worry about, you know? So that's kind of the naive operation that currently exists. But over time, as the network gets bigger and bigger, it'll probably involve some more traditional kind of spidering behaviors, because as more relays come into the system, each of these hosts, they're not gonna know who to talk to. Relays can bootstrap who they know about by talking to other relays [00:25:22] Paul: You're trying to start a new relay? What they're gonna do is they're going to discover all of the different users that exist in the system by looking at what data they have to start with. It'll probably involve a little bit of manual feeding in at first: whenever I'm starting up a relay, like, okay, there's Bluesky's relay. [00:25:39] Lemme just pull what they know. And then I go from there. And then anytime you discover a new user you don't have, you're like, oh, I wanna look them up. Pull them into the relay too. Right? So there's a pretty straightforward discovery process that you'll just have to bake into a relay to make sure you're crawling as much of the network as possible. ActivityPub federation vs AT Proto [00:25:57] Jeremy: And so I don't think we've defined the term federation, but maybe you could explain what that is and if that is what this is. [00:26:07] Paul: We are so unsure. [00:26:10] Jeremy: Okay. [00:26:11] Paul: Yeah. This has jammed us up pretty bad. Um, because I think everybody pretty strongly agrees that ActivityPub is federation, right? And ActivityPub kind of models itself pretty similarly to email in a way; like, the metaphors they use is that there's inboxes and outboxes, and every ActivityPub server is standing up the full vertical stack. [00:26:37] They set up the primary hosting, the views of the data that's happening there, the interface for the application, all of it. Pretty traditional, like, closed service, but then they're making the perimeter permeable by sending, exchanging, essentially mailing records to each other, right? [00:26:54] That's their kind of logic of how that works.
And that's pretty much in line with, I think, what most people think of as federation. Whereas what we're doing isn't like that. Instead of having a bunch of vertical stacks communicating horizontally with each other, we kind of sliced in the other direction. [00:27:09] We sliced horizontally into this microservices mesh and have, like, a total mix and match of different microservices between different operators. Is that federation? I don't know. Right? We tried to invent a term; it didn't really work, you know. At the moment, we just kind of don't worry about it that much, see what happens, see what the world sort of has to say to us about it. [00:27:36] And beyond that, I don't know. [00:27:42] Jeremy: I think people probably are thinking of something like, say, a Mastodon instance when you're talking about everything being included: the webpage where you view the posts, the Postgres database that's keeping the messages. [00:28:00] And that same instance, it's responsible for basically everything. [00:28:06] Paul: Mm-hmm. [00:28:06] Jeremy: And I believe what you're saying is that the difference with, the authenticated transfer protocol, is that the [00:28:15] Paul: AT Protocol, yep. [00:28:17] Jeremy: And the difference there is that, at the protocol level, you've split it up into the data itself, which can be validated completely separately from other parts of the system. [00:28:31] You could have the JSON files on your hard drive and somebody else can have that same JSON file, and they would know who the user is and that these are real things that user posted. That's like a separate part. And then the relay component that looks for all these different repositories that have people's data, that can also be its own independent thing, where its job is just to output events. [00:29:04] And that can exist just by itself. It doesn't need the application part, the user facing part; it can just be this event stream on itself. And that's the part where it sounds like you can make decisions on who to collect data from. I guess you have to agree that somebody can connect to you and get the users from your data repositories. [00:29:32] And likewise, other people that run relays, they also have to agree to let you pull the users from theirs. [00:29:38] Paul: Yeah, that's right. Yeah. [00:29:41] Jeremy: And so I think the Mastodon example makes sense. But I wonder if the underlying ActivityPub protocol forces you to use it in that way, in like a whole full application that talks to another full application, [00:29:55] or is it more like that's just how people tend to use it and it's not necessarily a characteristic of the protocol? [00:30:02] Paul: Yeah, that's a good question, actually. So, you know, generally what I would say is that what's pretty core to the protocol is the expectations about how the services interact with each other. So the mailbox metaphor that's used in ActivityPub, that design: if I reply to you, I'll update my local database with what I did, and then I'll send a message over to your server saying, hey, by the way, add this reply. [00:30:34] I did this. And that's how they find out about things. That's how they find out about activity outside of their network. That's the part that, as long as you're doing ActivityPub, I suspect you're gonna see reliably happening. I can say for sure that's a pretty tight requirement. [00:30:50] That's ActivityPub.
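For contrast, a minimal sketch of the ActivityPub mailbox delivery just described: one server POSTs an activity directly to another server's inbox. Real deliveries also involve WebFinger discovery and HTTP Signatures, both omitted here:

```typescript
// Sketch: ActivityPub-style delivery. My server notifies your server
// directly; your server stores the record and updates its own views.

async function deliverReply(remoteInbox: string, actor: string, inReplyTo: string) {
  const activity = {
    '@context': 'https://www.w3.org/ns/activitystreams',
    type: 'Create',
    actor,                        // e.g. "https://my.example/users/paul"
    object: {
      type: 'Note',
      attributedTo: actor,
      inReplyTo,                  // URL of the post being replied to
      content: 'Nice post!',
    },
  };
  // Push the record to the recipient's inbox (mailbox metaphor).
  await fetch(remoteInbox, {
    method: 'POST',
    headers: { 'Content-Type': 'application/activity+json' },
    body: JSON.stringify(activity),
  });
}
```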
If you wanted to split it up the way we're talking about, you could. I don't know if you necessarily would want to, because, I don't know. I think I'd have to dig into their stack a little bit more to see how meaningful that would be. I do know that there's some talk of introducing a similar kind of an aggregation method into the ActivityPub world, which I believe they're also calling a relay, to make things even more complicated. [00:31:23] And Nostr has a concept of a relay. So these are three different protocols that are using this term. I think you could do essentially what a search engine does on any of these things. You could go crawling around for the data, pull them into a fire hose, and then tap into that aggregation to produce bigger views of the network. [00:31:41] So that principle can certainly apply anywhere. With AT Protocol, we focused in so hard on that from the get go. We focused really hard on making sure that the data is signed at rest. That's why it's called the Authenticated Transfer Protocol. And that's a nice advantage to have when you're running a relay like this, because it means that you don't have to trust the relay. [00:32:08] Like, generally speaking, when I look at results from Google, you know, I'm trusting pretty well that they're accurately reflecting what's on the website, which is fine. You know, that's not actually a huge risk or anything. But whenever you're trying to build entire applications and you're using somebody else's relay, you could really run into things where they say, like, oh, you know what Paul tweeted the other day? You know, 'I hate dogs.' [00:32:28] And you're like, no, I didn't. That's a lie, right? You just sneak in little lies like that over a while, and it becomes a problem. So having the signatures on the data is pretty important. You know, if you're gonna be trying to get people to cooperate, you gotta manage the trust model. I know that ActivityPub does have mechanisms for signed records. Issues with ActivityPub identifiers [00:32:44] Paul: I don't know how deep they go, if they could fully replace that utility. And then Mastodon, ActivityPub, they also use a different identifier system. They're actually taking a look at DIDs right now; I don't know what's gonna happen there. We're totally on board to, you know, give any kind of insight that we got working on 'em. [00:33:06] But at the moment, they use, I think it's WebFinger based identifiers. They look like emails. So you got host names in there, and those identifiers are being used in the data records. So you don't get that continuous identifier. They actually do have to do that 'hey, I moved, update your records' sort of thing. [00:33:28] And that causes it to, I mean, it works decently well, but not as well as it could. They got it to the point where it moves your profile over and you update all the folks that were following you, so they can update their follow records, but your posts, they're not coming, right? Because that's too far into that mesh of interlinking records. [00:33:48] There's just no chance. So that's kind of the upper limit on that. It's a different set of choices and trade-offs. You're always kind of asking, like, how important is the migration? Does that work out? Anyway, now I'm just kind of digging into differences between the two here.
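The signed-at-rest property is what makes an untrusted relay workable: a consumer can check authorship itself against the keys in the author's DID document. A sketch of the bare verification step, assuming secp256k1 keys (the real format signs a DAG-CBOR commit object, and P-256 keys are also possible; details are compressed here):

```typescript
// Sketch: verifying that bytes from an event stream really came from
// the key holder, no matter which relay forwarded them.

import { secp256k1 } from '@noble/curves/secp256k1';
import { sha256 } from '@noble/hashes/sha256';

function verifyCommit(
  commitBytes: Uint8Array, // canonical bytes of the signed commit
  sig: Uint8Array,         // signature carried with the event
  pubKey: Uint8Array       // key resolved from the DID document
): boolean {
  // If this returns true, the relay could not have tampered with the
  // record without the forgery being detectable.
  return secp256k1.verify(sig, sha256(commitBytes), pubKey);
}
```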
Issues with an identifier that changes and updating old records [00:34:07] Jeremy: So you were saying that with ActivityPub, all of the instances need to be notified that you've changed your identifier. But then all of the messages that they had already received don't point to the new identifier somehow. [00:34:24] Paul: Yeah. You run into basically just the practicalities of actual engineering with that, is what happens, right? Because imagine you got a multimillion user social network. They got all their posts. Maybe the user has, let's say, a thousand posts and 10,000 likes. And that activity can range back three years. [00:34:48] Let's say they changed their identifier, and now you need to change the identifier on all those records. If you're in a traditional system, that's already a tall order; you're going back and rewriting a ton of indexes. Anytime somebody replied to you, they have these links to your posts, and now you've gotta update the identifiers on all of those things. [00:35:11] You could end up with a pretty significant explosion of rewrites that would have to occur. Now that's tough if you're in a centralized model. If you're in a decentralized one, it's pretty much impossible, because when you notify all the other servers, like, hey, this changed, how successful are all of them at actually updating those pointers? There's a good chance that things are gonna fall out of correctness. That's just a reality of it. So if you've got a mutable identifier, you're in trouble for migrations. So the DID is meant to be permanent, and that ends up being the anchoring point. If you lose control of your DID, well, that's it. Managing signing keys by server, paper key reset [00:35:52] Paul: Your account's done. We took some pretty traditional approaches to that, uh, where the signing keys get managed by your hosting server instead of, like, trying to, this may seem really obvious, but if you're from the decentralization community, we spend a lot of time with blockchains, like, hey, how do we have the users hold onto their keys? [00:36:15] You know, and the tooling on that is getting better, for what it's worth. We're starting to see a lot better key pair management in, like, Apple's ecosystem and Google's ecosystem, but it's still in the range of, nah, people lose their keys, you know? So having the servers manage those is important. [00:36:33] Then we have ways of exporting paper keys so that you could kind of adversarially migrate if you wanted to. That was in the early spec; we wanted to make sure that this portability idea works, that you can always migrate your accounts, so you can export a paper key that can override. [00:36:48] And that was how we figured that out. Like, okay, yeah, we don't have to have everything getting signed by keys that are on the user's devices. We just need these master backup keys that can say, you know what, I'm done with that host. No matter what they say, I'm overriding what they think. And that's how we squared that one. [00:37:06] Jeremy: So it seems like one of the big differences with account migration is that with ActivityPub, when you move to another instance, you have to actually change your identifier. [00:37:20] And with the AT Protocol, you're actually not allowed to ever change that identifier.
And maybe what you're changing is just, you have, say, some kind of a lookup, like you were saying. You could use a domain name to look that up, get a reference to your decentralized identifier, but your decentralized identifier can never change. [00:37:47] Paul: It can't change. Yeah. And it shouldn't need to, you know what I mean? It's really a total disaster kind of situation if that happens. So, you know, it's designed to make sure that doesn't happen. In the applications, we use these domain name handles to identify folks. And you can change those anytime you want, because that's really just a user facing thing. [00:38:09] You know, then in practice, what you see pretty often is that, if you change hosts, if you're using, we give some domains to folks, you know, 'cause not everybody has their own domain. A lot of people do, actually, to our surprise; people actually kind of enjoy doing that. But a lot of folks are just using, like, paul.bsky.social as their handle. [00:38:29] And so if you migrated off of that, you probably lose that. So your handle's gonna change, but you're not losing the followers and stuff, 'cause the internal system isn't using paul.bsky.social, it's using that DID, and that DID stays the same. Benefits of domain names, trust signal [00:38:42] Jeremy: Yeah. I thought that was interesting about using the domain names, because when you have a lot of users, everybody's got their own sub-domain. You could have however many millions of users. Does that become an issue at some point? [00:39:00] Paul: Well, it's a funny thing. I mean, like, the number of users, that's not really a problem, 'cause you run into the same kind of namespace crowding problem that any service is gonna have, right? Like, if you just take the subdomain part of it, like the name Paul, yeah, you only get to have one paul.bsky.social. [00:39:15] So in terms of the number of users, that part's fine, I guess. As fine as ever. Where it gets more interesting, of course, is really kind of around the usability questions. For one, it's not exactly the prettiest to always have that bsky.social in there. If we had some kind of solution to that, we would use it. [00:39:35] But the reality is that, you know, now we've committed to the domain name approach, and some folks, you know, they kind of go, ah, that's a little bit ugly. And we're like, yeah, that's life, I guess. The plus side, though, is that you can actually use the top-level domain itself. Like, I'm on pfrazee.com. [00:39:53] That starts to get more fun, and it can actually act as a pretty good trust signal in certain scenarios. For instance, well-known domain names like nytimes.com: strong authentication right there; we don't even need a blue check for it. Similarly, the .gov domain name space is tightly regulated, [00:40:14] so you actually get a really strong signal out of that. Senator Wyden is one of our users, and so he's, I think it's wyden.senate.gov, and same thing, strong identity signal right there. So that's actually a really nice upside. So that's like positives, negatives. [00:40:32] That trust signal only works so far. If somebody were to make pfrazee.net, then that can be a bit confusing. People may not be paying attention to .com vs .net, so I don't wanna give the impression that, ah, we've solved blue checks.
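Under the hood, the handle-to-DID mapping uses the domain itself as proof. A sketch of the two documented resolution paths, a DNS TXT record or an HTTPS well-known file (shapes abridged; treat the details as illustrative):

```typescript
// Sketch: resolve a domain handle like "pfrazee.com" to its DID.
// The handle is a user-facing alias; records reference the DID.

import { promises as dns } from 'node:dns';

async function resolveHandle(handle: string): Promise<string | null> {
  // Path 1: a DNS TXT record at _atproto.<handle> containing "did=...".
  try {
    const records = await dns.resolveTxt(`_atproto.${handle}`);
    for (const chunks of records) {
      const txt = chunks.join('');
      if (txt.startsWith('did=')) return txt.slice(4);
    }
  } catch { /* no TXT record; fall through to HTTPS */ }

  // Path 2: a well-known file served from the domain itself.
  const res = await fetch(`https://${handle}/.well-known/atproto-did`);
  return res.ok ? (await res.text()).trim() : null;
}
```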
It's a complicated and multifaceted situation, but it's got some juice. [00:40:54] It's also kind of nice, too, 'cause a lot of folks that are doing social, they've got other stuff that they're trying to promote, you know? I'm pretty sure that the nytimes would love it if you went to their website. And so tying it to their online presence so directly like that is a really nice feature of it. [00:41:15] And it tells, I think, a good story about what we're trying to do with an open internet, where everybody has their space on the internet where they can do whatever they want, and then these social profiles are that presence showing up in a shared space. It's all kind of part of the same thing. [00:41:34] And that feels like a nice kind of thing to be chasing, you know? And it also kind of speaks well that the naming worked out for us. We chose AT Protocol as a name; you know, we back-acronymed our way into that one, 'cause it was the @ symbol sort of thing. But it actually ended up really reflecting the biggest part of it, which is that it's about putting people's identities at the front, you know, and kind of promoting everybody from a second class identity that's underneath Twitter or Facebook or something like that up into: nope, you're freestanding. You exist as a person independently. [00:42:03] Which is what a lot of it's about. [00:42:12] Jeremy: Yeah, I think just in general, not necessarily just for Bluesky, if people had more of an interest in getting their own domain, that would be pretty cool, if people could tie more of that to something you basically own, right? [00:42:29] I mean, I guess you're leasing it from ICANN or whatever, but [00:42:33] yeah, rather than everybody having an @Gmail, Outlook or whatever, they could actually have something unique that they control, more or less. [00:42:43] Paul: Yeah. And we actually have a little experimental service for registering domain names that we haven't integrated into the app yet, because we just kind of wanted to test it out and see what the appetite is for folks to register domain names. Way higher than you'd think. We did that early on. [00:43:01] You know, it's funny, when you're coming from decentralization as, like, an activist space, right, like a group of people trying to change how this tech works, sometimes you're trying to parse between what might come off as a fascination of technologists compared to what people actually care about. [00:43:20] And it varies, you know. The domain name thing, to a surprising degree, folks really got into that. We saw people picking that up almost straight away, more so than certainly we ever predicted. And I think that's just 'cause, I guess, it speaks to something that people really get about the internet at this point. [00:43:39] Which is great. We did a couple of other things that are similar, and we saw varied levels of adoption on them. We had similar kinds of user facing opening up of the system with algorithms and with moderation. And those have both been pretty interesting in and of themselves. Custom feed algorithms [00:43:58] Paul: So with algorithms, what we did was we set that up so that anybody can create a new feed algorithm. And this was kind of one of the big things that you run into whenever you use the app. If you wanted to create a new kind of 'For You' feed, you can set up a service somewhere that's gonna tap into that fire hose, right?
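Hypothetically, such a feed service reduces to a single HTTP handler returning a "skeleton" of post URIs. The endpoint name and response shape below follow the published getFeedSkeleton convention from the AT Protocol docs; the data itself is made up:

```typescript
// Sketch of a feed generator: serve post URIs only; the app hydrates
// the actual post content afterwards.

import http from 'node:http';

// Imagine this list is continuously filled by tailing the firehose
// (see the earlier event-stream sketch), newest first. URI is made up.
const catPosts: string[] = [
  'at://did:plc:alice123/app.bsky.feed.post/3k2a',
];

http.createServer((req, res) => {
  const url = new URL(req.url ?? '/', 'http://localhost');
  if (url.pathname === '/xrpc/app.bsky.feed.getFeedSkeleton') {
    const body = { feed: catPosts.map((post) => ({ post })) };
    res.setHeader('Content-Type', 'application/json');
    res.end(JSON.stringify(body));
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(3000);
```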
[00:44:18] And then all it needs to do is serve a JSON endpoint that's just a list of URLs: like, here's what should be in that feed. And then the Bluesky app will pick that up and hydrate in the content of the posts and show that to folks. I wanna say, and this is a bit of a misleading number, and I'll explain why, but I think there's about 35,000 of these feeds that have been created. [00:44:42] Now, the reason it's a little misleading is that, I mean, not significantly, but it's not like everybody went and sat down in their IDE and wrote these things. Essentially, one of our users, actually multiple of our users, made little platforms for building these feeds, which is awesome. That's the kind of thing you wanna see, because we haven't gotten around to it. [00:44:57] Our app still doesn't give you a way to make these things. But they did. And so, lots of, you know, there it is. Cool. Like, one person made a kind of combinatorial logic thing that's visual, almost like Scratch: so, if it has this hashtag and includes these users, but not those users, and you're kind of arranging these blocks, and that constructs the feed, and then you publish it on your profile, and then folks can use it, you know? [00:45:18] And so that has been, I would say, fairly successful. Except, we had one group of hackers put in a real effort to make a replacement 'For You' feed, like a magic algorithmic feed kind of thing, and they kept it going for a while and then ended up giving up on it. Most of what we see are actually kind of weird niche use cases for feeds. [00:45:44] You get straightforward ones, like content oriented ones: a cat feed, a politics feed, things like that. It's great; some of those are using ML detection, so like the cat feed is ML detection, so sometimes you get a beaver in there, but most of the time it's a cat. And then we got some that are kind of a funny change in the dynamic of freshness, [00:46:05] or selection criteria, things that you wouldn't normally see. But because they can do whatever they want, you know, they try it out. So, like, the quiet posters one ended up being a pretty successful one. And that one just shows people you're following that don't post that often; when they do, it's just those folks. [00:46:21] I use that one all the time, because, yeah, they get lost in the noise. So it's like a way to keep up with them. Custom moderation and labeling [00:46:29] Paul: The moderation one, that one's a real interesting situation. What we did there, essentially, is we wanted to make sure that the moderation system was capable of operating across different apps, so that they can share their work, so to speak. [00:46:43] And so we created what we call labeling. And labeling is a metadata layer that exists over the network. It doesn't actually live in the normal data repositories. It uses a completely different synchronization, because a lot of these labels are getting produced; it's just one of those things where the engineering characteristics of the labels are just too different from the rest of the system. [00:47:02] So we created a separate synchronization for this, and it's really kind of straightforward. It's: here's a URL, and here's a string saying something like NSFW or gore or, you know, whatever. Then those get merged onto the records brought down by the client.
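A label itself is tiny. Roughly, abridged from the com.atproto.label definitions (the exact field set and the helper below are illustrative):

```typescript
// Sketch: the label objects a moderation service emits.

type Label = {
  src: string;   // DID of the labeler that issued this label
  uri: string;   // what is labeled: a post URI, or an account DID
  val: string;   // short code, e.g. "nsfw", "gore", "screenshot-repost"
  cts: string;   // creation timestamp, ISO 8601
  neg?: boolean; // true to retract a previously issued label
};

// Clients merge labels onto records and apply the user's preferences:
function shouldHide(labels: Label[], hidden: Set<string>): boolean {
  return labels.some((l) => !l.neg && hidden.has(l.val));
}
```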
[00:47:21] And then the client, based on the user's preferences, will put warning screens up, hide it, stuff like that. So yeah, anybody that's running a moderation service is publishing these label streams, and anybody can subscribe to 'em. And you get that kind of collaborative thing we're always trying to do with this. [00:47:34] And we had some users set up moderation services, and so then, as an end user, you find it, it looks like a profile in the app, and you subscribe to it and you configure it and off you go. That one has had probably the least amount of adoption throughout all of 'em. It's, you know, moderation. [00:47:53] It's a sticky topic, as you can imagine, challenging for folks. These moderation services, they do receive reports, you know; like, whenever I'm reporting a post, I choose from all my moderation services who I wanna report this to. What has ended up happening, more than being used to actually filter out subjective stuff, is more kind of either algorithmic systems or what you might call informational ones. [00:48:21] So the algorithmic ones are like, one of the more popular ones is a thing that's looking for posts from other social networks, like a screenshot of a Reddit post or a Twitter post or a Facebook post. Which, you're kinda like, why? You know, but the thing is, some folks just get really tired of seeing screenshots from the other networks, [00:48:40] 'cause often it's like, look what this person said, can you believe it? You know, it's like, ah, okay, I've had enough. So one of our users, aendra, made a moderation service that just runs an ML model that detects it, labels it, and then folks that are tired of it subscribe to it and just hide it, you know? [00:48:57] And so it's like a smart filter kind of thing that they're doing. You know, hypothetically you could do that for things like spiders, you know, like if you've got arachnophobia, things like that. That's like a pretty straightforward, kind of automated way of doing it, which takes a lot of the spice, you know, outta running moderation. [00:49:15] So users have been like, yeah, yeah, okay, we can do that. [00:49:20] Those are user facing ways that we tried to surface the decentralized principle, right, and take advantage of how this whole architecture can have this kind of pluggability into it. Users can self host now [00:49:33] Paul: But then, really, at the end of the day, kind of the important core part of it is those pieces we were talking about before: the hosting, the relay, and the applications themselves, having those be swappable completely. So we tend to think of those as kind of ranges, from infrastructure into application and then into particular client side stuff. [00:49:56] So a lot of folks right now, for instance, they're making their own clients to the application, and those clients are able to do customizations, add features, things like that, as you might expect. [00:50:05] But most of them are not running their own backend. They're just using our backend. But at any point, it's right there for you. You know, you can go ahead and clone that software and start running the backend. If you wanted to run your own relay, you could go ahead and go all the way to that point. [00:50:19] You know, if you wanna do your own hosting, you can go ahead and do that. Um, it's all there. It's really just a question of how much effort your project wants to take. That's the kind of systemically important part.
That's the part that makes sure that the overall mission of de-monopolizing social media online, that's where that really gets enforced. [00:50:40] Jeremy: And so, someone has their own data repository with their own users and their own relay. They can request that your relay collect the information from their data repositories. And that's how these connections get made. [00:50:58] Paul: Yeah, that's right. And we have a fair number of those already. We call those the self hosters, right? And we got, I wanna say, 75 self hosters going right now, which is, you know, we'd love to see that be more, but it's really the folks that, if you're running a service, you probably would end up doing that. [00:51:20] But the folks that are just doing it for themselves, it's kind of the nerdiest of the nerds over there doing that, 'cause it doesn't end up showing itself in the application at all, right? It's totally abstracted away. So that one's really about, like, measure your paranoia kind of thing. [00:51:36] Or if you're just proud of the self-hosting, or curious, you know, that's kind of where that sits at the moment. AT Protocol beyond Bluesky [00:51:42] Jeremy: We haven't really touched on the fact that there's this underlying protocol, and everything we've been discussing has been centered around the Bluesky social network, where you run your own instance of the relay and the data repositories with the purpose of talking to Bluesky. But the protocol itself is also intended to be used for other uses, right? [00:52:06] Paul: Yeah. It's generic. The data types are set up in a way that anybody can build new data types in the system. There's a couple that have already begun. There's Frontpage, which is kind of a Hacker News clone. There's Smoke Signal, which is an events app. There's Bluecast, which is like a Twitter Spaces, Clubhouse kind of thing. [00:52:29] Those are the folks that are kind of willing to trudge into the bleeding edge and deal with some of the rough edges there, for pretty, I think, obvious reasons. A lot of our work gets focused in on making sure that the Bluesky app and that use case is working correctly. [00:52:43] But we are starting to round the corner on getting to a full 'how to make alternative applications' state. If you go to atproto.com, there's a kind of introductory tutorial that actually shows that whole stack and how it's done. So it's getting pretty close. There's still a couple of things that we wanna finish up. [00:53:04] Jeremy: So, in a way, you can almost think of it as having an eventually consistent data store on the network. You can make a traditional web application with a relational database, and the source of truth can actually be wherever that data repository is stored on the network. [00:53:24] Paul: Yeah, that's exactly it. It is an eventually consistent system. That's exactly right. The source of truth is their data repo. And that relational database that you might be using, I think the best way to think about it is like secondary indexes or computed indexes, right? They reflect the source of truth. [00:53:43] Paul: This is getting kind of grandiose. I don't tend to pose it in these terms, but it is almost like we're trying to have an OS layer at a protocol level.
It's like having your own [00:53:54] network-wide database or network-wide file system, you know; these are the kinds of facilities you expect out of a platform like an OS. And so the hope would be that this ends up getting that usage outside of just the initial social app, like what we're doing here. [00:54:12] If it doesn't end up working out that way, if this ends up, you know, good for the Twitter style use case and the other ones not so much, that's fine too. You know, that's our initial goal, but we wanted to make sure to build it in a way that, like, yeah, there's evolvability to it, to make sure that you're getting kind of the most utility you can out of it. Peer-to-peer and the difficulty of federated queries [00:54:30] Jeremy: Yeah, I can see some of the parallels to some of the decentralized stuff that, I suppose, people are still working on, but more on the peer-to-peer side, where the idea was that I can have a network host this data. And in this case, it's a network of maybe larger providers, where they could host a bunch of people's data, versus just straight peer to peer, where everybody has to have a piece of it. [00:54:57] And it seems like your angle there was really the scalability part. [00:55:02] Paul: It was the scalability part. And there's great work happening in peer-to-peer. There's a lot of advances on it that are still happening. I think really the limiter that you run into is running queries against aggregations of data. Because you can get the network, you know, BitTorrent sort of proved that you can do distributed open horizontal scaling of hosting. [00:55:29] You know, that basic idea of, hey, everybody's got a piece and you sync it from all these different places. We know you can do things like that. What nobody's been able to really get into a good place is running queries across large data sets in a model like that. There's been some research into what's called federated queries, which is where you're sending a query to multiple different nodes and asking them to fulfill as much of it as they can and then collating the results back. But it didn't work that well. That's still kind of an open question, and until that is in a place where it can reliably work, and at very large scales, you're just gonna need a big database somewhere that does give the properties that you need. You need these big indexes. And once we were pretty sure of that requirement, then from there you start asking, all right, what else about the system [00:56:29] could we make easier if we just apply some more traditional techniques and merge that in with the peer-to-peer ideas? And so the key hosting, that's an obvious one. You know, availability? Let's just have a server. It's no big deal. But you're trying to make as much of them as dumb as possible, [00:56:47] so that they have that easy replaceability. Moderation challenges [00:56:51] Jeremy: Earlier you were talking a little bit about the moderation tools that people could build themselves. There was some process where people could label posts and then build their own software to determine what a feed should show per person. [00:57:07] Paul: Mm-hmm. [00:57:07] Jeremy: But I think before that layer, for the platform itself, there's a base level of moderation that has to happen. [00:57:19] Paul: Yeah. [00:57:20] Jeremy: And I wonder if you could speak to, as the app has grown, how that's handled. [00:57:26] Paul: Yeah.
You gotta take some requirements in moderation pretty seriously to start, and with decentralization, sometimes that gets a little bit dropped. You need to have systems that can deal with questions about CSAM. So you got those big questions you gotta answer, and then you got stuff that's more in the line of, alright, what makes a good platform? [00:57:54] What kind of guarantees are we trying to give there? So not just legal concerns, but, you know, good product experience concerns. There we're in the realm of, like, spam and abusive behavior and things like that. And then you get into even more fine grained stuff, like what is a person's subjective preference and how can they kind of make their thing better? [00:58:15] And so you get a kind of telescoping level of concerns, from the really big, the legal sort of concerns, down to the really small, subjective preference kind of concerns. And actually, that telescoping maps really closely to the design of the system as well, where the further you get up into that legal concern territory, you're now in core infrastructure, [00:58:39] and then you go from infrastructure, which is the relay, down into the application, which is kind of a platform, and then down into the client. And that's where we're having those labelers apply. And for each of them, as you move closer to infrastructure, the importance of the decision gets bigger too. [00:58:56] So you're trying to do just legal concerns with the relay, right? Stuff that you objectively can, everybody's in agreement, like, yeah, yeah, yeah, you know, no biggie, don't include that. The reason is that, at the relay level, anybody that's using your relay depends on the decisions you're making; that sort of selection you're doing, any filtering you're doing, they don't get a choice after that. [00:59:19] So you wanna try to keep that focused really on legal concerns and doing that well, so that applications that are downstream of it can make their choices. The applications themselves, you know, somebody can run a parallel, I guess you could call it a parallel platform. So we got Bluesky doing the microblogging use case; other people can make an application doing the microblogging use case. So there's choice that users can easily enough switch between. It's still a big choice, [00:59:50] so we're operating that in many ways like any other app nowadays might do it. You've got policies, you know, for what's acceptable on the network. You're still trying to keep that to be as, you know, objective as possible, make it fair, things like that. You want folks to trust your T&S team. But from the kind of systemic decentralization question, you get to be a little bit more opinionated, [01:00:13] down all the way into the client, with that labeling system, where, you know, this is individuals turning preferences on and off. You can be as opinionated as you want on that layer. And that's how we have basically approached this. And in a lot of ways, it really just comes down to, in the day to day, the volume of moderation tasks is huge. [01:00:40] You don't actually have high stakes moderation decisions most of the time. Most of 'em are, you know, pretty straightforward: shouldn't have done that, that's gotta go. You get a couple every once in a while that are a little spicier, or a policy that's a little spicier.
And it probably feels pretty common to end users, but that just speaks to how much of a challenge moderation is, the volume of reports and problems that come through. [01:01:12] And we don't wanna make it so that the system is seized up trying to decentralize itself. You know, it needs to be able to operate day to day. What you wanna make is, you know, back pressure, checks on that power, so that if an application or a platform does really start to go down the wrong direction on moderation, then people can have this credible exit, [01:01:36] this way of saying, you know what, that's a problem, we're moving from here. And somebody else can come in with different policies that better fit people's expectations about what should be done at these levels. So yeah, it's not about taking away authority, it's about checking authority, you know, kind of a checks and balances mentality. [01:01:56] Jeremy: And high level, 'cause you're saying how there's such a high volume of things where you know what it is, you know you wanna remove it, but there's just so much of it. So do you have automated tools to label these things? Do you have a team of moderators? Do they have to understand all the different languages that are coming through your network? [01:02:20] Paul: Yes, yes, yes, and yes. Yeah. You use every tool at your disposal to stay on top of it, 'cause you're trying to move as fast as you can. The problems showing up, you know, the slower you are to respond to them, the more irritating it is to folks. Likewise, if you make a missed call, if somebody misunderstands what's happening, which, believe me, sometimes just figuring out what the heck is going on is hard. [01:02:52] Paul: People's beefs definitely surface up to the moderation team, misunderstandings or wrong applications. Moderators make mistakes, so you're trying to maintain a pretty quick turnaround on this stuff. That's tough. Especially when you have to move fast on some really upsetting content that can make its way through; again, illegal stuff, for instance, or gore videos, stuff like that, you know, it's a real problem. [01:03:20] So yeah, you've gotta be using some automated systems as well, clamping down on bot rings and spam. You know, you can imagine that's gotten a lot harder thanks to LLMs. Just doing text analysis by dumb statistics of what they're talking about, that doesn't even work anymore, [01:03:41] 'cause the LLMs are capable of producing consistently varied responses while still achieving the same goal of plugging an online betting site of some kind, you know? So we do use kind of dumb heuristic systems where it works, but boy, that won't work for much longer. [01:04:03] And we've already got cases where it's, oh boy, so the moderation's in a dynamic place, to say the least, right now, with LLMs coming in. It was tough before and

字谈字畅
#247: "We Can Have Another Cry"

字谈字畅

Play Episode Listen Later Jan 14, 2025 90:35


At the end of 2024, the W3C published a new edition of Requirements for Chinese Text Layout (《中文排版需求》), the first large-scale structural overhaul the document has undergone since its initial release. At the start of 2025, we had the pleasure of inviting Fuqiao Xue, who leads W3C's internationalization work and edits Requirements for Chinese Text Layout, to walk us through the update's progress and its logical framework, and to share the details and vision of W3C's internationalization work. Reference links: The W3C (World Wide Web Consortium) formally became a public-interest non-profit organization in 2023; W3C accessibility standards and guidelines; from the W3C FAQ: "How is localization related to internationalization?"; Richard Ishida, internationalization expert and former lead of W3C's internationalization standards work; W3C Requirements for Chinese Text Layout (中文排版需求); 字谈字畅 episode 007: "Wanted": contributors to the cause of Chinese typography; episode 143: "Why are Chinese e-books still so bad?"; episode 144: a decade of small steps for Chinese layout in CSS; episode 186: progress on Requirements for Chinese Text Layout; Fuqiao Xue's recent article on The Type, "The new Requirements for Chinese Text Layout: structural unification and future possibilities", introducing the document's updates; the W3C Patent Policy; the language matrix, one of the key frameworks W3C uses to drive its internationalization work; Safari 18.2 begins to support inline (inter-character) ruby annotation layout, based on the inter-character value of the CSS ruby-position property; Unicode variation sequences; Unicode Technical Report #59: East Asian Spacing; the GitHub repo for Requirements for Chinese Text Layout. Guest: Fuqiao Xue, W3C internationalization expert, devoted to advancing text layout requirements for the world's writing systems. Hosts: Eric, typography researcher, translator, and executive editor of The Type; Zhengyu (蒸鱼), designer and editor at The Type. We welcome your comments and feedback at podcast@thetype.com. If you enjoy this episode, you're also welcome to support us with a donation via Alipay: hello@thetype.com.

ShopTalk » Podcast Feed
646: Hard Code & Soft Skills

ShopTalk » Podcast Feed

Play Episode Listen Later Dec 16, 2024 71:14


Show Description: The employees of the startup RAPPTR (Quinn Pine, Blaze Lightyear, Astra Q, and Eddit Kit) find themselves in quite a pickle as a rogue investor tries to take over the company while CEO Chad is recovering in his napping pod. Join us on a workplace-themed role playing adventure created by Dave Rupert. Guests: Alex (Eddie), a web developer from Atlanta, GA who specializes in Vue.js, Python, and PHP; Ben (Blaze), a user-centered frontend developer with a focus on web accessibility; Miriam (Astra), co-founder of OddBird, working on CSS in a W3C group. Links: Hard Code & Soft Skills; Lasers & Feelings by John Harper; one.seven design studio. Sponsors: Bluehost. Find unique domains, web hosting, and WordPress tools, all in one place. Empower your business or digital agency with Bluehost.

The Brave Marketer
Privacy and Digital Identity on a Blockchain

The Brave Marketer

Play Episode Listen Later Dec 4, 2024 30:38


Kyle Den Hartog, Security Engineer at Brave Software, discusses emerging use cases for crypto, and their respective privacy implications. He emphasizes the urgency for innovative solutions to safeguard personal information in our digital financial systems given current privacy gaps on blockchains.  Key Takeaways:  The delicate balance between transparency and privacy in the blockchain era The importance of community engagement in driving technological advancements and shaping the future of decentralized commerce The role of the browser in powering the evolving standards of Web3 Guest Bio:  Kyle Den Hartog, Security Engineer at Brave Software, is helping to promote a world where the Web can be more private and secure for everyone. This vision led him to be an eager contributor to the design and development of standards in W3C and IETF. With a background in security and cryptography, he has worked in domain verticals such as digital identity and Web3, and he now works on browsers at Brave. His long term focus remains on improving our symbiotic relationship with technology, and he's active in communities related to these topics. ---------------------------------------------------------------------------------------- About this Show: The Brave Technologist is here to shed light on the opportunities and challenges of emerging tech. To make it digestible, less scary, and more approachable for all! Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you're a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together. The Brave Technologist Podcast is hosted by Luke Mulks, VP Business Operations at Brave Software, makers of the privacy-respecting Brave browser and Search engine, and now powering AI everywhere with the Brave Search API. Music by: Ari Dvorin Produced by: Sam Laliberte

thinkfuture with kalaboukis
1036 BREAKING BARRIERS IN ACCESSIBILITY TECH

thinkfuture with kalaboukis

Play Episode Listen Later Nov 20, 2024 34:09


Visit Mike at: https://www.linkedin.com/in/mike-paciello-1231741/ Be A Better YOU with AI: Join The Community: https://10xyou.us Get AIDAILY every weekday. Subscribe at https://aidaily.us Read more here: https://thinkfuture.com --- In this episode of thinkfuture, host Chris Kalaboukis dives into the future of accessibility with Mike Paciello, a tech accessibility pioneer with over 40 years in the industry. From his early days at Digital Equipment Corporation to his role in launching the Web Accessibility Initiative at the W3C, Mike shares how he's helped shape the accessibility landscape. Now the Chief Accessibility Officer at AudioEye, Mike reveals how the company is transforming web accessibility by combining AI-powered automated remediation with human expertise, addressing the massive gap in accessible websites. They discuss the current challenges with voice interfaces, the limitations of overlay solutions, and their vision for a future where tech adapts seamlessly to individual needs. This episode is packed with insights on creating an inclusive digital world for people of all abilities. --- Support this podcast: https://podcasters.spotify.com/pod/show/thinkfuture/support

Environment Variables
Shaping Web Sustainability with the W3C

Environment Variables

Play Episode Listen Later Nov 14, 2024 36:39


In this episode of Environment Variables, host Chris Adams dives into the evolving landscape of sustainable web development with Alexander Dawson and Tzviya Siegman from the World Wide Web Consortium (W3C). Dawson and Siegman discuss the W3C's efforts to develop Web Sustainability Guidelines (WSG), a comprehensive set of evidence-based practices aimed at reducing the environmental impact of web technologies. They explore the creation and potential impact of these guidelines, especially as global interest grows in embedding sustainable practices within web standards. The episode also covers the challenges of driving adoption across public and private sectors, the role of testability in sustainability guidelines, and future directions for standards that minimize digital carbon footprints. This engaging conversation provides listeners with insights into how W3C's sustainability initiatives could shape the future of the web.

Bad Attitudes: An Uninspiring Podcast About Disability
Episode 125: Access on the Interwebs

Bad Attitudes: An Uninspiring Podcast About Disability

Play Episode Listen Later Oct 20, 2024 10:36


Some people are (apparently) surprised to learn that accessibility extends to the Internet. Let's talk about accessible web design, and how important internet accessibility is to the disabled community. W3C: https://www.w3.org/WAI/tips/designing/ Support the show: join the mailing list, apply to be a guest, or visit the Bad Attitudes Shop: badattitudesshop.etsy.com. Become a member: ko-fi.com/badattitudespod. Email badattitudespod@gmail.com. Follow @badattitudespod on Instagram, Facebook, and Twitter. Watch my TEDx talk. Be sure to leave a rating or review wherever you listen! FairyNerdy: https://linktr.ee/fairynerdy

Igalia
History of the Web - Coralie Mercier

Igalia

Play Episode Listen Later Oct 8, 2024 46:51


Igalia's Brian Kardell chats with the Head of W3C Marketing & Communications, Coralie Mercier, about her history and relationship with the Web and the W3C.

Igalia
Igalia Chats: TPAC 2024

Igalia

Play Episode Listen Later Oct 4, 2024 52:09


Igalia's Brian Kardell and Eric Meyer talk about their recent trip to W3C's TPAC 2024

BookNet Canada
The W3C and digital publishing

BookNet Canada

Play Episode Listen Later Sep 26, 2024 28:39


This month, we invited Wendy Reid, the Accessibility and Publishing Standards Lead at Rakuten Kobo and the co-chair of the W3C (World Wide Web Consortium) Advisory Board to talk about the W3C, the EPUB Fixed Layout Accessibility document, and more. Find the transcript here: https://www.booknetcanada.ca/blog/2024/9/26/podcast-the-w3c-and-digital-publishing

Les Cast Codeurs Podcast
LCC 315 - temperatures are not deterministic

Les Cast Codeurs Podcast

Play Episode Listen Later Sep 17, 2024 110:08


JVM Summit, virtual threads, application stacks, licenses, determinism and LLMs, quantization, two tools of the episode, and much more. Recorded September 13, 2024. Episode download: LesCastCodeurs-Episode-315.mp3 News Languages Netflix uses Java heavily and ran into a problem with virtual threads in Java 21. Netflix engineers analyze the problem in this article: https://netflixtechblog.com/java-21-virtual-threads-dude-wheres-my-lock-3052540e231d Virtual threads can improve performance but pose challenges. A locking problem was identified: virtual threads blocking one another, leading to degraded performance and instability. Netflix is working to resolve these issues and take full advantage of virtual threads. A syntax to indicate that a type is nullable or null-restricted may be coming to Java https://bugs.openjdk.org/browse/JDK-8303099 Foo! would forbid null; Foo? would indicate that null is accepted; Foo?[]! would be a non-null array of nullable values. There are also syntax ideas for initializing null-restricted arrays. JEP: https://openjdk.org/jeps/8303099 The JVM Language Summit 2024 videos are online https://www.youtube.com/watch?v=OOPSU4LnKg0&list=PLX8CzqL3ArzUEYnTa6KYORRbP3nhsK0L1 Project Leyden Update; Project Babylon - Code Reflection; Valhalla - Where Are We?; An Opinionated Overview on Static Analysis for Java; Rethinking Java String Concatenation; Code Reflection in Action - Translating Java to SPIR-V; Java in 2024; Type Specialization of Java Generics - What If Casts Have Teeth? (with our very own Rémi Forax!); also "tip or tail" for the whole ecosystem. A few links on Babylon: code reflection to express foreign languages (SQL) in Java: https://openjdk.org/projects/babylon/ and its LINQ-emulation example https://openjdk.org/projects/babylon/articles/linq Libraries Micronaut releases version 4.6 https://micronaut.io/2024/08/26/micronaut-framework-4-6-0-released/ Essentially a big update of tons of modules to the latest dependency versions. MicroProfile 7 makes some incompatible changes and evolutions https://microprofile.io/2024/08/22/microprofile-7-0-release/#general It removes Metrics and replaces it with Telemetry (metrics, logs, and tracing); Metrics remains a spec, but standalone. MicroProfile 7 depends on the Jakarta Core Profile and no longer packages it. MicroProfile OpenAPI 4 and Telemetry 2 bring incompatible changes. Quarkus 3.14 with Let's Encrypt and reflection-free Jackson serializers https://quarkus.io/blog/quarkus-3-14-1-released/ Hibernate ORM 6.6; Jackson serializers without reflection; simple installation of Let's Encrypt certificates (notably with the command line helper, handy with ngrok to tunnel to your localhost); backpedaling on @QuarkusTestResource vs @WithTestResource following reports of OOMEs and the slowness of better-isolated tests. Structured logging in Spring Boot 3.4 https://spring.io/blog/2024/08/23/structured-logging-in-spring-boot-3-4 Structured logs (often JSON) let you easily ship them to backends like Elastic or AWS CloudWatch, and tie them into reporting and alerting. Spring Boot 3.4 supports structured logging out of the box. It supports the Elastic Common Schema (ECS) and Logstash formats, but it can also be extended with your own formats.
You can also enable structured logging to a file. This can be used, for example, to print human-readable logs to the console and write structured logs to a file for machine ingestion. Infrastructure CockroachDB, which had a Business Source License approach (source-available, then Apache license three years later), is now moving to a proprietary license with source available https://www.cockroachlabs.com/blog/enterprise-license-announcement/ The Polyform project offers standardized licenses for free-vs-paid needs https://polyformproject.org/ Cloud Azure Functions: how cold starts are optimized https://www.infoq.com/articles/azure-functions-cold-starts/ Functions have naturally high latency, and not all long latencies hurt the business; cold starts can be measured with the cloud provider's tools, so make use of them. Their experience: 381 ms cold and 10 ms afterwards; use tracing for end-to-end latency. Strategies: keep-alive pings (wake the function at regular intervals to stay warm); in the function code, initialize connections and assembly loading during initialization; in host.json, configure batching, disable file-system logging, etc.; deploy functions as zips; reduce the size of code and files (which get copied to the cold server); on .NET, enable ReadyToRun, which helps the JIT compiler; Azure instances with more CPU and memory cost more but lower the cold start; dedicated Azure instances for your functions (not shared with other tenants). The article then walks through concrete examples. Web Vue.js 3.5 is out https://blog.vuejs.org/posts/vue-3-5 Key new things in Vue.js 3.5. Performance and memory optimizations: memory consumption reduced significantly (by 56%); improved performance for large reactive arrays; fixes for stale computed values and memory leaks. New features: Reactive Props Destructure (simpler props declaration with default values); Lazy Hydration (control over hydration of async components); useId() (stable unique ID generation for SSR apps); data-allow-mismatch (suppresses hydration mismatch warnings). Custom element improvements: support for app configuration, APIs to access the host and shadow root, mounting without Shadow DOM, and nonce for tags. useTemplateRef() (template refs via the useTemplateRef() API); deferred Teleport (teleport content to elements rendered after the component mounts); onWatcherCleanup() (register cleanup callbacks in watchers). Data and Artificial Intelligence We often hear about quantized Large Language Models, meaning that, for example, 8-bit integers are used instead of 32-bit floats to reduce GPU memory needs while keeping precision close to the original. This article explains the quantization process very visually and intuitively: https://newsletter.maartengrootendorst.com/p/a-visual-guide-to-quantization
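As a companion to that article, here is a sketch of the simplest flavor of quantization it covers (absmax int8): map 32-bit floats onto 8-bit integers with a single scale factor, and accept a little rounding error on the way back:

```typescript
// Sketch: absmax int8 quantization of a weight vector.

function quantize(weights: number[]): { q: Int8Array; scale: number } {
  const absMax = Math.max(...weights.map(Math.abs));
  const scale = absMax / 127; // 127 = largest int8 magnitude
  const q = new Int8Array(weights.map((w) => Math.round(w / scale)));
  return { q, scale };
}

function dequantize(q: Int8Array, scale: number): number[] {
  // Close to, but not equal to, the original floats.
  return Array.from(q, (v) => v * scale);
}

// Example: 4x less memory per weight, at the cost of rounding error.
const { q, scale } = quantize([0.12, -1.5, 0.033, 0.9]);
console.log(dequantize(q, scale));
```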
Guillaume keeps sharing his adventures with the LangChain4j framework. How to do text classification: https://glaforge.dev/posts/2024/07/11/text-classification-with-gemini-and-langchain4j/ using LangChain4j's TextClassification class, which takes a vector-embeddings-based approach to compare similar texts; using few-shot prompting, in different variants, in this other article: https://glaforge.dev/posts/2024/07/30/sentiment-analysis-with-few-shots-prompting/ and also how to go multimodal with LangChain4j (with the Gemini model) to analyze texts and images, but also videos, audio content, and PDF files: https://glaforge.dev/posts/2024/07/25/analyzing-videos-audios-and-pdfs-with-gemini-in-langchain4j/ To vary the predictability or creativity of LLMs, some hyperparameters can be adjusted, such as temperature, top-k, and top-p. But do you really know how these parameters work? Two very clear and intuitive articles explain how they operate: https://medium.com/google-cloud/is-a-zero-temperature-deterministic-c4a7faef4d20 https://medium.com/google-cloud/beyond-temperature-tuning-llm-output-with-top-k-and-top-p–24c2de5c3b16 Temperature reshapes the probability of the next token, but some variables remain: floating-point approximations, different stacks making these choices differently, and what to do when two tokens are tied in probability. There are also other approaches to shaping the LLM's responses: top-k (which avoids infrequent tokens) and top-p, which keeps the n tokens that together account for p% of the probabilities. Temperature is applied first, then top-k, then top-p; the articles explain what to use when (a small Java sketch of this pipeline appears at the end of this block). The OSI proposes a definition of open source AI: https://www.technologyreview.com/2024/08/22/1097224/we-finally-have-a-definition-for-open-source-ai/ after big debates in recent months. Usable for any purpose without needing permission; researchers can inspect the components and study how the system works; the system can be modified for any purpose, including changing its behavior, and shared with others, with or without modification, whatever the use. It also defines levels of transparency (training data, source code, weights). A long retrospective of PostgreSQL at insane volumes and its locking problems: https://ardentperf.com/2024/03/03/postgres-indexes-partitioning-and-lwlocklockmanager-scalability/ an article to reassure you that you will probably never hit the problem; a story told as a post-mortem, with advice for avoiding these cliffs. Tooling. A first look at Gradle's future declarative notation: https://blog.gradle.org/declarative-gradle-first-eap an article that explains what this new declarative Gradle syntax looks like (in addition to Groovy and Kotlin). A few videos show the support in Android Studio, for now, as well as in an experimental tool, while waiting for support in all IDEs. The idea is to avoid scripting and really have only a description of your build. This should improve Gradle support in IDEs and enable fast completion, etc. Is it just me, or do we have Maven here?
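To make the temperature/top-k/top-p discussion above concrete, here is a minimal, hypothetical Java sketch (not from the linked articles) that applies the three steps in the order described: temperature first, then top-k, then top-p, over a toy distribution of five tokens.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Toy next-token filtering pipeline (illustrative only): softmax with
// temperature, then top-k filtering, then top-p (nucleus) truncation.
public class SamplingDemo {
    record Candidate(String token, double prob) {}

    public static void main(String[] args) {
        String[] tokens = { "cat", "dog", "car", "tree", "sky" };
        double[] logits = { 2.0, 1.8, 0.5, 0.2, -1.0 };
        double temperature = 0.7;
        int topK = 4;
        double topP = 0.9;

        // 1. Temperature: divide logits before softmax. Lower values sharpen
        //    the distribution, higher values flatten it.
        double sum = 0;
        double[] probs = new double[logits.length];
        for (int i = 0; i < logits.length; i++) {
            probs[i] = Math.exp(logits[i] / temperature);
            sum += probs[i];
        }
        List<Candidate> candidates = new ArrayList<>();
        for (int i = 0; i < probs.length; i++) {
            candidates.add(new Candidate(tokens[i], probs[i] / sum));
        }
        candidates.sort(Comparator.comparingDouble(Candidate::prob).reversed());

        // 2. Top-k: keep only the k most probable tokens.
        candidates = candidates.subList(0, Math.min(topK, candidates.size()));

        // 3. Top-p: keep the smallest prefix whose probabilities reach p.
        List<Candidate> nucleus = new ArrayList<>();
        double cumulative = 0;
        for (Candidate c : candidates) {
            nucleus.add(c);
            cumulative += c.prob();
            if (cumulative >= topP) break;
        }

        // Renormalize the survivors; a sampler would now draw from these.
        double kept = nucleus.stream().mapToDouble(Candidate::prob).sum();
        for (Candidate c : nucleus) {
            System.out.printf("%s: %.3f%n", c.token(), c.prob() / kept);
        }
    }
}
```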
Firefox support in Puppeteer: https://hacks.mozilla.org/2024/08/puppeteer-support-for-firefox/ Puppeteer, the browser automation library, now officially supports Firefox as of version 23. This advance lets developers write automation scripts and run end-to-end tests on Chrome and Firefox interchangeably. The Firefox integration in Puppeteer rests on WebDriver BiDi, a cross-browser protocol being standardized at the W3C. WebDriver BiDi makes multi-browser support easier and paves the way for simpler, more efficient automation. Puppeteer's main features, such as log capture, device emulation, network interception, and script preloading, are now available for Firefox. Mozilla sees WebDriver BiDi as an important step toward a better cross-browser testing experience. Experimental support for CDP (Chrome DevTools Protocol) in Firefox will be removed at the end of 2024 in favor of WebDriver BiDi. Although Firefox is officially supported, some APIs remain unsupported and will be the subject of future work. Guillaume created a @Retry annotation for JUnit 5, to re-attempt the execution of a "flaky" test: https://glaforge.dev/posts/2024/09/01/a-retryable-junit–5-extension/ Guillaume had not found a default extension in JUnit 5 to replace JUnit 4's Retry rules, but an interesting discussion followed on social media, with links to extensions that implement this approach, such as JUnit Pioneer, which offers plenty of useful extensions: https://junit-pioneer.org/docs/retrying-test/ or the rerunner extension: https://github.com/artsok/rerunner-jupiter Arnaud also suggested configuring Maven Surefire to automatically re-run failing tests: https://maven.apache.org/surefire/maven-surefire-plugin/examples/rerun-failing-tests.html The philosophical question is: are intermittently failing tests tolerable? (a small usage sketch of the JUnit Pioneer approach appears at the end of this block) Architecture. A former GraphQL fan is done with GraphQL and reflects on the alternatives: https://bessey.dev/blog/2024/05/24/why-im-over-graphql/ GraphQL problems. Security: authorization attacks, difficulty of rate limiting, analysis of malicious queries. Performance: the N+1 problem (data fetching and authorization), memory impact when parsing invalid queries. Added complexity: coupling between business logic and the transport layer, difficulty of maintenance and testing. Solutions considered: adopting REST APIs conforming to OpenAPI 3.0+, better documentation and type safety, tools to generate typed client/server code. Two approaches to implementing OpenAPI: "implementation first" (generating the specification from the code) and "specification first" (generating the code from the specification). An interesting take from someone who does not use GraphQL daily; these were problems that were supposed to be fixed as the ecosystem and tooling matured, but for this person it showed its limits.
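As a companion to the flaky-test discussion above, here is a minimal JUnit 5 sketch using JUnit Pioneer's @RetryingTest, one of the extensions linked above; it assumes the junit-pioneer dependency is on the test classpath, and the simulated flakiness is made up.

```java
import org.junit.jupiter.api.Assertions;
import org.junitpioneer.jupiter.RetryingTest;

// JUnit Pioneer re-runs the test up to 3 times; the test passes as soon as
// one attempt succeeds, and only failing every attempt fails the build.
class FlakyIntegrationTest {

    private static int attempts = 0;

    @RetryingTest(3)
    void eventuallySucceeds() {
        attempts++;
        // Simulates a race with an external service: the first two attempts
        // fail, the third passes, so @RetryingTest reports success.
        Assertions.assertTrue(attempts >= 3, "flaky failure on attempt " + attempts);
    }
}
```

Whether this is a good idea is exactly the philosophical question raised above: retries hide intermittent failures rather than fixing them.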
A 1980 presentation by Grace Hopper on the future of computers: https://youtu.be/AW7ZHpKuqZg?si=w_o5_DtqllVTYZwt It is striking how modern what she describes is: problems we still have today, positive leadership; she describes the advantage of systems made of several computers. Recently declassified. Leader election with conditional writes on S3/GCS/Azure buckets: https://www.morling.dev/blog/leader-election-with-s3-conditional-writes/ Leader election is the process of choosing one node among several to perform a task. Traditionally, leader election is done with a distributed locking service like ZooKeeper. Amazon S3 recently added support for conditional writes, which enables leader election without a separate service. The leader election algorithm works by having the nodes race to create a lock file in S3. The lock file includes an epoch number, which is incremented every time a new leader is elected. Nodes can determine whether they are the leader by listing the lock files and checking the epoch number. Beware: there can be several elected leaders at once (drifted clocks), so that has to be handled too (a minimal sketch of the acquisition step appears at the end of this block). Methodologies. Guillaume Laforge interviewed by Sfeir, where he talks about the importance of curiosity, of sharing, and of code quality, sprinkled with a few photos of the Cast Codeurs! https://www.sfeir.dev/success-story/guillaume-laforge-maestro-de-java-et-esthete-du-code-propre/ Security. How CrowdStrike brought Windows and many companies to their knees: https://next.ink/144464/crowdstrike-donne-des-details-techniques-sur-son-fiasco/ The incident came from a configuration update of Falcon, CrowdStrike's EDR: https://www.crowdstrike.com/blog/falcon-update-for-windows-hosts-technical-details/ What is an EDR? An Endpoint Detection and Response system monitors your machine (network access, logs, ...) to detect unusual usage. This spy has to interact with the low layers of the system (network, sockets, system logs) and therefore hooks into the operating system kernel. It reports information live to a platform that can then adapt the responses live. While the incident lasted less than an hour and a half on CrowdStrike's side, more than 8 million machines ended up out of service, stuck on the Blue Screen of Death, according to Microsoft: https://blogs.microsoft.com/blog/2024/07/20/helping-our-customers-through-the-crowdstrike-outage/ This is not the first time; it had already happened a few months ago on Linux. Since that was a kernel incompatibility, it was less serious, because IT departments handle these problems better under Linux: https://stackdiary.com/crowdstrike-took-down-debian-and-rocky-linux-a-few-months-ago-and-no-one-noticed/ CIS benchmarks, a pillar for the security of our cloud environments, and not only those! (Katia HIMEUR TALHI) https://blog.cockpitio.com/security/cis-benchmarks/ The CIS is a non-profit organization that develops standards to improve cybersecurity. The CIS benchmarks are a set of recommendations and good practices for securing IT systems. They can be used to harden security, comply with regulations, and standardize practices.
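To illustrate the leader-election item above, here is a minimal, hypothetical Java sketch of the acquisition step with the AWS SDK for Java 2.x, assuming an SDK version recent enough to expose S3's conditional-write parameter; the bucket and key names are made up, and the epoch and lease-renewal logic from the article is omitted.

```java
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.model.S3Exception;

// Sketch of the lock-acquisition step: an If-None-Match: * conditional PUT
// succeeds for exactly one contender; HTTP 412 means the lock already exists.
public class S3LeaderElection {

    static boolean tryBecomeLeader(S3Client s3, String nodeId) {
        try {
            s3.putObject(PutObjectRequest.builder()
                            .bucket("my-coordination-bucket") // hypothetical bucket
                            .key("locks/leader")              // shared lock object
                            .ifNoneMatch("*")                 // create only if absent
                            .build(),
                    RequestBody.fromString(nodeId));
            return true;                                      // our PUT won the race
        } catch (S3Exception e) {
            if (e.statusCode() == 412) {                      // Precondition Failed
                return false;                                 // another node holds the lock
            }
            throw e;                                          // unrelated failure
        }
    }

    public static void main(String[] args) {
        try (S3Client s3 = S3Client.create()) {
            String nodeId = java.util.UUID.randomUUID().toString();
            System.out.println(tryBecomeLeader(s3, nodeId)
                    ? "I am the leader" : "Another node is the leader");
        }
    }
}
```

As the article warns, the lock object alone is not enough: without epochs and lease expiry, a paused or clock-drifted former leader may still believe it leads.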
Law, society, and organization. Microsoft signs an agreement with OVHcloud so that they drop their antitrust complaint: https://www.politico.eu/article/microsoft-signs-antitrust-truce-with-ovhcloud/ The complaint was filed in Europe in the summer of 2021; the deal lets customers more easily deploy Microsoft solutions with the cloud provider of their choice, whereas before, running MS solutions elsewhere was more expensive and not competitive versus MS. ElasticSearch and Kibana are open source again, adding the AGPL license to their other existing licenses: https://www.elastic.co/fr/blog/elasticsearch-is-open-source-again The market of three years ago has changed: AWS is a good partner, and the confusion between Elasticsearch and AWS's product has cleared up, hence the return to open source via the AGPL (Affero GPL). Elastic never stopped believing in open source, according to its founder Shay Banon. The change to the AGPL is an additional option, not a replacement of one of the other existing licenses. And right after, Elastic announced disappointing results, sending the stock down 25%: https://siliconangle.com/2024/08/29/elastic-shares-plunge–25-lower-revenue-projections-amid-slower-customer-commitments/ https://unrollnow.com/status/1832187019235397785 and https://www.elastic.co/pricing/faq/licensing for a summary of the licenses at Elastic. Tools of the episode. MailMate, an email client with Markdown support that handles lots of email: https://medium.com/@nicfab/mailmate-a-powerful-client-email-for-macos-markdown-integrated-email-composition-e218fe2accf3 Emmanuel uses it on his secondary mailboxes; a bit slow to start (sync), fast otherwise; virtual mailboxes (per query); SpamSieve; macOS only, I believe. Trippy, a network analyzer: https://github.com/fujiapple852/trippy It combines traceroute and ping in one CLI. Conferences. The list of conferences, from the Developers Conferences Agenda/List by Aurélie Vache and contributors: September 17, 2024: We Love Speed - Nantes (France); September 17–18, 2024: Agile en Seine 2024 - Issy-les-Moulineaux (France); September 19–20, 2024: API Platform Conference - Lille (France) & Online; September 20–21, 2024: Toulouse Game Dev - Toulouse (France); September 25–26, 2024: PyData Paris - Paris (France); September 26, 2024: Agile Tour Sophia-Antipolis 2024 - Biot (France); October 2–4, 2024: Devoxx Morocco - Marrakech (Morocco); October 3, 2024: VMUG Montpellier - Montpellier (France); October 7–11, 2024: Devoxx Belgium - Antwerp (Belgium); October 8, 2024: Red Hat Summit: Connect 2024 - Paris (France); October 10, 2024: Cloud Nord - Lille (France); October 10–11, 2024: Volcamp - Clermont-Ferrand (France); October 10–11, 2024: Forum PHP - Marne-la-Vallée (France); October 11–12, 2024: SecSea2k24 - La Ciotat (France); October 15–16, 2024: Malt Tech Days 2024 - Paris (France); October 16, 2024: DotPy - Paris (France); October 16–17, 2024: NoCode Summit 2024 - Paris (France); October 17–18, 2024: DevFest Nantes - Nantes (France); October 17–18, 2024: DotAI - Paris (France); October 30–31, 2024: Agile Tour Nantais 2024 - Nantes (France); October 30–31, 2024: Agile Tour Bordeaux 2024 - Bordeaux (France); October 31–November 3, 2024: PyCon.FR - Strasbourg (France); November 6, 2024: Master Dev De France - Paris (France); November 7, 2024: DevFest Toulouse - Toulouse (France); November 8, 2024: BDX I/O - Bordeaux (France); November 13–14, 2024: Agile Tour Rennes 2024 - Rennes (France); November 16–17, 2024: Capitole Du Libre - Toulouse (France); November 20–22, 2024:
Agile Grenoble 2024 - Grenoble (France); November 21, 2024: DevFest Strasbourg - Strasbourg (France); November 21, 2024: Codeurs en Seine - Rouen (France); November 27–28, 2024: Cloud Expo Europe - Paris (France); November 28, 2024: Who Run The Tech? - Rennes (France); December 2–3, 2024: Tech Rocks Summit - Paris (France); December 3, 2024: Generation AI - Paris (France); December 3–5, 2024: APIdays Paris - Paris (France); December 4–5, 2024: DevOpsRex - Paris (France); December 4–5, 2024: Open Source Experience - Paris (France); December 5, 2024: GraphQL Day Europe - Paris (France); December 6, 2024: DevFest Dijon - Dijon (France); January 22–25, 2025: SnowCamp 2025 - Grenoble (France); January 30, 2025: DevOps D-Day #9 - Marseille (France); February 6–7, 2025: Touraine Tech - Tours (France); April 3, 2025: DotJS - Paris (France); April 16–18, 2025: Devoxx France - Paris (France). Contact us. To react to this episode, come discuss on the Google group: https://groups.google.com/group/lescastcodeurs Contact us via Twitter: https://twitter.com/lescastcodeurs Do a crowdcast or a crowdquestion. Support Les Cast Codeurs on Patreon: https://www.patreon.com/LesCastCodeurs All episodes and all the info at https://lescastcodeurs.com/

AXSChat Podcast
Driving Progress in European Web Accessibility

AXSChat Podcast

Play Episode Listen Later Sep 13, 2024 32:26 Transcription Available


Could the future of web accessibility hinge on nuanced grading systems rather than binary pass/fail ratings? Join us as we sit down with Detlev Fischer, Managing Director of Dias and a key figure in the development of the German test procedure for WCAG 2. Detlev shares his eclectic career path, from studying film in Hamburg and visual arts in the UK to becoming an accessibility advocate. You'll gain a unique insider perspective on the collaborative efforts required to shape accessibility standards and the critical role organizations like W3C and IAAP play in this evolving field. We dive deep into the complexities of web accessibility standards, particularly focusing on the WCAG 2 guidelines. Discover why binary ratings may not be enough and explore innovative solutions like graded ratings and the use of assertions for dynamic content. The conversation expands to the European Accessibility Act, the challenges of reaching a consensus among experts across regions, and the urgency to accelerate progress in web accessibility despite slow advancements over the past decade. In our final segment, we tackle the implementation challenges of European accessibility standards and the varied progress across different regions. Learn how mandatory accessibility statements have driven improvements on public websites and why the demand for accessibility expertise is skyrocketing among software companies. We also celebrate recent achievements in the field, extending heartfelt thanks to everyone who contributed to these milestones. This episode is a tribute to the hard work and collaboration that continue to drive the accessibility movement forward. Support the show. Follow axschat on social media. Twitter: https://twitter.com/axschat https://twitter.com/AkwyZ https://twitter.com/neilmilliken https://twitter.com/debraruh LinkedIn: https://www.linkedin.com/in/antoniovieirasantos/ https://www.linkedin.com/company/axschat/ Vimeo: https://vimeo.com/akwyz

All JavaScript Podcasts by Devchat.tv
Overcoming JavaScript Load Issues: Import Maps and Performance Enhancements - JSJ 643

All JavaScript Podcasts by Devchat.tv

Play Episode Listen Later Aug 8, 2024 95:31


In this episode, they dive deep into the intricate world of JavaScript loading and web performance. Join the panel with insightful discussions led by Dan, Charles, Steve, and special guest Yoav Weiss, an expert with extensive experience in web performance from his time at Google, Akamai, and Shopify. They explore the latest initiatives aimed at improving ES modules, import maps, and the challenges faced with script loading, especially when dealing with web workers. They uncover the critical role of sub-resource integrity, the successful integration of integrity support in Chrome and Safari, and the urgent need for advanced import map solutions for large applications. They also delve into the nuts and bolts of optimizing web performance, including the impact of script execution on browser responsiveness, bundling techniques, and innovative strategies for managing resource download priorities. Tune in to hear about the latest developments, engage with provocative questions, and discover ways you can contribute to the ongoing work of the W3C web performance working group. Plus, stay for heartfelt moments, personal anecdotes, and practical recommendations from the speakers. Sponsors: Wix Studio. Socials: LinkedIn: Yoav Weiss. Picks: AJ: Jason Bourne 5-part Trilogy; AJ: Crucial MX500 has dethroned SP as my pick for best value server SSD; Charles: Imaginiff (board game); Charles: A Quiet Place: Day One (2024); Steve: How Does OpenAI Survive? Become a supporter of this podcast: https://www.spreaker.com/podcast/javascript-jabber--6102064/support.

This Week in Google (MP3)
TWiG 779: White Friend in a Box - Galaxy Fold, AI Aided Gymnastics, Top 10 Podcasts

This Week in Google (MP3)

Play Episode Listen Later Aug 1, 2024 149:45 Transcription Available


Amazon Paid Almost $1 Billion for Twitch in 2014. It's Still Losing Money; Craig Newmark stops by; Galaxy fold unboxing; We know almost everything about the Google Pixel 9 series already; A Few Blockbuster Podcasts Are Making All the Money; Friend in a box; US border agents must get warrant before cell phone searches, federal court rules; The gymnastics world braces for an AI future; NASA will shut down NASA TV on cable to focus on NASA+; W3C Slams Google U-turn on Third-Party Cookie Removal; Senate passes the Kids Online Safety Act; Canon EF 400mm f/5.6L; Follow Ant's ZDNET stuff; Scots brat. Hosts: Leo Laporte, Jeff Jarvis, and Ant Pruitt. Guest: Craig Newmark. Download or subscribe to this show at https://twit.tv/shows/this-week-in-google. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: betterhelp.com/TWIG cachefly.com/twit

IT Privacy and Security Weekly update.
The IT Privacy and Security Weekly Update from the back seat of the car for the week ending July 30th 2024

IT Privacy and Security Weekly update.

Play Episode Listen Later Jul 31, 2024 13:59


Episode 201 This week we have bookends for gearheads. A book for Ferrari and a booking for GMC. Our second story is so incredible that even we had to read it twice. Our question is: "Would you ever use an exchange run by people so stupid?" Your house just got robbed. We have the detail your wifi camera didn't pick up. We've got some good news for you as you pass through US airports, and bad news as you pass through another US healthcare benefits administrator. From there Google gets body slammed by the W3C, and you'll understand why when you realize how it affects your privacy. This week we'll take a little cruise in the back seat of the car, but please don't think you'll get any privacy there. Drivers take your mark, get set, Go! Find the full transcript to this week's podcast here.

The Guy R Cook Report - Got a Minute?
Remembering Internet Milestones of Importance

The Guy R Cook Report - Got a Minute?

Play Episode Listen Later Jul 29, 2024 2:47 Transcription Available


Remembering Internet Milestones of Importance Are you ready to take a trip down the digital memory lane? Whether you're a tech enthusiast, history buff, or one of the internet's original pioneers, our podcast “Remembering Internet Milestones of Importance” is specially crafted for you. What to Expect: Listen in as I explore the most pivotal moments so far for July 29th, that have shaped the online world as we know it. Each episode of Practical Digital Strategies is a deep dive into the stories, technologies, and innovations that have left an indelible mark on the internet. Episode Highlights: The Launch of Mosaic (July 29, 1993): Mosaic introduced the world to the concept of easily accessible, graphical web pages, revolutionizing how people interacted with the Internet. Its user-friendly interface made the Internet more accessible to a broader audience, setting the stage for the explosive growth of the World Wide Web. This was a turning point that transitioned the Internet from a tool used primarily by researchers and tech enthusiasts to a global platform for information sharing and communication. Establishment of the World Wide Web Consortium (W3C) (July 29, 1994): The W3C has been instrumental in standardizing many elements of the web, including HTML, XML, and CSS. These standards ensure the interoperability of web technologies, making it easier for information to be shared and accessed globally. The establishment of the W3C was crucial for fostering a cohesive and universally accessible web, driving innovation, and maintaining the open nature of the Internet. Adoption of the IPv6 Protocol (July 29, 2016): The adoption of IPv6 was critical in addressing the issue of IPv4 address exhaustion. IPv6 provides a vastly larger pool of IP addresses, ensuring that the Internet can continue to grow and accommodate the increasing number of devices connected to it. This transition was essential for the sustainability and scalability of the global Internet infrastructure. Why Listen? By tuning in, you'll gain a deeper understanding of the internet's rich history and its profound impact on our daily lives. Our engaging discussion will also inspire curiosity about the digital frontier. Stay Connected: Subscribe via Podbean now and never miss an episode as we remember the milestones that have made the internet a truly extraordinary place. Remembering Internet Milestones of Importance – because every byte has a story.

WP Builds
381 – No Script Show, Episode 13 – What is the W3C doing about AI?

WP Builds

Play Episode Listen Later Jul 18, 2024 49:13


In this episode of the WP Builds Podcast, Nathan Wrigley and David Waumsley discuss the significant and evolving role of AI on the web, focusing on the World Wide Web Consortium's (W3C) new report titled "AI and the Web, Understanding and Managing the Impact of Machine Learning Models on the Web". The episode delves into AI's dual challenges: data quality and environmental impact. We explore ethical and societal implications, such as privacy, transparency, and the potential for AI to undermine human creativity and entry-level jobs. We also address the importance of standards, regulatory frameworks, and Tim Berners-Lee's optimistic vision of AI, emphasising the need for collaborative and ethical AI development. Go listen...

The New Stack Podcast
The Fediverse: What It Is, Why It's Promising, What's Next

The New Stack Podcast

Play Episode Listen Later Jul 18, 2024 40:38


In the early days, the internet was a decentralized space created by enthusiasts. However, it has since transformed into a centralized, commerce-driven entity dominated by a few major players. The promise of the fediverse, a decentralized social networking concept, offers a refreshing alternative. Evan Prodromou, OpenEarth Foundation's director of open technology, has been advocating for decentralized social networks since 2008, starting with his creation, Identi.ca. Unlike Twitter, Identi.ca was open source and federated, allowing independent networks to interconnect. Prodromou, a co-author of ActivityPub, the W3C standard for decentralized networking used by platforms like Mastodon, discusses the evolution of the fediverse on The New Stack Makers podcast. He notes that small social networks dwindled to a few giants, such as Twitter and Facebook, which rarely interconnected. The acquisition of Twitter by Elon Musk disrupted the established norms, prompting users to reconsider their dependence on centralized platforms. The fediverse aims to address these issues by allowing users to maintain relationships across different instances, ensuring a smoother transition between networks. This decentralization fosters community management and better control over social interactions. Check out the full podcast episode to explore how tech giants like Meta are engaging with the fediverse and how to join decentralized social networks. Learn more from The New Stack about the fediverse: FediForum Showcases New Fediverse Apps and Developer Network; One Login: Towards a Single Fediverse Identity on ActivityPub; Web Dev 2024: Fediverse Ramps Up, More AI, Less JavaScript. Join our community of newsletter subscribers to stay on top of the news and at the top of your game. https://thenewstack.io/newsletter/

Hacker Public Radio
HPR4157: Talking with Halla about the past and future of Krita for its 25th birthday

Hacker Public Radio

Play Episode Listen Later Jul 9, 2024


For the occasion of 25 years of Krita and in preparation for Software Freedom Day on September 21st of this year, we wanted to talk with Halla, the lead maintainer of this great project. We asked around and Arnoud stepped up and offered to visit Halla to ask some questions about the project's history and future. The talk is also available as a video on the PeerTube instance of the Digital Freedom Foundation. If you know what Software Freedom Day is, I'm confident that your heart warms up with fond memories. If you don't know what it is, have a look at digitalfreedoms.org/sfd for more info. Basically, it's a grassroots movement from local teams organizing events to tell others about the benefits and importance of software freedom. If you would consider organizing Software Freedom Day where you live, don't hesitate to visit the blog on our site, and get some inspiration for what you could do. With that said, let's listen to the interview between Arnoud and Halla. Enjoy it! Today we're interviewing Halla, who is the lead maintainer of Krita, to learn all about it and to hear where the project has been and where it's going. Halla, to start us off, could you tell us a little bit about what Krita is? Sure. I love telling people about Krita. So Krita is a digital painting application. It's meant to make art from scratch, both still images and animations. So we've got a huge number of brush engines, color spaces for people who need to print, and lots of features really focused on creating art from scratch. For what kind of illustrations would you use Krita? Pretty much everything. I've seen so many different artworks, different styles. People are working on comics in Krita. People are working with illustrations. There are people who design trading cards with Krita. Games, I mean, whole animated games, like platform games. It's used for all that sort of thing, for everything, in every style, in pretty much every country in the world. Wow. Uh, are there any publications we might know about that have used images created in Krita? There are so many! We got sent a copy of a book on American wild birds. That was entirely done in Krita. Wow, cool. Talk a little bit about yourself. What role did you play in the creation of Krita? This year Krita is 25 years old. Which means I wasn't there at the absolute beginning. So, in 2003, my parents gave me for my birthday a really small graphics tablet, a Wacom Graphire. And I wanted to use it to draw a map for a fantasy novel I was writing back then. The novel never got finished, because of course I wanted to use Linux as my desktop operating system. And I sort of couldn't get into GIMP, and I started looking around for an application other than GIMP that I could maybe improve or could maybe be good enough. Well, I found Krita. In 2003, it had already gone through three names: KImageShop. That didn't last long. Krayon. That didn't last long either. And it was finally called Krita. It has also gone through three complete rewrites. So when I started working on Krita in 2003, it didn't even have a brush tool. You could open images, add images as layers, and move the layers around. And that was everything. So, it was a really good place to get started. Except, of course, that it turns out that I'm not a genius. I'm not even a computer scientist. I mean, I'm a linguist. And writing a good brush engine is pretty difficult. So, I started blogging about how I was completely failing at creating a nice brush engine. And how I was failing.
That turned out to be a turning point for the project because people saw that: "Oh, there's someone working on it, and they're not making any progress, mmm, I will take a look as well." They started getting enthusiastic and pretty soon after 2004, we already had our fourth complete core rewrite. So that's how I got started. So how many people were involved in the Krita community by that time? Mid 2004, it was about a dozen. Krita was still part of KOffice, which was KDE's suite of productivity applications. And KOffice development had stalled, because they were porting from one document format to another document format. But suddenly there was an application that we really wanted to release. And that's when KOffice got released again as well. So it's a bit hard to say how many people are actually working on Krita because there were also some people working on the core libraries that every application used, but say a dozen. And can you speak a little bit about how the community evolved since then? Yes. Until around 2006, we didn't really have a focus. Krita was a GIMP clone or a Photoshop clone. And, in 2006, David Revoy, a French artist who only uses free software, tried Krita, and he told us it was no good, while we thought we had quite a nice application by then. Afterwards, we started taking this very seriously. So, when we have a sprint, we also invite artists. We actually videotape the artists working with Krita. And that's for the developers a really nice way of getting to know where the bottlenecks are for users. So because we involved artists, our developer community also started to grow. At some point of time, most growth came through Google Summer of Code, but those days are over. That program is not doing a lot anymore. We've only got one student this year. So that started the second phase. Let's make Krita good enough for David Revoy. We also invited Peter Sicking to a sprint. Peter Sicking is the guy who was involved in defining the mission statement for GIMP. He sat down with us and asked us: "What do you really want to do?" Make Krita good for David Revoy. That's a bit thin as a mission statement. So we came up with: we want to make Krita purely a painting application. Sure, there are filters and other stuff, but if it's good for painting, it goes in. So we started working on that and that took quite a long time to get there, especially because we were stupid. We started doing a complete rewrite in 2007 of everything. That was the fifth. So, that continued, everyone was working on Krita as a hobby. Most people were still students, until our Slovak student, Lukáš, was working on his thesis. And his thesis was brush engines for Krita. And of course he got 10 out of 10 because he could show his professors that he had created real software that was used by real people all over the world. And then he was like, okay, I'm almost done with university. What should I do? If you guys can pay my rent, then I can work on Krita full time. If not, I'm going to flip burgers. So I asked him what his rent was. It was like 35 euros a month. So I thought, well, let's do a fundraiser and we can pay you for, say, six months. Six months turned into a year. And after that, Lukáš got a job at a different company, but it started sponsored development. And that's been really important for the growth of our community, because by now there are six people working full time on Krita. The second student we hired on graduation was Dmitry Kazakov, a Russian guy, and he's currently our lead developer.
So because we're all there, lots of volunteer developers can see that their patches and merge requests get reviewed, they get merged and that makes people happy. So we have a really healthy mix right now of sponsors and volunteer developers. That sounds great. You mentioned sprints a couple of times, can you tell us a little bit more about how that is organized? In theory, we organize one big sprint a year. Of course, it hasn't been possible. Some people have had to flee Russia, for instance. So visa problems are real problems. And the way it mostly used to happen was I would invite everyone to Deventer, have some people sleep upstairs, in our spare bedrooms. And the rest would go to Hotel Royale in Deventer, which has two big rooms on the top floor. Then we'd go down in the cellar of the church. It's a 12th century cellar. Really roomy, and we would just do some hacking, then do a meeting. And in the evenings, we would go out for dinner, and just get to know each other better. One thing that I really miss about sprints, or rather not having sprints, is the time we would spend in my study over there. Just, just a couple of us. The rest would be hacking around. And we would try to just go through the list of bug reports. And for us, sprints are fun. We also invite developers, artists, documentation writers. Yeah, that sounds like a lot of fun. So, if a new contributor would like to join Krita, what would be the typical on ramps that they could come into? It used to be that people would mostly join us on IRC. Nowadays, we also have Matrix, because building Krita from scratch is not easy. But we've got a great manual for that by now. So either people join us on IRC and ask for help building Krita, and then maybe ask, do you know a nice bug or feature-ish thing that I could start working on? And then we, we'd help them with that. But these days it's mostly people who out of the blue, post a merge request on KDE's GitLab instance. And then we're "Oh, this person from Serbia, this person from Denmark, they suddenly have a really nice patch!" And sometimes a patch needs to be improved. Sometimes it can go in as is. And then we try to get them in our chat channel, because that's still the place where we have most development discussions. And the mailing list is almost dead, but that holds for many mailing lists. After that, once you've got three merge requests merged into Krita, we will ask you: "Do you want to have a developer account, so you can review other people's work, merge it, get full access to everything?" And sometimes they are "Yes, I've always wanted that", and sometimes "I'm not really comfortable with that, I just want to send you more patches", and that's fine. Sounds great. In terms of features, are there any particular features of Krita that you're particularly proud of, or that set Krita apart from other drawing programs? Over the years, we had a number of firsts. Like, before Adobe even knew that OpenGL existed, we had a hardware accelerated canvas implementation. Then, about the same year, I think it was 2005, we implemented support for all kinds of color models. Like CMYK, LAB, also painterly color models. That's stuff that tries to mix spectral wavelengths to simulate the way paint mixes. That feature is out because it never worked well enough. Then we got, I think, a really nice way of doing animations. Of course the brush engines are great. Oh, and this is something that almost nobody knows, but we support painting in HDR.
So color values lower than zero and bigger than one, fully dynamic. And the way we work with those images is compatible with the way Blender imports images. So, you mentioned Blender, are there any other products that Krita works particularly well with, or that are nice complements to Krita? Scribus. Scribus is a desktop publishing application, it's also free software. Development is a bit slow at the moment, but it's really solid. We used it for our 2006, I think it was 2006, Krita art book, for instance. And Inkscape of course, as well. Krita does have vector layers, and they are quite advanced, but still Inkscape is a really good complement. Krita and Inkscape are the only applications that currently implement the W3C mesh gradient standard. Cool, and in terms of current development, which features are you most excited about which are coming up? What's coming up is the port to Qt6, the new version of our development library. That's going to really eat development time. But again, we've got some volunteers who already started working on that. I'm not sure I'm really excited about it, but, but we have to investigate it. We are looking into AI assisted inking. So you would train Krita on the way you would normally ink your sketches. And then Krita should be able to semi-automatically ink your sketches for you. Because for many artists inking is a bit of a boring step, because when you're doing inking, you're often really, really careful. And that means that the lines are a bit, often a bit deader compared to the sketch. Um, trying to use AI to assist with that is something we are investigating. We are working on that together with Intel because Intel is one of our corporate sponsors. But we are also doing all kinds of projects with Intel. Like, Intel also worked with us on that HDR feature, for instance. Oh, and text. That's, that's important as well. Wolthera has been working on that. The text shape and the text tool, like the object that contains text on canvas and the tool that modifies it, are of course two different projects. This will implement full SVG text, including CSS, ligatures, font features and everything. And she's already implemented it. And the text shape itself, it can do vertical text, like for Chinese or Japanese. It can do Ruby, which is the furigana, the small text that in Japanese you put next to the kanji, the Chinese derived characters, so you know how to pronounce them. And she's now working on the UI, and, and it's something we've wanted to start working on, uh, years ago already, I think it was 2017. Actually, I was working on that, but then I was distracted by the Dutch tax office which wanted to have money. And I had to do difficult stuff and hire accountants and so on. And it's not easy being a manager. So that's the two big things that are coming, hopefully: the experimental assisted inking and a super-deluxe text tool. Cool. So what does your release schedule look like? Do you have set dates or is it ready when it's ready? Ready when it's ready, but it's often ready. If our infrastructure is working correctly, then we typically do a bugfix release every two months. There have been years when we did one every month, but that was just eating up too much of our time. We try to have one or two full feature releases a year as well. Of course, we moved from Jenkins as our binary factory platform to GitLab CI. And that means we haven't been able to do a release for six months because so many bits needed fixing or were broken.
The whole pipeline had to be rewritten. But that's done now. So we just released 5.2.3 beta 1. And we hope to do the 5.2.3 final pretty soon, which is a bug fix release. And 5.3 will be a feature release again. I think we've got almost enough features in there. We're only waiting for the text tool to be completed. That sounds great. In terms of, uh, volunteers, are there any areas that you would really appreciate someone helping out and looking into things? Android experts, because our Android expert started at a very difficult university and doesn't have any spare time anymore. And Android is, is a difficult platform. The platform itself, the libraries, change all the time. We do have a UX designer, Scott Petrovic, but more help there would also be welcome. And for the rest, it's actually mostly not what we wish to be done, but what volunteers wish to do, and most work is welcome. Sounds great. On the topic of platforms, which platforms does Krita support right now? That's Linux. We prefer our own binary builds in AppImage format because we have to patch a lot of the libraries that Krita depends on. Windows, MacOS, Android. If and when iPadOS gets opened up, we might port to iOS. But both for iPadOS and Android (oh, we also support Chrome OS, but that's Android), so the tablet form factor, we really want to optimize our user interface for touch, and for that we need to have the port to Qt6 done. So that's going to take some time. Sounds like there are a lot of exciting things coming. I think that's all I have for you today. So I'd like to really thank you for taking the time to speak to us. It was a pleasure. Um, are there any things we haven't covered that you would like to, uh, talk about? Oh, I want to brag a bit. Go for it. Because we have about 7 million users. That's quite a lot. I mean, I used to do commercial software development. And most of the companies we worked for never ever released. So that makes it so much more fun to work on. Yeah, that's genuinely amazing. Awesome. Thank you very much. Thank you, too.

UXpeditious: A UserZoom Podcast
Designing for Everyone: Digital Accessibility Insights with Karen Hawkins

UXpeditious: A UserZoom Podcast

Play Episode Listen Later Jun 24, 2024 11:27


Link to this episode's web page ----------------------- The Human Insight Summit (THiS) PROMO Code The Human Insight Summit (THiS) is headed to Austin, Texas, this October 28-30. THiS is our annual, in-person customer conference dedicated to helping organizations understand their customers by showing and sharing, first-hand, the possibilities of using human insight across the business. Get $100 off your registration with the podcast listener code: UNLOCKED. Learn more or register. ----------------------- Episode Description: In this enlightening episode of Insights Unlocked, host Nathan Isaacs sits down with Karen Hawkins, Principal of Accessible Design at Level Access, during the UXDX conference in New York City. Together, they dive deep into the world of digital accessibility, exploring why it remains a critical yet often overlooked aspect of design. Karen shares valuable insights on how companies can create inclusive digital experiences and the importance of accessibility in today's digital landscape. Key Themes & Takeaways: Introduction to Digital Accessibility: Karen introduces herself and her role at Level Access. The discussion opens with an overview of digital accessibility and its importance in ensuring everyone can execute tasks efficiently, regardless of their abilities or characteristics. Current State of Accessibility: Despite existing laws and guidelines, accessibility issues persist, particularly simple ones like contrast problems. Karen references Web Aim's annual survey, which highlights recurring accessibility issues on top websites. Challenges and Solutions: The need for both grassroots efforts and top-down support to drive meaningful change in accessibility. Importance of conducting audits focusing on people, processes, and tools to benchmark and improve accessibility. Setting realistic, incremental goals for continuous improvement in accessibility practices. Beyond Digital Products: Accessibility extends to organizational culture and practices, including hiring and travel policies. The discussion emphasizes that accessibility is a global issue, supported by laws like the European Accessibility Act and guidelines from the Web Content Accessibility Guidelines. Resources and Personal Responsibility: Karen recommends resources like the W3C's introductory courses and Web Aim for those looking to learn more about accessibility. The importance of individuals taking personal responsibility for creating accessible digital content within their organizations. Call to Action: Karen encourages listeners to connect with her on LinkedIn for further resources and insights on accessibility. Final thoughts on fostering a culture of inclusivity and accessibility in every aspect of digital and organizational practices. Connect with Karen Hawkins: LinkedIn: Karen Hawkins (Note: Ensure you find the right Karen Hawkins, not the romance novelist!) Additional Resources: W3C Introductory Course on Accessibility Web Aim

Article 19
Strokes and Accessibility

Article 19

Play Episode Listen Later May 8, 2024 26:49


Full Transcript   Tamman's Article 19 podcast speaks with long-time accessibility strategist and stroke survivor, Robert Jolly.  Host Kristen Witucki chats with Robert about how his career started, what led to his stroke, and why he returned to the cause of accessibility with even more conviction after his stroke.   Robert Jolly is an accessibility-focused product manager working on the delivery team for 10x in the Technology Transformation Services (TTS) at General Services Administration. Robert has previously been co-lead for the TTS Accessibility Guild, and was a member of the W3C's Web Accessibility Initiative (WAI) Education and Outreach Working Group.

Voices of VR Podcast – Designing for Virtual Reality
#1231: XR Accessibility Insights from a Government Contractor + AR as an Assistive Technology

Voices of VR Podcast – Designing for Virtual Reality

Play Episode Listen Later Jul 13, 2023 26:18


Joel Ward is an Emerging Technology Manager at Booz Allen Hamilton who has been working within Web Accessibility since around the time Section 508 of the Rehabilitation Act was enacted on August 7, 1998 and the W3C's Web Content Accessibility Guidelines came out (WCAG version 1.0 was published on May 5, 1999). Ward spoke on an XR Access Symposium panel discussion about Empowering the Workforce through Accessible XR, and also brought a pair of XREAL Air glasses with live captioning to show off the power of AR as an assistive technology to other attendees. I had a chance to speak with Ward about the pending work that has to be done in order to specify XR accessibility experimentation within government contracts. Right now XR accessibility is in a bit of a Catch-22, because if something is not specified within a contract to be worked on, then it most likely won't happen. However, XR accessibility features and functionality are still too nascent to be clearly specified and scoped out within these contracts. As I covered in the unpacking of public policy and XR accessibility write-up of my interview with XR Association's Liz Hyman, the emerging nature of XR technologies means that they're not clearly specified within legislation like Section 508 or the Americans with Disabilities Act to be enforced. We talk about what needs to be done at the federal government contracting level so that his group at Booz Allen Hamilton can have the freedom and resources to develop more XR accessibility features that feed into the type of Darwinian experimentation that Neil Trevett says is a crucial phase of the standards development process. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

Voices of VR Podcast – Designing for Virtual Reality
#1228: XR Accessibility and Public Policy with XR Association’s Liz Hyman

Voices of VR Podcast – Designing for Virtual Reality

Play Episode Listen Later Jul 13, 2023 21:19


Liz Hyman is President and CEO of the XR Association (XRA), a non-profit industry trade association representing 47 XR companies, including major players like Meta, Google, Microsoft, HTC, Sony Interactive Entertainment, and Unity. XRA has been collaborating with XR Access from the beginning, and the two collaborated on Chapter 3 of XRA's XR Developer's Guide series, focusing on Accessibility & Inclusive Design in Immersive Experiences. At the XR Access Symposium, Hyman moderated a breakout session focused on public policy. I had a chance to unpack the three major points from the group discussion: educating policymakers about accessibility, brainstorming blue-sky legislation to lift up accessibility tech, and identifying gaps in public policy so that they can be addressed. A common theme I heard at the XR Access Symposium again and again is that accessibility is hardly ever prioritized on emerging technology platforms. XR Access co-founder Shiri Azenkot told me, "But it was always a retroactive thing. So that's a pattern that we've seen with technology a lot, many times -- every time there's a new technology." There are laws on the books like the Americans with Disabilities Act and Section 508, but these don't always apply to XR technologies. "Section 508 of the Rehabilitation Act requires federal agencies to ensure that their information and communication technology (ICT) is accessible to people with disabilities." Section 508 would only apply to XR experiences produced by the federal government. And Hyman told me, "I will say the thing is that I notice is [that XR] is an emerging technology. And I think even within a program like 508, there's an acknowledgment of the emerging nature of technology that should not be something that is a permanent barrier. We need to make steady progress." She is likely referencing recent rulings that brought Section 508 up to date after its original passage in 1998 and 2000. I was able to find a couple of relevant passages supporting Hyman's interpretation on the Section 508 website, which says, "The U.S. Access Board is responsible for developing Information and Communication Technology (ICT) accessibility standards to incorporate into regulations that govern Federal procurement practices. On January 18, 2017, the Access Board issued a final rule that updated accessibility requirements covered by Section 508, and refreshed guidelines for telecommunications equipment subject to Section 255 of the Communications Act. The final rule went into effect on January 18, 2018." I found a copy of the Access Board's final ICT rule in the Federal Register from January 18, 2017, and it includes a mention of virtual reality: "The Board expects that an agency that decides to use a conforming alternate version of a Web page as opposed to making the main page accessible will typically do so when, as the W3C® explains, certain limited circumstances warrant or mandate their use.
For example, W3C® has noted that a conforming alternate version may be necessary: (1) When a new emerging technology is used on a Web page, but the new technology cannot be designed in a way that allows assistive technologies to access all the information needed to present the content to the user (e.g., virtual reality or computer-simulated reality); (2) when it is not possible to modify some content on a Web page because the Web site owner is legally prohibited from modifying the Web content; or (3) to provide the best experience for users with certain types of disabilities by tailoring a Web page specifically to accommodate those disabilities." The bottom line is that the 2D web is a mature platform relative to the paradigm shift into VR, 3D, and spatial computing, which means the US government can lean more heavily on established guidelines from organizations like the W3C, which has produced a number of different Web Content Accessibility Guideline...