Podcasts about websockets

Computer network protocol

  • 149 podcasts
  • 235 episodes
  • 43m average episode duration
  • 1 episode every other week
  • Latest episode: May 20, 2025

Popularity trend, 2017–2024 (chart)


Best podcasts about websockets

Latest podcast episodes about websockets

Oracle University Podcast
Oracle GoldenGate 23ai Security Strategies

May 20, 2025 · 16:13


GoldenGate 23ai takes security seriously, and this episode unpacks everything you need to know. GoldenGate expert Nick Wagner breaks down how authentication, access roles, and encryption protect your data.   Learn how GoldenGate integrates with identity providers, secures communication, and keeps passwords out of storage. Understand how trail files work, why they only store committed data, and how recovery processes prevent data loss.   Whether you manage replication or just want to tighten security, this episode gives you the details to lock things down without slowing operations.   Oracle GoldenGate 23ai: Fundamentals: https://mylearn.oracle.com/ou/course/oracle-goldengate-23ai-fundamentals/145884/237273 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.   --------------------------------------------------------------   Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services.  Nikita: Welcome, everyone! This is our fourth episode on Oracle GoldenGate 23ai. Last week, we discussed the terminology, different processes and what they do, and the architecture of the product at a high level. Today, we have Nick Wagner back with us to talk about the security strategies of GoldenGate. 00:56 Lois: As you know by now, Nick is a Senior Director of Product Management for GoldenGate at Oracle. He's played a key role as one of the product designers behind the latest version of GoldenGate. Hi Nick! Thank you for joining us again. Can you tell us how GoldenGate takes care of data security? Nick: So GoldenGate authentication and authorization is done in a couple of different ways. First, we have user credentials for GoldenGate for not only the source and target databases, but also for GoldenGate itself. We have integration with third-party identity management products, and everything that GoldenGate does can be secured. 01:32 Nikita: And we must have some access roles, right? Nick: There's four roles built into the GoldenGate product. You have your security role, administrator, operator, and user. They're all hierarchical. The most important one is the security user. This user is going to be the one that provides the administrative tasks. This user is able to actually create additional users and assign roles within the product. So do not lose this password and this user is extremely important. You probably don't want to use this security user as your everyday user. That would be your administrator. The administrator role is able to perform all administrative tasks within GoldenGate. So not only can they go in and create new extracts, create new replicats, create new distribution services, but they can also start and stop them. And that's where the operator role is and the user role. So the operator role allows you to go in and start/stop processes, but you can't create any new ones, which is kind of important. 
So this user would be the one that could go in and suspend activity. They could restart activity. But they can't actually add objects to replication. The user role is really a read-only role. They can come in. They can see what's going on. They can look at the log files. They can look at the alerts. They can look at all the watches and see exactly what GoldenGate is doing. But they're unable to make any changes to the product itself. 02:54 Lois: You mentioned the roles are hierarchical in nature. What does that mean? Nick: So anything that the user role does can be done by the operator. Anything that the operator and user roles can do can be done by the administrator. And anything that the user, operator, and administrator roles do can be done by the security role. 03:11 Lois: Ok. So, is there a single sign-on available for GoldenGate? Nick: We also have a password plugin for GoldenGate Connections. A lot of customers have asked for integration with whatever their single sign-on utility is, and so GoldenGate now has that with GoldenGate 23ai. So these are customer-created entities. So, we have some examples that you can use in our documentation on how to set up an identity provider or a third-party identity provider with GoldenGate. And this allows you to ensure that your corporate standards are met. As we started looking into this, as we started designing it, every single customer wanted something different. And so instead of trying to meet the needs for every customer and every possible combination of security credentials, we want you to be able to design it the way you need it. The passwords are never stored. They're only retrieved from the identity provider by the plugin itself. 04:05 Nikita: That's a pretty important security aspect…that when it's time to authenticate a user, we go to the identity provider. Nick: We're going to connect in and see if that password is matching. And only then do we use it. And as soon as we detect that it's matched, that password is removed. And then for the extract and replicats themselves, you can also use it for the database, data source, and data target connections, as well as for the GoldenGate users. So, it is a full-featured plugin. So, our identity provider plugin works with IAM as well as OAM. These are your standard identity manager authentication methods. The standard one is OAuth 2, as well as OIDC. And any Identity Manager that uses that is able to integrate with GoldenGate. 04:52 Lois: And how does this work? Nick: The way that it works is pretty straightforward. Once the user logs into the database, we're going to hand off authentication to the identity provider. Once the identity provider has validated that user's identity and their credentials, then it comes back to GoldenGate and says that user is able to log in to either GoldenGate or the application or the database. Once the user is logged in, we get that confirmation that's been sent out and they can continue working through GoldenGate. So, it's very straightforward on how it works. There's also a nice little UI that will help set up each additional user within those systems. All the communication is also secured as well. So any communication done through any of the GoldenGate services is encrypted using HTTPS. All the REST calls themselves are all done using HTTPS as well. All the data protection calls and all the communication across the network when we send data across a distribution service is encrypted using a secure WebSocket. 
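The distribution-path encryption Nick describes is standard WebSocket-over-TLS. As a rough illustration of the idea (not GoldenGate's actual distribution-service API, which is configured through GoldenGate's own tooling), here is what a TLS-secured WebSocket connection looks like from a TypeScript client; the endpoint URL and message shape are invented for the example:

```typescript
// Generic sketch of a TLS-secured WebSocket connection (wss://).
// The endpoint and message shape are hypothetical, not GoldenGate's protocol.
const socket = new WebSocket("wss://example.com/distribution"); // hypothetical URL

socket.addEventListener("open", () => {
  // The wss:// scheme wraps the underlying TCP connection in TLS, so frames
  // are encrypted in transit, mirroring the HTTPS used for the REST calls.
  socket.send(JSON.stringify({ type: "hello" }));
});

socket.addEventListener("message", (event) => {
  console.log("received:", event.data);
});

socket.addEventListener("close", (event) => {
  console.log(`closed: code=${event.code} reason=${event.reason}`);
});
```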
And there's also trail file encryption at the operating system level for data at rest. So, this really gives you the full level of encryption for customers that need that high-end security. GoldenGate does have an option for FIPS 140-2 compliance as well. So that's even a further step for most of those customers. 06:12 Nikita: That's impressive! Because we want to maintain the highest security standards, right? Especially when dealing with sensitive information. I now want to move on to trail files. In our last episode, we briefly spoke about how they serve as logs that record and track changes made to data. But what more can you tell us about them, Nick? Nick: There's two different processes that write to the trail files. The extract process will write to the trail file and the receiver service will write to the trail file. The extract process is going to write to the trail file as it's pulling data out of that source database. Now, the extract process is controlled by a parameter file that says, hey, here's the exact changes that I'm going to be pulling out. Here's the tables. Here's the rows that I want. As it's pulling that data out and writing it to the trail files, it's ensuring that those trail files have enough information so that the replicat process can actually construct a SQL statement and apply that change to that target platform. And so there's a lot of ways to change what's actually stored in those trail files and how it's handled. The trail files can also be used for initial loads. So when we do the initial load through GoldenGate, we can grab and write out the data for those tables, and that excludes the change data. So initial loads pull the data directly from the tables themselves, whereas ongoing replication is pulling it from the transaction logs. 07:38 Lois: But do we need to worry about rollbacks? Nick: Our trail files contain committed data only and all data is sequential. So these are two important things. Because it contains committed data only, we don't need to worry about rollbacks. We also don't need to worry about position within that trail file because we know all data is sequential. And so as we're reading through the trail file, we know that anything that's written in a prior location in that trail file was committed prior to something else. And as we get into the recovery aspects of GoldenGate, this will all make a lot more sense. 08:13 Lois: Before we do that, can you tell us about the naming of trail files? Nick: The trail files, as far as naming, because these do reside on the operating system, you start with a two-letter trail file abbreviation and then a nine-digit sequential value. So, you almost look at it as like an archive log from Oracle, where we have a prefix and then a suffix, which is numeric. Same kind of thing. So, we have our two-letter, in this case, an ab, and then we have a nine-digit number. 08:47 Transform the way you work with Oracle Database 23ai! This cutting-edge technology brings the power of AI directly to your data, making it easier to build powerful applications and manage critical workloads. Want to learn more about Database 23ai? Visit mylearn.oracle.com to pick from our range of courses and enroll today! 09:12 Nikita: Welcome back! Ok, Nick. Let's get into the GoldenGate recovery process. Nick: When we start looking at the GoldenGate recovery process, it essentially makes GoldenGate kind of point-in-time like.
So on that source database, you have your extract process that's going to be capturing data from the transaction logs. In the case of Oracle, the Oracle Database is actually going to be reading those transaction logs for us and passing the change records directly to GoldenGate. We call them LCRs, Logical Change Records. And so with the integrated extract in GoldenGate, the extract portion tells the database, hey, I'm now going to be interested in the following list of tables. And it gives a list of tables to that internal component, the log mining engine within the database. And it says, OK, I'm now pulling data for those tables and I'm going to send you those table changes. And so as the extract process gets sent those changes, it's going to have checkpoint information. So not only does it know where it was pulling data from out of that source database, but what it's also writing to the trail file. The trail files themselves are all sequential and they have only committed data, as we talked about earlier. The distribution service has checkpoint information that says, hey, I know where I'm reading from in the previous trail file, and I know what I've sent across the network. The receiver service is the same thing. It knows what it's receiving, as well as what it's written to the trail file and the target system. The replicat also has a checkpoint. It knows where it's reading from in the trail file, and then it knows what it's been applying into that target database. This is where things start to become a little complicated. Our replicat process in most cases is parallel, so it'll have multiple threads applying data into that target database. Each of those threads is applying different transactions. And because of the way that the parallelism works in the replicat process, you can actually get situations where one replicat thread might be applying a transaction higher than another thread. And so you can eliminate that sequential or serial aspect of it, and we can get very high throughput speeds through the replicat. But it means that the checkpoint needs to be kind of smart enough to know how to rebuild itself if something fails. 11:32 Lois: Ok, sorry Nick, but can you go through that again? Maybe we can work backwards this time? Nick: If the replicat process fails, when it comes back up, it's going to look to its checkpoint tables inside that target database. These checkpoint tables keep track of where each thread was at when it crashed. And so when the replicat process restarts, it goes, oh, I was applying these threads at this location in these SCNs. It'll then go and read from the trail file and say, hey, let me rebuild that data, and it only applies transactions that it hasn't applied yet to that target system. There is a synchronized replicat command as well that will tell a crashed replicat to say, hey, bring all your threads up to the same high watermark. It does that process automatically as it restarts and continues normal replication. But there is an option to do it just by itself too. So that's how the replicat kind of repairs and recovers itself. It'll simply look at the trail files. Now, let's say that the replicat crashed, and it goes to read from the trail files when it restarts and that trail file is missing. It'll actually communicate to the distribution, or excuse me, to the receiver service and say, hey, receiver service, I don't have this trail file. Can you bring it back for me?
And the receiver service will communicate downstream and say, hey, distribution service, I need you to resend me trail file number 6. And so the distribution service will resend that trail file so that the replicat can reprocess it. So it's often nice to have redundant environments with GoldenGate so we can have those trail files kind of around for availability. 13:13 Nikita: What if one of these files gets corrupted? Nick: If one of those trail files is corrupt, let's say that a trail file on the target site became corrupt and the replicat can't read from it for one reason or another. Simply stop the replicat process, delete the corrupt trail file, restart the replicat process, and now it's going to rebuild that trail file from scratch based on the information from the source GoldenGate environment. And so it's very recoverable. Handles it all very well. 13:40 Nikita: And can the extract process bounce back in the same way? Nick: The extract process can also recover in a similar way. So if the extract process crashes, when it restarts itself, there's a number of things that it does. The first thing is it has to rebuild any open transactions. So it keeps all sorts of checkpoint information about the oldest transaction that it's keeping track of, any open transactions that haven't been committed, and any other transactions that have been committed that it's already written to the trail file. So as it's reprocessing that data, it knows exactly what it's committed to trail and what hasn't been committed. And there's a number of ways that it does this. There's two main components here. One of them is called bounded recovery. Bounded recovery will allow you to set a time limit so that transactions that span a certain length of time actually get flushed out to disk on that GoldenGate Hub. And that way it'll reduce the amount of time it takes GoldenGate to restart the extract process. And the other component is cache manager. Cache manager stores uncommitted transactions. And so it's a very elegant way of rebuilding itself from any kind of failure. You can also set up restart profiles so that if any process does crash, the GoldenGate service manager can automatically restart that service an x number of times across y time span. So if I say, hey, if my extract crashes, then attempt to restart it 100 times every 5 seconds. So there's a lot of things that you can do there to make it really nice and automatic, so it repairs itself and is automatically resilient. 15:18 Lois: Well, that brings us to the end of this episode. Thank you, Nick, for going through the security strategies and recovery processes in such detail. Next week, we'll look at the installation of GoldenGate. Nikita: And if you want to learn more about the topics we discussed today, head over to mylearn.oracle.com and take a look at the Oracle GoldenGate 23ai Fundamentals course. Until next time, this is Nikita Abraham… Lois: And Lois Houston signing off! 15:44 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
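The restart profile Nick mentions ("restart it 100 times every 5 seconds") is a supervision policy. Below is a minimal, generic sketch of that policy in TypeScript; it is not GoldenGate's Service Manager configuration, just an illustration of the restart-with-delay idea:

```typescript
// Generic "restart profile": retry a crashed process up to `maxRetries` times,
// waiting `delayMs` between attempts. Mirrors the "100 times every 5 seconds"
// example from the episode; not GoldenGate's actual configuration syntax.
async function superviseProcess(
  start: () => Promise<void>, // resolves on clean exit, rejects on crash
  maxRetries = 100,
  delayMs = 5_000,
): Promise<void> {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      await start();
      return; // clean exit: stop supervising
    } catch (err) {
      console.error(`process failed (attempt ${attempt}/${maxRetries}):`, err);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  console.error("restart profile exhausted; leaving process down");
}
```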

PodRocket - A web development podcast from LogRocket
Put your database in the browser with Ben Holmes

Apr 3, 2025 · 32:25


Ben Holmes, product engineer at Warp, joins PodRocket to talk about local-first web apps and what it takes to run a database directly in the browser. He breaks down how moving data closer to the user can reduce latency, improve performance, and simplify frontend development. Learn about SQLite in the browser, syncing challenges, handling conflicts, and tools like WebAssembly, IndexedDB, and CRDTs. Plus, Ben shares insights from building his own SimpleSyncEngine and where local-first development is headed! Links https://bholmes.dev https://www.linkedin.com/in/bholmesdev https://www.youtube.com/@bholmesdev https://x.com/bholmesdev https://bsky.app/profile/bholmes.dev https://github.com/bholmesdev We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com (mailto:emily.kochanekketner@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surface the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today (https://logrocket.com/signup/?pdr). Special Guest: Ben Holmes.
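As a rough sketch of the local-first pattern discussed here (write locally first, sync in the background), the snippet below uses the browser's IndexedDB API plus a WebSocket push. The sync endpoint, record shape, and protocol are hypothetical; it is not Ben's SimpleSyncEngine, and a real engine would also queue offline changes and resolve conflicts (for example, with CRDTs):

```typescript
// Minimal local-first sketch: write to IndexedDB immediately, then push the
// change to a sync endpoint in the background. Endpoint and record shape are
// hypothetical; real sync engines also handle retries and conflicts.
interface TodoRecord { id: string; title: string; updatedAt: number; }

function openDb(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open("local-first-demo", 1);
    req.onupgradeneeded = () => { req.result.createObjectStore("todos", { keyPath: "id" }); };
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

async function saveTodo(todo: TodoRecord): Promise<void> {
  const db = await openDb();
  // 1. Local write first: the UI can render from this immediately, no network latency.
  db.transaction("todos", "readwrite").objectStore("todos").put(todo);
  // 2. Background sync: best-effort push over a (hypothetical) WebSocket endpoint.
  const socket = new WebSocket("wss://example.com/sync");
  socket.addEventListener("open", () => {
    socket.send(JSON.stringify({ type: "upsert", table: "todos", row: todo }));
    socket.close();
  });
}

saveTodo({ id: crypto.randomUUID(), title: "read local-first notes", updatedAt: Date.now() });
```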

Remote Ruby
Mac Upgrades to Debugging Dilemmas

Mar 21, 2025 · 47:41


In this episode, Andrew and Chris discuss Chris's new hardware upgrade to a Mac Studio, diving into its benefits for video processing and development work. They share stories about troubleshooting a perplexing bug related to WebSockets and CableReady, and discuss the conference proposal process, offering insights into writing effective CFPs based on their experiences with RailsConf and Rails World. Additionally, Andrew shares a game update about Cyberpunk and Chris shares the inspiring success story of the game 'Balatro', highlighting the developer's journey from side project to commercial triumph. Hit the download button now!
Honeybadger: Honeybadger is an application health monitoring tool built by developers for developers.
Judoscale: Make your deployments bulletproof with autoscaling that just works.
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Chris Oliver X/Twitter | Andrew Mason X/Twitter | Jason Charnes X/Twitter

52 Weeks of Cloud
xtermjs and Browser Terminals

Feb 28, 2025 · 5:25


The podcast notes capture the key technical aspects of the WebSocket terminal implementation. The transcript explores how Rust's low-level control and memory management capabilities make it an ideal language for building high-performance terminal emulation over WebSockets. What makes this implementation particularly powerful is the combination of Rust's ownership model with the PTY (pseudoterminal) abstraction. This allows for efficient binary data transfer without the overhead typically associated with scripting languages that require garbage collection.

The architecture demonstrates several advanced Rust patterns:
  • Zero-copy buffer management - using Rust's ownership semantics to avoid redundant memory allocations when transferring terminal data
  • Async I/O with the Tokio runtime - leveraging Rust's powerful async/await capabilities to handle concurrent terminal sessions without blocking operations
  • Actor-based concurrency - implementing the Actix actor model to maintain thread safety across terminal session boundaries
  • FFI and syscall integration - direct integration with Unix PTY facilities through Rust's foreign function interface

The containerization aspect complements Rust's performance characteristics by providing clean, reproducible environments with minimal overhead. This combination of Rust's performance with Docker's isolation creates a compelling architecture for browser-based terminals that rivals native applications in responsiveness. For developers looking to understand practical applications of Rust's memory safety guarantees in real-world systems programming, this terminal implementation serves as an excellent case study of how ownership, borrowing, and zero-cost abstractions translate into tangible performance benefits.
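The notes above cover the Rust/PTY server side. For context, the browser half of such a terminal typically looks like the hedged TypeScript sketch below: xterm.js renders PTY output and forwards keystrokes over a WebSocket. The endpoint and framing are assumptions; the real protocol is defined by the server implementation discussed in the episode.

```typescript
// Browser side of a WebSocket terminal: xterm.js (the "xterm" npm package)
// renders PTY output and forwards keystrokes. The wss:// endpoint and its
// framing are hypothetical; the server defines the actual protocol.
import { Terminal } from "xterm";

const term = new Terminal({ cursorBlink: true });
term.open(document.getElementById("terminal")!); // assumes a <div id="terminal"> exists

const socket = new WebSocket("wss://example.com/pty"); // hypothetical endpoint
socket.binaryType = "arraybuffer";

// PTY output from the server -> render in the terminal.
socket.addEventListener("message", (event) => {
  term.write(typeof event.data === "string" ? event.data : new Uint8Array(event.data));
});

// Keystrokes from the user -> send to the server, which writes them to the PTY.
term.onData((data) => socket.send(data));
```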

Critical Thinking - Bug Bounty Podcast
Episode 112: Interview with Ciarán Cotter (MonkeHack) - Critical Lab Researcher and Full-time Hunter

Feb 27, 2025 · 67:37


Episode 112: In this episode of Critical Thinking - Bug Bounty Podcast, Joseph Thacker is joined by Ciarán Cotter (Monke) to share his bug hunting journey and give us the rundown on some recent client-side and server-side bugs. Then they discuss WebSockets, SaaS security, and cover some AI news including Grok 3, Nuclei -AI Flag, and some articles by Johann Rehberger.
Follow us on twitter at: https://x.com/ctbbpodcast
Got any ideas and suggestions? Feel free to send us any feedback here: info@criticalthinkingpodcast.io
Shoutout to YTCracker for the awesome intro music!
====== Links ======
Follow your hosts Rhynorater and Rez0 on Twitter:
https://x.com/Rhynorater
https://x.com/rez0__
====== Ways to Support CTBBPodcast ======
Hop on the CTBB Discord at https://ctbb.show/discord!
We also do Discord subs at $25, $10, and $5 - premium subscribers get access to private masterclasses, exploits, tools, scripts, un-redacted bug reports, etc.
You can also find some hacker swag at https://ctbb.show/merch!
Today's Guest - Ciarán Cotter: https://x.com/monkehack
====== Resources ======
Msty: https://msty.app/
From Day Zero to Zero Day: https://nostarch.com/zero-day
Nuclei -ai flag: https://x.com/pdiscoveryio/status/1890082913900982763
ChatGPT Operator: Prompt Injection Exploits & Defenses: https://embracethered.com/blog/posts/2025/chatgpt-operator-prompt-injection-exploits/
Hacking Gemini's Memory with Prompt Injection and Delayed Tool Invocation: https://embracethered.com/blog/posts/2025/gemini-memory-persistence-prompt-injection/
====== Timestamps ======
(00:00:00) Introduction
(00:01:04) Bug Rundowns
(00:13:05) Monke's Bug Bounty Background
(00:20:03) WebSocket Research
(00:34:01) Connecting Hackers with Companies
(00:34:56) Grok 3, Msty, From Day Zero to Zero Day
(00:42:58) Full-time Bug Bounty, SaaS Security, and Threat Modeling while AFK
(00:54:49) Nuclei -ai flag, ChatGPT Operator, and Hacking Gemini's Memory
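One concrete example of the kind of WebSocket research discussed here is checking whether a handshake validates the Origin header (the cross-site WebSocket hijacking class of bugs). A minimal probe using the Node `ws` package might look like the sketch below; the target URL and Origin are placeholders, and it should only ever be run against targets you are authorized to test.

```typescript
// Quick check for cross-site WebSocket hijacking: does the handshake succeed
// with a foreign Origin header? Uses the Node `ws` package (browsers set
// Origin automatically and won't let a page spoof it).
// Placeholders below; only probe targets you are authorized to test.
import WebSocket from "ws";

const target = "wss://target.example.com/socket"; // placeholder
const evilOrigin = "https://attacker.example";    // placeholder

const ws = new WebSocket(target, { headers: { Origin: evilOrigin } });

ws.on("open", () => {
  console.log("handshake accepted with foreign Origin -> worth investigating");
  ws.close();
});

ws.on("unexpected-response", (_req, res) => {
  console.log(`handshake rejected: HTTP ${res.statusCode}`); // Origin likely validated
});

ws.on("error", (err) => console.error("connection error:", err.message));
```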

PodRocket - A web development podcast from LogRocket
Prisma Postgres with Nikolas Burk

Feb 20, 2025 · 28:18


Nikolas Burk, DevRel at Prisma, talks about Prisma Postgres, its unikernel architecture, and its seamless integration with cloud infrastructure. Discover how Prisma Postgres is revolutionizing database management with features like cold start elimination, real-time event handling, and advanced caching strategies! Links https://www.prisma.io/blog/announcing-prisma-postgres-early-access https://x.com/nikolasburk https://www.linkedin.com/in/nikolas-burk-1bbb7b8a https://github.com/nikolasburk We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com (mailto:emily.kochanekketner@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surface the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today (https://logrocket.com/signup/?pdr). Special Guest: Nikolas Burk.

52 Weeks of Cloud
Websockets

Feb 19, 2025 · 8:03


WebSockets in Rust: From Theory to Implementation
Episode Notes for Pragmatic Labs Technical Deep Dive

Introduction [00:00-00:45]
  • WebSockets vs HTTP request-response pattern analogy
  • Real-time communication model comparison
  • Rust's zero-cost abstractions and compile-time guarantees
  • SQLite WebSocket demo introduction

Rust's WebSocket Advantages [01:05-01:47]
  • Zero-cost abstractions implementation
  • Memory safety guarantees preventing vulnerabilities
  • Async/await ecosystem optimization
  • Strong type system for message handling
  • Ownership model for connection lifecycles
  • Cross-platform compilation capabilities

Project Implementation Details [01:53-02:16]
  • Tokio async runtime efficiency
  • Structured error handling patterns
  • Thread-safe SQLite connections
  • Clean architectural separation
  • Deployment considerations for embedded systems

WebSocket Core Concepts [02:34-03:35]
  • Full-duplex TCP communication protocol
  • Persistent connection characteristics
  • Bi-directional data flow mechanisms
  • HTTP upgrade process
  • Frame-based message transfer
  • Minimal protocol overhead benefits

Technical Implementation [03:35-04:00]
  • HTTP request upgrade header process
  • WebSocket URL scheme structure
  • Initial handshake protocol
  • Binary/text message frame handling
  • Connection management strategies

Advantages Over HTTP [04:00-04:20]
  • Reduced latency benefits
  • Lower header overhead
  • Eliminated connection establishment costs
  • Server push capabilities
  • Native browser support
  • Event-driven architecture suitability

Common Use Cases [04:20-04:36]
  • Real-time collaboration tools
  • Live data streaming systems
  • Financial market data updates
  • Multiplayer game state synchronization
  • IoT device communication
  • Live monitoring systems

Rust Implementation Specifics [04:36-05:16]
  • Actor model implementation
  • Connection state management with Arc
  • Graceful shutdown with tokio::select
  • Connection management heartbeats
  • WebSocket server scaling considerations

Performance Characteristics [05:36-06:15]
  • Zero-cost futures in practice
  • Garbage collection elimination
  • Compile-time guarantee benefits
  • Predictable memory usage patterns
  • Reduced server load metrics

Project Structure [06:15-06:52]
  • ws.rs: connection handling
  • db.rs: database abstraction
  • errors.rs: error type hierarchy
  • models.rs: data structure definitions
  • main.rs: system orchestration
  • Browser API integration points

Real-World Applications [07:10-08:02]
  • Embedded systems implementation
  • Computer vision integration
  • Real-time data processing
  • Space system applications
  • Resource-constrained environments

Key Technical Takeaways
  • Rust's ownership model enables efficient WebSocket implementations
  • Zero-cost abstractions provide performance benefits
  • Thread safety guaranteed through the type system
  • Async runtime optimized for real-time communication
  • Clean architecture promotes maintainable systems

Resources
  • Full code examples available on Pragmatic Labs
  • SQLite WebSocket demo repository
  • Implementation walkthroughs
  • Embedded system deployment guides
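The episode's implementation is in Rust (Tokio + SQLite), which isn't reproduced here. As a language-neutral illustration of the core concepts in these notes (HTTP upgrade, persistent full-duplex connection, frame-based messages), here is a minimal echo server sketch in TypeScript using the Node `ws` package:

```typescript
// Minimal WebSocket echo server (Node + the `ws` package), shown only to
// illustrate the concepts in the notes; the episode's actual code is Rust.
import { WebSocketServer } from "ws";

// `ws` performs the HTTP Upgrade handshake (Connection: Upgrade,
// Upgrade: websocket, Sec-WebSocket-Key/Accept) before emitting "connection".
const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (socket) => {
  // After the upgrade, both sides exchange lightweight frames over the same
  // TCP connection: full duplex, with no per-message HTTP headers.
  socket.on("message", (data, isBinary) => {
    socket.send(data, { binary: isBinary }); // echo text or binary frames back
  });
});

// A browser client would connect with: new WebSocket("ws://localhost:8080")
```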

Les Cast Codeurs Podcast
LCC 319 - le ramasse-miettes-charognes

Dec 16, 2024 · 70:05


In this episode, in audio and video (youtube.com/lescastcodeurs), Guillaume and Emmanuel discuss Go's 15th anniversary, a new approach to garbage collection, LLMs in Java applications, observability, a supply chain attack via javac, and more. Recorded December 13, 2024. Episode download: LesCastCodeurs-Episode-319.mp3

News

Languages

Go celebrates its 15th anniversary! https://go.dev/blog/15years A look back at the 15 years: the for-loop gotchas were fixed (notably, loop variables are now loop-scoped); the compile only fails when you require a newer Go version since Go 1.21, alongside toolchain management (which only arrived in 2023!); opt-in telemetry is also recent.

Building OpenJDK from source on macOS https://www.morling.dev/blog/building-openjdk-from-source-on-macos/ Surprisingly, it is not very complicated.

A paper on the mark-scavenge approach for garbage collection https://inside.java/2024/11/22/mark-scavenge-gc/ A research paper: using reachability as a proof of liveness is not ideal, since an object can be reachable yet never accessed again by the program. The regions with the fewest live objects have their objects moved to another region so the region can be freed; that is classic GC behavior. Two methods: mark-evacuate, which does it in two phases so liveness can change in between, and scavenge, which moves a live object as soon as it is discovered. They ran experiments with ZGC to see which objects were considered live and moved needlessly; the results show a high rate of objects moved for nothing. They propose a different algorithm: mark live objects but don't move them before the next GC, to give them a chance to become unreachable. This eliminates many useless relocations, since objects become unreachable within one GC cycle. Up to a 91% reduction! Particularly notable on CPU-loaded machines.

Short-lived vs long-lived access tokens https://grayduck.mn/2023/04/17/refresh-vs-long-lived-access-tokens/ Why long-lived tokens (i.e. refresh tokens) are used to obtain short-lived access tokens in OAuth 2.0. Refresh tokens simplify revocation: only the auth server has to check revocation, while clients only check expiration and signature validity. Refresh tokens are only sent between endpoints, whereas access tokens travel around quite a bit, so trust boundaries are not crossed by the refresh token. Because a refresh token is used infrequently, it can be protected in an enclave. Grant changes are simpler while remaining distributable. A history of access and refresh tokens; access tokens make it easier to trace abuse and attacks. The downsides: the flow is more complicated, and the auth server is a SPOF, though that can be mitigated. (A minimal sketch of the refresh exchange appears after these show notes.)

Java Advent is back https://www.javaadvent.com/calendar Backstage; Java integrity by default (and its consequences for the ecosystem); Timefold (solver); JUnit 5 extensions; OpenTelemetry via Java agent vs Micrometer; static code analysis; CQRS and modern Java features; simple Java (no compilation, no objects); full-stack dev with Quarkus as backend.

José Paumard introduces and explains Gatherers in Java 24 in this video https://inside.java/2024/11/26/jepcafe23/

Libraries

Micronaut 4.7, with LangChain4j integration https://micronaut.io/2024/11/14/micronaut-framework-4-7-0-released/

Combining the Spock test framework and Cucumber https://www.sfeir.dev/back/spock-framework-revolutionnez-vos-tests-unitaires-avec-la-puissance-du-bdd-et-de-cucumber/ Domain experts can write their tests in the Gherkin (Cucumber) format and developers implement the corresponding assertions thanks to the Spock integration, for very readable tests.

Spring 6.2 https://spring.io/blog/2024/11/14/spring-framework-6-2-0-available-now @Fallback beans, improvements to SpEL and to test support, support for escaping property placeholders, new background bean initialization, and plenty of other things.

How to build a Java LLM application running 100% in Java with Jlama https://quarkus.io/blog/quarkus-jlama/ A blog post by Mario Fusco, Mr. API, Java, and Drools; uses Jlama + Quarkus + LangChain4j. It explains the advantages of the pure-Java approach, such as a single lifecycle, testing models quickly, security (everything runs in-process), a monolith (ahahah), simplified observability, simplified distribution (e.g. embedded apps), etc.

Vert.x 5 reaches its second release candidate https://vertx.io/blog/eclipse-vert-x-5-candidate-2-released/ Support for Java modules (although many Vert.x modules themselves don't support them yet), io_uring support in Vert.x core, client-side load balancing, the callback model is no longer supported (long live Futures), many improvements around gRPC, and more.

An article on Spring AI and audio multimodality https://spring.io/blog/2024/12/05/spring-ai-audio-modality Shows how the Spring AI APIs are evolving; relies on the latest OpenAI models; examples such as a voice chatbot, i.e. how to record voice and pass it to OpenAI.

How to enable experimental HTTP/3 support in Spring Boot https://spring.io/blog/2024/11/26/http3-in-reactor-2024 Netty does the work, with Spring on top; the article describes the steps to use it in your Spring Boot or Spring Cloud Gateway apps, and it also explains the client side (client app), which is nice.

Infrastructure

An overview of observability offerings http://blog.ippon.fr/2024/11/18/observabilite-informatique-comprendre-les-bases-2eme-partie/ A survey of the main open source and SaaS observability offerings, plus a few outsiders. Not bad as a starting point to figure out what would suit you. From the Ippon blog.

Web

Angular 19 is out https://blog.ninja-squad.com/2024/11/19/what-is-new-angular-19.0/ Stable Signal APIs, automatic migration to signals, standalone components by default, new linkedSignal and resource APIs, big improvements to SSR and HMR. Sfeir also has an article on Angular 19 https://www.sfeir.dev/front/angular-19-tout-ce-quil-faut-savoir-sur-les-innovations-majeures-du-framework/ Standalone components by default (to limit dependency problems), with a strict mode to enforce it (or fail); flagging of unused imports; @let for local variables in templates; linkedSignal (experimental) to link signals together (a cascade of changes following an event); incremental hydration (content becomes progressively interactive as it loads, on the visible or needed parts of the page) and event replay; routing and render modes for hybrid rendering; hot module replacement; etc.

The State of Frontend, the latest compilation of developer preferences on the front end https://tsh.io/state-of-frontend/ React leads, followed by Vue and Svelte; Angular is only 4th. Among rendering frameworks, Next.js has an absolute majority, then come Nuxt and Astro. Zod is the preferred validation solution. For date handling, date-fns leads, followed by moment.js. For state management, React Context API is in first place, but the runners-up are also all for React! Heavy use of lodash for all sorts of utilities. For fetching remote resources, the native Fetch API and Axios are the two winners. For deployment, Vercel is first. For CI/CD, lots of GitHub Actions, followed by GitLab CI. For package management, despite good alternatives, npm still takes the lion's share. Overwhelming use of Node.js as the JavaScript runtime for front-end development. As for typing, many use TypeScript and a bit of JSDoc, and the majority of respondents think TypeScript has overtaken JavaScript in usage. Among native browser APIs, Fetch, Storage, and WebSockets are the most used. PWA popularity should keep following its merry way. For design systems, shadcn/ui leads, followed by Material, then Bootstrap. For styling, a good mix of plain old CSS, Tailwind, and Sass/CSS. Jest is first among test frameworks. Three quarters of front-end developers use Visual Studio Code, and JetBrains picks up the remaining quarter. For builds, Vite collects four fifths of the votes. ESLint and Prettier are the two favorites for checking code.

Sometimes you would like to be able to try out a JavaScript library or framework without having to set up a whole project, with a build tool and all the rest.
Julia Evans explores the different scenarios, depending on how these libraries are bundled https://jvns.ca/blog/2024/11/18/how-to-import-a-javascript-library/ Some libraries only need a simple import in a script tag; some frameworks are distributed as Universal Module Definition, as CommonJS, or as ES modules; frankly, as a noob, it's still complicated.

Data and Artificial Intelligence

The impact of AI in the enterprise when document access is a bit lax https://archive.ph/uPyhX Indexing grabs everything it can, and AI is very powerful at steering queries and extracting data that should have been more restricted.

Different ways to do data extraction and force an LLM's hand into generating JSON https://glaforge.dev/posts/2024/11/18/data-extraction-the-many-ways-to-get-llms-to-spit-json-content/ The "ask the LLM nicely" approach, using prompt engineering; using function calling for models that support the feature, in particular before the "JSON mode" or "JSON schema" approaches; or, if the model supports it, the "JSON mode" (still with a bit of prompting) that forces the LLM to generate valid JSON; even better, with the ability to specify a JSON schema (OpenAPI-style) so that the output JSON is compliant with the proposed schema.

How to mask confidential data in your exchanges with LLMs https://glaforge.dev/posts/2024/11/25/redacting-sensitive-information-when-using-generative-ai-models/ Uses the Google Cloud Data Loss Prevention API, which identifies and then redacts personally identifiable information ("PII", such as a name, a bank account, a passport number, etc.), for security and privacy reasons and to avoid the data breaches we hear about far too often in the news.

You can use certain embedding models to search for code https://glaforge.dev/posts/2024/12/02/semantic-code-search-for-programming-idioms-with-langchain4j-and-vertex-ai-embedding-models/ Guillaume searches for code snippets by entering a query in natural language. Some embedding models support different task types, such as question/answer, natural-language question with code as the result, or other tasks like fact checking. This article uses the Google Cloud Vertex AI model, in Java, with LangChain4j.

Google releases version 2 of Gemini Flash https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/ The new Gemini 2.0 Flash even beats Gemini 1.5 Pro in the benchmarks, while being twice as fast as Gemini 1.5 Pro; pricing isn't announced yet, but presumably it is also more affordable. Google presents Gemini 2 as the ideal LLM for "agents". Gemini offers true multimodality in its output (the first LLM on the market to do so): Gemini 2 can interleave text, images, and audio. Gemini 2 supports more than 100 languages, with 8 fairly natural, high-quality voices for the audio part. There is a new live speech-to-speech mode where you can even interrupt the LLM; that is what is used in Project Astra, the mobile application shown at Google I/O that becomes a real live voice assistant on your phone. Google also announced a new experiment around programming assistants, Project Jules, which you can also talk to live and share your code with, like a real pair programmer. Google presented Project Mariner, an agent in the form of a Chrome extension that lets you drive your browser as your personal research assistant, able to search the web and navigate websites to find the information you're looking for. This other article shows several demo videos of these features: https://developers.googleblog.com/en/the-next-chapter-of-the-gemini-era-for-developers/ A new project called Deep Research lets you produce reports in Gemini Advanced: you give it a topic, the agent proposes a plan for a report on that topic (which you can validate or adjust), and then Deep Research does web research for you and synthesizes it into a final report https://blog.google/products/gemini/google-gemini-deep-research/ Finally, Google AI Studio, besides letting you experiment with Gemini 2, also offers "starter apps" showing how to do object recognition in images, how to run searches with an agent connected to Google Maps, and so on. Google AI Studio also lets you share your screen with it, on mobile or desktop, so you can use it as an assistant that can see what you're doing and what you're coding and can answer your questions.

Methodologies

A GitHub article on the impact of CPU over-utilization on application performance https://github.blog/engineering/architecture-optimization/breaking-down-cpu-speed-how-utilization-impacts-performance/ Surprisingly, they see effects of around 30% on performance, due to thermal limits and the frequency boost behavior that follows. They therefore looked for the golden ratio, which for them is around 60%. They take chunks of a Kubernetes cluster to run the workloads and add artificial CPU workloads (math-style) on top.

Security

Supply chain attack via javac https://xdev.software/en/news/detail/discovering-the-perfect-java-supply-chain-attack-vector-and-how-it-got-fixed It relies on annotation processors: the annotation processors of your dependencies are loaded and executed when the project is built, looked up on the user classpath (via the ServiceLoader pattern). So if a dependency is compromised and an annotation processor is added or modified, you have an attack vector at compile time of the targeted project, basically as soon as you open the IDE. Workaround: enable -proc:none and enable annotation processors explicitly in your build tool. Some improvements in the JDK: the compiler reports that it is executing an annotation processor in Java 23+, and annotation processors are disabled by default.

Conferences

The list of conferences, from the Developers Conferences Agenda/List by Aurélie Vache and contributors:
December 19, 2024: Normandie.ai 2024 - Rouen (France)
January 20, 2025: Elastic{ON} - Paris (France)
January 22-25, 2025: SnowCamp 2025 - Grenoble (France)
January 24-25, 2025: Agile Games Île-de-France 2025 - Paris (France)
January 30, 2025: DevOps D-Day #9 - Marseille (France)
February 6-7, 2025: Touraine Tech - Tours (France)
February 21, 2025: LyonJS 100 - Lyon (France)
February 28, 2025: Paris TS La Conf - Paris (France)
March 20, 2025: PGDay Paris - Paris (France)
March 20-21, 2025: Agile Niort - Niort (France)
March 25, 2025: ParisTestConf - Paris (France)
March 26-29, 2025: JChateau Unconference 2025 - Cour-Cheverny (France)
March 28, 2025: DataDays - Lille (France)
March 28-29, 2025: Agile Games France 2025 - Lille (France)
April 3, 2025: DotJS - Paris (France)
April 10-11, 2025: Android Makers - Montrouge (France)
April 10-12, 2025: Devoxx Greece - Athens (Greece)
April 16-18, 2025: Devoxx France - Paris (France)
April 29-30, 2025: MixIT - Lyon (France)
May 7-9, 2025: Devoxx UK - London (UK)
May 16, 2025: AFUP Day 2025 Lille - Lille (France)
May 16, 2025: AFUP Day 2025 Lyon - Lyon (France)
May 16, 2025: AFUP Day 2025 Poitiers - Poitiers (France)
May 24, 2025: Polycloud - Montpellier (France)
June 5-6, 2025: AlpesCraft - Grenoble (France)
June 11-13, 2025: Devoxx Poland - Krakow (Poland)
June 12-13, 2025: Agile Tour Toulouse - Toulouse (France)
June 12-13, 2025: DevLille - Lille (France)
June 24, 2025: WAX 2025 - Aix-en-Provence (France)
June 26-27, 2025: Sunny Tech - Montpellier (France)
July 1-4, 2025: Open edX Conference 2025 - Palaiseau (France)
September 18-19, 2025: API Platform Conference - Lille (France) & Online
October 2-3, 2025: Volcamp - Clermont-Ferrand (France)
October 6-10, 2025: Devoxx Belgium - Antwerp (Belgium)
October 16-17, 2025: DevFest Nantes - Nantes (France)
November 6, 2025: dotAI 2025 - Paris (France)
November 12-14, 2025: Devoxx Morocco - Marrakech (Morocco)
April 23-25, 2026: Devoxx Greece - Athens (Greece)
June 17, 2026: Devoxx Poland - Krakow (Poland)

Contact us

To react to this episode, come discuss it on the Google group https://groups.google.com/group/lescastcodeurs
Contact us on Twitter https://twitter.com/lescastcodeurs
Send in a crowdcast or a crowdquestion
Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs
All episodes and all the info at https://lescastcodeurs.com/
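Picking up the refresh-token item from the notes above, here is a minimal, provider-agnostic sketch of the OAuth 2.0 refresh_token exchange in TypeScript; the token endpoint and client_id are placeholders, not a specific identity provider:

```typescript
// Generic OAuth 2.0 refresh-token exchange: the short-lived access token
// travels with every API call, while the long-lived refresh token is only
// ever sent to the authorization server. Endpoint and client_id are placeholders.
async function refreshAccessToken(refreshToken: string): Promise<{ accessToken: string; expiresIn: number }> {
  const res = await fetch("https://auth.example.com/oauth/token", { // placeholder
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "refresh_token",
      refresh_token: refreshToken,
      client_id: "my-client-id", // placeholder
    }),
  });
  if (!res.ok) throw new Error(`token refresh failed: HTTP ${res.status}`); // e.g. token revoked
  const json = await res.json();
  return { accessToken: json.access_token, expiresIn: json.expires_in };
}
```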

Kubernetes Podcast from Google
Kubernetes v1.32: Penelope, with Frederico Muñoz

Dec 12, 2024 · 44:14


Release Lead Frederico Muñoz walks us through the 1.32 release: new feature highlights, deprecations and removals, and the release theme: Penelope.   Do you have something cool to share? Some questions? Let us know: - web: kubernetespodcast.com - mail: kubernetespodcast@google.com - twitter: @kubernetespod   News of the week Gateway API v1.2: WebSockets, Timeouts, Retries, and More Streamline Kubernetes cluster management with new Amazon EKS Auto Mode Top announcements of AWS re:Invent 2024 Links from the interview Kubernetes v1.32: Penelope (Release Blog) SIG ContribEx Comms Kubernetes Contributor Awards 2023 (Frederico received an award last year) The Odyssey by Homer, Samuel Butler on Google Books Releases on Kubernetes.io Links from the post-interview chat Kubernetes 1.27: In-place Resource Resize for Kubernetes Pods (alpha)  

Smart Software with SmartLogic
Telemetry & Observability for Elixir Apps at Cars.com with Zack Kayser & Ethan Gunderson

Dec 12, 2024 · 42:39


Zack Kayser and Ethan Gunderson, Software Engineers at Cars Commerce, join the Elixir Wizards to share their expertise on telemetry and observability in large-scale systems. Drawing from their experience at Cars.com—a platform handling high traffic and concurrent users—they discuss the technical and organizational challenges of scaling applications, managing microservices, and implementing effective observability practices. The conversation highlights the pivotal role observability plays in diagnosing incidents, anticipating system behavior, and asking unplanned questions of a system. Zack and Ethan explore tracing, spans, and the unique challenges introduced by LiveView deployments and WebSocket connections. They also discuss the benefits of OpenTelemetry as a vendor-agnostic instrumentation tool, the significance of Elixir's telemetry library, and practical steps for developers starting their observability journey. Additionally, Zack and Ethan introduce their upcoming book, Instrumenting Elixir Applications, which will offer guidance on integrating telemetry and tracing into Elixir projects. Topics Discussed: Cars.com's transition to Elixir and scaling solutions The role of observability in large-scale systems Uncovering insights by asking unplanned system questions Managing high-traffic and concurrent users with Elixir Diagnosing incidents and preventing recurrence using telemetry Balancing data collection with storage constraints Sampling strategies for large data volumes Tracing and spans in observability LiveView's influence on deployments and WebSocket behavior Mitigating downstream effects of socket reconnections Contextual debugging for system behavior insights Observability strategies for small vs. large-scale apps OpenTelemetry for vendor-agnostic instrumentation Leveraging OpenTelemetry contrib libraries for easy setup Elixir's telemetry library as an ecosystem cornerstone Tracing as the first step in observability Differentiating observability from business analytics Profiling with OpenTelemetry Erlang project tools The value of profiling for performance insights Making observability tools accessible and impactful for developers Links Mentioned https://www.carscommerce.inc/ https://www.cars.com/ https://hexdocs.pm/telemetry/readme.html https://kubernetes.io/ https://github.com/ninenines/cowboy https://hexdocs.pm/bandit/Bandit.html https://hexdocs.pm/broadway/Broadway.html https://hexdocs.pm/oban/Oban.html https://www.dynatrace.com/ https://www.jaegertracing.io/ https://newrelic.com/ https://www.datadoghq.com/ https://www.honeycomb.io/ https://fly.io/phoenix-files/how-phoenix-liveview-form-auto-recovery-works/ https://www.elastic.co/ https://opentelemetry.io/ https://opentelemetry.io/docs/languages/erlang/ https://opentelemetry.io/docs/concepts/signals/traces/ https://opentelemetry.io/docs/specs/otel/logs/ https://github.com/runfinch/finch https://hexdocs.pm/telemetry_metrics/Telemetry.Metrics.html https://opentelemetry.io/blog/2024/state-profiling https://www.instrumentingelixir.com/ https://prometheus.io/ https://www.datadoghq.com/dg/monitor/ts/statsd/ https://x.com/kayserzl https://github.com/zkayser https://bsky.app/profile/ethangunderson.com  https://github.com/open-telemetry/opentelemetry-collector-contrib Special Guests: Ethan Gunderson and Zack Kayser.
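The episode's examples are Elixir (the :telemetry library and the Erlang/Elixir OpenTelemetry packages), which aren't shown here. As a vendor-agnostic illustration of the span concept discussed, here is a small sketch against the OpenTelemetry API in TypeScript; it assumes an OpenTelemetry SDK and exporter are configured elsewhere, and the service and span names are invented:

```typescript
// Vendor-agnostic tracing sketch with the OpenTelemetry API.
// Assumes an OpenTelemetry SDK/exporter is set up elsewhere in the process.
import { trace, SpanStatusCode } from "@opentelemetry/api";

const tracer = trace.getTracer("listings-service"); // invented service name

async function loadListing(listingId: string): Promise<void> {
  // startActiveSpan makes this span the parent of anything started inside it,
  // so downstream HTTP/database spans nest under it in the trace view.
  await tracer.startActiveSpan("load-listing", async (span) => {
    try {
      span.setAttribute("listing.id", listingId);
      // ... fetch from the database, call pricing services, etc.
    } catch (err) {
      span.setStatus({ code: SpanStatusCode.ERROR });
      throw err;
    } finally {
      span.end();
    }
  });
}
```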

Modernize or Die ® Podcast - CFML News Edition
Modernize or Die® - CFML News Podcast for November 19th, 2024 - Episode 223

Nov 19, 2024 · 61:01


2024-11-19 Weekly News — Episode 223Watch the video version on YouTube at https://youtube.com/live/bFX1uaN5Hec?feature=share Hosts: Gavin Pickin - Senior Developer at Ortus SolutionsEric Peterson - Senior Developer at Ortus SolutionsBig Thanks to our Sponsor - Ortus SolutionsThe makers of ColdBox, CommandBox, ForgeBox, TestBox and all your favorite box-es out there including BoxLang.A few ways to say thanks back to Ortus Solutions:Buy Tickets to Into the Box 2025 in Washington DC https://t.co/cFLDUJZEyMApril 30, 2025 - May 2, 2025 - Washington, DCLike and subscribe to our videos on YouTube. Help ORTUS reach for the Stars - Star and Fork our ReposStar all of your Github Box Dependencies from CommandBox with https://www.forgebox.io/view/commandbox-github Subscribe to our Podcast on your Podcast Apps and leave us a reviewSign up for a free or paid account on CFCasts, which is releasing new content regularlyBOXLife store: https://www.ortussolutions.com/about-us/shopBuy Ortus's Books102 ColdBox HMVC Quick Tips and Tricks on GumRoad (http://gum.co/coldbox-tips)Now on Amazon! In hardcover too!!!https://www.amazon.com/dp/B0CJHB712MLearn Modern ColdFusion (CFML) in 100+ Minutes - Free online https://modern-cfml.ortusbooks.com/ or buy an EBook or Paper copy https://www.ortussolutions.com/learn/books/coldfusion-in-100-minutes  Patreon Support (jolly)We have 59 patreons: https://www.patreon.com/ortussolutions. New Releases and UpdatesCommandBox 6.1.0 Released!We are pleased to announce the release of CommandBox 6.1.0, the latest release of our CLI, REPL, and Server, and Package Manager.  This is a minor update to our last release.  It has a handful of new features, and bug fixes, as well as better out-of-the-box support for BoxLang, our new CFML-compatible JVM language.New FeaturesWebSocket ServerUpdates to run BoxLangAdd command to deploy Lucee lex or lco filesCheck if an entry has a hash associated to it and validate itImprovementSort by date last started when finding a server by web rootMake semantic version prerelease identifiers not case sensitivedefault servlet pass predicate include Boxlang filesTasksUpdate to Undertow 2.2.33.FinalUpdate to Lucee 5.4.6.9Update bundled JRE to 11.0.23+99 Bugshttps://www.ortussolutions.com/blog/commandbox-610-released https://commandbox.ortusbooks.com/ BoxLang Beta 22 Released11/15/24 - BoxLang 1.0.0 Beta 22 Launched4 New Features10 Improvements14 Bugs Fixedhttps://boxlang.ortusbooks.com/readme/release-history/1.0.0-beta22 Webinars, Meetups and WorkshopsBoost Your Confidence & Silence the Inner Critic: Overcome Imposter Phenomenon!Sac Interactive Tech Meetup • Sacramento, CAWed, Nov 20 · 6:00 PM PSThttps://www.meetup.com/sacinteractive/events/303708476/?eventOrigin=home_page_upcoming_events$all Online ColdFusion Meetup - "ColdFusion Horizons: Unveiling 2025", Charvi Dhoot (CF Product Manager)--CFMeetup #311Nov 21 at 12p US EasternOver the years, ColdFusion has evolved to support not only the development of dynamic web pages but also the creation of complex applications and services. It remains a popular choice for developers seeking a versatile and efficient platform for building web-based solutions and business applications.As we look ahead to the imminent release of ColdFusion 2025, we invite you to join us for an exclusive feature showcase that highlights the compelling reasons to adopt this upcoming version. 
Additionally, this session will provide a comprehensive overview of the strategic vision for future releases, offering an opportunity for you to engage with new ideas under consideration and contribute valuable feedback.Attend this talk to gain deeper insight into the current and future releases of ColdFusion on the horizon!https://www.meetup.com/coldfusionmeetup/events/304633294/ CFCasts Content Updateshttps://www.cfcasts.comIntroducing SocketBoxThis innovative library simplifies WebSocket integration, making real-time features and message handling a breeze. Build engaging applications with SocketBox – get started today! #CFML #BoxLang #WebSockets #RealTime https://www.cfcasts.com/series/ortus-bytes/videos/introducing-socketbox Conferences and TrainingCF Summit India 2024Join us for the Adobe ColdFusion India Summit, a premier, completely free event where developers, industry experts, and thought leaders come together to explore the latest in ColdFusion and web development. Network with peers, meet the ColdFusion engineering team, get your questions answered, discover strategies to boost your career and grow your business—all at no cost.Price: FreeDecember 7, 20242 Cities: Bengaluru and NoidaRegister: https://cf-indiasummit-2024.attendease.com ITB 2025Location: Washington, DCDates: April 30, 2025 - May 2, 2025 - Washington, DCTickets and more info: https://t.co/cFLDUJZEyM50% off blind tickets$249.50 for the Conference$349.50 for the Conference + Workshop!!!Call for Speakers CLOSEDCFCamp 2025May 22, 23rd - 2025Atomis Hotel Munich Airporthttps://www.cfcamp.org/ Call for Speakers open - https://www.papercall.io/cfcamp2025More conferencesNeed more conferences, this site has a huge list of conferences for almost any language/community.https://confs.tech/Blogs, Posts, and Videos of the Week11/19/24 - Blog - Ortus Solutions - 5 Signs It's Time to Modernize Your ColdFusion / CFML ApplicationColdFusion has long been a reliable platform for building web applications, but like any technology, it requires maintenance and modernization over time. Whether you're using Lucee or Adobe ColdFusion, it's critical to recognize the signs that your application is no longer meeting today's standards in performance, security, and scalability. Let's explore five clear indicators th...

Syntax - Tasty Web Development Treats
844: TypeScript, Branded Types, Streaming vs Polling + More

Syntax - Tasty Web Development Treats

Play Episode Listen Later Nov 6, 2024 56:22


Scott and Wes serve up listener questions on everything from SvelteKit's limits and the quirks of branded types in TypeScript to handling email queues and secure token storage in cookies. Plus, they get into app security, the evolution of checkout flows, and why QA teams can actually be game-changers for dev teams. Show Notes 00:00 Welcome to Syntax! 00:50 Catching up. 01:52 Brought to you by Sentry.io. 02:56 What are the limitations of SvelteKit? 06:41 Svelte 5 updates. 07:53 Branded types in TypeScript. EggHead.io Blog. 11:56 Queue applications and a ‘poor man's queue'. 17:20 The real value of a functional QA team. 21:34 Invoker commands. Invokers Explained. Denver Script Talk. 26:29 Growing security and permissions concerns. Little Snitch. 33:03 Stripe vs PayPal in 2024. 38:24 Christmas gift guide. 38:39 Websockets vs streams vs polling. 41:04 Storing access and refresh tokens in a cookie. 45:41 Shipping with TypeScript errors. 49:34 Sick Picks + Shameless Plugs. Sick Picks Scott: The Black Stuff Deodorant. Wes: Apple Watch Charging Brick. Shameless Plugs Scott: Syntax on YouTube. Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads
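Since branded types come up in the questions above, here is a minimal TypeScript sketch of the pattern, assuming nothing beyond the language itself; the Brand helper and the UserId/OrderId names are illustrative, not anything from the episode.

```typescript
// A "brand" attaches a phantom marker to a primitive so that structurally
// identical values (two different kinds of string IDs) can't be mixed up.
type Brand<T, Name extends string> = T & { readonly __brand: Name };

type UserId = Brand<string, "UserId">;
type OrderId = Brand<string, "OrderId">;

// The cast happens only inside these small constructor helpers.
const asUserId = (id: string): UserId => id as UserId;
const asOrderId = (id: string): OrderId => id as OrderId;

function loadUser(id: UserId): void {
  console.log(`loading user ${id}`);
}

const userId = asUserId("u_123");
const orderId = asOrderId("o_456");

loadUser(userId);     // OK
// loadUser(orderId); // compile-time error: OrderId is not assignable to UserId
```

Because the cast lives only in the constructor functions, the rest of the codebase gets compile-time protection against passing the wrong kind of string, while at runtime the values remain plain strings.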

Modernize or Die ® Podcast - CFML News Edition
Modernize or Die® - CFML News Podcast for November 5th, 2024 - Episode 222

Modernize or Die ® Podcast - CFML News Edition

Play Episode Listen Later Nov 5, 2024 42:57


2024-11-05 Weekly News — Episode 222
Watch the video version on YouTube.

Hosts:
- Gavin Pickin - Senior Developer at Ortus Solutions
- Grant Copley - Senior Developer at Ortus Solutions

Big Thanks to our Sponsor - Ortus Solutions
The makers of ColdBox, CommandBox, ForgeBox, TestBox and all your favorite box-es out there, including BoxLang.

A few ways to say thanks back to Ortus Solutions:
- Buy tickets to Into the Box 2025 in Washington, DC (April 30 - May 2, 2025): https://t.co/cFLDUJZEyM
- Like and subscribe to our videos on YouTube.
- Help Ortus reach for the stars - star and fork our repos. Star all of your GitHub Box dependencies from CommandBox with https://www.forgebox.io/view/commandbox-github
- Subscribe to our podcast on your podcast app and leave us a review.
- Sign up for a free or paid account on CFCasts, which is releasing new content regularly.
- BOXLife store: https://www.ortussolutions.com/about-us/shop
- Buy Ortus's books: 102 ColdBox HMVC Quick Tips and Tricks on Gumroad (http://gum.co/coldbox-tips), now on Amazon in hardcover too: https://www.amazon.com/dp/B0CJHB712M
- Learn Modern ColdFusion (CFML) in 100+ Minutes - free online at https://modern-cfml.ortusbooks.com/ or buy an ebook or paper copy at https://www.ortussolutions.com/learn/books/coldfusion-in-100-minutes

Patreon Support
We have 59 patrons: https://www.patreon.com/ortussolutions

News and Announcements

GitHub Universe
This year marks our tenth GitHub Universe—and one theme has remained constant: our focus on developers and the developer experience. Over 10 years, that developer experience has evolved from inventing the pull request to building the world's most widely adopted AI coding tool, GitHub Copilot. Today, our platform serves more than 100 million developers, and through the power of this interconnected community combined with generative AI, we are enabling every developer to build, release, scale, and secure software rapidly.
Recap: https://github.blog/news-insights/product-news/universe-2024-previews-releases/
Videos: https://githubuniverse.com/?utm_source=github&utm_medium=banner&utm_campaign=24bannerheader16li-newsroom-press-releases-github-universe-2024

New Releases and Updates

BoxLang Betas — Beta 20-21
10/25/24 - BoxLang 1.0.0 Beta 20 launched: 6 new features, 11 improvements, 15 bugs fixed.
https://www.ortussolutions.com/blog/boxlang-100-beta-20-launched
11/1/24 - BoxLang 1.0.0 Beta 21 launched: 4 new features, 5 improvements, 14 bugs fixed.
https://www.ortussolutions.com/blog/boxlang-100-beta-21-launched

Webinars, Meetups and Workshops

Boost Your Confidence & Silence the Inner Critic: Overcome Imposter Phenomenon!
Sac Interactive Tech Meetup • Sacramento, CA
Wed, Nov 20 · 6:00 PM PST
https://www.meetup.com/sacinteractive/events/303708476/?eventOrigin=home_page_upcoming_events$all

CFCasts Content Updates
https://www.cfcasts.com
Introducing SocketBox
This innovative library simplifies WebSocket integration, making real-time features and message handling a breeze. Build engaging applications with SocketBox – get started today! #CFML #BoxLang #WebSockets #RealTime
https://www.cfcasts.com/series/ortus-bytes/videos/introducing-socketbox

Conferences and Training

CF Summit India 2024
Join us for the Adobe ColdFusion India Summit, a premier, completely free event where developers, industry experts, and thought leaders come together to explore the latest in ColdFusion and web development. Network with peers, meet the ColdFusion engineering team, get your questions answered, and discover strategies to boost your career and grow your business—all at no cost.
Price: Free
Date: December 7, 2024
2 cities: Bengaluru and Noida
Register: https://cf-indiasummit-2024.attendease.com

ITB 2025
Location: Washington, DC
Dates: April 30, 2025 - May 2, 2025
Tickets and more info: https://t.co/cFLDUJZEyM
50% off blind tickets: $249.50 for the conference, $349.50 for the conference + workshop!
Call for speakers announced - deadline Nov 15th: https://www.ortussolutions.com/blog/call-for-speakers-into-the-box-2025

CFCamp 2025
May 22-23, 2025
Atomis Hotel Munich Airport
https://www.cfcamp.org/

More conferences
Need more conferences? This site has a huge list of conferences for almost any language/community: https://confs.tech/

Blogs, Posts, and Videos of the Week

11/4/24 - Blog - Ortus Solutions - Thanksgiving Day Deals: Access Exclusive Discounts Until November 30th!
Prepare for the Ortus Thanksgiving Day Deals! This is your chance to enhance your development journey with deals catered to everyone—from experienced developers to tech aficionados. Our offers are crafted to help you reach your objectives while conserving your budget. Use the code THKSGIVING24 at checkout to take advantage of these discounts before they end.
https://www.ortussolutions.com/blog/thanksgiving-day-deals-access-exclusive-discounts-until-november-30th
Applies to: ColdFusion Consulting, CFCasts, Couchbase/Redis Lucee Extensions, BoxLang Visionary License

11/1/24 - Blog - Ortus Solutions - Ortus October Newsletter 2024
Welcome to the October Ortus Newsletter! This month is packed with content designed to boost your web development productivity, deepen your technical knowledge, and empower your development projects. We've packed this edition with cutting-edge news, insightful updates, and a lineup of valuable resources across our platforms.
https://www.ortussolutions.com/blog/ortus-october-newsletter-2024

10/31/24 - Blog - Ben Nadel - Safe-Navigation Operator ...

Learn System Design
Mastering System Design Interview: Unlocking API Design, Crafting Routes, and Real-Time Data Transfer Techniques

Learn System Design

Play Episode Listen Later Nov 5, 2024 34:41 Transcription Available


Unlock the secrets of API design and elevate your system design skills with our latest episode featuring me, Benny Kitchell. Explore the pivotal role APIs play in system design interviews and real-world development, where they act like the seamless communication between waiters, cooks, and customers in a restaurant. Learn how to craft APIs that are tailored to both internal and external developers by understanding their specific needs and objectives, ensuring a smooth and efficient user experience.
We also shine a light on the critical aspects of designing API routes. Understanding user needs and addressing core problems are the bedrock of effective API design. By focusing on functional and non-functional requirements, you'll be equipped to create API routes that meet real-world demands. Discover the importance of API versioning through our Spotify example, where future-proofing your design becomes crucial in maintaining user satisfaction and facilitating seamless updates.
Finally, we delve into the world of real-time data transfer, examining both synchronous and asynchronous communication methods. From the traditional request-response model to the use of WebSockets for instantaneous data exchanges, we break down the strengths and limitations of each approach. Equip yourself with the knowledge to choose the best method for your client-server interactions, ensuring your system design is robust, flexible, and ready for any challenge.
Support the show
Dedicated to the memory of Crystal Rose.
Email me at LearnSystemDesignPod@gmail.com
Join the free Discord
Consider supporting us on Patreon
Special thanks to Aimless Orbiter for the wonderful music.
Please consider giving us a rating on iTunes or wherever you listen to new episodes.
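To make the synchronous versus real-time trade-off above concrete, here is a small TypeScript sketch contrasting short polling over a versioned request-response API with a WebSocket subscription. The /v1/orders paths and message shapes are invented for the example (they are not from the episode), and a browser or modern Node runtime with global fetch and WebSocket is assumed.

```typescript
// Short polling: the client repeatedly asks "anything new?" over plain HTTP.
function pollOrders(baseUrl: string, intervalMs = 5000): void {
  setInterval(async () => {
    const res = await fetch(`${baseUrl}/v1/orders?status=updated`); // versioned route
    const orders = await res.json();
    console.log("polled updates:", orders);
  }, intervalMs);
}

// WebSocket: the server pushes updates the moment they happen, over one connection.
function subscribeToOrders(wsUrl: string): WebSocket {
  const socket = new WebSocket(`${wsUrl}/v1/orders/stream`);
  socket.addEventListener("message", (event) => {
    const update = JSON.parse(event.data);
    console.log("pushed update:", update);
  });
  socket.addEventListener("close", () => {
    console.log("connection closed; a real client would back off and reconnect");
  });
  return socket;
}
```

Polling is simple and works over any HTTP stack, but it spends requests even when nothing has changed; the WebSocket keeps a single connection open and trades that simplicity for lower-latency pushes.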

Azure DevOps Podcast
David Fowler: Architecture of .NET Aspire - Episode 322

Azure DevOps Podcast

Play Episode Listen Later Nov 4, 2024 50:35


David Fowler is a Distinguished Engineer and has been at Microsoft for over 16 years working on developer frameworks and tools in the .NET space. He's one of the creators of several popular OSS frameworks and tools such as .NET Aspire, NuGet, SignalR, and ASP.NET Core. He also architected the Azure SignalR Service, a service for doing real-time WebSocket-based communication at scale. He's an avid open-source advocate and developer currently focused on simplifying developer experiences in the microservice space. David is also passionate about mentoring, supporting individual contributors in their careers, and helping underrepresented folks get interested in technology.
Topics of Discussion: [3:14] David's career journey. [4:28] Development of .NET Aspire. [6:47] Evolution of front-end and back-end architectures. [13:27] Challenges and Improvements in .NET Aspire. [19:08] The set of resources they chose to start with. [21:48] The Infrastructure Library. [25:23] The potential for creating templates to streamline the bootstrapping process for teams and projects. [28:58] Does Azure API include networking in the firewall rules? [31:52] What are the things David is most excited about with DevStory? [32:3] The "wait for" feature. [38:03] The complexity of implementing health checks. [44:53] What is Dapper? [44:01] Future Vision for .NET Aspire.
Mentioned in this Episode: Clear Measure Way Architect Forum Software Engineer Forum Programming with Palermo — New Video Podcast! Email us at programming@palermo.net. Clear Measure, Inc. (Sponsor) .NET DevOps for Azure: A Developer's Guide to DevOps Architecture the Right Way, by Jeffrey Palermo — Available on Amazon! Jeffrey Palermo's Twitter — Follow to stay informed about future events! David Fowler GitHub David on LinkedIn David Fowler Microsoft David Fowler, Author at .NET Blog Azure Provisioning 3.0 GitHub .NET Aspire Overview .NET Conf 2024
Want to Learn More? Visit AzureDevOps.Show for show notes and additional episodes.

Database School
Bootstrapping an email service provider (with Jesse Hanley)

Database School

Play Episode Listen Later Oct 14, 2024 81:59


Want to learn more Postgres? Get on the waiting list for the full course: https://masteringpostgres.com. In this interview, I talk with Jesse Hanley, founder of Bento, about running a lean email service from Japan. We chat about the challenges of scaling infrastructure, managing databases, and maintaining a calm business while serving a global customer base. Links Mentioned: Bento: https://bentonow.com Database school on YouTube: https://www.youtube.com/playlist?list=PLI72dgeNJtzqElnNB6sQoAn2R-F3Vqm15 Database school audio only: https://databaseschool.transistor.fm Follow Jesse: Twitter: https://twitter.com/jessethanley Bento on Twitter: https://twitter.com/Bento Follow Aaron: Twitter: https://twitter.com/aarondfrancis LinkedIn: https://www.linkedin.com/in/aarondfrancis Website: https://aaronfrancis.com - find articles, podcasts, courses, and more. Chapters: 00:00 - Introduction to Jesse Hanley 01:02 - Running Bento from Japan 01:48 - The Lean Team Structure at Bento 03:00 - Managing Support via Discord 05:01 - Benefits of Using Discord for Customer Support 06:45 - The Role of Community in Customer Feedback 09:01 - How Bento Gained Traction 13:00 - Bootstrapping Bento and Profitable Growth 16:00 - Running Your Own Mail Servers 19:03 - The Economics and Redundancy of Email Delivery 21:00 - Bento's Heroku Setup and Scaling Challenges 26:00 - Handling and Querying Massive Data in Bento 29:52 - Leveraging Elasticsearch for Data Queries 35:40 - Moving Toward Multi-Database Solutions 37:45 - Exploring Crunchy Data and Citus for Database Scaling 42:00 - Optimizing Bento for Performance and Scalability 54:02 - Jesse's Advice on Building a Calm and Profitable Business 57:00 - How Bento Uses WebSockets and Background Jobs 1:00:00 - Optimizing Bento with Action Cable 1:02:25 - Avoiding N+1 Queries with WebSockets 1:04:50 - Scaling Redis and Postgres at Bento 1:09:00 - Jesse's Approach to Managing Growth and Multiple Services 1:11:00 - Final Thoughts on Scaling and Optimizing Databases 1:13:10 - Advice for Aspiring Builders: Stay Patient and True to Your Vision 1:16:00 - Bento's Unique Approach to Email Marketing and Transactional Emails 1:19:50 - Closing Thoughts and Where to Find Jesse Hanley Online

Modernize or Die ® Podcast - CFML News Edition
Modernize or Die® - CFML News Podcast for October 8th, 2024 - Episode 220

Modernize or Die ® Podcast - CFML News Edition

Play Episode Listen Later Oct 8, 2024 47:02


2024-10-08 Weekly News — Episode 220
Watch the video version on YouTube at https://youtube.com/live/YOxkhO6DPzg?feature=share

Hosts:
- Eric Peterson - Senior Developer at Ortus Solutions
- Gavin Pickin - Senior Developer at Ortus Solutions

Thanks to our Sponsor - Ortus Solutions
The makers of ColdBox, CommandBox, ForgeBox, TestBox and all your favorite box-es out there, including BoxLang.

A few ways to say thanks back to Ortus Solutions:
- Buy tickets to Into the Box 2025 in Washington, DC (April 30 - May 2, 2025): https://t.co/cFLDUJZEyM
- Like and subscribe to our videos on YouTube.
- Help Ortus reach for the stars - star and fork our repos. Star all of your GitHub Box dependencies from CommandBox with https://www.forgebox.io/view/commandbox-github
- Subscribe to our podcast on your podcast app and leave us a review.
- Sign up for a free or paid account on CFCasts, which is releasing new content regularly.
- BOXLife store: https://www.ortussolutions.com/about-us/shop
- Buy Ortus's books: 102 ColdBox HMVC Quick Tips and Tricks on Gumroad (http://gum.co/coldbox-tips), now on Amazon in hardcover too: https://www.amazon.com/dp/B0CJHB712M
- Learn Modern ColdFusion (CFML) in 100+ Minutes - free online at https://modern-cfml.ortusbooks.com/ or buy an ebook or paper copy at https://www.ortussolutions.com/learn/books/coldfusion-in-100-minutes

Patreon Support (Supernatural)
We have 59 patrons: https://www.patreon.com/ortussolutions

News and Announcements

New Library - SocketBox - Bring 1st Class Socket Support to BoxLang, Lucee and Adobe ColdFusion
The WebSocket server in CommandBox and BoxLang MiniServer is not really a separate "server" per se, since it runs on the same port. It is just an upgrade listener that upgrades any WebSocket requests. This WebSocket integration works for Lucee, Adobe, and BoxLang alike, as it passes incoming messages to the app via an "internal" HTTP request to /WebSocket.cfc?method=onProcess, where the CF/BL code can handle it. The incoming request carries all the cookies, headers, hostname, etc. that the original WebSocket connection was started with, so normal CGI variables and session scopes should work fine. You need to create a custom /WebSocket.cfc class that extends the modules.socketbox.models.WebSocketCore class in this library, which provides the base functionality. (A general sketch of this same-port upgrade pattern, in TypeScript, follows these show notes.)
https://www.forgebox.io/view/socketbox
https://community.ortussolutions.com/t/introducing-socketbox-a-new-websocket-library/10381/1

New Releases and Updates

TestBox v6.0.0 Release
TestBox 6.x is a major update to the testing framework, introducing key improvements like full support for BoxLang. With this release, users can write tests, specs, and harnesses directly in BoxLang. A new BoxLang CLI runner simplifies running specs and bundles without needing a web server. Additionally, the CLI now detects BoxLang projects, adds headless web server testing with the bx-web-support module, and includes dynamic assertion methods for cleaner test writing. Significant new features include environment helpers, display name annotations, and improved debugging and matcher functions.
https://www.ortussolutions.com/blog/testbox-v600-release

qb v10 is BoxLang Certified
Hyper v7.5.3 is BoxLang Certified

Upgrade to FusionReactor 12.1.1 Now: Defect in FusionReactor 12.1.0 That Can Potentially Disable Metrics Shipper
On September 26th, 2024, we learned of a bug affecting some customers who upgraded to FusionReactor Agent version 12.1.0. We want to provide a detailed account of the bug, its impact, and the steps we've taken to resolve it. Transparency and communication are central to our approach, and we want to ensure that all customers are informed and empowered to take the necessary action.
https://fusion-reactor.com/blog/upgrade-to-fusionreactor-12-1-1-now-defect-in-fusionreactor-12-1-0-that-can-potentially-disable-metrics-shipper/

BoxLang Betas — Beta 16-17
9/27/24 - BoxLang 1.0.0 Beta 16 launched: 2 new features, 9 improvements, 11 bugs fixed.
https://boxlang.ortusbooks.com/readme/release-history/1.0.0-beta16
https://www.ortussolutions.com/blog/boxlang-100-beta-16-launched
10/4/24 - BoxLang 1.0.0 Beta 17 launched: 1 new feature, 3 improvements, 2 bugs fixed.
https://boxlang.ortusbooks.com/readme/release-history/1.0.0-beta17
https://www.ortussolutions.com/blog/boxlang-100-beta-17-launched

Webinars, Meetups and Workshops

October 2024 Seattle ColdFusion User Group Meeting
Wednesday, October 16, 2024, 5:00 PM to 6:00 PM PDT
Adobe ColdFusion Summit 2024 Recap: Will Frankhouser and Leon O'Daniel will share a complete recap of their experiences at Adobe ColdFusion Summit 2024, including what's coming soon with Adobe ColdFusion 2025, takeaways from the sessions, and more.
Establishing a Connection and Transferring Files to NetSuite Using Adobe ColdFusion's FTP Features: This session recaps a real-life case study in which Leon O'Daniel helped a large clothing manufacturer quickly and easily establish a connection and transfer files to their NetSuite FTP account.
ColdFusion Tag/Function of the Month: Will Frankhouser will share his ColdFusion tag/function of the month and how to use it in your next ColdFusion application project.
https://www.meetup.com/seattle-coldfusion-user-group/events/303894410/?eventOrigin=home_page_upcoming_events$all

Sacramento Interactive: Debugging With JavaScript with Kevin Goble
Wednesday, October 16, 2024, 6:00 PM to 8:00 PM PDT
This workshop is open to new and curious programmers. We'll first go over some philosophy about dealing with code that does the unexpected, then follow up with live practical examples using debugging tools. If your only weapon against bugs right now is console logs, and debugging feels like playing Where's Waldo, this is a workshop for you!
https://www.meetup.com/sacinteractive/events/303708439/?eventOrigin=home_page_upcoming_events$all
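SocketBox itself is CFML, but the same-port upgrade-listener idea described in the SocketBox announcement above is a general pattern. Here is a rough Node.js/TypeScript sketch of that pattern using the ws package; it only illustrates the approach and is not how SocketBox or CommandBox actually implements it.

```typescript
import { createServer } from "node:http";
import { WebSocketServer } from "ws"; // npm install ws (and @types/ws for TypeScript)

const server = createServer((req, res) => {
  // Ordinary HTTP traffic is handled here, on the same port as the sockets.
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("hello over plain HTTP\n");
});

// noServer: true means the WebSocket server does not open its own port;
// we decide per request whether to upgrade the connection.
const wss = new WebSocketServer({ noServer: true });

server.on("upgrade", (req, socket, head) => {
  wss.handleUpgrade(req, socket, head, (ws) => {
    wss.emit("connection", ws, req);
  });
});

wss.on("connection", (ws, req) => {
  // The cookies and headers from the original handshake are available on req,
  // which is what lets a framework route the message back into application code.
  ws.on("message", (data) => ws.send(`echo: ${data}`));
});

server.listen(8080, () => console.log("HTTP + WebSocket on port 8080"));
```

Because the handshake arrives as a normal HTTP request before the upgrade, the listener can see its cookies, headers, and hostname, which is what the announcement above relies on to keep normal CGI variables and session scopes working.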

Ethereum Cat Herders Podcast
EPD# 13 - Web3.js with Santiago Trujillo Zuluaga

Ethereum Cat Herders Podcast

Play Episode Listen Later Oct 4, 2024 57:44


Web3.js is a robust and flexible collection of TypeScript and JavaScript libraries that allows developers to interact with local or remote Ethereum nodes (or any EVM-compatible blockchain) over HTTP, IPC or WebSocket connections. It is a powerful and efficient toolkit for crafting applications within the Ethereum ecosystem and beyond. Useful Links: ---------------------- slides - https://docs.google.com/presentation/... contracts plugin - https://www.npmjs.com/package/web3-pl... plugins - https://web3js.org/plugins public RPC endpoints - https://chainlist.org/ Topics Covered: ------------------------- 00:00 - Intro 01:42 - Santiago Trujillo Introduction 04:19 - Presentation (Demo) by Santiago Trujillo 04:30 - Presentation Content 05:10 - 1. What is Web3.js? 05:10 - Fun Fact about Web3.js 05:31 - Web3.js Usages 06:01 - JavaScript/TypeScript Library simple example 06:53 - dApp Flow 07:29 - Modular Design, show how Web3.js worked in Web3. 08:03 - 2. Getting started with Web3.js. 08:07 - Zero to Hero with web3.js (Demo) 30:25 - 3. Web3 Plugin Contracts 37:19 - 4. Community & Outreach 37:22 - Education and Community Outreach 38:21 - Join the web3.js ambassadors 38:46 - Education and Community Outreach 39:13 - Web3.js Africa Tour 40:02 - Presentation Last Slide 40:16 - Presentation End 40:43 - Key consideration while choosing Web3.js and other similar Libraries. 42:58 - How Santiago Trujillo see to adapt Web3.js in the future upgrades and Layer 2 Solutions? 44:08 - Santiago Trujillo suggestions to dApp Developers. 46:07 - Is Plugins really support new Web3 Developers? 49:47 - Santiago Trujillo on trusting different Plugins. 50:50 - is this a Snapshot of specific version or start by just pulling an update? 51:29 - Challenges faced by Santiago Trujillo. 54:23 - Last word by Santiago Trujillo. 56:37 - Closing Words Follow at Twitter: -------------------------- Santiago Trujillo Zuluaga:   / santiagodevrel   Pooja Ranjan:   / poojaranjan19   Other resources: --------------------------    • Meet The Herders     • Ecosystem Project Demo     • NFT     • Non-EIP PEEPanEIP     • Community     • PEEPanEIP   Official Website: -------------------------- Ethereum Cat Herders: https://www.ethereumcatherders.com/ Web3.js: https://web3js.org/#/ -------------------------- #ethereum #web3 #web3js #ethcatherders
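For a taste of what the library in this episode does, here is a small TypeScript sketch using web3.js over a WebSocket transport. The RPC URL is a placeholder (pick a real endpoint from chainlist.org, linked above), and subscription names and return types vary a little between web3.js major versions, so treat this as a sketch rather than version-exact code.

```typescript
import { Web3 } from "web3"; // npm install web3

// Placeholder endpoint; substitute a real WebSocket RPC URL from chainlist.org.
const web3 = new Web3("wss://ethereum-rpc.example.org");

async function main(): Promise<void> {
  // Simple read-only calls work the same over HTTP or WebSocket transports.
  const blockNumber = await web3.eth.getBlockNumber();
  console.log("latest block:", blockNumber);

  const balance = await web3.eth.getBalance(
    "0x0000000000000000000000000000000000000000"
  );
  console.log("balance (wei):", balance);

  // A WebSocket transport additionally allows push-style subscriptions,
  // e.g. being notified of each new block header as it arrives.
  const subscription = await web3.eth.subscribe("newBlockHeaders");
  subscription.on("data", (header) => console.log("new block:", header.number));
}

main().catch(console.error);
```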

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

We all have fond memories of the first Dev Day in 2023:and the blip that followed soon after. As Ben Thompson has noted, this year's DevDay took a quieter, more intimate tone. No Satya, no livestream, (slightly fewer people?). Instead of putting ChatGPT announcements in DevDay as in 2023, o1 was announced 2 weeks prior, and DevDay 2024 was reserved purely for developer-facing API announcements, primarily the Realtime API, Vision Finetuning, Prompt Caching, and Model Distillation.However the larger venue and more spread out schedule did allow a lot more hallway conversations with attendees as well as more community presentations including our recent guest Alistair Pullen of Cosine as well as deeper dives from OpenAI including our recent guest Michelle Pokrass of the API Team. Thanks to OpenAI's warm collaboration (we particularly want to thank Lindsay McCallum Rémy!), we managed to record exclusive interviews with many of the main presenters of both the keynotes and breakout sessions. We present them in full in today's episode, together with a full lightly edited Q&A with Sam Altman.Show notes and related resourcesSome of these used in the final audio episode below* Simon Willison Live Blog* swyx live tweets and videos* Greg Kamradt coverage of Structured Output session, Scaling LLM Apps session* Fireside Chat Q&A with Sam AltmanTimestamps* [00:00:00] Intro by Suno.ai* [00:01:23] NotebookLM Recap of DevDay* [00:09:25] Ilan's Strawberry Demo with Realtime Voice Function Calling* [00:19:16] Olivier Godement, Head of Product, OpenAI* [00:36:57] Romain Huet, Head of DX, OpenAI* [00:47:08] Michelle Pokrass, API Tech Lead at OpenAI ft. Simon Willison* [01:04:45] Alistair Pullen, CEO, Cosine (Genie)* [01:18:31] Sam Altman + Kevin Weill Q&A* [02:03:07] Notebook LM Recap of PodcastTranscript[00:00:00] Suno AI: Under dev daylights, code ignites. Real time voice streams reach new heights. O1 and GPT, 4. 0 in flight. Fine tune the future, data in sight. Schema sync up, outputs precise. Distill the models, efficiency splice.[00:00:33] AI Charlie: Happy October. This is your AI co host, Charlie. One of our longest standing traditions is covering major AI and ML conferences in podcast format. Delving, yes delving, into the vibes of what it is like to be there stitched in with short samples of conversations with key players, just to help you feel like you were there.[00:00:54] AI Charlie: Covering this year's Dev Day was significantly more challenging because we were all requested not to record the opening keynotes. So, in place of the opening keynotes, we had the viral notebook LM Deep Dive crew, my new AI podcast nemesis, Give you a seven minute recap of everything that was announced.[00:01:15] AI Charlie: Of course, you can also check the show notes for details. I'll then come back with an explainer of all the interviews we have for you today. Watch out and take care.[00:01:23] NotebookLM Recap of DevDay[00:01:23] NotebookLM: All right, so we've got a pretty hefty stack of articles and blog posts here all about open ais. Dev day 2024.[00:01:32] NotebookLM 2: Yeah, lots to dig into there.[00:01:34] NotebookLM 2: Seems[00:01:34] NotebookLM: like you're really interested in what's new with AI.[00:01:36] NotebookLM 2: Definitely. And it seems like OpenAI had a lot to announce. New tools, changes to the company. It's a lot.[00:01:43] NotebookLM: It is. 
And especially since you're interested in how AI can be used in the real world, you know, practical applications, we'll focus on that.[00:01:51] NotebookLM: Perfect. Like, for example, this Real time API, they announced that, right? That seems like a big deal if we want AI to sound, well, less like a robot.[00:01:59] NotebookLM 2: It could be huge. The real time API could completely change how we, like, interact with AI. Like, imagine if your voice assistant could actually handle it if you interrupted it.[00:02:08] NotebookLM: Or, like, have an actual conversation.[00:02:10] NotebookLM 2: Right, not just these clunky back and forth things we're used to.[00:02:14] NotebookLM: And they actually showed it off, didn't they? I read something about a travel app, one for languages. Even one where the AI ordered takeout.[00:02:21] NotebookLM 2: Those demos were really interesting, and I think they show how this real time API can be used in so many ways.[00:02:28] NotebookLM 2: And the tech behind it is fascinating, by the way. It uses persistent WebSocket connections and this thing called function calling, so it can respond in real time.[00:02:38] NotebookLM: So the function calling thing, that sounds kind of complicated. Can you, like, explain how that works?[00:02:42] NotebookLM 2: So imagine giving the AI Access to this whole toolbox, right?[00:02:46] NotebookLM 2: Information, capabilities, all sorts of things. Okay. So take the travel agent demo, for example. With function calling, the AI can pull up details, let's say about Fort Mason, right, from some database. Like nearby restaurants, stuff like that.[00:02:59] NotebookLM: Ah, I get it. So instead of being limited to what it already knows, It can go and find the information it needs, like a human travel agent would.[00:03:07] NotebookLM 2: Precisely. And someone on Hacker News pointed out a cool detail. The API actually gives you a text version of what's being said. So you can store that, analyze it.[00:03:17] NotebookLM: That's smart. It seems like OpenAI put a lot of thought into making this API easy for developers to use. But, while we're on OpenAI, you know, Besides their tech, there's been some news about, like, internal changes, too.[00:03:30] NotebookLM: Didn't they say they're moving away from being a non profit?[00:03:32] NotebookLM 2: They did. And it's got everyone talking. It's a major shift. And it's only natural for people to wonder how that'll change things for OpenAI in the future. I mean, there are definitely some valid questions about this move to for profit. Like, will they have more money for research now?[00:03:46] NotebookLM 2: Probably. But will they, you know, care as much about making sure AI benefits everyone?[00:03:51] NotebookLM: Yeah, that's the big question, especially with all the, like, the leadership changes happening at OpenAI too, right? I read that their Chief Research Officer left, and their VP of Research, and even their CTO.[00:04:03] NotebookLM 2: It's true. A lot of people are connecting those departures with the changes in OpenAI's structure.[00:04:08] NotebookLM: And I guess it makes you wonder what's going on behind the scenes. But they are still putting out new stuff. Like this whole fine tuning thing really caught my eye.[00:04:17] NotebookLM 2: Right, fine tuning. It's essentially taking a pre trained AI model. And, like, customizing it.[00:04:23] NotebookLM: So instead of a general AI, you get one that's tailored for a specific job.[00:04:27] NotebookLM 2: Exactly. 
And that opens up so many possibilities, especially for businesses. Imagine you could train an AI on your company's data, you know, like how you communicate your brand guidelines.[00:04:37] NotebookLM: So it's like having an AI that's specifically trained for your company?[00:04:41] NotebookLM 2: That's the idea.[00:04:41] NotebookLM: And they're doing it with images now, too, right?[00:04:44] NotebookLM: Fine tuning with vision is what they called it.[00:04:46] NotebookLM 2: It's pretty incredible what they're doing with that, especially in fields like medicine.[00:04:50] NotebookLM: Like using AI to help doctors make diagnoses.[00:04:52] NotebookLM 2: Exactly. And AI could be trained on thousands of medical images, right? And then it could potentially spot things that even a trained doctor might miss.[00:05:03] NotebookLM: That's kind of scary, to be honest. What if it gets it wrong?[00:05:06] NotebookLM 2: Well, the idea isn't to replace doctors, but to give them another tool, you know, help them make better decisions.[00:05:12] NotebookLM: Okay, that makes sense. But training these AI models must be really expensive.[00:05:17] NotebookLM 2: It can be. All those tokens add up. But OpenAI announced something called automatic prompt caching.[00:05:23] Alex Volkov: Automatic what now? I don't think I came across that.[00:05:26] NotebookLM 2: So basically, if your AI sees a prompt that it's already seen before, OpenAI will give you a discount.[00:05:31] NotebookLM: Huh. Like a frequent buyer program for AI.[00:05:35] NotebookLM 2: Kind of, yeah. It's good that they're trying to make it more affordable. And they're also doing something called model distillation.[00:05:41] NotebookLM: Okay, now you're just using big words to sound smart. What's that?[00:05:45] NotebookLM 2: Think of it like like a recipe, right? You can take a really complex recipe and break it down to the essential parts.[00:05:50] NotebookLM: Make it simpler, but it still tastes the same.[00:05:53] NotebookLM 2: Yeah. And that's what model distillation is. You take a big, powerful AI model and create a smaller, more efficient version.[00:06:00] NotebookLM: So it's like lighter weight, but still just as capable.[00:06:03] NotebookLM 2: Exactly. And that means more people can actually use these powerful tools. They don't need, like, a supercomputer to run them.[00:06:10] NotebookLM: So they're making AI more accessible. That's great.[00:06:13] NotebookLM 2: It is. And speaking of powerful tools, they also talked about their new O1 model.[00:06:18] NotebookLM 2: That's the one they've been hyping up. The one that's supposed to be this big leap forward.[00:06:22] NotebookLM: Yeah, O1. It sounds pretty futuristic. Like, from what I read, it's not just a bigger, better language model.[00:06:28] NotebookLM 2: Right. It's a different porch.[00:06:29] NotebookLM: They're saying it can, like, actually reason, right? Think.[00:06:33] NotebookLM 2: It's trained differently.[00:06:34] NotebookLM 2: They used reinforcement learning with O1.[00:06:36] NotebookLM: So it's not just finding patterns in the data it's seen before.[00:06:40] NotebookLM 2: Not just that. It can actually learn from its mistakes. Get better at solving problems.[00:06:46] NotebookLM: So give me an example. What can O1 do that, say, GPT 4 can't?[00:06:51] NotebookLM 2: Well, OpenAI showed it doing some pretty impressive stuff with math, like advanced math.[00:06:56] NotebookLM 2: And coding, too. Complex coding. 
Things that even GPT 4 struggled with.[00:07:00] NotebookLM: So you're saying if I needed to, like, write a screenplay, I'd stick with GPT 4? But if I wanted to solve some crazy physics problem, O1 is what I'd use.[00:07:08] NotebookLM 2: Something like that, yeah. Although there is a trade off. O1 takes a lot more power to run, and it takes longer to get those impressive results.[00:07:17] NotebookLM: Hmm, makes sense. More power, more time, higher quality.[00:07:21] NotebookLM 2: Exactly.[00:07:22] NotebookLM: It sounds like it's still in development, though, right? Is there anything else they're planning to add to it?[00:07:26] NotebookLM 2: Oh, yeah. They mentioned system prompts, which will let developers, like, set some ground rules for how it behaves. And they're working on adding structured outputs and function calling.[00:07:38] Alex Volkov: Wait, structured outputs? Didn't we just talk about that? We[00:07:41] NotebookLM 2: did. That's the thing where the AI's output is formatted in a way that's easy to use.[00:07:47] NotebookLM: Right, right. So you don't have to spend all day trying to make sense of what it gives you. It's good that they're thinking about that stuff.[00:07:53] NotebookLM 2: It's about making these tools usable.[00:07:56] NotebookLM 2: And speaking of that, Dev Day finished up with this really interesting talk. Sam Altman, the CEO of OpenAI, And Kevin Weil, their new chief product officer. They talked about, like, the big picture for AI.[00:08:09] NotebookLM: Yeah, they did, didn't they? Anything interesting come up?[00:08:12] NotebookLM 2: Well, Altman talked about moving past this whole AGI term, Artificial General Intelligence.[00:08:18] NotebookLM: I can see why. It's kind of a loaded term, isn't it?[00:08:20] NotebookLM 2: He thinks it's become a bit of a buzzword, and people don't really understand what it means.[00:08:24] NotebookLM: So are they saying they're not trying to build AGI anymore?[00:08:28] NotebookLM 2: It's more like they're saying they're focused on just Making AI better, constantly improving it, not worrying about putting it in a box.[00:08:36] NotebookLM: That makes sense. Keep pushing the limits.[00:08:38] NotebookLM 2: Exactly. But they were also very clear about doing it responsibly. They talked a lot about safety and ethics.[00:08:43] NotebookLM: Yeah, that's important.[00:08:44] NotebookLM 2: They said they were going to be very careful. About how they release new features.[00:08:48] NotebookLM: Good! Because this stuff is powerful.[00:08:51] NotebookLM 2: It is. It was a lot to take in, this whole Dev Day event.[00:08:54] NotebookLM 2: New tools, big changes at OpenAI, and these big questions about the future of AI.[00:08:59] NotebookLM: It was. But hopefully this deep dive helped make sense of some of it. At least, that's what we try to do here.[00:09:05] AI Charlie: Absolutely.[00:09:06] NotebookLM: Thanks for taking the deep dive with us.[00:09:08] AI Charlie: The biggest demo of the new Realtime API involved function calling with voice mode and buying chocolate covered strawberries from our friendly local OpenAI developer experience engineer and strawberry shop owner, Ilan Biggio.[00:09:21] AI Charlie: We'll first play you the audio of his demo and then go into a little interview with him.[00:09:25] Ilan's Strawberry Demo with Realtime Voice Function Calling[00:09:25] Romain Huet: Could you place a call and see if you could get us 400 strawberries delivered to the venue? But please keep that under 1500. I'm on it. 
We'll get those strawberries delivered for you.[00:09:47] Ilan: Hello? Hi there. Is this Ilan? I'm Romain's AI assistant. How is it going? Fantastic. Can you tell me what flavors of strawberry dips you have for me? Yeah, we have chocolate, vanilla, and we have peanut butter. Wait, how much would 400 chocolate covered strawberries cost? 400? Are you sure you want 400? Yes, 400 chocolate covered[00:10:14] swyx: strawberries.[00:10:15] Ilan: Wait,[00:10:16] swyx: how much[00:10:16] Ilan: would that be? I think that'll be around, like, 1, 415. 92.[00:10:25] Alex Volkov: Awesome. Let's go ahead and place the order for four chocolate covered strawberries.[00:10:31] Ilan: Great, where would you like that delivered? Please deliver them to the Gateway Pavilion at Fort Mason. And I'll be paying in cash.[00:10:42] Alex Volkov: Okay,[00:10:43] Ilan: sweet. So just to confirm, you want four strawberries?[00:10:45] Ilan: 400 chocolate covered strawberries to the Gateway Pavilion. Yes, that's perfect. And when can we expect delivery? Well, you guys are right nearby, so it'll be like, I don't know, 37 seconds? That's incredibly fast. Cool, you too.[00:11:09] swyx: Hi, Ilan, welcome to Lanespace. Oh, thank you. I just saw your amazing demos, had your amazing strawberries. You are dressed up, like, exactly like a strawberry salesman. Gotta have it all. What was the building on demo like? What was the story behind the demo?[00:11:22] swyx: It was really interesting. This is actually something I had been thinking about for months before the launch.[00:11:27] swyx: Like, having a, like, AI that can make phone calls is something like I've personally wanted for a long time. And so as soon as we launched internally, like, I started hacking on it. And then that sort of just started. We made it into like an internal demo, and then people found it really interesting, and then we thought how cool would it be to have this like on stage as, as one of the demos.[00:11:47] swyx: Yeah, would would you call out any technical issues building, like you were basically one of the first people ever to build with a voice mode API. Would you call out any issues like integrating it with Twilio like that, like you did with function calling, with like a form filling elements. I noticed that you had like intents of things to fulfill, and then.[00:12:07] swyx: When there's still missing info, the voice would prompt you, roleplaying the store guy.[00:12:13] swyx: Yeah, yeah, so, I think technically, there's like the whole, just working with audio and streams is a whole different beast. Like, even separate from like AI and this, this like, new capabilities, it's just, it's just tough.[00:12:26] swyx: Yeah, when you have a prompt, conversationally it'll just follow, like the, it was, Instead of like, kind of step by step to like ask the right questions based on like the like what the request was, right? The function calling itself is sort of tangential to that. Like, you have to prompt it to call the functions, but then handling it isn't too much different from, like, what you would do with assistant streaming or, like, chat completion streaming.[00:12:47] swyx: I think, like, the API feels very similar just to, like, if everything in the API was streaming, it actually feels quite familiar to that.[00:12:53] swyx: And then, function calling wise, I mean, does it work the same? I don't know. Like, I saw a lot of logs. You guys showed, like, in the playground, a lot of logs. 
What is in there?[00:13:03] swyx: What should people know?[00:13:04] swyx: Yeah, I mean, it is, like, the events may have different names than the streaming events that we have in chat completions, but they represent very similar things. It's things like, you know, function call started, argument started, it's like, here's like argument deltas, and then like function call done.[00:13:20] swyx: Conveniently we send one that has the full function, and then I just use that. Nice.[00:13:25] swyx: Yeah and then, like, what restrictions do, should people be aware of? Like, you know, I think, I think, before we recorded, we discussed a little bit about the sensitivities around basically calling random store owners and putting, putting like an AI on them.[00:13:40] swyx: Yeah, so there's, I think there's recent regulation on that, which is why we want to be like very, I guess, aware of, of You know, you can't just call anybody with AI, right? That's like just robocalling. You wouldn't want someone just calling you with AI.[00:13:54] swyx: I'm a developer, I'm about to do this on random people.[00:13:57] swyx: What laws am I about to break?[00:14:00] swyx: I forget what the governing body is, but you should, I think, Having consent of the person you're about to call, it always works. I, as the strawberry owner, have consented to like getting called with AI. I think past that you, you want to be careful. Definitely individuals are more sensitive than businesses.[00:14:19] swyx: I think businesses you have a little bit more leeway. Also, they're like, businesses I think have an incentive to want to receive AI phone calls. Especially if like, they're dealing with it. It's doing business. Right, like, it's more business. It's kind of like getting on a booking platform, right, you're exposed to more.[00:14:33] swyx: But, I think it's still very much like a gray area. Again, so. I think everybody should, you know, tread carefully, like, figure out what it is. I, I, I, the law is so recent, I didn't have enough time to, like, I'm also not a lawyer. Yeah, yeah, yeah, of course. Yeah.[00:14:49] swyx: Okay, cool fair enough. One other thing, this is kind of agentic.[00:14:52] swyx: Did you use a state machine at all? Did you use any framework? No. You just stick it in context and then just run it in a loop until it ends call?[00:15:01] swyx: Yeah, there isn't even a loop, like Okay. Because the API is just based on sessions. It's always just going to keep going. Every time you speak, it'll trigger a call.[00:15:11] swyx: And then after every function call was also invoked invoking like a generation. And so that is another difference here. It's like it's inherently almost like in a loop, be just by being in a session, right? No state machines needed. I'd say this is very similar to like, the notion of routines, where it's just like a list of steps.[00:15:29] swyx: And it, like, sticks to them softly, but usually pretty well. And the steps is the prompts? The steps, it's like the prompt, like the steps are in the prompt. Yeah, yeah, yeah. Right, it's like step one, do this, step one, step two, do that. What if I want to change the system prompt halfway through the conversation?[00:15:44] swyx: You can. Okay. You can. To be honest, I have not played without two too much. Yeah,[00:15:47] swyx: yeah.[00:15:48] swyx: But, I know you can.[00:15:49] swyx: Yeah, yeah. Yeah. Awesome. I noticed that you called it real time API, but not voice API. Mm hmm. So I assume that it's like real time API starting with voice. 
Right, I think that's what he said on the thing.[00:16:00] swyx: I can't imagine, like, what else is real[00:16:02] swyx: time? Well, I guess, to use ChatGPT's voice mode as an example, Like, we've demoed the video, right? Like, real time image, right? So, I'm not actually sure what timelines are, But I would expect, if I had to guess, That, like, that is probably the next thing that we're gonna be making.[00:16:17] swyx: You'd probably have to talk directly with the team building this. Sure. But, You can't promise their timelines. Yeah, yeah, yeah, right, exactly. But, like, given that this is the features that currently, Or that exists that we've demoed on Chachapiti. Yeah. There[00:16:29] swyx: will never be a[00:16:29] swyx: case where there's like a real time text API, right?[00:16:31] swyx: I don't Well, this is a real time text API. You can do text only on this. Oh. Yeah. I don't know why you would. But it's actually So text to text here doesn't quite make a lot of sense. I don't think you'll get a lot of latency gain. But, like, speech to text is really interesting. Because you can prevent You can prevent responses, like audio responses.[00:16:54] swyx: And force function calls. And so you can do stuff like UI control. That is like super super reliable. We had a lot of like, you know, un, like, we weren't sure how well this was gonna work because it's like, you have a voice answering. It's like a whole persona, right? Like, that's a little bit more, you know, risky.[00:17:10] swyx: But if you, like, cut out the audio outputs and make it so it always has to output a function, like you can end up with pretty pretty good, like, Pretty reliable, like, command like a command architecture. Yeah,[00:17:21] swyx: actually, that's the way I want to interact with a lot of these things as well. Like, one sided voice.[00:17:26] swyx: Yeah, you don't necessarily want to hear the[00:17:27] swyx: voice back. And like, sometimes it's like, yeah, I think having an output voice is great. But I feel like I don't always want to hear an output voice. I'd say usually I don't. But yeah, exactly, being able to speak to it is super sweet.[00:17:39] swyx: Cool. Do you want to comment on any of the other stuff that you announced?[00:17:41] swyx: From caching I noticed was like, I like the no code change part. I'm looking forward to the docs because I'm sure there's a lot of details on like, what you cache, how long you cache. Cause like, enthalpy caches were like 5 minutes. I was like, okay, but what if I don't make a call every 5 minutes?[00:17:56] swyx: Yeah,[00:17:56] swyx: to be super honest with you, I've been so caught up with the real time API and making the demo that I haven't read up on the other stuff. Launches too much. I mean, I'm aware of them, but I think I'm excited to see how all distillation works. That's something that we've been doing like, I don't know, I've been like doing it between our models for a while And I've seen really good results like I've done back in a day like from GPT 4 to GPT 3.[00:18:19] swyx: 5 And got like, like pretty much the same level of like function calling with like hundreds of functions So that was super super compelling So, I feel like easier distillation, I'm really excited for. I see. Is it a tool?[00:18:31] swyx: So, I saw evals. Yeah. Like, what is the distillation product? It wasn't super clear, to be honest.[00:18:36] swyx: I, I think I want to, I want to let that team, I want to let that team talk about it. Okay,[00:18:40] swyx: alright. 
Well, I appreciate you jumping on. Yeah, of course. Amazing demo. It was beautifully designed. I'm sure that was part of you and Roman, and[00:18:47] swyx: Yeah, I guess, shout out to like, the first people to like, creators of Wanderlust, originally, were like, Simon and Carolis, and then like, I took it and built the voice component and the voice calling components.[00:18:59] swyx: Yeah, so it's been a big team effort. And like the entire PI team for like Debugging everything as it's been going on. It's been, it's been so good working with them. Yeah, you're the first consumers on the DX[00:19:07] swyx: team. Yeah. Yeah, I mean, the classic role of what we do there. Yeah. Okay, yeah, anything else? Any other call to action?[00:19:13] swyx: No, enjoy Dev Day. Thank you. Yeah. That's it.[00:19:16] Olivier Godement, Head of Product, OpenAI[00:19:16] AI Charlie: The latent space crew then talked to Olivier Godmont, head of product for the OpenAI platform, who led the entire Dev Day keynote and introduced all the major new features and updates that we talked about today.[00:19:28] swyx: Okay, so we are here with Olivier Godmont. That's right.[00:19:32] swyx: I don't pronounce French. That's fine. It was perfect. And it was amazing to see your keynote today. What was the back story of, of preparing something like this? Preparing, like, Dev Day? It[00:19:43] Olivier Godement: essentially came from a couple of places. Number one, excellent reception from last year's Dev Day.[00:19:48] Olivier Godement: Developers, startup founders, researchers want to spend more time with OpenAI, and we want to spend more time with them as well. And so for us, like, it was a no brainer, frankly, to do it again, like, you know, like a nice conference. The second thing is going global. We've done a few events like in Paris and like a few other like, you know, non European, non American countries.[00:20:05] Olivier Godement: And so this year we're doing SF, Singapore, and London. To frankly just meet more developers.[00:20:10] swyx: Yeah, I'm very excited for the Singapore one.[00:20:12] Olivier Godement: Ah,[00:20:12] swyx: yeah. Will you be[00:20:13] Olivier Godement: there?[00:20:14] swyx: I don't know. I don't know if I got an invite. No. I can't just talk to you. Yeah, like, and then there was some speculation around October 1st.[00:20:22] Olivier Godement: Yeah. Is it because[00:20:23] swyx: 01, October 1st? It[00:20:25] Olivier Godement: has nothing to do. I discovered the tweet yesterday where like, people are so creative. No one, there was no connection to October 1st. But in hindsight, that would have been a pretty good meme by Tiana. Okay.[00:20:37] swyx: Yeah, and you know, I think like, OpenAI's outreach to developers is something that I felt the whole in 2022, when like, you know, like, people were trying to build a chat GPT, and like, there was no function calling, all that stuff that you talked about in the past.[00:20:51] swyx: And that's why I started my own conference as like like, here's our little developer conference thing. And, but to see this OpenAI Dev Day now, and like to see so many developer oriented products coming to OpenAI, I think it's really encouraging.[00:21:02] Olivier Godement: Yeah, totally. 
It's that's what I said, essentially, like, developers are basically the people who make the best connection between the technology and, you know, the future, essentially.[00:21:14] Olivier Godement: Like, you know, essentially see a capability, see a low level, like, technology, and are like, hey, I see how that application or that use case that can be enabled. And so, in the direction of enabling, like, AGI, like, all of humanity, it's a no brainer for us, like, frankly, to partner with Devs.[00:21:31] Alessio: And most importantly, you almost never had waitlists, which, compared to like other releases, people usually, usually have.[00:21:38] Alessio: What is the, you know, you had from caching, you had real time voice API, we, you know, Shawn did a long Twitter thread, so people know the releases. Yeah. What is the thing that was like sneakily the hardest to actually get ready for, for that day, or like, what was the kind of like, you know, last 24 hours, anything that you didn't know was gonna work?[00:21:56] Olivier Godement: Yeah. The old Fairly, like, I would say, involved, like, features to ship. So the team has been working for a month, all of them. The one which I would say is the newest for OpenAI is the real time API. For a couple of reasons. I mean, one, you know, it's a new modality. Second, like, it's the first time that we have an actual, like, WebSocket based API.[00:22:16] Olivier Godement: And so, I would say that's the one that required, like, the most work over the month. To get right from a developer perspective and to also make sure that our existing safety mitigation that worked well with like real time audio in and audio out.[00:22:30] swyx: Yeah, what design choices or what was like the sort of design choices that you want to highlight?[00:22:35] swyx: Like, you know, like I think for me, like, WebSockets, you just receive a bunch of events. It's two way. I obviously don't have a ton of experience. I think a lot of developers are going to have to embrace this real time programming. Like, what are you designing for, or like, what advice would you have for developers exploring this?[00:22:51] Olivier Godement: The core design hypothesis was essentially, how do we enable, like, human level latency? We did a bunch of tests, like, on average, like, human beings, like, you know, takes, like, something like 300 milliseconds to converse with each other. And so that was the design principle, essentially. Like, working backward from that, and, you know, making the technology work.[00:23:11] Olivier Godement: And so we evaluated a few options, and WebSockets was the one that we landed on. So that was, like, one design choice. A few other, like, big design choices that we had to make prompt caching. Prompt caching, the design, like, target was automated from the get go. Like, zero code change from the developer.[00:23:27] Olivier Godement: That way you don't have to learn, like, what is a prompt prefix, and, you know, how long does a cache work, like, we just do it as much as we can, essentially. So that was a big design choice as well. And then finally, on distillation, like, and evaluation. The big design choice was something I learned at Skype, like in my previous job, like a philosophy around, like, a pit of success.[00:23:47] Olivier Godement: Like, what is essentially the, the, the minimum number of steps for the majority of developers to do the right thing? 
Because when you do evals on fat tuning, there are many, many ways, like, to mess it up, frankly, like, you know, and have, like, a crappy model, like, evals that tell, like, a wrong story. And so our whole design was, okay, we actually care about, like, helping people who don't have, like, that much experience, like, evaluating a model, like, get, like, in a few minutes, like, to a good spot.[00:24:11] Olivier Godement: And so how do we essentially enable that bit of success, like, in the product flow?[00:24:15] swyx: Yeah, yeah, I'm a little bit scared to fine tune especially for vision, because I don't know what I don't know for stuff like vision, right? Like, for text, I can evaluate pretty easily. For vision let's say I'm like trying to, one of your examples was grab.[00:24:33] swyx: Which, very close to home, I'm from Singapore. I think your example was like, they identified stop signs better. Why is that hard? Why do I have to fine tune that? If I fine tune that, do I lose other things? You know, like, there's a lot of unknowns with Vision that I think developers have to figure out.[00:24:50] swyx: For[00:24:50] Olivier Godement: sure. Vision is going to open up, like, a new, I would say, evaluation space. Because you're right, like, it's harder, like, you know, to tell correct from incorrect, essentially, with images. What I can say is we've been alpha testing, like, the Vision fine tuning, like, for several weeks at that point. We are seeing, like, even higher performance uplift compared to text fine tuning.[00:25:10] Olivier Godement: So that's, there is something here, like, we've been pretty impressed, like, in a good way, frankly. But, you know, how well it works. But for sure, like, you know, I expect the developers who are moving from one modality to, like, text and images will have, like, more, you know Testing, evaluation, like, you know, to set in place, like, to make sure it works well.[00:25:25] Alessio: The model distillation and evals is definitely, like, the most interesting. Moving away from just being a model provider to being a platform provider. How should people think about being the source of truth? Like, do you want OpenAI to be, like, the system of record of all the prompting? Because people sometimes store it in, like, different data sources.[00:25:41] Alessio: And then, is that going to be the same as the models evolve? So you don't have to worry about, you know, refactoring the data, like, things like that, or like future model structures.[00:25:51] Olivier Godement: The vision is if you want to be a source of truth, you have to earn it, right? Like, we're not going to force people, like, to pass us data.[00:25:57] Olivier Godement: There is no value prop, like, you know, for us to store the data. The vision here is at the moment, like, most developers, like, use like a one size fits all model, like be off the shelf, like GP40 essentially. The vision we have is fast forward a couple of years. I think, like, most developers will essentially, like, have a.[00:26:15] Olivier Godement: An automated, continuous, fine tuned model. The more, like, you use the model, the more data you pass to the model provider, like, the model is automatically, like, fine tuned, evaluated against some eval sets, and essentially, like, you don't have to every month, when there is a new snapshot, like, you know, to go online and, you know, try a few new things.[00:26:34] Olivier Godement: That's a direction. We are pretty far away from it. 
But I think, like, that evaluation and distillation product are essentially a first good step in that direction. It's like, hey, if you're excited by that direction, you give us the evaluation data, we can actually log your completion data and start to do some automation on your behalf.[00:26:52] Alessio: And then you can do evals for free if you share data with OpenAI. How should people think about when it's worth it, when it's not? Sometimes people get overly protective of their data when it's actually not that useful. But how should developers think about when it's right to do it, when not, or[00:27:07] Olivier Godement: if you have any thoughts on it?[00:27:08] Olivier Godement: The default policy is still the same, like, you know, we don't train on, like, any API data unless you opt in. What we've seen from feedback is evaluation can be expensive. Like, if you run, like, O1 evals on, like, thousands of samples, like, your bill will get increased, like, you know, pretty significantly.[00:27:22] Olivier Godement: That's problem statement number one. Problem statement number two is, essentially, I want to get to a world where whenever OpenAI ships a new model snapshot, we have full confidence that there is no regression for the task that developers care about. And for that to be the case, essentially, we need to get evals.[00:27:39] Olivier Godement: And so that, essentially, is a sort of a two birds one stone. It's like, we subsidize, basically, the evals. And we also use the evals when we ship new models to make sure that we keep going in the right direction. So, in my sense, it's a win win, but again, completely opt in. I expect that many developers will not want to share their data, and that's perfectly fine to me.[00:27:56] swyx: Yeah, I think free evals though, very, very good incentive. I mean, it's a fair trade. You get data, we get free evals. Exactly,[00:28:04] Olivier Godement: and we sanitize PII, everything. We have no interest in the actual sensitive data. We just want to have good evaluation on the real use cases.[00:28:13] swyx: Like, I always want to eval the eval. I don't know if that ever came up.[00:28:17] swyx: Like, sometimes the evals themselves are wrong, and there's no way for me to tell you.[00:28:22] Olivier Godement: Everyone who is starting with LLMs, building with LLMs, is like, yeah, evaluation, easy, you know, I've done testing, like, all my life. And then you start to actually be able to eval, understand, like, all the corner cases, and you realize, wow, there's like a whole field in itself.[00:28:35] Olivier Godement: So, yeah, good evaluation is hard and so, yeah. Yeah, yeah.[00:28:38] swyx: But I think there's a, you know, I just talked to Braintrust which I think is one of your partners. Mm-Hmm. They also emphasize code based evals versus your sort of low code. What I see is like, I don't know, maybe there's some more that you didn't demo.[00:28:53] swyx: Yours is kind of like a low code experience, right, for evals. Would you ever support like a more code based, like, would I run code on OpenAI's eval platform?[00:29:02] Olivier Godement: For sure. I mean, we meet developers where they are, you know. At the moment, the demand was more for like, you know, easy to get started, like eval.
But, you know, if we need to expose like an evaluation API, for instance, for people like, you know, to pass, like, you know, their existing test data we'll do it.[00:29:15] Olivier Godement: So yeah, there is no, you know, philosophical, I would say, like, you know, misalignment on that. Yeah,[00:29:19] swyx: yeah, yeah. What I think this is becoming, by the way, and I don't, like it's basically, like, you're becoming AWS. Like, the AI cloud. And I don't know if, like, that's a conscious strategy, or it's, like, It doesn't even have to be a conscious strategy.[00:29:33] swyx: Like, you're going to offer storage. You're going to offer compute. You're going to offer networking. I don't know what networking looks like. Networking is maybe, like, caching, or like it's a CDN. It's a prompt CDN.[00:29:45] Alex Volkov: Yeah,[00:29:45] swyx: but it's the AI versions of everything, right? Do you like do you see the analogies or?[00:29:52] Olivier Godement: Whenever I talk to developers, I feel like good models are just half of the story to build a good app. There's a lot more you need to do. Evaluation is the perfect example. Like, you know, you can have the best model in the world, if you're in the dark, like, you know, it's really hard to gain the confidence. And so our philosophy is[00:30:11] Olivier Godement: The whole, like, software development stack is being basically reinvented, you know, with LLMs. There is no freaking way that OpenAI can build everything. Like there is just too much to build, frankly. And so my philosophy is, essentially, we'll focus on like the tools which are like the closest to the model itself.[00:30:28] Olivier Godement: So that's why you see us like, you know, investing quite a bit in like fine tuning, distillation, or evaluation, because we think that it actually makes sense to have, like, in one spot, like, you know, all of that. Like, there is some sort of virtuous circle, essentially, that you can set in place. But stuff like, you know, LLMOps, like tools which are, like, further away from the model, I don't know if you want to do, like, you know, super elaborate, like, prompt management, or, you know, like, tooling, like, I'm not sure, like, you know, OpenAI has, like, such a big edge, frankly, like, you know, to build this sort of tools.[00:30:56] Olivier Godement: So that's how we view it at the moment. But again, frankly, the philosophy is super simple. The strategy is super simple. It's meeting developers where they want us to be. And so, you know that's frankly, like, you know, day in, day out, like, you know, what I try to do.[00:31:08] Alessio: Cool. Thank you so much for the time.[00:31:10] Alessio: I'm sure you,[00:31:10] swyx: Yeah, I have more questions on, a couple questions on voice, and then also, like, your call to action, like, what you want feedback on, right? So, I think we should spend a bit more time on voice, because I feel like that's, like, the big splash thing. I talked... well, I mean, just what is the future of real time for OpenAI?[00:31:28] swyx: Yeah. Because I think obviously video is next. You already have it in the, the ChatGPT desktop app. Do we just have a permanent, like, you know, like, are developers just going to be, like, sending sockets back and forth with OpenAI? Like how do we program for that? Like, what is the future?[00:31:44] Olivier Godement: Yeah, that makes sense.
I think with multimodality, like, real time is quickly becoming, like, you know, essentially the right experience, like, to build an application. Yeah. So my expectation is that we'll see like a non trivial, like a volume of applications like moving to a real time API. Like if you zoom out, like, audio is really simple, like, audio until basically now.[00:32:05] Olivier Godement: Audio on the web, in apps, was basically very much like a second class citizen. Like, you basically did like an audio chatbot for users who did not have a choice. You know, they were like struggling to read, or I don't know, they were like not super educated with technology. And so, frankly, it was like the crappy option, you know, compared to text.[00:32:25] Olivier Godement: But when you talk to people in the real world, the vast majority of people, like, prefer to talk and listen instead of typing and writing.[00:32:34] swyx: We speak before we write.[00:32:35] Olivier Godement: Exactly. I don't know. I mean, I'm sure it's the case for you in Singapore. For me, my friends in Europe, the number of, like, WhatsApp, like, voice notes they receive every day, I mean, just people, it makes sense, frankly, like, you know.[00:32:45] Olivier Godement: Chinese. Chinese, yeah.[00:32:46] swyx: Yeah,[00:32:47] Olivier Godement: all voice. You know, it's easier. There is more emotions. I mean, you know, you get the point across, like, pretty well. And so my personal ambition for, like, the real time API and, like, audio in general is to make, like, audio and, like, multimodality, like, truly a first class experience.[00:33:01] Olivier Godement: Like, you know, if you're, like, you know, the amazing, like, super bold, like, start up out of YC, you want to build, like, the next, like, billion, like, you know, user application to make it, like, truly voice first and make it feel, like, you know, an actual good, like, you know, product experience. So that's essentially the ambition, and I think, like, yeah, it could be pretty big.[00:33:17] swyx: Yeah. I think one issue that people have with the voice so far, as released in advanced voice mode, is the refusals.[00:33:24] Alex Volkov: Yeah.[00:33:24] swyx: You guys had a very inspiring model spec. I think Joanne worked on that. Where you said, like, yeah, we don't want to overly refuse all the time. In fact, like, even if, like, not safe for work, like, in some occasions, it's okay.[00:33:38] swyx: How, is there an API that we can say, not safe for work, okay?[00:33:41] Olivier Godement: I think we'll get there. I think we'll get there. The model spec, like, nailed it, like, you know. It nailed it! It's so good! Yeah, we are not in the business of, like, policing, you know, if you can say, like, vulgar words or whatever. You know, there are some use cases, like, you know, I'm writing, like, a Hollywood, like, script I want to say, like, will go on, and it's perfectly fine, you know?[00:33:59] Olivier Godement: And so I think the direction where we'll go here is that basically There will always be like, you know, a set of behavior that we will, you know, just like forbid, frankly, because they're illegal or against our terms of service.
But then there will be like, you know, some more like risky, like themes, which are completely legal, like, you know, vulgar words or, you know, not safe for work stuff.[00:34:17] Olivier Godement: Where basically we'll expose like a controllable, like safety, like knobs in the API to basically allow you to say, hey, that theme okay, that theme not okay. How sensitive do you want the threshold to be on safety refusals? I think that's the direction. So a[00:34:31] swyx: safety API.[00:34:32] Olivier Godement: Yeah, in a way, yeah.[00:34:33] swyx: Yeah, we've never had that.[00:34:34] Olivier Godement: Yeah. '[00:34:35] swyx: cause right now it is whatever you decide. And then it's, that's it. That, that, that would be the main reason I don't use OpenAI voice is because of[00:34:42] Olivier Godement: it's over policed. Over refuses, over refusals. Yeah. Yeah, yeah. No, we gotta fix that. Yeah. Like singing,[00:34:47] Alessio: we're trying to do voice. I'm a singer.[00:34:49] swyx: And you, you locked off singing.[00:34:51] swyx: Yeah,[00:34:51] Alessio: yeah, yeah.[00:34:52] swyx: But I, I understand music gets you in trouble. Okay. Yeah. So then, and then just generally, like, what do you want to hear from developers? Right? We have, we have all developers watching you know, what feedback do you want? Any, anything specific as well, like from, especially from today anything that you are unsure about, that you are like, Our feedback could really help you decide.[00:35:09] swyx: For sure.[00:35:10] Olivier Godement: I think, essentially, it's becoming pretty clear after today that, you know, I would say the OpenAI direction has become pretty clear, like, you know, after today. Investment in reasoning, investment in multimodality, investment as well, like in, I would say, tool use, like function calling. To me, the biggest question I have is, you know, where should we put the cursor next?[00:35:30] Olivier Godement: I think we need all three of them, frankly, like, you know, so we'll keep pushing.[00:35:33] swyx: Hire 10, 000 people, or actually, no need, build a bunch of bots.[00:35:37] Olivier Godement: Exactly, and so let's take O1: is it smart enough, like, for your problems? Like, you know, let's set aside for a second the existing models, like, for the apps that you would love to build, is O1 basically it in reasoning, or do we still have, like, you know, a step to do?[00:35:50] Olivier Godement: Preview is not enough, I[00:35:52] swyx: need the full one.[00:35:53] Olivier Godement: Yeah, so that's exactly that sort of feedback. Essentially what they would love to do is for developers I mean, there's a thing that Sam has been saying like over and over again, like, you know, it's easier said than done, but I think it's directionally correct. As a developer, as a founder, you basically want to build an app which is a bit too difficult for the model today, right?[00:36:12] Olivier Godement: Like, what you think is right, it's like, sort of working, sometimes not working. And that way, you know, that basically gives us like a goalpost, and be like, okay, that's what you need to enable with the next model release, like in a few months. And so I would say that, usually, like, that's the sort of feedback which is like the most useful that I can, like, directly, like, you know, incorporate.[00:36:33] swyx: Awesome. I think that's our time. Thank you so much, guys. Yeah, thank you so much.[00:36:38] AI Charlie: Thank you.
We were particularly impressed that Olivier addressed the not safe for work moderation policy question head on, as that had only previously been picked up on in Reddit forums. This is an encouraging sign that we will return to in the closing candor with Sam Altman at the end of this episode.[00:36:57] Romain Huet, Head of DX, OpenAI[00:36:57] AI Charlie: Next, a chat with Romain Huet, friend of the pod, AI Engineer World's Fair closing keynote speaker, and head of developer experience at OpenAI on his incredible live demos and advice to AI engineers on all the new modalities.[00:37:12] Alessio: Alright, we're live from OpenAI Dev Day. We're with Romain, who just did two great demos on, on stage.[00:37:17] Alessio: And he's been a friend of Latent Space, so thanks for taking some of the time.[00:37:20] Romain Huet: Of course, yeah, thank you for being here and spending the time with us today.[00:37:23] swyx: Yeah, I appreciate you guys putting this on. I, I know it's like extra work, but it really shows the developers that you care about reaching out.[00:37:31] Romain Huet: Yeah, of course, I think when you go back to the OpenAI mission, I think for us it's super important that we have the developers involved in everything we do. Making sure that, you know, they have all of the tools they need to build successful apps. And we really believe that the developers are always going to invent the ideas, the prototypes, the fun factors of AI that we can't build ourselves.[00:37:49] Romain Huet: So it's really cool to have everyone here.[00:37:51] swyx: We had Michelle from you guys on. Yes, great episode. She very seriously said API is the path to AGI. Correct. And people in our YouTube comments were like, API is not AGI. I'm like, no, she's very serious. API is the path to AGI. Like, you're not going to build everything like the developers are, right?[00:38:08] swyx: Of[00:38:08] Romain Huet: course, yeah, that's the whole value of having a platform and an ecosystem of amazing builders who can, like, in turn, create all of these apps. I'm sure we talked about this before, but there's now more than 3 million developers building on OpenAI, so it's pretty exciting to see all of that energy into creating new things.[00:38:26] Alessio: I was going to say, you built two apps on stage today, an international space station tracker and then a drone. The hardest thing must have been opening Xcode and setting that up. Now, like, the models are so good that they can do everything else. Yes. You had two modes of interaction. You had kind of like a GPT app to get the plan with O1, and then you had Cursor to apply some of the changes.[00:38:47] Alessio: Correct. How should people think about the best way to consume the coding models, especially both for, you know, brand new projects and then existing projects that you're trying to modify?[00:38:56] Romain Huet: Yeah. I mean, one of the things that's really cool about O1 Preview and O1 Mini being available in the API is that you can use it in your favorite tools like Cursor like I did, right?[00:39:06] Romain Huet: And that's also what like Devin from Cognition can use in their own software engineering agents. In the case of Xcode, like, it's not quite deeply integrated in Xcode, so that's why I had like ChatGPT side by side.
But it's cool, right, because I could instruct O1 Preview to be, like, my coding partner and brainstorming partner for this app, but also consolidate all of the, the files and architect the app the way I wanted.[00:39:28] Romain Huet: So, all I had to do was just, like, port the code over to Xcode and zero shot the app build. I don't think I conveyed, by the way, how big a deal that is, but, like, you can now create an iPhone app from scratch, describing a lot of intricate details that you want, and your vision comes to life in, like, a minute.[00:39:47] Romain Huet: It's pretty outstanding.[00:39:48] swyx: I have to admit, I was a bit skeptical because if I open up Xcode, I don't know anything about iOS programming. You know which file to paste it in. You probably set it up a little bit. So I'm like, I have to go home and test it. And I need the ChatGPT desktop app so that it can tell me where to click.[00:40:04] Romain Huet: Yeah, I mean like, Xcode and iOS development has become easier over the years since they introduced Swift and SwiftUI. I think back in the days of Objective C, or like, you know, the storyboard, it was a bit harder to get in for someone new. But now with Swift and SwiftUI, their dev tools are really exceptional.[00:40:23] Romain Huet: But now when you combine that with O1, as your brainstorming and coding partner, it's like your architect, effectively. That's the best way, I think, to describe O1. People ask me, like, can GPT 4 do some of that? And it certainly can. But I think it will just start spitting out code, right? And I think what's great about O1, is that it can, like, make up a plan.[00:40:42] Romain Huet: In this case, for instance, the iOS app had to fetch data from an API, it had to look at the docs, it had to look at, like, how do I parse this JSON, where do I store this thing, and kind of wire things up together. So that's where it really shines. Is mini or preview the better model that people should be using?[00:40:58] Romain Huet: Like, how? I think people should try both. We're obviously very excited about the upcoming O1 that we shared the evals for. But we noticed that O1 Mini is very, very good at everything math, coding, everything STEM. If for your kind of brainstorming or your kind of science part you need some broader knowledge, then reaching for O1 Preview is better.[00:41:20] Romain Huet: But yeah, I used O1 Mini for my second demo. And it worked perfectly. All I needed was very much like something rooted in code, architecting and wiring up like a front end, a backend, some UDP packets, some web sockets, something very specific. And it did that perfectly.[00:41:35] swyx: And then maybe just talking about voice and Wanderlust, the app that keeps on giving, what's the backstory behind like preparing for all of that?[00:41:44] Romain Huet: You know, it's funny because when last year for Dev Day, we were trying to think about what could be a great demo app to show like an assistive experience. I've always thought travel is a kind of a great use case because you have, like, pictures, you have locations, you have the need for translations, potentially.[00:42:01] Romain Huet: There's like so many use cases that are bounded to travel that I thought last year, let's use a travel app. And that's how Wanderlust came to be. But of course, a year ago, all we had was a text based assistant.
And now we thought, well, if there's a voice modality, what if we just bring this app back as a wink?[00:42:19] Romain Huet: And what if we were interacting better with voice? And so with this new demo, what I showed was the ability to like, So, we wanted to have a complete conversation in real time with the app, but also the thing we wanted to highlight was the ability to call tools and functions, right? So, like in this case, we placed a phone call using the Twilio API, interfacing with our AI agents, but developers are so smart that they'll come up with so many great ideas that we could not think of ourselves, right?[00:42:48] Romain Huet: But what if you could have like a, you know, a 911 dispatcher? What if you could have like a customer service, like, center that is much smarter than what we've been used to today? There's gonna be so many use cases for real time, it's awesome.[00:43:00] swyx: Yeah, and sometimes actually you, you, like this should kill phone trees.[00:43:04] swyx: Like there should not be like dial one[00:43:07] Romain Huet: of course para[00:43:08] swyx: español, you know? Yeah, exactly. Or whatever. I dunno.[00:43:12] Romain Huet: I mean, even you starting speaking Spanish would just do the thing, you know you don't even have to ask. So yeah, I'm excited for this future where we don't have to interact with those legacy systems.[00:43:22] swyx: Yeah. Yeah. Is there anything, so you are doing function calling in a streaming environment. So basically it's, it's web sockets. It's UDP, I think. It's basically not guaranteed to be exactly once delivery. Like, is there any coding challenges that you encountered when building this?[00:43:39] Romain Huet: Yeah, it's a bit more delicate to get into it.[00:43:41] Romain Huet: We also think that for now, what we, what we shipped is a, is a beta of this API. I think there's much more to build onto it. It does have the function calling and the tools. But we think that for instance, if you want to have something very robust on your client side, maybe you want to have WebRTC as a client, right?[00:43:58] Romain Huet: And, and as opposed to like directly working with the sockets at scale. So that's why we have partners like LiveKit and Agora if you want to, if you want to use them. And I'm sure we'll have many more in the future. But yeah, we keep on iterating on that, and I'm sure the feedback of developers in the weeks to come is going to be super critical for us to get it right.[00:44:16] swyx: Yeah, I think LiveKit has been fairly public that they are used in, in the ChatGPT app. Like, is it, it's just all open source, and we just use it directly with OpenAI, or do we use LiveKit Cloud or something?[00:44:28] Romain Huet: So right now we, we released the API, we released some sample code also, and reference clients for people to get started with our API.[00:44:35] Romain Huet: And we also partnered with LiveKit and Agora, so they also have their own, like ways to help you get started that plug natively with the real time API. So depending on the use case, people can decide what to use. If you're working on something that's completely client side or if you're working on something on the server side, for the voice interaction, you may have different needs, so we want to support all of those.[00:44:55] Alessio: I know you gotta run.
Is there anything that you want the AI engineering community to give feedback on specifically, like even down to like, you know, a specific API end point or like, what, what's like the thing that you want? Yeah. I[00:45:08] Romain Huet: mean, you know, if we take a step back, I think Dev Day this year is all different from last year and, and in, in a few different ways.[00:45:15] Romain Huet: But one way is that we wanted to keep it intimate, even more intimate than last year. We wanted to make sure that the community is in the spotlight. That's why we have community talks and everything. And the takeaway here is like learning from the very best developers and AI engineers.[00:45:31] Romain Huet: And so, you know we want to learn from them. Most of what we shipped this morning, including things like prompt caching, the ability to generate prompts quickly in the playground, or even things like vision fine tuning. These are all things that developers have been asking of us. And so, the takeaway I would, I would leave them with is to say like, Hey, the roadmap that we're working on is heavily influenced by them and their work.[00:45:53] Romain Huet: And so we love feedback, from high-level feature requests, as you say, down to, like, very intricate details of an API endpoint. We love feedback, so yes that's, that's how we, that's how we build this API.[00:46:05] swyx: Yeah, I think the, the model distillation thing as well, it might be, like, the, the most boring, but, like, actually used a lot.[00:46:12] Romain Huet: True, yeah. And I think maybe the most unexpected, right, because I think if I, if I read Twitter correctly the past few days, a lot of people were expecting us to ship the real time API for speech to speech. I don't think developers were expecting us to have more tools for distillation, and we really think that's gonna be a big deal, right?[00:46:30] Romain Huet: If you're building apps where, you know, you, you want low latency, low cost, but high performance, high quality on the use case, distillation is gonna be amazing.[00:46:40] swyx: Yeah. I sat in the distillation session just now and they showed how they distilled from 4o to 4o mini and it was like only like a 2% hit in the performance and 50 next.[00:46:49] swyx: Yeah,[00:46:50] Romain Huet: I was there as well for the Superhuman kind of use case, inspired for an email client. Yeah, this was really good. Cool, man! Thanks so much for having me. Thanks again for being here today. It's always[00:47:00] AI Charlie: great to have you. As you might have picked up at the end of that chat, there were many sessions throughout the day focused on specific new capabilities.[00:47:08] Michelle Pokrass, Head of API at OpenAI ft. Simon Willison[00:47:08] AI Charlie: Like the new model distillation features combining evals and fine tuning. For our next session, we are delighted to bring back two former guests of the pod, which is something listeners have been greatly enjoying in our second year of doing the Latent Space podcast. Michelle Pokrass of the API team joined us recently to talk about structured outputs, and today gave an updated long form session at Dev Day, describing the implementation details of the new structured output mode.[00:47:39] AI Charlie: We also got her updated thoughts on the VoiceMode API we discussed in her episode, now that it is finally announced.
She is joined by friend of the pod and super blogger, Simon Willison, who also came back as guest co host in our Dev Day 2023 episode.[00:47:56] Alessio: Great, we're back live at Dev Day returning guest Michelle and then returning guest co host Fork.[00:48:03] Alessio: Fork, yeah, I don't know. I've lost count. I think it's been a few. Simon Willison is back. Yeah, we just wrapped, we just wrapped everything up. Congrats on, on getting everything live. Simon did a great, like, blog, so if you haven't caught up, I[00:48:17] Simon Willison: wrote my, I implemented it. Now, I'm starting my live blog while waiting for the first talk to start, using, like, GPT-4 to write me the JavaScript, and I got that live just in time and then, yeah, I was live blogging the whole day.[00:48:28] swyx: Are you a Cursor enjoyer?[00:48:29] Simon Willison: I haven't really gotten into Cursor yet to be honest. I just haven't spent enough time for it to click, I think. I'm more a copy and paste things out of Claude and ChatGPT. Yeah. It's interesting.[00:48:39] swyx: Yeah. I've converted to Cursor and O1 is so easy to just toggle on and off.[00:48:45] Alessio: What's your workflow?[00:48:46] Alessio: VS[00:48:48] Michelle Pokrass: Code Copilot, so yep, same here. Team Copilot. Copilot is actually the reason I joined OpenAI. It was, you know, before ChatGPT, this is the thing that really got me. So I'm still into it, but I keep meaning to try out Cursor, and I think now that things have calmed down, I'm gonna give it a real go.[00:49:03] swyx: Yeah, it's a big thing to change your tool of choice.[00:49:06] swyx: Yes,[00:49:06] Michelle Pokrass: yeah, I'm pretty dialed, so.[00:49:09] swyx: I mean, you know, if you want, you can just fork VS Code and make your own. That's the thing to do, right? We joked about doing a hackathon where the only thing you do is fork VS Code and the best fork wins.[00:49:20] Michelle Pokrass: Nice.[00:49:22] swyx: That's actually a really good idea. Yeah, what's up?[00:49:26] swyx: I mean, congrats on launching everything today. I know, like, we touched on it a little bit, but, like, everyone was kind of guessing that Voice API was coming, and, like, we talked about it in our episode. How do you feel going into the launch? Like, any design decisions that you want to highlight?[00:49:41] Michelle Pokrass: Yeah, super jazzed about it. The team has been working on it for a while. It's, like, a very different API for us. It's the first WebSocket API, so a lot of different design decisions to be made. It's, like, what kind of events do you send? When do you send an event? What are the event names? What do you send, like, on connection versus on future messages?[00:49:57] Michelle Pokrass: So there have been a lot of interesting decisions there. The team has also hacked together really cool projects as we've been testing it. One that I really liked is we had an internal hackathon for the API team. And some folks built like a little hack that you could use to, like, vim with voice mode, so like, control vim, and you would tell it, like, nice, write a file and it would, you know, know all the vim commands and, and pipe those in.[00:50:18] Michelle Pokrass: So yeah, a lot of cool stuff we've been hacking on and really excited to see what people build with it.[00:50:23] Simon Willison: I've gotta call out a demo from today. I think it was Katja had a 3D visualization of the solar system, like WebGL solar system, you could talk to.
That is one of the coolest conference demos I've ever seen.[00:50:33] Simon Willison: That was so convincing. I really want the code. I really want the code for that to get put out there. I'll talk[00:50:39] Michelle Pokrass: to the team. I think we can[00:50:40] Simon Willison: probably
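Michelle's point about event design, what you send on connection versus on later messages, is the crux of programming against a WebSocket API. Below is a minimal TypeScript sketch of that envelope pattern; the endpoint and the event names are illustrative assumptions rather than the official Realtime API schema, so treat it as the shape of the code, not a drop-in client.

```typescript
import WebSocket from "ws"; // npm install ws

// Every message is a JSON envelope with a "type" discriminator.
type ServerEvent =
  | { type: "session.created"; session: { id: string } }
  | { type: "response.text.delta"; delta: string }
  | { type: "error"; message: string };

const ws = new WebSocket("wss://example.com/v1/realtime", {
  headers: { Authorization: `Bearer ${process.env.API_KEY}` },
});

ws.on("open", () => {
  // Configuration that applies to the whole connection goes out once, up front.
  ws.send(JSON.stringify({ type: "session.update", session: { voice: "alloy" } }));
  // Per-turn requests are sent as later events.
  ws.send(JSON.stringify({ type: "response.create", input: "Say hello." }));
});

ws.on("message", (raw) => {
  const event = JSON.parse(raw.toString()) as ServerEvent;
  switch (event.type) {
    case "session.created":
      console.log("session ready:", event.session.id);
      break;
    case "response.text.delta":
      process.stdout.write(event.delta);
      break;
    case "error":
      console.error("server error:", event.message);
      break;
  }
});
```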

The top AI news from the past week, every ThursdAI

Hey, Alex here. Super quick, as I'm still attending Dev Day, but I didn't want to leave you hanging (if you're a paid subscriber!), I have decided to outsource my job and give the amazing podcasters of NoteBookLM the whole transcript of the opening keynote of OpenAI Dev Day.You can see a blog of everything they just posted hereHere's a summary of all what was announced:* Developer-Centric Approach: OpenAI consistently emphasized the importance of developers in their mission to build beneficial AGI. The speaker stated, "OpenAI's mission is to build AGI that benefits all of humanity, and developers are critical to that mission... we cannot do this without you."* Reasoning as a New Frontier: The introduction of the GPT-4 series, specifically the "O1" models, marks a significant step towards AI with advanced reasoning capabilities, going beyond the limitations of previous models like GPT-3.* Multimodal Capabilities: OpenAI is expanding the potential of AI applications by introducing multimodal capabilities, particularly focusing on real-time speech-to-speech interaction through the new Realtime API.* Customization and Fine-Tuning: Empowering developers to customize models is a key theme. OpenAI introduced Vision for fine-tuning with images and announced easier access to fine-tuning with model distillation tools.* Accessibility and Scalability: OpenAI demonstrated a commitment to making AI more accessible and cost-effective for developers through initiatives like price reductions, prompt caching, and model distillation tools.Important Ideas and Facts:1. The O1 Models:* Represent a shift towards AI models with enhanced reasoning capabilities, surpassing previous generations in problem-solving and logical thought processes.* O1 Preview is positioned as the most powerful reasoning model, designed for complex problems requiring extended thought processes.* O1 Mini offers a faster, cheaper, and smaller alternative, particularly suited for tasks like code debugging and agent-based applications.* Both models demonstrate advanced capabilities in coding, math, and scientific reasoning.* OpenAI highlighted the ability of O1 models to work with developers as "thought partners," understanding complex instructions and contributing to the development process.Quote: "The shift to reasoning introduces a new shape of AI capability. The ability for our model to scale and correct the process is pretty mind-blowing. So we are resetting the clock, and we are introducing a new series of models under the name O1."2. Realtime API:* Enables developers to build real-time AI experiences directly into their applications using WebSockets.* Launches with support for speech-to-speech interaction, leveraging the technology behind ChatGPT's advanced voice models.* Offers natural and seamless integration of voice capabilities, allowing for dynamic and interactive user experiences.* Showcased the potential to revolutionize human-computer interaction across various domains like driving, education, and accessibility.Quote: "You know, a lot of you have been asking about building amazing speech-to-speech experiences right into your apps. Well now, you can."3. 
Vision, Fine-Tuning, and Model Distillation:* Vision introduces the ability to use images for fine-tuning, enabling developers to enhance model performance in image understanding tasks.* Fine-tuning with Vision opens up opportunities in diverse fields such as product recommendations, medical imaging, and autonomous driving.* OpenAI emphasized the accessibility of these features, stating that "fine-tuning with Vision is available to every single developer."* Model distillation tools facilitate the creation of smaller, more efficient models by transferring knowledge from larger models like O1 and GPT-4.* This approach addresses cost concerns and makes advanced AI capabilities more accessible for a wider range of applications and developers.Quote: "With distillation, you take the outputs of a large model to supervise, to teach a smaller model. And so today, we are announcing our own model distillation tools."4. Cost Reduction and Accessibility:* OpenAI highlighted its commitment to lowering the cost of AI models, making them more accessible for diverse use cases.* Announced a 90% decrease in cost per token since the release of GPT-3, emphasizing continuous efforts to improve affordability.* Introduced prompt caching, automatically providing a 50% discount for input tokens the model has recently processed.* These initiatives aim to remove financial barriers and encourage wider adoption of AI technologies across various industries.Quote: "Every time we reduce the price, we see new types of applications, new types of use cases emerge. We're super far from the price equilibrium. In a way, models are still too expensive to be bought at massive scale."Conclusion:OpenAI DevDay conveyed a strong message of developer empowerment and a commitment to pushing the boundaries of AI capabilities. With new models like O1, the introduction of the Realtime API, and a dedicated focus on accessibility and customization, OpenAI is paving the way for a new wave of innovative and impactful AI applications developed by a global community. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit sub.thursdai.news/subscribe
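The prompt caching item in that summary is the one developers can act on immediately: because the discount applies to a previously seen prefix, the practical move is to keep the large, static part of the prompt first and the per-request part last. A rough TypeScript sketch with the openai Node SDK is below; the model name and the exact caching thresholds are assumptions to check against the current documentation.

```typescript
import OpenAI from "openai"; // npm install openai

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Large, unchanging instructions go first so repeated requests share a cacheable prefix.
const SYSTEM_PROMPT = `You are a support assistant for ExampleCo.
... (imagine several thousand tokens of product documentation here) ...`;

async function answer(userQuestion: string) {
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // assumption: pick whatever current model fits your use case
    messages: [
      { role: "system", content: SYSTEM_PROMPT }, // stable prefix: eligible for caching
      { role: "user", content: userQuestion },    // variable suffix: changes every call
    ],
  });
  return completion.choices[0].message.content;
}

answer("How do I reset my password?").then(console.log);
```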

Syntax - Tasty Web Development Treats

In this episode of Syntax, Wes and Scott talk about the latest features in Node.js, including native support for TypeScript, .env parsing, a built-in test runner, watch mode, SQLite integration, glob support, and top-level await. They also discuss some wishlist items, and experimental features like WebSocket support and the require module. Show Notes 00:00 Welcome to Syntax! 01:13 Brought to you by Sentry.io 01:37 Node.js new features Deno Bun 02:51 TypeScript tsx swc/wasm-typescript 10:03 SQLite v22.5 14:35 .env support 16:24 Test runner Jest 19:42 Watch Mode nodemon 21:22 Glob support 22:48 Top-Level Await Top-level await is a footgun 26:40 Experimental require module Default ESM Detection Web request standards HonoJS 29:39 Experimental WebSocket support 30:13 Async local storage 31:43 Single file executables 32:46 Wishlist 32:54 Hot reload 34:20 Window shim globalThis 35:30 Better server 35:56 Better terminal integration NIM styleText chalk warp 41:36 Twitter responses Coolify n 46:54 Sick Picks + Shameless Plugs Sick Picks Scott: Cascadia Wes: Roborock Qrevo Shameless Plugs Scott: YouTube Channel Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads
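One of the features Wes and Scott mention, the experimental built-in WebSocket client, means small scripts no longer need the `ws` package just to act as a client. A quick sketch follows; it assumes a recent Node version (the flag requirement has changed between releases, so check whether your version still needs `--experimental-websocket`), and the echo URL is just a placeholder.

```typescript
// No npm install needed: recent Node versions expose a browser-style WebSocket global.
// On some versions you may need to run with: node --experimental-websocket client.ts
const socket = new WebSocket("wss://echo.example.com"); // placeholder server

socket.addEventListener("open", () => {
  socket.send("hello from plain Node");
});

socket.addEventListener("message", (event) => {
  console.log("echoed back:", event.data);
  socket.close();
});

socket.addEventListener("error", (event) => {
  console.error("websocket error", event);
});
```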

Syntax - Tasty Web Development Treats
791: LLRT The Serverless Runtime w/ Richard Davison

Syntax - Tasty Web Development Treats

Play Episode Listen Later Jul 5, 2024 56:40


Scott and Wes chat with Richard Davison from AWS about LLRT, a new runtime tailored specifically for Lambda. They dive into the benefits of using LLRT, challenges with JavaScript in serverless, and why Rust was chosen for its development. Show Notes 00:00 Welcome to Syntax! 01:07 Who is Richard Davison? 05:11 What is LLRT and what's the motivation for building it? 08:25 AWS Lambda example. 11:20 What makes LLRT specifically tailored to Lambda? 14:55 Brought to you by Sentry.io. 15:22 Node.js in Lambda. 16:00 What are some challenges that people have with JavaScript in serverless? 17:20 Lambda memory configuration. 19:23 Managing cost of compute. 21:29 Simpler and faster than Node, Bun, Deno, but not a replacement. 22:31 The benchmarks. 27:00 QuickJS, the main reason for the performance gains. Fabrice Bellard QuickJS. 28:03 The QuickJS engine. 30:35 What was the reason behind creating QuickJS? 33:46 What made you pick Rust for LLRT? 36:34 Abstractions and the value of speed. 39:08 The JIT Compiler. 42:38 Compile cache. 43:27 De-optimizations. 44:59 Node.js Compat, what to use and avoid with LLRT. GitHub AWS Labs Compatibility Chart. 47:52 Will you target the WinterCG spec? 50:22 Streams API. 52:06 What about WebSockets? 53:10 Is this going to be promoted from a labs project? 54:49 Sick Picks + Shameless Plugs. Sick Picks Richard: QuickJS Engine, JSLinux. Shameless Plugs Richard: Javascript Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads
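To make the cold-start discussion concrete, here is the kind of small, dependency-light Lambda handler LLRT is aimed at, written in TypeScript. The event shape is a simplified assumption (a real API Gateway event carries more fields), and whether a given Node built-in is available under LLRT should be checked against the AWS Labs compatibility chart mentioned in the show notes.

```typescript
// Minimal AWS Lambda-style handler. LLRT targets small functions like this one;
// it implements only a subset of Node's built-ins, so keep dependencies light.
interface ApiEvent {
  queryStringParameters?: Record<string, string>;
}

export const handler = async (event: ApiEvent) => {
  const name = event.queryStringParameters?.name ?? "world";
  return {
    statusCode: 200,
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ message: `hello, ${name}` }),
  };
};
```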

Les Cast Codeurs Podcast
LCC 313 - 313 CCL

Les Cast Codeurs Podcast

Play Episode Listen Later Jun 15, 2024 79:45


Katia, Guillaume, Emmanuel and Antonio discuss Kotlin, Micronaut, Spring Boot, Quarkus, Langchain4j, LLMs in Java, reproducible builds and the AMA question of the day: how do you build a career as a dev at 40? Recorded June 14, 2024. Episode download: LesCastCodeurs-Episode-313.mp3 News Languages Android with Kotlin Multiplatform or Flutter with Dart? https://developers.googleblog.com/en/making-development-across-platforms-easier-for-developers/ Layoffs continued at Google and the Flutter/Dart team, like many others, was affected, but on social media people assumed Google was divesting from Flutter and Dart. On the Android side, meanwhile, Google is pushing Kotlin and KMP, so naturally people also wondered whether Google had taken sides to push Kotlin/KMP over Flutter/Dart. To help developers better understand the value of the two platforms, with their pros and cons, the leads of both platforms wrote a joint article. If you want an experience closer to the hardware and the latest Android features, along with a truly native Android UI/UX, Kotlin/KMP is the better fit. If you want a cross-platform web, mobile and desktop experience with a shared UX, plus business logic shared from a single code base, Flutter and Dart are more suitable. KotlinConf recap https://x.com/gz_k/status/1793887581433971083?s=46&t=C18cckWlfukmsB_Fx0FfxQ Multiplatform RPC, the "Grow with the flow" talk showing a Kotlin rewrite simpler than complex solutions elsewhere, power-assert for writing tests, Kotlin 2.0 and its major evolutions, Kotlin Multiplatform now stable, Kotlin Compose Multiplatform continuing to mature, a migration report from Android Jetpack to Kotlin Multiplatform, coroutine and scope use cases. Libraries Quarkus wants to move to a foundation https://quarkus.io/blog/quarkus-in-a-foundation/ improve adoption (even more), improve transparency and collaboration, encourage multi-vendor participation. First step: more open governance. Second step: move into a foundation. Discussion with the community on the proposal and the target foundations, with criteria for the foundation (notably delivery speed). Quarkus 3.11 https://quarkus.io/blog/quarkus-3-11-0-released/ WebSockets Next in progress, Dev Services for observability (Grafana, Jaeger, OpenTelemetry), Infinispan cache extension #38448 - Observability extensions - Dev Services, Dev Resources, LGTM #39836 - Infinispan Cache Extension #40309 - WebSockets Next: client endpoints #40534 - WebSockets Next: initial version of security integration #40273 - Allow quarkus:run to launch Dev Services #40539 - Support for OIDC session expired page #40600 - Introduce OidcRedirectFilter LangChain4j 0.31 is out https://github.com/langchain4j/langchain4j/releases/tag/0.31.0 Web search for RAG with Google and Tavily, RAG with SQL databases (experimental), retrieval of the resources surfaced by RAG when AiServices returns a Result, LLM observability for OpenAI to be notified of requests, responses and errors, integration of Cohere (embeddings), Jina (embeddings and re-ranking scoring), Azure CosmosDB as an embedding store, Gemini update with parallel function calling and system instructions. Spring Boot 3.3.0 is out
https://spring.io/blog/2024/05/23/spring-boot-3-3-0-available-now Class Data Sharing support, Micrometer support for SpanTag etc., Spring Security improvements such as JwtAuthenticationConverter, Docker Compose support for Bitnami container images, virtual threads for WebSockets, SBOM support via an actuator, SNI for embedded web servers, new documentation via Antora. Micronaut 4.5 is out https://github.com/micronaut-projects/micronaut-platform/releases/tag/v4.5.0 The Netty-based server includes blocking-operation detection, and modules using it will tell the user when certain operations are better redirected to a virtual thread or to the IO thread pool. Micronaut Data includes multitenancy support with discriminator-based partitioning for JDBC and R2DBC. Micronaut Data adds cursor-based pagination for JDBC and R2DBC (also important for Jakarta Data). Support for Jakarta Servlet annotations, for example to configure servlet filters. Virtual thread and HTTP/2 support. A new JSON Schema module to generate JSON Schemas for Java records. A new Source Gen module for cross-language source generation for Java and Kotlin. A new Guice module to import existing Guice modules. Web Angular 18 is out https://blog.angular.dev/angular-v18-is-now-available-e79d5ac0affe Experimental support for zoneless change detection. Angular.dev is now the new site for Angular developers. Material 3, deferrable views and built-in control flow are now stable and include a series of improvements. Server-side rendering improvements such as i18n hydration support, better debugging, hydration support in Angular Material, and event replay using the same library as Google Search. Data and Artificial Intelligence A pure Java version of Meta's Llama3 LLM https://github.com/mukel/llama3.java/tree/main uses Java's upcoming Vector API. JLama, an LLM inference engine in Java using the Vector API https://www.infoq.com/news/2024/05/jlama-llm-inference-java/ based on llama.c, an LLM inference engine (executing the requests); JLama is implemented with the Vector API and PanamaTensorOperations; several alternatives exist (native bindings, pure implementations in Java, Scala, Kotlin). Target Speech Hearing https://www.infoq.com/news/2024/05/target-speech-hearing/ A new deep learning algorithm from the University of Washington lets you listen to a single person of your choice and erase all the noise around them. The system requires the person wearing the headphones to press a button while looking at someone speaking, or simply to look at them for three to five seconds. This lets a model learn the speaker's vocal patterns and lock onto them so it can play them back to the listener, even if the speaker moves around and the listener stops looking at them. According to the researchers, this is a significant advance over existing noise-cancelling headphones, which can effectively cancel all sounds but cannot select speakers based on their vocal characteristics. Currently, the system can only register one speaker at a time. Another limitation is that registration only succeeds if no other loud voice is coming from the same direction.
The team open-sourced their code and dataset to facilitate future research on improving target speech hearing. Tooling Using an LLM to migrate a testing framework https://www.infoq.com/news/2024/06/slack-automatic-test-conversion/ Slack migrated 15,000 tests from Enzyme to React Testing Library with an 80% success rate. The migration was needed because Enzyme lacks support for React 18. The team first tried to automate the conversion with AST transformations, but only reached 45% success because of the complexity of Enzyme's methods and the lack of access to contextual DOM information. The team then used Claude 2.1 for the conversion, with success rates varying from 40% to 60%, the results depending largely on task complexity. Given the unsatisfactory results, the team decided to observe how human developers approached converting the unit tests. Human developers used their knowledge of React, Enzyme and RTL, plus the rendering context and the AST conversions from the initial tool, to convert the unit tests better. In the end Slack's engineers combined AST transformations and the LLM by including rendered React components and AST conversions in the prompts, reaching an 80% success rate and demonstrating how complementary these technologies are. Claude 2.1 is a large language model (LLM) announced in November 2023 by Anthropic. It includes a 200,000-token context window, significant reductions in the model's hallucination rate, system prompts, and tool use. Since then, Anthropic has introduced the Claude 3 model family, made up of three distinct models, with multimodal capabilities and improved contextual understanding. An abstract syntax tree (AST) is a tree representation of the abstract syntactic structure of source code written in a programming language. Each node in the tree represents a construct in the source code. A syntax tree focuses on the structure and content needed to understand the code's functionality. ASTs are commonly used in compilers and interpreters to analyze and examine code, enabling various transformations, optimizations and translations during compilation. A testing IDE from JetBrains https://blog.jetbrains.com/qa/2024/05/aqua-general-availability/ Aqua, the first IDE designed for test automation, supports several languages (Java, Python, JavaScript, TypeScript, Kotlin, SQL) and test frameworks (Selenium, Playwright, Cypress). Why? Testing applications requires specific skills. Aqua, a purpose-built IDE, is recommended by test automation engineers. Aqua offers two license plans: a free one for non-commercial use and a paid one for commercial use.
That seems a bit counter-intuitive to me in the age of DevOps and TDD, to build dedicated tools and therefore dedicated teams or people. Methodologies The 10 principles to follow, according to the creator of cURL, to be a good BDFL (Benevolent Dictator For Life) https://daniel.haxx.se/blog/2024/05/27/my-bdfl-guiding-principles/ Be open and friendly. Ship rock-solid products. Be an Open Source leader. Put security first. Provide first-class documentation. Stay independent. Respond quickly. Follow the news. Stay at the cutting edge of technology. Respect feedback. In an old Artima article, Guido van Rossum, the creator of Python and the first BDFL of a project, recalls a 1995 exchange that gave rise to the concept https://www.artima.com/weblogs/viewpost.jsp?thread=235725 Guido van Rossum was the first to take on this "role". A comprehensive site on reproducible builds https://reproducible-builds.org long documentation, from the definition through to methods for solving specific problems. Fabien Olicard's masterclass: the Memory Palace https://www.youtube.com/watch?v=u6wu_iY4xd8 A technique for retaining information longer than your short-term memory does. Web APIs should not redirect HTTP to HTTPS https://jviide.iki.fi/http-redirects broadly speaking, the main risk is sending confidential data in the clear over the network; the better approach is not to redirect to HTTPS but to return a real, explicit error instead, notably for API keys, because the redirects make it easy not to notice the mistake (a minimal sketch of this idea follows below). Security GitHub blog post on provenance and attestation https://github.blog/2024-04-30-where-does-your-software-really-come-from/ Discusses software supply chain security concepts and how they fit together. At a high level it covers hashes to guarantee you have the same file; asymmetric signatures to prove that I signed something (e.g. the hash) and therefore vouch for it; attestations, which declare facts about an artifact; provenance attestation: source code and build instructions (SLSA provenance); but the signatures must be guaranteed by a certificate authority, ideally with short-lived certificates, which is what Sigstore provides. Also mentions The Update Framework for building on this and guaranteeing uncompromised updates. Keycloak 25 is out https://www.keycloak.org/2024/06/keycloak-2500-released.html Argon2 for password hashing. Deprecation of the adapters (Tomcat, servlet, etc.). Java 21 support and deprecation of Java 17. Persistent user sessions even for online instances (to survive a Keycloak rotation). Improvements around passkeys. Management and health endpoints on a separate port. And more. Ask the cast codeurs At 40, can you still be a recognized coder?
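For the point about web APIs not redirecting HTTP to HTTPS, here is a minimal TypeScript sketch of the alternative the linked article argues for: the plaintext listener refuses the request with an explicit error instead of a redirect, so the client notices the mistake rather than silently retrying with an already-leaked key. The status code and message wording are assumptions; the argument is about the behavior, not these exact values.

```typescript
import http from "node:http";

// Listens on plain HTTP only to tell clients they made a mistake: no redirect,
// so a leaked API key is not silently replayed over HTTPS as if nothing happened.
const server = http.createServer((req, res) => {
  res.writeHead(403, { "content-type": "application/json" });
  res.end(
    JSON.stringify({
      error: "plaintext_http_forbidden",
      message:
        "This API is HTTPS-only. Your credentials may have been exposed in transit; rotate them and retry over https://.",
    })
  );
});

server.listen(80);
```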
Conferences The list of conferences comes from the Developers Conferences Agenda/List by Aurélie Vache and contributors: June 12-14, 2024: Rencontres R - Vannes (France) June 13-14, 2024: Agile Tour Toulouse - Toulouse (France) June 14, 2024: DevQuest - Niort (France) June 18, 2024: Mobilis In Mobile 2024 - Nantes (France) June 18, 2024: BSides Strasbourg 2024 - Strasbourg (France) June 18, 2024: Tech & Wine 2024 - Lyon (France) June 19-20, 2024: AI_dev: Open Source GenAI & ML Summit Europe - Paris (France) June 19-21, 2024: Devoxx Poland - Krakow (Poland) June 26-28, 2024: Breizhcamp 2024 - Rennes (France) June 27, 2024: DotJS - Paris (France) June 27-28, 2024: Agi Lille - Lille (France) July 4-5, 2024: Sunny Tech - Montpellier (France) July 8-10, 2024: Riviera DEV - Sophia Antipolis (France) September 6, 2024: JUG Summer Camp - La Rochelle (France) September 6-7, 2024: Agile Pays Basque - Bidart (France) September 17, 2024: We Love Speed - Nantes (France) September 17-18, 2024: Agile en Seine 2024 - Issy-les-Moulineaux (France) September 19-20, 2024: API Platform Conference - Lille (France) & Online September 25-26, 2024: PyData Paris - Paris (France) September 26, 2024: Agile Tour Sophia-Antipolis 2024 - Biot (France) October 2-4, 2024: Devoxx Morocco - Marrakech (Morocco) October 7-11, 2024: Devoxx Belgium - Antwerp (Belgium) October 8, 2024: Red Hat Summit: Connect 2024 - Paris (France) October 10, 2024: Cloud Nord - Lille (France) October 10-11, 2024: Volcamp - Clermont-Ferrand (France) October 10-11, 2024: Forum PHP - Marne-la-Vallée (France) October 11-12, 2024: SecSea2k24 - La Ciotat (France) October 16, 2024: DotPy - Paris (France) October 17-18, 2024: DevFest Nantes - Nantes (France) October 17-18, 2024: DotAI - Paris (France) October 30-31, 2024: Agile Tour Nantais 2024 - Nantes (France) October 30-31, 2024: Agile Tour Bordeaux 2024 - Bordeaux (France) October 31-November 3, 2024: PyCon.FR - Strasbourg (France) November 6, 2024: Master Dev De France - Paris (France) November 7, 2024: DevFest Toulouse - Toulouse (France) November 8, 2024: BDX I/O - Bordeaux (France) November 13-14, 2024: Agile Tour Rennes 2024 - Rennes (France) November 20-22, 2024: Agile Grenoble 2024 - Grenoble (France) November 21, 2024: DevFest Strasbourg - Strasbourg (France) November 27-28, 2024: Cloud Expo Europe - Paris (France) November 28, 2024: Who Run The Tech? - Rennes (France) December 3-5, 2024: APIdays Paris - Paris (France) December 4-5, 2024: DevOpsDays Paris - Paris (France) December 4-5, 2024: Open Source Experience - Paris (France) December 6, 2024: DevFest Dijon - Dijon (France) January 22-25, 2025: SnowCamp 2025 - Grenoble (France) April 16-18, 2025: Devoxx France - Paris (France) Contact us To react to this episode, come and discuss it in the Google group https://groups.google.com/group/lescastcodeurs Contact us on Twitter https://twitter.com/lescastcodeurs Submit a crowdcast or a crowdquestion Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs All episodes and all the info at https://lescastcodeurs.com/

Talking Kotlin
Kord: Deep Dive into the Discord API

Talking Kotlin

Play Episode Listen Later Jun 12, 2024 46:58


Join us for an in-depth conversation with Michael Rittmeister and Luca Kellermann, the brains behind Kord, a Kotlin library designed for interacting with the Discord API. In this episode, we dive into the complexities of the Discord API, the importance of WebSockets, and the challenges and strategies in building and maintaining a modularized library. We also explore the future of Kord, including their move towards native support and the ongoing effort to keep up with Discord's evolving features. Whether you're a Kotlin enthusiast or a Discord bot developer, this episode is packed with valuable insights and practical advice. Kord library – https://github.com/kordlib/kord Michael on X – @SchlaumeierTVDE Luca on GitHub – https://github.com/lukellmann
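For listeners who have never looked under the hood of a Discord bot, the gateway dance Kord abstracts away is roughly: open a WebSocket, receive a Hello with a heartbeat interval, start heartbeating, then send Identify. The TypeScript sketch below follows that documented flow at its simplest; it skips reconnects, resume, rate limits and proper intent configuration, all of which a real library like Kord handles for you.

```typescript
import WebSocket from "ws"; // npm install ws

const GATEWAY = "wss://gateway.discord.gg/?v=10&encoding=json";
const token = process.env.DISCORD_TOKEN!; // a bot token you supply

const ws = new WebSocket(GATEWAY);
let lastSequence: number | null = null;

ws.on("message", (raw) => {
  const payload = JSON.parse(raw.toString());
  if (payload.s != null) lastSequence = payload.s;

  switch (payload.op) {
    case 10: // Hello: the server tells us how often to heartbeat
      setInterval(() => {
        ws.send(JSON.stringify({ op: 1, d: lastSequence }));
      }, payload.d.heartbeat_interval);
      // Identify: authenticate and declare intents (0 = no privileged events here)
      ws.send(
        JSON.stringify({
          op: 2,
          d: { token, intents: 0, properties: { os: "linux", browser: "sketch", device: "sketch" } },
        })
      );
      break;
    case 0: // Dispatch: regular gateway events such as READY or MESSAGE_CREATE
      console.log("event:", payload.t);
      break;
  }
});
```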

Thinking Elixir Podcast
199: Websockets' Late Hero and Elixir's Fresh Updates

Thinking Elixir Podcast

Play Episode Listen Later Apr 23, 2024 40:35


In this episode, we talk about the eagerly awaited Elixir 1.17.0-dev changelog, the exciting ExDoc v0.32 release, and the slightly amusing declaration of Cloudflare playing the websocket hero – about 9 years after Phoenix neatly demoed 2 million concurrent connections to the world. We also explore potential performance improvements with Ecto's support for unlogged tables and how PragProg's new Elixir book is shaping up as a must-read for DevOps enthusiasts. But that's not all, we also touch on the launch of a constraint solver in Elixir, a nifty Livebook visualization tip, and more! Plus, don't miss out as Mark, Tyler, and David also share insights into their current projects, ranging from tackling Elixir mock libraries to the latest in the tech conference scene and educational initiatives in the Erlang ecosystem. Show Notes online - http://podcast.thinkingelixir.com/199 (http://podcast.thinkingelixir.com/199) Elixir Community News - https://github.com/elixir-lang/elixir/blob/main/CHANGELOG.md#v1170-dev (https://github.com/elixir-lang/elixir/blob/main/CHANGELOG.md#v1170-dev?utm_source=thinkingelixir&utm_medium=shownotes) – Elixir 1.17.0-dev changelog update introducing a new Duration type and calendar-specific shifts support. - https://hexdocs.pm/db_connection/DBConnection.Ownership.html#module-callers-lookup (https://hexdocs.pm/db_connection/DBConnection.Ownership.html#module-callers-lookup?utm_source=thinkingelixir&utm_medium=shownotes) – ExUnit start_supervised now populates $callers, affecting libraries like Mox and Ecto. - https://hexdocs.pm/mox/Mox.html#module-explicit-allowances (https://hexdocs.pm/mox/Mox.html#module-explicit-allowances?utm_source=thinkingelixir&utm_medium=shownotes) – ExUnit changes with regards to $callers provide better support for global mocks like Mox. - https://github.com/elixir-ecto/ecto_sql/pull/601 (https://github.com/elixir-ecto/ecto_sql/pull/601?utm_source=thinkingelixir&utm_medium=shownotes) – Elixir Ecto experiment for adding support to unlogged tables that could speed up test suites. - https://twitter.com/josevalim/status/1777990973454262476 (https://twitter.com/josevalim/status/1777990973454262476?utm_source=thinkingelixir&utm_medium=shownotes) – Announcement of ExDoc v0.32 release. - https://hexdocs.pm/ex_doc/changelog.html (https://hexdocs.pm/ex_doc/changelog.html?utm_source=thinkingelixir&utm_medium=shownotes) – Official changelogs for ExDoc v0.32 highlighting new features like OS default font for content and search bar improvements. - https://x.com/chris_mccord/status/1776370373946175706 (https://x.com/chris_mccord/status/1776370373946175706?utm_source=thinkingelixir&utm_medium=shownotes) – Cloudflare Workers support for WebSockets and its impact on real-time web experiences. - https://blog.cloudflare.com/introducing-websockets-in-workers/ (https://blog.cloudflare.com/introducing-websockets-in-workers/?utm_source=thinkingelixir&utm_medium=shownotes) – Introduction and explanation of websockets in Cloudflare Workers. - https://www.phoenixframework.org/blog/the-road-to-2-million-websocket-connections (https://www.phoenixframework.org/blog/the-road-to-2-million-websocket-connections?utm_source=thinkingelixir&utm_medium=shownotes) – Chris McCord's blog post on achieving 2 million WebSocket connections. - https://twitter.com/pragprog/status/1779253657097117890 (https://twitter.com/pragprog/status/1779253657097117890?utm_source=thinkingelixir&utm_medium=shownotes) – Announcement of a new Elixir focused book in beta from PragProg. 
- https://twitter.com/sm_debenedetto/status/1779558393373409481 (https://twitter.com/sm_debenedetto/status/1779558393373409481?utm_source=thinkingelixir&utm_medium=shownotes) – Additional information about the new PragProg Elixir book. - https://pragprog.com/titles/beamops/engineering-elixir-applications/ (https://pragprog.com/titles/beamops/engineering-elixir-applications/?utm_source=thinkingelixir&utm_medium=shownotes) – Direct link to the forthcoming Elixir book on PragProg focused on DevOps for the BEAM ecosystem. - https://developer.hashicorp.com/terraform/intro (https://developer.hashicorp.com/terraform/intro?utm_source=thinkingelixir&utm_medium=shownotes) – Introduction to Terraform, as covered in PragProg's new Elixir book. - https://x.com/thibaut_barrere/status/1777702586944036899 (https://x.com/thibaut_barrere/status/1777702586944036899?utm_source=thinkingelixir&utm_medium=shownotes) – Discussion on a constraint solver implementation in Elixir. - https://github.com/bokner/fixpoint (https://github.com/bokner/fixpoint?utm_source=thinkingelixir&utm_medium=shownotes) – Boris Okner's Fixpoint, a constraint solver library for Elixir. - https://en.wikipedia.org/wiki/NP-completeness (https://en.wikipedia.org/wiki/NP-completeness?utm_source=thinkingelixir&utm_medium=shownotes) – Explanation of NP Completeness related to constraint solving. - https://twitter.com/hugobarauna/status/1779930969593512159 (https://twitter.com/hugobarauna/status/1779930969593512159?utm_source=thinkingelixir&utm_medium=shownotes) – Tip on building custom visualizations for Elixir data structures in Livebook using the Kino.Render protocol. Do you have some Elixir news to share? Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at show@thinkingelixir.com (mailto:show@thinkingelixir.com) Discussion Resources - https://cloud.withgoogle.com/next (https://cloud.withgoogle.com/next?utm_source=thinkingelixir&utm_medium=shownotes) - https://github.com/brainlid/langchain (https://github.com/brainlid/langchain?utm_source=thinkingelixir&utm_medium=shownotes) - https://github.com/dashbitco/mox (https://github.com/dashbitco/mox?utm_source=thinkingelixir&utm_medium=shownotes) - https://twitter.com/TylerAYoung/status/1780240662341849256 (https://twitter.com/TylerAYoung/status/1780240662341849256?utm_source=thinkingelixir&utm_medium=shownotes) – Tyler's Tweet about Mox - https://fly.io/phoenix-files/safe-ecto-migrations/ (https://fly.io/phoenix-files/safe-ecto-migrations/?utm_source=thinkingelixir&utm_medium=shownotes) Guest Host Information - Tyler Young joined as a guest host. 
- https://twitter.com/TylerAYoung (https://twitter.com/TylerAYoung?utm_source=thinkingelixir&utm_medium=shownotes) – on Twitter - https://github.com/s3cur3 (https://github.com/s3cur3?utm_source=thinkingelixir&utm_medium=shownotes) – on Github - https://fosstodon.org/@tylerayoung (https://fosstodon.org/@tylerayoung?utm_source=thinkingelixir&utm_medium=shownotes) – on Fediverse - https://tylerayoung.com/ (https://tylerayoung.com/?utm_source=thinkingelixir&utm_medium=shownotes) – Blog Find us online - Message the show - @ThinkingElixir (https://twitter.com/ThinkingElixir) - Message the show on Fediverse - @ThinkingElixir@genserver.social (https://genserver.social/ThinkingElixir) - Email the show - show@thinkingelixir.com (mailto:show@thinkingelixir.com) - Mark Ericksen - @brainlid (https://twitter.com/brainlid) - Mark Ericksen on Fediverse - @brainlid@genserver.social (https://genserver.social/brainlid) - David Bernheisel - @bernheisel (https://twitter.com/bernheisel) - David Bernheisel on Fediverse - @dbern@genserver.social (https://genserver.social/dbern)

IBKR Podcasts
Unlocking WebSockets

IBKR Podcasts

Play Episode Listen Later Apr 12, 2024 13:08


Interactive Brokers' Web API delivers real-time access to Interactive Brokers' trading functionality, including live market data, market scanners, and intra-day portfolio updates. This podcast introduces WebSocket technology.
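For readers who want to see what the episode's subject looks like in practice, here is a minimal sketch of a streaming WebSocket client using the standard browser/Node WebSocket API. The endpoint URL, subscription message, and payload shape are hypothetical placeholders, not Interactive Brokers' actual Web API.

```ts
// Hypothetical endpoint and message shapes — not Interactive Brokers' actual API.
const ws = new WebSocket("wss://api.example.com/v1/stream");

ws.onopen = () => {
  // Many streaming APIs expect a subscription message right after the handshake.
  ws.send(JSON.stringify({ type: "subscribe", topic: "marketData", symbol: "AAPL" }));
};

ws.onmessage = (event: MessageEvent) => {
  const update = JSON.parse(event.data as string);
  console.log("stream update:", update); // e.g. a quote or portfolio change
};

ws.onclose = (event: CloseEvent) => {
  console.log(`connection closed (code ${event.code}); a real client would reconnect`);
};

ws.onerror = () => console.error("websocket error");
```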

The Laravel Podcast
Laravel 11 Release, Context, New Hires, & Open-Source Sustainability

The Laravel Podcast

Play Episode Listen Later Apr 9, 2024 33:22


Welcome back to another episode of the Laravel podcast! In this episode, we dive into the exciting release of Laravel 11 and discuss its impact since it has been out in the wild. Taylor also shares his insights on the release and other recent developments in the Laravel ecosystem, including the launch of Herd Pro. Additionally, we discuss the new hires at Laravel, including Andre Valentin, Chris Fidao, Mohammed Said, and Alyssa Mazzina, and touch on the topic of open-source sustainability.Taylor Otwell's Twitter - https://twitter.com/taylorotwellMatt Stauffer's Twitter - https://twitter.com/stauffermattLaravel Twitter - https://twitter.com/laravelphpLaravel Website - https://laravel.com/Tighten Website - https://tighten.com/Laravel 11 - https://laravel.com/docs/11.x/releasesLaravel Careers - https://laravel.com/careersLaravel Reverb - https://laravel.com/docs/master/reverbLaravel Herd - https://herd.laravel.com/Laravel Herd for Windows - https://herd.laravel.com/windowsLaravel Valet - https://laravel.com/docs/11.x/valetChristoph Rumpel Forge Productivity Tweet: https://x.com/christophrumpel/status/1773738874264600608?s=20Andre Valentin Twitter - @theandreval Laravel Director of EngineeringChris Fidao Twitter - @fideloper Laravel Infrastructure EngineerMohammed Said Twitter - @themsaidAlyssa Mazzina Twitter - @alyssamazzinaLaragon - https://laragon.org/Django - https://www.djangoproject.com/Spatie - https://spatie.be/Laracon US - https://laracon.us/-----Editing and transcription sponsored by Tighten.

Hacker News Recap
March 18th, 2024 | YouTube now requires to label their realistic-looking videos made using AI

Hacker News Recap

Play Episode Listen Later Mar 19, 2024 18:03


This is a recap of the top 10 posts on Hacker News on March 18th, 2024.This podcast was generated by wondercraft.ai(00:33): YouTube now requires to label their realistic-looking videos made using AIOriginal post: https://news.ycombinator.com/item?id=39746468&utm_source=wondercraft_ai(02:08): 900 Sites, 125M accounts, 1 VulnerabilityOriginal post: https://news.ycombinator.com/item?id=39742422&utm_source=wondercraft_ai(04:07): Dear Paul Graham, there is no cookie banner lawOriginal post: https://news.ycombinator.com/item?id=39742578&utm_source=wondercraft_ai(05:43): Stability.ai – Introducing Stable Video 3DOriginal post: https://news.ycombinator.com/item?id=39749312&utm_source=wondercraft_ai(07:38): Cranelift code generation comes to RustOriginal post: https://news.ycombinator.com/item?id=39742692&utm_source=wondercraft_ai(09:19): EPA bans asbestos, a deadly carcinogen still in use decades after partial banOriginal post: https://news.ycombinator.com/item?id=39746806&utm_source=wondercraft_ai(11:05): WebSockets vs. Server-Sent-Events vs. Long-Polling vs. WebRTC vs. WebTransportOriginal post: https://news.ycombinator.com/item?id=39745993&utm_source=wondercraft_ai(12:42): Nvidia CEO Jensen Huang announces new AI chips: ‘We need bigger GPUs'Original post: https://news.ycombinator.com/item?id=39749646&utm_source=wondercraft_ai(14:35): Elegant open source project tracking, Trello like but self-hostedOriginal post: https://news.ycombinator.com/item?id=39742114&utm_source=wondercraft_ai(15:59): Paris cycling numbers double in one year thanks to investmentOriginal post: https://news.ycombinator.com/item?id=39744932&utm_source=wondercraft_aiThis is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai

The Daily Decrypt - Cyber News and Discussions
Roblox Malware, FakeBat Malvertising Surge, and Tor’s WebTunnel Censorship Evasion

The Daily Decrypt - Cyber News and Discussions

Play Episode Listen Later Mar 14, 2024


Today, we uncover a Roblox malware scam, explore the rise of FakeBat through malvertising, and highlight Tor's innovative WebTunnel for evading censorship. From the playful realms of Roblox to the frontlines of digital freedom with Tor, this episode is a journey through the evolving challenges and solutions in our online world. Protect your digital footprint and join the conversation on cybersecurity, privacy, and the relentless quest for a free internet. Article URLs: Roblox: https://www.zscaler.com/blogs/security-research/tweaks-stealer-targets-roblox-users-through-youtube-and-discord FakeBat: https://www.malwarebytes.com/blog/threat-intelligence/2024/03/fakebat-delivered-via-several-active-malvertising-campaigns TOR: https://www.bleepingcomputer.com/news/security/tors-new-webtunnel-bridges-mimic-https-traffic-to-evade-censorship/ Thanks to Jered Jones for providing the music for this episode. https://www.jeredjones.com/ Logo Design by https://www.zackgraber.com/ Tags: Roblox, malware, cybersecurity, digital privacy, FakeBat, malvertising, Tor, WebTunnel, online security, internet censorship, gaming, digital rights, data protection, hacking, antivirus Search Phrases: How to protect against Roblox malware Understanding malvertising and FakeBat malware Tor's WebTunnel for bypassing internet censorship Latest cybersecurity threats 2024 Safe gaming practices for parents and children Fighting digital surveillance and censorship Effective antivirus and malware protection tips Digital privacy concerns in online platforms Strategies for secure internet browsing Overcoming hacking in gaming communities Enhancements in Tor for secure communication Detecting and preventing online ad-based malware Innovations in digital privacy and security Techniques for safeguarding personal data online Analysis of current cyber threats and solutions Transcript: Mar 14 [00:00:00] offsetkeyz: A nefarious malware targets Roblox users, tricking them with the false promise of enhanced gameplay, only to steal personal information. [00:00:12] HGF: Fake bat malware leaps from the shadows of online ads, preying on unsuspecting victims with the disguise of legitimacy. [00:00:20] offsetkeyz: Tor Project unveils WebTunnel, a masterstroke in digital disguise. Outmaneuvering censorship with a stealthy sophistication. In the digital playground of Roblox, a new villain lurks. Not in the game, but in the guise of a performance boost. [00:00:42] HGF: Oh, Roblox, where my niece says she's gonna be the next digital Picasso. But really, what is it? Just a bunch of blocky characters bumping into each other? [00:00:50] offsetkeyz: It's more than that. Roblox is a sprawling online platform that lets users design, play, and share their own games and experiences. Think of it as [00:01:00] a digital Lego set, but with infinite pieces and possibilities. [00:01:04] HGF: Infinite pieces? You're saying if I wanted to build a castle with a moat filled with laser sharks, I could? Mmm, [00:01:11] offsetkeyz: sharky. Absolutely! Roblox provides the tools and the canvas. Your imagination brings it to life! Roblox It's powered by a robust game development system that lets creators script events, design levels, and even monetize their creations. [00:01:27] HGF: Monetize? So kids are making bank on this? My lemonade stand suddenly seems less impressive. [00:01:34] offsetkeyz: Yep, gone of the days of lemonade stands. Some developers on Roblox earn serious money through in game purchases using a virtual currency called Robux. It's a whole economy. 
Roblox is a vibrant community with millions of user generated games. There's something for everyone, from obstacle courses and simulators, to role playing games and beyond. But here's the twist. Discovered by Zscaler's Threat [00:02:00] Labs, attackers are exploiting platforms like YouTube and Discord to distribute a malware called Tweaks, or Tweaker. It promises to optimize your frames per second, but instead it's a one way ticket to Hacksville. So, [00:02:15] HGF: it's like ordering a performance enhancing smoothie but ending up with a bellyache that steals your wallet? [00:02:21] offsetkeyz: Oh yeah, there are tummyache survivors out there. But this PowerShell based malware silently siphons off sensitive data. Think Wi Fi passwords, location, Roblox IDs, and even in game currency. Like [00:02:35] HGF: a digital pickpocket. But wait, PowerShell? Is that like a superhero power? [00:02:40] offsetkeyz: Quite. PowerShell is a scripting language for Windows, often used by system administrators for automation, but in our villain's hands, it becomes a tool for theft. [00:02:51] HGF: I see, so they're using PowerShell for evil. Got it. How bad is this [00:02:57] offsetkeyz: breach? Given Roblox's [00:03:00] massive user base, where 45 percent are under 13, the scale is concerning. It's not just gamers at risk, it's potentially their parents and even corporations, thanks to remote work blurring the lines between personal and professional devices. [00:03:15] HGF: Yikes, so my quest for smoother gameplay could put dad's spreadsheets in jeopardy? [00:03:21] offsetkeyz: Precisely! The attackers cleverly disguised their malicious intent, using legitimate looking videos and discord communities to spread tweaks. They even offer a free version to lure users into their trap. Free? [00:03:36] HGF: Sounds like the only thing free about it is the trip to hackerland. What can the good folks at home do to protect themselves? [00:03:43] offsetkeyz: The golden rule is stick to reputable sources for game enhancements, avoid disabling antivirus software for dubious downloads, and maintain skepticism towards too good to be true offers on social platforms. Got [00:03:57] HGF: it. Keep it legit or risk a cyber hit. [00:04:00] Any word from the digital knights at Zscaler? [00:04:03] offsetkeyz: Zscaler's Threat Labs is on the front line. Their sandbox technology uncovering the murky tactics and techniques of this malware campaign. It's a reminder of the constant cat and mouse game in cyber security. [00:04:15] HGF: So the moral of the story, if an FPS booth sounds too good to be true on Discord or YouTube, it probably is. [00:04:29] offsetkeyz: February saw a surge in search based malvertising. with Fakebat leading the charge. A cunning malware distributed through ads for popular software, which was reported by Malwarebytes on Tuesday. See the link in the show notes. [00:04:44] HGF: Fakebat? That sounds like a villain from a Saturday morning cartoon. So what, now I can't even click on things I search for with [00:04:51] offsetkeyz: Google? Remember, if you're searching for something specific, try not to click on the advertisement results. Anyone can buy these advertisement [00:05:00] spaces, and trick you into clicking them by impersonating the site you're trying to find. This malware has been impersonating brands like Parsec and FreeCAD. Both are cutting edge technologies in their respective fields. Like a wolf in sheep's [00:05:14] HGF: clothing. How do they even get these ads in front of people? 
[00:05:18] offsetkeyz: The malvertising campaign gets creative. They've leveraged URL shorteners and even compromised legitimate websites, making these malicious ads appear trustworthy. Victims are lured into clicking, which kicks off a redirect chain ending in malware installation. Once installed, Fakebat reaches out to its command and control servers, putting victims' data at risk. So, what's [00:05:41] HGF: the digital equivalent of locking our doors and windows? [00:05:44] offsetkeyz: The key is caution. Avoid clicking on ads for software downloads, and ensure your system's security measures are up to date. Utilizing tools like Threat Down DNS Filter can block malicious ads at the source, offering a layer [00:06:00] of protection. Got [00:06:01] HGF: it. Be skeptical of too good to be true ads and maybe invest in a cyber guard dog. How is Google dealing with this ad [00:06:09] offsetkeyz: apocalypse? The incidents have been reported to Google, highlighting the ongoing battle between security researchers and malvertisers. It's a digital cat and mouse game, with each side constantly evolving their strategies. [00:06:23] HGF: This actually happened to me when I was shopping for a pan at our place. It was like a website. I clicked on the sponsored one and it was like at our place. com and it almost got me, but the font looked a little bit off. Really? Yeah. I think I sent it to, this is like last year, but I'll never forget how identical to the site it looked, but everything was on super sale and it was like, hurry, act now. And they never do that. And I was like, Good [00:06:50] offsetkeyz: catch. I actually had a friend order shoes at my recommendation from my favorite shoe store called Vivo Barefoot. Except he went [00:07:00] and googled it, and it, the first one on Google was vivobarefootusa.com, and he actually ordered shoes from them. And never got [00:07:09] HGF: them. [00:07:10] offsetkeyz: And never [00:07:12] transition: Uh, uh, uh, uh, uh, uh.
[00:08:56] offsetkeyz: Not far off, oBFS4, or as I like to [00:09:00] call it, the OB4 Skater, is a protocol designed to disguise internet traffic, making it difficult for third parties to determine what's being transmitted. This is crucial in places where censorship is rampant and access to information is restricted. [00:09:15] HGF: Hide and seek with your internet connection. If I'm in a country that's not too fond of free information, OBFS4 could be my ticket to [00:09:23] offsetkeyz: the wider world. Exactly, yeah. OBFS4 is a critical tool for those in oppressive regimes, offering a lifeline to unrestricted information. It exemplifies the ongoing battle for digital freedom, ensuring that the internet remains open and accessible to all. A lot of companies' IT departments block Tor on their network. For good reason. So, the main purpose of WebTunnel is to allow citizens of oppressive regimes to access the internet unrestricted, but I'm wondering what the implications will be for IT departments trying to detect [00:10:00] this type of traffic on their networks. It will blend in with everything else, which could open some, some floodgates for some activity. And that's all we've got for you today. Huge thanks to Hot Girl Farmer for joining us and delivering the news. And we will talk to you some more tomorrow.
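To make the WebTunnel technique described above more concrete, here is a rough conceptual sketch, not Tor's actual implementation, of the general idea: relaying an opaque byte stream through an ordinary WebSocket session so that, on the wire, it looks like any other HTTPS/WebSocket traffic. The bridge URL and local port are made up, and it assumes Node with the `ws` package.

```ts
import net from "node:net";
import WebSocket from "ws"; // npm package "ws"

const BRIDGE_URL = "wss://bridge.example.com/tunnel"; // hypothetical bridge endpoint
const LOCAL_PORT = 9050; // hypothetical local listening port

// Accept local connections and relay their raw bytes through a WebSocket,
// so the outbound traffic looks like an ordinary TLS/WebSocket session.
const server = net.createServer((local) => {
  const tunnel = new WebSocket(BRIDGE_URL);
  local.pause(); // don't read from the local socket until the tunnel is ready

  tunnel.on("open", () => {
    local.on("data", (chunk) => tunnel.send(chunk)); // local -> bridge
    local.resume();
  });

  tunnel.on("message", (data) => local.write(data as Buffer)); // bridge -> local

  const teardown = () => {
    local.destroy();
    if (tunnel.readyState === WebSocket.OPEN) tunnel.close();
  };
  local.on("close", teardown);
  local.on("error", teardown);
  tunnel.on("close", teardown);
  tunnel.on("error", teardown);
});

server.listen(LOCAL_PORT, () => {
  console.log(`relaying localhost:${LOCAL_PORT} through ${BRIDGE_URL}`);
});
```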

Syntax - Tasty Web Development Treats
739: The LoFi Movement: Building Local First Apps

Syntax - Tasty Web Development Treats

Play Episode Listen Later Mar 6, 2024 52:10


Join Wes and Scott as they explore LoFi (local first) web development, delving into CRDT, Websockets, IndexedDB, SQLite, and more. Discover when Local-First shines and when it's better to steer clear in this episode packed with practical insights. Show Notes 00:00 Welcome to Syntax! 03:18 Brought to you by Sentry.io. 04:08 What is LoFi? localfirst.dev 05:02 The Seven Ideals for Local-first. 05:29 1: No Spinners. 06:48 2: Your work is not trapped on one device. 07:56 3: The network is optional. 08:50 4: Seamless collaboration with your colleagues. 09:35 Oops, we forgot to read 5: The Long Now. 09:37 6: Security and privacy by default. 09:45 7: You retain ultimate ownership and control. Actual Budget Finance App Ink & Switch 13:01 Sounds great, let's go! Not so fast, this stuff is hard. 14:07 The technology involved. 14:30 CRDT (Conflict-free Replicated Data Types). Wikipedia Definition James Long dotJS 2019 dotconferences 17:48 How does it prioritize conflict resolutions? 19:36 Websockets. 20:17 IndexedDB + SQLite. 21:23 Service Workers. 22:16 The software involved. 22:24 Replicache. 24:48 YJS. 25:06 Electric SQL. 25:51 The most basic LoFi application. 31:26 Some bigger concepts. 32:34 Answering some common questions. 35:19 Some real-world examples of LoFi. Habit Path by Scott Tolinski 37:18 What about Apple PWA nonsense? 38:20 This seems similar to real-time software and multiplayer. 38:47 Sounds like too much work. Triplit Fullstack Database Electric SQL Evolu 40:46 Some useful links. Local-First Web Development Local-First Lo.fi YouTube Local-First Ink & Switch Local-First Reddit Syntax GitHub Local-First 43:30 Sick Picks + Shameless Plugs. Sick Picks Wes: Dresscode.dev Scott: Monarch Money Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads
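As a companion to the CRDT discussion in these show notes, here is a tiny last-writer-wins register sketch in TypeScript, one of the simplest CRDTs. It only illustrates the merge idea; the local-first tools mentioned in the episode (Replicache, Yjs, Electric SQL) use much richer structures than this.

```ts
interface LwwRegister<T> {
  value: T;
  timestamp: number; // real systems use hybrid logical clocks, not wall clocks
  replicaId: string; // tie-breaker for identical timestamps
}

// Merge keeps the newer write. It is commutative, associative, and idempotent,
// which is what lets every replica converge regardless of sync order.
function merge<T>(a: LwwRegister<T>, b: LwwRegister<T>): LwwRegister<T> {
  if (a.timestamp !== b.timestamp) return a.timestamp > b.timestamp ? a : b;
  return a.replicaId > b.replicaId ? a : b; // deterministic tie-break
}

// Two devices edit the same note while offline...
const phone: LwwRegister<string> = { value: "Buy milk", timestamp: 1710003000, replicaId: "phone" };
const laptop: LwwRegister<string> = { value: "Buy oat milk", timestamp: 1710003042, replicaId: "laptop" };

// ...then sync in either order and still agree on the result.
console.log(merge(phone, laptop).value); // "Buy oat milk"
console.log(merge(laptop, phone).value); // "Buy oat milk"
```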

The Laravel Podcast
Laravel 11, Reverb, Herd Windows/Pro, & Other Laracon EU Recap

The Laravel Podcast

Play Episode Listen Later Feb 27, 2024 37:09


In this episode of the Laravel Podcast, we are diving into the highlights of Laracon EU including the unveiling of Laravel 11 and the introduction of Laravel Reverb. Taylor Otwell shares insights on the streamlined application structure and new features in Laravel 11. We also discuss the launch of Laravel Herd for Windows and Herd Pro, offering power user features for local development, and provide some exciting updates about the upcoming Laracon US.Taylor Otwell's Twitter - https://twitter.com/taylorotwellMatt Stauffer's Twitter - https://twitter.com/stauffermattLaravel Twitter - https://twitter.com/laravelphpLaravel Website - https://laravel.com/Tighten Website - https://tighten.com/Laracon EU Photo Gallery Tweet - https://x.com/LaraconEU/status/1755957896209113444?s=20Laravel Reverb - https://laravel.com/docs/master/reverbLaravel 11 - https://laravel.com/docs/master/releasesThierry Laverdure's Project - https://github.com/tlaverdure/laravel-echo-serverPusher - https://pusher.com/Ably - https://ably.com/Laravel Herd - https://herd.laravel.com/Adam Wathan Twitter - https://twitter.com/adamwathanJess Archer Twitter - https://twitter.com/jessarchercodesLuke Downing Twitter - https://twitter.com/lukedowning19Daniel Coulbourne Twitter - https://twitter.com/DCoulbourneJoe Dixon Twitter - https://twitter.com/_joedixonPhilo Hermans Twitter - https://twitter.com/Philo01?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5EauthorLaracon US - https://laracon.us/Laracon CFP Talk Submission Form - https://docs.google.com/forms/d/e/1FAIpQLSdlyTDvqeKNB3r-wVNmDBlE23oHKEL4m8lzL5nci0YPH_5WYA/viewform-----Editing and transcription sponsored by Tighten.
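For context on Laravel Reverb, the WebSocket server introduced in this episode, here is a hedged client-side sketch using Laravel Echo's Reverb broadcaster. The option names follow the Laravel Broadcasting docs; the key, host, ports, channel, and event names are placeholders and depend on your app's configuration.

```ts
import Echo from "laravel-echo";
import Pusher from "pusher-js";

(window as any).Pusher = Pusher; // Echo's connector expects Pusher on the window

const echo = new Echo({
  broadcaster: "reverb",
  key: "your-reverb-app-key", // placeholder
  wsHost: "ws.example.test",  // placeholder
  wsPort: 8080,
  wssPort: 443,
  forceTLS: false,
  enabledTransports: ["ws", "wss"],
});

// Listen for a server-side broadcast (e.g. a hypothetical App\Events\OrderShipped event).
echo.channel("orders").listen("OrderShipped", (event: { orderId: number }) => {
  console.log("order shipped:", event.orderId);
});
```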

Syntax - Tasty Web Development Treats
728: AI Superpowers with Kevin Hou and Codeium

Syntax - Tasty Web Development Treats

Play Episode Listen Later Feb 9, 2024 58:34


In this supper club, Scott and Wes welcome Kevin Hou, Head of Product Engineering at Codeium, a blazing fast AI-powered code completion and chat tool for developers. Show Notes 00:00 Welcome to Syntax! 05:31 An introduction to Codeium. 07:56 What information are you sending the AI to get such good completions? Codeium compliance 09:15 Codeium runs a 'Language Server'. 11:15 Crawling dependency tree and abstract syntax tree. 12:07 Using Codeium Live. 12:34 How big of a codebase can you run this on? 14:39 Sending select amounts of data to AI. 16:06 Does Codeium maintain codebase preferences and styling (i.e. snake case)? 17:39 Will Codeium scan the dependency? 19:23 AI UI, have we found the best format? 21:55 Crazy ideas in tech. 22:53 Additional AI UI inputs. 24:14 How do you make an AI model? 28:42 How does Codeium manage the product roadmap? 32:09 Do AI models get worse over time? How does Codeium validate that it's not? Open AI Evals 35:39 How is Codeium THAT fast? 36:49 What programming languages does Codeium use? 38:55 Codeium Playground. Codeium Playground 39:15 Caching as a performance improvement. 39:58 What is the pipe between Codeium and editor? 40:17 Codeium chat service. 41:44 A WebSocket system allowing push and pull communication. 43:13 Closed Beta for GPT 4.0. 46:12 The dreaded closing quote bug. 48:26 Sometimes bugs bug Wes. 49:49 Supper Club Questions 51:40 Perplexity.ai 52:35 What editor does Kevin use? 55:31 Sick Picks + Shameless Plugs Sick Picks Kevin: Lapse.com Shameless Plugs Kevin: Codeium.com Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads
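The show notes mention a WebSocket system allowing push and pull communication between the editor and the chat service. The sketch below illustrates that general pattern, request/response calls multiplexed with server-initiated pushes over a single socket. It is not Codeium's actual wire protocol; the URL, method names, and message shapes are invented.

```ts
type Pending = { resolve: (v: unknown) => void; reject: (e: Error) => void };

class RpcSocket {
  private ws: WebSocket;
  private nextId = 1;
  private pending = new Map<number, Pending>();

  constructor(url: string, private onPush: (msg: unknown) => void) {
    this.ws = new WebSocket(url);

    this.ws.onmessage = (ev: MessageEvent) => {
      const msg = JSON.parse(ev.data as string);
      const waiter = msg.id !== undefined ? this.pending.get(msg.id) : undefined;
      if (waiter) {
        this.pending.delete(msg.id);
        waiter.resolve(msg.result); // response to an earlier request (pull)
      } else {
        this.onPush(msg); // server-initiated message (push)
      }
    };

    this.ws.onclose = () => {
      for (const { reject } of this.pending.values()) reject(new Error("socket closed"));
      this.pending.clear();
    };
  }

  // Assumes the socket is already open; a real client would queue until onopen.
  request(method: string, params: unknown): Promise<unknown> {
    const id = this.nextId++;
    return new Promise((resolve, reject) => {
      this.pending.set(id, { resolve, reject });
      this.ws.send(JSON.stringify({ id, method, params }));
    });
  }
}

// Hypothetical usage:
// const rpc = new RpcSocket("wss://example.com/chat", (m) => console.log("push:", m));
// const answer = await rpc.request("chat/send", { prompt: "Explain this function" });
```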

Oracle University Podcast
Autonomous Database on Dedicated Infrastructure

Oracle University Podcast

Play Episode Listen Later Jan 9, 2024 26:36


The Oracle Autonomous Database Dedicated deployment is a good choice for customers who want to implement a private database cloud in their own dedicated Exadata infrastructure. That dedicated infrastructure can either be in the Oracle Public Cloud or in the customer's own data center via Oracle Exadata Cloud@Customer.   In a dedicated environment, the Exadata infrastructure is entirely dedicated to the subscribing customer, isolated from other cloud tenants, with no shared processor, storage, and memory resource.   In this episode, hosts Lois Houston and Nikita Abraham speak with Oracle Database experts about how Autonomous Database Dedicated offers greater control of the software and infrastructure life cycle, customizable policies for separation of database workload, software update schedules and versioning, workload consolidation, availability policies, and much more.   Oracle MyLearn: https://mylearn.oracle.com/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Tamal Chatterjee, and the OU Studio Team for helping us create this episode.   -------------------------------------------------------   Episode Transcript:   00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started. 00:26 Nikita: Hello and welcome to the Oracle University Podcast. I'm Nikita Abraham, Principal Technical Editor with Oracle University, and I'm joined by Lois Houston, Director of Innovation Programs. Lois: Hi there! This is our second episode on Oracle's Autonomous Database, and today we're going to spend time discussing Autonomous Database on Dedicated Infrastructure. We'll be talking with three of our colleagues: Maria Colgan, Kamryn Vinson, and Kay Malcolm. 00:53 Nikita: Maria is a Distinguished Product Manager for Oracle Database, Kamryn is a Database Product Manager, and Kay is a Senior Director of Database Product Management.  Lois: Hi Maria! Thanks for joining us today. We know that Oracle Autonomous Database offers two deployment choices: serverless and dedicated Exadata infrastructure. We spoke about serverless infrastructure last week but for anyone who missed that episode, can you give us a quick recap of what it is? 01:22 Maria: With Autonomous Database Serverless, Oracle automates all aspects of the infrastructure and database management for you. That includes provisioning, configuring, monitoring, backing up, and tuning. You simply select what type of database you want, maybe a data warehouse, transaction processing, or a JSON document store, which region in the Oracle Public Cloud you want that database deployed, and the base compute and storage resources necessary. Oracle automatically takes care of everything else. Once provisioned, the database can be instantly scaled through our UI, our APIs, or automatically based on your workload needs. All scaling activities happen completely online while the database remains open for business. 02:11 Nikita: Ok, so now that we know what serverless is, let's move on to dedicated infrastructure. What can you tell us about it? Maria: Autonomous Database Dedicated allows customers to implement a private database cloud running on their own dedicated Exadata infrastructure. 
That dedicated infrastructure can be in Oracle's Public Cloud or in the customer's own data center via Oracle Exadata Cloud@Customer. It makes an ideal platform to consolidate multiple databases regardless of their workload type or their size. And it also allows you to offer database as a service within your enterprise. 02:50 Lois: What are the primary benefits of Autonomous Database Dedicated infrastructure? Maria: With the dedicated deployment option, you must first subscribe to Dedicated Exadata Cloud Infrastructure that is isolated from other tenants with no shared processors, memory, network, or storage resources. This infrastructure choice offers greater control of both the software and the infrastructure life cycle. Customers can specify their own policies for workload separation, software update schedules, and availability. One of the key benefits of an autonomous database is a lower total cost of ownership through more automation and operational delegation to Oracle. Remember it's a fully managed service. All database operations, such as backup, software updates, upgrades, OS maintenance, incident management, and health monitoring, will be automatically done for you by Oracle. Its maximum availability architecture protects you from any hardware failures and in the event of a full outage, the service will be automatically failed over to your standby site. Built-in application continuity ensures zero downtime during the standard software update or in the event of a failover.  04:09 Nikita: And how is this billed?  Maria: Autonomous Database also has true pay-per-use billing so even when autoscale is enabled, you'll only pay for those additional resources when you use them. And we make it incredibly simple to develop on this environment with managed developer add-ons like our low code development environment, APEX, and our REST data services. This means you don't need any additional development environments in order to get started with a new application. 04:40 Lois: Ok. So, it looks like the dedicated option offers more control and customization. Maria, how do we access a dedicated database over a network? Maria: The network path is through a VCN, or Virtual Cloud Network, and the subnet that's defined by the Exadata infrastructure hosting the database. By default, this subnet is defined as private, meaning, there's no public internet access to those databases. This ensures only your company can access your Exadata infrastructure and your databases. Autonomous Database Dedicated can also take advantage of network services provided by OCI, including subnets or VCN peering, as well as connections to on-prem databases through the IP secure VPN and FastConnect dedicated corporate network connections. 05:33 Maria: You can also take advantage of the Oracle Microsoft partnership that enables customers to connect their Oracle Cloud Infrastructure resources and Microsoft Azure resources through a dedicated private connection. However, for some customers, a move to the public cloud is just not possible. Perhaps it's due to industry regulations, performance concerns, or integration with legacy on-prem applications. For these types of customers, Exadata Cloud@Customer should meet their requirements for strict data sovereignty and security by delivering high-performance Exadata Cloud Services capabilities in their data center behind their own firewall. 06:16 Nikita: What are the benefits of Autonomous Database on Exadata Cloud@Customer? How's it different? 
Maria: Autonomous Database on Exadata Cloud@Customer provides the same service as Autonomous Database Dedicated in the public cloud. So you get the same simplicity, agility, performance, and elasticity that you get in the cloud. But it also provides a very fast and simple transition to an autonomous cloud because you can easily migrate on-prem databases to Exadata Cloud@Customer. Once the database is migrated, any existing applications can simply reconnect to that new database and run without any application changes being needed. And the data never leaves your data center, making it a very safe way to adopt a cloud model. 07:04 Lois: So, how do we manage communication to and from the public cloud? Maria: Each Cloud@Customer rack includes two local control plane servers to manage the communication to and from the public cloud. The local control plane acts on behalf of requests from the public cloud, keeping communications consolidated and secure. Platform control plane commands are sent to the Exadata Cloud@Customer system through a dedicated WebSocket secure tunnel.  Oracle Cloud operations staff use that same tunnel to monitor the autonomous database on Exadata Cloud@Customer both for maintenance and for troubleshooting. The two remote control plane servers installed in the Exadata Cloud@Customer rack host that secure tunnel endpoint and act as a gateway for access to the infrastructure. They also host components that orchestrate the cloud automation, aggregate and route telemetry messages from the Exadata Cloud@Customer platform to the Oracle Support Service infrastructure. And they also host images for server patching. 08:13 Maria: The Exadata Database Server is connected to the customer-managed switches via either 10 gigabit or 25 gigabit Ethernet. Customers have access to the customer Virtual Machine, or VM, via a pair of layer 2 network connections that are implemented as Virtual Network Interface Cards, or vNICs. They're also tagged VLAN. The physical network connections are implemented for high availability in an active standby configuration. Autonomous Database on Exadata Cloud@Customer provides the best of both worlds-- all of the automation including patching, backing up, scaling, and management of a database that you get with a cloud service, but without the data ever leaving the customer's data center. 09:01 Nikita: That's interesting. And, what happens if a dedicated database loses network connectivity to the OCI control plane? Maria: In the event an autonomous database on Exadata Cloud@Customer loses network connectivity to the OCI control plane, the Autonomous Database will actually continue to be available for your applications. And operations such as backups and autoscaling will not be impacted in that loss of network connectivity. However, the management and monitoring of the Autonomous Database via the OCI console and APIs as well as access by the Oracle Cloud operations team will not be available until that network is reconnected. 09:43 Maria: The capabilities suspended in the case of a lost network connection include, as I said, infrastructure management-- so that's the manual scaling of an Autonomous Database via the UI or our OCI CLI, or REST APIs, as well as Terraform scripts. They won't be available. Neither will the ability for Oracle Cloud ops to access and perform maintenance activities, such as patching. Nor will we be able to monitor the Oracle infrastructure during the time where the system is not connected. 10:20 Lois: That's good to know, Maria. 
What about data encryption and backup options? Maria: All Oracle Autonomous Databases encrypt data at rest. Data is automatically encrypted as it's written to the storage. But this encryption is transparent to authorized users and applications because the database automatically decrypts the data when it's being read from the storage. There are several options for backing up the Autonomous Database Cloud@Customer including using a Zero Data Loss Recovery Appliance, or ZDLRA. You can back it up to locally mounted NFS storage or back it up to the Oracle Public Cloud. 10:57 Nikita: I want to ask you about the typical workflow for Autonomous Database Dedicated infrastructure. What are the main steps here? Maria: In the typical workflow, the fleet administrator role performs the following steps. They provision the Exadata infrastructure by specifying its size, availability domain, and region within the Oracle Cloud. Once the hardware has been provisioned, the fleet administrator partitions the system by provisioning clusters and container databases. Then the developers, DBAs, or anyone who needs a database can provision databases within those container databases. Billing is based on the size of the Exadata infrastructure that's provisioned. So whether that's a quarter rack, half rack, or full rack. It also depends on the number of CPUs that are being consumed. Remember, it's also possible for customers to use their existing Oracle database licenses with this service to reduce the cost. 11:53 Lois: And what Exadata infrastructure models and shapes does Autonomous Database Dedicated support? Maria: That's the X7, X8, and X8M and you can get all of those in either a quarter, half, or full Exadata rack. Currently, you can create a maximum of 12 VM clusters on an Autonomous Database Dedicated infrastructure. We also advise that you limit the number of databases you provision to meet your preferred SLA. To meet the high availability SLA, we recommend a maximum of 100 databases. To meet the extreme availability SLA, we recommend a maximum of 25 databases. 12:35 Nikita: Ok, so now that I know all this, how do I actually get started with Autonomous Database on dedicated infrastructure? Maria: You need to increase your service limit to include that Exadata infrastructure and then you need to create the fleet and DBA service roles. You also need to create the necessary network model, VM clusters, and container databases for your organization. Finally, you need to provide access to the end users who want to create and use those Autonomous databases. Autonomous Database requires a subscription to that Exadata infrastructure for a minimum of 48 hours. But once subscribed, you can test out ideas and then terminate the subscription with no ongoing costs. While subscribed, you can control where you place the resources to perhaps manage latency sensitive applications. 13:29 Maria: You can also have control over patching schedules, software versions, so you can be sure that you're testing exactly what you need to. You can also migrate databases to the Autonomous Database via our export, import capabilities via the object store or through Data Pump or GoldenGate. As with any Autonomous Database, once it's provisioned, you've got full access to both autoscaling and all our cloning capabilities.  13:57 Lois: Maria, I've heard you talk about the importance of clean role separation in managing a private cloud. Can you elaborate on that, please? 
Maria: A successful private cloud is set up and managed using clean role separation between the fleet administration group and the developers, or DBA groups. The fleet administration group establishes the governance constraints, including things like budgeting, capacity compliance, and SLAs, according to the business structure. The physical resources are also logically grouped to align with this business structure, and then groups of users are given self-service access to the resources within these groups. So a good example of this would be that the developers and DBA groups use self-service database resources within these constraints. 14:46 Nikita: I see. So, what exactly does a fleet administrator do? Maria: Fleet administrators allocate budget by department and are responsible for the creation, monitoring, and management of the autonomous exadata infrastructure, the autonomous exadata VM clusters, and the autonomous container databases. To perform these duties, the fleet administrators must have an Oracle Cloud account or user, and that user must have permissions to manage these resources and be permitted to use network resources that need to be specified when you create these other resources. 15:24 Nikita: And what about database administrators? Maria: Database administrators create, monitor, and manage autonomous databases. They, too, need to have an Oracle Cloud account or be an Oracle Cloud user. Now, those accounts need to have the necessary permissions in order to create and access databases. They also need to be able to access autonomous backups and have permission to access the autonomous container databases, inside which these autonomous databases will be created, and have all of the necessary permissions to be able to create those databases, as I said. While creating autonomous databases, the database administrators will define and gain access to an admin user account inside the database. It's through this account that they will actually get the necessary permissions to be able to create and control database users.  16:24 Lois: How do developers fit into the picture? Maria: Database users and developers who write applications that will use or access an autonomous database don't actually need Oracle Cloud accounts. They'll actually be given the network connectivity and authorization information they need to access those databases by the database administrators. 16:45 Lois: Maria, you mentioned the various ways to manage the lifecycle of an autonomous dedicated service. Can you tell us more about that? Maria: You can manage the lifecycle of an autonomous dedicated service through the Cloud UI, Command Line Interface, through our REST APIs, or through one of the several language SDKs. The lifecycle operations that you can manage include capacity planning and setup, the provisioning and partitioning of exadata infrastructure, the provisioning and management of databases, the scaling of CPU storage and other resources, the scheduling of updates for the infrastructure, the VMs, and the database, as well as monitoring through event notifications.  17:30 Lois: And how do policies come into play? Maria: OCI allows fine-grained control over resources through the application of policies to groups. These policies are applicable to any member of the group. For Oracle Autonomous Database on dedicated infrastructure, the resources in question are autonomous exadata infrastructure, autonomous container databases, autonomous databases, and autonomous backups.  Lois: Thanks so much, Maria. 
That was great information. 18:05 The Oracle University Learning Community is a great place for you to collaborate and learn with experts, peers, and practitioners. Grow your skills, inspire innovation, and celebrate your successes. The more you participate, the more recognition you can earn. All of your activities, from liking a post to answering questions and sharing with others, will help you earn badges and ranks, and be recognized within the community. If you are already an Oracle MyLearn user, go to MyLearn to join the community. You will need to log in first. If you have not yet accessed Oracle MyLearn, visit mylearn.oracle.com and create an account to get started. 18:44 Nikita: Welcome back! Hi Kamryn, thanks for joining us on the podcast. So, in an Autonomous Database environment where most DBA tasks are automated, what exactly does an application DBA do? Kamryn: While Autonomous Database automates most of the repetitive tasks that DBAs perform, the application DBA will still want to monitor and diagnose databases for applications to maintain the highest performance and the greatest security possible. Tasks the application DBA performs includes operations on databases, cloning, movement, monitoring, and creating alerts. When required, the application DBA performs low-level diagnostics for application performance and looks for insights on performance and capacity trends.  19:36 Nikita: I see. And which tools do they use for these tasks? Kamryn: There are several tools at the application DBA's disposal, including Enterprise Manager, Performance Hub, and the OCI Console. For Autonomous Dedicated, all the database operations are exposed through the console UI and available through REST API calls, including provisioning, stop/start, lifecycle operations for dedicated database types, unscheduled on-demand backups and restores, CPU scaling and storage management, providing connectivity information, including wallets, scheduling updates. 20:17 Lois: So, Kamryn, what tools can DBAs use for deeper exploration? Kamryn: For deeper exploration of the databases themselves, Autonomous Database DBAs can use SQL Developer Web, Performance Hub, and Enterprise Manager. 20:31 Nikita: Let's bring Kay into the conversation. Hi Kay! With Autonomous Database Dedicated, I've heard that customers have more control over patching. Can you tell us a little more about that? Kay: With Autonomous Database Dedicated, customers get to determine the update or patching schedule if they wish. Oracle automatically manages all patching activity, but with the ADB-Dedicated service, customers have the option of customizing the patching schedule. You can specify which month in every quarter you want, which week in that month, which day in that month, and which patching window within that day. You can also dynamically change the scheduled patching date and time for a specific database if the originally scheduled time becomes inconvenient. 21:22 Lois: That's great! So, how often are updates published, and what options do customers have when it comes to applying these updates? Kay: Every quarter, updates are published to the console, and OCI notifications are sent out. ADB-Dedicated allows for greater control over updates by allowing you to choose to apply the current update or stay with the previous version and skip to the next release. And the latest update can be applied immediately. This provides fleet administrators with the option to maintain test and production systems at different patch levels. 
A fleet administrator or a database admin sets up the software version policy at the Autonomous Container Database level during provisioning, although the defaults can be modified at any time for an existing Autonomous Container Database. At the bottom of the Autonomous Exadata Infrastructure provisioning screen, you will see a Configure the Automatic Maintenance section, where you should click the Modify Schedule.  22:34 Nikita: What happens if a customer doesn't customize their patching schedule? Kay: If you do not customize a schedule, it behaves like Autonomous Serverless, and Oracle will set a schedule for you. ADB-Dedicated customers get to choose the patching schedule that fits their business.  22:52 Lois: Back to you, Kamryn, I know a bit about Transparent Data Encryption, but I'm curious to learn more. Can you tell me what it does and how it helps protect data? Kamryn: Transparent Data Encryption, TDE, enables you to encrypt sensitive data that you store in tables and tablespaces. After the data is encrypted, this data is transparently decrypted for authorized users or applications when they access this data. TDE helps protect data stored on media, also called data at rest. If the storage media or data file is stolen, Oracle database uses authentication, authorization, and auditing mechanisms to secure data in the database, but not in the operating system data files where data is stored. To protect these data files, Oracle database provides TDE.  23:45 Nikita: That sounds important for data security. So, how does TDE protect data files? Kamryn: TDE encrypts sensitive data stored in data files. To prevent unauthorized decryption, TDE stores the encryption keys in a security module external to the database called a keystore. You can configure Oracle Key Vault as part of the TDE implementation. This enables you to centrally manage TDE key stores, called TDE wallets, in Oracle Key Vault in your enterprise. For example, you can upload a software keystore to Oracle Key Vault and then make the contents of this keystore available to other TDE-enabled databases. 24:28 Lois: What about Oracle Autonomous Database? How does it handle encryption? Kamryn: Oracle Autonomous Database uses always-on encryption that protects data at rest and in transit. All data stored in Oracle Cloud and network communication with Oracle Cloud is encrypted by default. Encryption cannot be turned off. By default, Oracle Autonomous Database creates and manages all the master encryption keys used to protect your data, storing them in a secure PKCS 12 keystore on the same Exadata systems where the databases reside. If your company's security policies require, Oracle Autonomous Database can instead use keys you create and manage. Customers can control key generation and rotation of the keys. 25:19 Kamryn: The Autonomous databases you create automatically use customer-managed keys because the Autonomous container database in which they are created is configured to use customer-managed keys. Thus, those users who create and manage Autonomous databases do not have to worry about configuring their databases to use customer-managed keys. 25:41 Nikita: Thank you so much, Kamryn, Kay, and Maria for taking the time to give us your insights. To learn more about provisioning Autonomous Database Dedicated resources, head over to mylearn.oracle.com and search for the Oracle Autonomous Database Administration Workshop. Lois: In our next episode, we will discuss Autonomous Database tools. 
Until then, this is Lois Houston… Nikita: …and Nikita Abraham signing off. 26:07 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

Smart Software with SmartLogic
Web Development Frameworks: Elixir and Phoenix vs. Ruby on Rails with Owen Bickford & Dan Ivovich

Smart Software with SmartLogic

Play Episode Listen Later Dec 7, 2023 41:41


On today's episode, Elixir Wizards Owen Bickford and Dan Ivovich compare notes on building web applications with Elixir and the Phoenix Framework versus Ruby on Rails. They discuss the history of both frameworks, key differences in architecture and approach, and deciding which programming language to use when starting a project. Both Phoenix and Rails are robust frameworks that enable developers to build high-quality web apps—Phoenix leverages functional programming in Elixir and Erlang's networking for real-time communication. Rails follows object-oriented principles and has a vast ecosystem of plug-ins. For data-heavy CRUD apps, Phoenix's immutable data pipelines provide some advantages. Developers can build great web apps with either Phoenix or Rails. Phoenix may have a slight edge for new projects based on its functional approach, built-in real-time features like LiveView, and ability to scale efficiently. But, choosing the right tech stack depends heavily on the app's specific requirements and the team's existing skills. Topics discussed in this episode: History and evolution of Phoenix Framework and Ruby on Rails Default project structure and code organization preferences in each framework Comparing object-oriented vs functional programming paradigms CRUD app development and interaction with databases Live reloading capabilities in Phoenix LiveView vs Rails Turbolinks Leveraging WebSockets for real-time UI updates Testing frameworks like RSpec, Cucumber, Wallaby, and Capybara Dependency management and size of standard libraries Scalability and distribution across nodes Readability and approachability of object-oriented code Immutability and data pipelines in functional programming Types, specs, and static analysis with Dialyzer Monkey patching in Ruby vs extensible core language in Elixir Factors to consider when choosing between frameworks Experience training new developers on Phoenix and Rails Community influences on coding styles Real-world project examples and refactoring approaches Deployment and dev ops differences Popularity and adoption curves of both frameworks Ongoing research into improving Phoenix and Rails Links Mentioned in this Episode: SmartLogic.io (https://smartlogic.io/) Dan's LinkedIn (https://www.linkedin.com/in/divovich/) Owen's LinkedIn (https://www.linkedin.com/in/owen-bickford-8b6b1523a/) Ruby https://www.ruby-lang.org/en/ Rails https://rubyonrails.org/ Sams Teach Yourself Ruby in 21 Days (https://www.overdrive.com/media/56304/sams-teach-yourself-ruby-in-21-days) Learn Ruby in 7 Days (https://www.thriftbooks.com/w/learn-ruby-in-7-days---color-print---ruby-tutorial-for-guaranteed-quick-learning-ruby-guide-with-many-practical-examples-this-ruby-programming-book--to-build-real-life-software-projects/18539364/#edition=19727339&idiq=25678249) Build Your Own Ruby on Rails Web Applications (https://www.thriftbooks.com/w/build-your-own-ruby-on-rails-web-applications_patrick-lenz/725256/item/2315989/?utm_source=google&utm_medium=cpc&utm_campaign=low_vol_backlist_standard_shopping_customer_acquisition&utm_adgroup=&utm_term=&utm_content=593118743925&gad_source=1&gclid=CjwKCAiA1MCrBhAoEiwAC2d64aQyFawuU3znN0VFgGyjR0I-0vrXlseIvht0QPOqx4DjKjdpgjCMZhoC6PcQAvD_BwE#idiq=2315989&edition=3380836) Django https://github.com/django Sidekiq https://github.com/sidekiq Kafka https://kafka.apache.org/ Phoenix Framework https://www.phoenixframework.org/ Phoenix LiveView https://hexdocs.pm/phoenixliveview/Phoenix.LiveView.html#content Flask https://flask.palletsprojects.com/en/3.0.x/ 
WebSockets API https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API WebSocket connection for Phoenix https://github.com/phoenixframework/websock Morph Dom https://github.com/patrick-steele-idem/morphdom Turbolinks https://github.com/turbolinks Ecto https://github.com/elixir-ecto Capybara Testing Framework https://teamcapybara.github.io/capybara/ Wallaby Testing Framework https://wallabyjs.com/ Cucumber Testing Framework https://cucumber.io/ RSpec https://rspec.info/
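For the real-time UI discussion in this episode, here is a small sketch of the browser side of Phoenix Channels using the official `phoenix` npm client, the WebSocket transport that LiveView also rides on. The socket path, topic, and event names are hypothetical and depend on how the Phoenix app defines its socket and channels.

```ts
import { Socket } from "phoenix";

const socket = new Socket("/socket", { params: { token: "user-token" } });
socket.connect();

// Join a topic; the server-side channel module decides whether to accept.
const channel = socket.channel("room:lobby", {});
channel
  .join()
  .receive("ok", () => console.log("joined room:lobby"))
  .receive("error", (reason) => console.error("join failed", reason));

// Server push -> client (LiveView-style live updates use this same transport).
channel.on("new_msg", (payload: { body: string }) => {
  console.log("new message:", payload.body);
});

// Client push -> server.
channel.push("new_msg", { body: "hello from the browser" });
```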

DevTalles
134-WebSockets - What are they and what can I use them for?

DevTalles

Play Episode Listen Later Oct 2, 2023 20:18


In this episode, we'll learn a bit about how WebSockets work and what they can be used for. --- Support this podcast: https://podcasters.spotify.com/pod/show/fernando-her85/support
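As a quick complement to the episode's "what are WebSockets" theme, here is a minimal echo server sketch using the Node `ws` package: after a single HTTP upgrade handshake, both sides can send messages at any time over the same connection. The port is arbitrary.

```ts
import { WebSocketServer } from "ws";

const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (socket) => {
  console.log("client connected");

  // Bidirectional: the server can push without waiting for a request.
  socket.send(JSON.stringify({ type: "welcome", at: Date.now() }));

  // Echo anything the client sends back to it.
  socket.on("message", (data) => {
    socket.send(data.toString());
  });

  socket.on("close", () => console.log("client disconnected"));
});
```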

Remote Ruby
Layered Rails Design with Vladimir Dementyev

Remote Ruby

Play Episode Listen Later Sep 29, 2023 47:41


In this episode, Jason and Andrew are joined by guest Vladimir Dementyev of Ruby on Rails and Evil Martians fame. Today, they touch on Vladimir's new book on designing Rails applications, and dive into the importance of sticking to Rails principles, even in the era of microservices. Vladimir shares insights on working as a consultant on legacy Rails projects and the challenges that can arise when codebases deviate from Rails conventions. We'll also explore the evolution of Rails applications, the power of open source contributions, and Vladimir's journey to becoming a recognized figure in the tech community. Also, Vladimir introduces AnyCable, a performance-oriented solution for real-time communication in Rails applications and provides insights into its capabilities and evolution. Hit download now to hear much more! [00:02:29] Vladimir briefly describes his book on designing Rails applications. [00:05:40] Vladimir talks about sticking to Rails principles and not injecting foreign patterns into Rails applications and emphasizes the importance of maintaining a Rails oriented approach even when using microservices. [00:08:33] We hear about Vladimir working as a consultant on legacy Rails projects and the challenges of maintaining codebases that deviate from Rails principles. [00:10:29] Jason asks for more examples of where the Rails framework ends and developers have to steer their own course. Vladimir discusses the structure of the app folder in Rails applications and mentions the trend of putting everything in the model folder, and he talks about how Rails applications changed during the API-only era, leading to a shift away from Rails conventions and MVC patterns. [00:13:41] Andrew expresses how he feels vindicated for sticking to writing Rails apps even when the trend shifted towards API-only development. [00:15:08] Vladimir shares his journey to joining Evil Martians, starting as a solo developer, and his attraction to the simplicity of Rails. He mentions his experiments with different design patterns and how joining Evil Martians provided a collaborative environment for open source work. [00:19:15] Vladimir talks about how Evil Martians encouraged new engineers to propose conference talks, leading him to present on AnyCable, which sparked his open source contributions. [00:20:18] He talks about how it took a couple of years for his efforts, including writing blog posts and working on AnyCable, to gain recognition and production users outside of Evil Martians. Also, he explains how writing became a way for him to cope with stress and how it contributed to the company's visibility and recognition in the tech community. [00:26:20] We hear about Evil Martians' shift in focus from consumer products to developer tools and how they use and contribute to products built by others. Vladimir briefly discusses HTTPie, and how they helped with its development. [00:28:44] Jason brings up AnyCable, and Vladimir tells us what it is, what problem it solves, and the benefits of using it. Also, he explains how AnyCable allows for seamless replacement of Action Cable in existing applications and its Go-based WebSocket server. [00:32:16] Vladimir mentions that AnyCable has evolved over seven years to offer additional features, including support for different transports and server-sent events, making it versatile for various use cases. [00:34:08] Vladimir discusses the versatility of AnyCable, highlighting that it can be deployed anywhere and used with platforms beyond Rails. 
He mentions that AnyCable is becoming the default choice for handling WebSockets in Rails applications as they continue to expand their reach into other ecosystems. [00:38:09] We hear about some upcoming features for AnyCable, including presence tracking, and plans to make AnyCable compatible with other ecosystems. Vladimir teases a new feature he's working on for Rails and Turbo. [00:43:04] Andrew shares that he used to read Vladimir's code on GitHub to learn new patterns and gain inspiration for his own work. He mentions reading code from different libraries and ecosystems is a powerful way to expand one's toolkit for problem-solving. [00:43:47] Vladimir suggests the idea of a podcast or show where experts review codebases and discuss patterns, techniques, and the rationale behind certain code decisions. He believes it could be a great way to learn and share knowledge. [00:45:36] Jason shares that he appreciates Vladimir's contributions to Ruby on Rails and the high-quality content shared by Evil Martians on their blog. [00:46:36] Find out where you can find Vladimir online. Panelists: Jason Charnes, Andrew Mason. Guest: Vladimir Dementyev. Sponsor: Honeybadger. Links: Jason Charnes Twitter, Chris Oliver Twitter, Andrew Mason Twitter, Vladimir Dementyev Twitter, Vladimir Dementyev GitHub, StimulusReflex Discord, Evil Martians, Layered Design for Ruby on Rails Applications: Discover practical design patterns for maintainable web applications by Vladimir Dementyev, AnyCable, HTTPie, Rails World 2023, Ruby for All Podcast
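A side note on the "seamless replacement" point above: AnyCable speaks the same wire protocol as Action Cable, which is why swapping the server is invisible to connected clients. Below is a minimal, hedged sketch of that client-side handshake using Python's websockets library. The /cable path is only Rails' conventional mount point, and ChatChannel plus the localhost:3000 URL are placeholder assumptions rather than anything from the episode.

```python
import asyncio
import json

import websockets  # pip install websockets

CABLE_URL = "ws://localhost:3000/cable"              # assumed: Rails' default Action Cable mount
IDENTIFIER = json.dumps({"channel": "ChatChannel"})  # hypothetical channel name

async def subscribe():
    # Both Action Cable and AnyCable negotiate the 'actioncable-v1-json' subprotocol,
    # so the same client code works against either server.
    async with websockets.connect(CABLE_URL, subprotocols=["actioncable-v1-json"]) as ws:
        await ws.send(json.dumps({"command": "subscribe", "identifier": IDENTIFIER}))
        async for raw in ws:
            msg = json.loads(raw)
            if msg.get("type") in ("welcome", "ping", "confirm_subscription"):
                continue  # protocol housekeeping frames
            print("broadcast:", msg.get("message"))

asyncio.run(subscribe())
```

Because those JSON commands are identical against either server, moving from Action Cable's in-process server to AnyCable's Go WebSocket server is a deployment change rather than a client change.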

PodRocket - A web development podcast from LogRocket

HTMX has become increasingly popular in the web dev community, so we sat down with creator Carson Gross to talk about what HTMX is, why developers should use it, and what's on the horizon for the tool. Links https://bigsky.software/cv https://www.linkedin.com/in/1cg https://github.com/bigskysoftware https://twitter.com/htmxorg https://htmx.org https://www.youtube.com/watch?v=NA5FcgsviU&ab_channel=Theo-t3%E2%80%A4gg Tell us what you think of PodRocket We want to hear from you! We want to know what you love and hate about the podcast. What do you want to hear more about? Who do you want to see on the show? Our producers want to know, and if you talk with us, we'll send you a $25 gift card! If you're interested, schedule a call with us (https://podrocket.logrocket.com/contact-us) or you can email producer Kate Trahan at kate@logrocket.com (mailto:kate@logrocket.com) Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket combines frontend monitoring, product analytics, and session replay to help software teams deliver the ideal product experience. Try LogRocket for free today. (https://logrocket.com/signup/?pdr) Special Guest: Carson Gross.

Azure DevOps Podcast
Chris "Woody" Woodruff: Network Programming - Episode 262

Azure DevOps Podcast

Play Episode Listen Later Sep 11, 2023 42:08


Chris Woodruff, or as his friends call him, Woody, is a software developer and architect with over 25 years of experience. Woody loves software engineering, especially allowing applications and services to communicate across networks and through Web APIs. He has been a Microsoft MVP in SQL, Data, and C# in the past, along with multiple years of being awarded the AWS Community Builder Award. Woody lives in Grand Rapids, Michigan, where he explores the many breweries in West Michigan and travels with his family. Woody is also a long-time bourbon fan and loves hunting for whiskey bottles. Topics of Discussion: [4:46] The many positions Woody has held in his career. [7:14] The genesis behind Woody's new book, Practical Network Programming Using C#. Dive deep into #CSharp12 and #DotNET8. [9:24] The second book Woody is working on co-writing, on the patterns of developer relations. [14:10] The original intent of the internet was to protect the military. [15:22] What is a packet? [21:08] A brief history of web services. [24:00] Who was Roy Fielding? [28:48] Woody talks about using different applications, the WebSocket service, and Dapr. [35:36] You have to know about the transport across the network, as well as how to build the code and architect your application so that it utilizes the network efficiently. [40:14] We can expect the book out by May or June 2024. Mentioned in this Episode: Clear Measure Way Architect Forum Software Engineer Forum Programming with Palermo — New Video Podcast! Email us at programming@palermo.net Clear Measure, Inc. (Sponsor) .NET DevOps for Azure: A Developer's Guide to DevOps Architecture the Right Way, by Jeffrey Palermo — Available on Amazon! Jeffrey Palermo's Twitter — Follow to stay informed about future events! Architect Tips — Video podcast! Azure DevOps Chris Woodruff's Blog Chris Woodruff LinkedIn Chris Woodruff Twitter   Want to Learn More? Visit AzureDevOps.Show for show notes and additional episodes.
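As a rough companion to the "[15:22] What is a packet?" and "[35:36]" transport topics: the sketch below is a generic illustration in Python (not C#, and not taken from Woody's book) of how application code sees a byte stream while the network underneath moves packets, which is why a single recv() call may return an arbitrary slice of the response. example.com is just a placeholder host.

```python
import socket

def fetch_headers(host="example.com", port=80):
    """Send one HTTP HEAD request and read the reply off a raw TCP socket."""
    with socket.create_connection((host, port), timeout=5) as sock:
        request = f"HEAD / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        sock.sendall(request.encode("ascii"))  # the OS splits this into packets for us
        chunks = []
        while True:
            chunk = sock.recv(1024)  # one recv() rarely maps to exactly one packet
            if not chunk:            # an empty read means the server closed the connection
                break
            chunks.append(chunk)
    return b"".join(chunks).decode("latin-1")

if __name__ == "__main__":
    print(fetch_headers())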

Giant Robots Smashing Into Other Giant Robots
482: Evil Martians with Irina Nazarova

Giant Robots Smashing Into Other Giant Robots

Play Episode Listen Later Jul 6, 2023 33:46


Irina Nazarova is CEO of Evil Martians, a product development consultancy that works with startups and established businesses while creating open-source products and services. Victoria talks to Irina about getting a sense of what people are interested in learning about or what kind of problems they have, how consulting and product development complement each other, and of course, the question on everyone's minds: Is Evil Martians really evil?

The MapScaping Podcast - GIS, Geospatial, Remote Sensing, earth observation and digital geography

The promise of digital mapping is to provide a shared and real-time view of the state of the underlying system. pg_eventserv is a free and open-source component that helps fulfill the promise of real-time event modeling and shared views in PostgreSQL. By connecting to PostgreSQL and listening on specified channels, pg_eventserv captures database notifications and forwards them to web clients, enabling real-time updates and synchronization of data displayed on maps or other web interfaces. pg_eventserv does one thing and one thing only: it takes events generated by the PostgreSQL NOTIFY command and passes the payload along to waiting WebSocket clients. pg_eventserv is free and easy to install and you can find it here: https://github.com/CrunchyData/pg_eventserv What this means is that any client can watch for notifications and update as changes in the database happen. Real-time data! Here is a link to a YouTube video demonstration of pg_eventserv in action: https://youtu.be/UakRtYmoWow I will let Paul Ramsey, the creator of pg_eventserv, explain all this in more detail in this episode. If you want to reach out to Paul, the best place to do that is http://blog.cleverelephant.ca/ Or if you want to listen to previous episodes with Paul, you might find these interesting: Raster in the database? https://mapscaping.com/podcast/rasters-in-a-database/ Dynamic Vector Tiles Straight From The Database https://mapscaping.com/podcast/dynamic-vector-tiles-straight-from-the-database/ Spatial SQL - GIS Without The GIS https://mapscaping.com/podcast/spatial-sql-gis-without-the-gis/ Also, if you are interested in spatial databases at scale, you might find this episode interesting: https://mapscaping.com/podcast/distributing-geospatial-data/
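To make the NOTIFY-to-WebSocket flow concrete, here is a minimal sketch in Python: one side raises a notification in PostgreSQL, and pg_eventserv relays the payload to any connected WebSocket client. The channel name, connection string, and the ws://localhost:7700/listen/<channel> endpoint are assumptions for illustration; check the pg_eventserv README for the actual port and URL path.

```python
import asyncio
import json

import psycopg2    # pip install psycopg2-binary
import websockets  # pip install websockets

def publish_change(channel="objects"):
    """Raise a NOTIFY in PostgreSQL; pg_eventserv relays it to connected clients."""
    conn = psycopg2.connect("dbname=gis user=postgres")  # placeholder connection string
    conn.autocommit = True  # NOTIFY is delivered on commit, so commit immediately
    with conn.cursor() as cur:
        payload = json.dumps({"id": 42, "status": "moved"})
        cur.execute("SELECT pg_notify(%s, %s);", (channel, payload))
    conn.close()

async def watch_changes(url="ws://localhost:7700/listen/objects"):
    """Print every payload pg_eventserv forwards from the assumed 'objects' channel."""
    async with websockets.connect(url) as ws:
        async for message in ws:
            print("change event from the database:", message)

if __name__ == "__main__":
    # Start the watcher here, then call publish_change() from another process
    # (or run SELECT pg_notify('objects', '{"id": 42}'); in psql) to see events arrive.
    asyncio.run(watch_changes())
```

In a real deployment the NOTIFY would typically be raised by a trigger on the table being edited, so map clients refresh whenever a row changes.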

Smart Software with SmartLogic
Mat Trudel on the Future of Phoenix and Web Transports

Smart Software with SmartLogic

Play Episode Listen Later Apr 27, 2023 48:05


In this episode of Elixir Wizards, Owen and Dan talk to Mat Trudel, Phoenix contributor and creator of the Bandit Web Server, about the future of Phoenix, web transports, and HTTP/3. Mat explains the challenges and benefits of implementing HTTP/3 support in Phoenix. Mat provides in-depth insights into the evolution of web protocols and encourages developers to continue pushing the boundaries of web development and to contribute to the growth of the open-source community. Main topics discussed in this episode: The evolution of web protocols and how HTTP/3 is changing the landscape The challenges and benefits of implementing HTTP/3 support in Phoenix How a home AC project revealed a gap in web server testing tools and inspired Bandit How web transports like Cowboy and Ranch are used to build scalable web servers WebSock for multiplexing data over a single WebSocket connection Mat's philosophy on naming projects and his passion for malapropisms The Bandit project and how it can help developers better understand web protocols Autobahn, a testing suite for WebSocket protocol specification conformance The importance of community involvement in open-source projects Encouragement for more people to use Bandit and report bugs Links Mentioned: SmartLogic is Hiring: https://smartlogic.io/about/jobs PagerDuty: https://www.pagerduty.com Phoenix Framework: https://www.phoenixframework.org/ Cowboy: https://ninenines.eu/docs/en/cowboy/2.9/guide/introduction/ Ranch: https://github.com/ninenines/ranch Bandit - https://hexdocs.pm/bandit/Bandit.html Autobahn: https://github.com/crossbario/autobahn-testsuite HTTP Cats: https://http.cat/ Mat Trudel at Empex 2022 A Funny Thing Happened On The Way To The Phoenix (https://www.youtube.com/watch?v=FtZBTUvRt0g) Thousand Island - https://hexdocs.pm/thousand_island/ThousandIsland.html Special Guest: Mat Trudel.

Paul's Security Weekly TV
Deps.dev API, Right to Repair Tractors, Secure by Design, WebSockets, Adversarial AI - ASW #237

Paul's Security Weekly TV

Play Episode Listen Later Apr 19, 2023 32:26


A new deps.dev API for supply chain enthusiasts, hacking and modding agricultural devices, guidance from CISA on secure by design (and by default!), Glaze brings adversarial art to AI training, key transparency for WhatsApp, a new appsec myth(?), Android hacking tool list, and a Chrome extension to find web debugging behavior.   Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw237

Smart Software with SmartLogic
Season 10 Kickoff: The Hosts Discuss The Future of Elixir

Smart Software with SmartLogic

Play Episode Listen Later Mar 23, 2023 41:48


It's the season 10 premiere of the Elixir Wizards podcast! Sundi Myint, Owen Bickford, and Dan Ivovich kick off the season with a spirited conversation about what they hope to see in the future of the Elixir programming language, experiences that inform their predictions, and excitement for this season's lineup of guests. They touch on how Elixir has evolved in the previous ten years and how the range of use cases has expanded beyond web development. The hosts introduce the season 10 theme: The Next Ten Years of Elixir Dan explains his initial hesitation and how he was ultimately won over by the language Owen talks about functional programming and why Elixir piqued his interest Sundi compares Elixir to other languages she's worked with and why she thinks it's more intuitive and readable Sundi talks about her recent experience using Flutter for mobile application development The hosts express excitement about the various ways Elixir is currently being used and its potential for growth The Wizards express interest in hearing from guests this season to gain more perspective They discuss Phoenix, LiveView, documentation, Flutter, Dart, and resources available to the Elixir community, and the benefits of being fluent in different programming languages Owen suggests that Elixir and Phoenix are optimal for projects with limited resources and leaner teams They highlight the importance of building resource-efficient apps that work well on low-powered devices Dan expresses his desire to embrace types more but acknowledges that a first-party typing system is unlikely The speakers discuss how Elixir has made complex tasks more accessible through features like LiveView, Phoenix Presence, WebSockets, Pub/Sub, Nerves, and ML libraries They express excitement about the possibilities for the future of Elixir Links mentioned in this episode: SmartLogic — https://smartlogic.io/ SmartLogic Twitter — https://twitter.com/smartlogic Axon – https://hexdocs.pm/axon/Axon.html Bumblebee – https://hex.pm/packages/bumblebee HEEx – https://hexdocs.pm/phoenixliveview/assigns-eex.html Phoenix LiveView – https://hexdocs.pm/phoenixliveview/Phoenix.LiveView.html Numerical Elixir – https://hexdocs.pm/nx/intro-to-nx.html Hugging Face – https://huggingface.co/docs Flutter – https://docs.flutter.dev/ Dart – https://dart.dev/ Broadway – https://hexdocs.pm/broadway/Broadway.html Phoenix Presence – https://hexdocs.pm/phoenix/Phoenix.Presence.html Nerves – https://hexdocs.pm/nerves/getting-started.html WebSocket – https://hexdocs.pm/web_socket/readme.html

Paul's Security Weekly TV
WebSocket Hijack, Post-Quantum Side-Channel, OWASP's Future, OAuth Misconfigs, ZAP - ASW #231

Paul's Security Weekly TV

Play Episode Listen Later Mar 8, 2023 40:30


WebSocket hijack that leads to a full workspace takeover in a cloud IDE, malicious packages flood public repos, side-channel attack on a post-quantum algorithm, looking at OWASP's evolution, OAuth misconfigs lead to account takeover, AI risk management framework, Zed Attack Proxy   Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw231

SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast

Live Linux IR with UAC https://isc.sans.edu/diary/Live%20Linux%20IR%20with%20UAC/29480 Bitwarden Phishing https://community.bitwarden.com/t/phishing-website-bitwardenlogin-com/49704 https://www.reddit.com/r/Bitwarden/comments/10k2aj5/google_search_ads_showing_fake_bitwarden_web/ PY#RATION Attack Campaign Leverages Fernet Encryption and Websockets https://www.securonix.com/blog/security-advisory-python-based-pyration-attack-campaign/ Skyhigh Security Secure Web Gateway: XSS in Single Sign On Plugin https://www.redteam-pentesting.de/en/advisories/rt-sa-2022-002/-skyhigh-security-secure-web-gateway-cross-site-scripting-in-single-sign-on-plugin Windows Crypto API Vuln PoC https://github.com/akamai/akamai-security-research/tree/main/PoCs/CVE-2022-34689 BIND Patches https://kb.isc.org/docs/cve-2022-3094

The CyberWire
TA444 and crypto theft on behalf of the Dear Successor. CryptoAPI spoofing vulnerability described. New Python-based malware campaign. User headspace. Tanks vs. hacktivists.

The CyberWire

Play Episode Listen Later Jan 25, 2023 30:40


How do the North Koreans get away with it? They do run their cyber ops like a creepy start-up business. A spoofing vulnerability is discovered in Windows CryptoAPI. Python-based malware is distributed via phishing. MacOS may have a reputation for threat-resistance, but users shouldn't get cocky. DevSecOps survey results show tension between innovation and security. Russian hacktivist auxiliaries hit German targets. Tim Starks from the Washington Post Cyber 202 shares insights from his interview with Senator Warner. Our guest is Keith McCammon of Red Canary to discuss cyber accessibility. And Private sector support for Ukraine's cyber defense. For links to all of today's stories check out our CyberWire daily news briefing: https://thecyberwire.com/newsletters/daily-briefing/12/16 Selected reading. TA444: The APT Startup Aimed at Acquisition (of Your Funds) (Proofpoint) Exploiting a Critical Spoofing Vulnerability in Windows CryptoAPI (Akamai)  Securonix Security Advisory: Python-Based PY#RATION Attack Campaign Leverages Fernet Encryption and Websockets to Avoid Detection (Securonix) BlackBerry's Inaugural Quarterly Threat Intelligence Report Reveals Threat Actors Launch One Malicious Threat Every Minute (BlackBerry) Global CIO Report Reveals Growing Urgency for Observability and Security to Converge (Dynatrace) Russian 'hacktivists' briefly knock German websites offline (Reuters) How Microsoft is helping Ukraine's cyberwar against Russia (Computerworld) CISA Releases Two Industrial Control Systems Advisories (CISA) 

Ventures
Learn Product and Code: DNS, WebSockets, and Profile Picture AI

Ventures

Play Episode Listen Later Dec 27, 2022 9:46


In this episode of Ventures, I (https://www.reddit.com/user/wclittle) continue from last week's screencast to discuss WebSockets, DNS, and Profile Picture AI. I talk about how you can get involved with the Q1 product and code cohort I'm leading (apply here: https://forms.gle/dtVA1bQ9xATRrPW29), how rare it is to actually be good at “product”, the Prota Ventures slack group for investors and entrepreneurs, a deeper dive into how web requests are packed on the server side with Ruby on Rails, and the future of Rails vs. Low-Code/No-Code. Visit https://satchel.works/@wclittle/ventures-episode-129 for more information. You can watch this episode via video here. 0:08 - Tee-up for the episode (a followup from last week's episode). To train people in Product and Code. If you'd like to apply, read this and fill out the form: https://forms.gle/dtVA1bQ9xATRrPW29 0:24 - Rarity of being actually good at “product” 0:43 - Teaching product and code - learning in parallel 1:07 - Profile Picture AI → Referral link: https://profilepicture.ai/?ref=mttd2YgdG7X4XIdrOGrAFQkEl1j2 1:54 - Hit up Will to join the Prota Ventures slack group if you are an entrepreneur or an investor (will@wclittle.com or will@protaventures.com) 2:20 - More about DNS / (including ChatGPT's definition of it) 3:48 - What happens when a web request hits the web server. 4:40 - Multiple ways to transfer HTML/CSS/JS to the server. More here about Web Sockets. See: https://hotwired.dev/ 6:12 - Thesis: it's better to learn Rails w/ ChatGPT vs. Low-Code/No-Code 7:22 - Hit Will up at will@wclittle.com or apply for the cohort here → https://forms.gle/dtVA1bQ9xATRrPW29
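Since the episode pairs DNS with WebSockets, here is a small Python sketch of both steps: resolving a hostname to addresses, then holding open a single WebSocket connection instead of issuing repeated HTTP requests. The echo.websocket.events host is just a commonly used public echo endpoint and may send its own greeting frame first; treat the URL as an assumption and substitute any WebSocket server you trust.

```python
import asyncio
import socket

import websockets  # pip install websockets

async def main():
    # DNS: turn a human-readable name into IP addresses before any connection is made.
    host = "echo.websocket.events"  # placeholder public echo server
    addresses = {info[4][0] for info in socket.getaddrinfo(host, 443)}
    print(f"{host} resolves to: {', '.join(sorted(addresses))}")

    # WebSockets: one long-lived, two-way connection the server can push to at any time.
    async with websockets.connect(f"wss://{host}") as ws:
        await ws.send("hello over a websocket")
        print("first frame from the server:", await ws.recv())

asyncio.run(main())
```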

Syntax - Tasty Web Development Treats
Potluck × Twitter Following × TypeScript × Playwright

Syntax - Tasty Web Development Treats

Play Episode Listen Later Nov 30, 2022 57:31


In this potluck episode of Syntax, Wes and Scott answer your questions about strategies to find good Twitter follows, should we use TypeScript for a company blog, what's Playwright, and more! Prismic - Sponsor Prismic is a Headless CMS that makes it easy to build website pages as a set of components. Break pages into sections of components using React, Vue, or whatever you like. Make corresponding Slices in Prismic. Start building pages dynamically in minutes. Get started at prismic.io/syntax. Sanity - Sponsor Sanity.io is a real-time headless CMS with a fully customizable Content Studio built in React. Get a Sanity powered site up and running in minutes at sanity.io/create. Get an awesome supercharged free developer plan on sanity.io/syntax. LogRocket - Sponsor LogRocket lets you replay what users do on your site, helping you reproduce bugs and fix issues faster. It's an exception tracker, a session re-player and a performance monitor. Get 14 days free at logrocket.com/syntax. Show Notes 00:10 Welcome 01:53 TikTok update 02:46 What strategies do you use to find the good stuff? Tweetdeck 11:17 What is the clear distinction b/w how the CRUD Ops & the Real time chat protocols (XMPP, Websockets, etc) differ? 15:56 Should we use TypeScript for a new blog? 20:45 Sponsor: Sanity 21:45 Is there any reason to still use string concatenation over template literals? 26:13 What should React devs use to get initial scaffolding in place? Create React App Vite Turbo Pack 29:38 What's the deal with Playwright? Playwright 34:28 Sponsor: LogRocket 35:19 What separates a “scripting” language from a “programming” language? 39:37:14 Sponsor: Prismic 40:23:02 Do you have any tips on how to change a defeated programmer mindset? Partydown 47:34:07 How has your use of and work in the browser changed with Arc? Arc Browser Syntax 514 - Supper Club × Arc Browser with Hursh Agrawal 49:18:15 Why would I attach an event listener with the useCapture argument as true? 51:33:24 SIIIIICK ××× PIIIICKS ××× ××× SIIIIICK ××× PIIIICKS ××× Scott: Screwless plates / 4 pack of screwless plates Wes: Shoe Boxes for Cords Shameless Plugs Scott: LevelUp TikTok Wes: Wes Bos TikTok Tweet us your tasty treats Scott's Instagram LevelUpTutorials Instagram Wes' Instagram Wes' Twitter Wes' Facebook Scott's Twitter Make sure to include @SyntaxFM in your tweets
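On the potluck question above about how CRUD operations differ from real-time protocols like XMPP and WebSockets, the contrast is easiest to see side by side. The sketch below is illustrative only: both endpoints (api.example.com and chat.example.com) are hypothetical placeholders, and Python stands in for whatever stack you actually use.

```python
import asyncio
import urllib.request

import websockets  # pip install websockets

# CRUD over HTTP: every read or write is its own short-lived request/response,
# and the server forgets about the client as soon as the response is sent.
def read_message(url="https://api.example.com/messages/1"):  # hypothetical endpoint
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.read().decode("utf-8")

# Real-time chat over WebSockets: one connection stays open, and the server can
# push new messages the moment they happen, with no polling on the client side.
async def listen_for_messages(url="wss://chat.example.com/socket"):  # hypothetical endpoint
    async with websockets.connect(url) as ws:
        async for message in ws:
            print("incoming chat message:", message)
```

XMPP solves the same push problem with persistent XML streams; WebSockets are simply the transport browsers expose natively, which is why most web chat stacks reach for them first.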