The Data Center Boom: Five Trends Engineering Firms Need to Know

The data center market is experiencing unprecedented growth, driven by artificial intelligence adoption and changing infrastructure demands. For ACEC member firms, this represents both a substantial business opportunity and a chance to shape critical national infrastructure. ACEC's latest Market Intelligence Brief reveals a market poised to reach $62 billion in design and construction spending by 2029, with implications that extend far beyond traditional data center engineering.

The launch of ChatGPT in 2022 marked an inflection point. What began as simple voice assistants has evolved into sophisticated large language models that consume dramatically more energy. A standard AI query uses about 0.012 kilowatt-hours, while generating a single high-quality image requires 2.0 kWh—roughly 20 times the daily consumption of a standard LED lightbulb. As weekly ChatGPT users surged from 100 million to 700 million between November 2023 and August 2025, the infrastructure implications became impossible to ignore. AI-driven data center power demand, which stood at just 4 gigawatts in 2024, is projected to reach 123 gigawatts by 2035. Even more striking: 70 percent of data center power demand will be driven by AI workloads. This explosive growth requires engineering solutions at unprecedented scale, from power distribution and backup systems to advanced cooling technologies and grid integration strategies.

Public perception of data center water consumption often overlooks important nuances in cooling technology. While mechanical cooling systems have historically consumed significant water, newer approaches could dramatically reduce water use. Free air cooling, closed-loop systems, and liquid immersion technologies offer low-water alternatives, with some methods reducing freshwater consumption by 70 percent or more compared to traditional systems.
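For a rough sense of scale, the per-query figure above can be converted into an average power draw. This is a minimal back-of-the-envelope sketch: the queries-per-user assumption is ours for illustration, not a figure from the brief.

```python
# Back-of-the-envelope estimate using the article's per-query energy figure.
# QUERIES_PER_USER_PER_WEEK is an illustrative assumption, not sourced data.
ENERGY_PER_QUERY_KWH = 0.012    # per the article
WEEKLY_USERS = 700_000_000      # August 2025 figure cited above
QUERIES_PER_USER_PER_WEEK = 10  # assumed for illustration only

weekly_kwh = WEEKLY_USERS * QUERIES_PER_USER_PER_WEEK * ENERGY_PER_QUERY_KWH
# Convert weekly kWh to an average continuous power draw in gigawatts:
# kWh per week / hours per week = kW, then / 1e6 = GW.
avg_draw_gw = weekly_kwh / (7 * 24) / 1e6

print(f"Weekly energy: {weekly_kwh / 1e6:.0f} GWh")
print(f"Average continuous draw: {avg_draw_gw:.2f} GW")
```

Under these assumptions, chat queries alone average about half a gigawatt of continuous draw, which illustrates why the 123 GW projection reflects far more than chat: training runs, image and video generation, and non-AI workloads dominate the total.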
As Thom Jackson, mechanical engineer and partner at Dunham Engineering, notes: "Most data centers utilize closed loop cooling systems requiring no makeup water and minimal maintenance." The "big four" hyperscale operators—Amazon Web Services, Microsoft Azure, Google Cloud Platform, and Meta—have all committed to becoming water-positive by 2030, replenishing more water than they consume. These commitments are driving innovation in cooling system design and creating opportunities for engineering firms with expertise in sustainable mechanical systems.

The days of one-size-fits-all data centers are over. Latency requirements, scalability needs, and proximity to end users are accelerating adoption of diverse building types. Edge data centers bring computing closer to users for real-time applications like IoT and 5G. Hyperscale facilities support massive cloud and AI workloads with 100,000-plus servers. Colocation models enable scalable shared environments for enterprises, while modular designs—prefabricated with integrated power and cooling—offer rapid, cost-effective deployment. Each model presents distinct engineering challenges and opportunities, from specialized HVAC systems and high floor-to-ceiling heights for hyperscale facilities to distributed infrastructure planning for edge networks.

Two emerging trends deserve particular attention. First, the Department of Energy has selected four federal sites to host AI data centers paired with clean energy generation, including small modular reactors (SMRs). The Nuclear Regulatory Commission anticipates at least 25 SMR license applications by 2029, signaling strong demand for nuclear co-location expertise. Second, developers are increasingly exploring adaptive reuse of underutilized office space, brownfield sites, and historic buildings. These locations offer existing utility infrastructure that can reduce construction time and costs, making them attractive alternatives despite some design constraints.
Recent federal policy changes are streamlining data center deployment. Executive Order 14318 directs agencies to accelerate environmental reviews and permitting, while revisions to New Source Review under the Clean Air Act could allow construction to begin before air permits are issued. ACEC recently formed the Data Center Task Force to advocate for policies that balance speed, affordability, and national security in data center development, complementing EO 14318.

For engineering firms, site selection expertise has become increasingly valuable. Success hinges on sales and use tax exemptions, existing power and fiber connectivity, effective community engagement, and thorough environmental risk assessment. AI-driven planning tools like UrbanFootprint and Esri ArcGIS are helping developers evaluate site suitability, identifying opportunities for firms.

The data center market offers engineering firms a chance to lead in sustainable design, infrastructure innovation, and strategic planning at a moment when digital infrastructure has become as critical as traditional utilities.
In this week's episode of Hands-On Tech, Lance asks Mikah Sargent about the pros and cons of using powerline ethernet adapters, and Mikah shares his strong thoughts on these devices. Don't forget to send in your questions for Mikah to answer during the show! hot@twit.tv Host: Mikah Sargent Download or subscribe to Hands-On Tech at https://twit.tv/shows/hands-on-tech Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
The Crew and some new and old friends take on the Mass Driver at Morro Rock. They've made it past the barge and into the compound; now to split up, plant the virus, and get out without a bang. Hopefully...
Jade, Evan and Peter are joined by Dayeanne Hutton and Anais R Morgan (Infinite Sided Dice) live on stage at San Diego Comic Con 2025! Check out our YouTube channel for more live panels!
BTS and art posts will come out for all our patrons to peruse next week!
More info can be found here: linktr.ee/NoLatency
If you'd like to support us, we now have a Patreon! Patreon.com/nolatency
Even more information and MERCH is on our website! www.nolatencypodcast.com
Twitter: @nolatencypod Instagram: @nolatencypod
Logo & Map Art by Paris Arrowsmith
Character Art by: Doodlejumps, Saint and Paris Arrowsmith
Producing and Editing by Paris Arrowsmith
Music and Sound SFX by Epidemic Sound.
Find @SkullorJade, @Miss_Magitek, @Binary_Dragon, and @retrodatv on Twitch for live D&D, TTRPGs and more.
#cyberpunkred #actualplay #ttrpg #radioplay #scifi #cyberpunk #drama #comedy #LIVE #SDCC25
How do we know through atmospheres? How can being affected by an atmosphere give rise to knowledge? What role does somatic, nonverbal knowledge play in how we belong to places? Atmospheric Knowledge takes up these questions through detailed analyses of practices that generate atmospheres and in which knowledge emerges through visceral intermingling with atmospheres. From combined musicological and anthropological perspectives, Birgit Abels and Patrick Eisenlohr investigate atmospheres as a compelling alternative to better-known analytics of affect by way of performative and sonic practices across a range of ethnographic settings. With particular focus on oceanic relations and sonic affectedness, Atmospheric Knowledge centers the rich affordances of sonic connections for knowing our environments. A free ebook version of this title is available through Luminos, University of California Press's Open Access publishing program. Visit www.luminosoa.org to learn more. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network
The Crew and some new and old friends take on the Mass Driver at Morro Rock. Domino has discovered a plot to take out the crew's communication satellite, and time is ticking before it's blown out of the sky.
Jade, Evan and Peter are joined by Dayeanne Hutton and Anais R Morgan (Infinite Sided Dice) live on stage at San Diego Comic Con 2025! Check out our YouTube channel for more live panels!
BTS and art posts will come out for all our patrons to peruse after Part 2 is released next week!
More info can be found here: linktr.ee/NoLatency
If you'd like to support us, we now have a Patreon! Patreon.com/nolatency
Even more information and MERCH is on our website! www.nolatencypodcast.com
Twitter: @nolatencypod Instagram: @nolatencypod
Find @SkullorJade, @Miss_Magitek, @Binary_Dragon, and @retrodatv on Twitch for live D&D, TTRPGs and more.
#cyberpunkred #actualplay #ttrpg #radioplay #scifi #cyberpunk #drama #comedy #LIVE #SDCC25
In this week's vBrownBag, Principal Software Engineer Dominik Wosiński takes us on a deep dive into Amazon Nova Sonic — AWS's latest speech-to-speech AI model. Dominik explores how unified voice models like Nova Sonic are reshaping customer experience, DevOps workflows, and real-time AI interaction, with live demos showing just how natural machine-generated speech can sound. We cover what makes speech-to-speech difficult, how latency and turn-detection affect conversational design, and why this technology marks the next frontier for AI-driven customer support. Stick around for audience Q&A, live experiments, and insights on where AWS Bedrock and generative AI are headed next.
Bill Severn of 1623 Farnam joins JSA TV from DCD>Connect Virginia to discuss how #GenerativeAI is reshaping network infrastructure. He shares insights on latency limits, #edge inference, #hyperscaler-driven metro upgrades, and what an AI-ready interconnect looks like. Plus, a look ahead at 1623 Farnam's expansion plans and investments for the next 12–24 months.
“AI is hungry — for bandwidth, for speed, and for talent.” — Jean-Philippe Avelange, Chief Information Officer, Expereo

Jean-Philippe Avelange, CIO of Expereo, joined Doug Green, Publisher of Technology Reseller News, to discuss findings from Expereo's Horizon Telecom Report—revealing how U.S. organizations are losing millions to network failures and struggling to find skilled professionals in cybersecurity, networking, and data automation. Avelange explained that as companies digitize everything from collaboration to customer experience, connectivity interruptions now directly halt business operations, making network reliability as vital as cybersecurity. "Modern enterprises are building their products and services on connectivity. When it stops, business stops," he noted.

The AI multiplier
AI adoption is compounding the challenge. "AI is not just another workload—it's a new kind of demand," Avelange said. AI-driven automation, real-time data flows, and low-latency interactions place unprecedented pressure on legacy network architectures. Organizations can no longer treat networking as a commodity; they must rethink it as a strategic platform requiring redesign and intelligent automation.

The human factor
According to Avelange, the real shortage isn't people—it's adaptability. The industry needs professionals skilled in network automation, data flow optimization, and problem solving, not just hardware management. "AI won't solve your problem if you don't understand the problem," he said, advocating for upskilling internal teams alongside strong partnerships with managed service providers (MSPs) that bring intelligence, not just infrastructure.

Latency by design
Latency, Avelange warned, must be addressed before deployment. "You can always add bandwidth, but you can't add speed after the fact. Latency has to be engineered from the start."

A new mindset
For Expereo, the future of networking lies in intelligent connectivity—solutions that merge automation, analytics, and agility to keep enterprises resilient in the AI era. "We're not selling boxes," Avelange said. "We're helping companies design the networks their digital business runs on." Read more in the Horizon Telecom Report or visit expereo.com.
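Avelange's point that speed can't be added after the fact follows from physics: propagation delay sets a floor that no amount of extra bandwidth can lower. A minimal sketch of that floor, with route distances that are our own illustrative assumptions rather than figures from the interview:

```python
# Why bandwidth can't fix latency: light in optical fiber covers roughly
# 200 km per millisecond, which puts a hard floor under round-trip time
# regardless of link capacity.
C_FIBER_KM_PER_MS = 200  # approximate speed of light in fiber

def min_rtt_ms(route_km: float) -> float:
    """Best-case round-trip time over a fiber path of the given length,
    ignoring queuing, serialization, and processing delays (real RTTs
    are always higher)."""
    return 2 * route_km / C_FIBER_KM_PER_MS

# Illustrative route lengths (assumed, not measured paths):
for label, km in [("Same metro", 50), ("NY-London", 5600), ("NY-Singapore", 15300)]:
    print(f"{label}: >= {min_rtt_ms(km):.1f} ms")
```

A transatlantic path of ~5,600 km cannot round-trip in under ~56 ms no matter how fat the pipe, which is why latency-sensitive designs move compute closer to users instead of buying more bandwidth.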
In this Mission Matters session hosted by Adam Torres, Eraj Akhtar (CTO & Co-Founder, Excite Capital LLC) and Namuun Battulga (CEO, Jenko Tour JSC & Igo Hotel and Resorts) discuss physics-based, quantum-inspired AI trading and Mongolia's emergence as a cost-efficient, secure data center location powered by a new 70MW plant. They share partner criteria, address security considerations, and outline a mission to scale globally distributed compute and real-economy growth across Asia. Follow Adam on Instagram at https://www.instagram.com/askadamtorres/ for up to date information on book releases and tour schedule. Apply to be a guest on our podcast: https://missionmatters.lpages.co/podcastguest/ Visit our website: https://missionmatters.com/ More FREE content from Mission Matters here: https://linktr.ee/missionmattersmedia Learn more about your ad choices. Visit podcastchoices.com/adchoices
In this Mission Matters session hosted by Adam Torres, Eraj Akhtar (CTO & Co-Founder, Excite Capital LLC) and Namuun Battulga (CEO, Jenko Tour JSC & Igo Hotel and Resorts) discuss physics-based, quantum-inspired AI trading and Mongolia's emergence as a cost-efficient, secure data center location powered by a new 70MW plant. They share partner criteria, address security considerations, and outline a mission to scale globally distributed compute and real-economy growth across Asia. Follow Adam on Instagram at https://www.instagram.com/askadamtorres/ for up to date information on book releases and tour schedule. Apply to be a guest on our podcast: https://missionmatters.lpages.co/podcastguest/ Visit our website: https://missionmatters.com/ More FREE content from Mission Matters here: https://linktr.ee/missionmattersmedia Learn more about your ad choices. Visit podcastchoices.com/adchoices
Show Notes:

Chapters
00:00 Introduction and Background of Lighter
02:26 The Launch of Lighter and Its Features
05:25 Transition from Private to Public Beta
07:56 Trading Volume and Metrics
10:37 Open Interest and Volume Dynamics
13:31 Incentive Programs and User Engagement
15:56 Points System and User Behavior
18:42 Future Developments and Season Two
21:27 Verifiable Matching and Liquidations
24:09 Fee Structure and Token Philosophy
24:45 Retail vs. Professional Trading
27:12 Fee Structures and Trading Tiers
29:00 Latency and Advantages for Premium Accounts
32:43 Order Flow and System Verification
35:40 Single Sequencer Challenges
38:46 Auto-Deleveraging and Liquidation Processes
41:33 Criteria for Asset Listings
43:25 Community-Driven Regional Strategies

If you like this episode, you're welcome to tip with Ethereum / Solana / Bitcoin:
ETH: 0x83Fe9765a57C9bA36700b983Af33FD3c9920Ef20
SOL: AaCeeEX5xBH6QchuRaUj3CEHED8vv5bUizxUpMsr1Kyt
BTC: 3ACPRhHVbh3cu8zqtqSPpzNnNULbZwaNqG

Important Disclaimer: All opinions expressed by Mable Jiang, or other podcast guests, are solely their opinion. This podcast is for informational purposes only and should not be construed as investment advice. Mable Jiang may hold positions in some of the projects discussed on this show.
A fantastic episode with one of the founding minds behind Broadband, Jason Livingood and I chat about luck, experience, the shifting landscape of broadband speed and content, and the good work of people like the late Dave Taht that has changed the way we treat the “old ideas” of how Internet services should be delivered.
Join Alex Golding as he sits down with Austin Federa, Co-founder of DoubleZero, to explore how they're building permissionless high-performance fiber infrastructure that could revolutionize blockchain performance. Austin shares the technical vision behind creating a parallel internet for distributed systems, starting with Solana validators as their initial market.
DoubleZero: https://doublezero.xyz
Doug Madory joins us to unpack the recent Red Sea submarine cable cuts and how Kentik's Cloud Latency Map revealed the global impact in real-time, offering critical insight into cloud performance, interconnectivity, and internet resilience.
What if AI could tap into live operational data — without ETL or RAG? In this episode, Deepti Srivastava, founder of Snow Leopard, reveals how her company is transforming enterprise data access with intelligent data retrieval, semantic intelligence, and a governance-first architecture. She shares her insights on bridging the gap between live operational data and generative AI — and how it's changing the game for enterprises worldwide. Tune in for a fresh perspective on the future of AI and the startup journey behind it.

04:54 Meeting Deepti Srivastava
14:06 AI with No ETL, No RAG
17:11 Snow Leopard's Intelligent Data Fetching
19:00 Live Query Challenges
21:01 Snow Leopard's Secret Sauce
22:14 Latency
23:48 Schema Changes
25:02 Use Cases
26:06 Snow Leopard's Roadmap
29:16 Getting Started
33:30 The Startup Journey
34:12 A Woman in Technology
36:03 The Contrarian View
Jimmy Bogard joins PodRocket to talk about making monoliths more modular, why boundaries matter, and how to avoid turning systems into distributed monoliths. From refactoring techniques and database migrations at scale to lessons from Stripe and WordPress, he shares practical ways to balance architecture choices. We also explore how tools like Claude and Lambda fit into modern development and what teams should watch for with latency, transactions, and growing complexity. Links Website: https://www.jimmybogard.com X: https://x.com/jbogard Github: https://github.com/jbogard LinkedIn: https://www.linkedin.com/in/jimmybogard/ Resources Modularizing the Monolith - Jimmy Bogard - NDC Oslo 2024: https://www.youtube.com/watch?v=fc6_NtD9soI We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Fill out our listener survey (https://t.co/oKVAEXipxu)! Let us know by sending an email to our producer, Em, at emily.kochanek@logrocket.com (mailto:emily.kochanek@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today. (https://logrocket.com/signup/?pdr) Special Guest: Jimmy Bogard.
Monitoring and troubleshooting latency can be tricky. If it’s in the network, was it the IP stack? A NIC? A switch buffer? A middlebox somewhere on the WAN? If it’s the application, can you, the network engineer, bring receipts to the app team? And what if you need to build and operate a network that’s... Read more »
Today we are joined by Gorkem and Batuhan from Fal.ai, the fastest growing generative media inference provider. They recently raised a $125M Series C and crossed $100M ARR. We covered how they pivoted from dbt pipelines to diffusion model inference, the models that really changed the trajectory of image generation, and the future of AI video. Enjoy!

00:00 - Introductions
04:58 - History of Major AI Models and Their Impact on Fal.ai
07:06 - Pivoting to Generative Media and Strategic Business Decisions
10:46 - Technical Discussion on CUDA Optimization and Kernel Development
12:42 - Inference Engine Architecture and Kernel Reusability
14:59 - Performance Gains and Latency Trade-offs
15:50 - Discussion of Model Latency Importance and Performance Optimization
17:56 - Importance of Latency and User Engagement
18:46 - Impact of Open Source Model Releases and Competitive Advantage
19:00 - Partnerships with Closed Source Model Developers
20:06 - Collaborations with Closed-Source Model Providers
21:28 - Serving Audio Models and Infrastructure Scalability
22:29 - Serverless GPU Infrastructure and Technical Stack
23:52 - GPU Prioritization: H100s and Blackwell Optimization
25:00 - Discussion on ASICs vs. General Purpose GPUs
26:10 - Architectural Trends: MMDiTs and Model Innovation
27:35 - Rise and Decline of Distillation and Consistency Models
28:15 - Draft Mode and Streaming in Image Generation Workflows
29:46 - Generative Video Models and the Role of Latency
30:14 - Auto-Regressive Image Models and Industry Reactions
31:35 - Discussion of OpenAI's Sora and Competition in Video Generation
34:44 - World Models and Creative Applications in Games and Movies
35:27 - Video Models' Revenue Share and Open-Source Contributions
36:40 - Rise of Chinese Labs and Partnerships
38:03 - Top Trending Models on Hugging Face and ByteDance's Role
39:29 - Monetization Strategies for Open Models
40:48 - Usage Distribution and Model Turnover on FAL
42:11 - Revenue Share vs. Open Model Usage Optimization
42:47 - Moderation and NSFW Content on the Platform
44:03 - Advertising as a Key Use Case for Generative Media
45:37 - Generative Video in Startup Marketing and Virality
46:56 - LoRA Usage and Fine-Tuning Popularity
47:17 - LoRA Ecosystem and Fine-Tuning Discussion
49:25 - Post-Training of Video Models and Future of Fine-Tuning
50:21 - ComfyUI Pipelines and Workflow Complexity
52:31 - Requests for Startups and Future Opportunities in the Space
53:33 - Data Collection and RedPajama-Style Initiatives for Media Models
53:46 - RL for Image and Video Models: Unknown Potential
55:11 - Requests for Models: Editing and Conversational Video Models
57:12 - VO3 Capabilities: Lip Sync, TTS, and Timing
58:23 - Bitter Lesson and the Future of Model Workflows
58:44 - FAL's Hiring Approach and Team Structure
59:29 - Team Structure and Scaling Applied ML and Performance Teams
1:01:41 - Developer Experience Tools and Low-Code/No-Code Integration
1:03:04 - Improving Hiring Process with Public Challenges and Benchmarks
1:04:02 - Closing Remarks and Culture at FAL
In a time when the world runs on data and real-time actions, edge computing is quickly becoming a must-have in enterprise technology. In a recent episode of the Tech Transformed podcast, host Shubhangi Dua, a podcast producer and B2B tech journalist, discusses the complexities of this distributed future with guest Dmitry Panenkov, Founder and CEO of emma. The conversation dives into how latency is the driving force behind edge adoption. Applications like autonomous vehicles and real-time analytics cannot afford to wait on a round trip to a centralised data centre; they need to compute where the data is generated. Rather than viewing edge as a rival to the cloud, the discussion highlights it as a natural extension. Edge environments bring speed, resilience, and data control, all necessary capabilities for modern applications.

Adopting Edge Computing
For organisations looking to adopt edge computing, this episode lays out a practical step-by-step approach. The skills necessary in multi-cloud environments, such as automation, infrastructure as code, and observability, translate well to edge deployments. These capabilities are essential for managing the unique challenges of edge devices, which may be disconnected, have lower power, or be located in hard-to-reach areas. Without this level of operational maturity, Panenkov warns of a "zombie apocalypse" of unmanaged devices.

Simplifying Complexity
Managing different APIs, SDKs, and vendor lock-ins across a distributed network can be a challenging task, and this is where platforms like emma become crucial. Alluding to emma's mission, Panenkov explains, "We're building a unified platform that simplifies the way people interact with different cloud and computer environments, whether these are in a public setting or private data centres or even at the edge." Overall, emma creates a unified API layer and user interface, which simplifies the complexity.
It helps businesses manage, automate, and scale their workloads from a single vantage point and reduces the burden on IT teams. It also reduces the need for a large team of highly skilled professionals, which leads to substantial cost savings. emma's customers have seen their cloud bills go down significantly, and updates can be rolled out much faster using the platform.

Takeaways
Edge computing is becoming a reality for more organisations. Latency-sensitive applications drive the need for edge computing. Real-time analytics and industry automation benefit from edge computing. Edge computing enhances resilience, cost efficiency, and data sovereignty. Integrating edge into cloud strategies requires automation and observability. Maturity in operational practices, like automation and observability, is essential for...
Join Lois Houston and Nikita Abraham as they chat with Yunus Mohammed, a Principal Instructor at Oracle University, about the key stages of AI model development. From gathering and preparing data to selecting, training, and deploying models, learn how each phase impacts AI's real-world effectiveness. The discussion also highlights why monitoring AI performance and addressing evolving challenges are critical for long-term success. AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/252500 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hey everyone! In our last episode, we spoke about generative AI and gen AI agents. Today, we're going to look at the key stages in a typical AI workflow. We'll also discuss how data quality, feedback loops, and business goals influence AI success. With us today is Yunus Mohammed, a Principal Instructor at Oracle University. 01:00 Lois: Hi Yunus! We're excited to have you here! Can you walk us through the various steps in developing and deploying an AI model? Yunus: The first step is to collect data. We gather relevant data, either historical or real time, like customer transactions, support tickets, survey feedback, or sensor logs.
A travel company, for example, can collect past booking data to predict future demand. So, data is the most crucial component for building your AI models. But it's not just about having the data; you need to prepare it. In the prepare data step, we clean, organize, and label the data. AI can't learn from messy spreadsheets. We make the data more understandable and organized by removing duplicates, filling missing values with defaults, or formatting dates consistently. All of this comes under organizing the data, and we label the data so it can be used for supervised learning. After preparing the data, I go for selecting the model to train. Now we pick what type of model fits your goals. It can be a traditional ML model, a deep learning network, or a generative model. The model is chosen based on the business problem and the data we have. Then we train the model using the prepared data so it can learn the data's patterns. After the model is trained, I need to evaluate it. You check how well the model performs. Is it accurate? Is it fair? The evaluation metrics will vary based on the goal you're trying to reach. If your model frequently misclassifies legitimate emails as spam, then it is not ready, and I need to train it further, to the point where it identifies official mail as official mail and spam as spam accurately. After evaluating and making sure your model fits well, you go to the next step, which is called deploy model. Once we are happy, we put it into the real world: into a CRM, a web application, or an API. So I can expose it through an API (application programming interface), add it to a CRM (Customer Relationship Management system), or add it to a web application that I've got.
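The cleaning steps Yunus describes here (removing duplicates, filling missing values with defaults, and formatting dates consistently) can be sketched in a few lines of Python. The booking records, field names, and default age below are illustrative assumptions, not anything from the episode.

```python
from datetime import datetime

# Toy booking records in the spirit of the travel-company example.
raw = [
    {"customer": "a1", "age": "34", "booked": "2024-03-05"},
    {"customer": "a1", "age": "34", "booked": "2024-03-05"},  # exact duplicate
    {"customer": "b2", "age": None, "booked": "05/04/2024"},  # missing age, different date format
]

def prepare(rows, default_age=30):
    seen, clean = set(), []
    for row in rows:
        key = tuple(sorted(row.items()))  # drop exact duplicate rows
        if key in seen:
            continue
        seen.add(key)
        # fill the missing value with a default
        age = int(row["age"]) if row["age"] is not None else default_age
        # normalize both date formats to a single ISO format
        for fmt in ("%Y-%m-%d", "%d/%m/%Y"):
            try:
                booked = datetime.strptime(row["booked"], fmt).date().isoformat()
                break
            except ValueError:
                continue
        clean.append({"customer": row["customer"], "age": age, "booked": booked})
    return clean
```

Running `prepare(raw)` leaves two records: the duplicate is dropped, the missing age is imputed, and both dates come out in the same format.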
For example, a chatbot becomes available on your company's website, and that chatbot might be using a generative AI model. Once I have deployed the model and it is working fine, I need to keep track of how it is working and improve it whenever needed. So I go to a stage called monitor and improve. AI isn't "set it and forget it." Over time, a lot of changes happen to the data, so we monitor performance and retrain when needed. An e-commerce recommendation model, for instance, needs updates as trends shift. The end user finally sees the results after all these processes: a better product, a smarter service, or faster decisions, if we do this right. If we execute the flow perfectly, they may not even realize AI is behind the accurate results they get. 04:59 Nikita: Got it. So, everything in AI begins with data. But what are the different types of data used in AI development? Yunus: We work with three main types of data: structured, unstructured, and semi-structured. Structured data is like a clean set of tables in Excel or a database: rows and columns with clear, consistent information. Unstructured data is messy data, like emails, recorded customer calls, videos, or social media posts. Semi-structured data is things like logs in XML or JSON files: not quite neat, but not entirely messy either. So you've got structured, unstructured, and semi-structured.
Deep learning needs a lot of data, usually unstructured, like thousands of loan documents, call recordings, or scanned checks. These are fed into neural networks to detect complex patterns. Data science focuses on insights rather than predictions. A data scientist at the bank might use customer relationship management exports and customer demographics to analyze which age group prefers credit cards over loans. Then we have generative AI, which thrives on diverse, unstructured, internet-scale data: books, code, images, chat logs. These models, like ChatGPT, are trained to generate responses, mimic styles, and synthesize content. So generative AI can power a banking virtual assistant trained on chat logs and frequently asked questions to answer customer queries 24/7. 07:35 Lois: What are the challenges when dealing with data? Yunus: Data isn't just about having enough. We must also think about quality: is it accurate and relevant? Volume: do we have enough for the model to learn from? Bias: does my data contain unfairly skewed structures, like rejecting more loan applications from a certain zip code? And also privacy: are we handling personal data responsibly, especially data that is critical or regulated, like banking data or patients' health data? Before building anything smart, we must start smart. 08:23 Lois: So, we've established that collecting the right data is non-negotiable for success. Then comes preparing it, right? Yunus: This is arguably the most important part of any AI or data science project. Clean data leads to reliable predictions. Imagine you have a column for age, and someone accidentally entered an age of 999. That's likely a data entry error. Or maybe a few rows have missing ages. So we either fix, remove, or impute such issues.
This step ensures our model isn't misled by incorrect values. Dates are often stored in different formats. For instance, a date can be stored month-first in some places and day-first in others. We want to bring everything into a consistent, usable format. This process is called transformation. Machine learning models can get confused if one feature, like income, ranges from 10,000 to 100,000 while another, like the number of kids, ranges from 0 to 5. So we normalize or scale values to bring them into a similar range, say 0 to 1. Models also don't understand words like small, medium, or large, so we convert them into numbers using encoding; one simple way is assigning 1, 2, and 3 respectively. Then there's removing stop words and punctuation and breaking sentences into smaller meaningful units called tokens, which is used for generative AI tasks. In deep learning, especially for gen AI, image or audio inputs must be of uniform size and format. 10:31 Lois: And does each AI system have a different way of preparing data? Yunus: For machine learning, the focus is on cleaning, encoding, and scaling. Deep learning needs resizing and normalization for text and images. Data science is about reshaping, aggregating, and getting data ready for insights. Generative AI needs special preparation like chunking, tokenizing large documents, or compressing images. 11:06 Oracle University's Race to Certification 2025 is your ticket to free training and certification in today's hottest tech. Whether you're starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That's education.oracle.com/race-to-certification-2025.
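The transformation steps Yunus described (scaling values into a 0-to-1 range, encoding small/medium/large as 1/2/3, and tokenizing text with stop-word removal) can be sketched as follows; the income figures, size mapping, and stop-word list are assumptions for illustration.

```python
def min_max_scale(values):
    # bring a numeric feature into the range [0, 1]
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# the simple 1/2/3 encoding mentioned in the episode
SIZE_CODES = {"small": 1, "medium": 2, "large": 3}

incomes = [10_000, 55_000, 100_000]
scaled = min_max_scale(incomes)            # now comparable to features like "number of kids"
sizes = ["small", "large", "medium"]
encoded = [SIZE_CODES[s] for s in sizes]

def tokenize(sentence, stop_words={"the", "a", "an"}):
    # strip punctuation, lowercase, split, and drop stop words
    words = sentence.lower().replace(",", " ").replace(".", " ").split()
    return [w for w in words if w not in stop_words]
```

With these toy values, `scaled` comes out as `[0.0, 0.5, 1.0]` and `tokenize("The cat sat on a mat.")` yields the tokens without "the" and "a".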
11:50 Nikita: Welcome back! Yunus, how does a user choose the right model to solve their business problem? Yunus: Just like a business uses different dashboards for marketing versus finance, in AI we use different model types depending on what we are trying to solve. Classification is choosing a category; a real-world example is deciding whether an email is spam or not, and it's used in fraud detection, medical diagnosis, et cetera. Regression is used for predicting a number, like what the price of a house will be next month; it's useful for forecasting sales demand or costs. Clustering groups things without labels; a real-world example is segmenting customers based on behavior for targeted marketing, and it helps discover hidden patterns in large data sets. Generation is creating new content; AI writing a product description or generating images is a real-world example, as in generative AI models like ChatGPT or DALL-E. 13:16 Nikita: And how do you train a model? Yunus: We feed it data in small chunks or batches, then compare its guesses to the correct values, adjusting its weights to improve next time, and the cycle repeats until the model gets good at making predictions. If you're building a fraud detection system, ML may be enough. If you want to analyze medical images, you will need deep learning. If you're building a chatbot, go for a generative model like an LLM. For all of these use cases, you need to select and train the appropriate model. 14:04 Lois: OK, now that the model's been trained, what else needs to happen before it can be deployed? Yunus: Evaluate the model: assess its accuracy, reliability, and real-world usefulness before it's put to work.
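The batch-wise training cycle Yunus outlines (guess, compare against the correct value, adjust the weights, repeat) can be illustrated with a minimal gradient-descent loop; the toy dataset, learning rate, and epoch count below are assumptions, not anything from the episode.

```python
# toy data following y = 2x, so the learned weight should approach 2
data = [(x, 2.0 * x) for x in range(1, 9)]

def train(data, epochs=200, lr=0.01, batch_size=2):
    w = 0.0  # a single weight, the model's "thinking"
    for _ in range(epochs):
        for i in range(0, len(data), batch_size):
            batch = data[i:i + batch_size]
            # compare the model's guesses (w * x) with the correct values (y)
            # and compute the average gradient of squared error over the batch
            grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
            w -= lr * grad  # adjust the weight to improve next time
    return w
```

After enough epochs the weight settles very close to 2, which is the "gets good at making predictions" end state of the cycle.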
That is, how often is the model right? Does it consistently perform well? Is it practical to use this model in the real world? Bad predictions don't just look bad; they can lead to costly business mistakes, like recommending the wrong product to a customer or misidentifying a financial risk. So what we do here is start by splitting the data into two parts. We train the model with the training data; this is like teaching the model. Then we have the testing data, which is used for checking how well the model has learned. Once trained, the model makes predictions, and we compare the predictions to the actual answers, just like checking your answers after a quiz. We tailor evaluation to the AI type. In machine learning, we care about prediction accuracy. Deep learning is about fitting complex data like voice or images, where the model repeatedly sees examples and tunes itself to reduce errors. In data science, we look for patterns and insights, such as which features matter. In generative AI, we judge by output quality: is it coherent, useful, and natural? The model improves with accuracy and with the number of epochs it has been trained for. 15:59 Nikita: So, after all that, we finally come to deploying the model… Yunus: Deploying a model means we are integrating it into our actual business system so it can start making decisions, automating tasks, or supporting customer experiences in real time. Think of it like this: training is teaching the model, evaluating is testing it, and deployment is giving it a job. The model needs a home, either in the cloud or on your company's own servers; think of it like putting the AI in a place where it can be reached by other tools. Exposed via an API or embedded in an application, this is how the AI becomes usable. Then we have the concept of receiving live data and returning predictions.
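The evaluation flow described here (split into training and testing data, predict on the held-out part, compare predictions to the actual answers) can be sketched as below. The toy spam examples and the length-based stand-in "model" are purely illustrative assumptions.

```python
def holdout_split(examples, test_fraction=0.25):
    # first part teaches the model, second part checks what it learned
    cut = int(len(examples) * (1 - test_fraction))
    return examples[:cut], examples[cut:]

def accuracy(predictions, answers):
    # "checking your answers after a quiz"
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

# toy labeled set: (email text, is_spam). The "model" is a hypothetical
# stand-in that flags anything longer than 20 characters as spam.
examples = [("hi", 0), ("team meeting at 3pm", 0),
            ("WIN A FREE CRUISE NOW!!!", 1), ("cheap pills, click this link", 1)]
train_set, test_set = holdout_split(examples, test_fraction=0.5)
model = lambda text: 1 if len(text) > 20 else 0
preds = [model(text) for text, _ in test_set]
score = accuracy(preds, [label for _, label in test_set])
```

Real projects would train the model on `train_set` rather than hard-coding a rule, but the split-predict-compare shape is the same.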
Receiving live data and returning predictions means the model listens to real-time inputs, like a user typing, searching, clicking, or making a transaction, and your AI instantly responds with a recommendation, decision, or result. Deploying the model isn't the end of the story; it is just the beginning of the AI's real-world journey. Models may work well on day one, but things change. Customer behavior might shift. New products get introduced in the market. Economic conditions might evolve, like the era of COVID, when demand shifted and economic conditions changed. 17:48 Lois: Then it's about monitoring and improving the model to keep things reliable over time. Yunus: The monitor-and-improve loop is a continuous process that ensures an AI model remains accurate, fair, and effective after deployment. In live prediction, the model is running in real time, making decisions or recommendations. In monitoring performance, we ask: are those predictions still accurate and helpful? Is latency acceptable? This is where we track metrics, user feedback, and operational impact. Then we detect issues: is accuracy declining, are responses biased, are customers dropping off due to long response times? The next step is to retrain or update the model: we add fresh data, tweak the logic, or even use a better architecture; we deploy the updated model, the new version replaces the old one, and the cycle continues. 18:58 Lois: And are there challenges during this step? Yunus: The common issues related to monitor and improve are model drift, bias, and latency or failures. In model drift, the model becomes less accurate as the environment changes. With bias, the model may favor or penalize certain groups unfairly. With latency or failures, if the model is too slow or fails unpredictably, it disrupts the user experience. Let's take loan approvals.
In loan approvals, if we notice an unusually high rejection rate due to model bias, we might retrain the model with more diverse or balanced data. For a chatbot, we watch for drops in customer satisfaction, which might arise from model failures, and fine-tune the model's responses. In demand forecasting, if the predictions no longer match real trends, say post-pandemic, due to model drift, we update the model with fresh data. 20:11 Nikita: Thanks for that, Yunus. Any final thoughts before we let you go? Yunus: No matter how advanced your model is, its effectiveness depends on the quality of the data you feed it. That means the data needs to be clean, structured, and relevant, and it should map to the problem you're solving. If the foundation is weak, the results will be too. So data preparation is not just a technical step; it is a business-critical stage. Once deployed, AI systems must be monitored continuously: watch for drops in performance, bias creeping in, or outdated logic, and improve the model with new data or refinements. That's what makes AI reliable, ethical, and sustainable in the long run. 21:09 Nikita: Yunus, thank you for this really insightful session. If you're interested in learning more about the topics we discussed today, go to mylearn.oracle.com and search for the AI for You course. Lois: That's right. You'll find skill checks to help you assess your understanding of these concepts. In our next episode, we'll discuss the idea of buy versus build in the context of AI. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 21:39 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Yonatan Sompolinsky is an academic in the field of computer science, best known for his work on the GHOST protocol (Greedy Heaviest Observed Subtree, which was cited in the Ethereum whitepaper) and the way he applied his research to create Kaspa. In this episode, we talk about scaling Proof of Work and why Kaspa might be a worthy contender to process global payments. –––––––––––––––––––––––––––––––––––– Time stamps: 00:01:22 - Debunking rumors: Why some think Yonatan is Satoshi Nakamoto 00:02:52 - Candidates for Satoshi: Charles Hoskinson, Charlie Lee, Zooko, and Alex Chepurnoy 00:03:41 - Alex Chepurnoy as a Satoshi-like figure 00:04:07 - Kaspa overview: DAG structure, no orphaned blocks, generalization of Bitcoin 00:04:55 - Similarities between Kaspa and Bitcoin fundamentals 00:06:12 - Why Kaspa couldn't be built directly on Bitcoin 00:08:05 - Kaspa as generalization of Nakamoto consensus 00:11:55 - Origins of GHOST protocol and early DAG concepts for Bitcoin scaling 00:13:16 - Academic motivation for GHOST and transitioning to computer science 00:13:50 - Turtle pet named Bitcoin 00:15:22 - Increasing block rate in Bitcoin and GHOST protocol 00:16:57 - Meeting Gregory Maxwell and discovering GHOST flaws 00:20:00 - Yonatan's views on drivechains and Bitcoin maximalism 00:20:36 - Defining Bitcoin maximalism: Capital B vs lowercase b 00:23:18 - Satoshi's support for Namecoin and merged mining 00:24:12 - Bitcoin culture in 2013-2018: Opposing other functionalities 00:26:01 - Vitalik's 2014 article on Bitcoin maximalism 00:26:13 - Andrew Poelstra's opposition to other assets on Bitcoin 00:26:38 - Bitcoin culture: Distaste for DeFi, criticism of Ethereum as a scam 00:28:03 - Bitcoin Cash developments: Cash tokens, cash fusion, contracts 00:28:39 - Rejection of Ethereum in Bitcoin circles 00:30:18 - Ethereum's successful PoS transition despite critics 00:35:04 - Ethereum's innovation: From Plasma to ZK rollups, nurturing development 00:37:04 - Stacks protocol and 
criticism from Luke Dashjr 00:39:02 - Bitcoin culture justifying technical limitations 00:41:01 - Declining Bitcoin adoption as money, rise of altcoins for payments 00:43:02 - Kaspa's aspirations: Merging sound money with DeFi, beyond just payments 00:43:56 - Possibility of tokenized Bitcoin on Kaspa 00:46:30 - Native currency advantage and friction in bridges 00:48:49 - WBTC on Ethereum scale vs Bitcoin L2s 00:53:33 - Quotes: Richard Dawkins on atheism, Milton Friedman on Yap Island money 00:55:44 - Story of Kaspa's messy fair launch in 2021 01:14:08 - Tech demo of Kaspa wallet experience 01:28:45 - Kaspa confirmation times & transaction fees 01:43:26 - GHOST DAG visualizer 01:44:10 - Mining Kaspa 01:55:48 - Data pruning in Kaspa, DAG vs MimbleWimble 02:01:40 - Grin & the fairest launch 02:12:21 - Zcash scaling & ZKP OP code in Kaspa 02:19:50 - Jameson Lopp, cold storage & self custody elitism 02:35:08 - Social recovery 02:41:00 - Amir Taaki, DarkFi & DAO 02:53:10 - Nick Szabo's God Protocols 03:00:00 - Layer twos on Kaspa for DeFi 03:13:09 - How Kaspa's DeFi will resemble Solana 03:24:03 - Centralized exchanges vs DeFi 03:32:05 - The importance of community projects 03:37:00 - DAG KNIGHT and its resilience 03:51:00 - DAG KNIGHT tradeoffs 03:58:18 - Blockchain vs DAG, the bottleneck for Kaspa 04:03:00 - 100 blocks per second? 04:11:43 - Question from Quai's Dr. K 04:17:03 - Doesn't Kaspa require super fast internet? 04:23:10 - Are ASIC miners desirable? 
04:33:53 - Why Proof of Work matters 04:35:55 - A short history of Bitcoin mining 04:44:00 - DAG's sequencing 04:49:09 - Phantom GHOST DAG 04:52:47 - Why Kaspa had high inflation initially 04:55:10 - Selfish mining 05:03:00 - K Heavy Hash & other community questions 06:33:20 - Latency settings in DAG KNIGHT for security 06:36:52 - Aviv Zohar's involvement in Kaspa research 06:38:07 - World priced in Kaspa after hyperinflation 06:39:51 - Kaspa's fate intertwined with crypto 06:40:29 - Kaspa contracts vs Solana, why better for banks 06:42:53 - Cohesive developer experience in Kaspa like Solana 06:45:22 - Incorporating ZK design in Kaspa smart contracts 06:47:22 - Heroes: Garry Kasparov 06:48:12 - Shift in attitude from academics like Hoskinson, Buterin, Back 06:53:07 - Adam Back's criticism of Kaspa 06:55:57 - Michael Jordan and LeBron analogy for Bitcoiners' mindset 06:58:02 - Can Kaspa flip Bitcoin in market cap 07:00:34 - Gold and USD market cap comparison 07:06:06 - Collaboration with Kai team 07:10:37 - Community improvement: More context on crypto 07:13:43 - Theoretical maximum TPS for Kaspa 07:16:05 - Full ZK on L1 improvements 07:17:45 - Atomic composability and logic zones in Kaspa 07:23:12 - Sparkle and monolithic UX feel 07:26:00 - Wrapping up: Beating podcast length record, final thoughts on Bitcoin and Kaspa 07:27:31 - Why Yonatan called a scammer despite explanations 07:32:29 - Luke Dashjr's views and disconnect 07:33:01 - Hope for Bitcoin scaling and revolution
In this episode, Markus Viitamäki, Senior Infrastructure Architect at Embark Studios joins the podcast to discuss trends in the gaming industry, main drivers to creating a new game, and how much latency and bandwidth are affecting the gaming experience.
Andrew Lamb, a veteran of database engine development, shares his thoughts on why Rust is the right tool for developing low-latency systems, not only from the perspective of the code's performance, but also looking at productivity and developer joy. He discusses the overall experience of adopting Rust after a decade of programming in C/C++. Read a transcript of this interview: http://bit.ly/45qi4eK Subscribe to the Software Architects' Newsletter for your monthly guide to the essential news and experience from industry peers on emerging patterns and technologies: https://www.infoq.com/software-architects-newsletter Upcoming Events: InfoQ Dev Summit Munich (October 15-16, 2025) Essential insights on critical software development priorities. https://devsummit.infoq.com/conference/munich2025 QCon San Francisco 2025 (November 17-21, 2025) Get practical inspiration and best practices on emerging software trends directly from senior software developers at early adopter companies. https://qconsf.com/ QCon AI New York 2025 (December 16-17, 2025) https://ai.qconferences.com/ QCon London 2026 (March 16-19, 2026) https://qconlondon.com/ The InfoQ Podcasts: Weekly inspiration to drive innovation and build great teams from senior software leaders. Listen to all our podcasts and read interview transcripts: - The InfoQ Podcast https://www.infoq.com/podcasts/ - Engineering Culture Podcast by InfoQ https://www.infoq.com/podcasts/#engineering_culture - Generally AI: https://www.infoq.com/generally-ai-podcast/ Follow InfoQ: - Mastodon: https://techhub.social/@infoq - X: https://x.com/InfoQ?from=@ - LinkedIn: https://www.linkedin.com/company/infoq/ - Facebook: https://www.facebook.com/InfoQdotcom# - Instagram: https://www.instagram.com/infoqdotcom/?hl=en - Youtube: https://www.youtube.com/infoq - Bluesky: https://bsky.app/profile/infoq.com Write for InfoQ: Learn and share the changes and innovations in professional software development. - Join a community of experts. 
- Increase your visibility. - Grow your career. https://www.infoq.com/write-for-infoq
Latency, it's that tiny, annoying delay between what you say and what you hear back. In the studio, online, or even in your own headphones, it can trip you up, wreck your timing, and make you feel like you're talking to yourself in slow motion. In this episode of The Pro Audio Suite, Robbo, AP, George, and Robert dig into: What latency really is (and why it's not just a tech buzzword) How it sneaks into your recording chain The difference between “good” latency and “bad” latency Fixes you can do right now without buying a new rig When hardware or interface upgrades actually make sense Whether you're a VO artist fighting through a remote session, a podcaster dealing with talkback lag, or a studio pro chasing perfect sync, this is your guide to killing the delay and getting back in the groove. Proudly supported by Tri-Booth and Austrian Audio, we only partner with brands we believe in, and these two deliver the goods for pro audio pros like you.
“I have a regular chat with a friend of mine in New Zealand. He's a tetraplegic and a musician, so he invents his own music instruments that he can play with his limited motion, and he can send me his instrument over MIDI to where I am across the ocean and we can play together and we can have an engagement. It's not possible for him to come to see me in Europe. It would be so expensive, and a lot of work. So, you know, thank God there's the internet for him, you know. He gets to participate, he has remote concerts, he still plays with his friends. It's really special.” – Rebekah Wilson This episode is the second half of my conversation with Source Elements CEO and remote collaboration specialist Rebekah Wilson as we discuss how physics and neurology collide when it comes to reducing latency, how the pandemic transformed online music collaboration and gave rise to today's generation of at-home musicians, and where Rebekah sees sound, technology, and music itself heading in the future, over both the coming decades and even generations from now. As always, if you have questions for my guest, you're welcome to reach out through the links in the show notes. If you have questions for me, visit audiobrandingpodcast.com, where you'll find a lot of ways to get in touch. Plus, subscribing to the newsletter will let you know when the new podcasts are available, along with other interesting bits of audio-related news. And if you're getting some value from listening, the best ways to show your support are to share this podcast with a friend and leave an honest review. Both those things really help, and I'd love to feature your review on future podcasts. You can leave one either in written or in voice format from the podcast's main page. I would so appreciate that. (0:00:01) - Impact of Latency on Music CollaborationWe continue our talk about the science of latency, and Rebekah explains how it impacts music in ways that our brains only dimly perceive. 
“If you add a little bit of latency onto that,” she says, “music's like, one, two... three… music's not very friendly to that [sort of] latency.” She tells us more about how our brains unconsciously adapt to latency, and how technology relies both on improving speed and taking advantage of our ability to filter out information gaps. “What's happening is that you're anticipating it based on this model that's in your brain,” Rebekah explains. “For example, every time you look at a wall or your surroundings, if it's not moving, your brain's not processing it.”(0:06:02) - Advancements in Remote Music CollaborationShe talks about how the COVID-19 pandemic's lockdown phase led to a boom in online collaboration, some of which continues to thrive today. “There remained a group of people,” she says, “a small group of people, you know, scattered around the world… who were like, ‘You know what? Some interesting things came out of this. Some interesting artistic development is possible here and it's worth pursuing.” We discuss the technical and creative innovations that emerged from that period, and where they might lead in the years to come as we continue to innovate. “What we love as humans,” Rebekah says, “is to seek new forms of expression. This is what we do, we're adventurers. So we go out, we go into the desert, we go out into the oceans, and we look for where something new is. And you know, music and performance and being together on the internet is still very new for us as humans.”(0:12:42) - Expanding Music Collaboration With TechnologyOur conversation wraps up as we continue to talk about online collaboration and creative efforts that can now
Real-time Feature Generation at Lyft // MLOps Podcast #334 with Rakesh Kumar, Senior Staff Software Engineer at Lyft.

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter

// Abstract
This session delves into real-time feature generation at Lyft. Real-time feature generation is critical for Lyft, where accurate, up-to-the-minute marketplace data is paramount for optimal operational efficiency. We will explore how the infrastructure handles the immense challenge of processing tens of millions of events per minute to generate features that truly reflect current marketplace conditions. Lyft has built this massive infrastructure over time, evolving from a humble start and a naive pipeline. Through lessons learned and iterative improvements, Lyft has made several trade-offs to achieve low-latency, real-time feature delivery. MLOps plays a critical role in managing the lifecycle of these real-time feature pipelines, including monitoring and deployment. We will discuss the practicalities of building and maintaining high-throughput, low-latency real-time feature generation systems that power Lyft's dynamic marketplace and business-critical products.

// Bio
Rakesh Kumar is a Senior Staff Software Engineer at Lyft, specializing in building and scaling Machine Learning platforms. Rakesh has expertise in MLOps, including real-time feature generation, experimentation platforms, and deploying ML models at scale. He is passionate about sharing his knowledge and fostering a culture of innovation.
This is evident in his contributions to the tech community through blog posts, conference presentations, and reviewing technical publications.

// Related Links
Website: https://englife101.io/
https://eng.lyft.com/search?q=rakesh
https://eng.lyft.com/real-time-spatial-temporal-forecasting-lyft-fa90b3f3ec24
https://eng.lyft.com/evolution-of-streaming-pipelines-in-lyfts-marketplace-74295eaf1eba
Streaming Ecosystem Complexities and Cost Management // Rohit Agrawal // MLOps Podcast #302 - https://youtu.be/0axFbQwHEh8

~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Rakesh on LinkedIn: /rakeshkumar1007/

Timestamps:
[00:00] Rakesh's preferred coffee
[00:24] Real-time machine learning
[04:51] Latency tricks explanation
[09:28] Real-time problem evolution
[15:51] Config management complexity
[18:57] Data contract implementation
[23:36] Feature store
[28:23] Offline vs online workflows
[31:02] Decision-making in tech shifts
[36:54] Cost evaluation frequency
[40:48] Model feature discussion
[49:09] Hot shard tricks
[55:05] Pipeline feature bundling
[57:38] Wrap up
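As a toy illustration of the kind of feature the abstract describes (my sketch, emphatically not Lyft's actual pipeline), here is a minimal sliding-window counter that turns a stream of timestamped events into an "events in the last minute" feature per region:

```python
# Toy sketch (hypothetical, not Lyft's implementation): a sliding-window
# count feature, e.g. "ride requests per region in the last 60 seconds",
# computed incrementally as events arrive.
from collections import defaultdict, deque

class SlidingWindowCounter:
    def __init__(self, window_seconds: int = 60):
        self.window = window_seconds
        self.events = defaultdict(deque)  # region -> timestamps, oldest first

    def record(self, region: str, ts: float) -> None:
        """Ingest one event; assumes timestamps arrive roughly in order."""
        self.events[region].append(ts)

    def feature(self, region: str, now: float) -> int:
        """Evict expired timestamps, then return the live count."""
        q = self.events[region]
        while q and q[0] <= now - self.window:
            q.popleft()
        return len(q)

counter = SlidingWindowCounter(window_seconds=60)
for t in (0, 10, 30, 70):
    counter.record("sf", t)
print(counter.feature("sf", now=75))  # events at t=30 and t=70 remain -> 2
```

A real system at tens of millions of events per minute would shard this state across a stream processor and contend with late events and hot keys, which is exactly the territory the episode's "hot shard tricks" and data-contract discussions cover.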
This week, we highlight Disney's new content bundling deals with Charter and ITV, as well as Paramount's new agreement with DIRECTV. We discuss Amazon's investment in AI to help remaster SD content into HD, and what we think the impact of AI can be on the video streaming workflow. We also discuss why the media is getting it wrong when it suggests that Google has won the living room, as YouTube tightens its enforcement of its monetization guidelines. We break down the latest rumors that Apple will acquire Formula 1 content and why Apple's F1 movie was about more than just content, since it ties into the hardware of the iPhone. Finally, we discuss some recent online comments about ultra-low-latency deployments for media use cases that overlook any tangible or measurable business benefit.

Podcast produced by Security Halt Media
Our 216th episode with a summary and discussion of last week's big AI news! Recorded on 07/11/2025. Hosted by Andrey Kurenkov and Jeremie Harris. Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai. Read our text newsletter and comment on the podcast at https://lastweekin.ai/.

In this episode:
xAI launches Grok 4 with breakthrough performance across benchmarks, becoming the first true frontier model outside established labs, alongside a $300/month subscription tier
Grok's alignment challenges emerge with antisemitic responses, highlighting the difficulty of steering models toward "truth-seeking" without harmful biases
Perplexity and OpenAI launch AI-powered browsers to compete with Google Chrome, signaling a major shift in how users interact with AI systems
METR study reveals AI tools actually slow down experienced developers by 20% on complex tasks, contradicting expectations and anecdotal reports of productivity gains

Timestamps + Links:
(00:00:10) Intro / Banter
(00:01:02) News Preview

Tools & Apps
(00:01:59) Elon Musk's xAI launches Grok 4 alongside a $300 monthly subscription | TechCrunch
(00:15:28) Elon Musk's AI chatbot is suddenly posting antisemitic tropes
(00:29:52) Perplexity launches Comet, an AI-powered web browser | TechCrunch
(00:32:54) OpenAI is reportedly releasing an AI browser in the coming weeks | TechCrunch
(00:33:27) Replit Launches New Feature for its Agent, CEO Calls it ‘Deep Research for Coding'
(00:34:40) Cursor launches a web app to manage AI coding agents
(00:36:07) Cursor apologizes for unclear pricing changes that upset users | TechCrunch

Applications & Business
(00:39:10) Lovable on track to raise $150M at $2B valuation
(00:41:11) Amazon built a massive AI supercluster for Anthropic called Project Rainier – here's what we know so far
(00:46:35) Elon Musk confirms xAI is buying an overseas power plant and shipping the whole thing to the U.S. to power its new data center — 1 million AI GPUs and up to 2 gigawatts of power under one roof, equivalent to powering 1.9 million homes
(00:48:16) Microsoft's own AI chip delayed six months in major setback — in-house chip now reportedly expected in 2026, but won't hold a candle to Nvidia Blackwell
(00:49:54) Ilya Sutskever becomes CEO of Safe Superintelligence after Meta poached Daniel Gross
(00:52:46) OpenAI's Stock Compensation Reflects Steep Costs of Talent Wars

Projects & Open Source
(00:58:04) Hugging Face Releases SmolLM3: A 3B Long-Context, Multilingual Reasoning Model - MarkTechPost
(00:58:33) Kimi K2: Open Agentic Intelligence
(00:58:59) Kyutai Releases 2B Parameter Streaming Text-to-Speech TTS with 220ms Latency and 2.5M Hours of Training

Research & Advancements
(01:02:14) Does Math Reasoning Improve General LLM Capabilities? Understanding Transferability of LLM Reasoning
(01:07:58) Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
(01:13:03) Mitigating Goal Misgeneralization with Minimax Regret
(01:17:01) Correlated Errors in Large Language Models
(01:20:31) What skills does SWE-bench Verified evaluate?

Policy & Safety
(01:22:53) Evaluating Frontier Models for Stealth and Situational Awareness
(01:25:49) When Chain of Thought is Necessary, Language Models Struggle to Evade Monitors
(01:30:09) Why Do Some Language Models Fake Alignment While Others Don't?
(01:34:35) ‘Positive review only': Researchers hide AI prompts in papers
(01:35:40) Google faces EU antitrust complaint over AI Overviews
(01:36:41) ‘The transfer of user data by DeepSeek to China is unlawful': Germany calls for Google and Apple to remove the AI app from their stores
(01:37:30) Virology Capabilities Test (VCT): A Multimodal Virology Q&A Benchmark
Brady Volpe and DOCSIS expert John Downey recap the latest from ANGA COM 2025 in Germany.
Taken from the AI + a16z podcast, Arcjet CEO David Mytton sits down with a16z partner Joel de la Garza to discuss the increasing complexity of managing who can access websites and other web apps, and what they can do there. A primary challenge is determining whether automated traffic is coming from bad actors and troublesome bots, or perhaps AI agents trying to buy a product on behalf of a real customer.

Joel and David dive into the challenge of analyzing every request without adding latency, and how faster inference at the edge opens up new possibilities for fraud prevention, content filtering, and even ad tech.

Topics include:
Why traditional threat analysis won't work for the AI-powered web
The need for full-context security checks
How to perform sub-second, cost-effective inference
The wide range of potential actors and actions behind any given visit

As David puts it, lower inference costs are key to letting apps act on the full context window — everything you know about the user, the session, and your application.

Follow everyone on social media:
David Mytton
Joel de la Garza

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
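To make the latency-budget idea from that episode concrete, here is a hypothetical sketch (my illustration, not Arcjet's API or architecture): run cheap checks on every request first, and gate anything heavier behind a strict time budget so the security layer never slows the page down.

```python
# Hypothetical sketch (not Arcjet's implementation): score each request
# with cheap heuristics first, reserving a strict time budget for any
# heavier analysis such as edge model inference.
import time

KNOWN_BOT_SIGNATURES = ("curl", "python-requests", "scrapy", "wget")

def score_request(headers: dict, budget_ms: float = 50.0) -> str:
    start = time.monotonic()

    # Cheap heuristic: obvious automation in the User-Agent string.
    ua = headers.get("user-agent", "").lower()
    if any(sig in ua for sig in KNOWN_BOT_SIGNATURES):
        return "deny"

    # Heavier check only if budget remains; fail open (allow) rather
    # than add latency to a legitimate visitor's request.
    elapsed_ms = (time.monotonic() - start) * 1000
    if elapsed_ms < budget_ms:
        pass  # a real system would run model inference here
    return "allow"

print(score_request({"user-agent": "curl/8.4.0"}))   # deny
print(score_request({"user-agent": "Mozilla/5.0"}))  # allow
```

The interesting part, as the conversation notes, is that cheap-versus-expensive trade-off: as inference at the edge gets faster and cheaper, more of the "heavier" branch fits inside the budget, and the system can act on the full session context instead of a User-Agent string.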
This special CDN-focused podcast details the latest on delivery pricing and industry trends. I discuss delivery commits, QoE measurement, DIY deployments, HD/HDR bitrates, and the impact of vendors exiting the market. I also cover how content owners perform capacity planning at the ASN level, leeching, latency, and why multicasting and P2P won't positively impact the industry. The data comes from my CDN pricing survey, as well as hosting panels and private events at the NAB Show Streaming Summit in April, which included OTT platforms, content owners, broadcasters, sports leagues, and others.

Podcast produced by Security Halt Media
This week, we highlight all the sports news from Peacock, Pac-12, ESPN, Optus Sport, Amazon, IMG, and NASCAR, detailing viewership stats and new licensing deals. We also cover the recent layoffs at Disney, the BBC's low-latency live streaming trial, and the current ad loads of Max, Prime Video, and Peacock. Finally, we cover some vendor news, including Wowza's acquisition of AI tech company AVA Intellect, and discuss some recent data I collected on the use of AI in the streaming video workflow.

Podcast produced by Security Halt Media
Rich travels to Dubrovnik for the European Congress of Virology 2025 and Vincent joins via Zoom to speak with Stéphane Blanc, Vanda Juranić Lisnić, and Elisabeth Puchhammer-Stöckl about their work on plant viruses, cytomegalovirus, and Epstein-Barr virus. Hosts: Vincent Racaniello and Rich Condit Guests: Stéphane Blanc, Vanda Juranić Lisnić, and Elisabeth Puchhammer-Stöckl Subscribe (free): Apple Podcasts, RSS, email Become a patron of TWiV! Links for this episode Support science education at MicrobeTV Assembled plant viruses move through plants (PLoS Path) Genome formula of multipartite virus (PLoS Path) Immune surveillance of cytomegalovirus in tissues (Cell Mol Immunol) Cytomegalovirus and NK cells (Nat Commun) Epstein-Barr virus and multiple sclerosis (J Clin Invest) Epstein-Barr virus and lymphoproliferative disease (Transplant) Timestamps by Jolene Ramsey. Thanks! Intro music is by Ronald Jenkees Send your virology questions and comments to twiv@microbe.tv Content in this podcast should not be construed as medical advice.
Over 70% of the world has herpes—yet it's still taboo. In this episode, Dr. G breaks down the truth about HSV-1 & HSV-2, from how it spreads to how to heal physically and emotionally. He shares the Heal Thyself protocol, featuring powerful supplements, nervous system tools, and mindset shifts to reduce outbreaks and reclaim your peace. #wellnessjourney #herpes #wellness ==== Thank You To Our Sponsors! Calroy Head on over to calroy.com/drg and save over $50 when you purchase the Vascanox and Arterosil bundle! ====

Timestamps:
00:00 - Understanding the Herpes Virus
02:56 - Prevalence, Latency & Treatment
06:00 - Transmission: Myths & Facts
08:58 - Triggers, Treatments & Misconceptions
12:02 - Antiviral Drugs & Holistic Healing
15:09 - Treatment: Sleep, Stress & Supplements
18:09 - Natural Herpes Remedies
21:15 - Treatment & Emotional Roots
24:10 - Healing Herpes: Shame & Self-Ownership

Be sure to like and subscribe to #HealThySelf Hosted by Doctor Christian Gonzalez N.D. Follow Doctor G on Instagram @doctor.gonzalez https://www.instagram.com/doctor.gonzalez/ Sign up for our newsletter! https://drchristiangonzalez.com/newsletter/
Industrial Talk is onsite at DistribuTech 2025 and talking to Marcus McCarthy, Sr. Vice President at Siemens Grid Software, about "Energy Solutions for the Future". Scott MacKenzie and Marcus McCarthy discuss the evolving utility industry and the role of digital twins in improving efficiency and reliability. Marcus highlights the challenges of aging infrastructure, increased power demand, and the need for carbon removal. He emphasizes the importance of accurate digital models for better planning and decision-making. Marcus explains how Siemens' digital twin solutions enable real-time operations and scenario simulations, enhancing network management. They also touch on the practicality of cloud technology and the industry's readiness to adopt new technologies. The conversation underscores the urgency for utilities to invest in digital twins to meet future energy demands and optimize grid performance.

Action Items
[ ] Connect with Marcus McCarthy on LinkedIn or at marcus.mccarthy@siemens.com to discuss further
[ ] Establish accurate digital models of the utility network (digital twins) with temporal stamping to simulate future scenarios
[ ] Explore how high-energy consumption facilities like data centers can optimize their interaction with the grid

Outline

Introduction and Welcome
Scott MacKenzie as a passionate industry professional dedicated to transferring cutting-edge industry innovations and trends.
Scott MacKenzie welcomes listeners to the Industrial Talk Podcast, highlighting the celebration of industry professionals worldwide.
Scott mentions the podcast is brought to you by Siemens Smart Infrastructure and Grid Software, encouraging listeners to visit siemens.com for more information.
Scott and Marcus discuss the massive scale of the DistribuTech conference in Dallas, Texas, and Scott's limited time to explore the solutions.

Background on Marcus McCarthy
Marcus shares his background, mentioning his move from Ireland to the US about 12 years ago.
Marcus discusses his career in utilities, focusing on distribution and transmission software systems.
Scott and Marcus agree on the positive aspects of the utility industry, including the people and the current market dynamics.
Marcus reflects on the industry's shift from a quiet period to a time of rapid change and innovation.

Challenges and Pressures in the Utility Industry
Marcus highlights the increasing demand for power and the need for safe and reliable delivery.
Scott and Marcus discuss the challenges of aging infrastructure and the need for modernization.
Marcus explains the complexities of meeting future power demands while addressing carbon removal efforts.
Scott shares his experience as a lineman and the evolution of the utility industry from a linear design to a more distributed energy system.

Digital Twin and Its Importance
Scott expresses his enthusiasm for digital twin technology and its potential for simulation and decision-making.
Marcus explains the critical role of digital twins in achieving faster, better decision-making in complex environments.
Marcus discusses the importance of standardized models and the sharing of planning data among different players in the industry.
Marcus highlights the need for real-time operations and the challenges of integrating planning and operations data.

Cloud Technology and Latency