Podcasts about Latency

  • 495 PODCASTS
  • 788 EPISODES
  • 44m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • Dec 27, 2025 LATEST

POPULARITY

[Popularity chart, 2019–2026]


Best podcasts about Latency

Latest podcast episodes about Latency

Musiques du monde
Playlist festive de Sophian Fanen

Musiques du monde

Dec 27, 2025 · 48:30


From Rosalía to Little Simz, by way of Bad Bunny, Sophian Fanen sweeps through his year 2025. To close out the year, Sophian treats you to a selection of his favorite obsessions.

Rosalía, Berghain, from the album Lux (Columbia Records, 2025)
Bad Bunny, Weltita (feat. Chuwi), from the album Debí Tirar Más Fotos (Rimas Entertainment)
Theodora, Ils me rient tous au nez, from the album Mega BBL (Boss Lady, 2025)
Tarta Relena, Si veriash a la rana, from the album És pregunta (Latency, 2024)
Ale hop & Titi Bakorta, Bonne année, from the album Mapambazuko (Nyege Nyege Tapes, 2025)
Andrea Laszlo de Simone, Quando, from the album Una Lunghissima Ombra (Ekler/Hamburger Records, 2025)
Blaiz Fayah, Maureen and DJ Glad, Money Pull Up, from the album Shatta Ting (Creepy Music, 2025)
Little Simz, Lion (feat. Obongjayar), from the album Lotus (Awal, 2025)
Miki, Jtm encore, from the album Industry Plant (Structure, 2025)
Xania Monet, How Was I Supposed to Know?, from the album Unfolded (TMJ/Hallwood, 2025)

The Effortless Podcast
The Structured vs. Unstructured Debate in Business Software - Episode 20: The Effortless Podcast

The Effortless Podcast

Dec 15, 2025 · 82:29


In this episode of The Effortless Podcast, Amit Prakash and Dheeraj Pandey dive deep into one of the most important shifts happening in AI today: the convergence of structured and unstructured data, interfaces, and systems. Together, they unpack how conversations—not CRM fields—hold the real ground truth; why schemas still matter in an AI-driven world; and how agents can evolve into true managers, coaches, and chiefs of staff for revenue teams. They explore the cognitive science behind visual vs conversational UI, the future of dynamically generated interfaces, and the product depth required to build enduring AI-native software.

Amit and Dheeraj break down the tension between deterministic and probabilistic systems, the limits of prompt-driven workflows, and why the future of enterprise AI is "both-and" rather than "either-or." It's a masterclass in modern product, data design, and the psychology of building intelligent tools.

Key Topics & Timestamps
00:00 – Introduction
02:00 – Why conversations—not CRM fields—hold real ground truth
05:00 – Reps as labelers and the parallels with AI training pipelines
08:00 – Business logic vs world models: defining meaning inside enterprises
11:00 – Prompts flatten nuance; schemas restore structure
14:00 – SQL schemas as the true model of a business
17:00 – CRM overload and the friction of rigid data entry
20:00 – AI agents that debrief and infer fields dynamically
23:00 – Capturing qualitative signals: champions, pain, intent
26:00 – Multi-source context: transcripts, email threads, Slack
29:00 – Why structure is required for math, aggregation, forecasting
32:00 – Aggregating unstructured data to reveal organizational issues
35:00 – Labels, classification, and the limits of LLM-only workflows
38:00 – Deterministic (SQL/Python) vs probabilistic (LLMs) systems
41:00 – Transitional workflows: humans + AI field entry
44:00 – Trust issues and the confusion of the early AI market
47:00 – Avoiding "Clippy moments" in agent design
50:00 – Latency, voice UX, and expectations for responsiveness
53:00 – Human-machine interface for SDRs vs senior reps
56:00 – Structured vs unstructured UI: cognitive science insights
59:00 – Charts vs paragraphs: parallel vs sequential processing
1:02:00 – The "Indian thali" dashboard problem and dynamic UI
1:05:00 – Exploration modes, drill-downs, and empty prompts
1:08:00 – Dynamic leaves, static trunk: designing hierarchy
1:11:00 – Both-and thinking: voice + visual, structured + unstructured
1:14:00 – Why "good enough" AI fails without deep product
1:17:00 – PLG, SLG, data access, and trust barriers
1:20:00 – Closing reflections and the future of AI-native software

Hosts:
Amit Prakash – CEO and Founder at AmpUp, former engineer at Google AdSense and Microsoft Bing, with extensive expertise in distributed systems and machine learning
Dheeraj Pandey – Co-founder and CEO at DevRev, former Co-founder & CEO of Nutanix. A tech visionary with a deep interest in AI, systems, and the future of work.

Follow the Hosts:
Amit Prakash: LinkedIn – Amit Prakash | LinkedIn; Twitter/X – https://x.com/amitp42
Dheeraj Pandey: LinkedIn – Dheeraj Pandey | LinkedIn; Twitter/X – https://x.com/dheeraj

Share your thoughts: Have questions, comments, or ideas for future episodes? Email us at EffortlessPodcastHQ@gmail.com

Don't forget to Like, Comment, and Subscribe for more conversations at the intersection of AI, technology, and innovation.
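
As a rough illustration of the structured-vs-unstructured convergence the episode describes, here is a minimal Python sketch of an agent-style debrief that turns free-form conversation text into schema-shaped CRM fields. The DealDebrief schema, the prompt wording, and the call_llm helper are hypothetical stand-ins for illustration, not anything from the hosts' products.

```python
import json
from dataclasses import dataclass
from typing import Optional

@dataclass
class DealDebrief:
    champion: Optional[str]   # who inside the account is pushing the deal
    pain: Optional[str]       # the problem, in the buyer's own words
    next_step: Optional[str]  # concrete follow-up, if one was agreed
    confidence: float         # model's self-reported confidence, 0.0 to 1.0

PROMPT = """Extract these fields from the sales-call transcript below.
Return only JSON with keys: champion, pain, next_step, confidence (0.0-1.0).
Use null for anything the transcript does not support.

Transcript:
{transcript}
"""

def call_llm(prompt: str) -> str:
    """Stand-in for a real chat-completion call; returns a canned response here."""
    return ('{"champion": "Dana (VP Ops)", "pain": "reps spend hours on manual CRM entry", '
            '"next_step": "security review next Tuesday", "confidence": 0.8}')

def debrief_from_transcript(transcript: str) -> DealDebrief:
    data = json.loads(call_llm(PROMPT.format(transcript=transcript)))
    # The schema is what makes the output usable for math, aggregation, and forecasting.
    return DealDebrief(
        champion=data.get("champion"),
        pain=data.get("pain"),
        next_step=data.get("next_step"),
        confidence=float(data.get("confidence", 0.0)),
    )

print(debrief_from_transcript("...call transcript text..."))
```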

Fireside Product Management
The Future of Product Management in the Age of AI: Lessons From a Five Leader Panel

Fireside Product Management

Dec 8, 2025 · 83:15


Every few years, the world of product management goes through a phase shift. When I started at Microsoft in the early 2000s, we shipped Office in boxes. Product cycles were long, engineering was expensive, and user research moved at the speed of snail mail. Fast forward a decade and the cloud era reset the speed at which we build, measure, and learn. Then mobile reshaped everything we thought we knew about attention, engagement, and distribution.

Now we are standing at the edge of another shift. Not a small shift, but a tectonic one. Artificial intelligence is rewriting the rules of product creation, product discovery, product expectations, and product careers.

To help make sense of this moment, I hosted a panel of world class product leaders on the Fireside PM podcast:
• Rami Abu-Zahra, Amazon product leader across Kindle, Books, and Prime Video
• Todd Beaupre, Product Director at YouTube leading Home and Recommendations
• Joe Corkery, CEO and cofounder of Jaide Health
• Tom Leung (me), Partner at Palo Alto Foundry
• Lauren Nagel, VP Product at Mezmo
• David Nydegger, Chief Product Officer at Oviva

These are leaders running massive consumer platforms, high stakes health tech, and fast moving developer tools. The conversation was rich, honest, and filled with specific examples. This post summarizes the discussion, adds my own reflections, and offers a practical guide for early and mid career PMs who want to stay relevant in a world where AI is redefining what great product management looks like.

Table of Contents
* What AI Cannot Do and Why PM Judgment Still Matters
* The New AI Literacy: What PMs Must Know by 2026
* Why Building AI Products Speeds Up Some Cycles and Slows Down Others
* Whether the PM, Eng, UX Trifecta Still Stands
* The Biggest Risks AI Introduces Into Product Development
* Actionable Advice for Early and Mid Career PMs
* My Takeaways and What Really Matters Going Forward
* Closing Thoughts and Coaching Practice

1. What AI Cannot Do and Why PM Judgment Still Matters

We opened the panel with a foundational question. As AI becomes more capable every quarter, what is left for humans to do? Where do PMs still add irreplaceable value? It is the question every PM secretly wonders.

Todd put it simply: "At the end of the day, you have to make some judgment calls. We are not going to turn that over anytime soon."

This theme came up again and again. AI is phenomenal at synthesizing, drafting, exploring, and narrowing. But it does not have conviction. It does not have lived experience. It does not feel user pain. It does not carry responsibility.

Joe from Jaide Health captured it perfectly when he said: "AI cannot feel the pain your users have. It can help meet their goals, but it will not get you that deep understanding."

There is still no replacement for sitting with a frustrated healthcare customer who cannot get their clinical data into your system, or a creator on YouTube who feels the algorithm is punishing their art, or a devops engineer staring at an RCA output that feels 20 percent off.

Every PM knows this feeling: the moment when all signals point one way, but your gut tells you the data is incomplete or misleading. This is the craft that AI does not have.

Why judgment becomes even more important in an AI world

David, who runs product at a regulated health company, said something incredibly important: "Knowing what great looks like becomes more essential, not less. The PMs that thrive in AI are the ones with great product sense."

This is counterintuitive for many. But when the operational work becomes automated, the differentiation shifts toward taste, intuition, sequencing, and prioritization.

Lauren asked the million dollar question: "How are we going to train junior PMs if AI is doing the legwork? Who teaches them how to think?"

This is a profound point. If AI closes the gap between junior and senior PMs in execution tasks, the difference will emerge almost entirely in judgment. Knowing how to probe user problems. Knowing when a feature is good enough. Knowing which tradeoffs matter. Knowing which flaw is fatal and which is cosmetic.

AI is incredible at writing a PRD. AI is terrible at knowing whether the PRD is any good. Which means the future PM becomes more strategic, more intuitive, more customer obsessed, and more willing to make thoughtful bets under uncertainty.

2. The New AI Literacy: What PMs Must Know by 2026

I asked the panel what AI literacy actually means for PMs. Not the hype. Not the buzzwords. The real work. Instead of giving gimmicky answers, the discussion converged on a clear set of skills that PMs must master.

Skill 1: Understanding context engineering

David laid this out clearly: "Knowing what LLMs are good at and what they are not good at, and knowing how to give them the right context, has become a foundational PM skill."

Most PMs think prompt engineering is about clever phrasing. In reality, the future is about context engineering. Feeding models the right data. Choosing the right constraints. Deciding what to ignore. Curating inputs that shape outputs in reliable ways. Context engineering is to AI product development what Figma was to collaborative design. If you cannot do it, you are not going to be effective.

Skill 2: Evals, evals, evals

Rami said something that resonated with the entire panel: "Last year was all about prompts. This year is all about evals."

He is right.
• How do you build a golden dataset?
• How do you evaluate accuracy?
• How do you detect drift?
• How do you measure hallucination rates?
• How do you combine UX evals with model evals?
• How do you decide what good looks like?
• How do you define safe versus unsafe boundaries?

AI evaluation is now a core PM responsibility. Not exclusively. But PMs must understand what engineers are testing for, what failure modes exist, and how to design test sets that reflect the real world. Lauren said her PMs write evals side by side with engineering. That is where the world is going.

Skill 3: Knowing when to trust AI output and when to override it

Todd noted: "It is one thing to get an answer that sounds good. It is another thing to know if it is actually good."

This is the heart of the role. AI can produce strategic recommendations that look polished, structured, and wise. But the real question is whether they are grounded in reality, aligned with your constraints, and consistent with your product vision. A PM without the ability to tell real insight from confident nonsense will be replaced by someone who can.

Skill 4: Understanding the physics of model changes

This one surprised many people, but it was a recurring point. Rami noted: "When you upgrade a model, the outputs can be totally different. The evals start failing. The experience shifts."

PMs must understand:
• Models get deprecated
• Models drift
• Model updates can break well tuned prompts
• API pricing has real COGS implications
• Latency varies
• Context windows vary
• Some tasks need agents, some need RAG, some need a small finetuned model

This is product work now. The PM of 2026 must know these constraints as well as a PM of the cloud era understood database limits or API rate limits.

Skill 5: How to construct AI powered prototypes in hours, not weeks

It now takes one afternoon to build something meaningful. Zero code required. Prompt, test, refine. Whether you use Replit, Cursor, Vercel, or sandboxed agents, the speed is shocking. But this makes taste and problem selection even more important. The future PM must be able to quickly validate whether a concept is worth building beyond the demo stage.

3. Why Building AI Products Speeds Up Some Cycles and Slows Down Others

This part of the conversation was fascinating because people expected AI to accelerate everything. The panel had a very different view.

Fast: Prototyping and concept validation

Lauren described how her teams can build working versions of an AI powered Root Cause Analysis feature in days, test it with customers, and get directional feedback immediately. "You can think bigger because the cost of trying things is much lower," she said. For founders, early PMs, and anyone validating hypotheses, this is liberating. You can test ten ideas in a week. That used to take a quarter.

Slow: Productionizing AI features

The surprising part is that shipping the V1 of an AI feature is slower than most expect. Joe noted: "You can get prototypes instantly. But turning that into a real product that works reliably is still hard."

Why? Because:
• You need evals.
• You need monitoring.
• You need guardrails.
• You need safety reviews.
• You need deterministic parts of the workflow.
• You need to manage COGS.
• You need to design fallbacks.
• You need to handle unpredictable inputs.
• You need to think about hallucination risk.
• You need new UI surfaces for non deterministic outputs.

Lauren said bluntly: "Vibe coding is fast. Moving that vibe code to production is still a four month process." This should be printed on a poster in every AI startup office.

Very Slow: Iterating on AI powered features

Another counterintuitive point. Many teams ship a great V1 but struggle to improve it significantly afterward. David said their nutrition AI feature launched well but: "We struggled really hard to make it better. Each iteration was easy to try but difficult to improve in a meaningful way."

Why is iteration so difficult? Because model improvements may not translate directly into UX improvements. Users need consistency. Drift creates churn. Small changes in context or prompts can cause large changes in behavior. Teams are learning a hard truth: AI powered features do not behave like typical deterministic product flows. They require new iteration muscles that most orgs do not yet have.

4. The PM, Eng, UX Trifecta in the AI Era

I asked whether the classic PM, Eng, UX triad is still the right model. The audience was expecting disagreement. The panel was surprisingly aligned.

The trifecta is not going anywhere

Rami put it simply: "We still need experts in all three domains to raise the bar." Joe added: "AI makes it possible for PMs to do more technical work. But it does not replace engineering. Same for design."

AI blurs the edges of the roles, but it does not collapse them. In fact, each role becomes more valuable because the work becomes more abstract.
• PMs focus on judgment, sequencing, evaluation, and customer centric problem framing
• Engineers focus on agents, systems, architecture, guardrails, latency, and reliability
• Designers focus on dynamic UX, non deterministic UX patterns, and new affordances for AI outputs

What does change

AI makes the PM-Eng relationship more intense. The backbone of AI features is a combination of model orchestration, evaluation, prompting, and context curation. PMs must be tighter than ever with engineering to design these systems.

David noted that his teams focus more on individual talents. Some PMs are great at context engineering. Some designers excel at polishing AI generated layouts. Some engineers are brilliant at prompt chaining. AI reveals strengths quickly. The trifecta remains. The skill distribution within it evolves.

5. The Biggest Risks AI Introduces Into Product Development

When we asked what scares PMs most about AI, the conversation became blunt and honest.

Risk 1: Loss of user trust

Lauren warned: "If people keep shipping low quality AI features, user trust in AI erodes. And then your good AI product suffers from the skepticism." This is very real. Many early AI features across industries are low quality, gimmicky, or unreliable. Users quickly learn to distrust these experiences. Which means PMs must resist the pressure to ship before the feature is ready.

Risk 2: Skill atrophy

Todd shared a story that hit home for many PMs. "Junior folks just want to plug in the prompt and take whatever the AI gives them. That is a recipe for having no job later." PMs who outsource their thinking to AI will lose their judgment. Judgment cannot be regained easily. This is the silent career killer.

Risk 3: Safety hazards in sensitive domains

David was direct: "If we have one unsafe output, we have to shut the feature off. We cannot afford even small mistakes." In healthcare, finance, education, and legal industries, the tolerance for error is near zero. AI must be monitored relentlessly. Human in the loop systems are mandatory. The cycles are slower but the stakes are higher.

Risk 4: The high bar for AI compared to humans

Joe said something I have thought about for years: "AI is held to a much higher standard than human decision making. Humans make mistakes constantly, but we forgive them. AI makes one mistake and it is unacceptable." This slows adoption in certain industries and creates unrealistic expectations.

Risk 5: Model deprecation and instability

Rami described a real problem AI PMs face: "Models get deprecated faster than they get replaced. The next model is not always GA. Outputs change. Prompts break." This creates product instability that PMs must anticipate and design around.

Risk 6: Differentiation becomes hard

I shared this perspective because I see so many early stage startups struggle with it. If your whole product is a wrapper around an LLM, competitors will copy you in a week. The real differentiation will not come from using AI. It will come from how deeply you understand the customer, how you integrate AI with proprietary data, and how you create durable workflows.

6. Actionable Advice for Early and Mid Career PMs

This was one of my favorite parts of the panel because the advice was humble, practical, and immediately useful.

A. Develop deep user empathy. This will become your biggest differentiator.

Lauren said it clearly: "Maintain your empathy. Understand the pain your user really has." AI makes execution cheap. It makes insight valuable. If you can articulate user pain precisely. If you can differentiate surface friction from underlying need. If you can see around corners. If you can prototype solutions and test them in hours. If you can connect dots between what AI can do and what users need. You will thrive.

Tactical steps:
• Sit in on customer support calls every week.
• Watch 10 user sessions for every feature you own.
• Talk to customers until patterns emerge.
• Ask "why" five times in every conversation.
• Maintain a user pain log and update it constantly.

B. Become great at context engineering

This will matter as much as SQL mattered ten years ago.

Action steps:
• Practice writing prompts with structured context blocks.
• Build a library of prompts that work for your product.
• Study how adding, removing, or reordering context changes output.
• Learn RAG patterns.
• Learn when structured data beats embeddings.
• Learn when smaller local models outperform big ones.

C. Learn eval frameworks

This is non negotiable. You need to know:
• Precision vs recall tradeoffs
• How to build golden datasets
• How to design scenario based evals for UX
• How to test for hallucination
• How to monitor drift
• How to set quality thresholds
• How to build dashboards that reflect real world input distributions

You do not need to write the code. You do need to define the eval strategy (a minimal sketch of such an eval loop follows this post).

D. Strengthen your product sense

You cannot outsource product taste. Todd said it best: "Imagine asking AI to generate 20 percent growth for you. It will not tell you what great looks like."

To strengthen your product sense:
• Review the best products weekly.
• Take screenshots of great UX patterns.
• Map user flows from apps you admire.
• Break products down into primitives.
• Ask yourself why a product decision works.
• Predict what great would look like before you design it.

The PMs who thrive will be the ones who can recognize magic when they see it.

E. Stay curious

Rami's closing advice was simple and perfect: "Stay curious. Keep learning. It never gets old." AI changes monthly. The PM who is excited by new ideas will outperform the PM who clings to old patterns.

Practical habits:
• Read one AI research paper summary each week.
• Follow evaluation and model updates from major vendors.
• Build at least one small AI prototype a month.
• Join AI PM communities.
• Teach juniors what you learn. Nothing accelerates mastery faster.

F. Embrace velocity and side projects

Todd said that some of his biggest career breakthroughs came from solving problems on the side. This is more true now than ever. If you have an idea, you can build an MVP over a weekend. If it solves a real problem, someone will notice.

G. Stay close to engineering

Not because you need to code, but because AI features require tighter PM engineering collaboration.

Learn enough to be dangerous:
• How embeddings work
• How vector stores behave
• What latency tradeoffs exist
• How agents chain tasks
• How model versioning works
• How context limits shape UX
• Why some prompts blow up API costs

If you can speak this language, you will earn trust and accelerate cycles.

H. Understand the business deeply

Joe's advice was timeless: "Know who pays you and how much they pay. Solve real problems and know the business model." PMs who understand unit economics, COGS, pricing, and funnel dynamics will stand out.

7. Tom's Takeaways and What Really Matters Going Forward

I ended the recording by sharing what I personally believe after moderating this discussion and working closely with a variety of AI teams over the past 2 years.

Judgment becomes the most valuable PM skill

As AI gets better at analysis, synthesis, and execution, your value shifts to:
• Choosing the right problem
• Sequencing decisions
• Making 55-45 calls
• Understanding user pain
• Making tradeoffs
• Deciding when good is good enough
• Defining success
• Communicating vision
• Influencing the org

Agents can write specs. LLMs can produce strategies. But only humans can choose the right one and commit.

Learning speed becomes a competitive advantage

I said this on the panel and I believe it more every month. Because of AI, you now have:
• Infinite coaches
• Infinite mentors
• Infinite experts
• Infinite documentation
• Infinite learning loops

A PM who learns slowly will not survive the next decade.

Curiosity, empathy, and velocity will separate great from good

Many panelists said versions of this. The common pattern was:
• Understand users deeply
• Combine multiple tools creatively
• Move quickly
• Learn constantly

The future rewards generalists with taste, speed, and emotional intelligence.

Differentiation requires going beyond wrapper apps

This is one of my biggest concerns for early stage founders. If your entire product is a wrapper around a model, you are vulnerable. Durable value will come from:
• Proprietary data
• Proprietary workflows
• Deep domain insight
• Organizational trust
• Distribution advantage
• Safety and reliability
• Integration with existing systems

AI is a component, not a moat.

8. Closing Thoughts

Hosting this panel made me more optimistic about the future of product management. Not because AI will not change the job. It already has. But because the fundamental craft remains alive. Product management has always been about understanding people, making decisions with incomplete information, telling compelling stories, and guiding teams through ambiguity and being right often.

AI accelerates the craft. It amplifies the best PMs and exposes the weak ones. It rewards curiosity, empathy, velocity, and judgment.

If you want tailored support on your PM career, leadership journey, or executive path, I offer 1 on 1 career, executive, and product coaching at tomleungcoaching.com.

OK team. Let's ship greatness.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit firesidepm.substack.com
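
The eval-framework skill that comes up repeatedly above lends itself to a concrete illustration. This is a minimal, generic sketch of a golden-dataset eval loop in Python, not anything from the panelists' companies; run_model, the sample cases, and the 0.9 threshold are all placeholder assumptions.

```python
from dataclasses import dataclass

@dataclass
class GoldenCase:
    prompt: str
    expected: str  # the answer a domain expert signed off on

GOLDEN_SET = [
    GoldenCase("Capital of France?", "Paris"),
    GoldenCase("2 + 2 = ?", "4"),
]

def run_model(prompt: str) -> str:
    """Stand-in: replace with the real model or agent under test."""
    return {"Capital of France?": "Paris", "2 + 2 = ?": "5"}.get(prompt, "")

def evaluate(threshold: float = 0.9) -> bool:
    # Exact-match scoring keeps the sketch simple; real evals often use rubric or judge models.
    hits = sum(run_model(c.prompt).strip() == c.expected for c in GOLDEN_SET)
    accuracy = hits / len(GOLDEN_SET)
    print(f"accuracy={accuracy:.2f} ({hits}/{len(GOLDEN_SET)})")
    return accuracy >= threshold  # gate releases on this, and re-run whenever the model version changes

if __name__ == "__main__":
    evaluate()
```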

Making a Scene Presents
What Is Latency? And How to Record Without Losing Your Groove

Making a Scene Presents

Dec 4, 2025 · 11:36


Making a Scene Presents - What Is Latency? And How to Record Without Losing Your Groove

Written for indie musicians who just want their tracks to sound right without fighting their gear.

If you've ever tried to record vocals or guitar and felt like your timing was weird, or you couldn't stay in the pocket no matter how hard you focused, you've already met the enemy. That enemy is latency. Latency is one of those home-studio problems that doesn't care how talented you are. When it's bad, it throws off your groove in a way that feels like someone moved the beat a few inches to the left. http://www.makingascene.org
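
For a rough sense of the numbers behind that feeling, here is a small Python sketch of how buffer size and sample rate translate into monitoring latency. It assumes the usual one input and one output buffer plus a couple of milliseconds of converter overhead, which varies by interface, so treat the figures as ballpark only.

```python
def buffer_latency_ms(buffer_frames: int, sample_rate_hz: int) -> float:
    """Delay contributed by one audio buffer."""
    return buffer_frames / sample_rate_hz * 1000.0

def round_trip_ms(buffer_frames: int, sample_rate_hz: int, converter_overhead_ms: float = 2.0) -> float:
    # Input buffer + output buffer + an assumed fixed overhead for AD/DA conversion and drivers.
    return 2 * buffer_latency_ms(buffer_frames, sample_rate_hz) + converter_overhead_ms

for frames in (64, 128, 256, 512):
    print(f"{frames:>4} frames @ 48 kHz: about {round_trip_ms(frames, 48_000):.1f} ms round trip")
# Rule of thumb: near 10 ms most players stop noticing; past 20 ms the groove starts to drift.
```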

The Ravit Show
PlayStation at Scale with Flink: Telemetry, Latency, and Reliability

The Ravit Show

Nov 24, 2025 · 10:14


How does PlayStation run real time at massive scale? I sat down with Bahar Pattarkine from the PlayStation team to unpack how they use Apache Flink across telemetry and player experiences.

What we covered:
-- Why they chose Flink and what problem it solved first
-- Running 15,000+ events per second, launch peaks, regional latency SLOs, and avoiding hot partitions across titles
-- Phasing the move from Kafka consumers to a unified Flink pipeline without double processing during cutover
-- How checkpointing and async I/O keep latency low during spikes or failures
-- Privacy controls and regional rules enforced in real time
-- What Flink simplified in their pipelines and the impact on cost and ops

#data #ai #streaming #Flink #Playstation #Ververica #realtimestreaming #theravitshow
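
To make the "regional latency SLO" idea concrete, here is a generic Python sketch, not PlayStation's actual tooling, that computes p50/p99 from a batch of per-event latency measurements and checks them against an assumed objective; the simulated numbers and the 100 ms target are placeholders.

```python
import random

def percentile(samples, p):
    """Nearest-rank percentile; good enough for a sketch."""
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, round(p / 100 * (len(ordered) - 1)))
    return ordered[idx]

# Simulated per-event end-to-end latencies (ms) for one region; a real pipeline would
# pull these from the stream processor's metrics instead of generating them.
latencies_ms = [max(1.0, random.gauss(45, 12)) for _ in range(15_000)]

p50 = percentile(latencies_ms, 50)
p99 = percentile(latencies_ms, 99)
SLO_P99_MS = 100  # assumed objective for illustration, not a published PlayStation number

print(f"p50={p50:.1f} ms  p99={p99:.1f} ms  SLO met: {p99 <= SLO_P99_MS}")
```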

Engineering Influence from ACEC
The Data Center Boom: 5 Trends Engineering Firms Need to Know

Engineering Influence from ACEC

Nov 20, 2025 · 5:31


The Data Center Boom: Five Trends Engineering Firms Need to Know

The data center market is experiencing unprecedented growth, driven by artificial intelligence adoption and changing infrastructure demands. For ACEC member firms, this represents both a substantial business opportunity and a chance to shape critical national infrastructure. ACEC's latest Market Intelligence Brief reveals a market poised to reach $62 billion in design and construction spending by 2029, with implications that extend far beyond traditional data center engineering.

The launch of ChatGPT in 2022 marked an inflection point. What began as voice assistants has evolved into sophisticated large language models that consume dramatically more energy. A standard AI query uses about 0.012 kilowatt-hours, while generating a single high-quality image requires 2.0 kWh—roughly 20 times the daily consumption of a standard LED lightbulb. As weekly ChatGPT users surged from 100 million to 700 million between November 2023 and August 2025, the infrastructure implications became impossible to ignore. AI-driven data center power demand, which stood at just 4 gigawatts in 2024, is projected to reach 123 gigawatts by 2035. Even more striking: 70 percent of data center power demand will be driven by AI workloads. This explosive growth requires engineering solutions at unprecedented scale, from power distribution and backup systems to advanced cooling technologies and grid integration strategies.

Public perception about data center water consumption often overlooks important nuances in cooling technology. While mechanical cooling systems have historically consumed significant water resources, newer approaches could dramatically reduce water use. Free air cooling, closed-loop systems, and liquid immersion technologies offer low-water use alternatives, with some methods reducing freshwater consumption by 70 percent or more compared to traditional systems. As Thom Jackson, mechanical engineer and partner at Dunham Engineering, notes: "Most data centers utilize closed loop cooling systems requiring no makeup water and minimal maintenance." The "big four" hyperscale operators—Amazon Web Services, Microsoft Azure, Google Cloud Platform, and Meta—have all committed to becoming water-positive by 2030, replenishing more water than they consume. These commitments are driving innovation in cooling system design and creating opportunities for engineering firms with expertise in sustainable mechanical systems.

The days of one-size-fits-all data centers are over. Latency requirements, scalability needs, and proximity to end users are accelerating adoption of diverse building types. Edge data centers bring computing closer to users for real-time applications like IoT and 5G. Hyperscale facilities support massive cloud and AI workloads with 100,000-plus servers. Colocation models enable scalable shared environments for enterprises, while modular designs—prefabricated with integrated power and cooling—offer rapid, cost-effective deployment. Each model presents distinct engineering challenges and opportunities, from specialized HVAC systems and high floor-to-ceiling ratios for hyperscale facilities to distributed infrastructure planning for edge networks.

Two emerging trends deserve particular attention. First, the Department of Energy has selected four federal sites to host AI data centers paired with clean energy generation, including small modular reactors (SMRs). The Nuclear Regulatory Commission anticipates at least 25 SMR license applications by 2029, signaling strong demand for nuclear co-location expertise. Second, developers are increasingly exploring adaptive reuse of underutilized office spaces, brownfield sites, and historical buildings. These locations offer existing utility infrastructure that can reduce construction time and costs, making them attractive alternatives despite some design constraints.

Recent federal policy changes are streamlining data center deployment. Executive Order 14318 directs agencies to accelerate environmental reviews and permitting, while revisions to New Source Review under the Clean Air Act could allow construction to begin before air permits are issued. ACEC recently formed the Data Center Task Force to advocate for policies that balance speed, affordability, and national security in data center development, complementing EO 14318.

For engineering firms, site selection expertise has become increasingly valuable. Success hinges on sales and use tax exemptions, existing power and fiber connectivity, effective community engagement, and thorough environmental risk assessment. AI-driven planning tools like UrbanFootprint and ESRI ArcGIS are helping developers evaluate site suitability, identifying opportunities for firms.

The data center market offers engineering firms a chance to lead in sustainable design, infrastructure innovation, and strategic planning at a moment when digital infrastructure has become as critical as traditional utilities.

Ask The Tech Guys (Audio)
HOT 242: Powerline Networking - Pros & Cons of Powerline Ethernet Adapters

Ask The Tech Guys (Audio)

Nov 16, 2025 · 28:44


In this week's episode of Hands-On Tech, Lance asks Mikah Sargent about the pros and cons of using powerline ethernet adapters, and Mikah shares his strong thoughts on these devices. Don't forget to send in your questions for Mikah to answer during the show! hot@twit.tv Host: Mikah Sargent Download or subscribe to Hands-On Tech at https://twit.tv/shows/hands-on-tech Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.

All TWiT.tv Shows (MP3)
Hands-On Tech 242: Powerline Networking

All TWiT.tv Shows (MP3)

Nov 16, 2025 · 28:44


In this week's episode of Hands-On Tech, Lance asks Mikah Sargent about the pros and cons of using powerline ethernet adapters, and Mikah shares his strong thoughts on these devices. Don't forget to send in your questions for Mikah to answer during the show! hot@twit.tv Host: Mikah Sargent Download or subscribe to Hands-On Tech at https://twit.tv/shows/hands-on-tech Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.

The Tech Guy (Video HI)
HOT 242: Powerline Networking - Pros & Cons of Powerline Ethernet Adapters

The Tech Guy (Video HI)

Nov 16, 2025 · 28:44


In this week's episode of Hands-On Tech, Lance asks Mikah Sargent about the pros and cons of using powerline ethernet adapters, and Mikah shares his strong thoughts on these devices. Don't forget to send in your questions for Mikah to answer during the show! hot@twit.tv Host: Mikah Sargent Download or subscribe to Hands-On Tech at https://twit.tv/shows/hands-on-tech Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.

Hands-On Tech (Video HD)
HOT 242: Powerline Networking - Pros & Cons of Powerline Ethernet Adapters

Hands-On Tech (Video HD)

Nov 16, 2025 · 28:44


In this week's episode of Hands-On Tech, Lance asks Mikah Sargent about the pros and cons of using powerline ethernet adapters, and Mikah shares his strong thoughts on these devices. Don't forget to send in your questions for Mikah to answer during the show! hot@twit.tv Host: Mikah Sargent Download or subscribe to Hands-On Tech at https://twit.tv/shows/hands-on-tech Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.

Hands-On Tech (MP3)
HOT 242: Powerline Networking - Pros & Cons of Powerline Ethernet Adapters

Hands-On Tech (MP3)

Nov 16, 2025 · 28:44


In this week's episode of Hands-On Tech, Lance asks Mikah Sargent about the pros and cons of using powerline ethernet adapters, and Mikah shares his strong thoughts on these devices. Don't forget to send in your questions for Mikah to answer during the show! hot@twit.tv Host: Mikah Sargent Download or subscribe to Hands-On Tech at https://twit.tv/shows/hands-on-tech Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.

All TWiT.tv Shows (Video LO)
Hands-On Tech 242: Powerline Networking

All TWiT.tv Shows (Video LO)

Nov 16, 2025 · 28:44


In this week's episode of Hands-On Tech, Lance asks Mikah Sargent about the pros and cons of using powerline ethernet adapters, and Mikah shares his strong thoughts on these devices. Don't forget to send in your questions for Mikah to answer during the show! hot@twit.tv Host: Mikah Sargent Download or subscribe to Hands-On Tech at https://twit.tv/shows/hands-on-tech Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.

Hands-On Tech (Video HI)
HOT 242: Powerline Networking - Pros & Cons of Powerline Ethernet Adapters

Hands-On Tech (Video HI)

Nov 16, 2025 · 28:44


In this week's episode of Hands-On Tech, Lance asks Mikah Sargent about the pros and cons of using powerline ethernet adapters, and Mikah shares his strong thoughts on these devices. Don't forget to send in your questions for Mikah to answer during the show! hot@twit.tv Host: Mikah Sargent Download or subscribe to Hands-On Tech at https://twit.tv/shows/hands-on-tech Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.

Total Mikah (Video)
Hands-On Tech 242: Powerline Networking

Total Mikah (Video)

Nov 16, 2025 · 28:44


In this week's episode of Hands-On Tech, Lance asks Mikah Sargent about the pros and cons of using powerline ethernet adapters, and Mikah shares his strong thoughts on these devices. Don't forget to send in your questions for Mikah to answer during the show! hot@twit.tv Host: Mikah Sargent Download or subscribe to Hands-On Tech at https://twit.tv/shows/hands-on-tech Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.

Total Mikah (Audio)
Hands-On Tech 242: Powerline Networking

Total Mikah (Audio)

Nov 16, 2025 · 28:44


In this week's episode of Hands-On Tech, Lance asks Mikah Sargent about the pros and cons of using powerline ethernet adapters, and Mikah shares his strong thoughts on these devices. Don't forget to send in your questions for Mikah to answer during the show! hot@twit.tv Host: Mikah Sargent Download or subscribe to Hands-On Tech at https://twit.tv/shows/hands-on-tech Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.

No Latency
No Latency Live SDCC2025 - A Night at Morro Rock - PT2

No Latency

Nov 12, 2025 · 57:34


The Crew and some new and old friends take on the Mass Driver at Morro Rock. They've made it past the barge and into the compound; now to split up, plant the virus, and get out without a bang. Hopefully...

Jade, Evan and Peter are joined by Dayeanne Hutton and Anais R Morgan (Infinite Sided Dice) live on stage at San Diego Comic Con 2025! Check out our Youtube Channel for more live panels here. BTS and art posts will come out for all our patrons to peruse next week!

More info can be found here: linktr.ee/NoLatency
If you'd like to support us, we now have a Patreon! Patreon.com/nolatency
Even more information and MERCH is on our website! www.nolatencypodcast.com
Twitter: @nolatencypod
Instagram: @nolatencypod

Logo & Map Art by Paris Arrowsmith
Character Art by: Doodlejumps, Saint and Paris Arrowsmith
Producing and Editing by Paris Arrowsmith
Music and Sound sfx by Epidemic Sound.

Find @SkullorJade, @Miss_Magitek, @Binary_Dragon, and @retrodatv on twitch, for live D&D, TTRPGs and more.

#cyberpunkred #actualplay #ttrpg #radioplay #scifi #cyberpunk #drama #comedy #LIVE #SDCC25

New Books Network
Birgit Abels and Patrick Eisenlohr, "Atmospheric Knowledge: Environmentality, Latency, and Sonic Multimodality" (U California Press, 2025)

New Books Network

Nov 7, 2025 · 46:45


How do we know through atmospheres? How can being affected by an atmosphere give rise to knowledge? What role does somatic, nonverbal knowledge play in how we belong to places? Atmospheric Knowledge takes up these questions through detailed analyses of practices that generate atmospheres and in which knowledge emerges through visceral intermingling with atmospheres. From combined musicological and anthropological perspectives, Birgit Abels and Patrick Eisenlohr investigate atmospheres as a compelling alternative to better-known analytics of affect by way of performative and sonic practices across a range of ethnographic settings. With particular focus on oceanic relations and sonic affectedness, Atmospheric Knowledge centers the rich affordances of sonic connections for knowing our environments. A free ebook version of this title is available through Luminos, University of California Press's Open Access publishing program. Visit www.luminosoa.org to learn more. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network

New Books in Anthropology
Birgit Abels and Patrick Eisenlohr, "Atmospheric Knowledge: Environmentality, Latency, and Sonic Multimodality" (U California Press, 2025)

New Books in Anthropology

Nov 7, 2025 · 46:45


How do we know through atmospheres? How can being affected by an atmosphere give rise to knowledge? What role does somatic, nonverbal knowledge play in how we belong to places? Atmospheric Knowledge takes up these questions through detailed analyses of practices that generate atmospheres and in which knowledge emerges through visceral intermingling with atmospheres. From combined musicological and anthropological perspectives, Birgit Abels and Patrick Eisenlohr investigate atmospheres as a compelling alternative to better-known analytics of affect by way of performative and sonic practices across a range of ethnographic settings. With particular focus on oceanic relations and sonic affectedness, Atmospheric Knowledge centers the rich affordances of sonic connections for knowing our environments. A free ebook version of this title is available through Luminos, University of California Press's Open Access publishing program. Visit www.luminosoa.org to learn more. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/anthropology

New Books in Sociology
Birgit Abels and Patrick Eisenlohr, "Atmospheric Knowledge: Environmentality, Latency, and Sonic Multimodality" (U California Press, 2025)

New Books in Sociology

Nov 7, 2025 · 46:45


How do we know through atmospheres? How can being affected by an atmosphere give rise to knowledge? What role does somatic, nonverbal knowledge play in how we belong to places? Atmospheric Knowledge takes up these questions through detailed analyses of practices that generate atmospheres and in which knowledge emerges through visceral intermingling with atmospheres. From combined musicological and anthropological perspectives, Birgit Abels and Patrick Eisenlohr investigate atmospheres as a compelling alternative to better-known analytics of affect by way of performative and sonic practices across a range of ethnographic settings. With particular focus on oceanic relations and sonic affectedness, Atmospheric Knowledge centers the rich affordances of sonic connections for knowing our environments. A free ebook version of this title is available through Luminos, University of California Press's Open Access publishing program. Visit www.luminosoa.org to learn more. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/sociology

New Books in Geography
Birgit Abels and Patrick Eisenlohr, "Atmospheric Knowledge: Environmentality, Latency, and Sonic Multimodality" (U California Press, 2025)

New Books in Geography

Nov 7, 2025 · 46:45


How do we know through atmospheres? How can being affected by an atmosphere give rise to knowledge? What role does somatic, nonverbal knowledge play in how we belong to places? Atmospheric Knowledge takes up these questions through detailed analyses of practices that generate atmospheres and in which knowledge emerges through visceral intermingling with atmospheres. From combined musicological and anthropological perspectives, Birgit Abels and Patrick Eisenlohr investigate atmospheres as a compelling alternative to better-known analytics of affect by way of performative and sonic practices across a range of ethnographic settings. With particular focus on oceanic relations and sonic affectedness, Atmospheric Knowledge centers the rich affordances of sonic connections for knowing our environments. A free ebook version of this title is available through Luminos, University of California Press's Open Access publishing program. Visit www.luminosoa.org to learn more. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/geography

New Books in Sound Studies
Birgit Abels and Patrick Eisenlohr, "Atmospheric Knowledge: Environmentality, Latency, and Sonic Multimodality" (U California Press, 2025)

New Books in Sound Studies

Nov 7, 2025 · 46:45


How do we know through atmospheres? How can being affected by an atmosphere give rise to knowledge? What role does somatic, nonverbal knowledge play in how we belong to places? Atmospheric Knowledge takes up these questions through detailed analyses of practices that generate atmospheres and in which knowledge emerges through visceral intermingling with atmospheres. From combined musicological and anthropological perspectives, Birgit Abels and Patrick Eisenlohr investigate atmospheres as a compelling alternative to better-known analytics of affect by way of performative and sonic practices across a range of ethnographic settings. With particular focus on oceanic relations and sonic affectedness, Atmospheric Knowledge centers the rich affordances of sonic connections for knowing our environments. A free ebook version of this title is available through Luminos, University of California Press's Open Access publishing program. Visit www.luminosoa.org to learn more. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/sound-studies

No Latency
No Latency Live SDCC2025 - A Night at Morro Rock - PT1

No Latency

Nov 5, 2025 · 48:30


The Crew and some new and old friends take on the Mass Driver at Morro Rock. Domino has discovered a plot to take out the crew's communication satellite, and time is ticking before it's blown out of the sky.

Jade, Evan and Peter are joined by Dayeanne Hutton and Anais R Morgan (Infinite Sided Dice) live on stage at San Diego Comic Con 2025! Check out our Youtube Channel for more live panels here. BTS and art posts will come out for all our patrons to peruse after Part 2 is released next week!

More info can be found here: linktr.ee/NoLatency
If you'd like to support us, we now have a Patreon! Patreon.com/nolatency
Even more information and MERCH is on our website! www.nolatencypodcast.com
Twitter: @nolatencypod
Instagram: @nolatencypod

Find @SkullorJade, @Miss_Magitek, @Binary_Dragon, and @retrodatv on twitch, for live D&D, TTRPGs and more.

#cyberpunkred #actualplay #ttrpg #radioplay #scifi #cyberpunk #drama #comedy #LIVE #SDCC25

Datacenter Technical Deep Dives
From Speech to Speech: A Tale about Amazon Nova Sonic

Datacenter Technical Deep Dives

Nov 3, 2025


In this week's vBrownBag, Principal Software Engineer Dominik Wosiński takes us on a deep dive into Amazon Nova Sonic — AWS's latest speech-to-speech AI model. Dominik explores how unified voice models like Nova Sonic are reshaping customer experience, DevOps workflows, and real-time AI interaction, with live demos showing just how natural machine-generated speech can sound. We cover what makes speech-to-speech difficult, how latency and turn-detection affect conversational design, and why this technology marks the next frontier for AI-driven customer support. Stick around for audience Q&A, live experiments, and insights on where AWS Bedrock and generative AI are headed next.
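
The turn-detection point lends itself to a small illustration. This Python sketch is a generic silence-based heuristic, not a description of Nova Sonic's internals, and the frame length, energy threshold, and frame count are all assumptions; it shows the basic trade-off that waiting longer to declare end of turn reduces interruptions but adds response latency.

```python
def end_of_turn(frame_energies, silence_threshold=0.01, min_silent_frames=25):
    """Return True once the last `min_silent_frames` frames fall below the energy threshold.
    With 20 ms frames, 25 silent frames means the assistant waits about 500 ms before replying."""
    if len(frame_energies) < min_silent_frames:
        return False
    return all(e < silence_threshold for e in frame_energies[-min_silent_frames:])

# Toy frame energies: speech (high energy) followed by a pause.
frames = [0.2] * 40 + [0.004] * 30
print(end_of_turn(frames))  # True: the user has likely finished their turn
```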

JSA Podcasts for Telecom and Data Centers
AI at the Edge: Bill Severn on Latency, Interconnection & What's Next for 1623 Farnam

JSA Podcasts for Telecom and Data Centers

Nov 3, 2025 · 9:18


Bill Severn of 1623 Farnam joins JSA TV from DCD>Connect Virginia to discuss how #GenerativeAI is reshaping network infrastructure. He shares insights on latency limits, #edge inference, #hyperscaler-driven metro upgrades, and what an AI-ready interconnect looks like. Plus, a look ahead at 1623 Farnam's expansion plans and investments for the next 12–24 months.

Telecom Reseller
Connectivity, Latency, and the AI Effect: Expereo on Network Resilience and the Talent Gap, Podcast

Telecom Reseller

Oct 21, 2025


"AI is hungry — for bandwidth, for speed, and for talent." — Jean-Philippe Avelange, Chief Information Officer, Expereo

Jean-Philippe Avelange, CIO of Expereo, joined Doug Green, Publisher of Technology Reseller News, to discuss findings from Expereo's Horizon Telecom Report—revealing how U.S. organizations are losing millions to network failures and struggling to find skilled professionals in cybersecurity, networking, and data automation. Avelange explained that as companies digitize everything from collaboration to customer experience, connectivity interruptions now directly halt business operations, making network reliability as vital as cybersecurity. "Modern enterprises are building their products and services on connectivity. When it stops, business stops," he noted.

The AI multiplier
AI adoption is compounding the challenge. "AI is not just another workload—it's a new kind of demand," Avelange said. AI-driven automation, real-time data flows, and low-latency interactions place unprecedented pressure on legacy network architectures. Organizations can no longer treat networking as a commodity; they must rethink it as a strategic platform requiring redesign and intelligent automation.

The human factor
According to Avelange, the real shortage isn't people—it's adaptability. The industry needs professionals skilled in network automation, data flow optimization, and problem solving, not just hardware management. "AI won't solve your problem if you don't understand the problem," he said, advocating for upskilling internal teams alongside strong partnerships with managed service providers (MSPs) that bring intelligence, not just infrastructure.

Latency by design
Latency, Avelange warned, must be addressed before deployment. "You can always add bandwidth, but you can't add speed after the fact. Latency has to be engineered from the start."

A new mindset
For Expereo, the future of networking lies in intelligent connectivity—solutions that merge automation, analytics, and agility to keep enterprises resilient in the AI era. "We're not selling boxes," Avelange said. "We're helping companies design the networks their digital business runs on."

Read more in the Horizon Telecom Report or visit expereo.com.
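
Avelange's "you can't add speed after the fact" point follows directly from propagation physics. Here is a small Python sketch of the latency floor that has to be designed around; it uses the common approximation that light in fiber travels at roughly two-thirds of c, and the route lengths are rough illustrative figures rather than measured cable distances.

```python
SPEED_OF_LIGHT_KM_S = 299_792
FIBER_FACTOR = 2 / 3  # typical slowdown from the refractive index of silica fiber

def min_rtt_ms(route_km: float) -> float:
    """Best-case round-trip time over a fiber route, before any queuing or processing."""
    one_way_s = route_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return 2 * one_way_s * 1000.0

for name, km in [("New York to London", 5_600), ("Frankfurt to Singapore", 10_300)]:
    print(f"{name}: at least {min_rtt_ms(km):.0f} ms RTT")
# Adding bandwidth widens the pipe; only shorter or straighter routes lower this floor.
```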

Mission Matters Podcast with Adam Torres
AI, Ultra-Low Latency & Mongolia's Data Center Moment

Mission Matters Podcast with Adam Torres

Oct 17, 2025 · 16:40


In this Mission Matters session hosted by Adam Torres, Eraj Akhtar (CTO & Co-Founder, Excite Capital LLC) and Namuun Battulga (CEO, Jenko Tour JSC & Igo Hotel and Resorts) discuss physics-based, quantum-inspired AI trading and Mongolia's emergence as a cost-efficient, secure data center location powered by a new 70MW plant. They share partner criteria, address security considerations, and outline a mission to scale globally distributed compute and real-economy growth across Asia. Follow Adam on Instagram at https://www.instagram.com/askadamtorres/ for up to date information on book releases and tour schedule. Apply to be a guest on our podcast: https://missionmatters.lpages.co/podcastguest/ Visit our website: https://missionmatters.com/ More FREE content from Mission Matters here: https://linktr.ee/missionmattersmedia Learn more about your ad choices. Visit podcastchoices.com/adchoices

Mission Matters Money
AI, Ultra-Low Latency & Mongolia's Data Center Moment

Mission Matters Money

Oct 17, 2025 · 16:40


In this Mission Matters session hosted by Adam Torres, Eraj Akhtar (CTO & Co-Founder, Excite Capital LLC) and Namuun Battulga (CEO, Jenko Tour JSC & Igo Hotel and Resorts) discuss physics-based, quantum-inspired AI trading and Mongolia's emergence as a cost-efficient, secure data center location powered by a new 70MW plant. They share partner criteria, address security considerations, and outline a mission to scale globally distributed compute and real-economy growth across Asia. Follow Adam on Instagram at https://www.instagram.com/askadamtorres/ for up to date information on book releases and tour schedule. Apply to be a guest on our podcast: https://missionmatters.lpages.co/podcastguest/ Visit our website: https://missionmatters.com/ More FREE content from Mission Matters here: https://linktr.ee/missionmattersmedia Learn more about your ad choices. Visit podcastchoices.com/adchoices

HODLong 后浪
Ep.64 [EN]: Vlad: Building a secure & low latency / cost trading venue Lighter on world's largest public blockchain

HODLong 后浪

Oct 13, 2025 · 47:43


Show Notes: Chapters
00:00 Introduction and Background of Lighter
02:26 The Launch of Lighter and Its Features
05:25 Transition from Private to Public Beta
07:56 Trading Volume and Metrics
10:37 Open Interest and Volume Dynamics
13:31 Incentive Programs and User Engagement
15:56 Points System and User Behavior
18:42 Future Developments and Season Two
21:27 Verifiable Matching and Liquidations
24:09 Fee Structure and Token Philosophy
24:45 Retail vs. Professional Trading
27:12 Fee Structures and Trading Tiers
29:00 Latency and Advantages for Premium Accounts
32:43 Order Flow and System Verification
35:40 Single Sequencer Challenges
38:46 Auto-Deleveraging and Liquidation Processes
41:33 Criteria for Asset Listings
43:25 Community-Driven Regional Strategies

If you like this episode, you're welcome to tip with Ethereum / Solana / Bitcoin:
ETH: 0x83Fe9765a57C9bA36700b983Af33FD3c9920Ef20
SOL: AaCeeEX5xBH6QchuRaUj3CEHED8vv5bUizxUpMsr1Kyt
BTC: 3ACPRhHVbh3cu8zqtqSPpzNnNULbZwaNqG

Important Disclaimer: All opinions expressed by Mable Jiang, or other podcast guests, are solely their opinion. This podcast is for informational purposes only and should not be construed as investment advice. Mable Jiang may hold positions in some of the projects discussed on this show.

Epik Mellon - the QA Cafe Podcast
“Latency, Experience, and Cable Modems” with Jason Livingood, VP of Technology Policy, Product & Standards at Comcast

Epik Mellon - the QA Cafe Podcast

Oct 7, 2025 · 52:02


A fantastic episode with one of the founding minds behind Broadband, Jason Livingood and I chat about luck, experience, the shifting landscape of broadband speed and content, and the good work of people like the late Dave Taht that has changed the way we treat the “old ideas” of how Internet services should be delivered.

Chain Reaction
Austin Federa: From Solana Foundation to Double Zero's Fiber Revolution

Chain Reaction

Play Episode Listen Later Sep 25, 2025 75:24


Join Alex Golding as he sits down with Austin Federa, Co-founder of DoubleZero, to explore how they're building permissionless high-performance fiber infrastructure that could revolutionize blockchain performance. Austin shares the technical vision behind creating a parallel internet for distributed systems, starting with Solana validators as their initial market. DoubleZero: https://doublezero.xyz

Telemetry Now
Tracking the Red Sea Cable Cuts with Kentik's Cloud Latency Map

Telemetry Now

Play Episode Listen Later Sep 25, 2025 51:07


Doug Madory joins us to unpack the recent Red Sea submarine cable cuts and how Kentik's Cloud Latency Map revealed the global impact in real-time, offering critical insight into cloud performance, interconnectivity, and internet resilience.

IBM Analytics Insights Podcasts
Making Data Simple: Live Data, Smarter AI with Snow Leopard founder Deepti Srivastava

IBM Analytics Insights Podcasts

Play Episode Listen Later Sep 17, 2025 39:24


What if AI could tap into live operational data — without ETL or RAG? In this episode, Deepti Srivastava, founder of Snow Leopard, reveals how her company is transforming enterprise data access with intelligent data retrieval, semantic intelligence, and a governance-first approach. Tune in for a fresh perspective on the future of AI and the startup journey behind it. We explore how companies are revolutionizing their data access and AI strategies. Deepti Srivastava, founder of Snow Leopard, shares her insights on bridging the gap between live operational data and generative AI — and how it's changing the game for enterprises worldwide. We dive into Snow Leopard's innovative approach to data retrieval, semantic intelligence, and governance-first architecture.
04:54 Meeting Deepti Srivastava 14:06 AI with No ETL, no RAG 17:11 Snow Leopard's Intelligent Data Fetching 19:00 Live Query Challenges 21:01 Snow Leopard's Secret Sauce 22:14 Latency 23:48 Schema Changes 25:02 Use Cases 26:06 Snow Leopard's Roadmap 29:16 Getting Started 33:30 The Startup Journey 34:12 A Woman in Technology 36:03 The Contrarian View

Making Data Simple
Making Data Simple: Live Data, Smarter AI with Snow Leopard founder Deepti Srivastava

Making Data Simple

Play Episode Listen Later Sep 17, 2025 39:24


What if AI could tap into live operational data — without ETL or RAG? In this episode, Deepti Srivastava, founder of Snow Leopard, reveals how her company is transforming enterprise data access with intelligent data retrieval, semantic intelligence, and a governance-first approach. Tune in for a fresh perspective on the future of AI and the startup journey behind it. We explore how companies are revolutionizing their data access and AI strategies. Deepti Srivastava, founder of Snow Leopard, shares her insights on bridging the gap between live operational data and generative AI — and how it's changing the game for enterprises worldwide. We dive into Snow Leopard's innovative approach to data retrieval, semantic intelligence, and governance-first architecture.
04:54 Meeting Deepti Srivastava 14:06 AI with No ETL, no RAG 17:11 Snow Leopard's Intelligent Data Fetching 19:00 Live Query Challenges 21:01 Snow Leopard's Secret Sauce 22:14 Latency 23:48 Schema Changes 25:02 Use Cases 26:06 Snow Leopard's Roadmap 29:16 Getting Started 33:30 The Startup Journey 34:12 A Woman in Technology 36:03 The Contrarian View

PodRocket - A web development podcast from LogRocket
Modularizing the monolith with Jimmy Bogard

PodRocket - A web development podcast from LogRocket

Play Episode Listen Later Sep 11, 2025 32:28


Jimmy Bogard joins PodRocket to talk about making monoliths more modular, why boundaries matter, and how to avoid turning systems into distributed monoliths. From refactoring techniques and database migrations at scale to lessons from Stripe and WordPress, he shares practical ways to balance architecture choices. We also explore how tools like Claude and Lambda fit into modern development and what teams should watch for with latency, transactions, and growing complexity. Links Website: https://www.jimmybogard.com X: https://x.com/jbogard Github: https://github.com/jbogard LinkedIn: https://www.linkedin.com/in/jimmybogard/ Resources Modularizing the Monolith - Jimmy Bogard - NDC Oslo 2024: https://www.youtube.com/watch?v=fc6_NtD9soI We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Fill out our listener survey (https://t.co/oKVAEXipxu)! Let us know by sending an email to our producer, Em, at emily.kochanek@logrocket.com (mailto:emily.kochanek@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today. (https://logrocket.com/signup/?pdr) Special Guest: Jimmy Bogard.

Packet Pushers - Heavy Networking
HN795: Adventures In Latency

Packet Pushers - Heavy Networking

Play Episode Listen Later Sep 5, 2025 57:36


Monitoring and troubleshooting latency can be tricky. If it’s in the network, was it the IP stack? A NIC? A switch buffer? A middlebox somewhere on the WAN? If it’s the application, can you, the network engineer, bring receipts to the app team? And what if you need to build and operate a network that’s... Read more »

Packet Pushers - Full Podcast Feed
HN795: Adventures In Latency

Packet Pushers - Full Podcast Feed

Play Episode Listen Later Sep 5, 2025 57:36


Monitoring and troubleshooting latency can be tricky. If it’s in the network, was it the IP stack? A NIC? A switch buffer? A middlebox somewhere on the WAN? If it’s the application, can you, the network engineer, bring receipts to the app team? And what if you need to build and operate a network that’s... Read more »

Packet Pushers - Fat Pipe
HN795: Adventures In Latency

Packet Pushers - Fat Pipe

Play Episode Listen Later Sep 5, 2025 57:36


Monitoring and troubleshooting latency can be tricky. If it’s in the network, was it the IP stack? A NIC? A switch buffer? A middlebox somewhere on the WAN? If it’s the application, can you, the network engineer, bring receipts to the app team? And what if you need to build and operate a network that’s... Read more »

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Today we are joined by Gorkem and Batuhan from Fal.ai, the fastest growing generative media inference provider. They recently raised a $125M Series C and crossed $100M ARR. We covered how they pivoted from dbt pipelines to diffusion models inference, what were the models that really changed the trajectory of image generation, and the future of AI videos. Enjoy! 00:00 - Introductions 04:58 - History of Major AI Models and Their Impact on Fal.ai 07:06 - Pivoting to Generative Media and Strategic Business Decisions 10:46 - Technical discussion on CUDA optimization and kernel development 12:42 - Inference Engine Architecture and Kernel Reusability 14:59 - Performance Gains and Latency Trade-offs 15:50 - Discussion of model latency importance and performance optimization 17:56 - Importance of Latency and User Engagement 18:46 - Impact of Open Source Model Releases and Competitive Advantage 19:00 - Partnerships with closed source model developers 20:06 - Collaborations with Closed-Source Model Providers 21:28 - Serving Audio Models and Infrastructure Scalability 22:29 - Serverless GPU infrastructure and technical stack 23:52 - GPU Prioritization: H100s and Blackwell Optimization 25:00 - Discussion on ASICs vs. General Purpose GPUs 26:10 - Architectural Trends: MMDiTs and Model Innovation 27:35 - Rise and Decline of Distillation and Consistency Models 28:15 - Draft Mode and Streaming in Image Generation Workflows 29:46 - Generative Video Models and the Role of Latency 30:14 - Auto-Regressive Image Models and Industry Reactions 31:35 - Discussion of OpenAI's Sora and competition in video generation 34:44 - World Models and Creative Applications in Games and Movies 35:27 - Video Models' Revenue Share and Open-Source Contributions 36:40 - Rise of Chinese Labs and Partnerships 38:03 - Top Trending Models on Hugging Face and ByteDance's Role 39:29 - Monetization Strategies for Open Models 40:48 - Usage Distribution and Model Turnover on FAL 42:11 - Revenue Share vs. Open Model Usage Optimization 42:47 - Moderation and NSFW Content on the Platform 44:03 - Advertising as a key use case for generative media 45:37 - Generative Video in Startup Marketing and Virality 46:56 - LoRA Usage and Fine-Tuning Popularity 47:17 - LoRA ecosystem and fine-tuning discussion 49:25 - Post-Training of Video Models and Future of Fine-Tuning 50:21 - ComfyUI Pipelines and Workflow Complexity 52:31 - Requests for startups and future opportunities in the space 53:33 - Data Collection and RedPajama-Style Initiatives for Media Models 53:46 - RL for Image and Video Models: Unknown Potential 55:11 - Requests for Models: Editing and Conversational Video Models 57:12 - VO3 Capabilities: Lip Sync, TTS, and Timing 58:23 - Bitter Lesson and the Future of Model Workflows 58:44 - FAL's hiring approach and team structure 59:29 - Team Structure and Scaling Applied ML and Performance Teams 1:01:41 - Developer Experience Tools and Low-Code/No-Code Integration 1:03:04 - Improving Hiring Process with Public Challenges and Benchmarks 1:04:02 - Closing Remarks and Culture at FAL

EM360 Podcast
How to Prepare Your Team for Edge Computing?

EM360 Podcast

Play Episode Listen Later Sep 4, 2025 23:38


In a time when the world is run by data and real-time actions, edge computing is quickly becoming a must-have in enterprise technology. In this episode of the Tech Transformed podcast, host Shubhangi Dua, a podcast producer and B2B tech journalist, discusses the complexities of this distributed future with guest Dmitry Panenkov, Founder and CEO of emma. The conversation dives into how latency is the driving force behind edge adoption. Applications like autonomous vehicles and real-time analytics cannot afford to wait on a round trip to a centralised data centre; they need to compute where the data is generated. Rather than viewing edge as a rival to the cloud, the discussion highlights it as a natural extension. Edge environments bring speed, resilience and data control, all necessary capabilities for modern applications.
Adopting Edge Computing
For organisations looking to adopt edge computing, this episode lays out a practical step-by-step approach. The skills necessary in multi-cloud environments – automation, infrastructure as code, and observability – translate well to edge deployments. These capabilities are essential for managing the unique challenges of edge devices, which may be disconnected, have lower power, or be located in hard-to-reach areas. Without this level of operational maturity, Panenkov warns of a "zombie apocalypse" of unmanaged devices.
Simplifying Complexity
Managing different APIs, SDKs, and vendor lock-ins across a distributed network can be a challenging task, and this is where platforms like emma become crucial. Alluding to emma's mission, Panenkov explains, "We're building a unified platform that simplifies the way people interact with different cloud and computer environments, whether these are in a public setting or private data centres or even at the edge." Overall, emma creates a unified API layer and user interface, which simplifies the complexity. It helps businesses manage, automate, and scale their workloads from a singular perspective and reduces the burden on IT teams. Reducing the need for a large team of highly skilled professionals also leads to substantial cost savings: emma's customers have seen their cloud bills drop significantly and updates roll out much faster using the platform.
Takeaways
Edge computing is becoming a reality for more organisations.
Latency-sensitive applications drive the need for edge computing.
Real-time analytics and industry automation benefit from edge computing.
Edge computing enhances resilience, cost efficiency, and data sovereignty.
Integrating edge into cloud strategies requires automation and observability.
Maturity in operational practices, like automation and observability, is essential for...

Bitcoin Takeover Podcast
S16 E41: Yonatan Sompolinsky on Bitcoin, Kaspa & Proof of Work

Bitcoin Takeover Podcast

Play Episode Listen Later Aug 29, 2025 455:17


Yonatan Sompolinsky is an academic in the field of computer science, best known for his work on the GHOST protocol (Greedy Heaviest Observed Subtree, which was cited in the Ethereum whitepaper) and the way he applied his research to create Kaspa. In this episode, we talk about scaling Proof of Work and why Kaspa might be a worthy contender to process global payments. –––––––––––––––––––––––––––––––––––– Time stamps: 00:01:22 - Debunking rumors: Why some think Yonatan is Satoshi Nakamoto 00:02:52 - Candidates for Satoshi: Charles Hoskinson, Charlie Lee, Zooko, and Alex Chepurnoy 00:03:41 - Alex Chepurnoy as a Satoshi-like figure 00:04:07 - Kaspa overview: DAG structure, no orphaned blocks, generalization of Bitcoin 00:04:55 - Similarities between Kaspa and Bitcoin fundamentals 00:06:12 - Why Kaspa couldn't be built directly on Bitcoin 00:08:05 - Kaspa as generalization of Nakamoto consensus 00:11:55 - Origins of GHOST protocol and early DAG concepts for Bitcoin scaling 00:13:16 - Academic motivation for GHOST and transitioning to computer science 00:13:50 - Turtle pet named Bitcoin 00:15:22 - Increasing block rate in Bitcoin and GHOST protocol 00:16:57 - Meeting Gregory Maxwell and discovering GHOST flaws 00:20:00 - Yonatan's views on drivechains and Bitcoin maximalism 00:20:36 - Defining Bitcoin maximalism: Capital B vs lowercase b 00:23:18 - Satoshi's support for Namecoin and merged mining 00:24:12 - Bitcoin culture in 2013-2018: Opposing other functionalities 00:26:01 - Vitalik's 2014 article on Bitcoin maximalism 00:26:13 - Andrew Poelstra's opposition to other assets on Bitcoin 00:26:38 - Bitcoin culture: Distaste for DeFi, criticism of Ethereum as a scam 00:28:03 - Bitcoin Cash developments: Cash tokens, cash fusion, contracts 00:28:39 - Rejection of Ethereum in Bitcoin circles 00:30:18 - Ethereum's successful PoS transition despite critics 00:35:04 - Ethereum's innovation: From Plasma to ZK rollups, nurturing development 00:37:04 - Stacks protocol and criticism from Luke Dashjr 00:39:02 - Bitcoin culture justifying technical limitations 00:41:01 - Declining Bitcoin adoption as money, rise of altcoins for payments 00:43:02 - Kaspa's aspirations: Merging sound money with DeFi, beyond just payments 00:43:56 - Possibility of tokenized Bitcoin on Kaspa 00:46:30 - Native currency advantage and friction in bridges 00:48:49 - WBTC on Ethereum scale vs Bitcoin L2s 00:53:33 - Quotes: Richard Dawkins on atheism, Milton Friedman on Yap Island money 00:55:44 - Story of Kaspa's messy fair launch in 2021 01:14:08 - Tech demo of Kaspa wallet experience 01:28:45 - Kaspa confirmation times & transaction fees 01:43:26 - GHOST DAG visualizer 01:44:10 - Mining Kaspa 01:55:48 - Data pruning in Kaspa, DAG vs MimbleWimble 02:01:40 - Grin & the fairest launch 02:12:21 - Zcash scaling & ZKP OP code in Kaspa 02:19:50 - Jameson Lopp, cold storage & self custody elitism 02:35:08 - Social recovery 02:41:00 - Amir Taaki, DarkFi & DAO 02:53:10 - Nick Szabo's God Protocols 03:00:00 - Layer twos on Kaspa for DeFi 03:13:09 - How Kaspa's DeFi will resemble Solana 03:24:03 - Centralized exchanges vs DeFi 03:32:05 - The importance of community projects 03:37:00 - DAG KNIGHT and its resilience 03:51:00 - DAG KNIGHT tradeoffs 03:58:18 - Blockchain vs DAG, the bottleneck for Kaspa 04:03:00 - 100 blocks per second? 04:11:43 - Question from Quai's Dr. K 04:17:03 - Doesn't Kaspa require super fast internet? 04:23:10 - Are ASIC miners desirable? 
04:33:53 - Why Proof of Work matters 04:35:55 - A short history of Bitcoin mining 04:44:00 - DAG's sequencing 04:49:09 - Phantom GHOST DAG 04:52:47 - Why Kaspa had high inflation initially 04:55:10 - Selfish mining 05:03:00 - K Heavy Hash & other community questions 06:33:20 - Latency settings in DAG KNIGHT for security 06:36:52 - Aviv Zohar's involvement in Kaspa research 06:38:07 - World priced in Kaspa after hyperinflation 06:39:51 - Kaspa's fate intertwined with crypto 06:40:29 - Kaspa contracts vs Solana, why better for banks 06:42:53 - Cohesive developer experience in Kaspa like Solana 06:45:22 - Incorporating ZK design in Kaspa smart contracts 06:47:22 - Heroes: Garry Kasparov 06:48:12 - Shift in attitude from academics like Hoskinson, Buterin, Back 06:53:07 - Adam Back's criticism of Kaspa 06:55:57 - Michael Jordan and LeBron analogy for Bitcoiners' mindset 06:58:02 - Can Kaspa flip Bitcoin in market cap 07:00:34 - Gold and USD market cap comparison 07:06:06 - Collaboration with Kai team 07:10:37 - Community improvement: More context on crypto 07:13:43 - Theoretical maximum TPS for Kaspa 07:16:05 - Full ZK on L1 improvements 07:17:45 - Atomic composability and logic zones in Kaspa 07:23:12 - Sparkle and monolithic UX feel 07:26:00 - Wrapping up: Beating podcast length record, final thoughts on Bitcoin and Kaspa 07:27:31 - Why Yonatan called a scammer despite explanations 07:32:29 - Luke Dashjr's views and disconnect 07:33:01 - Hope for Bitcoin scaling and revolution

The Connectivity Podcast
EP59: Connectivity in the gaming industry: trends, drivers, latency & bandwidth

The Connectivity Podcast

Play Episode Listen Later Aug 28, 2025 16:33


In this episode, Markus Viitamäki, Senior Infrastructure Architect at Embark Studios, joins the podcast to discuss trends in the gaming industry, the main drivers behind creating a new game, and how much latency and bandwidth affect the gaming experience.

The InfoQ Podcast
Why Rust Will Help You Deliver Better Low-latency Systems and Happier Developers

The InfoQ Podcast

Play Episode Listen Later Aug 25, 2025 42:58


Andrew Lamb, a veteran of database engine development, shares his thoughts on why Rust is the right tool for developing low-latency systems, not only from the perspective of the code's performance, but also looking at productivity and developer joy. He discusses the overall experience of adopting Rust after a decade of programming in C/C++. Read a transcript of this interview: http://bit.ly/45qi4eK Subscribe to the Software Architects' Newsletter for your monthly guide to the essential news and experience from industry peers on emerging patterns and technologies: https://www.infoq.com/software-architects-newsletter Upcoming Events: InfoQ Dev Summit Munich (October 15-16, 2025) Essential insights on critical software development priorities. https://devsummit.infoq.com/conference/munich2025 QCon San Francisco 2025 (November 17-21, 2025) Get practical inspiration and best practices on emerging software trends directly from senior software developers at early adopter companies. https://qconsf.com/ QCon AI New York 2025 (December 16-17, 2025) https://ai.qconferences.com/ QCon London 2026 (March 16-19, 2026) https://qconlondon.com/ The InfoQ Podcasts: Weekly inspiration to drive innovation and build great teams from senior software leaders. Listen to all our podcasts and read interview transcripts: - The InfoQ Podcast https://www.infoq.com/podcasts/ - Engineering Culture Podcast by InfoQ https://www.infoq.com/podcasts/#engineering_culture - Generally AI: https://www.infoq.com/generally-ai-podcast/ Follow InfoQ: - Mastodon: https://techhub.social/@infoq - X: https://x.com/InfoQ?from=@ - LinkedIn: https://www.linkedin.com/company/infoq/ - Facebook: https://www.facebook.com/InfoQdotcom# - Instagram: https://www.instagram.com/infoqdotcom/?hl=en - Youtube: https://www.youtube.com/infoq - Bluesky: https://bsky.app/profile/infoq.com Write for InfoQ: Learn and share the changes and innovations in professional software development. - Join a community of experts. - Increase your visibility. - Grow your career. https://www.infoq.com/write-for-infoq

The Pro Audio Suite
Latency: The Invisible Killer of Your Audio Flow

The Pro Audio Suite

Play Episode Listen Later Aug 12, 2025 40:09 Transcription Available


Latency: it's that tiny, annoying delay between what you say and what you hear back. In the studio, online, or even in your own headphones, it can trip you up, wreck your timing, and make you feel like you're talking to yourself in slow motion. In this episode of The Pro Audio Suite, Robbo, AP, George, and Robert dig into:
- What latency really is (and why it's not just a tech buzzword)
- How it sneaks into your recording chain
- The difference between “good” latency and “bad” latency
- Fixes you can do right now without buying a new rig
- When hardware or interface upgrades actually make sense
Whether you're a VO artist fighting through a remote session, a podcaster dealing with talkback lag, or a studio pro chasing perfect sync, this is your guide to killing the delay and getting back in the groove. Proudly supported by Tri-Booth and Austrian Audio, we only partner with brands we believe in, and these two deliver the goods for pro audio pros like you.
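For a feel for the numbers behind that delay, here is a rough back-of-the-envelope sketch (not taken from the episode): the interface's buffer size and sample rate set a hard floor on round-trip latency before drivers, converters, and plugins add their own overhead. The buffer sizes and 48 kHz rate below are illustrative values only.

```python
# Rough illustration (not from the episode): minimum buffer latency in a digital audio chain.
# One buffer of N samples at sample rate R takes N / R seconds to fill before it can be
# processed, and roughly the same again on playback, so the round trip is at least 2 * N / R
# (converter and driver overhead are ignored here).

def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int, stages: int = 2) -> float:
    """Return the minimum latency in milliseconds added by `stages` buffer passes."""
    return stages * buffer_samples / sample_rate_hz * 1000.0

for buf in (64, 128, 256, 512):
    print(f"{buf:>4} samples @ 48 kHz -> ~{buffer_latency_ms(buf, 48_000):.1f} ms round trip")
# 64 samples is roughly 2.7 ms, 512 samples roughly 21.3 ms: why smaller buffers feel tighter.
```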

Audio Branding
How Online Technology is Changing Music Collaboration: A Conversation with Rebekah Wilson - Part 2

Audio Branding

Play Episode Listen Later Jul 30, 2025 25:28


“I have a regular chat with a friend of mine in New Zealand. He's a tetraplegic and a musician, so he invents his own music instruments that he can play with his limited motion, and he can send me his instrument over MIDI to where I am across the ocean and we can play together and we can have an engagement. It's not possible for him to come to see me in Europe. It would be so expensive, and a lot of work. So, you know, thank God there's the internet for him, you know. He gets to participate, he has remote concerts, he still plays with his friends. It's really special.” – Rebekah Wilson
This episode is the second half of my conversation with Source Elements CEO and remote collaboration specialist Rebekah Wilson as we discuss how physics and neurology collide when it comes to reducing latency, how the pandemic transformed online music collaboration and gave rise to today's generation of at-home musicians, and where Rebekah sees sound, technology, and music itself heading in the future, over both the coming decades and even generations from now. As always, if you have questions for my guest, you're welcome to reach out through the links in the show notes. If you have questions for me, visit audiobrandingpodcast.com, where you'll find a lot of ways to get in touch. Plus, subscribing to the newsletter will let you know when the new podcasts are available, along with other interesting bits of audio-related news. And if you're getting some value from listening, the best ways to show your support are to share this podcast with a friend and leave an honest review. Both those things really help, and I'd love to feature your review on future podcasts. You can leave one either in written or in voice format from the podcast's main page. I would so appreciate that.
(0:00:01) - Impact of Latency on Music Collaboration
We continue our talk about the science of latency, and Rebekah explains how it impacts music in ways that our brains only dimly perceive. “If you add a little bit of latency onto that,” she says, “music's like, one, two... three… music's not very friendly to that [sort of] latency.” She tells us more about how our brains unconsciously adapt to latency, and how technology relies both on improving speed and taking advantage of our ability to filter out information gaps. “What's happening is that you're anticipating it based on this model that's in your brain,” Rebekah explains. “For example, every time you look at a wall or your surroundings, if it's not moving, your brain's not processing it.”
(0:06:02) - Advancements in Remote Music Collaboration
She talks about how the COVID-19 pandemic's lockdown phase led to a boom in online collaboration, some of which continues to thrive today. “There remained a group of people,” she says, “a small group of people, you know, scattered around the world… who were like, ‘You know what? Some interesting things came out of this. Some interesting artistic development is possible here and it's worth pursuing.” We discuss the technical and creative innovations that emerged from that period, and where they might lead in the years to come as we continue to innovate. “What we love as humans,” Rebekah says, “is to seek new forms of expression. This is what we do, we're adventurers. So we go out, we go into the desert, we go out into the oceans, and we look for where something new is. And you know, music and performance and being together on the internet is still very new for us as humans.”
(0:12:42) - Expanding Music Collaboration With Technology
Our conversation wraps up as we continue to talk about online collaboration and creative efforts that can now

MLOps.community
Real-time Feature Generation at Lyft // Rakesh Kumar // #334

MLOps.community

Play Episode Listen Later Jul 25, 2025 58:04


Real-time Feature Generation at Lyft // MLOps Podcast #334 with Rakesh Kumar, Senior Staff Software Engineer at Lyft.
Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter
// Abstract
This session delves into real-time feature generation at Lyft. Real-time feature generation is critical for Lyft, where accurate, up-to-the-minute marketplace data is paramount for optimal operational efficiency. We will explore how the infrastructure handles the immense challenge of processing tens of millions of events per minute to generate features that truly reflect current marketplace conditions. Lyft has built this massive infrastructure over time, evolving from a humble start and a naive pipeline. Through lessons learned and iterative improvements, Lyft has made several trade-offs to achieve low-latency, real-time feature delivery. MLOps plays a critical role in managing the lifecycle of these real-time feature pipelines, including monitoring and deployment. We will discuss the practicalities of building and maintaining high-throughput, low-latency real-time feature generation systems that power Lyft's dynamic marketplace and business-critical products.
// Bio
Rakesh Kumar is a Senior Staff Software Engineer at Lyft, specializing in building and scaling Machine Learning platforms. Rakesh has expertise in MLOps, including real-time feature generation, experimentation platforms, and deploying ML models at scale. He is passionate about sharing his knowledge and fostering a culture of innovation. This is evident in his contributions to the tech community through blog posts, conference presentations, and reviewing technical publications.
// Related Links
Website: https://englife101.io/
https://eng.lyft.com/search?q=rakesh
https://eng.lyft.com/real-time-spatial-temporal-forecasting-lyft-fa90b3f3ec24
https://eng.lyft.com/evolution-of-streaming-pipelines-in-lyfts-marketplace-74295eaf1eba
Streaming Ecosystem Complexities and Cost Management // Rohit Agrawal // MLOps Podcast #302 - https://youtu.be/0axFbQwHEh8
~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community [https://go.mlops.community/slack]
Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
Sign up for the next meetup: [https://go.mlops.community/register]
MLOps Swag/Merch: [https://shop.mlops.community/]
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Rakesh on LinkedIn: /rakeshkumar1007/
Timestamps:
[00:00] Rakesh's preferred coffee
[00:24] Real-time machine learning
[04:51] Latency tricks explanation
[09:28] Real-time problem evolution
[15:51] Config management complexity
[18:57] Data contract implementation
[23:36] Feature store
[28:23] Offline vs online workflows
[31:02] Decision-making in tech shifts
[36:54] Cost evaluation frequency
[40:48] Model feature discussion
[49:09] Hot shard tricks
[55:05] Pipeline feature bundling
[57:38] Wrap up
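The episode stays at the architecture level, but the core pattern it describes, computing features over a sliding window of recent events rather than in batch jobs, can be sketched minimally as below. The region keys and five-minute window are illustrative assumptions, not details of Lyft's actual pipeline.

```python
# Minimal sketch of a sliding-window feature (assumed fields, not Lyft's real pipeline):
# count events per region over the last five minutes, readable at any moment.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 300  # assumed 5-minute window

class SlidingWindowCounter:
    def __init__(self, window_seconds: int = WINDOW_SECONDS):
        self.window = window_seconds
        self.events = defaultdict(deque)  # region -> timestamps of recent events

    def add_event(self, region: str, ts: float | None = None) -> None:
        ts = time.time() if ts is None else ts
        self.events[region].append(ts)
        self._evict(region, ts)

    def feature_value(self, region: str, now: float | None = None) -> int:
        now = time.time() if now is None else now
        self._evict(region, now)
        return len(self.events[region])

    def _evict(self, region: str, now: float) -> None:
        q = self.events[region]
        while q and now - q[0] > self.window:
            q.popleft()

# Usage: feed events as they stream in, read the feature at inference time.
counter = SlidingWindowCounter()
counter.add_event("region_downtown")
counter.add_event("region_downtown")
print(counter.feature_value("region_downtown"))  # -> 2
```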

Let's Talk AI
#216 - Grok 4, Project Rainier, Kimi K2

Let's Talk AI

Play Episode Listen Later Jul 14, 2025 102:10 Transcription Available


Our 216th episode with a summary and discussion of last week's big AI news! Recorded on 07/11/2025 Hosted by Andrey Kurenkov and Jeremie Harris. Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai Read out our text newsletter and comment on the podcast at https://lastweekin.ai/. In this episode: xAI launches Grok 4 with breakthrough performance across benchmarks, becoming the first true frontier model outside established labs, alongside a $300/month subscription tier Grok's alignment challenges emerge with antisemitic responses, highlighting the difficulty of steering models toward "truth-seeking" without harmful biases Perplexity and OpenAI launch AI-powered browsers to compete with Google Chrome, signaling a major shift in how users interact with AI systems Meta study reveals AI tools actually slow down experienced developers by 20% on complex tasks, contradicting expectations and anecdotal reports of productivity gains Timestamps + Links: (00:00:10) Intro / Banter (00:01:02) News Preview       Tools & Apps (00:01:59) Elon Musk's xAI launches Grok 4 alongside a $300 monthly subscription | TechCrunch (00:15:28) Elon Musk's AI chatbot is suddenly posting antisemitic tropes (00:29:52) Perplexity launches Comet, an AI-powered web browser | TechCrunch (00:32:54) OpenAI is reportedly releasing an AI browser in the coming weeks | TechCrunch (00:33:27) Replit Launches New Feature for its Agent, CEO Calls it ‘Deep Research for Coding' (00:34:40) Cursor launches a web app to manage AI coding agents (00:36:07) Cursor apologizes for unclear pricing changes that upset users | TechCrunch       Applications & Business (00:39:10) Lovable on track to raise $150M at $2B valuation (00:41:11) Amazon built a massive AI supercluster for Anthropic called Project Rainier – here's what we know so far (00:46:35) Elon Musk confirms xAI is buying an overseas power plant and shipping the whole thing to the U.S. to power its new data center — 1 million AI GPUs and up to 2 Gigawatts of power under one roof, equivalent to powering 1.9 million homes (00:48:16) Microsoft's own AI chip delayed six months in major setback — in-house chip now reportedly expected in 2026, but won't hold a candle to Nvidia Blackwell (00:49:54) Ilya Sutskever becomes CEO of Safe Superintelligence after Meta poached Daniel Gross (00:52:46) OpenAI's Stock Compensation Reflect Steep Costs of Talent Wars       Projects & Open Source (00:58:04) Hugging Face Releases SmolLM3: A 3B Long-Context, Multilingual Reasoning Model - MarkTechPost (00:58:33) Kimi K2: Open Agentic Intelligence (00:58:59) Kyutai Releases 2B Parameter Streaming Text-to-Speech TTS with 220ms Latency and 2.5M Hours of Training       Research & Advancements (01:02:14) Does Math Reasoning Improve General LLM Capabilities? Understanding Transferability of LLM Reasoning (01:07:58) Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity (01:13:03) Mitigating Goal Misgeneralization with Minimax Regret (01:17:01) Correlated Errors in Large Language Models (01:20:31) What skills does SWE-bench Verified evaluate?       Policy & Safety (01:22:53) Evaluating Frontier Models for Stealth and Situational Awareness (01:25:49) When Chain of Thought is Necessary, Language Models Struggle to Evade Monitors (01:30:09) Why Do Some Language Models Fake Alignment While Others Don't? 
(01:34:35) ‘Positive review only': Researchers hide AI prompts in papers (01:35:40) Google faces EU antitrust complaint over AI Overviews (01:36:41) ‘The transfer of user data by DeepSeek to China is unlawful': Germany calls for Google and Apple to remove the AI app from their stores (01:37:30) Virology Capabilities Test (VCT): A Multimodal Virology Q&A Benchmark

a16z
Enabling Agents and Battling Bots on an AI-Centric Web

a16z

Play Episode Listen Later Jul 4, 2025 26:24


Taken from the AI + a16z podcast, Arcjet CEO David Mytton sits down with a16z partner Joel de la Garza to discuss the increasing complexity of managing who can access websites and other web apps, and what they can do there. A primary challenge is determining whether automated traffic is coming from bad actors and troublesome bots, or perhaps AI agents trying to buy a product on behalf of a real customer. Joel and David dive into the challenge of analyzing every request without adding latency, and how faster inference at the edge opens up new possibilities for fraud prevention, content filtering, and even ad tech.
Topics include:
- Why traditional threat analysis won't work for the AI-powered web
- The need for full-context security checks
- How to perform sub-second, cost-effective inference
- The wide range of potential actors and actions behind any given visit
As David puts it, lower inference costs are key to letting apps act on the full context window — everything you know about the user, the session, and your application.
Follow everyone on social media: David Mytton, Joel de la Garza
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
Stay Updated: Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
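As a rough illustration of the kind of full-context, per-request check discussed here (and not Arcjet's actual implementation), the sketch below scores a request from locally available signals and blocks it above a threshold. Every signal name, weight, and threshold is an assumption made for the example.

```python
# Toy per-request risk check in the spirit of the episode; signals, weights, and the
# threshold are illustrative assumptions, not Arcjet's logic.
from dataclasses import dataclass

@dataclass
class RequestContext:
    user_agent: str
    requests_last_minute: int
    has_session_cookie: bool
    declared_agent: bool  # e.g. traffic that self-identifies as an AI agent

def score_request(ctx: RequestContext) -> float:
    """Return a 0..1 risk score computed inline, without a slow external round trip."""
    score = 0.0
    if "python-requests" in ctx.user_agent.lower():
        score += 0.4  # headless HTTP client
    if ctx.requests_last_minute > 60:
        score += 0.4  # unusually high request rate
    if not ctx.has_session_cookie:
        score += 0.2  # no prior session context
    if ctx.declared_agent:
        score -= 0.3  # well-behaved, self-identified agent gets some credit
    return max(0.0, min(1.0, score))

def decide(ctx: RequestContext, block_threshold: float = 0.7) -> str:
    return "block" if score_request(ctx) >= block_threshold else "allow"

# Usage: evaluate each incoming request before it reaches the application.
print(decide(RequestContext("python-requests/2.32", 120, False, False)))  # -> block
```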

This Week in Virology
TWiV 1229: Virology throughout Europe

This Week in Virology

Play Episode Listen Later Jun 22, 2025 83:18


Rich travels to Dubrovnik for the European Congress of Virology 2025 and Vincent joins via Zoom to speak with Stéphane Blanc, Vanda Juranić Lisnić, and Elisabeth Puchhammer-Stöckl about their work on plant viruses, cytomegalovirus, and Epstein-Barr virus. Hosts: Vincent Racaniello and Rich Condit Guests: Stéphane Blanc, Vanda Juranić Lisnić, and Elisabeth Puchhammer-Stöckl Subscribe (free): Apple Podcasts, RSS, email Become a patron of TWiV! Links for this episode Support science education at MicrobeTV Assembled plant viruses move through plants (PLoS Path) Genome formula of multipartite virus (PLoS Path) Immune surveillance of cytomegalovirus in tissues (Cell Mol Immunol) Cytomegalovirus and NK cells (Nat Commun) Epstein-Barr virus and multiple sclerosis (J Clin Inves) Epstein-Barr virus and lymphoproliferative disease (Transplant) Timestamps by Jolene Ramsey. Thanks! Intro music is by Ronald Jenkees Send your virology questions and comments to twiv@microbe.tv Content in this podcast should not be construed as medical advice.

Heal Thy Self with Dr. G
When No One Tells You About Herpes! #378

Heal Thy Self with Dr. G

Play Episode Listen Later May 5, 2025 26:47


Over 70% of the world has herpes—yet it's still taboo. In this episode, Dr. G breaks down the truth about HSV-1 & HSV-2, from how it spreads to how to heal physically and emotionally. He shares the Heal Thyself protocol, featuring powerful supplements, nervous system tools, and mindset shifts to reduce outbreaks and reclaim your peace. #wellnessjourney #herpes #wellness ==== Thank You To Our Sponsors! Calroy Head on over to calroy.com/drg and save over $50 when you purchase the Vascanox and Arterosil bundle! ==== Timestamps: 00:00:00 - Understanding the Herpes Virus 00:02:56 - Prevalence, Latency & Treatment 06:00 - Transmission: Myths & Facts 08:58 - Triggers, Treatments & Misconceptions 12:02:47 - Antiviral Drugs & Holistic Healing 15:09 - Treatment: Sleep, Stress & Supplements 18:09 - Natural Herpes Remedies 21:15 - Treatment & Emotional Roots 24:10 - Healing Herpes: Shame & Self-Ownership Be sure to like and subscribe to #HealThySelf Hosted by Doctor Christian Gonzalez N.D. Follow Doctor G on Instagram @doctor.gonzalez https://www.instagram.com/doctor.gonzalez/ Sign up for our newsletter! https://drchristiangonzalez.com/newsletter/