Jimmy Bogard joins PodRocket to talk about making monoliths more modular, why boundaries matter, and how to avoid turning systems into distributed monoliths. From refactoring techniques and database migrations at scale to lessons from Stripe and WordPress, he shares practical ways to balance architecture choices. We also explore how tools like Claude and Lambda fit into modern development and what teams should watch for with latency, transactions, and growing complexity.

Links
Website: https://www.jimmybogard.com
X: https://x.com/jbogard
Github: https://github.com/jbogard
LinkedIn: https://www.linkedin.com/in/jimmybogard/

Resources
Modularizing the Monolith - Jimmy Bogard - NDC Oslo 2024: https://www.youtube.com/watch?v=fc6_NtD9soI

We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Fill out our listener survey (https://t.co/oKVAEXipxu)! Let us know by sending an email to our producer, Em, at emily.kochanek@logrocket.com (mailto:emily.kochanek@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod).

Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers!

What does LogRocket do? LogRocket provides AI-first session replay and analytics that surface the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today. (https://logrocket.com/signup/?pdr)

Special Guest: Jimmy Bogard.
Monitoring and troubleshooting latency can be tricky. If it’s in the network, was it the IP stack? A NIC? A switch buffer? A middlebox somewhere on the WAN? If it’s the application, can you, the network engineer, bring receipts to the app team? And what if you need to build and operate a network that’s... Read more »
Today we are joined by Gorkem and Batuhan from Fal.ai, the fastest-growing generative media inference provider. They recently raised a $125M Series C and crossed $100M ARR. We covered how they pivoted from dbt pipelines to diffusion model inference, which models really changed the trajectory of image generation, and the future of AI video. Enjoy!

00:00 - Introductions
04:58 - History of Major AI Models and Their Impact on Fal.ai
07:06 - Pivoting to Generative Media and Strategic Business Decisions
10:46 - Technical Discussion on CUDA Optimization and Kernel Development
12:42 - Inference Engine Architecture and Kernel Reusability
14:59 - Performance Gains and Latency Trade-offs
15:50 - Discussion of Model Latency Importance and Performance Optimization
17:56 - Importance of Latency and User Engagement
18:46 - Impact of Open Source Model Releases and Competitive Advantage
19:00 - Partnerships with Closed Source Model Developers
20:06 - Collaborations with Closed-Source Model Providers
21:28 - Serving Audio Models and Infrastructure Scalability
22:29 - Serverless GPU Infrastructure and Technical Stack
23:52 - GPU Prioritization: H100s and Blackwell Optimization
25:00 - Discussion on ASICs vs. General Purpose GPUs
26:10 - Architectural Trends: MMDiTs and Model Innovation
27:35 - Rise and Decline of Distillation and Consistency Models
28:15 - Draft Mode and Streaming in Image Generation Workflows
29:46 - Generative Video Models and the Role of Latency
30:14 - Auto-Regressive Image Models and Industry Reactions
31:35 - Discussion of OpenAI's Sora and Competition in Video Generation
34:44 - World Models and Creative Applications in Games and Movies
35:27 - Video Models' Revenue Share and Open-Source Contributions
36:40 - Rise of Chinese Labs and Partnerships
38:03 - Top Trending Models on Hugging Face and ByteDance's Role
39:29 - Monetization Strategies for Open Models
40:48 - Usage Distribution and Model Turnover on FAL
42:11 - Revenue Share vs. Open Model Usage Optimization
42:47 - Moderation and NSFW Content on the Platform
44:03 - Advertising as a Key Use Case for Generative Media
45:37 - Generative Video in Startup Marketing and Virality
46:56 - LoRA Usage and Fine-Tuning Popularity
47:17 - LoRA Ecosystem and Fine-Tuning Discussion
49:25 - Post-Training of Video Models and Future of Fine-Tuning
50:21 - ComfyUI Pipelines and Workflow Complexity
52:31 - Requests for Startups and Future Opportunities in the Space
53:33 - Data Collection and RedPajama-Style Initiatives for Media Models
53:46 - RL for Image and Video Models: Unknown Potential
55:11 - Requests for Models: Editing and Conversational Video Models
57:12 - VO3 Capabilities: Lip Sync, TTS, and Timing
58:23 - Bitter Lesson and the Future of Model Workflows
58:44 - FAL's Hiring Approach and Team Structure
59:29 - Team Structure and Scaling Applied ML and Performance Teams
1:01:41 - Developer Experience Tools and Low-Code/No-Code Integration
1:03:04 - Improving Hiring Process with Public Challenges and Benchmarks
1:04:02 - Closing Remarks and Culture at FAL
In a time when the world is run by data and real-time actions, edge computing is quickly becoming a must-have in enterprise technology. In a recent episode of the Tech Transformed podcast, host Shubhangi Dua, a Podcast Producer and B2B Tech Journalist, discusses the complexities of this distributed future with guest Dmitry Panenkov, Founder and CEO of emma.

The conversation dives into how latency is the driving force behind edge adoption. Applications like autonomous vehicles and real-time analytics cannot afford to wait on a round trip to a centralised data centre; they need to compute where the data is generated. Rather than viewing edge as a rival to the cloud, the discussion highlights it as a natural extension. Edge environments bring speed, resilience, and data control, all necessary capabilities for modern applications.

Adopting Edge Computing
For organisations looking to adopt edge computing, this episode lays out a practical step-by-step approach. The skills necessary in multi-cloud environments – automation, infrastructure as code, and observability – translate well to edge deployments. These capabilities are essential for managing the unique challenges of edge devices, which may be disconnected, have lower power, or be located in hard-to-reach areas. Without this level of operational maturity, Panenkov warns of a "zombie apocalypse" of unmanaged devices.

Simplifying Complexity
Managing different APIs, SDKs, and vendor lock-in across a distributed network can be a challenging task, and this is where platforms like emma become crucial. Alluding to emma's mission, Panenkov explains, "We're building a unified platform that simplifies the way people interact with different cloud and computer environments, whether these are in a public setting or private data centres or even at the edge." Overall, emma creates a unified API layer and user interface, which simplifies the complexity. It helps businesses manage, automate, and scale their workloads from a single vantage point and reduces the burden on IT teams. Reducing the need for a large team of highly skilled professionals also leads to substantial cost savings: emma's customers have seen their cloud bills go down significantly, and updates can be rolled out much faster using the platform.

Takeaways
Edge computing is becoming a reality for more organisations.
Latency-sensitive applications drive the need for edge computing.
Real-time analytics and industry automation benefit from edge computing.
Edge computing enhances resilience, cost efficiency, and data sovereignty.
Integrating edge into cloud strategies requires automation and observability.
Maturity in operational practices, like automation and observability, is essential for...
Join Lois Houston and Nikita Abraham as they chat with Yunus Mohammed, a Principal Instructor at Oracle University, about the key stages of AI model development. From gathering and preparing data to selecting, training, and deploying models, learn how each phase impacts AI's real-world effectiveness. The discussion also highlights why monitoring AI performance and addressing evolving challenges are critical for long-term success.

AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/252500
Oracle University Learning Community: https://education.oracle.com/ou-community
LinkedIn: https://www.linkedin.com/showcase/oracle-university/
X: https://x.com/Oracle_Edu

Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.

--------------------------------------------------------------
Episode Transcript:

00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!

00:25 Lois: Welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services.

Nikita: Hey everyone! In our last episode, we spoke about generative AI and gen AI agents. Today, we're going to look at the key stages in a typical AI workflow. We'll also discuss how data quality, feedback loops, and business goals influence AI success. With us today is Yunus Mohammed, a Principal Instructor at Oracle University.

01:00 Lois: Hi Yunus! We're excited to have you here! Can you walk us through the various steps in developing and deploying an AI model?

Yunus: The first step is to collect data. We gather relevant data, either historical or real time: customer transactions, support tickets, survey feedback, or sensor logs.
A travel company, for example, can collect past booking data to predict future demand. So data is the most crucial component for building your AI models. But it's not just about having the data; you need to prepare it. In the prepare-data step, we clean, organize, and label the data. AI can't learn from messy spreadsheets. We make the data more understandable and organized: removing duplicates, filling missing values with defaults, or formatting dates. All of this comes under organizing the data, and we give labels to the data so that it can be used for supervised learning.

After preparing the data, I go on to selecting the model to train. Now we pick what type of model fits your goals. It can be a traditional ML model, a deep learning network, or a generative model. The model is chosen based on the business problem and the data we have. We then train the model using the prepared data so it can learn the patterns in the data.

After the model is trained, I need to evaluate it. You check how well the model performs: is it accurate? Is it fair? The evaluation metrics vary based on the goal you're trying to reach. If your model frequently misclassifies official emails as spam, it is not ready, so I need to train it further, until it accurately identifies official mail as official and spam as spam.

After evaluating and making sure your model fits well, you go to the next step, which is deploying the model. Once we are happy, we put it into the real world: into a CRM, a web application, or behind an API. So I can expose it through an API, which is an application programming interface, or add it to a CRM, a Customer Relationship Management system, or to a web application that I've got.
For example, a chatbot becomes available on your company's website, and the chatbot might be using a generative AI model. Once I have deployed the model and it is working fine, I need to keep track of how it is working, and monitor and improve it whenever needed. So I go to a stage called monitor and improve. AI isn't set it and forget it. Over time, a lot of changes happen to the data, so we monitor performance and retrain when needed. An e-commerce recommendation model needs updates as trends shift. The end user finally sees the results after all these processes: a better product, a smarter service, or faster decision-making. If we get the flow right, they may not even realize AI is behind the accurate results they're getting.

04:59 Nikita: Got it. So, everything in AI begins with data. But what are the different types of data used in AI development?

Yunus: We work with three main types of data: structured, unstructured, and semi-structured. Structured data is like a clean set of tables in Excel or a database: rows and columns with clear, consistent information. Unstructured data is messy data, like your emails, recordings of customer calls, videos, or social media posts. Semi-structured data is things like logs, XML files, or JSON files: not quite neat, but not entirely messy either.

05:58 Nikita: OK… and how do the data needs vary for different AI approaches?

Yunus: Machine learning often needs labeled data. A bank might feed in past transactions labeled as fraud or not fraud to train a fraud detection model. But machine learning also includes unsupervised learning, like clustering customer spending behavior. Here, no labels are needed.
Deep learning needs a lot of data, usually unstructured: thousands of loan documents, call recordings, or scanned checks. These are fed into neural networks to detect complex patterns. Data science focuses on insights rather than predictions; a data scientist at the bank might use customer relationship management exports and customer demographics to analyze which age group prefers credit cards over loans. Then we have generative AI, which thrives on diverse, unstructured, internet-scale data: books, code, images, chat logs. These models, like ChatGPT, are trained to generate responses, mimic styles, and synthesize content. So generative AI can power a banking virtual assistant trained on chat logs and frequently asked questions to answer customer queries 24/7.

07:35 Lois: What are the challenges when dealing with data?

Yunus: Data isn't just about having enough. We must also think about quality: is it accurate and relevant? Volume: do we have enough for the model to learn from? Bias: does my data contain unfair patterns, like rejecting more loan applications from a certain zip code? And also privacy: are we handling personal data responsibly, especially data that is critical or regulated, like banking data or patients' health data? Before building anything smart, we must start smart.

08:23 Lois: So, we've established that collecting the right data is non-negotiable for success. Then comes preparing it, right?

Yunus: This is arguably the most important part of any AI or data science project. Clean data leads to reliable predictions. Imagine you have a column for age, and someone accidentally entered an age of 999. That's likely a data entry error. Or maybe a few rows have missing ages. So we either fix, remove, or impute such issues.
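The fix/remove/impute step just described can be sketched in a few lines of Python. This is only an illustration: the record fields, the 0–120 validity range, and the default value are invented here, not taken from the episode.

```python
# Toy customer records showing the problems described above:
# an impossible age, a missing age, and a duplicate row.
rows = [
    {"id": 1, "age": 34},
    {"id": 2, "age": 999},   # data-entry error
    {"id": 3, "age": None},  # missing value
    {"id": 1, "age": 34},    # duplicate
]

def clean(rows, default_age=30):
    seen, out = set(), []
    for r in rows:
        if r["id"] in seen:          # remove duplicates
            continue
        seen.add(r["id"])
        age = r["age"]
        if age is None:              # impute missing values with a default
            age = default_age
        elif not 0 <= age <= 120:    # fix obviously invalid entries
            age = default_age
        out.append({"id": r["id"], "age": age})
    return out

print(clean(rows))
# [{'id': 1, 'age': 34}, {'id': 2, 'age': 30}, {'id': 3, 'age': 30}]
```

In practice a library like pandas would handle this, but the logic is the same: detect, then fix, remove, or impute.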
This step ensures our model isn't misled by incorrect values. Dates are often stored in different formats: month first and day next in some places, day first and month next in others. We want to bring everything into a consistent, usable format. This process is called transformation. Machine learning models can get confused if one feature, like income, ranges from 10,000 to 100,000, and another, like the number of kids, ranges from 0 to 5. So we normalize or scale values to bring them into a similar range, say 0 to 1. Models also don't understand words like small, medium, or large; we convert them into numbers using encoding, one simple way being to assign 1, 2, and 3 respectively. And then there is removing stop words and punctuation and breaking sentences into smaller meaningful units called tokens; this is used for generative AI tasks. In deep learning, especially for gen AI, image or audio inputs must be of uniform size and format.

10:31 Lois: And does each AI system have a different way of preparing data?

Yunus: For machine learning, the focus is on cleaning, encoding, and scaling. Deep learning needs resizing and normalization for text and images. Data science is about reshaping, aggregating, and getting data ready for insights. Generative AI needs special preparation, like chunking and tokenizing large documents or compressing images.

11:06 Oracle University's Race to Certification 2025 is your ticket to free training and certification in today's hottest tech. Whether you're starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That's education.oracle.com/race-to-certification-2025.
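The normalization and encoding steps described before the break can be sketched like this. The ranges and category codes are illustrative, not from the episode; real projects would typically use a library such as scikit-learn.

```python
def min_max_scale(values):
    """Bring a feature into the 0-1 range so no single feature dominates."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Income in the 10,000-100,000 range and number of kids in the 0-5 range
# end up on the same 0-1 scale after normalization.
incomes = [10_000, 55_000, 100_000]
kids = [0, 2, 5]
print(min_max_scale(incomes))  # [0.0, 0.5, 1.0]
print(min_max_scale(kids))     # [0.0, 0.4, 1.0]

# Encode ordered categories as numbers, as in the small/medium/large example.
SIZE_CODES = {"small": 1, "medium": 2, "large": 3}
print([SIZE_CODES[s] for s in ["large", "small", "medium"]])  # [3, 1, 2]
```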
11:50 Nikita: Welcome back! Yunus, how does a user choose the right model to solve their business problem?

Yunus: Just like a business uses different dashboards for marketing versus finance, in AI we use different model types depending on what we're trying to solve. Classification is choosing a category: a real-world example is deciding whether an email is spam or not, used in fraud detection, medical diagnosis, and so on. Regression is used for predicting a number, like what the price of a house will be next month; it's useful for forecasting sales demand or costs. Clustering groups things without labels: a real-world example is segmenting customers based on behavior for targeted marketing, and it helps discover hidden patterns in large data sets. Generation is creating new content: AI writing product descriptions or generating images is a real-world example, as in generative AI models like ChatGPT or DALL-E.

13:16 Nikita: And how do you train a model?

Yunus: We feed it data in small chunks or batches, compare its guesses to the correct values, and adjust its weights to improve next time; the cycle repeats until the model gets good at making predictions. If you're building a fraud detection system, ML may be enough. If you want to analyze medical images, you will need deep learning. If you're building a chatbot, go for a generative model like an LLM. For all of these use cases, you select and train the applicable model as appropriate.

14:04 Lois: OK, now that the model's been trained, what else needs to happen before it can be deployed?

Yunus: Evaluate the model: assess its accuracy, reliability, and real-world usefulness before it's put to work.
That is: how often is the model right? Does it consistently perform well? Is it practical to use in the real world? Bad predictions don't just look bad; they can lead to costly business mistakes. Think of recommending the wrong product to a customer or misidentifying a financial risk. So what we do here is start by splitting the data into two parts: the training data, which is like teaching the model, and the testing data, which is used for checking how well the model has learned. Once trained, the model makes predictions, and we compare those predictions to the actual answers, just like checking your answers after a quiz. We tailor evaluation to the AI type. In machine learning, we care about prediction accuracy. Deep learning is about fitting complex data like voice or images, where the model repeatedly sees examples and tunes itself to reduce errors. In data science, we look for patterns and insights, such as which features matter. In generative AI, we judge by output quality: is it coherent, useful, and natural? The model improves with accuracy and with the number of epochs it is trained for.

15:59 Nikita: So, after all that, we finally come to deploying the model…

Yunus: Deploying a model means integrating it into our actual business system so it can start making decisions, automating tasks, or supporting customer experiences in real time. Think of it like this: training is teaching the model, evaluating is testing it, and deployment is giving it a job. The model needs a home, either in the cloud or on your company's own servers; think of it as putting the AI where it can be reached by other tools. It is then exposed via an API or embedded in an application: this is how the AI becomes usable. Then we have the concept of receiving live data and returning predictions.
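That last step, receiving live data and returning a prediction, can be sketched as a minimal handler. Everything here is invented for illustration: the "model" is a stand-in scoring rule, and the amount feature and threshold are not from the episode.

```python
# A trivially simple "fraud" scorer stands in for a trained model:
# the service wraps it and turns a score into a decision.
def model_score(transaction):
    # Pretend the model learned that very large amounts are risky.
    return min(transaction["amount"] / 10_000, 1.0)

def predict_handler(transaction, threshold=0.8):
    """What an API endpoint would do: receive live data, return a prediction."""
    score = model_score(transaction)
    return {"score": round(score, 2), "flag": score >= threshold}

print(predict_handler({"amount": 9_500}))  # {'score': 0.95, 'flag': True}
print(predict_handler({"amount": 1_200}))  # {'score': 0.12, 'flag': False}
```

In a real deployment this function would sit behind a web framework or a managed inference endpoint; the shape of the exchange, input in, decision out, is the same.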
Receiving live data and returning predictions means the model listens to real-time inputs, like a user typing, searching, clicking, or making a transaction, and your AI instantly responds with a recommendation, a decision, or a result. Deploying the model isn't the end of the story; it's just the beginning of the AI's real-world journey. Models may work well on day one, but things change. Customer behavior might shift. New products get introduced in the market. Economic conditions might evolve, as in the COVID era, when demand shifted and economic conditions changed.

17:48 Lois: Then it's about monitoring and improving the model to keep things reliable over time.

Yunus: The monitor-and-improve loop is a continuous process that ensures an AI model remains accurate, fair, and effective after deployment. With live predictions, the model is running in real time, making decisions or recommendations. In monitoring performance, we ask: are those predictions still accurate and helpful? Is latency acceptable? This is where we track metrics, user feedback, and operational impact. Then we detect issues: is accuracy declining? Are responses biased? Are customers dropping off due to long response times? The next step is to retrain or update the model: we add fresh data, tweak the logic, or even use a better architecture, then deploy the updated model. The new version replaces the old one, and the cycle continues.

18:58 Lois: And are there challenges during this step?

Yunus: The common issues in monitor and improve are model drift, bias, and latency or failures. In model drift, the model becomes less accurate as the environment changes. With bias, the model may favor or penalize certain groups unfairly. With latency or failures, if the model is too slow or fails unpredictably, it disrupts the user experience. Let's take loan approvals.
In loan approvals, if we notice an unusually high rejection rate due to model bias, we might retrain the model with more diverse or balanced data. For a chatbot, we watch for drops in customer satisfaction, which might arise from model failures, and fine-tune the model's responses. And in demand forecasting, if the predictions no longer match real trends, say post-pandemic, due to model drift, we update the model with fresh data.

20:11 Nikita: Thanks for that, Yunus. Any final thoughts before we let you go?

Yunus: No matter how advanced your model is, its effectiveness depends on the quality of the data you feed it. The data needs to be clean, structured, and relevant, and it should map to the problem you're solving. If the foundation is weak, the results will be too. So data preparation is not just a technical step; it is a business-critical stage. Once deployed, AI systems must be monitored continuously: watch for drops in performance, bias creeping in, or outdated logic, and improve the model with new data or refinements. That's what makes AI reliable, ethical, and sustainable in the long run.

21:09 Nikita: Yunus, thank you for this really insightful session. If you're interested in learning more about the topics we discussed today, go to mylearn.oracle.com and search for the AI for You course.

Lois: That's right. You'll find skill checks to help you assess your understanding of these concepts. In our next episode, we'll discuss the idea of buy versus build in the context of AI. Until then, this is Lois Houston…

Nikita: And Nikita Abraham, signing off!

21:39 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
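As a postscript to the monitor-and-improve discussion: the model-drift check described in this episode can be sketched as a simple comparison of live inputs against the training baseline. This is only a toy version, with an invented tolerance and invented demand numbers; real systems use proper statistical drift tests.

```python
from statistics import mean

def drift_alert(train_values, live_values, tolerance=0.25):
    """Flag drift when live inputs shift away from the training data.

    Comparing means against a relative tolerance is the simplest
    possible check; it stands in for a real statistical test.
    """
    baseline = mean(train_values)
    shift = abs(mean(live_values) - baseline) / abs(baseline)
    return shift > tolerance

# Demand seen during training vs. demand after a market shift.
train_demand = [100, 110, 90, 105]
live_demand = [150, 160, 155, 145]
print(drift_alert(train_demand, live_demand))  # True -> time to retrain
```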
Yonatan Sompolinsky is an academic in the field of computer science, best known for his work on the GHOST protocol (Greedy Heaviest Observed Subtree, which was cited in the Ethereum whitepaper) and the way he applied his research to create Kaspa. In this episode, we talk about scaling Proof of Work and why Kaspa might be a worthy contender to process global payments.

Time stamps:
00:01:22 - Debunking rumors: Why some think Yonatan is Satoshi Nakamoto
00:02:52 - Candidates for Satoshi: Charles Hoskinson, Charlie Lee, Zooko, and Alex Chepurnoy
00:03:41 - Alex Chepurnoy as a Satoshi-like figure
00:04:07 - Kaspa overview: DAG structure, no orphaned blocks, generalization of Bitcoin
00:04:55 - Similarities between Kaspa and Bitcoin fundamentals
00:06:12 - Why Kaspa couldn't be built directly on Bitcoin
00:08:05 - Kaspa as generalization of Nakamoto consensus
00:11:55 - Origins of GHOST protocol and early DAG concepts for Bitcoin scaling
00:13:16 - Academic motivation for GHOST and transitioning to computer science
00:13:50 - Turtle pet named Bitcoin
00:15:22 - Increasing block rate in Bitcoin and GHOST protocol
00:16:57 - Meeting Gregory Maxwell and discovering GHOST flaws
00:20:00 - Yonatan's views on drivechains and Bitcoin maximalism
00:20:36 - Defining Bitcoin maximalism: Capital B vs lowercase b
00:23:18 - Satoshi's support for Namecoin and merged mining
00:24:12 - Bitcoin culture in 2013-2018: Opposing other functionalities
00:26:01 - Vitalik's 2014 article on Bitcoin maximalism
00:26:13 - Andrew Poelstra's opposition to other assets on Bitcoin
00:26:38 - Bitcoin culture: Distaste for DeFi, criticism of Ethereum as a scam
00:28:03 - Bitcoin Cash developments: Cash tokens, cash fusion, contracts
00:28:39 - Rejection of Ethereum in Bitcoin circles
00:30:18 - Ethereum's successful PoS transition despite critics
00:35:04 - Ethereum's innovation: From Plasma to ZK rollups, nurturing development
00:37:04 - Stacks protocol and criticism from Luke Dashjr
00:39:02 - Bitcoin culture justifying technical limitations
00:41:01 - Declining Bitcoin adoption as money, rise of altcoins for payments
00:43:02 - Kaspa's aspirations: Merging sound money with DeFi, beyond just payments
00:43:56 - Possibility of tokenized Bitcoin on Kaspa
00:46:30 - Native currency advantage and friction in bridges
00:48:49 - WBTC on Ethereum scale vs Bitcoin L2s
00:53:33 - Quotes: Richard Dawkins on atheism, Milton Friedman on Yap Island money
00:55:44 - Story of Kaspa's messy fair launch in 2021
01:14:08 - Tech demo of Kaspa wallet experience
01:28:45 - Kaspa confirmation times & transaction fees
01:43:26 - GHOST DAG visualizer
01:44:10 - Mining Kaspa
01:55:48 - Data pruning in Kaspa, DAG vs MimbleWimble
02:01:40 - Grin & the fairest launch
02:12:21 - Zcash scaling & ZKP OP code in Kaspa
02:19:50 - Jameson Lopp, cold storage & self custody elitism
02:35:08 - Social recovery
02:41:00 - Amir Taaki, DarkFi & DAO
02:53:10 - Nick Szabo's God Protocols
03:00:00 - Layer twos on Kaspa for DeFi
03:13:09 - How Kaspa's DeFi will resemble Solana
03:24:03 - Centralized exchanges vs DeFi
03:32:05 - The importance of community projects
03:37:00 - DAG KNIGHT and its resilience
03:51:00 - DAG KNIGHT tradeoffs
03:58:18 - Blockchain vs DAG, the bottleneck for Kaspa
04:03:00 - 100 blocks per second?
04:11:43 - Question from Quai's Dr. K
04:17:03 - Doesn't Kaspa require super fast internet?
04:23:10 - Are ASIC miners desirable?
04:33:53 - Why Proof of Work matters
04:35:55 - A short history of Bitcoin mining
04:44:00 - DAG's sequencing
04:49:09 - Phantom GHOST DAG
04:52:47 - Why Kaspa had high inflation initially
04:55:10 - Selfish mining
05:03:00 - K Heavy Hash & other community questions
06:33:20 - Latency settings in DAG KNIGHT for security
06:36:52 - Aviv Zohar's involvement in Kaspa research
06:38:07 - World priced in Kaspa after hyperinflation
06:39:51 - Kaspa's fate intertwined with crypto
06:40:29 - Kaspa contracts vs Solana, why better for banks
06:42:53 - Cohesive developer experience in Kaspa like Solana
06:45:22 - Incorporating ZK design in Kaspa smart contracts
06:47:22 - Heroes: Garry Kasparov
06:48:12 - Shift in attitude from academics like Hoskinson, Buterin, Back
06:53:07 - Adam Back's criticism of Kaspa
06:55:57 - Michael Jordan and LeBron analogy for Bitcoiners' mindset
06:58:02 - Can Kaspa flip Bitcoin in market cap
07:00:34 - Gold and USD market cap comparison
07:06:06 - Collaboration with Kai team
07:10:37 - Community improvement: More context on crypto
07:13:43 - Theoretical maximum TPS for Kaspa
07:16:05 - Full ZK on L1 improvements
07:17:45 - Atomic composability and logic zones in Kaspa
07:23:12 - Sparkle and monolithic UX feel
07:26:00 - Wrapping up: Beating podcast length record, final thoughts on Bitcoin and Kaspa
07:27:31 - Why Yonatan is called a scammer despite explanations
07:32:29 - Luke Dashjr's views and disconnect
07:33:01 - Hope for Bitcoin scaling and revolution
In this episode, Markus Viitamäki, Senior Infrastructure Architect at Embark Studios, joins the podcast to discuss trends in the gaming industry, the main drivers behind creating a new game, and how latency and bandwidth affect the gaming experience.
Andrew Lamb, a veteran of database engine development, shares his thoughts on why Rust is the right tool for developing low-latency systems, not only from the perspective of the code's performance, but also looking at productivity and developer joy. He discusses the overall experience of adopting Rust after a decade of programming in C/C++.
Read a transcript of this interview: http://bit.ly/45qi4eK
Subscribe to the Software Architects' Newsletter for your monthly guide to the essential news and experience from industry peers on emerging patterns and technologies: https://www.infoq.com/software-architects-newsletter
Upcoming Events:
- InfoQ Dev Summit Munich (October 15-16, 2025) Essential insights on critical software development priorities. https://devsummit.infoq.com/conference/munich2025
- QCon San Francisco 2025 (November 17-21, 2025) Get practical inspiration and best practices on emerging software trends directly from senior software developers at early adopter companies. https://qconsf.com/
- QCon AI New York 2025 (December 16-17, 2025) https://ai.qconferences.com/
- QCon London 2026 (March 16-19, 2026) https://qconlondon.com/
The InfoQ Podcasts: Weekly inspiration to drive innovation and build great teams from senior software leaders. Listen to all our podcasts and read interview transcripts:
- The InfoQ Podcast https://www.infoq.com/podcasts/
- Engineering Culture Podcast by InfoQ https://www.infoq.com/podcasts/#engineering_culture
- Generally AI: https://www.infoq.com/generally-ai-podcast/
Follow InfoQ:
- Mastodon: https://techhub.social/@infoq
- X: https://x.com/InfoQ?from=@
- LinkedIn: https://www.linkedin.com/company/infoq/
- Facebook: https://www.facebook.com/InfoQdotcom#
- Instagram: https://www.instagram.com/infoqdotcom/?hl=en
- Youtube: https://www.youtube.com/infoq
- Bluesky: https://bsky.app/profile/infoq.com
Write for InfoQ: Learn and share the changes and innovations in professional software development.
- Join a community of experts.
- Increase your visibility.
- Grow your career.
https://www.infoq.com/write-for-infoq
Latency: it's that tiny, annoying delay between what you say and what you hear back. In the studio, online, or even in your own headphones, it can trip you up, wreck your timing, and make you feel like you're talking to yourself in slow motion. In this episode of The Pro Audio Suite, Robbo, AP, George, and Robert dig into: what latency really is (and why it's not just a tech buzzword), how it sneaks into your recording chain, the difference between “good” latency and “bad” latency, fixes you can do right now without buying a new rig, and when hardware or interface upgrades actually make sense. Whether you're a VO artist fighting through a remote session, a podcaster dealing with talkback lag, or a studio pro chasing perfect sync, this is your guide to killing the delay and getting back in the groove. Proudly supported by Tri-Booth and Austrian Audio. We only partner with brands we believe in, and these two deliver the goods for pro audio pros like you.
“I have a regular chat with a friend of mine in New Zealand. He's a tetraplegic and a musician, so he invents his own music instruments that he can play with his limited motion, and he can send me his instrument over MIDI to where I am across the ocean and we can play together and we can have an engagement. It's not possible for him to come to see me in Europe. It would be so expensive, and a lot of work. So, you know, thank God there's the internet for him, you know. He gets to participate, he has remote concerts, he still plays with his friends. It's really special.” – Rebekah Wilson This episode is the second half of my conversation with Source Elements CEO and remote collaboration specialist Rebekah Wilson as we discuss how physics and neurology collide when it comes to reducing latency, how the pandemic transformed online music collaboration and gave rise to today's generation of at-home musicians, and where Rebekah sees sound, technology, and music itself heading in the future, over both the coming decades and even generations from now. As always, if you have questions for my guest, you're welcome to reach out through the links in the show notes. If you have questions for me, visit audiobrandingpodcast.com, where you'll find a lot of ways to get in touch. Plus, subscribing to the newsletter will let you know when the new podcasts are available, along with other interesting bits of audio-related news. And if you're getting some value from listening, the best ways to show your support are to share this podcast with a friend and leave an honest review. Both those things really help, and I'd love to feature your review on future podcasts. You can leave one either in written or in voice format from the podcast's main page. I would so appreciate that.
(0:00:01) - Impact of Latency on Music Collaboration
We continue our talk about the science of latency, and Rebekah explains how it impacts music in ways that our brains only dimly perceive. “If you add a little bit of latency onto that,” she says, “music's like, one, two... three… music's not very friendly to that [sort of] latency.” She tells us more about how our brains unconsciously adapt to latency, and how technology relies both on improving speed and taking advantage of our ability to filter out information gaps. “What's happening is that you're anticipating it based on this model that's in your brain,” Rebekah explains. “For example, every time you look at a wall or your surroundings, if it's not moving, your brain's not processing it.”
(0:06:02) - Advancements in Remote Music Collaboration
She talks about how the COVID-19 pandemic's lockdown phase led to a boom in online collaboration, some of which continues to thrive today. “There remained a group of people,” she says, “a small group of people, you know, scattered around the world… who were like, ‘You know what? Some interesting things came out of this. Some interesting artistic development is possible here and it's worth pursuing.'” We discuss the technical and creative innovations that emerged from that period, and where they might lead in the years to come as we continue to innovate. “What we love as humans,” Rebekah says, “is to seek new forms of expression. This is what we do, we're adventurers. So we go out, we go into the desert, we go out into the oceans, and we look for where something new is. And you know, music and performance and being together on the internet is still very new for us as humans.”
(0:12:42) - Expanding Music Collaboration With Technology
Our conversation wraps up as we continue to talk about online collaboration and creative efforts that can now
Real-time Feature Generation at Lyft // MLOps Podcast #334 with Rakesh Kumar, Senior Staff Software Engineer at Lyft.
Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter
// Abstract
This session delves into real-time feature generation at Lyft. Real-time feature generation is critical for Lyft, where accurate up-to-the-minute marketplace data is paramount for optimal operational efficiency. We will explore how the infrastructure handles the immense challenge of processing tens of millions of events per minute to generate features that truly reflect current marketplace conditions. Lyft has built this massive infrastructure over time, evolving from a humble start and a naive pipeline. Through lessons learned and iterative improvements, Lyft has made several trade-offs to achieve low-latency, real-time feature delivery. MLOps plays a critical role in managing the lifecycle of these real-time feature pipelines, including monitoring and deployment. We will discuss the practicalities of building and maintaining high-throughput, low-latency real-time feature generation systems that power Lyft's dynamic marketplace and business-critical products.
// Bio
Rakesh Kumar is a Senior Staff Software Engineer at Lyft, specializing in building and scaling Machine Learning platforms. Rakesh has expertise in MLOps, including real-time feature generation, experimentation platforms, and deploying ML models at scale. He is passionate about sharing his knowledge and fostering a culture of innovation. This is evident in his contributions to the tech community through blog posts, conference presentations, and reviewing technical publications.
// Related Links
Website: https://englife101.io/
https://eng.lyft.com/search?q=rakesh
https://eng.lyft.com/real-time-spatial-temporal-forecasting-lyft-fa90b3f3ec24
https://eng.lyft.com/evolution-of-streaming-pipelines-in-lyfts-marketplace-74295eaf1eba
Streaming Ecosystem Complexities and Cost Management // Rohit Agrawal // MLOps Podcast #302 - https://youtu.be/0axFbQwHEh8
~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Rakesh on LinkedIn: /rakeshkumar1007/
Timestamps:
[00:00] Rakesh preferred coffee
[00:24] Real-time machine learning
[04:51] Latency tricks explanation
[09:28] Real-time problem evolution
[15:51] Config management complexity
[18:57] Data contract implementation
[23:36] Feature store
[28:23] Offline vs online workflows
[31:02] Decision-making in tech shifts
[36:54] Cost evaluation frequency
[40:48] Model feature discussion
[49:09] Hot shard tricks
[55:05] Pipeline feature bundling
[57:38] Wrap up
This week, we highlight Disney's new content bundling deals with Charter and ITV, as well as Paramount's new agreement with DIRECTV. We discuss Amazon's investment in AI to help remaster SD content into HD, and what we think the impact of AI can be in the video streaming workflow. We also discuss why the media is getting it wrong when they suggest that Google has won the living room, as YouTube tightens its enforcement of its monetization guidelines. We break down the latest rumors that Apple will acquire Formula 1 content and why Apple's F1 movie was about more than just content, since it ties into the hardware of the iPhone. Finally, we discuss some recent online comments about ultra-low-latency deployments for media use cases, which overlook tangible or measurable business benefits. Podcast produced by Security Halt Media
Our 216th episode with a summary and discussion of last week's big AI news! Recorded on 07/11/2025 Hosted by Andrey Kurenkov and Jeremie Harris. Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai Read out our text newsletter and comment on the podcast at https://lastweekin.ai/. In this episode: xAI launches Grok 4 with breakthrough performance across benchmarks, becoming the first true frontier model outside established labs, alongside a $300/month subscription tier Grok's alignment challenges emerge with antisemitic responses, highlighting the difficulty of steering models toward "truth-seeking" without harmful biases Perplexity and OpenAI launch AI-powered browsers to compete with Google Chrome, signaling a major shift in how users interact with AI systems Meta study reveals AI tools actually slow down experienced developers by 20% on complex tasks, contradicting expectations and anecdotal reports of productivity gains Timestamps + Links: (00:00:10) Intro / Banter (00:01:02) News Preview Tools & Apps (00:01:59) Elon Musk's xAI launches Grok 4 alongside a $300 monthly subscription | TechCrunch (00:15:28) Elon Musk's AI chatbot is suddenly posting antisemitic tropes (00:29:52) Perplexity launches Comet, an AI-powered web browser | TechCrunch (00:32:54) OpenAI is reportedly releasing an AI browser in the coming weeks | TechCrunch (00:33:27) Replit Launches New Feature for its Agent, CEO Calls it ‘Deep Research for Coding' (00:34:40) Cursor launches a web app to manage AI coding agents (00:36:07) Cursor apologizes for unclear pricing changes that upset users | TechCrunch Applications & Business (00:39:10) Lovable on track to raise $150M at $2B valuation (00:41:11) Amazon built a massive AI supercluster for Anthropic called Project Rainier – here's what we know so far (00:46:35) Elon Musk confirms xAI is buying an overseas power plant and shipping the whole thing to the U.S. 
to power its new data center — 1 million AI GPUs and up to 2 Gigawatts of power under one roof, equivalent to powering 1.9 million homes (00:48:16) Microsoft's own AI chip delayed six months in major setback — in-house chip now reportedly expected in 2026, but won't hold a candle to Nvidia Blackwell (00:49:54) Ilya Sutskever becomes CEO of Safe Superintelligence after Meta poached Daniel Gross (00:52:46) OpenAI's Stock Compensation Reflects Steep Costs of Talent Wars Projects & Open Source (00:58:04) Hugging Face Releases SmolLM3: A 3B Long-Context, Multilingual Reasoning Model - MarkTechPost (00:58:33) Kimi K2: Open Agentic Intelligence (00:58:59) Kyutai Releases 2B Parameter Streaming Text-to-Speech TTS with 220ms Latency and 2.5M Hours of Training Research & Advancements (01:02:14) Does Math Reasoning Improve General LLM Capabilities? Understanding Transferability of LLM Reasoning (01:07:58) Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity (01:13:03) Mitigating Goal Misgeneralization with Minimax Regret (01:17:01) Correlated Errors in Large Language Models (01:20:31) What skills does SWE-bench Verified evaluate? Policy & Safety (01:22:53) Evaluating Frontier Models for Stealth and Situational Awareness (01:25:49) When Chain of Thought is Necessary, Language Models Struggle to Evade Monitors (01:30:09) Why Do Some Language Models Fake Alignment While Others Don't? (01:34:35) 'Positive review only': Researchers hide AI prompts in papers (01:35:40) Google faces EU antitrust complaint over AI Overviews (01:36:41) 'The transfer of user data by DeepSeek to China is unlawful': Germany calls for Google and Apple to remove the AI app from their stores (01:37:30) Virology Capabilities Test (VCT): A Multimodal Virology Q&A Benchmark
Brady Volpe and DOCSIS expert John Downey recap the latest from ANGA COM 2025 in Germany.
Taken from the AI + a16z podcast, Arcjet CEO David Mytton sits down with a16z partner Joel de la Garza to discuss the increasing complexity of managing who can access websites and other web apps, and what they can do there. A primary challenge is determining whether automated traffic is coming from bad actors and troublesome bots, or perhaps AI agents trying to buy a product on behalf of a real customer. Joel and David dive into the challenge of analyzing every request without adding latency, and how faster inference at the edge opens up new possibilities for fraud prevention, content filtering, and even ad tech. Topics include:
- Why traditional threat analysis won't work for the AI-powered web
- The need for full-context security checks
- How to perform sub-second, cost-effective inference
- The wide range of potential actors and actions behind any given visit
As David puts it, lower inference costs are key to letting apps act on the full context window — everything you know about the user, the session, and your application. Follow everyone on social media: David Mytton, Joel de la Garza. Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts. Stay Updated: Let us know what you think: https://ratethispodcast.com/a16z Find a16z on Twitter: https://twitter.com/a16z Find a16z on LinkedIn: https://www.linkedin.com/company/a16z Subscribe on your favorite podcast app: https://a16z.simplecast.com/ Follow our host: https://x.com/eriktorenberg Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
This special CDN-focused podcast details the latest on delivery pricing and industry trends. I discuss delivery commits, QoE measurement, DIY deployments, HD HDR bitrates, and the impact of vendors exiting the market. I also cover how content owners perform capacity planning at the ASN level, leeching, latency, and why multicasting and P2P won't positively impact the industry. The data comes from my CDN pricing survey, as well as hosting panels and private events at the NAB Show Streaming Summit in April, which included OTT platforms, content owners, broadcasters, sports leagues, and others. Podcast produced by Security Halt Media
This week, we highlight all the sports news from Peacock, Pac-12, ESPN, Optus Sport, Amazon, IMG, and NASCAR, detailing viewership stats and new licensing deals. We also cover the recent layoffs at Disney, the BBC's low-latency live streaming trial, and the current ad loads of Max, Prime Video, and Peacock. Finally, we cover some vendor news, including Wowza's acquisition of AI tech company AVA Intellect, and discuss some recent data I collected on the use of AI in the streaming video workflow. Podcast produced by Security Halt Media
Rich travels to Dubrovnik for the European Congress of Virology 2025 and Vincent joins via Zoom to speak with Stéphane Blanc, Vanda Juranić Lisnić, and Elisabeth Puchhammer-Stöckl about their work on plant viruses, cytomegalovirus, and Epstein-Barr virus. Hosts: Vincent Racaniello and Rich Condit Guests: Stéphane Blanc, Vanda Juranić Lisnić, and Elisabeth Puchhammer-Stöckl Subscribe (free): Apple Podcasts, RSS, email Become a patron of TWiV! Links for this episode Support science education at MicrobeTV Assembled plant viruses move through plants (PLoS Path) Genome formula of multipartite virus (PLoS Path) Immune surveillance of cytomegalovirus in tissues (Cell Mol Immunol) Cytomegalovirus and NK cells (Nat Commun) Epstein-Barr virus and multiple sclerosis (J Clin Inves) Epstein-Barr virus and lymphoproliferative disease (Transplant) Timestamps by Jolene Ramsey. Thanks! Intro music is by Ronald Jenkees Send your virology questions and comments to twiv@microbe.tv Content in this podcast should not be construed as medical advice.
I, Stewart Alsop, welcomed Alex Levin, CEO and co-founder of Regal, to this episode of the Crazy Wisdom Podcast to discuss the fascinating world of AI phone agents. Alex shared some incredible insights into how AI is already transforming customer interactions and what the future holds for company agents, machine-to-machine communication, and even the nature of knowledge itself. Check out this GPT we trained on the conversation!
Timestamps
00:29 Alex Levin shares that people are often more honest with AI agents than human agents, especially regarding payments.
02:41 The surprising persistence of voice as a preferred channel for customer interaction, and how AI is set to revolutionize it.
05:15 Discussion of the three types of AI agents: personal, work, and company agents, and how conversational AI will become the main interface with brands.
07:12 Exploring the shift to machine-to-machine interactions and how AI changes what knowledge humans need versus what machines need.
10:56 The looming challenge of centralization versus decentralization in AI, and how Americans often prioritize experience over privacy.
14:11 Alex explains how tokenized data can offer personalized experiences without compromising specific individual privacy.
25:44 Voice is predicted to become the primary way we interact with brands and technology due to its naturalness and efficiency.
33:21 Why AI agents are easier to implement in contact centers due to different entropy compared to typical software.
38:13 How Regal ensures AI agents stay on script and avoid "hallucinations" by proper training and guardrails.
46:11 The technical challenges in replicating human conversational latency and nuances in AI voice interactions.
Key Insights
AI Elicits Honesty: People tend to be more forthright with AI agents, particularly in financially sensitive situations like discussing overdue payments. Alex speculates this is because individuals may feel less judged by an AI, leading to more truthful disclosures compared to interactions with human agents.
Voice is King, AI is its Heir: Despite predictions of its decline, voice remains a dominant channel for customer interactions. Alex believes that within three to five years, AI will handle as much as 90% of these voice interactions, transforming customer service with its efficiency and availability.
The Rise of Company Agents: The primary interface with most brands is expected to shift from websites and apps to conversational AI agents. This is because voice is a more natural, faster, and emotive way for humans to interact, a behavior already seen in younger generations.
Machine-to-Machine Future: We're moving towards a world where AI agents representing companies will interact directly with AI agents representing consumers. This "machine-to-machine" (M2M) paradigm will redefine commerce and the nature of how businesses and customers engage.
Ontology of Knowledge: As AI systems process vast amounts of information, creating a clear "ontology of knowledge" becomes crucial. This means structuring and categorizing information so AI can understand the context and the user's underlying intent, rather than just processing raw data.
Tokenized Data for Privacy: A potential solution to privacy concerns is "tokenized data." Instead of providing AI with specific personal details, users could share generalized tokens (e.g., "high-intent buyer in 30s") that allow for personalized experiences without revealing sensitive, identifiable information.
AI Highlights Human Inconsistencies: Implementing AI often brings to light existing inconsistencies or unacknowledged issues within a company. For instance, AI might reveal discrepancies between official scripts and how top-performing human agents actually communicate, forcing companies to address these differences.
Influence as a Key Human Skill: In a future increasingly shaped by AI, Sam Altman (via Alex) suggests that the ability to "influence" others will be a paramount human skill. This uniquely human trait will be vital, whether for interacting with other people or for guiding and shaping AI systems.
Contact Information
Regal AI: regal.ai
Email: hello@regal.ai
LinkedIn: www.linkedin.com/in/alexlevin1/
Over 70% of the world has herpes—yet it's still taboo. In this episode, Dr. G breaks down the truth about HSV-1 & HSV-2, from how it spreads to how to heal physically and emotionally. He shares the Heal Thyself protocol, featuring powerful supplements, nervous system tools, and mindset shifts to reduce outbreaks and reclaim your peace. #wellnessjourney #herpes #wellness ==== Thank You To Our Sponsors! Calroy Head on over to calroy.com/drg and save over $50 when you purchase the Vascanox and Arterosil bundle! ==== Timestamps:
00:00 - Understanding the Herpes Virus
02:56 - Prevalence, Latency & Treatment
06:00 - Transmission: Myths & Facts
08:58 - Triggers, Treatments & Misconceptions
12:02 - Antiviral Drugs & Holistic Healing
15:09 - Treatment: Sleep, Stress & Supplements
18:09 - Natural Herpes Remedies
21:15 - Treatment & Emotional Roots
24:10 - Healing Herpes: Shame & Self-Ownership
Be sure to like and subscribe to #HealThySelf Hosted by Doctor Christian Gonzalez N.D. Follow Doctor G on Instagram @doctor.gonzalez https://www.instagram.com/doctor.gonzalez/ Sign up for our newsletter! https://drchristiangonzalez.com/newsletter/
No Priors: Artificial Intelligence | Machine Learning | Technology | Startups
On this episode of No Priors, Sarah sits down with Isa Fulford, one of the masterminds behind deep research. They unpack how the initiative began, the role of human expert data, and what it takes to build agents with real-world capability and even taste. Isa shares the differences between deep research and OpenAI's o3 model, the challenges around latency, and how she sees agent capabilities evolving. Plus, OpenAI has announced that deep research is free for all US users starting today. Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @IsaFulf Show Notes: 0:00 Deep research's inception & evolution 6:12 Data creation 7:20 Reinforcement fine-tuning 9:05 Why human expert data matters 11:23 Failure modes of agents 13:55 The roadmap ahead for Deep Research 18:32 How do agents develop taste? 19:29 Experience and path to building a broadly capable agent 22:03 Deep research vs. o3 25:55 Latency 27:56 Predictions for agent capabilities
To keep you in your flow, M365 Copilot Pages will now open in the M365 Copilot App home, rather than opening in Loop. Keep curating and conversing with Copilot. Also, Teams town hall meetings that have been created using a Teams Premium license will enjoy a much lower latency between the production and audience experience.
- Microsoft Viva Connections: New Engage card in Connections dashboard
- Removing Microsoft 365 Copilot Actions from Targeted Release
- Microsoft Purview | Retiring Classic Content Search, Classic eDiscovery (Standard) Cases, Export PowerShell Parameters
- Microsoft 365 Copilot: Create a PowerPoint slide from a file or prompt
- Microsoft Teams Premium: Ultra-low latency (ULL) attendee experience for town halls
- Microsoft Teams: town hall organizers, co-organizers, presenters can join the event to preview as attendee
- Pages created in Microsoft 365 Copilot Chat will open in Microsoft 365 Copilot app
Join Daniel Glenn and Darrell "as a Service" Webster as they cover the latest messages in the Microsoft 365 Message Center. Check out Darrell & Daniel's own YouTube channels at:
Darrell - https://youtube.com/modernworkmentor
Daniel - https://youtube.com/DanielGlenn
In this episode, we will discuss three recent Apple announcements: a new software update that brings lossless audio and ultra-low-latency audio to AirPods Max, delivering the ultimate listening experience and even greater performance for music production; new operating system updates, including iOS 18.4; and the scheduling of WWDC2025. Let's go to the show to learn more.
Ben Holmes, product engineer at Warp, joins PodRocket to talk about local-first web apps and what it takes to run a database directly in the browser. He breaks down how moving data closer to the user can reduce latency, improve performance, and simplify frontend development. Learn about SQLite in the browser, syncing challenges, handling conflicts, and tools like WebAssembly, IndexedDB, and CRDTs. Plus, Ben shares insights from building his own SimpleSyncEngine and where local-first development is headed! Links https://bholmes.dev https://www.linkedin.com/in/bholmesdev https://www.youtube.com/@bholmesdev https://x.com/bholmesdev https://bsky.app/profile/bholmes.dev https://github.com/bholmesdev We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com (mailto:emily.kochanekketner@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today. (https://logrocket.com/signup/?pdr) Special Guest: Ben Holmes.
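The conflict-handling idea the episode mentions (CRDTs) can be sketched in a few lines. This grow-only counter is an illustrative example only, with names of our own invention, not code from Ben's SimpleSyncEngine:

```typescript
// A G-Counter CRDT: each replica (e.g. a browser tab) owns one slot
// and only ever increments its own slot.
type GCounter = Record<string, number>;

function increment(counter: GCounter, replicaId: string): GCounter {
  // Return a new state with this replica's slot bumped by one.
  return { ...counter, [replicaId]: (counter[replicaId] ?? 0) + 1 };
}

function merge(a: GCounter, b: GCounter): GCounter {
  // Take the per-replica maximum; concurrent updates never conflict.
  const merged: GCounter = { ...a };
  for (const [id, n] of Object.entries(b)) {
    merged[id] = Math.max(merged[id] ?? 0, n);
  }
  return merged;
}

function value(counter: GCounter): number {
  // The logical value is the sum of every replica's slot.
  return Object.values(counter).reduce((sum, n) => sum + n, 0);
}

// Two replicas diverge offline, then sync.
let a: GCounter = {};
let b: GCounter = {};
a = increment(a, "tab-1");
a = increment(a, "tab-1");
b = increment(b, "tab-2");
console.log(value(merge(a, b))); // 3
```

Because merge takes per-replica maxima, replicas can exchange state in any order, any number of times, and still converge; that property is what lets local-first apps sync without central coordination.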
Industrial Talk is onsite at DistribuTech 2025 and talking to Marcus McCarthy, Sr. Vice President at Siemens Grid Software, about "Energy Solutions for the Future". Scott MacKenzie and Marcus McCarthy discuss the evolving utility industry and the role of digital twins in improving efficiency and reliability. Marcus highlights the challenges of aging infrastructure, increased power demand, and the need for carbon removal. He emphasizes the importance of accurate digital models for better planning and decision-making. Marcus explains how Siemens' digital twin solutions enable real-time operations and scenario simulations, enhancing network management. They also touch on the practicality of cloud technology and the industry's readiness to adopt new technologies. The conversation underscores the urgency for utilities to invest in digital twins to meet future energy demands and optimize grid performance.
Action Items
[ ] Connect with Marcus McCarthy on LinkedIn or at marcus.mccarthy@siemens.com to discuss further
[ ] Establish accurate digital models of the utility network (digital twins) with temporal stamping to simulate future scenarios
[ ] Explore how high-energy consumption facilities like data centers can optimize their interaction with the grid
Outline
Introduction and Welcome: Scott MacKenzie, a passionate industry professional dedicated to transferring cutting-edge industry innovations and trends, welcomes listeners to the Industrial Talk Podcast, highlighting the celebration of industry professionals worldwide. Scott mentions the podcast is brought to you by Siemens Smart Infrastructure and Grid Software, encouraging listeners to visit siemens.com for more information. Scott and Marcus discuss the massive scale of the DistribuTech conference in Dallas, Texas, and Scott's limited time to explore the solutions.
Background on Marcus McCarthy: Marcus shares his background, mentioning his move from Ireland to the US about 12 years ago. Marcus discusses his career in utilities, focusing on distribution and transmission software systems. Scott and Marcus agree on the positive aspects of the utility industry, including the people and the current market dynamics. Marcus reflects on the industry's shift from a quiet period to a time of rapid change and innovation.
Challenges and Pressures in the Utility Industry: Marcus highlights the increasing demand for power and the need for safe and reliable delivery. Scott and Marcus discuss the challenges of aging infrastructure and the need for modernization. Marcus explains the complexities of meeting future power demands while addressing carbon removal efforts. Scott shares his experience as a lineman and the evolution of the utility industry from a linear design to a more distributed energy system.
Digital Twins and Their Importance: Scott expresses his enthusiasm for digital twin technology and its potential for simulation and decision-making. Marcus explains the critical role of digital twins in achieving faster, better decision-making in complex environments. Marcus discusses the importance of standardized models and the sharing of planning data among different players in the industry. Marcus highlights the need for real-time operations and the challenges of integrating planning and operations data.
Cloud Technology and Latency
In this episode we are looking at a sector where IT and tech innovation is taking efficiency to a whole new level: manufacturing. Manufacturing is in a precarious position as an industry. In the global north, growth is largely stagnant, according to UN statistics. Even in high-growth economies like China, it's slowing down. It's also notoriously inefficient. So, can tech help? And if so, what does that look like? Joining us to discuss is Dan Klein, an advisor on data and digital transformation with a special interest in the manufacturing sector. This is Technology Now, a weekly show from Hewlett Packard Enterprise. Every week we look at a story that's been making headlines, take a look at the technology behind it, and explain why it matters to organizations and what we can learn from it. About this week's guest, Dan Klein: https://www.linkedin.com/in/dplklein/?originalSubdomain=uk Sources cited in this week's episode: UN stats on the state of global manufacturing: https://stat.unido.org/portal/storage/file/publications/qiip/World_Manufacturing_Production_2024_Q1.pdf Statista report on global manufacturing and efficiency: https://www.statista.com/outlook/io/manufacturing/worldwide Water on Mars: https://pubs.geoscienceworld.org/gsa/geology/article/52/12/939/648640/Seismic-discontinuity-in-the-Martian-crust
Tech behind the Trends on The Element Podcast | Hewlett Packard Enterprise
Recently a new white paper was released on the topic of latency-sensitive workloads. I invited Mark Achtemichuck (X, LinkedIn) to the show to go over the various recommendations and best practices. Mark highlights many important configuration settings and recommends that everyone not only read the white paper but also the vSphere 8 performance documentation. His VMware Explore session also comes highly recommended; make sure to watch it! Disclaimer: The thoughts and opinions shared in this podcast are our own/guest(s), and not necessarily those of Broadcom or VMware by Broadcom.
In this episode of Validated, Austin discusses his new venture DoubleZero with co-founders Andrew McConnell and Matteo Ward. They discuss the necessity of creating high-performance networking specifically tailored for blockchains, comparing it to the traditional internet and private networks. They delve into their backgrounds and how their experiences in telecom and high-frequency trading influence the development of DoubleZero. This episode covers various technical topics including the limitations of the public internet, the importance of a purpose-built network, and how DoubleZero provides decentralized, efficient, and secure connectivity for blockchain validators. DISCLAIMER The content herein is provided for educational, informational, and entertainment purposes only, and does not constitute an offer to sell or a solicitation of an offer to buy any securities, options, futures, or other derivatives related to securities in any jurisdiction, nor should it be relied upon as advice to buy, sell or hold any of the foregoing. This content is intended to be general in nature and is not specific to you, the user or anyone else. You should not make any decision, financial, investment, trading or otherwise, based on any of the information presented without undertaking independent due diligence and consultation with a professional advisor. The Solana Foundation and its agents, advisors, council members, officers and employees (the "Foundation Parties") make no representation or warranties, expressed or implied, as to the accuracy of the information herein and expressly disclaim any and all liability that may be based on such information or any errors or omissions therein. The Foundation Parties shall have no liability whatsoever, under contract, tort, trust or otherwise, to any person arising from or related to the content or any use of the information contained herein by you or any of your representatives.
All opinions expressed herein are the speakers' own personal opinions and do not reflect the opinions of any entities.
TWiN discusses a study showing that repetitive injury reactivates HSV-1 in a human brain tissue model and induces phenotypes associated with Alzheimer's disease. Hosts: Vincent Racaniello and Tim Cheung Subscribe (free): Apple Podcasts, Google Podcasts, RSS Links for this episode MicrobeTV Discord Server Repetitive injury, herpes, and Alzheimer's (Sci Signal) The tau of herpesvirus (TWiV 1187) Fishing for viruses in senile (TWiV 519) Timestamps by Jolene Ramsey. Thanks! Music is by Ronald Jenkees Send your neuroscience questions and comments to twin@microbe.tv
What's Qualcomm's CEO Cristiano Amon saying from Davos? He has great optimism for growth and the crucial role of collaboration between public-private partnerships in driving progress. Find out why below ⬇ Hosts Daniel Newman and Patrick Moorhead are back with another interview on The View From Davos. They met up with Qualcomm's Cristiano Amon, President and Chief Executive Officer, to discuss the latest tech advancements and market trends observed at this year's WEF. Cristiano shares his valuable insights from the forum including his optimism for growth and the crucial role of collaboration between public-private partnerships in driving progress. Check out the full interview for more on: AI in real-world applications and tangible value creation are top of mind for business and government leaders Edge computing is key to unlocking AI's potential: Latency, privacy, and cost are driving a shift towards distributed computing power The lines between cloud and edge are blurring Qualcomm's role in powering AI innovation across industries, from mobile to automotive to industrial IoT A new era of IoT is dawning: Advances in AI, edge computing, and connectivity are creating opportunities for a resurgence of the Internet of Things.
Traditional SD-WAN ensures that business-critical apps get the best-performing network path to deliver a good user experience and meet service levels. But as SaaS and cloud adoption increase, the best path across a WAN may not be enough. Techniques like WAN optimization and legacy caching may have worked for enterprise or private apps, but... Read more »
Mixing Music with Dee Kei | Audio Production, Technical Tips, & Mindset
Thank you for being a subscriber to this exclusive content! SUBSCRIBE TO YOUTUBE Join the ‘Mixing Music Podcast' Discord! HIRE DEE KEI HIRE JAMES Find Dee Kei, Braeden, and James on social media: Instagram: @DeeKeiMixes @JamesDeanMixes Twitter: @DeeKeiMixes CHECK OUT OUR OTHER RESOURCES Join the ‘Mixing Music Podcast' Group: Discord & Facebook The Mixing Music Podcast is sponsored by Izotope, Antares (Auto Tune), Plugin Boutique, Lauten Audio, Spreaker, Filepass, & Canva The Mixing Music Podcast is a video and audio series on the art of music production and post-production. Dee Kei and Lu are both professionals in the Los Angeles music industry having worked with names like Keyshia Cole, Trey Songz, Ray J, Smokepurrp, Benny the Butcher, Sueco the Child, Ari Lennox, G-Eazy, Phresher, Lucky Daye, DDG, Lil Xan, Masego, $SNOT, Kanye West, King Kanja, Dreamville, BET, Universal Music, Interscope Records, etc. This video podcast is meant to be used for educational purposes only. This show is filmed at IN THE MIX STUDIOS located in North Hollywood, California. If you would like to sponsor the show, please email us at deekeimixes@gmail.com.
Topics covered in this episode: dbos-transact-py Typed Python in 2024: Well adopted, yet usability challenges persist RightTyper Lazy self-installing Python scripts with uv Extras Joke Watch on YouTube About the show Sponsored by us! Support our work through: Our courses at Talk Python Training The Complete pytest Course Patreon Supporters Connect with the hosts Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky) Brian: @brianokken@fosstodon.org / @brianokken.bsky.social Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky) Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to our friends of the show list, we'll never share it. Michael #1: dbos-transact-py DBOS Transact is a Python library providing ultra-lightweight durable execution. Durable execution means your program is resilient to any failure. If it is ever interrupted or crashes, all your workflows will automatically resume from the last completed step. Under the hood, DBOS Transact works by storing your program's execution state (which workflows are currently executing and which steps they've completed) in a Postgres database. Incredibly fast, for example 25x faster than AWS Step Functions. Brian #2: Typed Python in 2024: Well adopted, yet usability challenges persist Aaron Pollack on Engineering at Meta blog "Overall findings 88% of respondents "Always" or "Often" use types in their Python code. IDE tooling, documentation, and catching bugs are drivers for the high adoption of types in survey responses. The usability of types and ability to express complex patterns still are challenges that leave some code unchecked. Latency in tooling and lack of types in popular libraries are limiting the effectiveness of type checkers.
Inconsistency in type check implementations and poor discoverability of the documentation create friction in onboarding types into a project and seeking help when using the tools." Notes Seems to be a different survey than the 2023 (current) dev survey. Diff time frame and results. July 29 - Oct 8, 2024 Michael #3: RightTyper A fast and efficient type assistant for Python, including tensor shape inference Brian #4: Lazy self-installing Python scripts with uv Trey Hunner Creating your own ~/bin full of single-file command line scripts is common for *nix folks, still powerful but underutilized on Mac, and trickier but still useful on Windows. Python has been difficult in the past to use for standalone scripts if you need dependencies, but that's no longer the case with uv. Trey walks through user scripts (*nix and Mac) Using #! for scripts that don't have dependencies Using #! with uv run --script and /// script for dependencies Discussion about how uv handles that. Extras Brian: Courses at pythontest.com If you live in a place (or are in a place in your life) where these prices are too much, let me know. I had a recent request and I really appreciate it. Michael: Python 3.14 update released Top episodes of 2024 at Talk Python Universal check for updates macOS: Settings > Keyboard > Keyboard shortcuts > App shortcuts > + Then add shortcut for single app, ^U and the menu title. Joke: Python with rizz
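The self-installing script pattern Trey describes can be sketched roughly as follows. This is a minimal illustration, assuming uv is installed and on the PATH; the inline metadata comment block follows the PEP 723 format that uv reads, and the word-counting logic is just a hypothetical payload to make the script do something.

```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.12"
# dependencies = []
# ///
# With the shebang above, running ./topwords.py lets uv read the inline
# metadata, provision an environment, and execute the script. Because the
# dependency list here is empty, it also runs under plain `python`.

import sys
from collections import Counter


def top_words(text: str, n: int = 3) -> list[tuple[str, int]]:
    """Return the n most common words in text, most frequent first."""
    return Counter(text.lower().split()).most_common(n)


if __name__ == "__main__":
    sample = " ".join(sys.argv[1:]) or "to be or not to be"
    for word, count in top_words(sample):
        print(f"{word}: {count}")
```

Dropping a script like this into ~/bin and marking it executable is what makes it "lazy self-installing": the environment is resolved on first run, not up front.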
Over the last seven decades, some states successfully leveraged the threat of acquiring atomic weapons to compel concessions from superpowers. For many others, however, this coercive gambit failed to work. When does nuclear latency--the technical capacity to build the bomb--enable states to pursue effective coercion? In Leveraging Latency: How the Weak Compel the Strong with Nuclear Technology (Oxford UP, 2023), Tristan A. Volpe argues that having greater capacity to build weaponry doesn't translate to greater coercive advantage. Volpe finds that there is a trade-off between threatening proliferation and promising nuclear restraint. States need just enough bomb-making capacity to threaten proliferation but not so much that it becomes too difficult for them to offer nonproliferation assurances. The boundaries of this sweet spot align with the capacity to produce the fissile material at the heart of an atomic weapon. To test this argument, Volpe includes comparative case studies of four countries that leveraged latency against superpowers: Japan, West Germany, North Korea, and Iran. Volpe identifies a generalizable mechanism--the threat-assurance trade-off--that explains why more power often makes compellence less likely to work. Volpe proposes a framework that illuminates how technology shapes broader bargaining dynamics and helps to refine policy options for inhibiting the spread of nuclear weapons. As nuclear technology continues to cast a shadow over the global landscape, Leveraging Latency systematically assesses its coercive utility. Our guest today is Tristan Volpe, an Assistant Professor in the Defense Analysis Department at the Naval Postgraduate School and a nonresident fellow in the Nuclear Policy Program at the Carnegie Endowment for International Peace. Our host is Eleonora Mattiacci, an Associate Professor of Political Science at Amherst College. She is the author of "Volatile States in International Politics" (Oxford University Press, 2023). 
Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network
In this episode of The Cognitive Revolution, Nathan interviews Andrew White, Professor of Chemical Engineering at the University of Rochester and Head of Science at Future House. We explore groundbreaking AI systems for scientific discovery, including PaperQA and Aviary, and discuss how large language models are transforming research. Join us for an insightful conversation about the intersection of AI and scientific advancement with this pioneering researcher in his first-ever podcast appearance. Check out Future House: https://www.futurehouse.org Help shape our show by taking our quick listener survey at https://bit.ly/TurpentinePulse SPONSORS: Oracle Cloud Infrastructure (OCI): Oracle's next-generation cloud platform delivers blazing-fast AI and ML performance with 50% less for compute and 80% less for outbound networking compared to other cloud providers. OCI powers industry leaders with secure infrastructure and application development capabilities. New U.S. customers can get their cloud bill cut in half by switching to OCI before December 31, 2024 at https://oracle.com/cognitive SelectQuote: Finding the right life insurance shouldn't be another task you put off. SelectQuote compares top-rated policies to get you the best coverage at the right price. Even in our AI-driven world, protecting your family's future remains essential. Get your personalized quote at https://selectquote.com/cognitive Shopify: Shopify is the world's leading e-commerce platform, offering a market-leading checkout system and exclusive AI apps like Quikly. Nobody does selling better than Shopify.
Get a $1 per month trial at https://shopify.com/cognitive CHAPTERS: (00:00:00) Teaser (00:01:13) About the Episode (00:04:37) Andrew White's Journey (00:10:23) GPT-4 Red Team (00:15:33) GPT-4 & Chemistry (00:17:54) Sponsors: Oracle Cloud Infrastructure (OCI) | SelectQuote (00:20:19) Biology vs Physics (00:23:14) Conceptual Dark Matter (00:26:27) Future House Intro (00:30:42) Semi-Autonomous AI (00:35:39) Sponsors: Shopify (00:37:00) Lab Automation (00:39:46) In Silico Experiments (00:45:22) Cost of Experiments (00:51:30) Multi-Omic Models (00:54:54) Scale and Grokking (01:00:53) Future House Projects (01:10:42) Paper QA Insights (01:16:28) Generalizing to Other Domains (01:17:57) Using Figures Effectively (01:22:01) Need for Specialized Tools (01:24:23) Paper QA Cost & Latency (01:27:37) Aviary: Agents & Environments (01:31:42) Black Box Gradient Estimation (01:36:14) Open vs Closed Models (01:37:52) Improvement with Training (01:40:00) Runtime Choice & Q-Learning (01:43:43) Narrow vs General AI (01:48:22) Future Directions & Needs (01:53:22) Future House: What's Next? (01:55:32) Outro SOCIAL LINKS: Website: https://www.cognitiverevolution.ai Twitter (Podcast): https://x.com/cogrev_podcast Twitter (Nathan): https://x.com/labenz LinkedIn: https://www.linkedin.com/in/nathanlabenz/ Youtube: https://www.youtube.com/@CognitiveRevolutionPodcast Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431 Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk
In this episode of The Cognitive Revolution, Nathan welcomes back Div Garg, Co-Founder and CEO of MultiOn, for his third appearance to discuss the evolving landscape of AI agents. We explore how agent development has shifted from open-ended frameworks to intelligent workflows, MultiOn's unique approach to agent development, and their journey toward achieving human-level performance. Dive into fascinating insights about data collection strategies, model fine-tuning techniques, and the future of agent authentication. Join us for an in-depth conversation about why 2025 might be the breakthrough year for AI agents. Check out MultiOn: https://www.multion.ai/ Help shape our show by taking our quick listener survey at https://bit.ly/TurpentinePulse SPONSORS: Oracle Cloud Infrastructure (OCI): Oracle's next-generation cloud platform delivers blazing-fast AI and ML performance with 50% less for compute and 80% less for outbound networking compared to other cloud providers. OCI powers industry leaders with secure infrastructure and application development capabilities. New U.S. customers can get their cloud bill cut in half by switching to OCI before December 31, 2024 at https://oracle.com/cognitive SelectQuote: Finding the right life insurance shouldn't be another task you put off. SelectQuote compares top-rated policies to get you the best coverage at the right price. Even in our AI-driven world, protecting your family's future remains essential. Get your personalized quote at https://selectquote.com/cognitive Weights & Biases RAG++: Advanced training for building production-ready RAG applications. Learn from experts to overcome LLM challenges, evaluate systematically, and integrate advanced features. Includes free Cohere credits. Visit https://wandb.me/cr to start the RAG++ course today. RECOMMENDED PODCAST: Unpack Pricing - Dive into the dark arts of SaaS pricing with Metronome CEO Scott Woody and tech leaders.
Learn how strategic pricing drives explosive revenue growth in today's biggest companies like Snowflake, Cockroach Labs, Dropbox and more. Apple: https://podcasts.apple.com/us/podcast/id1765716600 Spotify: https://open.spotify.com/show/38DK3W1Fq1xxQalhDSueFg CHAPTERS: (00:00:00) Teaser (00:00:40) About the Episode (00:04:10) The Rise of AI Agents (00:06:33) Open-Ended vs On-Rails (00:10:00) Agent Architecture (00:12:01) AI Learning & Feedback (00:14:01) Data Collection (Part 1) (00:18:27) Sponsors: Oracle Cloud Infrastructure (OCI) | SelectQuote (00:20:51) Data Collection (Part 2) (00:22:25) Self-Play & Rewards (00:25:04) Model Strategy & Agent Q (00:33:28) Sponsors: Weights & Biases RAG++ (00:34:39) Understanding Agent Q (00:43:16) Search & Learning (00:45:39) Benchmarks vs Reality (00:50:18) Positive Transfer & Scale (00:51:47) Fine-Tuning Strategies (00:55:16) Vision Strategy (01:00:16) Authentication & Security (01:03:48) Future of AI Agents (01:16:14) Cost, Latency, Reliability (01:19:30) Avoiding the Bitter Lesson (01:25:58) Agent-Assisted Future (01:27:11) Outro SOCIAL LINKS: Website: https://www.cognitiverevolution.ai Twitter (Podcast): https://x.com/cogrev_podcast Twitter (Nathan): https://x.com/labenz LinkedIn: https://www.linkedin.com/in/nathanlabenz/ Youtube: https://www.youtube.com/@CognitiveRevolutionPodcast Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431 Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk
Sanjay Iyer, a consultant for 25 years, discusses the evolution of telecommunications companies, focusing on network, infrastructure, quality, and coverage analysis. He explains that coverage is the first aspect of a network, determining the reach and number of homes it can deliver service to. The structure of networks has evolved over the years, with different types of networks for broadband, such as fiber to the home, hybrid fiber coax, and fixed wireless access. Assessing the Infrastructure Quality Sanjay explains the process of assessing the infrastructure quality of a telecommunications company, which involves evaluating speeds, latency, and other factors such as the density of homes in the neighborhood. Speeds are rated in megabits per second, but factors like the number of people using television, density of homes, and latency can affect the speed of upstream and downstream packets. Latency is another factor that covers systemic network design quality. Sanjay also mentions that there are temporary issues in a coax network, such as fluctuation noise and overhead versus underground cables. To understand the total quality of a network, it is essential to separate temporary issues from systemic problems. He suggests measuring the quality at a home level, rather than at the broad network level. Network Assessment Factors Sanjay explains the importance of assessing network outcomes such as latency and speed when buying a provider and explains why companies should focus on outcome metrics and infrastructure quality. He talks about the first and second metrics, capital expenditure efficiency and network upgrades. Sanjay explains that networks have been continuously groomed and expanded to deliver more bandwidth over the years, and that understanding how operators have done it historically, what it will take to achieve the gold standard of one gigabit per second downstream to every home, and what it would cost is crucial.
Challenges Faced when Analyzing Networks The conversation turns to the challenges companies face in analyzing their own networks, as there is no single source of truth for determining their network coverage. One challenge is the cost of bandwidth, which can be expensive and unpredictable. To get the bandwidth right, companies must calculate the capex efficiency model, which assumes an average number of households per node and extrapolates it across the entire country. This model is often incorrect, leading to unpredictable network costs. Another challenge is fiber optic and broadband penetration analysis. The Federal Communications Commission has created a national database that tracks every household's speed and coverage from service providers. This information is publicly available and can be used to analyze homes and serviceable locations. The FCC has also created a service coverage map at a national scale, which can be used to allocate government capital to underserved areas and subsidize network bills. Analyzing Market Share Sanjay discusses the process of analyzing market share in a given market. He uses the FCC database to measure network footprint, focusing on census block group levels to determine customer penetration. Machine learning is particularly interesting as it provides insights into customer profiles and economic or household-level information, which can help predict underperformance, overperformance, and areas for improvement. Iyer is currently working on building tools to predict the ROI of broadband investments, analyzing existing footprints and adjacent locations, and predicting expansion paths. He is also involved in generative AI, which is popular but not widely adopted due to issues with LLM tech adoption.
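The weakness Sanjay describes in the capex efficiency model can be shown with a few lines of arithmetic. This is an illustrative sketch, not Iyer's actual model; every number (node cost, households per node) is hypothetical.

```python
# Why a single national average of households per node mis-prices a build:
# per-home capex is node cost divided by local density, and density varies
# widely by market. All figures below are made up for illustration.

def capex_per_home(node_cost: float, homes_per_node: float) -> float:
    """Capital cost allocated to each home passed by one node."""
    return node_cost / homes_per_node

NODE_COST = 50_000.0  # assumed cost to build or upgrade one node

# National-average model: one density figure applied everywhere.
national_avg = capex_per_home(NODE_COST, homes_per_node=500)

# Reality: density differs market by market.
markets = {"dense_urban": 900, "suburban": 450, "rural": 120}

for market, homes in markets.items():
    actual = capex_per_home(NODE_COST, homes)
    error = actual - national_avg
    print(f"{market}: ${actual:,.0f}/home (avg model off by ${error:+,.0f})")
```

The rural error dwarfs the urban one, which is exactly why an averaged model produces the unpredictable network costs mentioned in the episode.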
Iyer is developing a governance model that looks at all aspects of Gen AI, from use cases to production and costs, and is building products with an AI-first approach, using tools like ChatGPT to develop software products based on specific requirements. Timestamps: 04:30: Assessing Infrastructure Quality and Network Economics 08:37: Capital Expenditure Efficiency and Network Upgrades 13:27: Challenges in Network Data Availability 17:52: Fiber Optic and Broadband Penetration Analysis 21:21: Customer Churn Rate and Retention Strategy 25:45: Subscriber-Based Growth and Market Share Analysis 27:32: Sanjay Iyer's Current Practice and AI Focus Links: LinkedIn: https://www.linkedin.com/in/sanjay-iyer/ Website: https://www.combinatree.com/ Resource: https://umbrex.com/resources/how-to-analyze-a-telecommunications-company/ Unleashed is produced by Umbrex, which has a mission of connecting independent management consultants with one another, creating opportunities for members to meet, build relationships, and share lessons learned. Learn more at www.umbrex.com.
In this episode of N Is For Networking, co-hosts Ethan Banks and Holly Metlitzky take a question from college student Douglas that turns into a ride on the networking highway as they navigate the lanes of bandwidth and latency. Ethan and Holly define the concepts of bandwidth and latency and discuss current data transfer protocols... Read more »
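The bandwidth-versus-latency distinction Ethan and Holly draw can be illustrated with the standard back-of-the-envelope model: total transfer time is roughly one round-trip of latency plus the serialization time, size divided by bandwidth. The sketch below is a simplified approximation (it ignores TCP slow start, handshakes, and queuing), and the link numbers are hypothetical.

```python
# Rough model of a single fetch: time ~ RTT + (bits to send / bandwidth).
# Shows why a "fat" high-latency pipe can lose to a "thin" low-latency one.

def transfer_time_s(size_bytes: float, bandwidth_bps: float, rtt_s: float) -> float:
    """Approximate fetch time: one round trip plus serialization delay."""
    return rtt_s + (size_bytes * 8) / bandwidth_bps

ONE_MB = 1_000_000  # bytes

# Hypothetical links: fast-but-far satellite vs slow-but-near DSL.
satellite = transfer_time_s(ONE_MB, bandwidth_bps=100e6, rtt_s=0.600)
dsl       = transfer_time_s(ONE_MB, bandwidth_bps=20e6,  rtt_s=0.020)

print(f"100 Mb/s @ 600 ms RTT: {satellite:.3f} s")  # latency dominates
print(f" 20 Mb/s @  20 ms RTT: {dsl:.3f} s")        # bandwidth dominates
```

For a 1 MB object the 20 Mb/s link finishes first, which is the episode's point: adding lanes to the highway does not shorten the drive.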
In today's episode, we are honored to be joined by Leor Weinberger, the William and Ute Bowes Distinguished Professor of Virology, director of the Gladstone Center for Cell Circuitry, professor of pharmaceutical chemistry, and professor of biochemistry and biophysics at Gladstone Institutes/University of California, San Francisco. As a world-renowned virologist and quantitative biologist, Leor has made a significant impact in the field of HIV research with his groundbreaking discovery of the HIV virus latency circuit. Leor's lab studies the fundamental processes of viral biology in the pursuit of developing innovative first-in-class therapies against HIV. They use computational and experimental approaches, including quantitative, single-cell and single-molecule microscopy and mathematical modeling… Click play to find out: How quantitative and theoretical biophysics apply to HIV. Why HIV latency has always been a problem with successful treatment. What happens when viral loads are lower in the blood of infected individuals. When to administer a therapeutic that overcomes barriers to biodistribution. How are Leor and his team tackling the biggest challenges in human health? Tune in now to learn more about their unique and innovative approach to disrupting the way science is done – and how these discoveries have the potential to change lives! You can follow along with Leor and his fascinating work with the Gladstone Center for Cell Circuitry here. Episode also available on Apple Podcast: http://apple.co/30PvU9
Elon Musk is CEO of Neuralink, SpaceX, Tesla, xAI, and CTO of X. DJ Seo is COO & President of Neuralink. Matthew MacDougall is Head Neurosurgeon at Neuralink. Bliss Chapman is Brain Interface Software Lead at Neuralink. Noland Arbaugh is the first human to have a Neuralink device implanted in his brain. Transcript: https://lexfridman.com/elon-musk-and-neuralink-team-transcript Please support this podcast by checking out our sponsors: https://lexfridman.com/sponsors/ep438-sc SPONSOR DETAILS: - Cloaked: https://cloaked.com/lex and use code LexPod to get 25% off - MasterClass: https://masterclass.com/lexpod to get 15% off - Notion: https://notion.com/lex - LMNT: https://drinkLMNT.com/lex to get free sample pack - Motific: https://motific.ai - BetterHelp: https://betterhelp.com/lex to get 10% off CONTACT LEX: Feedback - give feedback to Lex: https://lexfridman.com/survey AMA - submit questions, videos or call-in: https://lexfridman.com/ama Hiring - join our team: https://lexfridman.com/hiring Other - other ways to get in touch: https://lexfridman.com/contact EPISODE LINKS: Neuralink's X: https://x.com/neuralink Neuralink's Website: https://neuralink.com/ Elon's X: https://x.com/elonmusk DJ's X: https://x.com/djseo_ Matthew's X: https://x.com/matthewmacdoug4 Bliss's X: https://x.com/chapman_bliss Noland's X: https://x.com/ModdedQuad xAI: https://x.com/xai Tesla: https://x.com/tesla Tesla Optimus: https://x.com/tesla_optimus Tesla AI: https://x.com/Tesla_AI PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - 
LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (09:26) - Elon Musk (12:42) - Telepathy (19:22) - Power of human mind (23:49) - Future of Neuralink (29:04) - Ayahuasca (38:33) - Merging with AI (43:21) - xAI (45:34) - Optimus (52:24) - Elon's approach to problem-solving (1:09:59) - History and geopolitics (1:14:30) - Lessons of history (1:18:49) - Collapse of empires (1:26:32) - Time (1:29:14) - Aliens and curiosity (1:36:48) - DJ Seo (1:44:57) - Neural dust (1:51:40) - History of brain–computer interface (1:59:44) - Biophysics of neural interfaces (2:10:12) - How Neuralink works (2:16:03) - Lex with Neuralink implant (2:36:01) - Digital telepathy (2:47:03) - Retracted threads (2:52:38) - Vertical integration (2:59:32) - Safety (3:09:27) - Upgrades (3:18:30) - Future capabilities (3:47:46) - Matthew MacDougall (3:53:35) - Neuroscience (4:00:44) - Neurosurgery (4:11:48) - Neuralink surgery (4:30:57) - Brain surgery details (4:46:40) - Implanting Neuralink on self (5:02:34) - Life and death (5:11:54) - Consciousness (5:14:48) - Bliss Chapman (5:28:04) - Neural signal (5:34:56) - Latency (5:39:36) - Neuralink app (5:44:17) - Intention vs action (5:55:31) - Calibration (6:05:03) - Webgrid (6:28:05) - Neural decoder (6:48:40) - Future improvements (6:57:36) - Noland Arbaugh (6:57:45) - Becoming paralyzed (7:11:20) - First Neuralink human participant (7:15:21) - Day of surgery (7:33:08) - Moving mouse with brain (7:58:27) - Webgrid (8:06:28) - Retracted threads (8:14:53) - App improvements (8:21:38) - Gaming (8:32:36) - Future Neuralink capabilities (8:35:31) - Controlling Optimus robot (8:39:53) - God