Roberto Coviello is an engineer at Meta Reality Labs, where he builds open-source showcases and samples to help push XR development forward. He's also known for his YouTube channel full of in-depth XR tutorials, which played a key role in his career. In this conversation we look at:
How he started his professional journey as a developer turned content creator, and the path that led him to become a Software Engineer at Meta
Several SDKs provided by Meta and what he thinks is a must-have for any MR app
His thoughts on the role of AI for artists and developers
How he finds the right balance between employment and cultivating his dreams
Subscribe to the XR AI Spotlight weekly newsletter
In this week's MBA Admissions podcast we began by discussing the upcoming MBA admissions season. This week, NYU / Stern, Chicago / Booth, INSEAD and Cornell / Johnson have their Round 1 application deadlines. Graham highlighted the ongoing September series of admissions events, where Clear Admit hosts the majority of the top MBA programs to discuss Round 2 application strategy. The second session is on Wednesday, and includes Chicago / Booth, Columbia, Texas / McCombs, Toronto / Rotman and Yale SOM. Signups for this series are here: https://bit.ly/cainsidemba Our second livestream AMA is scheduled for Tuesday, September 23rd on YouTube; here's the link to Clear Admit's YouTube channel: https://bit.ly/cayoutubelive. Graham then noted a recently published MBA admissions-related tip that focuses on polishing your business school application essays. He also reminded listeners about the 25 videos in our free Admissions Academy video series, of which five videos are exclusively about the essay writing process. We also continue our series of Adcom Q&As; this week we hear from IMD's Francesco Farné. For the candidate profile review portion of the show this week, Alex selected three ApplyWire entries: This week's first MBA admissions candidate works for Microsoft as a software engineer. They want to switch into product management. This week's second MBA candidate has already completed one major career pivot from engineering and sales to real estate investing. They want to use the MBA to explore the latter further. The final MBA candidate is from India, has a 755 GMAT score and works for the space agency. They also have quite significant activities outside of work. This episode was recorded in Paris, France and Cornwall, England. It was produced and engineered by the fabulous Dennis Crowley in Philadelphia, USA. Thanks to all of you who've been joining us and please remember to rate and review this show wherever you listen!
We read through the GPT-5 prompt engineering guide and talked about how to write good prompts.
Prompting guide: https://cookbook.openai.com/examples/gpt-5/gpt-5_prompting_guide
Optimizer tool: https://platform.openai.com/chat/edit?models=gpt-5&optimize=true
GPT-5 for coding cheatsheet: https://cdn.openai.com/API/docs/gpt-5-for-coding-cheatsheet.pdf
Blog: https://tomoima525.hatenablog.com/entry/2025/08/31/065459
Please share your thoughts with the hashtags #todayILearnedFM #tilfm!
Listener mail form: https://forms.gle/J2ioXHS98dYNoMbq5
Your co-hosts:
Tomoaki Imai, Noxx CTO https://x.com/tomoaki_imai bsky: https://bsky.app/profile/tomoaki-imai.bsky.social
Ryoichi Kato, Software Engineer https://x.com/ryo1kato bsky: https://bsky.app/profile/ryo1kato.bsky.social
In episode 117 of the OsProgramadores podcast, I spoke with André Carvalho, Senior Software Engineer at Apple. André shared his journey, from his first steps in Brazil to reaching Silicon Valley to work at one of the largest technology companies in the world. It was a conversation full of lessons for anyone who dreams of working abroad.
In this episode of The New Stack Agents, ServiceNow CTO and co-founder Pat Casey discusses why the company runs 90% of its workloads—including AI infrastructure—on its own physical servers rather than the public cloud. ServiceNow maintains GPU hubs across global data centers, enabling efficient, low-latency AI operations. Casey downplays the complexity of running AI models on-prem, noting their team's strong Kubernetes and Triton expertise. The company recently switched from GitHub Copilot to its own AI coding assistant, Windsurf, yielding a 10% productivity boost among 7,000 engineers. However, use of such tools isn't mandatory—performance remains the main metric. Casey also addresses the impact of AI on junior developers, acknowledging that AI tools often handle tasks traditionally assigned to them. While ServiceNow still hires many interns, he sees the entry-level tech job market as increasingly vulnerable. Despite these concerns, Casey remains optimistic, viewing the AI revolution as transformative and ultimately beneficial, though not without disruption or risk.
Learn more from The New Stack about the latest in AI and development in ServiceNow:
ServiceNow Launches a Control Tower for AI Agents
ServiceNow Acquires Data.World To Expand Its AI Data Strategy
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
The European Union's upcoming Cyber Resilience Act (CRA) goes into effect in October 2026, with the remainder of its requirements following in December 2027, and it introduces significant cybersecurity compliance requirements for software vendors, including those who rely heavily on open source components. At the Open Source Summit Europe, Christopher "CRob" Robinson of the Open Source Security Foundation highlighted concerns about how these regulations could impact open source maintainers. Many open source projects begin as personal solutions to shared problems and grow in popularity, often ending up embedded in critical systems across industries like automotive and energy. Despite this widespread use—Robinson noted up to 97% of commercial software contains open source—these projects are frequently maintained by individuals or small teams with limited resources.
Developers often have no visibility into how their code is used, yet they're increasingly burdened by legal and compliance demands from downstream users, such as requests for Software Bills of Materials (SBOMs) and conformity assessments. The CRA raises the stakes, with potential penalties in the billions for noncompliance, putting immense pressure on the open source ecosystem.
Learn more from The New Stack about Open Source Security:
Open Source Propels the Fall of Security by Obscurity
There Is Just One Way To Do Open Source Security: Together
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
What does it take to build AI features at the scale of Microsoft Copilot? Senior Product Manager Stéphanie Visser reveals the massive shifts in product development, from focusing on pixels to tokens and embracing a culture of rapid, data-driven experimentation. Learn how the roles of PMs, engineers, and scientists are evolving and what it takes to succeed.
In this episode, we cover:
The shift from UX-focused products to output-quality-focused AI.
How to run experiments and decide when an AI feature is ready to ship.
The changing roles and expectations for PMs, engineers, and data scientists.
Building trust and a strong product culture in a distributed AI team.
This episode is a must-watch for product managers, engineers, and tech leaders looking to adapt their processes for the age of AI and accelerate their delivery cycles.
Timestamps:
00:00:00 - How Microsoft Builds AI Features
00:00:49 - The #1 Thing That Changed for Product Managers
01:28 - From Pixels to Tokens: The AI Product Shift
02:58 - Why AI Is All About Output Quality, Not UX
04:46 - When Is an AI Feature "Good Enough" to Ship?
06:45 - The "Non-Embarrassment Bar" for Releasing AI
09:07 - Why Old User Feedback Methods Don't Work for AI
12:28 - The New Expectations for Software Engineers in AI
15:33 - When to Involve Engineers in the Product Process
17:43 - How Microsoft Structures Its AI Product Teams
20:40 - Why 3-Month Planning Is Obsolete in the AI Era
22:42 - How to Remove Bias From Your Product Decisions
25:36 - Balancing Data vs. User Intuition in AI
27:44 - The Biggest Bottleneck in AI Experimentation
31:12 - How to Define the Right Metrics for Your AI Product
33:39 - Building Trust and Culture in a Remote Team
37:47 - The Most Underrated Skill for Product Managers
40:57 - How to Cultivate a Strong Product Culture
44:32 - The AI Tools a Microsoft PM Actually Uses
46:29 - How to Manage the Expanding Scope of the PM Role
Connect with Stéphanie Visser: https://www.linkedin.com/in/stephanievisser
Connect with Patrick Akil: https://www.linkedin.com/in/patrick-akil
https://twitter.com/PatrickAkil_
Sponsors: Xebia - https://xebia.com
#ProductManagement #AI #Microsoft
While most conversations about generative AI focus on chatbots, Thomas Wiecki (PyMC Labs, PyMC) has been building systems that help companies make actual business decisions. In this episode, he shares how Bayesian modeling and synthetic consumers can be combined with LLMs to simulate customer reactions, guide marketing spend, and support strategy. Drawing from his work with Colgate and others, Thomas explains how to scale survey methods with AI, where agents fit into analytics workflows, and what it takes to make these systems reliable.
We talk through:
Using LLMs as “synthetic consumers” to simulate surveys and test product ideas
How Bayesian modeling and causal graphs enable transparent, trustworthy decision-making
Building closed-loop systems where AI generates and critiques ideas
Guardrails for multi-agent workflows in marketing mix modeling
Where generative AI breaks (and how to detect failure modes)
The balance between useful models and “correct” models
If you've ever wondered how to move from flashy prototypes to AI systems that actually inform business strategy, this episode shows what it takes.
LINKS:
The AI MMM Agent, An AI-Powered Shortcut to Bayesian Marketing Mix Insights (https://www.pymc-labs.com/blog-posts/the-ai-mmm-agent)
AI-Powered Decision Making Under Uncertainty Workshop w/ Allen Downey & Chris Fonnesbeck (PyMC Labs) (https://youtube.com/live/2Auc57lxgeU)
The Podcast livestream on YouTube (https://youtube.com/live/so4AzEbgSjw?feature=share)
Upcoming Events on Luma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
Listen now: Spotify, Apple and YouTube
If you've wondered how much AI is really helping engineers - or feared it might replace them - this episode is for you.
In this conversation, Marc and Ben sit down with Nick Meehan, a senior engineer and longtime collaborator, to explore how AI coding tools like Cursor, Claude, and Codex are reshaping day-to-day engineering work. Nick shares how his process evolved from copy-pasting into ChatGPT to using AI as a debugging partner, thought collaborator, and junior developer moving at superhuman speed.
They cover the productivity gains, the pitfalls of vibe coding entire products, the new skills engineers need (critical thinking, architecture, team coordination), and what might never be replaced by AI. Nick also reflects on job security, how satisfaction in engineering is shifting, and where he sees the field heading over the next five years.
Whether you're an engineer experimenting with coding agents, a product leader trying to understand their impact, or simply curious about how AI is changing the craft of engineering, you'll walk away with an insider's perspective on what's actually happening on the ground.
All episodes of the podcast are also available on Spotify, Apple and YouTube.
New to the pod? Subscribe below to get the next episode in your inbox
This episode explains what is arguably the best career advice you'll hear this week: the one skill that signifies seniority in software engineers is the ability to synthesise and optimise for multiple factors at once. Instead of focusing on a single factor, such as performance or maintainability, senior engineers identify and weigh the various trade-offs involved in any decision.
Discover the key skill that distinguishes a senior engineer: It's the ability to synthesise multiple, competing factors—like performance, maintainability, cost, and time to market—rather than focusing on just one.
Learn why single-factor thinking can hold you back: Junior engineers often optimise for what they know best or what is easiest to measure, which can harm the overall solution, the team, and their professional reputation.
Understand how to demonstrate seniority in interviews and at work: You can show your maturity and wisdom by identifying the crucial trade-offs for any given problem, asking what factors need to be balanced, and exploring options that might satisfy multiple goals at once.
Explore how to find better solutions by thinking in trade-offs: The goal isn't just to make sacrifices; often, the mark of a great senior engineer is finding a third option that effectively balances or optimises for multiple important factors simultaneously.
Start practising this skill today: Challenge yourself to identify what you are giving up with any decision and consider factors you don't normally prioritise. Ask "What am I saying no to?" to develop this crucial skill.
In this episode of the Product Owner Podcast we dive into AI coding. Tools such as GitHub Copilot and Cursor make it possible to program faster, smarter, and with fewer mistakes. But does it really deliver the promised 55% time savings? Ruud talks with Martijn, Software Engineer at Navara and AI-coding enthusiast. They discuss the impact of AI on the development process, the risks, the quality of the generated code, and the role of testing. You'll also hear whether every developer can simply jump into AI coding and what that means for collaboration with product owners. An episode full of insights, practical examples, and a look ahead at the future of AI in product development.
In this episode we talk about: AI coding, vibe coding, AI, product development, GitHub, Cursor, engineer, AI engineering
About this podcast: This episode is part of the special series 'AI in Product Development' within the Product Owner podcast. In the Product Owner podcast we speak every week with an interesting guest from the world of product management and dig into real experiences, lessons, and tactics from product owners, entrepreneurs, and specialists. The Product Owner podcast is an initiative of productowner.nl
We talked about how to write good design documents.
How to design document: https://grantslatton.com/how-to-design-document
How Amazon writes design documents: https://medium.com/@apappascs/write-like-an-amazonian-14-tips-for-clear-and-persuasive-communication-e2a11afc7362
Architectural Decision Record: https://martinfowler.com/articles/scaling-architecture-conversationally.html
Forensic Design Document: https://littlegreenviper.com/miscellany/forensic-design-documentation/
Please share your thoughts with the hashtags #todayILearnedFM #tilfm!
Listener mail form: https://forms.gle/J2ioXHS98dYNoMbq5
Your co-hosts:
Tomoaki Imai, Noxx CTO https://x.com/tomoaki_imai bsky: https://bsky.app/profile/tomoaki-imai.bsky.social
Ryoichi Kato, Software Engineer https://x.com/ryo1kato bsky: https://bsky.app/profile/ryo1kato.bsky.social
while(true) is the podcast all about computer science: in university studies, in research, and in everyday life. Dr. Sebastian Hahner works as a software engineer in the automotive industry and, in his free time, builds tools for content creators and streamers as part of numerous open-source projects. Before that, he did research as a postdoc at the Karlsruhe Institute of Technology (KIT) at the intersection of software architecture and security. The podcast combines hands-on insights, exciting research, and personal experiences from the world of computer science.
BONUS: Agile Tour Vienna 2025—Building Community-Driven Agile Excellence
In this BONUS episode, we explore the upcoming Agile Tour Vienna 2025 (get your ticket now!) with three passionate organizers who are bringing together the Austrian agile community for a day of learning, networking, and innovation. Join us as we dive into what makes this community-driven event special, the challenges facing today's agile practitioners, and why local connections matter more than ever in our evolving professional landscape.
The Heart of Community-Driven Events
"For me, it's really about creating an event from the community for the community. So at the Agile Tour Vienna we really pay a lot of attention that the contributions are made by community members." - Sabina Lammert
The foundation of Agile Tour Vienna lies in its commitment to authentic community engagement. Unlike corporate-led conferences focused on sales and marketing, this event prioritizes genuine knowledge sharing and peer-to-peer learning. The organizers emphasize creating space for meaningful conversations, where participants don't just consume content but actively contribute to discussions and support one another with real-world challenges. This approach fosters an intimate atmosphere where attendees leave with valuable professional connections and practical insights they can immediately apply.
Balancing Local Expertise with Global Perspectives
"This local aspect is very important, but then it needs to be enhanced by bringing in ideas from people from the outside world." - Robert Ruzitschka
Agile Tour Vienna strikes a unique balance between showcasing local Austrian talent and bringing in internationally renowned speakers. The event features a carefully curated mix of practical experiences from Vienna-based practitioners working directly with teams and companies, combined with keynotes from global thought leaders. This blend creates opportunities for attendees to understand both the local context of agile implementation and broader industry trends, making the learning experience both immediately relevant and strategically valuable.
A Thoughtfully Designed Experience
"We make sure we have a good diversity within the speakers. We also take care that we have a good mix, because for me, agile started with the engineering practices." - Richard Brenner
The 2025 program demonstrates attention to creating a comprehensive learning experience. The organizers ensure language accessibility by maintaining at least one English track throughout the day while also offering German sessions. The content spans from technical engineering practices to team coaching and business strategy, reflecting agile's evolution across organizational levels. The event takes place in a stunning castle location (Auersperg Palace) that enhances the intimate, family-like atmosphere the organizers work hard to cultivate.
World-Class Content in an Intimate Setting
"Agile Tour Vienna is never aiming to go big, but to stay small and familiar. By the end of the day, you know new people." - Sabina Lammert
This year's highlights include keynotes from Dave Farley on engineering excellence and Mirella Muse on product operations, plus an innovative Comic Agile storytelling workshop. The organizers deliberately limit attendance to maintain the conference's intimate character, ensuring meaningful networking opportunities rather than overwhelming crowds.
Additional touches like a professional barista bar and ample space for informal conversations between sessions create an environment where genuine professional relationships can develop.
From Concept-Based to Context-Based Agility
"The biggest challenge is that we go from concept-based agility to context-based agility. Companies realize the world is complex. There is no one framework to rule them all." - Richard Brenner
The agile community faces a significant evolution as the methodology matures from underground movement to established practice. Organizations are moving away from rigid framework implementations toward contextual problem-solving approaches. This shift requires practitioners to focus on solving real business issues rather than introducing agile for its own sake. The challenge lies in maintaining agile's core values while adapting to diverse organizational contexts and avoiding the trap of seeking simple solutions for complex problems.
Maintaining Values-Based Working
"It's not about winning over something. It's about using common sense, getting into interaction and trying to find sometimes complex solutions for complex problems." - Sabina Lammert
Rather than declaring agile "dead," the community must refocus on value-based working and continuous adaptation. The real challenge involves empowering people to constantly reevaluate situations and embrace the reality that today's solutions may not work in three weeks or three years. This requires normalizing the inspect-and-adapt mindset as standard practice rather than exception, moving beyond method-focused thinking toward principle-driven decision making.
Sustaining Community Spirit Through Challenging Times
"In times of crisis, people tend to fall back to old patterns of behavior. We need to keep the ideas that made us work in a specific way alive." - Robert Ruzitschka
Economic and political uncertainties create pressure to abandon agile practices in favor of traditional command-and-control approaches. Community events like Agile Tour Vienna play a crucial role in maintaining momentum for collaborative, adaptive working methods. The discipline required for agile practices - continuous integration, experimental approaches, market-driven feedback collection - represents a more sophisticated and ultimately more sustainable way of working than traditional project management approaches.
The Discipline of Adaptability
The discussion revealed an important distinction about discipline in agile environments. Agile teams demonstrate remarkable discipline through practices like continuous integration, experimental product development, and systematic feedback collection. This represents a more humane form of discipline that acknowledges complexity and enables adaptation, contrasting sharply with the rigid discipline of following predetermined plans regardless of changing circumstances.
About Robert Ruzitschka, Sabina Lammert, and Richard Brenner
Robert Ruzitschka is a Senior Principal Engineer at Raiffeisen Bank International and leads a team of Engineering Coaches. You can connect with Robert Ruzitschka on LinkedIn.
Sabina Lammert is Founder and Agile Coach at Leadventure and supports teams and organizations in improving their way of collaboration. You can connect with Sabina Lammert on LinkedIn.
Richard Brenner, a previous guest, started as a Software Engineer and now works as an Agile Coach helping clients adopt agile ways of working. You can connect with Richard Brenner on LinkedIn.
While many people talk about “agents,” Shreya Shankar (UC Berkeley) has been building the systems that make them reliable. In this episode, she shares how AI agents and LLM judges can be used to process millions of documents accurately and cheaply. Drawing from work on projects ranging from databases of police misconduct reports to large-scale customer transcripts, Shreya explains the frameworks, error analysis, and guardrails needed to turn flaky LLM outputs into trustworthy pipelines.
We talk through:
- Treating LLM workflows as ETL pipelines for unstructured text
- Error analysis: why you need humans reviewing the first 50–100 traces
- Guardrails like retries, validators, and “gleaning”
- How LLM judges work — rubrics, pairwise comparisons, and cost trade-offs
- Cheap vs. expensive models: when to swap for savings
- Where agents fit in (and where they don't)
If you've ever wondered how to move beyond unreliable demos, this episode shows how to scale LLMs to millions of documents — without breaking the bank.
LINKS
Shreya's website (https://www.sh-reya.com/)
DocETL, A system for LLM-powered data processing (https://www.docetl.org/)
Upcoming Events on Luma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
Watch the podcast video on YouTube (https://youtu.be/3r_Hsjy85nk)
Shreya's AI evals course, which she teaches with Hamel "Evals" Husain (https://maven.com/parlance-labs/evals?promoCode=GOHUGORGOHOME)
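The retry-plus-validation guardrail mentioned above is easy to picture in code. The sketch below is a minimal illustration rather than anything from Shreya's actual pipelines: the call_llm helper, the field names, and the retry budget are all hypothetical placeholders standing in for whatever model client and schema a real document-processing pipeline would use.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM client call; wire up your provider here."""
    raise NotImplementedError

def validate(record: dict) -> list[str]:
    """Cheap deterministic checks that catch the most common extraction failures."""
    errors = []
    if not isinstance(record.get("names"), list):
        errors.append("'names' must be a list")
    if not record.get("incident_date"):
        errors.append("'incident_date' is missing")
    return errors

def extract_with_guardrails(document: str, max_retries: int = 3) -> dict:
    """Ask for structured JSON; on failure, retry with the validator's complaints appended."""
    prompt = (
        "Extract the people named and the incident date from the report below. "
        "Respond with JSON containing 'names' (a list) and 'incident_date' (YYYY-MM-DD).\n\n"
        + document
    )
    for _ in range(max_retries):
        raw = call_llm(prompt)
        try:
            record = json.loads(raw)
        except json.JSONDecodeError:
            prompt += "\n\nYour last answer was not valid JSON. Return only JSON."
            continue
        errors = validate(record)
        if not errors:
            return record
        prompt += "\n\nYour last answer had problems: " + "; ".join(errors) + ". Fix them."
    raise ValueError(f"no valid extraction after {max_retries} attempts")
```

The same loop structure is where cheaper models and LLM judges typically slot in: a cheap model produces the draft, deterministic validators reject the obvious failures, and only ambiguous cases get escalated to a more expensive judge.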
This interview was recorded for the GOTO Book Club. http://gotopia.tech/bookclub
Read the full transcription of the interview here.
Charles Humble - Freelance Techie, Podcaster, Editor, Author & Consultant
Trisha Gee - Lead Developer Evangelist at Gradle, Java Champion & Co-Author of "Getting to Know IntelliJ IDEA"
RESOURCES
Charles
https://bsky.app/profile/charleshumble.bsky.social
https://linkedin.com/in/charleshumble
https://mastodon.social/@charleshumble
https://conissaunce.com
Trisha
https://bsky.app/profile/trishagee.bsky.social
https://twitter.com/trisha_gee
https://www.linkedin.com/in/trishagee
https://trishagee.com
https://github.com/trishagee
Links
https://www.conissaunce.com/professional-skills-shortcut.html
https://www.jeanettewinterson.com
DESCRIPTION
Trisha Gee interviews Charles Humble on his project "Professional Skills for Software Engineers", a collection of 14 articles organized into four categories:
• communication
• critical thinking
• documentation
• networking
Charles argues that career success in software engineering oftentimes depends more on non-programming skills than technical ability. Both Charles and Trisha emphasize that these skills are learnable and essential, despite being undervalued in the industry as mere "soft skills".
The conversation covers how intentional communication improves product development, the value of networking and public speaking for career advancement, and ways engineers can generate ideas for content creation while taking ownership of their career development. The interview makes a compelling case that developing these professional skills benefits both individual engineers and the industry as a whole.
RECOMMENDED BOOKS
Charles Humble • Professional Skills for Software Engineers
Kevlin Henney & Trisha Gee • 97 Things Every Java Programmer Should Know
Anne Currie, Sarah Hsu & Sara Bergman • Building Green Software
Cal Newport • Deep Work
Martin Fowler • UML Distilled
Cathy O'Neil • Weapons of Math Destruction
CHANNEL MEMBERSHIP BONUS
Join this channel to get early access to videos & other perks: https://www.youtube.com/channel/UCs_tLP3AiwYKwdUHpltJPuA/join
Looking for a unique learning experience? Attend the next GOTO conference near you! Get your ticket: gotopia.tech
SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted daily!
Enterprise AI is still in its infancy, with less than 1% of enterprise data currently used to fuel AI, according to Raj Verma, CEO of SingleStore. While consumer AI is slightly more advanced, most organizations are only beginning to understand the scale of infrastructure needed for true AI adoption. Verma predicts AI will evolve in three phases: first, the easy tasks will be automated; next, complex tasks will become easier; and finally, the seemingly impossible will become achievable—likely within three years. However, to reach that point, enterprises must align their data strategies with their AI ambitions. Many have rushed into AI fearing obsolescence, but without preparing their data infrastructure, they're at risk of failure. Current legacy systems are not designed for the massive concurrency demands of agentic AI, potentially leading to underperformance. Verma emphasizes the need to move beyond siloed or "swim lane" databases toward unified, high-performance data platforms tailored for the scale and complexity of the AI era.
Learn more from The New Stack about the latest evolution in AI infrastructure:
How To Use AI To Design Intelligent, Adaptable Infrastructure
How to Support Developers in Building AI Workloads
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Mentor Sessions Ep. 026: Bitcoin's $2M Gold Parity, ETF Surge, MicroStrategy Risks & Network Threats | Vijay Boyapati
Is Bitcoin set for a historic bull market? In this explosive BTC Sessions interview, Vijay Boyapati, author of The Bullish Case for Bitcoin, reveals why ETF approval is fueling massive institutional adoption, potentially ending Bitcoin cycles. He predicts Bitcoin hitting gold parity at $1.5M–$2M within 5–10 years! Vijay dissects MicroStrategy's financial engineering, NAV premium risks, and treasury companies' threat to Bitcoin's decentralization. He warns of quantum computing challenges, advocates for Bitcoin Core ossification, and shares Ron Paul-inspired libertarian insights on Bitcoin as the global reserve currency. For Bitcoiners seeking bull market analysis, this episode is a must-watch! Join the Bitcoin revolution, learn why Vijay champions self-custody, and discover how Bitcoin could reshape finance. Like, subscribe, and secure your financial future now!
Key Topics:
Bitcoin bull market and ETF approval
Vijay Boyapati on institutional adoption
Gold parity and $1.5M–$2M price prediction
MicroStrategy's financial engineering risks
Treasury companies and centralization
Quantum computing threats to Bitcoin
Bitcoin Core ossification debate
Ron Paul and global reserve currency
Chapters:
00:00 Intro & Episode Summary
01:16 Institutional Adoption
02:02 ETF Approval Impact
04:13 Gold ETF Comparison & Bull Market
05:00 Are Bitcoin Cycles Dead?
06:52 Gold Parity Price Prediction
10:57 Gold to Bitcoin Shift
12:30 Nation States & Bitcoin Reserves
13:48 MicroStrategy's Bitcoin Strategy
15:30 Financial Engineering & NAV Premium
18:49 Centralization Risks
24:27 NAV Discount Dangers
36:40 Bitcoin Core Ossification
41:50 Quantum Computing Threats
About Vijay Boyapati:
Author of The Bullish Case for Bitcoin, Software Engineer & Bitcoin Advocate
X: @real_vijay
Website: bullishcaseforbitcoin.com
We had a casual chat about Japanese expats making short trips back home to Japan.
Please share your thoughts with the hashtag #tilfm!
Your co-hosts:
Tomoaki Imai, Noxx CTO https://twitter.com/tomoaki_imai
Ryoichi Kato, Software Engineer https://twitter.com/ryo1kato
// GUEST //
X: https://x.com/richrines
// SPONSORS //
iCoin: https://icointechnology.com/breedlove
Cowbolt: https://cowbolt.com/
Heart and Soil Supplements (use discount code BREEDLOVE): https://heartandsoil.co/
Blockware Solutions: https://mining.blockwaresolutions.com/breedlove
In Wolf's Clothing: https://wolfnyc.com/
Onramp: https://onrampbitcoin.com/?grsf=breedlove
Mindlab Pro: https://www.mindlabpro.com/breedlove
Coinbits: https://coinbits.app/breedlove
The Farm at Okefenokee: https://okefarm.com/
Orange Pill App: https://www.orangepillapp.com/
// PRODUCTS I ENDORSE //
Protect your mobile phone from SIM swap attacks: https://www.efani.com/breedlove
Lineage Provisions (use discount code BREEDLOVE): https://lineageprovisions.com/?ref=breedlove_22
Colorado Craft Beef (use discount code BREEDLOVE): https://coloradocraftbeef.com/
Salt of the Earth Electrolytes: http://drinksote.com/breedlove
Jawzrsize (code RobertBreedlove for 20% off): https://jawzrsize.com
// UNLOCK THE WISDOM OF THE WORLD'S BEST NON-FICTION BOOKS //
https://course.breedlove.io/
// SUBSCRIBE TO THE CLIPS CHANNEL //
https://www.youtube.com/@robertbreedloveclips2996/videos
// TIMESTAMPS //
0:00 - WiM Episode Trailer
1:13 - Bitcoin vs Sh*tcoin (Decentralization)
5:30 - Is Mining Concentration a Concern?
17:15 - iCoin Bitcoin Wallet
18:44 - Cowbolt: Settle in Bitcoin
19:59 - Bitcoin and Turing Completeness
25:40 - Ethereum's Product Market Fit
34:14 - Heart and Soil Supplements
35:14 - Mine Bitcoin with Blockware Solutions
36:16 - Tether and the Lightning Network
49:54 - The Future of Central Banking on a Bitcoin Standard
55:17 - Helping Lightning Startups with In Wolf's Clothing
56:09 - Onramp Bitcoin Custody
58:06 - The Future of Banking on a Bitcoin Standard
1:03:21 - Will Government Become Irrelevant?
1:07:22 - Working at Coinbase
1:21:42 - Mind Lab Pro Supplements
1:22:53 - Buy Bitcoin with Coinbits
1:24:21 - Money, Language, and Religion
1:33:24 - Coinbase and the Ethics of Sh*tcoins
1:39:15 - Are Cryptos Just Unregistered Securities?
1:43:50 - Is Bitcoin the Separation of Money and State?
1:47:55 - Will AI Shrink Government?
1:53:12 - Will AI and Bitcoin Bring UBI?
2:01:02 - The Farm at Okefenokee
2:02:12 - Orange Pill App
2:02:40 - How Far Off is AGI?
2:10:53 - Closed vs Open Source AI
2:12:08 - Will Everyone Become a Software Engineer?
2:19:53 - Closing Thoughts and Where to Find Rich Rines
// PODCAST //
Podcast Website: https://whatismoneypodcast.com/
Apple Podcast: https://podcasts.apple.com/us/podcast/the-what-is-money-show/id1541404400
Spotify: https://open.spotify.com/show/25LPvm8EewBGyfQQ1abIsE
RSS Feed: https://feeds.simplecast.com/MLdpYXYI
// SUPPORT THIS CHANNEL //
Bitcoin: 3D1gfxKZKMtfWaD1bkwiR6JsDzu6e9bZQ7
Sats via Strike: https://strike.me/breedlove22
Dollars via Paypal: https://www.paypal.com/paypalme/RBreedlove
Dollars via Venmo: https://account.venmo.com/u/Robert-Breedlove-2
// SOCIAL //
Breedlove X: https://x.com/Breedlove22
WiM? X: https://x.com/WhatisMoneyShow
Linkedin: https://www.linkedin.com/in/breedlove22/
Instagram: https://www.instagram.com/breedlove_22/
TikTok: https://www.tiktok.com/@breedlove22
Substack: https://breedlove22.substack.com/
All My Current Work: https://linktr.ee/robertbreedlove
Anthropic's Model Context Protocol (MCP) has become the standard for connecting AI agents to tools and data, but its security has lagged behind. In The New Stack Agents podcast, Tzvika Shneider, CEO of API security startup Pynt, discussed the growing risks MCP introduces. Shneider sees MCP as a natural evolution from traditional APIs to LLMs and now to AI agents. However, MCP adds complexity and vulnerability, especially as agents interact across multiple servers. Pynt's research found that 72% of MCP plugins expose high-risk operations, like code execution or accessing privileged APIs, often without proper approval or validation. The danger compounds when untrusted inputs from one agent influence another with elevated permissions. Unlike traditional APIs, MCP calls are made by non-deterministic agents, making it harder to enforce security guardrails. While MCP exploits remain rare for now, most companies lack mature security strategies for it. Shneider believes MCP merely highlights existing API vulnerabilities, and organizations are only beginning to address these risks.
Learn more from The New Stack about the latest in Model Context Protocol:
Model Context Protocol: A Primer for the Developers
Building With MCP? Mind the Security Gaps
MCP-UI Creators on Why AI Agents Need Rich User Interfaces
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Rahul Auradkar, executive VP and GM at Salesforce, grew up in India with a deep passion for cricket, where his love for the game sparked an early interest in data. This fascination with statistics laid the foundation for his current work leading Salesforce's Data Cloud and Einstein (Unified Data Services) team. Auradkar reflects on how structured data has evolved—from relational databases in enterprise applications to data warehouses, data lakes, and lakehouses. He explains how initial efforts focused on analyzing structured data, which later fed back into business processes. Eventually, businesses realized that the byproducts of data—what he calls "data exhaust"—were themselves valuable. The rise of "old AI," or predictive AI, shifted perceptions, showing that data exhaust could define the application itself. As varied systems emerged with distinct protocols and SQL variants, data silos formed, trapping valuable insights. Auradkar emphasizes that the ongoing challenge is unifying these silos to enable seamless, meaningful business interactions—something Salesforce aims to solve with its Data Cloud and agentic AI platform.
Learn more from The New Stack about the evolution of structured data and agent AI:
How Enterprises and Startups Can Master AI With Smarter Data Practices
Enterprise AI Success Demands Real-Time Data Platforms
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
In this episode of the Don't Panic, It's Just Data podcast, Kevin Petrie, VP of Research at BARC and the podcast host, is joined by Dainius Jocas, Search Engineer at Vinted, and Radu Gheorghe, Software Engineer at Vespa.ai. They discuss how Vinted, an online marketplace for secondhand products, modernised its data architecture to address new AI search use cases and the challenges faced with Elasticsearch. From the switch to Vespa to the advantages of supporting multiple languages and complex queries, the podcast offers insights on the trade-offs organisations must think about when updating their search systems, especially regarding AI and machine learning applications.
Vinted Elasticsearch Challenges
Vinted's search architecture was built on Elasticsearch before the switch to Vespa. Elasticsearch is a functional system, but it presented a few major challenges. With over 20 supported languages, the company's "index per language" approach created significant sharding problems, leading to infrastructure imbalances and constant adjustments.
"The index for the French language, the biggest language that we support, was more than three times bigger than the second biggest language, which created imbalances in the Elasticsearch data nodes' load," Jocas explained.
In addition to these technical obstacles, organisational issues arose as teams responsible for different parts of the search process found themselves "pointing fingers at each other at an increasing rate." The need for a more integrated, effective solution became clear.
The Solution: A New Platform for a New Era
The search for a better solution led Vinted to Vespa. The initial adoption began with one success story: a machine learning engineer working on recommendations discovered that Vespa was ten times faster than Elasticsearch for their use case. This initial benchmark, run on a single decommissioned server, was a "true testament to how efficient Vespa is when it comes to serving requests," Jocas told Petrie.
Vespa helped Vinted solve their language problem by allowing them to set a language per document, eliminating the need for separate indexes and the associated sharding headaches. As Jocas put it, "We got out of the sharding problem once and for all."
Takeaways
Vinted faced challenges with its initial Elasticsearch architecture.
The need for better integration between matching and ranking was identified.
Vespa outperformed Elasticsearch in handling image search and recommendations.
Transitioning to Vespa involved significant learning and support from developers.
Vespa allows for language-specific document handling, simplifying architecture.
Organisations must evaluate the complexity and volume of their data before transitioning.
Vespa is optimised for query performance, while Elasticsearch excels in data writing.
The learning curve for Vespa can be steep, but support is available.
It's important to focus on optimising new systems rather than emulating old ones.
Partial updates in Vespa are more efficient than in Elasticsearch.
Chapters
00:00 Introduction to Vinted and...
We talked about lessons learned from an agentic AI system we recently built.
Agentic AI: https://blogs.nvidia.com/blog/what-is-agentic-ai/
Cascade vs. single-agent designs
AI-centric specification design
Evaluation using criteria drift: https://arxiv.org/pdf/2404.12272
Please share your thoughts with the hashtags #todayILearnedFM #tilfm!
Your co-hosts:
Tomoaki Imai, Noxx CTO https://x.com/tomoaki_imai bsky: https://bsky.app/profile/tomoaki-imai.bsky.social
Ryoichi Kato, Software Engineer https://x.com/ryo1kato bsky: https://bsky.app/profile/ryo1kato.bsky.social
In this week's episode of The New Stack Agents, Scott Carey, editor-in-chief of LeadDev, discussed their first AI Impact Report, which explores how engineering teams are adopting AI tools. The report shows that two-thirds of developers are actively using AI, with another 20% in pilot stages and only 2% having no plans to use AI — a group Carey finds particularly intriguing. Popular tools include Cursor (43%) and GitHub Copilot (37%), with others like OpenAI, Gemini, and Claude following, while Amazon Q and Replit lag behind.
Most developers use AI for code generation, documentation, and research, but usage for DevOps tasks like testing, deployment, and IT automation remains low. Carey finds this underutilization frustrating, given AI's potential impact in these areas. The report also highlights concern for junior developers, with 54% of respondents expecting fewer future hires at that level. While many believe AI boosts productivity, some remain unsure — a sign that organizations still struggle to measure developer performance effectively.
Learn more from The New Stack about the latest insights about AI tool adoption:
AI Adoption: Why Businesses Struggle to Move from Development to Production
3 Strategies for Speeding Up AI Adoption Among Developers
AI Everywhere: Overcoming Barriers to Adoption
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
While much of the AI world chases ever-larger models, Ravin Kumar (Google DeepMind) and his team build across the size spectrum, from billions of parameters down to this week's release: Gemma 270M, the smallest member yet of the Gemma 3 open-weight family. At just 270 million parameters, a quarter the size of Gemma 1B, it's designed for speed, efficiency, and fine-tuning. We explore what makes 270M special, where it fits alongside its billion-parameter siblings, and why you might reach for it in production even if you think “small” means “just for experiments.”
We talk through:
- Where 270M fits into the Gemma 3 lineup — and why it exists
- On-device use cases where latency, privacy, and efficiency matter
- How smaller models open up rapid, targeted fine-tuning
- Running multiple models in parallel without heavyweight hardware
- Why “small” models might drive the next big wave of AI adoption
If you've ever wondered what you'd do with a model this size (or how to squeeze the most out of it) this episode will show you how small can punch far above its weight.
LINKS
Introducing Gemma 3 270M: The compact model for hyper-efficient AI (Google Developer Blog) (https://developers.googleblog.com/en/introducing-gemma-3-270m/)
Full Model Fine-Tune Guide using Hugging Face Transformers (https://ai.google.dev/gemma/docs/core/huggingface_text_full_finetune)
The Gemma 270M model on HuggingFace (https://huggingface.co/google/gemma-3-270m)
The Gemma 270M model on Ollama (https://ollama.com/library/gemma3:270m)
Building AI Agents with Gemma 3, a workshop with Ravin and Hugo (https://www.youtube.com/live/-IWstEStqok) (Code here (https://github.com/canyon289/ai_agent_basics))
From Images to Agents: Building and Evaluating Multimodal AI Workflows, a workshop with Ravin and Hugo (https://www.youtube.com/live/FNlM7lSt8Uk) (Code here (https://github.com/canyon289/ai_image_agent))
Evaluating AI Agents: From Demos to Dependability, an upcoming workshop with Ravin and Hugo (https://lu.ma/ezgny3dl)
Upcoming Events on Luma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
Watch the podcast video on YouTube (https://youtu.be/VZDw6C2A_8E)
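For a sense of how little ceremony a 270M-parameter model needs, here is a minimal local-inference sketch using the Hugging Face checkpoint linked above. It assumes transformers and torch are installed and that you have accepted the model's license on Hugging Face; the prompt is just an illustrative classification task, not a recommended pattern from the episode.

```python
# Minimal local inference sketch for Gemma 3 270M (model ID taken from the links above).
from transformers import pipeline

generator = pipeline("text-generation", model="google/gemma-3-270m")

prompt = "Label the sentiment of this review as positive or negative: 'Battery life is great.'\nLabel:"
result = generator(prompt, max_new_tokens=16)
print(result[0]["generated_text"])
```

A base checkpoint this small usually earns its keep after a quick task-specific fine-tune (see the fine-tune guide linked above) rather than as a general-purpose chat model, which is exactly the trade-off the episode explores.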
In this episode of Android Back Stage, hosts Tor and Chet are joined by Melba and Ralston, Software Engineers on the Compose Team, to talk about Autofill in Compose. Learn about Autofill services, best practices, and more!
Chapters:
0:00 - Intro
0:40 - What is Autofill?
1:21 - Autofill team journey and semantics
2:15 - Defining semantics in Compose
3:29 - Bridging views and Compose for Autofill
4:23 - Developer Experience: Early autofill in Compose
7:02 - Autofill services and communication flow
7:59 - User authentication and multiple autofill services
12:53 - The Autofill flow: From tap to fill
21:56 - Handling list types and saving credentials (commit)
30:11 - The "Asterisks" bug and semantics refactor
31:09 - Performance improvements in Autofill
37:48 - Compose's architectural advantages (composition over inheritance)
40:30 - Best practices and future improvements
Dario Gieselaar, Principal Software Engineer at Elastic, gets brutally honest about what it really takes to succeed in a top tech company. We uncover the uncomfortable truths about the promotion game, why direct feedback is crucial (even when it hurts), and why a "winning" mindset is everything. This is the conversation about career growth they don't have in HR meetings.
In this episode, we cover:
Navigating promotion cycles and advocating for yourself
Giving and receiving critical feedback without being seen as "negative"
The importance of accountability and managing low-performing colleagues
Why a top-down approach can be more effective than team democracy
How to "disagree and commit" to keep projects moving forward
This video is for any software engineer who wants to level up their career, navigate corporate dynamics, and become a high-impact performer in a competitive environment.
Full episode on YouTube ▶️ https://youtu.be/1q7Jjbi5GJ4
Beyond Coding Podcast with
On this Salcedo Storm Podcast:
Andy Hopper is a Software Engineer, Texas Guardsman, and conservative activist who unseated a RINO incumbent and now represents Texas House District 64 in the Texas State House.
AND
Representative Brian Harrison represents the 10th district in the Texas State House. Prior to that he was President Trump's Chief of Staff at HHS.
Traditional software expects 100% passing tests. In LLM-powered systems, that's not just unrealistic — it's a feature, not a bug. Eric Ma leads research data science in Moderna's data science and AI group, and over breakfast at SciPy we explored why AI products break the old rules, what skills different personas bring (and miss), and how to keep systems alive after the launch hype fades. You'll hear the clink of coffee cups, the murmur of SciPy in the background, and the occasional bite of frittata as we talk (hopefully also a feature, not a bug!)
We talk through:
• The three personas — and the blind spots each has when shipping AI systems
• Why “perfect” tests can be a sign you're testing the wrong thing
• Development vs. production observability loops — and why you need both
• How curiosity about failing data separates good builders from great ones
• Ways large organizations can create space for experimentation without losing delivery focus
If you want to build AI products that thrive in the messy real world, this episode will help you embrace the chaos — and make it work for you.
LINKS
Eric's Website (https://ericmjl.github.io/)
More about the workshops Eric and Hugo taught at SciPy (https://hugobowne.substack.com/p/stress-testing-llms-evaluation-frameworks)
Upcoming Events on Luma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
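One way to make the "don't expect 100% passing tests" idea concrete is to assert a pass rate over a labelled eval set rather than demanding every case succeed. The sketch below is illustrative only: classify_ticket, the examples, and the 0.8 threshold are hypothetical placeholders, not anything from Eric's actual systems.

```python
# Illustrative eval-style test: assert a pass-rate threshold, not perfection.
EVAL_SET = [
    {"text": "I was double-charged this month", "expected": "billing"},
    {"text": "The app crashes when I upload a photo", "expected": "bug"},
    {"text": "How do I export my data?", "expected": "how-to"},
]

def classify_ticket(text: str) -> str:
    """Hypothetical LLM-backed classifier under test; replace with the real call."""
    raise NotImplementedError

def test_classifier_pass_rate():
    passed = sum(classify_ticket(ex["text"]) == ex["expected"] for ex in EVAL_SET)
    pass_rate = passed / len(EVAL_SET)
    # 0.8 is an illustrative bar; in practice it should track the product's real tolerance.
    assert pass_rate >= 0.8, f"pass rate {pass_rate:.0%} below threshold"
```

The same threshold metric, logged continuously against production traffic, is essentially the production observability loop the episode contrasts with development-time evals.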
In a world where "AI" gets thrown around constantly, what does it really mean for those working in tech? What are the moments at work that actually feel fun—or genuinely rewarding? The idea of a “Cat in a Black Box” feels strangely fitting for engineers: so much of what they build is unseen, uncertain, until it's opened. Amid hectic days, she finds joy in dancing K-pop and hip-hop—movements that become more than just choreography. A dance instructor for over four years, she leads with quiet confidence behind the scenes, while holding onto a traditional Korean mindset she hopes never fades.
We talk about the massive Facebook outage of 2021, which lasted more than five hours.
Meta Tech Blog: https://engineering.fb.com/2021/10/05/networking-traffic/outage-details/
Wikipedia: https://en.wikipedia.org/wiki/2021_Facebook_outage
Cloudflare's write-up: https://blog.cloudflare.com/october-2021-facebook-outage/
Please share your thoughts with the hashtags #todayILearnedFM #tilfm!
Your co-hosts:
Tomoaki Imai, Noxx CTO https://x.com/tomoaki_imai bsky: https://bsky.app/profile/tomoaki-imai.bsky.social
Ryoichi Kato, Software Engineer https://x.com/ryo1kato
While AI training garners most of the spotlight — and investment — the demands of AI inference are shaping up to be an even bigger challenge. In this episode of The New Stack Makers, Sid Sheth, founder and CEO of d-Matrix, argues that inference is anything but one-size-fits-all. Different use cases — from low-cost to high-interactivity or throughput-optimized — require tailored hardware, and existing GPU architectures aren't built to address all these needs simultaneously.
“The world of inference is going to be truly heterogeneous,” Sheth said, meaning specialized hardware will be required to meet diverse performance profiles. A major bottleneck? The distance between memory and compute. Inference, especially in generative AI and agentic workflows, requires constant memory access, so minimizing the distance data must travel is key to improving performance and reducing cost.
To address this, d-Matrix developed Corsair, a modular platform where memory and compute are vertically stacked — “like pancakes” — enabling faster, more efficient inference. The result is scalable, flexible AI infrastructure purpose-built for inference at scale.
Learn more from The New Stack about inference compute and AI:
Scaling AI Inference at the Edge with Distributed PostgreSQL
Deep Infra Is Building an AI Inference Cloud for Developers
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
In the latest episode of The New Stack Agents, Naveen Rao, VP of AI at Databricks and a former neuroscientist, reflects on the evolution of AI, neural networks, and the energy constraints that define both biological and artificial intelligence. Rao, who once built circuit systems as a child and later studied the brain's 20-watt efficiency at Duke and Brown, argues that current AI development—relying on massive energy-intensive data centers—is unsustainable. He believes true intelligence should emerge from low-power, efficient systems, more aligned with biological computing.
Rao warns that the industry is headed toward “model collapse,” where large language models (LLMs) begin training on AI-generated content instead of real-world data, leading to compounding inaccuracies and hallucinations. He stresses the importance of grounding AI in reality and moving beyond brute-force scaling. Rao sees intelligence not just as a function of computing power, but as a distributed, observational system—“life is a learning machine,” he says—hinting at a need to fundamentally rethink how we build AI.
Learn more from The New Stack about the latest insights about the evolution of AI and neural networks:
The 50-Year Story of the Rise, Fall, and Rebirth of Neural Networks
The Evolution of the AI Stack: From Foundation to Agents
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
We talked about attending the IVS Kyoto event and hearing the CEOs of Notion and Vercel speak about founding their companies, about hosting our own event, and about working remotely from Japan.
IVS Kyoto: https://www.ivs.events/
Founding Engineer Day (a gathering for engineers diving into founding and startups): https://lu.ma/jv2e4ewr
Presentation slides, "Lessons learned as a founding engineer who co-founded a startup in Silicon Valley": https://speakerdeck.com/tomoima525/sirikonbaredesutatoatupuwogong-tong-chuang-ye-sitahuaundeinguenziniatositenoxue-bi
Please share your thoughts with the hashtags #todayILearnedFM #tilfm!
Your co-hosts:
Tomoaki Imai, Noxx CTO https://x.com/tomoaki_imai bsky: https://bsky.app/profile/tomoaki-imai.bsky.social
Ryoichi Kato, Software Engineer https://x.com/ryo1kato bsky: https://bsky.app/profile/ryo1kato.bsky.social
Art Hu, Global CIO at Lenovo, shares proven strategies for implementing AI at scale in one of the world's largest technology companies. Learn how to navigate uncertainty, build organizational agility, and drive real business value from AI investments.
In this episode, you'll learn:
Why "no regret" AI investments beat waiting for perfect solutions
How to transform fear of job loss into workforce empowerment
The framework Lenovo uses to evaluate AI opportunities across every business function
Why pull-based learning environments outperform top-down AI mandates
How software engineers are expanding beyond code to become business architects
Key insights covered:
✓ Agility as competitive advantage: Accept that AI technologies chosen today won't remain cutting-edge in six months. Build organizational agility instead of seeking guaranteed outcomes.
✓ Reframe the AI conversation: AI automates specific tasks within jobs, not entire positions. Leaders must help teams decompose roles and reconstruct them around uniquely human contributions.
✓ Create environments, not mandates: Lenovo built hundreds of approved AI agents across legal, marketing, finance, and HR. When employees experiment with relevant tools, they naturally request advanced training.
✓ Leadership requires hands-on experience: Senior executives must personally engage with AI tools to lead effectively. You cannot manage what you don't understand.
Art Hu oversees technology strategy for a company selling four devices per second globally. His dual perspective as both Global CIO and Chief Delivery & Technology Officer provides unique insights into bridging the gap between AI potential and practical business outcomes.
Perfect for: CIOs, CTOs, business executives, and technology leaders navigating AI transformation in their organizations.
Joe Cuevas Jr. is a seasoned software engineer based in San Antonio, Texas, with a strong focus on front-end development and user experience. At Southwest Research Institute, he contributes to innovative engineering solutions that span industries. With a background that blends technical precision and creative problem-solving, Joe brings a thoughtful perspective to the evolving world of software development. His passion for clean design, scalable code, and collaborative innovation makes him a standout voice in today's tech landscape.
Topics of Discussion:
[1:46] Why Joe fell in love with programming after a single VB.NET class.
[5:00] What guided Joe towards the enterprise side of .NET.
[6:25] Joe's favorite foundational .NET and ASP.NET Core books.
[9:06] Learning algorithms, concurrency, and testing to make up for no CS degree.
[10:00] Why test automation is essential and doing testing immediately after writing the code.
[12:14] Thoughts on test-driven development vs. testing right after coding.
[14:18] Previewing Microsoft Semantic Kernel in Action, by Daniel Costa.
[15:38] How Joe became a Manning book reviewer, and what he's learned.
[17:18] Prompt engineering and agent instructions: what's ready now.
[18:21] Codex and the moment that blew Joe's mind.
[22:26] How Joe sees software architecture evolving in the future.
[24:09] Large language models vs. older code bases, and where they shine.
[27:57] Dependency injections.
[32:43] Joe's advice for junior developers and why persistence beats perfection.
Mentioned in this Episode:
Clear Measure Way
Architect Forum
Software Engineer Forum
Programming with Palermo — New Video Podcast! Email us at programming@palermo.net.
Joe Cuevas LinkedIn
ASP.NET Core in Action
ASP.NET Core in Action, Third Edition
Unit Testing Principles, Practice, and Patterns
Unit Testing Principles, Practices, and Patterns: Effective testing styles, patterns, and reliable automation for unit testing, mocking, and integration testing with examples in C#
Dependency Injection in .NET
Microsoft Semantic Kernel in Action
Explore — LeetCode
Joe on GitHub
Want to Learn More? Visit AzureDevOps.Show for show notes and additional episodes.
A senior software engineer for Turso, Thorpe is part of an experimental program in the Maine state prison system that allows incarcerated people to work remote jobs from custody. Learn more about your ad choices. Visit podcastchoices.com/adchoices
We looked back at 20 years of jemalloc, one implementation of the malloc memory allocator known for its resistance to fragmentation.
jemalloc Postmortem by Jason Evans (blog): https://jasone.github.io/2025/06/12/jemalloc-postmortem/
Hacker News discussion: https://news.ycombinator.com/item?id=44264958
Please share your thoughts with the hashtags #todayILearnedFM #tilfm!
Your co-hosts:
Tomoaki Imai, Noxx CTO https://x.com/tomoaki_imai bsky: https://bsky.app/profile/tomoaki-imai.bsky.social
Ryoichi Kato, Software Engineer https://x.com/ryo1kato bsky: https://bsky.app/profile/ryo1kato.bsky.social
Fal.ai, once focused on machine learning infrastructure, has evolved into a major player in generative media. In this episode of The New Stack Agents, hosts speak with Fal.ai CEO Burkay Gur and investor Glenn Solomon of Notable Capital. Originally aiming to optimize Python runtimes, Fal.ai shifted direction as generative AI exploded, driven by tools like DALL·E and ChatGPT. Today, Fal.ai hosts hundreds of models—from image to audio and video—and emphasizes fast, optimized inference to meet growing demand.
Speed became Fal.ai's competitive edge, especially as newer generative models require GPU power not just for training but also for inference. Solomon noted that while optimization alone isn't a sustainable business model, Fal's value lies in speed and developer experience. Fal.ai offers both an easy-to-use web interface and developer-focused APIs, appealing to both technical and non-technical users.
Gur also addressed generative AI's impact on creatives, arguing that while the cost of creation has plummeted, the cost of creativity remains—and may even increase as content becomes easier to produce.
Learn more from The New Stack about AI's impact on creatives:
AI Will Steal Developer Jobs (But Not How You Think)
How AI Agents Will Change the Web for Users and Developers
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Sze Wong is the Founder and CEO of Zerion Software, a software development company building technology to help teams perform better field data collection. Listen to Sze talk about how he turned teaching Java into a business, how he nearly went bankrupt after failing to recognize the macroeconomics of his business, how he founded and pivoted Zerion Software to become an idea factory, how he embraces and nurtures new forms of software development such as vibe coding, and much more. Hosted by Perry Tiu.
Episode Links:
• Sze's LinkedIn: https://www.linkedin.com/in/szeywong
• Sze's Website: https://www.szewong.com/
• Zenith Venture Studio: https://www.zenithstudio.io/
• Craft your world: Create a life worth living through entrepreneurship: https://www.amazon.com/Craft-your-world-through-entrepreneurship/dp/B0BCD4ZY3P
Interested in being on the show? contact@perrytiu.com
Sponsorship enquiries: sponsor@perrytiu.com
Follow Podcast Ruined by a Software Engineer and leave a review
• Apple Podcasts: https://apple.co/3RASg8x
• Spotify: https://spoti.fi/3RBAXEw
• Youtube: https://youtube.com/@perrytiu
More Podcast Ruined by a Software Engineer
• Website: https://perrytiu.com/podcast
• Merch: https://perrytiu.com/shop
• RSS Feed: https://perrytiu.com/podcast/rss.xml
Follow Perry Tiu
• Twitter: https://twitter.com/perry_tiu
• LinkedIn: https://linkedin.com/in/perrytiu
• Instagram: https://instagram.com/doctorpoor
In this episode of Clocking Out, we meet Carla Stickler, a former Broadway actor who once brought Elphaba to life in Wicked and now writes code as a web engineer at Spotify. After landing her first big tour right out of college, Carla returned to New York expecting stardom, only to find herself waiting tables, battling rejection, and facing the hard truth that talent alone doesn't guarantee success. That humbling chapter forced her ego to take a back seat and ultimately reshaped her definition of success. That shift in perspective made her not only more grounded but also a better colleague, and it led to a decade on stage, landing big roles on Broadway and building a career many people dream about. But behind the curtain, the reality was far more complicated. The physical toll of performing eight shows a week mounted, and when Carla transitioned to teaching at the college level, she found herself struggling to inspire students to enter an industry she no longer felt supported by. Burnout, financial instability, and a deepening disillusionment led her to make a bold pivot: software engineering. With no tech background but plenty of grit, Carla enrolled in a coding bootcamp and discovered a surprising new outlet for her creativity. Today, she thrives in tech and champions artists looking to reinvent their careers. She proves that there's always overlap in every profession and that no skill or experience is ever wasted; when you set your mind to it, you really can do anything. In this conversation, Carla reflects on ambition, ego, rejection, and what it means to let go of the dream you thought you had to chase. If you've ever questioned whether it's too late, or too risky, to start over, Carla's story is your proof that it's not. Tune in for a masterclass in reinvention.
Connect with Carla at:
TikTok: https://www.tiktok.com/@sticklercarla
Instagram: https://www.instagram.com/sticklercarla/
LinkedIn: https://www.linkedin.com/in/carlastickler/
Personal website: www.carlastickler.com
Company website: www.creators.spotify.com
Follow Raymond:
LinkedIn: Raymond Lee
Instagram: @raymondlee.co
TikTok: @clockingoutpod
X: @hrentrepreneur
Youtube: @clockingoutwithraymondlee
Click Here to become a guest on Clocking Out
Jack Landry is a former Division I athlete and software engineer turned men's mindset and fitness coach. After working at companies like Fidelity and Amazon, he left the corporate world to pursue a more meaningful path—helping high-achieving men who feel unfulfilled create lives of purpose, discipline, and freedom. Drawing from his own transformation through fitness, faith, and personal development, Jack now runs a thriving coaching business and creates inspiring content for his 70,000+ followers on Instagram. His mission is to guide others in stepping into their God-given potential while building a strong foundation for their future—personally, spiritually, and professionally.
Traditional headless browsers weren't built for AI agents, often breaking when web elements shift even slightly. Paul Klein IV, founder of Browserbase and its open-source tool Stagehand, is tackling this by creating a browser infrastructure designed specifically for AI control. On The New Stack Agents podcast, Klein explained that Stagehand enables AI agents to interpret vague, natural-language instructions and still function reliably—even when web pages change. This flexibility contrasts with brittle legacy tools built for deterministic testing. Instead of writing 100 scripts for 100 websites, one AI-powered script can now handle thousands. Klein's broader vision is a world where AI can fully operate the web on behalf of users—automating tasks like filing taxes without human input. He acknowledges the technical challenges, from running browsers on servers to handling edge cases like time zones and emojis. The episode also touches on Klein's concerns with AWS, which he says held a "partnership" meeting that felt more like corporate espionage. Still, Klein remains confident in Browserbase's community-driven edge. Learn more from The New Stack about the latest insights in AI browser-based tools: "Why Headless Browsers Are a Key Technology for AI Agents" and "Ladybird: That Rare Breed of Browser Based on Web Standards." Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
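To make the contrast concrete, here is a small, hypothetical Python sketch of the two styles: a deterministic selector-based Playwright click versus a natural-language instruction of the kind Stagehand interprets. The act() helper, the commented-out selector, and the URL are illustrative stand-ins and not Stagehand's actual API.

```python
# Requires: pip install playwright && playwright install chromium
from playwright.sync_api import sync_playwright


def act(page, instruction: str) -> None:
    """Hypothetical natural-language action, sketched for illustration.

    A real implementation would hand the page's HTML or accessibility tree
    plus the instruction to an LLM, receive back a concrete action
    (element + event), and replay it with Playwright.
    """
    snapshot = page.content()  # the context a model would reason over
    print(f"LLM would resolve {instruction!r} against {len(snapshot)} bytes of HTML")


with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com")

    # Deterministic style: tied to today's markup, breaks when it shifts.
    # page.click("#nav > div:nth-child(3) button.add-to-cart")

    # Instruction style: the same line can, in principle, be reused across
    # many sites and survive layout changes.
    act(page, "open the first article on the page")

    browser.close()
```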
Colab is cozy. But production won't fit on a single GPU. Zach Mueller leads Accelerate at Hugging Face and spends his days helping people go from solo scripts to scalable systems. In this episode, he joins me to demystify distributed training and inference — not just for research labs, but for any ML engineer trying to ship real software. We talk through:
• From Colab to clusters: why scaling isn't just about training massive models, but serving agents, handling load, and speeding up iteration
• Zero-to-two GPUs: how to get started without Kubernetes, Slurm, or a PhD in networking
• Scaling tradeoffs: when to care about interconnects, which infra bottlenecks actually matter, and how to avoid chasing performance ghosts
• The GPU middle class: strategies for training and serving on a shoestring, with just a few cards or modest credits
• Local experiments, global impact: why learning distributed systems—even just a little—can set you apart as an engineer
If you've ever stared at a Hugging Face training script and wondered how to run it on something more than your laptop: this one's for you.
LINKS
Zach on LinkedIn (https://www.linkedin.com/in/zachary-mueller-135257118/)
Hugo's blog post on Stop Building AI Agents (https://www.linkedin.com/posts/hugo-bowne-anderson-045939a5_yesterday-i-posted-about-stop-building-ai-activity-7346942036752613376-b8-t/)
Upcoming Events on Luma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
Hugo's recent newsletter about upcoming events and more! (https://hugobowne.substack.com/p/stop-building-agents)
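For anyone at the "zero-to-two GPUs" stage described above, here is a minimal sketch of a training loop using Hugging Face Accelerate, with a toy model and random data standing in for a real workload. The point is that the same script runs on CPU, one GPU, or several processes via `accelerate launch`, with no Kubernetes or Slurm involved.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()  # picks up the launch config; single process by default

# Toy model and data; replace with your own.
model = nn.Linear(16, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataset = TensorDataset(torch.randn(256, 16), torch.randn(256, 1))
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# prepare() moves everything to the right device(s) and wraps the model for
# distributed data parallelism when more than one process is launched.
model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

for epoch in range(2):
    for x, y in loader:
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        accelerator.backward(loss)  # handles gradient sync/scaling across processes
        optimizer.step()
    accelerator.print(f"epoch {epoch}: loss {loss.item():.4f}")
```

Run it as-is with `python train.py` on a laptop, or scale out with `accelerate launch --num_processes 2 train.py` once a second GPU is available.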
Faaaaala Dev! We're kicking off the 7th season of #FalaDev with a heavyweight conversation, straight from our new audio format, now with video too! This new season brings plenty of technical knowledge from highly regarded guests in the tech market. Let's go! In the premiere episode, our host PV Faria welcomes Fábio Vedovelli, Software Engineer, for a conversation full of reflections on technology, career, artificial intelligence, and software architecture. In this chat, Vedovelli shares his journey from the early days of the web to building his own AI-powered products, and brings a practical, passionate take on the Remix / React Router 7 framework and on how he became a reference by teaching technologies before they even became trends. If you enjoy technical content, real stories from people living the market, and valuable career tips, this episode is for you. Hit play and subscribe to the channel!
Host: PV Faria
Guest: Fábio Vedovelli
• X
• Youtube
• React Router 7 + AI course
Whatever your career stage and goals, we have the ideal content
In this episode, you'll meet Mariena Quintanilla, a Los Angeles-based career flipper who proves you can pivot, even within the same industry. After spending nearly two decades in tech, moving through roles in software engineering, product management, solutions architecture, and customer experience (plus navigating a layoff), Mariena had a realization: she didn't want to keep climbing the corporate ladder but she still loved tech. So, she made the leap. Today, Mariena runs Mellonhead.co, an AI education and consulting company helping others make sense of emerging technology. In our conversation, we talk about:
The inner work that happens before, during, and after a big leap
Building a financial safety net and jumping anyway
The identity shifts that come with walking away from a title
Why knowing who you are matters more than knowing where you're going
Mariena doesn't sugarcoat the messy middle. She shares what it really felt like to start over, the fears she faced, and the courage it took to rebuild on her own terms.
Connect with Mariena
LinkedIn: https://www.linkedin.com/in/mariena/
Website: https://mellonhead.co/
Thanks for listening to The Career Flipper! If you enjoyed this episode, let's spread the word! Share it with a friend, subscribe, and leave a review—it helps other career flippers find the show.
Let's Stay Connected:
Join the community: thecareerflipper.com
TikTok: @thecareerflipper
Instagram: @thecareerflipperpod
Got a career flip story? I'd love to hear it—and maybe even have you on the podcast! Whether you've completed your flip, are just starting, or are in the thick of it, submit your story here: https://www.thecareerflipper.com
Want to support the show?
Looking for a speaker? I'd love to talk about career changes at your next event.
Collaborate through sponsorships or affiliates! Let's work together.
Email me: hello@thecareerflipper.com
Check Out My Customer Service Courses
Before my career flip, I led customer experience teams and created online courses that have helped over 12,000 students worldwide. Whether you're switching to customer service or sharpening your skills to run your own business, these courses are packed with practical tips. Learn more at thecareerflipper.com/courses.
Other Ways to Get Involved:
Buy me a coffee!
Explore my furniture flips
Music Credits
Season 1: Intro and outro music by audionautix.com.
Season 2: Intro and outro original music by Jenny Dempsey, recorded in a home studio.
What's the best that could happen?
Meta Platforms is unlikely to alter its "pay-or-consent" model further, which could lead to new EU antitrust charges, Goldman Sachs is integrating an autonomous AI software engineer named Devin, and a new study found that using AI coding assistants actually slowed down developers by 19%.
Get 10% off Hugo's "Building LLM Applications for Data Scientists and Software Engineers" online course! Today's clip is from episode 135 of the podcast, with Teemu Säilynoja. Alex and Teemu discuss the importance of simulation-based calibration (SBC). They explore the practical implementation of SBC in probabilistic programming languages, the challenges faced in developing SBC methods, and the significance of both prior and posterior SBC in ensuring model reliability. The discussion emphasizes the need for careful model implementation and inference algorithms to achieve accurate calibration. Get the full conversation here.
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)
Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
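As a rough illustration of the SBC procedure discussed in the clip, here is a toy Python sketch on a conjugate normal-normal model (my own example, not code from the episode): draw a parameter from the prior, simulate data, draw from the posterior, and record the rank of the prior draw among the posterior draws. If the inference matches the generative model, those ranks are uniform.

```python
import numpy as np

rng = np.random.default_rng(0)

PRIOR_MU, PRIOR_SD = 0.0, 1.0   # theta ~ Normal(0, 1)
OBS_SD = 0.5                    # y_i ~ Normal(theta, 0.5)
N_OBS, N_SIMS, N_DRAWS = 10, 1000, 99

ranks = np.empty(N_SIMS, dtype=int)
for s in range(N_SIMS):
    theta = rng.normal(PRIOR_MU, PRIOR_SD)        # 1. draw from the prior
    y = rng.normal(theta, OBS_SD, size=N_OBS)     # 2. simulate data

    # 3. "fit" the model: here the exact conjugate posterior; in practice
    #    this is where MCMC or variational inference would go.
    post_prec = 1 / PRIOR_SD**2 + N_OBS / OBS_SD**2
    post_mean = (PRIOR_MU / PRIOR_SD**2 + y.sum() / OBS_SD**2) / post_prec
    draws = rng.normal(post_mean, np.sqrt(1 / post_prec), size=N_DRAWS)

    # 4. rank statistic: how many posterior draws fall below the prior draw
    ranks[s] = int((draws < theta).sum())

# 5. calibration check: ranks should be roughly uniform on {0, ..., N_DRAWS}
hist, _ = np.histogram(ranks, bins=10, range=(0, N_DRAWS + 1))
print("rank histogram (should be roughly flat):", hist)
```

Posterior SBC, also mentioned in the episode, follows the same logic but conditions the simulations on an observed dataset rather than on fresh prior draws.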
On today's show Ben and Andrew answer questions about the future of engineering jobs, the definition of vibe coding, Meta's AI upside, ChatGPT-led fashion shows, xAI as a third-tier streamer, and bitter lessons as autonomous driving becomes more viable. At the end: an emailer follows up on last week's conversation about normies and AI risk.