Podcasts about software engineers

Computing discipline

  • 2,057 PODCASTS
  • 4,271 EPISODES
  • 40m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • Feb 13, 2026 LATEST

Latest podcast episodes about software engineers

The New Stack Podcast
The reason AI agents shouldn't touch your source code — and what they should do instead

Feb 13, 2026 · 22:41


Dynatrace is at a pivotal point, expanding beyond traditional observability into a platform designed for autonomous operations and security powered by agentic AI. In an interview on *The New Stack Makers*, recorded at the Dynatrace Perform conference, Chief Technology Strategist Alois Reitbauer discussed his vision for AI-managed production environments. The conversation followed Dynatrace's acquisition of DevCycle, a feature-management platform. Reitbauer highlighted feature flags—long used in software development—as a critical safety mechanism in the age of agentic AI. Rather than allowing AI agents to rewrite and deploy code, Dynatrace envisions them operating within guardrails by adjusting configuration settings through feature flags. This approach limits risk while enabling faster, automated decision-making. Customers, Reitbauer noted, are increasingly comfortable with AI handling defined tasks under constraints, but not with agents making sweeping, unsupervised changes. By combining AI with controlled configuration tools, Dynatrace aims to create a safer path toward truly autonomous operations. Learn more from The New Stack about the latest in progressive delivery: "Why You Can't Build AI Without Progressive Delivery" and "Continuous Delivery: Gold Standard for Software Development."
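As a concrete illustration of the guardrail pattern Reitbauer describes, here is a minimal sketch in which an agent may only toggle a fixed allow-list of feature flags rather than touch source code. The flag store and flag names are hypothetical and are not the Dynatrace or DevCycle API.

    # Hypothetical sketch of the "agents adjust configuration, not code" guardrail.
    # FlagStore and the flag names are illustrative only.
    ALLOWED_FLAGS = {"checkout-v2", "recs-model-rollout"}

    class FlagStore:
        def __init__(self):
            self.flags = {name: False for name in ALLOWED_FLAGS}

        def set(self, name: str, enabled: bool) -> None:
            # The agent's only write path: a named flag from the allow-list.
            if name not in ALLOWED_FLAGS:
                raise PermissionError(f"agent may not modify flag {name!r}")
            self.flags[name] = enabled

    def apply_agent_decision(store: FlagStore, decision: dict) -> None:
        # The agent proposes a configuration change; it never edits or deploys source.
        store.set(decision["flag"], decision["enabled"])

    store = FlagStore()
    apply_agent_decision(store, {"flag": "checkout-v2", "enabled": False})  # e.g. roll back a risky feature
    print(store.flags)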

Develpreneur: Become a Better Developer and Entrepreneur
Balancing Building and Customer Feedback Without Getting Stuck

Feb 12, 2026 · 31:53


If you've ever shipped fast only to realize no one wanted what you built, you've felt the tension behind balancing building and feedback. As developers, we're trained to execute against known requirements. As soon as you step into product ownership, consulting, or entrepreneurship, those guardrails disappear. Now you have to decide what to build, who it's for, and why it matters—while still making forward progress. Get it wrong, and you either drown in feedback or disappear into code. Get it right, and you create steady momentum without wasting effort. This interview continues our discussion with Tyler Dane as we break down a practical, repeatable system for balancing building and feedback so you can keep shipping and stay aligned with real customer needs.
About Tyler Dane
Tyler Dane has dedicated his career to helping people better manage—and truly appreciate—their time. After working as a full-time Software Engineer, Tyler recently stepped away from traditional employment to focus entirely on building Compass Calendar, a productivity app designed to help everyday users visualize and plan their day more intentionally. The tool is built from firsthand experience, not theory—shaped by years of experimenting with productivity systems, tools, and workflows. In a bold reset, Tyler sold most of his belongings and relocated to San Francisco to focus on growing the product, collaborating with partners, and pushing Compass forward. Outside of coding, Tyler creates YouTube videos and writes about time management and productivity. After consuming countless productivity books, tools, and frameworks, he realized a common trap: doing more without actually accomplishing what matters. That insight led him to break productivity down into its most practical, nuanced components—cutting through hustle culture noise to focus on systems that actually work. Tyler is unapologetically honest and independent. With no investors, no sponsors, and nothing to sell beyond the value of his work, his focus is simple: help people get more done—and appreciate the limited time they have to do it. Follow Tyler on LinkedIn, YouTube, and X.
Balancing building and feedback starts with a clear v1
The biggest cause of wasted effort isn't bad code—it's unclear scope. A clear v1 isn't a long feature list; it's a decision about which problem you are solving first. When v1 is defined, feedback becomes directional instead of distracting. You can evaluate every request with a simple question: Does this help solve the v1 problem? If the answer is no, it goes into a parking lot—not the backlog. Without that clarity, every conversation feels urgent, and every idea feels equally important.
Balancing building and feedback by timeboxing your week
Unstructured time leads to extremes. One week becomes all coding. The next becomes all conversations. Neither works for long. Timeboxing forces balance by design. Decide when you build and when you listen—and protect those blocks like production systems. This removes decision fatigue and prevents emotional swings based on the latest conversation.
The Weekly Balance Blueprint
• Pick a structure: daily outreach blocks or one dedicated feedback day
• Convert feedback into next-week priorities instead of mid-week pivots
Consistency matters more than perfection.
Balancing building and feedback with daily "business refocus" blocks
Short check-ins keep you out of the weeds. Spend 10–15 minutes at the start and end of your day to reconnect with the business context. Ask yourself: Who is this for? What problem am I solving? What actually moved the product forward today? These moments prevent scope creep and help you code with intent instead of habit.
Balancing building and feedback using personal sprints
Personal sprints introduce rhythm. Two- or three-week cycles work well because they're long enough to produce meaningful output and short enough to adjust course. Each sprint should include:
• Focused build time
• Planned feedback windows
• Explicit integration of what you learned
This keeps learning and execution tightly coupled, rather than competing for attention.
Balancing building and feedback through problem-first customer research
Feedback becomes overwhelming when you ask the wrong questions. Feature requests are noisy. Problems are signals. Focus conversations on how people experience the problem today, what frustrates them, and what "better" looks like. This approach surfaces patterns instead of opinions.
Problem-First Customer Conversations
• Ask about pains, workarounds, and desired outcomes
• Use "not our customer" signals to narrow your focus
Clarity often comes from who you don't build for.
Balancing building and feedback to prevent feature overload
Not all feedback belongs in your product. Filtering input is a leadership skill. Use your v1 definition and target customer as a lens. Some ideas are valuable later. Some indicate a different market entirely. Saying "no" protects your momentum and your sanity.
Balancing building and feedback by turning conversations into messaging
Customer conversations don't just shape the product—they shape how you talk about it. The language people use to describe their pain becomes your marketing copy. When your messaging mirrors real problems, alignment improves across sales, onboarding, and product decisions.
Balancing building and feedback with journaling to spot patterns
Writing creates distance. Distance creates clarity. A lightweight journaling habit helps you spot repeated mistakes, drifting priorities, and false assumptions before they become expensive. Over time, patterns become impossible to ignore.
The Founder Feedback Journal
• Capture decisions, assumptions, and outcomes daily
• Review monthly to identify drift and reset priorities
It's one of the simplest tools with the highest long-term ROI.
Conclusion
Balancing building and feedback isn't about splitting your time evenly—it's about building a system that keeps you moving forward without losing direction. Clear scope, protected time, intentional feedback loops, and honest reflection create momentum that compounds. Start small. Adjust deliberately. And remember: progress comes from building the right things, not just building faster.
Stay Connected: Join the Developreneur Community
We invite you to join our community and share your coding journey with us. Whether you're a seasoned developer or just starting, there's always room to learn and grow together. Contact us at info@develpreneur.com with your questions, feedback, or suggestions for future episodes. Together, let's continue exploring the exciting world of software development.
Additional Resources
• Embrace FeedBack For Better Teams
• Maximizing Developer Effectiveness: Feedback Loops
• Turning Feedback into Future Success: A Guide for Developers Building Better Foundations
• Podcast Videos – With Bonus Content

Telecom Reseller
Amazon's Tejas Patel on Distributed Systems, AI, and Managing Massive Scale, Podcast

Feb 12, 2026


At ITEXPO / MSP EXPO, Doug Green, Publisher of Technology Reseller News, spoke with Tejas Patel, Software Engineer at Amazon, for a technical deep dive into how one of the world's largest platforms manages scale, reliability, and the growing role of AI in operations. Amazon operates in an environment defined by extreme traffic variability—from daily fluctuations to massive surges during Prime events. Patel explained that the company relies on distributed systems and microservices architecture to scale every layer of the stack, including databases, caching layers, and application servers. “We scale everything at a massive scale,” he noted, adding that AI-driven traffic prediction models help prepare systems for anticipated spikes, ensuring elasticity and resilience under pressure. Even with rigorous lower-environment testing and simulated traffic, real-world production environments introduce unpredictable behaviors. When outages or functional errors occur, the first priority is customer impact mitigation. “The short-term goal is to make our functionalities available for customers as soon as possible,” Patel said. After stabilizing services, engineering teams conduct root cause analysis and implement long-term fixes to prevent recurrence. On-call teams remain a core part of this model, though that may evolve. AI is increasingly part of that evolution. Patel described how AI systems can detect latency drops, identify anomalies, trigger workflows, and begin root cause investigations—sometimes before engineers are alerted. While still in a supervised phase, AI is gradually moving from passive support to more autonomous operational roles. “AI has a lot of protocols built where it can talk to all the systems,” he explained, envisioning a future where AI mitigates issues proactively while engineers oversee the broader architecture. For MSPs and channel professionals looking to understand large-scale technology environments, Patel emphasized the foundational importance of distributed systems. “Distributed system is everywhere,” he said. “It's the backbone of a large-scale product.” As AI models and inference platforms continue to expand globally, scalable distributed infrastructure will remain essential to delivering reliable, uninterrupted user experiences. Visit https://www.amazon.com/

The New Stack Podcast
You can't fire a bot: The blunt truth about AI slop and your job

Feb 11, 2026 · 57:18


Matan-Paul Shetrit, Director of Product Management at Writer, argues that people must take responsibility for how they use AI. If someone produces poor-quality output, he says, the blame lies with the user—not the tool. He believes many misunderstand AI's role, confusing its ability to accelerate work with an abdication of accountability. Speaking on The New Stack Agents podcast, Shetrit emphasized that “we're all becoming editors,” meaning professionals increasingly review and refine AI-generated content rather than create everything from scratch. However, ultimate responsibility remains human. If an AI-generated presentation contains errors, the presenter—not the AI—is accountable. Shetrit also discussed the evolving AI landscape, contrasting massive general-purpose models from companies like OpenAI and Google with smaller, specialized models. At Writer, the focus is on enabling enterprise-scale AI adoption by reducing costs, improving accuracy, and increasing speed. He argues that bespoke, narrowly focused models tailored to specific use cases are essential for delivering reliable, cost-effective AI solutions at scale. Learn more from The New Stack about the latest around enterprise development: "Why Pure AI Coding Won't Work for Enterprise Software" and "How To Use Vibe Coding Safely in the Enterprise."

Develpreneur: Become a Better Developer and Entrepreneur
Customer Feedback for Developers: How to Listen Without Losing Your Vision

Feb 10, 2026 · 26:24


Customer feedback for developers is one of the fastest ways to improve a product—and one of the easiest ways to derail it. When you're building something you care about, every comment feels important. The challenge is learning how to listen without letting feedback pull you in ten different directions. This episode explores how developers can use customer feedback to sharpen focus, avoid scope creep, and move faster—without losing the original vision that made the product worth building in the first place.
About Tyler Dane
Tyler Dane has dedicated his career to helping people better manage—and truly appreciate—their time. After working as a full-time Software Engineer, Tyler recently stepped away from traditional employment to focus entirely on building Compass Calendar, a productivity app designed to help everyday users visualize and plan their day more intentionally. The tool is built from firsthand experience, not theory—shaped by years of experimenting with productivity systems, tools, and workflows. In a bold reset, Tyler sold most of his belongings and relocated to San Francisco to focus on growing the product, collaborating with partners, and pushing Compass forward. Outside of coding, Tyler creates YouTube videos and writes about time management and productivity. After consuming countless productivity books, tools, and frameworks, he realized a common trap: doing more without actually accomplishing what matters. That insight led him to break productivity down into its most practical, nuanced components—cutting through hustle culture noise to focus on systems that actually work. Tyler is unapologetically honest and independent. With no investors, no sponsors, and nothing to sell beyond the value of his work, his focus is simple: help people get more done—and appreciate the limited time they have to do it. Follow Tyler on LinkedIn, YouTube, and X.
Customer feedback for developers: why "this is great, but…" matters
Most useful feedback doesn't sound negative at first. It usually starts with, "This is great, but…" That "but" is where the signal lives. For developers, the mistake isn't ignoring feedback—it's stopping at the compliment. The real value is understanding what's missing, confusing, or blocking progress. Teams that grow fastest learn to treat that follow-up as actionable data, not criticism.
The "This Is Great, But…" Checklist
• Capture the "but" immediately before it gets softened or forgotten
• Translate it into a concrete problem statement you can validate
Customer feedback for developers: how to find the right people to talk to
Not all feedback is equal. Talking to the wrong audience can send you down expensive paths that don't actually improve your product. Customer feedback for developers works best when it comes from people who:
• Actively experience the problem you're solving
• Would realistically adopt or pay for your solution
• Share similar workflows and constraints
Broad feedback feels productive but often leads to vague changes. Focused conversations lead to clarity.
Customer feedback for developers: filtering input to prevent scope creep
Scope creep rarely starts with bad intent. It starts with trying to please everyone. The fix isn't saying "no" to customers—it's filtering feedback through a clear lens:
• Does this solve the core problem?
• Does this help our ideal user?
• Does this move the product forward right now?
Avoid Scope Creep Without Ignoring Customers
• Separate "interesting ideas" from "next priorities."
• Keep a backlog for later so good ideas don't hijack today's focus
Customer feedback for developers: balancing vision with real user needs
Strong products sit at the intersection of vision and reality. If you only follow feedback, you become reactive. If you ignore it, you risk building in isolation. Customer feedback for developers should challenge assumptions—not erase direction. The goal is refinement, not reinvention, with every conversation.
Customer feedback for developers: building momentum with faster shipping
One consistent theme is speed. Slow feedback loops kill momentum. Shipping faster—even in small increments—creates learning. Fast cycles:
• Reveal what actually matters
• Improve judgment over time
• Reduce emotional attachment to individual decisions
Build Momentum With Speed and Structure
• Short shipping cycles reduce overthinking
• Volume creates clarity faster than perfect planning
Customer feedback for developers: choosing a niche in a crowded market
General tools struggle in saturated spaces. Customer feedback for developers becomes clearer when you narrow your audience. Niching down doesn't limit opportunity—it increases relevance.
How to position against "feature-parity" giants
You don't win by copying large platforms. You win by serving a specific workflow better than anyone else.
Self-direction when you don't have a manager
Without an external structure, prioritization becomes your job. Customer feedback replaces task assignments—but only if you actively use it to set direction. Clear priorities beat unlimited freedom.
Conclusion
Customer feedback for developers isn't about collecting opinions—it's about building judgment. When you listen to the right people, filter ruthlessly, and ship quickly, feedback becomes a growth engine instead of a distraction. If you're building something of your own, treat feedback as fuel—not a steering wheel.
Stay Connected: Join the Developreneur Community
We invite you to join our community and share your coding journey with us. Whether you're a seasoned developer or just starting, there's always room to learn and grow together. Contact us at info@develpreneur.com with your questions, feedback, or suggestions for future episodes. Together, let's continue exploring the exciting world of software development.
Additional Resources
• Embrace FeedBack For Better Teams
• Feedback And Career Help – Does The Bootcamp Provide It?
• Turning Feedback into Future Success: A Guide for Developers Building Better Foundations
• Podcast Videos – With Bonus Content

The New Stack Podcast
GitLab CEO on why AI isn't helping enterprise ship code faster

Feb 10, 2026 · 57:18


AI coding assistants are boosting developer productivity, but most enterprises aren't shipping software any faster. GitLab CEO Bill Staples says the reason is simple: coding was never the main bottleneck. After speaking with more than 60 customers, Staples found that developers spend only 10–20% of their time writing code. The remaining 80–90% is consumed by reviews, CI/CD pipelines, security scans, compliance checks, and deployment—areas that remain largely unautomated. Faster code generation only worsens downstream queues. GitLab's response is its newly GA'ed Duo Agent Platform, designed to automate the full software development lifecycle. The platform introduces “agent flows,” multi-step orchestrations that can take work from issue creation through merge requests, testing, and validation. Staples argues that context is the key differentiator. Unlike standalone coding tools that only see local code, GitLab's all-in-one platform gives agents access to issues, epics, pipeline history, security data, and more through a unified knowledge graph. Staples believes this platform approach, rather than fragmented point solutions, is what will finally unlock enterprise software delivery at scale. Learn more from The New Stack about the latest around GitLab and AI: "GitLab Launches Its AI Agent Platform in Public Beta" and "GitLab's Field CTO Predicts: When DevSecOps Meets AI."

Today I Learned
198. If you have multiple interests, don't waste the next 2-3 years

Feb 8, 2026 · 38:08


In a turbulent era, we talked about advice for breaking away from our current lifestyles and becoming independent, both financially and in life. "If you have multiple interests, do not waste the next 2-3 years" https://x.com/i/status/2010042119121957316 Please share your thoughts on the hashtag #tilfm! Listener mail form: https://forms.gle/J2ioXHS98dYNoMbq5 Your co-hosts: Tomoaki Imai, Noxx CTO https://x.com/tomoaki_imai bsky: https://bsky.app/profile/tomoaki-imai.bsky.social Ryoichi Kato, Software Engineer https://x.com/ryo1kato bsky: https://bsky.app/profile/ryo1kato.bsky.social

Paroles de Tech Leaders
GenAI Strategies: From Technological Mastery to Product Excellence - Marianne DUCOURNAU (Qonto) & Benoit GANTAUME #S07EP28

Feb 8, 2026 · 20:36


Tech.Rocks Summit 2025 special: we continue our deep dive into the key topics of the Tech.Rocks Summit with Benoit Gantaume. In this new episode, he hands the microphone to Marianne Ducournau, Head of AI Products at Qonto, to explore the concrete reality of generative AI inside a leading scale-up. Drawing on her experience at Uber and Amazon, Marianne takes us behind the scenes of integrating AI into the daily lives of 600,000 customers. Far from theoretical talk, she shares Qonto's strategy for turning complex technologies, from intelligent OCR to agentic AI, into genuine levers of simplification for entrepreneurs. During the conversation, she lifts the veil on a crucial dilemma for every tech leader: when should you build in-house, and when should you rely on off-the-shelf solutions? Marianne also discusses the challenge of predictability and the vital importance of data quality: an LLM always answers with conviction, so only a rigorous evaluation dataset lets you move past the mirage of the POC and reach industrial-grade reliability. Finally, she offers a resolutely optimistic vision of how our jobs are evolving. For her, AI is not a threat but a catalyst that breaks down the silos between Product Managers, Data Scientists, and Software Engineers. An essential episode for discovering how a leading tech organization navigates the uncertainty of AI to build robust, scalable products centered on customer value. Tech.Rocks, the leading community for tech professionals in France, has the mission of championing tech leaders throughout the year. Tech.Rocks Summit 2026 - Paris - Take advantage of our "Fan avant l'heure" (early-bird) rate.

The New Stack Podcast
The enterprise is not ready for "the rise of the developer"

Feb 5, 2026 · 25:50


Sean O'Dell of Dynatrace argues that enterprises are unprepared for a major shift brought on by AI: the rise of the developer. Speaking at Dynatrace Perform in Las Vegas, O'Dell explains that AI-assisted and “vibe” coding are collapsing traditional boundaries in software development. Developers, once insulated from production by layers of operations and governance, are now regaining end-to-end ownership of the entire software lifecycle — from development and testing to deployment and security. This shift challenges long-standing enterprise structures built around separation of duties and risk mitigation. At the same time, the definition of “developer” is expanding. With AI lowering technical barriers, software creation is becoming more about creative intent than mastery of specialized tools, opening the door to nontraditional developers. Experimentation is also moving into production environments, a change that would have seemed reckless just 18 months ago. According to O'Dell, enterprises now understand AI well enough to experiment confidently, but many are not ready for the cultural, operational, and security implications of developers — broadly defined — taking full control again. Learn more from The New Stack about the latest around enterprise developers and AI: "Retool's New AI-Powered App Builder Lets Non-Developers Build Enterprise Apps," "Solving 3 Enterprise AI Problems Developers Face," and "Enterprise Platform Teams Are Stuck in Day 2 Hell."

Beyond The Baselines
Software Engineer Takes Her Show To Private and Commercial Clubs

Feb 3, 2026 · 46:34


Louise Fahys, co-founder of Plan2Play
Artificial intelligence is no longer a future concept in club management — it is already reshaping how private and commercial clubs operate. But according to Louise Fahys, we are only scratching the surface. Fahys is the co-founder and CTO of Plan2Play, a court and sport booking platform built by people who understand both software engineering and the realities of club life. Her view is clear: the next generation of club operations will be driven by intelligent, conversational interfaces — think ChatGPT-style applications — where members interact directly with technology to book courts, schedule lessons, manage guest play, and personalize their club experience.
AI Is Going to Change Everything in Club Management
AI is already easing the workload for Directors of Racquets, Golf, and Operations. Tasks that once required hours of manual setup — like creating round robins, allocating courts, or balancing player levels — can now be handled in seconds. Names go in, constraints go in, and AI produces fair, efficient scheduling by level, gender, or randomization. And that, Fahys says, is just the beginning. The real shift will come through dynamic pricing. Much like airlines adjust pricing based on demand, clubs will increasingly use AI to price court time, tee times, lessons, clinics, amenities, and guest fees in real time. One-hour bookings will replace fragmented half-hour gaps. Utilization improves. Revenue becomes more predictable. Member experience improves.
Data Will Confirm What Clubs Already Suspect
AI will also validate long-held assumptions in club operations. Fahys notes that most club professionals already understand that the average lifetime value of a pickleball participant differs from that of a tennis member — and that tennis often differs again from padel or squash. AI won't just confirm those differences; it will quantify them. That data will influence everything from facility development to membership structures, programming decisions, and long-term capital planning for both private clubs and commercial operators.
The End of the "Fiefdom" Era
One of the most challenging areas for clubs, particularly member-owned facilities, is change. Software transitions are often resisted — not because the technology isn't effective, but because long-standing habits and informal traditions are deeply ingrained. Unspoken court ownership. Preferred time slots. Long-tenured directors controlling access "the way it's always been done." AI introduces transparency. And transparency challenges tradition. As clubs move toward data-driven scheduling and access, those informal systems may begin to fade. For some, that will feel uncomfortable. For others, it will represent progress — fairer access, clearer policies, and a better overall member experience.
Looking Ahead
Fahys believes the clubs that embrace AI thoughtfully — using it as a tool to enhance service rather than replace hospitality — will be the ones that thrive. The technology is not about removing people from the equation; it's about freeing professionals to focus on what matters most: relationships, programming, and experience. The future of club management is arriving faster than many expect. And for those willing to engage with it, the opportunities are significant.

Jaani
From AI stress to AI expert: Why AI-102 is your map forward

Feb 2, 2026 · 2:59


Do you feel stressed by the explosive pace of development in AI? Vibe coding, chatbots, agents, and "Agentic AI": the buzzwords are many, and it can be hard to know where to put your energy. In this episode we dig into how you can structure your learning. We look specifically at Microsoft's Azure AI Engineer Associate certification (AI-102), not just to earn a certificate, but to use it as an index of what you actually need to know today. Whether you are a Software Engineer or a Data Engineer, your role is changing now. We discuss why these two roles are converging, the importance of understanding the cloud at large, and why you should start "vibe coding" today.
Here are the key points from the episode:
The jungle of new concepts
• Many feel stressed about where to start with AI. Is it chatbots, agents, or coding?
• The importance of finding a structure for your learning instead of jumping on every new trend.
Certification as a guide (AI-102)
• Why the Microsoft Azure AI Engineer Associate (AI-102) is an excellent starting point.
• The certification gives an end-to-end understanding of how to build an AI application.
• Contents: generative AI, chatbots, AI agents, and computer vision.
• Remember: even though it is a Microsoft certification, the knowledge applies to almost any cloud platform.
Advice for specific roles
• For Software Engineers: you already have the tools (e.g., GitHub Enterprise). Learn to master Copilot and "vibe coding."
• For Data Engineers: get curious about software development. How do you build the application the data will live in?
Teamwork of the future
• The boundaries are blurring: Software Engineers and Data Engineers will work more closely together in teams going forward.
• The importance of a shared understanding of the development process.
The baseline requirement: the cloud
• Don't forget the foundation. To succeed with AI you need a basic understanding of the cloud (e.g., Azure) and the services that make AI solutions possible.
Want to know more? Read more about the AI-102 certification and explore the resources mentioned in the video to take the next step in your career. https://www.jonasjaani.se

Today I Learned
197. Competition is for losers

Feb 1, 2026 · 29:49


We covered Peter Thiel's lecture "Competition is for Losers," the fifth session of "CS183B: How to Start a Startup," a course offered at Stanford University in the fall 2014 semester. Video: https://www.youtube.com/watch?v=3Fx5Q8xGU8k Full lecture series: https://www.youtube.com/playlist?list=PLU630Cd0ZQCMeQiSvU7DJmDJDitdE7m7rX Kousuke's summary on X: https://x.com/kosuke_agos/status/2005171369659826258?s=20 "Good Strategy, Bad Strategy" (Japanese edition): https://amzn.to/4pQwBXT Please share your thoughts on the hashtag #tilfm! Listener mail form: https://forms.gle/J2ioXHS98dYNoMbq5 Your co-hosts: Tomoaki Imai, Noxx CTO https://x.com/tomoaki_imai bsky: https://bsky.app/profile/tomoaki-imai.bsky.social Ryoichi Kato, Software Engineer https://x.com/ryo1kato bsky: https://bsky.app/profile/ryo1kato.bsky.social

Today I Learned
196. AI Agents and the economy

Jan 25, 2026 · 49:18


We talked about the economic impact AI agents could have once they become widespread in society. "An Economy of AI Agents": https://arxiv.org/pdf/2509.01063 Anthropic's technical report on safety: https://www-cdn.anthropic.com/4263b940cabb546aa0e3283f35b686f4f3b2ff47.pdf Please share your thoughts on the hashtag #tilfm! Listener mail form: https://forms.gle/J2ioXHS98dYNoMbq5 Your co-hosts: Tomoaki Imai, Noxx CTO https://x.com/tomoaki_imai bsky: https://bsky.app/profile/tomoaki-imai.bsky.social Ryoichi Kato, Software Engineer https://x.com/ryo1kato bsky: https://bsky.app/profile/ryo1kato.bsky.social

alfalfa
The End of Coding, Why "Trying Hard" Makes You A Loser & Bill Ackman's Warning | Ep. 273

Jan 22, 2026 · 93:20


From the "Hard to Kill" special forces event in Vegas to the science of why complaining physically rewires your brain. We debate the widening political gap between young men and women, breakdown the "Free Soul" aesthetic of the guy doing better than you, and ask if Claude Code has officially killed the junior developer.Welcome to the Alfalfa Podcast

The New Stack Podcast
CTO Chris Aniszczyk on the CNCF push for AI interoperability

Jan 22, 2026 · 23:33


Chris Aniszczyk, co-founder and CTO of the Cloud Native Computing Foundation (CNCF), argues that AI agents resemble microservices at a surface level, though they differ in how they are scaled and managed. In an interview ahead of KubeCon/CloudNativeCon Europe, he emphasized that being “AI native” requires being cloud native by default. Cloud-native technologies such as containers, microservices, Kubernetes, gRPC, Prometheus, and OpenTelemetry provide the scalability, resilience, and observability needed to support AI systems at scale. Aniszczyk noted that major AI platforms like ChatGPT and Claude already rely on Kubernetes and other CNCF projects. To address growing complexity in running generative and agentic AI workloads, the CNCF has launched efforts to extend its conformance programs to AI. New requirements—such as dynamic resource allocation for GPUs and TPUs and specialized networking for inference workloads—are being handled inconsistently across the industry. CNCF aims to establish a baseline of compatibility to ensure vendor neutrality. Aniszczyk also highlighted CNCF incubation projects like Metal³ for bare-metal Kubernetes and OpenYurt for managing edge-based Kubernetes deployments. Learn more from The New Stack about CNCF and what to expect in 2026: "Why the CNCF's New Executive Director Is Obsessed With Inference" and "CNCF Dragonfly Speeds Container, Model Sharing with P2P."

Celebrity Interviews
From Software Engineer to Overnight Sensation: Wanz's "Thrift Shop" Story

Jan 22, 2026 · 24:16


Singer Wanz shares his extraordinary journey from near-retirement to international fame on The Neil Haley Show, recounting how one June evening phone call in 2012 changed his life forever. After decades of grinding in Seattle's music scene alongside friends who became members of Soundgarden, Alice in Chains, and Pearl Jam, Wanz had resigned himself to life as a software test engineer, believing there was no such thing as an old pop star. When he met Ben Haggerty (Macklemore) and Ryan Lewis for the first time, they were looking for a singer who sounded like the legendary West Coast hook singer Nate Dogg, and within 45 minutes, Wanz recorded what would become one of the most recognizable hooks in modern music. By August 29, 2012, when the "Thrift Shop" video dropped, Wanz watched in amazement as the numbers skyrocketed, leading him to quit his secure job with no savings or safety net after a sold-out show at San Francisco's Fillmore brought him to tears. At 53 years old, Wanz experienced the fulfillment of every dream he'd ever had as "Thrift Shop" topped charts worldwide and earned him two Grammy awards. He describes the electric moment of walking on stage as the crowd elevated to another level when he began singing, with thousands of voices joining his. After the "Thrift Shop" phenomenon peaked and touring with Macklemore ended, Wanz returned to his passion for creating original music, releasing a five-song EP called "Wander" about his journey through depression and back to hope. His tribute song "To Nate Dogg," featuring Warren G and earning the blessing of Nate Dogg's son, represents both homage to his inspiration and the beginning of his post-"Thrift Shop" career. Wanz's message to aspiring artists reflects his own improbable success story: never stop doing what makes you and others happy, because at any age and any moment, you never know where your passion might take you.

Today I Learned
195. Balancing coupling in software design

Jan 18, 2026 · 39:52


"Balancing Coupling in Software Design: Modularization Principles That Support Sustainable Growth" (Japanese edition, 2024): https://amzn.to/49ddlxD Original edition: Balancing Coupling in Software Design: Universal Design Principles for Architecting Modular Software https://amzn.to/4pla5X5 Three dimensions for evaluating coupling between software modules = coupling strength, distance, and volatility. Stability between components = NOT (volatility AND strength). Cost of cascading changes = volatility AND distance. Modularity = strength XOR distance. Complexity = NOT modularity. Maintenance effort = strength AND distance AND volatility. Please share your thoughts on the hashtag #tilfm! Listener mail form: https://forms.gle/J2ioXHS98dYNoMbq5 Your co-hosts: Tomoaki Imai, Noxx CTO https://x.com/tomoaki_imai bsky: https://bsky.app/profile/tomoaki-imai.bsky.social Ryoichi Kato, Software Engineer https://x.com/ryo1kato bsky: https://bsky.app/profile/ryo1kato.bsky.social
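To make the balance formulas above concrete, here is a small illustrative sketch (not from the book or the episode) that reads each coupling dimension as a high/low boolean and evaluates the formulas for one pair of modules:

    # Illustrative only: the episode's balance formulas, with each coupling
    # dimension read as a boolean (True = high, False = low).
    def evaluate(strength: bool, distance: bool, volatility: bool) -> dict:
        modularity = strength != distance  # strength XOR distance
        return {
            "stability": not (volatility and strength),
            "cascading_change_cost": volatility and distance,
            "modularity": modularity,
            "complexity": not modularity,
            "maintenance_effort": strength and distance and volatility,
        }

    # Example: two strongly coupled modules that live far apart and change often.
    print(evaluate(strength=True, distance=True, volatility=True))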

MLOps.community
Conversation with the MLflow Maintainers

Jan 16, 2026 · 58:23


Corey Zumar is a Product Manager at Databricks, working on MLflow and LLM evaluation, tracing, and lifecycle tooling for generative AI. Jules Damji is a Lead Developer Advocate at Databricks, working on Spark, lakehouse technologies, and developer education across the data and AI community. Danny Chiao is an Engineering Leader at Databricks, working on data and AI observability, quality, and production-grade governance for ML and agent systems.
MLflow Leading Open Source // MLOps Podcast #356 with Databricks' Corey Zumar, Jules Damji, and Danny Chiao
Join the Community: https://go.mlops.community/YTJoinIn Get the newsletter: https://go.mlops.community/YTNewsletter Shoutout to Databricks for powering this MLOps Podcast episode.
// Abstract
MLflow isn't just for data scientists anymore—and pretending it is is holding teams back. Corey Zumar, Jules Damji, and Danny Chiao break down how MLflow is being rebuilt for GenAI, agents, and real production systems where evals are messy, memory is risky, and governance actually matters. The takeaway: if your AI stack treats agents like fancy chatbots or splits ML and software tooling, you're already behind.
// Bio
Corey Zumar: Corey has been working as a Software Engineer at Databricks for the last 4 years and has been an active contributor to and maintainer of MLflow since its first release.
Jules Damji: Jules is a developer advocate at Databricks Inc., an MLflow and Apache Spark™ contributor, and Learning Spark, 2nd Edition coauthor. He is a hands-on developer with over 25 years of experience. He has worked at leading companies, such as Sun Microsystems, Netscape, @Home, Opsware/LoudCloud, VeriSign, ProQuest, Hortonworks, Anyscale, and Databricks, building large-scale distributed systems. He holds a B.Sc. and M.Sc. in computer science (from Oregon State University and Cal State, Chico, respectively) and an MA in political advocacy and communication (from Johns Hopkins University).
Danny Chiao: Danny is an engineering lead at Databricks, leading efforts around data observability (quality, data classification). Previously, Danny led efforts at Tecton (+ Feast, an open source feature store) and Google to build ML infrastructure and large-scale ML-powered features. Danny holds a Bachelor's Degree in Computer Science from MIT.
// Related Links
Website: https://mlflow.org/ and https://www.databricks.com/
Connect With Us: Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore Join our Slack community: https://go.mlops.community/slack Follow us on X/Twitter @mlopscommunity (https://x.com/mlopscommunity) or LinkedIn (https://go.mlops.community/linkedin) Sign up for the next meetup: https://go.mlops.community/register MLOps Swag/Merch: https://shop.mlops.community/ Connect with Demetrios on LinkedIn: /dpbrinkm Connect with Corey on LinkedIn: /corey-zumar/ Connect with Jules on LinkedIn: /dmatrix/ Connect with Danny on LinkedIn: /danny-chiao/
Timestamps:
[00:00] MLflow Open Source Focus
[00:49] MLflow Agents in Production
[00:00] AI UX Design Patterns
[12:19] Context Management in Chat
[19:24] Human Feedback in MLflow
[24:37] Prompt Entropy and Optimization
[30:55] Evolving MLFlow Personas
[36:27] Persona Expansion vs Separation
[47:27] Product Ecosystem Design
[54:03] PII vs Business Sensitivity
[57:51] Wrap up
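For readers new to MLflow, here is a minimal example of its long-standing experiment-tracking API; the newer GenAI tracing and evaluation features the episode focuses on are not shown here. Requires the mlflow package.

    # Minimal MLflow tracking example (classic API, not the newer GenAI tracing
    # or evaluation features discussed in the episode). Install with: pip install mlflow
    import mlflow

    mlflow.set_experiment("demo-experiment")

    with mlflow.start_run(run_name="baseline"):
        # Record the configuration and outcome of a run so it can be compared later.
        mlflow.log_param("learning_rate", 0.01)
        mlflow.log_param("epochs", 5)
        mlflow.log_metric("accuracy", 0.91)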

The New Stack Podcast
Solving the Problems that Accompany API Sprawl with AI

Jan 15, 2026 · 19:19


API sprawl creates hidden security risks and missed revenue opportunities when organizations lose visibility into the APIs they build. According to IBM's Neeraj Nargund, APIs power the core business processes enterprises want to scale, making automated discovery, observability, and governance essential—especially when thousands of APIs exist across teams and environments. Strong governance helps identify endpoints, remediate shadow APIs, and manage risk at scale. At the same time, enterprises increasingly want to monetize the data APIs generate, packaging insights into products and pricing and segmenting usage, a need amplified by the rise of AI. To address these challenges, Nargund highlights “smart APIs,” which are infused with AI to provide context awareness, event-driven behavior, and AI-assisted governance throughout the API lifecycle. These APIs help interpret and act on data, integrate with AI agents, and support real-time, streaming use cases. IBM's latest API Connect release embeds AI across API management and is designed for hybrid and multi-cloud environments, offering centralized governance, observability, and control through a single hybrid control plane. Learn more from The New Stack about smart APIs: "Redefining API Management for the AI-Driven Enterprise," "How To Accelerate Growth With AI-Powered Smart APIs," and "Wrangle Account Sprawl With an AI Gateway."

Dev Interrupted
Inventing the Ralph Wiggum Loop | Creator Geoffrey Huntley

Jan 13, 2026 · 58:14


Geoffrey Huntley argues that while software development as a profession is effectively dead, software engineering is more alive—and critical—than ever before. In this episode, the creator of the viral "Ralph" agent joins us to explain how simple bash loops and deterministic context allocation are fundamentally changing the unit economics of code. We dive deep into the mechanics of managing "context rot," avoiding "compaction," and why building your own "Gas Town" of autonomous agents is the only way to survive the coming rift.
LinearB: Measure the impact of GitHub Copilot and Cursor
Follow the show: Subscribe to our Substack, Follow us on LinkedIn, Subscribe to our YouTube Channel, Leave us a Review
Follow the hosts: Follow Andrew, Follow Ben, Follow Dan
Follow today's guest(s): Geoffrey's Website & Blog: ghuntley.com / Build Your Own Coding Agent Workshop: ghuntley.com/agent / Ralph Wiggum as a Software Engineer: ghuntley.com/ralph / Steve Yegge's "Welcome to Gas Town": Read on Medium / The "Cursed" Programming Language: github.com/ghuntley/cursed
OFFERS: Start Free Trial: Get started with LinearB's AI productivity platform for free. Book a Demo: Learn how you can ship faster, improve DevEx, and lead with confidence in the AI era.
LEARN ABOUT LINEARB: AI Code Reviews: Automate reviews to catch bugs, security risks, and performance issues before they hit production. AI & Productivity Insights: Go beyond DORA with AI-powered recommendations and dashboards to measure and improve performance. AI-Powered Workflow Automations: Use AI-generated PR descriptions, smart routing, and other automations to reduce developer toil. MCP Server: Interact with your engineering data using natural language to build custom reports and get answers on the fly.
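As a rough illustration of the "simple loop" idea mentioned above (this is not Huntley's actual Ralph implementation, and the `agent` CLI name and flag are placeholders), the pattern amounts to re-running a coding agent against the same prompt so that each iteration starts with fresh context and durable state lives in the repository:

    # Placeholder sketch of a Ralph-style loop: re-invoke a coding agent with the
    # same prompt file, letting the repo (not the chat) carry state between runs.
    # "agent" is a stand-in CLI, not a real tool.
    import subprocess

    PROMPT_FILE = "PROMPT.md"

    def run_once() -> int:
        with open(PROMPT_FILE) as f:
            prompt = f.read()
        # Each invocation starts from a clean context, which sidesteps context rot.
        return subprocess.run(["agent", "--prompt", prompt], check=False).returncode

    if __name__ == "__main__":
        for i in range(10):  # bounded here; the idea is to keep looping until the work is done
            print(f"iteration {i}: exit code {run_once()}")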

The New Stack Podcast
CloudBees CEO: Why Migration Is a Mirage Costing You Millions

Jan 13, 2026 · 34:08


A CloudBees survey reveals that enterprise migration projects often fail to deliver promised modernization benefits. In 2024, 57% of enterprises spent over $1 million on migrations, with average overruns costing $315,000 per project. In The New Stack Makers podcast, CloudBees CEO Anuj Kapur describes this pattern as “the migration mirage,” where organizations chase modernization through costly migrations that push value further into the future. Findings from the CloudBees 2025 DevOps Migration Index show leaders routinely underestimate the longevity and resilience of existing systems. Kapur notes that applications often outlast CIOs, yet new leadership repeatedly mandates wholesale replacement. The report argues modernization has been mistakenly equated with migration, which diverts resources from customer value to replatforming efforts. Beyond financial strain, migration erodes developer morale by forcing engineers to rework functioning systems instead of building new solutions. CloudBees advocates meeting developers where they are, setting flexible guardrails rather than enforcing rigid platforms. Kapur believes this approach, combined with emerging code assistance tools, could spark a new renaissance in software development by 2026. Learn more from The New Stack about enterprise modernization: "Why AI Alone Fails at Large-Scale Code Modernization" and "How AI Can Speed up Modernization of Your Legacy IT Systems."

Today I Learned
194. Stop self-censoring and start putting your work out there

Jan 11, 2026 · 34:27


Show notes: https://kerrick.blog/articles/2025/confessions-of-a-software-developer-no-more-self-censorship/ and https://overreacted.io/things-i-dont-know-as-of-2018/ Please share your thoughts on the hashtag #tilfm! Listener mail form: https://forms.gle/J2ioXHS98dYNoMbq5 Your co-hosts: Tomoaki Imai, Noxx CTO https://x.com/tomoaki_imai bsky: https://bsky.app/profile/tomoaki-imai.bsky.social Ryoichi Kato, Software Engineer https://x.com/ryo1kato bsky: https://bsky.app/profile/ryo1kato.bsky.social

Breaking Into Cybersecurity
From Software Engineer to Cybersecurity Expert | Jason Casey (Beyond Identity)

Jan 9, 2026 · 26:56


Jason Casey's Journey: From Software Engineering to Cybersecurity
In this episode of 'Breaking into Cybersecurity,' we chat with Jason Casey, who shares his unique path from being a software engineer to a cybersecurity expert. Jason discusses his initial work on network protocols, his transition to big data intelligence, and eventually moving into cybersecurity defense. He stresses the importance of curiosity and understanding fundamental principles, and provides insights into the evolving role of AI in programming and cybersecurity. Jason also shares how he has helped others grow in their careers, and offers valuable advice for those looking to break into and excel in the cybersecurity field. Tune in to hear his inspiring journey and learn practical tips for success.
00:00 Introduction to Jason Casey's Cybersecurity Journey
01:00 Early Career and Transition to Cybersecurity
03:24 Deep Dive into Networking and Security Challenges
06:37 Big Data Intelligence and Forensics
08:54 Principles and Fundamentals in Tech Careers
11:58 Mentorship and Career Development
16:04 AI in Programming and Cybersecurity
25:50 Final Advice for Aspiring Cybersecurity Professionals
Sponsored by CPF Coaching LLC - http://cpf-coaching.com/
Breaking into Cybersecurity is a conversation about what guests did before, why they pivoted into cyber, what the process was they went through, how they keep up, and advice/tips/tricks along the way.
The Breaking into Cybersecurity Leadership Series is an additional series focused on cybersecurity leadership and hearing directly from different leaders in cybersecurity (high and low) on what it takes to be a successful leader. We focus on the skills and competencies associated with cybersecurity leadership, as well as tips/tricks/advice from cybersecurity leaders.
Check out our books:
Develop Your Cybersecurity Career Path: How to Break into Cybersecurity at Any Level https://amzn.to/3443AUI
Hack the Cybersecurity Interview: Navigate Cybersecurity Interviews with Confidence, from Entry-level to Expert roles https://www.amazon.com/Hack-Cybersecurity-Interview-Interviews-Entry-level/dp/1835461298/
Hacker Inc.: Mindset For Your Career https://www.amazon.com/Hacker-Inc-Mindset-Your-Career/dp/B0DKTK1R93/
About the hosts:
Renee Small is the CEO of Cyber Human Capital, one of the leading human resources business partners in the field of cybersecurity, and author of the Amazon #1 best-selling book, Magnetic Hiring: Your Company's Secret Weapon to Attracting Top Cyber Security Talent. She is committed to helping leaders close the cybersecurity talent gap by hiring from within and encouraging more people to enter the lucrative cybersecurity profession. https://www.linkedin.com/in/reneebrownsmall/ Download a free copy of her book at magnetichiring.com/book
Christophe Foulon focuses on helping secure people and processes, drawing on a solid understanding of the technologies involved. He has over ten years of experience as an Information Security Manager and Cybersecurity Strategist. He is passionate about customer service, process improvement, and information security. He has significant expertise in optimizing technology use while balancing its implications for people, processes, and information security, through a consultative approach. https://www.linkedin.com/in/christophefoulon/ Find out more about CPF-Coaching at https://www.cpf-coaching.com
Website: https://www.cyberhubpodcast.com/breakingintocybersecurity
Podcast: https://podcasters.spotify.com/pod/show/breaking-into-cybersecuri
YouTube: https://www.youtube.com/c/BreakingIntoCybersecurity
LinkedIn: https://www.linkedin.com/company/breaking-into-cybersecurity/
Twitter: https://twitter.com/BreakintoCyber
Twitch: https://www.twitch.tv/breakingintocybersecurity

The New Stack Podcast
Human Cognition Can't Keep Up with Modern Networks. What's Next?

Jan 7, 2026 · 23:16


IBM's recent acquisitions of Red Hat, HashiCorp, and its planned purchase of Confluent reflect a deliberate strategy to build the infrastructure required for enterprise AI. According to IBM's Sanil Nambiar, AI depends on consistent hybrid cloud runtimes (Red Hat), programmable and automated infrastructure (HashiCorp), and real-time, trustworthy data (Confluent). Without these foundations, AI cannot function effectively. Nambiar argues that modern, software-defined networks have become too complex for humans to manage alone, overwhelmed by fragmented data, escalating tool sophistication, and a widening skills gap that makes veteran “tribal knowledge” hard to transfer. Trust, he says, is the biggest barrier to AI adoption in networking, since errors can cause costly outages. To address this, IBM launched IBM Network Intelligence, a “network-native” AI solution that combines time-series foundation models with reasoning large language models. This architecture enables AI agents to detect subtle warning patterns, collapse incident response times, and deliver accurate, trustworthy insights for real-world network operations. Learn more from The New Stack about AI infrastructure and IBM's approach: "AI in Network Observability: The Dawn of Network Intelligence" and "How Agentic AI Is Redefining Campus and Branch Network Needs."

Today I Learned
193. Thinking in Systems (世界はシステムで動く)

Jan 4, 2026 · 40:00


"Thinking in Systems" (Japanese edition): https://amzn.to/3LcfAcs "Waltzing with Bears: Managing Risk on Software Projects" (Japanese edition): https://amzn.to/499lbIE (In the episode we said "Weinberg," but that was a mistake; the book is by Tom DeMarco.) Please share your thoughts on the hashtag #tilfm! Listener mail form: https://forms.gle/J2ioXHS98dYNoMbq5 Your co-hosts: Tomoaki Imai, Noxx CTO https://x.com/tomoaki_imai bsky: https://bsky.app/profile/tomoaki-imai.bsky.social Ryoichi Kato, Software Engineer https://x.com/ryo1kato bsky: https://bsky.app/profile/ryo1kato.bsky.social

Rails with Jason
301 - Bekki Freeman, Staff Software Engineer at Caribou and Co-Organizer of Rocky Mountain Ruby

Jan 2, 2026 · 52:35 · Transcription available


In this episode I talk with Bekki Freeman, staff engineer at Caribou and co-organizer of Rocky Mountain Ruby, about legacy code, refactoring long-running applications, and the psychological skills required to get team buy-in for technical improvements.
Links: Bekki Freeman on LinkedIn / Rocky Mountain Ruby / Caribou / Nonsense Monthly

Rails with Jason
299 - Eleni Konior, Senior Staff Software Engineer at Cisco Meraki

Jan 2, 2026 · 56:37 · Transcription available


In this episode I talk with Eleni Konior about her path from economics to graphic design to programming, and how creative skills benefit technical work. We discuss building customer-focused features, the importance of assuming the customer's role, and AI in products beyond chatbots—like proactively surfacing recommendations based on user behavior.
Links: datgreekchick.com / Nonsense Monthly

The Tech Trek
How To Hire Outlier Software Engineers

Dec 30, 2025 · 21:48


Yogi Goel, cofounder and CEO of Maxima AI, breaks down how he hires outlier talent, people who think like future founders and thrive when the plan changes fast. We get practical on what to look for beyond pedigree, how to assess it without relying on easy resume signals, and how culture scales when your team doubles. Yogi also shares what Maxima AI is building, an agentic platform for enterprise accounting that automates day to day operations and month end work, and why the best teams win by pairing speed with real ownership.
Key takeaways
• Outlier candidates often look "non standard" on paper, the signal is founder mentality, fast thinking, grit, and a point to prove
• Hiring gets easier when it is always on, keep a living bench of great people long before you have a headcount
• Use long form conversations to assess how someone thinks, not just what they have done, ask for their life story and listen for the choices they highlight
• Train the specifics, but set a baseline for domain aptitude, then coach the narrow parts once the fundamentals are there
• Culture scales through leaders and through what you reward and penalize, not through posters and slogans
Timestamped highlights
00:39 What Maxima AI does and the real value of agentic accounting
01:38 Defining an outlier candidate as a future founder, and why school matters less than you think
07:34 The conveyor belt approach to recruiting, building an inventory of great people before you need them
11:35 Where to draw the line on training, test for general aptitude, coach the specifics
14:20 How diverse teams disagree productively, bring evidence, run small bets, then double down or pivot
18:25 Scaling culture with values driven leaders, and the simple rule of reward versus penalty
A line worth keeping
"Culture is two things, what you reward and what you penalize."
Pro tips you can steal
• Keep a short list of the best people you have ever met for each function, update it constantly
• Ask candidates for their journey from day zero, then pay attention to what they choose to emphasize
• When the team disagrees, grab quick evidence, customer texts, small pulse checks, then place a small bet that will not kill the company
• Expect great people to want autonomy and scope, manage like a mentor, not a hovercraft
Call to action
If this episode helped you rethink hiring, share it with a founder or engineering leader who is building a team right now. Follow the show for more conversations on people, impact, and technology, and connect with Yogi Goel on LinkedIn by searching his name and Maxima AI.

The New Stack Podcast
From Group Science Project to Enterprise Service: Rethinking OpenTelemetry

The New Stack Podcast

Play Episode Listen Later Dec 30, 2025 17:20


Ari Zilka, founder of MyDecisive.ai and former Hortonworks CPO, argues that most observability vendors now offer essentially identical, reactive dashboards that highlight problems only after systems are already broken. After speaking with all 23 observability vendors at KubeCon + CloudNativeCon North America 2025, Zilka said these tools fail to meaningfully reduce mean time to resolution (MTTR), a long-standing demand he heard repeatedly from thousands of CIOs during his time at New Relic.
Zilka believes observability must shift from reactive monitoring to proactive operations, where systems automatically respond to telemetry in real time. MyDecisive.ai is his attempt to solve this, acting as a "bump in the wire" that intercepts telemetry and uses AI-driven logic to trigger actions like rolling back faulty releases.
He also criticized the rising cost and complexity of OpenTelemetry adoption, noting that many companies now require large, specialized teams just to maintain OTel stacks. MyDecisive aims to turn OpenTelemetry into an enterprise-ready service that reduces human intervention and operational overhead.
Learn more from The New Stack about OpenTelemetry:
Observability Is Stuck in the Past. Your Users Aren't.
Setting Up OpenTelemetry on the Frontend Because I Hate Myself
How to Make OpenTelemetry Better in the Browser
Join our community of newsletter subscribers to stay on top of the news and at the top of your game. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
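To make the "bump in the wire" idea more concrete, here is a minimal, purely illustrative Python sketch of the pattern Zilka describes: watch telemetry as it flows past and trigger an action, such as a rollback, when an error-rate threshold is crossed. The event shape, threshold, and rollback callback are assumptions for illustration, not MyDecisive.ai's actual interfaces.

```python
from collections import deque

class TelemetryTap:
    """Sits 'in the wire': watches a sliding window of request events
    and fires an action when the error rate crosses a threshold."""

    def __init__(self, on_breach, window_size=100, error_threshold=0.05):
        self.on_breach = on_breach          # action to take, e.g. roll back a release
        self.window = deque(maxlen=window_size)
        self.error_threshold = error_threshold

    def observe(self, event: dict) -> None:
        # Each event is assumed to carry a boolean "error" field.
        self.window.append(bool(event.get("error", False)))
        if len(self.window) == self.window.maxlen:
            error_rate = sum(self.window) / len(self.window)
            if error_rate >= self.error_threshold:
                self.on_breach(error_rate)
                self.window.clear()  # avoid re-firing on the same burst

def rollback(error_rate: float) -> None:
    # Placeholder action: a real system might call a deployment API here.
    print(f"Error rate {error_rate:.1%} exceeded threshold; rolling back release")

tap = TelemetryTap(on_breach=rollback)
for i in range(300):
    tap.observe({"error": i % 12 == 0})  # simulated telemetry stream
```

The design point is that the decision logic lives next to the telemetry stream rather than behind a dashboard a human has to watch.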

Today I Learned
192. Year-End Special 2025

Today I Learned

Play Episode Listen Later Dec 28, 2025 51:31


We looked back at this podcast and its hosts in 2025.
Show Note
Top 5 episodes of 2025 by plays:
176. The principles of improvement (上達の法則) (1,544 plays) https://podcasts.apple.com/us/podcast/id1529233853?i=1000725421865
161. The story of being laid off from Meta (1,525 plays) https://podcasts.apple.com/us/podcast/id1529233853?i=1000709809864
175. How to write a good design document (1,475 plays) https://podcasts.apple.com/us/podcast/id1529233853?i=1000724284603
177. Reading through the GPT-5 prompt engineering guide (1,418 plays) https://podcasts.apple.com/us/podcast/id1529233853?i=1000726784686
159. Recommended books for engineers (1,391 plays) https://podcasts.apple.com/us/podcast/id1529233853?i=1000708011992
Things we were glad we bought:
Dell 4K monitor https://amzn.to/48XDOAr
Under-desk heater for the work desk https://amzn.to/3N0SPZx
Portable display https://amzn.to/4peW4e1
Books:
亡刻のシェオル (SF fantasy) https://amzn.to/4pX7vrl
かがみの孤城 (Lonely Castle in the Mirror) https://amzn.to/3NjXc1T
一億年のテレスコープ https://amzn.to/4pTQiin
日本のピアノは世界の非常識? https://amzn.to/45kKN41
科学的根拠に基づく最強の勉強法 https://amzn.to/4qnXWS0
アノマリー (The Anomaly) https://amzn.to/4rCGmLA
ザリガニが鳴くところ (Where the Crawdads Sing) https://amzn.to/4jcJ9qS
BORN TO RUN https://amzn.to/3XyMo1X
奇妙で不思議な菌類の世界 https://amzn.to/4as7zu6
Please share your thoughts with the hashtag #tilfm!
Listener mail form: https://forms.gle/J2ioXHS98dYNoMbq5
Your co-hosts:
Tomoaki Imai, Noxx CTO https://x.com/tomoaki_imai bsky: https://bsky.app/profile/tomoaki-imai.bsky.social
Ryoichi Kato, Software Engineer https://x.com/ryo1kato bsky: https://bsky.app/profile/ryo1kato.bsky.social

Today I Learned
191. Things I've Come to Value After 20 Years as a Rank-and-File Employee at a Big Company

Today I Learned

Play Episode Listen Later Dec 21, 2025 39:36


Book mentioned: "The Goal" (ザ・ゴール ― 企業の究極の目的とは何か) https://amzn.to/49cwK1Q
Please share your thoughts with the hashtag #tilfm!
Listener mail form: https://forms.gle/J2ioXHS98dYNoMbq5
Your co-hosts:
Tomoaki Imai, Noxx CTO https://x.com/tomoaki_imai bsky: https://bsky.app/profile/tomoaki-imai.bsky.social
Ryoichi Kato, Software Engineer https://x.com/ryo1kato bsky: https://bsky.app/profile/ryo1kato.bsky.social

The New Stack Podcast
Do All Your AI Workloads Actually Require Expensive GPUs?

The New Stack Podcast

Play Episode Listen Later Dec 18, 2025 29:49


GPUs dominate today's AI landscape, but Google argues they are not necessary for every workload. As AI adoption has grown, customers have increasingly demanded compute options that deliver high performance with lower cost and power consumption. Drawing on its long history of custom silicon, Google introduced Axion CPUs in 2024 to meet needs for massive scale, flexibility, and general-purpose computing alongside AI workloads. The Axion-based C4A instance is generally available, while the newer N4A virtual machines promise up to 2x price performance.
In this episode, recorded at KubeCon + CloudNativeCon North America in Atlanta, Andrei Gueletii, a technical solutions consultant for Google Cloud, joined Gari Singh, a product manager for Google Kubernetes Engine (GKE), and Pranay Bakre, a principal solutions engineer at Arm. Built on Arm Neoverse V2 cores, Axion processors emphasize energy efficiency and customization, including flexible machine shapes that let users tailor memory and CPU resources. These features are particularly valuable for platform engineering teams, which must optimize centralized infrastructure for cost, FinOps goals, and price performance as they scale.
Importantly, many AI tasks, such as inference for smaller models or batch-oriented jobs, do not require GPUs. CPUs can be more efficient when GPU memory is underutilized or latency demands are low. By decoupling workloads and choosing the right compute for each task, organizations can significantly reduce AI compute costs.
Learn more from The New Stack about the Axion-based C4A:
Beyond Speed: Why Your Next App Must Be Multi-Architecture
Arm: See a Demo About Migrating a x86-Based App to ARM64
Join our community of newsletter subscribers to stay on top of the news and at the top of your game. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The New Stack Podcast
Breaking Data Team Silos Is the Key to Getting AI to Production

The New Stack Podcast

Play Episode Listen Later Dec 17, 2025 30:47


Enterprises are racing to deploy AI services, but the teams responsible for running them in production are seeing familiar problems reemerge, most notably silos between data scientists and operations teams, reminiscent of the old DevOps divide. In a discussion recorded at AWS re:Invent 2025, IBM's Thanos Matzanas and Martin Fuentes argue that the challenge isn't new technology but repeating organizational patterns. As data teams move from internal projects to revenue-critical, customer-facing applications, they face new pressures around reliability, observability, and accountability.
The speakers stress that many existing observability and governance practices still apply. Standard metrics, KPIs, SLOs, access controls, and audit logs remain essential foundations, even as AI introduces non-determinism and a heavier reliance on human feedback to assess quality. Tools like OpenTelemetry provide common ground, but culture matters more than tooling.
Both emphasize starting with business value and breaking down silos early by involving data teams in production discussions. Rather than replacing observability professionals, AI should augment human expertise, especially in critical systems where trust, safety, and compliance are paramount.
Learn more from The New Stack about enabling AI with silos:
Are Your AI Co-Pilots Trapping Data in Isolated Silos?
Break the AI Gridlock at the Intersection of Velocity and Trust
Taming AI Observability: Control Is the Key to Success
Join our community of newsletter subscribers to stay on top of the news and at the top of your game. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Today I Learned
190. AI Native Software Engineer

Today I Learned

Play Episode Listen Later Dec 14, 2025 40:17


Show Note
We talked about how to become an AI-native software engineer.
https://addyo.substack.com/p/the-ai-native-software-engineer
生産性のはてに失われるもの (What gets lost in the pursuit of productivity) https://tomoima525.hatenablog.com/entry/2025/05/28/050000
Please share your thoughts with the hashtag #tilfm!
Listener mail form: https://forms.gle/J2ioXHS98dYNoMbq5
Your co-hosts:
Tomoaki Imai, Noxx CTO https://x.com/tomoaki_imai bsky: https://bsky.app/profile/tomoaki-imai.bsky.social
Ryoichi Kato, Software Engineer https://x.com/ryo1kato bsky: https://bsky.app/profile/ryo1kato.bsky.social

Startup Hustle
Why We Still Need Software Engineers in the Age of AI with Brian Jenney

Startup Hustle

Play Episode Listen Later Dec 11, 2025 29:56


AI can scaffold an app in seconds, but can it refactor that thousand-line React file when the first bug hits production? In this episode, I sit down with Brian Jenney, a software engineer and program owner of the coding bootcamp Parsity, to draw a hard line between "code that runs" and "code that lasts." From mentoring career-switchers to stress-testing AI in real-world pipelines, Brian shares why craftsmanship and product judgment still beat copy-paste prompts.

The New Stack Podcast
Kubernetes GPU Management Just Got a Major Upgrade

The New Stack Podcast

Play Episode Listen Later Dec 11, 2025 35:26


Nvidia Distinguished Engineer Kevin Klues noted that low-level systems work is invisible when done well and highly visible when it fails, a dynamic that frames current Kubernetes innovations for AI. At KubeCon + CloudNativeCon North America 2025, Klues and AWS product manager Jesse Butler discussed two emerging capabilities: dynamic resource allocation (DRA) and a new workload abstraction designed for sophisticated AI scheduling.
DRA, now generally available in Kubernetes 1.34, fixes long-standing limitations in GPU requests. Instead of simply asking for a number of GPUs, users can specify types and configurations. Modeled after persistent volumes, DRA allows any specialized hardware to be exposed through standardized interfaces, enabling vendors to deliver custom device drivers cleanly. Butler called it one of the most elegant designs in Kubernetes.
Yet complex AI workloads require more coordination. A forthcoming workload abstraction, debuting in Kubernetes 1.35, will let users define pod groups with strict scheduling and topology rules, ensuring multi-node jobs start fully or not at all. Klues emphasized that this abstraction will shape Kubernetes' AI trajectory for the next decade and encouraged community involvement.
Learn more from The New Stack about dynamic resource allocation:
Kubernetes Primer: Dynamic Resource Allocation (DRA) for GPU Workloads
Kubernetes v1.34 Introduces Benefits but Also New Blind Spots
Join our community of newsletter subscribers to stay on top of the news and at the top of your game. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
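As a rough illustration of the shift described above, here is a small, purely conceptual Python sketch (not the Kubernetes DRA API): count-based requests can only say "give me two GPUs," while attribute-based requests filter devices by properties such as model and memory. The device list and attribute names are invented for illustration.

```python
# Conceptual sketch: count-based vs. attribute-based device requests.
# This is not the Kubernetes API; it only contrasts "give me N GPUs"
# with "give me devices that match these properties."

DEVICES = [
    {"name": "gpu-0", "model": "a100", "memory_gb": 80},
    {"name": "gpu-1", "model": "a100", "memory_gb": 40},
    {"name": "gpu-2", "model": "t4",   "memory_gb": 16},
]

def request_by_count(devices, count):
    # Old-style request: any N devices, with no say over what you get.
    return devices[:count]

def request_by_attributes(devices, count, **required):
    # DRA-style request: only devices whose attributes satisfy the claim.
    matching = [d for d in devices
                if all(d.get(k) == v for k, v in required.items())]
    return matching[:count] if len(matching) >= count else []

print(request_by_count(DEVICES, 2))                                    # any two GPUs
print(request_by_attributes(DEVICES, 1, model="a100", memory_gb=80))   # a specific class of GPU
```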

The New Stack Podcast
The Rise of the Cognitive Architect

The New Stack Podcast

Play Episode Listen Later Dec 10, 2025 22:53


At KubeCon North America 2025, GitLab's Emilio Salvador outlined how developers are shifting from individual coders to leaders of hybrid human-AI teams. He envisions developers evolving into "cognitive architects," responsible for breaking down large, complex problems and distributing work across both AI agents and humans. Complementing this is the emerging role of the "AI guardian," reflecting growing skepticism around AI-generated code. Even as AI produces more code, humans remain accountable for reviewing quality, security, and compliance.
Salvador also described GitLab's "AI paradox": developers may code faster with AI, but overall productivity stalls because testing, security, and compliance processes haven't kept pace. To fix this, he argues organizations must apply AI across the entire development lifecycle, not just in coding. GitLab's Duo Agent Platform aims to support that end-to-end transformation.
Looking ahead, Salvador predicts the rise of a proactive "meta agent" that functions like a full team member. Still, he warns that enterprise adoption remains slow and advises organizations to start small, build skills, and scale gradually.
Learn more from The New Stack about the evolving role of "cognitive architects":
The Engineer in the AI Age: The Orchestrator and Architect
The New Role of Enterprise Architecture in the AI Era
The Architect's Guide to Understanding Agentic AI
Join our community of newsletter subscribers to stay on top of the news and at the top of your game. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The New Stack Podcast
Why the CNCF's New Executive Director is Obsessed With Inference

The New Stack Podcast

Play Episode Listen Later Dec 9, 2025 25:09


Jonathan Bryce, the new CNCF executive director, argues that inference, not model training, will define the next decade of computing. Speaking at KubeCon North America 2025, he emphasized that while the industry obsesses over massive LLM training runs, the real opportunity lies in efficiently serving these models at scale. Cloud-native infrastructure, he says, is uniquely suited to this shift because inference requires real-time deployment, security, scaling, and observability, all strengths of the CNCF ecosystem. Bryce believes Kubernetes is already central to modern inference stacks, with projects like Ray, KServe, and emerging GPU-oriented tooling enabling teams to deploy and operationalize models. To bring consistency to this fast-moving space, the CNCF launched a Kubernetes AI Conformance Program, ensuring environments support GPU workloads and Dynamic Resource Allocation. With AI agents poised to multiply inference demand by executing parallel, multi-step tasks, efficiency becomes essential. Bryce predicts that smaller, task-specific models and cloud-native routing optimizations will drive major performance gains. Ultimately, he sees CNCF technologies forming the foundation for what he calls "the biggest workload mankind will ever have."
Learn more from The New Stack about inference:
Confronting AI's Next Big Challenge: Inference Compute
Deep Infra Is Building an AI Inference Cloud for Developers
Join our community of newsletter subscribers to stay on top of the news and at the top of your game. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The VentureFizz Podcast
Episode 406: Bill Simmons - Serial Entrepreneur, Orbit.me & DataXu

The VentureFizz Podcast

Play Episode Listen Later Dec 8, 2025 57:01


Episode 406 of The VentureFizz Podcast features Bill Simmons, serial entrepreneur and Co-Founder of Orbit.me and DataXu. I'm going to use the cliché: Bill actually is a rocket scientist. His background is in aerospace engineering, he holds a PhD from MIT, and he worked on 13 space missions. In addition, he was part of a major government competition for simulating options for space travel to Mars. His team simulated 35 billion possible options to generate 1,100 different Mars missions that were all feasible. This groundbreaking technology, which leveraged Big Data in ways we now recognize as AI and machine learning, launched his first company, DataXu, in 2008. DataXu became a pioneer in the programmatic ad platform category and raised over $87M in funding. The company scaled into a major player in the Boston tech scene and was acquired by Roku in 2019.
Now, Bill is tackling a challenge we all likely face with his new startup, Orbit.me. Information is scattered across texts, multiple email inboxes, LinkedIn, WhatsApp, and social apps, making it impossible to keep track of what matters. Orbit.me is a perfect use case for AI, organizing your scattered messages into "Orbits," which are dedicated spaces built around the real contexts of your life, like parenting, work, or other important matters.
Chapters:
00:00 Introduction
02:41 Current Status of Space Travel & Mars
08:34 Bill Simmons Background Story
10:21 Academic Experience
13:12 Space Missions including Mars Research
17:19 How DataXu Came to Fruition & Focus on AdTech
20:03 Scaling DataXu & Market Strategies
23:15 The Competitive Landscape of AdTech
26:20 The Technology Behind Real-Time Bidding
29:49 Building DataXu's Culture During Growth
32:51 DataXu Acquisition by Roku
35:54 The Transition to Product Management & Experience at The Trade Desk
37:13 The Details of Orbit.me
43:07 The Team Behind Orbit.me
48:58 The Evolving Role of Software Engineers in the AI Era
52:01 Lightning Round Questions

Irish Tech News Audio Articles
Fixify Chooses Cork for EU Hub, Creating 50 High-Tech Jobs

Irish Tech News Audio Articles

Play Episode Listen Later Dec 8, 2025 3:56


Fixify, a leading provider of AI-driven IT support automation, has selected Cork City as the home of its new EU Centre of Excellence, creating 50 skilled jobs in the region over the next 18 months. The new facility will serve as a regional base for Fixify's development, support, and customer success for worldwide operations. This project is supported by the Irish Government through IDA Ireland.
Attending the event, Taoiseach Micheál Martin TD said: "This announcement from Fixify to select Cork as the home of its new EU Centre of Excellence demonstrates a deep commitment to the region and creates 50 high-tech jobs in an exciting and growing sector. I have no doubt that these highly skilled jobs in IT, software engineering and data analysis will be a further boost to the workforce in the region. I want to acknowledge the role of IDA Ireland in supporting this project and I look forward to seeing the continued growth of Fixify in Cork over the coming years."
Minister for Enterprise, Tourism & Employment Peter Burke TD said: "Fixify's decision to establish its EU Centre of Excellence in Cork is very welcome news and is a strong endorsement of Ireland's position as a global leader in technology and innovation. This investment will bring 50 high-quality jobs to the region and further strengthen our thriving digital ecosystem. Cork's deep talent pool, supported by world-class institutions like UCC and MTU, and its proven track record in attracting and sustaining high-value FDI, make it ideally placed to support Fixify's growth. I wish the Fixify team in Cork the very best for the future."
Fixify is now hiring for roles including IT Helpdesk Analysts, Software Engineers, Data Engineers, and Data Scientists. To explore career opportunities with Fixify, please visit Fixify careers.
"We chose Cork for Fixify's European base - a city that brings together deep technical expertise, quality of life and community spirit - the conditions that make great work last," said Matt Peters, CEO of Fixify. "Establishing our base here enables Fixify to tap into Ireland's exceptional talent and contribute to its thriving tech ecosystem as we scale automation and support that remains genuinely human worldwide."
"Our investment in Cork is a strong vote of confidence in Ireland's technology talent and infrastructure," added Caroline Coughlan, Director, Employee Experience & People Operations at Fixify. "Over the next 18 months, we will be scaling our presence here in parallel with delivering outstanding value to our customers across EMEA."
IDA Ireland CEO Michael Lohan said: "I am very pleased that Fixify has chosen Cork as home to its EU Centre of Excellence as it recognises the quality and depth of the South West region's talent pool, Ireland's vibrant culture, and our pro-business environment. I wish to congratulate Fixify on this expansion and look forward to supporting them as they enhance Ireland's reputation as home to a thriving technology sector."
See more stories here.
More about Irish Tech News
Irish Tech News are Ireland's No. 1 Online Tech Publication and often Ireland's No. 1 Tech Podcast too. You can find hundreds of fantastic previous episodes and subscribe using whatever platform you like via our Anchor.fm page here: https://anchor.fm/irish-tech-news
If you'd like to be featured in an upcoming Podcast email us at Simon@IrishTechNews.ie now to discuss. Irish Tech News have a range of services available to help promote your business.
Why not drop us a line at Info@IrishTechNews.ie now to find out more about how we can help you reach our audience. You can also find and follow us on Twitter, LinkedIn, Facebook, Instagram, TikTok and Snapchat.

Cloud Security Podcast
AI-First Vulnerability Management: Should CISOs Build or Buy?

Cloud Security Podcast

Play Episode Listen Later Dec 4, 2025 61:30


Thinking of building your own AI security tool? In this episode, Santiago Castiñeira, CTO of Maze, breaks down the realities of the "Build vs. Buy" debate for AI-first vulnerability management. While building a prototype script is easy, scaling it into a maintainable, audit-proof system is a massive undertaking that requires specialized skills often missing in security teams. He also warns against the "RAG drug": relying too heavily on Retrieval-Augmented Generation for precise technical data like version numbers, which often fails.
The conversation gets into the architecture required for a true AI-first system, moving beyond simple chatbots to complex multi-agent workflows that can reason about context and risk. We also cover the critical importance of rigorous "evals" over "vibe checks" to ensure AI reliability, the hidden costs of LLM inference at scale, and why well-crafted agents might soon be indistinguishable from super-intelligence.
Guest Socials - Santiago's LinkedIn
Podcast Twitter - @CloudSecPod
If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - Youtube
- Cloud Security Newsletter
If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast
Questions asked:
(00:00) Introduction
(02:00) Who is Santiago Castiñeira?
(02:40) What is "AI-First" Vulnerability Management? (Rules vs. Reasoning)
(04:55) The "Build vs. Buy" Debate: Can I Just Use ChatGPT?
(07:30) The "Bus Factor" Risk of Internal Tools
(08:30) Why MCP (Model Context Protocol) Struggles at Scale
(10:15) The Architecture of an AI-First Security System
(13:45) The Problem with "Vibe Checks": Why You Need Proper Evals
(17:20) Where to Start if You Must Build Internally
(19:00) The Hidden Need for Data & Software Engineers in Security Teams
(21:50) Managing Prompt Drift and Consistency
(27:30) The Challenge of Changing LLM Models (Claude vs. Gemini)
(30:20) Rethinking Vulnerability Management Metrics in the AI Era
(33:30) Surprises in AI Agent Behavior: "Let's Get Back on Topic"
(35:30) The Hidden Cost of AI: Token Usage at Scale
(37:15) Multi-Agent Governance: Preventing Rogue Agents
(41:15) The Future: Semi-Autonomous Security Fleets
(45:30) Why RAG Fails for Precise Technical Data (The "RAG Drug")
(47:30) How to Evaluate AI Vendors: Is it AI-First or AI-Sprinkled?
(50:20) Common Architectural Mistakes: Vibe Evals & Cost Ignorance
(56:00) Unpopular Opinion: Well-Crafted Agents vs. Super Intelligence
(58:15) Final Questions: Kids, Argentine Steak, and Closing
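As a rough illustration of the "evals over vibe checks" point, here is a minimal Python sketch of a deterministic eval harness: a fixed set of cases, a scoring function, and a pass-rate threshold instead of eyeballing a few outputs. The classify_severity function and the cases are hypothetical stand-ins for whatever LLM-backed step you are testing, not anything from Maze.

```python
# Minimal eval-harness sketch: score a model-backed function against fixed cases.

def classify_severity(finding: str) -> str:
    """Stand-in for an LLM-backed classifier; replace with a real model call."""
    return "critical" if "remote code execution" in finding.lower() else "low"

EVAL_CASES = [
    {"input": "Remote code execution in the auth service", "expected": "critical"},
    {"input": "Verbose banner on an internal test host", "expected": "low"},
    {"input": "Unauthenticated remote code execution in public API", "expected": "critical"},
]

def run_evals(cases, min_pass_rate=0.9):
    results = [classify_severity(c["input"]) == c["expected"] for c in cases]
    pass_rate = sum(results) / len(results)
    print(f"pass rate: {pass_rate:.0%} ({sum(results)}/{len(results)})")
    return pass_rate >= min_pass_rate  # gate releases on this, not on vibes

if __name__ == "__main__":
    assert run_evals(EVAL_CASES), "Eval pass rate below threshold; do not ship"
```

The same cases run on every prompt or model change, which is what makes regressions visible in a way a quick manual spot check never will.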

The Joe Reis Show
Data Contracts Are For Software Engineers, Not Just Data Teams w/ Mark Freeman and Chad Sanderson

The Joe Reis Show

Play Episode Listen Later Dec 3, 2025 49:50


In this episode, I sit down with Mark Freeman and Chad Sanderson (Gable.ai) to discuss the release of their new O'Reilly book, Data Contracts: Developing Production-Grade Pipelines at Scale. They dive deep into the chaotic journey of writing a 350-page book while simultaneously building a venture-backed startup.
The conversation takes a sharp turn into the evolution of data contracts. While the concept started with data engineers, Mark and Chad explain why they pivoted their focus to software engineers. They argue that software engineers are facing a "Data Lake Moment," prioritizing speed over craftsmanship and racking up massive technical debt and integration failures.
Gable: https://www.gable.ai/
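For readers new to the term, here is a minimal, hypothetical Python sketch of what a data contract check can look like in practice: the producing team declares the shape of the records it emits, and a pipeline step rejects rows that break the agreement before they propagate downstream. The field names and rules are illustrative assumptions, not an excerpt from the book or from Gable.

```python
# Minimal data contract sketch: declare expected fields and validate records
# before they flow into downstream pipelines.

ORDER_CONTRACT = {
    "order_id": str,      # required
    "amount_cents": int,  # required, must be >= 0
    "currency": str,      # required, e.g. "USD"
}

def violations(record: dict) -> list[str]:
    problems = []
    for field, expected_type in ORDER_CONTRACT.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            problems.append(f"{field}: expected {expected_type.__name__}, "
                            f"got {type(record[field]).__name__}")
    if isinstance(record.get("amount_cents"), int) and record["amount_cents"] < 0:
        problems.append("amount_cents must be >= 0")
    return problems

good = {"order_id": "A-1001", "amount_cents": 2599, "currency": "USD"}
bad = {"order_id": "A-1002", "amount_cents": "25.99"}  # wrong type, missing currency

print(violations(good))  # [] -> record honors the contract
print(violations(bad))   # breaking changes to surface back to the producing team
```

The point of a check like this is that it is owned and enforced close to where the data is produced, which is the shift toward software engineers that Mark and Chad argue for.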

The New Stack Podcast
Helm 4: What's New in the Open Source Kubernetes Package Manager?

The New Stack Podcast

Play Episode Listen Later Dec 3, 2025 24:45


Helm, which began as a hackathon project called Kate's Place, turned 10 in 2025, marking the milestone with the release of Helm 4, its first major update in six years. Created by Matt Butcher and colleagues as a playful take on "K8s," the early project won a small prize but quickly grew into a serious effort when Deis leadership recognized the need for a Kubernetes package manager. Renamed Helm, it rapidly expanded with community contributors and became one of the first CNCF graduating projects.
Helm 4 reflects years of accumulated design debt and evolving use cases. After the rapid iterations of Helm 1, 2, and 3, the latest version modernizes logging, improves dependency management, and introduces WebAssembly-based plugins for cross-platform portability, addressing the growing diversity of operating systems and architectures. Beyond headline features, maintainers emphasize that mature projects increasingly deliver "boring" but essential improvements, such as better logging, which simplify workflows and integrate more cleanly with other tools. Helm's re-architected internals also lay the foundation for new chart and package capabilities in upcoming 4.x releases.
Learn more from The New Stack about Helm:
The Super Helm Chart: To Deploy or Not To Deploy?
Kubernetes Gets a New Resource Orchestrator in the Form of Kro
Join our community of newsletter subscribers to stay on top of the news and at the top of your game. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The New Stack Podcast
All About Cedar, an Open Source Solution for Fine-Tuning Kubernetes Authorization

The New Stack Podcast

Play Episode Listen Later Dec 2, 2025 16:13


Kubernetes has relied on role-based access control (RBAC) since 2017, but its simplicity limits what developers can express, said Micah Hausler, principal engineer at AWS, on The New Stack Makers. RBAC can only grant permission to perform actions; it can't express conditions, denials, or attribute-based rules. Seeking a more expressive authorization model for Kubernetes, Hausler explored Cedar, an authorization engine and policy language created at AWS in 2022 and later open-sourced. Although not designed specifically for Kubernetes, Cedar proved capable of modeling its authorization needs in a concise, readable way. Hausler highlighted Cedar's clarity (nontechnical users can often understand policies at a glance) as well as its schema validation, autocomplete support, and formal verification, which ensures policies are correct and produce only allow or deny outcomes.
Now onboarding to the CNCF sandbox, Cedar is used by companies like Cloudflare and MongoDB and offers language-agnostic tooling, including a Go implementation donated by StrongDM. The project is actively seeking contributors, especially to expand bindings for languages like TypeScript, JavaScript, and Python.
Learn more from The New Stack about Cedar:
Ceph: 20 Years of Cutting-Edge Storage at the Edge
The Cedar Programming Language: Authorization Simplified
Join our community of newsletter subscribers to stay on top of the news and at the top of your game. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

KAJ Studio Podcast
Expert Coach JC Clark Reveals REMOTE Career Secrets You Need to Know

KAJ Studio Podcast

Play Episode Listen Later Nov 28, 2025 29:10


Join JC Clark, a career coach and former finance professional turned software engineer, as she shares hard-won insights from her journey of 1,800+ job applications. Discover insider tips to land high-paying remote jobs, build powerful professional networks, and navigate career changes. Learn how to thrive in virtual workplaces while maintaining work-life balance, especially for working parents.

Resilient Cyber
Resilient Cyber w/ Jesus and John - Post-Quantum Cryptography for Engineers

Resilient Cyber

Play Episode Listen Later Nov 19, 2025 22:39


In this episode of Resilient Cyber, I'm joined by Jesus Alejandro Cardenes Cabre, SVP of Product Architecture, and John Xiaremba, Software Engineer, both from the VIA Knowledge Hub team, to dig into all things post-quantum cryptography (PQC). This includes PQC standards, as well as practical steps developers must take today to mitigate future risks.

Software Engineering Radio - The Podcast for Professional Software Developers
SE Radio 694: Jennings Anderson and Amy Rose on Overture Maps

Software Engineering Radio - The Podcast for Professional Software Developers

Play Episode Listen Later Nov 12, 2025 63:45


Jennings Anderson, a Software Engineer with Meta Platforms, and Amy Rose, the Chief Technology Officer at Overture Maps Foundation, speak with host Gregory M. Kapfhammer about the Overture Maps project, which creates reliable, easy-to-use, and interoperable open map data. After exploring the foundations of geospatial information systems, Gregory and his guests dive deep into the implementation of Overture Maps through features like the Global Entity Reference System (GERS). In addition to discussing the organizational structure of the Overture Maps Foundation and the need for a unified database of geospatial data, Jennings and Amy explain how to implement applications using data from Overture Maps. Brought to you by IEEE Computer Society and IEEE Software magazine.

The Engineering Leadership Podcast
Brex 3.0: An 18-Month Operational Evolution & the Brex Hacker House “AI Startup within a Startup" experiment w/ James Reggio #236

The Engineering Leadership Podcast

Play Episode Listen Later Nov 12, 2025 45:30


James Reggio (CTO @ Brex) shares the story of "Brex 3.0," an 18-month journey behind their operational evolution. We explore how they rewound their org from a Series E to a Series C mindset and replaced siloed OKRs with seasonal "marquee initiatives." James deconstructs the "Brex Hacker House," an AI-focused startup-within-a-startup experiment aimed at disrupting their core business. This conversation is all about evolving operational rhythms, layers of management, product building, and culture change!
ABOUT JAMES REGGIO
James Reggio is Brex's Chief Technology Officer. James is a forward-thinking technology leader who currently oversees Brex's entire Engineering org. James joined Brex in 2020 as Principal Engineer and has played a vital role in building the company's mobile app and AI capabilities. Prior to Brex, James had an extensive career as a Software Engineer at leading companies such as Microsoft, Salesforce, Airbnb, Stripe, and more. Additionally, James founded two companies: Altair Management and Banter, a social discovery platform for podcasts that was later acquired by Convoy in 2018. James received his Bachelor of Science from The University of Texas at Austin.
SHOW NOTES:
The birth of Brex 3.0: Using a layoff as a "moment to refound the company" (3:38)
Moving from a Series E to a Series C operational mindset (5:28)
The problem with a GM model: How siloed OKRs and roadmaps created "deadlock" (6:07)
New rituals: Why the CEO became "chief editor of the roadmap" (8:16)
The impact on morale: "Folks just knew how their work fit into the bigger picture" (11:16)
The challenge of the new model: Who do you hold accountable when you "win and lose as a team"? (13:43)
The lesson for reintroducing systems: "Less is more" (15:43)
The "Startup within a Startup": Launching an internal team to disrupt Brex (16:49)
"What if we were founding Brex again today?" The 4 constraints for the "Hacker House" experiment (17:58)
Questions eng leaders should ask when running a similar experiment to Brex (21:02)
Aha moment: "With agentic coding, code is so cheap" (22:35)
Managing the two narratives: "compounding" the core biz vs. "innovating" with AI (26:01)
A surprising dynamic: Why the AI team struggled to see their impact (while the core team didn't) (29:38)
Building alongside your customer to iterate / experiment faster (36:06)
The turnaround is over: Brex hits 50% YoY growth and cash-flow positive (38:45)
Rapid fire questions (42:10)
This episode wouldn't have been possible without the help of our incredible production team:
Patrick Gallagher - Producer & Co-Host
Jerry Li - Co-Host
Noah Olberding - Associate Producer, Audio & Video Editor https://www.linkedin.com/in/noah-olberding/
Dan Overheim - Audio Engineer, Dan's also an avid 3D printer - https://www.bnd3d.com/
Ellie Coggins Angus - Copywriter, check out her other work at https://elliecoggins.com/about/
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Talk Python To Me - Python conversations for passionate developers
#526: Building Data Science with Foundation LLM Models

Talk Python To Me - Python conversations for passionate developers

Play Episode Listen Later Nov 1, 2025 67:24 Transcription Available


Today, we're talking about building real AI products with foundation models. Not toy demos, not vibes. We'll get into the boring dashboards that save launches, evals that change your mind, and the shift from analyst to AI app builder. Our guide is Hugo Bowne-Anderson, educator, podcaster, and data scientist, who's been in the trenches from scalable Python to LLM apps. If you care about shipping LLM features without burning the house down, stick around.
Episode sponsors: Posit, NordStellar, Talk Python Courses
Links from the show:
Hugo Bowne-Anderson: x.com
Vanishing Gradients Podcast: vanishinggradients.fireside.fm
Fundamentals of Dask: High Performance Data Science Course: training.talkpython.fm
Building LLM Applications for Data Scientists and Software Engineers: maven.com
marimo: a next-generation Python notebook: marimo.io
DevDocs (Offline aggregated docs): devdocs.io
Elgato Stream Deck: elgato.com
Sentry's Seer: talkpython.fm
The End of Programming as We Know It: oreilly.com
LorikeetCX AI Concierge: lorikeetcx.ai
Text to SQL & AI Query Generator: text2sql.ai
Inverse relationship enthusiasm for AI and traditional projects: oreilly.com
Watch this episode on YouTube: youtube.com
Episode #526 deep-dive: talkpython.fm/526
Episode transcripts: talkpython.fm
Theme Song: Developer Rap

Sound & Vision
Gretchen Andrew

Sound & Vision

Play Episode Listen Later Oct 23, 2025 79:08


Episode 497 / Gretchen Andrew
Gretchen Andrew is an artist born in Los Angeles in 1988 who lives and works in London and Park City, Utah. She studied Information Systems, earning a BS from Boston College, worked for Intuit as a Software Engineer and for Google as a People Technology Manager, and apprenticed with Billy Childish at his studio.
She's had shows at Gray Area, San Francisco; Heft Gallery, NYC; Hope 93, London; FxHash, Berlin Art Week; Galloire, Dubai, UAE; Falko Alexander, Cologne, Germany; Annka Kultys Gallery, London, United Kingdom; and many others.
She's shown at fairs including 2025 Expo Chicago, 2024 Untitled Miami, Paris Photo (21C Award, solo presentation), and the 2022 Vienna Contemporary (solo presentation).
She has lectured at the Tate Modern, the Luma Foundation in Zurich, the Mia Foundation in Dubai, and the University of Chicago.