I'm not stressed by AI itself. I'm stressed by the insatiable greed of those who profit from it, even if it means sacrificing large parts of the population. I'm also stressed about how ruthlessly it can be abused to cause deliberate harm.

In this episode I'm not taking you into the world of fire science, but rather into my own thoughts on how the AI revolution influences our lives. And it influenced me just last week: through a phishing attack on the IAFSS, and through reading a very disturbing piece of fiction I found on the Internet...

In the episode I comment on the targeted phishing attack against our association, which used well-researched details and a cloned voice pulled from public audio. From there, we step into a stark forecast of near-term AI disruption in white-collar work. Agent teams can already write, review, and ship production code in loops, compressing time and cost while jolting stock prices across entire sectors the moment capabilities drop. Then we get specific about our field. Some tasks in fire safety are ripe for automation: code interpretation, routine calculations, device placement, and documentation, where speed and consistency help. But holistic fire strategy is contextual and slow to validate, with scarce standardized case data and long feedback loops. Buildings are messy, multidisciplinary systems; that friction is a temporary moat against full automation. The larger risk may be macroeconomic: if AI compresses demand and margins across white-collar industries, construction cools and safety work gets squeezed. Paradoxically, the low digitalization of construction buys time, making it harder to train and deploy one-size-fits-all models.

I'm still, to a large extent, positive that Fire Safety Engineering won't be directly disrupted at the same scale as software engineering was, but as part of a larger ecosystem we won't be untouched either...
I hope the version of the future that plays out is more optimistic than the one I got worried about. Read the Citrini piece here, if you have not yet: https://www.citriniresearch.com/p/2028gic

----

The Fire Science Show is produced by the Fire Science Media in collaboration with OFR Consultants. Thank you to the podcast sponsor for their continuous support towards our mission.
Josh Lipinski is the founder of Breakwater Supply, a digital-first ecommerce outdoor brand that sells specialized waterproof backpacks, shoes, and other outdoor gear. Before becoming an entrepreneur, Josh worked for over 15 years in tech as a Software Engineer and Senior Manager in the Boston area. He now leads all day-to-day operations of Breakwater Supply, spanning sourcing and product development, marketing and advertising, logistics, fulfillment, and web development. Josh joins Justin to discuss this fast-growing brand!
We talked about autonomous AI, drawing lessons from OpenClaw.
https://openclaw.ai/
https://www.moltbook.com/

Share your thoughts with the hashtag #tilfm!
Listener mail form: https://forms.gle/J2ioXHS98dYNoMbq5

Your co-hosts:
Tomoaki Imai, Noxx CTO https://x.com/tomoaki_imai bsky: https://bsky.app/profile/tomoaki-imai.bsky.social
Ryoichi Kato, Software Engineer https://x.com/ryo1kato bsky: https://bsky.app/profile/ryo1kato.bsky.social
On The New Stack Agents, Gavriel Cohen discusses why he built NanoClaw, a minimalist alternative to OpenClaw, after discovering security and architectural flaws in the rapidly growing agentic framework. Cohen, co-founder of AI marketing agency Qwibit, had been running agents across operations, sales, and research using Claude Code. When Clawdbot (later OpenClaw) launched, it initially seemed ideal. But Cohen grew concerned after noticing questionable dependencies—including his own outdated GitHub package—excessive WhatsApp data storage, a massive AI-generated codebase nearing 400,000 lines, and a lack of OS-level isolation between agents. In response, he created NanoClaw with radical minimalism: only a few hundred core lines, minimal dependencies, and containerized agents. Built around Claude Code “skills,” NanoClaw enables modular, build-time integrations while keeping the runtime small enough to audit easily. Cohen argues AI changes coding norms—favoring duplication over DRY, relaxing strict file limits, and treating code as disposable. His goal is simple, secure infrastructure that enterprises can fully understand and trust.

Learn more from The New Stack about the latest around personal AI agents:
Anthropic: You can still use your Claude accounts to run OpenClaw, NanoClaw and Co.
It took a researcher fewer than 2 hours to hijack OpenClaw
OpenClaw is being called a security “Dumpster fire,” but there is a way to stay safe

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
A few weeks after Dynatrace acquired DevCycle, Michael Beemer and Andrew Norris discussed on The New Stack Makers podcast how feature flagging is becoming a critical safeguard in the AI era. By integrating DevCycle's feature flagging into the Dynatrace observability platform, the combined solution delivers a “360-degree view” of software performance at the feature level. This closes a key visibility gap, enabling teams to see exactly how individual features affect systems in production. As “agentic development” accelerates—where AI agents rapidly generate code—feature flags act as a safety net. They allow teams to test, control, and roll back AI-generated changes in live environments, keeping a human in the loop before full releases. This reduces risk while speeding enterprise adoption of AI tools. The discussion also highlighted support for the Cloud Native Computing Foundation's OpenFeature standard to avoid vendor lock-in. Ultimately, developers are evolving into “conductors,” orchestrating AI agents with feature flags as their baton. Learn more from The New Stack about the latest around AI enterprise development: Why You Can't Build AI Without Progressive Delivery Beyond automation: Dynatrace unveils agentic AI that fixes problems on its own Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
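The rollback mechanism described above can be sketched in a few lines of Python. This is an illustrative sketch only: the `FlagStore` class, the flag key, and the pricing logic are hypothetical stand-ins, not the DevCycle, Dynatrace, or OpenFeature API. The point is that an AI-generated code path ships behind a flag and a human can disable it without redeploying.

```python
class FlagStore:
    """In-memory flag store standing in for a real feature-flag service."""

    def __init__(self):
        self._flags = {}

    def set(self, key, enabled):
        self._flags[key] = enabled

    def is_enabled(self, key, default=False):
        # Unknown flags fall back to the default (the known-good path).
        return self._flags.get(key, default)


def checkout_total(cart, flags):
    """AI-generated pricing logic runs only while its flag is on."""
    if flags.is_enabled("ai-pricing-v2"):
        # Hypothetical AI-generated change: apply a 5% discount.
        return sum(item["price"] for item in cart) * 0.95
    # Known-good path, always available as the rollback target.
    return sum(item["price"] for item in cart)


flags = FlagStore()
cart = [{"price": 100.0}, {"price": 50.0}]

flags.set("ai-pricing-v2", True)
assert abs(checkout_total(cart, flags) - 142.5) < 1e-9

flags.set("ai-pricing-v2", False)  # instant rollback, no redeploy
assert abs(checkout_total(cart, flags) - 150.0) < 1e-9
```

Real flag services add targeting rules, gradual rollouts, and audit trails on top of this toggle, but the human-in-the-loop safety property is the same: the new behavior is one configuration change away from being off.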
I've been delaying this episode for a long time because the topic is genuinely difficult and, for many of us, scary. AI threatens not just our livelihood, but our sense of self-worth as creators.

In this episode, I don't offer false guarantees about job security. Instead, I frame the problem through the lens of microeconomics and rational incentives to help you understand how to remain employable. We discuss why you must separate your ego from your current skill set and how to position yourself not as a competitor to AI, but as a force multiplier.

• The Hard Truth: I explain why the "abstinence" approach—hoping the industry rejects AI or that it turns out to be a bubble—is a high-risk gamble that is unlikely to succeed.
• Ego vs. Employability: We discuss the difficult mental shift required to disconnect your self-worth from the act of writing code manually, allowing you to adopt new tools without feeling like you are losing your identity.
• The Microeconomics of Your Job: Understand the cold reality that a rational market only pays you if you generate more value than you cost; if AI can do the same task with less risk or cost, the market will choose AI.
• The Non-Zero-Sum Game: Learn why the economy isn't a fixed pie. The goal isn't just to survive, but to recognize that the combination of Human + AI can generate more total value than either can alone.
• Multiplicative Value: I challenge you to stop thinking about linear skill acquisition and start thinking like a manager: how can you use AI to multiply your output and become indispensable?
• Accepting Atrophy: We confront the reality that your core coding skills may degrade over time as you rely on AI, and why accepting this trade-off might be necessary for your career survival.
We are at a unique point in history where there is finally an alternative to human coding. If AI can write the code effectively, what is left for the software engineer?

In this episode, Joris Conijn (AWS CTO at Xebia) argues that the era of "just coding" is over. We discuss why senior developers are safe (for now), why juniors are at risk of never learning the fundamentals, and how "Shadow AI" is forcing companies to change their security strategies.

Most importantly, we break down the difference between a "Programmer" and a "Software Engineer" with the introduction of agentic tools. If you want to future-proof your career and move from writing lines of code to designing systems, this conversation is for you.

In this episode, we cover:
Why banning AI at work actually increases your security risk
How to use AI to automate the boring parts of the SDLC (requirements & user stories)
The critical difference between "Coding" and "System Architecture"
Why you should check your AI Agents into your Git repository
The 20-year problem: what happens when engineers never learn the fundamentals?

Connect with Joris Conijn:
https://www.linkedin.com/in/jorisconijn

TIMESTAMPS
00:00:00 - Intro
00:01:11 - What Keeps a CTO Excited About Tech?
00:02:58 - Stop Being the "Department of No" in Security
00:05:28 - The Real Risk of Banning AI at Work
00:06:32 - When Developers Hold the Organization Hostage
00:08:14 - The Hidden Dangers of Instant AI Code Fixes
00:09:50 - Will Future Devs Understand Object Oriented Programming?
00:11:36 - Using AI to Accelerate Learning vs Copy-Pasting
00:13:17 - Why Testing Matters More When AI Writes Code
00:16:42 - Automating the Boring Parts of the SDLC
00:19:06 - How to Turn Meeting Transcripts into User Stories
00:21:36 - The Critical Skill of Making Implicit Knowledge Explicit
00:23:10 - Why You Should Stop Obsessing Over Story Points
00:27:46 - The "A-Team" Approach to High-Trust Development
00:29:54 - Running Parallel Workflows with AI Agents
00:33:34 - Pro Tip: Check Your AI Agents into Git
00:35:52 - Balancing Autonomy and Governance in Large Teams
00:39:19 - There Is Finally an Alternative to Human Coders
00:41:07 - Programmer vs Software Engineer: What is the Difference?
00:44:45 - How to Teach Software Engineering in the AI Era

#SoftwareEngineering #SystemDesign #AIAgents
Lessons from React2Shell: https://dev.to/cheetah100/lessons-from-react2shell-1m8b

Share your thoughts with the hashtag #tilfm!
Listener mail form: https://forms.gle/J2ioXHS98dYNoMbq5

Your co-hosts:
Tomoaki Imai, Noxx CTO https://x.com/tomoaki_imai bsky: https://bsky.app/profile/tomoaki-imai.bsky.social
Ryoichi Kato, Software Engineer https://x.com/ryo1kato bsky: https://bsky.app/profile/ryo1kato.bsky.social
In this episode, we sit down with Joel Plotnik, the longtime drummer for Phil Wickham. Joel shares his incredible journey from the early days of youth group to playing on the biggest worship stages in the world. We dive deep into the realities of being a "hired gun" in Los Angeles, why he decided to pursue a career in software engineering while still touring, and how to maintain a heart of humility when the spotlight is on you.
Dynatrace is at a pivotal point, expanding beyond traditional observability into a platform designed for autonomous operations and security powered by agentic AI. In an interview on The New Stack Makers, recorded at the Dynatrace Perform conference, Chief Technology Strategist Alois Reitbauer discussed his vision for AI-managed production environments. The conversation followed Dynatrace's acquisition of DevCycle, a feature-management platform. Reitbauer highlighted feature flags—long used in software development—as a critical safety mechanism in the age of agentic AI. Rather than allowing AI agents to rewrite and deploy code, Dynatrace envisions them operating within guardrails by adjusting configuration settings through feature flags. This approach limits risk while enabling faster, automated decision-making. Customers, Reitbauer noted, are increasingly comfortable with AI handling defined tasks under constraints, but not with agents making sweeping, unsupervised changes. By combining AI with controlled configuration tools, Dynatrace aims to create a safer path toward truly autonomous operations.

Learn more from The New Stack about the latest in progressive delivery:
Why You Can't Build AI Without Progressive Delivery
Continuous Delivery: Gold Standard for Software Development

Join our community of newsletter subscribers to stay on top of the news and at the top of your game. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
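The guardrail idea (an agent may only adjust an allow-listed set of configuration values within stated bounds, never arbitrary settings or code) can be shown in miniature. This is a sketch under assumptions: the allow-list, key names, and bounds below are hypothetical, not Dynatrace or DevCycle behavior.

```python
# Allow-list of config keys an agent may touch, with (low, high) bounds
# for numeric values; (None, None) means any value of the right type.
# All names and bounds here are illustrative assumptions.
ALLOWED_FLAGS = {
    "cache-ttl-seconds": (30, 3600),
    "retry-enabled": (None, None),
}


class GuardedConfig:
    """Config store that rejects agent writes outside the allow-list."""

    def __init__(self):
        self._config = {"cache-ttl-seconds": 300, "retry-enabled": True}

    def agent_set(self, key, value):
        if key not in ALLOWED_FLAGS:
            # Anything off the allow-list is refused outright.
            raise PermissionError(f"agent may not touch {key!r}")
        low, high = ALLOWED_FLAGS[key]
        if low is not None and not (low <= value <= high):
            # In-list keys are still bounded.
            raise ValueError(f"{key!r} value {value!r} out of bounds")
        self._config[key] = value


cfg = GuardedConfig()
cfg.agent_set("cache-ttl-seconds", 60)  # allowed: listed key, in bounds
assert cfg._config["cache-ttl-seconds"] == 60
```

An attempt like `cfg.agent_set("deploy-branch", "main")` raises `PermissionError`, which is the constrained-autonomy property described above: the agent can act quickly inside the fence, but sweeping changes stay with humans.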
If you've ever shipped fast only to realize no one wanted what you built, you've felt the tension behind balancing building and feedback. As developers, we're trained to execute against known requirements. As soon as you step into product ownership, consulting, or entrepreneurship, those guardrails disappear. Now you have to decide what to build, who it's for, and why it matters—while still making forward progress. Get it wrong, and you either drown in feedback or disappear into code. Get it right, and you create steady momentum without wasting effort. This interview continues our discussion with Tyler Dane as we break down a practical, repeatable system for balancing building and feedback so you can keep shipping and stay aligned with real customer needs.

About Tyler Dane
Tyler Dane has dedicated his career to helping people better manage—and truly appreciate—their time. After working as a full-time Software Engineer, Tyler recently stepped away from traditional employment to focus entirely on building Compass Calendar, a productivity app designed to help everyday users visualize and plan their day more intentionally. The tool is built from firsthand experience, not theory—shaped by years of experimenting with productivity systems, tools, and workflows. In a bold reset, Tyler sold most of his belongings and relocated to San Francisco to focus on growing the product, collaborating with partners, and pushing Compass forward. Outside of coding, Tyler creates YouTube videos and writes about time management and productivity. After consuming countless productivity books, tools, and frameworks, he realized a common trap: doing more without actually accomplishing what matters. That insight led him to break productivity down into its most practical, nuanced components—cutting through hustle culture noise to focus on systems that actually work. Tyler is unapologetically honest and independent.
With no investors, no sponsors, and nothing to sell beyond the value of his work, his focus is simple: help people get more done—and appreciate the limited time they have to do it. Follow Tyler on LinkedIn, YouTube, and X.

Balancing building and feedback starts with a clear v1
The biggest cause of wasted effort isn't bad code—it's unclear scope. A clear v1 isn't a long feature list; it's a decision about which problem you are solving first. When v1 is defined, feedback becomes directional instead of distracting. You can evaluate every request with a simple question: Does this help solve the v1 problem? If the answer is no, it goes into a parking lot—not the backlog. Without that clarity, every conversation feels urgent, and every idea feels equally important.

Balancing building and feedback by timeboxing your week
Unstructured time leads to extremes. One week becomes all coding. The next becomes all conversations. Neither works for long. Timeboxing forces balance by design. Decide when you build and when you listen—and protect those blocks like production systems. This removes decision fatigue and prevents emotional swings based on the latest conversation.

The Weekly Balance Blueprint
Pick a structure: daily outreach blocks or one dedicated feedback day
Convert feedback into next-week priorities instead of mid-week pivots
Consistency matters more than perfection.

Balancing building and feedback with daily "business refocus" blocks
Short check-ins keep you out of the weeds. Spend 10–15 minutes at the start and end of your day to reconnect with the business context. Ask yourself: Who is this for? What problem am I solving? What actually moved the product forward today? These moments prevent scope creep and help you code with intent instead of habit.

Balancing building and feedback using personal sprints
Personal sprints introduce rhythm. Two- or three-week cycles work well because they're long enough to produce meaningful output and short enough to adjust course.
Each sprint should include:
Focused build time
Planned feedback windows
Explicit integration of what you learned
This keeps learning and execution tightly coupled, rather than competing for attention.

Balancing building and feedback through problem-first customer research
Feedback becomes overwhelming when you ask the wrong questions. Feature requests are noisy. Problems are signals. Focus conversations on how people experience the problem today, what frustrates them, and what "better" looks like. This approach surfaces patterns instead of opinions.

Problem-First Customer Conversations
Ask about pains, workarounds, and desired outcomes
Use "not our customer" signals to narrow your focus
Clarity often comes from who you don't build for.

Balancing building and feedback to prevent feature overload
Not all feedback belongs in your product. Filtering input is a leadership skill. Use your v1 definition and target customer as a lens. Some ideas are valuable later. Some indicate a different market entirely. Saying "no" protects your momentum and your sanity.

Balancing building and feedback by turning conversations into messaging
Customer conversations don't just shape the product—they shape how you talk about it. The language people use to describe their pain becomes your marketing copy. When your messaging mirrors real problems, alignment improves across sales, onboarding, and product decisions.

Balancing building and feedback with journaling to spot patterns
Writing creates distance. Distance creates clarity. A lightweight journaling habit helps you spot repeated mistakes, drifting priorities, and false assumptions before they become expensive. Over time, patterns become impossible to ignore.

The Founder Feedback Journal
Capture decisions, assumptions, and outcomes daily
Review monthly to identify drift and reset priorities
It's one of the simplest tools with the highest long-term ROI.
Conclusion
Balancing building and feedback isn't about splitting your time evenly—it's about building a system that keeps you moving forward without losing direction. Clear scope, protected time, intentional feedback loops, and honest reflection create momentum that compounds. Start small. Adjust deliberately. And remember: progress comes from building the right things, not just building faster.

Stay Connected: Join the Developreneur Community
We invite you to join our community and share your coding journey with us. Whether you're a seasoned developer or just starting, there's always room to learn and grow together. Contact us at info@develpreneur.com with your questions, feedback, or suggestions for future episodes. Together, let's continue exploring the exciting world of software development.

Additional Resources
Embrace FeedBack For Better Teams
Maximizing Developer Effectiveness: Feedback Loops
Turning Feedback into Future Success: A Guide for Developers Building Better Foundations
Podcast Videos – With Bonus Content
At ITEXPO / MSP EXPO, Doug Green, Publisher of Technology Reseller News, spoke with Tejas Patel, Software Engineer at Amazon, for a technical deep dive into how one of the world's largest platforms manages scale, reliability, and the growing role of AI in operations. Amazon operates in an environment defined by extreme traffic variability—from daily fluctuations to massive surges during Prime events. Patel explained that the company relies on distributed systems and microservices architecture to scale every layer of the stack, including databases, caching layers, and application servers. “We scale everything at a massive scale,” he noted, adding that AI-driven traffic prediction models help prepare systems for anticipated spikes, ensuring elasticity and resilience under pressure. Even with rigorous lower-environment testing and simulated traffic, real-world production environments introduce unpredictable behaviors. When outages or functional errors occur, the first priority is customer impact mitigation. “The short-term goal is to make our functionalities available for customers as soon as possible,” Patel said. After stabilizing services, engineering teams conduct root cause analysis and implement long-term fixes to prevent recurrence. On-call teams remain a core part of this model, though that may evolve. AI is increasingly part of that evolution. Patel described how AI systems can detect latency drops, identify anomalies, trigger workflows, and begin root cause investigations—sometimes before engineers are alerted. While still in a supervised phase, AI is gradually moving from passive support to more autonomous operational roles. “AI has a lot of protocols built where it can talk to all the systems,” he explained, envisioning a future where AI mitigates issues proactively while engineers oversee the broader architecture. 
For MSPs and channel professionals looking to understand large-scale technology environments, Patel emphasized the foundational importance of distributed systems. “Distributed system is everywhere,” he said. “It's the backbone of a large-scale product.” As AI models and inference platforms continue to expand globally, scalable distributed infrastructure will remain essential to delivering reliable, uninterrupted user experiences. Visit https://www.amazon.com/
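The kind of latency anomaly detection Patel describes can be illustrated with a toy z-score detector: flag a request whose latency deviates strongly from the recent baseline. This is a sketch under stated assumptions; the window size, warm-up length, and threshold are illustrative choices, not Amazon's actual tooling.

```python
from collections import deque
from statistics import mean, stdev


class LatencyMonitor:
    """Toy anomaly detector over a rolling window of latency samples."""

    def __init__(self, window=50, z_threshold=3.0):
        # Only the most recent `window` samples form the baseline.
        self.samples = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, latency_ms):
        """Record one sample; return True if it looks anomalous."""
        anomalous = False
        if len(self.samples) >= 10:  # warm-up before judging anything
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(latency_ms - mu) / sigma > self.z_threshold:
                anomalous = True
        self.samples.append(latency_ms)
        return anomalous


monitor = LatencyMonitor()
# Baseline traffic hovering around 100 ms.
for i in range(40):
    monitor.observe(95 if i % 2 == 0 else 105)
# A 900 ms spike stands far outside the baseline and is flagged.
assert monitor.observe(900) is True
```

A production system would add seasonality handling and percentile-based baselines, and would feed detections into the kind of automated workflows the interview describes (alerting, mitigation, root-cause investigation) rather than just returning a boolean.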
Matan-Paul Shetrit, Director of Product Management at Writer, argues that people must take responsibility for how they use AI. If someone produces poor-quality output, he says, the blame lies with the user—not the tool. He believes many misunderstand AI's role, confusing its ability to accelerate work with an abdication of accountability. Speaking on The New Stack Agents podcast, Shetrit emphasized that “we're all becoming editors,” meaning professionals increasingly review and refine AI-generated content rather than create everything from scratch. However, ultimate responsibility remains human. If an AI-generated presentation contains errors, the presenter—not the AI—is accountable. Shetrit also discussed the evolving AI landscape, contrasting massive general-purpose models from companies like OpenAI and Google with smaller, specialized models. At Writer, the focus is on enabling enterprise-scale AI adoption by reducing costs, improving accuracy, and increasing speed. He argues that bespoke, narrowly focused models tailored to specific use cases are essential for delivering reliable, cost-effective AI solutions at scale.

Learn more from The New Stack about the latest around enterprise development:
Why Pure AI Coding Won't Work for Enterprise Software
How To Use Vibe Coding Safely in the Enterprise

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Is your internal developer platform actually improving velocity, or is it a bottleneck? We discuss why platform teams building "cool" abstractions is a red flag, and why you should aim to create the best platform for software engineers.

In this episode, we cover:
Why "Golden Paths" can turn into roadblocks for developers.
The danger of Shadow IT and why it's a symptom of a failed platform.
How to measure if your platform is saving time.

Connect with Adnan Alshar:
https://www.linkedin.com/in/adnanmalshar92
Connect with Jelmer de Jong:
https://www.linkedin.com/in/jelmerdejong-xebia

00:00:00 - Intro
00:00:54 - Is DevOps Dead? The Truth About Platform Engineering
00:03:07 - Why Developers Are Drowning in Complexity Today
00:04:37 - Why Having No Platform Is Better Than a Bad Platform
00:07:20 - Treating Software Engineers as Customers of the Platform
00:11:26 - The Exact Moment You Should Start Building a Platform
00:14:18 - Who Should Be on Your First Platform Team?
00:17:33 - Turning Your Angriest Developers Into Platform Evangelists
00:18:57 - Key Metrics: How to Measure Platform Engineering Success
00:21:01 - Why 60% of Companies Don't Measure Platform Success
00:23:35 - Why No Metrics Is the Biggest Red Flag
00:25:23 - The Disconnect Between Executives and AI Readiness
00:31:34 - Integrating AI Tools and Large Language Models Securely
00:34:22 - Shadow IT: The Symptom of a Broken Platform
00:38:03 - How to Scale Without Becoming a Bottleneck
00:41:45 - Don't Forget the Business Side of Platform Engineering

#PlatformEngineering #DevOps #DeveloperProductivity
This week on The Route to Networking podcast, Antonio is joined by Paul Chu, an AI software engineer and Master's student at ENS Paris-Saclay, whose journey has taken him from France's competitive engineering system to research labs at Stanford and the heart of the Bay Area tech ecosystem.

At a time when AI is moving faster than ever, it's refreshing to hear from someone breaking into the industry right now. Paul offers a real-time perspective on what it actually takes to get started, stand out, and build momentum in such a competitive space.

From securing a $15,000 research grant to auditing lectures from Andrew Ng, Paul shares how he positioned himself without a traditional research background and why curiosity and adaptability mattered more than credentials.

He reflects on winning his first major hackathon in San Francisco, joining robotics projects outside his comfort zone, and how high-pressure environments accelerated both his technical ability and confidence. The conversation also explores the differences between the Paris and San Francisco tech scenes, networking as an introvert, documenting his journey on YouTube, and why AI should be seen as an enabler rather than a shortcut.

The episode concludes with a quick-fire round covering large language models, career-defining moments, stepping into uncomfortable rooms, and the mindset required to grow in the AI frontier.
Customer feedback for developers is one of the fastest ways to improve a product—and one of the easiest ways to derail it. When you're building something you care about, every comment feels important. The challenge is learning how to listen without letting feedback pull you in ten different directions. This episode explores how developers can use customer feedback to sharpen focus, avoid scope creep, and move faster—without losing the original vision that made the product worth building in the first place.
Customer feedback for developers: Why "this is great, but…" matters
Most useful feedback doesn't sound negative at first. It usually starts with, "This is great, but…" That "but" is where the signal lives. For developers, the mistake isn't ignoring feedback—it's stopping at the compliment. The real value is understanding what's missing, confusing, or blocking progress. Teams that grow fastest learn to treat that follow-up as actionable data, not criticism.

The "This Is Great, But…" Checklist
Capture the "but" immediately before it gets softened or forgotten
Translate it into a concrete problem statement you can validate

Customer feedback for developers: how to find the right people to talk to
Not all feedback is equal. Talking to the wrong audience can send you down expensive paths that don't actually improve your product. Customer feedback for developers works best when it comes from people who:
Actively experience the problem you're solving
Would realistically adopt or pay for your solution
Share similar workflows and constraints
Broad feedback feels productive but often leads to vague changes. Focused conversations lead to clarity.

Customer feedback for developers: filtering input to prevent scope creep
Scope creep rarely starts with bad intent. It starts with trying to please everyone. The fix isn't saying "no" to customers—it's filtering feedback through a clear lens:
Does this solve the core problem?
Does this help our ideal user?
Does this move the product forward right now?

Avoid Scope Creep Without Ignoring Customers
Separate "interesting ideas" from "next priorities." Keep a backlog for later so good ideas don't hijack today's focus.

Customer feedback for developers: balancing vision with real user needs
Strong products sit at the intersection of vision and reality. If you only follow feedback, you become reactive. If you ignore it, you risk building in isolation. Customer feedback for developers should challenge assumptions—not erase direction.
The goal is refinement, not reinvention, with every conversation.

Customer feedback for developers: building momentum with faster shipping
One consistent theme is speed. Slow feedback loops kill momentum. Shipping faster—even in small increments—creates learning. Fast cycles:
Reveal what actually matters
Improve judgment over time
Reduce emotional attachment to individual decisions

Build Momentum With Speed and Structure
Short shipping cycles reduce overthinking
Volume creates clarity faster than perfect planning

Customer feedback for developers: choosing a niche in a crowded market
General tools struggle in saturated spaces. Customer feedback for developers becomes clearer when you narrow your audience. Niching down doesn't limit opportunity—it increases relevance.

How to position against "feature-parity" giants
You don't win by copying large platforms. You win by serving a specific workflow better than anyone else.

Self-direction when you don't have a manager
Without an external structure, prioritization becomes your job. Customer feedback replaces task assignments—but only if you actively use it to set direction. Clear priorities beat unlimited freedom.

Conclusion
Customer feedback for developers isn't about collecting opinions—it's about building judgment. When you listen to the right people, filter ruthlessly, and ship quickly, feedback becomes a growth engine instead of a distraction. If you're building something of your own, treat feedback as fuel—not a steering wheel.

Stay Connected: Join the Developreneur Community
We invite you to join our community and share your coding journey with us. Whether you're a seasoned developer or just starting, there's always room to learn and grow together. Contact us at info@develpreneur.com with your questions, feedback, or suggestions for future episodes. Together, let's continue exploring the exciting world of software development.
Additional Resources Embrace FeedBack For Better Teams Feedback And Career Help – Does The Bootcamp Provide It? Turning Feedback into Future Success: A Guide for Developers Building Better Foundations Podcast Videos – With Bonus Content
AI coding assistants are boosting developer productivity, but most enterprises aren't shipping software any faster. GitLab CEO Bill Staples says the reason is simple: coding was never the main bottleneck. After speaking with more than 60 customers, Staples found that developers spend only 10–20% of their time writing code. The remaining 80–90% is consumed by reviews, CI/CD pipelines, security scans, compliance checks, and deployment—areas that remain largely unautomated. Faster code generation only worsens downstream queues.

GitLab's response is its newly GA'ed Duo Agent Platform, designed to automate the full software development lifecycle. The platform introduces "agent flows," multi-step orchestrations that can take work from issue creation through merge requests, testing, and validation. Staples argues that context is the key differentiator. Unlike standalone coding tools that only see local code, GitLab's all-in-one platform gives agents access to issues, epics, pipeline history, security data, and more through a unified knowledge graph.

Staples believes this platform approach, rather than fragmented point solutions, is what will finally unlock enterprise software delivery at scale.

Learn more from The New Stack about the latest around GitLab and AI:
GitLab Launches Its AI Agent Platform in Public Beta
GitLab's Field CTO Predicts: When DevSecOps Meets AI

Join our community of newsletter subscribers to stay on top of the news and at the top of your game. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
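GitLab's actual agent-flow API isn't shown in the episode, but the underlying idea — a multi-step orchestration in which each stage enriches a shared context that later stages can see — can be sketched generically. The step names and context fields below are hypothetical illustrations, not GitLab's interface:

```python
# Hypothetical sketch of an "agent flow": each step reads and enriches a
# shared context dict, so later stages see what earlier ones produced.
# Step names and fields are illustrative, not GitLab's actual API.

def create_branch(ctx):
    ctx["branch"] = f"fix/{ctx['issue_id']}"
    return ctx

def write_patch(ctx):
    # In a real flow an agent would generate code here.
    ctx["patch"] = f"patch for issue {ctx['issue_id']} on {ctx['branch']}"
    return ctx

def run_tests(ctx):
    ctx["tests_passed"] = "patch" in ctx  # stand-in for a CI run
    return ctx

def open_merge_request(ctx):
    if ctx["tests_passed"]:
        ctx["mr"] = f"MR: {ctx['branch']} -> main"
    return ctx

def run_flow(issue_id, steps):
    """Thread one context through every step, issue to merge request."""
    ctx = {"issue_id": issue_id}
    for step in steps:
        ctx = step(ctx)
    return ctx

flow = [create_branch, write_patch, run_tests, open_merge_request]
result = run_flow(1234, flow)
print(result["mr"])  # MR: fix/1234 -> main
```

The point of the sketch is the shared context: unlike isolated coding tools, every stage operates on the accumulated state of the whole flow.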
In this episode of the WeArePoWEr Podcast, we're joined by Asia Sharif, a Software Engineer, public speaker, mentor and all-round force of nature.

Asia shares the story behind the origin of her name, how identity has shaped her confidence and her honest experiences with imposter syndrome in tech. She reflects on building self-belief, navigating underrepresentation in engineering and the importance of mentorship in STEM, while also breaking down her path into coding and software engineering and how public speaking became a tool for impact and advocacy.

She also opens up about her battle with cancer, the challenges she faced and the mindset that helped her keep moving forward.

What to expect in this episode:
The story and meaning behind Asia's name
Imposter syndrome & building self-belief
Mentorship in STEM and engineering
Coding, software engineering & tech careers
Public speaking and confidence
Asia's cancer journey, resilience & recovery
Awards, achievements & success

Find out more about We Are PoWEr here.
Amid turbulent times, we talked about advice for breaking away from the lifestyles we have known and living independently, both financially and personally.

If you have multiple interests, do not waste the next 2-3 years https://x.com/i/status/2010042119121957316

Share your thoughts with the hashtag #tilfm!
Feedback form: https://forms.gle/J2ioXHS98dYNoMbq5

Your co-hosts:
Tomoaki Imai, Noxx CTO https://x.com/tomoaki_imai bsky: https://bsky.app/profile/tomoaki-imai.bsky.social
Ryoichi Kato, Software Engineer https://x.com/ryo1kato bsky: https://bsky.app/profile/ryo1kato.bsky.social
Tech.Rocks Summit 2025 special: we continue our immersion into the heart of the Tech.Rocks Summit with Benoit Gantaume. In this new episode, he hands the mic to Marianne Ducournau, Head of AI Products at Qonto, to explore the concrete reality of generative AI inside a leading scale-up.

Drawing on her experience at Uber and Amazon, Marianne takes us behind the scenes of integrating AI into the daily lives of 600,000 customers. Far from theoretical talk, she shares Qonto's strategy for turning complex technologies, from intelligent OCR to agentic AI, into genuine levers of simplification for entrepreneurs.

Along the way, she lifts the veil on a crucial dilemma for any tech leader: when should you build in-house, and when should you rely on off-the-shelf solutions? Marianne also discusses the challenge of predictability and the vital importance of data quality: an LLM always answers with conviction, so only a rigorous evaluation dataset lets you move past the mirage of the POC and reach industrial-grade reliability.

Finally, she offers a resolutely optimistic vision of how our professions are evolving. For her, AI is not a threat but a catalyst that breaks down the silos between Product Managers, Data Scientists and Software Engineers.

An essential episode on how a leading tech organization navigates the uncertainty of AI to build robust, scalable products centered on customer value.

*******************

The leading community for tech professionals in France, Tech.Rocks has the mission of spotlighting tech leaders throughout the year.

Tech.Rocks Summit 2026 - Paris - Take advantage of our "Fan avant l'heure" rate
South East Technological University's (SETU) sixth annual Women in Technology event will bring together role models from industry and academia to challenge perceptions of technology and encourage more young women to consider careers in the sector. The event at SETU Arena in Waterford, on Thursday, 12 March, aims to grow young women's understanding of technology and demonstrate the career paths open to them in computing. Building on the success of last year's event, which welcomed over 1,000 female students from Cork, Tipperary, Kilkenny, Wexford, and Waterford, this year's programme promises to be more engaging than ever. Attendees will hear from inspirational keynote speakers who are leading the way in technology. These include Phil Healy, a two-time Irish Olympian who has successfully combined elite sport with a career as a Software Developer at Sun Life, and Likhitha Gaddi, a Software Engineer at Google. Alongside the keynote talks, the event, sponsored by Sun Life, Google, Security Risk Advisors and Nearform, will feature exhibition stands from some of the region's largest technology companies. Students will have the opportunity to interact directly with professionals working in technology, engineering, ICT, and software development, gaining insight into real-world career pathways. Amanda Freeman-Gater, Assistant Head of the Computing and Mathematics Department at SETU, believes that encouraging more women into technology is essential for the future of the sector. "The technology industry needs more women studying the wide range of technological programmes available, including those at SETU," said Ms Freeman-Gater. "Graduates can go on to build careers in dynamic technical roles that offer flexibility and the chance to work collaboratively on innovative ideas, services, and products." While there has been a recent shortfall in the number of women entering technology fields, this was not always the case, she adds.
"Ada Lovelace is widely recognised as the world's first computer programmer. We must now focus on developing the next generation of female tech talent to create a more balanced and inclusive workforce. Women make up half the world's population, so it is only logical they should make up half the workforce in technology." SETU's Women in Technology event is open to second-level and third-level female students and teachers. The event will feature exhibitions, technology demonstrations, industry speakers, and information on SETU's wide range of third-level programmes, which provide pathways to exciting and rewarding careers in technology. Schools that register for SETU's Women in Technology event at SETU | Women in Technology 2026 will be entered into a draw to win a free bus to the event, while attendees will also be in with a chance to win one of six laptops. More about Irish Tech News Irish Tech News are Ireland's No. 1 Online Tech Publication and often Ireland's No.1 Tech Podcast too. You can find hundreds of fantastic previous episodes and subscribe using whatever platform you like via our Anchor.fm page here: https://anchor.fm/irish-tech-news If you'd like to be featured in an upcoming Podcast email us at Simon@IrishTechNews.ie now to discuss. Irish Tech News have a range of services available to help promote your business. Why not drop us a line at Info@IrishTechNews.ie now to find out more about how we can help you reach our audience. You can also find and follow us on Twitter, LinkedIn, Facebook, Instagram, TikTok and Snapchat.
Sean O'Dell of Dynatrace argues that enterprises are unprepared for a major shift brought on by AI: the rise of the developer. Speaking at Dynatrace Perform in Las Vegas, O'Dell explains that AI-assisted and "vibe" coding are collapsing traditional boundaries in software development. Developers, once insulated from production by layers of operations and governance, are now regaining end-to-end ownership of the entire software lifecycle — from development and testing to deployment and security. This shift challenges long-standing enterprise structures built around separation of duties and risk mitigation. At the same time, the definition of "developer" is expanding. With AI lowering technical barriers, software creation is becoming more about creative intent than mastery of specialized tools, opening the door to nontraditional developers. Experimentation is also moving into production environments, a change that would have seemed reckless just 18 months ago. According to O'Dell, enterprises now understand AI well enough to experiment confidently, but many are not ready for the cultural, operational, and security implications of developers — broadly defined — taking full control again.

Learn more from The New Stack about the latest around enterprise developers and AI:
Retool's New AI-Powered App Builder Lets Non-Developers Build Enterprise Apps
Solving 3 Enterprise AI Problems Developers Face
Enterprise Platform Teams Are Stuck in Day 2 Hell

Join our community of newsletter subscribers to stay on top of the news and at the top of your game. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Engineering hasn't become easier; writing code has just become faster. Time to stop fighting symptoms and start thinking in systems. In this Q&A, I break down the career advice I'd give to any engineer, from mastering architecture to knowing when to quit a high-paying job.

In this episode, we cover:
How "Systems Thinking" can be applied in practice
The "Golden Handcuffs": Why high salaries keep engineers in toxic jobs
How to transition into leadership without waiting for a title

Timestamps
00:00:00 - Intro
00:00:58 - How to innovate in stubborn legacy companies
00:04:49 - The "Golden Handcuffs": Money vs. Mental Health
00:07:27 - Stop solving symptoms: Systems Thinking explained
00:13:10 - Transitioning from Senior Engineer to Solutions Architect
00:15:08 - Communicating technical risks to non-technical bosses
00:17:48 - Proving leadership before you have the title
00:22:25 - My strategy for dealing with Imposter Syndrome
00:26:12 - Creating a "Zettelkasten" to retain technical knowledge
00:29:12 - The mindset that makes me stress-proof at work
00:33:10 - Learning to code with a product/design background
00:38:40 - Working with international remote teams
00:40:35 - Career Pivot: Software Engineering to Cyber Security
00:43:20 - Solopreneur opportunities in the "Education Gold Rush"
00:51:50 - Future Predictions: Vibe Coding vs. Vibe Engineering

#SoftwareEngineering #CareerAdvice #SystemsThinking
Louise Fahys, co-founder of Plan2Play Artificial intelligence is no longer a future concept in club management — it is already reshaping how private and commercial clubs operate. But according to Louise Fahys, we are only scratching the surface. Fahys is the co-founder and CTO of Plan2Play, a court and sport booking platform built by people who understand both software engineering and the realities of club life. Her view is clear: the next generation of club operations will be driven by intelligent, conversational interfaces — think ChatGPT-style applications — where members interact directly with technology to book courts, schedule lessons, manage guest play, and personalize their club experience. AI Is Going to Change Everything in Club Management AI is already easing the workload for Directors of Racquets, Golf, and Operations. Tasks that once required hours of manual setup — like creating round robins, allocating courts, or balancing player levels — can now be handled in seconds. Names go in, constraints go in, and AI produces fair, efficient scheduling by level, gender, or randomization. And that, Fahys says, is just the beginning. The real shift will come through dynamic pricing. Much like airlines adjust pricing based on demand, clubs will increasingly use AI to price court time, tee times, lessons, clinics, amenities, and guest fees in real time. One-hour bookings will replace fragmented half-hour gaps. Utilization improves. Revenue becomes more predictable. Member experience improves. Data Will Confirm What Clubs Already Suspect AI will also validate long-held assumptions in club operations. Fahys notes that most club professionals already understand that the average lifetime value of a pickleball participant differs from that of a tennis member — and that tennis often differs again from padel or squash. AI won't just confirm those differences; it will quantify them. 
That data will influence everything from facility development to membership structures, programming decisions, and long-term capital planning for both private clubs and commercial operators. The End of the “Fiefdom” Era One of the most challenging areas for clubs, particularly member-owned facilities, is change. Software transitions are often resisted — not because the technology isn't effective, but because long-standing habits and informal traditions are deeply ingrained. Unspoken court ownership. Preferred time slots. Long-tenured directors controlling access “the way it's always been done.” AI introduces transparency. And transparency challenges tradition. As clubs move toward data-driven scheduling and access, those informal systems may begin to fade. For some, that will feel uncomfortable. For others, it will represent progress — fairer access, clearer policies, and a better overall member experience. Looking Ahead Fahys believes the clubs that embrace AI thoughtfully — using it as a tool to enhance service rather than replace hospitality — will be the ones that thrive. The technology is not about removing people from the equation; it's about freeing professionals to focus on what matters most: relationships, programming, and experience. The future of club management is arriving faster than many expect. And for those willing to engage with it, the opportunities are significant.
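The round-robin generation Fahys mentions (names in, pairings out) is a classic scheduling algorithm, which is exactly why software handles it in seconds. A minimal sketch of the standard "circle method," with hypothetical player names; real club software would layer levels, courts, and constraints on top:

```python
# Illustrative round-robin generator (the classic "circle method"):
# fix one player, rotate the rest, and every player meets every other
# exactly once. Player names below are hypothetical.

def round_robin(players):
    """Yield one round of pairings per iteration."""
    ps = list(players)
    if len(ps) % 2:
        ps.append(None)  # odd player count: someone gets a bye each round
    n = len(ps)
    for _ in range(n - 1):
        # Pair first with last, second with second-to-last, and so on,
        # dropping any pairing that involves the bye slot.
        yield [(ps[i], ps[n - 1 - i]) for i in range(n // 2)
               if ps[i] is not None and ps[n - 1 - i] is not None]
        ps.insert(1, ps.pop())  # rotate everyone except the first player

for i, rnd in enumerate(round_robin(["Ann", "Bo", "Cy", "Di"]), 1):
    print(f"Round {i}: {rnd}")
```

Four players yield three rounds of two matches each, covering all six possible pairings; with five players the bye rotates automatically.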
We covered Peter Thiel's lecture "Competition is for Losers," the fifth session of "CS183B: How to Start a Startup," a course taught at Stanford in the fall semester of 2014.

Video: https://www.youtube.com/watch?v=3Fx5Q8xGU8k
Full course playlist: https://www.youtube.com/playlist?list=PLU630Cd0ZQCMeQiSvU7DJmDJDitdE7m7r
Kousuke's summary on X: https://x.com/kosuke_agos/status/2005171369659826258?s=20
"Good Strategy Bad Strategy": https://amzn.to/4pQwBXT

Share your thoughts with the hashtag #tilfm!
Feedback form: https://forms.gle/J2ioXHS98dYNoMbq5

Your co-hosts:
Tomoaki Imai, Noxx CTO https://x.com/tomoaki_imai bsky: https://bsky.app/profile/tomoaki-imai.bsky.social
Ryoichi Kato, Software Engineer https://x.com/ryo1kato bsky: https://bsky.app/profile/ryo1kato.bsky.social
This interview was recorded for GOTO State of the Art in October 2025.
https://gotopia.tech

Read the full transcription of this interview here:
https://gotopia.tech/articles/415

Nathen Harvey - DORA Lead, Product Manager at Google Cloud & Author
Charles Humble - Freelance Techie, Podcaster, Editor, Author & Consultant

RESOURCES
Nathen
https://bsky.app/profile/nathenharvey.bsky.social
https://x.com/nathenharvey
https://github.com/nathenharvey
https://www.linkedin.com/in/nathen
https://linktr.ee/nathenharvey
http://nathenharvey.com

Charles
https://bsky.app/profile/charleshumble.bsky.social
https://linkedin.com/in/charleshumble
https://mastodon.social/@charleshumble
https://conissaunce.com

Links
https://dora.dev
https://dora.dev/research/2025/dora-report
https://dora.dev/research/2024/dora-report
https://thenewstack.io/ebooks/kubernetes/kubernetes-at-the-edge-container-orchestration-at-scale

DESCRIPTION
Charles Humble speaks with Nathen Harvey, leader of Google's DORA research team, about the real impact of AI on software development. Drawing from surveys of nearly 5,000 practitioners, Nathen reveals a surprising finding: increased AI adoption initially correlates with decreased stability and throughput - the very metrics teams have optimized for decades.
The conversation explores why this happens, what capabilities organizations need before scaling AI adoption, and how AI acts as an amplifier of existing systems rather than a silver bullet. Nathen introduces DORA's seven AI capabilities model and discusses critical issues around trust, documentation, skill devaluation, and the future of software delivery in an AI-native world.

RECOMMENDED BOOKS
Emily Freeman & Nathen Harvey • 97 Things Every Cloud Engineer Should Know • https://amzn.to/3UlWBLt
Charles Humble • Professional Skills for Software Engineers • https://www.conissaunce.com/professional-skills-shortcut
Nicole Forsgren, Jez Humble & Gene Kim • Accelerate • https://amzn.to/442Rep0
Kim, Humble, Debois, Willis & Forsgren • The DevOps Handbook • https://amzn.to/47oAf3l
Jez Humble & David Farley • Continuous Delivery • https://amzn.to/452ZRky
Jez Humble, Joanne Molesky & Barry O'Reilly • Lean Enterprise • https://amzn.to/47pcOXD
Adrienne Braganza Tacke • "Looks Good to Me": Constructive Code Reviews • https://amzn.to/3E75XrD
Yevgeniy Brikman • Fundamentals of DevOps and Software Delivery • https://amzn.to/3WMPMFU

CHANNEL MEMBERSHIP BONUS
Join this channel to get early access to videos & other perks:
https://www.youtube.com/channel/UCs_tLP3AiwYKwdUHpltJPuA/join

Looking for a unique learning experience?
Attend the next GOTO conference near you! Get your ticket: gotopia.tech

SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted daily!
Join Maariyaah Afzal, Founder and CEO of Silas Insurtech, for a fascinating look at the intersection of deep domain expertise and cutting-edge technology. Maariyaah spent years in the trenches at AIG and Lloyd's of London, experiencing firsthand the frustration of spending more time fighting emails and PDFs than analyzing risk. In this episode, she shares her journey of pivoting from underwriting to software engineering to build the solution the industry desperately needed: an AI-driven platform that turns complex documents into structured, decision-ready insights.
We talked about the economic impact AI agents will have once they become widespread in society.

An Economy of AI Agents: https://arxiv.org/pdf/2509.01063
Anthropic's technical report on safety: https://www-cdn.anthropic.com/4263b940cabb546aa0e3283f35b686f4f3b2ff47.pdf

Share your thoughts with the hashtag #tilfm!
Feedback form: https://forms.gle/J2ioXHS98dYNoMbq5

Your co-hosts:
Tomoaki Imai, Noxx CTO https://x.com/tomoaki_imai bsky: https://bsky.app/profile/tomoaki-imai.bsky.social
Ryoichi Kato, Software Engineer https://x.com/ryo1kato bsky: https://bsky.app/profile/ryo1kato.bsky.social
From the "Hard to Kill" special forces event in Vegas to the science of why complaining physically rewires your brain. We debate the widening political gap between young men and women, break down the "Free Soul" aesthetic of the guy doing better than you, and ask if Claude Code has officially killed the junior developer.

Welcome to the Alfalfa Podcast
Chris Aniszczyk, co-founder and CTO of the Cloud Native Computing Foundation (CNCF), argues that AI agents resemble microservices at a surface level, though they differ in how they are scaled and managed. In an interview ahead of KubeCon/CloudNativeCon Europe, he emphasized that being "AI native" requires being cloud native by default. Cloud-native technologies such as containers, microservices, Kubernetes, gRPC, Prometheus, and OpenTelemetry provide the scalability, resilience, and observability needed to support AI systems at scale. Aniszczyk noted that major AI platforms like ChatGPT and Claude already rely on Kubernetes and other CNCF projects.

To address growing complexity in running generative and agentic AI workloads, the CNCF has launched efforts to extend its conformance programs to AI. New requirements—such as dynamic resource allocation for GPUs and TPUs and specialized networking for inference workloads—are being handled inconsistently across the industry. CNCF aims to establish a baseline of compatibility to ensure vendor neutrality. Aniszczyk also highlighted CNCF incubation projects like Metal³ for bare-metal Kubernetes and OpenYurt for managing edge-based Kubernetes deployments.

Learn more from The New Stack about CNCF and what to expect in 2026:
Why the CNCF's New Executive Director Is Obsessed With Inference
CNCF Dragonfly Speeds Container, Model Sharing with P2P

Join our community of newsletter subscribers to stay on top of the news and at the top of your game. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
We discussed "Balancing Coupling in Software Design: Universal Design Principles for Architecting Modular Software" https://amzn.to/4pla5X5 (Japanese edition, 2024: https://amzn.to/49ddlxD).

The book proposes three dimensions for evaluating coupling between software modules: strength, distance, and volatility.
Stability of a component = NOT (volatility AND strength)
Cost of cascading changes = volatility AND distance
Modularity = strength XOR distance
Complexity = NOT modularity
Maintenance effort = strength AND distance AND volatility

Share your thoughts with the hashtag #tilfm!
Feedback form: https://forms.gle/J2ioXHS98dYNoMbq5

Your co-hosts:
Tomoaki Imai, Noxx CTO https://x.com/tomoaki_imai bsky: https://bsky.app/profile/tomoaki-imai.bsky.social
Ryoichi Kato, Software Engineer https://x.com/ryo1kato bsky: https://bsky.app/profile/ryo1kato.bsky.social
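The boolean relations the episode summarizes can be encoded literally, treating each dimension as a flag meaning "high." This is a rough sketch of the heuristics as stated in the show notes, not a faithful reproduction of the book's full model; the function name and field names are illustrative:

```python
# Literal encoding of the coupling heuristics discussed in the episode.
# Each argument is a boolean meaning "this dimension is high."

def coupling_report(strength, distance, volatility):
    modularity = strength != distance  # strength XOR distance
    return {
        "stability": not (volatility and strength),
        "cascading_change_cost": volatility and distance,
        "modularity": modularity,
        "complexity": not modularity,
        "maintenance_effort": strength and distance and volatility,
    }

# Worst case: strongly coupled, far apart, and frequently changing.
print(coupling_report(strength=True, distance=True, volatility=True))
```

Strong coupling at a distance with high volatility maximizes maintenance effort, while strong coupling kept local (high strength, low distance) scores as modular with low effort, matching the intuition that tightly related things should live close together.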
Corey Zumar is a Product Manager at Databricks, working on MLflow and LLM evaluation, tracing, and lifecycle tooling for generative AI. Jules Damji is a Lead Developer Advocate at Databricks, working on Spark, lakehouse technologies, and developer education across the data and AI community. Danny Chiao is an Engineering Leader at Databricks, working on data and AI observability, quality, and production-grade governance for ML and agent systems.

MLflow Leading Open Source // MLOps Podcast #356 with Databricks' Corey Zumar, Jules Damji, and Danny Chiao

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter

Shoutout to Databricks for powering this MLOps Podcast episode.

// Abstract
MLflow isn't just for data scientists anymore—and pretending it is is holding teams back. Corey Zumar, Jules Damji, and Danny Chiao break down how MLflow is being rebuilt for GenAI, agents, and real production systems where evals are messy, memory is risky, and governance actually matters. The takeaway: if your AI stack treats agents like fancy chatbots or splits ML and software tooling, you're already behind.

// Bio
Corey Zumar
Corey has been working as a Software Engineer at Databricks for the last 4 years and has been an active contributor to and maintainer of MLflow since its first release.

Jules Damji
Jules is a developer advocate at Databricks Inc., an MLflow and Apache Spark™ contributor, and Learning Spark, 2nd Edition coauthor. He is a hands-on developer with over 25 years of experience. He has worked at leading companies, such as Sun Microsystems, Netscape, @Home, Opsware/LoudCloud, VeriSign, ProQuest, Hortonworks, Anyscale, and Databricks, building large-scale distributed systems. He holds a B.Sc. and M.Sc. in computer science (from Oregon State University and Cal State, Chico, respectively) and an MA in political advocacy and communication (from Johns Hopkins University).

Danny Chiao
Danny is an engineering lead at Databricks, leading efforts around data observability (quality, data classification). Previously, Danny led efforts at Tecton (+ Feast, an open source feature store) and Google to build ML infrastructure and large-scale ML-powered features. Danny holds a Bachelor's Degree in Computer Science from MIT.

// Related Links
Website: https://mlflow.org/
https://www.databricks.com/

~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter @mlopscommunity (https://x.com/mlopscommunity) or LinkedIn (https://go.mlops.community/linkedin)
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Corey on LinkedIn: /corey-zumar/
Connect with Jules on LinkedIn: /dmatrix/
Connect with Danny on LinkedIn: /danny-chiao/

Timestamps:
[00:00] MLflow Open Source Focus
[00:49] MLflow Agents in Production
[00:00] AI UX Design Patterns
[12:19] Context Management in Chat
[19:24] Human Feedback in MLflow
[24:37] Prompt Entropy and Optimization
[30:55] Evolving MLFlow Personas
[36:27] Persona Expansion vs Separation
[47:27] Product Ecosystem Design
[54:03] PII vs Business Sensitivity
[57:51] Wrap up
Dr. Alexander Kihm is the Founder and CEO of POMA AI, a chunking technology that ingests large libraries of data (PDFs, JSONs, images), chunks it with its proprietary solution, and outputs context-rich data that can be consumed with higher fidelity and accuracy while using far fewer tokens.

Listen to Alex talk about how he had front-row seats to the Volkswagen scandal, how he got to contribute directly to the legislature of Andorra, where chunking fits in the world of Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG), what the elusive "tree" that POMA AI produces actually is, and much more. Hosted by Perry Tiu.

Episode Links:
• POMA AI: https://www.poma-ai.com
• POMA AI's Twitter: https://x.com/_POMA_AI_
• POMA AI's LinkedIn: https://www.linkedin.com/company/poma-science/
• Alex's LinkedIn: https://de.linkedin.com/in/dr-alexander-kihm-27a902338

Interested in being on the show? contact@perrytiu.com
Sponsorship enquiries: sponsor@perrytiu.com

Follow Podcast Ruined by a Software Engineer and leave a review
• Apple Podcasts: https://apple.co/3RASg8x
• Spotify: https://open.spotify.com/show/6Is85V7q2hLIBtmynIhnJr?si=perrytiu
• Youtube: https://youtube.com/@perrytiu

More Podcast Ruined by a Software Engineer
• Website: https://perrytiu.com/podcast
• Merch: https://perrytiu.com/shop
• RSS Feed: https://perrytiu.com/podcast/rss.xml

Follow Perry Tiu
• Twitter: https://twitter.com/perry_tiu
• LinkedIn: https://linkedin.com/in/perrytiu
• Instagram: https://instagram.com/doctorpoor
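POMA AI's tree-based chunking is proprietary and not shown in the episode, but for background, the baseline kind of chunking used in RAG pipelines can be sketched in a few lines: split on paragraphs, then pack paragraphs into chunks under a size budget. The character budget and function name are illustrative assumptions, not POMA's method:

```python
# Minimal generic RAG-style chunker: split text on blank lines, then pack
# paragraphs into chunks that stay under a size budget. An illustrative
# baseline, not POMA AI's proprietary tree-based approach.

def chunk(text, max_chars=200):
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current = [], ""
    for p in paragraphs:
        # Start a new chunk when adding this paragraph would bust the budget.
        if current and len(current) + len(p) + 2 > max_chars:
            chunks.append(current)
            current = p
        else:
            current = f"{current}\n\n{p}" if current else p
    if current:
        chunks.append(current)
    return chunks

doc = ("Section one." + " filler" * 20) + "\n\n" + ("Section two." + " filler" * 20)
print(len(chunk(doc, max_chars=150)))  # 2
```

Naive splitting like this loses document structure, which is exactly the gap context-rich approaches like POMA's "tree" aim to close.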
API sprawl creates hidden security risks and missed revenue opportunities when organizations lose visibility into the APIs they build. According to IBM's Neeraj Nargund, APIs power the core business processes enterprises want to scale, making automated discovery, observability, and governance essential, especially when thousands of APIs exist across teams and environments. Strong governance helps identify endpoints, remediate shadow APIs, and manage risk at scale. At the same time, enterprises increasingly want to monetize the data APIs generate by packaging insights into products and by pricing and segmenting usage, a need amplified by the rise of AI.

To address these challenges, Nargund highlights "smart APIs," which are infused with AI to provide context awareness, event-driven behavior, and AI-assisted governance throughout the API lifecycle. These APIs help interpret and act on data, integrate with AI agents, and support real-time, streaming use cases.

IBM's latest API Connect release embeds AI across API management and is designed for hybrid and multi-cloud environments, offering centralized governance, observability, and control through a single hybrid control plane.

Learn more from The New Stack about smart APIs:
Redefining API Management for the AI-Driven Enterprise
How To Accelerate Growth With AI-Powered Smart APIs
Wrangle Account Sprawl With an AI Gateway

Join our community of newsletter subscribers to stay on top of the news and at the top of your game. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Geoffrey Huntley argues that while software development as a profession is effectively dead, software engineering is more alive—and critical—than ever before. In this episode, the creator of the viral "Ralph" agent joins us to explain how simple bash loops and deterministic context allocation are fundamentally changing the unit economics of code. We dive deep into the mechanics of managing "context rot," avoiding "compaction," and why building your own "Gas Town" of autonomous agents is the only way to survive the coming rift.

LinearB: Measure the impact of GitHub Copilot and Cursor

Follow the show:
Subscribe to our Substack
Follow us on LinkedIn
Subscribe to our YouTube Channel
Leave us a Review

Follow the hosts:
Follow Andrew
Follow Ben
Follow Dan

Follow today's guest(s):
Geoffrey's Website & Blog: ghuntley.com
Build Your Own Coding Agent Workshop: ghuntley.com/agent
Ralph Wiggum as a Software Engineer: ghuntley.com/ralph
Steve Yegge's "Welcome to Gas Town": Read on Medium
The "Cursed" Programming Language: github.com/ghuntley/cursed

OFFERS
Start Free Trial: Get started with LinearB's AI productivity platform for free.
Book a Demo: Learn how you can ship faster, improve DevEx, and lead with confidence in the AI era.

LEARN ABOUT LINEARB
AI Code Reviews: Automate reviews to catch bugs, security risks, and performance issues before they hit production.
AI & Productivity Insights: Go beyond DORA with AI-powered recommendations and dashboards to measure and improve performance.
AI-Powered Workflow Automations: Use AI-generated PR descriptions, smart routing, and other automations to reduce developer toil.
MCP Server: Interact with your engineering data using natural language to build custom reports and get answers on the fly.
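The "deterministic context allocation" idea behind a Ralph-style loop can be sketched without any particular agent: every iteration rebuilds the prompt from the same fixed spec plus a size-capped tail of recent output, so the context never grows without bound. Everything here is a hypothetical illustration; `run_agent` is a placeholder, not a real CLI, and the character budget stands in for a token budget:

```python
# Sketch of a loop with a fixed context budget, in the spirit of running an
# agent in a simple loop: each pass rebuilds the prompt from the same spec
# plus a truncated tail of prior output, so context size stays deterministic
# instead of accumulating ("context rot"). run_agent is a placeholder.

CONTEXT_BUDGET = 400  # characters here; tokens in a real setup

def build_context(spec, recent_output, budget=CONTEXT_BUDGET):
    tail_room = budget - len(spec)
    if tail_room <= 0:
        return spec[:budget]  # spec alone fills the budget
    return spec + recent_output[-tail_room:]

def run_agent(context):
    # Placeholder: a real loop would invoke an actual agent here.
    return f"worked on: {context[:30]}..."

def ralph_loop(spec, iterations=3):
    output = ""
    for _ in range(iterations):
        ctx = build_context(spec, output)
        assert len(ctx) <= CONTEXT_BUDGET  # the deterministic allocation
        output += run_agent(ctx)
    return output

print(ralph_loop("Implement the TODO list in PROMPT.md. ")[:40])
```

The design choice worth noticing is that the context is a pure function of the spec and a bounded tail, which is what makes each iteration's cost predictable.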
A CloudBees survey reveals that enterprise migration projects often fail to deliver promised modernization benefits. In 2024, 57% of enterprises spent over $1 million on migrations, with average overruns costing $315,000 per project. In The New Stack Makers podcast, CloudBees CEO Anuj Kapur describes this pattern as "the migration mirage," where organizations chase modernization through costly migrations that push value further into the future. Findings from the CloudBees 2025 DevOps Migration Index show leaders routinely underestimate the longevity and resilience of existing systems. Kapur notes that applications often outlast CIOs, yet new leadership repeatedly mandates wholesale replacement. The report argues modernization has been mistakenly equated with migration, which diverts resources from customer value to replatforming efforts. Beyond financial strain, migration erodes developer morale by forcing engineers to rework functioning systems instead of building new solutions. CloudBees advocates meeting developers where they are, setting flexible guardrails rather than enforcing rigid platforms. Kapur believes this approach, combined with emerging code assistance tools, could spark a new renaissance in software development by 2026.

Learn more from The New Stack about enterprise modernization:
Why AI Alone Fails at Large-Scale Code Modernization
How AI Can Speed up Modernization of Your Legacy IT Systems

Join our community of newsletter subscribers to stay on top of the news and at the top of your game. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Show Notes
https://kerrick.blog/articles/2025/confessions-of-a-software-developer-no-more-self-censorship/
https://overreacted.io/things-i-dont-know-as-of-2018/

Share your thoughts on the episode with the hashtag #tilfm!
Feedback form: https://forms.gle/J2ioXHS98dYNoMbq5

Your co-hosts:
Tomoaki Imai, Noxx CTO https://x.com/tomoaki_imai bsky: https://bsky.app/profile/tomoaki-imai.bsky.social
Ryoichi Kato, Software Engineer https://x.com/ryo1kato bsky: https://bsky.app/profile/ryo1kato.bsky.social
Jason Casey's Journey: From Software Engineering to Cybersecurity

In this episode of 'Breaking into Cybersecurity,' we chat with Jason Casey, who shares his unique path from being a software engineer to a cybersecurity expert. Jason discusses his initial work on network protocols, his transition to big data intelligence, and eventually moving into cybersecurity defense. He stresses the importance of curiosity and understanding fundamental principles, and provides insights into the evolving role of AI in programming and cybersecurity. Jason also shares how he has helped others grow in their careers, and offers valuable advice for those looking to break into and excel in the cybersecurity field. Tune in to hear his inspiring journey and learn practical tips for success.

00:00 Introduction to Jason Casey's Cybersecurity Journey
01:00 Early Career and Transition to Cybersecurity
03:24 Deep Dive into Networking and Security Challenges
06:37 Big Data Intelligence and Forensics
08:54 Principles and Fundamentals in Tech Careers
11:58 Mentorship and Career Development
16:04 AI in Programming and Cybersecurity
25:50 Final Advice for Aspiring Cybersecurity Professionals

Sponsored by CPF Coaching LLC - http://cpf-coaching.com/

Breaking into Cybersecurity is a conversation about what our guests did before, why they pivoted into cyber, the process they went through, how they keep up, and advice/tips/tricks along the way.

The Breaking into Cybersecurity Leadership Series is an additional series focused on cybersecurity leadership, hearing directly from different leaders in cybersecurity (high and low) on what it takes to be a successful leader.
We focus on the skills and competencies associated with cybersecurity leadership, as well as tips/tricks/advice from cybersecurity leaders.

Check out our books:
Develop Your Cybersecurity Career Path: How to Break into Cybersecurity at Any Level https://amzn.to/3443AUI
Hack the Cybersecurity Interview: Navigate Cybersecurity Interviews with Confidence, from Entry-Level to Expert Roles https://www.amazon.com/Hack-Cybersecurity-Interview-Interviews-Entry-level/dp/1835461298/
Hacker Inc.: Mindset For Your Career https://www.amazon.com/Hacker-Inc-Mindset-Your-Career/dp/B0DKTK1R93/

---

About the hosts:
Renee Small is the CEO of Cyber Human Capital, one of the leading human resources business partners in the field of cybersecurity, and author of the Amazon #1 best-selling book, Magnetic Hiring: Your Company's Secret Weapon to Attracting Top Cyber Security Talent. She is committed to helping leaders close the cybersecurity talent gap by hiring from within and encouraging more people to enter the lucrative cybersecurity profession. https://www.linkedin.com/in/reneebrownsmall/
Download a free copy of her book at magnetichiring.com/book

Christophe Foulon focuses on helping secure people and processes, drawing on a solid understanding of the technologies involved. He has over ten years of experience as an Information Security Manager and Cybersecurity Strategist. He is passionate about customer service, process improvement, and information security. He has significant expertise in optimizing technology use while balancing its implications for people, processes, and information security, through a consultative approach. https://www.linkedin.com/in/christophefoulon/
Find out more about CPF-Coaching at https://www.cpf-coaching.com

- Website: https://www.cyberhubpodcast.com/breakingintocybersecurity
- Podcast: https://podcasters.spotify.com/pod/show/breaking-into-cybersecuri
- YouTube: https://www.youtube.com/c/BreakingIntoCybersecurity
- LinkedIn: https://www.linkedin.com/company/breaking-into-cybersecurity/
- Twitter: https://twitter.com/BreakintoCyber
- Twitch: https://www.twitch.tv/breakingintocybersecurity
IBM's recent acquisitions of Red Hat, HashiCorp, and its planned purchase of Confluent reflect a deliberate strategy to build the infrastructure required for enterprise AI. According to IBM's Sanil Nambiar, AI depends on consistent hybrid cloud runtimes (Red Hat), programmable and automated infrastructure (HashiCorp), and real-time, trustworthy data (Confluent). Without these foundations, AI cannot function effectively. Nambiar argues that modern, software-defined networks have become too complex for humans to manage alone, overwhelmed by fragmented data, escalating tool sophistication, and a widening skills gap that makes veteran “tribal knowledge” hard to transfer. Trust, he says, is the biggest barrier to AI adoption in networking, since errors can cause costly outages. To address this, IBM launched IBM Network Intelligence, a “network-native” AI solution that combines time-series foundation models with reasoning large language models. This architecture enables AI agents to detect subtle warning patterns, collapse incident response times, and deliver accurate, trustworthy insights for real-world network operations.

Learn more from The New Stack about AI infrastructure and IBM's approach:
AI in Network Observability: The Dawn of Network Intelligence
How Agentic AI Is Redefining Campus and Branch Network Needs

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Software engineers often think adding AI is just a simple API call, but moving from a Proof of Concept to a stable production system requires a completely different mindset. Maria Vechtomova breaks down the harsh reality of MLOps, why rigorous evaluation is non-negotiable, and why autonomous agents are riskier than you think.

In this episode, we cover:
The essential MLOps principles every software engineer must learn
How to bridge the gap between a demo and a production-grade solution
Strategies for evaluating agents and detecting model drift
The security risks of customer service agents and prompt injection
Practical tips for using AI tools to boost your own productivity

Connect with Maria: https://www.linkedin.com/in/maria-vechtomova

Timestamps:
00:00:00 - Intro
00:01:25 - Why the AI Hype Was Actually Good for Monitoring
00:03:07 - Real-World AI Use Cases That Deliver Actual Value
00:05:16 - MLOps Basics Every Software Engineer Needs to Know
00:08:08 - The Hidden Complexity of Deploying Agents to Production
00:12:02 - Minimum Requirements for Moving from PoC to Production
00:15:41 - Step-by-Step Guide to Evaluating AI Features Before Launch
00:18:08 - How to Handle Data Labeling and Drift Detection
00:21:55 - Why You Likely Need Custom Tools for Monitoring
00:24:56 - Why Engineers Build AI Features They Don't Need
00:26:01 - How Software Engineers Can Learn Data Science Principles
00:31:36 - The Dangerous Security Risks of Autonomous Customer Service Agents
00:34:44 - Why Human-in-the-Loop is Essential for Avoiding Reputational Damage
00:36:18 - Boosting Developer Productivity with Opinionated AI Prompts
00:39:20 - Using Voice Notes and AI to Organize Your Life

#MLOps #SoftwareEngineering #ArtificialIntelligence
"Thinking in Systems" (Japanese edition: 世界はシステムで動く ―― いま起きていることの本質をつかむ考え方) https://amzn.to/3LcfAcs
"Waltzing with Bears: Managing Risk on Software Projects" (Japanese edition: 熊とワルツを リスクを愉しむプロジェクト管理) https://amzn.to/499lbIE
(In the episode we attributed this book to "Weinberg," but that was a mistake; the author is Tom DeMarco.)

Share your thoughts on the episode with the hashtag #tilfm!
Feedback form: https://forms.gle/J2ioXHS98dYNoMbq5

Your co-hosts:
Tomoaki Imai, Noxx CTO https://x.com/tomoaki_imai bsky: https://bsky.app/profile/tomoaki-imai.bsky.social
Ryoichi Kato, Software Engineer https://x.com/ryo1kato bsky: https://bsky.app/profile/ryo1kato.bsky.social
In this episode I talk with Eleni Konior about her path from economics to graphic design to programming, and how creative skills benefit technical work. We discuss building customer-focused features, the importance of assuming the customer's role, and AI in products beyond chatbots—like proactively surfacing recommendations based on user behavior.

Links:
datgreekchick.com
Nonsense Monthly
In this episode I talk with Bekki Freeman, staff engineer at Caribou and co-organizer of Rocky Mountain Ruby, about legacy code, refactoring long-running applications, and the psychological skills required to get team buy-in for technical improvements.

Links:
Bekki Freeman on LinkedIn
Rocky Mountain Ruby
Caribou
Nonsense Monthly
This interview was recorded for GOTO Unscripted.
https://gotopia.tech
Read the full transcription of this interview here: https://gotopia.tech/articles/408

Michael Nygard - Chief Architect at Nubank & Author of "Release It!"
Charles Humble - Freelance Techie, Podcaster, Editor, Author & Consultant

FULL TALK TITLE
Building Software That Survives: Autonomy, Architecture & Alignment at Scale

RESOURCES
Michael
https://www.linkedin.com/in/mtnygard
https://twitter.com/mtnygard
http://www.michaelnygard.com
Charles
https://bsky.app/profile/charleshumble.bsky.social
https://linkedin.com/in/charleshumble
https://mastodon.social/@charleshumble
https://conissaunce.com

DESCRIPTION
Michael Nygard, author of the influential "Release It!" and Chief Architect at Nubank, discusses his journey from programmer to technical leader.
In this conversation, he shares insights from major transformation projects at Sabre and Nubank, exploring the nuances of centralization versus autonomy, the often-misunderstood implications of Conway's Law, and how architectural boundaries can reduce the need for constant organizational alignment.
He emphasizes that effective technical leadership involves more than reorganizations - it requires understanding communication structures, celebrating the right behaviors, and creating systems that enable teams to operate independently within well-defined boundaries.

RECOMMENDED BOOKS
Michael Nygard • Release It! 2nd Edition • https://amzn.to/3WJeKV8
Michael Nygard • Release It! 1st Edition • https://amzn.to/3XCkiRf
Richard Monson-Haefel • 97 Things Every Software Architect Should Know • https://amzn.to/3JdRYU2
Charles Humble • Professional Skills for Software Engineers • https://www.conissaunce.com/professional-skills-shortcut
Patterson, Grenny, McMillan & Switzler • Crucial Conversations • https://amzn.to/3LhGHTa
Yevgeniy Brikman • Fundamentals of DevOps and Software Delivery • https://amzn.to/3WMPMFU
Tod Golding • Building Multi-Tenant SaaS Architectures • https://amzn.to/3YfM49o
Jacqui Read • Communication Patterns • https://amzn.to/3E37lvv
Matthew Skelton & Manuel Pais • Team Topologies • http://amzn.to/3sVLyLQ
James Stanier • Become an Effective Software Engineering Manager • https://amzn.to/3vHrx1E

Bluesky
Twitter
Instagram
LinkedIn
Facebook

CHANNEL MEMBERSHIP BONUS
Join this channel to get early access to videos & other perks:
https://www.youtube.com/channel/UCs_tLP3AiwYKwdUHpltJPuA/join

Looking for a unique learning experience?
Attend the next GOTO conference near you! Get your ticket: gotopia.tech

SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted daily!
Yogi Goel, cofounder and CEO of Maxima AI, breaks down how he hires outlier talent: people who think like future founders and thrive when the plan changes fast. We get practical on what to look for beyond pedigree, how to assess it without relying on easy resume signals, and how culture scales when your team doubles.

Yogi also shares what Maxima AI is building, an agentic platform for enterprise accounting that automates day-to-day operations and month-end work, and why the best teams win by pairing speed with real ownership.

Key takeaways
• Outlier candidates often look "non-standard" on paper; the signal is founder mentality, fast thinking, grit, and a point to prove
• Hiring gets easier when it is always on; keep a living bench of great people long before you have the headcount
• Use long-form conversations to assess how someone thinks, not just what they have done; ask for their life story and listen for the choices they highlight
• Train the specifics, but set a baseline for domain aptitude, then coach the narrow parts once the fundamentals are there
• Culture scales through leaders and through what you reward and penalize, not through posters and slogans

Timestamped highlights
00:39 What Maxima AI does and the real value of agentic accounting
01:38 Defining an outlier candidate as a future founder, and why school matters less than you think
07:34 The conveyor-belt approach to recruiting: building an inventory of great people before you need them
11:35 Where to draw the line on training: test for general aptitude, coach the specifics
14:20 How diverse teams disagree productively: bring evidence, run small bets, then double down or pivot
18:25 Scaling culture with values-driven leaders, and the simple rule of reward versus penalty

A line worth keeping
"Culture is two things: what you reward and what you penalize."

Pro tips you can steal
• Keep a short list of the best people you have ever met for each function; update it constantly
• Ask candidates for their journey from day zero, then pay attention to what they choose to emphasize
• When the team disagrees, grab quick evidence (customer texts, small pulse checks), then place a small bet that will not kill the company
• Expect great people to want autonomy and scope; manage like a mentor, not a hovercraft

Call to action
If this episode helped you rethink hiring, share it with a founder or engineering leader who is building a team right now. Follow the show for more conversations on people, impact, and technology, and connect with Yogi Goel on LinkedIn by searching his name and Maxima AI.
Ari Zilka, founder of MyDecisive.ai and former Hortonworks CPO, argues that most observability vendors now offer essentially identical, reactive dashboards that highlight problems only after systems are already broken. After speaking with all 23 observability vendors at KubeCon + CloudNativeCon North America 2025, Zilka said these tools fail to meaningfully reduce mean time to resolution (MTTR), a long-standing demand he heard repeatedly from thousands of CIOs during his time at New Relic.

Zilka believes observability must shift from reactive monitoring to proactive operations, where systems automatically respond to telemetry in real time. MyDecisive.ai is his attempt to solve this, acting as a “bump in the wire” that intercepts telemetry and uses AI-driven logic to trigger actions like rolling back faulty releases.

He also criticized the rising cost and complexity of OpenTelemetry adoption, noting that many companies now require large, specialized teams just to maintain OTel stacks. MyDecisive aims to turn OpenTelemetry into an enterprise-ready service that reduces human intervention and operational overhead.

Learn more from The New Stack about OpenTelemetry:
Observability Is Stuck in the Past. Your Users Aren't.
Setting Up OpenTelemetry on the Frontend Because I Hate Myself
How to Make OpenTelemetry Better in the Browser

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
AI can scaffold an app in seconds, but can it refactor that thousand-line React file when the first bug hits production? In this episode, I sit down with Brian Jenney, a software engineer and program owner of the coding bootcamp Parsity, to draw a hard line between "code that runs" and "code that lasts." From mentoring career-switchers to stress-testing AI in real-world pipelines, Brian shares why craftsmanship and product judgment still beat copy-paste prompts.
Episode 406 of The VentureFizz Podcast features Bill Simmons, serial entrepreneur and Co-Founder of Orbit.me and DataXu.

I'm going to use the cliché: Bill actually is a rocket scientist. His background is in aerospace engineering, he holds a PhD from MIT, and he worked on 13 space missions. In addition, he was part of a major government competition for simulating options for space travel to Mars. His team simulated 35 billion possible options to generate 1,100 different Mars missions that were all feasible. This groundbreaking technology, which leveraged Big Data in what we now recognize as AI and machine learning, launched his first company, DataXu, in 2008.

DataXu became a pioneer in the programmatic ad platform category and raised over $87M in funding. The company scaled into a major player in the Boston tech scene and was acquired by Roku in 2019.

Now, Bill is tackling a challenge we all likely face with his new startup, Orbit.me. Information is scattered across texts, multiple email inboxes, LinkedIn, WhatsApp, and social apps—it's impossible to keep track of what matters. Orbit.me is a perfect use case for AI, organizing your scattered messages into "Orbits," dedicated spaces built around the real contexts of your life, like parenting, work, or other important matters.

Chapters:
00:00 Introduction
02:41 Current Status of Space Travel & Mars
08:34 Bill Simmons Background Story
10:21 Academic Experience
13:12 Space Missions including Mars Research
17:19 How DataXu Came to Fruition & Focus on AdTech
20:03 Scaling DataXu & Market Strategies
23:15 The Competitive Landscape of AdTech
26:20 The Technology Behind Real-Time Bidding
29:49 Building DataXu's Culture During Growth
32:51 DataXu Acquisition by Roku
35:54 The Transition to Product Management & Experience at The Trade Desk
37:13 The Details of Orbit.me
43:07 The Team Behind Orbit.me
48:58 The Evolving Role of Software Engineers in the AI Era
52:01 Lightning Round Questions
Thinking of building your own AI security tool? In this episode, Santiago Castiñeira, CTO of Maze, breaks down the realities of the "Build vs. Buy" debate for AI-first vulnerability management.

While building a prototype script is easy, scaling it into a maintainable, audit-proof system is a massive undertaking requiring specialized skills often missing in security teams. Santiago also warns against the "RAG drug": leaning too heavily on Retrieval-Augmented Generation for precise technical data like version numbers, where it often fails.

The conversation gets into the architecture required for a true AI-first system, moving beyond simple chatbots to complex multi-agent workflows that can reason about context and risk. We also cover the critical importance of rigorous "evals" over "vibe checks" to ensure AI reliability, the hidden costs of LLM inference at scale, and why well-crafted agents might soon be indistinguishable from super-intelligence.

Guest Socials - Santiago's LinkedIn
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - YouTube
- Cloud Security Newsletter

If you are interested in AI cybersecurity, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:00) Who is Santiago Castiñeira?
(02:40) What is "AI-First" Vulnerability Management? (Rules vs. Reasoning)
(04:55) The "Build vs. Buy" Debate: Can I Just Use ChatGPT?
(07:30) The "Bus Factor" Risk of Internal Tools
(08:30) Why MCP (Model Context Protocol) Struggles at Scale
(10:15) The Architecture of an AI-First Security System
(13:45) The Problem with "Vibe Checks": Why You Need Proper Evals
(17:20) Where to Start if You Must Build Internally
(19:00) The Hidden Need for Data & Software Engineers in Security Teams
(21:50) Managing Prompt Drift and Consistency
(27:30) The Challenge of Changing LLM Models (Claude vs. Gemini)
(30:20) Rethinking Vulnerability Management Metrics in the AI Era
(33:30) Surprises in AI Agent Behavior: "Let's Get Back on Topic"
(35:30) The Hidden Cost of AI: Token Usage at Scale
(37:15) Multi-Agent Governance: Preventing Rogue Agents
(41:15) The Future: Semi-Autonomous Security Fleets
(45:30) Why RAG Fails for Precise Technical Data (The "RAG Drug")
(47:30) How to Evaluate AI Vendors: Is it AI-First or AI-Sprinkled?
(50:20) Common Architectural Mistakes: Vibe Evals & Cost Ignorance
(56:00) Unpopular Opinion: Well-Crafted Agents vs. Super Intelligence
(58:15) Final Questions: Kids, Argentine Steak, and Closing
Join career coach and former finance professional turned software engineer, JC Clark, as she shares hard-won insights from her journey of 1800+ job applications. Discover insider tips to land high-paying remote jobs, build powerful professional networks, and navigate career changes. Learn how to thrive in virtual workplaces while maintaining work-life balance, especially for working parents.