The Tech Trek


Everyone is on their own trek, and we can all use a little help along the way. The Tech Trek features conversations with top leaders in technology on how they are transforming their industries and organizations.

Elevano


    • Latest episode: Feb 27, 2026
    • New episodes: weekdays
    • Average duration: 26m
    • 633 episodes



    Latest episodes from The Tech Trek

    From Exit to Starting Over: What Nobody Tells You About Building Again

    Play Episode Listen Later Feb 27, 2026 19:30


    Harry Gestetner built a creator economy platform in college, sold it, and walked away. Then he did the one thing nobody expected: he jumped back in and started building hardware.

    In this episode, the founder and CEO of Orion (a sleep tech company making smart mattress covers) sits down to talk about what really happens after an exit, why most founders can't stay away from building, and what changes when you go from software to physical products. Harry shares what surprised him about the acquisition process, how he thinks about evaluating new startup ideas, and why he believes hardware is "life on hard mode." He also gets into the mental side of founding, from managing stress to staying sharp when everything feels uncertain.

    What You'll Walk Away With
    • Going through an exit sounds like the finish line, but Harry explains why it's actually a reset. You trade ownership and freedom for financial security, and at some point, most founders start craving the creative control they gave up.
    • Not every idea deserves your time. Harry talks about running new concepts through a "disqualification period" where you actively try to poke holes before committing. The ones that survive that process are worth going all in on.
    • Hardware changes the game. Software lets you pivot fast. Hardware gives you 18-month product cycles, inventory headaches, and supply chain complexity. Conviction has to be higher before you start.
    • The best startup ideas come from problems you and your friends actually have. If enough people share that problem, you've got a market.
    • Knowledge compounds across startups. Harry compares the founder journey to an elastic band: once you've been stretched, you never go back to your original form. Every challenge you survive makes the next one more manageable.

    Timestamped Highlights
    [00:34] What Orion actually does and how it makes six hours of sleep feel like ten
    [03:01] The emotional arc of an exit that nobody talks about, from relief to restlessness
    [05:34] How Harry evaluates startup ideas and why he uses a disqualification process
    [09:30] Why building hardware is "life on hard mode" and what made him take it on anyway
    [10:39] The elastic band theory of founder growth and why learning compounds over time
    [15:49] His advice for early-career founders: pick one thing and go all in

    Words That Stuck
    "As a founder, you're sort of like an elastic band. The more you get stretched, you never go back to the original form."

    Tactical Takeaways
    • Run every new idea through a disqualification period. Actively look for reasons it won't work before you commit. The ideas that survive that scrutiny are the ones worth building.
    • Build around problems you personally experience. If your friends share the same frustration, there's a good chance others do too. That's your market signal.
    • If you're going to start something, go all in. Stop hedging across multiple projects. Pick one idea and dedicate yourself to it completely until it works.

    Keep Up With The Show
    If this episode hit home, share it with a founder or someone thinking about taking the leap. Subscribe wherever you listen so you never miss an episode. And connect with us on LinkedIn for more conversations like this one.

    Edge AI Is Shifting From Chat To Action

    Play Episode Listen Later Feb 26, 2026 26:48


    Behnam Bastani, CEO and cofounder of OpenInfer, breaks down why the last two years of AI feel explosive, and why the next wave is not chat, it is action at the edge. We get into always-on inference, what actually forces compute to move closer to the data, and the missing layer that makes edge AI scale: the Android-like infrastructure that lets devices collaborate instead of living in silos.

    Key takeaways
    • The hype spike is real, but the runway is decades; it took compute, sensors, and communication protocols maturing over generations to unlock this moment
    • AI is shifting from conversational to actionable, which means continuous, always-on inference becomes the norm
    • Edge wins when cost, reliability, and data sovereignty matter; cloud and edge will coexist, but the workload placement changes
    • The biggest bottleneck is not just silicon, it is the infrastructure layer that makes building and deploying across devices easy, plus a shared fabric so devices can cooperate
    • Adoption is as much a human story as a technical one; this shift lands faster and broader than previous tech transitions, so anxiety is predictable and needs real attention

    Timestamped highlights
    00:38 OpenInfer's mission, intelligence on every physical surface, and why collaboration matters
    02:07 Electricity as the earlier revolution, intelligence as the next kind of power, and the control problem
    05:54 Where we really are on the maturity curve: early products are here, mass adoption and safety take time
    08:31 When the device boundary disappears, it stops being you versus the agent, it becomes one system
    11:04 Always-on inference, and the three forces pushing compute to the edge: cost, reliability, data sovereignty
    14:40 The Android moment for edge AI, and why the operating system layer unlocks developers, apps, and adoption

    A line worth replaying
    "Those are going to be the three pillars that really enforces that edge and cloud are going to live together."

    Pro tips for builders
    • If your product needs real-time decisions, design for intermittent networks from day one; reliability is not optional
    • Treat data sovereignty as a product feature, not a compliance afterthought; it is becoming the moat
    • Push for interoperability early; the fabric that lets devices share the right data is what makes edge feel seamless

    Call to action
    If this episode helped you rethink where AI should run and what it takes to ship it in the real world, follow the show and share it with one builder who is working on edge, robotics, devices, or applied AI.

    How to Build a Data Team From Scratch (And Get Leadership to Invest)

    Play Episode Listen Later Feb 25, 2026 24:38


    Building data capability from zero is not a tooling problem, it is a trust and prioritization problem. In this episode, Laura Guerin, Head of Data and Data Science at Bevi, breaks down how she goes from blank slate to real business impact without getting trapped in endless plumbing or endless meetings. Laura shares how she runs an early listening tour, prototypes value before asking for bigger investment, and decides when to hire scrappy generalists versus specialists. We also get practical on AI: where it helps, where it is unnecessary, and why quality data and a clean semantic layer still decide whether anything works.

    Key takeaways
    • Start with business priorities, then map data work to the actions and outcomes leaders actually care about
    • Prototype the end deliverable fast, even if the backend is duct tape at first, then scale after stakeholders see value
    • Use cases first for AI: most problems do not need AI, but the right problems can see real acceleration
    • Early teams win with adaptable generalists who can wear multiple hats across data, analytics, and data science
    • Trust is a shared responsibility: build reliability, then create a culture where users flag weirdness quickly

    Timestamped highlights
    00:44 Bevi explained: smart bottleless dispensers and why the business context matters for data priorities
    02:01 The listening tour playbook: exec alignment, stakeholder map, and using AI to synthesize themes into a SWOT
    04:00 The MVP reality: manual prototypes to prove value, then the conversation about scalable pipelines
    06:33 AI without the hype: use cases, when AI is not needed, and two examples with clear business impact
    09:22 Hiring from zero: why generalists first, the data analytics to data science spectrum, and the personality traits that matter
    14:21 Self-service reimagined: Slack as the interface, semantic layer and permissions, and how to keep a single source of truth
    20:19 Keeping trust when things break: checks and balances plus a shared responsibility model
    22:39 Making innovation real: baking it into expectations so the team has time to learn and test new approaches

    A line worth stealing
    "Data on its own is not typically a priority. It is more about the action or the impact that comes out of the data."

    Pro tips
    • Run a structured listening tour early, capture themes, then pick two or three priorities you can deliver quickly
    • Show the business an MVP output first, then use that proof to justify the unglamorous backend work
    • Treat AI like any other tool: define the problem, validate the use case, then confirm the data quality inputs

    Call to action
    If you are building analytics, data products, or AI inside a growing company, follow the show and subscribe so you do not miss the next operator-level conversation. Share this episode with one leader who is asking for data outcomes but has not funded the foundation yet.

    The Hiring Mistake That Kills Most Startups (And What to Do Instead)

    Play Episode Listen Later Feb 24, 2026 27:12


    Riya Grover, CEO and co-founder of Sequence, breaks down what "good CEO" actually looks like when the job is messy, fast, and high stakes. This is a practical conversation about building excellence through people, clarity, and direction, not through heroics or micromanagement. Riya runs a revenue automation platform for finance teams, helping companies automate order-to-cash, billing, invoicing, accounts receivable, and revenue recognition. From that seat, she shares a founder-level view on leadership that is direct, repeatable, and built for real operating constraints.

    Key takeaways
    • The CEO's highest-leverage job is building the bench; your company becomes the team you assemble
    • High-performance culture comes from a clear bar, fast decisions when it is not met, and leaders who own outcomes
    • Great teams do not need more policies, they need context, goals, trade-offs, and clarity
    • Separate reversible decisions from irreversible ones: move fast on two-way doors, slow down on one-way doors
    • Hiring signal to watch: motivation and hunger for the stretch challenge often beats the "done it before" resume

    Timestamped highlights
    00:32 What Sequence does, and why order-to-cash is still painfully manual
    01:48 The CEO role is less about functions, more about direction and execution
    03:23 Excellence starts with talent density; do not compromise on the bar
    06:10 Why companies win: direction plus distribution, and the Figma example
    11:01 Getting real feedback as a leader: how to reduce hierarchy and increase ownership
    14:39 "They need clarity": decision frameworks over micromanagement
    18:01 The hidden damage of the founder weighing in on every micro decision
    20:53 Hiring underrated talent: motivation, ambiguity tolerance, and the stretch role
    24:38 Why the CEO should invest time in hiring; the leverage math is obvious

    A line worth keeping
    "They do not need policies, they need clarity."

    Pro tips you can steal
    • Promote leaders who have done the job and set the pace; it earns trust and improves decision quality
    • Give teams context and constraints, then treat your input like any other input
    • Use the door test: reversible decisions get speed and delegation, irreversible ones get more diligence
    • In hiring, look for motivation plus clear thinking, then bet on aptitude over the perfect background

    Call to action
    If this one helped you think more clearly about leadership and hiring, follow the show and share the episode with one operator who is building under pressure. New conversations drop with different guests and different problems, so you always have something useful to steal.

    The CPTO Role Explained, How Product and Engineering Move Faster Together

    Play Episode Listen Later Feb 23, 2026 25:10


    Arnie Katz has been running product and engineering under one roof since before most companies even considered combining the roles. As CPTO at GoFundMe, he oversees the teams behind a platform processing over 2.5 donations every second, with more than $40 billion in help facilitated worldwide. Arnie breaks down why the CPTO title keeps gaining traction, how he thinks about the role like a portfolio manager, and where the real trade-offs live when one person holds both the product and technology reins.

    Key Takeaways
    • The CPTO role works like a portfolio manager. Arnie manages the company's largest investment center by balancing short-term business wins against long-term platform bets, knowing when to take on technical debt and when to pay it down.
    • Velocity, coordination, and alignment are the three biggest wins. When product and engineering report to one leader, decisions happen faster, roadmap conflicts get resolved without executive tug-of-war, and technical investments stay tied to business outcomes.
    • The disadvantages are real. Without separate CPO and CTO voices at the executive table, certain perspectives can get muted. His fix: build a leadership bench strong enough to create the right tension underneath him.
    • AI is changing what small teams can deliver. GoFundMe's eight-person team behind Giving Funds is shipping at a pace that would have been impossible five years ago.

    Timestamped Highlights
    [00:38] The scale most people don't realize about GoFundMe, including 2.5 donations per second and GoFundMe Pro for nonprofits
    [02:02] How Arnie first landed the CPTO title at StubHub seven years ago, and why it clicked
    [09:11] The real downside of collapsing two C-suite roles into one, and how Arnie designs around it
    [13:57] His portfolio approach to technical debt, sequencing re-platforming in areas like identity and payments while other teams ship business value
    [18:38] AI reshaping engineering velocity, the future of the SDLC, and product teams prototyping without writing code
    [23:06] Where the CPTO model is headed as the industry evolves

    The Line That Stuck
    "I often think of myself as a portfolio manager. My job is to invest money where the company gets the best returns, where the mission gets the best return, where the shareholder gets the best returns."

    Pro Tips
    • Sequence your bets instead of spreading them thin. GoFundMe gave their identity and payments teams nine months of runway to re-platform with no feature expectations while other squads picked up the pace on near-term results.
    • Build leadership that creates productive friction. Without CPO vs. CTO tension at the exec level, let your VPs and SVPs push back against each other. That tension is where the best decisions come from.
    • Think in time horizons, not just priorities. Short-term moves for 0.1% to 0.5% metric lifts. Midterm bets for 1% to 5% gains. Long-term swings that could transform the business. Allocate across all three.

    If this conversation changed how you think about product and engineering working together, share it with someone on your team. Subscribe to The Tech Trek so you never miss an episode, and connect with Arnie on LinkedIn to keep the conversation going.

    GoFundMe is offering listeners of The Tech Trek a chance to open their own Giving Fund. For the first 50 people who open a Giving Fund and add $25 or more to it, GoFundMe will add an additional $25 to that Giving Fund. If you have a Giving Fund but have never contributed to it, you can also participate. The deadline for this incentive is March 13. To get this incentive, click here to start your Giving Fund.

    How AI Fixes the Healthcare Incentive Problem

    Play Episode Listen Later Feb 20, 2026 28:05


    Anjali Jameson, Chief Product Officer at Arbiter, says the hard part is not gathering data. It is getting action across patients, providers, and payers without breaking what already works.

    "Automating something that's broken is not going to necessarily give us better outcomes."

    Arbiter is a care orchestration platform built for patients, providers, and payers together, not a single point solution. The operating spine ingests and makes actionable data across the patient journey, including provider directories, EMR integrations, claims, and financial and policy data from health plans, then connects it to highly personalized, multi-channel agentic outreach. You will hear why cross-system context matters, how total cost of care stays in view while each stakeholder chases different leading metrics, and what it looks like to move from automation into optimization, like going from a call center scheduling flow to 60 percent conversion and pushing toward 95 percent conversion.

    Timeline
    00:40 Care orchestration platform, operating spine, data across the patient journey
    04:33 Misaligned incentives, prior authorizations, 12 to 14 hours a week
    09:42 Total cost of care, star metric, building for different metrics
    12:25 Long-form personalized videos, transportation, education, medication management
    15:02 Prior authorization from three to six days to almost instantaneous
    22:07 COVID, provider messaging up two to three x, AI responds faster

    Subscribe and share it with someone who is building in health tech.

    Stakeholder Expectations, Deliver Value Faster

    Play Episode Listen Later Feb 19, 2026 24:28


    Most data teams do not have a tooling problem. They have a customer service problem.

    Mo Villagran, Associate Director of Insights, Analytics, and Data at Cambrex, argues that stakeholder expectation management is the difference between being a trusted advisor and being an order taker.

    "In a simple word, it's really just customer service."

    In this episode, Mo breaks down how to manage stakeholder expectations, define expected delivery value, and keep projects aligned to real business outcomes instead of chasing rebranded tools. She shares why simple solutions often win, how to show progress even when the work is plumbing, and why qualitative stakeholder testimony beats dashboard-count KPIs. You will also hear how she thinks about AI as a tool: when it works, when it is just a cool toy, and how to build trust by demoing in real time.

    00:02:00 Stakeholder expectation management is customer service
    00:03:00 Why skeleton teams can still deliver value
    00:06:00 Who defines expected delivery value, and how to shape it
    00:09:00 Negotiate expectations, do not become an order taker
    00:18:00 How to show progress when there is nothing visual
    00:21:00 Stop chasing quantitative KPIs, win with testimony

    Subscribe and share this episode with anyone who is knee-deep in stakeholder management.

    Pick The Jockey, Not The Idea

    Play Episode Listen Later Feb 18, 2026 31:14


    Ashok Krishnamurthi, Managing Partner at Great Point Ventures, says the biggest mistake in venture capital is confusing prediction with judgment. Early-stage investing is not about perfect stories; it is about first principles and picking the founder who can execute when the story breaks. This episode is for startup founders and investors who want a cleaner filter for what matters.

    "You have to learn to check your ego at the door because it's a partnership."

    Ashok shares his path from engineering into building companies, then into venture capital, and explains how he forms an investment thesis when markets are noisy. We talk about founder evaluation, why picking the jockey matters more than the idea, and how first-principles thinking shows up in real domains like healthcare data and cancer. We also get practical about artificial intelligence: why AI is not only a compute race, and how AI inference, energy efficiency, and cost shape what wins.

    00:00 Why legacy matters more than VC metrics
    02:28 Engineer to founder to venture capital
    11:16 How to pick the jockey
    14:21 First principles, cancer data, and AI constraints
    23:24 AI is here to stay, keep your mind open
    30:15 How to reach Ashok

    If this episode helped, subscribe and share it with a builder or investor who will use it.

    How to Break Into Robotics Without a Perfect Background

    Play Episode Listen Later Feb 17, 2026 24:55


    Aditya Agarwal did not plan to work in robotics. He got rejected from his first-choice major, joined a student club to keep his parents off his back, and stumbled into one of the fastest-growing fields in tech. Now he is Head of Robotics at Medra, a company building physical AI scientists that let researchers run experiments remotely at speeds a traditional lab cannot touch.

    "Even the companies that have made the most progress haven't deployed at the scale of laptops, cars, or phones. So if you have experience scaling hardware products, that is super valuable at an early-stage robotics company."

    What we get into: why the PhD requirement is mostly gone, how AI is shrinking the hardware development timeline, and the cheapest way to start building with robotics today if you cannot afford to go back to school or take a step back in your career.

    Timestamped Highlights
    01:19 The accidental path into robotics that actually worked
    03:04 Whether you still need an engineering degree for hardware roles
    04:48 Master's degree vs. early-stage startup: what gets you there faster
    10:57 How AI is replacing the guesswork in hardware configuration
    15:51 How to start learning robotics at home without spending much
    18:38 Why rigid hiring processes are costing robotics teams good candidates

    If this one lands, subscribe and share it with someone who has been thinking about making a move into the space.

    Stablecoins, AI Fraud, and the Future of Sports Payouts

    Play Episode Listen Later Feb 16, 2026 19:36


    Ronak Desai, Co-founder and CPTO at Payment Labs, breaks down a surprisingly hard problem that sits at the intersection of fintech, sports, and compliance. If you have ever assumed paying winners is just a simple payout flow, this episode will change that view fast.

    Payment Labs helps tournament organizers, league operators, and modern sports businesses handle payouts plus tax compliance and support, all in one system. Ronak explains why spot payments are high risk, why manual workflows still dominate the space, and how stablecoins and AI are about to reshape fraud, identity, and trust.

    Key Takeaways
    • One-time payouts are a fraud magnet; inconsistent winners and risk-based rules make verification and compliance much harder than payroll
    • Solving payments without solving tax and forms still leaves the biggest liability sitting with the organizer
    • Many sports and esports operators still run payouts in a surprisingly analog way: checks, cash, and post-event cleanup
    • AI is now good enough to pressure identity verification, and stablecoins make recovery harder because transfers are effectively final
    • Product adoption depends on meeting users where they are; younger athletes expect texting and simple flows, not tickets and portals

    Timestamped Highlights
    00:29 What Payment Labs actually does: payouts plus tax compliance plus support for sports, esports, and creator economy use cases
    01:15 The origin story: a real tax problem hit an esports operator and exposed how broken the payout workflow is
    02:46 Why spot payments raise risk: random recipients, fraud pressure, and why bank partners treat this differently than payroll
    04:58 The industry reality check: still running on checks and cash, and what digitizing the workflow unlocks next
    06:58 AI fraud versus AI detection: how identity verification is getting bypassed and why stablecoin rails raise the stakes
    11:55 The NIL wild west and the product lesson: meet athletes where they already live, including iMessage support

    A Line Worth Repeating
    "Now you have AI committing the fraud and then you have AI detecting the fraud."

    Pro Tips for Builders and Operators
    • If your users are young and mobile-first, build support where they already communicate; texting beats ticketing for adoption
    • Do not bolt on AI for a storyline; use it where it replaces manual work you already do and frees time for higher-leverage decisions
    • Map your tasks with the Eisenhower quadrant, then automate what is repetitive before you chase shiny features

    Call to Action
    If this episode helped you think differently about fintech, fraud, and modern payout infrastructure, follow the show and share it with a founder or operator who touches payments. For more conversations at the intersection of tech, data, and real-world execution, connect with Amir on LinkedIn and subscribe to the Elevano newsletter.

    The Founder Rules Nobody Tells You

    Play Episode Listen Later Feb 13, 2026 25:31


    Healey Cypher, CEO of BoomPop and COO at Atomic, breaks down what separates founders who win from founders who stall. You will hear a clear way to judge whether an idea is truly worth building, plus the trust mechanics that get investors, customers, and teammates to actually follow you. This conversation is a practical map for tech builders who want to pick smarter problems, execute faster, and earn credibility without the founder theater.

    Key Takeaways
    • Founders matter most, but the idea is still a gate; the same great team can get wildly different outcomes depending on the market and timing
    • VC-backed is a specific game: it requires not just big potential but fast scale, and the incentives are not the same as building a profitable lifestyle business
    • A quick reality check for market size: if you need more than about five to seven percent penetration to hit meaningful revenue, it is usually a brutal path
    • Painkillers beat vitamins: solve an urgent problem people feel right now, or you risk getting cut the moment budgets tighten
    • Trust is built through authenticity, logic, and empathy; if one wobbles, people feel it fast, and progress slows everywhere

    Timestamped Highlights
    00:00:00 Healey's background, why BoomPop, and what the episode is really about
    00:02:00 The post-pandemic spend shift and the "why now" behind modern events and group travel
    00:04:30 Founder versus idea: why execution dominates, but the opportunity still decides the ceiling
    00:06:40 The VC reality: power law returns, speed, and why some good businesses are still a no for venture
    00:09:15 A simple market math test: penetration levels that become a growth wall
    00:19:00 Trust as a founder skill: the three ingredients and how to spot when one is missing
    00:21:30 Vulnerability as a shortcut to real connection, plus the giver mindset that makes people want you to win

    A line worth stealing
    "If everyone wants you to win, it is a lot easier to win."

    Pro Tips for Tech Founders
    • Ask yourself what you naturally look forward to doing; that is often your zone of strength, so hire around the tasks you dread
    • Learn the financial basics early, especially cash flow; it is the scoreboard that keeps you alive long enough to win
    • When trust is lagging, check the three levers: are you showing the real you, can people follow your reasoning, and do they feel you care about their outcomes

    What's next:
    If you build products, lead teams, or are thinking about starting something, follow the show so you do not miss episodes like this. Also connect with me on LinkedIn for short takeaways and clips from each conversation.

    Modernizing Healthcare Without the Buzzwords

    Play Episode Listen Later Feb 12, 2026 26:08


    Ty Wang, cofounder and CEO of Angle Health, breaks down what it means to give back through public service, then shows how that same mindset drives his mission to modernize healthcare for small and midsize businesses. We get into why legacy health plans feel opaque and painful, what an AI-native health plan actually changes behind the scenes, and how better data and workflows can create real cost stability for employers.

    Ty shares his path from a federal scholarship and national service work to Palantir, and why he chose one of the most regulated, least glamorous industries to build in. If you have ever wondered why healthcare feels impossible to navigate, or why renewals can blindside a company, this conversation will give you a clear mental model of the problem and a practical view of what modernization looks like when it actually ships.

    Key Takeaways
    • Healthcare feels broken because the infrastructure is fragmented, data is siloed, and even basic questions become hard to answer across inconsistent systems
    • Modernizing healthcare is not just about a new app; it is about rebuilding the operational core so workflows, claims, underwriting, and member experience can run on integrated data
    • Small and midsize businesses are hit hardest by cost volatility because they lack transparency, predictability, and negotiating leverage, yet health insurance is often a top line item after payroll
    • A strong approach to regulated markets is collaborative: treat regulators as partners in consumer protection, not obstacles to work around
    • Mission and impact can be a recruiting advantage, especially when the technical problems are genuinely hard and the outcomes touch real people fast

    Timestamped Highlights
    00:40 What Angle Health is, and what AI-native means in a real health plan
    02:05 The scholarship path that pulled Ty into public service and set his trajectory
    04:06 The personal story behind the mission, the American dream, and why access matters
    09:38 Why healthcare infrastructure is so complex, and how siloed systems create bad experiences
    11:33 Why SMBs get squeezed, and how manual administration blocks customization at scale
    13:20 The real pain point for employers: cost volatility and zero predictability before renewal
    16:55 Why the tech can expand beyond SMBs, but why the SMB market is already massive
    19:51 Lessons from building in a regulated industry, and why credibility and funding matter
    22:26 Hiring for high-agency, mission-driven talent in a world full of AI companies

    A line that sticks
    "Unless you are lucky enough to work for a big company, these modern healthcare services are still largely inaccessible to the vast majority of Americans."

    Pro Tips for tech operators and builders
    • If you are modernizing a legacy industry, start with the infrastructure layer: fix the data model, integrate the systems, then automate workflows
    • In regulated markets, build relationships early, show how your product improves consumer outcomes, and make compliance a design constraint, not a bolt-on
    • When selling into SMBs, predictability beats perfection: give customers a clear breakdown of what drives costs and what they can control

    What's next:
    If this episode helped you see healthcare and legacy modernization more clearly, follow the show on Apple Podcasts or Spotify and subscribe so you do not miss the next conversation. Also, share it with one operator or builder who is trying to modernize a messy industry.

    The Hidden Fintech Behind the Compute Boom

    Play Episode Listen Later Feb 11, 2026 23:31


    Gabe Ravacci, CTO and co-founder at Internet Backyard, breaks down what the "compute economy" really looks like when you zoom in on data centers, billing, invoicing, and the financial plumbing nobody wants to touch. He shares how a rejected YC application, a finance stint, and a handful of hard lessons pushed him from hardware curiosity to building fintech infrastructure for compute.

    If you care about where compute is headed, or you are early in your career and trying to find your path without overplanning it, this one will land.

    Key Takeaways
    • Startups often happen "by accident" when your competence meets the right problem at the right time
    • Compute accessibility is not only a chip problem; it is also a finance and operations problem
    • Rejection can be data, not a verdict; treat it as feedback to sharpen the craft
    • A real online presence is less about networking and more about being genuinely useful in public
    • Time blocking and single-task focus beat grinding when you are juggling school, work, and a startup

    Timestamped Highlights
    00:28 What Internet Backyard is building: fintech infrastructure for data center financial operations
    01:37 The first startup attempt: cheaper compute via FPGA-based prototyping, and why investors passed
    04:48 The pivot: from hardware tools to a finance-informed view of compute and transparency gaps
    06:55 How Gabe reframed YC rejection: process over outcome, "a tree of failures" that builds skill
    08:29 Building a digital brand on X: what he posted, how he learned in public, and why it worked
    13:36 The real balancing act: dropping classes, finishing the degree well, and strict time blocking
    20:00 Books that shaped his thinking: Siddhartha, The Art of Learning, Finite and Infinite Games

    A line worth keeping
    "The process is really more important than any outcome."

    Pro Tips for builders
    • Treat learning like a skill: ask better questions before you chase better answers
    • Make focus a system: set blocks, mute distractions, and do one thing at a time
    • Share what you are learning in public, not to perform, but to be useful and find signal

    Call to Action
    If this episode sparked an idea, follow or subscribe so you do not miss the next one. Also check out Amir's newsletter for more conversations at the intersection of people, impact, and technology.

    Data Fabric Meets AI, The Trust Layer Most Teams Skip

    Play Episode Listen Later Feb 10, 2026 29:14


    Data leaders are being asked to ship real AI outcomes while the foundations are still messy. In this conversation, Dave Shuman, Chief Data Officer at Precisely, breaks down what actually determines whether AI adoption sticks, from hiring "comb shaped" talent to building trusted data products that make AI outputs believable and usable.

    If you are building in data, AI, or analytics, this episode is a practical map for what needs to be true before AI can move from demos to dependable, repeatable impact.

    Key Takeaways
    • Comb shaped talent beats narrow specialization, AI work rewards people who can span multiple skills and collaborate well
    • Adoption is a trust problem, and trust starts with data integrity, lineage, context, and a semantic layer that business users can understand
    • Open source drives the innovation, commercialization makes it safe and usable at enterprise scale, especially around security and support
    • Data must be fit for purpose, start every AI project by asking what data it needs, who curates it, and what the known warts are
    • Humans are still the last mile, small workflow choices can make adoption jump, even when the model is already accurate

    Timestamped Highlights
    00:56 The shift from T shaped to comb shaped talent, what modern AI teams actually need to look like
    05:36 Hiring for team fit over "world class" niche skills, and when to bring in trusted partners for depth
    07:37 How open source sparks the ideas, and why enterprises still need hardened, supported versions to scale
    11:31 Where AI adoption is today, why summarization is only the beginning, and what unlocks "AI 2.0"
    13:39 The trust stack for AI, clean integrated data, lineage, context, catalog, semantic layer, then agents
    19:26 A real adoption lesson from machine learning, and why the human experience decides if the system wins

    A line worth stealing
    "You do not just take generative AI and throw it at your chaos of data and expect it to make magic out of it."

    Pro Tips for data and AI leaders
    • Hire and build teams like Tetris, fill skill voids across the group instead of chasing one perfect profile
    • Use partners for the sharp edges, but require knowledge transfer so your team levels up every engagement
    • Make adoption easier by designing for human behavior, sometimes the smallest workflow tweak beats more accuracy
    • Build governed data products in a catalog, then validate AI outputs side by side with dashboards to earn trust fast

    Call to Action
    If this helped you think more clearly about AI adoption, talent, and data foundations, follow the show and turn on notifications so you do not miss the next episode. Also, share it with one data or engineering leader who is trying to get AI out of pilots and into real workflows.

    Cloud Costs vs AI Workloads, The Storage Decisions That Decide Scale

    Play Episode Listen Later Feb 9, 2026 26:26


    Cloud bills are climbing, AI pipelines are exploding, and storage is quietly becoming the bottleneck nobody wants to own. Ugur Tigli, CTO at MinIO, breaks down what actually changes when AI workloads hit your infrastructure, and how teams can keep performance high without letting costs spiral.

    In this conversation, we get practical about object storage, S3 as the modern standard, what open source really means for security and speed, and why "cloud" is more of an operating model than a place.

    Key takeaways
    • AI multiplies data, not just compute, training and inference create more checkpoints, more versions, more storage pressure
    • Object storage and S3 are simplifying the persistence layer, even as the layers above it get more complex
    • Open source can improve security feedback loops because the community surfaces regressions fast, the real risk is running unsupported, outdated versions
    • Public cloud costs are often less about storage and more about variable charges like egress, many teams move data on prem to regain predictability
    • The bar for infrastructure teams is rising, Kubernetes, modern storage, and AI workflow literacy are becoming table stakes

    Timestamped highlights
    00:00 Why cloud and AI workloads force a fresh look at storage, operating models, and cost control
    00:00 What MinIO is, and why high performance object storage sits at the center of modern data platforms
    01:23 Why MinIO chose open source, and how they balance freedom with commercial reality
    04:08 Open source and security, why faster feedback beats the closed source perception, plus the real risk factor
    09:44 Cloud cost realities, egress, replication, and why "fixed costs" drive many teams back inside their own walls
    15:04 The persistence layer is getting simpler, S3 becomes the standard, while the upper stack gets messier
    18:00 Skills gap, why teams need DevOps plus AIOps thinking to run modern storage at scale
    20:22 What happens to AI costs next, competition, software ecosystem maturity, and why data growth still wins

    A line worth keeping
    "Cloud is not a destination for us, it's more of an operating model."

    Pro tips for builders and tech leaders
    • If your AI initiative is still a pilot, track egress and data movement early, that is where "surprise" costs tend to show up
    • Standardize around containerized deployment where possible, it reduces the gap between public and private environments, but plan for integration friction like identity and key management
    • Treat storage as a performance system, not a procurement line item, the right persistence layer can unblock training, inference, and downstream pipelines

    What's next
    If you're building with AI, running data platforms, or trying to get your cloud costs under control, follow the show and subscribe so you do not miss upcoming episodes. Share this one with a teammate who owns infrastructure, data, or platform engineering.

    AI Is Changing Art Faster Than You Think.

    Play Episode Listen Later Feb 6, 2026 50:35


    This is an early conversation I am bringing back because it feels even more relevant now, the intersection of AI and art is turning into a real cultural shift.

    I sit down with Marnie Benney, independent curator at the intersection of contemporary art and technology, and co-founder of AIartists.org, a major community for artists working with AI. We talk about what AI art actually is beyond the headlines, where authorship gets messy, and why artists might be the best people to pressure test the societal impact of machine learning.

    Key takeaways
    • AI in art is not a single thing, it is a spectrum of choices, dataset, process, medium, and intent
    • The most interesting work treats AI as a collaborator, not a shortcut, a back and forth that reshapes the artist's decisions
    • Authorship is still unsettled, some artists see AI as a tool like an instrument, others treat it as a creative partner
    • The fear that AI replaces creativity misses the point, artists can use the machine's unexpected output to expand human expression
    • Access matters, compute, tooling, and collaboration between artists and technologists will shape who gets to experiment at the frontier

    Timestamped highlights
    00:04:00 Curating science, climate, and public engagement, the path into tech driven exhibitions
    00:07:41 What AI art can mean in practice, datasets, iteration loops, and choosing an output medium
    00:10:48 Who gets credit, tool versus collaborator, and the art world's evolving rules
    00:13:51 Fear, job displacement, and a healthier frame, human plus machine as a creative partnership
    00:22:57 The new skill stack, what artists need to learn, and where collaboration beats handoffs
    00:29:28 The pushback from traditional art circles, philosophy and intention versus novelty
    00:37:17 Inside the New York exhibition, collaboration between human and machine, visuals, sculpture, and sound
    00:48:16 The magic of the unknown, why the output can surprise even the artist

    A line that stuck
    "Artists are largely showing a mirror to society of what this technology is, for the positive and the negative."

    Pro tips for builders and operators
    • Treat creative communities as an early signal, artists surface second order effects before markets do
    • If you are building AI products, study authorship debates, they map directly to credit, accountability, and trust
    • Collaboration beats delegation, when domain experts and technologists iterate together, the work gets sharper fast

    Call to action
    If this episode hits for you, follow the show so you do not miss the next drop. And if you are building in data, AI, or modern tech teams, follow me on LinkedIn for more conversations that connect technology to real world impact.

    AI in the Enterprise, Why Pilots Fail and What Actually Scales

    Play Episode Listen Later Feb 5, 2026 23:35


    Most teams are approaching AI from the wrong direction, either chasing the tech with no clear problem or spinning up endless pilots that never earn their keep. In this episode, Amir Bormand sits down with Steve Wunker, Managing Director at New Markets Advisors and co author of AI and the Octopus Organization, to break down what actually works in enterprise AI.

    You will hear why the real challenge is organizational, not technical, how IT and business have to co own the outcome, and what it takes to keep AI systems valuable over time. If you are trying to move beyond experimentation and into real impact, this conversation gives you a practical blueprint.

    Key takeaways
    • Pick a handful of high impact problems, not hundreds of small pilots, focus is what creates measurable ROI
    • Treat AI as a workflow and change program, not a tool you bolt onto an existing process
    • IT has to evolve from order taker to strategic partner, including stronger AI ops and ongoing evaluation
    • Start with the destination, redefine the value proposition first, then redesign the operating model around it
    • Ongoing ownership matters, AI is not a one and done delivery, it needs stewardship to stay useful

    Timestamped highlights
    00:39 What New Markets Advisors actually does, innovation with a capital I, plus AI in value props and operations
    01:54 The two common mistakes, pushing AI everywhere and launching hundreds of disconnected pilots
    04:19 Why IT cannot just take orders anymore, plus why AI ops is not the same as DevOps
    07:56 Why the octopus is the perfect model for an AI age organization, distributed intelligence and rapid coordination
    11:08 The HelloFresh example, redesign the destination first, then let everything cascade from that
    17:37 The line you will remember, AI is an ongoing commitment, not a project you ship and forget
    20:50 A cautionary pattern from the dotcom era, avoid swinging from timid pilots to extreme headcount mandates

    A line worth keeping
    "You cannot date your AI system, you need to get married to it."

    Pro tips for leaders building real AI outcomes
    • Define success metrics before you build, then measure pre and post, otherwise you are guessing
    • Redesign the process, do not just swap one step for a model, aim for fewer steps, not faster steps
    • Assign long term ownership, budget for maintenance, evaluation, and model oversight from day one

    Call to action
    If this episode helped you rethink how to drive AI results, follow the show and subscribe so you do not miss the next conversation. Share it with a leader who is stuck in pilot mode and wants a path to production.

    AI Is Rewriting Manufacturing Quality, Here's What Changes

    Play Episode Listen Later Feb 4, 2026 25:08


    Manufacturing is getting faster, messier, and more expensive when quality slips.

    Daniel First, Founder and CEO at Axion, joins Amir to break down how AI is changing the way manufacturers detect issues in the field, trace root causes across messy data, and shorten the time from "customers are hurting" to "we fixed it."

    Episode Summary
    Daniel explains why modern manufacturing is living in the bottom of the quality curve longer than ever, and how AI can help companies spot issues early, investigate faster, and actually close the loop before warranty costs and customer trust spiral. If you work anywhere near hardware, infrastructure, or complex systems, this is a sharp look at what "AI first" means when real products fail in the real world.

    You will hear why quality is becoming a competitive weapon, how unstructured signals hide the truth, and what changes when AI agents start doing the detection, investigation, and coordination work humans have been drowning in.

    What you will take away
    • Quality is not just a defect problem, it is a speed and trust problem, especially when product cycles keep compressing.
    • AI creates leverage by pulling together signals across the full product life cycle, not by sprinkling a chatbot on one system.
    • The fastest teams win by finding issues earlier, scoping impact correctly, and fixing what matters before customers notice the pattern.
    • A clear ROI often lives in warranty cost avoidance and downtime reduction, not just "efficiency" metrics.
    • "AI first" gets real when strategy becomes operational, and contradictions in how teams prioritize issues get exposed.

    Timestamped highlights
    00:00 Why manufacturing is a different kind of problem, and why speed is harder than it looks
    01:10 What Axion does, and how it detects, investigates, and resolves customer impacting issues
    05:10 The new reality, faster product cycles mean living in the bottom of the quality curve
    10:05 Why it can take hundreds of days to truly solve an issue, and where the time disappears
    16:20 How to evaluate AI vendors in manufacturing, specialization, integrations, and cross system workflows
    22:40 The shift coming to quality teams, from reading data all day to making higher level decisions
    28:10 What "AI first" looks like in practice, and how AI exposes misalignment across teams

    A line worth repeating
    "Humans are not that great at investigating tens of millions of unstructured data points, but AI can detect, scope, root cause, and confirm the fix."

    Pro tips you can apply
    • When evaluating an AI solution, ask three questions up front: how specialized the AI must be, whether you need a full workflow solution or just an API, and whether the use case spans multiple systems and teams.
    • Treat early detection as a first class objective, the longer the accumulation phase, the more cost and customer damage you silently absorb.
    • Align issue prioritization to strategy, not just frequency, cost, or the loudest internal voice.

    Follow
    If this episode helped you think differently about quality, speed, and AI in the real world, follow the show on Apple Podcasts or Spotify so you do not miss the next one. If you want more conversations like this, subscribe to the newsletter and connect with Amir on LinkedIn.

    Synthetic Data Explained, When It Helps AI and When It Hurts

    Play Episode Listen Later Feb 3, 2026 26:18


    Synthetic data is moving from a niche concept to a practical tool for shipping AI in the real world. In this episode, Amit Shivpuja, Director of Data Product and AI Enablement at Walmart, breaks down where synthetic data actually helps, where it can quietly hurt you, and how to think about it like a data leader, not a demo builder.

    We dig into what blocks AI from reaching production, how regulated industries end up with an unfair advantage, and the simple test that tells you whether synthetic data belongs anywhere near a decision making system.

    Key Takeaways
    • AI success still lives or dies on data quality, trust, and traceability, not model hype.
    • Synthetic data is best for exploration, stress testing, and prototyping, but it should not be the backbone of high stakes decisions.
    • If you cannot explain how an output was produced, synthetic only pipelines become a risk multiplier fast.
    • Regulated industries often move faster with AI because their data standards, definitions, and documentation are already disciplined.
    • The smartest teams plan data early in the product requirements phase, including whether they need synthetic data, third party data, or better metadata.

    Timestamped Highlights
    00:01 The real blockers to getting AI into production, data, culture, and unrealistic scale assumptions
    03:40 The satellite launch pad analogy, why data is the enabling infrastructure for every serious AI effort
    07:52 Regulated vs unregulated industries, why structure and standards can become a hidden advantage
    10:47 A clean definition of synthetic data, what it is, and what it is not
    16:56 The "explainability" yardstick, when synthetic data is reasonable and when it is a red flag
    19:57 When to think about data in stakeholder conversations, why data literacy matters before the build starts

    A line worth sharing
    "AI is like launching satellites. Data is the launch pad."

    Pro Tips for tech leaders shipping AI
    • Start data discovery at the same time you write product requirements, not after the prototype works
    • Use synthetic data early, then set milestones to shift weight toward real world data as you approach production
    • Sanity check the solution, sometimes a report, an email, or a deterministic workflow beats an AI system

    Call to Action
    If this episode helped you think more clearly about data strategy and AI delivery, follow the show on Apple Podcasts and Spotify, and share it with a builder or leader who is trying to get AI out of pilot mode. You can also follow me on LinkedIn for more episodes and clips.
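    To make the episode's caution concrete, here is a minimal, hypothetical sketch (not anything the guest described) of the simplest flavor of synthetic tabular data: fit per-column statistics from a small real sample, then sample each column independently. The column names and numbers are invented. The marginals look plausible, but cross-column correlations are destroyed, which is one reason synthetic only pipelines are risky for high stakes decisions.

```python
# Toy column-wise synthetic data generator. Assumption: a tiny invented
# "real" sample. Each column is sampled independently from a fitted normal,
# so marginal distributions are roughly preserved while cross-column
# correlations (e.g. age vs spend) are deliberately lost.
import random
import statistics

real_rows = [
    {"age": 34, "spend": 120.0},
    {"age": 41, "spend": 180.0},
    {"age": 29, "spend": 95.0},
    {"age": 52, "spend": 240.0},
]

def fit(rows):
    """Per-column mean and stdev, the only structure this toy model keeps."""
    return {c: (statistics.mean(r[c] for r in rows),
                statistics.stdev(r[c] for r in rows))
            for c in rows[0]}

def sample(model, n, seed=0):
    """Draw n synthetic rows, each column sampled independently."""
    rng = random.Random(seed)
    return [{c: rng.gauss(mu, sd) for c, (mu, sd) in model.items()}
            for _ in range(n)]

synthetic = sample(fit(real_rows), n=100)
print(len(synthetic), sorted(synthetic[0]))  # 100 ['age', 'spend']
```

    This is fine for prototyping a pipeline or stress testing a dashboard, but any model trained on it would learn nothing about how the columns relate, which mirrors the episode's "explainability" yardstick for where synthetic data belongs.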

    The Real Learning Curve of Engineering Management

    Play Episode Listen Later Feb 2, 2026 25:52


    Tom Pethtel, VP of Engineering at Flock Safety, breaks down the real learning curve of moving from builder to manager, and how to keep your technical edge while scaling your impact through people.

    You will hear how Tom's path from rural Ohio to leading high stakes engineering teams shaped his approach to leadership, hiring, and staying close to the customer.

    Key Takeaways
    • Promotions usually come from doing your current job well, plus stepping into the work above you that is not getting done
    • Great leaders do not fully detach from the craft, they stay close enough to the work to make good calls and keep context
    • Put yourself where the real learning is happening, watch customers, go to the failure point, get proximity to the source of truth
    • Hiring is not only pedigree, it is fundamentals plus grit, the willingness to solve what looks hard because it is "just software"
    • As you scale to teams of teams, your job becomes time allocation, jump on the biggest business fire while still making rounds everywhere

    Timestamped Highlights
    00:32 What Flock Safety actually builds, from AI enabled devices to Drone as a First Responder
    02:04 Dropping out of Georgia Tech, switching disciplines, and choosing software for speed and impact
    03:30 A life threatening detour, learning you owe 18,000 dollars, and teaching yourself to build an iPhone app to survive
    06:33 Why Tom values grit and non traditional backgrounds in hiring, and the "it is just software" mindset
    08:46 Proximity and learning, go to the problem, plus the lessons he borrows from Toyota Production System
    09:55 A practical story of chasing expertise, from Kodak to Nokia, and hiring the right leader by going where the knowledge lives
    14:27 The truth about becoming a manager, you rarely feel ready, you take the seat and learn fast
    19:18 Leading teams of teams, you cannot be everywhere, so you go where the biggest fire is, without neglecting the rest
    22:08 The promotion playbook, stop only doing your job, start solving the next job

    A line worth stealing
    "Do your job really well, plus go do the work above you that is not getting done, that's how you rise."

    Pro Tips for engineers stepping into leadership
    • Stay technical enough to keep your judgment sharp, even if it is only five or ten percent of your week
    • If you want to grow, chase proximity, sit with the customer, sit with the failure, sit with the best people in the space
    • Measure your impact as leverage, if a team of ten is producing ten times, your role is not less valuable, it is multiplied
    • When you lead multiple disciplines, rotate your attention intentionally, do not camp on one fire for a full year

    Call to Action
    If this episode helped you rethink leadership, share it with one builder who is about to step into management. Subscribe on Apple Podcasts, Spotify, and YouTube, and follow Amir on LinkedIn for more conversations with operators building real teams in the real world.

    Retention for Engineering Teams, What Keeps Top People Around

    Play Episode Listen Later Jan 30, 2026 30:58


    Phil Freo, VP of Product and Engineering at Close, has lived the rare arc from founding engineer to executive leader. In this conversation, he breaks down why he stayed nearly 12 years, and what it takes to build a team that people actually want to grow with.

    We get into retention that is earned, not hoped for, the culture choices that compound over time, and the practical systems that make remote work and knowledge sharing hold up at scale.

    Key takeaways
    • Staying for a decade is not about loyalty, it is about the job evolving and your scope evolving with it
    • Strong retention is often a downstream effect of clear values, internal growth opportunities, and leaders who trust people to level up
    • Remote can work long term when you design for it, hire for communication, and invest in real relationship building
    • Documentation is not optional in remote, and short lived chat history can force healthier knowledge capture
    • Bootstrapped, customer funded growth can create stability and control that makes teams feel safer during chaotic markets

    Timestamped highlights
    00:02:13 The founders, the pivots, and why Phil joined before Close was even Close
    00:06:17 Why he stayed so long, the role keeps changing, and the work gets more interesting as the team grows
    00:10:54 "Build a house you want to live in", how valuing tenure shapes culture, code quality, and decision making
    00:14:14 Remote as a retention advantage, moving life forward without leaving the company behind
    00:20:23 Over documenting on purpose, plus the Slack retention window that forces real knowledge capture
    00:22:48 Bootstrapped versus VC backed, why steady growth can be a competitive advantage when markets tighten
    00:28:18 The career accelerant most people underuse, initiative, and championing ideas before you are asked

    One line worth stealing
    "Inertia is really powerful. One person championing an idea can really make a difference."

    Practical ideas you can apply
    • If you want growth where you are, do not wait for permission, propose the problem, the plan, and the first step
    • If you lead a team, create parallel growth paths, management is not the only promotion ladder
    • If you are remote, hire for writing, decision clarity, and follow through, not just technical depth
    • If Slack is your company memory, it is not memory, move durable knowledge into docs, issues, and specs

    Stay connected
    If this episode sparked an idea, follow or subscribe so you do not miss the next one. And if you want more conversations on building durable product and engineering teams, check out my LinkedIn and newsletter.

    Data Orchestration and Open Source Strategy

    Play Episode Listen Later Jan 29, 2026 23:26


    Pete Hunt, CEO of Dagster Labs, joins Amir Bormand to break down why modern data teams are moving past task based orchestration, and what it really takes to run reliable pipelines at scale. If you have ever wrestled with Apache Airflow pain, multi team deployments, or unclear data lineage, this conversation will give you a clearer mental model and a practical way to think about the next generation of data infrastructure.

    Key Takeaways
    • Data orchestration is not just scheduling, it is the control layer that keeps data assets reliable, observable, and usable
    • Asset based thinking makes debugging easier because the system maps code directly to the data artifacts your business depends on
    • Multi team data platforms need isolation by default, without it, shared dependencies and shared failures become a tax on every team
    • Good software engineering practices reduce data chaos, and the tools can get simpler over time as best practices harden
    • Open source makes sense for core infrastructure, with commercial layers reserved for features larger teams actually need

    Timestamped Highlights
    00:00:50 What Dagster is, and why orchestration matters for every data driven team
    00:04:18 The origin story, why critical institutions still cannot answer basic questions about their data
    00:07:02 The architectural shift, moving from task based workflows to asset based pipelines
    00:08:25 The multi tenancy problem, why shared environments break down across teams, and what to do instead
    00:11:21 The path out of complexity, why software engineering best practices are the unlock for data teams
    00:17:53 Open source as a strategy, what belongs in the open core, and what belongs in the paid layer

    A Line Worth Repeating
    Data orchestration is infrastructure, and most teams want their core infrastructure to be open source.

    Pro Tips for Data and Platform Teams
    • If debugging feels impossible, you may be modeling your system around tasks instead of the data assets the business actually consumes
    • If multiple teams share one codebase, isolate dependencies and runtime early, shared Python environments become a silent reliability risk
    • Reduce cognitive load by tightening concepts, fewer new nouns usually means a smoother developer experience

    Call to Action
    If this episode helped you rethink data orchestration, follow the show on Apple Podcasts and Spotify, and subscribe so you do not miss future conversations on data, AI, and the infrastructure choices that shape real outcomes.
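    The task based versus asset based distinction from the episode can be sketched in a few lines. This is deliberately not Dagster's real API (Dagster has its own decorators and tooling), and every name here is invented for illustration; it is a toy registry showing the core idea: code declares the data asset it produces, and the orchestrator materializes upstream assets first, so debugging maps to named data artifacts rather than anonymous tasks.

```python
# Toy sketch of asset based orchestration. NOT Dagster's API, just the idea:
# each function is registered as the producer of a named data asset, and
# materializing an asset resolves its upstream assets recursively.
ASSETS = {}  # asset name -> (upstream asset names, producer function)

def asset(name, deps=()):
    """Register a function as the producer of the named data asset."""
    def register(fn):
        ASSETS[name] = (deps, fn)
        return fn
    return register

@asset("raw_orders")
def raw_orders():
    # Stand-in for an extract step, hypothetical data
    return [{"id": 1, "amount": 40}, {"id": 2, "amount": 60}]

@asset("order_total", deps=("raw_orders",))
def order_total(raw_orders):
    # Downstream asset computed from its upstream asset
    return sum(row["amount"] for row in raw_orders)

def materialize(name, cache=None):
    """Build the named asset, materializing each dependency exactly once."""
    cache = {} if cache is None else cache
    if name not in cache:
        deps, fn = ASSETS[name]
        cache[name] = fn(*(materialize(d, cache) for d in deps))
    return cache[name]

print(materialize("order_total"))  # 100
```

    When "order_total" looks wrong, you inspect the asset and its named upstreams directly, which is the debugging win the episode attributes to asset based pipelines.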

    How Great Investors Spot Real Moats in AI

    Play Episode Listen Later Jan 28, 2026 30:53


    Sandesh Patnam, Managing Partner at Premji Invest, breaks down how long duration capital changes the way you evaluate companies, founders, and moats. We talk about what most growth investors miss, why product strength still matters, and how to separate real AI businesses from thin wrappers in a noisy market.

    Premji Invest is a captive, evergreen fund built to grow an endowment that supports major education work, which gives the team flexibility on time horizon and partnership style. Sandesh shares how that shows up in diligence, how they think about backing contrarian founders, and why the best companies in this AI era may still be ahead of us.

    Key Takeaways
    • Focus on the long arc, not quarter by quarter optics, founders make better decisions when they are not trapped in short term metrics
    • In growth investing, TAM models and KPI spreadsheets can distract from the core question, does the product have real strength and an expanding roadmap
    • Enduring outcomes often come from backing a contrarian view early, then helping it move from contrarian to consensus over time
    • Evergreen capital changes behavior, you can slow down, build relationships, and partner across private and public markets instead of treating IPO as the finish line
    • In AI, separate the stack into data center, foundation models, and applications, then look for defensibility like vertical depth, data moats, and compounding usage value

    Timestamped highlights
    00:38 Premji Invest explained, evergreen structure, one LP, and why public markets can be part of the journey, not the exit
    04:47 Two common growth investor lenses and what gets missed when product and roadmap do not lead the thesis
    08:48 Partnership mindset, building trust, and being the first call when things get hard
    12:48 The contrarian to consensus path, what creates alpha, and how to support founders through the lonely middle
    19:54 Why rushing decisions is a trap, and how flexibility changes when and how you can partner with a company
    20:55 AI investing framework, three layers, what looks frothy, what can endure, and where moats still exist
    26:48 The cost of intelligence is collapsing, why this may still be the early internet moment, and what that implies for the next wave

    A line that stuck with me
    "We want to be the first port of call when the seas are turbulent."

    Practical moves you can steal
    • Pressure test the roadmap, ask when product two ships, what adjacency comes next, and what tradeoffs change at scale
    • When evaluating AI apps, demand a defensibility story beyond the model, look for proprietary data, vertical workflow depth, and value that improves with usage
    • Treat speed as a risk factor, if you cannot complete your cycle of doubt and validation, step back rather than force certainty

    Call to Action
    If you liked this one, follow the show and share it with a founder, operator, or investor who is building in AI right now. For more conversations at the intersection of tech, business, and execution, subscribe and connect with me on LinkedIn.

    Outsource the Typing, How AI Agents Change Software Engineering

    Play Episode Listen Later Jan 27, 2026 25:11


    Software engineering is changing fast, but not in the way most hot takes claim. Robert Brennan, Co founder and CEO at OpenHands, breaks down what happens when you outsource the typing to the LLM and let software agents handle the repetitive grind, without giving up the judgment that keeps a codebase healthy. This is a practical conversation about agentic development, the real productivity gains teams are seeing, and which skills will matter most as the SDLC keeps evolving.

    Key Takeaways
    • AI in the IDE is now table stakes for most engineers, the bigger jump is learning when to delegate work to an agent
    • The best early wins are the unglamorous tasks, fixing tests, resolving merge conflicts, dependency updates, and other maintenance work that burns time and attention
    • Bigger output creates new bottlenecks, QA and code review can become the limiting factor if your workflow does not adapt
    • Senior engineering judgment becomes more valuable, good architecture and clean abstractions make it easier to delegate safely and avoid turning the codebase into a mess
    • The most durable human edge is empathy, for users, for teammates, and for your future self maintaining the system

    Timestamped Highlights
    00:40 What OpenHands actually is, a development agent that writes code, runs it, debugs, and iterates toward completion
    02:38 The adoption curve, why most teams start with IDE help, and what "agent engineers" do differently to get outsized gains
    06:00 If an engineer becomes 10x faster, where does the time go, more creative problem solving, less toil
    15:01 A real example of the SDLC shifting, a designer shipping working prototypes and even small UI changes directly
    16:51 The messy middle, why many teams see only moderate gains until they redraw the lines between signal and noise
    20:42 Skills that last, empathy, critical thinking, and designing systems other people can understand
    22:35 Why this is still early, even if models stopped improving today, most orgs have not learned how to use them well yet

    A line worth sharing
    "The durable competitive advantage that humans have over AI is empathy."

    Pro Tips for Tech Teams
    • Start by delegating low creativity tasks, CI failures, dependency bumps, and coverage improvements are great training wheels
    • Define "safe zones" for non engineers contributing, like UI tweaks, while keeping application logic behind clearer guardrails
    • Invest in abstractions and conventions, you want a codebase an agent can work with, and a human can trust
    • Track where throughput stalls, if PR review and QA are the bottleneck, productivity gains will not show up where you expect

    Call to Action
    If you got value from this one, follow the show and share it with an engineer or product leader who is sorting out what "agentic development" actually means in practice.

    Turning Compliance Into Product

    Play Episode Listen Later Jan 26, 2026 30:12


    Deborah Hanus, Co-founder and CEO at Sparrow, joins Amir to unpack the founder journey from academia to building a scaled company. They dig into why leave management is still a messy, high stakes problem, and how Sparrow is turning it into a clean, guided experience for both HR and employees.

    Sparrow helps companies provide employee leave across the United States and Canada, and Deborah shares what it really takes to scale a compliance driven business without slowing down. From founder resilience and early stage emotional swings to hiring, onboarding, and culture design, this one is packed with lessons for operators and builders.

    Key takeaways
    • Academia can be real founder training, especially for building resilience and hearing “no” without losing your edge
    • Early stage startups feel brutal because you have too few data points, it is easy to overreact to every win or setback
    • Compliance and leave are fundamentally data problems, the right info to the right person at the right time changes everything
    • Scaling leadership is mostly communication and alignment, five people and 250 people require totally different systems
    • Culture does not stay stable by accident, values must drive hiring, training, rewards, and performance management

    Timestamped highlights
    00:37 What Sparrow does, and the 300 million dollars in payroll cost savings milestone
    01:37 Why academia can prepare you for founding, and how customer pain beats outside skepticism
    03:40 The leave compliance mess, and why state by state rules made the problem explode
    08:25 The two real ways startups die, and why morale matters as much as cash
    12:55 Leading at scale, onboarding, clarity, and the feedback questions that keep teams aligned
    19:54 “Scale intentionally” as a culture principle for a company that cannot afford to break things
    25:48 Keeping values stable while everything else evolves as the team grows

    A line worth sharing
    “Companies end when you run out of cash or you run out of morale.”

    Pro tips you can steal
    • Treat the employee journey like a product journey, from recruiting through promotions and hard moments
    • Before a big change, collect questions early so the message lands where people actually are
    • After a meeting, ask “What were the main points?” to see what people heard, then tighten your messaging
    • Invest in onboarding and goal clarity to prevent teams from drifting into competing priorities

    Call to action
    If you enjoyed this conversation, follow and subscribe so you do not miss what is next.

    Why Insurance Is a Goldmine for AI and Data

    Play Episode Listen Later Jan 23, 2026 24:31


    Max Bruner, Founder and CEO of Anzen, joins Amir Bormand to break down why insurance is quietly one of the biggest data and workflow opportunities in tech right now. They dig into Max's unconventional path from foreign policy to building an executive liability marketplace, and what it really takes to modernize a slow moving industry with AI.

    If you care about building in real world markets, scaling with discipline, and using AI for more than content, this one will sharpen your thinking fast.

    Key Takeaways
    • Insurance is not flashy, but it is foundational, massive, profitable, and packed with repeatable workflows that software can improve
    • The best tech opportunities are often in slow moving industries with lots of data and outdated systems
    • Better decision making comes from predicting outcome impact and pressure testing your thinking with a strong community around you
    • AI value is clearest when it drives real operations, faster transactions, lower costs, and better service
    • Fundraising is a pipeline game now, treat it like sales, build the plan, hit the numbers, run a tight process

    Timestamped Highlights
    00:42 What Anzen actually does, a one stop marketplace for executive liability quotes across the US
    02:29 From Arabic studies and foreign policy to discovering insurance through political risk
    08:12 The curiosity engine, how deep research habits shaped his ability to build in new domains
    11:23 Decision guardrails, learning from outcomes and using trusted people to keep you efficient
    13:12 Why choose insurance, building in industries that make the world work, plus the profit reality
    17:29 The startup advantage, modern infrastructure vs incumbent legacy systems, and why catching up takes time
    20:36 Raising in today's market, what changed, what worked, and why the pitch volume mattered

    A line worth stealing
    “Sometimes in tech we miss the application, there are massive industries to go change if we apply technology in the right way.” Max Bruner

    Pro Tips for builders
    • Pick markets with repeatable workflows, you can ship measurable value faster
    • Spend your time where the outcome impact is high, skip low ROI rabbit holes
    • Build a real financial plan before fundraising, then operate close to it
    • Run fundraising like a sales process, pipeline, volume, and discipline win

    Call to Action
    If you enjoyed this conversation, follow the show and leave a quick review, it helps more builders find it.

    Defending Against Bots At Scale

    Play Episode Listen Later Jan 22, 2026 29:12


    Stu Solomon, CEO of HUMAN, joins Amir to unpack a blind spot most teams underestimate: a huge share of online activity is not people at all, it is automated traffic. They break down how verification really works at internet scale, why agentic workflows change the rules, and what it will take to build trust when bots transact with bots.

    If you have ever wondered how fraud, fake clicks, account abuse, and synthetic behavior get caught in real time, this episode is a clear, practical look behind the curtain.

    Key takeaways
    • Most of the internet is machine traffic now, the goal is no longer spotting bots, it is separating good machines from bad ones
    • Trust is built by combining behavior, infrastructure signals, and identity or credential history into fast decisions at scale
    • Agentic systems lower the barrier to entry for attackers, less skilled actors can now create outsized impact
    • The hard part is accountability, when a machine acts with your authority, who owns the outcome
    • Adoption follows convenience, but visibility matters, if it feels like a black box, people will not trust it

    Timestamped highlights
    00:33 HUMAN in plain English, making split second decisions about who is human, and whether they are safe
    03:59 The trust stack, behavior signals, infrastructure clues, and identity or credential history
    10:19 The real shift with AI, lower barriers for attackers, plus the rise of agentic autonomy
    14:37 The cake story, an agent completes the task, then surprises you with a 750 dollar bill
    17:22 Bots talking to bots, where accountability and liability get messy fast
    24:18 Security builds trust, trust unlocks adoption, and society is already closer than it thinks

    A line you will remember
    “We have always operated on the notion that if you are human, you are good, and if you are a machine, you are bad. That is simply not the case anymore.”

    Practical ideas you can use
    • Add guardrails when you delegate to tools, especially budgets, limits, and approval steps
    • Watch for trust signals, not just identity checks, behavior plus infrastructure plus history beats any single data point
    • Design for visibility, show users what the system did and why, so trust can compound over time

    Follow:
    If this episode helped you think more clearly about trust, fraud, and agentic systems, follow the show, subscribe for more conversations like this, and share it with a teammate who is building in ads, ecommerce, identity, security, or AI.

    Trust but Verify, How Great Tech Leaders Delegate

    Play Episode Listen Later Jan 21, 2026 26:11


    Mek Stittri, CTO at Stuut, breaks down a leadership skill that sounds simple but gets messy fast: trust, then verify. You will learn how to delegate without losing control, how to stay close to the work without becoming a micromanager, and how AI is changing what it means to review and own technical outcomes.

    Key takeaways
    • Trust and verify starts with alignment, define success clearly, then keep a real line of sight to outcomes
    • Verification is not micromanagement, it is accountability, your team's results are your responsibility as a leader
    • Use lightweight mechanisms like weekly reports, and stay ready to answer questions three levels deep when speed matters
    • AI is pushing engineers toward system design and management skills, you will manage agents and outputs, not just code
    • Fast feedback prevents slow damage, address issues early, praise in public, give direct feedback in private

    Timestamped highlights
    00:41 Stuut in one minute, agentic AI for finance ops, starting with collections and faster cash outcomes
    01:54 Trust without verification becomes disconnect, why leaders still need to get close to the details
    03:42 The three levels deep idea, how to keep situational awareness without hovering
    06:33 The next five years, engineers managing teams of agents, system design as the differentiator
    11:40 Feedback as a gift, why speed and privacy matter when coaching
    16:54 The timing art, when to wait, when to jump in, using time and impact as your signal
    19:43 Two leaders who shaped Mek's leadership style, letting people struggle, learn, and then win
    23:29 Curiosity as the engine behind trust and verification

    A line worth repeating
    “Feedback is a blessing.”

    Practical coaching moves you can borrow
    • Set the bar up front, define the end goal and what good looks like
    • Build a steady cadence, short weekly updates beat occasional deep dives
    • Calibrate your involvement, give space early, step in when time passes or impact expands
    • Make feedback faster, smaller course corrections beat late big confrontations
    • Use AI as a reviewer, get quick context on unfamiliar code and decisions so you can ask better questions

    Call to action
    If you found this useful, follow the show and share it with a leader who is leveling up from IC to manager. For more leadership and hiring insights in tech, subscribe and connect with Amir on LinkedIn.

    Insurance is really just a big data problem

    Play Episode Listen Later Jan 20, 2026 23:24


    Michael Topol, Co-founder and Co-CEO at MGT Insurance, explains why insurance is quietly becoming one of the most interesting data and AI problems in tech. We get practical about turning messy legacy data into usable signals, how agentic tools change decision making, and why culture and team design matter as much as the models.

    MGT Insurance is building a fully verticalized AI and agentic native insurance company for small businesses, pairing experienced insurance operators with top tier technologists. Michael breaks down what changed in the last few years that makes real disruption possible now, and what modern product delivery looks like when prototyping is cheap and iteration is fast.

    Key takeaways
    • Insurance is a data business at its core, but most incumbents cannot use their data fast enough because it lives across silos, mainframes, and old systems.
    • Modern AI lets teams combine internal data with public signals to speed up underwriting and improve consistency, without losing human judgement.
    • Vibe coding and rapid prototyping collapse the gap between idea and implementation, bringing product, engineering, and the business closer together.
    • Senior talent gets more leverage in an AI driven workflow, and small teams can ship faster by focusing on problem solving, not just building.
    • Pod based teams, fixed outcome planning, and strong culture help regulated companies move quickly while staying inside the rules.

    Timestamped highlights
    00:44 What MGT Insurance is, and what “AI and agentic native” means in practice
    02:09 Why small business insurance matters more than most people realize
    06:06 The real blocker for incumbents, data exists but it is not usable
    08:55 Vibe coding in a regulated industry, where it helps first
    12:54 Requirements are shifting, prototypes bring teams closer to the real problem
    17:26 The pod structure, plus the Basecamp inspired approach to scoping and shipping
    20:52 Better, faster, cheaper, why AI finally makes all three possible
    22:11 Where to connect, and who they are hiring

    A line you will remember
    “Insurance is really just a big data problem.”

    Pro tips you can steal
    • Build cross functional pods early, include a domain expert, a technical product lead, and a senior engineer from day one.
    • Scope for outcomes, not perfect specs, then let the team decide the depth as they build.
    • Use AI to automate collection and synthesis, then keep humans focused on the decisions and trade offs.

    Call to action
    If you enjoyed this one, follow the show and share it with a builder who is trying to ship faster with a smaller team.

    How VCs Really Pick Winners in Open Source and AI

    Play Episode Listen Later Jan 19, 2026 25:54


    Marco DeMeireles, co-founder and managing partner at ANSA, breaks down how a modern VC firm wins by being focused, data driven, and allergic to hype. If you want a clearer view of how investors evaluate open source, mission critical industries, and AI categories, this is a practical, operator minded look behind the curtain.

    Marco explains ANSA's focus on what they call undercover markets, from open source and open core businesses to defense, intelligence, cybersecurity, healthcare IT, and infrastructure companies that become deeply embedded and rarely lose customers. We also get into how they raised their first fund, why portfolio concentration changes everything, and how they push founders toward efficiency and profitability without killing ambition.

    Key Takeaways
    • In open source, two things matter more than most people admit: founder DNA tied to the project, and what you put behind the paywall that enterprises will pay for
    • Concentration forces rigor, fewer bets means deeper diligence, clearer underwriting, and more hands on support post investment
    • Great early stage support is not just advice, it is people, capital planning, and operating help that changes outcomes
    • AI investing gets easier when you start with category selection, avoid fickle demand, then hunt for non obvious wedges in real workflows
    • Long term winners tend to show compounding growth, improving efficiency, real demand, durable business models, founder strength, and an asymmetric risk reward at the price

    Timestamped Highlights
    00:00 Marco's quick intro and what ANSA invests in
    00:36 Undercover markets, open source, and mission critical industries explained
    01:54 The two open source filters that change how ANSA underwrites a deal
    03:31 Why open source can work in defense, plus the Defense Unicorns example
    05:29 How a new firm raises a first fund, and what the right LP partners look for
    10:50 The three levers ANSA pulls with founders: people, capital, operations
    15:22 Marco's six part framework for evaluating investments
    17:39 How to tell who wins in crowded AI categories, and why niche wedges matter
    21:41 The first investment they will never forget, and the air gapped cloud problem

    A line worth stealing
    “You can't outsource greatness. You can't outsource people selection.”

    Pro Tips
    • If you are building open source, be intentional about what is free versus paid, security, compliance, and auditability tend to earn real pricing power
    • If your business depends on paid acquisition, test a path to organic growth early, it can unlock profitability and give you leverage in fundraising and exits
    • In crowded AI spaces, pick a wedge where documentation is heavy, complexity is low, and ROI is obvious, then expand once you own that lane

    Call to Action
    If this episode helped you think more clearly about investing and building, follow the show, subscribe, and share it with one founder or operator who is navigating funding, pricing, or go to market right now.

    AI That Actually Improves Customer Experience

    Play Episode Listen Later Jan 16, 2026 28:51


    AI is everywhere, but most teams are stuck talking about efficiency and headcount. In this episode, Dave Edelman, executive advisor and best selling author, shares a sharper lens, how to use AI to create real customer value and real growth.

    We get into the high road vs low road of AI, what personalization should look like now, and why data has to become an enterprise asset, not a bunch of disconnected departmental files.

    Key Takeaways
    • Efficiency is table stakes, the real win is using AI to build new experiences that customers actually want
    • Start with customer friction, find the biggest compromises and frustrations in your category, then design around that
    • Personalization is no longer limited by content scale in the same way, AI changes the economics of tailoring experiences
    • You do not always need one giant database, modern tools can pull and connect data across systems in real time
    • Treat data as an enterprise resource, getting cross functional alignment is often the hardest and most important step

    Timestamped Highlights
    • 00:46 Dave's origin story, from early loyalty programs to Segment of One marketing
    • 03:33 The high road and low road of AI, growth experiences vs spam at scale
    • 06:51 Where to start, map the biggest customer frustrations, then build use cases from there
    • 16:31 The data myth, why you may not need a single mega database to get value from AI
    • 21:31 Data as a leadership problem, shifting from functional ownership to enterprise ownership
    • 25:14 Strategy that actually sticks, balancing bottom up automation with top down customer led direction

    A line worth stealing
    “Use those efficiencies to invest in growth.”

    Pro Tips you can apply this week
    • List the top five customer frustrations in your category, pick one and design an AI powered fix that removes a compromise
    • Audit your data reality, identify where the same customer facts live in multiple places, then decide what must be unified first
    • Run a simple test and learn loop, create multiple variations of one experience, measure what works, and keep iterating
    • Put strategy on the calendar, make room for a recurring discussion that is not just metrics and cost cutting

    Call to Action
    If this episode helped you think differently about AI and growth, follow the show, leave a quick rating, and share it with one operator who is building product, data, or customer experience right now.

    The New Go To Market Playbook

    Play Episode Listen Later Jan 15, 2026 25:03


    Amanda Kahlow, CEO and founder of 1Mind, joins Amir to break down what AI changes in modern sales and go to market, and what it does not. If you lead revenue, product, or growth, this is a practical look at where AI creates leverage today, where humans still matter, and how teams actually adopt it without chaos.

    Amanda shares how “go to market superhumans” can handle everything from early buyer conversations to demos, sales engineering support, and customer success. They also dig into trust, hallucinations, and why the bar for AI feels higher than the bar for people.

    Key takeaways
    • Most buyers want answers early, without the pressure that comes with talking to a salesperson
    • AI can remove friction by turning static content into a two way conversation that helps buyers move faster
    • The hardest part of adoption is not capability, it is change management and trust inside the team
    • Humans still shine in relationship and nuance, but AI can outperform on recall, depth, and real time access to the right info
    • As AI levels the selling experience, product quality matters more, and the best product has a clearer path to win

    Timestamped highlights
    00:31 What 1Mind builds, and what “go to market superhumans” actually do across the full buyer journey
    02:00 The buyer lens, why early conversations matter, and how AI gives control back to the buyer
    06:14 Why the SDR experience is frustrating for buyers, and where AI can improve both sides
    09:42 Change management in the real world, why “everyone build an agent” gets messy fast
    13:04 Why “swivel chair” AI fails, and what real time help should look like in live conversations
    15:52 Hallucinations and trust, plus the blunt question every leader should ask about human error
    22:26 Competitive advantage today, and why adoption eventually pushes markets toward “best product wins”

    A line worth sharing
    “Do your humans hallucinate, and how often do they do it?”

    Pro tips you can use this week
    • Start with low stakes usage, bring AI into calls quietly, then ask it for a summary and what you missed
    • Build adoption top down, define what good looks like, otherwise you get a pile of similar agents and no clarity
    • Focus AI on what it does best first, recall, context, and instant answers, then expand into workflow and process later

    Call to action
    If this episode sparked ideas for your sales team or your product led funnel, follow the show so you do not miss the next one. Share it with one revenue leader who is trying to modernize their go to market motion, and connect with Amir on LinkedIn for more clips and operator level takes.

    Students Run This 100M Venture Fund

    Play Episode Listen Later Jan 14, 2026 30:12


    What if the best people on your investing team are still in college? Peter Harris, Partner at University Growth Fund, breaks down how they run a roughly 100 million dollar venture fund with 50 to 60 students doing real diligence, real founder calls, and real deal work.

    You will hear how their student led model stays disciplined with checks and balances, why repeat games matter in venture and in business, and how this approach creates a flywheel that helps founders, investors, and the next generation of operators win together.

    Key Takeaways
    • Student led does not mean unstructured, the process is built around clear stages, data room access, investment memos, student votes, and an advisory style investment committee, with final fiduciary responsibility held by the partners
    • Real autonomy is the unlock, when interns are trusted with meaningful work, the best ones level up fast and start leading teams, not just supporting them
    • The goal is win win win outcomes, founders get capital plus a high effort support network, investors get disciplined underwriting, students get experience that compounds into career leverage
    • Repeat games beat short term incentives, the alumni network becomes a long term advantage, bringing the fund into high quality opportunities years later
    • Mistakes are inevitable, the difference is containment and systems, avoiding errors big enough to break trust, then building process improvements so they do not repeat

    Timestamped Highlights
    00:32 A 100 million dollar fund powered by 50 to 60 students, and what empowered really means
    01:43 The decision path, from founder screen to student memo to student vote to the advisory investment committee
    06:44 Why most venture internships underdeliver, and how longer tenures change outcomes
    10:37 Repeat games and the trust flywheel, how former students now pull the fund into top tier deals
    13:55 What happens when something goes wrong, damage control, learning loops, and confidentiality as a core discipline
    24:39 The bigger vision, expanding beyond venture into additional asset classes to create more student opportunities

    A line worth stealing
    “If you give people real autonomy, they'll surprise you with what they do.”

    Pro Tips
    • If you are building an internship program, start by deciding what real ownership means, then build guardrails around it, not the other way around
    • Treat trust like an asset, design your process so every stakeholder wants to work with you again

    Call to Action
    If you enjoyed this one, follow The Tech Trek and share it with a founder, operator, or student who cares about building real advantage through talent and process.

    Remote Surgical Robotics Is Coming Faster Than You Think

    Play Episode Listen Later Jan 13, 2026 24:34


    Yulun Wang, executive chairman and co-founder at Sovato Health, joins Amir Bormand to unpack the next wave after telemedicine, procedural care at a distance. If you have ever wondered what it would take for a top surgeon to operate without being in the same room, this conversation gets practical fast, from the real bottlenecks inside operating rooms to the health system changes required to make remote robotics mainstream.

    Key takeaways
    • Better care can actually cost less when the right expertise reaches the right patient at the right time
    • Telemedicine is already normalized, which sets the stage for faster adoption of remote procedures once infrastructure and workflows catch up
    • Surgical robots already have two sides, the surgeon console and the patient side, today connected by a short cable, the leap is making that connection work reliably across hundreds or thousands of miles
    • Volume drives proficiency, the outcomes gap between high volume specialists and low volume settings is one of the biggest reasons access matters
    • Operating rooms spend more than half their time on steps around surgery, which creates room to dramatically increase surgeon throughput when workflows are redesigned

    Timestamped highlights
    • 00:42 What Sovato Health is building, bringing procedural expertise to patients without requiring travel
    • 02:10 The early days of surgical robotics and the transatlantic gallbladder surgery on September 7, 2001
    • 05:30 The counterintuitive idea, higher quality care can reduce total cost in healthcare
    • 10:27 What actually changes for patients, local hospitals stay the destination, expertise becomes the thing that travels
    • 14:57 Why repetition matters, the first question patients ask is still the right one
    • 17:53 Inside the operating room schedule, where time is really spent and why productivity can jump

    A line that sticks
    “Healthcare is different, higher quality, if done right, costs less.”

    Practical angles you can steal
    • If you are building in regulated industries, adoption is rarely about the tech alone, it is about trust, workflows, and incentives
    • If you sell into health systems, position the value around system level outcomes, access, quality, and margin improvement, not just novelty
    • If you are designing new workflows, look for the hidden capacity, the biggest gains often sit outside the core task

    Call to action
    If you want more conversations like this at the intersection of tech, systems, and real world impact, follow The Tech Trek on Apple Podcasts and Spotify.

    From AI Pilot to Production

    Play Episode Listen Later Jan 12, 2026 28:58


    Moiz Kohari, VP of Enterprise AI and Data Intelligence at DDN, breaks down what it actually takes to get AI into production and keep it there. If your org is stuck in pilot mode, this conversation will help you spot the real blockers, from trust and hallucinations to data architecture and GPU bottlenecks.

    Key takeaways
    • GenAI success in the enterprise is less about the demo and more about trust, accuracy, and knowing when the system should say “I don't know.”
    • “Operationalizing” usually fails at the handoff, when humans stay permanently in the loop and the business never captures the full benefit.
    • Data architecture is the multiplier. If your data is siloed, slow, or hard to access safely, your AI roadmap stalls, no matter how good your models are.
    • GPU spend is only worth it if your pipelines can feed the GPUs fast enough. A lot of teams are IO bound, so utilization stays low and budgets get burned.
    • The real win is better decisions, faster. Moving from end of day batch thinking to intraday intelligence can change risk, margin, and response time in major ways.

    Timestamped highlights
    00:35 What DDN does, and why data velocity matters when GPUs are the pricey line item
    02:12 AI vs GenAI in the enterprise, and why “taking the human out” is where value shows up
    08:43 Hallucinations, trust, and why “always answering” creates real production risk
    12:00 What teams do with the speed gains, and why faster delivery shifts you toward harder problems
    12:58 From hours to minutes, how GPU acceleration changes intraday risk and decision making in finance
    20:16 Data architecture choices, POSIX vs object storage, and why your IO layer can make or break AI readiness

    A line worth stealing
    “Speed is great, but trust is the frontier. If your system can't admit what it doesn't know, production is where the project stops.”

    Pro tips you can apply this week
    • Pick one workflow where the output can be checked quickly, then design the path from pilot to production up front, including who approves what and how exceptions get handled.
    • Audit your bottleneck before you buy more compute. If your GPUs are waiting on data, fix storage, networking, and pipeline throughput first.
    • Build “confidence behavior” into the system. Decide when it should answer, when it should cite, and when it should escalate to a human.

    Call to action
    If you got value from this one, follow the show and turn on notifications so you do not miss the next episode.

    The Right Way to Lead in Your First 90 Days

    Play Episode Listen Later Jan 9, 2026 20:35


    New leaders face a choice fast. Do you adapt to the organization you inherit, or reshape it around the way you lead?

    In this conversation, Amir sits down with Gian Perrone, engineering leader at Nav, to unpack how org design really works in the first 30 to 120 days, and how to drive change without spiking anxiety or losing trust. You will hear how Gian treats leadership as triage, why “listen and learn” is rarely passive, and what separates a thoughtful reorg from one that feels chaotic.

    Key takeaways
    • Leaders almost always arrive with hypotheses, the real work is testing them without rushing to force a playbook
    • A reorg is not automatically bad, perception turns negative when the why is unclear and people feel unsafe
    • Over communicating helps, but thinking out loud too often can create noise, a structured comms plan keeps change steady
    • A simple way to spot a collaborative culture is to disagree in the interview and see how they respond
    • Managers are the front line in change, set clear expectations so teams hear a consistent story about what is changing and why

    Timestamped highlights
    00:01 What Nav does, and the real question behind org design for new leaders
    01:59 Why “first 90 days” is usually triage, not passive observation
    04:14 The reorg stopwatch, and why structure reflects your worldview
    08:36 How to communicate change without destabilizing teams
    12:54 A practical interview move to test whether a company truly collaborates
    17:03 The manager layer, how Gian sets expectations so change lands well

    A line worth repeating
    “If you arrive and something is on fire, you are going to fix it.”

    A few practical moves worth stealing
    • When you are new, write down your hypotheses early, then use real signals to confirm or kill them
    • Float a change as a real idea first, gather feedback, then come back with details before you finalize
    • Create a simple comms map of who hears what, when, and from whom, then follow it
    • Be matter of fact about changes, teams often mirror the tone you set

    Call to action
    If this episode helped you think more clearly about leadership and org design, follow the show and share it with one operator who is navigating change right now.

    How to Ship AI Agents Fast Without Breaking Everything

    Play Episode Listen Later Jan 8, 2026 28:11


    Nir Soudry, Head of R&D at 7AI, breaks down how teams can move from early experimentation to real production work fast, without shipping chaos. If you are building AI features or agent workflows, this conversation is a practical look at speed, safety, and what it actually takes to earn customer trust.

    Nir shares how 7AI ships in tight loops with a real customer in mind, why pushing decisions closer to the engineers removes bottlenecks, and how guardrails and evaluation keep fast releases from turning into security risks. You will also hear a grounded take on human plus AI collaboration, and why "just hook up an LLM" falls apart at scale.

    Key takeaways
    • Speed starts with focus: pick one customer, ship something usable in two or three weeks, then iterate every couple of weeks based on real feedback
    • If you want velocity, remove the meeting chain, get engineers in the room with customers, and push decisions downstream
    • Agent workflows are not automatically testable; you need a scoped blast radius, strong input and output guardrails, and an evaluation plan that matches real production complexity
    • "LLM as a judge" helps, but it is not magic; you still need humans reviewing, labeling, and tuning, especially once you have multi-step workflows
    • In security, trust is earned through side-by-side proof: run a real pilot against human outcomes, measure accuracy and thoroughness, then improve with tight feedback loops

    Timestamped highlights
    00:28 What 7AI is building, security alert fatigue, and why minutes matter
    02:03 A fast shipping cadence: one customer, quick prototypes, rapid iterations
    03:51 The velocity playbook: engineers plus sales in the same meetings, fewer bottlenecks
    08:08 Shipping agents safely: blast radius, guardrails, and why testing is still hard
    14:37 Human plus AI in practice: how ideas become working agents with review and monitoring
    18:04 Why early AI adoption works for some customers, and how pilots build confidence
    24:12 The startup reality: faster execution, traction, and why hiring still matters

    A line worth sharing
    "When it's wrong, click a button, and next time it will be better."

    Pro tips you can steal
    • Run a two-to-four-week pilot with one real customer and ship weekly; the goal is learning speed, not perfect coverage
    • Put engineers directly in customer conversations; keep leadership focused on unblocking, not gatekeeping
    • Treat every agent like a product surface: define strict inputs and outputs, sanitize both, and limit what it can affect
    • Build evaluation around real workflows, not single prompts, and combine automated checks with human review
    • Add feedback buttons everywhere, and route feedback to both model improvement and the team that tunes production behavior

    Call to action
    If you want more conversations like this on building real tech that ships, follow and subscribe to The Tech Trek.
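    The "treat every agent like a product surface" tip can be sketched in miniature: validate inputs, keep the blast radius to an allowlist of actions, and check outputs before they take effect. This is an illustrative pattern only, not 7AI's actual implementation; the field names and actions are invented for the sketch.

```python
# Illustrative agent guardrails: strict inputs, allowlisted actions.
# Field names and actions are hypothetical, invented for this sketch.

ALLOWED_ACTIONS = {"annotate_alert", "close_false_positive"}  # scoped blast radius

def sanitize_input(alert: dict) -> dict:
    """Reject unexpected fields so untrusted payloads cannot smuggle instructions."""
    expected = {"id", "severity", "description"}
    unknown = set(alert) - expected
    if unknown:
        raise ValueError(f"unexpected fields: {unknown}")
    return alert

def guard_output(action: str) -> str:
    """Refuse any action the agent proposes that is outside the allowlist."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"action {action!r} is not permitted")
    return action
```

    The point of the sketch is the shape: the agent never acts directly, it proposes, and a deterministic layer on both sides decides what gets in and what gets out.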

    Why Pricing Breaks as You Scale

    Jan 7, 2026 · 27:13


    B2B pricing is still way harder than it should be, even in 2026. In this conversation, Tina Kung, Founder and CTO at Nue.ai, breaks down why quote-to-revenue can take weeks, and how a flexible pricing engine can turn it into something closer to one click.

    You will hear how fast-changing pricing models, AI-driven products, and new selling motions are forcing revenue teams to rethink the entire system, not just one tool in the stack.

    Key takeaways
    • B2B quoting is basically a shopping cart, but the real complexity is cross-team workflow, accounting controls, and downstream revenue rules
    • Fragmented systems break the moment pricing changes, and in fast markets that can mean you only get one real pricing change per year
    • AI companies often evolve from simple subscriptions to usage, services, and even physical goods, which creates billing chaos without a unified backbone
    • Commit-based models can make revenue more predictable while staying flexible for customers, but only if you can track entitlement, burn-down, overspend, and approvals cleanly
    • The most useful AI in revenue ops is not just insight, it is action, meaning it can generate the right transaction safely inside a system of record

    Timestamped highlights
    00:43 What Nue.ai actually does: one platform for billing, usage, and revenue ops with intelligence on top
    02:43 Why a one-minute checkout in B2C turns into weeks or months in B2B
    05:28 The real reason quote-to-revenue stays broken: fragmentation and brittle integrations
    08:03 How AI-era pricing evolves: subscriptions to consumption, services, and physical goods
    12:51 Why Tina designed for flexibility from day one, and what 70-plus customer calls revealed
    19:42 Transactional intelligence: AI that can create the quote, route approvals, and move revenue work forward

    A line worth keeping
    "It should be as easy as one click."

    Practical moves you can steal
    • Map every pricing change to the downstream work it triggers (quoting, billing, revenue recognition, and approvals), then measure how many handoffs exist today
    • If you sell both self-serve and enterprise, design for multiple selling motions early, because the same objects can have totally different context and risk
    • Treat pricing as a product surface; if your systems make changes slow, you are giving up speed in the market

    Call to action
    If you want more conversations like this on how modern tech companies actually operate, follow the show on Apple Podcasts or Spotify, and connect with me on LinkedIn for clips and episode takeaways.

    Physical AI in Farming, Autonomy That Actually Pays Off

    Jan 6, 2026 · 26:51


    Tim Bucher, CEO and cofounder of Agtonomy, joins Amir to break down what physical AI looks like when it leaves the lab and shows up on the farm. Tim shares how his sixth-generation farming roots and a lucky intro computer science class led to a career that included Microsoft, Apple, and Dell, then back into agriculture with a mission that hits the real world fast.

    This conversation is about building tech that earns its keep, delivers clear ROI, and improves quality of life for the people who keep the food supply moving.

    Key takeaways
    • Deep domain experience is a real advantage, especially in ag tech; you cannot fake the last mile of operations
    • The win is ROI first, but quality of life is right behind it: less stress, more time, and fewer dangerous moments on the job
    • Agtonomy focuses on autonomy software inside existing equipment ecosystems, not building tractors from scratch, because service networks and financing matter
    • One operator can run multiple vehicles, shifting the role from tractor driver to tech-enabled fleet operator
    • Hiring can change when the work changes; some farms started attracting younger candidates by posting roles like "ag tech operator"

    Timestamped highlights
    00:42 What Agtonomy does: physical AI for off-road equipment like tractors
    01:45 Tim's origin story: sixth-generation farming roots and the class that changed his path
    03:59 Lessons from Bill Gates, Steve Jobs, and Michael Dell, and how Tim filtered the mantras into his own leadership
    05:53 The moment everything shifted: labor pressure, regulations, and the prototype built to save his own farm
    09:17 The blunt advice for ag tech founders: if you do not have a farmer on the team, fix that
    11:54 ROI in plain terms: one person operating a fleet from a phone or tablet
    14:29 Why Agtonomy partners with equipment manufacturers instead of building new vehicles: dealers, parts, service, and financing are the backbone
    17:39 The overlooked benefit: quality of life, reduced stress, and a more resilient food supply chain
    20:18 How farms started hiring differently: "ag tech operator" roles and even "video game experience" as a signal

    A line that stuck with me
    "This is not just for Trattori farms. This is for the whole world. Let's go save the world."

    Pro tips you can actually use
    • If you are building in a physical industry, hire a real operator early, not just advisors; get someone who lives the workflow
    • Write job posts that match the modern workflow; if the work is screen based, label it that way and recruit for it
    • Design onboarding around familiar tools; if your UI feels like a phone app, training time can collapse

    Call to action
    If you got value from this one, follow the show and share it with a builder who cares about real-world impact. For more conversations like this, subscribe and connect with Amir on LinkedIn.

    The Simple Framework to Pick AI Projects That Actually Pay Off

    Jan 5, 2026 · 22:43


    Data and AI are everywhere right now, but most teams are still guessing where to start. In this episode, Cameran Hetrick, VP of Data and Insights at BetterUp, breaks down what actually works when you move from AI hype to real business impact. You will hear a practical way to choose AI and analytics projects, how to spot low-risk wins, and why clean, governed data still decides what is possible. Cameran also shares a simple mindset shift: stop copying broken workflows, and start rethinking the outcome you are trying to create.

    Key Takeaways
    • AI is a catchall term right now; the best early wins usually come from "assist" use cases that boost speed and quality, not full replacement
    • Start with low-context, low-complexity work, then earn your way into higher-context projects as data quality and governance mature
    • Pick use cases with an impact-versus-effort lens; quick wins create proof, buy-in, and budget for bigger bets
    • Stakeholders often ask for a data point or feature, but the real value comes from digging into the goal and redesigning the workflow
    • Data teams cannot stop at insights; adoption matters, and if the next team cannot act on the output, the project stalls

    Timestamped Highlights
    00:40 BetterUp's mission: building a human transformation platform for peak performance
    01:57 AI as a "catchall": where expectations are realistic, and where they are not
    05:19 A useful way to think about AI work: context versus complexity, and why "intern level" framing helps
    07:33 How to choose projects with an impact and level-of-effort calculator, and why trust in data is everything
    10:33 The hard part: translating stakeholder requests into real outcomes, and reimagining workflows instead of automating bad ones
    13:47 Systems thinking across handoffs, plus why teams need deeper business fluency, including P&L basics
    16:59 The last-mile problem: if the next stakeholder cannot act, the value never lands
    20:27 The bottom line: AI does not change the fundamentals, it accelerates them

    A Line Worth Saving
    "AI is like an intern, it still needs direction from somebody who understands the mechanics of the business."

    Practical Moves You Can Use
    • Run every idea through two quick questions: what business impact do we expect, and what level of effort will it take
    • Look for a win you can explain in one minute, then use it to fund the harder work
    • When someone asks for a metric or feature, ask why twice, then validate the workflow, then redesign the outcome
    • Invest in governed data early; untrusted outputs kill adoption fast

    Call to Action
    If this episode helped you think more clearly about AI in the real world, follow the show, leave a quick review, and share it with one operator who is trying to move from experiments to impact. You can also follow Amir on LinkedIn for more clips and practical notes from each episode.
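    An impact-versus-effort calculator of the kind described here can be as simple as scoring each candidate project on two axes and sorting by the ratio. A minimal sketch; the project names and 1-to-5 scores are invented for illustration, not from the episode:

```python
# Minimal impact-vs-effort prioritizer. Projects and scores are hypothetical.

def prioritize(projects):
    """Sort (name, impact, effort) tuples by impact-to-effort ratio, highest first.

    Impact and effort are on a 1-5 scale; quick wins bubble to the top.
    """
    return sorted(projects, key=lambda p: p[1] / p[2], reverse=True)

ideas = [
    ("rebuild data warehouse", 5, 5),
    ("auto-draft weekly report", 3, 1),   # a low-effort "assist" win
    ("full support-bot replacement", 4, 4),
]
ranked = prioritize(ideas)
```

    The ranking is deliberately crude; the value is forcing the two questions (expected impact, level of effort) to be answered for every idea before any of them get built.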

    How To Hire Outlier Software Engineers

    Dec 30, 2025 · 21:48


    Yogi Goel, cofounder and CEO of Maxima AI, breaks down how he hires outlier talent: people who think like future founders and thrive when the plan changes fast. We get practical on what to look for beyond pedigree, how to assess it without relying on easy resume signals, and how culture scales when your team doubles.

    Yogi also shares what Maxima AI is building, an agentic platform for enterprise accounting that automates day-to-day operations and month-end work, and why the best teams win by pairing speed with real ownership.

    Key takeaways
    • Outlier candidates often look "non-standard" on paper; the signal is founder mentality, fast thinking, grit, and a point to prove
    • Hiring gets easier when it is always on; keep a living bench of great people long before you have a headcount
    • Use long-form conversations to assess how someone thinks, not just what they have done; ask for their life story and listen for the choices they highlight
    • Train the specifics, but set a baseline for domain aptitude, then coach the narrow parts once the fundamentals are there
    • Culture scales through leaders and through what you reward and penalize, not through posters and slogans

    Timestamped highlights
    00:39 What Maxima AI does and the real value of agentic accounting
    01:38 Defining an outlier candidate as a future founder, and why school matters less than you think
    07:34 The conveyor-belt approach to recruiting: building an inventory of great people before you need them
    11:35 Where to draw the line on training: test for general aptitude, coach the specifics
    14:20 How diverse teams disagree productively: bring evidence, run small bets, then double down or pivot
    18:25 Scaling culture with values-driven leaders, and the simple rule of reward versus penalty

    A line worth keeping
    "Culture is two things, what you reward and what you penalize."

    Pro tips you can steal
    • Keep a short list of the best people you have ever met for each function, and update it constantly
    • Ask candidates for their journey from day zero, then pay attention to what they choose to emphasize
    • When the team disagrees, grab quick evidence (customer texts, small pulse checks), then place a small bet that will not kill the company
    • Expect great people to want autonomy and scope; manage like a mentor, not a hovercraft

    Call to action
    If this episode helped you rethink hiring, share it with a founder or engineering leader who is building a team right now. Follow the show for more conversations on people, impact, and technology, and connect with Yogi Goel on LinkedIn by searching his name and Maxima AI.

    From Big Tech to Startup Founder, What Changes Fast

    Dec 29, 2025 · 26:03


    Chandan Lodha, Co-founder at CoinTracker, joins Amir Bormand to unpack the real shift from big tech to building your own company. From Harvard to Google to Y Combinator, Chandan shares what pushed him to take the leap, how he found the right idea, and what he had to unlearn to lead at startup speed.

    This conversation is for builders and leaders who want to grow faster, ship faster, and build teams that can actually execute.

    Key Takeaways
    • The early-career advantage is learning velocity; optimize for environments that stretch you fast
    • Managing the business is rarely the hardest part; people problems scale with headcount
    • Big-company habits can break you at a startup, especially around distribution, speed, and getting your first users
    • YC helped most through peer proximity: being surrounded by real users and founders who move quickly
    • Founder growth is a system; use feedback loops like reviews, 360 input, and personal goal tracking

    Timestamped Highlights
    00:00 From Harvard and Google to founder mode: what made him leave the safe path
    00:35 CoinTracker in plain English: crypto taxes and accounting for individuals and businesses
    03:32 Leap first, think later: the messy six-month search for a real idea
    05:00 Runway reality: setting a 12-to-18-month window to figure it out
    06:09 Crypto skepticism to conviction: reading the Bitcoin white paper changed his frame
    10:05 Leadership lessons at 100 people: why people issues become the main work
    14:43 Y Combinator benefits: users everywhere and a practical playbook for early company building
    17:55 Personal growth systems: performance feedback and personal OKRs, plus changing your mind on three issues each year
    21:04 Becoming a new parent: structure, efficiency, and cutting non-essentials
    23:24 The two skills to build before you leap: building and selling

    A line worth keeping
    "Managing the business is easy, managing people is hard."

    Pro Tips
    • Set a real runway window, then use it to iterate hard with users every week
    • Expect to unlearn big-company instincts; distribution and speed do not come for free
    • Build a feedback cadence for yourself, not just your team; reviews and 360 input can surface blind spots
    • Practice building and selling in small side projects now; those skills compound in any startup

    Call to Action
    If this episode helped you think differently about leadership and the founder path, follow The Tech Trek on Apple Podcasts or Spotify, and share it with one person who is building or thinking about making the leap.

    Engineering for EBITDA and the Private Equity Playbook

    Dec 23, 2025 · 32:03


    Joel Dolisy, CTO at WellSky, joins the podcast to reveal why organizational design is the ultimate "operating system" for scaling tech companies. This conversation is a deep dive into how engineering leaders must adapt their strategies when moving between the hyper-growth of venture capital and the disciplined profitability of private equity.

    Building a high-performing team is about much more than just hiring. Joel explains the necessity of maximizing the "multiplier effect," where the collective output far exceeds the sum of individual parts. We explore the pragmatic reality of digital transformation, the "art" of timing disruptive technology adoption like generative AI, and how to use the Three Horizons framework to keep your core business stable while chasing the next big innovation. Whether you are leading a team of ten or an organization of hundreds, these insights on design principles and leadership context are essential for navigating the complexities of modern software delivery.

    Core Insights
    • Shifting the perspective of software from a cost center to a core growth enabler is the fundamental requirement for any company aiming to be a true innovator.
    • Private equity environments require a specialized leadership approach because the "hold period" clock dictates when to prioritize aggressive growth versus EBITDA margin acceleration.
    • Scaling successfully requires a "skeleton" of design principles, such as maintaining team sizes around eight people to ensure optimal communication flow and minimize overhead.
    • The most critical role of a senior leader is providing constant context to the engineering org, ensuring teams understand the "why" behind shifting constraints as the company matures.

    Timestamped Highlights
    01:12 Defining the broad remit of a CTO, from infrastructure and security to the unusual addition of UX
    04:44 Treating your organizational structure as a living operating system that must be upgraded as you grow
    10:07 Why innovation must include internal efficiency gains to free up resources for new revenue streams
    15:01 Navigating the massive waves of disruption, from the internet to mobile and now large language models
    23:11 The tactical differences in funding engineering efforts during a five-to-seven-year private equity hold period
    28:57 Applying Team Topologies to create clear responsibilities across platform, feature, and enablement teams

    Words to Lead By
    "You are trying to optimize what a set of people can do together to create bigger and greater things than the sum of the individual parts there."

    Expert Tactics for Tech Leaders
    When evaluating new technology like AI, Joel suggests looking at "adoption curve compression." Unlike the mid-nineties, when businesses had a decade to figure out the internet, the window to integrate modern disruptors is shrinking. Leaders should use the Three Horizons framework to move dollars from the core business (Horizon 1) to speculative innovation (Horizon 3) without making knee-jerk reactions based solely on hype.

    Join the Conversation
    If you found these insights on organizational design helpful, please subscribe to the show on your favorite platform and share this episode with a fellow engineering leader. You can also connect with Joel Dolisy on LinkedIn to keep up with his latest thoughts on healthcare technology and leadership.

    Why Your AI Strategy Will Fail Without A Business Plan

    Dec 22, 2025 · 24:20


    Stop chasing shiny objects and start driving real business outcomes. Marathon Health CTO Venkat Chittoor joins the show to explain why AI is the ultimate enabler for digital transformation, but only when it is anchored by a rock-solid business strategy.

    Essential Insights for Tech Leaders
    • AI is not a standalone strategy; it is a powerful tool to accelerate a pre-existing business North Star.
    • Success in digital transformation follows a specific maturity curve: start with personal productivity, move to replacing mundane tasks, and eventually aim for cognitive automation.
    • Governance must come before experimentation; establishing guardrails for data privacy is critical before launching any AI pilot.
    • Measure value through tangible efficiency gains. In healthcare, this means reducing administrative burden or "pajama time" so providers can focus on patient care.
    • Don't let marketing speak fool you; always validate vendor claims against your specific industry use cases.

    Timestamped Highlights
    00:50 Defining advanced primary care and the mission of Marathon Health
    02:44 Why AI strategy is useless without a defined business strategy
    05:01 The three steps of AI adoption, from productivity to cognition
    12:14 How to define success metrics for a pilot versus a scaled V1 solution
    16:40 Real-world ROI, including call deflections and charting efficiency
    21:43 Advice for leaders on data quality and avoiding vendor traps

    A Perspective to Carry
    "AI is actually enabling [efficiency], but without a solid business strategy, AI strategy is not useful."

    Tactical Advice for the Field
    When launching an AI initiative, focus heavily on the underlying data quality. Ensure your team accounts for data recency, accuracy, and potential biases, as these factors determine whether an experiment succeeds or fails. Start small with pilots to build muscle memory before attempting to scale complex systems.

    Join the Conversation
    If you found these insights helpful, subscribe to the podcast for more deep dives into the tech landscape. You can also connect with Venkat Chittoor on LinkedIn to follow his work in healthcare innovation.

    Data Governance for Growth: Moving Beyond Compliance

    Dec 19, 2025 · 20:49


    Stop treating data governance as a "data cop" function and start using it as a high-ROI offensive weapon. In this episode, Peter Kapur, Head of Data Governance and Data Quality at CarMax, breaks down how to move beyond defensive compliance to drive profitability, customer experience, and better data science outcomes.

    Critical Insights for Leaders
    • Shift from defense to offense. Data defense covers the mandatory regulatory and legal requirements like privacy and cybersecurity. Data offense involves everything else that hits your bottom line, such as investing in data quality to save or make money.
    • Prioritize problems over frameworks. Avoid bringing rigid policies and "data geek" terminology to business leaders. Instead, spend time listening to their specific data struggles and apply governance capabilities as solutions to those problems.
    • Data quality makes governance tangible. Without high-quality data, governance is just a collection of abstract policies. Improving data quality empowers data scientists to produce better models and gives analytics teams the ability to discover and trust their data.

    Key Moments in the Conversation
    02:41 Defining the clear line between defensive regulation and offensive growth
    06:03 Why data quality and data governance must sit together to be effective
    11:00 Shifting from "data school" to "business school" to communicate value
    13:12 Quantifying the ROI of data governance through customer wins and time savings
    18:35 Actionable advice for starting an offensive strategy from scratch

    Wisdom from the Episode
    "If we meet the laws, we meet the regulations, we meet the legal, how do we leverage our data? It is a mindset shift versus, let me lock my data down, no one use it."

    Tactical Advice for Implementation
    • Ensure adoption through personalization. Design tools and processes that are personalized to specific roles so they feel like a natural part of the workflow rather than a burden.
    • Focus on the eye of the consumer. Treat every person in the organization as a "data citizen" and remember that data quality is ultimately defined by the needs of the people consuming it.

    Join the Conversation
    Subscribe to the podcast on your favorite platform to catch every episode. Follow us on LinkedIn to stay updated on the latest trends in data leadership.

    Stop Pushing Products and Start Predicting Intent

    Dec 18, 2025 · 27:06


    Afrooz Ansaripour, Director of Data Science at Walmart, joins the show to explain how global leaders are shifting from simple historical tracking to predicting psychological triggers and customer intent. This episode explores the evolution of customer intelligence and how generative AI is turning massive data sets into personalized, value-driven experiences. Listeners will learn how to balance hyper-personalization with foundational privacy to build lasting consumer trust.

    Key Insights
    • Predict intent rather than just reporting past transactions to understand why a customer is with the brand.
    • Use generative AI as an explainability layer to transform complex data platforms from black boxes into conversational tools.
    • Prioritize customer trust as a critical part of the user experience rather than just a legal requirement.
    • Integrate digital and physical signals to create a 360-degree view that reveals insights which would otherwise be invisible.
    • Focus on rapid technology adoption and curiosity as the primary drivers of success in modern AI teams.

    Timestamped Highlights
    01:51 Identifying the challenges and opportunities when managing millions of real-time signals
    06:43 Strategies for showing genuine value to the customer without making them feel like just part of a sale
    09:51 How LLMs are fundamentally changing the way data teams interpret unstructured feedback and behavioral patterns
    14:42 Managing privacy and ethical data practices while building personalized conversational AI
    19:14 Stitching together the online and offline journey to create a seamless customer experience
    22:52 The necessary evolution of data science skills toward storytelling and execution bias

    A Powerful Thought
    "Personalization should never come at the expense of customer trust."

    Tactical Steps
    • Combat the garbage-in, garbage-out problem by refining cleaning processes to handle modern AI requirements.
    • Build an interactive layer or chatbot on top of data products to make insights instantly accessible and automated.
    • Translate technical insights into real-world decisions to ensure customers actually benefit from data models.

    Next Steps
    Subscribe to the show for more insights into the future of tech. Share this episode with a peer who is currently navigating the complexities of customer data.

    The Real Bottleneck in Healthcare AI Is Data Access

    Dec 17, 2025 · 35:30


    Shahryar Qadri, CTO of OneImaging, joins me to unpack a hard truth about healthcare tech: the goal is not to remove humans, it is to give them more room to be human.

    We talk about where cost "optimization" actually helps patients, why radiology is a perfect fit for AI but still held back by data access, and how better workflows can improve trust, speed, and outcomes without losing the human touch.

    OneImaging sits in the radiology benefits space, helping members book imaging in a national network with more transparency and a high-touch booking experience, while helping employers cut imaging costs significantly.

    Key takeaways
    • The "human touch" in healthcare is not going away; the better play is using tech to increase capacity so caregivers can spend more time being caregivers
    • Cost optimization is not always about paying less for expertise; it is often about wasting less human time, improving trust, and removing friction around services
    • Healthcare still runs on outdated plumbing in places you would not expect, including fax-based workflows that slow everything down
    • Radiology is one of the best real-world use cases for AI, but the bigger blocker is getting access to imaging data in usable form, not model capability
    • Your health data is already "there," but it is not working for you yet. The next wave is tools that scan your longitudinal record and surface what to ask your doctor about, so you can be a stronger advocate for your own care

    Timestamped highlights
    00:36 What OneImaging actually does, and why "transparent imaging" is more than a pricing story
    02:00 Why healthcare stays personal, and how tech should increase capacity instead of replacing care
    03:36 The real definition of cost optimization: commodity versus service, and where trust matters
    07:01 The surprising reality of imaging ops: why it still feels like 1998, and what gets digitized next
    17:19 AI in radiology is real, but the data access and interoperability gap is the bottleneck
    24:21 Your CDs are full of value; the problem is we do almost nothing with that data today

    A line worth replaying
    "These LLM models are the worst that they'll ever be today. They're only going to get better and better and better."

    Call to action
    If this episode sparked a new way of thinking about healthcare tech, follow The Tech Trek on your podcast app, share it with a friend in product or engineering, and connect with me on LinkedIn for more conversations like this.

    How to Pay Down Tech Debt Without Slowing Delivery

    Dec 16, 2025 · 30:36


    Swarupa Mahambrey, Vice President of Software Engineering at The College Board, breaks down what tech debt really looks like in a mission-critical environment, and how an engineering mindset can prevent it from quietly choking delivery. She shares a practical operating model for paying down debt without stopping the roadmap, and the cultural habits that make it stick.

    You will hear how College Board carved out durable space for engineering excellence, how they use testing and automation to protect reliability at scale, and how to make the trade-offs between features, simplicity, and user experience without slowing the team to a crawl.

    Key Takeaways
    • Tech debt behaves like financial debt: delay the payment and the interest compounds until even simple changes become painful
    • A permanent allocation of capacity can work; dedicating 20 percent of every sprint to tech debt can reduce support load and improve delivery
    • Shipping more features can slow you down; simplifying workflows and validating with real usage can increase velocity and reduce tickets
    • Resilience is not about avoiding every failure; it is about designing for graceful degradation so spikes and outages become small blips instead of crises
    • Automation is not "extra," it is part of the definition of done, including unit tests as acceptance criteria and clear code coverage expectations

    Timestamped Highlights
    00:00 Why tech debt is a mindset problem, not just a backlog problem
    01:00 Tech debt explained with a real example: what happens when a proof of concept becomes production
    03:45 The feature trap: how "powerful" workflows can overwhelm users and explode maintenance costs
    11:03 Engineering Tuesday: one day a week to strengthen foundations, not ship features
    14:39 Stability versus resilience: designing systems that bend instead of shatter
    20:06 Testing and automation at scale: unit tests as a requirement and code coverage guardrails

    A line worth keeping
    "If we don't intentionally carve out space for engineering excellence, the urgent will always crowd out the important."

    Practical moves you can steal
    • Protect a fixed slice of capacity for tech debt; make it part of the operating model, not a one-time cleanup
    • Treat automation as acceptance criteria: no test, no merge, no release
    • Use pilots and targeted releases to learn early, then iterate based on metrics and real user behavior
    • Design for graceful degradation with retries, fallback paths, and clear failure visibility

    Call to action
    If this episode helped you think differently about tech debt and engineering culture, follow The Tech Trek, leave a quick rating, and share it with one engineer who is fighting fires right now.
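    Graceful degradation with retries and fallback paths can be sketched as a small helper: try the live call a few times, then serve a degraded answer instead of failing the whole request. This is an illustrative pattern, not College Board's code; in production you would also log each failure, back off between attempts, and catch a narrower exception type.

```python
# Illustrative retry-then-fallback helper. The helper name and retry count
# are assumptions for this sketch, not from the episode.

def with_fallback(call, fallback, retries=3):
    """Try `call` up to `retries` times; on persistent failure return `fallback()`."""
    for _attempt in range(retries):
        try:
            return call()
        except Exception:
            continue  # production code would log, back off, and narrow this
    return fallback()  # degrade gracefully, e.g. serve a cached value
```

    The design choice is that the caller decides what "degraded" means (a cached value, a default, a stub), so a downstream spike becomes a small blip rather than a crisis.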

    Trust but Verify, How to Use AI in Engineering Without Breaking Security

    Dec 15, 2025 · 30:15


    Software is still eating the world, and AI is speeding up the clock. In this episode, Amir talks with Tariq Shaukat, co-CEO at Sonar, about what it really takes for non-tech companies to build like software companies, without breaking trust, security, or quality. Tariq shares how leaders can treat AI like a serious capability, not a shiny add-on, and why clean code, governance, and smart pricing models are becoming board-level topics.

    Key Takeaways
    • "Every company is a software company" does not mean selling SaaS; it means software is now core to differentiation, even in legacy industries.
    • The hardest shift is not tools, it is mindset: moving from slow, capital-style planning to fast iteration, test, learn, and ship.
    • AI works best when leaders stay educated and involved; outsourcing the whole strategy is a real risk.
    • "Trust but verify" needs to be a default posture, especially for code generation, security, and compliance.
    • Pricing will keep moving toward value-aligned consumption models, not simple per-seat formulas.

    Timestamped Highlights
    00:56 What Sonar does, and why clean code is really about security, reliability, and maintainability
    05:36 The Tesla lesson: mechanics commoditize, software becomes the experience people buy
    09:11 Culture plus education: why software capability cannot live in one silo
    14:21 Cutting through AI hype with program discipline and a "trust but verify" mindset
    18:23 Boards, governance, and setting an "acceptable use" policy for AI before something goes wrong
    25:18 How software pricing changes in an AI world, and why Sonar prices by lines of code analyzed

    A line worth saving
    "Define acceptable risk as opposed to no risk."

    Pro tips you can steal
    • Write down what you want AI to achieve, the steps to get there, and the metric you will use to verify outcomes.
    • For code generation, scan and review before shipping; treat AI output like a draft, not a final answer.
    • Set clear rules for what is allowed with AI inside the company, then iterate as you learn.

    Call to Action
    If you want more conversations like this on software leadership, AI governance, and building real impact, follow The Tech Trek and subscribe on your favorite podcast app. If someone on your team is wrestling with AI rollout or developer productivity, share this episode with them.

    How Great Teams Align Goals That Actually Drive Growth

    Play Episode Listen Later Dec 12, 2025 26:42


    Gregg Altschul, Vice President of Technology at FanDuel, shares a clear and practical look at how leaders can create real alignment across personal, team, and company goals. He explains why transparency drives trust, how to build a path for growth at every level, and why the best managers help people pursue their long-term North Star while still delivering for the business. This is a thoughtful and modern blueprint for tech leadership and team development.

    Key Takeaways
    • Teams move faster when the company goal is translated into a simple set of objectives that every level can understand and act on.
    • Transparency is the anchor for healthy goal setting and creates the space for honest conversations about career direction.
    • Managers should encourage long-term North Star thinking, since it keeps people growing even after short-term milestones are reached.
    • Succession planning should be an active part of how teams operate, so progress never depends on a single person.
    • People can stay committed to their work even if they have long-term plans outside the company, and supporting those plans often improves retention.

    Timestamped Highlights
    • 02:19 How top-level business goals get distilled into specific team and personal goals that engineers can act on.
    • 04:57 The role of transparency in helping teams understand the why behind each objective.
    • 07:34 Helping ICs tie personal development to broader company needs while still honoring their ambitions.
    • 09:28 Creating a safe environment for honest career conversations in a world of hybrid and remote work.
    • 15:14 Why knowing a person's long-term plans makes succession planning easier for everyone.
    • 17:45 How Gregg works with his own manager on growth even when the title ladder narrows at the VP level.

    A standout idea from Gregg
    “As long as you have a North Star you will grow. Whether you ever reach the exact role you picture is not really the point. The point is growth.”

    Call to action
    If this conversation helped you rethink how goals work inside your team, share it with a colleague who will appreciate it. Follow the show so you never miss new episodes, and connect with me on LinkedIn for more conversations with leaders shaping the future of engineering and data.

    How To Grow From Engineer To CTO And Still Love The Code

    Play Episode Listen Later Dec 11, 2025 25:26


    Ken Ringdahl, CTO at Emburse, joins The Tech Trek to share what it really looks like to grow from engineer to CTO without losing your love for building. He talks about staying close to the code while leading a three-hundred-person org, how he learned the business side on the job instead of through an MBA, and why curiosity is still his strongest tool. If you are an engineer who cares about leadership, AI, and long-term impact, this one will hit close to home.

    Key takeaways
    • The best engineering leaders stay technical for as long as they can, then pick their spots to lean in where the business needs them most.
    • You can learn the business side on the job by raising your hand for cross-functional work and building real relationships with sales, finance, and product leaders.
    • Curiosity is a career advantage, both in technology and in leadership, because the quality of your questions shapes the quality of your decisions.
    • A practical AI strategy comes from listening to customers, partners, and internal experts, then translating that into focused product bets instead of chasing shiny tools.
    • Do not rush into management just for the title; a deep foundation as an engineer will make every future leadership decision stronger.

    Timestamped highlights
    • 00:38 Ken explains what Emburse does and how modern spend management lives at the intersection of software, data, and finance.
    • 01:30 How he balances being an engineer at heart with the reality of leading many teams and products as CTO.
    • 03:41 Ken reflects on missing his coding days, what he still tinkers with, and why he chose the bridge role between tech and business.
    • 08:32 Learning leadership without an MBA, creating your own opportunities, and attaching yourself to people you can learn from across the company.
    • 14:58 How he stays smart on AI through office hours, internal experts, cloud partners, customers, and investor networks.
    • 21:22 His biggest advice for engineers who want to move into leadership, and why he actually went back to a more hands-on role before moving up again.

    One line that stayed with me
    “Even if you want to be a leader, do not rush it. Do not go so fast that you do not get that foundation.”

    Practical moves for your own career
    • Stay technical as long as you can, then choose a few focus areas such as architecture, AI strategy, or cloud patterns where you can still go deep.
    • Use curiosity as your main tool; ask simple but sharp questions of finance, sales, and customers so you see how technology really creates value.
    • Look for chances to run cross-functional projects early in your career, so that by the time you step into leadership you already understand how the wider business works.
    • Treat partners, customers, and internal experts as an extended brain trust, especially when you are trying to shape an AI and platform strategy.

    Listen and stay connected
    If this episode helped you think differently about your own path from engineer to leader, follow The Tech Trek, leave a rating on your favorite podcast app, and share it with one person on your team. To keep the conversation going, connect with Ken on LinkedIn and find me there as well for more stories from leaders who are building real impact with technology.
