Everyone is on their own trek. And we can all use a little help along the way. The Tech Trek features conversations with top leaders in technology on how they are transforming their industry and organization.

Raj Koo, CTO at DTEX, joins The Tech Trek for a sharp conversation on insider risk, shadow AI, and why security teams need a more modern way to think about intent. This episode is worth your time if you are trying to understand how AI is changing cyber risk, why non malicious behavior can still create major exposure, and what it takes to protect the business without slowing down innovation.

Raj explains why the old approach of blocking known bad behavior is no longer enough. As employees bring personal AI tools into the workplace, security teams are dealing with a new reality, one where productivity gains, agentic workflows, and data exposure are all colliding at once.

In this episode
• Why DTEX focuses on inferring intent, not just catching exfiltration
• Why shadow AI is different from shadow IT, and harder to control
• How non malicious employee behavior can become the biggest insider risk category
• Why agentic AI raises the stakes for visibility and governance
• How mature insider risk programs are shrinking response times even as costs rise

Timestamped highlights
00:00 Raj Koo on inferring intent in cybersecurity
01:59 Why early warning signals matter more than the exfiltration point
04:38 The rising cost of insider risk
06:25 How shadow AI became a major non malicious risk
08:13 Why shadow AI is more complex than shadow IT
17:53 Detection times are improving, but the cost problem is getting worse

Standout line
Security has a chance to stop being seen as the function that blocks productivity and start being seen as the function that helps the business adopt better tools safely.

Practical takeaway
If your team is dealing with AI adoption in the wild, start with visibility before judgment. Understand which tools people are using, what they are using them for, and where the real risk sits before defaulting to blanket restrictions.
Link to the 2026 Cost of Insider Risks Global Report: https://ponemon.dtex.ai/

Follow The Tech Trek for more conversations with builders, operators, and technology leaders shaping how modern companies work.

Sumeet Arora, Chief Product Officer at Teradata, joins The Tech Trek for a sharp conversation on the shift from human driven SaaS to agentic software. This episode digs into what changes when software stops just supporting human workflows and starts driving outcomes alongside people, why trust and governance matter more as AI systems take on more responsibility, and what serious companies need to do now to prepare.

This is a practical discussion about where the market actually is, what gets overhyped, and what leaders should focus on beneath the noise. Sumeet lays out a clear view of the emerging enterprise stack, from knowledge and context to agents, governance, and outcomes. He also explains why the winners may not be the loudest companies in AI, but the ones that get their data, knowledge, and operating model right.

In this episode
• Why agentic software is a real shift, but still in its early stages
• What trust, governance, and explainability need to look like in an AI first enterprise
• How software companies should rethink product strategy for agents as well as humans
• Why every employee may need to become a manager of AI agents
• Why knowledge infrastructure could matter more than the agent layer itself

Timestamped highlights
• 00:45 Teradata's role in helping enterprises become autonomous
• 02:34 Where we really are in the agentic AI maturity curve
• 10:16 How software shifts from workflow centric to outcome centric
• 16:17 Why every employee may need an AI workforce
• 21:57 The skill gap between enterprise users and agentic adoption
• 24:48 Why knowledge, not just agents, will define the winners

Standout line
“The fundamental winners will be ones who get the knowledge fabric correct.”

Practical takeaway
If you are building for an AI driven future, do not start with agents alone. Start with trusted knowledge, usable context, clear policies, and systems that can explain decisions.
The companies that treat agentic AI as a stack, not a feature, will be in a much stronger position.

Follow The Tech Trek for more conversations with leaders shaping the future of technology, product, AI, and enterprise transformation.

Victor Fang, CEO and Founder of Anchain AI, joins The Tech Trek for a timely conversation on crypto crime, AI driven fraud, and what financial institutions need to understand as digital assets move closer to the mainstream. This episode is worth your time if you care about cybersecurity, compliance, crypto risk, anti money laundering, or where agentic AI is starting to reshape investigation work.

This conversation goes beyond headlines. Victor breaks down how bad actors are using generative AI for phishing, identity fraud, exploit development, and ransomware, then explains how defenders are using AI, graph intelligence, and agent workflows to fight back. It is a sharp look at the collision of crypto, cybersecurity, regulation, and AI infrastructure.

In this episode
• What crypto crime actually looks like today, from exchange hacks to romance scams and ransomware
• Why crypto risk now extends well beyond crypto native users
• How financial institutions, regulators, and compliance teams are adapting
• Where AI is helping attackers move faster, and where it is giving defenders an edge
• Why agentic workflows and MCP powered investigation tools could change this category fast

Timestamped highlights
00:00 Victor Fang on crypto crime, AI versus AI, and agentic AML
00:53 What Anchain AI does and why blockchain investigation is becoming more important
01:56 How generative AI is already being used in crypto crime and phishing
06:30 What banks, regulators, and AML teams need to understand about crypto adoption
10:44 Why Victor believes AI can give defenders the advantage
16:17 How Anchain uses blockchain data, graph intelligence, and agent workflows to investigate faster
22:04 Why the company's MCP server could extend beyond crypto into KYC and financial applications
25:21 What the next wave of agent driven security and investigation might look like

One standout idea from the conversation: crypto is much closer to you than you think.

Practical takeaways
• Crypto risk is no longer a niche issue, it is increasingly tied to broader fraud, ransomware, and financial crime
• AI is accelerating both offense and defense, which raises the bar for security and compliance teams
• Agentic investigation workflows could dramatically reduce manual work in AML, fraud, and cyber operations
• Companies building in regulated spaces need infrastructure that can handle both speed and scrutiny

Follow The Tech Trek for more conversations with builders, operators, and technical leaders shaping what comes next.

Cam Crow, Director of Data and Analytics at Vacatia, joins The Tech Trek to unpack what happens when a startup outgrows informal ways of working. This episode looks at how data teams can introduce project management frameworks without killing speed, how to manage stakeholder demand as complexity rises, and why the right operating model matters even more as AI begins to reshape analytics work.

Cam shares a practical view from the middle of real growth, from startup scrappiness to acquisitions, migrations, and a much wider stakeholder base. He explains when process becomes necessary, how to build trust during that shift, and where AI is starting to change both delivery workflows and the future of business insights.

In this episode
• Why early stage teams should add process cautiously, not by default
• The moment speed and quality start breaking under too many competing requests
• How public communication and domain based stakeholder channels reduce friction
• Why planning routines matter as much for stakeholders as they do for the data team
• Where AI fits today, from faster delivery to semantic layers that support better answers

Highlights
00:00 Cam Crow joins the show to discuss project management frameworks through the lens of data, startup growth, and stakeholder alignment
01:58 Why Cam resisted formal sprint planning in the startup phase and why that made sense at the time
05:58 The tipping point where too many priorities start hurting both velocity and quality
11:49 How moving conversations out of direct messages and into domain channels changed team operations
15:03 Inside the two week development cycle and the planning week that keeps stakeholders engaged
21:08 How Cam is thinking about AI, semantic layers, and the future of on demand analytics

A standout idea from this conversation: process should be added conservatively, only when the business truly needs it.

Practical takeaways
• Do not formalize too early, but do not wait until the system is already breaking
• Make prioritization visible once demand exceeds capacity
• Use shared channels instead of one to one communication to reduce bottlenecks
• Build stakeholder rituals into the operating model, not just team rituals
• Treat AI readiness as an infrastructure challenge, not just a tooling decision

Follow The Tech Trek for more conversations with operators, builders, and technology leaders shaping how modern teams work and scale.

Deep Sogani, SVP and Group Data Management Officer at Datasite, joins The Tech Trek to unpack why data governance, lineage, and business process design have become mission critical in the age of AI. This conversation gets past the surface level AI hype and into the operational reality, how companies actually build trustworthy systems, where AI initiatives break down, and why strong data foundations now shape business outcomes in real time.

This episode explores the shift from downstream analytics to data that actively drives live decisions, workflows, and automation. Deep explains why many AI projects fail before the model even matters, how business architecture should lead technical design, and why human oversight still matters in high stakes environments.

In this episode
• Why AI has made data governance and data lineage far more operational
• Why business process clarity matters before data architecture or tooling decisions
• How real time AI changes the demands on data quality and system design
• Where agentic AI fits, from workflow automation to more advanced decision support
• Why human judgment still matters in AI systems shaped by risk, ethics, and security

Timestamped highlights
01:47 Why AI raises the stakes for governance, lineage, and trust in data
04:57 Why business architecture has to lead before technical design
09:11 The progression from predictive models to agentic AI workflows
17:55 Why the human in the loop is still essential
21:16 What makes an AI project worth prioritizing
26:06 What has changed, and what has not, in AI related change management

Standout line
“Business architecture and business thinking should dictate the what and the why, and the data architecture is the how part which needs to follow.”

Practical takeaway
If you are evaluating AI inside the enterprise, do not start with the tool. Start with the business problem, the workflow, the decision risk, and the quality of the data behind it.
Strong models on the wrong problem still fail.

Follow The Tech Trek for more conversations with leaders shaping technology, data, AI, and the future of modern business.

Suresh Martha, Head of Data Driven Innovation and Analytics at EMD Serono, joins The Tech Trek for a practical conversation on what leadership looks like when your team is asked to take on new technical capabilities. This episode is about extending team impact, evaluating new tools, building credibility with stakeholders, and leading through change without pretending to be the deepest expert in every domain.

For data leaders, analytics managers, technology executives, and operators, this conversation gets into the real work behind capability building. Suresh breaks down how to assess whether a new technology is worth pursuing, when to start with a pilot, how to upskill internal talent, and how to hire for skills your team does not yet have.

In this episode
• How to evaluate whether a new tool or technology actually adds business value
• Why small pilots help leaders build trust before asking for larger investment
• What it takes to lead technical work you have not personally done yourself
• How to hire for capabilities your team does not yet have
• Why business context and data knowledge still matter as much as technical depth

Timestamped highlights
00:04 Extending technical impact as a leader when new capabilities land on your team
03:37 A simple framework for evaluating new tools, investment, and fit
05:28 Hiring for skills your team does not yet have
07:44 Upskilling as a leader so you can guide the work with confidence
12:06 Managing experts whose technical depth goes beyond your own
15:21 Making room for learning and experimentation while still delivering

Standout line
As long as I understand the intricacies and can explain that, that is what matters, especially for a leader.

A practical takeaway
Start small. Pick a real business problem. Run a focused pilot. Measure the outcome. Earn the right to scale.

Follow The Tech Trek for more conversations with leaders building teams, systems, and technical capability inside modern businesses.

Sourish Samanta, Director AI and ML at Advance Auto Parts, joins The Tech Trek for a grounded conversation on where machine learning still creates the most business value, where generative AI fits, and why many teams are chasing the wrong solution. This episode is worth your time if you want a clearer view of how serious operators think about AI strategy, product delivery, and practical use cases that can ship now.

This conversation cuts through the noise around AI and gets back to first principles. Sourish explains why machine learning remains the foundation behind today's AI wave, how to choose between deterministic and creative systems, and what it actually takes to build production ready products that solve real business problems.

In this episode
• Why machine learning is still the core layer behind modern AI
• When to use machine learning, when to use generative AI, and when simple analytics is enough
• What a real product mindset looks like for AI and ML teams
• How pod based teams can ship faster with better cross functional alignment
• Why AI and ML talent need to spend time continuously reskilling

Timestamped highlights
00:00 Why machine learning remains the foundation of today's AI stack
01:57 The difference between ML teams, AI teams, and agent focused workflows
05:56 Choosing the right solve, from forecasting and inventory to creative content generation
10:09 The product mindset required to turn AI ideas into working systems
13:51 Why some business problems need analytics, not AI
15:52 Why AI teams need to spend part of their time learning, testing, and staying current

Standout line
AI is not the strategy. Solving the right problem is.

Practical takeaway
If you are leading an AI initiative, start by classifying the problem. If the outcome needs consistency, prediction, or forecasting, machine learning may be the better path. If the outcome needs creativity or flexible generation, generative AI may be a better fit.
And in some cases, the best answer is still a clean dashboard and strong analytics.

Follow The Tech Trek for more conversations on AI, data, engineering, and how technology actually gets applied inside real businesses.

Shamoon Siddiqui, CEO and Founder of Human Friendly Robotics, joins The Tech Trek to break down what it really takes to bring robotics into construction. This is not a futuristic thought experiment. It is a grounded conversation about where robots can create value now, why construction has lagged so badly on productivity, and how focused automation could reshape one of the world's biggest industries.

At the center of the discussion is Tyler, a tile laying robot built as a practical entry point into construction automation. Shamoon explains why repeatable workflows matter, where human skill still wins, and how robotics can improve speed, safety, and job site economics without needing to look like a science fiction demo.

In this episode
• Why construction productivity has moved backward while other industries have surged ahead
• Why tiling is the right entry point for construction robotics
• How Human Friendly Robotics thinks about deployment, rentals, and product iteration
• Where robots can reduce hidden job site injuries tied to repetitive strain
• Why the long game is much bigger than tile, with plumbing, electrical, and HVAC in sight

Timestamped highlights
00:35 Why construction is the right market for robotics right now
03:56 The bigger shift from humans moving atoms to machines handling more physical work
08:29 Why the business model is built around rentals, not one time equipment sales
10:24 The wedge strategy today and the larger vision across licensed trades
12:12 The overlooked safety problem of repetitive strain in construction
20:44 Why useful robots matter more than robots built for flashy demos

“Version one is not going to be as good as version five, but if you continue to rent it from us, we can make sure you get version five when it's ready.”

Practical takeaway
The smartest automation wedge is not the flashiest one.
Start with repetitive, measurable work, prove productivity gains in the real world, and expand from there.

Follow The Tech Trek for more conversations on robotics, AI, startups, and the technologies changing how real work gets done.

#ConstructionTech #Robotics #Automation #ai #FutureOfWork

Mary Elizabeth Porray, Global Vice Chair Client Technology and COO, Growth and Innovation at EY, joins The Tech Trek for a grounded conversation about what it actually takes to operationalize emerging technologies inside a global enterprise. This episode goes past the AI hype cycle and into the real work of adoption, change management, process redesign, workforce trust, and leadership in ambiguity.

A lot of companies are asking what AI can do. Fewer are asking what needs to change for AI to actually work. Mary Elizabeth shares how EY is thinking about experimentation, employee experience, guardrails, internal adoption, and the cultural shifts required to move from curiosity to real impact.

In this episode
• Why culture, not technology, is often the biggest blocker to emerging tech adoption
• Why AI is not a magic wand, but can help teams solve problems in a different way
• How leaders can identify the right starting points by listening for real pain points
• Why productivity gains have to create psychological space, not just more work
• How affinity groups, storytelling, and visible leadership help drive adoption

Timestamped highlights
01:58 Why cultural norms often slow down emerging technology adoption
03:25 AI hype, false expectations, and what the technology can realistically change
05:55 The mental load of AI at work, and why EY created Thrive Time
11:20 Why AI pilots need to go deeper than surface level experimentation
15:19 How AI is creating a shared language between business and technology teams
29:29 How storytelling, affinity groups, and positive momentum help people lean in

One line that sticks: AI is not something you dabble in.

A practical takeaway
The best place to start is not with the flashiest use case. It is with a real pain point. If a process should take one week and actually takes eight, that is a signal worth following.

Follow The Tech Trek for more conversations with leaders building through change, scaling technology, and shaping how modern work actually gets done.

Michael White, Co founder and CEO of Multiply, joins the show to talk about the path from engineering leadership to the CEO seat, and what it really takes to build in a high trust, high complexity market. If you are thinking about founder readiness, leadership growth, or where AI creates real value in fintech, this episode gets into the parts that matter.

Michael shares how early entrepreneurial instincts showed up long before Multiply, what changed as he moved from builder to company leader, and why some of the most important skills in leadership have less to do with code and more to do with communication, conviction, and influence. He also breaks down how Multiply is using AI to improve the mortgage experience without removing the human element people still need in a major financial decision.

In this episode
• The mindset shift from engineer to CEO
• Why leadership becomes a form of sales
• How founder timing can be an advantage, not a delay
• Where AI fits in the mortgage process, and where it does not
• Why startups can move faster than legacy players in AI adoption

Timestamped highlights
00:43 What Multiply is building, and why an AI native mortgage company sees a better path to homeownership
01:47 The childhood business story that hinted at an entrepreneurial future
06:20 What changed in the move from engineering leadership to founder and CEO
08:45 Why so much of leadership comes down to influence, alignment, and selling the vision
17:19 Why mortgages are such a strong use case for AI, and why the back office is the real opportunity
22:39 The startup advantage in AI, speed, focus, and freedom from legacy systems

Follow the show for more conversations with founders, operators, and technology leaders building what comes next.

Susan Liu, Partner at Uncork Capital, joins Amir to break down what actually matters when backing early stage AI companies. From founder market fit to product wedge to the reality of churn, this conversation gets past the hype and into how strong companies separate themselves in a crowded market.

If you are building, funding, or evaluating AI startups, this episode gives you a sharper lens on where the market is heading, what Series A investors now expect, and why real ROI is becoming the line between momentum and fallout.

What stood out
• The best early stage founders usually have earned insight, meaning they have lived the problem before building the solution
• In crowded AI markets, the goal is not to be interesting, it is to become one of the few companies that actually wins
• AI buyers still care about the same core question, does this drive revenue or cut cost in a measurable way
• The Series A bar has moved up fast, and strong growth alone is not enough if retention is weak
• Some of today's biggest AI winners may still face painful churn if they are not truly essential to the customer

Timestamped Highlights
00:37 Susan breaks down how Uncork Capital invests at seed and what it takes to get real conviction early
02:00 The three-part framework she uses to evaluate companies, team, market, and product wedge with traction
09:42 Why crowded AI markets are not necessarily a red flag, and how winners still pull away from the pack
17:04 The ROI test every AI startup has to pass if it wants to survive renewals
19:05 Susan's honest take on 2026, cautious optimism, bigger impact, and a likely wave of churn
24:33 What founders need now to raise a strong Series A in a market where the bar is higher than ever

One line that stuck
“If you cannot prove one of these two, it is going to be a tough sell. Companies are not going to renew.”

Practical takeaways for operators and founders
• If your product cannot clearly tie to revenue growth or cost savings, buyers will eventually cut it
• Founder credibility matters more when the market gets noisy, especially in AI
• A compelling wedge wins attention, but retention is what keeps the story alive
• Happy customers who will speak for you can be one of the strongest assets in a fundraise

Stay connected
If this episode gave you a better lens on AI startups, venture, and what actually drives durable value, follow the show, share it with a founder or operator in your network, and keep up with Amir on LinkedIn for more conversations like this.

What happens to e commerce when AI agents start shopping instead of humans?

Maju Kuruvilla, Founder and CEO of Spangle, joins the show to unpack a shift most companies are not prepared for. If AI agents become buyers, the entire digital shopping experience must change. Websites today are designed for human psychology, not machines making decisions.

In this conversation, Maju explains why context is becoming the most important layer in commerce. From marketing clicks to storefront visits, most companies lose the context that originally inspired a purchase. The future belongs to systems that can capture, carry, and act on that context across every channel. The discussion explores agent driven shopping, the limits of traditional customer data systems, and how AI can reshape both online and physical retail experiences.

Key Takeaways
• Context matters more than identity. Knowing what someone is trying to do right now is often more valuable than knowing who they are.
• Most e commerce experiences reset the customer journey. When someone clicks from an ad to a site, the original inspiration is usually lost.
• AI agents will shop differently than humans. They are not influenced by visual design or marketing psychology the same way people are.
• Commerce will not become fully agent driven. Instead, brands must design experiences that work for humans, agents, and hybrid interactions.
• Physical retail may benefit the most from AI driven context because stores can blend digital signals with real world behavior.

Timestamped Highlights
00:00 Why the next generation of e commerce will be built for AI agents, not just human shoppers
02:08 The hidden problem in online shopping today. Most websites lose the context that brought the customer there
06:11 Buyer agents and seller agents. How commerce may evolve into AI systems negotiating purchases
11:38 Why a simple request like “buy a red sweater” is actually a complex problem of interpretation and context
16:30 How AI could transform physical stores through dynamic recommendations and real time shopping guidance
22:30 Why collecting endless customer data might be the wrong approach to personalization
27:59 The future of autonomous shopping and why personal AI agents may eventually handle everyday purchases

A Moment That Sticks
“Context is what matters. The fact that I bought a TV before is interesting, but not important. What matters is what I am trying to do right now.”

Practical Insight for Builders
If you are building AI driven commerce tools, start with the product layer. According to Maju, the foundation is making your product catalog intelligent. AI systems need rich product understanding so they can match intent with inventory. Once the catalog becomes machine readable and context aware, everything else becomes easier to automate.

Call to Action
If you enjoyed this conversation, follow the show and share this episode with someone working at the intersection of AI, commerce, or product development. New conversations every week with the builders shaping the future of technology.

Most people never think about the technology behind construction equipment rentals. But behind every crane, excavator, and lift is an industry still running on paper, spreadsheets, and manual workflows.

In this episode, Andy Feis, CEO and Co-Founder of Renterra, joins Amir to explain how a hundred billion dollar equipment rental market is finally entering the modern software era. The conversation explores how operational software, telematics data, and AI are reshaping one of the most overlooked parts of the industrial economy. Andy shares how rental companies manage fleets of expensive machines, why legacy workflows still dominate the industry, and how platforms like Renterra are bringing cloud software and automation to a sector that has largely been left behind by the tech revolution.

This episode also explores the intersection of operational data, AI automation, and real world infrastructure. From fleet optimization to automated maintenance insights, the future of equipment rental may look very different than it does today.

Key Takeaways
• The equipment rental industry is a massive but overlooked market where over half of construction equipment is rented rather than owned.
• Many rental businesses still run critical operations using pen and paper, manual inspections, and outdated spreadsheets.
• Operational software is the first step toward modernization, helping companies manage inventory, dispatch, pricing, and maintenance.
• Telematics data from machines unlocks powerful insights around maintenance timing, asset valuation, and fleet utilization.
• AI will not replace the physical work in industrial sectors, but it can automate low value operational tasks and dramatically improve decision making.

Timestamped Highlights
00:00 Introducing the hidden technology opportunity inside the equipment rental industry
02:00 Why many rental companies still rely on paper, binders, and manual equipment checks
06:20 How Andy Feis discovered a massive opportunity inside industrial operations
09:00 The low hanging fruit in modernizing equipment rental workflows
11:14 What kind of data heavy machines actually generate and how it can be used
13:03 Where AI actually helps blue collar industries today
20:18 The roadmap for modernizing the industry and what comes next

A Moment That Stuck
“The industrial sector is an enormous part of the economy, but it has been one of the last places to feel the impact of the broader tech revolution.”

Pro Tips
If you are building technology for legacy industries, start with operational efficiency before advanced analytics. Modernization works best when it removes friction from existing workflows. Once companies see time savings and operational improvements, they become far more open to deeper data and AI driven insights.

Call to Action
If you enjoy conversations about technology transforming real world industries, follow the show and share this episode with someone building in construction, logistics, or industrial software.

Luke Fischer, cofounder and CEO of SkyFi, breaks down how earth intelligence is becoming searchable, and why that changes decision making across defense, energy, logistics, and agriculture.

You will hear how his path from Army special operations aviation to Head of Flight Ops at Uber shaped SkyFi's product mindset, plus a practical look at what geospatial imagery and analytics can actually answer today.

Key Takeaways
• Networks are not nice to have, they are the fastest path to trust, hiring, and deals, especially in government and high stakes markets
• SkyFi's core unlock is access, making it possible to task satellites, pull history, and ask questions of the data, not just look at images
• Going commercial first can create a faster iteration loop, then government adoption follows once the product is battle tested
• The real product future is answers, not imagery, using natural language queries that return decision grade insight
• Privacy is not only about resolution, it is also about who can buy data, screening, and compliance, because access is the real leverage point

Timestamped Highlights
00:47 Earth intelligence in plain English, task satellites, pull decades of history, ask questions like vessel detection or soil moisture
06:32 Why veteran resumes miss the mark, and how to translate leadership without goofy title inflation
10:44 The origin story, a broken buying experience in satellite imagery turns into SkyFi's wedge
16:42 Selling into government, people game first, acquisition reality, and why patience is a feature
19:46 Use cases you will not expect, livestock behavior, barge counting, palm heights, mineral detection, and more
28:10 Where this is headed, ask a question about the world, get an answer, then move toward proactive intelligence

A line worth repeating
“Startups are the same thing, you are finding the right people with the right traits to solve these undefined problems and being comfortable with risk.”

Practical moves you can steal
• If you are hiring, screen for comfort with ambiguity, not just pedigree, undefined problems are the job in high growth work
• If you are selling, build your network before you need it, warm paths beat cold volume every time
• If you are building product, shorten the feedback loop, commercial iteration can harden the product before slower cycle buyers adopt

Call to Action
If this episode sparked ideas for how data, defense, or AI driven analytics will reshape markets, follow the show and turn on notifications so you do not miss the next one. Also share it with one operator who makes high stakes decisions and would appreciate a clearer view of what is happening on the ground.

Anish Agarwal went from MIT PhD researcher to founding Traversal, an AI company building intelligent site reliability engineering agents for the enterprise. In this episode, he breaks down what it actually takes to lead an AI first company when your entire career was built inside a lab.

This is not your typical founder story. Anish never planned to start a company. He was on track to be a professor at Columbia when generative AI hit and rewired his trajectory. Now he is two years into the CEO seat, recruiting top talent away from high paying jobs, and building a product at the intersection of causal machine learning and agentic systems.

We get into the mechanics of that transition. How do you go from publishing papers to pitching investors? What does storytelling look like when you are convincing engineers to leave comfortable roles and bet on your vision? And what happens when you start a company without even having an idea?

Anish also tackles a question the AI space is wrestling with right now. Is a PhD becoming table stakes for building an AI first company? His answer is more nuanced than you might expect. It is not the degree. It is the training. Reading the landscape, navigating uncertainty, and evaluating models with scientific rigor. Those skills separate builders from everyone else.

Key Takeaways
The best AI founders are not chasing credentials. They are leveraging research instincts to read where models and architectures are heading, and that foresight creates real competitive edges.
Starting a company without an idea is not reckless if you have the right co founders. Anish and his team showed up to a WeWork every day and treated idea exploration like a research problem until the right opportunity clicked.
Storytelling is the most underrated leadership skill in technical companies. Whether you are recruiting, raising capital, or explaining your product to nontechnical buyers, packaging complexity into a clear narrative is what moves people.
Every decision as a founder is a bet, including the decision to do nothing. Viewing inaction as a strategic choice changes how you prioritize and how fast you move.
As AI writes more code, someone has to make sure it works in production. That gap between code generation and reliability is where Traversal lives, and it is only getting wider.

Timestamped Highlights
(00:36) What Traversal does and why AI powered site reliability engineering is a massive unsolved problem in enterprise software
(02:00) The moment generative AI changed everything and why Anish walked away from a career he loved
(08:43) How Traversal found its problem without starting with an idea, and the co founder dynamic that made it work
(14:29) The real advantage of a PhD in AI and why it has nothing to do with the letters after your name
(19:49) Advice for PhDs entering the job market on how to position research experience so hiring managers actually get it
(20:29) Two years into the CEO role, what Anish wishes he had known and the skills that matter most for early stage founders

Words That Stuck
"If AI is writing your code, it has to fix it too. And right now it is only writing the code."

Founder Playbook
Pick a problem that sustains you for decades. Anish looks for problems that keep getting more complicated because that is where long term value compounds. If the problem has a ceiling, your company does too.
Treat recruiting like a core product skill. Painting a compelling picture of the mission is not a nice to have. It is the engine that pulls exceptional talent away from safe, well paying jobs.
Think of everything as a series of bets. Fundraising, hiring, product decisions, even waiting. Inaction is a bet too. Once you see it that way, you stop overthinking and start moving with intention.

Subscribe to The Tech Trek wherever you listen. If this one hit home, share it with a founder or tech leader navigating their own leap. Follow the show on LinkedIn for more.

Harry Gestetner built a creator economy platform in college, sold it, and walked away. Then he did the one thing nobody expected. He jumped back in and started building hardware.

In this episode, the founder and CEO of Orion (a sleep tech company making smart mattress covers) sits down to talk about what really happens after an exit, why most founders can't stay away from building, and what changes when you go from software to physical products.

Harry shares what surprised him about the acquisition process, how he thinks about evaluating new startup ideas, and why he believes hardware is "life on hard mode." He also gets into the mental side of founding, from managing stress to staying sharp when everything feels uncertain.

What You'll Walk Away With
Going through an exit sounds like the finish line, but Harry explains why it's actually a reset. You trade ownership and freedom for financial security, and at some point, most founders start craving the creative control they gave up.
Not every idea deserves your time. Harry talks about running new concepts through a "disqualification period" where you actively try to poke holes before committing. The ones that survive that process are worth going all in on.
Hardware changes the game. Software lets you pivot fast. Hardware gives you 18 month product cycles, inventory headaches, and supply chain complexity. Conviction has to be higher before you start.
The best startup ideas come from problems you and your friends actually have. If enough people share that problem, you've got a market.
Knowledge compounds across startups. Harry compares the founder journey to an elastic band. Once you've been stretched, you never go back to your original form. Every challenge you survive makes the next one more manageable.

Timestamped Highlights
[00:34] What Orion actually does and how it makes six hours of sleep feel like ten
[03:01] The emotional arc of an exit that nobody talks about, from relief to restlessness
[05:34] How Harry evaluates startup ideas and why he uses a disqualification process
[09:30] Why building hardware is "life on hard mode" and what made him take it on anyway
[10:39] The elastic band theory of founder growth and why learning compounds over time
[15:49] His advice for early career founders: pick one thing and go all in

Words That Stuck
"As a founder, you're sort of like an elastic band. The more you get stretched, you never go back to the original form."

Tactical Takeaways
Run every new idea through a disqualification period. Actively look for reasons it won't work before you commit. The ideas that survive that scrutiny are the ones worth building.
Build around problems you personally experience. If your friends share the same frustration, there's a good chance others do too. That's your market signal.
If you're going to start something, go all in. Stop hedging across multiple projects. Pick one idea and dedicate yourself to it completely until it works.

Keep Up With The Show
If this episode hit home, share it with a founder or someone thinking about taking the leap. Subscribe wherever you listen so you never miss an episode. And connect with us on LinkedIn for more conversations like this one.

Behnam Bastani, CEO and cofounder of OpenInfer, breaks down why the last two years of AI feel explosive, and why the next wave is not chat, it is action at the edge. We get into always on inference, what actually forces compute to move closer to the data, and the missing layer that makes edge AI scale: the Android like infrastructure that lets devices collaborate instead of living in silos.

Key takeaways
• The hype spike is real, but the runway is decades, it took compute, sensors, and communication protocols maturing over generations to unlock this moment
• AI is shifting from conversational to actionable, which means continuous, always on inference becomes the norm
• Edge wins when cost, reliability, and data sovereignty matter, cloud and edge will coexist, but the workload placement changes
• The biggest bottleneck is not just silicon, it is the infrastructure layer that makes building and deploying across devices easy, plus a shared fabric so devices can cooperate
• Adoption is as much a human story as a technical one, this shift lands faster and broader than previous tech transitions, so anxiety is predictable and needs real attention

Timestamped highlights
00:38 OpenInfer's mission, intelligence on every physical surface, and why collaboration matters
02:07 Electricity as the earlier revolution, intelligence as the next kind of power, and the control problem
05:54 Where we really are on the maturity curve, early products are here, mass adoption and safety take time
08:31 When the device boundary disappears, it stops being you versus the agent, it becomes one system
11:04 Always on inference, and the three forces pushing compute to the edge: cost, reliability, data sovereignty
14:40 The Android moment for edge AI, why the operating system layer unlocks developers, apps, and adoption

A line worth replaying
"Those are going to be the three pillars that really enforces that edge and cloud are going to live together."

Pro tips for builders
• If your product needs real time decisions, design for intermittent networks from day one, reliability is not optional
• Treat data sovereignty as a product feature, not a compliance afterthought, it is becoming the moat
• Push for interoperability early, the fabric that lets devices share the right data is what makes edge feel seamless

Call to action
If this episode helped you rethink where AI should run and what it takes to ship it in the real world, follow the show and share it with one builder who is working on edge, robotics, devices, or applied AI.

Building data capability from zero is not a tooling problem, it is a trust and prioritization problem. In this episode, Laura Guerin, Head of Data and Data Science at Bevi, breaks down how she goes from blank slate to real business impact, without getting trapped in endless plumbing or endless meetings. Laura shares how she runs an early listening tour, prototypes value before asking for bigger investment, and decides when to hire scrappy generalists versus specialists. We also get practical on AI, where it helps, where it is unnecessary, and why quality data and a clean semantic layer still decide whether anything works.

Key takeaways
• Start with business priorities, then map data work to the actions and outcomes leaders actually care about
• Prototype the end deliverable fast, even if the backend is duct tape at first, then scale after stakeholders see value
• Use cases first for AI, most problems do not need AI, but the right problems can see real acceleration
• Early teams win with adaptable generalists who can wear multiple hats across data, analytics, and data science
• Trust is a shared responsibility, build reliability, then create a culture where users flag weirdness quickly

Timestamped highlights
00:44 Bevi explained, smart bottle less dispensers and why the business context matters for data priorities
02:01 The listening tour playbook, exec alignment, stakeholder map, and using AI to synthesize themes into a SWOT
04:00 The MVP reality, manual prototypes to prove value, then the conversation about scalable pipelines
06:33 AI without the hype, use cases, when AI is not needed, and two examples with clear business impact
09:22 Hiring from zero, why generalists first, the spectrum from data analytics to data science, and the personality traits that matter
14:21 Self service reimagined, Slack as the interface, semantic layer and permissions, and how to keep a single source of truth
20:19 Keeping trust when things break, checks and balances plus a shared responsibility model
22:39 Making innovation real, baking it into expectations so the team has time to learn and test new approaches

A line worth stealing
"Data on its own is not typically a priority. It is more about the action or the impact that comes out of the data."

Pro tips
• Run a structured listening tour early, capture themes, then pick two or three priorities you can deliver quickly
• Show the business an MVP output first, then use that proof to justify the unglamorous backend work
• Treat AI like any other tool, define the problem, validate the use case, then confirm the data quality inputs

Call to action
If you are building analytics, data products, or AI inside a growing company, follow the show and subscribe so you do not miss the next operator level conversation. Share this episode with one leader who is asking for data outcomes but has not funded the foundation yet.

Riya Grover, CEO and co founder of Sequence, breaks down what “good CEO” actually looks like when the job is messy, fast, and high stakes. This is a practical conversation about building excellence through people, clarity, and direction, not through heroics or micromanagement. Riya runs a revenue automation platform for finance teams, helping companies automate order to cash, billing, invoicing, accounts receivable, and revenue recognition. From that seat, she shares a founder level view on leadership that is direct, repeatable, and built for real operating constraints.

Key takeaways
• The CEO's highest leverage job is building the bench, your company becomes the team you assemble
• High performance culture comes from a clear bar, fast decisions when it is not met, and leaders who own outcomes
• Great teams do not need more policies, they need context, goals, trade offs, and clarity
• Separate reversible decisions from irreversible ones, move fast on two way doors, slow down on one way doors
• Hiring signal to watch, motivation and hunger for the stretch challenge often beats the “done it before” resume

Timestamped highlights
00:32 What Sequence does, why order to cash is still painfully manual
01:48 The CEO role is less about functions, more about direction and execution
03:23 Excellence starts with talent density, do not compromise on the bar
06:10 Why companies win, direction plus distribution, and the Figma example
11:01 Getting real feedback as a leader, how to reduce hierarchy and increase ownership
14:39 “They need clarity,” decision frameworks over micromanagement
18:01 The hidden damage of the founder weighing in on every micro decision
20:53 Hiring underrated talent, motivation, ambiguity tolerance, and the stretch role
24:38 Why the CEO should invest time in hiring, the leverage math is obvious

A line worth keeping
“They do not need policies, they need clarity.”

Pro tips you can steal
• Promote leaders who have done the job and set the pace, it earns trust and improves decision quality
• Give teams context and constraints, then treat your input like any other input
• Use the door test, reversible decisions get speed and delegation, irreversible ones get more diligence
• In hiring, look for motivation plus clear thinking, then bet on aptitude over the perfect background

Call to action
If this one helped you think more clearly about leadership and hiring, follow the show and share the episode with one operator who is building under pressure. New conversations drop with different guests and different problems, so you always have something useful to steal.

Arnie Katz has been running product and engineering under one roof since before most companies even considered combining the roles. As CPTO at GoFundMe, he oversees the teams behind a platform processing over 2.5 donations every second, with more than $40 billion in help facilitated worldwide. Arnie breaks down why the CPTO title keeps gaining traction, how he thinks about the role like a portfolio manager, and where the real trade offs live when one person holds both the product and technology reins.

Key Takeaways
The CPTO role works like a portfolio manager. Arnie manages the company's largest investment center by balancing short term business wins against long term platform bets, knowing when to take on technical debt and when to pay it down.
Velocity, coordination, and alignment are the three biggest wins. When product and engineering report to one leader, decisions happen faster, roadmap conflicts get resolved without executive tug of war, and technical investments stay tied to business outcomes.
The disadvantages are real. Without separate CPO and CTO voices at the executive table, certain perspectives can get muted. His fix: build a leadership bench strong enough to create the right tension underneath him.
AI is changing what small teams can deliver. GoFundMe's eight person team behind Giving Funds is shipping at a pace that would have been impossible five years ago.

Timestamped Highlights
[00:38] The scale most people don't realize about GoFundMe, including 2.5 donations per second and GoFundMe Pro for nonprofits.
[02:02] How Arnie first landed the CPTO title at StubHub seven years ago, and why it clicked.
[09:11] The real downside of collapsing two C suite roles into one, and how Arnie designs around it.
[13:57] His portfolio approach to technical debt, sequencing re platforming in areas like identity and payments while other teams ship business value.
[18:38] AI reshaping engineering velocity, the future of the SDLC, and product teams prototyping without writing code.
[23:06] Where the CPTO model is headed as the industry evolves.

The Line That Stuck
"I often think of myself as a portfolio manager. My job is to invest money where the company gets the best returns, where the mission gets the best return, where the shareholder gets the best returns."

Pro Tips
Sequence your bets instead of spreading them thin. GoFundMe gave their identity and payments teams nine months of runway to re platform with no feature expectations while other squads picked up the pace on near term results.
Build leadership that creates productive friction. Without CPO vs. CTO tension at the exec level, let your VPs and SVPs push back against each other. That tension is where the best decisions come from.
Think in time horizons, not just priorities. Short term moves for 0.1% to 0.5% metric lifts. Midterm bets for 1% to 5% gains. Long term swings that could transform the business. Allocate across all three.

If this conversation changed how you think about product and engineering working together, share it with someone on your team. Subscribe to The Tech Trek so you never miss an episode, and connect with Arnie on LinkedIn to keep the conversation going.

GoFundMe is offering listeners of The Tech Trek a chance to open their own Giving Fund. For the first 50 people who open a Giving Fund and add $25 or more to their Giving Fund, GoFundMe will add an additional $25 to that Giving Fund. If you have a Giving Fund but have never contributed into it, you can also participate. The deadline for this incentive is March 13. To get this incentive, click here to start your Giving Fund.

Anjali Jameson, Chief Product Officer at Arbiter, says the hard part is not gathering data. It is getting action across patients, providers, and payers without breaking what already works.

“Automating something that's broken is not going to necessarily give us better outcomes.”

Arbiter is a care orchestration platform built for patients, providers, and payers together, not a single point solution. The operating spine ingests data from across the patient journey, including provider directories, EMR integrations, claims, and financial and policy data from health plans, makes it actionable, then connects it to highly personalized multi channel agentic outreach. You will hear why cross system context matters, how total cost of care stays in view while each stakeholder chases different leading metrics, and what it looks like to move from automation into optimization, like going from a call center scheduling flow to 60 percent conversion and pushing toward 95 percent conversion.

Timeline
00:40 Care orchestration platform, operating spine, data across the patient journey
04:33 Misaligned incentives, prior authorizations, 12 to 14 hours a week
09:42 Total cost of care, star metric, building for different metrics
12:25 Long form personalized videos, transportation, education, medication management
15:02 Prior authorization from three to six days to almost instantaneous
22:07 COVID, provider messaging two, three X, AI responds faster

Subscribe and share it with someone who is building in health tech.

Most data teams do not have a tooling problem. They have a customer service problem.

Mo Villagran, Associate Director of Insights, Analytics, and Data at Cambrex, argues that stakeholder expectation management is the difference between being a trusted advisor and being an order taker.

"In a simple word, it's really just customer service."

In this episode, Mo breaks down how to manage stakeholder expectations, define expected delivery value, and keep projects aligned to real business outcomes instead of chasing rebranded tools. She shares why simple solutions often win, how to show progress even when the work is plumbing, and why qualitative stakeholder testimony beats dashboard count KPIs. You will also hear how she thinks about AI as a tool, when it works, when it is just a cool toy, and how to build trust by demoing in real time.

00:02:00 Stakeholder expectation management is customer service
00:03:00 Why skeleton teams can still deliver value
00:06:00 Who defines expected delivery value, and how to shape it
00:09:00 Negotiate expectations, do not become an order taker
00:18:00 How to show progress when there is nothing visual
00:21:00 Stop chasing quantitative KPIs, win with testimony

Subscribe and share this episode with anyone who is knee deep in stakeholder management.

Ashok Krishnamurthi, Managing Partner at Great Point Ventures, says the biggest mistake in venture capital is confusing prediction with judgment. Early stage investing is not about perfect stories, it is about first principles and picking the founder who can execute when the story breaks. This episode is for startup founders and investors who want a cleaner filter for what matters.

“You have to learn to check your ego at the door because it's a partnership.”

Ashok shares his path from engineering into building companies, then into venture capital, and explains how he forms an investment thesis when markets are noisy. We talk about founder evaluation, why picking the jockey matters more than the idea, and how first principles thinking shows up in real domains like healthcare data and cancer. We also get practical about artificial intelligence, why AI is not only a compute race, and how AI inference, energy efficiency, and cost shape what wins.

00:00 Why legacy matters more than VC metrics
02:28 Engineer to founder to venture capital
11:16 How to pick the jockey
14:21 First principles, cancer data, and AI constraints
23:24 AI is here to stay, keep your mind open
30:15 How to reach Ashok

If this episode helped, subscribe and share it with a builder or investor who will use it.

Aditya Agarwal did not plan to work in robotics. He got rejected from his first-choice major, joined a student club to keep his parents off his back, and stumbled into one of the fastest-growing fields in tech. Now he is Head of Robotics at Medra, a company building physical AI scientists that let researchers run experiments remotely at speeds a traditional lab cannot touch.

"Even the companies that have made the most progress haven't deployed at the scale of laptops, cars, or phones. So if you have experience scaling hardware products, that is super valuable at an early-stage robotics company."

What we get into: why the PhD requirement is mostly gone, how AI is shrinking the hardware development timeline, and the cheapest way to start building with robotics today if you cannot afford to go back to school or take a step back in your career.

Timestamped Highlights
01:19 The accidental path into robotics that actually worked
03:04 Whether you still need an engineering degree for hardware roles
04:48 Master's degree vs. early-stage startup: what gets you there faster
10:57 How AI is replacing the guesswork in hardware configuration
15:51 How to start learning robotics at home without spending much
18:38 Why rigid hiring processes are costing robotics teams good candidates

If this one lands, subscribe and share it with someone who has been thinking about making a move into the space.

Ronak Desai, Co-founder and CPTO at Payment Labs, breaks down a surprisingly hard problem that sits at the intersection of fintech, sports, and compliance. If you have ever assumed paying winners is just a simple payout flow, this episode will change that view fast.

Payment Labs helps tournament organizers, league operators, and modern sports businesses handle payouts plus tax compliance and support, all in one system. Ronak explains why spot payments are high risk, why manual workflows still dominate the space, and how stablecoins and AI are about to reshape fraud, identity, and trust.

Key Takeaways
One time payouts are a fraud magnet, inconsistent winners and risk based rules make verification and compliance much harder than payroll
Solving payments without solving tax and forms still leaves the biggest liability sitting with the organizer
Many sports and esports operators still run payouts in a surprisingly analog way, checks, cash, and post event cleanup
AI is now good enough to pressure identity verification, and stablecoins make recovery harder because transfers are effectively final
Product adoption depends on meeting users where they are, younger athletes expect texting and simple flows, not tickets and portals

Timestamped Highlights
00:29 What Payment Labs actually does, payouts plus tax compliance plus support for sports, esports, and creator economy use cases
01:15 The origin story, a real tax problem hit an esports operator and exposed how broken the payout workflow is
02:46 Why spot payments raise risk, random recipients, fraud pressure, and why bank partners treat this differently than payroll
04:58 The industry reality check, still running on checks and cash, and what digitizing the workflow unlocks next
06:58 AI fraud versus AI detection, how identity verification is getting bypassed and why stablecoin rails raise the stakes
11:55 The NIL wild west and the product lesson, meet athletes where they already live, including iMessage support

A Line Worth Repeating
"Now you have AI committing the fraud and then you have AI detecting the fraud."

Pro Tips for Builders and Operators
If your users are young and mobile first, build support where they already communicate, texting beats ticketing for adoption
Do not bolt on AI for a storyline, use it where it replaces manual work you already do and frees time for higher leverage decisions
Map your tasks with the Eisenhower quadrant, then automate what is repetitive before you chase shiny features

Call to Action
If this episode helped you think differently about fintech, fraud, and modern payout infrastructure, follow the show and share it with a founder or operator who touches payments. For more conversations at the intersection of tech, data, and real world execution, connect with Amir on LinkedIn and subscribe to the Elevano newsletter.

Healey Cypher, CEO of BoomPop and COO at Atomic, breaks down what separates founders who win from founders who stall. You will hear a clear way to judge whether an idea is truly worth building, plus the trust mechanics that get investors, customers, and teammates to actually follow you.

This conversation is a practical map for tech builders who want to pick smarter problems, execute faster, and earn credibility without the founder theater.

Key Takeaways
Founders matter most, but the idea is still a gate, the same great team can get wildly different outcomes depending on the market and timing
VC backed is a specific game, it requires not just big potential, but fast scale, and the incentives are not the same as building a profitable lifestyle business
A quick reality check for market size, if you need more than about five to seven percent penetration to hit meaningful revenue, it is usually a brutal path
Painkillers beat vitamins, solve an urgent problem people feel right now, or you risk getting cut the moment budgets tighten
Trust is built through authenticity, logic, and empathy, if one wobbles, people feel it fast, and progress slows everywhere

Timestamped Highlights
00:00:00 Healey's background, why BoomPop, and what the episode is really about
00:02:00 The post pandemic spend shift and the why now behind modern events and group travel
00:04:30 Founder versus idea, why execution dominates, but the opportunity still decides the ceiling
00:06:40 The VC reality, power law returns, speed, and why some good businesses are still a no for venture
00:09:15 A simple market math test, penetration levels that become a growth wall
00:19:00 Trust as a founder skill, the three ingredients and how to spot when one is missing
00:21:30 Vulnerability as a shortcut to real connection, plus the giver mindset that makes people want you to win

A line worth stealing
"If everyone wants you to win, it is a lot easier to win."

Pro Tips for Tech Founders
Ask yourself what you naturally look forward to doing, that is often your zone of strength, hire around the tasks you dread
Learn the financial basics early, especially cash flow, it is the scoreboard that keeps you alive long enough to win
When trust is lagging, check the three levers, are you showing the real you, can people follow your reasoning, do they feel you care about their outcomes

What's next:
If you build products, lead teams, or are thinking about starting something, follow the show so you do not miss episodes like this. Also connect with me on LinkedIn for short takeaways and clips from each conversation.

Ty Wang, cofounder and CEO of Angle Health, breaks down what it means to give back through public service, then shows how that same mindset drives his mission to modernize healthcare for small and midsize businesses. We get into why legacy health plans feel opaque and painful, what an AI native health plan actually changes behind the scenes, and how better data and workflows can create real cost stability for employers.

Ty shares his path from a federal scholarship and national service work to Palantir, and why he chose one of the most regulated, least glamorous industries to build in. If you have ever wondered why healthcare feels impossible to navigate, or why renewals can blindside a company, this conversation will give you a clear mental model of the problem and a practical view of what modernization looks like when it actually ships.

Key Takeaways
Healthcare feels broken because the infrastructure is fragmented, data is siloed, and even basic questions become hard to answer across inconsistent systems
Modernizing healthcare is not just about a new app, it is about rebuilding the operational core so workflows, claims, underwriting, and member experience can run on integrated data
Small and midsize businesses are hit hardest by cost volatility because they lack transparency, predictability, and negotiating leverage, yet health insurance is often a top line item after payroll
A strong approach to regulated markets is collaborative, treat regulators as partners in consumer protection, not obstacles to work around
Mission and impact can be a recruiting advantage, especially when the technical problems are genuinely hard and the outcomes touch real people fast

Timestamped Highlights
00:40 What Angle Health is, and what AI native means in a real health plan
02:05 The scholarship path that pulled Ty into public service and set his trajectory
04:06 The personal story behind the mission, the American dream, and why access matters
09:38 Why healthcare infrastructure is so complex, and how siloed systems create bad experiences
11:33 Why SMBs get squeezed, and how manual administration blocks customization at scale
13:20 The real pain point for employers, cost volatility and zero predictability before renewal
16:55 Why the tech can expand beyond SMBs, but why the SMB market is already massive
19:51 Lessons from building in a regulated industry, and why credibility and funding matter
22:26 Hiring for high agency, mission driven talent in a world full of AI companies

A line that sticks
“Unless you are lucky enough to work for a big company, these modern healthcare services are still largely inaccessible to the vast majority of Americans.”

Pro Tips for tech operators and builders
If you are modernizing a legacy industry, start with the infrastructure layer, fix the data model, integrate the systems, then automate workflows
In regulated markets, build relationships early, show how your product improves consumer outcomes, and make compliance a design constraint, not a bolt on
When selling into SMBs, predictability beats perfection, give customers a clear breakdown of what drives costs and what they can control

What's next:
If this episode helped you see healthcare and legacy modernization more clearly, follow the show on Apple Podcasts or Spotify and subscribe so you do not miss the next conversation. Also, share it with one operator or builder who is trying to modernize a messy industry.

Gabe Ravacci, CTO and co-founder at Internet Backyard, breaks down what the “computer economy” really looks like when you zoom in on data centers, billing, invoicing, and the financial plumbing nobody wants to touch. He shares how a rejected YC application, a finance stint, and a handful of hard lessons pushed him from hardware curiosity to building fintech infrastructure for compute.

If you care about where compute is headed, or you are early in your career and trying to find your path without overplanning it, this one will land.

Key Takeaways
• Startups often happen “by accident” when your competence meets the right problem at the right time
• Compute accessibility is not only a chip problem, it is also a finance and operations problem
• Rejection can be data, not a verdict, treat it as feedback to sharpen the craft
• A real online presence is less about networking and more about being genuinely useful in public
• Time blocking and single task focus beat grinding when you are juggling school, work, and a startup

Timestamped Highlights
00:28 What Internet Backyard is building, fintech infrastructure for data center financial operations
01:37 The first startup attempt, cheaper compute via FPGA based prototyping, and why investors passed
04:48 The pivot, from hardware tools to a finance informed view of compute and transparency gaps
06:55 How Gabe reframed YC rejection, process over outcome, “a tree of failures” that builds skill
08:29 Building a digital brand on X, what he posted, how he learned in public, and why it worked
13:36 The real balancing act, dropping classes, finishing the degree well, and strict time blocking
20:00 Books that shaped his thinking, Siddhartha, The Art of Learning, Finite and Infinite Games

A line worth keeping
“The process is really more important than any outcome.”

Pro Tips for builders
• Treat learning like a skill, ask better questions before you chase better answers
• Make focus a system, set blocks, mute distractions, and do one thing at a time
• Share what you are learning in public, not to perform, but to be useful and find signal

Call to Action
If this episode sparked an idea, follow or subscribe so you do not miss the next one. Also check out Amir's newsletter for more conversations at the intersection of people, impact, and technology.

Data leaders are being asked to ship real AI outcomes while the foundations are still messy. In this conversation, Dave Shuman, Chief Data Officer at Precisely, breaks down what actually determines whether AI adoption sticks, from hiring “comb shaped” talent to building trusted data products that make AI outputs believable and usable.

If you are building in data, AI, or analytics, this episode is a practical map for what needs to be true before AI can move from demos to dependable, repeatable impact.

Key Takeaways
Comb shaped talent beats narrow specialization, AI work rewards people who can span multiple skills and collaborate well
Adoption is a trust problem, and trust starts with data integrity, lineage, context, and a semantic layer that business users can understand
Open source drives the innovation, commercialization makes it safe and usable at enterprise scale, especially around security and support
Data must be fit for purpose, start every AI project by asking what data it needs, who curates it, and what the known warts are
Humans are still the last mile, small workflow choices can make adoption jump, even when the model is already accurate

Timestamped Highlights
00:56 The shift from T shaped to comb shaped talent, what modern AI teams actually need to look like
05:36 Hiring for team fit over “world class” niche skills, and when to bring in trusted partners for depth
07:37 How open source sparks the ideas, and why enterprises still need hardened, supported versions to scale
11:31 Where AI adoption is today, why summarization is only the beginning, and what unlocks “AI 2.0”
13:39 The trust stack for AI, clean integrated data, lineage, context, catalog, semantic layer, then agents
19:26 A real adoption lesson from machine learning, and why the human experience decides if the system wins

A line worth stealing
“You do not just take generative AI and throw it at your chaos of data and expect it to make magic out of it.”

Pro Tips for data and AI leaders
Hire and build teams like Tetris, fill skill voids across the group instead of chasing one perfect profile
Use partners for the sharp edges, but require knowledge transfer so your team levels up every engagement
Make adoption easier by designing for human behavior, sometimes the smallest workflow tweak beats more accuracy
Build governed data products in a catalog, then validate AI outputs side by side with dashboards to earn trust fast

Call to Action
If this helped you think more clearly about AI adoption, talent, and data foundations, follow the show and turn on notifications so you do not miss the next episode. Also, share it with one data or engineering leader who is trying to get AI out of pilots and into real workflows.

Cloud bills are climbing, AI pipelines are exploding, and storage is quietly becoming the bottleneck nobody wants to own. Ugur Tigli, CTO at MinIO, breaks down what actually changes when AI workloads hit your infrastructure, and how teams can keep performance high without letting costs spiral. In this conversation, we get practical about object storage, S3 as the modern standard, what open source really means for security and speed, and why “cloud” is more of an operating model than a place.

Key takeaways
• AI multiplies data, not just compute, training and inference create more checkpoints, more versions, more storage pressure
• Object storage and S3 are simplifying the persistence layer, even as the layers above it get more complex
• Open source can improve security feedback loops because the community surfaces regressions fast, the real risk is running unsupported, outdated versions
• Public cloud costs are often less about storage and more about variable charges like egress, many teams move data on prem to regain predictability
• The bar for infrastructure teams is rising, Kubernetes, modern storage, and AI workflow literacy are becoming table stakes

Timestamped highlights
00:00 Why cloud and AI workloads force a fresh look at storage, operating models, and cost control
00:00 What MinIO is, and why high performance object storage sits at the center of modern data platforms
01:23 Why MinIO chose open source, and how they balance freedom with commercial reality
04:08 Open source and security, why faster feedback beats the closed source perception, plus the real risk factor
09:44 Cloud cost realities, egress, replication, and why “fixed costs” drive many teams back inside their own walls
15:04 The persistence layer is getting simpler, S3 becomes the standard, while the upper stack gets messier
18:00 Skills gap, why teams need DevOps plus AIOps thinking to run modern storage at scale
20:22 What happens to AI costs next, competition, software ecosystem maturity, and why data growth still wins

A line worth keeping
“Cloud is not a destination for us, it's more of an operating model.”

Pro tips for builders and tech leaders
• If your AI initiative is still a pilot, track egress and data movement early, that is where “surprise” costs tend to show up
• Standardize around containerized deployment where possible, it reduces the gap between public and private environments, but plan for integration friction like identity and key management
• Treat storage as a performance system, not a procurement line item, the right persistence layer can unblock training, inference, and downstream pipelines

What's next:
If you're building with AI, running data platforms, or trying to get your cloud costs under control, follow the show and subscribe so you do not miss upcoming episodes. Share this one with a teammate who owns infrastructure, data, or platform engineering.

This is an early conversation I am bringing back because it feels even more relevant now, the intersection of AI and art is turning into a real cultural shift.

I sit down with Marnie Benney, independent curator at the intersection of contemporary art and technology, and co-founder of AIartists.org, a major community for artists working with AI. We talk about what AI art actually is beyond the headlines, where authorship gets messy, and why artists might be the best people to pressure test the societal impact of machine learning.

Key takeaways
• AI in art is not a single thing, it is a spectrum of choices, dataset, process, medium, and intent
• The most interesting work treats AI as a collaborator, not a shortcut, a back and forth that reshapes the artist's decisions
• Authorship is still unsettled, some artists see AI as a tool like an instrument, others treat it as a creative partner
• The fear that AI replaces creativity misses the point, artists can use the machine's unexpected output to expand human expression
• Access matters, compute, tooling, and collaboration between artists and technologists will shape who gets to experiment at the frontier

Timestamped highlights
00:04:00 Curating science, climate, and public engagement, the path into tech driven exhibitions
00:07:41 What AI art can mean in practice, datasets, iteration loops, and choosing an output medium
00:10:48 Who gets credit, tool versus collaborator, and the art world's evolving rules
00:13:51 Fear, job displacement, and a healthier frame, human plus machine as a creative partnership
00:22:57 The new skill stack, what artists need to learn, and where collaboration beats handoffs
00:29:28 The pushback from traditional art circles, philosophy and intention versus novelty
00:37:17 Inside the New York exhibition, collaboration between human and machine, visuals, sculpture, and sound
00:48:16 The magic of the unknown, why the output can surprise even the artist

A line that stuck
“Artists are largely showing a mirror to society of what this technology is, for the positive and the negative.”

Pro tips for builders and operators
• Treat creative communities as an early signal, artists surface second order effects before markets do
• If you are building AI products, study authorship debates, they map directly to credit, accountability, and trust
• Collaboration beats delegation, when domain experts and technologists iterate together, the work gets sharper fast

Call to action
If this episode hits for you, follow the show so you do not miss the next drop. And if you are building in data, AI, or modern tech teams, follow me on LinkedIn for more conversations that connect technology to real world impact.

Most teams are approaching AI from the wrong direction, either chasing the tech with no clear problem or spinning up endless pilots that never earn their keep. In this episode, Amir Bormand sits down with Steve Wunker, Managing Director at New Markets Advisors and co-author of AI and the Octopus Organization, to break down what actually works in enterprise AI.

You will hear why the real challenge is organizational, not technical, how IT and business have to co-own the outcome, and what it takes to keep AI systems valuable over time. If you are trying to move beyond experimentation and into real impact, this conversation gives you a practical blueprint.

Key takeaways
• Pick a handful of high impact problems, not hundreds of small pilots, focus is what creates measurable ROI
• Treat AI as a workflow and change program, not a tool you bolt onto an existing process
• IT has to evolve from order taker to strategic partner, including stronger AI ops and ongoing evaluation
• Start with the destination, redefine the value proposition first, then redesign the operating model around it
• Ongoing ownership matters, AI is not a one and done delivery, it needs stewardship to stay useful

Timestamped highlights
00:39 What New Markets Advisors actually does, innovation with a capital I, plus AI in value props and operations
01:54 The two common mistakes, pushing AI everywhere and launching hundreds of disconnected pilots
04:19 Why IT cannot just take orders anymore, plus why AI ops is not the same as DevOps
07:56 Why the octopus is the perfect model for an AI age organization, distributed intelligence and rapid coordination
11:08 The HelloFresh example, redesign the destination first, then let everything cascade from that
17:37 The line you will remember, AI is an ongoing commitment, not a project you ship and forget
20:50 A cautionary pattern from the dotcom era, avoid swinging from timid pilots to extreme headcount mandates

A line worth keeping
“You cannot date your AI system, you need to get married to it.”

Pro tips for leaders building real AI outcomes
• Define success metrics before you build, then measure pre and post, otherwise you are guessing
• Redesign the process, do not just swap one step for a model, aim for fewer steps, not faster steps
• Assign long term ownership, budget for maintenance, evaluation, and model oversight from day one

Call to action
If this episode helped you rethink how to drive AI results, follow the show and subscribe so you do not miss the next conversation. Share it with a leader who is stuck in pilot mode and wants a path to production.

Manufacturing is getting faster, messier, and more expensive when quality slips.

Daniel First, Founder and CEO at Axion, joins Amir to break down how AI is changing the way manufacturers detect issues in the field, trace root causes across messy data, and shorten the time from “customers are hurting” to “we fixed it.”

Episode Summary
Daniel First, Founder and CEO at Axion, explains why modern manufacturing is living in the bottom of the quality curve longer than ever, and how AI can help companies spot issues early, investigate faster, and actually close the loop before warranty costs and customer trust spiral. If you work anywhere near hardware, infrastructure, or complex systems, this is a sharp look at what “AI first” means when real products fail in the real world.

You will hear why quality is becoming a competitive weapon, how unstructured signals hide the truth, and what changes when AI agents start doing the detection, investigation, and coordination work humans have been drowning in.

What you will take away
Quality is not just a defect problem, it is a speed and trust problem, especially when product cycles keep compressing.
AI creates leverage by pulling together signals across the full product life cycle, not by sprinkling a chatbot on one system.
The fastest teams win by finding issues earlier, scoping impact correctly, and fixing what matters before customers notice the pattern.
A clear ROI often lives in warranty cost avoidance and downtime reduction, not just “efficiency” metrics.
“AI first” gets real when strategy becomes operational, and contradictions in how teams prioritize issues get exposed.

Timestamped highlights
00:00 Why manufacturing is a different kind of problem, and why speed is harder than it looks
01:10 What Axion does, and how it detects, investigates, and resolves customer impacting issues
05:10 The new reality, faster product cycles mean living in the bottom of the quality curve
10:05 Why it can take hundreds of days to truly solve an issue, and where the time disappears
16:20 How to evaluate AI vendors in manufacturing, specialization, integrations, and cross system workflows
22:40 The shift coming to quality teams, from reading data all day to making higher level decisions
28:10 What “AI first” looks like in practice, and how AI exposes misalignment across teams

A line worth repeating
“Humans are not that great at investigating tens of millions of unstructured data points, but AI can detect, scope, root cause, and confirm the fix.”

Pro tips you can apply
When evaluating an AI solution, ask three questions up front: how specialized the AI must be, whether you need a full workflow solution or just an API, and whether the use case spans multiple systems and teams.
Treat early detection as a first class objective, the longer the accumulation phase, the more cost and customer damage you silently absorb.
Align issue prioritization to strategy, not just frequency, cost, or the loudest internal voice.

Follow:
If this episode helped you think differently about quality, speed, and AI in the real world, follow the show on Apple Podcasts or Spotify so you do not miss the next one. If you want more conversations like this, subscribe to the newsletter and connect with Amir on LinkedIn.

Synthetic data is moving from a niche concept to a practical tool for shipping AI in the real world. In this episode, Amit Shivpuja, Director of Data Product and AI Enablement at Walmart, breaks down where synthetic data actually helps, where it can quietly hurt you, and how to think about it like a data leader, not a demo builder.

We dig into what blocks AI from reaching production, how regulated industries end up with an unfair advantage, and the simple test that tells you whether synthetic data belongs anywhere near a decision making system.

Key Takeaways
• AI success still lives or dies on data quality, trust, and traceability, not model hype.
• Synthetic data is best for exploration, stress testing, and prototyping, but it should not be the backbone of high stakes decisions.
• If you cannot explain how an output was produced, synthetic only pipelines become a risk multiplier fast.
• Regulated industries often move faster with AI because their data standards, definitions, and documentation are already disciplined.
• The smartest teams plan data early in the product requirements phase, including whether they need synthetic data, third party data, or better metadata.

Timestamped Highlights
00:01 The real blockers to getting AI into production, data, culture, and unrealistic scale assumptions
03:40 The satellite launch pad analogy, why data is the enabling infrastructure for every serious AI effort
07:52 Regulated vs unregulated industries, why structure and standards can become a hidden advantage
10:47 A clean definition of synthetic data, what it is, and what it is not
16:56 The “explainability” yardstick, when synthetic data is reasonable and when it is a red flag
19:57 When to think about data in stakeholder conversations, why data literacy matters before the build starts

A line worth sharing
“AI is like launching satellites. Data is the launch pad.”

Pro Tips for tech leaders shipping AI
• Start data discovery at the same time you write product requirements, not after the prototype works
• Use synthetic data early, then set milestones to shift weight toward real world data as you approach production
• Sanity check the solution, sometimes a report, an email, or a deterministic workflow beats an AI system

Call to Action
If this episode helped you think more clearly about data strategy and AI delivery, follow the show on Apple Podcasts and Spotify, and share it with a builder or leader who is trying to get AI out of pilot mode. You can also follow me on LinkedIn for more episodes and clips.

Tom Pethtel, VP of Engineering at Flock Safety, breaks down the real learning curve of moving from builder to manager, and how to keep your technical edge while scaling your impact through people.

You will hear how Tom's path from rural Ohio to leading high stakes engineering teams shaped his approach to leadership, hiring, and staying close to the customer.

Key Takeaways
Promotions usually come from doing your current job well, plus stepping into the work above you that is not getting done
Great leaders do not fully detach from the craft, they stay close enough to the work to make good calls and keep context
Put yourself where the real learning is happening, watch customers, go to the failure point, get proximity to the source of truth
Hiring is not only pedigree, it is fundamentals plus grit, the willingness to solve what looks hard because it is “just software”
As you scale to teams of teams, your job becomes time allocation, jump on the biggest business fire while still making rounds everywhere

Timestamped Highlights
00:32 What Flock Safety actually builds, from AI enabled devices to Drone as a First Responder
02:04 Dropping out of Georgia Tech, switching disciplines, and choosing software for speed and impact
03:30 A life threatening detour, learning you owe 18,000 dollars, and teaching yourself to build an iPhone app to survive
06:33 Why Tom values grit and non traditional backgrounds in hiring, and the “it is just software” mindset
08:46 Proximity and learning, go to the problem, plus the lessons he borrows from Toyota Production System
09:55 A practical story of chasing expertise, from Kodak to Nokia, and hiring the right leader by going where the knowledge lives
14:27 The truth about becoming a manager, you rarely feel ready, you take the seat and learn fast
19:18 Leading teams of teams, you cannot be everywhere, so you go where the biggest fire is, without neglecting the rest
22:08 The promotion playbook, stop only doing your job, start solving the next job

A line worth stealing
“Do your job really well, plus go do the work above you that is not getting done, that's how you rise.”

Pro Tips for engineers stepping into leadership
Stay technical enough to keep your judgment sharp, even if it is only five or ten percent of your week
If you want to grow, chase proximity, sit with the customer, sit with the failure, sit with the best people in the space
Measure your impact as leverage, if a team of ten is producing ten times, your role is not less valuable, it is multiplied
When you lead multiple disciplines, rotate your attention intentionally, do not camp on one fire for a full year

Call to Action
If this episode helped you rethink leadership, share it with one builder who is about to step into management. Subscribe on Apple Podcasts, Spotify, and YouTube, and follow Amir on LinkedIn for more conversations with operators building real teams in the real world.

Phil Freo, VP of Product and Engineering at Close, has lived the rare arc from founding engineer to executive leader. In this conversation, he breaks down why he stayed nearly 12 years, and what it takes to build a team that people actually want to grow with.

We get into retention that is earned, not hoped for, the culture choices that compound over time, and the practical systems that make remote work and knowledge sharing hold up at scale.

Key takeaways
• Staying for a decade is not about loyalty, it is about the job evolving and your scope evolving with it
• Strong retention is often a downstream effect of clear values, internal growth opportunities, and leaders who trust people to level up
• Remote can work long term when you design for it, hire for communication, and invest in real relationship building
• Documentation is not optional in remote, and short lived chat history can force healthier knowledge capture
• Bootstrapped, customer funded growth can create stability and control that makes teams feel safer during chaotic markets

Timestamped highlights
00:02:13 The founders, the pivots, and why Phil joined before Close was even Close
00:06:17 Why he stayed so long, the role keeps changing, and the work gets more interesting as the team grows
00:10:54 “Build a house you want to live in”, how valuing tenure shapes culture, code quality, and decision making
00:14:14 Remote as a retention advantage, moving life forward without leaving the company behind
00:20:23 Over documenting on purpose, plus the Slack retention window that forces real knowledge capture
00:22:48 Bootstrapped versus VC backed, why steady growth can be a competitive advantage when markets tighten
00:28:18 The career accelerant most people underuse, initiative, and championing ideas before you are asked

One line worth stealing
“Inertia is really powerful. One person championing an idea can really make a difference.”

Practical ideas you can apply
• If you want growth where you are, do not wait for permission, propose the problem, the plan, and the first step
• If you lead a team, create parallel growth paths, management is not the only promotion ladder
• If you are remote, hire for writing, decision clarity, and follow through, not just technical depth
• If Slack is your company memory, it is not memory, move durable knowledge into docs, issues, and specs

Stay connected:
If this episode sparked an idea, follow or subscribe so you do not miss the next one. And if you want more conversations on building durable product and engineering teams, check out my LinkedIn and newsletter.

Pete Hunt, CEO of Dagster Labs, joins Amir Bormand to break down why modern data teams are moving past task based orchestration, and what it really takes to run reliable pipelines at scale. If you have ever wrestled with Apache Airflow pain, multi team deployments, or unclear data lineage, this conversation will give you a clearer mental model and a practical way to think about the next generation of data infrastructure.

Key Takeaways
• Data orchestration is not just scheduling, it is the control layer that keeps data assets reliable, observable, and usable
• Asset based thinking makes debugging easier because the system maps code directly to the data artifacts your business depends on
• Multi team data platforms need isolation by default, without it, shared dependencies and shared failures become a tax on every team
• Good software engineering practices reduce data chaos, and the tools can get simpler over time as best practices harden
• Open source makes sense for core infrastructure, with commercial layers reserved for features larger teams actually need

Timestamped Highlights
00:00:50 What Dagster is, and why orchestration matters for every data driven team
00:04:18 The origin story, why critical institutions still cannot answer basic questions about their data
00:07:02 The architectural shift, moving from task based workflows to asset based pipelines
00:08:25 The multi tenancy problem, why shared environments break down across teams, and what to do instead
00:11:21 The path out of complexity, why software engineering best practices are the unlock for data teams
00:17:53 Open source as a strategy, what belongs in the open core, and what belongs in the paid layer

A Line Worth Repeating
“Data orchestration is infrastructure, and most teams want their core infrastructure to be open source.”

Pro Tips for Data and Platform Teams
• If debugging feels impossible, you may be modeling your system around tasks instead of the data assets the business actually consumes
• If multiple teams share one codebase, isolate dependencies and runtime early, shared Python environments become a silent reliability risk
• Reduce cognitive load by tightening concepts, fewer new nouns usually means a smoother developer experience

Call to Action
If this episode helped you rethink data orchestration, follow the show on Apple Podcasts and Spotify, and subscribe so you do not miss future conversations on data, AI, and the infrastructure choices that shape real outcomes.

Sandesh Patnam, Managing Partner at Premji Invest, breaks down how long duration capital changes the way you evaluate companies, founders, and moats. We talk about what most growth investors miss, why product strength still matters, and how to separate real AI businesses from thin wrappers in a noisy market.

Premji Invest is a captive, evergreen fund built to grow an endowment that supports major education work, which gives the team flexibility on time horizon and partnership style. Sandesh shares how that shows up in diligence, how they think about backing contrarian founders, and why the best companies in this AI era may still be ahead of us.

Key Takeaways
Focus on the long arc, not quarter by quarter optics, founders make better decisions when they are not trapped in short term metrics
In growth investing, TAM models and KPI spreadsheets can distract from the core question, does the product have real strength and an expanding roadmap
Enduring outcomes often come from backing a contrarian view early, then helping it move from contrarian to consensus over time
Evergreen capital changes behavior, you can slow down, build relationships, and partner across private and public markets instead of treating IPO as the finish line
In AI, separate the stack into data center, foundation models, and applications, then look for defensibility like vertical depth, data moats, and compounding usage value

Timestamped highlights
00:38 Premji Invest explained, evergreen structure, one LP, and why public markets can be part of the journey, not the exit
04:47 Two common growth investor lenses and what gets missed when product and roadmap do not lead the thesis
08:48 Partnership mindset, building trust, and being the first call when things get hard
12:48 The contrarian to consensus path, what creates alpha, and how to support founders through the lonely middle
19:54 Why rushing decisions is a trap, and how flexibility changes when and how you can partner with a company
20:55 AI investing framework, three layers, what looks frothy, what can endure, and where moats still exist
26:48 The cost of intelligence is collapsing, why this may still be the early internet moment, and what that implies for the next wave

A line that stuck with me
“We want to be the first port of call when the seas are turbulent.”

Practical moves you can steal
Pressure test the roadmap, ask when product two ships, what adjacency comes next, and what tradeoffs change at scale
When evaluating AI apps, demand a defensibility story beyond the model, look for proprietary data, vertical workflow depth, and value that improves with usage
Treat speed as a risk factor, if you cannot complete your churn cycle of doubt and validation, step back rather than force certainty

Call to Action
If you liked this one, follow the show and share it with a founder, operator, or investor who is building in AI right now. For more conversations at the intersection of tech, business, and execution, subscribe and connect with me on LinkedIn.

Software engineering is changing fast, but not in the way most hot takes claim. Robert Brennan, Co-founder and CEO at OpenHands, breaks down what happens when you outsource the typing to the LLM and let software agents handle the repetitive grind, without giving up the judgment that keeps a codebase healthy. This is a practical conversation about agentic development, the real productivity gains teams are seeing, and which skills will matter most as the SDLC keeps evolving.

Key Takeaways
AI in the IDE is now table stakes for most engineers, the bigger jump is learning when to delegate work to an agent
The best early wins are the unglamorous tasks, fixing tests, resolving merge conflicts, dependency updates, and other maintenance work that burns time and attention
Bigger output creates new bottlenecks, QA and code review can become the limiting factor if your workflow does not adapt
Senior engineering judgment becomes more valuable, good architecture and clean abstractions make it easier to delegate safely and avoid turning the codebase into a mess
The most durable human edge is empathy, for users, for teammates, and for your future self maintaining the system

Timestamped Highlights
00:40 What OpenHands actually is, a development agent that writes code, runs it, debugs, and iterates toward completion
02:38 The adoption curve, why most teams start with IDE help, and what “agent engineers” do differently to get outsized gains
06:00 If an engineer becomes 10x faster, where does the time go, more creative problem solving, less toil
15:01 A real example of the SDLC shifting, a designer shipping working prototypes and even small UI changes directly
16:51 The messy middle, why many teams see only moderate gains until they redraw the lines between signal and noise
20:42 Skills that last, empathy, critical thinking, and designing systems other people can understand
22:35 Why this is still early, even if models stopped improving today, most orgs have not learned how to use them well yet

A line worth sharing
“The durable competitive advantage that humans have over AI is empathy.”

Pro Tips for Tech Teams
Start by delegating low creativity tasks, CI failures, dependency bumps, and coverage improvements are great training wheels
Define “safe zones” for non engineers contributing, like UI tweaks, while keeping application logic behind clearer guardrails
Invest in abstractions and conventions, you want a codebase an agent can work with, and a human can trust
Track where throughput stalls, if PR review and QA are the bottleneck, productivity gains will not show up where you expect

Call to Action
If you got value from this one, follow the show and share it with an engineer or product leader who is sorting out what “agentic development” actually means in practice.

Deborah Hanus, Co-founder and CEO at Sparrow, joins Amir to unpack the founder journey from academia to building a scaled company. They dig into why leave management is still a messy, high stakes problem, and how Sparrow is turning it into a clean, guided experience for both HR and employees.

Sparrow helps companies provide employee leave across the United States and Canada, and Deborah shares what it really takes to scale a compliance driven business without slowing down. From founder resilience and early stage emotional swings to hiring, onboarding, and culture design, this one is packed with lessons for operators and builders.

Key takeaways
• Academia can be real founder training, especially for building resilience and hearing “no” without losing your edge
• Early stage startups feel brutal because you have too few data points, it is easy to overreact to every win or setback
• Compliance and leave are fundamentally data problems, the right info to the right person at the right time changes everything
• Scaling leadership is mostly communication and alignment, five people and 250 people require totally different systems
• Culture does not stay stable by accident, values must drive hiring, training, rewards, and performance management

Timestamped highlights
00:37 What Sparrow does, and the 300 million dollars in payroll cost savings milestone
01:37 Why academia can prepare you for founding, and how customer pain beats outside skepticism
03:40 The leave compliance mess, and why state by state rules made the problem explode
08:25 The two real ways startups die, and why morale matters as much as cash
12:55 Leading at scale, onboarding, clarity, and the feedback questions that keep teams aligned
19:54 “Scale intentionally” as a culture principle for a company that cannot afford to break things
25:48 Keeping values stable while everything else evolves as the team grows

A line worth sharing
“Companies end when you run out of cash or you run out of morale.”

Pro tips you can steal
• Treat the employee journey like a product journey, from recruiting through promotions and hard moments
• Before a big change, collect questions early so the message lands where people actually are
• After a meeting, ask “What were the main points?” to see what people heard, then tighten your messaging
• Invest in onboarding and goal clarity to prevent teams from drifting into competing priorities

Call to action
If you enjoyed this conversation, follow and subscribe so you do not miss what is next.

Max Bruner, Founder and CEO of Anzen, joins Amir Bormand to break down why insurance is quietly one of the biggest data and workflow opportunities in tech right now. They dig into Max's unconventional path from foreign policy to building an executive liability marketplace, and what it really takes to modernize a slow moving industry with AI.

If you care about building in real world markets, scaling with discipline, and using AI for more than content, this one will sharpen your thinking fast.

Key Takeaways
• Insurance is not flashy, but it is foundational, massive, profitable, and packed with repeatable workflows that software can improve
• The best tech opportunities are often in slow moving industries with lots of data and outdated systems
• Better decision making comes from predicting outcome impact and pressure testing your thinking with a strong community around you
• AI value is clearest when it drives real operations, faster transactions, lower costs, and better service
• Fundraising is a pipeline game now, treat it like sales, build the plan, hit the numbers, run a tight process

Timestamped Highlights
00:42 What Anzen actually does, a one stop marketplace for executive liability quotes across the US
02:29 From Arabic studies and foreign policy to discovering insurance through political risk
08:12 The curiosity engine, how deep research habits shaped his ability to build in new domains
11:23 Decision guardrails, learning from outcomes and using trusted people to keep you efficient
13:12 Why choose insurance, building in industries that make the world work, plus the profit reality
17:29 The startup advantage, modern infrastructure vs incumbent legacy systems, and why catching up takes time
20:36 Raising in today's market, what changed, what worked, and why the pitch volume mattered

A line worth stealing
“Sometimes in tech we miss the application, there are massive industries to go change if we apply technology in the right way.” Max Bruner

Pro Tips for builders
• Pick markets with repeatable workflows, you can ship measurable value faster
• Spend your time where the outcome impact is high, skip low ROI rabbit holes
• Build a real financial plan before fundraising, then operate close to it
• Run fundraising like a sales process, pipeline, volume, and discipline win

Call to Action
If you enjoyed this conversation, follow the show and leave a quick review, it helps more builders find it.

Stu Solomon, CEO of HUMAN, joins Amir to unpack a blind spot most teams underestimate: a huge share of online activity is not people at all, it is automated traffic. They break down how verification really works at internet scale, why agentic workflows change the rules, and what it will take to build trust when bots transact with bots.

If you have ever wondered how fraud, fake clicks, account abuse, and synthetic behavior get caught in real time, this episode is a clear, practical look behind the curtain.

Key takeaways
• Most of the internet is machine traffic now, the goal is no longer spotting bots, it is separating good machines from bad ones
• Trust is built by combining behavior, infrastructure signals, and identity or credential history into fast decisions at scale
• Agentic systems lower the barrier to entry for attackers, less skilled actors can now create outsized impact
• The hard part is accountability, when a machine acts with your authority, who owns the outcome
• Adoption follows convenience, but visibility matters, if it feels like a black box, people will not trust it

Timestamped highlights
00:33 HUMAN in plain English, making split second decisions about who is human, and whether they are safe
03:59 The trust stack, behavior signals, infrastructure clues, and identity or credential history
10:19 The real shift with AI, lower barriers for attackers, plus the rise of agentic autonomy
14:37 The cake story, an agent completes the task, then surprises you with a 750 dollar bill
17:22 Bots talking to bots, where accountability and liability get messy fast
24:18 Security builds trust, trust unlocks adoption, and society is already closer than it thinks

A line you will remember
“We have always operated on the notion that if you are human, you are good, and if you are a machine, you are bad. That is simply not the case anymore.”

Practical ideas you can use
• Add guardrails when you delegate to tools, especially budgets, limits, and approval steps
• Watch for trust signals, not just identity checks, behavior plus infrastructure plus history beats any single data point
• Design for visibility, show users what the system did and why, so trust can compound over time

Follow
If this episode helped you think more clearly about trust, fraud, and agentic systems, follow the show, subscribe for more conversations like this, and share it with a teammate who is building in ads, ecommerce, identity, security, or AI.

Mek Stittri, CTO at Stuut, breaks down a leadership skill that sounds simple but gets messy fast: trust, then verify. You will learn how to delegate without losing control, how to stay close to the work without becoming a micromanager, and how AI is changing what it means to review and own technical outcomes.

Key takeaways
• Trust and verify starts with alignment, define success clearly, then keep a real line of sight to outcomes
• Verification is not micromanagement, it is accountability, your team's results are your responsibility as a leader
• Use lightweight mechanisms like weekly reports, and stay ready to answer questions three levels deep when speed matters
• AI is pushing engineers toward system design and management skills, you will manage agents and outputs, not just code
• Fast feedback prevents slow damage, address issues early, praise in public, give direct feedback in private

Timestamped highlights
00:41 Stuut in one minute, agentic AI for finance ops, starting with collections and faster cash outcomes
01:54 Trust without verification becomes disconnect, why leaders still need to get close to the details
03:42 The three levels deep idea, how to keep situational awareness without hovering
06:33 The next five years, engineers managing teams of agents, system design as the differentiator
11:40 Feedback as a gift, why speed and privacy matter when coaching
16:54 The timing art, when to wait, when to jump in, using time and impact as your signal
19:43 Two leaders who shaped Mek's leadership style, letting people struggle, learn, and then win
23:29 Curiosity as the engine behind trust and verification

A line worth repeating
“Feedback is a blessing.”

Practical coaching moves you can borrow
• Set the bar up front, define the end goal and what good looks like
• Build a steady cadence, short weekly updates beat occasional deep dives
• Calibrate your involvement, give space early, step in when time passes or impact expands
• Make feedback faster, smaller course corrections beat late big confrontations
• Use AI as a reviewer, get quick context on unfamiliar code and decisions so you can ask better questions

Call to action
If you found this useful, follow the show and share it with a leader who is leveling up from IC to manager. For more leadership and hiring insights in tech, subscribe and connect with Amir on LinkedIn.

Michael Topol, Co-founder and Co-CEO at MGT Insurance, explains why insurance is quietly becoming one of the most interesting data and AI problems in tech.

We get practical about turning messy legacy data into usable signals, how agentic tools change decision making, and why culture and team design matter as much as the models.

MGT Insurance is building a fully verticalized AI and agentic native insurance company for small businesses, pairing experienced insurance operators with top tier technologists. Michael breaks down what changed in the last few years that makes real disruption possible now, and what modern product delivery looks like when prototyping is cheap and iteration is fast.

Key takeaways
• Insurance is a data business at its core, but most incumbents cannot use their data fast enough because it lives across silos, mainframes, and old systems.
• Modern AI lets teams combine internal data with public signals to speed up underwriting and improve consistency, without losing human judgment.
• Vibe coding and rapid prototyping collapse the gap between idea and implementation, bringing product, engineering, and the business closer together.
• Senior talent gets more leverage in an AI driven workflow, and small teams can ship faster by focusing on problem solving, not just building.
• Pod based teams, fixed outcome planning, and strong culture help regulated companies move quickly while staying inside the rules.

Timestamped highlights
00:44 What MGT Insurance is, and what “AI and agentic native” means in practice
02:09 Why small business insurance matters more than most people realize
06:06 The real blocker for incumbents, data exists but it is not usable
08:55 Vibe coding in a regulated industry, where it helps first
12:54 Requirements are shifting, prototypes bring teams closer to the real problem
17:26 The pod structure, plus the Basecamp inspired approach to scoping and shipping
20:52 Better, faster, cheaper, why AI finally makes all three possible
22:11 Where to connect, and who they are hiring

A line you will remember
“Insurance is really just a big data problem.”

Pro tips you can steal
• Build cross functional pods early, include a domain expert, a technical product lead, and a senior engineer from day one.
• Scope for outcomes, not perfect specs, then let the team decide the depth as they build.
• Use AI to automate collection and synthesis, then keep humans focused on the decisions and trade offs.

Call to action
If you enjoyed this one, follow the show and share it with a builder who is trying to ship faster with a smaller team.

Marco DeMeireles, co-founder and managing partner at ANSA, breaks down how a modern VC firm wins by being focused, data driven, and allergic to hype. If you want a clearer view of how investors evaluate open source, mission critical industries, and AI categories, this is a practical, operator minded look behind the curtain.

Marco explains ANSA's focus on what they call undercover markets, from open source and open core businesses to defense, intelligence, cybersecurity, healthcare IT, and infrastructure companies that become deeply embedded and rarely lose customers. We also get into how they raised their first fund, why portfolio concentration changes everything, and how they push founders toward efficiency and profitability without killing ambition.

Key Takeaways
• In open source, two things matter more than most people admit: founder DNA tied to the project, and what you put behind the paywall that enterprises will pay for
• Concentration forces rigor, fewer bets means deeper diligence, clearer underwriting, and more hands on support post investment
• Great early stage support is not just advice, it is people, capital planning, and operating help that changes outcomes
• AI investing gets easier when you start with category selection, avoid fickle demand, then hunt for non obvious wedges in real workflows
• Long term winners tend to show compounding growth, improving efficiency, real demand, durable business models, founder strength, and an asymmetric risk reward at the price

Timestamped Highlights
00:00 Marco's quick intro and what ANSA invests in
00:36 Undercover markets, open source, and mission critical industries explained
01:54 The two open source filters that change how ANSA underwrites a deal
03:31 Why open source can work in defense, plus the Defense Unicorns example
05:29 How a new firm raises a first fund, and what the right LP partners look for
10:50 The three levers ANSA pulls with founders: people, capital, operations
15:22 Marco's six part framework for evaluating investments
17:39 How to tell who wins in crowded AI categories, and why niche wedges matter
21:41 The first investment they will never forget, and the air gapped cloud problem

A line worth stealing
“You can't outsource greatness. You can't outsource people selection.”

Pro Tips
• If you are building open source, be intentional about what is free versus paid, security, compliance, and auditability tend to earn real pricing power
• If your business depends on paid acquisition, test a path to organic growth early, it can unlock profitability and give you leverage in fundraising and exits
• In crowded AI spaces, pick a wedge where documentation is heavy, complexity is low, and ROI is obvious, then expand once you own that lane

Call to Action
If this episode helped you think more clearly about investing and building, follow the show, subscribe, and share it with one founder or operator who is navigating funding, pricing, or go to market right now.

AI is everywhere, but most teams are stuck talking about efficiency and headcount. In this episode, Dave Edelman, executive advisor and best selling author, shares a sharper lens, how to use AI to create real customer value and real growth.

We get into the high road vs low road of AI, what personalization should look like now, and why data has to become an enterprise asset, not a bunch of disconnected departmental files.

Key Takeaways
• Efficiency is table stakes, the real win is using AI to build new experiences that customers actually want
• Start with customer friction, find the biggest compromises and frustrations in your category, then design around that
• Personalization is no longer limited by content scale in the same way, AI changes the economics of tailoring experiences
• You do not always need one giant database, modern tools can pull and connect data across systems in real time
• Treat data as an enterprise resource, getting cross functional alignment is often the hardest and most important step

Timestamped Highlights
• 00:46 Dave's origin story, from early loyalty programs to Segment of One marketing
• 03:33 The high road and low road of AI, growth experiences vs spam at scale
• 06:51 Where to start, map the biggest customer frustrations, then build use cases from there
• 16:31 The data myth, why you may not need a single mega database to get value from AI
• 21:31 Data as a leadership problem, shifting from functional ownership to enterprise ownership
• 25:14 Strategy that actually sticks, balancing bottom up automation with top down customer led direction

A line worth stealing
“Use those efficiencies to invest in growth.”

Pro Tips you can apply this week
• List the top five customer frustrations in your category, pick one and design an AI powered fix that removes a compromise
• Audit your data reality, identify where the same customer facts live in multiple places, then decide what must be unified first
• Run a simple test and learn loop, create multiple variations of one experience, measure what works, and keep iterating
• Put strategy on the calendar, make room for a recurring discussion that is not just metrics and cost cutting

Call to Action
If this episode helped you think differently about AI and growth, follow the show, leave a quick rating, and share it with one operator who is building product, data, or customer experience right now.

Amanda Kahlow, CEO and founder of 1Mind, joins Amir to break down what AI changes in modern sales and go to market, and what it does not. If you lead revenue, product, or growth, this is a practical look at where AI creates leverage today, where humans still matter, and how teams actually adopt it without chaos.

Amanda shares how “go to market superhumans” can handle everything from early buyer conversations to demos, sales engineering support, and customer success. They also dig into trust, hallucinations, and why the bar for AI feels higher than the bar for people.

Key takeaways
• Most buyers want answers early, without the pressure that comes with talking to a salesperson
• AI can remove friction by turning static content into a two way conversation that helps buyers move faster
• The hardest part of adoption is not capability, it is change management and trust inside the team
• Humans still shine in relationship and nuance, but AI can outperform on recall, depth, and real time access to the right info
• As AI levels the selling experience, product quality matters more, and the best product has a clearer path to winning

Timestamped highlights
00:31 What 1Mind builds, and what “go to market superhumans” actually do across the full buyer journey
02:00 The buyer lens, why early conversations matter, and how AI gives control back to the buyer
06:14 Why the SDR experience is frustrating for buyers, and where AI can improve both sides
09:42 Change management in the real world, why “everyone build an agent” gets messy fast
13:04 Why “swivel chair” AI fails, and what real time help should look like in live conversations
15:52 Hallucinations and trust, plus the blunt question every leader should ask about human error
22:26 Competitive advantage today, and why adoption eventually pushes markets toward “best product wins”

A line worth sharing
“Do your humans hallucinate, and how often do they do it?”

Pro tips you can use this week
• Start with low stakes usage, bring AI into calls quietly, then ask it for a summary and what you missed
• Build adoption top down, define what good looks like, otherwise you get a pile of similar agents and no clarity
• Focus AI on what it does best first, recall, context, and instant answers, then expand into workflow and process later

Call to action
If this episode sparked ideas for your sales team or your product led funnel, follow the show so you do not miss the next one. Share it with one revenue leader who is trying to modernize their go to market motion, and connect with Amir on LinkedIn for more clips and operator level takes.

What if the best people on your investing team are still in college? Peter Harris, Partner at University Growth Fund, breaks down how they run a roughly 100 million dollar venture fund with 50 to 60 students doing real diligence, real founder calls, and real deal work.

You will hear how their student led model stays disciplined with checks and balances, why repeat games matter in venture and in business, and how this approach creates a flywheel that helps founders, investors, and the next generation of operators win together.

Key Takeaways
• Student led does not mean unstructured, the process is built around clear stages, data room access, investment memos, student votes, and an advisory style investment committee, with final fiduciary responsibility held by the partners
• Real autonomy is the unlock, when interns are trusted with meaningful work, the best ones level up fast and start leading teams, not just supporting them
• The goal is win win win outcomes, founders get capital plus a high effort support network, investors get disciplined underwriting, students get experience that compounds into career leverage
• Repeat games beat short term incentives, the alumni network becomes a long term advantage, bringing the fund into high quality opportunities years later
• Mistakes are inevitable, the difference is containment and systems, avoiding errors big enough to break trust, then building process improvements so they do not repeat

Timestamped Highlights
00:32 A 100 million dollar fund powered by 50 to 60 students, and what empowered really means
01:43 The decision path, from founder screen to student memo to student vote to the advisory investment committee
06:44 Why most venture internships underdeliver, and how longer tenures change outcomes
10:37 Repeat games and the trust flywheel, how former students now pull the fund into top tier deals
13:55 What happens when something goes wrong, damage control, learning loops, and confidentiality as a core discipline
24:39 The bigger vision, expanding beyond venture into additional asset classes to create more student opportunities

A line worth stealing
“If you give people real autonomy, they'll surprise you with what they do.”

Pro Tips
• If you are building an internship program, start by deciding what real ownership means, then build guardrails around it, not the other way around
• Treat trust like an asset, design your process so every stakeholder wants to work with you again

Call to Action
If you enjoyed this one, follow The Tech Trek and share it with a founder, operator, or student who cares about building real advantage through talent and process.

Yulun Wang, executive chairman and co-founder at Sovato Health, joins Amir Bormand to unpack the next wave after telemedicine, procedural care at a distance. If you have ever wondered what it would take for a top surgeon to operate without being in the same room, this conversation gets practical fast, from the real bottlenecks inside operating rooms to the health system changes required to make remote robotics mainstream.

Key takeaways
• Better care can actually cost less when the right expertise reaches the right patient at the right time
• Telemedicine is already normalized, which sets the stage for faster adoption of remote procedures once infrastructure and workflows catch up
• Surgical robots already have two sides, the surgeon console and the patient side, today connected by a short cable, the leap is making that connection work reliably across hundreds or thousands of miles
• Volume drives proficiency, the outcomes gap between high volume specialists and low volume settings is one of the biggest reasons access matters
• Operating rooms spend more than half their time on steps around surgery, which creates room to dramatically increase surgeon throughput when workflows are redesigned

Timestamped highlights
• 00:42 What Sovato Health is building, bringing procedural expertise to patients without requiring travel
• 02:10 The early days of surgical robotics and the transatlantic gallbladder surgery on September 7, 2001
• 05:30 The counterintuitive idea, higher quality care can reduce total cost in healthcare
• 10:27 What actually changes for patients, local hospitals stay the destination, expertise becomes the thing that travels
• 14:57 Why repetition matters, the first question patients ask is still the right one
• 17:53 Inside the operating room schedule, where time is really spent and why productivity can jump

A line that sticks
“Healthcare is different, higher quality, if done right, costs less.”

Practical angles you can steal
• If you are building in regulated industries, adoption is rarely about the tech alone, it is about trust, workflows, and incentives
• If you sell into health systems, position the value around system level outcomes, access, quality, and margin improvement, not just novelty
• If you are designing new workflows, look for the hidden capacity, the biggest gains often sit outside the core task

Call to action
If you want more conversations like this at the intersection of tech, systems, and real world impact, follow The Tech Trek on Apple Podcasts and Spotify.

Moiz Kohari, VP of Enterprise AI and Data Intelligence at DDN, breaks down what it actually takes to get AI into production and keep it there. If your org is stuck in pilot mode, this conversation will help you spot the real blockers, from trust and hallucinations to data architecture and GPU bottlenecks.

Key takeaways
• GenAI success in the enterprise is less about the demo and more about trust, accuracy, and knowing when the system should say “I don't know.”
• “Operationalizing” usually fails at the handoff, when humans stay permanently in the loop and the business never captures the full benefit.
• Data architecture is the multiplier. If your data is siloed, slow, or hard to access safely, your AI roadmap stalls, no matter how good your models are.
• GPU spend is only worth it if your pipelines can feed the GPUs fast enough. A lot of teams are IO bound, so utilization stays low and budgets get burned.
• The real win is better decisions, faster. Moving from end of day batch thinking to intraday intelligence can change risk, margin, and response time in major ways.

Timestamped highlights
00:35 What DDN does, and why data velocity matters when GPUs are the pricey line item
02:12 AI vs GenAI in the enterprise, and why “taking the human out” is where value shows up
08:43 Hallucinations, trust, and why “always answering” creates real production risk
12:00 What teams do with the speed gains, and why faster delivery shifts you toward harder problems
12:58 From hours to minutes, how GPU acceleration changes intraday risk and decision making in finance
20:16 Data architecture choices, POSIX vs object storage, and why your IO layer can make or break AI readiness

A line worth stealing
“Speed is great, but trust is the frontier. If your system can't admit what it doesn't know, production is where the project stops.”

Pro tips you can apply this week
• Pick one workflow where the output can be checked quickly, then design the path from pilot to production up front, including who approves what and how exceptions get handled.
• Audit your bottleneck before you buy more compute. If your GPUs are waiting on data, fix storage, networking, and pipeline throughput first.
• Build “confidence behavior” into the system. Decide when it should answer, when it should cite, and when it should escalate to a human.

Call to action
If you got value from this one, follow the show and turn on notifications so you do not miss the next episode.

New leaders face a choice fast. Do you adapt to the organization you inherit, or reshape it around the way you lead?

In this conversation, Amir sits down with Gian Perrone, engineering leader at Nav, to unpack how org design really works in the first 30 to 120 days, and how to drive change without spiking anxiety or losing trust.

You will hear how Gian treats leadership as triage, why “listen and learn” is rarely passive, and what separates a thoughtful reorg from one that feels chaotic.

Key takeaways
• Leaders almost always arrive with hypotheses, the real work is testing them without rushing to force a playbook
• A reorg is not automatically bad, perception turns negative when the why is unclear and people feel unsafe
• Over communicating helps, but thinking out loud too often can create noise, a structured comms plan keeps change steady
• A simple way to spot a collaborative culture is to disagree in the interview and see how they respond
• Managers are the front line in change, set clear expectations so teams hear a consistent story about what is changing and why

Timestamped highlights
00:01 What Nav does, and the real question behind org design for new leaders
01:59 Why “first 90 days” is usually triage, not passive observation
04:14 The reorg stopwatch, and why structure reflects your worldview
08:36 How to communicate change without destabilizing teams
12:54 A practical interview move to test whether a company truly collaborates
17:03 The manager layer, how Gian sets expectations so change lands well

A line worth repeating
“If you arrive and something is on fire, you are going to fix it.”

A few practical moves worth stealing
• When you are new, write down your hypotheses early, then use real signals to confirm or kill them
• Float a change as a real idea first, gather feedback, then come back with details before you finalize
• Create a simple comms map of who hears what, when, and from whom, then follow it
• Be matter of fact about changes, teams often mirror the tone you set

Call to action
If this episode helped you think more clearly about leadership and org design, follow the show and share it with one operator who is navigating change right now.