In this episode of the Future Conceived podcast, we celebrate the remarkable achievement of Dr. Diana Monsivais, this year's recipient of the prestigious SSR Virendra B. Mahesh New Investigator Award. This award recognizes outstanding research completed and published within twelve years of earning a doctoral degree, signifying scientific excellence and dedication to advancing reproductive health. Dr. Monsivais is known for her innovative work that deepens our understanding of uterine biology and holds important translational implications for women's health.
Speaking from a President's View with Cash Mahesh. Cash Mahesh is a president with a proven track record of accelerating global business growth through strategic initiatives. Given his engineering background, I wanted to discuss with Cash how, as a president, he views the effectiveness of technical employees communicating with nontechnical audiences. I'm sure he'll have valuable insights. To learn more about Cash, visit https://www.linkedin.com/in/prakashmahesh/
TEACH THE GEEK (http://teachthegeek.com)
Prefer video? Visit http://youtube.teachthegeek.com
Get Public Speaking Tips for STEM Professionals at http://teachthegeek.com/tips
Bhavesh Mehta and Mahesh Kumar—senior technology leaders at Uber and co-authors of the practical guide AI-First Leader—discuss the lessons learned from Nova Bridge's collapse and share best practices for mitigating hidden risks that can derail ambitious AI projects. They also share specific ways that small businesses and Fortune 500 companies can embrace AI from a place of empowerment rather than fear.
Key Takeaways:
- Ways to align C-suite leaders and engineering teams around a unified AI roadmap
- The most underestimated human factor that determines whether an AI transformation succeeds
- How overlooked vulnerabilities, insufficient oversight, and the rush to deploy led to the unexpected fallout of the Nova Bridge Chat
- The unforeseen dangers lurking within AI systems
Guest Bio:
Bhavesh Mehta is a technology leader and co-author of AI-First Leader, a practical guide for executives navigating enterprise AI adoption. With over 20 years of experience across Cisco, Uber, and VMware, Bhavesh has architected large-scale conversational and generative AI systems that support millions of users daily. His work bridges deep technical design and executive strategy, helping organizations deploy AI responsibly and at scale.
Mahesh Kumar is a seasoned product executive and co-author of AI-First Leader, a practical guide for executives navigating enterprise AI adoption. With over 20 years of experience across Uber, Veritas, and VMware, Mahesh has led the development of multi-billion-dollar product portfolios and enterprise AI strategies. Known for bridging deep technology with strategic vision, he helps organizations move from experimentation to large-scale AI transformation. His work focuses on responsible innovation, combining business storytelling with technical fluency to make AI both accessible and actionable for leaders.
---------------------------------------------------------------------------------------- About this Show: The Brave Technologist is here to shed light on the opportunities and challenges of emerging tech, making it digestible, less scary, and more approachable for all. Join us as we embark on a mission to demystify artificial intelligence, challenge the status quo, and empower everyday people to embrace the digital revolution. Whether you're a tech enthusiast, a curious mind, or an industry professional, this podcast invites you to join the conversation and explore the future of AI together. The Brave Technologist Podcast is hosted by Luke Mulks, VP Business Operations at Brave Software—makers of the privacy-respecting Brave browser and search engine, and now powering AI everywhere with the Brave Search API. Music by: Ari Dvorin. Produced by: Sam Laliberte.
Dr. JP Novin welcomes Dr. Mahesh Daas, President of Boston Architectural College, to the podcast to discuss AI's impact on higher education and the future workforce. Dr. Novin highlights Plexuss's role in enrollment marketing and shares his pursuit of a doctorate in higher education. Dr. Daas provides an overview of Boston Architectural College's 137-year history and its distinctive work-integrated educational model, along with the institution's scholarly focus on robotics, AI, and design computation, including works such as Towards Robotic Architecture. Their conversation frames AI as the fourth industrial revolution, using the SMR framework, and explores how creative and cognitive work is increasingly vulnerable, while human excellence and the bespoke, relationship-driven nature of architecture continue to hold value. Dr. Daas emphasizes that education must prioritize “learning to learn” in order to adapt to collaboration with nonhuman intelligences and that experimentation through “sandboxes” is essential before regulation. He further stresses the responsibility of leadership to ensure that rapid AI adoption does not deepen social inequity by creating a divide between “humans with AI” and “humans without AI,” urging thoughtful, critical engagement with AI to empower individuals and society during this period of disruption.
Daniel is joined by Mahesh Tirupattur, chief executive officer at Analog Bits. Mahesh leads strategic planning to develop and implement Analog Bits' vision and mission of enabling the silicon digital world with interfacing IP to the analog world. Additionally, Mahesh oversees all aspects of Analog Bits' operations to ensure…
In this episode of the TriMetric Roadmap Podcast, Scott and Jeff sit down with Mahesh M. Thakur, CEO of Decisive AI, to discuss how business owners can move from AI hype to AI results. Mahesh—one of the world's few Master Certified Stakeholder-Centered Coaches (MCC) and a former executive at Microsoft, Amazon, Intuit, and GoDaddy—shares how leaders can align their teams on a “True North,” integrate AI intelligently, and transform culture, execution, and bottom-line growth. You'll hear how 92% of AI investments currently fail, what separates the 8% that succeed, and how smaller companies can use AI to level the playing field with billion-dollar enterprises.
----
About Mahesh M. Thakur
Mahesh M. Thakur is the Founder and CEO of Decisive AI, where he helps CEOs and leadership teams align on their True North and turn AI into ROI™. A former executive at Microsoft, Amazon, Intuit, and GoDaddy, Mahesh is among the world's top 0.1% of Master Certified Stakeholder-Centered Coaches (MCC). He's advised leaders at JPMorgan Chase, Meta, Google, Walmart, PayPal, and John Deere, helping them achieve measurable transformations—from doubling growth rates to cutting decision cycles by 40%. Connect: LinkedIn | Instagram | X (Twitter)
----
1. What does “AI to ROI” mean? AI to ROI™ is Mahesh's framework for turning artificial intelligence investments into tangible business outcomes—revenue growth, cost savings, and cultural transformation. It blends technology strategy with human alignment.
2. Why do 90% of AI projects fail? Most fail because they lack a clear framework, leadership alignment, or measurable objectives. Companies often jump in due to FOMO, invest heavily, and never achieve integration or adoption that creates ROI.
3. Who should be investing in AI? Every business—from tech firms to law practices to construction companies—can benefit from AI. It's no longer limited to Big Tech; mid-market companies ($10M–$100M) can now compete on equal footing using accessible tools.
4. What is a “Test + Learn™” culture? It's a structured approach that allows organizations to experiment rapidly, measure outcomes, and scale only what works—turning innovation into a repeatable discipline.
5. How can small and mid-sized companies use AI effectively? Start by identifying one or two workflows where automation saves time or improves customer outcomes. Build simple pilots, collect data, and align leadership around shared metrics before scaling.
6. What is the “True North” concept in leadership? True North represents the organization's unifying direction—its mission, values, and goals. Mahesh helps CEOs and teams align on this so that AI and strategy serve the same purpose.
7. How does AI connect to leadership and faith? Mahesh believes great leaders must balance faith and data. Without alignment, AI becomes fear-driven. True mastery integrates wisdom, courage, and belief that technology should serve humanity, not replace it.
8. What's next for Decisive AI and Business Freedom Advisors? Scott and Mahesh announced upcoming collaborations to help business owners design both a Business Roadmap and an AI Roadmap for 2026. Follow BusinessFreedomAdvisors.com for updates.
-----
Links & Resources
Watch Full Episode on Fathom
Connect with Mahesh on LinkedIn
Learn More About Business Freedom Advisors
Follow Scott Landis on LinkedIn
Subscribe to the TriMetric Roadmap Podcast
In this episode, Rachel visits with Dr. Mahesh Nair, an associate professor of Meat Science at Colorado State University. We bonded over our mutual frustration with the ad campaigns that crop up on The Facebook comparing a package of grocery store packaged ground beef with a vacuum-packed, direct-to-consumer-type pound of ground, and the accompanying claim that "their" beef is better than what's available in the store. It's crap, and Mahesh is going to tell you how to shut that argument down in two words. He's my fave. And yes, there will need to be tee shirts. This episode is brought to you by the generous support of Adam Rose at Iliff Custom Cabinetry. Find him at www.iliffcustomcabinetry.com or on The Facebook at https://www.facebook.com/icucab/. If you see Adam, please let him know you heard about him here. Check out our cows on the Anywhere Cam site at https://anywhere.cam/. Scroll down to the Hereford cows and tada! As always, check your cows, check your fields, and check your neighbors.
VARANASI to the WORLD Reaction! Telugu | Mahesh Babu | Priyanka Chopra | SS Rajamouli | Cinemondo! Kathy and Mark react to Varanasi To the World! Varanasi (stylised as Vāranāsi) is an upcoming Indian Telugu-language action-adventure film directed by S. S. Rajamouli, who co-wrote the screenplay with V. Vijayendra Prasad. The film stars Mahesh Babu, Priyanka Chopra and Prithviraj Sukumaran. Rajamouli conceived the film as a globetrotting action adventure rooted in Indian cultural themes, drawing inspiration from the structure and emotional tone of classic adventure cinema. #varanasi #varanasitrailer #varanasitrailerreaction #varanasitotheworld #ssrajamouli #maheshbabu #priyankachopra #mmkeeravaani #trailer #trailerreaction #globetrotter #globetrotterevent
[1:55] Why every organisation now needs a Chief AI Officer (CAIO)
[1:55] Why companies fail at AI adoption (“boiling the ocean”)
[5:25] How to get AI ROI fast using a test-and-learn method
[8:46] Why alignment and culture matter more than technology
[15:02] Why internal data + internal models = long-term competitive advantage
[4:41] How AI exposes leaders with fixed mindsets
[14:46] How Silicon Valley thinking influences AI leadership
[9:55] Why RPE (revenue per employee) is the new metric
[5:25] Why AI is a “force multiplier” for leaders
Learn more about your ad choices. Visit megaphone.fm/adchoices
Episode-27. Hello Samsad 2082-08-05 Final {Mahesh Bartaula}
Follow Proof of Coverage Media: https://x.com/Proof_Coverage
In this episode Connor & Mahesh sit down with Akshay Poshatwar & Nishikant Bahalkar, founders of Qiro, to discuss their ground-breaking platform aimed at tackling adverse selection through a unique credit risk underwriting protocol. Akshay shares his journey from the FinTech industry in India, highlighting the potential for global capital access through decentralized finance. Nishikant adds his insights on the evolution of blockchain and DeFi, emphasizing the benefits and challenges in Real World Assets (RWA) lending. The discussion includes how Qiro aims to bridge TradFi and DeFi, offering solutions for different asset classes and the promising future of decentralized infrastructure networks (DePIN). The episode wraps up with a look towards the company's rapid growth and their search for strategic hires, particularly in business development within the U.S. and Europe.
Timestamps:
00:00 - Introduction
00:46 - Meet the Qiro Team
01:53 - Founders' Backgrounds and Journey
09:35 - Understanding Qiro's Unique Platform
13:30 - Challenges and Innovations in RWA Lending
24:36 - DePIN Networks and Future Prospects
31:13 - Hiring and Growth at Qiro
33:52 - Conclusion and Contact Information
Disclaimer: The hosts and the firms they represent may hold stakes in the companies mentioned in this podcast. None of this is financial advice.
In this episode of Take-Away with Sam Oches, Sam talks with Mahesh Sadarangani, the CEO of Philz Coffee, an artisanal coffee brand that, for more than 20 years, has committed itself to serving hand-crafted coffee using sustainably sourced beans from around the world. Mahesh arrived at Philz from Wingstop in 2021, and he set out to modernize and optimize the chain while protecting its quality commitment. He joined the podcast to talk about how Philz manages to obsess over its high-quality coffee even as it adds tools like drive-thrus and loyalty programs. In this conversation, you'll find out why:
- The coffee category is highly fragmented — which is good news for all of us
- Your brand's original values are probably still relevant for your path forward
- Restaurant brands need multiple formats to fulfill their potential
- Even saturated categories are ripe for something outside the box
- Your experience defines your relationship with your guest; don't mess with it too much
- Human connection is alive and well in the restaurant industry
Have feedback or ideas for Take-Away? Email Sam at sam.oches@informa.com.
Follow Proof of Coverage Media: https://x.com/Proof_Coverage
Santiago Santos, Jason Badeaux, Mahesh Ramakrishnan, and Connor Lovely explore the intersection of politics, technology, and the future of energy infrastructure. The conversation begins with reflections on New York politics and the power of authentic storytelling before shifting to the energy crisis, rising electricity prices, and the misalignment of traditional utility models. Jason discusses Daylight's mission to decentralize energy markets, empowering individuals to participate in energy production and finance through distributed energy resources and tokenized financial products. The hosts draw parallels between marathon running and long-term commitment, contrasting this with modern shortcut culture, while examining how AI, electric vehicles, and DeFi are reshaping energy demand and financing. The episode offers insightful commentary on innovation, capital markets, and the future of sustainable power.
Timestamps:
00:00 - Introduction
01:45 - New York City Challenges
02:30 - The American Dream Debate
04:55 - New York Marathon
07:51 - DePIN and Daylight Announcement
08:25 - Electricity Pricing Issues
11:01 - Infrastructure Challenges
12:36 - Market Dynamics and Demand Growth
13:53 - Daylight App Features
15:32 - Distributed Energy Resources
17:42 - Future of Energy Prices
19:22 - Australia's Solar Market Success
22:55 - Customer Uptake and Market Expansion
25:46 - Tokenized Yield Products
27:43 - Capital Market Dynamics
29:27 - Duration Risk Management
31:09 - Quality of Underwriting in Solar Loans
33:38 - Daylight Pitch to Traditional Allocators
35:09 - Market Size and Opportunities
40:45 - Potential Collaboration with Base Power
47:23 - Challenges and Risks in the Token Market
50:44 - Speculative Capital & Revenue
54:08 - Helium's Market Position and Comparisons
56:35 - On-Chain Revenue and Governance Dynamics
Disclaimer: The hosts and the firms they represent may hold stakes in the companies mentioned in this podcast. None of this is financial advice.
In today's Cloud Wars Live, Mahesh Thiagarajan, EVP, Oracle Cloud Infrastructure, speaks with Bob Evans about Oracle's bold strategy to lead in the AI infrastructure race. He details how Oracle is scaling zeta-level compute, launching a 1.5 gigawatt GPU campus, and engineering full-stack solutions that combine bare-metal hardware, custom networking, and advanced software. With OCI's rapid innovation and massive scale, Oracle is positioning itself as a serious challenger to cloud incumbents like AWS, Microsoft, and Google Cloud.
Scaling AI at Oracle
The Big Themes:
Enterprise Data Continuity and Cloud Strategy: Enterprises rely on mission-critical data, such as databases, and migrating that data to the cloud remains a major strategic priority. The challenge isn't simply moving data: it's building a cloud platform that delivers real value to customers. As Thiagarajan and his team began developing Oracle Cloud Infrastructure to support these needs, they focused on core fundamentals: performance, cost efficiency, and security. This illustrates that for today's cloud providers, success isn't just about innovative features, but about engineering deep, resilient infrastructure.
Customer-First Execution: Thiagarajan repeatedly states there is no perfect playbook. The approach: wake up every day, talk to partners, figure out what customers need, and execute. This mindset emphasises responsiveness and pragmatism. Given the rapid pace of change in cloud and AI, large providers cannot wait for general frameworks to emerge. They must iterate, partner, and build in real time.
“Late” As An Advantage: Thiagarajan observes that arriving in cloud later gave Oracle the ability to learn from first movers' mistakes and benefit from newer hardware generations without legacy baggage. While first movers often carry large legacy systems, later entrants can design for new architectures (bare-metal, custom networking) from the ground up. That doesn't guarantee success, but it presents an advantage if leveraged.
The Big Quote: “You earn trust with [partners] by getting their products out to market fast into the hands of the customers, because that really translates to them, the end customer, being happy."
More from Mahesh Thiagarajan and Oracle: Connect with Mahesh Thiagarajan on LinkedIn or take a look at his Oracle blog posts. Visit Cloud Wars for more.
Negotiate Anything: Negotiation | Persuasion | Influence | Sales | Leadership | Conflict Management
Mahesh Guruswamy — Chief Product and Technology Officer at Kickstarter and author of How to Deliver Bad News and Get Away with It — sits down with Kwame Christian to reveal the emotional side of leadership no one talks about. Buy the Book: How to Deliver Bad News and Get Away with It: A Manager's Guide by Mahesh Guruswamy. From earning $40,000 in his first tech job to leading global teams, Mahesh learned that success isn't about titles or wealth — it's about courage, gratitude, and making the right call even when it hurts. In this powerful conversation, you'll learn:
- Why the hardest decisions are often the right ones
- How to “roll the dice” and take bold career risks
- The secret to staying grounded in gratitude and perspective
- Why authenticity—not ambition—is the real mark of leadership
If you've ever wondered why success can still feel empty, or why doing the right thing sometimes hurts the most — this episode will give you the clarity you've been looking for.
Mahesh Thakur, CEO & C-Suite Coach, shares insights on leadership alignment, true north, and transformative coaching from his journey at Microsoft, Amazon, and beyond.
00:33 - About Mahesh M. Thakur
Mahesh M. Thakur, CEO and C-Suite Coach.
Rent To Retirement: Building Financial Independence Through Turnkey Real Estate Investing
This episode is sponsored by… IMN - Single Family Rental West Forum. SFR West returns to Arizona! Reconnect with the SFR community through IMN's signature mix of dynamic panels, insightful speakers, and high-impact networking. Gain the perspective and connections to thrive in a changing landscape—save 20% with code REU2333RTR. https://tinyurl.com/SFR-West-RTR
Struggling with today's housing affordability crisis? In this episode of the Rent To Retirement Podcast, host Adam Schroeder sits down with Mahesh Shetty, Founder & CEO of Ely Homes, to discuss innovative ways lease-to-own programs are helping renters achieve homeownership. Mahesh shares his journey from investing in New York City hotels to building Ely Homes into a platform that not only provides quality rental properties but also creates pathways for families to become homeowners. He explains how investors can benefit, how tenants can transition into buyers, and why this strategy is key in today's challenging housing market.
⏱ Episode Highlights
00:00 – Introduction to Mahesh Shetty
02:00 – Lessons from early NYC hotel investments
07:15 – Pivoting into single-family rentals during the 2008 downturn
12:00 – The affordability crisis & why lease purchase matters
14:45 – Helping tenants improve credit & access down payment assistance
18:20 – Success rates & building trust with renters
20:00 – Raising capital & networking with investors
24:30 – Speaking at IMN - Single Family Rental West Forum & industry thought leadership
26:40 – Final thoughts: Building homes & futures for families
If you're looking to expand your real estate investing strategy while making a positive impact on communities, this episode is for you!
Today, we're talking to Mahesh Guruswamy, CPTO at Kickstarter. We discuss how to effectively deliver bad news in corporate settings, why CTOs can no longer be the "nice guy" in today's business environment, and how AI tools are reshaping both personal and professional life. All of this right here, right now, on the Modern CTO Podcast! To learn more about Mahesh and pick up a copy of the book…
How can leaders navigate the messy middle of management, especially when it comes to delivering difficult news without damaging relationships or morale? In this episode, Kevin talks with Mahesh Guruswamy about the real-world challenges leaders face when communicating unwelcome information, from missed deadlines to ethical violations. Mahesh shares a thoughtful approach to raising the temperature of conversations gradually, and when situations call for urgent, high-stakes responses. They also discuss the difference between technical and adaptive feedback, the importance of intentional communication, and the human side of letting team members go.
Listen For
00:00 Introduction
02:02 Meet Mahesh Guruswamy
06:03 The Messy Middle of Leadership
06:24 When Should Leaders Deliver Bad News
07:17 Listening to Your Intuition as a Leader
08:06 Raising the Temperature Slowly
10:24 When to Start at a Higher Temperature
12:04 When Urgency or Ethics Demand Immediate Action
13:04 Communicating the Stakes with Your Team
13:46 Writing as a Tool for Delivering Difficult News
14:51 Lessons from Amazon on Written Communication
16:06 Documenting Over Slide Decks for Clarity
17:17 Reviewing Recordings to Improve Communication
18:45 The Power of Leadership Language
21:11 Balancing Policy and Humanity in Difficult Conversations
22:09 Helping Team Members Find Better Fit Elsewhere
22:58 Avoiding Emotional Delivery of Feedback
23:59 Two Types of Feedback: Technical and Adaptive
25:42 Giving Feedback to Your Boss
26:56 Should You Be a Manager? Key Questions to Ask
28:28 Can You Succeed Without External Validation
28:55 Giving Credit to the Team, Not Yourself
30:31 Mahesh's Personal Interests
32:51 Final Thoughts and Call to Action
Mahesh's Story: Mahesh Guruswamy is the author of How to Deliver Bad News and Get Away with It: A Manager's Guide. He is a seasoned product development executive who has been in the software development space for over twenty years and has managed teams of varying sizes for over a decade. He is currently the chief product and technology officer at Kickstarter. Before that, he ran product development teams at Mosaic, Kajabi, and Smartsheet. Mahesh caught the writing bug from his favorite author, Stephen King. He started out writing short stories and eventually discovered that long-form writing was a great medium to share information with product development teams. Mahesh is passionate about mentoring others, especially folks who are interested in becoming a people manager and newer managers who are just getting going.
This Episode is brought to you by...
Flexible Leadership is every leader's guide to greater success in a world of increasing complexity and chaos.
Book Recommendations
How to Deliver Bad News and Get Away With It: A Manager's Guide by Mahesh Guruswamy
Leadership on the Line: Staying Alive through the Dangers of Leading by Ronald A. Heifetz and Marty Linsky
Never Flinch by Stephen King
Like this?
Communicate Like a Leader with Dianna Booher
Leadership, Communication and Credibility with Jack Modzelewski
How to Communicate Effectively with Anyone, Anywhere with Raúl Sánchez and Dan Bullock
How to Communicate More Effectively and Lead a Better Life with Michael Hoeppner
Join Our Community
If you want to view our live podcast episodes, hear about new releases, or chat with others who enjoy this podcast, join one of our communities below. Join the Facebook Group. Join the LinkedIn Group.
Leave a Review
If you liked this conversation, we'd be thrilled if you'd let others know by leaving a review on Apple Podcasts. Here's a quick guide for posting a review. Review on Apple: https://remarkablepodcast.com/itunes
Podcast Better! Sign up with Libsyn and get up to 2 months free! Use promo code: RLP
In this episode of the Prime Venture Partners Podcast, Sanjay Swamy hosts Mahesh Joshi, Head of Asia Private Equity at BlueOrchard and author of H.I.T. Investing. He shares powerful insights on how companies across India and Asia are solving critical problems like:
- Financial inclusion for underserved Kirana stores
- Gender equity in small business finance
- Climate resilience through tech and EV adoption
- Leveraging UPI, DPI, and IndiaStack to scale affordable services
What’s the next era of network management and operations? Total Network Operations talks to Mahesh Jethanandani, Chair of NETCONF Working Group and Distinguished Engineer at Arrcus. Mahesh describes a workshop from December of 2024 that sought to investigate the past, present, and future of network management and operations. He talks about the IETF’s role in...
This week, Monika unpacks SEBI's case against global trading firm Jane Street, accused of manipulating India's markets through high-speed trading and deep capital. With alleged unfair profits of over ₹36,500 crore, the case raises big questions around market fairness and retail investor safety. Monika explains what happened, why it matters, and what lessons investors should take away. The core message: avoid speculation, understand the risks of F&O, and stay focused on long-term investing. SEBI's action is a positive move, but individual investors must remain cautious and grounded.She also breaks down how the F&O (futures and options) market works—what these derivative instruments are, what they were designed for, and why they are high-risk products that magnify both gains and losses. Originally meant for hedging, F&O today is often used by retail traders for speculation—despite the odds being heavily stacked against them. Monika cautions listeners that the data is clear: the vast majority of individual F&O traders lose money.In listener questions, Abhishek asks whether he should take a ₹40 lakh loan to buy a ₹75–85 lakh residential plot despite personal reservations and strong pressure from family. He also wants to know whether redeeming well-performing mutual funds to reinvest elsewhere makes sense or if it hurts compounding. Ayan asks about the nominee claim process when mutual fund units are transferred after the original investor's death—how it works, what documents are needed, and how smooth the process typically is. 
Mahesh, a salaried professional with a balanced financial setup, wants to know whether he should increase his home loan EMI or mutual fund SIPs after a raise.
Chapters:
(00:00 – 00:00) Lessons from Jane Street and Market Manipulation
(00:00 – 00:00) Real Estate vs Financial Assets: Should You Buy That Plot?
(00:00 – 00:00) Redeeming and Reinvesting Mutual Fund Profits
(00:00 – 00:00) Mutual Fund Transmission After Death
(00:00 – 00:00) Home Loan vs Mutual Fund SIPs: Where to Allocate Extra Income
https://www.sebi.gov.in/enforcement/orders/jul-2025/interim-order-in-the-matter-of-index-manipulation-by-jane-street-group_95040.html
If you have financial questions that you'd like answers for, please email us at mailme@monikahalan.com
Monika's book on basic money management: https://www.monikahalan.com/lets-talk-money-english/
Monika's book on mutual funds: https://www.monikahalan.com/lets-talk-mutual-funds/
Monika's workbook on recording your financial life: https://www.monikahalan.com/lets-talk-legacy/
Calculators: https://investor.sebi.gov.in/calculators/index.html
You can find Monika on her social media @monikahalan.
Twitter @MonikaHalan
Instagram @MonikaHalan
Facebook @MonikaHalan
LinkedIn @MonikaHalan
Production House: www.inoutcreatives.com
Production Assistant: Anshika Gogoi
Mahesh Kafle and Asmita Adhikari are a musical duo known for their heartfelt folk-pop hits and vocal chemistry. Mahesh, a former journalist turned singer-composer, gained fame with viral songs like Nacha Firiri and Maya Birani, while Asmita rose to prominence through Nepal Idol and built a fanbase through playback singing and global tours. They married in 2025, and their creative and personal bond has made them one of the most admired pairs in Nepali music.
Zero Is the New Hero: Inside the Global Network for Zero Can businesses truly achieve net zero emissions by 2030—or even sooner? On this episode of The Samuele Tini Show, host Samuele Tini welcomes sustainability powerhouse Mahesh Ramanujam, former president and CEO of the U.S. Green Building Council (USGBC) and founder of the Global Network for Zero. From his early life in India, where sustainability meant survival, Mahesh has led a global transformation in green building practices through the renowned LEED certification. Now, he's going further, leading an ambitious global movement aiming for total decarbonisation. In this insightful episode, you'll discover: Why Net Zero is no longer just an ambition, but a necessity for businesses everywhere. How certification can create transparency, build consumer trust, and spur competitors into action. The critical role of technology, particularly AI, in rapidly scaling sustainable solutions. Powerful examples of businesses already achieving remarkable progress toward zero emissions. Mahesh delivers an optimistic yet pragmatic vision: sustainability isn't just about protecting our planet—it's about driving innovation, growth, and lasting value for all.
In this Pocket Sized Pep Talks, you'll learn:
Why avoiding discomfort does more damage than the truth ever could.
How to deliver tough messages with empathy, not ego.
The mindset shift that turns hard talks into moments of leadership.
The particular moment or conversation that inspired Mahesh to write this book.
The title is bold — How to Deliver Bad News and Get Away with It. Mahesh explains why "getting away with it" was part of the title and content.
Delivering bad news is part of a manager's job, not a failure. Mahesh discusses how leaders can shift their mindset to see these moments as opportunities rather than setbacks.
How detecting early warning signs before bad news becomes unavoidable is critical, and what some of those warning signs are.
If you left with just one thing, the one idea to walk away with and implement.
The impact Stephen King's storytelling had on Mahesh's writing style.
To learn more about this guest:
Guest email: mahesh.gkumar@gmail.com
Guest website: maheshguruswamy.com
Social media:
https://www.linkedin.com/in/maheshguruswamy/
https://x.com/mahesh_gkumar
https://maheshguruswamy.substack.com/
Mahesh Guruswamy is a seasoned product development executive who has been in the software development space for over twenty years and has managed teams of varying sizes for over a decade. He is currently the chief technology officer at Kickstarter. Before that, he was an executive running product development teams at Mosaic, Kajabi, and Smartsheet.Listen NOW as Mahesh reveals principles from his book, "How to Deliver Bad News and Get Away With It".
On this collaborative episode of Mahesh the Geek, Mahesh is joined by Dr. Read Hayes, executive director of the Loss Prevention Research Council (LPRC), as they explore the evolution of loss prevention, emphasizing the importance of prevention over response in public safety. They discuss the integration of technology, such as AI and body-worn cameras, in enhancing crime detection and prevention. The dialogue also highlights the significance of collaboration between retailers and law enforcement, the challenges of data sharing and the behavioral cues that can indicate potential criminal activity. Mahesh and Dr. Hayes also discuss insights into future trends in crime prevention and the role of technology in shaping these developments. Read Hayes, PhD, is a Research Scientist and Criminologist at the University of Florida, and Director of the LPRC. The LPRC includes 100 major retail corporations, multiple law enforcement agencies, trade associations and more than 170 protective solution/tech partners working together year-round in the field, in VR and in simulation labs with scientists and practitioners to increase people and place safety by reducing theft, fraud and violence. Dr. Hayes has authored four books and more than 320 journal and trade articles.
From scaling Canva to transforming New Zealand's startup scene - Mahesh Muralidhar knows how to think big. In this episode, we unpack Mahesh's journey at Canva, his mission to make NZ happier and wealthier, and how Phase One Ventures is backing the next generation of billion-dollar companies.
Learn more about Phase One Ventures
Ever wanted your own financial adviser? James is picking one listener to coach for a whole year - apply now and you might just star on the podcast too! Apply here
For more money tips follow us on:
Facebook
Instagram
The content in this podcast is the opinion of the hosts. It should not be treated as financial advice. It is important to take into consideration your own personal situation and goals before making any financial decisions.
10X Success Hacks for Startups, Innovations and Ventures (consulting and training tips)
In this episode, I sit down with NAFA co-founder Riya Thosar to explore how a group of tech professionals is reshaping the future of Marathi cinema in North America.
Follow Proof of Coverage Media: https://x.com/Proof_Coverage
Connor, Mahesh, Santi, and Jason are joined by Amir Haleem of Helium to explore the evolving landscape of decentralized networks. They dive into Helium's impressive revenue growth - from $400K to $2.7M per month - driven by its mobile subscriber base, and discuss the complexities of blending off-chain and on-chain revenue. The conversation covers tokenized equity, sustainable business models beyond token sales, and the convergence of crypto with traditional finance. Amir shares how Helium has shifted from a crypto-first approach to prioritizing service delivery and user satisfaction, offering key lessons in product distribution, user retention, and innovative tokenomics.
Timestamps:
00:00 - Introduction
02:25 - Microstrategy and Digital Asset Accumulation
03:42 - Market Trends: Crypto and Wall Street
05:03 - Santi's Perspective on Market Efficiency
06:19 - DePIN Projects and Public Market Strategies
06:41 - Helium's Potential for Going Public
08:50 - Cash Flow and Tokenomics in DePIN
12:07 - Helium's Recent Revenue Growth
12:55 - PMF for DePIN Networks
18:03 - User Engagement and Helium's Growth
19:05 - Helium's Revenue Sources Explained
21:06 - Convergence of Off-Chain and On-Chain Revenue
24:12 - Learning from Helium's Evolution
25:03 - Focus on Distribution Over Product
27:27 - Daily Active Users and Their Interaction
31:26 - Valuable Users and Helium's Ecosystem
33:45 - Cloud Points and User Experience
36:44 - Retention Curves: Crypto vs. Traditional Users
39:59 - Aligning Token and Equity Interests
Disclaimer: The hosts and the firms they represent may hold stakes in the companies mentioned in this podcast. None of this is financial advice.
Today, we're talking with Mahesh Guruswamy, CPTO at Kickstarter. In this episode, we discuss: How daily reflection on hard moments helped him grow as a leader—and led to his widely praised book How to Deliver Bad News and Get Away with It. How Kickstarter went from "printing money" and experiencing "insane" PMF to plateauing - and how they created urgency and competition to quickly turn it around. The actual process Kickstarter used to activate its creator and backer communities to shape a better UX, build real loyalty, and re-ignite growth.
Links
LinkedIn: https://www.linkedin.com/in/maheshguruswamy/
Website: https://www.maheshguruswamy.com/
Substack: https://maheshguruswamy.substack.com/
Resources
How to Deliver Bad News and Get Away with It: A Manager's Guide, by Mahesh Guruswamy: https://www.amazon.com/How-Deliver-Bad-News-Away/dp/B0D7FHTTNN
Tell Your CEO They're Wrong (And KEEP Your Career) | Steve Nash, Dir. Product (GumGum): https://youtu.be/oeEBy8mJfB0
Chapters
00:00 Intro
01:09 Mahesh's Background and Kickstarter Connection
02:08 The Power of Writing and Self-Reflection
04:28 Publishing a Book: How to Deliver Bad News
05:53 Effective Communication and Feedback
06:46 Navigating Tough Conversations
11:03 Kickstarter's Evolution and Challenges
14:16 Listening to Customers and Adapting
19:36 Navigating Market Shifts and Growth
20:34 Prioritizing Customer Feedback
25:59 Building and Engaging the Community
29:54 Competitive Strategies and Organizational Culture
34:05 Leadership and Legacy
37:03 Conclusion and Future Prospects
Follow LaunchPod on YouTube
We have a new YouTube page (https://www.youtube.com/@LaunchPod.byLogRocket)! Watch full episodes of our interviews with PM leaders and subscribe!
What does LogRocket do?
LogRocket's Galileo AI watches user sessions for you and surfaces the technical and usability issues holding back your web and mobile apps.
Understand where your users are struggling by trying it for free at LogRocket.com (https://logrocket.com/signup/?pdr). Special Guest: Mahesh Guruswamy.
Follow Proof of Coverage Media: https://x.com/Proof_Coverage
In the inaugural episode of the DePIN Roundtable, Connor Lovely is joined by Santiago R Santos, Mahesh Ramakrishnan, and Jason Badeaux to explore the evolving world of decentralized physical infrastructure networks. They reflect on Bitcoin's foundational role, early model flaws, and growing interest from non-crypto sectors, highlighted by Santiago's insights from a Dubai conference. Mahesh discusses the collegial shift among fund managers, fundraising challenges, and the upcoming DePIN Summit in Africa. Jason shares updates on Daylight's energy subscription model and lessons in team building. The group debates token models, product-market fit, and the experiences of projects like Helium, emphasizing the need for better alignment between user needs and tokenomics. The episode closes with an optimistic outlook on DePIN's potential to transform infrastructure as SaaS did for software.
Timestamps:
00:00 - Introduction
01:11 - Welcome to the DePIN Roundtable
02:09 - Insights from the Dubai Conference
05:24 - Fundraising Challenges in Crypto
08:53 - Santi's Fundraising Success
12:04 - Building a Strong Team
15:55 - Daylight's Revenue Generation
20:30 - Daylight's Growth and Hiring
21:09 - Energy Subscription Model
25:11 - Token Launch Considerations
30:00 - Learning from Helium's Journey
35:09 - Supply Sinks in DePIN Networks
41:11 - Innovative Token Use Cases
Disclaimer: The hosts and the firms they represent may hold stakes in the companies mentioned in this podcast. None of this is financial advice.
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today, we're joined by Mahesh Sathiamoorthy, co-founder and CEO of Bespoke Labs, to discuss how reinforcement learning (RL) is reshaping the way we build custom agents on top of foundation models. Mahesh highlights the crucial role of data curation, evaluation, and error analysis in model performance, and explains why RL offers a more robust alternative to prompting, and how it can improve multi-step tool use capabilities. We also explore the limitations of supervised fine-tuning (SFT) for tool-augmented reasoning tasks, the reward-shaping strategies they've used, and Bespoke Labs' open-source libraries like Curator. We also touch on the models MiniCheck for hallucination detection and MiniChart for chart-based QA. The complete show notes for this episode can be found at https://twimlai.com/go/731.
www.aapm.org
"Welcome to Money 911, the show where we align your wealth, health, and peace of mind. I'm your host, Kris Miller—your Legacy Wealth Strategist. Today's episode is about expanding vision and embracing evolution. We're diving into a powerful conversation on how AI is not just a tool—but a transformational force for leadership. Our guest is a visionary who empowers C-suite leaders to not just adapt—but dominate in the new era. From Microsoft AI to boardrooms across the country, their work is reshaping the DNA of executive success and redefining what it means to lead. Friends, if you've ever wondered how to future-proof your business, lead with clarity, and multiply your ROI—this conversation is for you. Let's welcome our extraordinary guest…" Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode, Mahesh Guruswamy, CTO at Kickstarter, shares smart ways to deliver bad news at work. With over 20 years in tech, Mahesh knows how to talk about hard things in a kind and clear way. He says don't wait—speak up early, but do it with care. Whether it's with your team, your boss, or a client, honesty and empathy matter. Want to learn how to give feedback without fear? Mahesh explains how to do it right—and still keep strong relationships.
Mahesh Ram, serial entrepreneur and former Head of AI at Zoom, shares his journey from building pioneering companies in education and AI to helping launch FUNDA, a vibrant founder-to-founder community. He discusses the evolution of AI-first startups, lessons from working closely with Zoom founder Eric Yuan, and what it takes to build enduring tech companies today. Mahesh offers real-world advice on how founders can navigate the rapidly changing startup landscape, with a deep focus on customer obsession, rapid product iteration, and embedding technology into core workflows.
In this episode, you'll learn:
[01:50] How Mahesh went from immigrant kid in New York to serial entrepreneur in Silicon Valley
[06:50] Why Mahesh believes frustration often leads to the best startup ideas
[10:55] Inside Zoom's AI journey—and how Mahesh helped launch AI Companion at record speed
[13:14] Lessons on leadership from Eric Yuan: customer obsession and quality over cost
[20:41] Mahesh's advice to AI-first founders: ship fast, sell faster, and validate deeply
[24:23] What separates point solutions from workflow-embedded companies
[27:03] Mahesh's nuanced take on AI's societal risks—and why we're not ready
[31:25] What is FUNDA? Why a grassroots founder community is booming
The nonprofit organization that Mahesh supports: UStrive
About Mahesh Ram
Mahesh Ram is a serial entrepreneur and expert in artificial intelligence, most recently serving as Head of AI at Zoom. He was co-founder and CEO of Solvvy, a pioneering AI startup in customer experience, acquired by Zoom. Prior to that, he led GlobalEnglish, a business English learning platform used by millions worldwide. Mahesh advises founders, invests in early-stage startups, and is a founding member of FUNDA, a growing grassroots community of founders of Indian origin.
He is deeply passionate about education, technology, and building systems that simplify complex problems.
About FUNDA
FUNDA is a pay-it-forward community for founders of Indian origin, designed to support early-stage entrepreneurs through collaboration, connection, and shared experience. Built by founders for founders, FUNDA now includes over 1,250 members across Silicon Valley, Texas, and India. The community offers peer support, curated events, and access to a trusted network—entirely volunteer-driven and mission-focused.
About UStrive
Mahesh actively volunteers with UStrive, a nonprofit providing free virtual mentoring for high school and college students with financial need. The platform matches students with mentors to help navigate college admissions and financial aid processes—removing barriers to higher education for underserved youth.
Subscribe to our podcast and stay tuned for our next episode.
We are happy to announce that there will be a dedicated MCP track at the 2025 AI Engineer World's Fair, taking place Jun 3rd to 5th in San Francisco, where the MCP core team and major contributors and builders will be meeting. Join us and apply to speak or sponsor!
When we first wrote Why MCP Won, we had no idea how quickly it was about to win. In the past 4 weeks, OpenAI and now Google have announced MCP support, effectively confirming our prediction that MCP was the presumptive winner of the agent standard wars. MCP has now overtaken OpenAPI, the incumbent option and most direct alternative, in GitHub stars (3 months ahead of the conservative trendline).
We have explored the state of MCP at AIE (now the first ever >100k-view workshop). And since then, we've added a 7th reason why MCP won - this team acts very quickly on feedback, with the 2025-03-26 spec update adding support for stateless/resumable/streamable HTTP transports, and comprehensive authz capabilities based on OAuth 2.1. This bodes very well for the future of the community and project.
For protocol and history nerds, we also asked David and Justin to tell the origin story of MCP, which we leave to the reader to enjoy (you can also skim the transcripts, or, the changelogs of a certain favored IDE).
It's incredible the impact that individual engineers solving their own problems can have on an entire industry.
Full video episode
Like and subscribe on YouTube!
Show Links
* David
* Justin
* MCP
* Why MCP Won
Timestamps
* 00:00 Introduction and Guest Welcome
* 00:37 What is MCP?
* 02:00 The Origin Story of MCP
* 05:18 Development Challenges and Solutions
* 08:06 Technical Details and Inspirations
* 29:45 MCP vs Open API
* 32:48 Building MCP Servers
* 40:39 Exploring Model Independence in LLMs
* 41:36 Building Richer Systems with MCP
* 43:13 Understanding Agents in MCP
* 45:45 Nesting and Tool Confusion in MCP
* 49:11 Client Control and Tool Invocation
* 52:08 Authorization and Trust in MCP Servers
* 01:01:34 Future Roadmap and Stateless Servers
* 01:10:07 Open Source Governance and Community Involvement
* 01:18:12 Wishlist and Closing Remarks
Transcript
Alessio [00:00:02]: Hey, everyone. Welcome back to Latent Space. This is Alessio, partner and CTO at Decibel, and I'm joined by my co-host Swyx, founder of Small AI.
swyx [00:00:10]: Hey, morning. And today we have a remote recording, I guess, with David and Justin from Anthropic over in London. Welcome. You guys have created a storm of hype because of MCP, and I'm really glad to have you on. Thanks for making the time. What is MCP? Let's start with a crisp definition from the horse's mouth, and then we'll go into the origin story. But let's start off right off the bat. What is MCP?
Justin/David [00:00:43]: Yeah, sure. So Model Context Protocol, or MCP for short, is basically something we've designed to help AI applications extend themselves or integrate with an ecosystem of plugins, basically. The terminology is a bit different. We use this client-server terminology, and we can talk about why that is and where that came from. But at the end of the day, it really is that.
It's like extending and enhancing the functionality of AI applications.
swyx [00:01:05]: David, would you add anything?
Justin/David [00:01:07]: Yeah, I think that's actually a good description. I think there's like a lot of different ways for how people are trying to explain it. But at the core, I think what Justin said is like extending AI applications is really what this is about. And I think the interesting bit here that I want to highlight, it's AI applications and not models themselves that this is focused on. That's a common misconception that we can talk about a bit later. But yeah. Another version that we've used and gotten to like is like MCP is kind of like the USB-C port of AI applications and that it's meant to be this universal connector to a whole ecosystem of things.
swyx [00:01:44]: Yeah. Specifically, an interesting feature is, like you said, the client and server. And it's a sort of two-way, right? Like in the same way that a USB-C is two-way, which could be super interesting. Yeah, let's go into a little bit of the origin story. There's many people who've tried to make standards. There's many people who've tried to build open source. I think there's an overall, also, my sense is that Anthropic is going hard after developers in the way that other labs are not. And so I'm also curious if there was any external influence or was it just you two guys just in a room somewhere riffing?
Justin/David [00:02:18]: It is actually mostly like us two guys in a room riffing. So this is not part of a big strategy. You know, if you roll back time a little bit and go into like July 2024, I was like, started. I started at Anthropic like three months earlier or two months earlier. And I was mostly working on internal developer tooling, which is what I've been doing for like years and years before.
And as part of that, I think there was an effort of like, how do I empower more like employees at Anthropic to use, you know, to integrate really deeply with the models we have? Because we've seen these, like, how good it is, how amazing it will become even in the future. And of course, you know, just dogfood your own model as much as you can. And as part of that, from my development tooling background, I quickly got frustrated by the idea that, you know, on one hand side, I have Claude Desktop, which is this amazing tool with artifacts, which I really enjoyed. But it was very limited to exactly that feature set. And there was no way to extend it. And on the other hand side, I like work in IDEs, which could greatly like act on like the file system and a bunch of other things. But then they don't have artifacts or something like that. And so what I constantly did was just copy things back and forth between Claude Desktop and the IDE, and that quickly got me, honestly, just very frustrated. And part of that frustration was like, how do I go and fix this? What, what do we need? And back to like this development developer, like focus that I have, I really thought about like, well, I know how to build all these integrations, but what do I need to do to let these applications let me do this? And so it's very quickly that you see that this is clearly like an M times N problem. Like you have multiple like applications and multiple integrations you want to build and like, what better way is there to fix this than using a protocol? And at the same time, I was actually working on an LSP related thing internally that didn't go anywhere. But you put these things together in someone's brain and let them wait for like a few weeks. And out of that comes like the idea of like, let's build some, some protocol. And so back to like this little room, like it was literally just me going to a room with Justin and go like, I think we should build something like this.
Uh, this is a good idea. And Justin, lucky for me, just really took an interest in the idea, um, and, and took it from there to like, to, to build something together with me. That's really the inception story: it's us two, from then on, just going and building it over, over the course of like, like a month and a half of like building the protocol, building the first integration. Like Justin did a lot of the, like the heavy lifting of the first integrations in Claude Desktop. I did a lot of the first, um, proof of concept of how this can look like in an IDE. And if you, we could talk about like some of all the tidbits you can find way before the inception of like before the official release, if you were looking at the right repositories at the right time, but there you go. That's like some of the, the rough story.
Alessio [00:05:12]: Uh, what was the timeline? I know November 25th was like the official announcement date. When did you guys start working on it?
Justin/David [00:05:19]: Justin, when did we start working on that? I think it, I think it was around July. I think, yeah, I, as soon as David pitched this initial idea, I got excited pretty quickly and we started working on it, I think almost immediately after that conversation. And then, I don't know, it was a couple, maybe a few months of, uh, building the really unrewarding bits, if we're being honest, because for, for establishing something that's like this communication protocol has clients and servers and like SDKs everywhere, there's just like a lot of like laying the groundwork that you have to do. So it was a pretty, uh, that was a pretty slow couple of months. But then afterward, once you get some things talking over that wire, it really starts to get exciting and you can start building all sorts of crazy things. And I think this really came to a head.
And I don't remember exactly when it was, maybe like approximately a month before release, there was an internal hackathon where some folks really got excited about MCP and started building all sorts of crazy applications. I think the coolest one of which was like an MCP server that can control a 3D printer or something. And so like, suddenly people are feeling this power of like Claude connecting to the outside world in a really tangible way. And that, that really added some, uh, some juice to us and to the release.
Alessio [00:06:32]: Yeah. And we'll go into the technical details, but I just want to wrap up here. You mentioned you could have seen some things coming if you were looking in the right places. We always want to know what are the places to get alpha, how, how, how to find MCP early.
Justin/David [00:06:44]: I'm a big Zed user. I liked the Zed editor. The first MCP implementation in an IDE was in Zed. It was written by me and it was there like a month and a half before the official release, just because we needed to do it in the open because it's an open source project. Um, and so it was, it was not, it was named slightly differently because we, we were not set on the name yet, but it was there.
swyx [00:07:05]: I'm happy to go a little bit. Anthropic also had some preview of a model with Zed, right? Some kind of fast editing, uh, model. Um, uh, I, I'm con I confess, you know, I'm a Cursor, Windsurf user. Haven't tried Zed. Uh, what's, what's your, you know, unrelated or, you know, unsolicited two second pitch for, for Zed? That's a good question.
Justin/David [00:07:28]: I, it really depends what you value in editors. For me, I, I wouldn't even say I like, I love Zed more than others. I like them all like complementary in, in a way or another. Like I do use Windsurf. I do use Zed.
Um, but I think my, my main pitch for Zed is low latency, super smooth experience editor with a decent enough AI integration.
swyx [00:07:51]: I mean, and maybe, you know, I think that's, that's all it is for a lot of people. Uh, I think a lot of people obviously very tied to the VS Code paradigm and the extensions that come along with it. Okay. So I wanted to go back a little bit, you know, on, on, on some of the things that you mentioned, Justin, uh, which was building MCP on paper. You know, obviously we only see the end result. It just seems inspired by LSP. And I, I think both of you have acknowledged that. So how much is there to build? And when you say build, is it a lot of code or a lot of design? Cause I felt like it's a lot of design, right? Like you're picking JSON RPC, like how much did you base off of LSP and, and, you know, what, what, what was the sort of hard, hard parts?
Justin/David [00:08:29]: Yeah, absolutely. I mean, uh, we, we definitely did take heavy inspiration from LSP. David had much more prior experience with it than I did working on developer tools. So, you know, I've mostly worked on products or, or sort of infrastructural things. LSP was new to me. But as a, as a, like, or from design principles, it really makes a ton of sense because it does solve this M times N problem that David referred to, where, you know, in the world before LSP, you had all these different IDEs and editors, and then all these different languages that each wants to support or that their users want them to support. And then everyone's just building like one-off integrations. And so, like, you use Vim and you might have really great support for, like, honestly, I don't know, C or something, and then, like, you switch over to JetBrains and you have the Java support, but then, like, you don't get to use the great JetBrains Java support in Vim and you don't get to use the great C support in JetBrains or something like that.
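The Vim/JetBrains example is the whole economic argument for a protocol: with M editors and N languages, pairwise integrations cost M×N implementations, while a shared protocol like LSP (or MCP) costs M+N, since each side implements the protocol once. A throwaway sketch of the counting:

```python
def pairwise_integrations(m_apps: int, n_integrations: int) -> int:
    # Without a protocol, every application builds every integration itself.
    return m_apps * n_integrations

def protocol_integrations(m_apps: int, n_integrations: int) -> int:
    # With a shared protocol, each side implements the protocol exactly once.
    return m_apps + n_integrations

# 10 editors x 30 languages: 300 bespoke plugins vs. 40 protocol implementations.
print(pairwise_integrations(10, 30))   # -> 300
print(protocol_integrations(10, 30))   # -> 40
```

The numbers (10 editors, 30 languages) are made up for illustration; the point is only that the gap widens multiplicatively as the ecosystem grows.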
So LSP largely, I think, solved this problem by creating this common language that they could all speak and that, you know, you can have some people focus on really robust language server implementations, and then the IDE developers can really focus on that side. And they both benefit. So that was, like, our key takeaway for MCP: like, that same principle and that same problem in the space of AI applications and extensions to AI applications. But in terms of, like, concrete particulars, I mean, we did take JSON RPC and we took this idea of bidirectionality, but I think we quickly took it down a different route after that. I guess there is one other principle from LSP that we try to stick to today, which is, like, this focus on how features manifest more than the semantics of things, if that makes sense. David refers to it as being presentation focused, where, like, basically thinking in, like, offering different primitives, not because necessarily the semantics of them are very different, but because you want them to show up in the application differently. Like, that was a key sort of insight about how LSP was developed. And that's also something we try to apply to MCP. But like I said, then from there, like, yeah, we spent a lot of time, really a lot of time, and we could go into this more separately, like, thinking about each of the primitives that we want to offer in MCP and why they should be different, like, why we want to have all these different concepts. That was a significant amount of work. That was the design work, as you allude to. But then also already out of the gate, we had three different languages that we wanted to at least support to some degree. That was TypeScript, Python, and then for the Zed integration, it was Rust. So there was some SDK building work in those languages, a mixture of clients and servers to build out to try to create this, like, internal ecosystem that we could start playing with.
And then, yeah, I guess just trying to make everything, like, robust over, like, I don't know, this whole, like, concept that we have for local MCP, where you, like, launch subprocesses and stuff and making that robust took some time as well. Yeah, maybe adding to that, I think the LSP influence goes even a little bit further. Like, we did take actually quite a look at criticisms of LSP, like, things that LSP didn't do right and things that people felt they would love to have different, and really took that to heart to, like, see, you know, what are some of the things that we wish, you know, we should do better. We took a, you know, like, a lengthy, like, look at, like, their very unique approach to JSON RPC, I may say, and then we decided that this is not what we do. And so there's, like, these differences, but it's clearly very, very inspired. Because I think when you're trying to build and focus, if you're trying to build something like MCP, you kind of want to pick the areas you want to innovate in, but you kind of want to be boring about the other parts in pattern matching LSP. So the prior art allows you to be boring in a lot of the core pieces that you want to be boring in. Like, the choice of JSON RPC is very non-controversial to us because it's just, like, it doesn't matter at all, like, what the actual, like, bytes on the wire that you're speaking are. It makes no difference to us. The innovation is on the primitives you choose and these type of things. And so there's way more focus on that that we wanted to do. So having some prior art is good there, basically.
swyx [00:12:26]: It does. I wanted to double click. I mean, there's so many things you can go into. Obviously, I am passionate about protocol design. I wanted to show you guys this. I mean, I think you guys know, but, you know, you already referred to the M times N problem.
And I can just share my screen here: anyone working in developer tools has faced this exact issue where you see the God box, basically. Like, the fundamental problem and solution of all infrastructure engineering is you have M things going to N things, and then you put in the God box and they'll all be better, right? So here is one problem for Uber. One problem for... GraphQL, one problem for Temporal, where I used to work, and this is from React. And I was just kind of curious, like, you know, did you solve M times N problems at Facebook? Like, it sounds like, David, you did that for a living, right? Like, this is just M times N for a living.Justin/David [00:13:16]: Yeah, yeah. To some degree, for sure. I did. God, what's a good example of this? But, like, I did a bunch of this kind of work on, like, source control systems and these types of things. And so there were a bunch of these types of problems. And so you just shove them into something that everyone can read from and everyone can write to, and you build your God box somewhere, and it works. But yeah, it's just, in developer tooling, you're absolutely right. In developer tooling, this is everywhere, right?swyx [00:13:47]: And that, you know, it shows up everywhere. And what was interesting is I think everyone who makes the God box then has the same set of problems, which is also, you now have, like, composability, auth, and remote versus local. So, you know, there's this very common set of problems. So I kind of want to take a meta lesson on how to do the God box, but, you know, we can talk about the sort of development stuff later. I wanted to double click on, again, the presentation thing that Justin mentioned, of, like, how features manifest, and how you said some things are the same, but you just want to reify some concepts so they show up differently. And I had that sense, you know, when I was looking at the MCP docs. I'm like, why do these two things need to be different from each other? 
I think a lot of people treat tool calling as the solution to everything, right? And sometimes you can actually sort of view different kinds of tool calls as different things. And sometimes they're resources. Sometimes they're actually taking actions. Sometimes they're something else that I don't really know yet. But I just want to see, like, what are some things that you sort of mentally group as adjacent concepts, and why were they important to you to emphasize?Justin/David [00:14:58]: Yeah, I can chat about this a bit. I think fundamentally, every sort of primitive that we thought through, we thought about from the perspective of the application developer first: like, if I'm building an application, whether it is an IDE or, you know, Claude Desktop or some agent interface or whatever the case may be, what are the different things that I would want to receive from, like, an integration? And I think once you take that lens, it becomes quite clear that tool calling is necessary, but very insufficient. Like, there are many other things you would want to do besides just get tools and plug them into the model, and you want to have some way of differentiating what those different things are. So the kind of core primitives that we started MCP with (we've since added a couple more, but the core ones) are really tools, which we've already talked about: it's, like, adding tools directly to the model, or function calling as it's sometimes called. Then resources, which are basically, like, bits of data or context that you might want to add to the context. So excuse me, to the model context. And this is the first primitive where it's like, we decided this could be, like, application controlled: like, maybe you want a model to automatically search through and find relevant resources and bring them into context. 
But maybe you also want that to be an explicit UI affordance in the application, where the user can, like, you know, pick through a dropdown or, like, a paperclip menu or whatever, and find specific things and tag them in. And then that becomes part of, like, their message to the LLM. Like, those are both use cases for resources. And then the third one is prompts, which are deliberately meant to be, like, user-initiated or, like, user-substituted text or messages. So, like, the analogy here would be, like, if you're in an editor, like a slash command or something like that, or, like, an at, you know, autocompletion type thing, where it's like, I have this kind of macro effectively that I want to drop in and use. And we have sort of expressed opinions through MCP about the different ways that these things could manifest, but ultimately it is for application developers to decide. Okay, you get these different concepts expressed differently. Um, and it's very useful as an application developer because you can decide the appropriate experience for each, and actually this can be a point of differentiation too. Like, we were also thinking, you know, from the application developer perspective, application developers don't want to be commoditized. They don't want the application to end up the same as every other AI application. So, like, what are the unique things that they could do to, like, create the best user experience even while connecting up to this big open ecosystem of integrations? Yeah. And I think to add to that, there are two aspects that I want to mention. The first one is that, interestingly enough, while nowadays tool calling is obviously, like, probably, like, 95% plus of the integrations, and I wish there would be, you know, more clients doing resources, doing prompts, the very first implementation, in Zed, is actually a prompt implementation. It doesn't deal with tools. 
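The three core primitives just described differ mainly in who controls them, and that distinction shows up directly in the data shapes servers advertise. A sketch, where the field names follow the MCP spec's list results but the example content (tool, file, crash prompt) is invented:

```python
# Who controls each primitive is the key design distinction:
# tools are model-controlled, resources application-controlled, prompts user-controlled.
tool = {
    "name": "query_database",
    "description": "Run a read-only SQL query.",
    "inputSchema": {"type": "object", "properties": {"sql": {"type": "string"}}},
}
resource = {
    "uri": "file:///docs/schema.md",   # every resource is identified by a URI
    "name": "Database schema",
    "mimeType": "text/markdown",
}
prompt = {
    "name": "summarize_crash",         # e.g. surfaced as a slash command in an editor
    "description": "Summarize a crash report",
    "arguments": [{"name": "crash_id", "required": True}],
}
controlled_by = {"tools": "model", "resources": "application", "prompts": "user"}
assert controlled_by["prompts"] == "user"
```

The semantics overlap on purpose; the separate shapes exist so an application can render each one differently, a dropdown for resources, a slash command for prompts, automatic invocation for tools.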
And we found this actually quite useful, because what it allows you to do is, for example, build an MCP server that takes, like, a backtrace. So it's not necessarily, like, a tool; it literally just, like, reads the raw crashes from Sentry or any other, like, online platform that tracks your crashes, and just lets you pull this into the context window beforehand. And so it's quite nice that way, that it's, like, a user-driven interaction, where the user decides when to pull this in and doesn't have to wait for the model to do it. And so it's a great way to craft the prompt, in a way. And I think similarly, you know, I wish, you know, more MCP servers today would bring prompts as examples of, like, how to even use the tools at the same time. Yeah. The resources bits are quite interesting as well. And I wish we would see more usage there, because it's very easy to envision, but yet nobody has really implemented it: a system where, like, an MCP server exposes, you know, a set of documents that you have, your database, whatever you might want to, as a set of resources. And then, like, a client application would build a full RAG index around this, right? This is definitely an application use case we had in mind as to why these are exposed in such a way that they're not model driven, because you might want to have way more resource content than is, you know, realistically usable in a context window. And so I think, you know, I wish applications, and I hope applications will do this in the next few months, use these primitives, you know, way better, because I think there's way more rich experiences to be created that way. Yeah, completely agree with that. And I would have gone into it if he hadn't.Alessio [00:19:30]: I think that's a great point. And everybody just, you know, has a hammer and wants to do tool calling on everything. I think a lot of people do tool calling to do a database query. They don't use resources for it. 
What are, like, the, I guess, maybe, like, pros and cons, or, like, when should people use a tool versus a resource, especially when it comes to, like, things that do have an API interface? Like, for a database, you can do a tool that does a SQL query. When should you do that, versus a resource instead with the data? Yeah.Justin/David [00:20:00]: The way we separate these is, like, tools are always meant to be initiated by the model. It's sort of like, at the model's discretion, it will, like, find the right tool and apply it. So if that's the interaction you want as a server developer, where it's like, okay, you know, suddenly I've given the LLM the ability to run SQL queries, for example, that makes sense as a tool. But resources are more flexible, basically. And I think, to be completely honest, the story here is practically a bit complicated today, because many clients don't support resources yet. But, like, I think in an ideal world, where all these concepts are fully realized and there's, like, full ecosystem support, you would do resources for things like the schemas of your database tables and stuff like that, as a way to, like, either allow the user to say, like, okay, now, you know, Claude, I want to talk to you about this database table. Here it is. Let's have this conversation. Or maybe the particular AI application that you're using, like, you know, could be something agentic, like Claude Code, is able to just, like, agentically look up resources and find the right schema of the database table you're talking about. Like, both those interactions are possible. But I think, like, anytime you have this sort of, like, you want to list a bunch of entities and then read any of them, that makes sense to model as resources. Resources are also uniquely identified by a URI, always. 
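That rule of thumb can be sketched for the database case: the model-initiated interaction becomes a tool, while the list-entities-then-read-one interaction becomes resources. Field names follow MCP list results; the table names and the `schema://` URI scheme are invented for illustration:

```python
# If the *model* decides when to act, expose a tool:
run_query_tool = {
    "name": "run_query",
    "description": "Execute a SQL query chosen by the model.",
    "inputSchema": {
        "type": "object",
        "properties": {"sql": {"type": "string"}},
        "required": ["sql"],
    },
}

TABLES = ["users", "orders"]

def list_schema_resources():
    # If you want to list many entities and read any of them, expose resources:
    # one URI per table schema, which a user (or agentic client) can pick from.
    return [{"uri": f"schema://tables/{t}", "name": f"{t} schema"} for t in TABLES]

assert [r["uri"] for r in list_schema_resources()] == [
    "schema://tables/users",
    "schema://tables/orders",
]
```

The same database backs both shapes; the choice is about who initiates the interaction, not about what the data is.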
And so you can also think of them as, like, you know, sort of general purpose transformers, even: like, if you want to support an interaction where a user just, like, drops a URI in, and then you, like, automatically figure out how to interpret that, you could use MCP servers to do that interpretation. One of the interesting side notes here, back to the Zed example of resources, is that Zed has, like, a prompt library that people can interact with. And we just exposed a set of default prompts that we want everyone to have as part of that prompt library, via resources, for a while, so that, like, you boot up Zed and Zed will just populate the prompt library from an MCP server, which was quite a cool interaction. And that was, again, a very specific thing: like, both sides needed to agree upon the URI format and the underlying data format. But that was a nice and kind of, like, neat little application of resources. There's also, going back to that perspective of, like, as an application developer, what are the things that I would want? We also applied this thinking to, like, you know, what existing features of applications could conceivably be kind of, like, factored out into MCP servers, if you were to take that approach today. And so, like, basically any IDE where you have, like, an attachment menu, that I think naturally models as resources. It's just, you know, those implementations already existed.swyx [00:22:49]: Yeah, I think the immediate, like, you know, when you introduced it for Claude Desktop and I saw the at sign there, I was like, oh, yeah, that's what Cursor has. But this is for everyone else. And, you know, I think, like, that is a really good design target, because it's something that already exists and people can map on pretty neatly. I was actually featuring this chart from Mahesh's workshop that presumably you guys agreed on. 
I think this is so useful that it should be on the front page of the docs. Like, it probably should be. I think that's a good suggestion.Justin/David [00:23:19]: Do you want to do a PR for this? I love it.swyx [00:23:21]: Yeah, I'll do a PR. I've done a PR for just Mahesh's workshop in general, just because I'm like, you know. I know.SPEAKER_03 [00:23:28]: I approve. Yeah.swyx [00:23:30]: Thank you. Yeah. I mean, like, but, you know, I think for me as a developer relations person, I always insist on having a map for people. Here are all the main things you have to understand. We'll spend the next two hours going through this. So some one image that kind of covers all this, I think, is pretty helpful. And I like your emphasis on prompts. I would say that it's interesting that, like, I think, you know, in the early days of ChatGPT and Claude, people (oh, you can't really follow my screen, can you?), in the early days of ChatGPT and all that, like, a lot of people started, like, you know, GitHub-for-prompts things, like, we'll do prompt manager libraries, and those never really took off. And I think something like this is helpful and important. I would say, like, I've also seen the prompt file from Humanloop, I think, as other ways to standardize how people share prompts. But yeah, I agree that there should be more innovation here. And I think probably people want some dynamicism, which I think you allow for. And I like that you have multi-step prompts. This was the main thing that got me like, these guys really get it. You know, I think you maybe have published some research that says, like, actually, sometimes to get the model working the right way, you have to do multi-step prompting or jailbreaking to get it to behave the way that you want. And so I think prompts are not just single conversations. They're sometimes chains of conversations. 
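That "chains of conversations" point is visible in the prompt primitive itself: a `prompts/get` result returns a list of role-tagged messages, not a single string. A sketch, where the message shape follows the MCP spec but the triage content and the `crash_id` argument are invented:

```python
# A multi-step prompt: the server returns a pre-scripted exchange, not one string.
def get_prompt(crash_id):
    return {
        "description": "Multi-step crash triage prompt",
        "messages": [
            {"role": "user",
             "content": {"type": "text", "text": f"Here is crash {crash_id}."}},
            {"role": "assistant",
             "content": {"type": "text", "text": "Understood. What should I focus on?"}},
            {"role": "user",
             "content": {"type": "text", "text": "Find the root cause and suggest a fix."}},
        ],
    }

roles = [m["role"] for m in get_prompt("BT-123")["messages"]]
assert roles == ["user", "assistant", "user"]
```

Because the result is a message list, a server can seed assistant turns as well as user turns, which is exactly what multi-step prompting needs.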
Yeah.Alessio [00:25:05]: Another question that I had when I was looking at some server implementations: the server builders kind of decide what data gets eventually returned, especially for tool calls. For example, the Google Maps one, right? If you just look through it, they decide what, you know, attributes kind of get returned, and the user cannot override that if there's a missing one. That has always been my gripe with, like, SDKs in general, when people build, like, API wrapper SDKs, and then they miss one parameter that maybe is new, and then I cannot use it. How do you guys think about that? And, like, yeah, how much should the user be able to intervene in that versus just letting the server designer do all the work?Justin/David [00:25:41]: I think we probably bear responsibility for the Google Maps one, because I think that's one of the reference servers we've released. I mean, in general, for things like tool results in particular, we've actually made the deliberate decision, at least thus far, for tool results to be not, like, sort of structured JSON data matching a schema, really, but, like, text or images, or basically, like, messages that you would pass into the LLM directly. And so I guess the corollary of that is, you really should just return a whole jumble of data and trust the LLM to, like, sort through it and sift and, like, you know, extract the information it cares about, because that's exactly what they excel at. And we really try to think about, like, yeah, how to, you know, use LLMs to their full potential, and not maybe over-specify and then end up with something that doesn't scale as LLMs themselves get better and better. So really, yeah, I suppose what should be happening in this example server, which, again, pull requests welcome. It would be great. 
It's like, if all these result types were literally just passed through from the API that it's calling, then everything would be able to pass through automatically.Alessio [00:27:19]: It's hard to make design decisions on where to draw the line.Justin/David [00:27:22]: I'll maybe throw AI under the bus a little bit here and just say that Claude wrote a lot of these example servers. No surprise at all. But I do think, sorry, I do think there's an interesting point in this, that people at the moment mostly still just apply their normal software engineering API approaches to this. And I think we still need a little bit more relearning of how to build something for LLMs and trust them, particularly, you know, as they are getting significantly better year to year. Right. And I think, you know, two years ago, maybe that approach would have been very valid. But nowadays, just, like, "throw data at that thing that is really good at dealing with data" is a good approach to this problem. And I think it's just, like, unlearning, like, 20, 30, 40 years of software engineering practices that go a little bit into this, to some degree. If I could add to that real quickly, just one framing as well for MCP is thinking in terms of, like, how crazily fast AI is advancing. I mean, it's exciting. It's also scary. Like, us thinking that, like, the biggest bottleneck to, you know, the next wave of capabilities for models might actually be their ability to, like, interact with the outside world, to, like, you know, read data from outside data sources or, like, take stateful actions. Working at Anthropic, we absolutely care about doing that safely and with the right control and alignment measures in place and everything. But also, as AI gets better, people will want that. That'll be key to, like, becoming productive with AI: like, being able to connect them up to all those things. 
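The pass-through idea above is simple to sketch: instead of hand-picking fields from the upstream API, the server wraps the whole response in one text content block and lets the model sift. The result shape follows MCP tool results; the geocoding-style payload is invented:

```python
import json

# "Throw data at the thing that is good at dealing with data": pass the
# upstream API response through verbatim as a single text content block.
def passthrough_result(api_response: dict) -> dict:
    return {
        "content": [{"type": "text", "text": json.dumps(api_response)}],
        "isError": False,
    }

upstream = {"formatted_address": "1 Main St", "lat": 1.23, "lng": 4.56,
            "plus_code": "XYZ+99"}   # every field survives, even ones added later
result = passthrough_result(upstream)
assert json.loads(result["content"][0]["text"])["plus_code"] == "XYZ+99"
```

When the upstream API grows a new attribute, nothing in the server needs to change, which addresses exactly the missing-parameter gripe about hand-curated wrappers.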
So MCP is also sort of, like, a bet on the future and where this is all going and how important that will be.Alessio [00:29:05]: Yeah. Yeah, I would say any API attribute that says formatted_ something should kind of be gone, and we should just get the raw data from all of them. Because why, you know, why are you formatting? For me, the model is definitely smart enough to format an address. So I think that should go to the end user.swyx [00:29:23]: Yeah. I think Alessio is about to move on to, like, server implementation. I wanted to, I think we're still talking about sort of MCP design and goals and intentions. And I think we've indirectly identified, like, some problems that MCP is really trying to address. But I wanted to give you the spot to directly take on MCP versus OpenAPI, because obviously this is a top question. I wanted to sort of recap everything we just talked about and give people a nice little segment that people can say, like, this is a definitive answer on MCP versus OpenAPI.Justin/David [00:29:56]: Yeah, I think fundamentally, I mean, OpenAPI specifications are a great tool, and, like, I've used them a lot in developing APIs and consumers of APIs. I think fundamentally, or we think, that they're just, like, too granular for what you want to do with LLMs. Like, they don't express higher-level, AI-specific concepts, like this whole mental model that we've talked about, with the primitives of MCP and thinking from the perspective of the application developer. Like, you don't get any of that when you encode this information into an OpenAPI specification. So we believe that models will benefit more from, like, the purpose-built or purpose-designed tools, resources, prompts, and the other primitives than just kind of, like, here's our REST API, go wild. I do think there's another aspect. I'm not an OpenAPI expert, so everything I say might not be perfectly accurate. 
But I do think that, like (and we can talk about this a bit more later), there was a deliberate design decision to make the protocol somewhat stateful, because we do really believe that AI applications and AI, like, interactions will become inherently more stateful, and that the current, like, need for statelessness is more a temporary point in time. You know, to some degree that will always exist, but I think, like, more statefulness will become increasingly popular, particularly when you think about additional modalities that go beyond just pure text-based, you know, interactions with models. Like, it might be, like, video, audio, whatever other modalities exist out there already. And so I do think that, like, having something a bit more stateful is just inherently useful in this interaction pattern. I do think they're actually more complementary, OpenAPI and MCP, than people want to make it out to be. Like, people look for these, like, you know, A-versus-B things, and, like, you know, want to have all the developers of these things go in a room and fist fight it out. But that's rarely what's going on. I think it's actually that they're very complementary, and they each have their little space where they're very, very strong. And I think, you know, just use the best tool for the job. And if you want to have a rich interaction between an AI application and an integration, it's probably MCP that's the right choice. And if you want to have, like, an API spec somewhere that is very easy for a model to read and interpret, and that's what works for you, then OpenAPI is the way to go. One more thing to add here is that we've already seen people (I mean, this happened very early), people in the community, build, like, bridges between the two as well. 
So, like, if what you have is an OpenAPI specification and no one's, you know, building a custom MCP server for it, there are already, like, translators that will take that and re-expose it as MCP. And you could do the other direction too. Awesome.Alessio [00:32:43]: Yeah. I think there's the other side of MCP that people don't talk as much about, because it doesn't go viral, which is building the servers. So I think everybody does the tweets about, like, connect Claude Desktop to X MCP server, it's amazing. How would you guys suggest people start with building servers? The spec is, like, there are so many things you can do with it. It's almost like, how do you draw the line between being very descriptive as a server developer versus, like, going back to our discussion before, just take the data and then let the model manipulate it later? Do you have any suggestions for people?Justin/David [00:33:16]: I think I have a few suggestions. I think that one of the best things about MCP, and something that we got right very early, is that it's just very, very easy to build, like, something very simple. It might not be amazing, but it's good enough, because models are very good, and you can get this going within, like, half an hour, you know? And so I think the best part is just, like, pick the language, you know, of your choice that you love the most, pick the SDK for it, if there's an SDK for it, and then just go build a tool for the thing that matters to you personally, that you want to use, that you want to see the model, like, interact with. Build the server, throw the tool in, don't even worry too much about the description just yet. Like, write your little description as you think about it, and just give it to the model: just throw it, over the standard IO transport, into, like, an application that you like and see it do things. 
And I think that's part of the magic, or, like, you know, the empowerment and magic for developers: to get so quickly to something where the model does something that you care about. That, I think, really gets you going and gets you into this flow of, like, okay, I see this thing can do cool things. Now I can go and expand on this, and now I can go and, like, really think about, like, which are the different tools I want, which are the different resources and prompts I want. Okay, now that I have that, what do my evals look like for how I want this to go? How do I optimize my prompts for the evals using, like, tools for that? There's infinite depth that you can go into. But, okay: just start as simple as possible, and just go build a server in, like, half an hour, in the language of your choice, and watch the model interact with the things that matter to you. And I think that's where the fun is at. And I think a lot of what makes MCP great is it just adds a lot of fun to the development piece, to just go and have models do things quickly. I also, I'm quite partial, again, to using AI to help me do the coding. Like, I think even during the initial development process, we realized it was quite easy to basically just take all the SDK code. Again, you know, what David suggested: like, you know, pick the language you care about, and then pick the SDK. And once you have that, you can literally just drop the whole SDK code into an LLM's context window and say, okay, now that you know MCP, build me a server that does this, this, this. And, like, the results, I think, are astounding. Like, I mean, it might not be perfect around every single corner or whatever, and you can refine it over time. But, like, it's a great way to kind of, like, one-shot something that basically does what you want, and then you can iterate from there. 
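The "half an hour" claim is plausible because a tools-only server has very little to do. A deliberately tiny sketch, not the official SDK: a dispatcher for the two requests such a server must answer, with newline-delimited JSON over stdio standing in for the local transport. The `shout` tool is invented:

```python
import json
import sys

# One invented tool, advertised via tools/list and invoked via tools/call.
TOOLS = [{"name": "shout", "description": "Uppercase some text",
          "inputSchema": {"type": "object",
                          "properties": {"text": {"type": "string"}}}}]

def handle_request(req: dict) -> dict:
    if req["method"] == "tools/list":
        result = {"tools": TOOLS}
    elif req["method"] == "tools/call" and req["params"]["name"] == "shout":
        text = req["params"]["arguments"]["text"].upper()
        result = {"content": [{"type": "text", "text": text}]}
    else:
        return {"jsonrpc": "2.0", "id": req["id"],
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": req["id"], "result": result}

def stdio_loop():
    """Newline-delimited JSON over stdin/stdout: the local-transport idea."""
    for line in sys.stdin:
        print(json.dumps(handle_request(json.loads(line))), flush=True)

resp = handle_request({"jsonrpc": "2.0", "id": 1, "method": "tools/call",
                       "params": {"name": "shout", "arguments": {"text": "hi"}}})
assert resp["result"]["content"][0]["text"] == "HI"
```

A real server would also handle initialization and capability negotiation, which the SDKs take care of; the point is how thin the tool layer itself can be.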
And like David said, there has been a big emphasis from the beginning on, like, making servers as easy and simple to build as possible, which certainly helps with LLMs doing it too. We often find that, like, getting started is, like, you know, 100, 200 lines of code. It's really quite easy. Yeah. And if you don't have an SDK, again, give the subset of the spec that you care about to the model, and, like, another SDK, and just have it build you an SDK. And it usually works for, like, that subset. Building a full SDK is a different story. But, like, to get a model to tool call in Haskell or whatever language you like, it's probably pretty straightforward.swyx [00:36:32]: Yeah. Sorry.Alessio [00:36:34]: No, I was gonna say, I co-hosted a hackathon at the AGI House on personal agents, and one of the personal agents somebody built was, like, an MCP server builder agent, where you basically put in the URL of the API spec, and it will build an MCP server for you. Do you see that today as kind of, like, yeah, most servers are just kind of, like, a layer on top of an existing API without too much opinion? And, yeah, do you think that's kind of, like, how it's going to be going forward? Just, like, AI-generated, exposing an API that already exists? Or are we going to see kind of, like, net new MCP experiences that you couldn't do before?Justin/David [00:37:10]: I think, go for it. I think both. Like, I think there will always be value in, like, oh, I have, you know, my data over here, and I want to use some connector to bring it into my application over here. That use case will certainly remain. I think, you know, this kind of goes back to, like, I think a lot of things today are maybe defaulting to tool use when some of the other primitives would be maybe more appropriate over time. And so it could still be that connector. 
It could still just be that sort of adapter layer, but it could, like, actually adapt onto different primitives, which is one way to add more value. But then I also think there's plenty of opportunity for MCP servers that kind of do interesting things in and of themselves and aren't just adapters. Some of the earliest examples of this were, like, you know, the memory MCP server, which gives the LLM the ability to remember things across conversations. Or, like, someone who's a close coworker built the... I shouldn't have said that, not a close coworker. Someone. Yeah. Built the sequential thinking MCP server, which gives a model the ability to, like, really think step by step and get better at its reasoning capabilities. This is something where it's like, it really isn't integrating with anything external. It's just providing this sort of, like, way of thinking for a model.Justin/David [00:38:27]: I guess either way, though, I think AI authorship of the servers is totally possible. Like, I've had a lot of success in prompting, just being like, hey, I want to build an MCP server that, like, does this thing. And even if this thing is not adapting some other API, but it's doing something completely original, it's usually able to figure that out too. Yeah. And to add to that, I do think that a good part of what MCP servers will be, will be these, like, just API wrappers, to some degree. Um, and that's going to be valid, because that works and it gets you very, very far. But I think we're just very early, like, in exploring what you can do. Um, and I think as client support for, like, certain primitives gets better (like, we can talk about sampling, probably my favorite topic and greatest frustration at the same time), I think you can very easily see, like, way, way, way richer experiences. And we have built them internally as prototypes. 
And I think you see some of that in the community already, but there's just, you know, things like, hey, a "summarize my favorite subreddits for the morning" MCP server, that nobody has built yet, but it's very easy to envision. And the protocol can totally do this. And these are, like, slightly richer experiences. And I think as people go from, like, the "oh, I'm just in this new world where I can hook up the things that matter to me to the LLM" to, like, actually wanting a real workflow, a real, like, richer experience that they really want exposed to the model, I think then you will see these things pop up. But again, there's a little bit of a chicken-and-egg problem at the moment with, like, what clients support versus, you know, what server authors want to do. Yeah.Alessio [00:40:10]: That's kind of my next question, on composability. Like, how do you guys see that? Do you have plans for that? What's kind of, like, the import of MCPs, so to speak, into another MCP? Like, if I want to build, like, the subreddit one, there's probably going to be, like, the Reddit API MCP, and then the summarization MCP. And then how do I do a super-MCP?Justin/David [00:40:33]: Yeah. So this is an interesting topic, and I think there are two aspects to it. I think that the one aspect is, like, how can I build something agentic that requires an LLM call in, like, one form or fashion, like for summarization or so, but staying model-independent? And for that, that's where, like, part of this bidirectionality comes in, in this more rich experience, where we do have this facility for servers to ask the client, who owns the LLM interaction, right? Like, we talked about Cursor, which, like, runs the loop with the LLM for you. There, it's for the server author to ask the client for a completion. 
Um, and basically have it, like, summarize something for the server and return it back. And so now, what model summarizes this depends on which one you have selected in Cursor, and not on what the author brings. The author doesn't bring an SDK, doesn't have to have an API key. It's completely model-independent how you can build this. That's just one aspect. The second aspect to building richer systems with MCP is that you can easily envision an MCP server that serves something to, like, something like Cursor or Windsurf or Claude Desktop, but at the same time also is an MCP client, and itself can use MCP servers to create a rich experience. And now you have a recursive property, which we actually, quite carefully, in the design principles, try to retain. You know, you see it all over the place, in authorization and other aspects of the spec, that we retain this, like, recursive pattern. And now you can think about, like, okay, I have this little bundle of applications, both a server and a client, and I can add these in chains and build basically graphs, like, uh, DAGs, out of MCP servers, um, that can just richly interact with each other. An agentic MCP server can also use the whole ecosystem of MCP servers available to itself. And I think that's a really cool environment, a cool thing you can do. And people have experimented with this. And I think you'll hopefully see more of this, particularly when you think about, like, auto-selecting, auto-installing. There's a bunch of these things you can do that make, uh, a really fun experience. I think practically there are some niceties we still need to add to the SDKs to make this really simple and, like, easy to execute on, like, this kind of recursive MCP server that is also a client, or, like, kind of multiplexing together the behaviors of multiple MCP servers into one host, as we call it. These are things we definitely want to add. 
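The sampling facility just described inverts the usual direction: the server sends the client a request, and the client runs whatever model the user has selected. A sketch of such a request, where the `sampling/createMessage` method name and the basic `messages`/`maxTokens` params follow the MCP spec, but the summarization text is invented:

```python
# The server asks the *client* for a completion, staying model-independent:
# no SDK, no API key, whatever model the user selected does the work.
def make_sampling_request(req_id, text_to_summarize):
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "sampling/createMessage",
        "params": {
            "messages": [{"role": "user",
                          "content": {"type": "text",
                                      "text": f"Summarize: {text_to_summarize}"}}],
            "maxTokens": 200,
        },
    }

req = make_sampling_request(7, "a long subreddit thread")
assert req["method"] == "sampling/createMessage"
```

This is the piece that makes the "summarize my subreddits" server buildable without bundling a model: the summarization happens wherever the user's LLM loop already lives.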
We haven't been able to yet, but I think that would go some way toward showcasing things we know are already possible but not necessarily taken up that much yet. Okay.

swyx [00:43:08]: This is very exciting, and I'm sure a lot of people will get ideas and inspiration from it. Is an MCP server that is also a client an agent?

Justin/David [00:43:19]: What's an agent? There are a lot of definitions of agents.

swyx [00:43:22]: Because in some ways you're requesting something, and it goes off and does stuff that you don't necessarily see. There's a layer of abstraction between you and the ultimate raw source of the data. You could dispute that. I don't know if you have a hot take on agents.

Justin/David [00:43:35]: I do think you can build an agent that way. For me, you need to define the difference between an MCP server plus client that is just a proxy and an actual agent. I think that difference might lie in, for example, using a sampling loop to create a richer experience, having a model call tools from inside that MCP server through these clients. Then you have an actual agent. So yes, I do think it's very simple to build agents that way. And I think there are maybe a few paths here. It definitely feels like there's some relationship between MCP and agents. One possible version is that MCP is a great way to represent agents; maybe there are some features or specifics missing that would make the ergonomics better, and we should make those part of MCP. That's one possibility. Another is that MCP makes sense as a kind of foundational communication layer for agents to compose with other agents, or something like that. Or there could be other possibilities entirely.
Maybe MCP should specialize and focus narrowly on the AI application side, and not as much on the agent side. I think it's a very live question, and there are trade-offs in every direction. Going back to the analogy of the God box, one thing we have to be very careful about in designing a protocol, and in curating or shepherding an ecosystem, is trying to do too much. You don't want a protocol that tries to do absolutely everything under the sun, because then it'll be bad at everything too. So the key question, which is still unresolved, is to what degree agents naturally fit into this existing model and paradigm, and to what degree they're basically orthogonal to it.

swyx [00:45:17]: I think once you enable two-way communication, and once you enable the client and server to be the same thing with delegation of work to another MCP server, it's definitely more agentic than not. But I appreciate that you keep simplicity in mind and aren't trying to solve every problem under the sun. Cool, I'm happy to move on. I'm going to double-click on a couple of things I marked, because they coincide with things we wanted to ask you anyway. The first one is simple: how many MCP tools can one implementation support? This is the wide-versus-deep question, and it's directly relevant to the nesting of MCPs we just talked about. In April 2024, when Claude was launching one of its first long-context examples, they said you can support 250 tools; and in a lot of cases, you can't actually do that. To me, that's wide in the sense that you don't have tools that call tools; you just have the model and a flat hierarchy of tools. But then, obviously, you get tool confusion.
It's going to happen when the tools are adjacent: you call the wrong tool, you get bad results, right? Do you have a recommendation for a maximum number of MCP servers enabled at any given time?

Justin/David [00:46:32]: To be honest, I don't think there's one answer to this, because to some extent it depends on the model you're using, and to some extent on how well the tools are named and described for the model, to avoid confusion. The dream is certainly that you just furnish all this information to the LLM and it makes sense of everything. This goes back to the future we envision with MCP: all this information is brought to the model, and the model decides what to do with it. But today, the practicalities might mean that in your client application, the AI application, you do some filtering over the tool set. Maybe you run a faster, smaller LLM to filter down to what's most relevant, and only pass those tools to the bigger model. Or you could use an MCP server that is a proxy to other MCP servers and does some filtering at that level. I think hundreds, as you referenced, is still a fairly safe bet, at least for Claude; I can't speak to the other models. Over time we should just expect this to get better, so we're wary of constraining anything and preventing that. And obviously, it highly depends on the overlap of the descriptions. If you have very separate servers that do very separate things, and the tools have very clear, unique names and well-written descriptions, your mileage will be better than if you have a GitLab and a GitHub server in your context at the same time.
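The "smaller model filters the tool set" idea can be sketched as follows. This is a toy under stated assumptions: a real implementation would ask a fast LLM to rank relevance, but here a simple keyword-overlap score stands in for that call, and the tool-list shape is illustrative rather than the MCP wire format.

```python
def score(tool, query_words):
    """Count how many query words appear in the tool's name/description.

    A stand-in for a relevance judgment that a small, fast LLM would
    normally make."""
    text = (tool["name"] + " " + tool["description"]).lower()
    return sum(1 for w in query_words if w in text)


def filter_tools(tools, query, top_k=2):
    """Keep only the top_k tools most relevant to the user's query,
    so the bigger model sees a smaller, less confusable tool set."""
    words = query.lower().split()
    ranked = sorted(tools, key=lambda t: score(t, words), reverse=True)
    return [t["name"] for t in ranked[:top_k]]


tools = [
    {"name": "github_create_issue", "description": "Create an issue on GitHub"},
    {"name": "gitlab_create_issue", "description": "Create an issue on GitLab"},
    {"name": "weather_lookup", "description": "Get the current weather"},
]
print(filter_tools(tools, "open a github issue"))
```

Note how the near-duplicate GitHub/GitLab tools illustrate the overlap problem discussed above: the cruder the relevance signal, the harder they are to tell apart.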
Then the overlap is quite significant, because they look very similar to the model, and confusion becomes easier. There are different considerations too, depending on the AI application. If you're trying to build something very agentic, maybe you're trying to minimize the number of times you need to go back to the user with a question, or minimize the amount of configurability in your interface. But if you're building other applications, an IDE or a chat application or whatever, I think it's totally reasonable to have affordances that let the user say: at this moment I want this feature set, and at this other moment I want this different feature set, rather than treating the full list as always on all the time. Yeah.

swyx [00:48:42]: That's where I think the concepts of resources and tools start to blend a little, right? Because now you're saying you want some degree of user control, or application control, and at other times you want the model to control it. So now we're choosing subsets of tools. I don't know.

Justin/David [00:49:00]: Yeah, I think it's a fair point or a fair concern. The way I think about this, and it's a core MCP design principle, is that at the end of the day, the client application, and by extension the user, should be in full control of absolutely everything that's happening via MCP. When we say that tools are model-controlled, what we really mean is that tools should only be invoked by the model. There really shouldn't be an application interaction or a user interaction where, as a user, you say "I now want you to use this tool." Occasionally you might do that for prompting reasons, but I don't think it should be a UI affordance.
But the client application or the user deciding to filter out things that MCP servers are offering? Totally reasonable. Or even transforming them: you could imagine a client application that takes tool descriptions from an MCP server and enriches them, makes them better. We really want the client applications to have full control in the MCP paradigm. In addition, though, and this is very early in my thinking, there might be an addition to the protocol where you give the server author the ability to logically group certain primitives together, to inform that, because they might know some of these logical groupings better. That could encompass prompts, resources, and tools at the same time. We can have a design discussion on that; personally, my take would be that those should be separate MCP servers, and then the user should be able to compose them together. But we can figure it out.

Alessio [00:50:31]: Is there going to be an MCP standard library, so to speak: "these are the canonical servers, do not build these, we'll take care of them"? Those could maybe be the building blocks that people compose. Or do you expect people to just rebuild their own MCP servers for a lot of things?

Justin/David [00:50:49]: I think we will not be prescriptive in that sense. Let me put it this way: I have a long history in open source, and I feel the bazaar approach to this problem is somewhat useful, where the best and most interesting option wins. I don't think we want to be very prescriptive.
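The client-side filtering and enrichment just described can be sketched like this. This is a hypothetical illustration in plain Python: `transform`, the blocklist, and the tool payload shape are invented for the example and are not part of the MCP spec.

```python
def transform(server_tools, blocklist=(), suffix=""):
    """A client reshaping what a server offers before it reaches the model:
    drop blocked tools and append extra guidance to each description."""
    out = []
    for tool in server_tools:
        if tool["name"] in blocklist:
            continue  # client exercises its right to filter
        enriched = dict(tool)  # leave the server's own objects untouched
        enriched["description"] = tool["description"] + suffix
        out.append(enriched)
    return out


server_tools = [
    {"name": "delete_repo", "description": "Delete a repository."},
    {"name": "list_repos", "description": "List repositories."},
]
safe = transform(
    server_tools,
    blocklist={"delete_repo"},
    suffix=" Only call when the user asks explicitly.",
)
print([t["name"] for t in safe])
```

The point of the sketch is that the client, not the server, has the final say over which primitives the model sees and how they are described.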
I definitely foresee, and this already exists, that there will be 25 GitHub servers and 25 Postgres servers and whatnot. And that's all cool, and that's good; they each add something in their own way. But eventually, over months or years, the ecosystem will converge on a set of very widely used ones. I don't know if you'd call it winning, but those will be the most used ones, and I think that's completely fine, because being prescriptive about this isn't any use. I do think, of course, that there will be MCP servers, and you see them already, that are driven by companies for their products, and they will inherently be the canonical implementations. If you want to work with Cloudflare Workers and use an MCP server for that, you'll probably want to use the one developed by Cloudflare. Yeah. I think there's maybe a related thing here, too, one big thing worth thinking about where we don't have any solutions completely ready to go: the question of trust, or vetting is maybe a better word. How do you determine which MCP servers are the good and safe ones to use? Regardless of how many implementations of GitHub MCP servers there are, which could be totally fine, you want to make sure you're not using ones that are really sus, right? So we're trying to think about how to endow reputation. If, hypothetically, Anthropic says "we've vetted this; it meets our criteria for secure coding," how can that be reflected in this open model where everyone in the ecosystem can benefit? We don't really know the answer yet, but it's very much top of mind.

Alessio [00:52:49]: But I think that's a great design choice of MCP: it's language-agnostic.
Already, to my knowledge, there's no official Anthropic Ruby SDK, nor an official OpenAI one; Alex Rudall does a great job building those. But now with MCP, you don't actually have to translate an SDK into all these languages. You build one interface, and Anthropic can bless that interface. So yeah, that was nice.

swyx [00:53:18]: I have a quick answer to this. Obviously, five or six different registries have already popped up, and you announced your official registry that's on the way. A registry is very tempted to offer download counts, likes, reviews, and some kind of trust signal. I think that's brittle: no matter what kind of social proof you offer, the next update can compromise a trusted package, and that's the one that does the most damage, right? Setting up a trust system creates the damage that comes from abusing the trust system. So I actually want to encourage people to try out MCP Inspector, because all you have to do is look at the traffic. And I think that goes for a lot of security issues.

Justin/David [00:54:03]: Yeah, absolutely. And I think that's the very classic supply chain problem that all registries effectively have. There are different approaches to this problem. You can take the Apple approach: vet things with an army of both automated systems and review teams, and then you've effectively built an app store. That's one approach to this type of problem, and it works in a certain set of ways. But I don't think it works in an open source ecosystem, for which you always end up with a registry approach, similar to npm and PyPI.

swyx [00:54:36]: And they all inherently have these supply chain attack problems, right?
Yeah, totally. Quick time check: I think we're going to go for another 20 or 25 minutes. Is that okay for you guys? Okay, awesome. Cool. So we previewed a little of the future stuff, like the registry, stateless servers, and remote servers, and I want to leave that to the end. But I wanted to double-click a little more on the launch: the core servers that are part of the official repo. Some of them are special ones, like the ones we already talked about, so let me pull them up. For example, you mentioned memory, and you mentioned sequential thinking. I really encourage people to look at these, what I call special servers. They're not normal servers in the sense of wrapping some API so that it's easier to interact with the server than with the raw API; they do something different. I'll highlight the memory one first, because there are a few memory startups, but you actually don't need them if you just use this one. It's about 200 lines of code and super simple. Obviously, if you need to scale it up, you should probably move to something more battle-tested, but if you're just introducing memory, I think it's a really good implementation. I don't know if there are special stories you want to highlight with some of these.

Justin/David [00:56:00]: No, I don't think there are special stories. A lot of these, not all of them, but a lot, originated from that hackathon I mentioned before, where folks got excited about the idea of MCP. People inside Anthropic who wanted memory, or wanted to play around with the idea, could quickly prototype something using MCP in a way that wasn't possible before.
You don't have to become the end-to-end expert, and you don't have to have access to the private, proprietary code base. You can just extend Claude with this memory capability. So that's how a lot of these came about, along with thinking about what breadth of functionality we wanted to demonstrate at launch.

swyx [00:56:47]: Totally. And I think that's partially why your launch was successful: you launched with a sufficiently spanning set of examples, and people could copy, paste, and expand from there. I would also highlight
Mahesh Swar is the CEO of Kantipur Media Group, Nepal's leading media conglomerate, and the President of the Nepalese Marketing Association. A visionary leader, he drives digital transformation, fosters innovation, and bridges global marketing trends with local expertise to shape Nepal's media and marketing landscape.
ParkourSC is transforming supply chain management with its dynamic decision intelligence platform, raising $90 million to tackle the industry's most pressing challenges. In this episode of Category Visionaries, we sat down with Mahesh Veerina, a four-time entrepreneur with multiple successful exits, to explore how ParkourSC is creating a new category focused on unlocking trapped value in fragmented supply chain systems. From his early success taking Ramp Networks public in 1999 to his current mission revolutionizing pharmaceutical supply chains, Mahesh shares invaluable insights on building category-defining companies.

Topics Discussed:
- Mahesh's entrepreneurial journey through three successful exits, including an IPO
- How ParkourSC pivoted from IoT sensors to supply chain intelligence
- The pandemic's role in accelerating supply chain visibility as a critical business need
- Why life sciences and pharma presented the perfect initial market
- Creating the "dynamic decision intelligence" category in supply chain technology
- Building a platform that integrates fragmented enterprise systems with real-time data
- The strategy for partnering with innovative "change agents" within large companies
- How ParkourSC's technology reduces supply chain planner "noise" by 50-60%

GTM Lessons For B2B Founders:
- Market conditions can validate your vision: Supply chains were "back office" until COVID put them in the spotlight. Mahesh explains, "Any channel you switch on, it's supply chain—running out of milk and bread and essentials." This external validation can accelerate market acceptance of your solution.
- Find the value unlock through adjacent innovation: Rather than replacing existing systems, identify where trapped value exists. Mahesh notes, "Companies spend billions of dollars building their ERP systems... We are a category coming either as adjacency or sitting on top of some of these systems to unlock value." This approach reduces friction to adoption.
- Target industries with regulatory pressure and high-value problems: ParkourSC chose pharma/life sciences because it's "heavily regulated" with "$35 billion worth of product lost yearly to expirations and cold chain issues." Regulatory compliance and high-dollar waste create urgent problems worth solving.
- Co-create with innovative customers: Mahesh advises finding "innovators" within large companies who want to be change agents. "These are the early adopters that take bets... They bring a problem, give you a sandbox to play in." One such customer partner even became ParkourSC's Chief Strategy Officer.
- Expand from a narrow solution to a platform vision: Start by solving one specific problem exceptionally well. Mahesh shares, "We got into the logistics, cold chain logistics... but very quickly found within these organizations it's great value but very narrow problem." They expanded systematically from logistics to operations, planning, and inventory—"the mother of all there, that's where all the money is stuck."

// Sponsors:
Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.io
The Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co
SUMMARY
Mahesh Guruswamy, a seasoned tech leader and author, joins the podcast to share invaluable insights on entrepreneurship, business strategy, and hiring for success. With a background in major tech companies like Amazon, Mahesh has a unique perspective on what it takes to scale a business, the common pitfalls entrepreneurs face, and how to build a winning team. Throughout the episode, Mahesh dives deep into the importance of market research, why competitor analysis is crucial, and why being first to market isn't always the best move. He also shares hard-earned lessons from his own entrepreneurial journey, including financial struggles and how he turned them into a driving force for success. Whether you're a startup founder or looking to scale your business, this episode is packed with practical advice you can apply immediately.

CHAPTERS
02:35 - The Harsh Reality of Entrepreneurship
05:10 - Why Most Entrepreneurs Skip Market Research
07:45 - How to Identify a Winning Business Idea
10:20 - Why You Shouldn't Always Be First in the Market
12:55 - The Power of Competitor Analysis
15:30 - Hiring Your First Tech Person: What to Look For
18:05 - The "Plumber" Approach to Early-Stage Tech
20:40 - The Biggest Lesson from Amazon's Work Culture
23:15 - Overcoming Financial Hardships & Learning from Failure
25:50 - How to Deliver Bad News & Still Win in Business

GUEST DETAILS
Website: MaheshGuruswamy.com
Book: How to Deliver Bad News and Get Away With It (available on Amazon and Barnes & Noble)

Connect with Rudy Mawer: LinkedIn | Instagram | Facebook | Twitter
Join Scott "Shalom" Klein on his weekly radio show, Get Down To Business, with guests Mahesh Guruswamy, Anjali Sharma, and Jimi Gibson.
Welcome back, loyal readers. First off, we had another strong week, with 18 new subscribers joining, thanks to Sunday, Sarah, Gotelé, Loque, Coree, Claire, Elizabeth, Lauren, Marina, Imma, Patricia, Beth, Mahesh, Olga, Heriberto, Leer, and Melissa. Thank you for trying Article Club, and I hope you like it here.

This week's issue is dedicated to our article of the month. For all of you who are interested, we'll be reading, annotating, and discussing "Radicalized," by Cory Doctorow. You'll learn more about the piece below, but here are a few tidbits:
* It's a fictional novella written in 2019 about a man who becomes radicalized after his health insurance denies his claim. Sound familiar?
* I read this piece in December, the week after all-things-Luigi Mangione.
* Mr. Doctorow's writing is fast-paced and his details eerily prescient.

Sound compelling? If so, you're invited to join our deep dive on the article. We're meeting up to discuss the piece on Sunday, March 23, 2:00-3:30 pm PT. All you need to do is click the button below to sign up.
"You're Fired!" Two words that are never easy to speak and even harder to hear. Dismissing employees is complicated — legally, emotionally, and professionally. Mastering the art of letting employees go isn't taught in business school, and too many managers fumble the process. Mahesh Guruswamy, chief technology officer at Kickstarter, has spent much of his career delivering tough news — not just to employees but also to customers, investors, and even higher-ups. Now, he's sharing his hard-earned wisdom in a new book — How to Deliver Bad News and Get Away With It: A Manager's Guide. Whether you're a seasoned executive or a first-time manager, Mahesh's insights will arm you with tools to handle difficult conversations while building trust, retaining talent, and, yes — keeping your sanity intact. Monday Morning Radio is hosted by the father-son duo of Dean and Maxwell Rotbart.

Photo: Mahesh Guruswamy, How to Deliver Bad News and Get Away With It
Posted: February 17, 2025
Monday Morning Run Time: 52:16
Episode: 13.36

RELATED EPISODES:
Lee Caraher on Business "Alumni" Networks and Boomerang Employees
Discover the Power of Effective Communication to Support Career Advancement and Life Satisfaction
Every Owner and Manager Needs to Have 'The Revelation Conversation' With Their Employees
About Mahesh Guruswamy:
Mahesh Guruswamy is a seasoned product development executive who has been in the software development space for over twenty years and has managed teams of varying sizes for over a decade. He is currently the chief technology officer at Kickstarter. Before that, he was an executive running product development teams at Mosaic, Kajabi, and Smartsheet. Mahesh caught the writing bug from his favorite author, Stephen King. He started out writing short stories and eventually discovered that long-form writing was a great medium to share information with product development teams. The essays he wrote over the last few years culminated in his first book, "How to Deliver Bad News and Get Away With It," which is out now for purchase here: https://www.amazon.com/How-Deliver-Bad-News-Away/dp/B0D7FHTTNN

In this episode, Jennie Bellinger and Mahesh Guruswamy discuss:
- Transparency in delivering bad news
- Frameworks for delivering critical feedback
- The importance of understanding the recipient's personality
- The balance between business skills and people skills in leadership
- Creating a common language and culture within teams

Key Takeaways:
- Being upfront and keeping customers and employees informed goes a long way in managing expectations and maintaining trust, even in difficult situations.
- The approach to delivering critical feedback should be tailored to the specific context and recipient.
- When providing critical feedback to superiors, it's important to be mentally prepared for potential backlash or even job loss; Mahesh advises being ready to handle the worst-case scenario, as the benefits of having the difficult conversation often outweigh the risks.
- Incorporating a more people-centric approach can lead to better long-term outcomes.
- Establishing behavioral tenets or principles that guide team interactions can create a unifying framework for resolving conflicts and improving communication.

"Companies don't hire CEOs for their people skills.
You look at the most successful leaders in the world, the most successful technology leaders or corporate leaders, and they're there because they can make the impossible happen, right? They have extremely good business skills." — Mahesh Guruswamy

Connect with Mahesh Guruswamy:
Website: https://www.maheshguruswamy.com/
LinkedIn: https://www.linkedin.com/in/maheshguruswamy/
Book: https://www.amazon.com/How-Deliver-Bad-News-Away/dp/B0D7FHTTNN
Link to Gift from Mahesh Guruswamy:

Connect with Jennie:
Website: https://badassdirectsalesmastery.com/
Email: jennie@badassdirectsalesmastery.com
Facebook personal page: https://facebook.com/jbellingerPL
Facebook podcast page: http://facebook.com/BadassDirectSalesMastery
Facebook group for Badass Crew: https://facebook.com/groups/BadassDirectSalesMoms
Instagram: https://instagram.com/BadassDirectSalesMastery
Personal Instagram: https://instagram.com/jenniebellinger
LinkedIn: https://linkedin.com/in/BadassDirectSalesMastery

The Badass Direct Sales Mastery Podcast is currently sponsored by the following:
Bella Grace Elixir: https://shopbellagrace.com/?ref=jenniebadassdirectsalesmastery
LeadBuddy Digital Marketing: Use code BDSM when checking out at https://leadbuddy.io/pro-monthly-9310?am_id=jennie582

Show Notes by Podcastologist: Hanz Jimuel Alvarez
Audio production by Turnkey Podcast Productions. You're the expert. Your podcast will prove it.
Mahesh Guruswamy is a seasoned product development executive and a published author who has been in the software development space for over twenty years and has managed teams of varying sizes for over a decade. He is currently the chief technology officer at Kickstarter. Before that, he was an executive running product development teams at Mosaic, Kajabi, and Smartsheet. Mahesh is passionate about mentoring others, especially folks who are interested in becoming a people manager and newer managers who are just getting going. Mahesh resides in Orange County with his wife, Krishma, and son, Vivaan. In his free time, you can find him either running the trails around South Orange County or reading cosmic horror fiction.

Connect with Mahesh Guruswamy:
Website: https://maheshguruswamy.substack.com/
Book: https://www.amazon.com/How-Deliver-Bad-News-Away/dp/B0D7FHTTNN
LinkedIn: https://www.linkedin.com/in/maheshguruswamy/

TurnKey Podcast Productions Important Links:
Guest to Gold Video Series: www.TurnkeyPodcast.com/gold
The Ultimate Podcast Launch Formula: www.TurnkeyPodcast.com/UPLFplus
FREE workshop on how to "Be A Great Guest."
Free E-Book "5 Ways to Make Money Podcasting" at www.Turnkeypodcast.com/gift
Ready to earn 6-figures with your podcast? See if you've got what it takes at TurnkeyPodcast.com/quiz
Sales Training for Podcasters: https://podcasts.apple.com/us/podcast/sales-training-for-podcasters/id1540644376
Nice Guys on Business: http://www.niceguysonbusiness.com/subscribe/
The Turnkey Podcast: https://podcasts.apple.com/us/podcast/turnkey-podcast/id1485077152