Venture capital unit of Alphabet Inc.
#738. In 2010, Satoshi spoke of Ripple as a unique system. But the Ripple of that era is not the one we know today. What happened between Ryan Fugger, Jed McCaleb, and Chris Larsen? And why are there so many connections to agencies, tech oligarchs, and even video games? Today we open up a few rabbit holes, in no particular order.
• Episode notes: https://podcast.pau.ninja/738
• Community + exclusive episodes: https://sociedad.ninja/
(00:00) Introduction (6:23) The unsettling origins of Ripple (10:15) The strange relationships between the creators of XRP (12:01) Indirect connections to Ripple (12:34) Peter Thiel (15:41) Elon Musk (17:02) Google Ventures (18:51) Catherine Austin Fitts (19:57) Brian Brooks (20:11) Rosie Rios
What’s Inside:
• Cannes Film Festival bans naked dressing on the red carpet: the iconic event kicks off with new fashion rules and old-school glamour
• UCLA scientists may have found a breakthrough baldness treatment: a molecule called PP405 could change the hair game by 2027

Episode Description: The Cannes Film Festival is known for its glitz, glamour, and show-stopping red carpet looks—but this year, they’re putting their foot down on fashion. Organizers have officially banned “naked dressing,” the sheer, skin-baring style that’s dominated celebrity wardrobes in recent years. As the 78th annual event kicks off, the festival is making it clear: elegance is back in, and body-baring looks are out. It’s a shift toward more classic, sophisticated style—at least for now. Meanwhile, in the world of science and self-esteem, researchers at UCLA may be on the verge of a baldness breakthrough. A molecule called PP405 is showing serious promise in early trials, reportedly waking up dormant hair follicles and prompting them to grow thick, real strands of hair—not just fuzz. Backed by Google Ventures and with $16 million raised so far, the treatment is moving through clinical trials and could be available by 2027 if it gets FDA approval. For millions of people, this could be a literal hair-raising game changer.

Sources:
• Cannes Film Festival announcement (no link provided)
• UCLA Baldness Research (summary provided by user)

Nina's What's Trending is your daily dose of the hottest headlines, viral moments, and must-know stories from The Jubal Show! From celebrity gossip and pop culture buzz to breaking news and weird internet trends, Nina’s got you covered with everything trending right now. She delivers it with wit, energy, and a touch of humor. Stay in the know and never miss a beat—because if it’s trending, Nina’s talking about it!

======
This is just a tiny piece of The Jubal Show.
You can find every podcast we have, including the full show every weekday, right here ➡︎ https://thejubalshow.com/podcasts

The Jubal Show is everywhere, and also these places:
Website ➡︎ https://thejubalshow.com
Instagram ➡︎ https://instagram.com/thejubalshow
X/Twitter ➡︎ https://twitter.com/thejubalshow
TikTok ➡︎ https://www.tiktok.com/@the.jubal.show
Facebook ➡︎ https://facebook.com/thejubalshow
YouTube ➡︎ https://www.youtube.com/@JubalFresh

Support the show: https://the-jubal-show.beehiiv.com/subscribe
See omnystudio.com/listener for privacy information.
After exiting a successful startup, Rafael Loureiro went looking for a better way to do estate planning… and came up empty. The experience? Clunky. Expensive. Outdated. So like any smart entrepreneur would do, he decided to build something better. Today, Rafael is the CEO of Wealth.com, a FinTech firm backed by Google Ventures and designed to modernize the estate planning process for advisors and their clients.

In this episode, Rafael sits down with Stacy to discuss:
• Rafael's backstory – How he went from Brazil to Silicon Valley to Wall Street
• Why his post-exit estate plan felt more like Mad Libs than modern finance
• How that clunky estate planning experience drove him to go from introverted software engineer to front-and-center CEO
• Why it's actually easier to raise capital before the product exists (yes, really)

About Rafael Loureiro: Rafael Loureiro is the CEO and co-founder of Wealth.com, the industry's leading estate planning platform. Under his leadership, Wealth.com has become the fastest-growing solution in the space, empowering 750+ wealth management firms to modernize the delivery of estate planning guidance. Backed by Google Ventures, Wealth.com is the only tech-led, end-to-end estate planning platform built specifically for financial institutions—helping advisors scale, drive efficiency, and serve clients across the wealth spectrum. Rafael was named ThinkAdvisor's 2024 Executive of the Year in the financial planning technology category and also received the Advisor Choice Award for Technology Providers: CEO of the Year in the 2024 WealthManagement.com Industry Awards, where Wealth.com was recognized as the Best Technology Provider in the Trust category. A technology entrepreneur and product innovator, Rafael brings more than 20 years of technical and executive leadership experience across start-ups, growth-stage ventures, and Fortune 500 companies.
Prior to founding Wealth.com, he served as Chief Technology Officer at Emailage, a global fraud prevention SaaS startup acquired by LexisNexis Risk Solutions in 2020. Rafael is a member of the Forbes Finance Council and an active community leader in the Phoenix start-up ecosystem. Originally from Brazil, Rafael has been in Arizona for over 20 years. Outside of work, he enjoys time with his family, hiking the Arizona desert, playing pickleball, reading, and gaming.

Want More Help With Storytelling?
+ Subscribe to my newsletter to get a weekly email that helps you use your words to power your growth: https://www.stacyhavener.com/subscribe
- - -
Make The Boutique Investment Collective part of your Billion Dollar Backstory. Gain access to invaluable resources, expert coaches, and a supportive community of other boutique founders, fund managers, and investment pros. Join Havener Capital's exclusive membership
Rafie Faruq is a serial entrepreneur and the co-founder of Genie, a platform launched in 2017 to make high-quality legal documents accessible through open-source tools. Since then, Genie has served over 100,000 users across 120 jurisdictions and recently raised an $18M Series A from Google Ventures and Khosla Ventures. Rafie holds an MSc in Machine Learning, has advised the Law Policy Commission, contributed to the Law Society, and represented the Ministry of Justice on international trade missions. He also teaches meditation and is a mentor to many seeking meaning and clarity in life.

In this conversation, we discuss:
• Rafie Faruq's unconventional path from bond trader to AI entrepreneur and what drove him to apply machine learning to legal access.
• How his early research on generative models led to new ways of thinking about document creation and automation.
• Why he believes the current definition of intelligence is outdated—and what a more human-centered version could look like.
• The role of emotional intelligence, purpose, and social connection in a future shaped by automation.
• What it means to build a business rooted in spiritual principles—and how Rafie applies concepts like rhythm, balance, and nature to team building.
• His reflections on power, responsibility, and why founders should see themselves as stewards of broader societal change.

Resources:
• Subscribe to the AI & The Future of Work Newsletter: https://aiandwork.beehiiv.com/subscribe
• Connect with Rafie on LinkedIn: https://www.linkedin.com/in/rfaruq/
• AI fun fact article: https://www.mixonline.com/blog/mix-blog-who-wrote-that-ai-and-copyright-law
• On Entrepreneurship in Emerging Markets: https://podcasts.apple.com/us/podcast/linda-rottenberg-endeavor-co-founder-and-ceo/id1476885647?i=1000517670973
Sanyin Siang shares highlights from her journey: how to accept positive affirmations and constructive criticism as data points in your life, the importance of being generous, and how to be vulnerable.

Sanyin helps leaders launch and create value by focusing on mindset, behavioral change, and team and culture building. Sanyin is a CEO coach, advisor, author, the Executive Director of Duke University's Fuqua/Coach K Center on Leadership & Ethics (COLE), and a Professor with its Pratt School of Engineering. The COLE center is a leadership laboratory that engages all of Duke's Daytime MBA students and convenes high-level think tank gatherings to explore today's complex leadership opportunities and challenges.

Sanyin coaches C-suite executives and is in the original cohort of Marshall Goldsmith's 100 Coaches. She is an advisor for GV (formerly Google Ventures), Duke Corporate Education, and the Sports Innovation Lab. Her thought leadership has appeared in Forbes, Fortune, The Wall Street Journal, and CNN. She has more than 1 million LinkedIn followers. She is a LinkedIn 2017 & 2018 Top 10 Influencer and a 2018 Thinkers50 On the Radar honoree.

Sanyin's board service has included The Emily K Center, The Museum of Life & Science, and Duke Children's Hospital & Health Center. She is a Senior Advisor with Dan Ariely's Center for Advanced Hindsight and a faculty member with StoryLab at Duke. She has spoken to audiences from the White House to Global Sports Management and Owners Summits. Prior to Duke, Sanyin worked at the American Association for the Advancement of Science (AAAS), the world's largest federation of scientific and engineering societies and publisher of Science.
Her initiatives explored the ethical, social, and legal implications of technological advances before they became reality. Her book, The Launch Book: Motivational Stories for Launching Your Idea, Business, or Next Career, uses behavioral science principles to help readers build the mindset for addressing major change. Sanyin received a BSE in Biomedical Engineering and an MBA from Duke University.

Order "The Launch Book": https://www.amazon.com/gp/product/B074JC5L9V/ref=dbs_a_def_rwt_bibl_vppi_i0
Isomorphic Labs raised $600 million in its first external funding round to enhance artificial intelligence applications in drug development. Thrive Capital led the round, supported by Google Ventures and existing investor Alphabet. Isomorphic Labs, spun out of Google DeepMind in 2021, aims to advance its AI drug design engine and clinical programs. This funding follows significant raises in the AI-driven biotech sector, including Xaira Therapeutics, which secured $1 billion, and Formation Bio, which raised $372 million. Thrive Capital, co-founded by Joshua Kushner, has also led major investments in Databricks and OpenAI.

Learn more about this news at: https://greyjournal.net/news/

Hosted on Acast. See acast.com/privacy for more information.
#217: From the best AI tools to takeaways from losing a home in the LA fires, we dive deep into strategies for optimizing home insurance, the benefits of rope training and saunas, how to think about where to live, and more listener questions. Kevin Rose is a partner at True Ventures, an early-stage venture capital firm that has invested over $4 billion in a portfolio of 400+ founders. Kevin previously founded Digg, Zero, and Oak, and was a Partner at Google Ventures.

Link to Full Show Notes: https://chrishutchins.com/kevin-rose-ama-2-ai-tools-wildfires-fitness

Partner Deals
MasterClass: Learn from the world's best with 15% off
Bilt Rewards: Earn the most valuable points when you pay rent
Facet: Personalized Financial Planning + $250 enrollment fee waived
DeleteMe: 20% off removing your personal info from the web
Superhuman: Free month of the fastest and best email with code ALLTHEHACKS
For all the deals, discounts and promo codes from our partners, go to: chrishutchins.com/deals

Resources Mentioned
Kevin Rose: Community Forum | Newsletter
Digital Thermometers
Haven Sauna
Trumpkin's Notes On Building A Sauna
AI Tools: ChatGPT Pro | Claude | Gemini | Chatterbot | Google NotebookLM | ElevenLabs
Coding: Replit | Cursor | BoltAI
Simple AI Automation: Zapier | Make | Howie
Podcasts: Snipd
Note taking: Journal | Granola | Limitless

ATH Podcast
Ep #216: Purpose, Friendship, and Human Skills with Simon Sinek
Leave a review: Apple Podcasts | Spotify
Submit questions for our next AMA here
Email for questions, hacks, deals, and feedback: podcast@allthehacks.com

Full Show Notes
(00:00) Introduction
(02:16) How Kevin Generally Spends His Day
(04:28) Saunas: Impact and How to Efficiently Operate
(08:55) Rope Training and Exploring New Ideas
(11:46) ChatGPT Hack
(17:26) Kevin's Current AI Tool Stack
(24:27) The Impact of AI Tools on Consumer Apps
(28:09) Tools That Chris Uses
(30:57) AI Tools for Podcasts and Note-Taking
(37:53) Key Takeaways From the Recent L.A. Fires
(47:27) Chris and Kevin's Favorite Physical Items or Purchases
(50:00) Why You Should Record Your Belongings Every 6 Months
(52:55) How Chris and Kevin Evaluate Where to Live
(59:52) Ways to Stay Grounded and True to Yourself

Connect with Chris: Newsletter | Membership | X | Instagram | LinkedIn

Editor's Note: The content on this page is accurate as of the posting date; however, some of our partner offers may have expired. Opinions expressed here are the author's alone, not those of any bank, credit card issuer, hotel, airline, or other entity. This content has not been reviewed, approved or otherwise endorsed by any of the entities included within the post.

Learn more about your ad choices. Visit megaphone.fm/adchoices
The future of data infrastructure. We cover the explosion in compute demand, the petabytes of untapped enterprise data, energy-efficient GPUs, DeepSeek, the $500B Stargate project, and how AI is transforming data processing.

Craig Dunham is CEO of Voltron Data, a company at the forefront of accelerating data processing for AI, analytics, and enterprise-scale workloads. Voltron provides the infrastructure necessary to handle enormous amounts of data — transforming bottlenecks into breakthroughs. By championing open-source frameworks like Apache Arrow, Voltron is building the connective tissue that allows businesses to process data at orders-of-magnitude speed and efficiency, reshaping industries from finance to healthcare to national security — partnering with the likes of Snowflake and Meta. Voltron has established itself as a key part of the AI infrastructure stack and has raised a total of $110M from the likes of Coatue, Lightspeed, Google Ventures, and BlackRock.

Craig brings a deep background in scaling data infrastructure businesses. Before Voltron Data, he was the CEO of Lumar, a leading SaaS technical SEO platform. Prior to that, he held significant roles including General Manager at Guild Education and Seismic, where he led the integration of Seismic's acquisition of The Savo Group and drove go-to-market strategies in the financial services sector. Craig began his career in investment banking with Citi and Lehman Brothers before transitioning into technology. He holds an MBA from Northwestern University's Kellogg School of Management.

Sign up for new podcasts and our newsletter, and email me at danieldarling@focal.vc

See omnystudio.com/listener for privacy information.
Hey friends, Chase here. Today, I want to talk about something we all face at some point—the moment when what once felt like a dream becomes a trap. It's a tough place to be, but if we can recognize it early, we can turn it into an opportunity for growth and reinvention.

When Success Becomes Stifling
Back in 2010, I co-founded CreativeLive with a simple but ambitious idea: to give the world access to free, world-class creative education. We started small, but over time, we grew to serve tens of millions of people and raised more than $60 million from top-tier investors like Richard Branson, Reid Hoffman, and Google Ventures. At first, my unconventional background as a creator—not an MBA—was celebrated. I brought a personal brand and a community along for the ride, and it seemed like a competitive advantage. But eventually, the pressure to conform crept in. The same people who once admired my unique approach began questioning it.

The Subtle Drift Away from Purpose
What started as small adjustments in how I operated became a slow drift off course. It wasn't dramatic at first—just a few degrees off. But after walking that path for long enough, I found myself far from where I wanted to be. The very thing I had built to inspire creativity felt stifling and uncreative. And here's the thing—this story isn't unique to me. It happens to many of us. The artist who becomes a manager. The entrepreneur buried in spreadsheets. The corporate climber who wakes up one day and wonders how they got there. If you've ever felt that way—or if you do in the future—this is for you.

Signs It's Time for a Change

Listen to Your Gut
When something feels off for too long, it's your gut telling you it's time to move. Ignoring that voice comes with real consequences—stress, poor health, and a fog that clouds your creativity and joy.

Nothing Lasts Forever
Even the most exciting roles can run their course. Dreams change, and that's okay.
Letting go of a title or identity that no longer serves you is hard, but it's often necessary for growth.

Build Your Escape Pod Early
Don't wait for a crisis to figure out your next move. Start building skills, connections, and momentum now. When it's time to make a change, you'll be ready to transition on your terms—not out of desperation.

Embrace Change as a Constant
Change is inevitable. It's uncomfortable, but it's almost always a catalyst for something better. When you start hearing that little voice whispering about a new chapter, don't ignore it. Trust it. Your future self will thank you.

Until next time, take care and keep pushing toward what feels true and right for you. Enjoy!
George Sivulka, hailed as a wunderkind by Peter Thiel, is the founder of Hebbia, a groundbreaking AI company revolutionizing financial research. Having worked at NASA as a teenager, he later earned a math degree from Stanford in just 2.5 years. George has secured $130M in funding for Hebbia at a $700M valuation, backed by Andreessen Horowitz, Index Ventures, Google Ventures, and Thiel himself. Hebbia's AI platform, trusted by firms like Centerview Partners and Charlesbank, has driven 15x revenue growth in 18 months by pioneering RAG techniques that ensure LLM outputs remain within company-sanctioned data.

In this conversation, we discuss:
• How Hebbia is reshaping financial research with retrieval-augmented generation (RAG).
• The leap from academia to entrepreneurship—how George Sivulka turned research into real-world impact.
• The expanding role of AI in high-stakes industries where precision is critical.
• The ethical challenges of AI and how transparency can make or break trust.
• Why AI agents will change knowledge work—but not in the way you might think.
• The future of AI-powered research and its impact on decision-making.

Resources:
• Subscribe to the AI & The Future of Work Newsletter: https://aiandwork.beehiiv.com/subscribe
• Connect with Sivulka on LinkedIn: https://www.linkedin.com/in/sivulka/
• AI fun fact article: https://www.chicagobooth.edu/review/what-ai-sees-market-that-you-might-not
• On How AI-Powered Writing is Reshaping Enterprise Content Creation: https://podcasts.apple.com/us/podcast/may-habib-ceo-of-writer-discusses-llms-and-the/id1476885647?i=1000628256870
Design sprints have become a staple of the creative process at companies around the world and an indispensable tool in the pursuit of innovation. We owe a debt of thanks to Jake Knapp and his former colleagues at Google Ventures (now known as GV), who pioneered the design sprint.

Visit our Substack for bonus content and more: https://designbetterpodcast.com/p/jake-knapp-click

There is one gap that design sprints have not entirely addressed, though. What do you do if you're starting a new product or company from scratch? That is the subject of Jake Knapp and co-author John Zeratsky's newest book, Click: How to Make What People Want. In this book, Jake lays out the elements of what he calls a "foundation sprint." We chat with Jake about what makes a foundation sprint different from a design sprint, and about some examples from the book of companies that have used foundation sprints effectively. We also talk to Jake about his decision to start Character, a VC fund aimed at helping startups at seed stage with capital and sprints, and the qualities they look for in founders when deciding to invest.

Pre-order "Click"

Bio
Jake Knapp is a New York Times bestselling author and co-founder of Character. Previously, Jake built products like Microsoft Encarta and Gmail, co-founded Google Meet, and invented the Design Sprint. He has coached hundreds of teams at places like Miro, Slack, LEGO, IDEO, and NASA on product strategy and time management, and is a guest instructor at Harvard Business School. This is Jake's third appearance on Design Better. In his first interview with us, he discusses Sprint, and in his second interview he talks about his (and John Zeratsky's) book Make Time.

Books & links mentioned
• Ten things we know to be true
• The Making of Prince of Persia
• The Rest Is History podcast
• Met Opera on Demand
• https://jakek.medium.com/

***
Premium Episodes on Design Better
This ad-supported episode is available to everyone.
If you'd like to hear it ad-free, upgrade to our premium subscription, where you'll get an additional 2 ad-free episodes per month (4 total).

✨ New benefits: Premium subscribers also get access to the documentary Design Disruptors and our growing library of books, as well as our monthly AMAs with former guests, ad-free episodes, discounts and early access to workshops, and our monthly newsletter The Brief, which compiles salient insights, quotes, readings, and creative processes uncovered in the show.

Upgrade to paid

***
Visiting the links below is one of the best ways to support our show:

MasterClass: MasterClass is the only streaming platform where you can learn and grow with over 200 of the world's best. People like Steph Curry, Paul Krugman, Malcolm Gladwell, Diane von Furstenberg, Margaret Atwood, LeVar Burton and so many more inspiring thinkers share their wisdom in a format that is easy to follow and can be streamed anywhere on a smartphone, computer, smart TV, or even in audio mode. MasterClass always has great offers during the holidays, sometimes as much as 50% off. Head over to http://masterclass.com/designbetter for the current offer.

DUER: Eli and I are busy people. When we're not in the studio producing the podcast and publishing new articles, we're often doing something active—building, cooking, or on an adventure with family. Work and life blend together, and DUER makes clothing for people like us. DUER creates performance denim and lifestyle apparel that is made for doing. Check out DUER's flagship stores in LA or Denver, or order now at shopduer.com/DESIGNBETTER. When you use our exclusive URL, you'll get 20% off your first purchase.

If you're interested in sponsoring the show, please contact us at: sponsors@thecuriositydepartment.com
If you'd like to submit a guest idea, please contact us at: contact@thecuriositydepartment.com
Chandra Janakiraman is the chief product officer, executive vice president, and a board member at VRChat. Previously, he was a product leader at Meta, where he led Facebook's social experience interfaces and Reality Labs' growth; served as CPO at Headspace, where he helped relaunch the platform, driving a 4x subscriber boost; and was a GM at Zynga, delivering massive hit games that reached hundreds of millions.

In our conversation, Chandra shares:
• His playbook for developing a product strategy
• The difference between “small s” and “big S” strategy
• How to run strategy sprints
• Who should be involved in strategy work
• Common pitfalls in strategy development
• The role of AI in future strategy development
• More

Brought to you by:
• Eppo—Run reliable, impactful experiments
• Airtable ProductCentral—Launch to new heights with a unified system for product development
• OneSchema—Import CSV data 10x faster

Find the transcript at: https://www.lennysnewsletter.com/p/an-operators-guide-to-product-strategy-chandra-janakiraman

Where to find Chandra Janakiraman:
• LinkedIn: https://www.linkedin.com/in/chandramohanj/

Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:
(00:00) Chandra's background
(04:47) The importance of strategy
(12:40) Defining product strategy
(15:42) Developing a winning strategy: an overview
(18:51) The preparation phase
(30:46) The strategy sprint process
(45:51) The design sprint
(51:19) Document writing
(57:39) Rolling out your strategy
(01:01:28) Resourcing and roadmapping
(01:04:42) Strategy lessons from Zynga
(01:11:34) Strategy lessons from Meta
(01:15:55) Big S strategy
(01:26:58) AI in strategy formulation
(01:38:12) Final thoughts and lightning round

Referenced:
• Headspace: https://www.headspace.com/
• Good Strategy, Bad Strategy | Richard Rumelt: https://www.lennysnewsletter.com/p/good-strategy-bad-strategy-richard
• 5 essential questions to craft a winning strategy | Roger Martin (author, advisor, speaker): https://www.lennysnewsletter.com/p/the-ultimate-guide-to-strategy-roger-martin
• VRChat: https://hello.vrchat.com/
• Andrew Chen on LinkedIn: https://www.linkedin.com/in/pmandrewchen/
• Template: Working Backwards PR FAQ: https://www.workingbackwards.com/resources/working-backwards-pr-faq
• How LinkedIn became interesting: The inside story | Tomer Cohen (CPO at LinkedIn): https://www.lennysnewsletter.com/p/how-linkedin-became-interesting-tomer-cohen
• Making time for what matters | Jake Knapp and John Zeratsky (authors of Sprint and Make Time, co-founders of Character Capital): https://www.lennysnewsletter.com/p/making-time-for-what-matters-jake
• Identify your bullseye customer in one day | Michael Margolis (UX Research Partner at Google Ventures): https://www.lennysnewsletter.com/p/finding-your-bullseye-customer-michael-margolis
• Chandra's flow chart: https://docs.google.com/document/d/1SLmQ0oRFadzJnNM3MJetnLUvB18U4W4GXU4KtJ2ujEQ/edit?tab=t.0
• Chandra's strategy template: https://docs.google.com/document/d/1iNeYUaMnpicvkpVZO-gj9cCxLeHfWN0xtGm_QoxgemE/edit?tab=t.0#heading=h.5d3jz6v86yrs
• Zynga: https://www.zynga.com/
• David Foster Wallace's quote about water: https://www.goodreads.com/quotes/97082-there-are-these-two-young-fish-swimming-along-and-they
• Oculus: https://en.wikipedia.org/wiki/Oculus
• Elon Musk's quote: https://www.youtube.com/shorts/wf8TadbGYok
• Concept car: https://en.wikipedia.org/wiki/Concept_car
• Acquired podcast: The Mark Zuckerberg interview: https://www.acquired.fm/episodes/the-mark-zuckerberg-interview
• Armand Ruiz on LinkedIn: https://www.linkedin.com/in/armand-ruiz/
• What is a multi-armed bandit? Full explanation: https://amplitude.com/explore/experiment/multi-armed-bandit
• IF on Prime Video: https://www.amazon.com/IF-John-Krasinski/dp/B0CW19SCVW
• Dune: Part 2 on AppleTV+: https://tv.apple.com/us/movie/dune-part-two/umc.cmc.363aycnv6vy9qgekvew6fveb9
• Dune Prophecy on Max: https://www.max.com/shows/dune-prophecy-2024/57660b16-a32a-476f-89da-3302ac379e91
• Capybara Go on the App Store: https://apps.apple.com/ph/app/capybara-go/id6596787726
• Bluesky: https://bsky.app/
• Steve Jobs: The Lost Interview on Prime Video: https://www.amazon.com/Steve-Jobs-Lost-Interview/dp/B01IJD1BES

Recommended books:
• The Art of War: https://www.amazon.com/Art-War-Sun-Tzu/dp/1599869772
• Competitive Strategy: Techniques for Analyzing Industries and Competitors: https://www.amazon.com/Competitive-Strategy-Techniques-Industries-Competitors/dp/0684841487/
• Good Strategy Bad Strategy: The Difference and Why It Matters: https://www.amazon.com/Good-Strategy-Bad-Difference-Matters/dp/0307886239/
• Playing to Win: How Strategy Really Works: https://www.amazon.com/Playing-Win-Strategy-Really-Works/dp/142218739X
• Make Time: How to Focus on What Matters Every Day: https://www.amazon.com/Make-Time-Focus-Matters-Every/dp/0525572422
• Sprint: https://www.amazon.com/SPRINT-Jake-Zeratsky-Knapp/dp/0593076117
• Walt Disney: The Triumph of the American Imagination: https://www.amazon.com/Walt-Disney-Triumph-American-Imagination/dp/0679757473
• Creativity, Inc.: Overcoming the Unseen Forces That Stand in the Way of True Inspiration: https://www.amazon.com/Creativity-Inc-Expanded-Overcoming-Inspiration/dp/0593594649/
• The Ten Faces of Innovation: Strategies for Heightening Creativity: https://www.amazon.com/dp/0385512074

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

Lenny may be an investor in the companies discussed. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
Today we have another episode of Better Done Than Perfect. Listen in as we talk to Michael Margolis, UX Research Partner at Google Ventures. You'll learn about their bullseye customer sprint, the difference between an ideal customer profile and the bullseye customer, how to apply prototypes, and more. Please head over to the episode page for the detailed recap and key takeaways.

Show notes
• Learn More Faster – Michael's book
• Follow Michael on LinkedIn

Thanks for listening! If you found the episode useful, please spread the word about this new show on Twitter mentioning @userlist, or leave us a review on iTunes.

Sponsor
This show is brought to you by Userlist — an email automation platform for SaaS companies. It matches the complexity of your customer data, including many-to-many relationships between users and companies. Book your demo call today at userlist.com. Interested in sponsoring an episode? Learn more here.

Leave a Review
Reviews are hugely important because they help new people discover this podcast. If you enjoyed listening to this episode, please leave a review on iTunes. Here's how.
How do you narrow down to your perfect customer? In this episode, we talk to Michael Margolis, UX Research Partner at Google Ventures. You'll learn about their bullseye customer sprint, the difference between an ideal customer profile and the bullseye customer, how to apply prototypes, and more. Visit our website for the detailed episode recap with key learnings.

• Learn More Faster – Michael's book
• Follow Michael on LinkedIn

Thanks for listening! If you found the episode useful, please spread the word about the show on Twitter mentioning @userlist, or leave us a review on iTunes.

Sponsor
This show is brought to you by Userlist — an email automation platform for SaaS companies. It matches the complexity of your customer data, including many-to-many relationships between users and companies. Book your demo call today at userlist.com.
For episode 10, Layman sits down again with Ēlen Awalom, this time to talk about unhealthy and healthy gender models, and the roles of attachment, trauma, and development in contemporary gender relations. Ēlen Awalom is an Eritrean-American Somatic Experiencing practitioner, writer, entrepreneur, and cultural commentator. She lives and works in DC, where she is the founder of The Almaz Institute, an organization that provides trauma-sensitive leadership and communication trainings steeped in the most cutting-edge insights from developmental and somatic psychology. In a previous life she was an exhibiting Afrofuturist visual artist, a women's dating coach focused on developing securely attached relationships, a political activist, and a serial tech entrepreneur who successfully raised millions of dollars from Google Ventures.

Ēlen's professional website: https://www.elenawalom.com/
Happy holidays! We'll be sharing snippets from Latent Space LIVE! through the break, bringing you the best of 2024! We want to express our deepest appreciation to event sponsors AWS, Daylight Computer, Thoth.ai, StrongCompute, Notable Capital, and most of all our LS supporters who helped fund the gorgeous venue and A/V production!For NeurIPS last year we did our standard conference podcast coverage interviewing selected papers (which we have now also done for ICLR and ICML), but we felt we could be doing more to help AI Engineers 1) get more industry-relevant content, and 2) recap the 2024 year in review from experts. As a result, we organized the first Latent Space LIVE!, our first in-person miniconference, at NeurIPS 2024 in Vancouver.The single most requested domain was computer vision, and we could think of no one better to help us recap 2024 than our friends at Roboflow, who were one of our earliest guests in 2023 and had one of this year's top episodes in 2024 again. Roboflow has since raised a $40m Series B!LinksTheir slides are here:All the trends and papers they picked:* Isaac Robinson* Sora (see our Video Diffusion pod) - extending diffusion from images to video* SAM 2: Segment Anything in Images and Videos (see our SAM2 pod) - extending prompted masks to full video object segmentation* DETR Dominance: DETRs show Pareto improvement over YOLOs* RT-DETR: DETRs Beat YOLOs on Real-time Object Detection* LW-DETR: A Transformer Replacement to YOLO for Real-Time Detection* D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement* Peter Robicheaux* MMVP (Eyes Wide Shut?
Exploring the Visual Shortcomings of Multimodal LLMs)* Florence 2 (Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks)* PaliGemma / PaliGemma 2* PaliGemma: A versatile 3B VLM for transfer* PaliGemma 2: A Family of Versatile VLMs for Transfer* AIMv2 (Multimodal Autoregressive Pre-training of Large Vision Encoders)* Vik Korrapati - MoondreamFull Talk on YouTubeWant more content like this? Like and subscribe to stay updated on our latest talks, interviews, and podcasts.Transcript/Timestamps[00:00:00] Intro[00:00:05] AI Charlie: Welcome to Latent Space Live, our first mini conference held at NeurIPS 2024 in Vancouver. This is Charlie, your AI co-host. When we were thinking of ways to add value to our academic conference coverage, we realized that there was a lack of good talks just recapping the best of 2024, going domain by domain.[00:00:36] AI Charlie: We sent out a survey to the over 900 of you who told us what you wanted, and then invited the best speakers in the Latent Space Network to cover each field. 200 of you joined us in person throughout the day, with over 2,200 watching live online. Our second featured keynote is The Best of Vision 2024, with Peter Robicheaux and Isaac [00:01:00] Robinson of Roboflow, with a special appearance from Vik Korrapati of Moondream.[00:01:05] AI Charlie: When we did a poll of our attendees, the highest-interest domain of the year was vision. And so our first port of call was our friends at Roboflow. Joseph Nelson helped us kickstart our vision coverage in episode 7 last year, and this year came back as a guest host with Nikhila Ravi of Meta to cover Segment Anything 2.[00:01:25] AI Charlie: Roboflow have consistently been the leaders in open source vision models and tooling, with their Supervision library recently eclipsing PyTorch's vision library, and Roboflow Universe hosting hundreds of thousands of open source vision datasets and models.
They have since announced a $40 million Series B led by Google Ventures.[00:01:46] AI Charlie: Woohoo.[00:01:48] Isaac's picks[00:01:48] Isaac Robinson: Hi, we're Isaac and Peter from Roboflow, and we're going to talk about the best papers of 2024 in computer vision. So, for us, we defined best as what made [00:02:00] the biggest shifts in the space. And to determine that, we looked at what are some major trends that happened and what papers most contributed to those trends.[00:02:09] Isaac Robinson: So I'm going to talk about a couple trends, Peter's going to talk about a trend, and then we're going to hand it off to Moondream. So, the trends that I'm interested in talking about are a major transition from models that run on a per-image basis to models that run using the same basic ideas on video, and also how DETRs are starting to take over the real-time object detection scene from the YOLOs, which have been dominant for years.[00:02:37] Sora, OpenSora and Video Vision vs Generation[00:02:37] Isaac Robinson: So as a highlight we're going to talk about Sora, which from my perspective is the biggest paper of 2024, even though it came out in February. Is the what?[00:02:48] Isaac Robinson: Yeah. Yeah. So, Sora is just a blog post. So I'm going to fill it in with details from replication efforts, including OpenSora and related work, such as Stable Video Diffusion. And then we're also going to talk about SAM 2, which applies the SAM strategy to video, and then the improvements in 2024 to DETRs that are making them a Pareto improvement over YOLO-based models.[00:03:15] Isaac Robinson: So to start this off, we're going to talk about the state of the art of video generation at the end of 2023: MagViT. MagViT is a discrete-token video tokenizer akin to VQ-GAN, but applied to video sequences.
And it actually outperforms state-of-the-art handcrafted video compression frameworks[00:03:38] Isaac Robinson: in terms of bit rate versus human preference for quality. And autoregressing on these discrete tokens generates some pretty nice videos, but only up to about five seconds long and, you know, not super detailed. And then suddenly a few months later we have this, which when I saw it was totally mind-blowing to me.[00:03:59] Isaac Robinson: 1080p, [00:04:00] a whole minute long. We've got light reflecting in puddles. That's reflective. Reminds me of those RTX demonstrations for next-generation video games, such as Cyberpunk, but with better graphics. You can see some issues in the background if you look closely, but as with a lot of these models, the issues tend to be things that people aren't going to pay attention to unless they're looking for them.[00:04:24] Isaac Robinson: In the same way that six fingers on a hand is a giveaway you're not going to notice unless you're looking for it. So yeah, as we said, Sora does not have a paper, so we're going to be filling it in with context from the rest of the computer vision scene attempting to replicate these efforts. So the first step: you have an LLM caption a huge amount of videos.[00:04:48] Isaac Robinson: This is a trick that they introduced in DALL-E 3, where they train an image captioning model to generate very high quality captions for a huge corpus and then train a diffusion model [00:05:00] on that. The Sora replication efforts also show a bunch of other steps that are necessary for good video generation,[00:05:09] Isaac Robinson: including filtering by aesthetic score and filtering by making sure the videos have enough motion, so the generator isn't just learning to generate static frames. Then we encode our video into a series of spacetime latents.
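The data recipe Isaac describes (LLM recaptioning, aesthetic and motion filtering, then encoding into spacetime latents) can be sketched as a toy pipeline. This is not code from any actual Sora replication; every function name here is a hypothetical stand-in, and the "clips" are just lists of numbers standing in for frames.

```python
# Toy sketch of the Sora-replication data recipe (all names hypothetical):
# recaption every clip, drop low-aesthetic and near-static clips, then
# encode survivors into latents for the diffusion transformer.

def motion_score(frames):
    # crude stand-in for optical flow: mean absolute frame-to-frame change
    diffs = [abs(a - b) for a, b in zip(frames, frames[1:])]
    return sum(diffs) / max(len(diffs), 1)

def build_training_set(clips, captioner, aesthetic, encode_3d,
                       min_aesthetic=5.0, min_motion=0.5):
    dataset = []
    for frames in clips:
        if aesthetic(frames) < min_aesthetic:
            continue                      # drop low-quality clips
        if motion_score(frames) < min_motion:
            continue                      # drop near-static clips
        caption = captioner(frames)       # DALL-E-3-style synthetic caption
        dataset.append((encode_3d(frames), caption))
    return dataset

# Tiny demo with scalar "frames" and stub models:
clips = [[0, 0, 0, 0],        # static clip -> filtered out
         [0, 2, 0, 3]]        # moving clip -> kept
data = build_training_set(
    clips,
    captioner=lambda f: f"clip of length {len(f)}",
    aesthetic=lambda f: 6.0,
    encode_3d=lambda f: [x / 10 for x in f],
)
print(len(data))  # prints 1: the static clip is dropped
```

The real pipelines replace these stubs with an actual captioning model, an aesthetic predictor, optical flow, and a 3D VAE, but the filter-then-encode shape is the same.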
Once again, Sora is very sparse on details.[00:05:29] Isaac Robinson: So for the replication-related works: OpenSora actually uses MagViT-v2 itself to do this, but swaps out the discretization step with a classic VAE autoencoder framework. They show that there's a lot of benefit from getting the temporal compression, which makes a lot of sense, as sequential frames in videos hold mostly redundant information.[00:05:53] Isaac Robinson: So by compressing in the temporal dimension, you allow the latent to hold [00:06:00] a lot more semantic information while avoiding that duplication. So, we've got our spacetime latents, possibly via some 3D VAE, presumably a MagViT-v2, and then you throw it into a diffusion transformer.[00:06:19] Isaac Robinson: So I think it's personally interesting to note that OpenSora is using a MagViT-v2, which originally used an autoregressive transformer decoder to model the latent space, but is now using a diffusion transformer. So it's still a transformer happening. The question is just: is it[00:06:37] Isaac Robinson: parameterizing the stochastic differential equation, or parameterizing a conditional distribution via autoregression? It's also worth noting that most diffusion models today, the very high performance ones, are switching away from the classic DDPM (denoising diffusion probabilistic modeling) framework to rectified flows.[00:06:57] Isaac Robinson: Rectified flows have a very interesting property: as [00:07:00] they converge, they actually get closer to being able to be sampled with a single step, which means that in practice you can generate high quality samples much faster. A major problem of DDPM and related models for the past four years has been that they require many, many steps to generate high quality samples.[00:07:22] Isaac Robinson: So, naturally, the third step is throwing lots of compute at the problem.
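The single-step property of rectified flows mentioned above can be illustrated with a toy 1D example. This is not any paper's implementation, just a sketch: sampling is Euler integration of a velocity field from t=0 to t=1, and if the flow is perfectly "rectified" (straight-line paths), one Euler step already lands on the target, while a DDPM-style sampler would need many steps.

```python
# Toy illustration: Euler-integrate a velocity field v(x, t) from noise
# (t=0) to data (t=1). For a perfectly straight flow, one step suffices.
def euler_sample(x0, velocity, steps):
    x, t = x0, 0.0
    dt = 1.0 / steps
    for _ in range(steps):
        x = x + dt * velocity(x, t)
        t += dt
    return x

x0, x1 = -3.0, 5.0                # noise sample -> data sample
v = lambda x, t: x1 - x0          # perfectly rectified (straight) flow
one_step = euler_sample(x0, v, steps=1)
many_steps = euler_sample(x0, v, steps=100)
print(one_step, many_steps)       # both reach 5.0 (up to float error)
```

In a real model the learned velocity field is only approximately straight, so a handful of steps is still needed, but far fewer than the hundreds DDPM sampling requires.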
So I never figured out how to get this video to loop, but we see very little compute, medium compute, lots of compute. This is so interesting because the original diffusion transformer paper from Facebook actually showed that, in fact, the specific hyperparameters of the transformer didn't really matter that much.[00:07:48] Isaac Robinson: What mattered was that you were just increasing the amount of compute that the model had. So, I love how in the, once again, little blog post, they don't even talk about [00:08:00] the specific hyperparameters. They say, we're using a diffusion transformer, and we're just throwing more compute at it, and this is what happens.[00:08:08] Isaac Robinson: OpenSora shows similar results. The primary issue, I think, is that no one else has a 32x compute budget, so we end up in the middle of the domain in most of the related work, which is still super, super cool. It's just a little disappointing considering the context. So I think this is a beautiful extension of the framework that was introduced in '22 and '23 for very high quality per-image generation, extended to videos.[00:08:39] Isaac Robinson: It's awesome. And it's GA as of Monday, except no one can seem to get access to it because they keep shutting down the login.[00:08:46] SAM and SAM2[00:08:46] Isaac Robinson: The next paper I wanted to talk about is SAM. So we at Roboflow allow users to label data and train models on that data. SAM, for us, has saved our users 75 years of [00:09:00] labeling time.[00:09:00] Isaac Robinson: We are, to the best of my knowledge, the largest SAM API that exists.
SAM also allows us to have our users train pure bounding box regression models and use those to generate high quality masks, which has the great side effect of requiring less training data to reach a meaningful convergence.[00:09:20] Isaac Robinson: Most people are data-limited in the real world, so anything that requires less data to get to a useful result is super useful. Many of our users actually run their per-frame object detectors on every frame in a video. And so SAM 2 falls into this category of taking something that really, really works and applying it to video, which has the wonderful benefit of being plug-and-play with many of our users' use cases.[00:09:53] Isaac Robinson: We're still building out a sufficiently mature pipeline to take advantage of that, but it's in the works. [00:10:00] So here we've got a great example. We can click on cells and then follow them. You even notice the cell goes away and comes back and we can still keep track of it, which is very challenging for existing object trackers.[00:10:14] Isaac Robinson: High-level overview of how SAM 2 works: there's a simple pipeline here where we can provide some type of prompt and it fills out the rest of the likely masks for that object throughout the rest of the video. So here we're giving a bounding box in the first frame, a set of positive/negative points, or even just a simple mask.[00:10:36] Isaac Robinson: I'm going to assume people are somewhat familiar with SAM, so I'm going to just give a high-level overview of how SAM works. You have an image encoder that runs on every frame.
SAM 2 can be used on a single image, in which case the only difference between SAM 2 and SAM is the image encoder: SAM used a standard ViT, [00:11:00] and SAM 2 replaced that with Hiera, a hierarchical encoder, which gets approximately the same results but leads to six times faster inference, which is[00:11:11] Isaac Robinson: excellent, especially considering how a trend of '23 was replacing the ViT with more efficient backbones. In the case where you're doing video segmentation, the difference is that you actually create a memory bank, and you cross-attend the features from the image encoder against the memory bank.[00:11:31] Isaac Robinson: So the feature set that is created is essentially, well, I'll go more into it in a couple of slides, but we take the features from the past couple frames, plus a set of object pointers and the set of prompts, and use that to generate our new masks. We then fuse the new masks for this frame with the[00:11:57] Isaac Robinson: image features and add that to the memory bank. [00:12:00] It's, well, I'll say more in a minute. Just like SAM, SAM 2 actually uses a data engine to create its dataset: they assembled a huge amount of reference data, used people to label some of it, trained the model, used the model to label more of it, and asked people to refine the predictions of the model.[00:12:20] Isaac Robinson: And then ultimately the dataset is just created from the final output of the model on the reference data. This paradigm is so interesting to me because it unifies a model and a dataset in a way that is very unique. It seems unlikely that another model could come in and have such a tight fit.[00:12:37] Isaac Robinson: So, brief overview of how the memory bank works. The paper did not have a great visual, so I'm going to fill in a bit more. So we take the last couple of frames from our video.
We attend to those, along with the set of prompts that we provided (they could come from the future, [00:13:00] they could come from anywhere in the video), as well as reference object pointers saying, by the way, here's what we've found so far. Attending to the last few frames has the interesting benefit of allowing it to model complex object motion without actually[00:13:18] Isaac Robinson: attending to the whole video. By limiting the amount of frames that you attend to, you manage to keep the model running in real time. This is such an interesting topic for me, because one would assume that attending to all of the frames, or having some type of summarization of all the frames, is super essential for high performance.[00:13:35] Isaac Robinson: But we see in their later ablation that that actually is not the case. So here, just to make sure that there is some benchmarking happening, we compared to some of the stuff that came out prior, and indeed the SAM 2 strategy does improve on the state of the art. This ablation deep in their appendices was super interesting to me.[00:13:59] Isaac Robinson: [00:14:00] We see in section C the number of memories. One would assume that increasing the count of memories would meaningfully increase performance, and we see that it has some impact, but not the type that you'd expect, and that it meaningfully decreases speed, which justifies, in my mind, just having this FIFO queue of memories.[00:14:20] Isaac Robinson: Although in the future, I'm super interested to see a more dedicated summarization of all of the past video, not just a stacking of the last frames.
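The FIFO memory bank described above can be sketched in a few lines. This is a hypothetical interface, not SAM 2's actual code: the point is only that memory is bounded to the last N frames (plus object pointers and prompts), so per-frame cost stays constant regardless of video length.

```python
# Minimal sketch of a SAM2-style FIFO memory bank (interfaces hypothetical).
from collections import deque

class MemoryBank:
    def __init__(self, max_memories=6):
        self.frames = deque(maxlen=max_memories)  # FIFO: old frames fall out
        self.object_pointers = []

    def condition(self, frame_features, prompts):
        # stand-in for cross-attention: the mask decoder would attend to
        # past frame features, object pointers, and the user prompts
        context = list(self.frames) + self.object_pointers + prompts
        return frame_features, context

    def update(self, frame_features, mask):
        self.frames.append((frame_features, mask))  # fuse mask with features

bank = MemoryBank(max_memories=2)
for i in range(5):
    feats, ctx = bank.condition(frame_features=i, prompts=["click"])
    bank.update(feats, mask=f"mask{i}")
print(len(bank.frames))  # prints 2: bounded, however many frames we process
```

The ablation Isaac cites is essentially about `max_memories`: raising it barely helps accuracy but hurts speed, which is why a small fixed-size queue is a reasonable design.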
So that's another extension of beautiful per-frame work into the video domain.[00:14:42] Realtime detection: DETRs > YOLO[00:14:42] Isaac Robinson: The next trend I'm interested in talking about is this: at Roboflow, we're super interested in training real-time object detectors.[00:14:50] Isaac Robinson: Those are our bread and butter. And so we're doing a lot to keep track of what is actually happening in that space. We are finally starting to see something change. So, [00:15:00] for years, YOLOs have been the dominant way of doing real-time object detection, and we can see here that they've essentially stagnated.[00:15:08] Isaac Robinson: The performance between v10 and v11 is not meaningfully different, at least, you know, in this type of high-level chart. And even over the last couple series, there's not a major change. So YOLOs have hit a plateau; DETRs have not. We can look here and see the YOLO series has this plateau, and then RT-DETR, LW-DETR, and D-FINE have meaningfully changed that plateau, so that in fact the best D-FINE models are plus[00:15:43] Isaac Robinson: 4.6 AP on COCO at the same latency. So, three major steps to accomplish this. The first, RT-DETR, is technically a 2023 preprint but was published officially in '24, so I'm going to include that. I hope that's okay. [00:16:00] RT-DETR showed that we could actually match or outspeed YOLOs.[00:16:04] Isaac Robinson: And then LW-DETR showed that pre-training is hugely effective on DETRs and much less so on YOLOs. And then D-FINE added the types of bells and whistles that we expect from this arena. So the major improvement that RT-DETR showed was taking the multi-scale features that DETRs typically pass into their encoder and decoupling them into a much more efficient transformer encoder.[00:16:30] Isaac Robinson: The transformer is, of course, quadratic complexity.
So decreasing the amount of stuff that you pass in at once is super helpful for increasing your runtime or your throughput. That change basically brought us up to YOLO speed, and then they do a hardcore analysis on benchmarking YOLOs, including the NMS step.[00:16:54] Isaac Robinson: Once you include the NMS in the latency calculation, you see that in fact these DETRs [00:17:00] are outperforming, at least at this point, the YOLOs that existed. Then LW-DETR goes in and suggests that in fact the huge boost here is from pre-training. So, this is the D-FINE line, and this is the D-FINE line without pre-training.[00:17:19] Isaac Robinson: It's within range, it's still an improvement over the YOLOs, but the really huge boost comes from the benefit of pre-training. When YOLOX came out in 2021, they showed that they got much better results with a much, much longer training time, but they found that when they did that, they actually did not benefit from pre-training.[00:17:40] Isaac Robinson: So, you see in this graph from LW-DETR that in fact YOLOs do have a real benefit from pre-training, but it goes away as you increase the training time. The DETRs, meanwhile, converge much faster: LW-DETR trains for only 50 epochs, RT-DETR for 60 epochs. So, one could assume that, in fact, [00:18:00] the entire extra gain from pre-training is that you're not destroying your original weights[00:18:06] Isaac Robinson: by relying on this long training cycle. And then LW-DETR also shows superior performance on our favorite dataset, Roboflow 100, which means that they do better on the real world, not just on COCO. Then D-FINE throws all the bells and whistles at it. YOLO models tend to have a lot of very specific, complicated loss functions; D-FINE brings that into the DETR world and shows consistent improvement on a variety of DETR-based frameworks.
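The benchmarking point above is that YOLO-style detectors are not finished at the raw forward pass: they still need non-maximum suppression, so a fair latency comparison against NMS-free DETRs must time the whole pipeline. As a sketch of what that extra step does, here is a minimal greedy NMS (toy boxes, not any library's implementation):

```python
# Minimal greedy NMS: the post-processing step that must be included when
# timing YOLO-style detectors. Boxes are (x1, y1, x2, y2).
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, iou_thresh=0.5):
    # keep highest-scoring boxes, suppress anything overlapping a kept box
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # prints [0, 2]: the overlapping pair collapses
```

Greedy NMS is sequential and depends on how many raw boxes the model emits, which is exactly why leaving it out of a latency benchmark flatters YOLOs relative to end-to-end DETRs.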
So bring these all together and we see that suddenly we have almost 60 AP on COCO while running in like 10 milliseconds. Huge, huge stuff. So we're spending a lot of time trying to build models that work better with less data, and DETRs are clearly becoming a promising step in that direction.[00:18:56] Isaac Robinson: What we're interested in seeing [00:19:00] from the DETRs in this trend next is: Co-DETR and the models that are currently sitting on top of the leaderboard for large-scale inference scale really well as you switch out the backbone. We're very interested in seeing, and having people publish a paper, potentially us, on what happens if you take these real-time ones and then throw a Swin at it.[00:19:23] Isaac Robinson: Like, do we have a Pareto curve that extends from the real-time domain all the way up to the super, super slow but high-performance domain? We also want to see people benchmarking on RF100 more, because that type of data is what's relevant for most users. And we want to see more pre-training, because pre-training works now.[00:19:43] Isaac Robinson: It's super cool.[00:19:48] Peter's Picks[00:19:48] Peter Robicheaux: Alright, so, yeah, in that theme, one of the big things that we're focusing on is how do we get more out of our pre-trained models. And one of the lenses to look at this through is sort of [00:20:00] this new requirement for fine-grained visual details in the representations that are extracted from your foundation model.[00:20:08] Peter Robicheaux: So as a hook for this, oh yeah, this is just a list of all the papers that I'm going to mention. I just want to make sure I cite the actual paper so you can find it later.[00:20:18] MMVP (Eyes Wide Shut?
Exploring the Visual Shortcomings of Multimodal LLMs)[00:20:18] Peter Robicheaux: Yeah, so the big hook here is that I make the claim that LLMs can't see. If you go to Claude or ChatGPT and you ask it to look at this watch and tell you what time it is, it fails, right?[00:20:34] Peter Robicheaux: And so you could say, like, maybe this is a very classic test of an LLM, but you could say, okay, maybe this image is too zoomed out, and it'll do better if we increase the resolution, and it has an easier time finding these fine-grained features, like where the watch hands are pointing.[00:20:53] Peter Robicheaux: No dice. And you can say, okay, well, maybe the model just doesn't know how to tell time from knowing the position of the hands. But if you actually prompt [00:21:00] it textually, it's very easy for it to tell the time. So this to me is proof that these LLMs literally cannot see the position of the watch hands and can't see those details.[00:21:08] Peter Robicheaux: So the question is sort of why? And for you Anthropic heads out there, Claude fails too. So my first pick for best paper of 2024 in vision is this MMVP paper, which tries to investigate why LLMs don't have the ability to see fine-grained details. And so, for instance, it comes up with a lot of images like this, where you ask it a question that seems very visually apparent to us, like, which way is the school bus facing?[00:21:32] Peter Robicheaux: And it gets it wrong, and then, of course, it makes up details to support its wrong claim. And the process by which it finds these images is sort of contained in its hypothesis for why it can't see these details.
So it hypothesizes that models that have been initialized with CLIP as their vision encoder don't have fine-grained details in the extracted features, because CLIP doesn't need to find these fine-grained [00:22:00] details to do its job correctly, which is just to match captions and images, right?[00:22:04] Peter Robicheaux: And sort of at a high level, even if ChatGPT's vision encoder hadn't been initialized with CLIP and hadn't been trained contrastively at all, still, in order to do its job of captioning the image, it could do a pretty good job without actually finding the exact position of all the objects and visual features in the image, right?[00:22:21] Peter Robicheaux: So this paper finds a set of difficult images for these types of models. And the way it does it is it looks for embeddings that are similar in CLIP space but far in DINOv2 space. DINOv2 is a foundation model that was trained self-supervised purely on image data. It uses a somewhat complex student-teacher framework, but essentially it patches out or crops certain areas of the image and tries to make sure that those have consistent representations, which is a way for it to learn very fine-grained visual features.[00:22:54] Peter Robicheaux: And so if you take things that are very close in CLIP space and very far in DINOv2 space, you get [00:23:00] pairs of images that are hard for ChatGPT and other big language models to distinguish. So if you then ask questions about these images, well, as you can see from this chart, it's going to answer the same way for both images, right?[00:23:14] Peter Robicheaux: Because from the perspective of the vision encoder, they're the same image. And so if you ask a question like, how many eyes does this animal have? It answers the same for both.
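The pair-mining procedure just described can be sketched with toy embeddings. This is not the MMVP authors' code; the thresholds and vectors are made up for illustration. The idea is simply: flag image pairs that are nearly identical in CLIP space but far apart in DINOv2 space, since those are exactly the pairs a CLIP-initialized VLM cannot tell apart.

```python
# Toy sketch of CLIP-blind pair mining: similar in CLIP space, far in
# DINOv2 space (embeddings and thresholds are made up).
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda x: sum(a * a for a in x) ** 0.5
    return dot / (norm(u) * norm(v))

def clip_blind_pairs(clip_embs, dino_embs, clip_min=0.95, dino_max=0.6):
    pairs = []
    for i in range(len(clip_embs)):
        for j in range(i + 1, len(clip_embs)):
            if (cosine(clip_embs[i], clip_embs[j]) > clip_min and
                    cosine(dino_embs[i], dino_embs[j]) < dino_max):
                pairs.append((i, j))
    return pairs

# images 0 and 1 look the same to CLIP but different to DINOv2
clip_embs = [[1.0, 0.0], [0.99, 0.05], [0.0, 1.0]]
dino_embs = [[1.0, 0.0], [0.1, 1.0], [0.0, 1.0]]
print(clip_blind_pairs(clip_embs, dino_embs))  # prints [(0, 1)]
```

The benchmark then writes multiple-choice questions about the visual difference within each mined pair, which is where the CLIP-initialized models collapse to the same answer for both images.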
And all these other models, including LLaVA, do the same thing, right? And so this is the benchmark that they create: finding CLIP-blind pairs, which are pairs of images that are similar in CLIP space, and creating a dataset of multiple-choice questions based off of those.[00:23:39] Peter Robicheaux: And so how do these models do? Well, really badly. ChatGPT and Gemini do a little bit better than random guessing, but at, like, half the performance of humans, who find these problems very easy. LLaVA is, interestingly, extremely negatively correlated with this dataset. It does much, much, much, much worse [00:24:00] than random guessing, which means that this process has done a very good job of identifying hard images for LLaVA specifically.[00:24:07] Peter Robicheaux: And that's because LLaVA is basically not trained for very long and is initialized from CLIP, so you would expect it to do poorly on this dataset. So, one of the proposed solutions that this paper attempts is basically saying, okay, well, if CLIP features aren't enough, what if we train the visual encoder of the language model also on DINO features?[00:24:27] Peter Robicheaux: And so it proposes two different ways of doing this. One is additive, which is basically interpolating between the two features, and one is interleaving, which is just training on the combination of both features. So there's this really interesting trend when you do the additive mixture of features.[00:24:45] Peter Robicheaux: So zero is all CLIP features and one is all DINOv2 features. I think it's helpful to look at the rightmost chart first, which shows that as you increase the proportion of DINOv2 features, your model does worse and worse and [00:25:00] worse on the actual language modeling task.
And that's because DINOv2 features were trained in a completely self-supervised manner and completely in image space.[00:25:08] Peter Robicheaux: It knows nothing about text. These features aren't really compatible with these text models, and so you can train an adapter all you want, but it seems that they're in such an alien language that it's a very hard optimization problem for these models to solve. And that kind of supports what's happening on the left, which is that, yeah, it gets better at answering these questions as you include more DINOv2 features, up to a point, but then when you oversaturate, it completely loses its ability to[00:25:36] Peter Robicheaux: answer language and do language tasks. You can also see with the interleaving, they essentially double the number of tokens going into these models and just train on both, and it still doesn't really solve the MMVP task. It gets LLaVA-1.5 above random guessing by a little bit, but it's still not close to ChatGPT or, you know, any, like, human performance, obviously.[00:25:59] Peter Robicheaux: [00:26:00] So clearly this proposed solution of just using DINOv2 features directly isn't going to work.
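The additive mixing experiment described above interpolates the two vision features before they reach the language model: alpha = 0 is pure CLIP, alpha = 1 is pure DINOv2, and the paper finds a sweet spot before language ability collapses. The mixing itself is just a convex combination (function name hypothetical):

```python
# Convex combination of CLIP and DINOv2 features, as in the additive
# mixing ablation: alpha=0 is all CLIP, alpha=1 is all DINOv2.
def mix_features(clip_feat, dino_feat, alpha):
    assert len(clip_feat) == len(dino_feat) and 0.0 <= alpha <= 1.0
    return [(1 - alpha) * c + alpha * d
            for c, d in zip(clip_feat, dino_feat)]

print(mix_features([1.0, 0.0], [0.0, 1.0], alpha=0.25))  # [0.75, 0.25]
```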
And basically what that means is that, as a vision foundation model, DINOv2 is going to be insufficient for language tasks, right?[00:26:14] Florence 2 (Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks)[00:26:14] Peter Robicheaux: So my next pick for best paper of 2024 would be Florence-2, which tries to solve this problem by incorporating not only this dimension of spatial hierarchy, which is to say pixel-level understanding, but also making sure to include what they call semantic granularity. The goal is basically to have features that are sufficient for finding objects in the image, so they have enough pixel information, but that can also be talked about and reasoned about.[00:26:44] Peter Robicheaux: And that's the semantic granularity axis. So here's an example of the three different paradigms of labeling that they do. They create a big dataset. One is text, which is just captioning. And you would expect a model that's trained [00:27:00] only on captioning to have similar performance to ChatGPT: no spatial hierarchy, no features that are meaningful at the pixel level.[00:27:08] Peter Robicheaux: So they add another type, which is region-text pairs, which is essentially either classifying a region, doing object detection or instance segmentation on that region, or captioning that region. And then they have text-phrase-region annotations, which is essentially a triple: not only do you have a region that you've described, you also find its place in a descriptive paragraph about the image, which is basically trying to introduce even more semantic understanding of these regions.
And so, for instance, if you're saying a woman riding on the road, right, you have to know what a woman is and what the road is and that she's on top of it. That's basically composing a bunch of objects in this visual space, but also thinking about it semantically, right? And the way that they do this is they basically just dump features from a vision encoder [00:28:00] straight into an encoder-decoder transformer.[00:28:03] Peter Robicheaux: And then they train a bunch of different tasks, like object detection and so on, as a language task. And I think that's one of the big things that we saw in 2024: these vision language models operating on pixel space linguistically. So they introduced a bunch of new tokens to point to locations and[00:28:22] Peter Robicheaux: So how does it work? How does it actually do? We can see, if you look at the graph on the right, which is using the DINO framework, your pre-trained Florence-2 models transfer very, very well. They get 60 percent mAP on COCO, which is approaching state of the art, and they train[00:28:42] Vik Korrapati: with, and they[00:28:43] Peter Robicheaux: train much more efficiently.[00:28:47] Peter Robicheaux: So they converge a lot faster, and both of these things point to the fact that they're actually leveraging their pre-trained weights effectively. So where is it falling short? These models, I forgot to mention, Florence-2 comes in 0.2 [00:29:00] billion and 0.7 billion parameter counts, so they're very, very small in terms of being a language model.[00:29:05] Peter Robicheaux: And I think that in this framework, you can see saturation.
So what this graph is showing is that if you train a Florence-2 model purely on the image-level and region-level annotations, and not the pixel-level annotations like segmentation, it actually performs better as an object detector.

[00:29:25] Peter Robicheaux: And what that means is that it's not able to actually learn all the visual tasks it's trying to learn, because it doesn't have enough capacity.

[00:29:32] PaliGemma / PaliGemma 2

[00:29:32] Peter Robicheaux: So I'd like to see this paper explore larger model sizes, which brings us to our next big paper of 2024, or two papers. So PaliGemma came out earlier this year.

[00:29:42] Peter Robicheaux: PaliGemma 2 was released, I think, a week or two ago. Oh, I forgot to mention: you can label datasets on Roboflow and train a Florence-2 model, and you can actually train a PaliGemma 2 model on Roboflow too, which we got into the platform within, like, 14 hours of release, which I was really excited about.

[00:29:59] Peter Robicheaux: So, anyway, [00:30:00] PaliGemma is essentially doing the same thing, but instead of doing an encoder-decoder, it just dumps everything into a decoder-only transformer model. But it also introduced the concept of location tokens to point to objects in pixel space. PaliGemma uses Gemma as the language model, specifically Gemma 2B.

[00:30:17] Peter Robicheaux: PaliGemma 2 introduces multiple different sizes of language model. And the way they get around having to do encoder-decoder is they use the concept of prefix loss, which basically means that when it's generating tokens autoregressively, all the tokens in the prefix, which is the image that it's looking at and a description of the task that it's trying to do,

[00:30:41] Peter Robicheaux: are attending to each other fully, with full attention. Which means that, you know, it can sort of
find high-level features easily; it's easier for the prefix to color the output of the suffix. So this is [00:31:00] an example of one of the tasks it was trained on: you describe the task in English, here asking it to segment two classes of objects, and then it finds their locations using these location tokens, and it finds their masks using some encoding of the masks into tokens.

[00:31:24] Peter Robicheaux: And, yeah, one of my critiques of PaliGemma 1, at least, is that you find that performance saturates as a pre-trained model after only 300 million examples seen. So what this graph is representing is: each blue dot is the performance on some downstream task. And you can see that after seeing 300 million examples, it does about as well on all of the downstream tasks they tried, which was a lot, as it does after 1 billion examples, which to me also suggests a lack of capacity for this model.

[00:31:58] Peter Robicheaux: For PaliGemma 2, [00:32:00] you can see the results on object detection, transferred to COCO. And this also points to an increase in capacity being helpful to the model: as both the resolution and the parameter count of the language model increase, performance increases.

[00:32:16] Peter Robicheaux: Resolution makes sense, obviously; it helps to find small objects in the image. But it also makes sense for another reason, which is that it kind of gives the model a thinking register, more tokens to process when making its predictions. But yeah, you could say, oh, 43.6, [00:32:30] that's not that great; Florence-2 got 60. But this is not training a DINO or a DETR on top of this image encoder. It's doing the raw language modeling task on COCO.
So it doesn't have any of the bells and whistles. It doesn't have any of the fancy losses. It doesn't even have bipartite graph matching or anything like that.

[00:32:52] Peter Robicheaux: Okay, the big result, and one of the reasons that I was really excited about this paper, is that they blow everything else away [00:33:00] on MMVP. I mean, 47.3, sure, that's nowhere near human accuracy, which, again, is 94%, but for a 2 billion parameter language model to beat ChatGPT, that's quite the achievement.

[00:33:12] Peter Robicheaux: And that brings us to our final pick for paper of the year, which is AIMv2. So, AIMv2 sort of says, okay, maybe coming up with all these specific annotations to find features with high fidelity in pixel space isn't actually necessary, and we can come up with an even simpler, more beautiful idea for combining image tokens and text tokens in a way that's interfaceable for language tasks.

[00:33:44] Peter Robicheaux: And this is nice because it can scale: you can come up with lots more data if you don't have to come up with all these annotations, right? So the way that it works is it does something very, very similar to PaliGemma, where you have a vision encoder that dumps image tokens into a decoder-only transformer.

[00:33:59] Peter Robicheaux: But [00:34:00] the interesting thing is that it also autoregressively tries to learn the image tokens themselves, with a mean squared error loss. So instead of having to come up with fancy object detection or segmentation labels, you can just try to reconstruct the image, and have it learn fine-grained features that way.

[00:34:16] Peter Robicheaux: And it does this in, I think, a beautiful way that's compatible with the PaliGemma line of thinking, which is randomly sampling a prefix length and using only that number of image tokens as the prefix.
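The causal-with-prefix masking that both PaliGemma's prefix loss and this random-prefix sampling rely on is easy to write down. A minimal illustration (my own sketch, not code from either paper):

```python
import numpy as np

def causal_with_prefix_mask(prefix_len, total_len):
    """Boolean attention mask where True means position i may attend to
    position j. Tokens in the prefix (the image tokens plus, for
    PaliGemma, the task prompt) attend to each other bidirectionally;
    the remaining tokens are causal but can see the entire prefix."""
    mask = np.tril(np.ones((total_len, total_len), dtype=bool))  # causal base
    mask[:prefix_len, :prefix_len] = True  # full attention inside the prefix
    return mask

# An AIMv2-style step resamples the prefix boundary over the image tokens,
# so the model must autoregressively predict the remaining image tokens
# (and then the caption) given a random-length visible prefix.
rng = np.random.default_rng(0)
num_image_tokens, seq_len = 16, 24  # 16 image tokens + 8 caption tokens
prefix_len = int(rng.integers(1, num_image_tokens + 1))
mask = causal_with_prefix_mask(prefix_len, seq_len)
```

The same mask shape serves both papers; only what sits in the prefix (and what loss is applied to the suffix) differs.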
And so it's doing a similar thing with the causal attention: causal-with-prefix is the attention mask on the right.

[00:34:35] Peter Robicheaux: So it's doing full block attention with some randomly sampled number of image tokens, to then reconstruct the rest of the image and the downstream caption for that image. And this is the dataset they train on: internet-scale, very high quality data created essentially by the Data Filtering Networks paper, which is maybe the best CLIP data that exists.

[00:34:59] Peter Robicheaux: [00:35:00] And we can see that this is finally a model that doesn't saturate. Even at the highest parameter count, it appears to be improving in performance with more and more samples seen. And so you can sort of think that if we just keep bumping the parameter count and increasing the examples seen, which is the line of thinking for language models, then it'll keep getting better.

[00:35:27] Peter Robicheaux: So how does it actually do? It also improves with resolution, which you would expect. This is the ImageNet classification accuracy, and yeah, it does better if you increase the resolution, which means that it's actually leveraging and finding fine-grained visual features.

[00:35:44] Peter Robicheaux: And so how does it actually do compared to CLIP on COCO? Well, you can see that if you slap a transformer detection head on it, it gets 60.2 on COCO, which is within spitting distance of SOTA, which means that it does a very good job of [00:36:00] finding visual features. But you could say, okay, well, wait a second.

[00:36:03] Peter Robicheaux: CLIP got to 59.1, so, like, how does this prove your claim at all?
Because doesn't that mean CLIP, which is known to be CLIP-blind and to do badly on MMVP, is able to achieve very high performance on this fine-grained visual task of object detection? Well, they train on tons of data.

[00:36:24] Peter Robicheaux: They train on Objects365, COCO, Flickr and everything else. And so I think that this benchmark doesn't do a great job of showing how good of a pre-trained model AIMv2 is. We would like to see the performance with fewer examples, and not trained to convergence on object detection. So seeing it in the real world on a dataset like Roboflow 100, I think, would be quite interesting.

[00:36:48] Peter Robicheaux: And our final, final pick for paper of 2024 would be Moondream. So, introducing Vik to talk about that.

[00:36:54] swyx: But overall, that was exactly what I was looking for. Like, best of 2024, an amazing job. Yeah, if there are any other questions while Vik gets set up, like vision stuff,

[00:37:07] swyx: yeah,

[00:37:11] swyx: Vik, go ahead.

[00:37:13] Vik Korrapati / Moondream

[00:37:13] question: Hi, over here. While we're getting set up, thanks for the really awesome talk. One of the things that's been weird and surprising is that the foundation model companies, even these VLMs, they're just worse than RT-DETR at detection still. Like, if you wanted to pay a bunch of money to auto-label your detection dataset, if you gave it to OpenAI or Claude, that would be a big waste.

[00:37:37] question: So I'm curious, even PaliGemma 2 is worse. So I'm curious to hear your thoughts on how come nobody's cracked the code on a generalist that really beats a specialist model in computer vision, like they have in LLM land.

[00:38:00]

[00:38:01] Isaac Robinson: Okay. It's a very, very interesting question. I think it depends on the specific domain.
For image classification, it's basically there. As AIMv2 showed, a simple attentional probe on the pre-trained features gets like 90%, which is as well as anyone does. The bigger question is, why isn't it transferring to object detection, especially real-time object detection?

[00:38:25] Isaac Robinson: I think, in my mind, there are two answers. One is that object detection architectures are really, really domain-specific. You know, we see all these super, super complicated things, and it's not easy to build something that just transfers naturally like that, whereas for image classification, you know, CLIP pre-training transfers super, super quickly.

[00:38:48] Isaac Robinson: And the other thing is, until recently, the real-time object detectors didn't even really benefit from pre-training. Like, you see the YOLOs that are essentially saturated, showing very little [00:39:00] difference with pre-training improvements, or with using a pre-trained model at all. So it's not surprising, necessarily, that people aren't looking at the effects of better and better pre-training on real-time detection.

[00:39:12] Isaac Robinson: Maybe that'll change in the next year. Does that answer your question?

[00:39:17] Peter Robicheaux: Can you guys hear me? Yeah, one thing I want to add, just to summarize, is that until 2024, you know, we haven't really seen a combination of transformer-based object detectors and fancy losses, and PaliGemma suffers from the same problem. Which is basically to say that these ResNet, or convolutional, models have all these extreme optimizations for doing object detection, but essentially, I think it's kind of been shown now that convolutional models just don't benefit from pre-training and just don't have the level of intelligence of transformer models.

[00:39:56] swyx: Awesome.
Hi,

[00:39:59] Vik Korrapati: can [00:40:00] you hear me?

[00:40:01] swyx: Cool. I hear you. See you. Are you sharing your screen?

[00:40:04] Vik Korrapati: Hi. Might have forgotten to do that. Let me do

[00:40:07] swyx: that. Sorry, should have done

[00:40:08] Vik Korrapati: that.

[00:40:17] swyx: Here's your screen. Oh, classic. You might have to quit Zoom and restart. What? It's fine. We have a capture of your screen.

[00:40:34] swyx: So let's get to it.

[00:40:35] Vik Korrapati: Okay, easy enough.

[00:40:49] Vik Korrapati: All right. Hi, everyone. My name is Vik. I've been working on Moondream for almost a year now. Like Shawn mentioned, I just went and looked, and it turns out the first version I released was December [00:41:00] 29, 2023. It's been a fascinating journey. So Moondream started off as a tiny vision language model. Since then, we've expanded scope a little bit to also try and build some tooling, client libraries, et cetera, to help people really deploy it.

[00:41:13] Vik Korrapati: Unlike traditional large models that are focused on assistant-type use cases, we're laser-focused on building capabilities that developers can use to build vision applications that can run anywhere. In a lot of cases for vision, more so than for text, you really care about being able to run on the edge, run in real time, et cetera.

[00:41:40] Vik Korrapati: So that's really important. We have different output modalities that we support. There's query, where you can ask general English questions about an image and get back human-like answers. There's captioning, which a lot of our users use for generating synthetic datasets to then train diffusion models and whatnot; we've done a lot of work to minimize hallucinations there, [00:42:00] so that's used a lot.
We have open-vocabulary object detection built in, similar to a couple of more recent models like PaliGemma, et cetera: rather than having to train a dedicated model, you can just say "show me soccer balls in this image" or "show me if there are any deer in this image," and it'll detect it.

[00:42:14] Vik Korrapati: More recently, earlier this month, we released a pointing capability, where if all you're interested in is the center of an object, you can just ask it to point out where that is. This is very useful when you're doing, you know, UI automation type stuff. Let's see. We have two models out right now.

[00:42:33] Vik Korrapati: There's a general-purpose 2B parameter model, which runs fine if you're running on a server. It's good for our local LLaMA desktop friends, and it can run on flagship mobile phones, using [00:43:00] less memory even with our not yet fully optimized inference client.

[00:43:06] Vik Korrapati: So the way we built our 0.5B model was to start with the 2 billion parameter model and prune it while doing continual training to retain performance. Our objective during the pruning was to preserve accuracy across a broad set of benchmarks. So the way we went about it was to estimate the importance of different components of the model, like attention heads, channels, MLP rows and whatnot, using basically a technique based on the gradient.

[00:43:37] Vik Korrapati: I'm not sure how much people want to know details. We'll be writing a paper about this, but feel free to grab me if you have more questions. Then we iteratively prune a small chunk that will minimize the loss in performance, retrain the model to recover performance, and bring it back. The 0.
5B we released is more of a proof of concept that this is possible.

[00:43:54] Vik Korrapati: I think the thing that's really exciting about this is it makes it possible for developers to build using the 2B param [00:44:00] model and just explore, build their application, and then, once they're ready to deploy, figure out what exactly they need out of the model and prune those capabilities into a smaller form factor that makes sense for their deployment target.

[00:44:12] Vik Korrapati: So yeah, very excited about that. Let me talk to you folks a little bit about another problem I've been working on recently, which is similar to the clocks example we've been talking about. We had a customer reach out who had a bunch of gauges out in the field. This is very common in manufacturing and oil and gas, where you have a bunch of analog devices that you need to monitor.

[00:44:34] Vik Korrapati: It's expensive to have humans look at those and monitor stuff and make sure that the system gets shut down when the temperature goes over 80 or something. So I was like, yeah, this seems easy enough. Happy to help you distill that. Let's get it going. Turns out our model couldn't do it at all.

[00:44:51] Vik Korrapati: I went and looked at other open-source models to see if I could just generate a bunch of data and learn from that. Did not work either. So I was like, let's look at what the folks with [00:45:00] hundreds of billions of dollars in market cap have to offer. And yeah, that doesn't work either. My hypothesis is that the way these models are trained is on a large amount of image-text data scraped from the internet.

[00:45:15] Vik Korrapati: And that can be biased. In the case of gauges, most gauge images aren't gauges in the wild, they're product images. Detail images like these, where it's always set to zero.
It's paired with an alt text that says something like GIVTO, pressure sensor, PSI, zero to 30 or something. And so the models are fairly good at picking up those details.

[00:45:35] Vik Korrapati: It'll tell you that it's a pressure gauge. It'll tell you what the brand is, but it doesn't really learn to pay attention to the needle over there. And so, yeah, that's a gap we need to address. So naturally my mind goes to: let's use synthetic data to solve this problem. That works, but it's problematic, because it turned out we needed millions of synthetic gauge images to get to reasonable performance.

[00:45:57] Vik Korrapati: And thinking about it, reading a gauge is [00:46:00] not a zero-shot process in our minds, right? If you had to tell me the reading in Celsius for this real-world gauge: there are two dials on there. So first you have to figure out which one you have to be paying attention to, the inner one or the outer one.

[00:46:14] Vik Korrapati: You look at the tip of the needle, you look at what labels it's between, and you count how many ticks there are and do some math to figure out what the reading probably is. So what happens if we just add that as a chain of thought, to allow the model to better learn the subtasks it needs to perform to accomplish this goal?

[00:46:37] Vik Korrapati: So you can see in this example, this was actually generated by the latest version of our model. It's like, okay, Celsius is the inner scale. It's between 50 and 60. There are 10 ticks. So the second tick. It's a little debatable here: there's a weird shadow situation going on, the dial is off, so I don't know what the ground truth is, but it works okay.

[00:46:57] Vik Korrapati: The points [00:47:00] over there are actually grounded. I don't know if this is easy to see, but when I click on those, there's a little red dot that moves around on the image.
The model actually has to predict where these points are. I was already trying to do this with bounding boxes, but then Molmo came out with pointing capabilities,

[00:47:15] Vik Korrapati: and pointing is a much better paradigm to represent this. We see pretty good results. This one's actually for clock reading; I couldn't find our chart for gauge reading at the last minute. So the light blue line is with our grounded chain of thought. We built a clock-reading benchmark of about 500 images,

[00:47:37] Vik Korrapati: and this measures accuracy on that. You can see it's a lot more sample-efficient when you're using the chain of thought to train the model. Another big benefit of this approach is that you can kind of understand how the model is doing it and how it's failing. So in this example, the actual correct reading is 54 Celsius, and the model output [00:48:00] 56. Not too bad, but you can actually go and see where it messed up. It got a lot of these steps right, except instead of saying it was on the 7th tick, it actually predicted that it was the 8th tick, and that's why it went with 56.

[00:48:14] Vik Korrapati: So now that you know that it's failing in this way, you can adjust how you're doing the chain of thought, to maybe say, actually count out each tick from 40, instead of just trying to say it's the eighth tick. Or you might say, okay, I see that there's that middle marker, I'll count from there instead of all the way from 40.

[00:48:31] Vik Korrapati: So it helps a ton. The other thing I'm excited about is few-shot prompting, or test-time training, with this. If a customer has a specific gauge that we're seeing minor errors on, they can give us a couple of examples where, if it's misdetecting the needle, they can go in and correct that in the chain of thought.

[00:48:49] Vik Korrapati: And hopefully that works the next time. Now, it's an exciting approach, but we've only applied it to clocks and gauges.
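The tick-counting arithmetic in that chain of thought is just linear interpolation between the two labeled values the needle sits between. A minimal sketch (a hypothetical helper of mine, not Moondream code; the 40-to-60 scale in the second example is my assumption from Vik's "count from 40" remark):

```python
def gauge_reading(lower_label, upper_label, num_ticks, tick_index):
    """Linear interpolation for the tick-counting chain of thought:
    the needle sits on tick `tick_index` out of `num_ticks` ticks
    between two labeled values on the scale."""
    tick_size = (upper_label - lower_label) / num_ticks
    return lower_label + tick_index * tick_size

# The example from the talk: inner Celsius scale, needle between 50 and 60
# with 10 ticks, on the 2nd tick -> 52.
# The failure case: on a 40-to-60 span with 2-degree ticks, reading the
# 8th tick instead of the 7th is exactly what turns a correct 54 into 56.
```

Seen this way, the model's error is an off-by-one on a single intermediate step, which is why correcting that step in the chain of thought fixes the final answer.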
The real question is, is it going to generalize? Probably: there's some evidence from text models that when you train on a broad number of tasks, it does generalize, and I'm seeing some evidence of that with our model as well.

[00:49:05] Vik Korrapati: So, in addition to the image-based chain-of-thought stuff, I also added some spelling-based chain of thought, to help it better understand OCR, I guess. I don't understand why everyone doesn't do this, by the way; there are trivial benchmark questions that are very, very easy to nail. But I also wanted to support it for stuff like license plate partial matching: like, hey, does any license plate in this image start with WHA or whatever?

[00:49:29] Vik Korrapati: So yeah, that sort of worked. All right, that ends my story about the gauges. If you think about what's going on over here, it's interesting that LLMs are showing enormous progress in reasoning, especially with the latest set of models that we've seen, but I have a feeling that VLMs are lagging behind, as we can see with these tasks that should be very simple for a human to do [00:50:00] but are very easy to find VLMs failing at.

[00:50:04] Vik Korrapati: My hypothesis on why this is the case is that on the internet, there's a ton of data that talks about how to reason. There are books about how to solve problems. There are books critiquing the books about how to solve problems. But humans are just so good at perception that we never really talk about it.

[00:50:20] Vik Korrapati: Like, maybe in art books, where it's like, hey, to show that that mountain is further away, you need to desaturate it a bit or whatever. But the actual data on how to look at images isn't really present. Also, the data we have is kind of sketchy.
The best source of data we have is image alt-text pairs from the internet, and that's pretty low quality.

[00:50:40] Vik Korrapati: So yeah, I think our solution here is really just: we need to teach them how to operate on individual tasks and figure out how to scale that out. All right. Yep. So, conclusion: at Moondream, we're trying to build amazing VLMs that run everywhere. Very hard problem. Much work ahead, but we're making a ton of progress, and I'm really excited [00:51:00] about it. If anyone wants to chat about more technical details about how we're doing this, or is interested in collaborating, please, please hit me up.

[00:51:08] Isaac Robinson: Yeah.

[00:51:09] swyx: When people say multimodality, you know, I always think about vision as the first among equals in all the modalities. So I really appreciate having the experts in the room.

Get full access to Latent Space at www.latent.space/subscribe
The entrepreneurial path is rarely a straight line, and for Brian O'Kelley it's been a winding road of remarkable successes, humbling failures, and surprising turns. He has built multiple companies, one of which was incredibly successful and culminated in a $1.6B exit. Brian's latest venture, Scope3, has attracted funding from top-tier investors like Room40 Ventures, Google Ventures, Craft Ventures, and Venrock.
Michael Margolis has been a UX research partner at Google Ventures (GV) for nearly 15 years. He has developed a unique approach to helping startups identify their "bullseye customer": the specific subset of their target market who is initially most likely to adopt their product. Michael has conducted over 300 hands-on research sprints with GV portfolio companies across various industries and helped develop the "design sprint" process made famous by the book Sprint.

In our conversation, we discuss:
• The step-by-step process of running a bullseye customer sprint
• The most common mistakes founders make when picking their first customers
• Practical tips for conducting effective customer interviews
• How to create simple but effective prototypes for user research
• The power of "watch parties" in aligning teams around customer insights
• How to apply these methods beyond typical tech startups

Brought to you by:
• Eppo—Run reliable, impactful experiments
• Paragon—Ship every SaaS integration your customers want
• Enterpret—Transform customer feedback into product growth

Find the transcript at: https://www.lennysnewsletter.com/p/finding-your-bullseye-customer-michael-margolis

Where to find Michael Margolis:
• X: https://x.com/mmargolis
• LinkedIn: https://www.linkedin.com/in/mmargolis/
• Website: https://www.learnmorefaster.com/
• Medium: https://medium.com/@mmargolis

Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:
(00:00) Michael's background
(09:11) Bullseye customers vs.
ideal customer profiles (ICPs)
(12:32) An overview of the bullseye customer sprint
(20:56) When to use the bullseye customer sprint
(22:19) Step one: Agree on goals and key questions
(23:48) Step two: Define your bullseye customer
(25:52) The importance of a narrow target audience
(29:00) An example of step two in action
(38:24) Narrowing attributes and exclusion criteria
(43:28) Step three: Recruiting and compensating participants
(56:11) Step four: Creating effective prototypes
(01:01:10) Step five: Drafting your interview guide
(01:08:49) Step six: The watch party method
(01:19:40) Common pitfalls and final thoughts
(01:24:43) Closing thoughts and where to find Michael

Referenced:
• Learn More Faster: How to Find Your Bullseye Customer and Their Perfect Product: https://www.learnmorefaster.com
• Alcoa: https://www.alcoa.com
• Dupont: https://www.dupont.com
• Ericsson: https://www.ericsson.com
• Google Ventures: https://www.gv.com/
• Kate Aronowitz on LinkedIn: https://www.linkedin.com/in/katearonowitz/
• Vanessa Cho on LinkedIn: https://www.linkedin.com/in/veecho/
• How to kickstart and scale a consumer business—Step 2: Identify your super-specific who: https://www.lennysnewsletter.com/p/consumer-business-super-specific-who
• When enough is enough | Andy Johns (ex-FB, Twitter, Quora): https://www.lennysnewsletter.com/p/when-enough-is-enough-andy-johns
• Zipline for health care: https://www.flyzipline.com/solutions/healthcare
• Jobs to Be Done framework: https://www.christenseninstitute.org/theory/jobs-to-be-done
• User Interviews: https://www.userinterviews.com/
• Respondent: https://www.respondent.io/
• Flatiron Health: https://flatiron.com/
• How to identify your ideal customer profile (ICP): https://www.lennysnewsletter.com/p/how-to-identify-your-ideal-customer
• Gong: https://www.gong.io
• Linear: https://linear.app
• Gusto: https://gusto.com/
• Humble Inquiry: The Gentle Art of Asking Instead of Telling:
https://bookshop.org/p/books/humble-inquiry-second-edition-the-gentle-art-of-asking-instead-of-telling-edgar-h-schein/14739375
• Figma: https://www.figma.com

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

Lenny may be an investor in the companies discussed.

Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
Origins - A podcast about Limited Partners, created by Notation Capital
Wesley Chan is a true legend in Silicon Valley. He spent time at Microsoft and HP, and in 2002 he jumped to Google. There, he worked as Sergey Brin's Chief of Staff and founded Google Analytics and Google Voice, all before founding and leading the seed investing program at Google Ventures as GP. Under his tenure, GV was the first institutional check into companies like Plaid, Gusto, Lucid, and Robinhood. Wesley sits down with Nick Chirls, GP at Asylum Ventures, and Beezer Clarkson, LP at Sapphire Partners, to discuss his investment philosophy and what motivates him. Wesley discusses his early days at Google being pushed to make products that change the world, finding founders who have a 100-year plan, and being driven not by the desire to prove other people wrong, but to prove himself right.

Learn more about Sapphire Partners: sapphireventures.com/sapphire-partners
Learn more about OpenLP: openlp.vc
Learn more about Asylum Ventures: asylum.vc
Learn more about FPV Ventures: fpvventures.com

Subscribe to the OpenLP newsletter for a monthly roundup of the latest venture insights, including the newest Origins episodes, delivered straight to your inbox.

CHAPTERS:
0:00 - Introduction & the Burning Question
3:11 - Working for Sergey Brin: "If you're not changing the world, you're wasting your time."
9:22 - They're not there to screw the world, they're there to improve the world.
15:07 - I'm not a university or your tuition money. We're giving you a check to go win.
17:05 - Knowing just enough to be dangerous
27:58 - The Exit Market: No one wants to test the market. LPs, hang on.
33:07 - FPV's take on AI
34:09 - Founder success is the gift that keeps on giving
Vidu Shanmugarajah is a Partner at GV (formerly Google Ventures), where he invests in transformative tech companies across Europe. In this episode of Riding Unicorns, Vidu joins James Pringle and Hector Mason to share his journey from corporate law to venture capital, highlighting how his unique background has shaped his approach to investing.

Vidu discusses his atypical career path, starting as an M&A lawyer before transitioning to private equity and eventually venturing into tech as an entrepreneur and investor. He sheds light on GV's distinct model as an independent venture firm backed by Alphabet, explaining how it leverages Alphabet's resources while maintaining a startup-focused ethos. Throughout the conversation, Vidu offers insights into his investment strategy, emphasizing the importance of identifying standout founders, the growing role of AI, and the value of diversity in venture capital.

This episode provides a compelling look into the mindset of a successful VC, the nuances of investing in tech, and the evolving landscape of European startups. Whether you're a founder, investor, or just curious about the venture capital world, you won't want to miss Vidu's insights on building and scaling impactful tech companies.

Don't forget to like, subscribe, and follow The Riding Unicorns podcast on your favourite platform, and stay connected with us on social media for more inspiring episodes!
The First 100 | How Founders Acquired their First 100 Customers | Product-Market Fit
James Hawkins is the co-founder of PostHog, an open-source product analytics platform that lets developers track product usage, understand the impact of new features on user behavior, and integrate product and user data with data warehouses, all without sending any data to a third party. The company has raised $27 million to date from Y Combinator Continuity Fund and GV (formerly Google Ventures).

Where to find James Hawkins:
• Website: PostHog - How developers build successful products
• LinkedIn
#197: Together with Kevin Rose, we'll dive into points and miles strategies, credit card optimization, travel tips, and career growth. Specifically, we'll cover everything from underrated credit card tricks, tactics for earning multiple welcome bonuses, navigating the complexities of family travel, staying updated on the best deals, our favorite podcasts, and so much more. Kevin Rose is a partner at True Ventures, an early-stage venture capital firm that has invested over $4 billion in a portfolio of more than 400 founders. Kevin previously founded Digg, Zero, Oak, and was a Partner at Google Ventures. Link to Full Show Notes: https://chrishutchins.com/kevin-rose-ama-1 Partner Deals Stable: 50% off your first year of my favorite digital mailbox Superhuman: Free month of the fastest and best email with code ALLTHEHACKS Daffy: Free $25 to give to the charity of your choice Vuori: 20% off the most comfortable performance apparel I've ever worn Oracle: Free trial of the next generation Oracle Cloud Infrastructure For all the deals, discounts and promo codes from our partners, go to: chrishutchins.com/deals Resources Mentioned Kevin Rose: Website | Newsletter DEXA Scan: DexaFit Credit Cards Best Credit Cards Offers Page Capital One Venture X Rewards Credit Card Capital One Venture Rewards Credit Card U.S.
Bank Altitude® Reserve Visa Infinite® Card Bank of America Preferred Rewards Robinhood Gold Credit Card Bilt Mastercard® Wells Fargo Autograph Journey℠ Card Wells Fargo Autograph℠ Card Citi Strata Premier℠ Card Credit Card Optimizer Tool How to Get Approved for Business Cards Pepper Rewards App (Referral code: 612342) Points and Miles Websites Frequent Miler Doctor Of Credit One Mile at a Time Travel point.me AwardCat 10xTravel Savanti Travel Hyatt Globalist Guest of Honor: A Complete Guide Podcasts Frequent Miler On The Air Animal Spirits The Memo by Howard Marks MoneyWise with Sam Parr The Peter Attia Drive diggnation Huberman Lab The Aliquot (Rhonda Patrick) ATH Podcast Chris' Newsletter: Sign Up | Past Issues Chris' Membership Ep #47: Finding Happiness, Success and Deep Purpose with Arthur Brooks Ep #86: The Best Deals for Vacation Rentals, Exchanges, Fractionals, Timeshares and More! Ep #170: Cash Back vs. Points: Are We Doing It All Wrong? Ep #196: Optimizing Metabolic Health to Live a Longer and Healthier Life with Dr. Casey Means Full Show Notes (02:11) Introduction (03:32) Strategizing Your Next Home Purchase (07:42) When Should You Consider REITs? 
(08:25) Mastering the Skill of Spending (12:15) Why You Should Invest Toward Your Pain Points (14:28) Maximizing Earnings in Less Than 6 Months (16:51) Most Overrated Credit Card Trick (20:55) Underrated Credit Card Strategies (23:30) Chris' Credit Card Optimizer Tool (25:34) Secret Perk on Amazon Visa Card (26:49) How to Earn Multiple Welcome Bonuses (30:26) Cancelling/Recycling Credit Cards (35:20) How to Get Weekly Alerts on Deals & Finds (37:01) Earning Points Beyond Regular Spending (40:00) Pepper Points (41:16) Alaska Airlines and Hawaiian Airlines Merger (44:03) Adding Authorized Users on Your Card (45:48) How to Search for Hotel Availability by Occupancy (46:49) Chris' Favorite Websites to Earn Points From (47:22) Best Credit Card Pairings (49:33) Using Consultants to Maximize Points (51:04) Relaxing on Vacation (54:03) Rental Car Protection on Points (55:57) Chris and Kevin's Favorite Airline Cabins (58:10) Splitting Kids & Nanny into Different Class Flight Seats (1:02:20) Earning Hyatt Globalist Status (1:05:52) Optimizing Vacation Rentals or Extended Trips (1:07:07) Chris and Kevin's Favorite Podcasts (1:09:31) How to Decide to Quit Your Job (1:13:49) Accelerating Your Career With Kids (1:20:13) Future Q&A Preview (1:21:04) The Reason We Have Ads in the Show (1:22:53) Where to Find Kevin
If you've worked closely with a product team, chances are that secret management has been a topic you've had to wrangle with at some point. Either your development or deployment teams don't have access to the right secrets, which slows down how quickly you can get code to production, or, worse, your secrets were exposed to the public and put your data and your customers' data at risk through a breach. In this episode of the Convergence Podcast, Ashok welcomes Brian Vallelunga, CEO of Doppler, to discuss the too-often overlooked topic of secret management in software development. Before founding Doppler in 2018, Brian was a lead engineer at Uber, where he worked on special projects for the C-suite. Doppler is a SecretOps platform backed by heavy-hitting venture capital firms like CRV, Google Ventures, Sequoia, Greylock, and Kleiner Perkins, and it's also a Y Combinator company. Brian shares insights on why development teams frequently struggle with managing secrets like API keys and database credentials, and he explains the far-reaching consequences of poor product security—ranging from data breaches to production slowdowns. Brian also discusses the importance of proactively training teams and developing secure workflows, providing real-life examples of high-profile data breaches at companies like Twitter and Toyota. Brian outlines four essential questions executives and senior engineers should ask to safeguard their systems. From developing playbooks for responding to breaches to ensuring secret rotation, this episode is packed with actionable advice for both technical and non-technical leaders. Unlock the full potential of your product team with Integral's player coaches, experts in lean, human-centered design. Visit integral.io/convergence for a free Product Success Lab workshop to gain clarity and confidence in tackling any product design or engineering challenge. Inside the episode...
What secrets are and why they are critical in software development
The challenges of secret management for both small startups and large companies
High-profile data breaches at Twitter and Toyota and how they happened
Key questions every executive and engineer should ask about secret management
Proactive steps to train your team and secure your codebase
How to clean up exposed secrets and prevent future mistakes
Best practices for rotating secrets and monitoring security

Mentioned in this episode:
Doppler (secret management platform)
AWS Secrets Manager
Google Cloud Platform (GCP) Secret Manager
HashiCorp Vault
Toyota and Twitter data breaches

Subscribe to the Convergence podcast wherever you get podcasts, including video episodes on YouTube at youtube.com/@convergencefmpodcast. Learn something? Give us a 5-star review and like the podcast on YouTube. It's how we grow!
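The baseline practice the episode keeps returning to — keep API keys and database credentials out of source code and inject them at runtime — can be sketched in a few lines of Python. This is an illustrative helper only, not Doppler's API; the function and variable names are made up for the example:

```python
import os

def get_secret(name: str) -> str:
    """Read a secret from the environment rather than hardcoding it in source.

    Secret managers (Doppler, AWS Secrets Manager, HashiCorp Vault, ...)
    typically populate these variables at deploy time, so the codebase
    never contains the credential itself.
    """
    value = os.environ.get(name)
    if not value:
        # Fail fast on a missing secret instead of shipping an empty credential.
        raise RuntimeError(f"missing required secret: {name}")
    return value
```

Because the code only names the secret and never its value, rotating a leaked credential needs no code change: the secret manager is updated and the next deploy reads the new value.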
Michael Margolis, UX Research Partner at Google Ventures, breaks down the Bullseye Customer Sprint methodology and how it helps startups quickly identify what to build and who to target. Learn how to use the "5 and 3 in 1 formula" (five bullseye customers, three prototypes, in one day) to accelerate learning and de-risk product decision-making.

Gain practical strategies for streamlining customer discovery so you can build the right products, faster. Michael shares how to define your bullseye customers, prioritize the key questions to focus on, and avoid common pitfalls in early product development.

About Michael: Michael Margolis joined Google Ventures in 2010 as the venture industry's first UX research partner. With over 30 years of experience, he's conducted 300 research sprints with GV portfolio companies across diverse sectors. His work has helped hundreds of companies boost conversions, test new concepts, streamline workflows, and define bullseye customers. Michael joined Google as a staff user experience researcher, where he conducted research for Gmail, led the UX research team for Google Apps, and managed Google's UX team in Seattle. Before Google, he spearheaded user research at Walmart.com and produced educational software at Electronic Arts. Michael earned his bachelor's and master's degrees in anthropology from Stanford University and is the author of Learn More Faster.

Connect with Michael: You can follow Michael on LinkedIn or check out his articles on Medium.

Resources:
Learn More Faster by Michael Margolis
Conduct stellar user research even faster by Michael Margolis
Research Practice by Gregg Bernstein
Disco Conf 2024, the global research and discovery conference by Maze
Unpacking User Interviews: Turning Conversations into Insight by Maze

Follow Maze on Social Media:
X: @mazedesignHQ
Instagram: @mazedesignHQ
LinkedIn: https://www.linkedin.com/company/mazedesign

To get notified when new episodes air, subscribe at maze.co/podcast. See you next time!
Corporate venture capital investments have decreased over the past two years, with U.S.-based corporate investors involved in only 1,385 deals by mid-2024, maintaining last year's pace but significantly lower than in 2022. Despite prominent funding rounds, including Alphabet's $5 billion in Waymo and Disney's $1.5 billion in Epic Games, the overall deal flow remains low. Early-stage funding has seen 489 deals this year, down from over 700, while later-stage funding reflects a decrease from 270 deals in 2023 to 190. Google Ventures leads in deal counts with 46 rounds, followed by Coinbase Ventures at 30, Samsung Next at 27, Salesforce Ventures tied at 26, and Nvidia at 21 rounds. A growing portion of U.S. startup funding is sourced from rounds backed by corporate investors, yet total deal numbers and investment capital are contracting, indicating that high-profile deals do not equate to increased corporate engagement. Data was collected as of September 13, 2024, focusing on corporate venture capital transactions to detail investment trends. To learn more about this news, visit us at: https://greyjournal.net/ Hosted on Acast. See acast.com/privacy for more information.
Deena Shakir is an investor who is obsessed with expanding access to the basic health services people need and often can't access: pediatric care, community health, and women's services. Her journey to investing passed through policymaking, journalism, and big tech, and her early techno-optimism has given way to a much more nuanced and pragmatic view. She is able to see the big opportunities for impact hiding in plain sight.

We discuss:
The two obvious megatrends hitting healthcare: GLP-1s and AI
And the not-so-obvious opportunity: doing basic things better
How Dobbs was an accelerant, not a deterrent, for investments in women's health
Why public health is great training for healthcare founders

Deena is excited about "asset light" investments that combine new care models – like community health workers – and technology:

"There are some things that won't change. And there are things that hopefully tech can help to navigate. And so these asset light models, these models that are leveraging under-leveraged care workers – like community health workers that are providing culturally competent care – and at the end of the day, that are improving metrics and outcomes, are the ones that get me excited."

Relevant Links:
Lux Capital
Jonathan Haidt's article in The Atlantic titled "Why the Past 10 Years of American Life Have Been Uniquely Stupid"
President Obama's Cairo speech
ARPA-H Sprint for Women's Health

Health companies Deena mentions that she invests in:
Waymark
Summer Health
Maven Clinic

About Our Guest:
Deena's investments span stages and sectors, and include women's health, digital health infrastructure, health equity, foodtech, and fintech. Above all, she seeks out extraordinary, often underdog, founders on a mission. Prior to Lux, Deena was a Partner at GV (formerly Google Ventures), led product partnerships at Google for health, search, and AI/ML, and directed social impact investments at Google.org. Deena also served as a Presidential Management Fellow at The U.S.
Department of State under Secretary Clinton, where she helped launch President Obama's first Global Entrepreneurship...
I continue my conversation with Douglas Ferguson, an entrepreneur and human-centered technologist. He is the president of Voltage Control, a facilitation academy that develops leaders through various programs.

In this part of the conversation, Douglas shares his story related to:
Voltage Control: a facilitation academy focused on developing collaborative leaders.
Facilitation skills: valuable for individuals in various roles, not just facilitators.
Community building: emphasizes diversity, inclusion, and shared values.
Entrepreneurial journey: Douglas's experiences in starting and growing Voltage Control.
Leadership development: the importance of facilitation skills for effective leadership.

And finally, he shares career tips for those aspiring to be facilitators.

Douglas is an entrepreneur and human-centered technologist with over 26 years of experience. He is president of Voltage Control, a facilitation academy that develops leaders through certifications, workshops, and organizational coaching focused on facilitation mastery, innovation, and play. He has helped transform leaders, innovators, and creatives from Nike, U.S. SOCOM, Google, the Air Force, Gap, Tesla, MSU, Church & Dwight, Apple, Adobe, Dropbox, Fidelity, Vrbo, Liberty Mutual, Humana, and SAIC.

Prior to Voltage Control, Douglas held CTO positions at numerous Austin startups, where he led product and engineering teams using agile, lean, and human-centered design principles. While CTO at Twyla, Douglas worked directly with Google Ventures running Design Sprints, and now brings this experience and process to companies everywhere.

Douglas is currently active in the Austin startup community, where he serves on the board of several non-profits, mentors startups, and advises early-stage ventures.

Douglas is a thought leader and often writes and talks about facilitation, leadership, collaboration, innovation, culture, meetings, and Design Sprints.
He is also the author of four books: Magical Meetings, Beyond the Prototype, How to Remix Anything, and Start Within. He has been published in Forbes, Fast Company, and Innovation Leader, and is a regular contributor to The Future Shapers. He publishes a weekly podcast called Facilitation Lab.

When not facilitating or coaching facilitators, you might find Douglas patching up his modular synth, boxing, doing Pilates, and taking photographs. He graduated from Virginia Polytechnic Institute and State University.

Douglas may be contacted at: https://www.linkedin.com/in/douglasferguson/
I am in conversation with Douglas Ferguson. Douglas is an entrepreneur and human-centered technologist. He is the president of Voltage Control, a facilitation academy that develops leaders through various programs.

In this part of the conversation, Douglas shares his story related to:
Douglas's journey: from individual contributor to leader, emphasizing the importance of understanding and empathy in building successful teams.
Teamwork and collaboration: discusses storytelling, creating a positive culture, and the challenges and opportunities of working with distributed and cross-cultural teams.
Effective meeting practices: highlights the value of rituals, storytelling, and focusing on generation and exploration.

Douglas is an entrepreneur and human-centered technologist with over 26 years of experience. He is president of Voltage Control, a facilitation academy that develops leaders through certifications, workshops, and organizational coaching focused on facilitation mastery, innovation, and play. He has helped transform leaders, innovators, and creatives from Nike, U.S. SOCOM, Google, the Air Force, Gap, Tesla, MSU, Church & Dwight, Apple, Adobe, Dropbox, Fidelity, Vrbo, Liberty Mutual, Humana, and SAIC.

Prior to Voltage Control, Douglas held CTO positions at numerous Austin startups, where he led product and engineering teams using agile, lean, and human-centered design principles. While CTO at Twyla, Douglas worked directly with Google Ventures running Design Sprints, and now brings this experience and process to companies everywhere.

Douglas is currently active in the Austin startup community, where he serves on the board of several non-profits, mentors startups, and advises early-stage ventures.

Douglas is a thought leader and often writes and talks about facilitation, leadership, collaboration, innovation, culture, meetings, and Design Sprints.
He is also the author of four books: Magical Meetings, Beyond the Prototype, How to Remix Anything, and Start Within. He has been published in Forbes, Fast Company, and Innovation Leader, and is a regular contributor to The Future Shapers. He publishes a weekly podcast called Facilitation Lab.

When not facilitating or coaching facilitators, you might find Douglas patching up his modular synth, boxing, doing Pilates, and taking photographs. He graduated from Virginia Polytechnic Institute and State University.

Douglas may be contacted at: https://www.linkedin.com/in/douglasferguson/
Meredith Sandland is the CEO of Empower Delivery and the co-author of two books, Delivering The Digital Restaurant: Your Roadmap To The Future Of Food and Delivering The Digital Restaurant: The Path to Digital Maturity, both co-authored with Carl Osbourne. Prior to co-authoring those books, Meredith spent two decades in consulting, corporate strategy, and restaurant development. After building 1,000+ restaurants as the Chief Development Officer at Yum! Brands' Taco Bell, Meredith observed that the on-demand economy was starting to affect restaurants. Meredith joined ghost kitchen start-up Kitchen United as employee #4 to create their business model, raise initial capital, and serve as the public face of the Google Ventures-backed disruptor. Meredith previously joined Restaurant Unstoppable for episode 842! Favorite success quote/mantra: "Be intentional."

In this episode we will discuss:
Tech stack
Delivery
Ghost kitchens and virtual brands
Dynamic pricing
AND MORE!

Today's sponsors: Restaurant Technologies, the company that helps restaurants "control the kitchen chaos." With RT's total oil management, you get: dependable fresh bulk cooking oil delivery; filtration + oil usage monitoring and reporting; used cooking oil pick-up and recycling; and say goodbye to messy, dangerous restaurant rendering tanks. Yuck. RT's end-to-end cooking oil solution helps you manage your used cooking oil storage, collection, and recycling, conveniently, safely, and cleanly, with no upfront costs. Head to www.RTI-inc.com, and let them know the Restaurant Unstoppable Podcast sent you their way. MarginEdge: Boost your efficiency and profitability without adding labor costs. MarginEdge is a complete restaurant management software that allows you to seamlessly manage all aspects of your business from one central location. Track food costs in real time, make inventory faster and less tedious, easily cost out your recipes, and get a daily P&L so you always know where you stand.
See how it works at marginedge.com/unstopabble. Meez: Are you a chef, owner, or operator, or do you manage recipes in professional kitchens? meez is built just for you. Organize, share, prep, and scale recipes like never before. Plus, engineer your menu in real-time and get accurate food costs. Sign up for free today and get 2 FREE months of invoice processing as a listener of the Restaurant Unstoppable Podcast. Visit getmeez.com/unstoppable to learn more. Restaurant Systems Pro - Join the 60-day Restaurant Systems Pro FREE TRAINING. This is something that has never been done before. This 60-day event is at no cost to you, but it is not for everyone. Fred Langley, CEO of Restaurant Systems Pro, will lead a group of restaurateurs through the Restaurant Systems Pro software and set up the systems for your restaurant. During the 60 days, Fred will walk you through the Restaurant Systems Pro Process and help you crush the following goals: Recipe Costing Cards; Guidance in your books for accounting; Cash controls; Sales Forecasting (With Accuracy); Checklists; Budgeting for the entire year; Scheduling for profit; More butts in seats and more… Click Here to learn more. Contact the guest: On the web: https://www.empower.delivery LinkedIn: https://www.linkedin.com/in/meredith-sandland/ Thanks for listening! Rate the podcast, subscribe, and share! We are on YouTube: @RestaurantUnstoppable
"C-Roc" welcomes Chris Hutchins, a prominent life hacker and financial optimizer, to delve into the transformative elements that have shaped his remarkable journey. Chris, renowned for his innovative approaches to upgrading life, money, and travel, has been featured in prestigious publications such as The New York Times, Wall Street Journal, and CNBC. He's also the mastermind behind influential hacks that elevate personal and financial success. Chris's illustrious career includes roles as the former CEO and founder of Grove, which Wealthfront acquired, and as an investor at Google Ventures. He is also the co-founder of Milk, which was acquired by Google. Chris shares his early fascination with life hacking and unconventional problem-solving, a passion that began in his youth. He reflects on how his relentless curiosity and desire to challenge norms have driven his success. Chris reveals how his unique perspective on personal and professional growth has led him to question traditional approaches and seek innovative solutions. C-Roc and Chris discuss the contrast between corporate environments and entrepreneurial endeavors, emphasizing how Chris's nonconformist mindset often clashed with structured industries like investment banking and management consulting. Chris highlights his preference for meritocratic, dynamic work environments where creativity and efficiency are valued over hierarchical rigidity. Listeners gain insights into Chris's journey from a student seeking business opportunities to a successful entrepreneur and investor. His story is a testament to the power of questioning conventional wisdom and embracing a mindset that prioritizes continuous improvement and curiosity. This episode offers valuable lessons for anyone interested in personal development, financial optimization, and the art of life hacking. Website- allthehacks.com Social Media Links/Handles: https://www.instagram.com/chrishutchins/?hl=en https://x.com/hutchins
#185: Learn a proven 4-step framework to create more time in your day for whatever matters most. In this episode, Jake and John share tactics to highlight your priorities, maintain laser focus, energize with healthy habits, and reflect in order to continuously improve and transform your life. Jake Knapp and John Zeratsky are the authors of the best-selling books Sprint and Make Time, and co-founders of Character, a venture capital firm. Previously, they were partners at Google Ventures and design leads at Google, where they worked on apps like Gmail and YouTube. Link to Full Show Notes: https://allthehacks.com/make-time-jake-knapp-john-zeratsky Partner Deals Vuori: 20% off the most comfortable performance apparel I've ever worn DeleteMe: 20% off removing your personal info from the web Shopify: $1/month trial for the easiest e-commerce platform Greenlight: First month free on a debit card and money app to raise financially smart kids Green Chef: 50% off organic meal kits with code ALLTHEHACKS50 + 20% off next 2 months For all the deals, discounts and promo codes from our partners, go to: allthehacks.com/deals Resources Mentioned Make Time: Book | Free Notes Template Character | Character Labs Apps: Reclaim Compose Freedom All the Hacks Follow & Review on Apple Podcasts Email for questions, hacks, deals, feedback: podcast@allthehacks.com Full Show Notes (00:00) Introduction (01:50) What Most People Get Wrong About Managing Time (04:06) Why People Are Struggling With Productivity (05:45) Applying ‘Make Time' to Your Personal Life (08:14) Busy Bandwagons and Infinity Pools (12:30) The 4 Step 'Make Time' Framework (15:40) How to Start Your Day With a Highlight (20:24) Highlights: Can There Be More Than One? Can They Change? (24:04) Highlight Tactic: Design Your Day (27:38) Highlight Tactic: Batch the Little Stuff (30:34) How Often Should You Look Into Feedback Loops? 
(31:18) The Most Efficient Ways to Find Laser Focus (35:20) Creating a Distraction-Free Phone (42:00) How to Deal With Attention Residue (46:38) Ways to Create Energy to Recharge Yourself (50:16) Why You Should Leave Your Headphones at Home (51:05) Best Cadence to Take Breaks (53:00) How Jake and John Apply Exercise to Their Lives (56:20) What Can We Do to Reflect? (59:14) Where to Find Jake and John's Work Connect with All the Hacks All the Hacks: Newsletter | Website | Membership | Email Chris Hutchins: X | Instagram | Website | LinkedIn Editor's Note: The content on this page is accurate as of the posting date; however, some of our partner offers may have expired. Opinions expressed here are the author's alone, not those of any bank, credit card issuer, hotel, airline, or other entity. This content has not been reviewed, approved or otherwise endorsed by any of the entities included within the post.
Kevin Kostiner, Chief Commercial Officer of ClimaFi, joins us to discuss their revolutionary microgrid technologies and strategic energy solutions. As grid instability grows, Kevin emphasizes the crucial need for businesses to secure reliable energy sources to avoid disruptions. Drawing from his own career, he shares invaluable advice on the importance of focus, passion, and perseverance. We also explore the shifting landscape of career trends in the tech industry, from long-term employment to frequent job changes. In a heartfelt segment, we tackle the transformative power of vulnerability and genuine confidence. Behind the polished exteriors often seen at business events, many rely on artificial confidence to hide insecurities. By sharing personal anecdotes from my entrepreneurial journey and my daughter's post-college challenges, we highlight how embracing discomfort and being open about struggles can lead to meaningful connections and personal growth. Lastly, we delve into authenticity in leadership and the evolving virtual work culture. The shift to remote work has allowed for greater authenticity and a healthier work-life balance. We discuss the importance of supporting employees' passions, the benefits of casual attire, and how personal touches in virtual settings can foster deeper connections. We also emphasize the significance of effective CRM systems in enhancing productivity and avoiding manual inefficiencies. Tune in for a comprehensive look at achieving success in sales leadership and creating a thriving, authentic work environment. Kevin Kostiner is the Chief Commercial Officer at ClimaFi, an innovative startup backed by Google Ventures, focused on revolutionizing microgrid solutions through advanced software and funding. Prior to recently joining ClimaFi, Kevin spent 5 years as Head of Business Development & Partnerships at a leading electric vehicle charging manufacturer. 
As a recognized leader in the EV charging and renewable energy sectors, Kevin is focused on driving growth and forging impactful partnerships that expand sustainable solutions to deliver a greener present and future. Quotes: "In today's world of grid instability, businesses must proactively secure their energy needs to avoid operational disruptions. It's about building resilience and reliability into their infrastructure." "The ability to focus, find interest in your work, and persevere through challenges are key drivers of success. It's not about a perfectly curated life plan but about being adaptable and seizing opportunities as they come." "True confidence comes from embracing vulnerability and being genuine. The façade of artificial confidence at business events hides insecurities that many people have." Links: Kevin's LinkedIn - https://www.linkedin.com/in/kevinkostiner/ ClimaFi - https://www.climafi.com Get this episode and all other episodes of Sales Lead Dog at https://empellorcrm.com/salesleaddog
GV, or Google Ventures, is an independent venture capital firm backed by Alphabet. Erik Norlander is a General Partner at GV and invests across enterprise software and frontier technology, focusing on developer tools, cloud infrastructure, and machine learning. He has backed companies like Cockroach, Warp, and Neo4j. Prior to joining GV in 2010 and opening... The post Google Ventures with Erik Norlander appeared first on Software Engineering Daily.
Chapter 1: Summary of Book Sprint

"Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days" is a book written by Jake Knapp, with contributions from John Zeratsky and Braden Kowitz, who were part of Google Ventures. Originally published in 2016, the book introduces a unique five-day process called a "Sprint," designed to help businesses answer critical questions, solve significant issues, and innovate more effectively.

The core concept of the Sprint process is structured creativity, applying time constraints and focused effort to reduce wasted time and increase productivity. The five-day structure breaks down as follows:

1. Monday: The team begins by setting a long-term goal and brainstorming questions and challenges. The day is focused on mapping out the problem and choosing the most crucial area to focus on through expert interviews within the team.
2. Tuesday: The focus is on solutions. Each team member sketches competing solutions on paper, expanding and refining initial ideas into complete sketches that detail their concept.
3. Wednesday: The team reviews the solution sketches from Tuesday, debates their merits, and decides which ones to prototype by considering how they fit toward the ultimate goal. A storyboard is created by the end of the day to guide the prototype creation.
4. Thursday: The chosen solutions are turned into a high-fidelity prototype: a realistic façade, not a fully developed product. The aim is to create something sufficiently convincing to test with real users without investing in full development.
5. Friday: The final day takes the prototype(s) to real users for feedback. The team observes the reactions of these test users and gathers valuable insights.
The observations help in making data-driven decisions about how to proceed, correcting course if necessary or pushing forward with a proven concept. Jake Knapp asserts that a Sprint is suitable for businesses of any size, from startups to large organizations, and can aid in solving problems in various functions, from product development to marketing strategies. The Sprint provides a clear path forward, giving businesses a tangible product or clear evidence of why a concept shouldn't proceed. By compressing potentially months of work into a single week, the Sprint methodology promises to help teams innovate faster and more efficiently.

Chapter 2: The Theme of Book Sprint

It seems there may be some confusion regarding the title "Book Sprint" attributed to Jake Knapp. Jake Knapp is known for a different book titled "Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days," which he co-authored with John Zeratsky and Braden Kowitz. This book, published in 2016, focuses on a unique five-day process for solving tough problems, specifically within the realm of business and product development.

If you're looking for insights into "Sprint," here are the key components:

Key Plot Points
"Sprint" is non-fiction and does not have a traditional plot, but rather outlines a step-by-step process for conducting a sprint. The book structures the sprint process into five days:

1. Monday: Map - The sprint begins by creating a path for the week. The team defines the challenge and sets an achievable goal.
2. Tuesday: Sketch - Each team member individually develops potential solutions, focusing on broad ideation rather than immediate practicality.
3. Wednesday: Decide - The team reviews each solution, debates their merits, and decides which ones have the most potential for success.
4. Thursday: Prototype - The chosen solutions are turned into a realistic prototype, a facade of the idea that looks and feels like a real product.
5. Friday: Test - The prototype is then tested with real, live users to understand the flaws, benefits, and usability of the concept. Character...
Today's guest is Jessica Rolph, cofounder and CEO of Lovevery, a subscription brand that sells early-childhood development play kits and solutions. To date, Jessica has raised over $132 million for Lovevery from top-tier investors, including TCG, Google Ventures, Collaborative Fund, and the Chan Zuckerberg Initiative. Lovevery has 350,000+ active subscribers. It has been named one of Fast Company's "World's Most Innovative Companies" and has been recognized on TIME's list of "Best Inventions." Prior to Lovevery, Jessica was the cofounder and COO of Happy Family, helping to launch, build, and lead Happy Family to its position as a top organic baby and toddler brand in the US. Happy Family was acquired by Groupe Danone in 2013 for about $300 million. Jessica also co-founded the Climate Collaborative, a non-profit organization helping companies in the natural products industry take meaningful steps to reverse climate change. She is an Aspen Institute Henry Crown Fellow and was awarded the Park Leadership Fellowship.

In this episode, we dive into:

Jessica opens up about how she never thought of herself as an idea person when it came to starting a business
How she found purpose in life and how she discovered a market for Happy Family and Lovevery
How to find product-market fit
Why ugly prototypes are the way to go
Exiting to Danone and dreaming about Lovevery
Her fundraising experience and how she deals with rejection
Delegation versus what to control as a leader
This is the last episode of the #designersdigest series, where we have Daniel Burka and co-host Shreyas Satish. We talk about blurring lines between product and design, the importance of being a generalist in design, and the role of product managers in the design process. This series is created by Audiogyan in partnership with @godrejdesignlab. The Designer's Digest series is about design as a profession, its daily grind, the secrets to climbing the design career ladder, and what edge we'll need to thrive in the captivating world of design.

Daniel is the director of product and design at the not-for-profit Resolve to Save Lives, where he spends most of his time on the open-source project, Simple. Simple is used by thousands of hospitals in India, Bangladesh, and Ethiopia to manage over 2 million patients with hypertension and diabetes. He is on the board of Laboratoria, a not-for-profit based in Peru helping Latin American women build successful careers in tech. In 2021, Daniel also started the open-source Health Icons project to provide free icons to healthcare projects around the world. He is also a member of Adobe's Design Circle, which grants scholarships to a diverse group of designers each year. Previously a Design Partner at Google Ventures, co-founder of Milk Inc. and silverorange, and more…

Questions

At RTSL, you're a Director of both Product and Design. How do you distinguish between the two verticals daily, especially regarding concerns and metrics?
Who is a product manager and who is a designer, according to you? Who, according to you, is supposed to focus on defining the right problem and then crafting the perfect solution? How blurred are these lines? What are the primary differences, if I may ask?
It seems like a designer can become a PM. Can it be the other way around? This is in the context of a few hard skills. A PM is torn between a thousand things, from business to analytics and many other things. How can designers venture into this role?
Also, can you steelman the case for a designer to become a PM?
In a lot of companies, tech and design functions both report to product, while in many these are separate verticals. In your experience, what works best and when?
One criticism of product managers, by folks like Marty Cagan, is that product managers often function as project managers. What, in your view, should a product manager focus on bringing to the table?
Designers, in their romantic vision, want to solve problems for all users, while product folks go after those getting the dollars. Can you give any example from your experience where you have balanced it elegantly? What did it take?
One death is a tragedy, while a thousand deaths are statistics. How do you see this in the world of product managers obsessed with data over real emotions? This is specifically for your work in healthcare.
Some companies, like Airbnb, have evolved their org structures to have Product Marketing Managers, and Apple of course has Program Managers who report to a Product Director. Do you have a framework for thinking about organizational design with product teams, knowing, of course, that different organizations have designed differently based on what they are focused on?
What do you consider the key responsibilities of a product designer? Again, from tiny startups to large MNCs.
You work on Simple, which is, of course, primarily focused on creating impact. Can you talk a little bit about what it's like designing for social impact compared to increasing market share or profitability?
In a digital landscape, how can we ensure our products create real value and positive impact beyond just solving problems?
What is the future of Product Managers and Designers in the AI world? What does the career ladder look like? What skills do we acquire for the future of the WWW?
Reference links https://audiogyan.com/?type=wrtd-series https://audiogyan.com/2021/10/06/shreyas-satish/ https://twitter.com/shreyas_satish https://www.ownpath.com/about https://www.linkedin.com/in/shreyassatish/?originalSubdomain=in https://designup.school/teacher/daniel-burka/ https://library.gv.com/defense-against-the-dark-arts-of-design-a114e5f048bb https://iconscout.com/contributors/healthicons https://medium.com/@dburka https://x.com/dburka?lang=en https://www.instagram.com/dburka/ https://www.linkedin.com/in/dburka/?original_referer=https%3A%2F%2Fwww.google.com%2F&originalSubdomain=uk https://danielburka.com/ https://en.wikipedia.org/wiki/Daniel_Burka
Former President Donald Trump still plans to head to Southern California next week for a pair of fundraisers, despite the guilty verdict in his hush money trial, a campaign spokesperson confirmed Thursday, May 30.

Inflation remained essentially flat in April while showing signs of downward progress, the Commerce Department reported Friday, in a closely watched measure that will guide the Federal Reserve in any decision to loosen interest rates in the coming months.

From little white lies to outright whoppers or sneaking answers for assessments, most recent job seekers admit they have used dishonest tactics before in order to land a gig. And some say they use such methods every time.

Google Ventures-backed Gravity, which recently opened what it claims is the country's fastest EV charging site at a Manhattan parking garage, now plans to bring the same technology to curbside plugs.
Episode 2 of the new season is a continuation of our discussion with Kate Aronowitz of Google Ventures and Robert Fabricant of Dalberg. In part two, we dig deeply into a pair of articles written by Robert for Fast Company about the state of design leadership by examining the wave of contraction that has happened across the tech industry, and the emergence of grassroots communities of designers seeking connection and guidance.
10X Success Hacks for Startups, Innovations and Ventures (consulting and training tips)
Welcome to a new episode of the Pitch Cafe Podcast!
Smell is the most underrated of our senses -- and it affects everything. Alex Wiltschko joins Vasant Dhar in episode 81 of Brave New World to discuss the role of smell in our lives -- and in this new digital age. Useful resources: 1. Alex Wiltschko on LinkedIn, Google Ventures, Google Scholar and Twitter. 2. Osmo. 3. Perfumes: The A-Z Guide -- Luca Turin and Tania Sanchez. 4. Perfume: The Story of a Murderer -- Patrick Suskind. 5. The Mystery of Smell -- Lydialyle Gibson on Sandeep Robert Datta. 6. A novel multigene family may encode odorant receptors: a molecular basis for odor recognition -- Linda Buck and Richard Axel. 7. Metabolic activity organizes olfactory representations -- Wesley W Qian et al (including Alex Wiltschko.) 8. Hyperbolic geometry of the olfactory space -- Yuansheng Zhou, Brian H Smith & Tatyana Sharpee. 9. Odor Perception and the Variability in Natural Odor Scenes -- Geraldine A Wright and Mitchell G.A. Thomson. 10. The Biological Sense of Smell -- Christine WJ Chee-Ruiter. 11. Also check out the work of Jim DiCarlo, David Marr and Eero Simoncelli. Check out Vasant Dhar's newsletter on Substack. Subscription is free!
Jake Knapp loves tech. He grew up using Apple II and then Mac computers, browsing bulletin boards, and making his own games. As an adult, he worked at Microsoft on the Encarta CD-ROM before being hired by Google, where he worked on Gmail, co-founded Google Meet, and created Google Ventures' Design Sprint process. Today, he's a venture capitalist and consultant for start-ups, as well as a writer.

But if Jake was an early adopter and booster of the upsides of technology, he was also early in sensing its not-so-positive side effects. Twelve years ago, unhappy with the pull his smartphone was exerting on him, he decided to curb its distractions. He continues to use this distraction-free phone today.

Today on the show, I talk to Jake about what motivated him to change his relationship with his phone over a decade ago and what steps he took to do so, including how and why he lives life without a web browser or email app on his phone. We get into what realizations about work and life Jake has gotten from having a distraction-free phone, why he doesn't think using tools like Screen Time or a dumbphone are always the best solutions to reducing the phone itch, and how he also cuts down on distractions on his desktop computer.

Resources Related to the Podcast

Make Time: How to Focus on What Matters Every Day by Jake Knapp and John Zeratsky
Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days by Jake Knapp
AoM Podcast #450: How to Make Time for What Really Matters Every Day With John Zeratsky
AoM Podcast #972: Down With Pseudo-Productivity: Why We Need to Transform the Way We Work With Cal Newport
AoM Article: The Complete Guide to Breaking Your Smartphone Habit
AoM Article: 5 Concrete Ways to Develop a Healthier Relationship With Your Phone (No Blocking or Deleting Apps Required!)
AoM Podcast #420: What Makes Your Phone So Addictive & How to Take Back Your Life
Freedom app
How We Feel app
Light Phone II
Time Timer

Connect With Jake Knapp

Jake's website
Visit our Substack for transcript, show notes, and more: https://designbetterpodcast.com/p/kate-aronowitz

Few designers have an instinct for business fundamentals. Those that do are able to position design as a competitive advantage for a business and pave the way for design teams to collaborate more effectively. Kate Aronowitz is one of those rare birds. Kate has held high-level design roles at LinkedIn, Wealthfront, Facebook, and eBay, and today she's at GV (formerly known as Google Ventures), where she helps early-stage companies find their footing. We speak to Kate about the arc of her career, about entrepreneurship as a designer (and why there aren't more designer-founders), as well as some stories from the early days of Facebook, like how she was one of the first designers to teach Mark Zuckerberg (or as Eli's 8-year-old son calls him, Zerk Merkerbergen) about human-centered design.

Bio

Kate Aronowitz is a design executive who has built her career empowering teams at some of Silicon Valley's most iconic companies. In addition to her role leading GV's operations team, Kate coaches GV portfolio companies on cross-functional design processes, scale, product development, and management strategy. Kate has built world-class design teams at eBay, LinkedIn, Facebook (now Meta), and Wealthfront. She joined the first user experience team at eBay before taking her experience to LinkedIn, where she started the user research team. As Facebook's first design executive, Kate grew the organization from 20 to 200, establishing multidisciplinary design teams in front-end engineering, user research, content strategy, and communication design.

***

Subscribe to DB+ to get episodes a week early and ad-free. Plus, every month, you're invited to exclusive AMAs (Ask Me Anything) with big names in design and tech, from companies like Nike, Netflix, and the New York Times, who will answer your questions directly. Early bird subscribers get 50% off for the first three months.
Visit designbetter.plus to learn more and subscribe. *** Visiting the links below is one of the best ways to support our show: Methodical Coffee: Roasted, blended, brewed, served and perfected by verified coffee nerds
Sanyin Siang is Derek's amazing featured guest this week! Sanyin shares highlights from her journey, how to accept positive affirmations and constructive criticism as data points in your life, the importance of being generous, and how to be vulnerable.

Sanyin helps leaders launch and create value by focusing on mindset, behavioral change, and team and culture building. Sanyin is a CEO Coach, Advisor, Author, the Executive Director of Duke University's Fuqua/Coach K Center on Leadership & Ethics (COLE), and a Professor with its Pratt School of Engineering.

The COLE center is a leadership laboratory that engages all of Duke's Daytime MBA students and convenes high-level think tank gatherings to explore today's complex leadership opportunities and challenges.

Sanyin coaches C-suite executives and is in the original cohort of Marshall Goldsmith's 100 Coaches. She is an advisor for GV (formerly Google Ventures), Duke Corporate Education, and the Sports Innovation Lab. Her thought leadership has appeared in Forbes, Fortune, The Wall Street Journal, and CNN. She has more than 1 million LinkedIn followers. She is a LinkedIn 2017 & 2018 Top 10 Influencer and a 2018 Thinkers50 On the Radar.

Sanyin's board service has included The Emily K Center, The Museum of Life & Science, and Duke Children's Hospital & Health Center. She is a Sr. Advisor with Dan Ariely's Center for Advanced Hindsight and a faculty member with StoryLab at Duke. She has spoken to audiences from the White House to Global Sports Management and Owners Summits.

Prior to Duke, Sanyin worked at the American Association for the Advancement of Science (AAAS), the world's largest federation of scientific and engineering societies, and publisher of Science.
Her initiatives explored the ethical, social, and legal implications of technological advances before they became reality.

Her book The Launch Book: Motivational Stories for Launching Your Idea, Business, or Next Career uses behavioral science principles to help readers build the mindset for addressing major change.

Sanyin received a BSE in Biomedical Engineering and an MBA from Duke University.

Order "The Launch Book": https://www.amazon.com/gp/product/B074JC5L9V/ref=dbs_a_def_rwt_bibl_vppi_i0
In this episode of the Giant Robots Smashing Into Other Giant Robots podcast, hosts Will Larry and Victoria Guido discuss the intricacies of product design with thoughtbot's Senior Designers, Rami Taibah and Ferdia Kenny. They delve into the newly launched Product Design Sprint Kit by thoughtbot, which is designed to streamline and enhance product development. Ferdia and Rami explain how the kit aims to compress the design process into a focused five-day sprint, allowing teams to move from idea to user-tested prototype efficiently. They discuss the genesis of the kit, its components, and the rationale behind making it openly available. Towards the end of the episode, the conversation shifts towards the broader implications of design in product development, the iterative nature of design sprints, and the value of user feedback in guiding product decisions. Rami and Ferdia share real-world examples where product design sprints led to significant pivots or refinements in product strategy, emphasizing the critical role of user testing in uncovering genuine user needs versus presumed functionalities. Follow Rami Taibah on LinkedIn (https://www.linkedin.com/in/ramitaibah/). Follow Ferdia Kenny on LinkedIn (https://www.linkedin.com/in/ferdiakenny/). Follow thoughtbot on X (https://twitter.com/thoughtbot) or LinkedIn (https://www.linkedin.com/company/150727/). Become a Sponsor (https://thoughtbot.com/sponsorship) of Giant Robots! Transcript: WILL: This is the Giant Robots Smashing Into Other Giant Robots podcast, where we explore the design, development, and business of great products. I'm your host, Will Larry. VICTORIA: And I'm your co-host, Victoria Guido. And with us today are Rami Taibah, Senior Designer at thoughtbot, and Ferdia Kenny, Senior Designer at thoughtbot, here to talk to us about the newly released Product Design Sprint Kit from thoughtbot. Ferdia and Rami, thank you for joining us. 
Why don't you introduce yourselves a little bit, tell us a little bit about each of your background while we get started? FERDIA: I'm Ferdia. I'm a product designer at thoughtbot. I've been with the company for nearly three years now. I'm based in Dublin in Ireland, but I'm from the West Coast of Ireland. Happy to be on the podcast. It's my first time coming on, so that'll be a new experience. RAMI: Yeah, so I'm Rami Taibah, and I am also a senior designer at thoughtbot for nearly two years. I'm also from the West Coast, like Ferdia, but I didn't move. I'm still where I'm from [laughs]. VICTORIA: Yeah, so just to get us warmed up here, why don't you tell us something interesting going on in your lives outside of work you want to share with the group? FERDIA: For me, I'm trying to do a bit of traveling at the moment. So, one of the benefits, obviously, of working with thoughtbot is that we are a fully remote company. As long as we're kind of staying roughly within our time zones, we can kind of travel around a little bit. So, I'm actually in France at the moment and going to Spain in March. So yeah, I'll be working from a couple of different spots, which is really cool and a lot of fun. RAMI: Yeah, it's pretty cool. I always see Ferdia, like, having these meetings in, like, these different locations. Just a few months ago, you were in Italy, right? FERDIA: Yeah. Yeah [laughs], that's right, yeah. RAMI: Yeah. So, for me, well, first of all, I got a new baby, new baby girl, exactly on New Year's Day, so that's interesting, going back home every day and seeing how they evolve very quickly at this age. Another thing is I've been doing a lot of Olympic weightlifting. It's probably one of the consistent things in my life since COVID. I was a CrossFitter. I got out of that, thankfully. But coming back into, like, after quarantine, weightlifting seemed like a good choice because it doesn't have the social aspect of CrossFit, and I can just do it on my own. 
WILL: How is your sleep? RAMI: I'm a heavy sleeper, and I feel guilty about it, so no problems here [laughs]. WILL: Yeah, that was one thing I'm still trying to recover from–sleep. I love my sleep. And so, I know some people can do with little sleep, but I like sleep. And so, I'm just now recovering, and we're almost two years since my baby boy, so [chuckles]... RAMI: Yeah, I'm a heavy sleeper. And I tell my wife, like, we have this understanding, like, if you ever need anything from me besides...because she has to be up for, like, breastfeeding, just kick me. I'll wake up. I'll do whatever you need [laughs]. WILL: That's awesome. VICTORIA: So, my understanding is that if you want to get better at any sport, if you get better at deadlifting, that will help you progress in your sport pretty much. That's my [laughs] understanding. I don't know if you all feel that way as well. RAMI: Oh, I never heard that. But I do know that these three, like, three or four basic lifts just basically boosts you in everything else, like, deadlifts, back squats. And what was the third one? Bench press, I guess. FERDIA: And pull-ups as well, I think, is a compound exercise. I just hate like this. I look for an excuse to skip them, so...[chuckles] VICTORIA: Yeah, the four essential exercises, but it doesn't mean that they're fun, right? FERDIA: [chuckles] VICTORIA: Yeah. And then, Will, I heard you were also training for a new activity, the 5k. WILL: Yeah, I'm going to run a 5k with my best friend. He's coming into town. So, I'm excited about it. I've always tried to do running, but my form was horrible, and I'll get injured, tried to do too much. And I think I finally figured it out, taking it slow, stretching, making sure my form is correct. So, it's been good. I've enjoyed it. And it's interesting looking at what I'm doing now versus when I first started. 
And I was like, whoa, like, when I first started, I couldn't even run a mile, and I'd be out of breath and dying and just like, ah, and then now it's like, oh, okay, now I'm recovered, and I can walk it off. So, one thing it's taught me is just consistent, being consistent because I feel like with working out and running, you have this, like, two-week period that it's just hard. Everything hurts. Your body is aching. But then after that, your body is like, okay, you're serious. Okay, then, like, I can adjust and do that. And then once you get over that two weeks, it's like, oh, okay, like, still, like, sometimes I still push it and get sore, but for the most part, my body is like, okay, I get it. Let's do this. And then now, compared to before, now I'm just like, I can't stop because I don't want to go back through that two weeks of pain that I started at, at the very beginning. So, yeah, it's been a very good journey. I don't know how far I'm going to go with it. I don't know if I'm going to go a full marathon or a half marathon. I will increase it and do multiple races, but yeah, I don't know how far I'm going to go with it. VICTORIA: Well, it's interesting. It reminds me how, like, anytime you do something new, you're forming new neural pathways in your brain, then you can get in a routine, and it becomes easier and easier every time you do it. So, I'm going to try to relate this back to our Product Design Sprint Kit. It's like a set of exercises you can learn how to do that might be difficult at first, but then it becomes a part of the way that you work and how you build products, right? So, why don't you tell me a little bit about it? Like, what is it? What is the product design kit that you just came out with? FERDIA: The PDS kit or the Product Design Sprint Kit it was something that I'd kind of been playing around with in investment time for a while, and then spoke to Rami about it a couple of months ago, and he got on board. 
And it really accelerated what we were doing. And it was basically, like, a product design sprint is a known process in design and product design and product development. I think it was started by Google. And, essentially, the concept is that you can take an idea that you have for something new and, in five days, go from that idea to creating something that can be user tested, and so getting real kind of validated feedback on your idea. Yeah, so try to do it in a compressed timeframe. That's why it's called a sprint. So, you're trying to do it within five days. And the concept for kind of creating a kit that we could share to people beyond thoughtbot was that we tend to repeat a lot of the same instructions in each sprint, so we're running very similar exercises. The outcomes are slightly different, obviously, depending on the customer, but the exercises themselves are pretty similar. So, the [inaudible 06:42] kind of when we're talking to the customer are often very much the same. And we just thought that we get a lot of inquiries from start-ups, I think probably maybe even more so in Europe, before they're funded and looking kind of for the first step. Like, what can they do? So, a lot of them, if they're not in a position to, say, pay for some of our design team to come on with them and run a sprint with them, we thought it'd be cool to be able to give them, well, you know, this is something free that you can run yourself with your team and will kind of get you on the ladder. It will hopefully give you something that you can then take to an investor or somebody that could potentially fund a kind of bigger sprint or maybe even an MVP build. WILL: Let me ask you this: Why is design so important? So, if I'm a developer, or a CTO, or a CEO of whatever, why should I be an advocate for design? RAMI: Well, over here at thoughtbot, we do a lot of iterative design. I think that's a key factor that we should take into consideration. 
With iterative design, it's the idea of designing something based on a validation or based on a user and doing it quickly and testing it to get feedback from the user or from the market and adjust from there, instead of just designing something in, like, a silo and releasing it after six months and then discovering that you went off course four months ago. And that will cost you a lot of time, a lot of money, a lot of agony, I guess [laughs]. And it just generally will become a very frustrating process. I've seen clients before thoughtbot where they come in and they've been working on this thing for six months, and they're just not releasing and pushing the release for month on month just because the CEO does not feel like it's at par with what he's using on, like, everyday apps. And he's, like, looking at, oh, I want to look like Instagram, or feel like Instagram, or feel like whatever they like when, in reality, products don't evolve that way. And Instagram has already, I don't know, 12 years of development and design behind it. And you can't possibly expect your app that you're launching for your startup to feel the same, look the same, and all that stuff. That's why design is important. So, you just discover early on that you are on the right path and always correcting course with different design techniques, including the PDS. FERDIA: What you're talking about there just de-risks a lot of stuff for people when they're trying to create something new. You could have the, you know, a really, really impressive product under the hood that can do a lot of really technical stuff. But if it's very hard to use, or if it's very hard to kind of tap into that magic that you've built on the development side, people just won't use it, and you won't be able to generate the revenue you want. So yeah, the user experience and kind of the design around that is really important to get people actually using your product. VICTORIA: Yeah, I can relate to what you all have said. 
I've talked with founders before, who they maybe have a lot of experience in the industry and the problem that they are trying to solve. They think I know what it should look like. I just need developers to build it. But the activities you described about the product design sprint and creating something where you can go out and test that theory, and then incorporate that feedback into your product, and doing it within five days, it seems like a really powerful tool to be able to get you on the right path and avoid hundreds of thousands of dollars of development spend, right? FERDIA: Yeah, 100%, yeah. And, like, a typical outcome for a product design sprint will never be a fully polished, like, perfect design. That's just...it's not realistic. But what you will hopefully have by the end of that five days is you will know, okay, these are, like, five or six things that we're doing right, and these are things we should keep going with. And maybe here are three or four things that we thought users would like, or potential customers would like, and we are actually wrong about those. So, we need to change those things and maybe focus on something else. So, as Rami said, design is an iterative process that is like your first iteration. But getting that feedback is so helpful because, as Rami said, if you spend six months developing something and figure out that 4 of the ten things that you built weren't needed or were wrong, or customers just didn't want them, that's a really, really expensive exercise. So, a design sprint, kind of if you're to do them on a continuous basis or every couple of months, can be a really helpful way to check in with users to make sure what you're committing your resources to is actually going to benefit them in the long run. RAMI: Yeah. And I would also like to add, like, one of the outputs of a design sprint is a prototype. To me, I'm always like, seeing is believing. 
It's just better to have a prototype as a communication tool within the team with clients, with customers, with users, instead of having, like, a document or even just wireframes. It just doesn't really deliver what you're trying to do, like a prototype. FERDIA: Yeah, 100%, Rami. And, like, on the prototype, like, a good comparison that people, if they're not in product development, might have seen it's like if you're building a house, like yourself, Victoria, a lot of architects will give you two-dimensional plans. And for people that aren't in the building industry, plans can be difficult to read or difficult to visualize what those actually look like. But if you can give someone a 3D representation of the house, you know, they can see, oh yeah, this is what it's going to kind of look like and what it's going to feel like. And the prototype that Rami is talking about gives you exactly that. So, it's not just this is our idea; it's, this is actually what the thing could look like, and what do you think of that? So yeah, it's definitely a valuable output. VICTORIA: We're having this debate about whether or not we need a designer for our renovation project. And I'm very much pro [laughs] designer. And maybe that's from my background and being in software development and, like, let's get an expert in here, and they will help us figure it out [laughs], and then we'll make less mistakes and less expensive mistakes going forward. So, I think there's a lot of analogies there. So, this product design sprint is a service that we offer at thoughtbot as well, right? We do workshops and meetings together with the client, and you all have this idea to record the videos and put all the content out there for free. So, I'm curious how that conversation went within management at thoughtbot and how did the idea really get started and get some traction going. 
FERDIA: The benefit of the Product Design Sprint Kit what you get out of it won't replace, say, doing a product design sprint with thoughtbot because you will have expert product designers or developers in the room with you to kind of share their ideas and their experience. So, the output you're going to get from running a sprint with thoughtbot will be more beneficial, definitely. But what we were trying to, I suppose, cater for was people that fall in the gap, that they're not quite ready to bring thoughtbot on board, or they don't have enough funding to bring thoughtbot on board to do a product design sprint, or a longer discovery sprint, or something like that. But we want to be able to give those people in kind of the software community something actionable that they can actually take and use. So, the first three days, I think, of the Product Design Sprint Kit will be really, really valuable to people. It'll really help them identify the problem that they're trying to solve and then to come up with a lot of different solutions and to try to pick one of those. And probably where it's going to be a bit more challenging if you don't have experience in design or in development will be around the prototype, which Rami had spoken about. You can kind of do some offline things, and there are ways to test things without, say, a high-fidelity prototype, but those high-fidelity prototypes, again, are something that could be helpful. But thoughtbot has always had an approach of kind of giving stuff for free to the community, either open source or just letting people, yeah, letting people learn from our resources and from what we know. And so, yeah, this is just a way to, hopefully, cater to people that we currently can't work with for a variety of reasons but that this is something that they could maybe use in the meantime. MID-ROLL AD: Are you an entrepreneur or start-up founder looking to gain confidence in the way forward for your idea? 
At thoughtbot, we know you're tight on time and investment, which is why we've created targeted 1-hour remote workshops to help you develop a concrete plan for your product's next steps. Over four interactive sessions, we work with you on research, product design sprint, critical path, and presentation prep so that you and your team are better equipped with the skills and knowledge for success. Find out how we can help you move the needle at tbot.io/entrepreneurs. WILL: So, can you break down...you said it's five days. Can you walk me through, like, each day? And, like, what experience do I need to have? Because I know, I've tried to get into Figma sometimes, and it's not easy. It's a pain at times. You're trying to maneuver and stuff like that. So, what do I have to do? Like, do you show me how to use Figma? Do you give me a template with Figma? Like, how do you help me with those things? And I know Miro and those things. So, like, walk me through each step of the sprint. RAMI: Yeah, well, I mean, Figma and Miro are just tools that became popular, I guess, after COVID. Design sprints used to be physical, in-the-same-room sprints. You would get the clients or the stakeholders in a room and do all that stuff. But Figma, FigJam, and, you know, kind of...I don't know if this was part of their, like, product thinking, but it kind of allowed doing full-on design sprints in their tools. So, the first step or the first day would be, like, the understanding day, where basically we gather information about the product, the users, what's out there, and just come up with a general plan on how to go forward. And the second day would be the divergent day, where we just look at what's out there and come up with these crazy ideas, kind of, like, a brainstorming thing but in a more inclusive, I guess, way and in a more organized way. So, you don't have people shouting over each other.
Like, being anonymous is also important on this day, so nobody really knows what you're doing or saying. It's just ideas, to remove bias. Then, we'd have a converge day, where we take all these ideas and consolidate them, which will be an input into the prototype phase. And the last day is the test phase. I mean, on each of these days you could...have a full podcast. VICTORIA: I'm curious about when you're testing and when you're, like, I'll say thoughtbot is a global company, right? And so, there's lots of different types of users and groups that might be wanting to use your app. I'm thinking, you know, sometimes, in particular, some of the applications I've been looking at are targeting people who maybe don't have an iPhone. They maybe have lower income or less means and access to get products and services. So, how does your design sprint talk to designing for different types of communities? FERDIA: I think that's a great question, Victoria. I would say the first thing on it is that we'd often get a lot of people with a startup idea, and they would come in and say, "You know, this app could be used by everybody. So, like, we have kind of no beachhead market or no target market. Like, this would be great for the whole world." That's a very nice thought to have if it is something that could potentially be used by everyone. But we would generally say you should pick a smaller niche to try to establish yourself in first and hit a home run basically with that niche first, and then kind of grow from there. We would normally say to people, like, again, this is going back to what Rami said about the iterative process: if at the end of the five days, you've picked the wrong beachhead market and it doesn't hit home with them, that's fine. You can just do another sprint next week or next month on a different kind of subsection of the market. So, I think picking a fairly niche sector of the market is a good starting point.
You then run your product design sprint with that niche in mind and try to talk to five users from that niche. And, generally, we say five because, generally, if you have fewer than five people contributing, you probably won't get enough data. You know that you could...if you only test with two people, you probably wouldn't get a thorough enough data set. And then, normally, once you go over five, you kind of start seeing the patterns repeating themselves. You get kind of diminishing returns, I guess, after five. So, that would generally be the approach. Try to identify your beachhead market, the one you want to go into first, and then you will try to talk to five people, generally from the founding team's network, that match the criteria of that beachhead market. And, in some ways, just the final point, I guess, is the fact that you have to pull them from your network is actually beneficial: it kind of makes you narrow down and pick a niche market that's accessible to you because you know people in it. RAMI: And maybe if you don't know anybody, then maybe you're in the wrong industry. FERDIA: Yeah. Great point. Great point because, yeah, it makes it a lot easier. It's nice to have loads of industries that you could go into, but it makes it so much easier if the founding team have contacts in an industry. Yeah, it makes a big difference. WILL: Yeah, I was going through the different days and kind of what you were talking about. So, like, one day is brainstorming, then converge, and then prototyping, and user testing kind of on that last day. It seems like it's completely laid out. Like, you're giving away all the keys except the experience of the actual designer. It seems like it's all laid out. Was that the goal, to, like, really have it fully laid out? Hey, you can do this from point A to point B, and this is what it looks like. Is that something that you're...because that's what it looks like from my experience with designers and stuff.
And if that's the case, what was your reasoning behind that, to give it away? For someone, like you said, like a startup, they can do this because you pretty much laid it all out. I'm not a designer, and I don't claim to be, but it looks like I can do this from what you laid out. RAMI: Well, first of all, like, at thoughtbot, we're really big into open source, and open source is not always just development. It can be these kinds of things, right? It's not a trade secret. It's not something we came up with. We maybe evolved it a little bit from Google, I think it was Google Ventures, but we just evolved it. And, at the end of the day, it's something that anybody can do. But, actually, taking the output from it and making it happen is something that we do as thoughtbot. Like, okay, you have a prototype. That's great. You tested it, but okay, now we want to make it happen. If you can make it happen, then great, but the reality is that a lot of people can't, and that's why there are, like, a gazillion agencies out there that do these things. So, the reasoning, I guess, and Ferdia can expand on it, is, like, if somebody takes this and comes up with a great prototype and feels confident that they actually want to develop this idea, who else would be better than thoughtbot, who actually gave them the keys to everything? FERDIA: Yeah, 100%, Rami. Yeah, it's essentially just helping people get on the first rung of the product development ladder with fewer barriers to entry, so you don't have to have a couple of thousand dollars saved up to run a sprint. This kind of gives you a really, really low entry point. And I guess there's another use case for it where you would often have potentially founders or even companies that want to release a new product or feature. And they might reach out to thoughtbot because they want to develop something, and they're very sure that this is what we want to develop.
And, you know, maybe they don't want to engage with a product design sprint or something like that if they think they know their market well enough. And this could be a handy tool just to say to them, "Okay, if you can go away, take this free resource for a week, run a product design sprint with your team, and come back to us and tell us that nothing has changed, you know that you've correctly identified the right market and that you've validated your theories with them," then we can kind of jump into development from there. But yeah, it can be a good way, I suppose, to show the value of doing a product design sprint. As I said, a lot of people come in, and they have great ideas, and they can be fairly certain that this is going to work. But a product design sprint is really, really valuable to validate those before you dive into building. VICTORIA: And can you give us an example from your experience of a client who went through a product design sprint and decided to pivot maybe their main idea and go in a different direction? FERDIA: I'm not sure off the top of my head, Victoria, if I can pick one that pivoted in a completely different direction, but definitely, like, some of the clients that we worked with on the Fusion team in thoughtbot ended up changing direction or changing the customer that they were going after. So, some people might have had an idea in their head of who they wanted to tackle and might have had a particular, say, feature prioritized for that person. And through the product design sprint, we were able to validate that, actually, this feature is not that important. This other feature is more important, and it's more important to a different group than kind of what you initially thought. That would happen fairly regularly on a product design sprint. 
Like, I think if you look at the potential outcomes, one being that everything's exactly as you thought it was and you can proceed as planned, or the opposite end of the spectrum where nothing is as you thought it was and, you know, you kind of have to go back to the drawing board, it's very rare that you're on either end of those after a product design sprint. Most of the time, you're somewhere in the middle. You've changed a few things, and you're able to keep a few things, and that's kind of normally where they land. So, I would say nearly every customer that we've done a product design sprint with has changed some things, but never kind of gone back to the drawing board and started from scratch. RAMI: It's usually prioritization and just understanding what to do and also, like, getting into the details of how to do it. That's where the value comes in. But, like, completely pivoting from a food delivery app to, I don't know, NFTs [laughs] never really happened. VICTORIA: Yeah, and it doesn't have to necessarily be a big pivot, but I'm looking for, like, a real-world example, like, maybe you're building an e-commerce site for a plant marketplace or something like that. RAMI: Yeah. Well, we had a self-help app where they already had the app in the market. It was a progressive web app, and they were really keen on improving this mood tracker feature. But then we did a product design sprint, and they had a bunch of other features, and that exercise kind of reprioritized them. And the mood tracker ended up not being released in the first version of the actual mobile app because we were also developing a native app. VICTORIA: Gotcha. So, they were pretty convinced that this was an important feature, that people wanted to track their mood in their app. And then, when they went through and tested it, users were actually like, "There's this other feature that's more important to me."
FERDIA: One example of another client that we worked with, which was kind of a wellness app: they wanted it to feel like a friend in your pocket. So, they were looking at ways to integrate with WhatsApp so that you'd get notifications via WhatsApp. So, they would kind of be, like, friendly messages to people, as if it's your friend, you know, texting you to check in. And that was kind of an idea going into it, and users did not like that at all. Like, they really didn't like that. So, we ditched that [inaudible 25:49] completely. But, again, that could have been something that they would have spent a long time developing to try to implement, and then to have users say this, it would have been a very, very costly waste of time. So, we figured that out in a few days, which was a money saver for the team. VICTORIA: And it must be pretty emotional to have that feedback, right? Like, it's better to get it early on so that you don't invest all the money and time into it. But as a founder, I'm sure you're so passionate about your ideas, and you really think you have the answers from your experience, most likely. So, I'm curious if there's any kind of emotional management you do with clients during this product design sprint. FERDIA: I think it definitely is emotional. I think people, as I said, often come in with very strong opinions of what they feel will work. And it might even be a product that they specifically want, or they might be one of those potential users. And I actually think, say, for an agency like thoughtbot designing something like that, if we felt that they were going down the wrong path, that could actually be quite difficult to raise. But because of product design sprints, you are user-testing it. The founders are hearing this feedback from the horse's mouth, so to speak. They're hearing it directly from potential customers. So, it's a lot more black and white.
Now, sometimes, it might still be the case that a founder then doesn't want to proceed with that idea if it's not kind of going to be the way that they wanted it to be, and that's fair enough as well. But the feedback, as I said, it tends not to be that the idea is completely scrapped. It just means that you move a couple of things around. As Rami said, you deprioritize some things and prioritize other things for the first version, and that tends to be the outcome of it. VICTORIA: Are the users always right, or can you sometimes have an idea that persists despite the early feedback from users? RAMI: Interesting question. Like, I see the parallel you're drawing with "the customer is always right," yeah. But the thing is, like, that's just my opinion, I think. We test with users, and we kind of observe how they react to it and how they use the prototype. So, it's not like an opinion session or, like, a focus group where they're actually giving...a user can say something and do something else or react in a different way. But yeah, it's a fine line, I think. But I would be really surprised if ten users would agree on something and say something, and their behavior also would reflect that, and we wouldn't pick up on it. VICTORIA: Yes, I like the distinction you're making between what they say and then what the behavior shows, right? FERDIA: I think something important there as well, like you'll often hear it in design communities, is that you should listen to the feedback from customers but maybe not the solutions that they're proposing. Because, at the end of the day, like, thoughtbot have experts in product design and product development, so we want to figure out from the user's perspective what they want to achieve and maybe what their problems are, but not necessarily take into account or just, I suppose, not necessarily just follow exactly what they say the solution should be. You're kind of looking for the problems and the things that they're struggling with.
You're trying to pick those up rather than just do the solution that the customer is telling you. And you'll see that in a lot of startups as well, you know, it's the famous Henry Ford quote, you know, "If I'd listened to my customers, I'd have designed a faster horse." Sometimes, you need to listen to the problem, and the problem is getting from A to B faster, and then you come up with a solution for that rather than the solution that's been recommended to you. WILL: I want to pivot a little bit and ask you both, why did you get into design? FERDIA: I actually did architecture in university, and there were aspects of that I liked. Funnily enough, it's a fairly similar process to designing for software, in that it's an iterative approach. You're given a brief, and you kind of take a concept forward. But then, when you apply for planning, you have to make changes. And when you kind of put [inaudible 29:41], you make changes. So, you're constantly, I suppose, designing iteratively. And then I got into startups and was kind of wearing a lot of different hats in that startup sort of world. But the product was the one area that always kind of got me excited. So, you know, if you tried to make a sale with a particular customer and they didn't want to go over something, like, coming home and trying to figure out, okay, how can I fix that problem with the product so that next time, when I go to a customer, they'll say, "Yes"? That was kind of what always gave me the adrenaline. So yeah, comparatively, between architecture and software, the turnaround times in software are so much faster that I think it's more enjoyable than architecture. You can kind of really see progress. A product design sprint is five days; you can kind of take something a long way, whereas designing a building is a bit slower. But it's always kind of been some area of interest. Well, what about you, Rami?
RAMI: Well, I wanted to become a hacker, but I ended up being a designer [laughs]. No, really, like, in middle school, I really wanted to be a hacker and kept looking up what it is. Like, I'd seen it in all these movies, really cool, and I wanted to understand, like, how it's done online. And I saw, like, everybody is talking about this weird, little thing called the command line. And it turns out, like, all these, quote, unquote, "hacking tutorials" were done on Linux. So, I started looking into Linux and got into Linux. From there, I started blogging about Linux, and then I just really got into technology. I was in marketing by then; I was a marketing major. So, that got me into blogging about, like, Linux and open source, which kind of triggered in my head, okay, I need to maybe pivot to a different career path. So, I did a master's degree in information management. Over there, I stumbled into design. The information management school that I was in, like, it was an interdisciplinary school with, like, design, coding, and business all mixed in. So, I stumbled into design there. VICTORIA: That's how you all got started. And now you've put this product out there pretty recently. I'm curious if you have thought about how you would measure the success of this effort. So, how do you know that what you put out there in the Product Design Sprint Kit is helping people or achieving the goals that you had originally set out to? FERDIA: Initially, Victoria, we obviously like to see the view counts going up on YouTube, and we're always open to feedback. So, like, at the end of each video and in the resources and stuff, we've got contact-us kind of links and stuff. So, if people have feedback on how we could make it better or more useful, that would be really, really welcome. So, do feel free to reach out to us.
And kind of the ultimate success metric for us would be to have somebody come to us in the future and say, "Oh, we used that Product Design Sprint Kit that you produced before, and we either got funding or, you know, we got so much value out of it that we'd like to do a full product design sprint or an MVP build, or something like that." And the equivalent that we would kind of have a lot of in thoughtbot would be, say, gems in development, where we would get people reaching out and saying, "We use that gem all the time. We know about thoughtbot because of that." That kind of is a way to establish trust with potential customers. So, we're hoping that this is somewhat of an equivalent on the design side. WILL: Oh, it's been great chatting with both of you about design and what you came up with here. I really like it. I'm going to look more into it. VICTORIA: Yes. Thank you both for joining us. And I had one question. So, the sprint is the short term. What would be, like, a product design marathon? Like, what's [chuckles] the big picture for people who are building products? Maybe that's a silly question, but... RAMI: No, it's not, I mean, but I would guess it's actually building the product, having a successful product in the market, and iterating on it for years and years. VICTORIA: Yeah. So, it's a one-week sprint, and you could do it over and over again for many years just to fine-tune and really make sure that your product is meeting the needs of the people you were hoping to reach. Wonderful. All right. Well, thank you both so much for joining us. WILL: You can subscribe to the show and find notes along with a complete transcript for this episode at giantrobots.fm. If you have questions or comments, email us at hosts@giantrobots.fm. You can find me on Twitter @will23larry. VICTORIA: And you can find me on Twitter @victori_ousg. This podcast is brought to you by thoughtbot and produced and edited by Mandy Moore. Thanks for listening. See you next time.
AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. Or you can email us at referrals@thoughtbot.com with any questions.
Jake Knapp and John Zeratsky are the authors of best-selling books Sprint and Make Time. They have helped more than 300 teams design new products and bring them to market, including those at YouTube, Gusto, One Medical Group, and Slack. Jake and John are co-founders of the venture capital firm Character, where they support startups with capital and sprints. Previously, they were operating partners at Google Ventures and, before that, design leaders at Google, where John worked on Google Ads and YouTube and Jake helped build Gmail and co-founded Google Meet. In our conversation, we discuss:• “Busy bandwagon” and “infinity pools”• Creating one “highlight” each day• Their four-part framework for productivity• How to use the calendar to design your day• How creating friction can help you avoid distractions• Tips on creating a distraction-free phone• Strategies for managing email and distractions• The importance of reflecting on the day and making time for meaningful work• Design sprints—Brought to you by:• Sidebar—Accelerate your career by surrounding yourself with extraordinary peers• Whimsical—The iterative product workspace• WorkOS—The modern API for auth and user identity—Find the transcript for this episode and all past episodes at: https://www.lennyspodcast.com/episodes/. Today's transcript will be live by 8 a.m. 
PT.—Where to find Jake Knapp:• X: https://twitter.com/jakek• LinkedIn: https://www.linkedin.com/in/jake-knapp/• Website: https://jakeknapp.com/—Where to find John Zeratsky:• X: https://twitter.com/jazer• LinkedIn: https://www.linkedin.com/in/johnzeratsky/• Website: https://johnzeratsky.com/—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• X: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—In this episode, we cover:(00:00) About Jake and John(04:10) Recording the audiobook for Make Time(06:06) What people often get wrong when trying to become more productive(11:24) The busy bandwagon and infinity pools(15:22) Real talk: Jake and John's productivity levels(20:10) The four-part framework for getting more done: Highlight, Laser, Energize, Reflect(25:15) Step 1: Highlight(28:08) Designing your day with a calendar(30:52) The Groundhog Day mentality(35:10) Tactical advice for implementing the highlight method(39:30) An example of a failed highlight(48:08) Step 2: Laser(51:12) Creating intentional friction to avoid distractions(57:28) Curating a distraction-free phone(01:07:58) Resetting expectations and slowing your inbox(01:14:51) Systems over willpower(01:18:14) Managing email distractions(01:18:49) Step 3: Energize(01:22:05) Step 4: Reflect(01:26:30) Introduction to Sprint—Referenced:• Make Time: How to Focus on What Matters Every Day: https://www.amazon.com/Make-Time-Focus-Matters-Every/dp/0525572422• Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days: https://www.amazon.com/Sprint-audiobook/dp/B019R2DQIY• Make Time blog: https://maketime.blog/• Make Time blog on X: https://twitter.com/maketimeblog• Character: https://www.character.vc/• Google Ventures: https://www.gv.com/• Character Labs: https://www.character.vc/labs• Strategies for becoming less distracted and improving focus | Nir Eyal (author of Indistractable and Hooked): 
https://www.lennyspodcast.com/strategies-for-becoming-less-distracted-and-improving-focus-nir-eyal-author-of-indistractable-and/• Groundhog Day on Prime Video: https://www.amazon.com/Groundhog-Day-Bill-Murray/dp/B000SP1SH6• Reclaim.ai: https://reclaim.ai/• Feed Blocker for LinkedIn: https://chromewebstore.google.com/detail/feed-blocker-for-linkedin/eikaafmldiioljlilngpogcepiedpenf• The Lord of the Rings: https://www.amazon.com/Lord-Rings-J-R-R-Tolkien/dp/0544003411• MagSafe charger: https://www.amazon.com/Apple-MHXH3AM-A-MagSafe-Charger/dp/B08L5NP6NG/• Nanit app: https://www.nanit.com/pages/nanit-app• Arianna Huffington's Phone Bed Charging Station: https://www.amazon.com/Arianna-Huffingtons-Charging-Station-Walnut/dp/B0799ZG1LY• Cell Phone Lock Box with Timer: https://www.amazon.com/Portable-Android-Self-Discipline-Achieve-Addiction/dp/B0CG8V4YG3?th=1• The 4-Hour Workweek: Escape 9-5, Live Anywhere, and Join the New Rich: https://www.amazon.com/4-Hour-Workweek-Escape-Live-Anywhere/dp/0307465357• The Economist: https://www.economist.com/• Odysseus: https://www.britannica.com/topic/Odysseus• Mailman: https://www.mailmanhq.com/• Future: https://www.future.co/• Notion: https://www.notion.so/• Miro: https://miro.com/—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.—Lenny may be an investor in the companies discussed. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
In this episode, I sat down with Ankit Jain, Founder and CEO of Infinitus Systems. Founded in 2019, the company automates tedious and time-consuming administrative processes across the healthcare ecosystem with AI to free up clinician time and improve access, adherence, and affordability. Infinitus is the only solution that automates the entire phone call process to complete benefit verifications. Infinitus returns that hard-to-find information so providers have the most comprehensive view of treatment coverage and can confidently move forward with a patient treatment plan. The company has raised money from Google Ventures, Kleiner Perkins, Coatue, and a number of other healthcare luminaries. Ankit and I discussed: 1) The company's “spark story,” and how a Google digital assistant for restaurant, spa and salon reservations was pivoted to fill an unmet market need in healthcare, 2) The importance of freeing up time spent on administrative workflows for clinicians to serve patients, 3) Applications of AI in healthcare to solve for areas of unmet need
Chris Hutchins, avid life hacker, financial optimizer and host of the top ranked podcast, “All the Hacks,” joins the Journey To Launch podcast to discuss how he amassed 14 million credit card points and upgrades his life without spending a fortune. We also talk about the dangers of optimizing too much, intentionally spending and hacking your personal finances, analysis paralysis, and more. Before jumping into the content creation world, Chris was the Head of New Product Strategy at Wealthfront, an investor at Google Ventures, co-founded Milk (acquired by Google) and built an organization called LaidOffCamp. He has been featured in the New York Times, Wall Street Journal, and CNBC. In this episode you'll learn about: How selling a business does not always mean making a profit Why falling down rabbit holes of information changed Chris's life What being in a job Chris hated taught him about Financial Independence Travel hacking, using money to have experiences you can uniquely have now, memory dividends + more Other Links Mentioned in episode: Order my new book, Your Journey to Financial Freedom and get 2 months free of Freedom Quest & access to the FIRE STARTER Course for FREE Episode 356: Cultivating Influence, Mastering Networking + Leveraging People Skills To Improve Our Financial Lives With Vanessa Van Edwards Mastering the Secret Language of Charismatic Communication with Vanessa Van Edwards on All the Hacks Podcast Overcoming Fear, Processing Failure, and Why Having a Plan B Holds You Back with Matt Higgins on All the Hacks Podcast Episode 284- The Simple Path to Wealth and All You Need to Know About Index Investing with JL Collins Episode 346: Index Funds, How To Build Wealth & Financial Independence Wisdom With “The Godfather of FI,” JL Collins The Simple Path to Wealth: Your road map... 
by Collins, J L (Amazon.com: https://www.amazon.com › Simple-Path-Wealth-finan…) Episode 262- Die with Zero & Get All You Can From Your Money and Your Life w/ Bill Perkins Die With Zero: Net Fulfillment Over Net Worth on All the Hacks Podcast Episode 339: Disney World vs. Retiring Early- How Much Our Disney World Trip Cost + Book Updates Travel Freely: Points & Miles 4+ (App Store: https://apps.apple.com › app › travel-freely-points-miles) CardPointers: Credit card rewards made easy (https://cardpointers.com) Get The Budget Bootcamp for FREE Check out my personal website here. Join The Weekly Newsletter List Leave me a voicemail– Leave me a question on the Journey To Launch voicemail and have it answered on the podcast! YNAB – Start managing your money and budgeting so that you can reach your financial dreams. Sign up for a free 34-day trial of YNAB, my go-to budgeting app, by using my referral link. What stage of the financial journey are you on? Are you working on financial stability or work flexibility? Find out with this free assessment and get a curated list of the 10 next best episodes for you to listen to depending on your stage. Check it out here! Connect with Chris: AllTheHacks.com Instagram: @ChrisHutchins Twitter: @Hutchins Connect with me: Instagram: @Journeytolaunch Twitter: @JourneyToLaunch Facebook: @Journey To Launch Join the Private Facebook Group Join the Waitlist for My FI Course Get The Free Jumpstart Guide Get The Budget Bootcamp for FREE
Sam Schillace is deputy CTO and corporate vice president at Microsoft. Prior to working at Microsoft, Sam started a company called Writely, which was acquired by Google and became the foundation of what today is Google Docs. While at Google, Sam helped lead many of Google's consumer products, including Gmail, Blogger, PageCreator, Picasa, Reader, Groups, and more recently Maps and Google Automotive Services. Sam was also a principal investor at Google Ventures, has founded six startups, and was the SVP of engineering at Box through their IPO. In this episode, we discuss:• The journey of building Google Docs• The importance of taking risks, embracing failure, and finding joy in your work• The importance of asking “what if” questions vs. “why not”• Why convenience always wins• How, and why, Sam stays optimistic• Inside Microsoft's culture• Why you should solve problems without asking for permission• Early-career advice• Why “pixels are free” and “bots are docs”—Brought to you by Teal—Your personal career growth platform | Vanta—Automate compliance. 
Simplify security | Ahrefs—Improve your website's SEO for free—Find the full transcript at: https://www.lennyspodcast.com/how-to-be-more-innovative-sam-schillace-microsoft-deputy-cto-creator-of-google-docs/—Where to find Sam Schillace:• LinkedIn: https://www.linkedin.com/in/schillace/• Newsletter: https://sundaylettersfromsam.substack.com/—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• X: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—In this episode, we cover:(00:00) Sam's background(03:45) The first Google Docs file(06:45) Disruptive innovation(10:11) First-principles thinking(11:00) Recognizing disruptive ideas(13:17) Examples of first-principles thinking(15:46) The power of optimism(19:47) Sam's motto: Get to the edge of something and f**k around(21:53) User value and laziness(24:31) People are lazy (and what to do about it)(28:36) Building Google Docs(31:06) The evolution of Google Docs(37:15) Finding product-market fit(39:52) The future of documents(44:57) The value of playing with technology(47:58) Taking risks and embracing failure(49:21) Thinking in the future(53:48) Finding joy in your work(01:01:20) Just do the best you can(01:02:34) The transformational power of AI(01:09:27) Advice for approaching AI(01:13:07) The culture at Microsoft(01:16:51) Closing thoughts(01:17:32) Lightning round—Referenced:• Google Docs began as a hacked-together experiment, says creator: https://www.theverge.com/2013/7/3/4484000/sam-schillace-interview-google-docs-creator-box• Edna Mode: https://disney.fandom.com/wiki/Edna_Mode• Sergey Brin's profile on Forbes: https://www.forbes.com/profile/sergey-brin/• People Who Were No Smarter Than You: https://medium.com/thrive-global/people-who-were-no-smarter-than-you-4e1c88c3fee6• Nat Torkington (O'Reilly Media): https://www.oreilly.com/people/nathan-torkington/• How Tesla Has Shaken (Not Stirred) Established Carmakers—and Why It Really Matters: 
https://www.forbes.com/sites/jenniferdungs/2021/04/23/how-tesla-has-shaken-not-stirred-established-carmakersand-why-it-really-matters/• First Principles: Elon Musk on the Power of Thinking for Yourself: https://jamesclear.com/first-principles• Ashton-Tate: https://en.wikipedia.org/wiki/Ashton-Tate• Learning by Doing: https://www.linkedin.com/pulse/learning-doing-sam-schillace• Kevin Scott on LinkedIn: https://www.linkedin.com/in/jkevinscott/• How do we make sense of all of this?: https://sundaylettersfromsam.substack.com/p/how-do-we-make-sense-of-all-of-this• Steve Newman on LinkedIn: https://www.linkedin.com/in/stevescalyr/• Eric Schmidt on LinkedIn: https://www.linkedin.com/in/eric-schmidt-02158951/• Michael Arrington on X: https://twitter.com/arrington• TechCrunch: https://techcrunch.com/• “Hello, Computer” scene from Star Trek: https://www.youtube.com/watch?v=hShY6xZWVGE• Writely—Process Words with your Browser: https://techcrunch.com/2005/08/31/writely-process-words-with-your-browser/• Satya Nadella on LinkedIn: https://www.linkedin.com/in/satyanadella/• Poor Charlie's Almanack: The Essential Wit and Wisdom of Charles T. 
Munger: https://press.stripe.com/poor-charlies-almanack• Calvinism: https://en.wikipedia.org/wiki/Calvinism• This Quote from Seth Godin Could Change How You Think About Pursuing Your Passion: https://friedchickenandsushi.com/blog/2021/7/5/this-quote-from-seth-godin-could-change-how-you-think-about-pursuing-your-passion• AI isn't a feature of your product: https://sundaylettersfromsam.substack.com/p/ai-isnt-a-feature-of-your-product• Introducing Gemini: our largest and most capable AI model: https://blog.google/technology/ai/google-gemini-ai/• Invisible Cities: https://www.amazon.com/Invisible-Cities-Italo-Calvino/dp/0156453800• The Wasp Factory: https://www.amazon.com/WASP-FACTORY-NOVEL-Iain-Banks/dp/0684853159• Where Good Ideas Come From: The Natural History of Innovation: https://www.amazon.com/Where-Good-Ideas-Come-Innovation/dp/1594485380/• Slow Horses on Apple TV+: https://tv.apple.com/us/show/slow-horses/umc.cmc.2szz3fdt71tl1ulnbp8utgq5o• Monarch: Legacy of Monsters on Apple TV+: https://tv.apple.com/us/show/monarch-legacy-of-monsters/umc.cmc.62l8x0ixrhyq3yaqa5y8yo7ew?mttn3pid• Scavengers Reign on Max: https://www.max.com/shows/scavengers-reign/50c8ce6d-088c-42d9-9147-d1b19b1289d4• 2023 Mustang Mach-E: https://www.ford.com/suvs/mach-e/• Boccalone Salumeria (now closed) on X: https://twitter.com/boccalone—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.—Lenny may be an investor in the companies discussed. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
Chris Hutchins is an avid lifehacker, a financial optimizer, and the host of the top-ranked podcast All The Hacks, where he shares his quest to upgrade his life without spending a fortune. Chris has been featured in the New York Times, the Wall Street Journal, and on CNBC. Previously, he was the head of new product strategy at Wealthfront after they acquired his company, Grove. Before that, he was an investor at Google Ventures and co-founded Milk, which Google acquired. He is also a member of my community, The Lab. Chris moves fast, digs in deep, and applies a scientific lens to his work. He has a genuinely experimental mindset. In this episode, you'll learn:• How Chris grew his podcast• How he built and leveraged relationships with high-profile creators like Tim Ferriss and Kevin Rose• How you can build similar relationships yourself• The experiments he's run to grow his podcast Full transcript and show notes Chris's Website / Podcast / Twitter / Instagram / Facebook / LinkedIn *** CONNECT