“The buzz in LLMs now is all about training data.” Andy Edmonds has an MS in Human Factors and Applied Psychology from Clemson University. He started his working career as a webmaster in 1995 and has since developed a huge breadth of expertise in UX, e-commerce, web analytics, online experimentation, data science, information retrieval, and software development methods at tech companies including Microsoft, eBay, RedBubble, Adobe, Facebook, and LinkedIn. He is now a product manager at Quora. He also holds nine patents. Links: Andy Edmonds on LinkedIn; Tabtopia on GitHub; Anthropic blog. Topics include: experimental design, cognitive science, applied psychology, data science, HCI (human-computer interaction), LLMs (large language models), and Quora. The post Episode #71: Andy Edmonds first appeared on Linguistics Careercast.
Roderic Crooks is an associate professor in the Department of Informatics at the University of California, Irvine. His research examines how the use of digital technology by public institutions contributes to the minoritization of working-class communities of color. His current project explores how community organizers in working-class communities of color use data for activist projects, even as they dispute the proliferation of data-intensive technologies in education, law enforcement, financial services, and other vital sites of public life. He has published extensively in HCI, STS, and social science venues on topics including political theories of online participation, equity of access to information and media technologies, and document theory. He is the author of Access Is Capture: How Edtech Reproduces Racial Inequality, published in 2024 by the University of California Press (https://www.ucpress.edu/books/access-is-capture/paper). About Access Is Capture: Racially and economically segregated schools across the United States have hosted many interventions from commercial digital education technology (edtech) companies that promise their products will rectify the failures of public education. Edtech's benefits are not only trumpeted by industry promoters and evangelists but also vigorously pursued by experts, educators, students, and teachers. Why, then, has edtech yet to make good on its promises? In Access Is Capture, Roderic N. Crooks investigates how edtech functions in Los Angeles public schools that exclusively serve Latinx and Black communities. These so-called urban schools are sites of intense, ongoing technological transformation, where the tantalizing possibilities of access to computing meet the realities of structural inequality. Crooks shows how data-intensive edtech delivers value to privileged individuals and commercial organizations but never to the communities that hope to share in the benefits.
He persuasively argues that data-drivenness ultimately enjoins the public to participate in a racial project marked by the extraction of capital from minoritized communities to enrich the tech sector. Links: Amazon listing for Access Is Capture; University of California Press page for Access Is Capture; the author's personal website; talks and events from Civics of Technology featuring Roderic N. Crooks; an article co-authored by Crooks discussing intersectional themes in Feminist Formations.
New to Atlanta? We've got you covered. In this episode, current HCI master's students join the Hive to share their real experiences moving to Atlanta for grad school — from navigating the city to defining what success in grad school means to them to building a new community. Whether you're packing your bags or just curious about what's ahead, tune in for tips, stories, and encouragement! Also, a very hearty welcome to the class of 2027! Our guests today: Parnian Vafa - https://www.linkedin.com/in/parvaf3830/ and Umme Ammara - https://www.linkedin.com/in/umme-ammara/. Hosted by: Manuni Dhruv - https://www.linkedin.com/in/manunidhruv/ and Rajath Pai - https://www.linkedin.com/in/rajath-pai-k/. Edited by: Manuni Dhruv.
Introduction with Microsoft MVP Manfred Helber
Broadcom recently announced vSAN ESA support for SAP HANA. Erik Rieger is Broadcom's Principal SAP Global Technical Alliance Manager and Architect, so I invited him on the show to go over what this actually means and why it is important for customers! For more details, make sure to check: SAP note 3406060 – SAP HANA on VMware vSphere 8 and vSAN 8; SAP HANA and VMware support pages; SAP HANA on HCI powered by vSAN; vSphere and SAP HANA best practices. Disclaimer: The thoughts and opinions shared in this podcast are our own/guest(s)', and not necessarily those of Broadcom, VMware by Broadcom, or SAP.
This podcast is a recording of a webinar presented by Oonagh Gilvarry, Chief Research Officer at HCI. In this webinar, Oonagh discusses HIQA's one-year overview report on the monitoring and inspection of International Protection Accommodation Services (IPAS) centres.
In this episode of the GI Podcast, co-hosts Toben Racicot and Sid Heeg sit down with postdoctoral fellow Dr. Eugene Kukshinov to discuss his work in social virtual reality and the concept of presence within VR environments. Bio: Dr. Eugene Kukshinov is a media psychology and HCI postdoctoral researcher. He received his PhD in Media and Communication from Temple University, USA. His focus is on understanding the psychological processing of media and technology, including immersive experiences and their interrelationships in different contexts such as (social) VR, video games, and storytelling. Links: Dr. Kukshinov's website: https://eugenekukshinov.com/ Kukshinov, E. (2024). It's (not) me: Dynamic Nature of Immersive Experiences in Video Game Play. Human Studies. https://doi.org/10.1007/s10746-024-09768-9
This podcast is a recording of a webinar presented by Oonagh Gilvarry, Chief Research Officer at HCI. In this webinar, Oonagh discusses the Health Act 2007 (Care and Welfare of Residents in Designated Centres for Older People) (Amendment) Regulations 2025 (S.I. No. 1 of 2025) which comes into effect on March 31, 2025, bringing significant updates to governance, infection prevention and control, residents' rights, criteria for persons-in-charge, and visiting. Understanding these changes is crucial for compliance and quality care. For more information, contact info@hci.care.
Will AI replace UX professionals or simply evolve the user experience field? Steering Engineering Podcast hosts Brent Stewart and Danny Brian welcome guest Will Grant to explore how generative AI is reshaping UX design, including developments such as hyperpersonalization, adaptive interfaces, and AI agents redefining human-computer interaction (HCI). As UX potentially becomes increasingly automated, democratized, and integrated with AI-driven design, software engineering leaders will have to navigate the new challenges and opportunities. Will Grant is a user experience (UX) professional based in the United Kingdom with more than 20 years of experience, including as both a practitioner and strategic director. Will has an extensive background in overseeing the design, accessibility, and usability of web and mobile products that have reached a global audience of more than a billion users. With a background rooted equally in deep computer science and a love of simple, usable design, his career has spanned founding startups to consulting for small and midsize enterprises, right up to global brands.
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today we're joined by Victor Dibia, principal research software engineer at Microsoft Research, to explore the key trends and advancements in AI agents and multi-agent systems shaping 2025 and beyond. In this episode, we discuss the unique abilities that set AI agents apart from traditional software systems – reasoning, acting, communicating, and adapting. We also examine the rise of agentic foundation models, the emergence of interface agents like Claude with Computer Use and OpenAI Operator, the shift from simple task chains to complex workflows, and the growing range of enterprise use cases. Victor shares insights into emerging design patterns for autonomous multi-agent systems, including graph and message-driven architectures, the advantages of the “actor model” pattern as implemented in Microsoft's AutoGen, and guidance on how users should approach the “build vs. buy” decision when working with AI agent frameworks. We also address the challenges of evaluating end-to-end agent performance, the complexities of benchmarking agentic systems, and the implications of our reliance on LLMs as judges. Finally, we look ahead to the future of AI agents in 2025 and beyond, discuss emerging HCI challenges, their potential for impact on the workforce, and how they are poised to reshape fields like software engineering. The complete show notes for this episode can be found at https://twimlai.com/go/718.
Replit is one of the most visible and exciting companies reshaping how we approach software and application development in the Generative AI era. In this episode, we sit down with its CEO, Amjad Masad, for an in-depth discussion on all things AI, agents, and software. Amjad shares the journey of building Replit, from its humble beginnings as a student side project to becoming a major player in Generative AI today. We also discuss the challenges of launching a startup, the multiple attempts to get into Y Combinator, the pivotal moment when Paul Graham recognized Replit's potential, and the early bet on integrating AI and machine learning into the core of Replit. Amjad dives into the evolving landscape of AI and machine learning, sharing how these technologies are reshaping software development. We explore the concept of coding agents and the impact of Replit's latest innovation, Replit Agent, on the software creation process. Additionally, Amjad reflects on his time at Codecademy and Facebook, where he worked on groundbreaking projects like React Native, and how those experiences shaped his entrepreneurial journey. We end with Amjad's view on techno-optimism and his belief in an energized Silicon Valley. Replit Website - https://replit.com X/Twitter - https://x.com/Replit Amjad Masad LinkedIn - https://www.linkedin.com/in/amjadmasad X/Twitter - https://x.com/amasad FIRSTMARK Website - https://firstmark.com X/Twitter - https://twitter.com/FirstMarkCap Matt Turck (Managing Director) LinkedIn - https://www.linkedin.com/in/turck/ X/Twitter - https://twitter.com/mattturck (00:00) Intro (01:36) The origins of Replit (15:54) Amjad's decision to restart Replit (19:00) Joining Y Combinator (30:06) AI and ML at Replit (32:31) Explain Code (39:09) Replit Agent (52:10) Balancing usability for both developers and non-technical users (53:22) Sonnet 3.5 stack (58:43) The challenge of AI evaluation (01:00:02) ACI vs. HCI (01:05:02) Will AI replace software development? 
(01:10:15) If anyone can build an app with Replit, what's the next bottleneck? (01:14:31) The future of SaaS in an AI-driven world (01:18:37) Why Amjad embraces techno-optimism (01:20:36) Defining civilizationism (01:23:11) Amjad's perspective on government's role
In this episode we're chatting with Enrico Panai about the elements of the digital revolution: how AI transforms data into information; HCI; the importance of knowing the tech as a tech philosopher; why ethicists should diagnose, not judge; quality and making pasta; whether ethics is really a burden for companies or whether you can run faster with ethics; not stealing people's lives; and finding a Marx for the digital world.
FELLOWSHIP WITH GOD - PART 3 BY REV. DR. JOSEPH BAAH OBENG
FELLOWSHIP WITH GOD - PART 4 BY REV. DR. JOSEPH BAAH OBENG
Many SMBs and remote-based enterprises are facing soaring infrastructure costs. On this episode of DevOps Dialogues, host Mitch Ashley is joined by StorMagic's Chief Product Officer Bruce Kornfeld, to look at the evolving landscape of virtualization and HCI (Hyper-Converged Infrastructure). Their discussion covers: - The current state of virtualization technology and its market implications - Challenges businesses face with traditional virtualization solutions - Advantages of alternative HCI and virtualization technologies compared to VMware - StorMagic's role in providing innovative virtualization and cost-effective solutions with its SvHCI platform - Future trends in HCI and virtualization technology
Fellowship with God part 1 by Rev. Dr. Joseph Baah Obeng
Fellowship with God part 2 by Rev. Dr. Joseph Baah Obeng
FASTING - PART 1 BY LADY PS. ADWOA OBENG
FASTING - PART 2 BY LADY PS. ADWOA OBENG
This episode of Nine to Thrive HR features guest Sarah Devereaux, an HCI faculty member, leadership coach, and HR expert, exploring the themes of communication, conflict resolution, and organizational culture. We get real as she discusses the year ahead and a mission to lead with respect and open-mindedness, if we should choose to accept it. The discussion highlights the need for clear, respectful conversations, adaptive change management, and fostering environments that prioritize collaboration and innovation. Lastly, Sarah reflects on her experiences moderating HCI's first return to an in-person conference in 2024, emphasizing the importance of building lasting connections.
Sam and Wouter interview Harry Goldstein, a researcher in property-based testing who works in PL, SE, and HCI. In this episode, we reflect on random generators, the find-a-friend model, interdisciplinary research, and how to have impact beyond your own research community.
In our fast-paced world, the traditional definition of success, often tied to material wealth and societal status, has become outdated. Elizabeth Hamilton-Guarino, a renowned life coach and founder of The Best Ever You Network, challenges this narrow perspective and introduces a more holistic approach to achieving true fulfillment. Her latest book, The Success Guidebook, is a comprehensive guide that empowers readers to redefine success for themselves. It's not just about climbing the corporate ladder or accumulating possessions; it's about cultivating inner peace, gratitude, and a sense of purpose. By focusing on the Ten Factors of Success—a set of behaviors consistently exhibited by high-achievers—readers can overcome obstacles, harness their potential, and build a life of bold and brave possibilities. Through inspiring stories of individuals who embody these principles, The Success Guidebook demonstrates that world-class success is attainable for anyone, regardless of their background or circumstances. Whether you're seeking personal growth, professional advancement, or simply a happier life, this book offers practical tools and actionable strategies to help you achieve your goals. Websites: www.BestEverYou.com, www.ElizabethGuarino.com, www.Compliance4.com. About The Author: In 2008, Elizabeth Hamilton-Guarino closed the door to her office to think about her life. When she opened it, she walked through, leaving behind an almost two-decade career in the financial services industry in order to open the doors for the Best Ever You Network. Today, Best Ever You is a revolutionary multimedia brand and platform with millions of fans and followers around the world. She is a tireless champion of others and believes in the need for the individual light within to raise the collaborative power of us and we.
Elizabeth Hamilton-Guarino is a globally recognized author, speaker, and founder of the Best Ever You Network, a platform dedicated to helping individuals live their best, most authentic lives. With a mission to empower others through positive change, Elizabeth is the bestselling author of The Change Guidebook - How to Align Your Heart, Truths, and Energy to Find Success in All Areas of Your Life; The Success Guidebook - How to Visualize, Actualize, and Amplify You; and Percolate: Let Your Best Self Filter Through, co-authored with Dr. Katie Eastman. Her work, which spans self-help, personal and professional growth and development, well-being, and success, is praised for its practical and inspiring approach to life's most challenging transitions. With a passion for helping others unlock their potential, Elizabeth's influence extends through her widely followed Best Ever You Podcast, which has millions of downloads, and the popular YouTube series Real Life that she co-hosts with Dr. Katie Eastman. As a Hay House and HCI author and a frequent speaker, Elizabeth inspires audiences to embrace change, root themselves in gratitude, and live authentically. She is also dedicated to social change, leading efforts to help one million people percolate peace worldwide. Elizabeth Hamilton-Guarino and Dr. Katie Eastman are the co-founders of the Percolate Peace Project, a transformative social movement aimed at cultivating hope, healing, and harmony on a global scale. Their Percolate Peace Project aspires to impact 1,000,000 people, helping individuals implement the principles of peace in their daily lives and foster meaningful connections within their communities. Through their leadership, they continue to champion well-being, inner growth, and social harmony, promoting peace on both personal and collective levels. Elizabeth is also a frequent speaker, and her work has been featured in places like Good Housekeeping, Daily Om, The Maine Women's Conference, U.S.
News and World Report, Forbes, Thrive, Medium, and more. Her popular “4-4-4 Newsletter” is sent out each week to thousands of subscribers. When Elizabeth was 25, she was diagnosed with life-threatening food allergies and has nearly lost her life on multiple occasions, once while six months pregnant with her son Cam. These food allergies and stories are documented in her books and children's books. These experiences have helped Elizabeth become a food allergy advocate, helping people stay alive and thrive with food allergies. Elizabeth works with multiple organizations, including FAACT and the MedicAlert Foundation. Elizabeth and her husband, Peter, have been married for more than twenty-five years and have four adult sons, three rescued cats, and two dogs. They can often be found in Maine in their gardens, in the pool, raking leaves, or, depending on the season, in Myrtle Beach, South Carolina. You can learn more and sign up for the e-newsletter at elizabethguarino.com and by visiting BestEverYou.com. About the show: Ash Brown is a force to be reckoned with in the world of motivation and empowerment. This multi-talented American is a gifted producer, blogger, speaker, media personality, and event emcee. Her infectious energy and passion for helping others shine through in everything she does. Ash Said It, Ash Does It: * AshSaidit.com: This vibrant blog is your one-stop shop for a peek into Ash's world. Dive into exclusive event invites, insightful product reviews, and a whole lot more. It's a platform that keeps you informed and entertained. * The Ash Said It Show: Buckle up for a motivational ride with Ash's signature podcast. With over 2,000 episodes already under her belt and a staggering half a million streams worldwide, this show is a testament to Ash's impact. Here, she chats with inspiring individuals and tackles topics that resonate deeply. What Makes Ash Special? Ash doesn't just preach motivation; she lives it. Her strength lies in her authenticity.
She connects with her audience on a genuine level, offering real-talk advice and encouragement. She doesn't shy away from the challenges life throws our way, but instead, equips you with the tools to overcome them. Here's what sets Ash apart: * Unwavering Positivity: Ash Brown is a glass-half-full kind of person. Her infectious optimism is contagious, leaving you feeling empowered and ready to take on the world. * Real & Relatable: Ash doesn't sugarcoat things. She understands the struggles we face and offers relatable advice that resonates with listeners from all walks of life. * Actionable Strategies: This isn't just about empty inspirational quotes. Ash provides practical tips and strategies to help you translate motivation into action, turning your dreams into reality. So, if you're looking for a daily dose of inspiration, actionable advice, and a healthy dose of real talk, look no further than Ash Brown. With her infectious positivity and dedication to empowering others, she's sure to become your go-to source for making the most of life. ► Luxury Women Handbag Discounts: https://www.theofficialathena.... ► Become an Equus Coach®: https://equuscoach.com/?rfsn=7... ► For $5 in ride credit, download the Lyft app using my referral link: https://www.lyft.com/ici/ASH58... ► Review Us: https://itunes.apple.com/us/po... ► Subscribe: http://www.youtube.com/c/AshSa... ► Instagram: https://www.instagram.com/1lov... ► Facebook: https://www.facebook.com/ashsa... ► Twitter: https://twitter.com/1loveAsh ► Blog: http://www.ashsaidit.com/blog #atlanta #ashsaidit #theashsaiditshow #ashblogsit #ashsaidit® Become a supporter of this podcast: https://www.spreaker.com/podcast/the-ash-said-it-show--1213325/support.
Jeannette is joined by the fabulous Jennifer Grace to discuss her inspiring journey from a vibrant childhood filled with dance to becoming Miami's number one life coach, motivational speaker, and author. Jennifer talks about her unique approach to coaching, emphasising the importance of helping others recognise their own potential while navigating challenges like imposter syndrome. She also reflects on her experiences working with celebrities and female entrepreneurs, highlighting the different dynamics they face. KEY TAKEAWAYS Mindset is about choosing how to view situations, whether from a limited or growth perspective. Neutralising events allows individuals to detach from emotional responses and choose a more empowering viewpoint. Imposter syndrome affects people across all demographics, regardless of success. Normalising it and helping individuals recognise their achievements can shift their perspective and reduce its impact. It's essential to celebrate milestones and successes, no matter how small. This practice helps individuals appreciate their journey and prevents them from constantly moving the goalposts without acknowledging their accomplishments. The launch of a hybrid publishing company addresses the challenges authors face in traditional publishing. This model offers a transparent, supportive alternative that allows authors to retain a larger share of their profits while receiving necessary guidance and resources. BEST MOMENTS "I think the first thing I always try to do is normalise imposter syndrome, making people feel like, okay, there's not something wrong with me because I'm feeling this way." "The beauty is we get to choose. Are you going to have a limited mindset or a growth mindset?" "I think part of retreat needs to celebrate and have fun. People take sometimes this work way too seriously." "I had a complete breakdown with my own child... 
I used all the tools that I had in my tool bag to meditate and journal and have a new vision for him and I." This is the perfect time to get focused on what YOU want to really achieve in your business, career, and life. It's never too late to be BRAVE and BOLD and unlock your inner BRILLIANT. Visit our new website https://brave-bold-brilliant.com/ - there you'll find a library of FREE resources and downloadable guides and e-books to help you along your journey. If you'd like to jump on a free mentoring session just DM Jeannette at info@brave-bold-brilliant.com. VALUABLE RESOURCES Brave Bold Brilliant - https://brave-bold-brilliant.com/ Brave, Bold, Brilliant podcast series - https://podcasts.apple.com/gb/podcast/brave-bold-brilliant-podcast/id1524278970 ABOUT THE GUEST Jennifer Grace was named Miami's #1 Life Coach by New Times. She is a Hay House Author, Motivational Speaker, Radio Show Host, and Corporate Wellness Coach who has recently relocated to Nashville, TN. Jennifer's goal-oriented approach to mindfulness recently earned her the role of Prada's first-ever mindset coach. She also works as a mindset coach with corporations such as Facebook, Turner, HCI, Whycode, and EO (Entrepreneur Organization). She is the lead Train the Trainer for The Catalyst, a mindfulness and emotional intelligence training based on the Stanford University program Creativity in Business developed by Dr. Michael Ray. In 2019, her TEDx talk, “Why Mindfulness Should be Just as Important as Math in Our School Systems,” debuted on TED.com. Drawing from her inspiration as a mother, Jennifer Grace redesigned her mindset curriculum for kids and teens in 2016. She has been featured on several morning shows, including NBC 6, The Balancing Act on Lifetime TV, San Diego Living, and CT Style; in Huffington Post, Ocean Drive, and Mindbodygreen; and on radio on The Jenny McCarthy Show and Elvis Duran.
As the founder and CEO of JG Enterprises, and Raven and Grace Press, Jennifer has built her business to over 7 figures in annual revenue by empowering people worldwide to maximize their potential. Her inspirational reach continues to change lives. Learn more about Jennifer at www.jennifergrace.com OR www.ravenandgrace.com Instagram: @thejennifergrace or @ravenandgracepress ABOUT THE HOST Jeannette Linfoot is a highly regarded senior executive, property investor, board advisor, and business mentor with over 30 years of global professional business experience across the travel, leisure, hospitality, and property sectors. Having bought, run, and sold businesses all over the world, Jeannette now has a portfolio of her own businesses and also advises and mentors other business leaders to drive forward their strategies as well as their own personal development. Jeannette is a down-to-earth leader, a passionate champion for diversity & inclusion, and a huge advocate of nurturing talent so every person can unleash their full potential and live their dreams. CONTACT THE HOST Jeannette's linktree - https://linktr.ee/JLinfoot https://www.jeannettelinfootassociates.com/ YOUTUBE - https://www.youtube.com/@braveboldbrilliant LinkedIn - https://uk.linkedin.com/in/jeannettelinfoot Facebook - https://www.facebook.com/jeannette.linfoot/ Instagram - https://www.instagram.com/jeannette.linfoot/ Tiktok - https://www.tiktok.com/@jeannette.linfoot Podcast Description Jeannette Linfoot talks to incredible people about their experiences of being Brave, Bold & Brilliant, which have allowed them to unleash their full potential in business, their careers, and life in general.
From the boardroom tables of ‘big' international businesses to the dining room tables of entrepreneurial start-ups, how to overcome challenges, embrace opportunities and take risks, whilst staying ‘true' to yourself is the order of the day. Travel, Bold, Brilliant, business, growth, scale, marketing, investment, investing, entrepreneurship, coach, consultant, mindset, six figures, seven figures, travel, industry, ROI, B2B, inspirational: https://linktr.ee/JLinfoot
Best But Never Final: Private Equity's Pursuit of Excellence
Lloyd Metz, Doug McCormick, Sean Mooney, and operating partners Mary Rachide from ICV and Bob Hund from HCI delve into value creation in private equity. They discuss the role of operating partners in enhancing portfolio company growth, from due diligence to strategic execution. The episode covers aligning company vision with actionable strategies, the importance of stakeholder collaboration, and the balance between innovation and risk management. Mary and Bob share their experiences in driving transformation and the challenges and rewards of their roles in achieving exceptional outcomes. Episode Highlights: 1:25 - Introduction to value creation and the role of operating partners. 1:52 - Mary Rachide outlines her path to becoming an operating partner at ICV, focusing on strategy and execution. 3:33 - Bob Hund discusses his journey to HCI, emphasizing process improvement and risk mitigation. 6:45 - The integration of operating partners in due diligence and value creation planning. 15:18 - Managing stakeholder collaboration and communication within the private equity ecosystem. 37:39 - The rewarding aspects of being an operating partner, including working with entrepreneurs and seeing tangible results. 44:48 - Challenges of the role, such as prioritizing initiatives and navigating change management. For more information on the podcast, visit bestbutneverfinal.buzzsprout.com and embark on your journey to private equity excellence today. Visit us on LinkedIn at https://www.linkedin.com/company/best-but-never-final-podcast/ Visit us on Instagram at https://www.instagram.com/bestbutneverfinal/ For information on HCI Equity Partners, go to https://www.hciequity.com For information on ICV Partners, go to https://www.icvpartners.com For information on BluWave, go to https://www.bluwave.net
Jason expressed his dissatisfaction with the tipping culture in Miami and emphasized the importance of rewarding good behavior and correcting bad behavior. He also discusses the challenges faced by home builders due to higher mortgage rates and the importance of patience in the real estate market. Lastly, he shared his concerns about the upcoming US presidential election and the potential impact of individual votes, as well as his recent talk in Tampa and upcoming appearance at Global Citizen Week in Miami. Then Jason makes a presentation at the Family Mastermind conference where he discusses the power of inflation as a wealth-building tool for property investors. He explains his concept of "inflation-induced debt destruction" and how it benefits those with fixed-rate mortgages. He introduces his Hartman Comparison Index (HCI) to analyze housing affordability relative to other commodities. He argues that despite high prices, homes are actually more affordable when priced in gold or oil. Jason predicts continued low housing inventory and increasing demand as interest rates potentially decrease. He touches on immigration's impact on housing demand and the national debt. Overall, Jason remains bullish on real estate investing, regardless of political outcomes, due to ongoing inflationary pressures. #RealEstateInvesting #Inflation #HousingMarket #LeadershipLessons #EconomicOutlook #AffordabilityIndex #InterestRates #PopulationGrowth #GovernmentPolicy #FinancialLiteracy #WealthBuilding #MarketAnalysis #InvestmentStrategy #EconomicTrends #HousingInventory #MortgageRates #FamilyMastermind Key Takeaways: Jason's editorial 1:34 The tipping culture and rewarding good behavior 4:22 Builders remain optimistic 8:07 The coming election and Kamala's "Swipe of my pen" 10:53 The inner Circle town hall Jason Speaking at the Family Mastermind conference 13:58 Kudos to Family Mastermind's Matt Andrews 15:24 Inflation vs.
Deflation 23:04 The HCI and the Housing affordability crisis 25:36 National payment-to-income ratio and housing availability 30:36 Buying power and sensitivity 32:51 Adding "workers without papers" 36:33 Join our Mastermind Yacht Adventures to the British Virgin Islands https://familymastermindadventures.com/ ___________________________________________ I'm speaking at Global Citizen Week and as one of the speakers, I'm also excited to offer my network a few VIP passes—which means your access will be complimentary (usually priced at $1,500). However, space is limited, so don't miss out! Reserve Your VIP Pass https://globalcitizenweek.com/miami/local/ Taking place from October 31 to November 1 at the beautiful Hotel AKA Brickell in Miami's financial district, this event is an incredible opportunity to: Expand Your Network: Connect with other forward-thinking entrepreneurs, investors, and business owners who are equally focused on enhancing their global footprint. Engage in Strategic Conversations: Explore the latest trends in diversifying investments, optimizing tax strategies, and building a Plan B for global mobility. Learn and Optimize: Participate in expert-led workshops and discussions to discover new ways to protect your wealth, maximize business potential, and enhance your lifestyle. Who should attend? Entrepreneurs & Business Owners: Learn how to streamline your corporate structure and tax strategy to unlock new growth opportunities. High-net-worth Individuals: Discover strategies for protecting and growing your wealth globally. Investors: Find out about emerging markets and investment opportunities that can drive your financial independence. Those Seeking Global Citizenship: Learn how global citizenship can improve your quality of life with better health care, education, and security. 
To secure your spot, just register here: Reserve Your VIP Pass https://globalcitizenweek.com/miami/local/ Follow Jason on TWITTER, INSTAGRAM & LINKEDIN Twitter.com/JasonHartmanROI Instagram.com/jasonhartman1/ Linkedin.com/in/jasonhartmaninvestor/ Call our Investment Counselors at: 1-800-HARTMAN (US) or visit: https://www.jasonhartman.com/ Free Class: Easily get up to $250,000 in funding for real estate, business or anything else: http://JasonHartman.com/Fund CYA Protect Your Assets, Save Taxes & Estate Planning: http://JasonHartman.com/Protect Get wholesale real estate deals for investment or build a great business – Free Course: https://www.jasonhartman.com/deals Special Offer from Ron LeGrand: https://JasonHartman.com/Ron Free Mini-Book on Pandemic Investing: https://www.PandemicInvesting.com
In this intro portion, Jason talks with real estate investor Robert Helms. They discuss the current state of the market, opportunities, and the importance of pricing real estate in other assets. Jason also introduces an institutional real estate investor who shares insights on their strategies and the benefits they bring to individual investors. Jason and Robert emphasize the importance of long-term thinking and avoiding emotional decisions in real estate investing. The build-to-rent (BTR) trend is growing, with institutional players becoming more involved in the real estate market. Richard Ross, CEO of Quinn Residences, discusses the factors driving the demand for BTR homes, including a shortage of affordable housing, aging millennials, and the pandemic's impact on living preferences. He also highlights the increasing number of renters by choice and the potential for growth in the BTR sector. The chart shows that the BTR market share is still relatively small compared to traditional rental housing, but it's expected to grow significantly in the coming years due to various factors. 
#buildtorent #BTR #realestate #housing #rentalmarket #affordablehousing #millennials #pandemic #rentalhousing #singlefamilyhomes #apartment #investment #housingmarket #residentialrealestate #property #homeownership #renters #rent #rental #propertymanagement Key Takeaways: Jason's editorial 1:24 Eagles and RE Trends with Jason and Robert Helms 2:27 RE vs HCI 3:48 Richard Ross, institutional investors and macro trends Richard Ross interview 7:29 Bullish about SFH 11:14 Large Addressable Market 15:52 How much of the housing stock will be owned by institutional investors 17:45 Compelling sector Supply/Demand Dynamics 18:45 Doomers, shadow supply & demand 23:21 Migration trends & US single family permits by year
In this devotional message, titled "Love Like Jesus Did," we are encouraged to love like Jesus did: giving love to everyone, not with an expectation of getting it back, but without condition. This was aired on Radio HCI Today via the WeLove Radio App.
In this devotional message, titled "Prayer Lessons - Part 3," we are encouraged to be sincere in the place of prayer, as God already knows what is in our hearts. This was aired on Radio HCI Today via the WeLove Radio App.
Today, our special guest is Whitney Hess, Founder and Executive Coach of Vicarious Partners Inc. We discuss the power of bravery, vulnerability, and personal growth. Discover how to overcome fear, cultivate self-awareness, and embrace failure as a stepping stone toward success. With practical tips and inspiring stories, this podcast is a must-listen for anyone looking to live a more courageous and purposeful life. Highlights include: 0:00-4:25 - Whitney discusses her perspective on failure 04:26-8:17 - Whitney shares her backstory, including living on a sailboat 08:18-14:03 - The importance of being present and fully engaged with clients 14:04-19:48 - A closer look at the power dynamics in coaching relationships 19:49-24:12 - Whitney shares a personal anecdote 24:13-29:45 - More on coaching, UX, and the challenges facing the field 29:46-33:58 - Whitney's perspective on the risks of pursuing the management track 33:59-38:20 - Coaching dynamics and the importance of an opt-in relationship 43:03-46:26 - Brendan and Whitney highlight the importance of self-reflection Who is Whitney Hess? Whitney Hess is a coach, writer, and designer on a mission to put humanity back into business. She believes empathy builds empires, and she helps progressive, creative leaders design their careers and accelerate their missions. Her techniques help people gain self-awareness, identify blind spots, navigate obstacles, and bring their whole selves to their work. Whitney has been a user experience (UX) consultant for over a decade, hired to make technology easier and more pleasurable. She has been recognized for her work with the United States Holocaust Memorial Museum, the Martin Luther King Jr. Center for Nonviolent Social Change, Foundation Center, Seamless, Boxee, and WNYC. She is named as a co-inventor on a U.S. patent with American Express. 
Whitney is a two-time Carnegie Mellon University graduate with a Master's in Human-Computer Interaction and a Bachelor's in Professional Writing and HCI. She is a Certified Integral Coach through New Ventures West and a Professional Certified Coach (PCC) with the International Coaching Federation. She writes on her blog Pleasure & Pain, co-hosts the podcast Designing Yourself, and speaks at conferences and corporations worldwide. Find Whitney Here: Whitney Hess on LinkedIn Whitney Hess Website Whitney Hess Blog Vicarious Partners Inc. on LinkedIn Whitney Hess Email Subscribe to Brave UX Like what you heard and want to hear more? Subscribe and support the show by leaving a review on Apple Podcasts (or wherever you listen). Apple Podcast Spotify YouTube Podbean Follow us on our other social channels for more great Brave UX content! LinkedIn Instagram Brendan Jarvis hosts the Show, and you can find him here: Brendan Jarvis on LinkedIn The Space InBetween Website
Best But Never Final: Private Equity's Pursuit of Excellence
In this engaging episode of 'Best But Never Final,' hosts Lloyd Metz, Doug McCormick, and Sean Mooney welcome David Freed, a seasoned professional with extensive experience across the private equity, technology, and operational landscapes. Freed shares his journey from operating roles within portfolio companies to co-founding CP Energy Holdings, focusing on carbon reduction investments. The discussion delves into the strategic use of technology to drive business growth, the importance of cybersecurity, and the necessity of embracing technological advancements to stay competitive. Freed's insights offer valuable lessons on integrating technology with business strategy to achieve sustainable success in today's rapidly evolving market. Episode Highlights: 2:06 - Introduction of David Freed and his diverse background in technology, operations, and private equity. 6:19 - Freed's transition from Eight Rivers to HCI and the challenges of integrating technology in traditional industries. 16:55 - The role of technology in driving value creation and operational efficiency at MSI, a portfolio company. 26:11 - The defensive and offensive uses of technology in business, from cybersecurity to enabling growth. 44:30 - Freed's approach to evaluating and implementing technology in SMBs to fuel growth and innovation. 53:59 - The risks of ignoring technological advancements and the importance of proactive adoption for competitive advantage. For more information on Critical Point Holdings, go to https://cpeh.com/ For more information on the podcast, visit bestbutneverfinal.buzzsprout.com and embark on your journey to private equity excellence today. Visit us on LinkedIn at https://www.linkedin.com/company/best-but-never-final-podcast/ Visit us on Instagram at https://www.instagram.com/bestbutneverfinal/ For information on HCI Equity Partners, go to https://www.hciequity.com For information on ICV Partners, go to https://www.icvpartners.com For information on BluWave, go to https://www.bluwave.net
OpenAI DevDay is almost here! Per tradition, we are hosting a DevDay pregame event for everyone coming to town! Join us with demos and gossip! Also sign up for related events across San Francisco: the AI DevTools Night, the xAI open house, the Replicate art show, the DevDay Watch Party (for non-attendees), Hack Night with OpenAI at Cloudflare. For everyone else, join the Latent Space Discord for our online watch party and find fellow AI Engineers in your city. OpenAI's recent o1 release (and the Reflection 70b debacle) has reignited broad interest in agentic general reasoning and tree search methods. While we have covered some of the self-taught reasoning literature on the Latent Space Paper Club, it is notable that Eric Zelikman ended up at xAI, whereas OpenAI's hiring of Noam Brown and now Shunyu suggests more interest in tool-using chain of thought/tree of thought/generator-verifier architectures for Level 3 Agents. We were more than delighted to learn that Shunyu is a fellow Latent Space enjoyer, and invited him back (after his first appearance on our NeurIPS 2023 pod) for a look through his academic career with Harrison Chase (one year after his first LS show). ReAct: Synergizing Reasoning and Acting in Language Models (paper link) Following the seminal Chain of Thought papers from Wei et al. and Kojima et al., and reflecting on lessons from building the WebShop human e-commerce trajectory benchmark, Shunyu's first big hit, the ReAct paper, showed that using LLMs to "generate both reasoning traces and task-specific actions in an interleaved manner" achieved remarkably greater performance (less hallucination/error propagation, higher ALFWorld/WebShop benchmark success) than CoT alone. 
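To make the "interleaved reasoning traces and task-specific actions" idea concrete, here is a minimal sketch of a ReAct-style loop. It is not the paper's exact prompt format: the `llm_stub` policy and the `calculator` tool are invented stand-ins for a real language model and real tools.

```python
# Minimal ReAct-style agent loop (a sketch, not the paper's prompts).
# A real implementation would replace llm_stub with an LLM call that is
# prompted with the accumulated Thought/Action/Observation transcript.

def calculator(expr: str) -> str:
    """Toy tool: evaluate an arithmetic expression."""
    return str(eval(expr, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def llm_stub(transcript: str) -> str:
    # Hand-written stand-in policy: pick the next Thought/Action
    # based on what has happened so far in the transcript.
    if "Observation:" not in transcript:
        return "Thought: I need to compute 17 * 23.\nAction: calculator[17 * 23]"
    return "Thought: I have the answer.\nFinish[391]"

def react_loop(question: str, llm, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript)
        transcript += step + "\n"
        if "Finish[" in step:
            return step.split("Finish[")[1].rstrip("]")
        # Parse "Action: tool[input]", run the tool, and append the
        # observation, interleaving reasoning traces with actions.
        action = step.split("Action:")[1].strip()
        name, arg = action.split("[", 1)
        obs = TOOLS[name.strip()](arg.rstrip("]"))
        transcript += f"Observation: {obs}\n"
    return "no answer"

print(react_loop("What is 17 * 23?", llm_stub))  # -> 391
```

The point of the structure, as discussed in the episode, is that the "Thought" lines change only the context the model sees, not the environment itself.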
In even better news, ReAct scales fabulously with finetuning. As a member of the elite Princeton NLP group, Shunyu was also a coauthor of the Reflexion paper, which we discuss in this pod. Tree of Thoughts (paper link here) Shunyu's next major improvement on the CoT literature was Tree of Thoughts: Language models are increasingly being deployed for general problem solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference. This means they can fall short in tasks that require exploration, strategic lookahead, or where initial decisions play a pivotal role… ToT allows LMs to perform deliberate decision making by considering multiple different reasoning paths and self-evaluating choices to decide the next course of action, as well as looking ahead or backtracking when necessary to make global choices. The beauty of ToT is it doesn't require pretraining with exotic methods like backspace tokens or other MCTS architectures. You can listen to Shunyu explain ToT in his own words on our NeurIPS pod, but also from the ineffable Yannic Kilcher. Other Work: We don't have the space to summarize the rest of Shunyu's work; you can listen to our pod with him now, and we recommend the CoALA paper and his initial hit webinar with Harrison, today's guest cohost, as well as Shunyu's PhD Defense Lecture and his latest lecture covering a Brief History of LLM Agents. As usual, we are live on YouTube! 
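The propose, self-evaluate, and prune-or-backtrack loop described above can be sketched as a small breadth-first search, in the spirit of the ToT paper's BFS variant. This is a toy illustration only: in the paper a language model both proposes candidate "thoughts" and scores them, while here `propose` and `score` are hand-written stand-ins and the digit-sequence task is invented for the example.

```python
# Tree-of-Thoughts-style breadth-first search (toy sketch).
# Task (invented for illustration): build a sequence of digits whose sum
# hits a target. propose/score stand in for LM calls.

def propose(state):
    # Candidate next "thoughts": extend the partial sequence by one digit.
    return [state + [d] for d in range(1, 10)]

def score(state, target):
    # Heuristic value: closer partial sum to target is better (0 is best).
    return -abs(target - sum(state))

def tot_bfs(target, depth=3, breadth=2):
    frontier = [[]]  # start from the empty partial solution
    for _ in range(depth):
        # Expand every frontier state into candidate thoughts.
        candidates = [s for state in frontier for s in propose(state)]
        # Self-evaluate candidates and keep only the top-`breadth`,
        # pruning weak branches (the "deliberate" part of ToT).
        candidates.sort(key=lambda s: score(s, target), reverse=True)
        frontier = candidates[:breadth]
        for s in frontier:
            if sum(s) == target:
                return s
    return None

print(tot_bfs(15))  # -> [9, 6]
```

Swapping the breadth-first frontier for a depth-first stack with backtracking gives the paper's other search variant; nothing about the skeleton requires special pretraining, which is the point made above.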
Show Notes * Harrison Chase * LangChain, LangSmith, LangGraph * Shunyu Yao * Alec Radford * ReAct Paper * Hotpot QA * Tau Bench * WebShop * SWE-Agent * SWE-Bench * Tree of Thoughts * CoALA Paper * Related Episodes * Our Thomas Scialom (Meta) episode * Shunyu on our NeurIPS 2023 Best Papers episode * Harrison on our LangChain episode * Mentions * Sierra * Voyager * Jason Wei * Tavily * SERP API * Exa Timestamps * [00:00:00] Opening Song by Suno * [00:03:00] Introductions * [00:06:16] The ReAct paper * [00:12:09] Early applications of ReAct in LangChain * [00:17:15] Discussion of the Reflexion paper * [00:22:35] Tree of Thoughts paper and search algorithms in language models * [00:27:21] SWE-Agent and SWE-Bench for coding benchmarks * [00:39:21] CoALA: Cognitive Architectures for Language Agents * [00:45:24] Agent-Computer Interfaces (ACI) and tool design for agents * [00:49:24] Designing frameworks for agents vs humans * [00:53:52] UX design for AI applications and agents * [00:59:53] Data and model improvements for agent capabilities * [01:19:10] TauBench * [01:23:09] Promising areas for AI Transcript Alessio [00:00:01]: Hey, everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI. Swyx [00:00:12]: Hey, and today we have a super special episode. I actually always wanted to take like a selfie and go like, you know, POV, you're about to revolutionize the world of agents because we have two of the most awesome hiring agents in the house. So first, we're going to welcome back Harrison Chase. Welcome. Excited to be here. What's new with you recently in sort of like the 10, 20 second recap? Harrison [00:00:34]: LangChain, LangSmith, LangGraph, pushing on all of them. Lots of cool stuff related to a lot of the stuff that we're going to talk about today, probably. Swyx [00:00:42]: Yeah. Alessio [00:00:43]: We'll mention it in there. 
And the Celtics won the title. Swyx [00:00:45]: And the Celtics won the title. You got that going on for you. I don't know. Is that like floorball? Handball? Baseball? Basketball. Alessio [00:00:52]: Basketball, basketball. Harrison [00:00:53]: Patriots aren't looking good though, so that's... Swyx [00:00:56]: And then Shunyu, you've also been on the pod, but only in like a sort of oral paper presentation capacity. But welcome officially to the Latent Space pod. Shunyu [00:01:03]: Yeah, I've been a huge fan. So thanks for the invitation. Thanks. Swyx [00:01:07]: Well, it's an honor to have you on. You're one of like, you're maybe the first PhD thesis defense I've ever watched in like this AI world, because most people just publish single papers, but every paper of yours is a banger. So congrats. Shunyu [00:01:22]: Thanks. Swyx [00:01:24]: Yeah, maybe we'll just kick it off with, you know, what was your journey into using language models for agents? I like that your thesis advisor, I didn't catch his name, but he was like, you know... Karthik. Yeah. It's like, this guy just wanted to use language models and it was such a controversial pick at the time. Right. Shunyu [00:01:39]: The full story is that in undergrad, I did some computer vision research and that's how I got into AI. But at the time, I feel like, you know, you're just composing all the GAN or 3D perception or whatever together and it's not exciting anymore. And one day I just see this transformer paper and that's really cool. But I really got into language model only when I entered my PhD and met my advisor Karthik. So he was actually the second author of GPT-1 when he was like a visiting scientist at OpenAI. With Alec Radford? Swyx [00:02:10]: Yes. Shunyu [00:02:11]: Wow. That's what he told me. It's like back in OpenAI, they did this GPT-1 together and Ilya just said, Karthik, you should stay because we just solved the language. But apparently Karthik is not fully convinced. 
So he went to Princeton, started his professorship and I'm really grateful. So he accepted me as a student, even though I have no prior knowledge in NLP. And you know, we just met for the first time and he's like, you know, what do you want to do? And I'm like, you know, you have done those text game scenes. That's really cool. I wonder if we can just redo them with language models. And that's how the whole journey began. Awesome. Alessio [00:02:46]: So GPT-2 was out at the time? Yes, that was 2019. Shunyu [00:02:48]: Yeah. Alessio [00:02:49]: Way too dangerous to release. And then I guess the first work of yours that I came across was React, which was a big part of your defense. But also Harrison, when you came on the podcast last year, you said that was one of the first papers that you saw when you were getting inspired for LangChain. So maybe give a recap of why you thought it was cool, because you were already working in AI and machine learning. And then, yeah, you can kind of like intro the paper formally. What was that interesting to you specifically? Harrison [00:03:16]: Yeah, I mean, I think the interesting part was using these language models to interact with the outside world in some form. And I think in the paper, you mostly deal with Wikipedia. And I think there's some other data sets as well. But the outside world is the outside world. And so interacting with things that weren't present in the LLM and APIs and calling into them and thinking about the React reasoning and acting and kind of like combining those together and getting better results. I'd been playing around with LLMs, been talking with people who were playing around with LLMs. People were trying to get LLMs to call into APIs, do things, and it was always, how can they do it more reliably and better? And so this paper was basically a step in that direction. And I think really interesting and also really general as well. 
Like I think that's part of the appeal is just how general and simple in a good way, I think the idea was. So that it was really appealing for all those reasons. Shunyu [00:04:07]: Simple is always good. Yeah. Alessio [00:04:09]: Do you have a favorite part? Because I have one favorite part from your PhD defense, which I didn't understand when I read the paper, but you said something along the lines, React doesn't change the outside or the environment, but it does change the inside through the context, putting more things in the context. You're not actually changing any of the tools around you to work for you, but you're changing how the model thinks. And I think that was like a very profound thing when I, now that I've been using these tools for like 18 months, I'm like, I understand what you meant, but like to say that at the time you did the PhD defense was not trivial. Yeah. Shunyu [00:04:41]: Another way to put it is like thinking can be an extra tool that's useful. Alessio [00:04:47]: Makes sense. Checks out. Swyx [00:04:49]: Who would have thought? I think it's also more controversial within his world because everyone was trying to use RL for agents. And this is like the first kind of zero gradient type approach. Yeah. Shunyu [00:05:01]: I think the bigger kind of historical context is that we have this two big branches of AI. So if you think about RL, right, that's pretty much the equivalent of agent at a time. And it's like agent is equivalent to reinforcement learning and reinforcement learning is equivalent to whatever game environment they're using, right? Atari game or go or whatever. So you have like a pretty much, you know, you have a biased kind of like set of methodologies in terms of reinforcement learning and represents agents. On the other hand, I think NLP is like a historical kind of subject. It's not really into agents, right? It's more about reasoning. It's more about solving those concrete tasks. 
And if you look at ACL, right, like each task has its own track, right? Summarization has a track, question answering has a track. So I think really it's about rethinking agents in terms of what could be the new environments that we came to have is not just Atari games or whatever video games, but also those text games or language games. And also thinking about, could there be like a more general kind of methodology beyond just designing specific pipelines for each NLP task? That's like the bigger kind of context, I would say. Alessio [00:06:14]: Is there an inspiration spark moment that you remember or how did you come to this? We had Tri Dao on the podcast and he mentioned he was really inspired working with like systems people to think about Flash Attention. What was your inspiration journey? Shunyu [00:06:27]: So actually before React, I spent the first two years of my PhD focusing on text-based games, or in other words, text adventure games. It's a very kind of small kind of research area and quite ad hoc, I would say. And there are like, I don't know, like 10 people working on that at the time. And have you guys heard of Zork 1, for example? So basically the idea is you have this game and you have text observations, like you see a monster, you see a dragon. Swyx [00:06:57]: You're eaten by a grue. Shunyu [00:06:58]: Yeah, you're eaten by a grue. And you have actions like kill the grue with a sword or whatever. And that's like a very typical setup of a text game. So I think one day after I've seen all the GPT-3 stuff, I just think about, you know, how can I solve the game? Like why those AI, you know, machine learning methods are pretty stupid, but we are pretty good at solving the game relatively, right? So for the context, the predominant method to solve this text game is obviously reinforcement learning. And the idea is you just do trial and error in those games for like millions of steps and you kind of just overfit to the game. 
But there's no language understanding at all. And I'm like, why can't I solve the game better? And it's kind of like, because we think about the game, right? Like when we see this very complex text observation, like you see a grue and you might see a sword, you know, in the right of the room and you have to go through the wooden door to go to that room. You will think, you know, oh, I have to kill the monster and to kill that monster, I have to get the sword, I have to go, right? And this kind of thinking actually helps us kind of shortcut the game. And it's like, why don't we also enable the text agents to think? And that's kind of the prototype of React. And I think that's actually very interesting because the prototype, I think, was around November of 2021. So that's even before like chain of thought or whatever came up. So we did a bunch of experiments in the text game, but it was not really working that well. Like those text games are just too hard. I think today it's still very hard. Like if you use GPT-4 to solve it, it's still very hard. So the change came when I started the internship in Google. And apparently Google cares less about text games, they care more about what's more practical. So pretty much I just reapplied the idea, but to more practical kind of environments like Wikipedia or simpler text games like ALFWorld, and it just worked. It's kind of like you first have the idea and then you try to find the domains and the problems to demonstrate the idea, which is, I would say, different from most of the AI research, but it kind of worked out for me in that case. Swyx [00:09:09]: For Harrison, when you were implementing React, what were people applying React to in the early days? Harrison [00:09:14]: I think the first demo we did probably had like a calculator tool and a search tool. So like general things, we tried to make it pretty easy to write your own tools and plug in your own things. 
And so this is one of the things that we've seen in LangChain is people who build their own applications generally write their own tools. Like there are a few common ones. I'd say like the three common ones might be like a browser, a search tool, and a code interpreter. But then other than that- Swyx [00:09:37]: The LLMs. Yep. Harrison [00:09:39]: Yeah, exactly. It matches up very nice with that. And we actually just redid like our integrations docs page, and if you go to the tool section, they like highlight those three, and then there's a bunch of like other ones. And there's such a long tail of other ones. But in practice, like when people go to production, they generally have their own tools or maybe one of those three, maybe some other ones, but like very, very few other ones. So yeah, I think the first demos was a search and a calculator one. And there's- What's the data set? Shunyu [00:10:04]: Hotpot QA. Harrison [00:10:05]: Yeah. Oh, so there's that one. And then there's like the celebrity one by the same author, I think. Swyx [00:10:09]: Olivia Wilde's boyfriend squared. Yeah. 0.23. Yeah. Right, right, right. Harrison [00:10:16]: I'm forgetting the name of the author, but there's- Swyx [00:10:17]: I was like, we're going to over-optimize for Olivia Wilde's boyfriend, and it's going to change next year or something. Harrison [00:10:21]: There's a few data sets kind of like in that vein that require multi-step kind of like reasoning and thinking. So one of the questions I actually had for you in this vein, like the React paper, there's a few things in there, or at least when I think of that, there's a few things that I think of. There's kind of like the specific prompting strategy. Then there's like this general idea of kind of like thinking and then taking an action. And then there's just even more general idea of just like taking actions in a loop. Today, like obviously language models have changed a lot. We have tool calling. 
The specific prompting strategy probably isn't used super heavily anymore. Would you say that like the concept of React is still used though? Or like do you think that tool calling and running tool calling in a loop, is that React Swyx [00:11:02]: in your mind? Shunyu [00:11:03]: I would say like it's like more implicitly used than explicitly used. To be fair, I think the contribution of React is actually twofold. So first is this idea of, you know, we should be able to use calls in a very general way. Like there should be a single kind of general method to handle interaction with various environments. I think React is the first paper to demonstrate the idea. But then I think later there was Toolformer or whatever, and this becomes like a trivial idea. But I think at the time, that's like a pretty non-trivial thing. And I think the second contribution is this idea of what people call like inner monologue or thinking or reasoning or whatever, to be paired with tool use. I think that's still non-trivial because if you look at the default function calling or whatever, like there's no inner monologue. And in practice, that actually is important, especially if the tool that you use is pretty different from the training distribution of the language model. I think those are the two main things that are kind of inherited. Harrison [00:12:10]: On that note, I think OpenAI even recommended when you're doing tool calling, it's sometimes helpful to put a thought field in the tool, along with all the actual required arguments, Swyx [00:12:19]: and then have that one first. Harrison [00:12:20]: So it fills out that first, and they've shown that that's yielded better results. The reason I ask is just like this same concept is still alive, and I don't know whether to call it a React agent or not. I don't know what to call it. I think of it as React, like it's the same ideas that were in the paper, but it's obviously a very different implementation at this point in time. 
And so I just don't know what to call it. Shunyu [00:12:40]: I feel like people will sometimes think more in terms of different tools, right? Because if you think about a web agent versus, you know, like a function calling agent, calling a Python API, you would think of them as very different. But in some sense, the methodology is the same. It depends on how you view them, right? I think people will tend to think more in terms of the environment and the tools rather than the methodology. Or, in other words, I think the methodology is kind of trivial and simple, so people will try to focus more on the different tools. But I think it's good to have a single underlying principle of those things. Alessio [00:13:17]: How do you see the surface of React getting molded into the model? So a function calling is a good example of like, now the model does it. What about the thinking? Now most models that you use kind of do chain of thought on their own, they kind of produce steps. Do you think that more and more of this logic will be in the model? Or do you think the context window will still be the main driver of reasoning and thinking? Shunyu [00:13:39]: I think it's already default, right? You do some chain of thought and you do some tool call, the cost of adding the chain of thought is kind of relatively low compared to other things. So it's not hurting to do that. And I think it's already kind of common practice, I would say. Swyx [00:13:56]: This is a good place to bring in either Tree of Thoughts or Reflexion, your pick. Shunyu [00:14:01]: Maybe Reflexion, to respect the time order, I would say. Swyx [00:14:05]: Any backstory as well, like the people involved with Noah and the Princeton group. We talked about this offline, but people don't understand how these research pieces come together and this ideation. Shunyu [00:14:15]: I think Reflexion is mostly Noah's work, I'm more like advising kind of role. 
The story is, I don't remember the time, but one day we just see this pre-print that's like Reflexion, an autonomous agent with memory or whatever. And it's kind of like an extension to React, which uses this self-reflection. I'm like, oh, somehow you've become very popular. And Noah reached out to me, it's like, do you want to collaborate on this and make this from an arXiv pre-print to something more solid, like a conference submission? I'm like, sure. We started collaborating and we remain good friends today. And I think another interesting backstory is Noah was contacted by OpenAI at the time. It's like, this is pretty cool, do you want to just work at OpenAI? And I think Sierra also reached out at the same time. It's like, this is pretty cool, do you want to work at Sierra? And I think Noah chose Sierra, but it's pretty cool because he was still like a second year undergrad and he's a very smart kid. Swyx [00:15:16]: Based on one paper. Oh my god. Shunyu [00:15:19]: He's done some other research based on programming language or chemistry or whatever, but I think that's the paper that got the attention of OpenAI and Sierra. Swyx [00:15:28]: For those who haven't gone too deep on it, the way that you present the inside of React, can you do that also for Reflexion? Yeah. Shunyu [00:15:35]: I think one way to think of Reflexion is that the traditional idea of reinforcement learning is you have a scalar reward and then you somehow back-propagate the signal of the scalar reward to the rest of your neural network through whatever algorithm, like policy gradient or A2C or whatever. And if you think about the real life, most of the reward signal is not scalar. It's like your boss told you, you should have done a better job in this, but you could jump on that or whatever. It's not like a scalar reward, like 29 or something. I think in general, humans deal more with non-scalar reward, or you can say language feedback. 
And the way that they deal with language feedback also has this back-propagation process, right? Because you start from this, you did a bad job on task B, and then you reflect on what could have been done differently to make it better. And you kind of change your prompt, right? Basically, you change your prompt on how to do task A and how to do task B, and then you do the whole thing again. So it's really like a pipeline of language, a kind of gradient descent in language, where you have something like text reasoning to replace those gradient descent algorithms. I think that's one way to think of Reflexion.Harrison [00:16:47]: One question I have about Reflexion is how general do you think the algorithm there is? And so for context, I think at LangChain and at other places as well, we found it pretty easy to implement React in a standard way. You plug in any tools and it kind of works off the shelf, you can get it up and running. I don't think we have an off-the-shelf kind of implementation of Reflexion in the general sense. I think the concepts, absolutely, we see used in different kinds of specific cognitive architectures, but I don't think we have one that comes off the shelf. I don't think any of the other frameworks have one that comes off the shelf. And I'm curious whether that's because it's not general enough or it's complex as well, because it also requires running things more times.Swyx [00:17:28]: Maybe that's not feasible.Harrison [00:17:30]: I'm curious how you think about the generality, complexity. Should we have one that comes off the shelf?Shunyu [00:17:36]: I think the algorithm is general in the sense that it's just as general as other algorithms, if you think about policy gradient or whatever, but it's not applicable to all tasks, just like other algorithms. So you can argue PPO is also general, but it works better on some sets of tasks and not on others. I think it's the same situation for Reflexion.
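The "gradient descent in language" loop Shunyu describes can be sketched as follows. This is a minimal illustration, not the Reflexion paper's code: `act` and `critique` are deterministic stubs standing in for an LLM policy and a verbal evaluator, and the toy task (sorting a list) is invented for the example.

```python
# Minimal sketch of a Reflexion-style loop: instead of a scalar reward,
# the agent gets verbal feedback, and the feedback is "back-propagated"
# by appending it to the prompt/memory for the next attempt.

def act(task, reflections):
    # Stub "policy": succeeds only once a reflection mentions the fix.
    if any("use sorted" in r for r in reflections):
        return sorted(task)
    return task  # naive first attempt: return the input unsorted

def critique(task, attempt):
    # Stub "evaluator": returns language feedback instead of a scalar.
    if attempt == sorted(task):
        return None  # success, no feedback needed
    return "The output was not ordered; next time use sorted() on the input."

def reflexion_loop(task, max_trials=3):
    reflections = []
    for trial in range(max_trials):
        attempt = act(task, reflections)
        feedback = critique(task, attempt)
        if feedback is None:
            return attempt, trial + 1  # solved, and on which trial
        reflections.append(feedback)  # language feedback updates the "prompt"
    return attempt, max_trials
```

Running `reflexion_loop([3, 1, 2])` fails on trial one, stores the verbal critique, and succeeds on trial two, which is exactly the prompt-editing loop described above.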
And I think a key bottleneck is the evaluator, right? Basically, you need to have a good sense of the signal. So for example, if you are trying to do a very hard reasoning task, say mathematics, for example, and you don't have any tools, you're operating in this chain of thought setup, then Reflexion will be pretty hard because in order to reflect upon your thoughts, you have to have a very good evaluator to judge whether your thought is good or not. But that might be as hard as solving the problem itself or even harder. The principle of self-reflection is probably more applicable if you have a good evaluator, for example, in the case of coding. If you have those errors, then you can just reflect on that and how to solve the bug and stuff.Shunyu [00:18:38]: So I think another criterion is that it depends on the application, right? If you have this latency or whatever need for an actual application with an end-user, the end-user wouldn't let you do two hours of Tree of Thoughts or Reflexion, right? You need something as soon as possible. So in that case, maybe this is better used as a training-time technique, right? You do those Reflexion or Tree of Thoughts rollouts or whatever, you get a lot of data, and then you try to use the data to train your model better. And then at test time, you still use something as simple as React, but that's already improved.Alessio [00:19:11]: And if you think of the Voyager paper as a way to store skills and then reuse them, how would you compare this reflective memory, and at what point is it just doing RAG on the memory versus wanting to start fine-tuning some of it? What's the next step once you get a very long reflective corpus? Yeah.Shunyu [00:19:30]: So I think there are two questions here. The first question is, what type of information or memory are you considering, right?
Is it like semantic memory that stores knowledge about the world, or is it the episodic memory that stores trajectories or behaviors, or is it more of a procedural memory like in Voyager's case, like skills or code snippets that you can use to do actions, right?Swyx [00:19:54]: That's one dimension.Shunyu [00:19:55]: And the second dimension is obviously how you use the memory, either retrieving from it, using it in the context, or fine-tuning on it. I think the Cognitive Architectures for Language Agents paper has a good categorization of all the different combinations. And of course, which way you use it depends on the concrete application and the concrete need and the concrete task. But I think in general, it's good to think of those systematic dimensions and all the possible options there.Swyx [00:20:25]: Harrison also has LangMem. I think you did a presentation at my meetup, and I think you've done it at a couple other venues as well. User state, semantic memory, and append-only state, I think, kind of map to what you just said.Shunyu [00:20:38]: What is LangMem? Can you give like a quick...Harrison [00:20:40]: One of the modules of LangChain for a long time has been something around memory. And I think we're still obviously figuring out what that means, as is everyone kind of in the space. But one of the experiments that we did, and one of the proofs of concept that we did was, technically what it was is you would basically create threads, you'd push messages to those threads, and in the background we process the data in a few ways. One, we put it into some semantic store; that's the semantic memory. And then two, we do some extraction and reasoning over the memories to extract. And we let the user define this, but extract key facts or anything that's of interest to the user. Those aren't exactly trajectories; they're maybe closer to the procedural memory.
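The two-dimensional taxonomy Shunyu lays out, what kind of memory (semantic, episodic, procedural) versus how it is used (retrieve into context, or fine-tune), can be sketched as a small data structure. This follows the CoALA-style categorization only loosely; the class and field names are illustrative, not an API from LangChain or any other framework.

```python
from dataclasses import dataclass, field

# Illustrative container for the three memory kinds discussed above.
@dataclass
class AgentMemory:
    semantic: dict = field(default_factory=dict)    # facts about the world
    episodic: list = field(default_factory=list)    # past trajectories/events
    procedural: dict = field(default_factory=dict)  # skills, e.g. code snippets

mem = AgentMemory()
mem.semantic["capital_of_france"] = "Paris"                    # knowledge
mem.episodic.append({"task": "book flight", "outcome": "ok"})  # a trajectory
mem.procedural["greet"] = "def greet(n): return f'hi {n}'"     # a Voyager-style skill
```

The second dimension (retrieval vs. in-context use vs. fine-tuning) is orthogonal, as Shunyu says below: any of these three stores could back any of those three usage patterns.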
Is that how you'd think about it or classify it?Shunyu [00:21:22]: Is it like knowledge about the world, or is it more like how to do something?Swyx [00:21:27]: It's reflections, basically.Harrison [00:21:28]: So in generative worlds.Shunyu [00:21:30]: Generative agents.Swyx [00:21:31]: The Smallville. Yeah, the Smallville one.Harrison [00:21:33]: So the way that they had their memory there was they had the sequence of events, and that's kind of like the raw events that happened. But then every N events, they'd run some synthesis over those events for the LLM to insert its own memory, basically. It's that type of memory.Swyx [00:21:49]: I don't know how that would be classified.Shunyu [00:21:50]: I think of that as more of the semantic memory, but to be fair, I think it's just one way to think of that. But whether it's semantic memory or procedural memory or whatever memory, that's an abstraction layer. But in terms of implementation, you can choose whatever implementation for whatever memory. So they're totally kind of orthogonal. I think it's more of a good way to think of the things, because from the history of cognitive science and cognitive architecture and how people study even neuroscience, that's the way people think of how the human brain organizes memory. And I think it's more useful as a way to think of things. But it's not like for semantic memory, you have to do this kind of way to retrieve or fine-tune, and for procedural memory, you have to do that. I think those are totally orthogonal kinds of dimensions.Harrison [00:22:34]: How much background do you have in cognitive science, and how much do you model some of your thoughts on it?Shunyu [00:22:40]: That's a great question, actually. I think one of the undergrad influences for my follow-up research is that I did an internship at MIT's Computational Cognitive Science Lab with Josh Tenenbaum, and he's a very famous cognitive scientist.
And I think a lot of his ideas still influence me today, like thinking of things in computational terms and getting interested in language and a lot of stuff, or even developmental psychology kind of stuff. So I think it still influences me today.Swyx [00:23:14]: As a developer that tried out LangMem, the way I view it is just it's a materialized view of a stream of logs. And if anything, that's just useful for context compression. I don't have to use the full context to run it over everything. But also it's kind of debuggable. If it's wrong, I can show it to the user, the user can manually fix it, and I can carry on. That's a really good analogy. I like that. I'm going to steal that. Sure. Please, please. You know I'm bullish on memory databases. I guess, Tree of Thoughts? Yeah, Tree of Thoughts.Shunyu [00:23:39]: I feel like I'm reliving my defense in like a podcast format. Yeah, no.Alessio [00:23:45]: I mean, you had a banger. Well, this is the one where you're already successful and we just highlight the glory. It was really good. You mentioned that since thinking is kind of like taking an action, you can use action search algorithms to search over thinking. So just like you would use tree search to find the next action. And the idea behind Tree of Thoughts is that you generate all these possible outcomes and then find the best path to get to the end. Maybe back to the latency question, you can't really do that if you have to respond in real time. So what are maybe some of the most helpful use cases for things like this? Where have you seen people adopt it where the high latency is actually worth the wait?Shunyu [00:24:21]: For things that you don't care about latency, obviously. For example, if you're trying to do math, if you're just trying to come up with a proof. But I feel like one type of task is more about searching for a solution. You can try a hundred times, but if you find one solution, that's good.
For example, if you're finding a math proof or if you're finding good code to solve a problem or whatever. I think another type of task is more like reacting. For example, if you're doing customer service, or you're like a web agent booking a ticket for an end user. Those are more reactive kinds of tasks, or more real-time tasks. You have to do things fast. They might be easy, but you have to do them reliably. And you care more about, can you solve it 99 times out of a hundred. But for the search type of tasks, you care more about, can I find one solution out of a hundred. So they're kind of symmetric but different.Alessio [00:25:11]: Do you have any data or intuition from your user base? What's the split of these types of use cases? How many people are doing more reactive things and how many people are experimenting with deep, long search?Harrison [00:25:23]: I would say React's probably the most popular. I think there are aspects of Reflexion that get used. Tree of Thoughts, probably the least so. There's a great tweet from Jason Wei, who I think is now a colleague of yours, and he was talking about prompting strategies and how he thinks about them. And I think the four things that he had were: one, how easy is it to implement? How much compute does it take? How many tasks does it solve? And how much does it improve on those tasks? And I'd add a fifth, which is how likely is it to be relevant when the next generation of models comes out? And I think if you look at those axes and then you look at React, Reflexion, Tree of Thoughts, it tracks that the ones that score better are used more. React is pretty easy to implement. Tree of Thoughts is pretty hard to implement. The amount of compute, yeah, a lot more for Tree of Thoughts. The tasks and how much it improves, I don't have amazing visibility there.
But I think if we're comparing React versus Tree of Thoughts, React just dominates the first two axes so much that my question around that was going to be like, how do you think about these prompting strategies, cognitive architectures, whatever you want to call them? When you're thinking of them, what are the axes that you're judging them on in your head when you're thinking whether it's a good one or a less good one?Swyx [00:26:38]: Right.Shunyu [00:26:39]: Right. I think there is a difference between a prompting method and research, in the sense that for research, you don't really even care about whether it actually works on practical tasks or helps, whatever. I think it's more about the idea or the principle, right? What is the direction that you're unblocking and whatever. And I think for an actual prompting method to solve a concrete problem, I would say simplicity is very important because the simpler it is, the fewer decisions you have to make about it. And it's easier to design. It's easier to propagate. And it's easier to do stuff. So always try to be as simple as possible. And I think latency obviously is important. You want to do things fast and you don't want to do things slow. And I think in terms of the actual prompting method to use for a particular problem, I think we should all be in the minimalist kind of camp, right? You should try the minimum thing and see if it works. And if it doesn't work and there's an absolute reason to add something, then you add something, right? If there's an absolute reason that you need some tool, then you should add the tool. If there's an absolute reason to add Reflexion or whatever, you should add that. Otherwise, if a chain of thought can already solve something, then you don't even need to use any of that.Harrison [00:27:57]: Yeah. Or if just better prompting can solve it.
Like, you know, you could add a reflection step or you could make your instructions a little bit clearer.Swyx [00:28:03]: And it's a lot easier to do that.Shunyu [00:28:04]: I think another interesting thing is, like, I personally have never done those kinds of weird tricks. I think all the prompts that I write are kind of like just talking to a human, right? It's like, I don't know. I never say something like, your grandma is dying and you have to solve it. I mean, those are cool, but I feel like we should all try to solve things in a very intuitive way. Just like talking to your co-worker. That should work 99% of the time. That's my personal take.Swyx [00:28:29]: The problem with language models, at least in the GPT-3 era, was that they over-optimized to some sets of tokens in sequence. So like reading the Kojima et al. paper on "let's think step by step", they tried a bunch of variants and got wildly different results. It should not be the case, but it is the case. And hopefully we're getting better there.Shunyu [00:28:51]: Yeah. I think it's also like a timing thing in the sense that if you think about this whole line of language models, right? Like at the time it was just a text generator. We didn't have any idea how it was going to be used, right? And obviously at the time you would find all kinds of weird issues because it was not trained to do any of that, right? But then I think we have this loop where once we realize chain of thought is important or agents are important or tool use is important, what we see is today's language models are heavily optimized towards those things. So I think in some sense they become more reliable and robust over those use cases. And you don't need to do as much prompt engineering tricks anymore to solve those things. I feel like in some sense prompt engineering is even a slightly negative word at this point because it refers to all those kinds of weird tricks that you have to apply.
But I think we don't have to do that anymore. Like, given today's progress, you should just be able to talk to it like a coworker. And if you're clear and concrete and being reasonable, then it should do reasonable things for you.Swyx [00:29:51]: Yeah. The way I put this is you should not be a prompt engineer because it is the goal of the big labs to put you out of a job.Shunyu [00:29:58]: You should just be a good communicator. Like, if you're a good communicator to humans, you should be a good communicator to language models.Harrison [00:30:03]: That's the key though, because oftentimes people aren't good communicators to these language models and that is a very important skill and that's still messing around with the prompt. And so it depends what you're talking about when you're saying prompt engineer.Shunyu [00:30:14]: But do you think it's very correlated with whether they are a good communicator to humans? You know, it's like.Harrison [00:30:20]: It may be, but I also think I would say on average, people are probably worse at communicating with language models than with humans right now, at least, because I think we're still figuring out how to do it. You kind of expect it to be magical and there's probably some correlation, but I'd say there's also just, like, people are worse at it right now than talking to humans.Shunyu [00:30:36]: We should make it like a, you know, like an elementary school class or whatever, how to talk to language models.Swyx [00:30:41]: Yeah. I don't know. Very pro that. Yeah. Before we leave the topic of trees and searching, not specific about Q*, but there's a lot of questions about MCTS and this combination of tree search and language models. And I just had to get in a question there about how seriously should people take this?Shunyu [00:30:59]: Again, I think it depends on the tasks, right? So MCTS was magical for Go, but it's probably not as magical for robotics, right?
So I think right now the problem is not even that we don't have good methodologies, it's more that we don't have good tasks. It's also very interesting, right? Because if you look at my citations, obviously the most cited are React, Reflexion and Tree of Thoughts. Those are methodologies. But I think an equally important, if not more important, line of my work is benchmarks and environments, right? Like WebShop or SWE-bench or whatever. And I think in general, what people do in academia that I think is not good is they choose a very simple task, like ALFWorld, and then they apply overly complex methods to show they improve 2%. I think you should probably match the level of complexity of your task and your method. I feel like tasks are kind of far behind the methods in some sense, right? Because we have some good test-time approaches, like whatever, React or Reflexion or Tree of Thoughts, and there are many, many more complicated test-time methods afterwards. But on the benchmark side, we have made a lot of good progress this year, last year. But I think we still need more progress towards that, like better coding benchmarks, better web agent benchmarks, better agent benchmarks, not even for web or code. I think in general, we need to catch up with tasks.Harrison [00:32:27]: What are the biggest reasons in your mind why it lags behind?Shunyu [00:32:31]: I think incentive is one big reason. Like if you see, you know, all the method papers are cited like a hundred times more than the task papers. And also making a good benchmark is actually quite hard. It's almost like a different set of skills in some sense, right? I feel like if you want to build a good benchmark, you need to have like a good product manager kind of mindset, right? You need to think about why people should use your benchmark, why it's challenging, why it's useful. If you think about like a PhD student going into a program, right?
The prior skills they're expected to have are more about, you know, can they code up this method and can they just run experiments and solve that? I think building a benchmark is not the typical prior skill that we have, but I think things are getting better. I think more and more people are starting to build benchmarks and people are seeing that it's a way to get more impact in some sense, right? Because if you have a really good benchmark, a lot of people are going to use it. But if you have a super complicated test-time method, it's very hard for people to use it.Harrison [00:33:35]: Are evaluation metrics also part of the reason? Like for some of these tasks that we might want to ask these agents or language models to do, is it hard to evaluate them? And so it's hard to get an automated benchmark. Obviously with SWE-bench you can, and with coding, it's easier, but.Shunyu [00:33:50]: I think that's part of the skillset thing that I mentioned, because I feel like it's like being a product manager, because there are many dimensions and you need to strike a balance, and it's really hard, right? If you want to make something very easy to autograde, like automatically gradable, easy to grade or easy to evaluate, then you might lose some of the realness or practicality. Or it might be practical, but it might not be as scalable, right? For example, if you think about text games, humans have pre-annotated all the rewards and all the language is real, so it's pretty good on the autogradable dimension and the practical dimension. But it's not scalable, right? It takes like a year for experts to build that game. And I think part of the reason that SWE-bench is so popular now is it kind of hits the balance between these three dimensions, right? Easy to evaluate, and being actually practical, and being scalable.
If I were to criticize some of my prior work, I think WebShop, like, it's my initial attempt to get into the benchmark world and I was trying to do a good job striking the balance. But obviously we made it autogradable and it's really scalable, but then I think the practicality is not as high as actually just using GitHub issues, right? Because you're just creating those synthetic tasks.Harrison [00:35:13]: Are there other areas besides coding that jump to mind as being really good for being autogradable?Shunyu [00:35:20]: Maybe mathematics.Swyx [00:35:21]: Classic. Yeah. Do you have thoughts on AlphaProof, the new DeepMind paper? I think it's pretty cool.Shunyu [00:35:29]: I think it's more of a, you know, it's more of like a confidence boost, or like sometimes, you know, the work is not even about, you know, the technical details or the methodology that it chooses or the concrete results. I think it's more about a signal, right?Swyx [00:35:47]: Yeah. Existence proof. Yeah.Shunyu [00:35:50]: Yeah. It can be done. This direction is exciting. It kind of encourages people to work more towards that direction. I think it's more like a boost of confidence, I would say.Swyx [00:35:59]: Yeah. So we're going to focus more on agents now, and, you know, all of us have a special interest in coding agents. I would consider Devin to be the sort of biggest launch of the year as far as AI startups go. And you guys in the Princeton group worked on SWE-agent alongside of SWE-bench. Tell us the story about SWE-agent. Sure.Shunyu [00:36:21]: I think it's kind of like a trilogy; it's actually a series of three works now. So actually the first work is called InterCode, but it's not as famous, I know. And the second work is called SWE-bench and the third work is called SWE-agent. And I was just really confused why nobody was working on coding.
You know, it's like a year ago, but I mean, not everybody's working on coding, obviously, but a year ago, like literally nobody was working on coding. I was really confused. And the people that were working on coding were, you know, trying to solve HumanEval in like a seq-to-seq way. There's no agent, there's no chain of thought, there's no anything; they're just, you know, fine-tuning the model to improve a few points and whatever. Like, I was really confused because obviously coding is the best application for agents because it's autogradable, it's super important, you can make everything like an API or code action, right? So I was confused, and I collaborated with some of the students in Princeton and we have this work called InterCode, and the idea is, first, if you care about coding, then you should solve coding in an interactive way, meaning more like a Jupyter Notebook kind of way than just writing a program and seeing if it fails or succeeds and stopping, right? You should solve it in an interactive way because that's exactly how humans solve it, right? You don't have to, you know, write a program like next token, next token, next token and stop and never do any edits and you cannot really use any terminal or whatever tool. It doesn't make sense, right? And that's the way people were solving coding at the time, basically like sampling a program from a language model without chain of thought, without tool calls, without refactoring, without anything. So the first point is we should solve coding in a very interactive way, and that's a very general principle that applies to various coding benchmarks. And also, I think you can make a lot of agent tasks kind of like interactive coding. If you have Python and you can call any package, then you can literally also browse the internet or do whatever you want, like control a robot or whatever. So that seems to be a very general paradigm.
But obviously I think a bottleneck is that at the time we were still doing, you know, very simple tasks like HumanEval or whatever coding benchmarks people proposed. They were super hard in 2021, like 20%, but they were like 95% already in 2023. So obviously the next step is we need a better benchmark. And Carlos and John, who are the first authors of SWE-bench, I think they came up with this great idea that we should just scrape GitHub and solve whatever human engineers are solving. And I think it's actually pretty easy to come up with the idea. And I think in the first week, they already made a lot of progress. They scraped GitHub and built the dataset, but then there's a lot of painful infra work and whatever, you know. I think the idea is super easy, but the engineering is super hard. And I feel like that's a very typical signal of a good work in the AI era now.Swyx [00:39:17]: I think also, I think the filtering was challenging, because if you look at open source PRs, a lot of them are just like, you know, fixing typos. I think it's challenging.Shunyu [00:39:27]: And to be honest, we didn't do a perfect job at the time. So if you look at the recent blog post with OpenAI, we improved the filtering so that it's more solvable.Swyx [00:39:36]: I think OpenAI was just like, look, this is a thing now. We have to fix this. These students just rushed it.Shunyu [00:39:45]: It's a good convergence of interests for me.Alessio [00:39:48]: Was that tied to you joining OpenAI? Or was that just unrelated?Shunyu [00:39:52]: It's a coincidence for me, but it's a good coincidence.Swyx [00:39:55]: There is a history of anytime a big lab adopts a benchmark, they fix it. Otherwise, it's a broken benchmark.Shunyu [00:40:03]: So naturally, once we proposed SWE-bench, the next step is to solve it. But I think the typical way you solve something now is you collect some training samples, or you design some complicated agent method, and then you try to solve it.
Either a super complicated prompt, or you build a better model with more training data. But I think at the time, we realized that even before those things, there's a fundamental problem with the interface, or the tool, that you're supposed to use. Because that's like an ignored problem in some sense: what your tool is, or how that matters for your task. So what we found concretely is that if you just use the text terminal off the shelf as a tool for those agents, there are a lot of problems. For example, if you edit something, there's no feedback. So you don't know whether your edit is good or not. That makes the agent very confused and it makes a lot of mistakes. There are a lot of small problems, you would say. Well, you can try to do prompt engineering and improve that, but it turns out to be actually very hard. We realized that the interface design is actually a very omitted part of agent design. So we did this SWE-agent work. And the key idea is just, even before you talk about what the agent is, you should talk about what the environment is. You should make sure that the environment is actually friendly to whatever agent you're trying to apply. That's the same idea for humans. A text terminal is good for some tasks, like git pull or whatever. But it's not good if you want to look at a browser and whatever. Also, a browser is a good tool for some tasks, but it's not a good tool for other tasks. We need to talk about how to design interfaces, in some sense, where we should treat agents as our customers. It's like when we treat humans as customers, we design human-computer interfaces. We design those beautiful desktops or browsers or whatever, so that it's very intuitive and easy for humans to use. And this whole great subject of HCI is all about that. I think now the research idea of SWE-agent is just, we should treat agents as our customers.
And we should do, like, you know… ACI.Swyx [00:42:16]: ACI, exactly.Harrison [00:42:18]: So what are the tools that a SWE-agent should have, or a coding agent in general should have?Shunyu [00:42:24]: For SWE-agent, it's like a modified text terminal, which kind of adapts to a lot of the patterns of language models to make it easier for language models to use. For example, now for edit, instead of having no feedback, it will actually have feedback of, you know, actually here you introduced like a syntax error, and you probably want to fix that, and there's an indent error there. And that makes it super easy for the model to actually do that. And there are other small things, like how exactly you write arguments, right? Like, do you want to write a multi-line edit, or do you want to write a single-line edit? I think it's more interesting to think about the way of the development process of an ACI rather than the actual ACI for a concrete application. Because I think the general paradigm is very similar to HCI and psychology, right? Basically, for how people develop HCIs, they do behavior experiments on humans, right? They do A/B tests, right? Like, which interface is actually better? And they do those behavior experiments, kind of like psychology experiments, on humans, and they change things. And I think what's really interesting for me, for this SWE-agent paper, is we can probably do the same thing for agents, right? We can do A/B tests for those agents and do behavior tests. And through the process, we not only invent better interfaces for those agents, that's the practical value, but we also better understand agents. Just like when we do those A/B tests, we do those HCI, we better understand humans. Doing those ACI experiments, we actually better understand agents.
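The agent-friendly edit feedback described above, tell the model what its edit broke instead of silently accepting it, can be sketched with a syntax check after each edit. This is illustrative only, not SWE-agent's actual tool; the function name and feedback strings are hypothetical.

```python
import ast

# Sketch of an ACI-style edit command: after an edit, lint the result
# and return feedback the agent can act on, rather than no feedback.

def edit_file(source: str, new_source: str) -> tuple[str, str]:
    """Apply an edit; return (resulting_source, feedback_for_the_agent)."""
    try:
        ast.parse(new_source)
    except SyntaxError as e:
        # Reject the edit and surface the error so the agent can retry.
        return source, f"Edit rejected: syntax error at line {e.lineno}: {e.msg}"
    return new_source, "Edit applied."
```

A broken edit like `edit_file("x = 1\n", "x = )\n")` leaves the file untouched and returns a rejection message; a clean edit is applied. That one design choice, feedback on every action, is what makes the terminal "adapted to the patterns of language models."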
And that's pretty cool.Harrison [00:43:51]: Besides that A/B testing, what are other processes that people can use to think about this in a good way?Swyx [00:43:57]: That's a great question.Shunyu [00:43:58]: And I think SWE-agent is an initial work. And what we do is kind of the naive approach, right? You just try some interface, and you see what's going wrong, and then you try to fix that. We do this kind of iterative fixing. But I think what's really interesting is there will be a lot of promising future directions if we can apply some of the HCI principles more systematically into the interface design. I think that would be a very cool interdisciplinary research opportunity.Harrison [00:44:26]: You talked a lot about agent-computer interfaces and interactions. What about human-to-agent UX patterns? Curious for any thoughts there that you might have.Swyx [00:44:38]: That's a great question.Shunyu [00:44:39]: And in some sense, I feel like prompt engineering is about the human-to-agent interface. But I think there can be a lot of interesting research done about... So prompting is about how humans can better communicate with the agent. But I think there could be interesting research on how agents can better communicate with humans, right? When to ask questions, how to ask questions, what's the frequency of asking questions. And I think those kinds of stuff could be very cool research.Harrison [00:45:07]: Yeah, I think some of the most interesting stuff that I saw here was also related to coding with Devin from Cognition. And they had the three or four different panels where you had the chat, the browser, the terminal, and I guess the code editor as well.Swyx [00:45:19]: There's more now.Harrison [00:45:19]: There's more. Okay, I'm not up to date. Yeah, I think they also did a good job on ACI.Swyx [00:45:25]: I think that's the main learning I have from Devin. They cracked that. Actually, there was no foundational planning breakthrough.
The planner is actually pretty simple, but ACI that they broke through on.Shunyu [00:45:35]: I think making the tool good and reliable is probably like 90% of the whole agent. Once the tool is actually good, then the agent design can be much, much simpler. On the other hand, if the tool is bad, then no matter how much you put into the agent design, planning or search or whatever, it's still going to be trash.Harrison [00:45:53]: Yeah, I'd argue the same. Same with like context and instructions. Like, yeah, go hand in hand.Alessio [00:46:00]: On the tool, how do you think about the tension of like, for both of you, I mean, you're building a library, so even more for you. The tension between making now a language or a library that is like easy for the agent to grasp and write versus one that is easy for like the human to grasp and write. Because, you know, the trend is like more and more code gets written by the agent. So why wouldn't you optimize the framework to be as easy as possible for the model versus for the person?Swyx [00:46:24]: I think it's possible to design an interfaceShunyu [00:46:25]: that's both friendly to humans and agents. But what do you think?Harrison [00:46:29]: We haven't thought about that from the perspective, like we're not trying to design LangChain or LangGraph to be friendly. But I mean, I think to be friendly for agents to write.Swyx [00:46:42]: But I mean, I think we see this with like,Harrison [00:46:43]: I saw some paper that used TypeScript notation instead of JSON notation for tool calling and it got a lot better performance. So it's definitely a thing. I haven't really heard of anyone designing like a syntax or a language explicitly for agents, but there's clearly syntaxes that are better.Shunyu [00:46:59]: I think function calling is a good example where it's like a good interface for both human programmers and for agents, right? 
Like for developers, it's actually a very friendly interface because it's very concrete and you don't have to do prompt engineering anymore. You can be very systematic. And for models, it's also pretty good, right? Like it can use all the existing coding content. So I think we need more of those kinds of designs.Swyx [00:47:21]: I will mostly agree and I'll slightly disagree in terms of this, which is like, whether designing for humans also overlaps with designing for AI. So Malte Ubl, the CTO of Vercel, which is creating basically JavaScript's competitor to LangChain, they're observing that basically, if the API is easy to understand for humans, it's actually much easier to understand for LLMs, for example, because there are no overloaded functions. They don't behave differently under different contexts. They do one thing and they always work the same way. It's easy for humans, it's easy for LLMs. And that makes a lot of sense. And obviously adding types is another one. Type annotations only help give extra context, which is really great. So that's the agreement. And then the disagreement is that when I use structured output to do my chain of thought, I have found that I change my field names to hint to the LLM what the field is supposed to do. So instead of saying topics, I'll say candidate topics. And that gives me a better result because the LLM was like, ah, this is just a draft thing I can use for chain of thought. And instead of summaries, I'll say topic summaries to link the previous field to the current field. So little stuff like that, I find myself optimizing for the LLM where I, as a human, would never do that. Interesting.Shunyu [00:48:32]: It's kind of like the way you optimize the prompt, it might be different for humans and for machines.
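Swyx's field-naming trick can be sketched as two JSON-style schemas for the same structured output; the schema shapes and field names here are illustrative, not any specific provider's API:

```python
# Two schemas for the same structured output. The second renames fields
# to hint at their role in the chain of thought, per Swyx's trick:
# "candidate_topics" signals a draft list, and "topic_summaries" ties
# this field back to the previous one. Shapes are illustrative.

generic_schema = {
    "type": "object",
    "properties": {
        "topics": {"type": "array", "items": {"type": "string"}},
        "summaries": {"type": "array", "items": {"type": "string"}},
    },
}

hinted_schema = {
    "type": "object",
    "properties": {
        # "candidate_" tells the model this is a draft it may prune later
        "candidate_topics": {"type": "array", "items": {"type": "string"}},
        # "topic_" links this field to candidate_topics above
        "topic_summaries": {"type": "array", "items": {"type": "string"}},
    },
}
```

A human reader would find `topics` perfectly clear; the extra prefixes exist only to steer the model, which is exactly the divergence between human and agent interfaces being discussed.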
You can have a common ground that's both clear for humans and agents, but to improve the human performance versus improving the agent performance, they might move in different directions.Swyx [00:48:48]: Might move in different directions. There's a lot more use of metadata as well, like descriptions, comments, code comments, annotations and stuff like that. Yeah.Harrison [00:48:56]: I would argue that's just you communicatingSwyx [00:48:58]: to the agent what it should do.Harrison [00:49:00]: And maybe you need to communicate a little bit more than to humans because models aren't quite good enough yet.Swyx [00:49:06]: But like, I don't think that's crazy.Harrison [00:49:07]: I don't think that's like- It's not crazy.Swyx [00:49:09]: I will bring this in because it just happened to me yesterday. I was at the Cursor office. They held their first user meetup and I was telling them about the LLM OS concept and why basically every interface, every tool was being redesigned for AIs to use rather than humans. And they're like, why? Like, can we just use Bing and Google for LLM search? Why must I use Exa? Or what's the other one that you guys work with?Harrison [00:49:32]: Tavily.Swyx [00:49:33]: Tavily. A web search API dedicated to LLMs. What's the difference?Shunyu [00:49:36]: Exactly. Compared to the Bing API.Swyx [00:49:38]: Exactly.Harrison [00:49:38]: There weren't great APIs for search. Like the best one, the one that we used initially in LangChain was SERP API, which is like maybe illegal. I'm not sure.Swyx [00:49:49]: And like, you know,Harrison [00:49:52]: and now there are like venture-backed companies.Swyx [00:49:53]: Shout out to DuckDuckGo, which is free.Harrison [00:49:55]: Yes, yes.Swyx [00:49:56]: Yeah.Harrison [00:49:56]: I do think there are some differences though. I think you want, like, I think generally these APIs try to return small amounts of text information, clear legible fields. It's not a massive JSON blob. And I think that matters.
I think like when you talk about designing tools, it's not only the, it's the interface in the entirety, not only the inputs, but also the outputs that really matter. And so I think they try to make the outputs.Shunyu [00:50:18]: They're doing ACI.Swyx [00:50:19]: Yeah, yeah, absolutely.Harrison [00:50:20]: Really?Swyx [00:50:21]: Like there's a whole set of industries that are just being redone for ACI. It's weird. And so my simple answer to them was like the error messages. When you give error messages, they should be basically prompts for the LLM to take and then self-correct. Then your error messages get more verbose, actually, than you normally would with a human. Stuff like that. Like a little, honestly, it's not that big. Again, like, is this worth a venture-backed industry? Unless you can tell us. But like, I think Code Interpreter, I think is a new thing. I hope so.Alessio [00:50:52]: We invested in it to be so.Shunyu [00:50:53]: I think that's a very interesting point. You're trying to optimize to the extreme, then obviously they're going to be different. For example, the error—Swyx [00:51:00]: Because we take it very seriously. Right.Shunyu [00:51:01]: The error for like language model, the longer the better. But for humans, that will make them very nervous and very tired, right? But I guess the point is more like, maybe we should try to find a co-optimized common ground as much as possible. And then if we have divergence, then we should try to diverge. But it's more philosophical now.Alessio [00:51:19]: But I think like part of it is like how you use it. So Google invented the PageRank because ideally you only click on one link, you know, like the top three should have the answer. But with models, it's like, well, you can get 20. So those searches are more like semantic grouping in a way. It's like for this query, I'll return you like 20, 30 things that are kind of good, you know? 
So it's less about ranking and it's more about grouping.Shunyu [00:51:42]: Another fundamental thing about ACI is the difference between humans' and machines' kind of memory limits, right? So I think what's really interesting about this concept of HCI versus ACI is that through interfaces that are optimized for them, you can kind of understand some of the fundamental characteristics and differences of humans and machines, right? Why, you know, if you look at find or whatever terminal command, you can only look at one thing at a time, that's because we have a very small working memory. You can only deal with one thing at a time. You can only look at one paragraph of text at the same time. So the interface for us is by design, you know, a small piece of information, but more temporal steps. But for machines, that should be the opposite, right? You should just give them a hundred different results and they should just decide in context what's the most relevant stuff and trade off the context for temporal steps. That's actually also better for language models because the cost is smaller or whatever. So it's interesting to connect those interfaces to the fundamental kind of differences of humans and machines.Harrison [00:52:43]: When you said earlier, you know, we should try to design these to maybe be as similar as possible and diverge if we need to.Swyx [00:52:49]: I actually don't have a problem with them diverging nowHarrison [00:52:51]: and seeing venture-backed startups emerging now because we are different from machines code AI. And it's just so early on, like they may still look kind of similar and they may still be small differences, but it's still just so early. And I think we'll only discover more ways that they differ. And so I'm totally fine with them kind of diverging earlySwyx [00:53:10]: and optimizing for the...Harrison [00:53:11]: I agree. I think it's more like, you know,Shunyu [00:53:14]: we should obviously try to optimize the human interface just for humans.
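Shunyu's contrast between small-chunk human interfaces and trade-context-for-temporal-steps agent interfaces can be sketched as two views over the same result set; the names and the 100-result corpus are illustrative:

```python
# Same results, two interfaces. The human-facing one pages through
# small chunks (small working memory, many temporal steps); the
# agent-facing one returns everything at once and lets the model
# decide relevance in context. Names and data are illustrative.

RESULTS = [f"result-{i}" for i in range(100)]

def human_interface(page: int, page_size: int = 10) -> list[str]:
    """A small piece of information per step, many steps."""
    start = page * page_size
    return RESULTS[start:start + page_size]

def agent_interface() -> list[str]:
    """Trade context for temporal steps: one big response."""
    return RESULTS

assert len(human_interface(0)) == 10   # ten calls to see everything
assert len(agent_interface()) == 100   # one call
```

Fewer round trips is also the cost argument Shunyu makes: one large context window is often cheaper for a language model than many sequential tool calls.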
We've been doing that for 50 years. We should optimize the agent interface just for agents, but we might also try to co-optimize both and see how far we can get. There's enough people to try all three directions. Yeah.Swyx [00:53:31]: There's a thesis I sometimes push, which is the sour lesson as opposed to the bitter lesson, which is that we're always inspired by human development, but actually AI develops its own path.Shunyu [00:53:40]: Right. We need to understand better, you know, what are the fundamental differences between those creatures.Swyx [00:53:45]: It's funny, really early on this pod, you were like, how much grounding do you have in cognitive development and human brain stuff? And I'm like
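Swyx's earlier point that agent-facing error messages should read like prompts for self-correction can be sketched as follows; `search_tool` and its parameters are invented for illustration:

```python
# Sketch: agent-facing errors written as mini-prompts. For a human CLI
# you'd keep messages terse; for an LLM, verbosity is cheap and steers
# self-correction. The tool and its parameters are invented.

def search_tool(query: str, max_results: int = 10) -> str:
    if not query.strip():
        # States what went wrong, why, and the corrective action.
        return (
            "Error: `query` was empty. This tool needs a non-empty "
            "search string. Re-call search_tool with a `query` "
            "describing what you are looking for."
        )
    if max_results > 100:
        return (
            f"Error: `max_results` must be <= 100, but you passed "
            f"{max_results}. Retry with a smaller value."
        )
    return f"(top {max_results} results for {query!r})"

print(search_tool(""))                      # verbose, corrective error
print(search_tool("agent interfaces", 5))   # normal result
```

The same messages would feel nagging to a human, which is Shunyu's observation that for language models, the longer the error, the better, while for humans it just adds fatigue.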
Today, our special guest is Nick Fine, PhD, Principal UX Research Consultant and Strategist at Adaptavist. Nick touches on several topics, including dealing with ADHD, why user-centric design has lost its way, and the impact of economic cycles and AI on the industry. Nick also talks about the need for UX researchers to focus on insight rather than ‘depth’, stating that the goal is to “get the gold and get out.” And that's just the start! Highlights include: 00:00 - Guest introduction 02:31 - Discussion on ADHD and "Chorus of Bastards" 09:15 - Nick's background in hacking and hyperfocus 17:56 - Frustration with the current state of UX 23:50 - Future of UX and AI agents 30:31 - Making yourself indispensable in UX 35:16 - Over-intellectualization of UX research 39:31 - The role of managers and leaders in UX 44:11 - Conclusion and key takeaways Who is Nick Fine, PhD Nick is a user experience researcher and designer with 20 years of experience in digital and over 12 years of experience as a practitioner. He holds a PhD and MSc in Human-Computer Interaction (HCI) and a BSc in Psychology. He successfully defended his PhD thesis in 2009, entitled “Personalizing Interaction Using User Interface Skins,” where he established a novel means for determining personality type from keyboard and mouse usage and discovered relationships between design elements (color, shape, meaning) and personality type. By combining academic research skills and HCI knowledge with commercial UX experience, Nick has successfully delivered a number of complex and mission-critical projects, including air traffic control, financial systems, and pharmaceutical R&D. He has led UX on projects for a number of brands, including Coca-Cola, SAB Miller, Jaguar Land Rover, Bentley, EY, Novartis, GlaxoSmithKline, BT, Virgin Media, Camelot, and both the Home and Cabinet Office.
Find Nick Here: Nick Fine, PhD on LinkedIn Adaptavist Website Proskin.org Website Subscribe to Brave UX Liked what you heard and want to hear more? Subscribe and support the show by leaving a review on Apple Podcasts (or wherever you listen). Apple Podcast Spotify YouTube Podbean Follow us on our other social channels for more great Brave UX content! LinkedIn Instagram The Show is hosted by Brendan Jarvis, and you can find him here: Brendan Jarvis on LinkedIn The Space InBetween Website
Episode 138I spoke with Meredith Morris about:* The intersection of AI and HCI and why we need more cross-pollination between AI and adjacent fields* Disability studies and AI* Generative ghosts and technological determinism* Developing a useful definition of AGII didn't get to record an intro for this episode since I've been sick. Enjoy!Meredith is Director for Human-AI Interaction Research for Google DeepMind and an Affiliate Professor in The Paul G. Allen School of Computer Science & Engineering and in The Information School at the University of Washington, where she participates in the dub research consortium. Her work spans the areas of human-computer interaction (HCI), human-centered AI, human-AI interaction, computer-supported cooperative work (CSCW), social computing, and accessibility. She has been recognized as an ACM Fellow and ACM SIGCHI Academy member for her contributions to HCI.Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions. 
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on TwitterOutline:* (00:00) Meredith's influences and earlier work* (03:00) Distinctions between AI and HCI* (05:56) Maturity of fields and cross-disciplinary work* (09:03) Technology and ends* (10:37) Unique aspects of Meredith's research direction* (12:55) Forms of knowledge production in interdisciplinary work* (14:08) Disability, Bias, and AI* (18:32) LaMPost and using LMs for writing* (20:12) Accessibility approaches for dyslexia* (22:15) Awareness of AI and perceptions of autonomy* (24:43) The software model of personhood* (28:07) Notions of intelligence, normative visions and disability studies* (32:41) Disability categories and learning systems* (37:24) Bringing more perspectives into CS research and re-defining what counts as CS research* (39:36) Training interdisciplinary researchers, blurring boundaries in academia and industry* (43:25) Generative Agents and public imagination* (45:13) The state of ML conferences, the need for more cross-pollination* (46:42) Prestige in conferences, the move towards more cross-disciplinary work* (48:52) Joon Park Appreciation* (49:51) Training interdisciplinary researchers* (53:20) Generative Ghosts and technological determinism* (57:06) Examples of generative ghosts and clones, relationships to agentic systems* (1:00:39) Reasons for wanting generative ghosts* (1:02:25) Questions of consent for generative clones and ghosts* (1:05:01) Labor involved in maintaining generative ghosts, psychological tolls* (1:06:25) Potential religious and spiritual significance of generative systems* (1:10:19) Anthropomorphization* (1:12:14) User experience and cognitive biases* (1:15:24) Levels of AGI* (1:16:13) Defining AGI* (1:23:20) World models and AGI* (1:26:16) Metacognitive abilities in AGI* (1:30:06) Towards Bidirectional Human-AI Alignment* (1:30:55) Pluralistic value alignment* (1:32:43) Meredith's perspective on deploying AI 
systems* (1:36:09) Meredith's advice for younger interdisciplinary researchersLinks:* Meredith's homepage, Twitter, and Google Scholar* Papers* Mediating Group Dynamics through Tabletop Interface Design* SearchTogether: An Interface for Collaborative Web Search* AI and Accessibility: A Discussion of Ethical Considerations* Disability, Bias, and AI* LaMPost: Design and Evaluation of an AI-assisted Email Writing Prototype for Adults with Dyslexia* Generative Ghosts* Levels of AGI Get full access to The Gradient at thegradientpub.substack.com/subscribe
In this Tech Barometer podcast segment, McDowell shares insights from his report Taming the AI-enabled Edge with HCI-based Cloud Architectures...[…]
Hi Everyone, welcome back to another episode of HCI Insiders. Today, we're thrilled to have Kelsey Guo with us. Kelsey is a distinguished alumna of the Master of Science in Technology Innovation program at the University of Washington. She is a product designer at Cohere Health, and she will share with us how Cohere Health's design supports the healthcare system in the states, introducing automation in claims processing, and thereby speeding up the claims process. Stay tuned on our next episode:) Timeline: 00:00:00 - Start 00:00:05 - Introduction to the new season of HCI Insiders by hosts Mavis, Clara, and Jason. Introduction of guest Kelsey Guo. 00:01:00 - Kelsey introduces herself, detailing her role at Cohere Health and her background, including previous positions at health tech startups. 00:02:12 - Kelsey explains Cohere Health's mission to optimize treatment paths and improve collaboration between healthcare providers and insurance payers. 00:03:24 - Discussion of Kelsey's design responsibilities at Cohere Health, focusing on the submission and review processes that facilitate medical treatments. 00:04:43 - Insight into the collaborative dynamics within Cohere Health's design team and Kelsey's role as a design generalist. 00:06:12 - Kelsey shares details about her current project on Fax Intake Automation, highlighting the integration of machine learning and OCR technology. 00:08:15 - Exploration of Kelsey's motivation for working in healthcare, driven by a belief in design for social good, influenced by her academic projects at the University of Washington. 00:11:57 - Kelsey discusses onboarding resources at Cohere Health, including how new hires learn about healthcare systems and her learning process in the healthcare domain. 00:14:22 - Importance of user feedback in the design process at Cohere Health, and how it influences design decisions and product iterations. 
00:17:27 - Kelsey emphasizes the importance of involving users early in the design process to minimize surprises and refine product development. 00:18:46 - Challenges of healthcare product design related to regulatory compliance and the impact of FDA requirements. 00:21:01 - Discussion of unique challenges in healthcare UX, such as scalability and the need for products that can adapt to varying client requirements. 00:24:53 - Kelsey shares personal insights on navigating healthcare insurance based on her professional experience, emphasizing the complexities of prior authorizations. 00:26:52 - Conversation about Cohere Health's auto-approval technology and its impact on healthcare providers and patients, highlighting its commitment to transparency. 00:31:26 - Reflection on ethical considerations in healthcare design and the potential need for more radical reforms to improve healthcare processes. 00:33:43 - Kelsey recounts her experience at Scanwell Health during the COVID-19 pandemic, developing home test kits and navigating compliance hurdles. 00:44:10 - Kelsey's academic journey at UC San Diego and the University of Washington, her discovery of HCI, and her drive to use design for social good. 00:52:08 - Application of HCI theories in current work, including the labor illusion concept, and the role of AI and machine learning in future healthcare UX. 00:57:01 - Kelsey advises her younger self on career paths and the value of patience and exploration in professional development.
In Our New World of Adult Bullies: How to Spot Them, How to Stop Them, author Bill Eddy - lawyer, therapist, educator, and Co-Founder of High Conflict Institute - writes with authority that comes from 40+ years of working with bullies and other high-conflict personality individuals. Bullies may always have been a feature of human society. Eddy suggests that between 5 and 10% of people have personalities that do not allow them to put the reins on their abusive behaviors. Rich with examples, Eddy tells us how to spot bullying behaviors as well as techniques to contain, channel, and stop the abuse that bullies visit on their victims. Eddy's work - in his book and his conversation - avoids the simplistic understanding that bullies are simply bad. Rather, he speaks about how bullying behavior, when channeled, can push us to be better, push society into new frontiers that may not otherwise be accessible. Bill Eddy is a lawyer, therapist, mediator and the Co-Founder and Chief Innovation Officer of the High Conflict Institute. He is the author of over 20 books and manuals about managing relationships and situations with high conflict people and bullies. He trains lawyers, judges, mediators, and therapists worldwide in managing high conflict situations. Now he is writing books for everyone, including his latest: Our New World of Adult Bullies: How to Spot Them - How to Stop Them.Bill Eddy, LCSW, Esq. is the co-founder and Chief Innovation Officer. While pioneering High Conflict Personality Theory (HCP), he was the National Conflict Resolution Center's Senior Family Mediator for 15 years, a Certified Family Law Specialist for 15 years, and a licensed clinical social worker therapist for over 12 years.Bill serves on the faculty of the Straus Institute for Dispute Resolution at the Pepperdine University School of Law and is a Conjoint Associate Professor with the University of Newcastle Law School in Australia. He has been a speaker and trainer in over 35 U.S.
states and 13 countries.The author or co-author of over 20 books, manuals, and workbooks, he also has a popular blog on the Psychology Today website with millions of views. He co-hosts the podcast It's All Your Fault! with HCI co-founder Megan Hunter. https://highconflictinstitute.com/
Hugo speaks with Shreya Shankar, a researcher at UC Berkeley focusing on data management systems with a human-centered approach. Shreya's work is at the cutting edge of human-computer interaction (HCI) and AI, particularly in the realm of large language models (LLMs). Her impressive background includes being the first ML engineer at Viaduct, doing research engineering at Google Brain, and software engineering at Facebook. In this episode, we dive deep into the world of LLMs and the critical challenges of building reliable AI pipelines. We'll explore: The fascinating journey from classic machine learning to the current LLM revolution Why Shreya believes most ML problems are actually data management issues The concept of "data flywheels" for LLM applications and how to implement them The intriguing world of evaluating AI systems - who validates the validators? Shreya's work on SPADE and EvalGen, innovative tools for synthesizing data quality assertions and aligning LLM evaluations with human preferences The importance of human-in-the-loop processes in AI development The future of low-code and no-code tools in the AI landscape We'll also touch on the potential pitfalls of over-relying on LLMs, the concept of "Habsburg AI," and how to avoid disappearing up our own proverbial arseholes in the world of recursive AI processes. Whether you're a seasoned AI practitioner, a curious data scientist, or someone interested in the human side of AI development, this conversation offers valuable insights into building more robust, reliable, and human-centered AI systems. 
LINKS The livestream on YouTube (https://youtube.com/live/hKV6xSJZkB0?feature=share) Shreya's website (https://www.sh-reya.com/) Shreya on Twitter (https://x.com/sh_reya) Data Flywheels for LLM Applications (https://www.sh-reya.com/blog/ai-engineering-flywheel/) SPADE: Synthesizing Data Quality Assertions for Large Language Model Pipelines (https://arxiv.org/abs/2401.03038) What We've Learned From A Year of Building with LLMs (https://applied-llms.org/) Who Validates the Validators? Aligning LLM-Assisted Evaluation of LLM Outputs with Human Preferences (https://arxiv.org/abs/2404.12272) Operationalizing Machine Learning: An Interview Study (https://arxiv.org/abs/2209.09125) Vanishing Gradients on Twitter (https://twitter.com/vanishingdata) Hugo on Twitter (https://twitter.com/hugobowne) In the podcast, Hugo also mentioned that this was the 5th time he and Shreya chatted publicly. which is wild! If you want to dive deep into Shreya's work and related topics through their chats, you can check them all out here: Outerbounds' Fireside Chat: Operationalizing ML -- Patterns and Pain Points from MLOps Practitioners (https://www.youtube.com/watch?v=7zB6ESFto_U) The Past, Present, and Future of Generative AI (https://youtu.be/q0A9CdGWXqc?si=XmaUnQmZiXL2eagS) LLMs, OpenAI Dev Day, and the Existential Crisis for Machine Learning Engineering (https://www.youtube.com/live/MTJHvgJtynU?si=Ncjqn5YuFBemvOJ0) Lessons from a Year of Building with LLMs (https://youtube.com/live/c0gcsprsFig?feature=share) Check out and subcribe to our lu.ma calendar (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk) for upcoming livestreams!
Hey everyone! Welcome back to a brand new season of HCI Insiders! This time we're thrilled to have Professor Marti Louw, a faculty member at the Human-Computer Interaction Institute at Carnegie Mellon University. As a design-based researcher, Marti focuses on using design as a creative problem-solving method to collaboratively envision and create technology-enhanced learning environments that are socially co-constructed, personally relevant, and emancipatory. Before diving into education technology and HCI, Marti was an educator and producer for museums, documentaries, and films. She studied Biology as an undergraduate and then pursued Interaction Design at CMU. In this episode, we'll explore her academic and career journey and get her insights on the future of EdTech. Timeline: 00:00 Introduction to the podcast and guest, Professor Marti Louw. 02:44 Marti discusses her career transition from the television industry to interactive media and museums. 03:32 Marti reflects on the evolution of media and the web from broadcast to digital. 05:43 Transition into academia through an NSF grant, starting her academic career. 07:13 Fascination with finding the right tools and mediums for learning. 08:59 Approach to teaching students through real-world problems. 12:01 Opportunities for exploration and pure design. 13:38 Reflection on research methods like speed dating and service blueprint. 15:05 Marti's fascination with science and nature and how it influenced her career decisions. 15:59 Collaboration with the Carnegie Museum of Natural History and the use of high-resolution zoomable imagery for public science engagement. 18:15 The Macro Invertebrate project and the use of high-resolution imagery to improve water quality assessment by everyday citizens. 19:50 Designing museum content for diverse target users, including K-12 kids and the general public. 21:05 Importance of layering information to engage different types of museum visitors.
24:03 Emphasis on authenticity and uniqueness in museum exhibits, and the influence of the City Museum of St. Louis. 25:35 Balancing safety and innovation in children's museums. 27:09 Using constraints as opportunities in exhibit design. 28:39 Thoughts on how AI will impact educational technology and learning environments. 33:48 Importance of understanding teaching and learning from a practical perspective. 34:28 Challenges in the Ed Tech market and the need for sustainable products. 37:20 Importance of process documentation and journaling for creative practice. 44:16 Importance of constructive feedback and growth-oriented conversations. 46:49 Introduction to the METALS program at CMU. 51:27 Final reflections and farewell.
Send us a Text Message.How do government agencies ensure their cloud solutions are both secure and efficient? Join us on the Cables2Clouds podcast as we unravel the complexities of Government cloud solutions with our distinguished guest, Erica Cooper from Cisco. With her deep expertise in cloud technology tailored for the government sector, Erica provides invaluable insights into the unique requirements and security considerations of Government cloud environments. We explore why Microsoft Azure is a favored choice due to its integration with Office 365 and the critical role of hybrid solutions like Azure Hub and HCI in maintaining secure, isolated environments essential for national security.Ever wondered about the painstaking process of transitioning government applications from physical servers to the cloud? We tackle this intricate journey, focusing on US government deployments and the substantial presence of Microsoft Azure for Government (MAG) in these projects. Erica sheds light on the importance of having a point of presence in the continental US (CONUS) for effective communication and operational efficiency. We also delve into the global proliferation of Microsoft Azure for Government services, comparing it with AWS GovCloud and discussing the significance of terms like CONUS and OCONUS in this context.In our deep dive into implementing GovCloud, we emphasize the paramount importance of security in managing and deploying government cloud resources. Erica walks us through the rigorous vetting processes, security clearances, and collaborative efforts necessary to build and manage secure cloud infrastructure. We touch on the logistical challenges, from coordinating escorts to setting up secure facilities, and discuss the integration of AWS Cloud and Cisco's Nexus Dashboard Fabric Controller for enhanced network visibility. 
Don't miss out on this comprehensive discussion that highlights the practical benefits of transitioning from traditional data centers to sophisticated cloud environments. Stay tuned for more insights, and remember to subscribe and follow us on social media for the latest updates!Check out the Fortnightly Cloud Networking NewsVisit our website and subscribe: https://www.cables2clouds.com/Follow us on Twitter: https://twitter.com/cables2cloudsFollow us on YouTube: https://www.youtube.com/@cables2clouds/Follow us on TikTok: https://www.tiktok.com/@cables2cloudsMerch Store: https://store.cables2clouds.com/Join the Discord Study group: https://artofneteng.com/iaatjArt of Network Engineering (AONE): https://artofnetworkengineering.com
Feeling tethered to your screens? Doomscrolling much? Have you gotten that little message from your phone, ratting you out, informing you that "you spent an average of XX hours and xx minutes daily of screen time"?? Do you wish you could set some better boundaries with tech/social media/screens in general? Let's face it, our devices are here to stay. How can we make them work FOR US, instead of distracting us from the personal connections we need? How are we supposed to "multitask"?? Is the human brain even capable of such a feat?? Cue our expert in human-computer interaction (HCI), Gloria Mark, PhD! WE ARE SO FORTUNATE to be able to pick her brain about how to make OUR BRAINS better at prioritizing our precious mental currency: OUR ATTENTION. Dr. Mark is the author of Attention Span and Multitasking in the Digital Age, the Chancellor's Professor of Informatics at UC Irvine, and has published over 200 papers in top academic journals, and appeared on scores of platforms, including the New York Times, BBC, NPR, the Atlantic, and recently on Armchair Expert with Dax Shepard (check it out, the episode is GREAT!) We are aware that we likely outkicked our coverage, and are SO HAPPY to share Dr. Mark's expertise with y'all. Strap in, pay attention, this is a can't-miss episode, friends! :) Topics in this episode include: The MYTH of multitasking. What is "distraction cost"? How attempting to "multitask" affects our brains and bodies (hint: stress!) Is it really the "notifications" on our phones that are distracting us, or is it something else? Are we really "victims of the algorithm" when it comes to social media? How can being "information rich" make us "attention poor"? How can we be aware of our "urges" to check our phones/email/computer and become more intentional in our use of devices? Learn more about Dr. Mark's work at her website. Her latest book, Attention Span, is available nationwide wherever books are sold! Learn more about her book here.
Your Doctor Friends have some BIG THINGS in the works for "refreshing" the pod, and how we deliver meaningful, usable, valid health education to YOU, our dear friends! You'll be hearing some "upcycled" episodes this summer while we work on implementing these changes, and we will be back in full force in the next month or so with a brand new haircut ;) Thanks for tuning in, friends! Please sign up for our “PULSE CHECK” monthly newsletter! Signup is easy, right on our website, and we PROMISE not to spam you. We just want to send you monthly cool articles, videos, and thoughts :) For more episodes, limited edition merch, to send us direct messages, and more, follow this link! Connect with us: Website: https://yourdoctorfriendspodcast.com/ Email us at yourdoctorfriendspodcast@gmail.com @your_doctor_friends on Instagram - Send/DM us a voice memo or question and we might play it/answer it on the show! @yourdoctorfriendspodcast1013 on YouTube @JeremyAllandMD on Instagram, Facebook, and Twitter/X @JuliaBrueneMD on Instagram
THE EMBC NETWORK featuring: ihealthradio and worldwide podcasts
HOW TO LOSE WEIGHT WITH JULIE KAANAPU What does it take to become a National Board Certified Coach with 25 years of experience in health and wellness? Julie Kaanapu, NBC-HWC (national board-certified health & wellness coach) has over 25 years in the health industry, spanning physical therapy, implementing medication management protocols, coaching clients to achieve their health goals, and assisting physicians with training, certification, and implementation programs focused on optimizing hormones to improve patient outcomes and reduce risk of disease. She has a dual bachelor's degree in sports medicine & biology from the University of Oregon, is a national board-certified health & wellness coach, certified in functional nutrition from FNL, a certified life coach from HCI, a certified graduate of Worldlink Medical (mastering the protocols for optimization of hormone replacement therapy), a physician liaison, medical consultant & field coach with Biote Medical (national training company on preventative medicine through nutraceutical & hormone optimization), a speaker, and a lifestyle medicine practitioner. Her passion is to educate and empower women over 40 who feel overlooked by the medical system and struggle with symptoms of F.L.A.B.B. (fatigue, low libido, anxiety, brain fog & belly fat) to take control of their health and optimize their quality of life without unnecessary medications!
Links to promote: https://prohealthshare.com/loseweight https://prohealthshare.com/5questions Master My Metabolism Workshop: https://www.prohealthshare.com/mastermymetabolism (Reduced price of $47) LIQUID COLLAGEN: https://modere.io/TB2OkT https://modere.io/BZDgOA BALANCED, BEAUTIFUL, ABUNDANT Retreat: https://wellnessmarketingltd.com/balanced-beautiful-abundant-retreat/ FREE BREAKTHROUGH CALL: https://calendly.com/rebeccaelizabethwhitman/breakthrough For more information go to… https://www.rebeccaelizabethwhitman.com/ https://pillar.io/rebeccaewhitman #HealthCoach #NationalBoardCertifiedCoach #HealthCoaching #NationalBoardCertification #WellnessCertification #WellnessCoach #HealthAndWellnessCoaching #HealthAndWellnessCoach #CertifiedCoach #WellnessCoaching #WellnessProgram #HealthCertification #WellnessCoachCertification #CertifiedHealthCoach #Health #Fitness #HealthAndWellness #Wellness #Lifestyle #Coaching
A short, thought-provoking book about what happens to our online identities after we die. These days, so much of our lives takes place online—but what about our afterlives? Thanks to the digital trails that we leave behind, our identities can now be reconstructed after our death. In fact, AI technology is already enabling us to “interact” with the departed. Sooner than we think, the dead will outnumber the living on Facebook. In this thought-provoking book, Carl Öhman explores the increasingly urgent question of what we should do with all this data and whether our digital afterlives are really our own—and if not, who should have the right to decide what happens to our data. The stakes could hardly be higher. In the next thirty years alone, about two billion people will die. Those of us who remain will inherit the digital remains of an entire generation of humanity—the first digital citizens. Whoever ends up controlling these archives will also effectively control future access to our collective digital past, and this power will have vast political consequences. The fate of our digital remains should be of concern to everyone—past, present, and future. Rising to these challenges, Öhman explains, will require a collective reshaping of our economic and technical systems to reflect more than just the monetary value of digital remains. As we stand before a period of deep civilizational change, The Afterlife of Data: What Happens to Your Information When You Die and Why You Should Care (U Chicago Press, 2024) will be an essential guide to understanding why and how we as a human race must gain control of our collective digital past—before it is too late. Jake Chanenson is a computer science Ph.D. student and law student at the University of Chicago. Broadly, Jake is interested in topics relating to HCI, privacy, and tech policy. Jake's work has been published in top venues such as ACM's CHI Conference on Human Factors in Computing Systems. 
Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network
Ready to start a podcast, build an app, and manifest money? Start HERE. Summer McStravick was the creator of Hay House Radio (Louise Hay) and hundreds of Hay House podcasts and webinars. She's a longtime leader in personal development, the prior co-host of Dr. Wayne Dyer's podcast, and a featured speaker, author, and teacher for Hay House, HCI, Omvana, the Shift Network, Alternatives, Insight Timer, Mindvalley, AFest and more. Learn more about Summer: https://flowdreaming.com/about-summer/
Ben Shneiderman is a leading figure in the field of human-computer interaction (HCI). Having founded one of the oldest HCI research centers in the country at the University of Maryland in 1983, Shneiderman has been intently studying the design of computer technology and its use by humans. Currently, Ben is a Distinguished University Professor in the Department of Computer Science at the University of Maryland and is working on a new book on human-centered artificial intelligence. I'm so excited to welcome this expert from the field of UX and design to today's episode of Experiencing Data! Ben and I talked a lot about the complex intersection of human-centered design and AI systems. In our chat, we covered: Ben's career studying human-computer interaction and computer science. (0:30) 'Building a culture of safety': Creating and designing ‘safe, reliable and trustworthy' AI systems. (3:55) 'Like zoning boards': Why Ben thinks we need independent oversight of privately created AI. (12:56) 'There's no such thing as an autonomous device': Designing human control into AI systems. (18:16) A/B testing, usability testing and controlled experiments: The power of research in designing good user experiences. (21:08) Designing ‘comprehensible, predictable, and controllable' user interfaces for explainable AI systems and why [explainable] XAI matters. (30:34) Ben's upcoming book on human-centered AI. 
(35:55) Resources and Links: People-Centered Internet: https://peoplecentered.net/ Designing the User Interface (one of Ben's earlier books): https://www.amazon.com/Designing-User-Interface-Human-Computer-Interaction/dp/013438038X Bridging the Gap Between Ethics and Practice: https://doi.org/10.1145/3419764 Partnership on AI: https://www.partnershiponai.org/ AI incident database: https://www.partnershiponai.org/aiincidentdatabase/ University of Maryland Human-Computer Interaction Lab: https://hcil.umd.edu/ ACM Conference on Intelligent User Interfaces: https://iui.acm.org/2021/hcai_tutorial.html Human-Computer Interaction Lab, University of Maryland, Annual Symposium: https://hcil.umd.edu/tutorial-human-centered-ai/ Ben on Twitter: https://twitter.com/benbendc Quotes from Today's Episode The world of AI has certainly grown and blossomed — it's the hot topic everywhere you go. It's the hot topic among businesses around the world — governments are launching agencies to monitor AI and are also making regulatory moves and rules. … People want explainable AI; they want responsible AI; they want safe, reliable, and trustworthy AI. They want a lot of things, but they're not always sure how to get them. The world of human-computer interaction has a long history of giving people what they want, and what they need. That blending seems like a natural way for AI to grow and to accommodate the needs of real people who have real problems. And not only the methods for studying the users, but the rules, the principles, the guidelines for making it happen. So, that's where the action is. Of course, what we really want from AI is to make our world a better place, and that's a tall order, but we start by talking about the things that matter — the human values: human rights, access to justice, and the dignity of every person. 
We want to support individual goals, a person's sense of self-efficacy — they can do what they need to in the world, their creativity, their responsibility, and their social connections; they want to reach out to people. So, those are the sort of high aspirational goals that become the hard work of figuring out how to build it. And that's where we want to go. - Ben (2:05) The software engineering teams creating AI systems have got real work to do. They need the right kind of workflows, engineering patterns, and Agile development methods that will work for AI. The AI world is different because it's not just programming, but it also involves the use of data that's used for training. The key distinction is that the data that drives the AI has to be the appropriate data, it has to be unbiased, it has to be fair, it has to be appropriate to the task at hand. And many people and many companies are coming to grips with how to manage that. This has become controversial, let's say, in issues like granting parole, or mortgages, or hiring people. There was a controversy that Amazon ran into when its hiring algorithm favored men rather than women. There's been bias in facial recognition algorithms, which were less accurate with people of color. That's led to some real problems in the real world. And that's where we have to make sure we do a much better job and the tools of human-computer interaction are very effective in building these better systems in testing and evaluating. - Ben (6:10) Every company will tell you, “We do a really good job in checking out our AI systems.” That's great. We want every company to do a really good job. But we also want independent oversight of somebody who's outside the company — someone who knows the field, who's looked at systems at other companies, and who can bring ideas and bring understanding of the dangers as well. These systems operate in an adversarial environment — there are malicious actors out there who are causing trouble. 
You need to understand what the dangers and threats are to the use of your system. You need to understand where the biases come from, what dangers are there, and where the software has failed in other places. You may know what happens in your company, but you can benefit by learning what happens outside your company, and that's where independent oversight from accounting companies, from governmental regulators, and from other independent groups is so valuable. - Ben (15:04) There's no such thing as an autonomous device. Someone owns it; somebody's responsible for it; someone starts it; someone stops it; someone fixes it; someone notices when it's performing poorly. … Responsibility is a pretty key factor here. So, if there's something going on, if a manager is deciding to use some AI system, what they need is a control panel, let them know: what's happening? What's it doing? What's going wrong and what's going right? That kind of supervisory autonomy is what I talk about, not full machine autonomy that's hidden away and you never see it because that's just head-in-the-sand thinking. What you want to do is expose the operation of a system, and where possible, give the stakeholders who are responsible for performance the right kind of control panel and the right kind of data. … Feedback is the breakfast of champions. And companies know that. They want to be able to measure the success stories, and they want to know their failures, so they can reduce them. The continuous improvement mantra is alive and well. We do want to keep tracking what's going on and make sure it gets better. Every quarter. - Ben (19:41) Google has had some issues regarding hiring in the AI research area, and so has Facebook with elections and the way that algorithms tend to become echo chambers. These companies — and this is not through heavy research — probably have the heaviest investment of user experience professionals within data science organizations. 
They have UX people, ML-UX people, UX-for-AI people; they're at the cutting edge. I see a lot more generalist designers in most other companies. Most of them are rather unfamiliar with any of this or with the ramifications for the design work that they're doing. But even these largest companies, which probably have the biggest penetration to the most people out there, are getting some of this really important stuff wrong. - Brian (26:36) Explainability is a competitive advantage for an AI system. People will gravitate towards systems that they understand, that they feel in control of, that are predictable. So, the big discussion about explainable AI focuses on what's usually called post-hoc explanations, and Shapley, LIME, and other methods are usually tied to the post-hoc approach. That is, you use an AI model, you get a result, and you say, "What happened?" Why was I denied a parole, or a mortgage, or a job? At that point, you want to get an explanation. Now, that idea is appealing, but I'm afraid I haven't seen too many success stories of that working. … I've been diving through this for years now, and I've been looking for examples of good user interfaces for post-hoc explanations. It took me a long time till I found one. The culture of AI model-building would be much bolstered by an infusion of thinking about what the user interface will be for these explanations. Even DARPA's XAI (Explainable AI) program, which has 11 projects within it, has not really grappled with this in a good way about designing what it's going to look like. Show it to me. … There is another way. And the strategy is basically prevention. Let's prevent the user from getting confused so they don't have to request an explanation. We walk them along, let the user walk through the steps—like Amazon's seven-step checkout process—and you know what's happened in each step, you can go back, you can explore, you can change things in each part of it.
It's also what TurboTax does so well, in really complicated situations, and walks you through it. … You want to have a comprehensible, predictable, and controllable user interface that makes sense as you walk through each step. - Ben (31:13)
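The Shapley values Ben mentions as the basis of post-hoc explanation can be made concrete with a small, stdlib-only sketch. This is an illustrative toy, not anything from the episode: the feature names and weights below are hypothetical, and production tools such as SHAP use approximations because the exact computation enumerates every feature subset and is exponential in the number of features.

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values: each feature's weighted average marginal
    contribution over all subsets of the other features."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                # Shapley weight for a coalition of size k out of n players.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value_fn(set(subset) | {f}) - value_fn(set(subset)))
        phi[f] = total
    return phi

# Hypothetical toy "model": the prediction is a weighted sum of the
# features present in the coalition.
weights = {"income": 2.0, "age": 1.0, "zip": 0.5}
predict = lambda present: sum(weights[f] for f in present)

print(shapley_values(list(weights), predict))
```

For an additive model like this toy one, each feature's Shapley value works out to its own weight, which makes the attribution easy to sanity-check; the method only becomes interesting (and expensive) when features interact.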
DESCRIPTION: In this episode, Phil and Roy welcome Myke Clarkson to the show! We hear about his background in herpetoculture, before waxing poetic about field herping, then diving into his role at the International Herpetological Symposium. We cover a lot of ground in this one in a really short time! Please like, subscribe, and share this episode, if you feel so inclined. To offer direct support to the show with a tip or donation, consider subscribing to our Patreon (https://patreon.com/projectherpetoculture) and have a look at our generous sponsors at the affiliate links below! SHOW NOTES: Follow Myke on IG: @mykeclarkson Check out the IHS website: https://www.iherpsymp.org Check out the HCI website: https://herpconservation.com/ MERCH: https://www.projectherp.com/shop OUR SPONSORS: Custom Reptile Habitats: https://customreptilehabitats.com/PH Cold Blooded Caffeine (apply the code 'projectherp' for 10% off): https://coldbloodedcaffeine.com/?ref=PH FairyTail Dragons: https://fairytaildragons.com Exo Terra: https://exo-terra.com Tamura Designs (apply the code 'Herpetoculture' for 15% off): https://tamura-designs.com Happy Dragons: https://plus.happydragons.com/home SOCIAL LINKS: Support us on Patreon: https://patreon.com/projectherpetoculture Subscribe on Youtube: https://www.youtube.com/channel/UC0UCdymrooiNVloQlxrW1FA Follow P : H on Instagram: @projectherpetoculture Follow Phil on Instagram: @aridsonly Follow Roy on Instagram: @wellspringherp
We all have had daydreams – those ideas that float into our consciousness from time to time. Usually we don't take much stock in them, and they float away into the ether. What if I were to tell you that you can upgrade your daydreams, changing them from occasional fanciful thoughts into insights that can alter your emotions and beliefs and create a profound shift in your life? The process is called Flowdreaming, and we dive into the method and the outcomes with its creator, personal growth coach Summer McStravick. Summer came up with the technique over 20 years ago and has helped countless people make meaningful changes in their lives. Summer tells us: · how her work with Louise Hay and Dr. Wayne Dyer led to her creation of Flowdreaming · the difference between Flowdreaming and regular daydreaming · what Flowdreaming does that can't be done through meditation · how Flowdreaming can reshape your neural pathways · how Flowdreaming creates inner bliss · how to use Flowdreaming to manifest abundance Take a look at your daytime thoughts in a new light by listening in on this exciting episode of Dream Power Radio. Meet Summer McStravick, the woman who invented Flowdreaming and founded M.E. School. Summer specializes in the architecture of emotions — the language of the universe. As a spiritual coach, she helps you harness Flow so you can program your future and experience unstoppable upleveling and personal inner growth in every aspect of your life. Her latest book is Stuff Nobody Taught You. Summer McStravick is also known for her extraordinary background, having been hand-selected to work for Louise Hay, where she had the opportunity to develop a "start-up" division within the publishing company Hay House. There she created audio products and programs for a vast network of the world's greatest spirituality and self-growth teachers and spiritual coaches.
There, Summer dreamed up and built the studios, architecture, framework, and programs for HayHouseRadio.com, one of the first live-broadcast radio networks streamed online. She also created some of the first webinars to reach the public from any company. In a stroke of pure Flow, Summer was being set up for her future work as a thought leader, author, spiritual coach, and teacher, as she worked closely with, and was mentored by, Hay House radio talents such as Esther and Jerry Hicks (with Abraham), Suze Orman, Dr. Christiane Northrup, Dr. Wayne Dyer, Marianne Williamson, Doreen Virtue, Gregg Braden, Caroline Myss, Debbie Ford, and many other luminaries in the fields of self-help and spirituality. It was while at Hay House Radio that Summer McStravick began to share her previously private practice for manifesting, which eventually developed into Flowdreaming® and The Flow Method. After being recommended to the world by her podcast co-host and mentor Dr. Wayne Dyer, public demand for Flowdreaming exploded, so Summer wrote two books and recorded hundreds of audios about the technique. She then began her life-changing journey as a spiritual coach, teaching Flowdreaming to hundreds of thousands of people. Her books and courses have been published by HCI, Hay House US, Hay House UK, and Hay House Australia, and have been translated into French, Italian, German, and Slovak. Want more ways to find joy in your life? Check out my website thedreamcoach.net for information about my courses, blogs, books, and ways to create a life you love.
In this episode I chat with Summer McStravick. Summer is a longtime leader in personal development, the prior co-host of Dr. Wayne Dyer's podcast, and a featured speaker, author, and course creator for Hay House, HCI, Omvana, the Shift Network, Alternatives, Insight Timer, Mindvalley, AFest, and more. In this episode we chat about: 1. Her favorite technique for manifesting abundance and a positive mindset, called Flowdreaming 2. The "Trifecta of Trust" - trust of self, trust of others, trust of the Universe/life (and how that trust gets broken/repaired) 3. Practical + energetic ways for reigniting your inner spark, finding a new direction, and the power of reinvention for everyone who's currently emotionally flatlining I loved this episode and know you will too! xo Check out Summer's website, her book, and offerings: https://flowdreaming.com/