Key topics in today's conversation include:
Welcoming Alex to the Podcast (1:56)
Military Contracts Explained (4:38)
Skills Gained from Military Service (7:54)
Alex's First Truck Experience (14:36)
Becoming an Independent Contractor (17:23)
Purchasing a Truck (19:19)
Understanding Business (21:59)
Financial Planning (23:17)
Fuel Costs (25:21)
Fuel Efficiency Habits (27:20)
Speed Management (31:50)
Analyzing Empty vs. Loaded Miles (37:47)
Average MPG Calculation (38:59)
Defensive Driving Insights (41:17)
Relaxed Driving Approach (43:03)
Advice for Owner-Operators (46:09)
Alex's Future Aspirations (47:41)
Changing the Stigma of Trucking and Parting Thoughts (52:05)

Oakley Trucking is a family-owned and operated trucking company headquartered in North Little Rock, Arkansas. For more information, check out our show website: podcast.bruceoakley.com
Highlights from this week's conversation include:
Current IPO Drought Discussion (1:12)
Assessing Venture Capital Allocations (4:01)
AGM Season Insights (9:17)
Leveraging AGMs for Impact (12:47)
Debate on LP Due Diligence Frameworks (15:27)
Democratization of VC Access (20:32)
Hamilton Lane Product Introduction (21:54)
Emerging Manager Programs Challenges (24:25)
Vanguard's Private Equity Recommendation (26:39)
Technological and Economic Shifts (28:12)
Climate Fund Investment Discussion (33:12)
Market Dynamics and Differentiation (36:20)
Future of Venture Capital and Parting Thoughts (39:05)

Swimming with Allocators is a podcast that dives into the intriguing world of Venture Capital from an LP (Limited Partner) perspective. Hosts Alexa Binns and Earnest Sweat are seasoned professionals who have donned various hats in the VC ecosystem. Each episode, we explore where the future opportunities lie in the VC landscape with insights from top LPs on their investment strategies and industry experts shedding light on emerging trends and technologies. The information provided on this podcast does not, and is not intended to, constitute legal advice; instead, all information, content, and materials available on this podcast are for general informational purposes only.
Highlights from this week's conversation include:
Pranav's Background and Journey in Data (1:10)
Backstory of Mooncake Labs (2:05)
PostgreSQL as a Force (4:47)
Curiosity in Product Management (7:33)
Challenges with Iceberg (11:12)
Go-to-Market Strategy (13:52)
Building Community Engagement (15:56)
Importance of Feedback (18:26)
AI Integration in Mooncake Labs (21:29)
Innovation in Data Interaction (23:49)
PostgreSQL and Startup Growth (28:41)
Core Component of Business Strategy (31:20)
The Origin of the Name Mooncake Labs (34:12)
Upcoming Product Release (38:40)
Connecting with Mooncake Labs and Parting Thoughts (42:49)

The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.

RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.
Highlights from this week's conversation include:
Viktor's Background and Journey in Data (1:20)
Evolution of Data Architecture (4:41)
The Lakehouse Concept (7:12)
Open Source Innovation (11:05)
Data Production and Decentralization (15:06)
Governance in Decentralized Systems (18:53)
Data Economy and Monetization (21:15)
Security Concerns in Data Processing (24:21)
Impact on Data Consumers (27:37)
Compaction Issues in Data Tables (29:39)
Open Source Lake Keeper Tool and Parting Thoughts (33:02)

The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.

RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.
Highlights from their conversation include:
Welcoming Bola to the Dynamo Team (0:41)
Founders in the Industrial Sector (3:14)
Madelyn's Promotion and Focus (5:37)
Geopolitical Tensions and Supply Chain (7:35)
Dynamo's Rebranding and Focus (12:03)
AI's Role in Venture Capital (18:02)
Navigating Global Trade Policies (22:54)
Impact of Tariffs on Supply Chain (24:01)
Non-Tariff Restrictions and Semiconductor Industry (26:04)
Consumer Confidence and Economic Outlook (29:12)
Uncertainty in Manufacturing Sector (32:29)
Autonomous Vehicles and Market Trends (35:12)
Future of Drone Delivery and Parting Thoughts (39:15)

Dynamo is a VC firm led by supply chain and mobility specialists that focus on seed-stage, enterprise startups. Find out more at: https://www.dynamo.vc/
Highlights from this week's conversation include:
Ruben's Background (1:14)
Defining Operational Data (5:20)
The Convergence of Operational and Analytical Data (10:53)
Evolution from Data Warehousing to Fulfillment Centers (13:19)
Challenges of API Integration (18:25)
Understanding Data Complexity (22:18)
Database vs. API Calls (25:32)
Real-Time Database Views (28:15)
The Evolution of Data Technology (32:37)
Future Topics on PostgreSQL Scaling and Parting Thoughts (34:02)

The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.

RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.
Highlights from this week's conversation include:
Mark's Background and Journey in Data (1:08)
Mark's Time at Microsoft (5:33)
Internal Adoption of Azure (9:20)
Understanding Pain Points (11:06)
Complexity in Software Development (13:15)
Microservices Architecture Overview (17:15)
Microservices vs. Monolith (22:08)
Modernizing Legacy Applications (24:39)
Dependency Management with Dapr (29:43)
Infrastructure as Code (33:04)
AI's Rapid Evolution and Vendor Changes (37:27)
Language Models in Application Development (39:05)
AI in Creative Applications (42:59)
The Future of Backend Development (47:22)
Streamlining Development Processes (49:29)
Dapr as an Open Source Solution (51:11)
Getting Started with Dapr and Parting Thoughts (51:39)

The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.

RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.
Highlights from this week's conversation include:
Willz's Background and Journey (1:25)
Discussing Real Estate Data Challenges (2:58)
Inspiration for Software Creation (4:05)
From Spreadsheet to Software (9:04)
Challenges in Ownership Identification (12:24)
Company Acquisition (16:00)
Pitching Investors with Data Tools (18:46)
Lessons Learned from Selling the Company (21:45)
The Journey to Ready (26:55)
Sales Development Representatives Explained (29:22)
Role of Data in Sales (33:30)
Real-Time Dashboards (36:54)
Human-AI Collaboration (39:53)
Human Touch in Data Compilation (44:02)
Paradigm Shift in Data Access (46:19)
Frustrations with Sales Cycles (48:22)
Value of Genuine Conversations (55:23)
Optimizing Internal Tools (56:23)
Future of Data Interfaces and Parting Thoughts (57:21)

The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.

RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.
Highlights from this week's conversation include:
The Impact of AI (1:25)
Historical Context of Technology (2:31)
Pre-existing Infrastructure for Change (4:42)
AI as a Personal Assistant (7:10)
Future of Company Roles (9:13)
Managing Teams in a Dystopian AI Future (12:31)
Business Architecture Choices (15:52)
Integration Tool Usage (18:07)
AI's Impact on Data Roles (21:53)
AI as an Interface (24:04)
Trust in AI vs. SQL (27:12)
Snowflake's Acquisition of Dataflow (29:54)
Regression to the Mean Concept (33:49)
AI's Role in Data Platforms (37:04)
User Experience in Data Tools (44:41)
Future of Data Tools (46:57)
Environment Variable Setup (51:10)
Future of Software Implementation and Parting Thoughts (52:10)

The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.

RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.
Highlights from this week's conversation include:
Pete's Career Overview (1:00)
AI and Data Engineering Discussion (2:05)
Themes of Data Council (4:19)
High-Frequency Trading Insights (8:04)
San Francisco's Unique Advantages (10:27)
Data Council Conference Preview (13:23)
The Magic of In-Person Events (15:45)
Collapsing Batch and Streaming Systems (19:47)
Leveraging Local Hardware for Data Processing (22:07)
Future of Blockchain in Computing (23:57)
Intersection of AI and Data Management (26:47)
Advice for AI Startup Founders (28:44)
Blurring Lines Between Data Roles (32:46)
The Evolving Role of Engineers (36:56)
Discount Code for Data Council and Parting Thoughts (38:23)

The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.

RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.
Key topics in today's conversation include:
Introducing Today's Topic (0:12)
Evolution of Advertising (3:13)
Changes in Marketing Strategies (4:43)
Fragmentation of Audiences (6:39)
Challenges of Targeting Drivers (10:16)
Common Marketing Mistakes (12:43)
Importance of Authentic Messaging (15:10)
Introduction to Digital Advertising (18:00)
Local Advertising Success (19:53)
YouTube's Role in Company Culture (22:08)
The Impact of Short-Form Content (25:31)
Role of Pixels in Digital Marketing (28:34)
Geo-Targeting in Advertising (33:33)
Future of AI in Marketing (36:41)
Finding the Right Job (39:14)
Thankful for Truck Drivers and Parting Thoughts (42:17)

Oakley Trucking is a family-owned and operated trucking company headquartered in North Little Rock, Arkansas. For more information, check out our show website: podcast.bruceoakley.com.
The World Indoor Championships are officially behind us, so we have lots to unpack. Overall, the meet lacked star power but still produced some memorable moments. Three days, 516 athletes, one championship record, three world leads, 32 national records. We had drama. We had dominance. We had Jakob Ingebrigtsen running off with two gold medals and making it look like a tempo run. We had Gudaf Tsegay breaking a championship record and redefining what a full send means. We had Grant Holloway and Mondo Duplantis pulling off three-peats to cement their greatness. And then there's Josh Hoey — who went from "Is he actually good?" to world indoor 800m champion all within the past 12 months. A personal highlight was Claire Bryant shocking the field in the long jump. Surprise global medals all around — the kind that make the sport so damn fun.

Hosts: Chris Chavez | @chris_j_chavez on Instagram + Preet Majithia | @preetmajithia on Instagram

SUPPORT OUR SPONSORS

KETONE-IQ: Level up your training with Ketone-IQ – a clean shot of energy with no sugar or caffeine. Or try the new Ketone-IQ + Caffeine, combining 5g of ketones with 100mg of green tea caffeine for a smooth, sustained boost. It's used by pro runners like Des Linden and Sara Hall. Proven to enhance endurance, focus, and recovery, ketones are 28% more efficient than glucose. No crashes, no bonking—just next-level performance. Take the shot. Feel the difference. Save 30% off your first subscription order & receive a free six pack of Ketone-IQ with KETONE.com/CITIUS.

RUNNA: Runna is the #1 rated personalized running app designed to help you crush your goals no matter the distance. Runna is trusted by hundreds of thousands of runners around the world and makes expert coaching accessible with personalized training plans that fit every goal, fitness level, and schedule. Whether it's someone's first 5K or someone chasing a marathon PB, they are here to help runners train smarter, stronger, and love every step of the way.
Sign up for Runna today and get your first two weeks free using the code CITIUS.

OLIPOP: Big name sodas are rolling out bold new flavors in 2025, but the real buzz is happening in the prebiotic pop aisle. If you haven't jumped on the Olipop train yet, now's the time. BuzzFeed recently named Olipop the best overall soda for flavor — and with a lineup that includes classic root beer, vintage cola, and cherry vanilla, it's easy to see why. Try Olipop today and save 25% on your order using code CITIUS25 at checkout at DrinkOlipop.com.
In this episode of A Shot in the Arm Podcast, hosts Yvette Raphael and Ben Plumley discuss the resilience of South Africa's healthcare system amidst U.S. aid cuts, particularly through USAID and PEPFAR. They highlight the devastating impacts on HIV treatment, TB care, and broader healthcare services due to the sudden cessation of funding. But the country is markedly better prepared than critics might have feared to assume full responsibility for its infectious disease strategies, including the procurement and surveillance functions that the US maintained control over in exchange for the aid. Their conversation extends to cover issues around mental health, future healthcare innovations like long-acting antiretrovirals, and the broader geopolitical implications of donor aid cuts.

00:00 Introduction and Setting the Scene
00:33 Impact of US Aid Cuts on South Africa
03:08 Healthcare Challenges and Government Response
07:04 The Role of Civil Society and Future Preparations
10:21 Consequences of Sudden Aid Withdrawal
14:17 Future of HIV Treatment and Advocacy
16:55 The Threat of Drug-Resistant TB
17:35 Government Investment in Healthcare
19:01 Mental Health Crisis Among Youth
19:41 Impact of USAID Funding Cuts
20:57 Soft Power and International Relations
22:37 South Africa's Self-Reliance
26:43 Addressing Racism and Emigration
32:42 Parting Thoughts and Optimism
Highlights from this week's conversation include:
Solomon's Background and Journey in Data (0:38)
The Importance of a Triple Threat Data Person (5:14)
Sports Sponsorship Analysis at Nielsen (7:31)
Challenges of Implementing AI in Business (11:09)
Understanding Data Delivery Models (14:18)
Innovating Data Delivery (17:38)
Modern Data Sharing Framework (19:09)
Account Management in Data Sharing (23:43)
Data Delivery Systems and Skill Sets (26:08)
Practical Steps for Monetizing Data (29:02)
Building Trust Through Branding (36:51)
LinkedIn Personal Branding Tips (40:54)
Mastering the Basics (44:16)
Professional Development in Data (48:18)
Deep Technical Skills (53:18)
Active and Outcome-Focused Approach (55:25)
Finding Top Data People and Parting Thoughts (56:44)

The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.

RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.
Well, it certainly didn't take long for President Donald Trump and Vice President J.D. Vance to revive the Ugly American brand.Welcome to Season 3: Episode 5 of A View from the Left Side, New Strategies for a New World Order.As the seventh week of Trump's second term comes to a close, the United States, the world and the global financial markets are reeling from his decisions and walkbacks. Trump is not just a flimflam man. He's a flipflopping flimflam man. It's mind-boggling how much has happened since I recorded my last podcast a week ago. Casting off our long-term allies, jumping in bed with Russia, starting a trade war with our biggest trading partners, crashing Wall Street, tanking world financial markets, pissing off consumers—and lying about all of it during a joint session of Congress on primetime TV. Wow, Trump had a big week! Oh, I almost forgot to mention that – also this week—we learned that several states have measles outbreaks. The upside of the measles outbreaks is that DOGE has rehired some of the Centers for Disease Control and Prevention (CDC) staff they fired. Perhaps we do need a few public health people! (Will Arizona have a measles outbreak? We have the same personal exemption for vaccines that Texas does.) So much winning!Ugh … pass the anti-depressants before RFK Jr. outlaws them! Today's episode features a wide-ranging interview with three long-time political authors from Blog for Arizona. Tucson lawyer and former prosecutor Michael Bryan founded Blog for Arizona more than 20 years ago. B4AZ has been published continuously since then. Retired lawyer and former newspaper journalist, Larry Bodine is the past chair of the Legislative District 18 Democrats in Pima County and has been on the Board of Democrats of Greater Tucson for five years, including three years as president. Phoenician David Gordon has had a career in education and is a successful science fiction author, in addition to being a prolific political blogger. 
His experience in science fiction writing probably informs his coverage of the Arizona Legislature.Podcast Time Stamps | The Ugly American Brand Returns | 0:29 | It's the Economy Stupid | 4:02 | The 'Stable Genius' at Work | 5:51 | Shadow Group [DOGE] Dismantles 'Shadow Government' [Deep State] | 7:14 | Is Trump's Election Part of an Antisystems Revolt? | 8:14 | Democrats Shouldn't Defend Systems that Are Broken | 8:38 | Podcast Interview: Today's Guests | 9:55 | Strategic Alliances Crumbling | 11:31 | DOGE Is a Challenge to Constitutional Order | 17:05 | Are We Watching an Antisystems Revolt Unfold? | 23:51 | Egg Prices and Where the Democrats Went Wrong in 2024 | 27:49 | Disinformation and the Media Landscape | 40:27 | Do the Dems Need 'Better Stories' or Better Listening Skills? | 42:42 | Institutional Change | 50:23 | The Resistance Can't Be Invisible | 57:54 | Arizona Politics | 1:02:23 | Diversity Equity and Inclusion | 1:09:37 | 2026: Can Dems Keep the Statewide Offices They Hold and Oust Ciscomani? | 1:14:49 | Parting Thoughts | 1:27:03

You can watch my podcast on YouTube or listen to it on popular podcast platforms including Apple, Spotify, iHeartRadio, Podcast Addict, Podchaser, Pocket Casts, Deezer, Listen Notes and True Fans.
Roark & Sarabeth are back with a special episode sharing our final thoughts on GIRLS and previewing what's next for our pod!! Thank you, dear listeners, for your patience and understanding during our longer-than-anticipated hiatus!
Venture Unlocked: The playbook for venture capital managers.
Follow me @SamirKaji for my thoughts on the venture market, with a focus on the continued evolution of the VC landscape.

We recently had the pleasure of hosting Zal Bilimoria, Co-Founder of Refactor Capital. Zal has had a fascinating career, from building products at Microsoft, Google, Netflix, and LinkedIn to making the leap into being a VC. His story is one of relentless curiosity and a deep passion for technology, something that started early in his life while working in his family's computer business.

In our discussion, Zal walked us through his transition from product management to venture, his time at Andreessen Horowitz, and what ultimately led him to launch Refactor Capital. As a solo GP, he's taken a unique approach to investing, navigating the challenges of fundraising while staying laser-focused on backing founders tackling complex, high-impact problems. We covered everything from the evolution of his investing philosophy to the importance of founder relationships and how he thinks about the future of life sciences and technology.

About Zal Bilimoria
Zal Bilimoria is the Founding Partner of Refactor Capital, a venture firm investing at the intersection of life sciences, technology, and sustainability. With a background in both software and healthcare, Zal brings unique domain expertise and a distinct lens to investing.
Before launching Refactor, originally with his partner David Lee, he was a partner at Andreessen Horowitz, where he focused on emerging technologies.

This was a fun conversation—if you're interested in what it takes to build a venture firm from scratch, how product thinking translates into investing, or where the future of innovation is headed, this episode is a must-listen.

Timestamps:
Topics in this conversation include:
* Zal's Early Life and Background (2:00)
* Career in Product Management (3:06)
* Starting Refactor Capital (6:03)
* Challenges of Starting a New Firm (9:36)
* Portfolio Construction Strategies (13:12)
* Solo GP Model (18:06)
* Advice on Hiring Associates (20:12)
* Fund Size Philosophy (24:32)
* Investment Entry Points (28:34)
* Return Model Considerations (32:04)
* Understanding Ownership Thresholds (36:31)
* Market Influence on Investments (38:44)
* Navigating Investor Relationships (41:08)
* Quick Decision-Making with LPs (43:25)
* Parting Thoughts and Future Outlook (46:32)

I'd love to know what you took away from this conversation with Zal Bilimoria. Follow me @SamirKaji and give me your insights and questions with the hashtag #ventureunlocked. If you'd like to be considered as a guest or have someone you'd like to hear from (GP or LP), drop me a direct message on X.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit ventureunlocked.substack.com
Highlights from their conversation include:
Welcoming Back Earnest to the Show (0:45)
Inspiration Behind "Stress Wood" (1:05)
The Importance of Resilience (2:21)
Value of Storytelling in Investing (9:17)
Understanding the Supply Chain Landscape (12:27)
Opportunities in Non-AI Companies (15:18)
Future Investment Focus Areas (21:43)
The Industrial Landscape and Labor Challenges (24:43)
The Role of Investors in Series A (27:54)
Importance of Industry Knowledge (30:17)
Pre-Seed and Seed Investment Strategies (31:21)
Customer Introductions as a Value Proposition (32:28)
Future of Electrification (34:07)
Best Ecosystems for Supply Chain Startups and Parting Thoughts (34:16)

Dynamo is a VC firm led by supply chain and mobility specialists that focus on seed-stage, enterprise startups. Find out more at: https://www.dynamo.vc/
Exploring Off-Grid Cooking and Food Preservation with Half Ass Off Grid

In this episode of the Okayest Cook Podcast, host Chris Whonsetler and co-host Corey Cole are joined by Chris from Half Ass Off Grid. The discussion delves into the essentials of living off-grid, focusing on cooking and food preservation without traditional power sources. They share experiences and tips on canning, dehydrating, smoking meats, and the importance of versatile spices. Highlighting psychological and safety aspects, the episode also includes practical advice for beginners looking to take baby steps towards self-sustainability. With stories of personal mishaps and successes, this episode offers valuable insights into the off-grid lifestyle.

Follow Chris' journey off grid:
https://www.instagram.com/halfassoffgrid
https://www.youtube.com/@halfassoffgrid1036

00:00 Introduction
01:54 Thanksgiving Meal Highlights
03:56 Homemade Butter and Breakfast
06:09 Venison Stew and Preservation Tips
10:24 Off-Grid Living and Food Preservation
14:45 Starting the Off-Grid Journey
18:13 Essential Gear for Off-Grid Living
26:03 Safety and Common Mistakes
28:43 Cooking and Preserving Methods
32:15 Debating the Merits of Freeze Drying
33:33 Power Solutions for Off-Grid Living
35:05 Exploring Canning Techniques
43:08 Cooking with Propane and Wood
46:40 Psychological and Practical Aspects of Off-Grid Living
52:11 The Versatility of Cast Iron Cooking
57:20 Parting Thoughts and Chris' Crying Tiger Recipe Sharing

More at OkayestCook.com
Connect with us on Instagram @Okayest_Cook
And facebook.com/AnOkayestCook
Video feed on YouTube.com/@OkayestCook

Crew:
Chris Whonsetler
Email: Chris@OkayestCook.com
Web: ChrisWhonsetler.com
Instagram: @FromFieldToTable & @WhonPhoto
Corey Cole
Email: Corey@OkayestCook.com
Web: CoreyRCole.com
Instagram: @ruggedhunter
Highlights from this week's conversation include:
Joyce's Background and Journey in Data (0:39)
Technological Growth in Logistics (3:51)
Leadership and Communication in Logistics (6:54)
Impact of Data Quality (9:13)
Significance of Data Entry Accuracy (12:05)
Data's Role in Decision Making (16:01)
The Cost of Adding Data Points (21:26)
Real-Time Data in Logistics (24:28)
Understanding Master Data (31:15)
Data vs. Information Distinction (33:21)
Navigating Change in Data Management (37:35)
Career Advice for Data Practitioners and Parting Thoughts (41:10)

The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.

RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.
Highlights from this week's conversation include:
Bob's Background and the Best and Worst of Running a Coffee Shop (0:11)
Transition to Nonprofit Work (1:59)
Formative Experiences in Finance (3:47)
Transforming Nathan Cummings Foundation (5:43)
Shifting Organizational Culture (7:36)
Convincing Skeptics of Impact Investing (11:53)
Defining Impact Investing (15:03)
Mission-Aligned Investing Toolkit (16:28)
Insider Segment: AI Solutions for Today's Financial Institutions (18:43)
Custom AI Solutions (20:36)
Multi-Channel Strategies (22:39)
Investment Team Reflection (25:02)
Due Diligence 20 Pledge (27:49)
Accountability in Search Processes (32:02)
Impact America Fund Investment (35:53)
Understanding Allocator Decisions (37:26)
Pay It Forward Mindset (38:02)
Encouragement for Impact Investing (39:17)
Flexible Capital Initiative and Parting Thoughts (41:38)

Nathan Cummings Foundation is a multigenerational family foundation, rooted in the Jewish tradition of social justice, working to create a more just, vibrant, sustainable, and democratic society. We partner with social movements, organizations and individuals who have creative and catalytic solutions to climate change and inequality. Learn more: http://nathancummings.org

Bottega8 offers secure and cost-efficient AI Model Training and Fine-Tuning tailored for financial institutions. If you're concerned about the expense and complexity of building in-house AI teams, or worried about the privacy and security risks inherent in Big Tech AI APIs, we provide the ideal solution for your proprietary data.

Bottega8's solution is specifically designed for institutional financial clients, including PE/VC funds, hedge funds, broker-dealers, traders, investment banks, and fintechs. By partnering with us, you eliminate the need for expensive AI engineers, hefty API fees, and complex technical roadmaps—reducing your AI development costs by up to 85%.
If you're seeking AI Model Training and Fine-Tuning services that prioritize security and cost-efficiency without sacrificing Big Tech fidelity, we'd love to talk to you. Learn more at bottega8.com/swimming.

Swimming with Allocators is a podcast that dives into the intriguing world of Venture Capital from an LP (Limited Partner) perspective. Hosts Alexa Binns and Earnest Sweat are seasoned professionals who have donned various hats in the VC ecosystem. Each episode, we explore where the future opportunities lie in the VC landscape with insights from top LPs on their investment strategies and industry experts shedding light on emerging trends and technologies. The information provided on this podcast does not, and is not intended to, constitute legal advice; instead, all information, content, and materials available on this podcast are for general informational purposes only.
Plank's parting thoughts.
Highlights from their conversation include:
Adam's Background and Journey Into Fleet Safety (0:46)
Netradyne's Camera-Based Solution (1:15)
Understanding Fleet Safety (6:06)
The Benefits of Netradyne's Solution (9:57)
Driver Concerns About Monitoring (12:24)
Unionized Fleets and Technology (18:26)
Best Practices for Implementing Telematics (22:59)
Recognition and Incentives for Drivers (26:33)
Creative Engagement Strategies (27:42)
Rapid Fire Segment to Wrap (28:40)
Parting Thoughts and Takeaways (30:24)

Dynamo is a VC firm led by supply chain and mobility specialists that focus on seed-stage, enterprise startups. Find out more at: https://www.dynamo.vc/
Explore the transformative power of diagnostic ultrasound and cutting-edge technology in physical therapy practices.

00:00 - Introduction: Why Diagnostic Tech Matters in PT
05:12 - Leveraging Diagnostic Ultrasound in Practice
15:33 - Building Patient Trust with On-the-Spot Diagnostics
23:45 - Marketing Strategies for PTs Using Innovative Tech
34:10 - Monetizing Content and CPM/RPM Breakdown
42:20 - Real-World Examples and Case Studies in PT
51:35 - AI in Physical Therapy: Complementing Care, Not Replacing It
01:02:44 - Parting Thoughts and Closing Remarks

This engaging discussion uncovers how high-tech tools are reshaping patient care, marketing strategies, and how therapists can grow their practices with innovative approaches.

Key Points Discussed:
How Diagnostic Ultrasound Enhances PT Care: Real-life examples of diagnosing tears, fractures, and more.
Marketing Your PT Practice with New Tech: Strategies to build trust and attract new patients through innovative diagnostics.
Monetizing Content and Reaching the Right Audience: Insights into leveraging YouTube and other platforms for growth.
Understanding CPMs and RPMs for Online Content: How to maximize revenue and target the right audience with content.

Special Guests:
Co-Hosts Tony Maritato (Physical Therapist & Business Expert) and Dave Kittle (Physical Therapy Entrepreneur).
Exploring Exotic Wild Game Cuisine with Rikki Folger

In this episode of the Okayest Cook Podcast, host Chris Whonsetler welcomes special guest Rikki Folger, who shares her experiences and insights as a lodge manager and chef at Budges Wilderness Lodge in Denver. They discuss Rikki's role in reservations and client needs while also ensuring top-notch meals for guests. The conversation highlights Rikki's involvement with outdoor organizations like POMA and Harvesting Nature, her culinary journey from enrolling in culinary school to working in prestigious restaurants, and managing various wild game events including catering for the Blood Origins documentary screening. Rikki dives into unique dishes such as mountain lion sliders and bear sausage jambalaya. Additionally, Rikki provides valuable cooking tips, emphasizes the importance of quality kitchen tools, and shares her perspectives on wild game cooking. The episode wraps up with Rikki expressing enthusiasm for hearty autumn and winter dishes, including a favorite, cassoulet.
Find more of Rikki at https://www.instagram.com/wild_and_foraged_/
POMA: https://professionaloutdoormedia.org/
Watch Blood Origins Lionheart: https://youtu.be/K09aGFZTJJg?si=oPF-22G60Ap-tX8G

AI-generated 'Chapters':
00:00 Introduction and Guest Welcome
00:54 Rikki's Current Adventures
01:42 Professional Outdoor Media Association
02:26 Harvesting Nature Events
04:01 Notable Meals of the Week
08:05 Dandelion Jam and Clotted Cream
16:22 Blood Origins Documentary
21:41 Exploring Cooking Techniques for Mountain Lion
22:27 Braised Mountain Lion Recipe
22:53 Event Highlights and Meeting Robbie
25:09 Career Journey and Culinary Background
28:47 Tips for Aspiring Chefs
32:52 Essential Kitchen Tools and Ingredients
34:22 Maintaining and Using Cast Iron Cookware
36:17 Future Plans and Cookbook Aspirations
43:15 Parting Thoughts and Favorite Stews

More at OkayestCook.com
Connect with us on Instagram @Okayest_Cook and facebook.com/AnOkayestCook
Video feed on YouTube.com/@OkayestCook

Crew: Chris Whonsetler
Email: Chris@OkayestCook.com
Web: ChrisWhonsetler.com
Instagram: @FromFieldToTable & @WhonPhoto
In a fitting theme for the city of Broadway, the 53rd edition of the New York City Marathon was all about breakout stars finally getting their time in the spotlight. Both winners of the men's and women's elite races in NYC are frequently recurring characters near the front of major races, but neither had completed a signature performance before today. Abdi Nageeye of the Netherlands and Sheila Chepkirui of Kenya won their first World Marathon Major titles by defeating fields full of former NYC champs, with Nageeye outkicking 2022 champ Evans Chebet down the homestretch and Chepkirui beating 2023 champ Hellen Obiri at her own game with a late surge through Central Park.

Time stamps:

Women's race:
2:10 - Breaking down Sheila Chepkirui's win
5:03 - Hellen Obiri talking a big game + reflecting on her career
13:20 - Kudos to Vivian Cheruiyot for her third place finish
16:10 - Sara Vaughn's top American performance
19:42 - More top results: Jess McClain, Sharon Lokedi, Kellyn Taylor, Tristan Van Ord, Dakotah Popehn
22:38 - Reflecting on Jenny Simpson's career

Men's race:
26:47 - Breaking down Abdi Nageeye's win
31:24 - Evans Chebet's performance, 30th career marathon
33:18 - More top results: Albert Korir, Tamirat Tola
35:19 - Conner Mantz and Clayton Young's races
39:45 - Kudos to CJ Albertson

More thoughts:
44:27 - 2028 U.S. Olympic Marathon Trials
48:40 - Valencia Half Marathon recap
54:50 - What's ahead on the CITIUS MAG Podcast
56:18 - Sydney Marathon becoming the newest World Marathon Major

Mentioned in this episode:
Watch: NYC Marathon Watchalong
Read: Parting Thoughts From The 2024 NYC Marathon: Abdi Nageeye and Sheila Chepkirui Win + More
Listen: 2024 NYC Marathon Pre-Race With Conner Mantz, Clayton Young, Hellen Obiri, Dakotah Popehn, Jess McClain + More
Listen: Jenny Simpson Reflects on Her Career from High School to 2019, Shares 2020 Plans and Hopes

Hosts: Chris Chavez | @chris_j_chavez on Instagram + Isaac Wood | @isaacew on Instagram

SUPPORT OUR SPONSORS
WAHOO: KICKR RUN - a revolutionary new treadmill offering the freedom and form of outdoor running at home, from Wahoo Fitness. Run hands-free and focus solely on the joy of running with the innovative RunFree Mode, which adjusts to your stride and pace automatically. For the first time, runners can fully benefit from indoor training apps such as Zwift Run and the Wahoo app for an immersive training experience that delivers unmatched realism and results. Learn more at WahooFitness.com
OLIPOP: For the past year, we've redefined Olipop as more than just a healthy drink: it supports the gut microbiome, is low in sugar, and is a much better alternative to regular soda. You know there are more than 16 flavors, including classic root beer, cherry cola, and lemon-lime. You know it as The Runner's Soda. Get 25% off your orders by using code CITIUS25 at drinkolipop.com.
Karen, Megan, and Rachel are back for another marathon special! The Getting Chicked crew share the biggest highlights from the Chicago Marathon, including Susanna Sullivan's breakthrough performance. Megan also announces that another chick will be joining us soon – she's pregnant! Megan shares what it was like racing a half marathon while five months pregnant and what training has been like over the past few months. Karen and Rachel break down their plans for taper week heading into the NYC Marathon, including training and fueling leading into the final days before the marathon.

Time stamps:
0:43 - Quick thoughts on the Chicago Marathon
2:24 - Spectators full sending across the street mid-race
3:54 - Shoutouts to Getting Chicked runners who raced Chicago
9:51 - Weather predictions for NYC
11:21 - Susanna Sullivan's top American finish in Chicago
15:43 - Lindsay Flanagan's standout performance
18:05 - Megan is pregnant!
22:34 - Biggest surprises about running during pregnancy
27:55 - Giving back to the running community
31:08 - Karen and Chris racing the 5K in Toronto – Chris got chicked!
35:57 - NYC Marathon course guide for runners
37:30 - Rachel's plan for taper week
39:02 - Karen's plan for taper week
41:09 - Rachel's day-by-day training for taper week
53:55 - CITIUS MAG's plans for NYC Marathon coverage

Mentioned in this episode:
Read: NYC Marathon Mile-By-Mile Course Breakdown

YOUR HOSTS
– Karen Lesiewicz | @kare_les on Instagram
– Rachel DaDamio | @rdadamio on X
– Megan Connelly | @meganmorantwwe on Instagram

FOLLOW OUR SHOW
– Subscribe on Apple Podcasts here.
– Follow on Spotify here.
– Follow the show on Instagram here.
Highlights from this week's conversation include:
Darren's Background and Career Journey (1:09)
Thesis on Private Markets (5:11)
History and Focus of Provenio Capital (6:42)
Investment Structuring for Clients (10:22)
Investment Process for New Managers (13:44)
Current Interests in VC Managers (15:08)
Building Relationships with VC Fund Managers (16:28)
Understanding Venture in Portfolio Context (18:26)
Insider Segment: Setting Up Institutional-Grade Back Offices (19:13)
Operational Diligence in Fund Management (24:10)
Minimum Investment Criteria for Clients (28:09)
Trends for Allocators to Watch (30:12)
Lessons Learned in Venture Investing (31:40)
Parting Thoughts on Venture Portfolio (33:01)
Current Opportunities in Early Stage Investments (34:34)

Provenio Capital: Our specialization and expertise are the sourcing and diligence of alternative investment strategies across hedge funds, private equity, venture capital, real estate and direct deals. Our focus is to build portfolios that are less correlated to the broader markets, and our aim is to generate outperformance during periods of dislocation in order to produce excellent risk-adjusted returns for our clients with significantly less volatility. As part of the overall investment process, we will aim to incorporate the appropriate planning and structure to maximize after-tax returns.

Aduro Advisors is a trusted partner for venture capital fund managers, offering comprehensive and expert fund administration services. Known for being agile, responsive, and focused on making fund operations seamless, Aduro enables fund managers to concentrate on investing. With deep expertise across a variety of fund sizes and strategies, Aduro provides a full suite of services, including fund accounting and compliance. The firm understands the fast-paced nature of venture capital and prides itself on being as innovative and driven as the funds it supports. Aduro doesn't just manage operations—they help funds scale. https://www.aduroadvisors.com. 
Swimming with Allocators is a podcast that dives into the intriguing world of Venture Capital from an LP (Limited Partner) perspective. Hosts Alexa Binns and Earnest Sweat are seasoned professionals who have donned various hats in the VC ecosystem. Each episode, we explore where the future opportunities lie in the VC landscape with insights from top LPs on their investment strategies and industry experts shedding light on emerging trends and technologies. The information provided on this podcast does not, and is not intended to, constitute legal advice; instead, all information, content, and materials available on this podcast are for general informational purposes only.
One year after the late Kelvin Kiptum obliterated Eliud Kipchoge's previous world record by 34 seconds, running an almost unfathomable 2:00:35, Ruth Chepngetich topped that. Her astounding time of 2:09:56 is one of the greatest running performances in history, makes her the first woman to run under 2:10 in a marathon, and lopped one minute and 57 seconds off Tigist Assefa's old world record. On the men's side, Kenya's John Korir notched his first-ever World Marathon Majors victory in 2:02:44. (What a time for the sport of marathoning we're in right now, where a 2:02 showing is basically a footnote!)

Hosts:
Chris Chavez | @chris_j_chavez on Instagram
Preet Majithia | @prm_32 on X

Time Stamps:

Ruth Chepngetich:
1:31 - Breaking down her 2:09:56 world record
9:40 - More on her running history + Chicago victory
13:52 - Stats + analysis on her record

Skepticism surrounding Chepngetich's world record:
19:01 - Training factors
22:27 - Potential for doping
25:48 - Chepngetich being questioned in the press conference
37:19 - Language barrier
39:35 - The press center during the marathon

American women:
41:21 - Susanna Sullivan, 7th/1st American, 2:21:57
42:57 - Lindsay Flanagan, 9th/2nd American, 2:23:31
44:33 - Keira D'Amato dropping out due to injury
45:13 - Emma Bates, 11th/3rd American, 2:24:00

Men's race:
47:18 - John Korir, 1st in 2:02:44
50:06 - More on the top finishers

American men:
53:25 - CJ Albertson (7th/1st American, 2:08:17) and Zach Panning (10th/2nd American, 2:09:16)

Final thoughts:
56:15 - Feedback on the broadcast
1:06:29 - CITIUS live at the Toronto Waterfront Marathon

Mentioned in this episode…
Read: Parting Thoughts From The 2024 Chicago Marathon: Ruth Chepngetich Destroys Women's World Record In 2:09:56 at Chicago Marathon
Read: Tweet by Steve Magness comparing Ruth Chepngetich's world record to equivalent performances
Watch: Interviews with some of the top stars from the Chicago Marathon
Watch: Chicago Marathon full replay

Subscribe for free: CITIUS MAG Newsletter

SUPPORT OUR SPONSORS
LEVER MOVEMENT: Elevate your running with the LEVER system, just like Olympian Eilish McColgan. Reduce impact on your joints, boost your training volume, and recover faster with this portable, easy-to-use treadmill system. Save 20% with code CITIUS20 at LEVERMOVEMENT.COM.
OLIPOP: For the past year, we've redefined Olipop as more than just a healthy drink: it supports the gut microbiome, is low in sugar, and is a much better alternative to regular soda. You know there are more than 16 flavors, including classic root beer, cherry cola, and lemon-lime. You know it as The Runner's Soda. Get 25% off your orders by using code CITIUS25 at drinkolipop.com.
Highlights from this week's conversation include:
Lisha's Background and Journey in VC (1:44)
Inspiration for Equity Charge (2:38)
Angel Investing Data (4:36)
Access to Venture Capital (6:02)
Role of LP Investing (9:32)
Advice for First-Time LPs (11:07)
Valuing Early Backers (15:04)
Economic Opportunity Fund at PayPal Ventures (16:14)
Team Collaboration and Initiative (17:24)
Advice for Other Organizations (20:00)
Insider Segment: How Aduro Provides Scalable Solutions for VC Firms (21:09)
Naming Aduro Advisors (23:27)
Evolution of Fund Administration (25:37)
Aduro's Unique Positioning (28:13)
Synergy with Portfolio Companies (30:51)
Building Synergistic Relationships (32:47)
Diversity in GP Representation (33:39)
Legacy of Racial Equity Movement (35:20)
Investment Strategy and Portfolio Balance (37:42)
Fintech Opportunities and Challenges (39:52)
Adoption of Financial Technologies (42:10)
Podcast "Sisters with Ventures" and Parting Thoughts (44:54)

Lisha Bell has over 20 years of experience in technology innovation, specializing in digital money movement. She currently leads PayPal Ventures' Economic Opportunity Fund, a $100M investment in diverse emerging fund managers. Previously at PayPal, she led Product for the Financially Underserved Segment and Pay with Venmo. Prior roles include payments-related positions at Wells Fargo, Kohl's, and Feedzai, where she developed early digital financial products like online banking and digital wallets. Lisha is cofounder of BLXVC, an angel syndicate supporting Black and Brown founders, and host of the Sisters with Ventures podcast. She previously led deal flow at Pipeline Angels and serves as board chair for Black Girl Ventures. Lisha holds a BSc from USC and MBAs from UC Berkeley and Columbia, and enjoys traveling, cooking, and dancing, all while balancing motherhood duties with her daughter.
Highlights from this week's conversation include:
Aakar's Career Journey (1:27)
Evolution of Fairview's Investment Strategy (2:56)
Impact of the Great Recession (3:06)
Fairview's Investment Activities (5:22)
Partnerships and New LPs (7:31)
Direct Co-Investment Strategy (8:30)
Differentiation from Other Fund of Funds (11:23)
Chip on the Shoulder Mentality (13:31)
Overlooked Opportunities in Diversity (15:01)
Emerging Managers and Deal Flow (16:46)
Insider Segment: Camber Road's Business Overview (17:37)
Navigating Large Funding Offers (19:30)
Strategic Work in Venture Capital (22:19)
Anticipating New Fund Formation (25:20)
Challenges of Starting a Fund (28:27)
Understanding GP Counseling (31:00)
Assessing Non-Traditional Managers (34:30)
Maintaining LP Relationships (37:05)
Current Investment Opportunities and Parting Thoughts (39:25)

Fairview curates dynamic relationships for institutional investors, providing unparalleled access across the most compelling segments of the private markets: venture capital, diverse and emerging managers, and co-investment. Learn more: https://www.fairviewcapital.com/.

Camber Road is the most cost-effective, flexible and nimble leasing company for venture-backed businesses. We are experienced, but not stodgy. We're hungry, like the startup companies we serve. And we hold every lease on our balance sheet. We finance business-essential equipment for venture-backed companies. We do one thing, and we do it better than the rest. Learn more at www.camberroad.com.
Budgeting For Marketing

In this episode of the Marketing Guides for Small Businesses podcast, hosts Jeff Stec, Ian Cantle, Ken Tucker, and Paul Barthel tackle one of the most challenging and essential topics for small business owners: budgeting for marketing. This episode is packed with valuable insights, practical advice, and real-world examples to help you effectively allocate your marketing budget and drive your business's growth.

Overview: Join the marketing experts as they delve into the intricacies of setting a marketing budget, the pitfalls to avoid, and how to maximize your return on investment (ROI). Whether you're a startup entrepreneur or a seasoned business owner, this episode will provide a comprehensive guide to making informed decisions about your marketing spend.

Key Learnings:
Challenges of Budgeting for Marketing: Understand why budgeting for marketing is often a daunting task and the common mistakes businesses make.
Planning and Research: Learn the importance of planning and strategic research in forming an effective marketing budget.
Key Components of a Marketing Budget: Discover the essential elements that should be included in your marketing budget, from advertising costs to content creation.
Realistic Expectations: Jeff discusses the “if I build it, they will come” myth and why effective marketing is crucial for business success.
In-house vs Outsourcing: Explore the pros and cons of handling marketing internally versus hiring an external agency.
The Value of Strategy: Find out how a well-defined marketing strategy can significantly impact your budgeting decisions and overall success.
Return on Investment (ROI): Ian and the panel discuss what ROI you can realistically expect from your marketing efforts.
Effective Use of Resources: Gain insights into prioritizing marketing channels and making the most of limited resources.
Parting Thoughts and Closing Remarks: Key takeaways to help you avoid common budgeting pitfalls and make strategic marketing
investments.

GET STARTED TODAY: Don't miss out on transforming your marketing approach! Join us for this informative episode and gain the tools you need to set a winning marketing budget. Subscribe to the Marketing Guides for Small Businesses podcast on your favorite platform and stay updated with our latest episodes. For more insights and free resources, visit https://marketingguidesforsmallbusinesses.com/ If you have specific questions or need personalized advice, feel free to set up a free consultation call with any of our hosts. Tune in, get informed, and let's take your business to the next level! Until next week, keep calm and market on.

GET IN TOUCH WITH THE MARKETING GUIDES:
Ian Cantle of Dental Marketing Heroes and Outsourced Marketing
Jeff Stec of Tylerica Marketing Systems
Ken Tucker & Paul Barthel of Changescape Web Solutions
There is an uneasy coexistence of cars and bikes on the road, and cyclist injuries & deaths from collisions with cars and trucks are on the rise. So what's being done to address this? What can you do — as a cyclist, and as a driver — to do your part? And what's being done in car & truck technology to help the cause? We talk with the lead engineer of Advanced Driver Assistance Systems (ADAS) at General Motors, who works on bicycle detection technology to help keep cyclists safer on the roads.

RELATED LINKS:
BLISTER+ Get Yourself Covered
Ep. Sponsor: Bag Balm

CHECK OUT OUR OTHER PODCASTS
Blister Cinematic
CRAFTED
Bikes & Big Ideas
Blister Podcast
Off The Couch

TOPICS & TIMES:
Avoiding Pedestrians vs Avoiding Cyclists (3:58)
The Increase of Cyclist Deaths (9:33)
Distracted Drivers (10:27)
Evolution of Car Safety Tech (11:40)
Chad's Background (17:24)
What Can Cyclists Do to Reduce Accidents? (22:10)
Laws around Phone Use While Driving (25:32)
Night Driving (30:17)
Promising Tech? (30:17)
Parting Thoughts (31:11)

Hosted on Acast. See acast.com/privacy for more information.
The St. John's Morning Show from CBC Radio Nfld. and Labrador (Highlights)
After nine years in office, Liberal MP Ken McDonald has announced he will not be running in the next federal election. He joined us on the line this morning to talk about his decision.
Key topics in today's conversation include:
Joe's Background and Journey to Oakley (1:06)
Reminders About 2290s for Drivers (4:10)
Maintenance Week for Joe (5:13)
From Construction to Driving (7:16)
Friendships and Support at Oakley (12:31)
Mentoring New Owner-Operators (17:29)
Discussion About the Different Divisions at Oakley (18:49)
Recruiting Others to Oakley and Trucking (20:13)
Essential Tools and Equipment for Trucking (26:44)
Passing on Knowledge to Younger Drivers (30:50)
The Mindset of a Good Owner-Operator (34:19)
Company Culture and Success (36:11)
Joe's Connection with Bruce Mallinson (39:00)
Future Plans and Retirement (42:22)
Parting Thoughts (46:37)

Oakley Trucking is a family-owned and operated trucking company headquartered in North Little Rock, Arkansas. For more information, check out our show website: podcast.bruceoakley.com.
Editor's note: One of the top reasons we have hundreds of companies and thousands of AI Engineers joining the World's Fair next week is, apart from discussing technology and being present for the big launches planned, to hire and be hired! Listeners loved our previous Elicit episode and were so glad to welcome 2 more members of Elicit back for a guest post (and bonus podcast) on how they think through hiring. Don't miss their AI engineer job description and template, which you can use to create your own hiring plan!

How to Hire AI Engineers

James Brady, Head of Engineering @ Elicit (ex Spring, Square, Trigger.io, IBM)
Adam Wiggins, Internal Journalist @ Elicit (Cofounder Ink & Switch and Heroku)

If you're leading a team that uses AI in your product in some way, you probably need to hire AI engineers. As defined in this article, that's someone with conventional engineering skills in addition to knowledge of language models and prompt engineering, without being a full-fledged Machine Learning expert. But how do you hire someone with this skillset? At Elicit we've been applying machine learning to reasoning tools since 2018, and our technical team is a mix of ML experts and what we can now call AI engineers. This article will cover our process from job description through interviewing. (You can also flip the perspective and use it just as easily as a guide to getting hired as an AI engineer!)

My own journey

Before getting into the brass tacks, I want to share my journey to becoming an AI engineer. Up until a few years ago, I was happily working my job as an engineering manager of a big team at a late-stage startup. Like many, I was tracking the rapid increase in AI capabilities stemming from the deep learning revolution, but it was the release of GPT-3 in 2020 which was the watershed moment. At the time, we were all blown away by how the model could string together coherent sentences on demand. 
(Oh how far we've come since then!) I'd been a professional software engineer for nearly 15 years—enough to have experienced one or two technology cycles—but I could see this was something categorically new. I found this simultaneously exciting and somewhat disconcerting. I knew I wanted to dive into this world, but it seemed like the only path was going back to school for a master's degree in Machine Learning. I started talking with my boss about options for taking a sabbatical or doing a part-time distance learning degree.

In 2021, I instead decided to launch a startup focused on productizing new research ideas on ML interpretability. It was through that process that I reached out to Andreas—a leading ML researcher and founder of Elicit—to see if he would be an advisor. Over the next few months, I learned more about Elicit: that they were trying to apply these fascinating technologies to the real-world problems of science, and with a business model that aligned it with safety goals. I realized that I was way more excited about Elicit than I was about my own startup ideas, and wrote about my motivations at the time.

Three years later, it's clear this was a seismic shift in my career on the scale of when I chose to leave my comfy engineering job at IBM to go through the Y Combinator program back in 2008. Working with this new breed of technology has been more intellectually stimulating, challenging, and rewarding than I could have imagined.

Deep ML expertise not required

It's important to note that AI engineers are not ML experts, nor is that their best contribution to a tech team. In our article Living documents as an AI UX pattern, we wrote:

It's easy to think that AI advancements are all about training and applying new models, and certainly this is a huge part of our work in the ML team at Elicit. 
But those of us working in the UX part of the team believe that we have a big contribution to make in how AI is applied to end-user problems. We think of LLMs as a new medium to work with, one that we've barely begun to grasp the contours of. New computing mediums like GUIs in the 1980s, web/cloud in the 90s and 2000s, and multitouch smartphones in the 2000s/2010s opened a whole new era of engineering and design practices. So too will LLMs open new frontiers for our work in the coming decade.

To compare to the early era of mobile development: great iOS developers didn't require a detailed understanding of the physics of capacitive touchscreens. But they did need to know the capabilities and limitations of a multi-touch screen, the constrained CPU and storage available, the context in which the user is using it (very different from a webpage or desktop computer), etc. In the same way, an AI engineer needs to work with LLMs as a medium that is fundamentally different from other compute mediums. That means an interest in the ML side of things, whether through their own self-study, tinkering with prompts and model fine-tuning, or following along in #llm-paper-club. But this understanding is so that they can work with the medium effectively versus, say, spending their days training new models.

Language models as a chaotic medium

So if we're not expecting deep ML expertise from AI engineers, what are we expecting? This brings us to what makes LLMs different. We'll assume that our ideal candidate is already inspired by, and full of ideas about, all the new capabilities AI can bring to software products. But the flip side is all the things that make this new medium difficult to work with. LLM calls are painful to work with: latency is high (sometimes measured in tens of seconds rather than milliseconds) and extremely variable, and error rates are high even under normal operation. 
Not to mention getting extremely different answers to the same prompt provided to the same model on two subsequent calls! The net effect is that an AI engineer, even working at the application development level, needs to have a skillset comparable to distributed systems engineering. Handling errors, retries, asynchronous calls, streaming responses, parallelizing and recombining model calls, the halting problem, and fallbacks are just some of the day-in-the-life of an AI engineer. Chaos engineering gets new life in the era of AI.

Skills and qualities in candidates

Let's put together what we don't need (deep ML expertise) with what we do (work with capabilities and limitations of the medium). Thus we start to see what Elicit looks for in AI engineers:

* Conventional software engineering skills. Especially back-end engineering on complex, data-intensive applications.
* Professional, real-world experience with applications at scale.
* Deep, hands-on experience across a few back-end web frameworks.
* Light devops and an understanding of infrastructure best practices.
* Queues, message buses, event-driven and serverless architectures, … there's no single “correct” approach, but having a deep toolbox to draw from is very important.
* A genuine curiosity and enthusiasm for the capabilities of language models.
* One or more serious projects (side projects are fine) using them in interesting ways on a unique domain.
* …ideally with some level of factored cognition, e.g. breaking the problem down into chunks, making thoughtful decisions about which things to push to the language model and which stay within the realm of conventional heuristics and compute capabilities.
* Personal studying with resources like Elicit's ML reading list. Part of the role is collaborating with the ML engineers and researchers on our team. 
To do so, the candidate needs to “speak their language” somewhat, just as a mobile engineer needs some familiarity with backends in order to collaborate effectively on API creation with backend engineers.

* An understanding of the challenges that come along with working with large models (high latency, variance, etc.) leading to a defensive, fault-first mindset.
* Careful and principled handling of error cases, asynchronous code (and the ability to reason about and debug it), streaming data, caching, logging and analytics for understanding behavior in production.
* This is a similar mindset to one that can be developed working on conventional apps which are complex, data-intensive, or large-scale. The difference is that an AI engineer will need this mindset even when working on relatively small scales!

On net, a great AI engineer will combine two seemingly contrasting perspectives: knowledge of, and a sense of wonder for, the capabilities of modern ML models; but also the understanding that this is a difficult and imperfect foundation, and the willingness to build resilient and performant systems on top of it.

Here's the resulting AI engineer job description for Elicit. And here's a template that you can borrow from for writing your own JD.

Hiring process

Once you know what you're looking for in an AI engineer, the process is not too different from other technical roles. Here's how we do it, broken down into two stages: sourcing and interviewing.

Sourcing

We're primarily looking for people with (1) a familiarity with and interest in ML, and (2) proven experience building complex systems using web technologies. The former is important for culture fit and as an indication that the candidate will be able to do some light prompt engineering as part of their role. 
The latter is important because language model APIs are built on top of web standards and—as noted above—aren't always the easiest tools to work with. Only a handful of people have built complex ML-first apps, but fortunately the two qualities listed above are relatively independent. Perhaps they've proven (2) through their professional experience and have some side projects which demonstrate (1).

Talking of side projects, evidence of creative and original prototypes is a huge plus as we're evaluating candidates. We've barely scratched the surface of what's possible to build with LLMs—even with the current generation of models—so candidates who have been willing to dive into crazy “I wonder if it's possible to…” ideas have a huge advantage.

Interviewing

The hard skills we spend most of our time evaluating during our interview process are on the “building complex systems using web technologies” side of things. We check that the candidate is familiar with asynchronous programming, defensive coding, and distributed systems concepts and tools, and displays an ability to think about scaling and performance. They needn't have 10+ years of experience doing this stuff: even junior candidates can display an aptitude and thirst for learning which gives us confidence they'll be successful tackling the difficult technical challenges we'll put in front of them.

One anti-pattern—something which makes my heart sink when I hear it from candidates—is that they have no familiarity with ML, but claim that they're excited to learn about it. The amount of free and easily-accessible resources available is incredible, so a motivated candidate should have already dived into self-study.

Putting all that together, here's the interview process that we follow for AI engineer candidates:

* 30-minute introductory conversation. Non-technical, explaining the interview process, answering questions, understanding the candidate's career path and goals.
* 60-minute technical interview. 
This is a coding exercise, where we play product manager and the candidate is making changes to a little web app. Here are some examples of topics we might hit upon through that exercise:
* Update API endpoints to include extra metadata. Think about appropriate data types. Stub out frontend code to accept the new data.
* Convert a synchronous REST API to an asynchronous streaming endpoint.
* Cancellation of asynchronous work when a user closes their tab.
* Choose an appropriate data structure to represent the pending, active, and completed ML work which is required to service a user request.
* 60–90 minute non-technical interview. Walk through the candidate's professional experience, identifying high and low points, getting a grasp of what kinds of challenges and environments they thrive in.
* On-site interviews. Half a day in our office in Oakland, meeting as much of the team as possible: more technical and non-technical conversations.

The frontier is wide open

Although Elicit is perhaps further along than other companies on AI engineering, we also acknowledge that this is a brand-new field whose shape and qualities are only just now starting to form. We're looking forward to hearing how other companies do this and to being part of the conversation as the role evolves. We're excited for the AI Engineer World's Fair as another next step for this emerging subfield.
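One of the coding-exercise topics above — an appropriate data structure for the pending, active, and completed ML work behind a user request — can be sketched minimally. This is an illustrative sketch, not Elicit's actual interview answer; all class and field names here are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Dict, List, Optional


class WorkState(Enum):
    PENDING = auto()
    ACTIVE = auto()
    COMPLETED = auto()
    FAILED = auto()  # model calls fail often enough to deserve a first-class state


@dataclass
class MLTask:
    """One unit of model work needed to service a user request."""
    task_id: str
    prompt: str
    state: WorkState = WorkState.PENDING
    result: Optional[str] = None


@dataclass
class UserRequest:
    """Tracks every ML task backing a single user-facing request."""
    request_id: str
    tasks: Dict[str, MLTask] = field(default_factory=dict)

    def add_task(self, task: MLTask) -> None:
        self.tasks[task.task_id] = task

    def by_state(self, state: WorkState) -> List[MLTask]:
        # Lets the frontend render spinners for PENDING/ACTIVE work separately
        return [t for t in self.tasks.values() if t.state is state]

    def is_complete(self) -> bool:
        return all(t.state is WorkState.COMPLETED for t in self.tasks.values())
```

A candidate who reaches for an explicit state enum (rather than ad-hoc booleans) makes the failure and cancellation cases discussed elsewhere in the exercise much easier to add later.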
And of course, check out the Elicit careers page if you're interested in joining our team.

Podcast version

Timestamps
* [00:00:24] Intros
* [00:05:25] Defining the Hiring Process
* [00:08:42] Defensive AI Engineering as a chaotic medium
* [00:10:26] Tech Choices for Defensive AI Engineering
* [00:14:04] How do you Interview for Defensive AI Engineering
* [00:19:25] Does Model Shadowing Work?
* [00:22:29] Is it too early to standardize Tech stacks?
* [00:32:02] Capabilities: Offensive AI Engineering
* [00:37:24] AI Engineering Required Knowledge
* [00:40:13] ML First Mindset
* [00:45:13] AI Engineers and Creativity
* [00:47:51] Inside of Me There Are Two Wolves
* [00:49:58] Sourcing AI Engineers
* [00:58:45] Parting Thoughts

Transcript

[00:00:00] swyx: Okay, so welcome to the Latent Space Podcast. This is another remote episode that we're recording. This is the first one that we're doing around a guest post. And I'm very honored to have two of the authors of the post with me, James and Adam from Elicit. Welcome, James. Welcome, Adam.

[00:00:22] James Brady: Thank you. Great to be here.

[00:00:23] Hey there.

[00:00:24] Intros

[00:00:24] swyx: Okay, so I think I will do this kind of in order. I think James, you're sort of the primary author. So James, you are head of engineering at Elicit. You were also VP Eng at Teespring and Spring as well, and you have a long history in engineering. How did you find your way into something like Elicit, where you are basically a traditional VP Eng, VP technology type person moving into more of an AI role?

[00:00:53] James Brady: Yeah, that's right. It definitely was something of a sideways move, if not a left turn.
So the story there was I'd been doing, as you said, VP technology, CTO type stuff for around about 15 years or so, and noticed that there was this crazy explosion of capability and interesting stuff happening within AI and ML and language models, that kind of thing.

[00:01:16] I guess this was in 2019 or so, and I decided that I needed to get involved. This is a kind of generational shift. I spent maybe a year or so trying to get up to speed on the state of the art: reading papers, reading books, practicing things, that kind of stuff. I was going to found a startup, actually, in the space of interpretability and transparency, and through that met Andreas, who has obviously been on the podcast before. I asked him to be an advisor for my startup, and he countered with: maybe you'd like to come and run the engineering team at Elicit. Which, it turns out, was a much better idea.

[00:01:48] And yeah, I quickly changed in that direction. So I think some of the stuff that we're going to be talking about today is how actually a lot of the work when you're building applications with AI and ML looks and smells and feels much more like conventional software engineering, with a few key differences, rather than really deep ML stuff.

[00:02:07] And I think that's one of the reasons why I was able to transfer skills over from one place to the other.

[00:02:12] swyx: Yeah, I definitely agree with that. I do often say that I think AI engineering is about 90 percent software engineering, with the 10 percent being really strong, really differentiated AI engineering.

[00:02:22] And obviously that number might change over time. I want to also welcome Adam onto my podcast, because you welcomed me onto your podcast two years ago.

[00:02:31] Adam Wiggins: Yeah, that was a wonderful episode.

[00:02:32] swyx: That was, that was a fun episode. You famously founded Heroku.
You just wrapped up a few years working on Muse.

[00:02:38] And now you've described yourself as a journalist, an internal journalist, working on Elicit.

[00:02:43] Adam Wiggins: Yeah, well, I'm kind of a little bit in a wandering phase here, trying to take this time in between ventures to see what's out there in the world. Some of my wandering took me to the Elicit team, and I found that they were some of the folks doing the most interesting, really deep work in terms of taking the capabilities of language models and applying them to what I feel are really important problems.

[00:03:08] So in this case, science and literature search and that sort of thing. It fits into my general interest in tools and productivity software. I think of it as a tool for thought in many ways, but a tool for science, obviously; if we can accelerate that discovery of new medicines and things like that, that's just so powerful.

[00:03:24] But to me, it's kind of also an opportunity to learn at the feet of some real masters in this space, people who have been working on it since before it was cool, if you want to put it that way.
So for me, the last couple of months have been this crash course, and why I sometimes describe myself as an internal journalist is that I'm helping to write some posts, including supporting James in this article we're doing for Latent Space, where I'm bringing my writing skill to bear on their very deep domain expertise around language models and applying them to the real world, and surfacing that in a way that's, I don't know, accessible, legible, that sort of thing.

[00:04:03] And the great benefit to me is I get to learn this stuff in a way that I don't think I would have, or haven't, just tinkering with my own side projects.

[00:04:12] swyx: I forgot to mention that you also run Ink & Switch, which is one of the leading research labs, in my mind, of the tools-for-thought productivity space, or maybe even a little bit of the future of programming, as well. I think you guys definitely started the local-first wave. I think there was just the first conference that you guys held. I don't know if you were personally involved.

[00:04:31] Adam Wiggins: Yeah, I was one of the co-organizers, along with a few other folks, for Local First Conf here in Berlin.

[00:04:36] Huge success from my point of view. Local-first is obviously a whole other topic we can talk about on another day. I think there actually is a lot more, what would you call it, handshake emoji between language models and the local-first data model. And that was part of the topic of the conference here, but yeah, topic for another day.

[00:04:55] swyx: Not necessarily. I mean, I selected as one of my keynotes Justine Tunney, working on Llamafile at Mozilla, because I think there's a lot of people interested in that stuff. But we can focus on the headline topic.
And just to not bury the lede, which is: we're talking about how to hire AI engineers. This is something that I've been looking for a credible source on for months.

[00:05:14] People keep asking me for my opinions. I don't feel qualified to give an opinion, and it's not like I have some kind of defined hiring process that I'm super happy with, even though I've worked with a number of AI engineers.

[00:05:25] Defining the Hiring Process

[00:05:25] swyx: I'll just leave it open to you, James. How was your process of defining your hiring roles?

[00:05:31] James Brady: Yeah. So I think the first thing to say is that we've effectively been hiring for this kind of a role since before you coined the term and tried to build this understanding of what it was.

[00:05:42] Which is not a bad thing. It was a good thing: a concept that was coming to the fore and effectively needed a name, which is what you did. The reason I mention that is I think it was something that we kind of backed into, if you will. We didn't sit down and come up with a brand new role from scratch, with a completely novel set of responsibilities and skills that this person would need.

[00:06:06] However, it is a kind of particular blend of different skills and attitudes and curiosities and interests, which I think makes sense to bundle together. So in the post, the three things that we say are most important for a highly effective AI engineer are, first of all, conventional software engineering skills, which is kind of a given, but definitely worth mentioning.

[00:06:30] The second thing is a curiosity and enthusiasm for machine learning, and maybe in particular language models. That's certainly true in our case.
And then the third thing is to do with a fault-first mindset: being able to build systems that can handle things going wrong, in some sense.

[00:06:49] And yeah, I think the middle point, the curiosity about ML and language models, is probably fairly self-evident. They're going to be working with, and prompting, and dealing with the responses from these models, so that's clearly relevant. The last point, though, maybe takes the most explaining: this fault-first mindset and the ability to build resilient systems.

[00:07:07] The reason that is so important is because, compared to normal APIs (think of something like a Stripe API or a search API), the latency when you're working with language models is wild. You can get 10x variation.

[00:07:32] I mean, I was looking at the stats before the podcast, actually. We do often, normally in fact, see a 10x variation in the P90 latency over the course of half an hour or an hour when we're prompting these models, which is way higher than if you're working with a more conventionally backed API.

[00:07:49] And the responses that you get, the actual content of the responses, are naturally unpredictable as well. They come back with different formats. Maybe you're expecting JSON; it's not quite JSON. You have to handle this stuff. And the semantics of the messages are unpredictable too, which is a good thing.

[00:08:08] This is one of the things that you're looking for from these language models, but it all adds up to needing to build a resilient, reliable, solid-feeling system on top of this (certainly currently) fundamentally shaky foundation.
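The 10x P90 swing James describes is straightforward to track. As an illustration only (this is not Elicit's actual tooling), a nearest-rank percentile over a window of recent latency samples:

```python
import math


def p90(latency_samples):
    """Nearest-rank 90th percentile of a list of latency samples (seconds)."""
    if not latency_samples:
        raise ValueError("no samples to summarize")
    ordered = sorted(latency_samples)
    rank = math.ceil(0.9 * len(ordered))  # nearest-rank definition, 1-indexed
    return ordered[rank - 1]
```

Comparing this statistic across successive half-hour windows, as James mentions doing, is what surfaces the order-of-magnitude swings.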
The models do not behave in the way that you would like them to.

[00:08:28] And yeah, the ability to structure the code around them such that it does give the user this warm, reassuring, snappy, solid feeling is really what we're driving for there.

[00:08:42] Defensive AI Engineering as a chaotic medium

[00:08:42] Adam Wiggins: What really struck me as we dug in on the content for this article was that third point there. The language model is this kind of chaotic medium, this dragon, this wild horse you're riding and trying to guide in a direction that is going to be useful and reliable to users. Because I think so much of software engineering is about making things not only high-performance and snappy, but really just making them stable, reliable, predictable, which is literally the opposite of what you get from the language models. And yet the output is so useful, and indeed some of their creativity, if you want to call it that, is precisely their value.

[00:09:19] And so you need to work with this medium. And I guess the nuance, or the thing that came out of Elicit's experience that I thought was so interesting, is that quite a lot of working with that is things that come from distributed systems engineering. But really the AI engineers, as we're defining them or labeling them on the Elicit team, are people who are application developers. You're building things for end users. You're thinking about: okay, I need to populate this interface with some response to user input that's useful to the tasks they're trying to do. But you have this.
This is the thing: this medium that you're working with is one where, in some ways, you need to apply some of this chaos engineering, distributed systems engineering. Typically, the people with those engineering skills are not the application-level developers with the product mindset or whatever; they're more deep in the guts of a system.

[00:10:07] And so those skills and knowledge do exist throughout the engineering discipline, but putting them together into one person feels like a unique thing, and working with the folks on the Elicit team who have that blend of skills, I'm quite struck by it. I haven't really seen that before in my 30-year career in technology.

[00:10:26] Tech Choices for Defensive AI Engineering

[00:10:26] swyx: Yeah, that's fascinating. I like the reference to chaos engineering. I have some appreciation for it: I think when you had me on your podcast, I was still working at Temporal, and that was like a nice framework. If you live within Temporal's boundaries, you can pretend that all those faults don't exist, and you can code in a very fault-tolerant way.

[00:10:47] What are your solutions around this, actually? I think you're emphasizing having the mindset, but maybe naming some technologies would help? Not saying that you have to adopt these technologies, but they're just quick vectors into what you're talking about when you're talking about distributed systems. That's such a big, chunky word: are we talking Kubernetes? I suspect we're not; we're talking about something else now.

[00:11:10] James Brady: Yeah, that's right. It's more at the application level than at the infrastructure level, at least the way that it works for us.

[00:11:17] So there's nothing radically novel here. It is more a careful application of existing concepts.
So the kinds of tools that we reach for to handle these slightly chaotic objects that Adam was just talking about are retries and fallbacks and timeouts and careful error handling. And, yeah, the standard stuff, really.

[00:11:39] We also rely heavily on parallelization, because these language models are not innately very snappy and there's just a lot of I/O going back and forth. So all these things I'm talking about: when I was in the earlier stages of my career, these were the difficult parts that most senior software engineers would be better at. It is careful error handling, and concurrency, and fallbacks, and distributed systems, and eventual consistency, and all this kind of stuff.

[00:12:01] As Adam was saying, the kind of person that is deep in the guts of some kind of distributed system, a really high-scale backend kind of a problem, would probably naturally have these kinds of skills. But you'll need them on day one if you're building an ML-powered app, even if it hasn't got massive scale. One thing that I would mention that we do — yeah, maybe two related things, actually. The first is we're big fans of strong typing.
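The application-level tools James lists (retries, fallbacks, timeouts, careful error handling) compose naturally into a small wrapper. This is a hedged sketch, not Elicit's actual code; the function and parameter names are invented for illustration:

```python
import time


class ModelCallError(Exception):
    """Raised when the primary path and any fallback are both exhausted."""


def call_with_retries(call, attempts=3, timeout_s=30.0, backoff_s=1.0, fallback=None):
    """Try `call` (a function taking a timeout) a few times, then fall back.

    `call` stands in for any function wrapping a model API request;
    `fallback` is an optional secondary path tried once the primary fails.
    """
    last_error = None
    for attempt in range(attempts):
        try:
            return call(timeout_s)
        except Exception as exc:  # real code would catch provider-specific errors
            last_error = exc
            time.sleep(backoff_s * (2 ** attempt))  # exponential backoff between tries
    if fallback is not None:
        return fallback(timeout_s)
    raise ModelCallError("primary model path unavailable") from last_error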
We share the types all the way from the backend Python code to the frontend in TypeScript. We'd probably do this anyway, but it really helps one reason about the shapes of the data which are going to be going back and forth, and that's really important when you can't rely upon the model: you're going to have to coerce the data that you get back from the ML if you want it to be structured, basically speaking. The second thing, which is related, is that we use checked exceptions inside our Python code base, which means that we can use the type system to make sure we are properly handling all of the various things that could be going wrong, all the different exceptions that could be getting raised.

[00:13:16] So, checked exceptions are not really particularly popular; there are not many people that are big fans of them. But for our particular use case, to really make sure that we've not just forgotten to handle this particular type of error, we have found them useful to force us to think about all the different edge cases that can come up.

[00:13:32] swyx: Fascinating. Just a quick note on technology: how do you share types from Python to TypeScript? Do you use GraphQL? Do you use something else?

[00:13:39] James Brady: We don't use GraphQL. We've got the types defined in Python; that's the source of truth. And we go from the OpenAPI spec: there's a tool that we use to generate TypeScript types from those OpenAPI definitions.

[00:13:57] swyx: Okay, excellent. Okay, cool. Sorry, sorry for diving into that rabbit hole a little bit.
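Python has no native checked exceptions, so one common way to approximate what James describes — using the type system to force handling of every failure mode — is a tagged result union that a type checker like mypy or pyright can inspect for exhaustiveness. This is an illustrative sketch, not Elicit's implementation; all type names are hypothetical:

```python
from dataclasses import dataclass
from typing import Union


@dataclass
class Ok:
    value: str


@dataclass
class RateLimited:
    retry_after_s: float


@dataclass
class MalformedOutput:
    raw: str  # the not-quite-JSON the model actually returned


# Every caller sees all three possibilities in the signature,
# rather than discovering exceptions at runtime.
ModelResult = Union[Ok, RateLimited, MalformedOutput]


def handle(result: ModelResult) -> str:
    if isinstance(result, Ok):
        return result.value
    if isinstance(result, RateLimited):
        return f"retry in {result.retry_after_s}s"
    if isinstance(result, MalformedOutput):
        return "could not parse model output"
    raise AssertionError("unreachable")  # a checker can flag missing branches above
```

Adding a fourth variant to `ModelResult` then turns every unhandled call site into a type error, which is the "forced to think about edge cases" property James is after.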
I always like to spell out technologies for people to dig their teeth into.

[00:14:04] How do you Interview for Defensive AI Engineering

[00:14:04] swyx: One thing I'll mention quickly is that a lot of the stuff that you mentioned is typically not part of the normal interview loop. It's actually really hard to interview for, because this is the stuff that you polish out as you go into production; coding interviews are typically about the happy path. How do we do that? How do you interview for a defensive, fault-first mindset?

[00:14:24] Because you can write defensive code all day long and not add functionality to your application.

[00:14:29] James Brady: Yeah, it's a great question, and I think that's exactly true. Normally the interview is about the happy path, and then there's maybe a box-checking exercise at the end where the candidate says, of course, in reality I would handle the edge cases, or something like this. That unfortunately isn't quite good enough when the happy path is very, very narrow and there's lots of weirdness on either side. So, basically speaking, it's just a case of foregrounding those kinds of concerns through the interview process.

[00:14:58] There's no magic to it. We talk about this in the post that we're going to be putting up on Latent Space. There are two main technical exercises that we do through our interview process for this role. The first is more coding-focused, and the second is more system-designy:

[00:15:16] whiteboarding a potential solution. And, without giving too much away, in the coding exercise you do need to think about edge cases. You do need to think about errors. The exercise consists of adding features and fixing bugs inside the code base.
And in both of those cases, because of the way that we set the application and the interview up, it does demand that you think about something other than the happy path.

[00:15:41] But your question is the right prompt: how do we get the candidate thinking outside of the normal sweet spot, the smoothly paved path? In terms of the system design interview, it's a little easier to prompt this fault-first mindset, because it's very easy in that situation just to say: let's imagine that this node dies, how does the app still work?

[00:16:03] Let's imagine that this network is going super slow. Let's imagine that, I don't know, you run out of capacity in this database that you've sketched out here: how do you handle that sort of stuff? So, in both cases, they're not firmly anchored to and built specifically around language models and the ways language models can go wrong, but we do exercise the same muscles of thinking defensively and foregrounding the edge cases, basically.

[00:16:32] Adam Wiggins: James, earlier there you mentioned retries. And this is something that I've seen some interesting debates about internally. First of all, retries can be costly, right? In general, this medium, in addition to having this incredibly high variance in response rate and being non-deterministic, is actually quite expensive.

[00:16:50] And so in many cases, doing a retry when you get a failure does make sense, but that has an impact on cost. And so there is some sense in which, at least as I've seen it, the AI engineers on our team worry about that.
They worry about: okay, how do we give the best user experience, but balance that against what the infrastructure is going to cost our company? Which I think is, again, an interesting mix. It's a little bit the distributed-systems mindset, but it's also a product perspective: you're thinking about the end user experience, but also

[00:17:22] the bottom line for the business. You're bringing together a lot of qualities there. And there's also the fallback case, which is kind of a related or adjacent one. I think there was also a discussion on that internally. I think it was search: there was something recently where one of the frontline search providers was having some slowness and outages, and essentially we had a fallback, but that meant that for a while people, especially new users that come in and don't know the difference, were getting worse results for their search.

[00:17:52] And so then you have this debate about: okay, there's what is correct to do from an engineering perspective, but then there's also what actually is the best result for the user. Is giving them a worse answer to their search better, or is it better to give them an error and say: yeah, sorry, it's not working right at the moment, try again later?

[00:18:12] Both are obviously non-optimal, but this is the kind of thing that you run into, or the kind of thing we need to grapple with, a lot more than you would with other kinds of mediums.

[00:18:24] James Brady: Yeah, that's a really good example.
I think it brings to the fore the two different things that you could be optimizing for: uptime and a response at all costs on one end of the spectrum, and effectively fragility on the other, where if you do get a response, it's the best response we can come up with.

[00:18:43] And where you want to land there depends: it certainly depends on the app, and obviously on the user. I think it depends on the feature within the app as well. So in the search case that you mentioned, in retrospect we probably didn't want to have the fallback, and we've actually just recently, on Monday, changed that to show an error message rather than giving people a degraded experience. In other situations, we could use, for example, a large language model from provider B rather than provider A, and get something which is within a few percentage points of performance, and that's just a really different situation.

[00:19:21] So yeah, like any interesting question, the answer is: it depends.

[00:19:25] Does Model Shadowing Work?

[00:19:25] swyx: I do hear a lot of people suggesting, let's call it model shadowing, as a defensive technique: if OpenAI happens to be down, which happens more often than people think, then you fall back to Anthropic or something.

[00:19:38] How realistic is that, right? Don't you have to develop completely different prompts for different models? And won't the performance of your application suffer, for whatever reason? It may behave differently, or it's not maintained in the same way.
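The provider-fallback idea under discussion here, where switching providers also means switching prompts, can be made concrete with a small sketch. Provider names, prompts, and function names below are all invented for illustration; this is not how Elicit actually structures it:

```python
# Each provider gets its own tuned prompt; falling back means switching
# both the endpoint *and* the prompt, not just the URL.
PROMPTS = {
    "provider_a": "Summarize the paper in three bullet points:\n{text}",
    "provider_b": (
        "You are a research assistant. Give exactly three short bullets "
        "summarizing:\n{text}"
    ),
}


def call_with_provider_fallback(text, call_provider, order=("provider_a", "provider_b")):
    """Try providers in order; `call_provider(name, prompt)` does the real API call.

    Returns (provider_used, response) so callers can log which path served
    the request, which matters when comparing quality across providers.
    """
    errors = []
    for name in order:
        prompt = PROMPTS[name].format(text=text)
        try:
            return name, call_provider(name, prompt)
        except Exception as exc:
            errors.append((name, exc))
    raise RuntimeError(f"all providers failed: {errors}")
```

The staleness problem raised in the conversation shows up here too: the secondary prompt in `PROMPTS` needs its own evaluation, or it quietly rots while the primary path gets all the attention.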
I think people raise this idea of fallbacks to models, but I don't see it practiced very much.

[00:20:02] James Brady: Yeah. You definitely need to have a different prompt if you want to stay within a few percentage points of degradation, like I said before, and that certainly comes at a cost. Fallbacks and backups and things like this: it's really easy for them to go stale and flake out on you, because they're off the beaten track. In our particular case inside of Elicit, we do have fallbacks for a number of crucial functions, where it's going to be very obvious if something has gone wrong, but we don't have fallbacks in all cases.

[00:20:40] It really depends, on a task-by-task basis throughout the app. So I can't give you a single simple rule of thumb of: in this case, do this, and in the other, do that. But it's a little bit easier now that the APIs between the Anthropic models and OpenAI's are more similar than they used to be.

[00:20:59] So we don't have two totally separate code paths with different wire protocols, so to speak, which makes things easier. But you're right: you do need to have different prompts if you want to have similar performance across the providers.

[00:21:12] Adam Wiggins: I'll also note, just observing again as a relative newcomer here, that I was surprised, impressed, not sure what the word is for it, at the blend of different backends that the team is using.

[00:21:24] The product presents as kind of one single interface, but there's actually several dozen main paths.
There's, for example, the search, versus data extraction of a certain type, versus chat with papers, versus… And for each one of these, the team has worked very hard to pick the right model for the job and craft the prompt there, but is also constantly testing new ones.

[00:21:48] So when a new one comes out, from either the big providers or, in some cases, our own models that are running on essentially our own infrastructure, and sometimes that's more about cost or performance, the point is that switching very fluidly between them, and very quickly, because this field is moving so fast and there's new ones to choose from all the time, is part of the day-to-day, I would say.

[00:22:11] So it isn't like there's a main one that's been kind of the same for a year, and there's a fallback, but it's got cobwebs on it. It's more like: which model and which prompt is changing weekly. And so I think it's quite reasonable to have a fallback that you can expect might work.

[00:22:29] Is it too early to standardize Tech stacks?

[00:22:29] swyx: I'm curious, because you guys have had experience working at both Elicit, which is a smaller operation, and larger companies. A lot of companies are looking at this with a certain amount of trepidation, as it's very chaotic. When you have one engineering team that knows everyone else's names, and they meet constantly in Slack and know what's going on,

[00:22:50] it's easier to sync on technology choices. When you have a hundred teams, all shipping AI products and all making their own independent tech choices, it can be very hard to control. One solution I'm hearing from the Salesforces and Walmarts of the world is that they are creating their own AI gateway, right?

[00:23:05] An internal AI gateway. This is the one model hub that controls all the things and has our standards. Is that a feasible thing?
Is that something that you would want? Is that something you have and are working towards? What are your thoughts on this stuff? Like, centralization of control, or an AI platform internally.

[00:23:22] James Brady: Certainly for larger organizations, and organizations that are running into HIPAA compliance or other legislative constraints like that, it could make a lot of sense. I think the TL;DR for something like Elicit is that we are small enough, as you indicated, and need to have full control over all the levers available, switching between different models and different prompts and whatnot, as Adam was just saying, that that kind of thing wouldn't work for us.

[00:23:52] But yeah, I've spoken with, and advised, a couple of companies that are trying to sell into that kind of a space, or are at a larger stage, and it does seem to make a lot of sense for them. So, for example, if you're looking to sell to a large enterprise and they cannot have any data leaving the EU, then you need to be really careful about someone just accidentally putting in, say, the US East 1 GPT-4 endpoints or something like this.

[00:24:22] I'd be interested in understanding better what the specific problem is that they're looking to solve with that: whether it is to do with data security, or centralization of billing, or if they have a kind of suite of prompts or something like this that people can choose from so they don't need to reinvent the wheel again and again. I wouldn't be able to say, without understanding the problems and their proposed solutions, which kinds of situations it would be a better or worse fit for. But yeah, for Elicit, where really the secret sauce, if there is a secret sauce, is which models we're using, how we're using them, how we're combining them, how we're thinking about the user problem, how we're thinking about all these pieces coming together,

[00:25:02] you really need to have all of the
affordances available to you, to be able to experiment with things and iterate rapidly. And generally speaking, whenever you put these kinds of layers of abstraction and control and generalization in there, they get in the way. So, for us, it would not work.

[00:25:19] Adam Wiggins: Do you feel like there's always a tendency to want to reach for standardization and abstractions pretty early in a new technology cycle?

[00:25:26] There's something comforting there, or you feel like you can see them, or whatever. I feel like there's some of that discussion around LangChain right now. But this is not only so early, it's also moving so fast. I think it's tough to ask for that. That's not the space we're in. But the larger an organization is, the more your default is to want to reach for that.

[00:25:48] It's a sort of comfort.

[00:25:51] swyx: Yeah, I find it interesting that you would say that, being a founder of Heroku, where you were one of the first platforms-as-a-service and more or less standardized what that sort of early developer experience should look like.

[00:26:04] And I think basically people are feeling the difference between calling various model lab APIs and having an actual AI platform where all their development needs are thought of for them. It's very much, and I defined this in my AI engineer post as well,

[00:26:19] that the model labs just see their job as ending at serving models, and that's about it. But actually the responsibility of the AI engineer has to fill in a lot of the gaps beyond that.

[00:26:31] Adam Wiggins: Yeah, that's true. I think a huge part of the exercise with Heroku, which was largely inspired by Rails, which itself was one of the first frameworks to standardize on the SQL database: people had been building apps like that for many, many years. I had built many apps.
I had made my own templates based on that. I think others had done it, and Rails came along at the right moment. We had been doing it long enough that you see the patterns, and then you can say, look, let's extract those into a framework that's going to make it easier to build, not only for the experts, but for people who are relatively new, because the best practices are encoded into it.[00:27:07] Model View Controller, to take one example. But then, yeah, once you see that, and once you experience the power of a framework, and again, it's so comforting, and you can develop faster, and it's easier to onboard new people because you have these standards and this consistency, then folks want that for something new that's evolving.[00:27:29] Now here I'm thinking, maybe fast forward a little to, for example, when React came on the scene a decade ago or whatever. And then, okay, we need to do state management. What's that? And then there's a new library every six months. Okay, this is the one, this is the gold standard.[00:27:42] And then six months later, that's deprecated. Because of course it's evolving. You need to figure it out; the tacit knowledge and the experience of putting it in practice and seeing what those real needs are, are critical. And so it really is about finding the right time to say, yes, we can generalize, we can make standards and abstractions, whether it's for a company, whether it's for a library, an open source library, for a whole class of apps. And it's very much a judgment call slash a sense of taste or experience to be able to say, yeah, we're at the right point.[00:28:16] We can standardize this. Again, I'm so new to this world compared to you both, but my sense is, yeah, it's still the wild west. That's what makes it so exciting, and it feels kind of too early for 
too much in the way of standardized abstractions. Not that it's not interesting to try, but you can't necessarily get there in the same way Rails did until you've got that decade of experience of building different classes of apps with that technology.[00:28:45] James Brady: Yeah, it's interesting to think about what is going to stay static and what is expected to change over the coming five years, let's say. When I think about it through an ML lens, that's an incredibly long time; if you just said five years, it doesn't seem that long at all.[00:29:01] I think that kind of speaks to part of the problem here, which is that the things that are moving are moving incredibly quickly. This is my hot take rather than some kind of official, carefully-thought-out position, but my hot take would be something like: you'll be able to get to good-quality apps without doing really careful prompt engineering.[00:29:21] I don't think that prompt engineering is going to be a durable, differentiating skill that people will hold. I do think that the way you set up the ML problem, the way you ask the right questions, if you see what I mean, rather than the specific phrasing of exactly how you're doing chain of thought or few-shot in the prompt, is probably going to remain tricky for longer.[00:29:47] And then there are some of the operational challenges that we've been talking about, like wild variations in latency. One way to think about these models is this: the first lesson that you learn when you're a software engineer is that you need to sanitize user input, right?[00:30:05] I think it was the top OWASP security threat for a while. You have to sanitize and validate user input. And we got used to that. 
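The sanitize-and-validate habit James describes carries over directly: treat anything headed into a prompt as untrusted input. A minimal sketch of that idea, where the size limit, the control-character filter, and the delimiter format are all illustrative assumptions rather than Elicit's actual code:

```python
import re

# Hypothetical guard, not Elicit's code: bound the size of untrusted user
# text, strip control characters, and fence it clearly inside the prompt
# so instructions and data stay separate.
MAX_INPUT_CHARS = 4000

def sanitize_user_text(raw: str) -> str:
    """Normalize untrusted user text before it is placed in a prompt."""
    text = raw[:MAX_INPUT_CHARS]                          # bound the size
    text = re.sub(r"[\x00-\x08\x0b-\x1f\x7f]", "", text)  # drop control chars
    return text.strip()

def build_prompt(user_text: str) -> str:
    """Wrap the user text in explicit delimiters so the model can tell
    instructions from data (a mitigation against prompt injection,
    not a guarantee)."""
    safe = sanitize_user_text(user_text)
    return (
        "Answer the question using only the text between the markers.\n"
        "<user_text>\n" + safe + "\n</user_text>"
    )
```

As with classic input validation, this only hardens the boundary; it does not make the model's behavior deterministic, which is the deeper point James goes on to make.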
And it kind of feels like this is the shell around the app, and then everything else inside you're in control of, and you can grasp it and debug it, etc.[00:30:22] And what we've effectively done, through some kind of weird rearguard action, is we've now got these slightly chaotic things. I think of them more as complex adaptive systems, which is related but a bit different; they definitely have some of the same dynamics. We've injected these into the foundations of the app, and you now need to think with this defensive mindset downwards as well as upwards, if you see what I mean.[00:30:46] So I think it will take a while for us to truly wrap our heads around that. And also, for these kinds of problems where you have to handle things being unreliable and slow sometimes, even if it doesn't happen very often, there isn't some kind of industry-wide accepted way of handling that at massive scale.[00:31:10] There are definitely patterns and anti-patterns and tools and whatnot, but it's not like this is a solved problem. So I would expect that it's not going to go down easily as a solvable problem at the ML scale either.[00:31:23] swyx: Yeah, excellent. In the terminology of the stuff that I've written in the past, I describe this inversion of architecture as LLM at the core versus code at the core.[00:31:34] We're very used to code at the core. Actually, we can scale that very well. When we build LLM-core apps, we have to realize that the central part of our app that's orchestrating things is actually a prompt, prone to prompt injections and non-determinism and all that good stuff.[00:31:48] I did want to move the conversation a little bit from the defensive side of things to the more offensive or fun side of things, the capabilities side of things, because that is the other part 
of the job description that we kind of skimmed over. So I'll repeat what you said earlier.[00:32:02] Capabilities: Offensive AI Engineering[00:32:02] swyx: You want people to have a genuine curiosity and enthusiasm for the capabilities of language models. We're recording this the day after Anthropic dropped Claude 3.5. And I was wondering, maybe this is a good exercise: how do people have curiosity and enthusiasm for the capabilities of language models when, for example, the research paper for Claude 3.5 is four pages?[00:32:23] James Brady: Maybe that's not a bad thing, actually, in this particular case. If you really want to know exactly how the sausage was made, that hasn't been possible for a few years now, in fact, for these new models. But from our perspective, when we're building Elicit, what we primarily care about is: what can these models do?[00:32:41] How do they perform on the tasks that we already have set up and the evaluations we have in mind? And then, on a slightly more expansive note, what kinds of new capabilities do they seem to have? What can we elicit, no pun intended, from the models? For example, there are very obvious ones, like multimodality: there wasn't that, and then there was. Or it could be something a bit more subtle, like it seems to be getting better at reasoning, or it seems to be getting better at metacognition, or it seems to be getting better at marking its own work and giving calibrated confidence estimates, things like this.[00:33:19] So yeah, there's plenty to be excited about there. It's just that, rightly or wrongly, there's been this shift over the last few years to not give all the details. 
So no, but from an application development perspective, every time there's a new model release, there's a flurry of activity in our Slack, and we try to figure out what's going on.[00:33:38] What it can do, what it can't do, run our evaluation frameworks, and yeah, it's always an exciting, happy day.[00:33:44] Adam Wiggins: Yeah, from my perspective, what I'm seeing from the folks on the team is, first of all, just awareness of the new stuff that's coming out, so that's an enthusiasm for the space and following along, and then being able to very quickly, and partially that's having Slack to do this, map that to: okay, what does this do for our specific case?[00:34:07] The simple version of that is: let's run the evaluation framework, and Elicit has quite a comprehensive one. I'm actually working on an article on that right now, which I'm very excited about, because it's a very interesting world of things. But basically, you can try the new model in the evaluation framework.[00:34:27] Run it. It has a whole slew of benchmarks, which include not just accuracy and confidence, but also things like performance, cost, and so on. And all of these things may trade off against each other. Maybe it's very slightly worse, but it's way faster and way cheaper, so this might actually be a net win, for example.[00:34:46] Or it's way more accurate, but that comes at slower speed and higher cost, and so now you need to think about those trade-offs. And so to me, coming back to the qualities of an AI engineer, especially when you're trying to hire for them, it is very much an application developer in the sense of a product mindset: what are our users or our customers trying to do?[00:35:08] What problem do they need solved, or what does our product solve for them? And how do the capabilities of a particular model potentially solve that better for them than what exists today? 
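Elicit's actual evaluation framework isn't public, but the trade-off Adam describes, where a slightly less accurate model can still be a net win on speed and cost, can be sketched with made-up metrics, weights, and model names:

```python
# Toy sketch of the accuracy/latency/cost trade-off; all numbers, weights,
# and model names here are illustrative assumptions, not Elicit's data.
from dataclasses import dataclass

@dataclass
class EvalResult:
    model: str
    accuracy: float   # fraction of eval tasks passed
    latency_s: float  # mean seconds per task
    cost_usd: float   # mean dollars per task

def net_score(r: EvalResult, w_acc=1.0, w_lat=0.05, w_cost=10.0) -> float:
    """Higher is better: reward accuracy, penalize latency and cost.
    The weights encode a product judgment, not a universal truth."""
    return w_acc * r.accuracy - w_lat * r.latency_s - w_cost * r.cost_usd

candidates = [
    EvalResult("current-model", accuracy=0.86, latency_s=4.0, cost_usd=0.010),
    EvalResult("new-model",     accuracy=0.85, latency_s=1.5, cost_usd=0.004),
]
best = max(candidates, key=net_score)
# Slightly worse accuracy but much faster and cheaper can still win overall.
```

The interesting design choice is the weighting itself: it forces the product question ("how much accuracy is a second of latency worth to our users?") to be answered explicitly rather than implicitly.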
And by the way, what exists today is becoming an increasingly gigantic cornucopia of things, right? And so you say, okay, this new model has these capabilities. The simple version is to plug it into our existing evaluations and see if it seems better as a straight swap-out. But when you have, for example, multimodal capabilities, then you say, okay, wait a minute, maybe there's a new feature, a whole bunch of ways we could be using it: not just a simple model swap-out, but a different thing we could do that we couldn't do before, that would have been too slow, or too inaccurate, or something like that, that we now have the capability to do.[00:35:58] I think of that as being a great thing. I don't even know if I want to call it a skill; maybe it's an attitude or a perspective, which is a desire to be excited about the new technology, the new models and things as they come along, but also to hold in mind: what does our product do?[00:36:16] Who is our user? And how can we connect the capabilities of this technology to how we're helping people in whatever it is our product does?[00:36:25] James Brady: Yeah, I'm just looking at one of our internal Slack channels where we talk about things like new model releases, and it is notable, looking through these, the kinds of things that people are excited about and not. It's not "the context window is much larger" or "look at how many parameters it has" or something like this.[00:36:44] It's always framed in terms of: maybe this could be applied to that part of Elicit, or maybe this would open up this new possibility for Elicit. 
And, as Adam was saying, yeah, I don't think it's a novel or separate skill; it's the kind of attitude I would like all engineers to have at a company at our stage, actually.[00:37:05] And maybe more generally, even. It's not just getting nerd-sniped by some technology number or fancy metric, but asking: how is this actually going to be applicable to the thing which matters in the end? How is this going to help users? How is this going to help move things forward strategically?[00:37:23] That kind of thing.[00:37:24] AI Engineering Required Knowledge[00:37:24] swyx: Yeah, applying it, I think, is the key here. Getting hands-on as well. I would recommend a few resources for people listening along. The first is Elicit's ML reading list, which I found so delightful after talking with Andreas about it.[00:37:38] It looks like that's part of your onboarding. We've actually set up an asynchronous paper club in my Discord for people following along on that reading list. I love that you separate things out into tiers one, two, and three, and that gives people a factored-cognition way of looking into the corpus, right?[00:37:55] Like yes, the corpus of things to know is growing, and the water is slowly rising as far as the bar for a competent AI engineer goes. But I think having some structured thought as to what the big ones are that everyone must know is key. It's something I haven't really defined for people, and I'm glad there's actually something out there that people can refer to.[00:38:15] Yeah, I wouldn't necessarily make it required for the job interview, maybe, but it'd be interesting to see what would be a red flag. 
If some AI engineer would not know something, I don't know where we would stoop to call it required knowledge, or say you're not part of the cool kids club.[00:38:33] But there increasingly is something like that, right? Like, not knowing what context is, is a black mark, in my opinion, right?[00:38:40] I think it does connect back to what we were saying before, of this genuine curiosity. Well, maybe it's actually that combined with something else which is really important: a self-starting, bias-towards-action kind of mindset, which, again, everybody needs.[00:38:56] Exactly. Yeah. Everyone needs that. So if you put those two together, if I'm truly curious about this and I'm going to figure out how to make things happen, then you end up with people reading reading lists, reading papers, doing side projects, this kind of thing. So it isn't something that we explicitly included.[00:39:14] We don't have an ML-focused interview for the AI engineer role at all, actually. It doesn't really seem helpful. The skills which we are checking for, as I mentioned before, are this kind of fault-first mindset and conventional software engineering.[00:39:32] It's points one and three on the list that we talked about. In terms of checking for ML curiosity, and how familiar they are with these concepts, that's more through talking interviews and culture-fit types of things. We want them to have a take on what Elicit is doing, certainly as they progress through the interview process.[00:39:50] They don't need to be completely up to date on everything we've ever done on day zero, although that's always nice when it happens. But we want them to really engage with it, ask interesting questions, and be bought into our view on how we want ML to proceed. 
I think that is really important, and that would reveal that they have this interest, this ML curiosity.[00:40:13] ML First Mindset[00:40:13] swyx: There's a second aspect to that. I don't know if now's the right time to talk about it, which is, I do think that an ML-first approach to building software is something of a different mindset. I could describe that a bit now if that seems good. Okay. So yeah, I think when I joined Elicit, this was the biggest adjustment that I had to make personally.[00:40:37] As I said before, I'd been effectively building conventional software for 15 years or so. Well, for longer, actually, but professionally for about 15 years. And I had a lot of pattern matching built into my brain, kind of muscle memory: if you see this kind of problem, then you do that kind of thing.[00:40:56] And I had to unlearn quite a lot of that when joining Elicit, because we truly are ML-first and try to use ML to the fullest. And one of the things that means is this relinquishing of control, almost: at some point you are calling into this fairly opaque black box, hoping it does the right thing, and dealing with the stuff that it sends back to you.[00:41:17] And that's very different from interacting with APIs and databases, that kind of thing. You can't just keep on debugging; at some point you hit this obscure wall. And the second part of this is that the pattern I was used to is that the external parts of the app are where most of the messiness is, not necessarily in terms of code, but in terms of degrees of freedom, almost.[00:41:44] The user can and will do anything at any point: they'll put all sorts of wonky stuff inside of text inputs, and they'll click buttons you didn't expect them to click, and all this kind of thing. 
But then by the time you're down into your SQL queries, for example, as long as you've done your input validation, things are pretty well defined.[00:42:01] And that, as we said before, is not really the case when you're working with language models. There is this kind of intrinsic uncertainty when you get down to the kernel, down to the core. But all that stuff is somewhat defensive; these are things to be wary of to some degree.[00:42:18] The flip side of that, the really positive part of taking an ML-first mindset when you're building applications, is that once you get comfortable taking your hands off the wheel at a certain point, relinquishing control, letting go, then really unexpected, powerful things can happen if you lean on the capabilities of the model without trying to overly constrain and slice and dice problems to the point where you're not really wringing out the most capability from the model that you might.[00:42:47] So, I was trying to think of examples of this earlier, and one that came to mind was from really early on, just after I joined Elicit. We were working on something where we wanted to generate text and include citations embedded within it. So it'd have a claim, and then, in square brackets, a one, in superscript, something like this.[00:43:07] And every fiber in my being was screaming that we should have some way of forcing this to happen, or structured output, such that we could guarantee that this citation was always going to be present, that the indication of a footnote would actually match up with the footnote itself. I went into this symbolic, I-need-full-control kind of mindset. And it was notable that Andreas, who's our CEO and, again, has been on the podcast, was the opposite. 
He was just kind of: give it a couple of examples and it'll probably be fine, and then we can figure it out with a regular expression at the end. And it really did not sit well with me, to be honest.[00:43:46] I was like, but it could say anything. It could literally say anything. And I don't know about just using a regex to handle this; this is a potent feature of the app. But that was my starkest introduction to this ML-first mindset, I suppose, which Andreas has been cultivating for much longer than me, much longer than most. Yeah, there might be some surprises in the stuff you get back from the model, but it's about finding the sweet spot, I suppose. You don't want to give a completely open-ended prompt to the model and expect it to do exactly the right thing.[00:44:25] You can ask it too much, and it gets confused and starts repeating itself, or goes around in loops, or just goes off in a random direction or something like this. But you can also over-constrain the model and not really make the most of the capabilities. And I think that is a mindset adjustment that most people who are coming into AI engineering afresh would need to make: giving up control, and expecting that there's going to be a little bit of extra pain and defensive stuff on the tail end, but the benefits that you get as a result are really striking.[00:44:58] The ML-first mindset, I think, is something that I struggle with as well, because the errors, when they do happen, are bad. They will hallucinate, and your systems will not catch it sometimes if you don't have a large enough sample set.[00:45:13] AI Engineers and Creativity[00:45:13] swyx: I'll leave it open to you, Adam. What else do you think about when you think about curiosity and exploring capabilities?[00:45:22] Are there reliable ways to get people to push themselves? 
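The regex fallback Andreas favored in that citation story might look something like this minimal sketch; the bracketed-number marker format and the helper names are illustrative assumptions, not Elicit's actual code:

```python
import re

# Hypothetical sketch of a regex-based citation check: extract bracketed
# markers like "[1]" from generated text and verify each one has a
# matching footnote -- the failure case James was worried about.
CITATION = re.compile(r"\[(\d+)\]")

def check_citations(body: str, footnotes: dict[int, str]) -> list[int]:
    """Return citation numbers that appear in the text but have no
    matching footnote."""
    cited = {int(m.group(1)) for m in CITATION.finditer(body)}
    return sorted(cited - set(footnotes))

text = "Caffeine improves reaction time [1], though effects vary [2]."
dangling = check_citations(text, {1: "Smith 2019"})
```

Pairing a cheap post-hoc check like this with a few-shot prompt is one way to hit the sweet spot described here: the model is left free to write, and the dangling-citation failure mode is caught afterwards instead of being constrained away up front.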
Because I think a lot of times we have this implicit overconfidence, maybe, where we think we know what a thing is when actually we don't, and we need to keep a more open mind. I think you do a particularly good job of always having an open mind, and I want to get that out of more engineers that I talk to, but I struggle sometimes.[00:45:45] Adam Wiggins: I suppose being an engineer is, at its heart, this sort of contradiction of, on one hand, yeah,
In today's episode, Tom Barry and Burley Hawk break down the fundamentals of the Conjugate System and guide you on how to incorporate it into your training regimen. They address common questions from listeners about when to start the system, appropriate warm-ups, exercise selection, and much more. Whether you're a beginner or looking to refine your approach, this episode offers valuable insights to enhance your strength and conditioning journey. Are you an athlete interested in training at Westside? Apply here: https://www.westside-barbell.com/blogs/the-blog/westside-barbells-elite-athlete-program-join-the-crew/ Join the Club, start with a 7-Day Free Trial! https://www.conjugateclub.com/ Please support this podcast by checking out our sponsors: -Studio Sponsor: CLMS - https://clmslandscapes.com/ - The Butcher & Grocer: https://thebutcherandgrocer.com/ - Conjugate Tactical: https://conjugatetactical.com/ - The Conjugate Club: https://www.conjugateclub.com/ 00:00 Intro 00:04 Start 00:52 Base Building & Starting Conjugate Blogs 01:13 What is the Conjugate Method 14:27 What Considerations to make for exercise selection? 15:25 How to balance the system while training? 17:56 When to push accessories? 24:32 When do you switch exercises? 31:13 Biggest challenges for beginners 39:59 Beginner programs for people over 40 43:14 What techniques & exercises strengthen the bottom of a lift? 45:20 How should dynamic lifts be performed? 49:02 What is an effective warm up routine 54:10 How to rotate main exercises? 55:41 How to modify the system for maximum hypertrophy? 57:14 What level of strength should you have to implement Max Effort? 58:22 Parting Thoughts & Outro
Original by Leopold Aschenbrenner, this summary is not commissioned or endorsed by him. Short Summary Extrapolating existing trends in compute, spending, algorithmic progress, and energy needs implies AGI (remote jobs being completely automatable) by ~2027. AGI will greatly accelerate AI research itself, leading to vastly superhuman intelligences being created ~1 year after AGI. Superintelligence will confer a decisive strategic advantage militarily by massively accelerating all spheres of science and technology. Electricity use will be a bigger bottleneck on scaling datacentres than investment, but is still doable domestically in the US by using natural gas. AI safety efforts in the US will be mostly irrelevant if other actors steal the model weights of an AGI. US AGI research must employ vastly better cybersecurity, to protect both model weights and algorithmic secrets. Aligning superhuman AI systems is a difficult technical challenge, but probably doable, and we must devote lots of [...] ---Outline:(00:13) Short Summary(02:16) 1. From GPT-4 to AGI: Counting the OOMs(02:24) Past AI progress(05:38) Training data limitations(06:42) Trend extrapolations(07:58) The modal year of AGI is soon(09:30) 2. From AGI to Superintelligence: the Intelligence Explosion(09:37) The basic intelligence explosion case(10:47) Objections and responses(14:07) The power of superintelligence(16:29) III The Challenges(16:32) IIIa. Racing to the Trillion-Dollar Cluster(21:12) IIIb. Lock Down the Labs: Security for AGI(21:20) The power of espionage(22:24) Securing model weights(24:01) Protecting algorithmic insights(24:56) Necessary steps for improved security(26:50) IIIc. Superalignment(29:41) IIId. The Free World Must Prevail(32:41) 4. The Project(35:12) 5. Parting Thoughts(36:17) Responses to Situational AwarenessThe original text contained 1 footnote which was omitted from this narration. 
--- First published: June 8th, 2024 Source: https://forum.effectivealtruism.org/posts/zmRTWsYZ4ifQKrX26/summary-of-situational-awareness-the-decade-ahead --- Narrated by TYPE III AUDIO.
Nick Whalen and Alex Barutha put a bow on the 2023-24 fantasy basketball season before digging in on the Play-In Tournament, as well as the four Round 1 series that are already locked in. The guys discuss: Bucks-Pacers and how much danger Milwaukee is in; Cleveland maneuvering its way to a 4-5 matchup against Orlando; Boston's path through the East; Mavs-Clippers; Timberwolves as an underdog against the Suns; the Lakers' potential matchup with Denver; End of the road for Golden State?; East Play-In scenarios. 0:00: Intro 2:00: Is this the first satisfying NBA season since COVID? 4:00: What to do with veterans with injury histories next season 7:30: Drafting LaMelo Ball next season? 9:35: Bucks-Pacers preview 16:30: Cleveland-Orlando + Any issue with Cleveland throwing Sunday's game? 19:35: If Bucks advance, who should they want to face in Round 2? 25:20: Clippers-Mavericks preview 32:15: Timberwolves-Suns preview 38:15: Game 1 opening lines (Cavs -4.5; Wolves -1.5; Bucks -2.0; Clippers -2.0) 39:40: West Play-In Preview 42:10: Is there an argument for the Lakers aiming to get the 8 seed? 48:20: East Play-In Preview 51:20: Potential matchups in the East; who should Boston prefer to face? Learn more about your ad choices. Visit podcastchoices.com/adchoices
Steve "Dr. A" Alexander is joined by Rick Kamla to talk fantasy hoops. 1:00 Victory Laps for Doc and Kamla 9:00 Chimezie Metu 13:00 Sam Hauser 18:30 Xavier Tillman 19:50 Jabari Walker and Kris Murray 22:00 RotoWire Lineups Page love 27:55 Jazz shutdown 28:10 Spurs shutdown + Wemby love 32:30 Grizzlies shutdown 34:40 Pistons shutdown Cade Cunningham 40:00 Wizards shutdown Deni, Kispert, Jordan Poole 42:30 Giannis injury 46:30 Luka Doncic - Can Mavs win it all? 50:00 Viewer Comments
The Brainy Business | Understanding the Psychology of Why People Buy | Behavioral Economics
In this episode of The Brainy Business podcast, Dr. Nick Hobson, a distinguished behavioral scientist specializing in social psychology and social neuroscience, joins host Melina Palmer for an insightful discussion on the intersection of psychology, philosophy, and behavioral science. Dr. Hobson's journey into the realm of rituals and moral emotions, shaped by collaborations with renowned behavioral scientists like Mike Norton and Francesca Gino, underscores his profound expertise in the field. The episode delves into the practical applications of psychology, emphasizing the interconnectedness of theoretical foundations and real-world practices in understanding human behavior. Nick's expertise in leveraging technology and behavioral science to analyze emotions and attitudes, exemplified by Emotive Technologies' product, Apex, offers valuable insights for businesses seeking to understand consumer actions. With a wealth of knowledge and expertise, Nick's perspective adds depth and practical relevance to the conversation, making this episode a must-listen for behavioral science researchers and practitioners. In this episode: Explore the profound impact of rituals on human behavior, shedding light on their significance in shaping daily actions and choices. Gain insights into the cognitive processes underlying decision-making behavior, illuminating the intricate mechanisms guiding individual choices and preferences. Delve into the realm of moral emotions through psychological research, uncovering the intricate interplay between emotions and moral decision-making. Uncover effective business strategies harnessing the power of behavioral science, offering valuable lessons for leveraging human behavior in organizational settings. Show Notes: 00:00:00 - Introduction, Melina introduces the episode and the guest, Dr. Nick Hobson, a prominent behavioral scientist with a background in social psychology and neuroscience. 
00:02:36 - Background and Research on Rituals Nick discusses his research on the psychology and neuroscience of rituals, which was the focus of his dissertation. 00:05:31 - Philosophy and Science The conversation delves into the value of philosophy in scientific research, emphasizing the importance of asking questions, running thought experiments, and thinking critically. Dr. Hobson highlights the blend between philosophy and computational cognitive science in the work of Daniel Dennett. 00:10:06 - Qualitative and Quantitative Research The importance of embracing both qualitative and quantitative research methods is discussed. The conversation emphasizes the balance between asking big questions and curiosity (qualitative) and empirical methodological thinking (quantitative) in scientific research. 00:11:22 - The Role of Philosophy in Research Nick reflects on the role of philosophy in research, advocating for a philosophical mindset to push the boundaries of scientific innovation. He emphasizes the tension between exploration and replication in scientific research and the value of philosophical thinking in curiosity and hypothesis testing. 00:15:19 - The Birth of Psychology Nick discusses the marriage between philosophy and physiology in the late 19th century, which led to the birth of psychology. He highlights the influence of philosophers like William James on the founding of psychology. 00:17:04 - The Trolley Problem Nick delves into the moral dilemma known as the trolley problem, where individuals must make a decision that reflects their moral philosophy. He explains the differences between utilitarianism and deontology in approaching ethical decisions. 00:20:30 - Emotive Technologies and Apex Nick introduces Emotive Technologies and its product, Apex, which combines technology and behavioral science to uncover and track emotions and attitudes in audience relationships. 
He emphasizes the tool's ability to analyze consumer behaviors and provide strategic insights to clients. 00:23:39 - Rituals in Consumer Behavior Nick explores the role of rituals in both customer-facing and employee contexts. He discusses how rituals can influence consumer behaviors and highlights his work on fan rituals in sports and health and beauty rituals. 00:27:16 - Leveraging Behavioral Insights Nick shares a case study where behavioral insights from Apex revealed a counterintuitive finding for a client. He explains how the tool provides precise prescriptions for clients to improve their brand's engagement by focusing on specific psychological constructs. 00:31:14 - Exploring the Intriguing World of Behavioral Science Nick and Melina delve into the fascinating world of behavioral science, discussing the importance of understanding the mind's role in shaping behaviors, and the need to explore the philosophical side of behavioral economics. 00:32:23 - Connecting with Dr. Nick Hobson Nick shares his contact information, including his LinkedIn profile, email, and website, for those interested in learning more about behavioral science and connecting with him. 00:33:29 - The Nexus of Behavior and Mind Nick emphasizes the significance of understanding the interplay between behaviors and the underlying thoughts and beliefs in shaping human actions, highlighting the importance of exploring both the behavioral and psychological aspects. 00:34:43 - Parting Thoughts on Behavioral Science Nick encourages listeners to consider the relationship between behaviors and the mind, prompting them to reflect on the drivers of human actions and the underlying thought processes that influence behavior. 00:35:33 - Conclusion, Melina's top insights from the conversation. What stuck with you while listening to the episode? What are you going to try? Come share it with Melina on social media -- you'll find her as @thebrainybiz everywhere and as Melina Palmer on LinkedIn. 
Thanks for listening. Don't forget to subscribe on Apple Podcasts or Android. If you like what you heard, please leave a review on iTunes and share what you liked about the show. I hope you love everything recommended via The Brainy Business! Everything was independently reviewed and selected by me, Melina Palmer. So you know, as an Amazon Associate I earn from qualifying purchases. That means if you decide to shop from the links on this page (via Amazon or others), The Brainy Business may collect a share of sales or other compensation. Let's connect: Melina@TheBrainyBusiness.com The Brainy Business® on Facebook The Brainy Business on Twitter The Brainy Business on Instagram The Brainy Business on LinkedIn Melina on LinkedIn The Brainy Business on Youtube Connect with Nick: Influence at Work X LinkedIn Learn and Support The Brainy Business: Check out and get your copies of Melina's Books. Get the Books Mentioned on (or related to) this Episode: The Ritual Effect, by Michael Norton Happy Money, by Michael Norton How To Change, by Katy Milkman Happier Hour, by Cassie Holmes Good Habits, Bad Habits, by Wendy Wood Top Recommended Next Episode: Cassie Holmes Interview (ep 257) Already Heard That One? Try These: Wendy Wood Interview (ep 127) What problem are you solving? (ep 126) Surprise and Delight (ep 276) Robert Cialdini Interview (ep 312) Introduction to NUDGES and Choice Architecture (ep 35) Other Important Links: Brainy Bites - Melina's LinkedIn Newsletter What a 5-Step Checklist at Johns Hopkins Can Teach You About Life and Business
Friend Zone Fallout? Are we stuck in a Friendship Recession? Or is it an extinction? Today, we're talking about Friendship Recession and this is The Furious Curious podcast. TIMESTAMPS: Intro: 0:00 The Numbers: 6:05 Five Reasons Why: 11:05 Reason 2: 18:12 Reason 3: 25:58 Reason 4: 30:34 Reason 5: 35:00 Key Actions: 45:00 Parting Thoughts: 47:21 FOLLOW US on LinkedIn SOURCES: https://www.instagram.com/reel/C3nA6WhLEeL/?igsh=MTc4MmM1YmI2Ng%3D%3D https://www.theatlantic.com/ideas/archive/2024/02/america-decline-hanging-out/677451/ https://www.pbs.org/newshour/show/why-a-growing-number-of-american-men-say-they-are-in-a-friendship-recession https://en.wikipedia.org/wiki/Friendship_recession http://bowlingalone.com https://en.wikipedia.org/wiki/Men%27s_shed MUSIC: "Love Quotes" (Jenevieve), "Everything I Never Told You" (Beautiful Emotional Piano Music), "Seinfeld Official Soundtrack Seinfeld Theme" (Jonathan Wolff WaterTower), "Back To The 80's" (Marvel83). ©2024 The Furious Curious.
Dr. Erkeda DeRouen talks to Dr. Thomas Campanella, a healthcare executive-in-residence at Baldwin Wallace, healthcare consultant, and former healthcare attorney. They discuss everything about rural health: the challenges, the initiatives, and the technology involved in improving it. [00:00] Introduction [03:00] Challenges Facing Rural Health [07:06] Initiatives for Rural Health [11:37] The Healthcare System, Legislature, and Technology [15:52] What Dr. Campanella Would Change About the Healthcare System [18:01] Parting Thoughts Challenges in Rural Health Outside the major population centers, most of America is rural. In Ohio, where Dr. Campanella is from, 80 of its 88 counties are rural, and health care in those areas is neglected compared to the major cities. The cities face challenges of their own, but resources need to be redirected to rural areas as well, especially since the population aged over 65 is disproportionately concentrated in rural America. You can find Dr. Campanella on LinkedIn and send him an e-mail. To learn more about how MedSchoolCoach can help you along your medical school journey, visit us at Prospective Doctor. You can also reach us through our social media: Facebook: https://www.facebook.com/MedSchoolCoach Dr. Erkeda's Instagram: https://www.instagram.com/doctordgram/ YouTube: www.youtube.com/@ProspectiveDoctor
Love is Blind Episode 12 was a rollercoaster! Wheww! I decided that rather than record a full recap (which I have on TikTok), I would give you my parting thoughts on the final 3 couples we saw in Episode 12. I always give myself time to process, change my mind, and reflect to see whether any opinions I have are projections from my own life. I hope you enjoy this episode, and if you're new here, please leave a review! Follow on TikTok: www.TikTok.com/positivelyuncensored Follow on Instagram: www.Instagram.com/positivelyuncensored Follow on X: www.x.com/PosUncensored --- Send in a voice message: https://podcasters.spotify.com/pod/show/positivelyuncensored/message
Pastor Emeritus, Paul Blasko, has been with Grace Point/Twin Orchards Baptist for more than 40 years. Today he shares his memories and hopes with the congregation prior to his and Joanne's move to Pennsylvania to be nearer to their daughter and son-in-law.
The Real Estate Mastermind Live is a live podcast turned radio show, created for real estate investors who want to learn directly from top experts across asset classes. The Real Estate Mastermind Live is hosted by Seth Gershberg and Jay Tenenbaum of Scottsdale Mortgage Investments, and Edward Brown of Pacific Private Money. In this episode, listeners are joined by Nelson Chu, the experienced serial entrepreneur and Founder/CEO of Percent, a modern credit marketplace. Nelson's journey began with his recognition of inefficiencies in private credit markets, sparking his mission to revolutionize the industry. Since founding Percent in 2018, Nelson and his team have crafted an end-to-end credit platform, enhancing confidence in transactions through governance, asset transparency, and market standardization. Nelson's impact extends beyond Percent: he has been named a Rising Star by Private Debt Investor and brings a background in strategy consulting and global financial institutions. Here's what Nelson shares with us: Introduction: Nelson reflects on his journey and background. Origin of Percent: Nelson discusses the genesis of Percent and its mission. Understanding Private Credit: Exploring the concept and significance of private credit. Timing for Private Credit: Nelson elaborates on why now is the opportune moment for the asset class. Evolution of Percent's Technology: Insights into the technological advancements since launch. Future of Private Credit: Predictions on economies of scale in the next five to ten years. Structured Fixed Income Products: Overview of the products private credit offers. Barriers in Capitalizing on Opportunities: Addressing challenges for private credit amid shifting market dynamics. Parting Thoughts on Private Credit: Nelson shares additional insights on the topic. Investing Wisdom: Nelson imparts words of wisdom for long-term investors. Register to attend The Real Estate Mastermind Live on our website
using the link here: https://scottsdalemortgageinvestments.com/podcast Learn more about Scottsdale Mortgage Investments by visiting the website using the link here: https://scottsdalemortgageinvestments.com/ Learn more about Pacific Private Money by visiting the website using the link here: https://www.pacificprivatemoney.com/ Are you on LinkedIn? Connect with our co-hosts using the links below. Seth Gershberg - Connect on LinkedIn Jay Tenenbaum - Connect on LinkedIn Edward Brown - Connect on LinkedIn
Investment banker and author Chris Whalen, chairman of Whalen Global Advisors and author of The Institutional Risk Analyst, returns to The Julia La Roche Show to discuss the big picture of the economy and markets, including an impending "maxi reset" in home prices and the potential for more bank failures. He also highlights the "silent crisis" in commercial real estate. Whalen shares his insights on the earnings of big banks and their media coverage, provides an outlook on the Federal Reserve, and discusses the US debt situation. Lastly, he addresses the possibility of releasing Fannie Mae and Freddie Mac from conservatorship and shares his work and parting thoughts. Takeaways The commercial side of the economy is experiencing pain, particularly in commercial real estate and corporate defaults. The housing market is expected to undergo a reset, leading to lower home prices and potential challenges for developers. There is a silent crisis in commercial real estate, with legacy properties becoming toxic and banks being urged to sell assets. The Federal Reserve may need to drop rates, start buying bonds, and increase reserves to address the challenges in the economy and banking sector. The US debt situation is a significant concern, and long-term rates may rise, impacting various sectors of the economy. The release of Fannie Mae and Freddie Mac from conservatorship is unlikely due to their credit ratings and the challenges they would face functioning as private entities. 
Links: Twitter/X: https://twitter.com/rcwhalen Website: https://www.rcwhalen.com/ The Institutional Risk Analyst: https://www.theinstitutionalriskanalyst.com/ Comments on the most recent Fed proposal in the Basel III Endgame: https://www.regulations.gov/comment/OCC-2023-0008-0052 Timestamps: 00:00 Intro 01:08 Big picture view of the economy and markets 03:29 Impact of Basel III endgame 04:50 A maxi reset in housing 06:20 Silent crisis in commercial real estate 09:28 Potential increase in bank failures 13:00 Big bank earnings and media coverage 15:30 Grim economic picture for commercial 21:15 Outlook on the Federal Reserve 24:55 Fed could cause a bank crisis 26:15 Debt situation in the US 31:30 Release of Fannie Mae and Freddie Mac from conservatorship? 35:15 Parting Thoughts
In this special year-end episode of the Texas Appellate Law Podcast, hosts Jody Sanders and Todd Smith reflect on the year, express gratitude to their audience and sponsors, and revisit past episodes relevant to coping with the holidays. They also touch on recent developments in Texas appellate law, including new rules from the Texas Supreme Court. The episode closes with holiday wishes and anticipation for exciting new content in 2024. Love the show? Subscribe, rate, review, and share! A special thanks to our sponsors: Court Surety Bond Agency and Thomson Reuters. Proudly presented by Butler Snow LLP. Join the Texas Appellate Law Podcast Community today: texapplawpod.com Twitter LinkedIn YouTube
In this video, I sit down with two incredible NP colleagues and friends, Kara and Heather, for a candid conversation about the rollercoaster ride of being a nurse practitioner. We delve deep into our shared experiences, tackling everything from the highs of fulfilling our calling to the challenges that can sometimes leave our compassionate hearts feeling a bit weary. Burnout, boundary setting, and the beautiful power of teamwork: it's all on the table.