“Imagine any city with 50% job losses, it's a completely different place. I don't see governments getting things ready. It's going to take time.” This is a special episode only available to our podcast subscribers, which we call The Mini Chief. These are short, sharp highlights from our fabulous guests, where you get a 5 to 10 minute snapshot from their full episode. This Mini Chief episode features Professor Joel Pearson, Director of Future Minds Lab. His full episode is titled Leading in uncertainty, Future-proofing for the AI Revolution, and De-risking innovation. You can find the full audio and show notes here:
Get My Book On Amazon: https://a.co/d/avbaV48
Download The Peptide Cheat Sheet: https://peptidecheatsheet.carrd.co/
Download The Bioregulator Cheat Sheet: https://bioregulatorcheatsheet.carrd.co/
Outwork Them All: A Gen X Guide to Business and Leadership Success by Sean P Kling
Amazon.com | Seankling.com

From Stuck in a Rut to Unparalleled Success: Unleash the Power of Generation X Wisdom to Succeed

Building a business can be filled with uncertainties, things you can't control, and the constant search for growth. Whether you're a small-business owner feeling stuck in a rut or someone just starting out, the right path is rarely obvious and is always full of obstacles. Thankfully, there's a group of people with decades of experience about what works and what doesn't. Extracting that expertise means you don't have to make the same mistakes to enjoy success.

Serial entrepreneur and proud Gen Xer, Sean Kling, reveals the untold practices and attitudes that have propelled Generation X to extraordinary success. As younger generations may have overlooked some of these invaluable business secrets, Sean brings them back into the spotlight. He delves into his generation's upbringing, showcasing how these practices are deeply rooted in their experiences, and explains how they can work wonders in helping you achieve your personal and business goals.

You'll learn:
- Untapped networking opportunities hidden beyond the digital world.
- Five action steps to build a team of like-minded people in order to create a comfortable company culture.
- The must-have advisors that make up your inner circle, so your personal blind spots never go unnoticed.
- A 9-step protocol to help you rebound, reinvent, and recoup when your business starts to wear and tear.
- A no-nonsense guide to forgo costly software and run your business with more efficiency.

Embrace the proven wisdom of Generation X and its time-tested strategies. Read and implement Outwork Them All today and embark on a transformative journey that will position your business for unparalleled success.
Today, it is my pleasure to speak with Anne Rappa & Alex Glauber. Anne is the fine art practice leader for Marsh McLennan. She provides risk management advice and assists clients by negotiating risk and insurance solutions related to fine art collections and transactions. Anne has 30 years' experience representing the interests of both individual and institutional collectors, institutions, art dealers, auction houses, art logistics companies and other fine art focused businesses. Anne, and her firm Marsh McLennan, are a valued Advisor member of the FOX community, and we are grateful to have their expertise and thought leadership in our membership community. Alex is an art advisor, curator, and educator based in New York. He is the founder and principal of AWG Art Advisory, where he works with private individuals, corporations, and institutions in the conceptualization, building, and management of fine art collections. Prior to founding AWG Art Advisory in 2009, Alex served as an assistant curator for the Lehman Brothers and Neuberger Berman art collections from 2006 to 2009. He has curated monographic and thematic group exhibitions at venues as varied as the Portland Museum of Art in Maine, Phillips auction house, and Bryant Park, New York, as well as at galleries such as Lisson Gallery, Andrew Kreps Gallery, Chapter NY, Dickinson, David Lewis Gallery, and Casey Kaplan. Art is an increasingly popular investment asset among enterprise families and family offices – both as a passion investment and an alternative asset in their diversified portfolios. Anne and Alex talk about what is going on today in the world of art investing and highlight the latest trends that have been shaping the space over the recent years. They also explain how art is different from other investment assets, describing the unique attributes and market structures that set art apart from other investments and even from other alternative assets. 
One practical piece of advice Anne and Alex have for our listeners is to consider and understand the role of a professional art advisor. They describe the role of the art advisor and share their views on why families and family offices should be working with one and what value they can extract from such a relationship. With the unique attributes and value of art come also some unique risks. Anne and Alex shed some light on the major risks art investors should be aware of and how family offices and their clients can manage and protect themselves against those risks. Don't miss this deeply instructional conversation with two of the leading experts and advisors in the world of art investing.
“I think the internet will be gone. AIs will have all the information and entertainment for us whenever we want.” In this episode of The Inner Chief podcast, I speak to Professor Joel Pearson, Director of the Future Minds Lab, on Leading in uncertainty, Future-proofing for the AI revolution, and De-risking innovation.
Revenue Generator Podcast: Sales + Marketing + Product + Customer Success = Revenue Growth
Converting YouTube viewers into qualified leads is a persistent marketing challenge. Nate Woodbury, CEO of Be The Hero Studios and producer of over 60 successful YouTube channels, shares his proven approach for using YouTube as a lead generation tool rather than an ad revenue platform. His strategy focuses on creating highly specific search-optimized content that answers targeted questions, building relationships with viewers before implementing strategic calls-to-action, and offering value-aligned lead magnets including mini-courses, webinars, and downloadable resources that convert viewers into business opportunities.

Show Notes
Connect With:
Nate Woodbury: Website // LinkedIn
The MarTech Podcast: Email // LinkedIn // Twitter
Benjamin Shapiro: Website // LinkedIn // Twitter

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
“The 80/20 curve also applies to time: 1% of your time produces 50% of all your productivity.” This is a special episode only available to our podcast subscribers, which we call The Mini Chief. These are short, sharp highlights from our fabulous CEO guests, where you get a 5 to 10 minute snapshot from their full episode. This Mini Chief episode features Perry Marshall, Author and Sales & Marketing Guru. His full episode is titled Redefining the 80/20 Rule, buying time for superhuman productivity, and solving tough problems. You can find the full audio and show notes here:
Continuing our focus on the unique concept of mental beauty, episode 114 of the Dare To Share Your Untold Story Podcast, Mental Beauty Segments, is titled “Extracting the Beauty Tangled in Knots of Fear-Based Living.” For a deeper dive into this topic, we revisit episode 71, “Untangling Beauty from Her Tangled Web of Fears,” with guest Garcia Hanson. Her journey is a profound example of what it takes to break free from the grip of fear and self-doubt. Her story sheds light on how we can find resilience, self-worth, and ultimately, beauty within the messiness of life's challenges. It truly embodies the essence of Mental Beauty: a testament to the transformational power of embracing vulnerability and using it as a tool for growth. When we consistently prioritize our mental and emotional well-being, we unlock the ability to live more fully. Garcia highlights this beautifully when she says: “The definition of success is the grace to make changes.” This idea resonates deeply. It's about breaking free from fear, making room for growth, and allowing ourselves the space to redefine success on our own terms. As Garcia reminds us, success doesn't come from perfection or from meeting external expectations. Instead, it's rooted in finding the courage to untangle those knots and embrace our true selves. Garcia's Mental Beauty Rethink challenges us to confront the hidden fears and self-doubt that often define our lives. Her story is a powerful reminder that behind every perfect facade lies a more complex, vulnerable reality.

3 Practical ‘Mental Beauty Tips' inspired by Episode 114:

1. Name and Reframe Your Fears: Start by naming the fears you feel. Write them down, and identify any underlying beliefs tied to these fears, such as “I'm not enough” or “I'll be judged.” Reframe these beliefs by challenging their validity and reminding yourself of what's true about you. For example, replace “I'm not enough” with “I am doing my best, and I deserve to be valued.” Reframing gives you a more supportive inner voice, allowing you to embrace self-acceptance while seeing fears for what they are: mental constructs, not certainties.

2. Practice Self-Compassion with Small Acts of Courage: Fear often keeps us from stepping outside our comfort zone. Start by taking small, manageable actions toward self-acceptance. For instance, say “no” to a request when you're already overwhelmed, or share a genuine feeling with someone you trust. Each small act of courage helps you build resilience and reinforces the message that you're worthy of acceptance, exactly as you are.

3. Visualize Success and Lean into Your Values: Visualization is a powerful tool for reprogramming your mind to view unfamiliar situations more positively. Spend a few minutes each day picturing yourself navigating challenges with confidence and strength. Tie this practice to your core values: ask yourself how embracing courage or acceptance aligns with your true values (e.g., authenticity, growth, love). Visualizing yourself succeeding in alignment with your values strengthens your resolve to act, helping you move from a fear-driven mindset toward one where you can live freely, guided by your truest self.

Episode 114 Takeaway: A self-care tool called ‘Mindful Grounding.' This involves taking a few intentional moments each day to pause, breathe, and reconnect with your immediate surroundings. Begin by finding a quiet place, then close your eyes and take three deep breaths, focusing on the sensation of the breath as it fills your lungs and leaves your body. Once you feel centred, open your eyes and slowly observe five things you can see, four things you can feel, three things you can hear, two things you can smell, and one thing you can taste. This grounding exercise gently pulls your mind away from anxious thoughts by anchoring you in the present moment.
Of course, remember: if you have something amazing that you would like us to give a shout-out to on your behalf, just send an e-mail to mentalbeautycommunity@gmail.com. Spread Mental Beauty, Stop the Stigma!
As DeFi continues to evolve, the challenge of finding a balance between decentralization and protection from all manner of exploits persists. The founder of Infinex, Kain Warwick, joined the show to talk about:

- How crypto market makers have at times veered into “all-out crime”
- What market making looks like today
- Playing chart games with token allocations
- What Kain looks at when evaluating tokens
- Why Binance kicked a MOVE market maker off its platform
- The $JELLY attack on Hyperliquid and the problem of centralization in DeFi
- What problems in crypto Kain is attempting to solve with Infinex

Visit our website for breaking news, analysis, op-eds, articles to learn about crypto, and much more: unchainedcrypto.com

Thank you to our sponsors! Bitwise

Guest: Kain Warwick, founder of Infinex App and Synthetix
Previous appearances on Unchained: 2025 Will Be a Year of Crypto Competition. Can Ethereum Make a Comeback?

Links:
Crypto Market Making
- Kain Warwick: Discussion about market makers
- Binance: What happened with MOVE on Binance
- Coindesk: Binance Offboards Market Maker That It Said Made $38M Profit on MOVE Listing
- Bloomberg: Citadel Securities Plots Jump Into Crypto Trading After Trump's Embrace
Hyperliquid
- Unchained: Hyperliquid Saved Itself a $15 Million Loss, but Sparked Criticism
Infinex
- The Block: Synthetix founder Kain Warwick launches Infinex
- The Block: Peter Thiel's Founders Fund invests in Infinex's Patron NFT sale as total amount raised hits $67.7 million

Timestamps:
“Step one, write down 25 things that you really, really want to do in your life. Step two, order the list in importance to you. Step three, put a circle around the top five and cross off the bottom 20. That's how you succeed.” In this Best of Series episode, we replay a chat we had in 2019 with Perry Marshall, Author and Sales & Marketing Guru, on Redefining the 80/20 Rule, buying time for superhuman productivity, and solving tough problems.
Have you ever experienced something so painful that you just wanted to erase it from your memory forever? Those "never again" moments that feel too heavy to carry? We all have them, whether it's a relapse, a toxic relationship, or a betrayal that left us wounded.

In this powerful episode of The Addicted Mind Podcast, hosts Duane and Eric explore the practice of "benefit finding," a transformative approach to mining our painful experiences for growth and wisdom. Instead of pushing away difficult memories, they suggest we might find our greatest lessons within them. This isn't about toxic positivity or pretending everything happens for a reason. It's about recognizing our remarkable human capacity to make meaning from suffering. As Viktor Frankl discovered in the concentration camps, "suffering ceases to be suffering the moment it finds a meaning." Modern psychology calls this "post-traumatic growth," the ability to find positive changes in five key areas: appreciation of life, relationships, new possibilities, personal strength, and spiritual change. When we intentionally reframe negative experiences, we're actually rewiring our brain through neuroplasticity.

The hosts provide a practical four-step process to transform your pain into wisdom:

1. Identify the negative experience you never want to repeat
2. Understand why you want to avoid it
3. Extract the valuable lessons within it
4. Create a document of your "new learnings"

Through this process, your darkest moments can become sources of inspiration and light, not just for yourself, but for others around you. As Brené Brown reminds us, "Our wholeness actually depends on the integration of all of our experiences, including the falls." Whether you're in recovery or simply navigating life's challenges, this episode offers a compassionate roadmap for turning pain into purpose. Download the accompanying worksheet to begin your journey of transformation today.
Download the Worksheet

Key Topics:
- The natural tendency to want to forget painful experiences vs. the value of mining them for wisdom
- Post-traumatic growth and the five areas where people can grow through difficult experiences
- How neuroplasticity allows us to rewire our brains when we reframe negative experiences
- The difference between benefit finding and toxic positivity
- Viktor Frankl's insights on finding meaning in suffering
- A practical four-step process for transforming pain into wisdom
- How to create a living document of "new learnings" from painful experiences

Timestamps:
[00:00:54] Introduction to the topic of painful experiences we wish we could erase
[00:04:00] Explanation of benefit finding and transforming pain into growth
[00:07:40] Discussion of Viktor Frankl and making meaning from suffering
[00:08:29] The five areas of post-traumatic growth
[00:09:12] How reframing negative experiences rewires our brains
[00:13:00] Steps 1 & 2: Identifying and understanding your painful experience
[00:15:22] Steps 3 & 4: Extracting lessons and creating new learnings

Follow and Review: We'd love it even more if you could drop a review or 5-star rating over on Apple Podcasts. Simply select “Ratings and Reviews” and “Write a Review,” then add a quick line with your favorite part of the episode. It only takes a second and it helps spread the word about the podcast.

Supporting Resources: If you live in California and are looking for counseling or therapy, please check out Novus Counseling and Recovery Center: NovusMindfulLife.com

We want to hear from you. Leave us a message or ask us a question: https://www.speakpipe.com/addictedmind

Disclaimer

Learn more about your ad choices. Visit megaphone.fm/adchoices
This episode features some game exploitation in Neverwinter Nights, weaknesses in mobile implementation for PassKeys, and a bug that allows disclosure of the email addresses of YouTube creators. We also cover some research on weaknesses in Azure.

Links and vulnerability summaries for this episode are available at: https://dayzerosec.com/podcast/278.html

[00:00:00] Introduction
[00:00:35] Exploiting Neverwinter Nights
[00:08:48] PassKey Account Takeover in All Mobile Browsers [CVE-2024-9956]
[00:22:51] Disclosing YouTube Creator Emails for a $20k Bounty
[00:31:58] Azure's Weakest Link? How API Connections Spill Secrets
[00:39:02] SAML roulette: the hacker always wins
[00:40:56] Compromise of Fuse Encryption Key for Intel Security Fuses

Podcast episodes are available on the usual podcast platforms:
-- Apple Podcasts: https://podcasts.apple.com/us/podcast/id1484046063
-- Spotify: https://open.spotify.com/show/4NKCxk8aPEuEFuHsEQ9Tdt
-- Google Podcasts: https://www.google.com/podcasts?feed=aHR0cHM6Ly9hbmNob3IuZm0vcy9hMTIxYTI0L3BvZGNhc3QvcnNz
-- Other audio platforms can be found at https://anchor.fm/dayzerosec

You can also join our discord: https://discord.gg/daTxTK9
"Listening to the audio recorded in Shanghai, China, at the Spring festival, I identified interesting melodic fragments from a puppet show, as well as discernible rhythmic and melodic elements from unintelligible background voices and other noises from the audience. "I was drawn to these sounds both as being culturally significant and as being representative of aspects of World Heritage connected with tourism. "Extracting my musical starting points I then abstracted and developed them, expanding upon the inherent motifs - following my emotional responses both to the source material and the implications of World Heritage Day (such as unity, cultural appreciation and learning, the historical importance of a place and its population etc., etc.) and its influence by and on ‘global culture'." Performance of traditional puppetry reimagined by Dead Kousin. ——————— This sound is part of the Sonic Heritage project, exploring the sounds of the world's most famous sights. Find out more and explore the whole project: https://www.citiesandmemory.com/heritage
So many times, you don't realize how much you have grown.
Connect with Vanessa Soul: https://sacredsoulenergetics.com/
Power & Power Podcast All Apps: https://pod.link/1713095352

1:02 Book Introduction
4:11 What if suppression is the issue?
5:44 Why spiritual evolution is about deep emotional integration
8:15 The concepts in the book
10:25 How you have been conditioned to suppress emotions
13:25 Judgement locks emotions
15:01 When self-judgement was keeping me in an unhealthy habit
18:38 Judgement release & reflection questions
19:22 Judgement, fear, & awareness
21:26 Emotional releasing to activate your personal power
24:07 Emotional suppression consequences
26:12 Getting the right support
27:33 Extracting the awareness from the fear
28:28 Different forms of emotional releasing
30:57 Denial of emotions limits your life
33:07 3 journaling prompts
34:34 EFT practice for releasing self-judgement & denial

CONNECT W/ VANESSA SOUL
https://sacredsoulenergetics.com/
IG: https://www.instagram.com/sacred__soul____/
Facebook: https://www.facebook.com/vanessa.spiva.9/
Threads: https://www.threads.net/@sacred__soul____
Power & Power Podcast All Apps: https://pod.link/1713095352

Donate to the Podcast: Sacred Soul Energetics Business Venmo: https://venmo.com/code?user_id=4008578222393358557&created=1739583741.404595&printed=1
We're closing out February with our second Best Picture winner in a row, this time jetting off to South Korea for Bong Joon-ho's Parasite. How does it hold up a few years removed from its hype? Very well, it turns out!

CHAPTERS:
(00:00:00) - The Nextlander Watchcast Episode 126: Parasite (2019)
(00:00:24) - Intro.
(00:01:28) - Our film this week is Bong Joon-ho's Parasite!
(00:10:12) - A brief note about the (probably) lower number of clips in this episode.
(00:11:22) - Getting into the production history.
(00:22:04) - Let's get into the movie proper, and talk about our cast.
(00:34:37) - A window into the lives of the Kims.
(00:38:35) - The Parks, and the importance of keeping up appearances.
(00:43:15) - Break!
(00:43:40) - We're back, and the scam begins in earnest.
(00:49:55) - The dad arrives, and a little about actor Lee Sun-kyun (content warning for suicide discussion).
(00:57:14) - Extracting the housekeeper.
(01:05:55) - Everything's coming up Kim.
(01:13:02) - Ding dong.
(01:25:58) - It's a ram-donnybrook!
(01:30:40) - That's no ghost.
(01:33:08) - Trapped under the world's widest coffee table.
(01:37:34) - The effects of monsoon season are not applied equally.
(01:41:15) - Barreling toward disaster.
(01:50:37) - People start dying very quickly.
(01:54:15) - The extended epilogue.
(02:04:39) - Final thoughts.
(02:07:32) - Our plans for next month and beyond.
(02:12:41) - Outro.
A year and a half ago, neuroscientist Kamilla Souza got the call she'd been waiting for: A baby humpback whale had died just offshore. She wanted its brain. That's because scientists know little about the brains of whales and dolphins off the Central and South American coasts. Studying them, like Kamilla is doing, can teach scientists about the inner workings of these animals — about their behavior and how they're adapted to living underwater. So, she has to race against time to save the brains. The heat in this area of Brazil accelerates decomposition. Minutes matter. This episode was reported by Ari Daniel. Read more of Ari's reporting.

Curious about other biology research happening around the world? Let us know by emailing shortwave@npr.org!

Learn more about sponsor message choices: podcastchoices.com/adchoices
NPR Privacy Policy
Book: Soul GAME: https://tinyurl.com/yckcvnv9
Book: Every Word: https://www.soulreno.com/every-word
Book: Why Play: https://www.soulreno.com/Why-Play
Book: Digital Soul: https://www.soulreno.com/Digital-Soul
Video Course: HOW TO PLAY: https://www.soulreno.com/How-to-play-life-is-a-game
Instagram: https://www.instagram.com/soulrenovation/
Given the environmental catastrophe into which we now zombie-walk, here is a bit about the history of extracting greenhouse gases directly from the air. Extracting carbon dioxide from the atmosphere began in the 1930s, but proposals to do it for environmental reasons only began about 25 years ago, with the first large-scale systems appearing in the 2020s. We also talk a little about pulling a worse gas, methane, from the atmosphere.

Support the show

Support my podcast at https://www.patreon.com/thehistoryofchemistry
Tell me how your life relates to chemistry! E-mail me at steve@historyofchem.com
Get my book, O Mg! How Chemistry Came to Be, from World Scientific Publishing: https://www.worldscientific.com/worldscibooks/10.1142/12670#t=aboutBook
Ajahn Cunda talks about distraction. He discusses how it affects our practice and ways to overcome it, as well as the process of learning how we are causing our own suffering, how to use the tools of the Buddha’s teaching to examine our experience, and how our practice should progress in a positive direction over time. This talk was offered on February 15, 2025 at Abhayagiri Buddhist Monastery.
Ep. 151 features Simon Gerszberg from ShotQuality, a sports data extraction company using CV, AI and data science to optimize pricing for sportsbooks and syndicates. Hear him discuss:

- How a chance college roommate pairing sparked the beginning for ShotQuality
- The strategy shift that saw them move away from selling analytics to colleges, despite having 25 Division 1 teams as clients
- How their first sportsbook deal came from betting into the market using ShotQuality data
- His experience successfully navigating the dreaded operator integration roadmap
- News of their expansion from a basketball-only product to a multi-sport offering
- His perspective on the sports data ecosystem, and where ShotQuality fits within it
- The joint role of computer vision and human validation in their technology platform
- How he successfully fundraised a seven-figure seed round as a first-time founder

Speaking of the NEXT NYC Summit: capacity is limited and tickets will sell out. Secure yours now and get $200 off your Full Event Pass with code SB2NYC52. Get your ticket here: https://next.io/summits/newyork/#tickets

Catch the video version of this episode here. Learn more
A southwestern Manitoba farmer is calling on government and industry to explore the feasibility of establishing a processing facility that would extract medical compounds from the byproducts of hog processing. One such compound is heparin, a blood thinner that prevents the formation of blood clots. Jim Downey says he became interested in the medical side of production about three years ago, when he was contacted by a Chinese company looking to find out what was being done with the mucosa from the guts of the hogs being slaughtered in Manitoba.

The July heat did a number on 2024 crop yields in Saskatchewan, but it did prevent major disease outbreaks. Sandy Junek, the Molecular Lab Manager with Discovery Seed Labs in Saskatoon, says seed germination levels range from good to very good.

See omnystudio.com/listener for privacy information.
Most of us aren't born with a powerful courtroom presence and a reputation for extracting crucial information in tense depositions. But we can learn. Guest Tara-Jane Flynn has been called a “Princess Warrior” and “The Deposition Queen” for her tough, compelling courtroom manner. But the veteran California personal injury litigator says she started out as a shy girl too overwhelmed to give a classroom speech in school. She taught herself to be strong and to be the strength her clients need. She got involved in theater and public speaking. She guest hosted podcasts. And she developed a strong social media presence. You can, too. In this episode, you'll hear valuable tips for winning depositions, learning how to leverage social media, and being the lawyer your client needs to believe in, from a Los Angeles attorney at home battling for 8-figure verdicts. Get ready to be inspired.

Questions or ideas about solo and small practices? Drop us a line at NewSolo@legaltalknetwork.com

Topics:
- Overcoming shyness to become a force of nature in the courtroom doesn't always come naturally. Learn what you can do intentionally to build your presence and your confidence.
- Social media? It's not as hard as you think. Find an attorney with a social media presence you admire and do what they do. How's that for easy? Really.
- Extracting valuable evidence in a deposition can feel intimidating, but there are things you can do. Hear how to let the deponent talk, ask follow-up questions as simple as why or why not, and prepare, prepare, prepare.

Mentioned in This Episode:
- Toastmasters
- Previous episode, “T.V. Advertising: What to Expect” with guest Conti Moore
- The Wayback Machine
- ABA TECHSHOW 2025
- Clio Cloud Conference 2025
- Clio Legal Trends Report
With Ohio State being a big favorite over Notre Dame in the National Championship game, Mike and Jim navigate that game trying to find the best value. Download the latest episode of Cash the Ticket today. To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices
Sign up for the Coca Summit in Peru - https://bit.ly/40uRj6s Dennis McKenna is an ethnopharmacologist, author, and brother to well-known psychedelics proponent Terence McKenna. Dennis currently runs the @mckenna.academy YouTube channel. SPONSORS https://hims.com/danny - Start your FREE online visit today. https://pick6.draftkings.com - Download the DraftKings Pick 6 app and use code DANNYJONES. https://whiterabbitenergy.com/?ref=DJP - Use code DJP for 20% off EPISODE LINKS Wisdom of the Leaf Coca Summit - https://bit.ly/40uRj6s Dennis' YouTube channel - @mckenna.academy Brotherhood of the Screaming Abyss book: https://a.co/d/3u81TJP https://mckenna.academy FOLLOW DANNY JONES https://www.instagram.com/dannyjones https://twitter.com/jonesdanny OUTLINE 00:00 - The brotherhood of the screaming abyss 11:56 - Discovering Ayahuasca 23:35 - Psilocybin mushrooms in La Chorrera 32:20 - The transcendental object at the end of time 46:34 - Timewave Zero 01:00:45 - Dennis' disagreement w/ Terence 01:16:25 - Terence McKenna was a complex person 01:28:25 - Mushrooms are the ideal psychedelic 01:46:26 - Set & setting 01:57:06 - The reality hallucination 02:05:34 - We're made of drugs 02:12:40 - Stoned ape theory 02:21:38 - The Extratempestrial Model 02:29:43 - Galen & ancient drugs 02:32:59 - Extracting drugs from plants 02:43:44 - Psychedelics as medicine 02:48:07 - Cocaine 03:03:59 - New coca leaf study 03:10:53 - The biognosis project 03:23:40 - Extended state DMT studies Learn more about your ad choices. Visit megaphone.fm/adchoices
This week Bart reviews a clump of hands played from a rare Saturday afternoon session at the Encore in Boston Harbor. In this session he encounters several situations which required a bit of creativity in order to get the maximum value against opponents with smaller stacks.
Welcome to this week's "Ask Me Anything" on the Cybercrime Magazine Podcast, with host Theresa Payton, CEO at Fortalice Solutions, former CIO at The White House, and previously Deputy Commander of Intelligence on the CBS TV series "Hunted". This special series is brought to you by Pipl AMA, the AI investigator. AMA answers questions about individuals in your investigation. Learn more at https://pipl.com/ama
"Listening to the audio recorded in Union Station, Toronto, I could hear interesting melodic fragments from a busker alongside discernible rhythmic elements from unintelligible voices and other noises in the foreground.

"Extracting these musical starting points, I developed them, expanding upon the inherent motifs - following my emotional responses to both the source material and my research into the history of Toronto, in parallel with processing the loss of fellow musician and Cities and Memory stalwart M. Lilley.

"This piece is dedicated to the memory of Michael D. Lilley (19.09.76-01.08.24)."

Union Station, Toronto reimagined by dead kousin.

IMAGE: Michael Caven, CC BY 2.0, via Wikimedia Commons
Mohamed is joined once again for some crossover goodness by the Very Clinical guys...Zach and Kevin! The trio shares personal anecdotes from their school days, common dental school nightmares, and detailed step-by-step guidelines for performing lower molar extractions. They emphasize the importance of using a handpiece for sectioning teeth, the value of cone beam CT scans, and provide practical tips for improving surgical techniques. The VC crew aims to help dental students build confidence and competence in oral surgery procedures!

Episode Index:
02:42 Finals Week Stress Stories
07:17 Recurring Dental School Nightmares
10:01 Discussion on Tooth Extractions
21:28 Mastering the Instrumentation (Kevin weighs in on the elevatome)
23:05 Personal Experiences and Challenges in Oral Surgery
24:06 Tools and Techniques for Efficient Extractions
28:44 Step-by-Step Guide to Extracting a Lower Molar
34:30 Advanced Extraction Techniques and Tips

Join the Very Dental Facebook group using the password "Timmerman," "Hornbrook," "McWethy," "Papa Randy" or "Lipscomb!" The Very Dental Podcast network is and will remain free to download. If you'd like to support the shows you love at Very Dental then show a little love to the people that support us! -- Crazy Dental has everything you need from cotton rolls to equipment and everything in between and the best prices you'll find anywhere! If you head over to verydentalpodcast.com/crazy and use coupon code “VERYDENTAL10” you'll get another 10% off your order! Go save yourself some money and support the show all at the same time! -- The Wonderist Agency is basically a one stop shop for marketing your practice and your brand. From logo redesign to a full service marketing plan, the folks at Wonderist have you covered! Go check them out at verydentalpodcast.com/wonderist! -- Enova Illumination makes the very best in loupes and headlights, including their new ergonomic angled prism loupes!
They also distribute loupe mounted cameras and even the amazing line of Zumax microscopes! If you want to help out the podcast while upping your magnification and headlight game, you need to head over to verydentalpodcast.com/enova to see their whole line of products! -- CAD-Ray offers the best service on a wide variety of digital scanners, printers, mills and even their very own browser based design software, Clinux! CAD-Ray has been a huge supporter of the Very Dental Podcast Network and I can tell you that you'll get no better service on everything digital dentistry than the folks from CAD-Ray. Go check them out at verydentalpodcast.com/CADRay!
CX Goalkeeper - Customer Experience, Business Transformation & Leadership
In this episode of the CX Goalkeeper Podcast, Federico Cesconi dives deep into the future of customer experience management, revealing the potential of artificial intelligence in transforming how companies manage customer feedback. He introduces the "Insight Narrator," a groundbreaking tool that simplifies customer feedback analysis, using AI to provide actionable insights faster than ever before. This episode is a must-listen if you're keen on learning how technology can streamline customer experience processes.

About the Guest
I'm a seasoned professional with immense experience in customer insights and marketing analytics. From Customer Relationship Management to Customer Experience Management, I specialize in helping companies utilize technology and data to make the right decisions for business growth. Today, I'm the co-founder of sandsiv+ (https://sandsiv.com), a software-as-a-service solution powered by Artificial Intelligence (AI) to help companies around the world correctly measure Customer Experience (CX) and actively manage the Customer Journey (CJ). With over 20 years of experience in data science and marketing, I helped sandsiv+ adopt a visionary methodology for gathering and analyzing information concerning customers, their details, their experiences, and their activities to build deeper and more effective customer relationships and improve strategic decision-making.

Relevant Links
https://www.linkedin.com/in/federico-cesconi
https://sandsiv.com/sandsiv-unveils-insight-narrator-revolutionizing-ai-capabilities-in-sandsiv/

The Top 3 Key Learnings
1. AI Can Automate Time-Consuming Tasks: The Insight Narrator uses AI to automate customer feedback analysis, reducing the time spent on repetitive tasks and enabling quicker decision-making.
2. Actionable Insights at Your Fingertips: AI provides real-time, prioritized insights, helping CX managers identify and address the most critical issues that affect customer satisfaction.
3. The Future of CX is AI-Driven: Federico highlights how AI tools like the Insight Narrator make customer experience management more accessible and effective, even for smaller companies.

Top 3 Quotes
“The Insight Narrator doesn't just find satisfaction drivers—it provides actionable insights to improve customer experience.”
“With AI, what used to take weeks can now be done in minutes.”
“Customer experience management should be about making life easier for both customers and the professionals managing their feedback.”

Chapters
00:00 Introduction and Guest Presentation
03:05 The Impact of Generative AI on Customer Experience
13:57 Introduction to the Insight Narrator Tool
14:16 Development and Functionality of the Insight Narrator
18:19 Benefits and Applications of the Insight Narrator
23:56 Future Developments and Improvements
29:27 Conclusion and Call to Action

We'd love to hear your thoughts! Did you enjoy this episode? Please share it with your network, and don't forget to subscribe and follow the CX Goalkeeper Podcast on your favorite platforms: Apple Podcast | Spotify
Join renowned restorative dentist Dr. Edward Feinberg as he delves into the age-old question: which is better, saving your own tooth or extracting it and placing an implant? With over 40 years of experience and a legacy of expertise, Dr. Feinberg shares his insights on the latest advancements, techniques and considerations in restorative dentistry.

From evaluating tooth suitability to understanding patient expectations, Dr. Feinberg explores the complexities of this critical decision. Tune in for thought-provoking discussions, real-life case studies and expert advice on:
- The pros and cons of tooth saving vs. implant placement
- The impact of dental technology on restorative dentistry
- Patient-centered approaches to dental care
- The future of restorative dentistry and digital dentistry

Whether you're a dental professional seeking to enhance your skills or a patient navigating the world of restorative dentistry, this podcast is your go-to resource for informed decision-making.

Dr. Feinberg works with dentists who want to improve their crown and bridgework skills so that they can deliver better treatment outcomes for their patients. Dr. Edward Feinberg is a graduate of Tufts University and practiced dentistry in Scarsdale, New York for more than 40 years. Now practicing in Arizona (www.edwardfeinbergdmd.com), he is the successor to a unique tradition of restorative dentistry. He was trained by a master and pioneer in full coverage restorative dentistry, Dr. Elliot Feinberg. The techniques used by Drs. Edward and Elliot Feinberg have been documented with more than 100,000 pictures taken during the past 70 years. Dr. Feinberg is currently Director of ONWARD, an online teaching organization for full coverage restorative dentistry (www.theONWARDprogram.com). To date he has created more than 30 online courses for the site. The site also has an extensive library of downloadable materials, a weekly blog and a forum.

Dr. Feinberg is a nationally recognized lecturer and a noted author of scientific and educational articles for dental publications, a textbook, The Double-Tilt Precision Attachment Case for Natural Teeth and Implants, and a book of essays on dentistry: Open Wide: Essays on Challenges in Dentistry to Achieve Excellence. Dr. Feinberg is a reviewer for the Journal of Oral Implantology and an Editorial Board Member of the AAIP's Implant Prosthodontic Monographs. In addition to educational activities, Dr. Feinberg has served on 4 Councils of the American Dental Association and currently sits on the Arizona Dental Association's Council on Annual Sessions and serves as Secretary-Treasurer of the Central Arizona Dental Society. He is a past president of the Ninth District Dental Association, a component of the New York State Dental Association with 1600 members. Dr. Feinberg has made notable contributions to other organizations such as the New York State Dental Association, the Greater NY Dental Meeting, the American Academy of Implant Prosthodontics, the NY State Pierre Fauchard Academy, the Scarsdale Rotary Club, the Scarsdale Family Counseling Service and the Scarsdale BNI. Dr. Feinberg is a recipient of the Ninth District Dental Association D. Austen Sniffen Award, the Paul Harris Fellowship Award and the NY State Pierre Fauchard Academy's Award for Distinguished Service.

http://www.theonwardprogram.com/

Become a supporter of this podcast: https://www.spreaker.com/podcast/i-am-refocused-radio--2671113/support.
Today, we'll explore the concept of becoming "unrecognizable" as a catalyst for change. Reflecting on a phone conversation from two decades ago, Jess shares how self-perception and assumptions shape our narratives and impact our actions. Together, we'll tackle familiar patterns that hold us back, embrace the discomfort of change, and consider what it means to "play bigger" in life and business. From personal stories of resilience, including navigating a new life phase with my newborn Mila, to professional insights and an exciting Unrecognizable challenge, join us as we step into new opportunities for growth. Let's embark on this journey of transformation, shedding old habits and embracing bold, new ways of being.

Key Takeaways:
- Becoming unrecognizable at various life stages
- Reflections on actions to "play bigger" and change self-perception
- Extracting wisdom from challenges for personal and professional evolution

Episode Resources
There are TWO ways to join us in Unrecognizable:
1️⃣ Click HERE to join The Club Monthly Membership ($49) + receive the Unrecognizable challenge for FREE!!!
2️⃣ Click HERE to sign up for the Unrecognizable challenge ($97) ONLY!
Will your business be exit-ready when you're ready to exit, with numbers that work for you?

Today on the podcast, Darryl Bates-Brownsword helps business owners answer this critical question, emphasizing the importance of transitioning on your terms. In this insightful episode, Darryl shares the keys to preparing for a business exit that aligns with your financial goals and lifestyle aspirations.

As a leading expert at Succession Plus, Darryl offers strategic guidance through the complexities of succession and exit planning. His mission? To ensure you and your business are set up for a smooth, successful transition—whether you're planning to sell, pass the business to the next generation, or take on a new role in a different capacity.

Extracting maximum value from your business doesn't have to mean being tied to an 'earn-out' period. Darryl specializes in working with businesses generating between £2m and £50m in revenue. His focus is on helping owners, within 3-5 years of exit, plan for a future that rewards their years of dedication.

Darryl introduces the 21 Steps methodology, a comprehensive roadmap to ensure you're ahead of the game when planning your exit. This proven system enhances business value, giving owners more time and financial flexibility to focus on what truly matters during and after the transition.

For more expert insights on preparing for a business exit, listen to the full episode of the Exit Insights podcast with Darryl Bates-Brownsword.

Looking to shape your exit strategy on your terms? Contact Darryl via LinkedIn (https://www.linkedin.com/in/darrylbates-brownsword/) or email him directly at dbatesbrownsword@successionplus.co.uk.

Thanks to our sponsors... BlueprintOS equips business owners to design and install an operating system that runs like clockwork.
Through BlueprintOS, you will grow and develop your leadership, clarify your culture and business game plan, align your operations with your KPIs, develop a team of A-Players, and execute your playbooks. Download the FREE Rainmaker to Architect Starter Kit at https://start.blueprintos.com!

Autopilot Recruiting is a continuous recruiting service where you'll be assigned a recruiter that has been trained to recruit on your behalf every business day. Go to www.autopilotrecruiting.com to get started.

Coach P found great success as an insurance agent and agency owner. He leads a large, stable team of professionals who are at the top of their game year after year. Now he shares the systems, processes, delegation, and specialization he developed along the way. Gain access to weekly training calls and mentoring at www.coachpconsulting.com. Be sure to mention the Above The Business Podcast when you get in touch.

TodayApp is a corporate approved app that allows you to build custom activities and track all your commissions and bonus structures, and integrates perfectly with your CRM. It can even manage your employees' time, track production, have a leaderboard with metrics, and more. Contact Today App for a custom demo and let them know you heard about them on The Above The Business Podcast. https://todayapppro.com/

Club Capital is the ultimate partner for financial management and marketing services, designed specifically for insurance agencies, fitness franchises, and youth soccer organizations. As the nation's largest accounting and financial advisory firm for insurance agencies, Club Capital proudly serves over 1,000 agency locations across the country—and we're just getting started. With Club Capital, you get more than just services; you get a dedicated account manager backed by a team of specialists committed to your success. From monthly accounting and tax preparation to CFO services and innovative digital...
Interested in joining Nolan's Commercial Real Estate Network, "The CRE Collective"? Click below to apply. https://www.thecrehabit.com/watch-copy--81253 This episode is sponsored by My Financial Snapshot. Visit MyFinancialSnapshot.com and use coupon code INFINITE20 for 20% off your subscription for life. The time is now to get started making personal finance easy and simple!
Constellations, a New Space and Satellite Innovation Podcast
In this episode, Al Tadros, Chief Technology Officer at Redwire discusses artificial intelligence (AI) and how it enhances Space Domain Awareness (SDA). Artificial intelligence promises to extend the performance and capability of control systems, machine vision and robotics across the domains. Al describes one of the promises for space domain awareness as being the ability to put machine vision in orbit using AI algorithms. Hear him describe datasets and simulations to train on so that characterization and intent can be determined. Learn about AI promising to advance autonomous maneuvering and navigation, even deciding whether to maneuver or not based on the incoming object.
Check out this episode wherever you like to listen or watch podcasts! Episode Page: https://vinneychopra.com/podcast/ Youtube: https://youtu.be/F8DDrfSirn4 Spotify: https://spoti.fi/423B4fz iTunes: https://apple.co/3tQ9Tsf ---- To learn more about how Vinney can help you, click here - https://linktr.ee/VinneySmileChopra Smile Always and Be Happy! ——— FREEBIE: https://vinneychopra.com/freebenefits/ JOIN MY FREE WEBINAR: https://bit.ly/golden-opportunities-webinar-vinney-chopra -----
PREVIEW: MOON: OXYGEN Colleague Bob Zimmerman describes the Sierra Space success in extracting oxygen from simulated lunar soil -- necessary for the future moon colonies. More details tonight. 2868 From the Earth to the Moon, Jules Verne
Hey Pot Heads! In this week's episode of Talking Pot Heads, we're stirring up the world of cannabis concentrates. We discuss how to make them, the differences between extracts, and how budtenders can guide customers in their dabbing methods. With special guests former budtender Brenna Saldaña and marketing Queen of Extractioneering Razia Hayden, the conversation explores the intricacies of extraction methods, solvent vs. solventless processes, and the different textures and styles found in today's market. They emphasize the importance of budtender education, quality products, and avoiding misconceptions around extraction techniques. Listeners will also hear personal anecdotes, expert insights, and practical advice for navigating the cannabis concentrate landscape, like be careful dabbing CRC cuz it could burn your eyeball!!

Thanks so much to this episode's sponsors! First up, shout out to Veriheal for sponsoring this season of Talking Pot Heads! If you need a medical marijuana card, Veriheal can connect you to cannabis-friendly doctors in your state. Visit them at Veriheal.com! And another huge shout out to Nimble Distro, one of Oregon's leading cannabis wholesale distributors. With a mission to do well and do good, Nimble exists to repair the damage caused by the War on Drugs. Their brands span all product categories and reach every inch of Oregon. Check out their fabulous brands, like Broomsticks, Kites, Orchid, Oshihana, SheBang!, and Joy Bombs in Oregon dispensaries. If you're an Oregon cannabis brand seeking a partner to manage your distribution, contact Nimble at nimbledistro.com!

Enter in our giveaway with Cool Smoke! Check out the pinned posts on Instagram and Tiktok or sign up for our email list to win! Don't forget to share this episode with friends, coworkers, and your favorite budtenders!

Download our Episode 11 FREEBIE here
Root Knowledge: A Budtender's Guide to Cannabis
Get your educational Terpene coloring book here!
Take Root Training
Take Root Training on Instagram
Take Root Training on TikTok
Chronic Gals on Instagram
Talking Pot Heads on TikTok
Visit Veriheal for your medical marijuana card
Get your own 2-piece Wood Pipe Smoking Kit from Cool Smoke

To listen to the full episode and stay up to date with Talking Pot Heads, subscribe on YouTube, Spotify or your favorite podcast platform and sign up for our email newsletter! Don't miss out on the valuable insights and educational freebie downloadables provided by the hosts and their expert guests.

Check out our collection of educational coloring books at TakeRootTraining.com/collections/all!

Alright pot heads, keep learning and keep growing!

--- Support this podcast: https://podcasters.spotify.com/pod/show/talkingpotheads/support
Today's guest, Nicholas Carlini, a research scientist at DeepMind, argues that we should be focusing more on what AI can do for us individually, rather than trying to have an answer for everyone.

"How I Use AI" - A Pragmatic Approach

Carlini's blog post "How I Use AI" went viral for good reason. Instead of giving a personal opinion about AI's potential, he simply laid out how he, as a security researcher, uses AI tools in his daily work. He divided it into 12 sections:
* To make applications
* As a tutor
* To get started
* To simplify code
* For boring tasks
* To automate tasks
* As an API reference
* As a search engine
* To solve one-offs
* To teach me
* Solving solved problems
* To fix errors

Each of the sections has specific examples, so we recommend going through it. It also includes all prompts used for it; in the "make applications" case, it's 30,000 words total!

My personal takeaway is that the majority of the work AI can do successfully is what humans dislike doing. Writing boilerplate code, looking up docs, taking repetitive actions, etc. These are usually boring tasks with little creativity, but with a lot of structure. This is the strongest argument for why LLMs, especially for code, are more beneficial to senior employees: if you can get the boring stuff out of the way, there's a lot more value you can generate. This is less and less true as you move toward entry-level jobs, which are mostly boring and repetitive tasks. Nicholas argues both sides ~21:34 in the pod.

A New Approach to LLM Benchmarks

We recently did a Benchmarks 201 episode, a follow-up to our original Benchmarks 101, and some of the issues have stayed the same. Notably, there's a big discrepancy between what benchmarks like MMLU test and what the models are used for. Carlini created his own domain-specific language for writing personalized LLM benchmarks.
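As a toy illustration of what a chained, personal-benchmark DSL could look like, here is a minimal Python sketch. To be clear, this is not Carlini's actual framework: the stage names mirror his notation, but the implementations below, including the canned model response standing in for a real API call, are assumptions made so the example is self-contained and runnable.

```python
# Toy sketch of a chained-benchmark DSL: each stage is a Node wrapping a
# function, and ">>" composes stages into a pipeline.

class Node:
    def __init__(self, fn):
        self.fn = fn

    def __rshift__(self, other):
        # Compose: feed this node's output into the next node.
        return Node(lambda x: other.fn(self.fn(x)))

    def run(self, x):
        return self.fn(x)

def LLMRun():
    # Stand-in for a real model call; a real harness would hit an API here.
    canned = {"Write hello world in python": 'print("hello world")'}
    return Node(lambda prompt: canned.get(prompt, ""))

def PythonRun():
    # Execute the generated code and capture its stdout.
    import io, contextlib
    def run(code):
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):
            exec(code, {})
        return buf.getvalue()
    return Node(run)

def SubstringEvaluator(expected):
    # Pass/fail check on the captured output.
    return Node(lambda output: expected in output)

# One personal benchmark case, mirroring the notation in the post:
test = LLMRun() >> PythonRun() >> SubstringEvaluator("hello world")
print(test.run("Write hello world in python"))  # -> True
```

A real harness would also need combinators like `|` for alternative evaluators and sandboxing around `exec`, but the core idea, turning your own past tasks into composable pass/fail pipelines, fits in a few dozen lines.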
The idea is simple but powerful:
* Take tasks you've actually needed AI for in the past.
* Turn them into benchmark tests.
* Use these to evaluate new models based on your specific needs.

It can represent very complex tasks, from a single code generation to drawing a US flag using C:

"Write hello world in python" >> LLMRun() >> PythonRun() >> SubstringEvaluator("hello world")

"Write a C program that draws an american flag to stdout." >> LLMRun() >> CRun() >> VisionLLMRun("What flag is shown in this image?") >> (SubstringEvaluator("United States") | SubstringEvaluator("USA"))

This approach solves a few problems:
* It measures what's actually useful to you, not abstract capabilities.
* It's harder for model creators to "game" your specific benchmark, a problem that has plagued standardized tests.
* It gives you a concrete way to decide if a new model is worth switching to, similar to how developers might run benchmarks before adopting a new library or framework.

Carlini argues that if even a small percentage of AI users created personal benchmarks, we'd have a much better picture of model capabilities in practice.

AI Security

While much of the AI security discussion focuses on either jailbreaks or existential risks, Carlini's research targets the space in between. Some highlights from his recent work:
* LAION 400M data poisoning: By buying expired domains referenced in the dataset, Carlini's team could inject arbitrary images into models trained on LAION 400M. You can read the paper "Poisoning Web-Scale Training Datasets is Practical" for all the details. This is a great example of expanding the scope beyond the model itself, and looking at the whole system and how it can become vulnerable.
* Stealing model weights: They demonstrated how to extract parts of production language models (like OpenAI's) through careful API queries.
This research, "Extracting Training Data from Large Language Models", shows that even black-box access can leak sensitive information.
* Extracting training data: In some cases, they found ways to make models regurgitate verbatim snippets from their training data. He and Milad Nasr wrote a paper on this as well: "Scalable Extraction of Training Data from (Production) Language Models". They also think this might be applicable to extracting RAG results from a generation.

These aren't just theoretical attacks. They've led to real changes in how companies like OpenAI design their APIs and handle data. If you really miss logit_bias and logit results by token, you can blame Nicholas :)

We had a ton of fun also chatting about things like Conway's Game of Life, how much data can fit on a piece of paper, and porting Doom to JavaScript. Enjoy!

Show Notes
* How I Use AI
* My Benchmark for LLMs
* Doom JavaScript port
* Conway's Game of Life
* Tic-Tac-Toe in one printf statement
* International Obfuscated C Code Contest
* Cursor
* LAION 400M poisoning paper
* Man vs Machine at Black Hat
* Model Stealing from OpenAI
* Milad Nasr
* H.D. Moore
* Vijay Bolina
* Cosine.sh
* uuencode

Timestamps
* [00:00:00] Introductions
* [00:01:14] Why Nicholas writes
* [00:02:09] The Game of Life
* [00:05:07] "How I Use AI" blog post origin story
* [00:08:24] Do we need software engineering agents?
* [00:11:03] Using AI to kickstart a project
* [00:14:08] Ephemeral software
* [00:17:37] Using AI to accelerate research
* [00:21:34] Experts vs non-expert users as beneficiaries of AI
* [00:24:02] Research on generating less secure code with LLMs
* [00:27:22] Learning and explaining code with AI
* [00:30:12] AGI speculations?
* [00:32:50] Distributing content without social media
* [00:35:39] How much data do you think you can put on a single piece of paper?
* [00:37:37] Building personal AI benchmarks
* [00:43:04] Evolution of prompt engineering and its relevance
* [00:46:06] Model vs task benchmarking
* [00:52:14] Poisoning LAION 400M through expired domains
* [00:55:38] Stealing OpenAI models from their API
* [01:01:29] Data stealing and recovering training data from models
* [01:03:30] Finding motivation in your work

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO-in-Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:12]: Hey, and today we're in the in-person studio, which Alessio has gorgeously set up for us, with Nicholas Carlini. Welcome. Thank you. You're a research scientist at DeepMind. You work at the intersection of machine learning and computer security. You got your PhD from Berkeley in 2018, and also your BA from Berkeley as well. And mostly we're here to talk about your blogs, because you are so generous in just writing up what you know. Well, actually, why do you write?

Nicholas [00:00:41]: Because I like, I feel like it's fun to share what you've done. I don't like writing; I sufficiently didn't like writing that I almost didn't do a PhD, because I knew how much writing was involved in writing papers.
I was terrible at writing when I was younger. I took the remedial writing classes when I was in university, because I was really bad at it. So I don't actually enjoy, I still don't enjoy the act of writing. But I feel like it is useful to share what you're doing, and I like being able to talk about the things that I'm doing that I think are fun. And so I write because I think I want to have something to say, not because I enjoy the act of writing.

Swyx [00:01:14]: But yeah. It's a tool for thought, as they often say. Is there any sort of background or thing that people should know about you as a person? Yeah.

Nicholas [00:01:23]: So I tend to focus on, like you said, I do security work. I like attacking things, and I want to do high quality security research. And that's mostly what I spend my actual time on, trying to be a productive member of society doing that. But then I get distracted by things, and I just like, you know, working on random fun projects. Like a Doom clone in JavaScript.

Swyx [00:01:44]: Yes.

Nicholas [00:01:45]: Like that. Or, you know, I've done a number of things that have absolutely no utility. But are fun things to have done. And so it's interesting to say, like, you should work on fun things that just are interesting, even if they're not useful in any real way. And so that's what I tend to put up there: after I have completed something I think is fun, or if I think it's sufficiently interesting, I write something down there.

Alessio [00:02:09]: Before we go into like AI, LLMs and whatnot, why are you obsessed with the game of life? So you built multiplexing circuits in the game of life, which is mind boggling. So where did that come from? And then how do you go from just clicking boxes on the UI web version to like building multiplexing circuits?

Nicholas [00:02:29]: I like Turing completeness. The definition of Turing completeness is a computer that can run anything, essentially.
And the game of life, Conway's game of life, is a very simple 2D cellular automaton where you have cells that are either on or off. And a cell becomes on if in the previous generation some configuration holds true, and off otherwise. It turns out there's a proof that the game of life is Turing complete, that you can run any program in principle using Conway's game of life. And so you can, therefore someone should. And so I wanted to do it. Some other people have done some similar things, but I got obsessed into like, if you're going to try and make it work, like we already know it's possible in theory. I want to try and like actually make something I can run on my computer, like a real computer I can run. And so yeah, I've been going on this rabbit hole of trying to make a CPU that I can run semi real time on the game of life. And I have been making some reasonable progress there. And yeah, but you know, Turing completeness is just like a very fun trap you can go down. A while ago, as part of a research paper, I was able to show that in C, if you call into printf, it's Turing complete. Like printf, you know, like, which like, you know, you can print numbers or whatever, right?

Swyx [00:03:39]: Yeah, but there should be no like control flow stuff.

Nicholas [00:03:42]: Because printf has a percent n specifier that lets you write an arbitrary amount of data to an arbitrary location. And the printf format specifier has an index into where it is in the loop that is in memory. So you can overwrite the location of where printf is currently indexing using percent n. So you can get loops, you can get conditionals, and you can get arbitrary data writes again. So we sort of have another Turing complete language using printf, which again, like this has essentially zero practical utility, but like, it's just, I feel like a lot of people get into programming because they enjoy the art of doing these things.
And then they go work on developing some software application and lose all the joy in doing it. And I want to still have joy in doing these things. And so on occasion, I try to stop doing productive, meaningful things and just ask, what's a fun thing that we can do, and try and make that happen.Alessio [00:04:39]: Awesome. So you've been kind of like a pioneer in the AI security space. You've done a lot of talks starting back in 2018. We'll kind of leave that to the end because I know the security part is, there's maybe a smaller audience, but it's a very intense audience. So I think that'll be fun. But everybody in our Discord started posting your How I Use AI blog post and we were like, we should get Carlini on the podcast. And then you were so nice to just, yeah, and then I sent you an email and you're like, okay, I'll come.Swyx [00:05:07]: And I was like, oh, I thought that would be harder.Alessio [00:05:10]: I think there's, as you said in the blog post, a lot of misunderstanding about what LLMs can actually be used for. What are they useful at? What are they not good at? And whether or not it's even worth arguing about what they're not good at, because they're obviously not. So if the model cannot count the R's in a word, well, that's just not what it does. So how painful was it to write such a long post, given that you just said that you don't like to write? Yeah. And then we can kind of run through the things, but maybe just talk about the motivation, why you thought it was important to do it.Nicholas [00:05:39]: Yeah. So I wanted to do this because I feel like most people who write about language models being good or bad have some underlying message of, like, you know, they have their camp, and their camp is like, AI is bad or AI is good or whatever. And they spin whatever they're going to say according to their ideology. And they don't actually just look at what is true in the world.
So I've read a lot of things where people say how amazing they are and how all programmers are going to be obsolete by 2024. And I've read a lot of things where people say, like, they can't do anything useful at all. And, you know, like, it's only the people who've come off of, you know, blockchain crypto stuff and are here to make another quick buck and move on. And I don't really agree with either of these. And I'm not someone who cares really one way or the other how these things go. And so I wanted to write something that just says, look, let's sort of ground this in reality and what we can actually do with these things. Because my actual research is in security and showing that these models have lots of problems. Like, this is my day to day job: saying we probably shouldn't be using these in lots of cases. I thought I could have a little bit of credibility in saying, it is true, they have lots of problems, we maybe shouldn't be deploying them in lots of situations. And still, they are also useful. And that is the bit that I wanted to get across: to say, I'm not here to try and sell you on anything. I just think that they're useful for the kinds of work that I do. And hopefully, some people would listen. And it turned out that a lot more people liked it than I thought. But yeah, that was the motivation behind why I wanted to write this.Alessio [00:07:15]: So you had about a dozen sections of how you actually use AI. Maybe we can just kind of run through them all. And then maybe the ones where you have extra commentary to add, we can... Sure.Nicholas [00:07:27]: Yeah, yeah. I didn't put as much thought into this as maybe was deserved. I probably spent, I don't know, definitely less than 10 hours putting this together.Swyx [00:07:38]: Wow.Alessio [00:07:39]: It took me close to that to do a podcast episode. So that's pretty impressive.Nicholas [00:07:43]: Yeah. I wrote it in one pass.
I've gotten a number of emails of like, you got this editing thing wrong, you got this sort of other thing wrong. It's like, I just haven't looked at it. I feel like I still don't like writing. And so because of this, the way I tend to treat this is: I will put it together into the best format that I can at the time, and then put it on the internet, and then never change it. And this is an aspect of the research side of me: once a paper is published, it is done as an artifact that exists in the world. I could forever edit the very first thing I ever put up to make it the most perfect version of what it is, and I would do nothing else. And so I find it useful to be like, this is the artifact, I will spend some certain amount of hours on it, which is what I think it is worth. And then I will just...Swyx [00:08:22]: Yeah.Nicholas [00:08:23]: Timeboxing.Alessio [00:08:24]: Yeah. Stop. Yeah. Okay. We just recorded an episode with the founder of Cosine, which is like an AI software engineer colleague. You said it took you 30,000 words to get GPT-4 to build you the "can GPT-4 solve this" kind of app. Where are we on the spectrum where ChatGPT is all you need to actually build something versus I need a full-on agent that does everything for me?Nicholas [00:08:46]: Yeah. Okay. So this was an... So I built a web app last year sometime that was just a fun demo where you try to predict whether or not GPT-4 at the time could solve a given task. This is, as far as web apps go, very straightforward. You need basic HTML, CSS, you have a little slider that moves, you have a button, sort of animate the text coming to the screen. The reason people are going here is not because they want to see my wonderful HTML, right? I used to know how to do modern HTML in 2007, 2008. I was very good at fighting with IE6 and these kinds of things. I knew how to do that.
I have not had to build any web app stuff in the meantime, which means that I know how everything works, but I don't know any of the new... Flexbox is new to me. Flexbox is like 10 years old at this point, but it's just amazing being able to go to the model and just say, write me this thing, and it will give me all of the boilerplate that I need to get going. Of course it's imperfect. It's not always going to get you the right answer, and it doesn't do anything that's complicated right now, but it gets you to the point where the only remaining work that needs to be done is the interesting hard part for me, the actual novel part. Even the current models, I think, are entirely good enough at doing this kind of thing that they're very useful. It may be the case that if you had something, like you were saying, a smarter agent that could debug problems by itself, that might be even more useful. Currently, though, you can make a model into an agent by just copying and pasting error messages, for the most part. That's what I do: you run it and it gives you some code that doesn't work, and either I'll fix the code, or it will give me buggy code and I won't know how to fix it, and I'll just copy and paste the error message and say, it tells me this. What do I do? And it will just tell me how to fix it. You can't trust these things blindly, but I feel like most people on the internet already understand that things on the internet, you can't trust blindly. And so this is not a big mental shift you have to go through to understand that it is possible to read something and find it useful, even if it is not completely perfect in its output.Swyx [00:10:54]: It's very human-like in that sense. It's the same ring of trust, I kind of think about it that way, if you had trust levels.Alessio [00:11:03]: And there's maybe a couple that tie together.
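That copy-paste-the-error loop is mechanical enough to sketch. Everything below is illustrative scaffolding, not anything from the episode: `ask_model` is a hypothetical stand-in for whatever chat interface you use, here hard-coded to fix one typo so the loop is runnable end to end:

```python
import os
import subprocess
import sys
import tempfile

def run_and_capture(code):
    """Run a Python snippet in a subprocess; return (ok, stdout or last error line)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run([sys.executable, path],
                              capture_output=True, text=True, timeout=30)
    finally:
        os.unlink(path)
    if proc.returncode == 0:
        return True, proc.stdout
    # The last line of a traceback is the error message you'd paste into a chat.
    return False, proc.stderr.strip().splitlines()[-1]

def ask_model(code, error):
    """Hypothetical stand-in for a chat model: paste the code and the error,
    get revised code back. Here it just fixes one known typo."""
    return code.replace("pirnt", "print")

code = "pirnt('hello')"
ok, result = run_and_capture(code)
while not ok:  # the "agent" loop: feed the error back until the program runs
    code = ask_model(code, result)
    ok, result = run_and_capture(code)
```

The run/capture/retry shape is the whole trick; swapping the stub for a real model call turns it into the workflow he describes.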
So there was, like, "to make applications," and then there's "to get started," which is similar, you know: kickstarting maybe a project that you know the LLM cannot fully solve. Is that kind of how you think about it?Nicholas [00:11:15]: Yeah. So getting started on things is one of the cases where I think it's really great for some of these things, where I sort of use it as a personalized, help-me-use-this-technology-I've-never-used-before tool. So for example, I had never used Docker before January. I know what Docker is. Lucky you. Yeah, like I'm a computer security person, like I sort of, I have read lots of papers on, you know, all the technology behind how these things work. You know, I know all the exploits on them, I've done some of these things, but I had never actually used Docker. But I wanted to be able to run the outputs of language model stuff in some controlled, contained environment, which I know is the right application. So I just ask it, like, I want to use Docker to do this thing, tell me how to run a Python program in a Docker container. And it gives me a thing. I'm like, step back. You said Docker Compose. I do not know what this word Docker Compose is. Is this Docker? Help me. And, like, it'll sort of tell me all of these things. And I'm sure this knowledge is out there on the internet, like this is not some groundbreaking thing that I'm doing, but I just wanted it as a small piece of one thing I was working on. And I didn't want to learn Docker from first principles. Like, at some point, if I need it, I can do that. I have the background that I can make that happen. But what I wanted to do was thing one. And it's very easy to get bogged down in the details of this other thing that helps you accomplish your end goal. And I just want to, like, tell me enough about Docker so I can do this particular thing. And I can check that it's doing the safe thing.
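For what it's worth, the kind of answer he's asking for looks something like the following sketch of running an untrusted script in a throwaway container. The flags are standard Docker CLI options; the image tag and file names are placeholders, so adapt before trusting any model's version of this:

```shell
# Run snippet.py in a disposable container: deleted on exit (--rm),
# no network access, read-only filesystem, script mounted read-only.
docker run --rm --network none --read-only \
  -v "$PWD/snippet.py:/snippet.py:ro" \
  python:3.12-slim python /snippet.py
```

Being able to verify that each flag does what you think it does ("I can check that it's doing the safe thing") is exactly the security background he's leaning on.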
I sort of know enough about that from, you know, my other background. And so I can just have the model help teach me exactly the one thing I want to know and nothing more. I don't need to worry about other things that the writer of this thinks is important that actually isn't. Like, I can just stop the conversation and say, no, boring to me. Explain this detail. I don't understand. I think that's why that was very useful for me. It would have taken me, you know, several hours to figure out some things that take 10 minutes if you can just ask exactly the question you want the answer to.Alessio [00:13:05]: Have you had any issues with, like, newer tools? Have you felt any meaningful kind of cutoff date where, like, there's not enough data on the internet, or? I'm sure that the answer to this is yes.Nicholas [00:13:16]: But I tend to just not use most of these things. Like, I feel like the significant way in which I use machine learning models is probably very different from how most people use them, in that I'm a researcher and I get to pick what tools I use, and most of the things that I work on are fairly small projects. And so I can entirely see how someone who is in a big giant company where they have their own proprietary legacy code base of a hundred million lines of code or whatever might not be able to use things the same way that I do. I still think there are lots of use cases there that are entirely reasonable that are not the same ones that I've put down. But I wanted to talk about what I have personal experience in being able to say is useful. And I would like it very much if someone who is in one of these environments would be able to describe the ways in which they find current models useful to them.
And not, you know, philosophize on what someone else might be able to find useful, but actually say, like, here are real things that I have done that I found useful for me.Swyx [00:14:08]: Yeah, this is what I often do to encourage people to write more, to share their experiences, because they often fear being attacked on the internet. But you are the ultimate authority on how you use things, and that's objectively true, so it cannot be debated. One thing that people are very excited about is the concept of ephemeral software, or like personal software. This use case in particular basically lowers the activation energy for creating software, which I like as a vision. I don't think I have taken as much advantage of it as I could. I feel guilty about that. But also, we're trending towards there.Nicholas [00:14:47]: Yeah. No, I mean, I do think that this is a direction that is exciting to me. One of the things I wrote was that, like, a lot of the ways that I use these models are for one-off things that I just need to happen that I'm going to throw away in five minutes. And you can.Swyx [00:15:01]: Yeah, exactly.Nicholas [00:15:02]: Right. It's like the kind of thing where it would not have been worth it for me to have spent 45 minutes writing this, because I don't need the answer that badly. But if it will only take me five minutes, then I'll just ask, run the program, and see if it got it right. And if it turns out that you ask the thing and it doesn't give you the right answer, well, I didn't actually need the answer that badly in the first place. Like, either I can decide to dedicate the 45 minutes or I cannot, but the cost of trying is fairly low. You see what the model can do.
And if it can't, then, okay. When you're using these models, if you're always getting the answer you want, it means you're not asking them hard enough questions.Swyx [00:15:35]: Say more.Nicholas [00:15:37]: Lots of people only use them for very small particular use cases, and it always does the thing that they want. Yeah.Swyx [00:15:43]: Like they use it like a search engine.Nicholas [00:15:44]: Yeah. Or like one particular case. And if you're finding that when you're using these, it's always giving you the answer that you want, then probably it has more capabilities than you're actually using. And so I oftentimes try, when I have something that I'm curious about, to just feed it into the model and be like, well, maybe it's just solved my problem for me. You know, most of the time it doesn't, but on occasion it's done things that would have taken me, you know, a couple hours, and it's been great and just solved everything immediately. And if it doesn't, then it's usually easier to verify whether or not the answer is correct than to have written it in the first place. And so you check, and you're like, well, this is entirely misguided. Nothing here is right. I'm not going to do this. I'm going to go write it myself or whatever.Alessio [00:16:21]: Even for non-tech, I had to fix my irrigation system. I had an old irrigation system. I didn't know how to program it. I took a photo, I sent it to Claude, and it's like, oh yeah, that's like the RT 900. This is exactly, I was like, oh wow, you know, you know a lot of stuff.Swyx [00:16:34]: Was it right?Alessio [00:16:35]: Yeah, it was right.Swyx [00:16:36]: It worked. Did you compare with OpenAI?Alessio [00:16:38]: No, I canceled my OpenAI subscription, so I'm a Claude boy. Do you have a way to think about this one-off software thing?
One way I talk to people about it is like LLMs are kind of converging to like semantic serverless functions, you know, like you can say something and like it can run the function in a way and then that's it. It just kind of dies there. Do you have a mental model to just think about how long it should live for and like anything like that?Nicholas [00:17:02]: I don't think I have anything interesting to say here, no. I will take whatever tools are available in front of me and try and see if I can use them in meaningful ways. And if they're helpful, then great. If they're not, then fine. And like, you know, there are lots of people that I'm very excited about seeing all these people who are trying to make better applications that use these or all these kinds of things. And I think that's amazing. I would like to see more of it, but I do not spend my time thinking about how to make this any better.Alessio [00:17:27]: What's the most underrated thing in the list? I know there's like simplified code, solving boring tasks, or maybe is there something that you forgot to add that you want to throw in there?Nicholas [00:17:37]: I mean, so in the list, I only put things that people could look at and go, I understand how this solved my problem. I didn't want to put things where the model was very useful to me, but it would not be clear to someone else that it was actually useful. So for example, one of the things that I use it a lot for is debugging errors. But the errors that I have are very much not the errors that anyone else in the world will have. And in order to understand whether or not the solution was right, you just have to trust me on it. Because, you know, like I got my machine in a state that like CUDA was not talking to whatever some other thing, the versions were mismatched, something, something, something, and everything was broken. And like, I could figure it out with interaction with the model, and it gave it like told me the steps I needed to take. 
But at the end of the day, when you look at the conversation, you just have to trust me that it worked. And I didn't want to write things online where, like, you just have to trust that what I'm saying is true. I want everything that I said to have evidence: here's the conversation, you can go and check whether or not this actually solved the task as I said the model did. Because a lot of people, I feel like, say, I used a model to solve this very complicated task, and what they mean is the model did 10% and I did the other 90% or something. I wanted everything to be verifiable. And so one of the biggest use cases for me, I didn't describe at all, because it's not the kind of thing that other people could have verified by themselves. So that maybe is one of the things that I wish I had said a little bit more about, because I feel like it didn't come across quite as well. But yeah, of the things that I talked about, the thing that I think is most underrated is the ability of it to solve the uninteresting parts of problems for me. One of the biggest arguments that I don't understand is when people say the model can only do things that people have done before; therefore, the model is not going to be helpful in doing new research or discovering new things. And as someone whose day job is to do new things: what is research? Research is doing something literally no one else in the world has ever done before. So this is what I do every single day. But 90% of this is not doing something new; 90% of this is doing things a million people have done before, and then a little bit of something that is new. There's a reason why we say we stand on the shoulders of giants. It's true. Almost everything that I do is something that's been done many, many times before. And that is the piece that can be automated.
Even if the thing that I'm doing as a whole is new, it is almost certainly the case that the small pieces that build up to it are not. And a number of people who use these models, I feel like, expect that they can either solve the entire task or none of the task. But now I find myself very often, even when doing something very new and very hard, having models write the easy parts for me. And the reason I think this is so valuable, everyone who programs understands this: you're currently trying to solve some problem and then you get distracted. Whatever the case may be, someone comes and talks to you, you have to go look up something online, whatever it is. You lose a lot of time to that. And one of the ways we don't usually think about being distracted is: you're solving some hard problem and you realize you need a helper function that does X, where X is a known algorithm. Any person in the world could do it. You say, like, give me the algorithm: I have a sparse graph, and I need to make it dense. You can do this by doing some matrix multiplies. It's like, this is a solved problem. I knew how to do this 15 years ago, but it distracts me from the problem I'm thinking about in my mind. I needed this done. And so instead of using my mental capacity on solving that problem and then coming back to the problem I was originally trying to solve, you can just ask the model, please solve this problem for me. It gives you the answer. You run it. You can check that it works very, very quickly. And now you go back to solving the problem without having lost all the mental state. And I feel like this is one of the things that's been very useful for me.Swyx [00:21:34]: And in terms of this concept of expert users versus non-expert users, floors versus ceilings, you had some strong opinion here that, like, basically it actually is more beneficial for non-experts.Nicholas [00:21:46]: Yeah, I don't know. I think it could go either way.
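The helper he's gesturing at is a fully solved, boring building block, which is exactly the kind of thing to delegate. A minimal sketch of the sparse-to-dense conversion (the function name and representation are made up for illustration):

```python
def edges_to_dense(n, edges):
    """Convert a sparse edge list for an n-node directed graph into a
    dense n x n adjacency matrix (mat[u][v] == 1 iff there is an edge u -> v)."""
    mat = [[0] * n for _ in range(n)]
    for u, v in edges:
        mat[u][v] = 1
    return mat
```

It's trivially checkable: run it on a couple of small graphs, eyeball the output, and get back to the hard problem without having dropped the mental state.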
Let me give you the argument for both of these. Yes. So I can only speak on the expert user behalf because I've been doing computers for a long time. And so yeah, the cases where it's useful for me are exactly these cases where I can check the output. I know, and anything the model could do, I could have done. I could have done better. I can check every single thing that the model is doing and make sure it's correct in every way. And so I can only speak and say, definitely it's been useful for me. But I also see a world in which this could be very useful for the kinds of people who do not have this knowledge, with caveats, because I'm not one of these people. I don't have this direct experience. But one of these big ways that I can see this is for things that you can check fairly easily, someone who could never have asked or have written a program themselves to do a certain task could just ask for the program that does the thing. And you know, some of the times it won't get it right. But some of the times it will, and they'll be able to have the thing in front of them that they just couldn't have done before. And we see a lot of people trying to do applications for this, like integrating language models into spreadsheets. Spreadsheets run the world. And there are some people who know how to do all the complicated spreadsheet equations and various things, and other people who don't, who just use the spreadsheet program but just manually do all of the things one by one by one by one. And this is a case where you could have a model that could try and give you a solution. 
And as long as the person is rigorous in testing that the solution does actually do the correct thing, and this is the part that I'm worried about most, you know. I think depending on these systems in ways that we shouldn't, like, this is what my research says. My research is entirely on this: you probably shouldn't trust these models to do things in adversarial situations. I understand this very deeply. And so I think that it's possible for people who don't have this knowledge to make use of these tools in ways, but I'm worried that it might end up in a world where people just blindly trust them, deploy them in situations that they probably shouldn't, and then someone like me gets to come along and just break everything because everything is terrible. And so I am very, very worried about that being the case, but I think if done carefully it is possible that these could be very useful.Swyx [00:23:54]: Yeah, there is some research out there that shows that when people use LLMs to generate code, they do generate less secure code.Nicholas [00:24:02]: Yeah, Dan Boneh has a nice paper on this. There are a bunch of papers that touch on exactly this.Swyx [00:24:07]: My slight issue is, you know, is there an agenda here?Nicholas [00:24:10]: I mean, okay, yeah, Dan Boneh, at least the one they have, like, I fully trust everything that sort of.Swyx [00:24:15]: Sorry, I don't know who Dan is.Nicholas [00:24:17]: He's a professor at Stanford. Yeah, he and some students have some things on this. Yeah, there's a number. I agree that a lot of the stuff feels like people have an agenda behind it. There are some that don't, and I trust them to have done the right thing.
I also think, even on this though, we have to be careful, because whenever someone says X is true about language models, you should always append the suffix "for current models." Because I'll be the first to admit I was one of the people who was very much of the opinion that these language models are fun toys and are going to have absolutely no practical utility. If you had asked me this, let's say, in 2020, I still would have said the same thing. After I had seen GPT-2, I had written a couple of papers studying GPT-2 very carefully. I still would have told you these things are toys. And when I first read the RLHF paper and the instruction tuning paper, I was like, nope, this is this thing that these weird AI people are doing. They're trying to make some analogies to people that make no sense. It's just like, I don't even care to read it. I saw what it was about and just didn't even look at it. I was obviously wrong. These things can be useful. And I feel like a lot of people had the same mentality that I did and decided not to change their mind. And I feel like this is the thing that I want people to be careful about. I want them to at least know what is true about the world so that they can then see that maybe they should reconsider some of the opinions that they had from four or five years ago that may just not be true about today's models.Swyx [00:25:47]: Specifically because you brought up spreadsheets, I want to share my personal experience, because I think Google has done a really good job that people don't know about, which is if you use Google Sheets, Gemini is integrated inside of Google Sheets and it helps you write formulas. Great.Nicholas [00:26:00]: That's news to me.Swyx [00:26:01]: Right? Maybe they don't do a good job of telling people. Unless you watched Google I/O, there was no other opportunity to learn that Gemini is now in your Google Sheets. And so I just don't write formulas manually anymore. I just prompt Gemini to do it for me.
And it does it.Nicholas [00:26:15]: One of the problems that these machine learning models have is a discoverability problem. I think this will be figured out. I mean, it's the same problem that you have with any assistant. You're given a blank box and you're like, what do I do with it? I think this is great. More of these things, it would be good for them to exist. I want them to exist in ways that we can actually make sure that they're done correctly. I don't want to just have them be pushed into more and more things just blindly. I feel like lots of people, there are far too many X plus AI, where X is like arbitrary thing in the world that has nothing to do with it and could not be benefited at all. And they're just doing it because they want to use the word. And I don't want that to happen.Swyx [00:26:58]: You don't want an AI fridge?Nicholas [00:27:00]: No. Yes. I do not want my fridge on the internet.Swyx [00:27:03]: I do not want... Okay.Nicholas [00:27:05]: Anyway, let's not go down that rabbit hole. I understand why some of that happens, because people want to sell things or whatever. But I feel like a lot of people see that and then they write off everything as a result of it. And I just want to say, there are allowed to be people who are trying to do things that don't make any sense. Just ignore them. Do the things that make sense.Alessio [00:27:22]: Another chunk of use cases was learning. So both explaining code, being an API reference, all of these different things. Any suggestions on how to go at it? I feel like one thing is generate code and then explain to me. One way is just tell me about this technology. Another thing is like, hey, I read this online, kind of help me understand it. Any best practices on getting the most out of it?Swyx [00:27:47]: Yeah.Nicholas [00:27:47]: I don't know if I have best practices. 
I have how I use them.Swyx [00:27:51]: Yeah.Nicholas [00:27:51]: I find it very useful for cases where I understand the underlying ideas, but I have never used them in this way before. I know what I'm looking for, but I just don't know how to get there. And so yeah, as an API reference is a great example. The tool everyone always picks on is FFmpeg. No one in the world knows the command line arguments to do what they want. They're like, make the thing faster, I want lower bitrate, like, dash V. Once you tell me what the answer is, I can check. This is one of these things where it's great for these kinds of tasks. Or in other cases, things where I don't really care that the answer is 100% correct. So for example, I do a lot of security work. Most of security work is reading some code you've never seen before and finding out which pieces of the code are actually important. Because, you know, most of the program doesn't actually have anything to do with security. It has, you know, the display piece or the other piece or whatever. And, like, you want to just ignore all of that. So one very fun use of models is to just have it describe all the functions, and just skim that and be like, wait, which ones look like approximately the right things to look at? Because otherwise, what are you going to do? You're going to have to read them all manually. And when you're reading them manually, you're going to skim the function anyway, and not just figure out what's going on perfectly. Like, you already know that when you're going to read these things, what you're going to try and do is figure out roughly what's going on. Then you'll delve into the details. This is a great way of just doing that, but faster, because it will abstract most of what is going on.Swyx [00:29:21]: Right.Nicholas [00:29:21]: It's going to be wrong some of the time.
I don't care.Swyx [00:29:23]: I would have been wrong too.Nicholas [00:29:24]: And as long as you treat it this way, I think it's great. And so one of the particular use cases I have in the post is decompiling binaries, where oftentimes people will release a binary, they won't give you the source code, and you want to figure out how to attack it. And so one thing you could do is you could try and run some kind of decompiler. It turns out for the thing that I wanted, none existed. And so I spent too many hours doing it by hand before I finally thought, why am I doing this? I should just check if the model could do it for me. And it turns out that it can. And it can turn the compiled code, which is impossible for any human to understand, into Python code that is entirely reasonable to understand. And it doesn't run. It has a bunch of problems. But it's so much nicer that it's immediately a win for me. I can just figure out approximately where I should be looking, and then spend all of my time doing that by hand. And again, you get a big win there.Swyx [00:30:12]: So I fully agree with all those use cases, especially for you as a security researcher and having to dive into multiple things. I imagine that's super helpful. I do think we want to move to your other blog post. But you ended your post with a little bit of a teaser about your next post and your speculations. What are you thinking about?Nicholas [00:30:34]: So I want to write something. And I will do that at some point when I have time, maybe after I'm done writing my current papers for ICLR or something, where I want to talk about some thoughts I have for where language models are going in the near-term future.
The reason why I want to talk about this is because, again, I feel like the discussion tends to be people who are either very much "AGI by 2027," or "it's always five years away," or are going to make statements of the form, you know, LLMs are the wrong path, and we should be abandoning this, and we should be doing something else instead. And again, I feel like people tend to look at this and see these two polarizing options and go, well, those obviously are both very far extremes. Like, what's a more nuanced take here? And so I have some opinions about this that I want to put down, just saying, you know, I have wide margins of error. I think you should too. If you would say there's a 0% chance that, you know, the models will get very, very good in the next five years, you're probably wrong. If you're going to say there's a 100% chance that they will in the next five years, then you're probably wrong. And, to be fair, most of the people, if you read behind the headlines, actually say something like this. But it's very hard to get clicks on the internet for "some things may be good in the future." Everyone wants either "nothing is going to be good, this is entirely wrong," or "it's going to be amazing." You know, that's what they want to see. I want people who have negative reactions to these kinds of extreme views to at least be able to tell them, there is something real here. It may not solve all of our problems, but it's probably going to get better. I don't know by how much. And that's basically what I want to say. And then at some point, I'll talk about the safety and security things as a result of this. Because the way in which security intersects with these things depends a lot on exactly how people use these tools.
You know, if it turns out to be the case that these models get to be truly amazing and can solve tasks completely autonomously, that's a very different security world to be living in than if there's always a human in the loop, and the types of security questions I would want to ask would be very different. And so, in some very large part, understanding what the future will look like a couple of years ahead of time is helpful for figuring out which problems, as a security person, I want to solve now.
Alessio [00:32:50]: You mentioned getting clicks on the internet, but you don't even have, like, an X account or anything. How do you get people to read your stuff? What's your distribution strategy? Because this post was popping up everywhere. And then people on Twitter were like, Nicholas Carlini wrote this. What's his handle? He doesn't have one. How did you find it? What's the story?
Nicholas [00:33:07]: So I have an RSS feed and an email list. And that's it. I don't like most social media things. On principle, I feel like they have some harms. As a person, I have a problem when people say things that are wrong on the internet, and I would get nothing done if I had a Twitter. I would spend all of my time correcting people and getting into fights. And so I feel like it is just useful for me for this not to be an option. I tend to just post things online. Yeah, it's a very good question. I don't know how people find it. I feel like for some things that I write, other people think it resonates with them, and then they put it on Twitter.
Swyx [00:33:43]: Hacker News as well.
Nicholas [00:33:44]: Sure, yeah. Because my day job is doing research, I get no value from having this be picked up. I don't need to be someone who has to have this other thing to give talks. And so I feel like I can just say what I want to say. And if people find it useful, then they'll share it widely.
You know, this one went pretty wide. I wrote a thing, whatever, sometime late last year, about how to recover data off of an Apple ProFile drive from the 1980s. That probably got, I think, 1000x fewer views than this. But I don't care. That's not why I'm doing this. This is the benefit of having a thing that I actually care about, which is my research. I would care much more if that didn't get seen. This is a thing that I write because I have some thoughts that I just want to put down.
Swyx [00:34:32]: Yeah. I think it's the long-form thoughtfulness and authenticity that is sadly lacking sometimes in modern discourse that makes it attractive. And I think now you have a little bit of a brand of being an independent thinker, writer, person, that people are tuned in to pay attention to whatever is coming next.
Nicholas [00:34:52]: Yeah, I mean, this kind of worries me a little bit. I don't like it whenever I have a popular thing and then I write another thing which is entirely unrelated.
Swyx [00:35:01]: You should actually just throw people off right now.
Nicholas [00:35:02]: Exactly. I'm trying to figure out, like, I need to put something else online. The last two or three things I've done in a row have been, like, actually things that people should care about.
Swyx [00:35:10]: Yes.
Nicholas [00:35:11]: So, I have a couple of things. I'm trying to figure out which one do I put online to just cull the list of people who have subscribed to my email. And so, like, tell them: no, what you're here for is not informed, well-thought-through takes. What you're here for is whatever I want to talk about. And if you're not up for that, then, you know, go away.
Like, this is not what I want out of my personal website.
Swyx [00:35:27]: So, like, here's, like, top 10 enemies or something.
Alessio [00:35:30]: What's the next project you're going to work on that is completely unrelated to researching LLMs? Or what games do you want to port into the browser next?
Nicholas [00:35:39]: Okay. So, maybe. Here's a fun question. How much data do you think you can put on a single piece of paper?
Swyx [00:35:47]: I mean, you can think about bits and atoms.
Nicholas [00:35:49]: No, like, a normal printer. Like, I gave you an office printer. How much data can you put on a piece of paper?
Alessio [00:35:54]: Can you re-decode it? So, like, you know, base64 it or whatever.
Nicholas [00:35:59]: Yeah, whatever you want. You get a normal off-the-shelf printer, off-the-shelf scanner. How much data?
Swyx [00:36:03]: I'll just throw out there. Like, 10 megabytes.
Nicholas [00:36:07]: That's enormous. Yeah, that's a lot.
Swyx [00:36:10]: Really small fonts.
Nicholas [00:36:12]: So, I have a thing. It does about a megabyte.
Swyx [00:36:14]: Yeah, okay.
Nicholas [00:36:14]: There you go.
Swyx [00:36:16]: I was off by an order of magnitude.
Nicholas [00:36:16]: So, in particular, it's about 1.44 megabytes. A floppy disk.
Swyx [00:36:21]: Yeah, exactly.
Nicholas [00:36:21]: So, this is supposed to be the title at some point. It's a floppy disk.
Swyx [00:36:24]: A paper is a floppy disk. Yeah.
Nicholas [00:36:25]: So, this is a little hard because, you know, you can do the math: you get 8.5 by 11, you can print at 300 by 300 DPI, and this gives you about 2 megabytes. And so, every single pixel, you need to be able to recover at, like, 90-plus percent, like, 95 percent, like, 99-point-something percent accuracy in order to be able to actually decode this off the paper. This is one of the things that I'm considering.
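The back-of-envelope math mentioned here can be sketched as follows. The bits-per-pixel values are my own assumptions for illustration (they are not stated in the episode); 2 bits per pixel lands near the "2 megabytes" figure quoted above.

```python
# Back-of-envelope capacity of one letter-size page scanned as raw pixels.
PAGE_W_IN, PAGE_H_IN = 8.5, 11      # letter paper, inches
DPI = 300                            # off-the-shelf printer/scanner

pixels = int(PAGE_W_IN * DPI) * int(PAGE_H_IN * DPI)   # 2550 * 3300
raw_1bpp_bytes = pixels // 8         # pure black/white: 1 bit per pixel
raw_2bpp_bytes = pixels * 2 // 8     # 4 gray levels: 2 bits per pixel (assumed)

print(pixels)           # 8415000 pixels
print(raw_1bpp_bytes)   # 1051875 bytes, about 1.05 MB
print(raw_2bpp_bytes)   # 2103750 bytes, about 2.1 MB
```

A 1.44 MB floppy is 1,474,560 bytes, so hitting that target leaves relatively little raw-pixel headroom for error correction, which is why per-pixel recovery accuracy has to be in the 99-point-something range.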
I need to get a couple more things working for this, where, you know, again, I'm running into some random problems. But this will probably be one thing that I'm going to talk about. There's this contest called the International Obfuscated C Code Contest, which is amazing. People try to write the most obfuscated C code that they can, which is great. And I have a submission for that whenever they open up the next one, and I'll write about that submission. I have a very fun gate-level emulation of an old CPU that runs, like, fully precisely. And it's a fun kind of thing. Yeah.
Swyx [00:37:20]: Interesting. Your comment about the piece of paper reminds me of when I was in college, and you would have, like, one cheat sheet that you could write. So, you have a formula, a theoretical limit for bits per inch. And, you know, that's why I would squeeze in really, really small.
Nicholas [00:37:36]: Yeah, definitely. Okay.
Swyx [00:37:37]: We are also going to talk about your benchmarking, because you released your own benchmark that got some attention, thanks to some friends on the internet. What's the story behind your own benchmark? Do you not trust the open-source benchmarks? What's going on there?
Nicholas [00:37:51]: Okay. Benchmarks tell you how well the model solves the task the benchmark is designed to solve. For a long time, models were not useful. And so the benchmark that you tracked was just something someone came up with, because you need to track something. All of deep learning exists because people tried to make models classify digits and classify images into a thousand classes. There is no one in the world who cares specifically about the problem of distinguishing between 300 breeds of dog in an image that's 224 by 224 pixels. And yet, this is what drove a lot of progress. And people did this not because they cared about this problem, but because they wanted to measure progress in some way. And a lot of benchmarks are of this flavor.
You want to construct a task that is hard, and we will measure progress on this benchmark, not because we care about the problem per se, but because we know that progress on this is in some way correlated with making better models. And this is fine when you don't want to actually use the models that you have. But when you want to actually make use of them, it's important to find benchmarks that track whether or not they're useful to you. And the thing that I was finding is that there would be model after model after model being released that would find some benchmark they could claim state-of-the-art on and then say, therefore, ours is the best. And that wouldn't be helpful to me in knowing whether or not I should switch to it. So the argument that I tried to lay out in this post is that more people should make benchmarks that are tailored to them. And so what I did is I wrote a domain-specific language that anyone can write for, so you can take tasks that you have wanted models to solve for you and put them into your benchmark, the things that you actually care about. And then when a new model comes out, you benchmark the model on the things that you care about. And you know that you care about them because you've actually asked for those answers before. And if the model scores well, then you know that for the kinds of things that you have asked models for in the past, it can solve these things well for you. This has been useful for me because when another model comes out, I can run it and see: does this solve the kinds of things that I care about? And sometimes the answer is yes, and sometimes the answer is no. And then I can decide whether or not I want to use that model. I don't want to say that existing benchmarks are not useful. They're very good at measuring the thing that they're designed to measure. But in many cases, what they're designed to measure is not actually the thing that I want to use the model for.
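A personal benchmark in the spirit described here can be sketched in a few lines. This is a minimal illustration, not Carlini's actual DSL: the task format, the `contains` checker, and the `toy_model` stand-in are all my own assumptions.

```python
# Minimal personal-benchmark harness: each task pairs a prompt with a
# checker that decides whether a model's answer counts as correct.

def contains(expected):
    return lambda answer: expected in answer

# Hypothetical tasks, drawn from questions you have actually asked before.
TASKS = [
    ("Write a Python expression for 2**10.", contains("1024")),
    ("Which git command shows the commit history?", contains("git log")),
]

def run_benchmark(model):
    """model: any callable mapping a prompt string to an answer string."""
    score = sum(1 for prompt, ok in TASKS if ok(model(prompt)))
    return score / len(TASKS)

# Toy stand-in for a real model API call, for demonstration only.
def toy_model(prompt):
    canned = {
        "Write a Python expression for 2**10.": "2**10 evaluates to 1024",
        "Which git command shows the commit history?": "Use git log",
    }
    return canned[prompt]

print(run_benchmark(toy_model))  # 1.0
```

When a new model is released, you point `run_benchmark` at it and get a score on exactly the kinds of questions you have historically asked, rather than on a public benchmark the model may have been tuned for.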
And I expect that the way that I want to use it is different from the way that you want to use it. And I would just like more people to have these things out there in the world. And the final reason for this is that it is very easy, if you want to make a model good at some benchmark, to make it good at that benchmark: you can find the distribution of data that you need and train the model to be good on that distribution of data, and then you have your model that can solve this benchmark well. And by having a benchmark that is not very popular, you can be relatively certain that no one has tried to optimize their model for your benchmark.
Swyx [00:40:40]: And I would like this to be-
Nicholas [00:40:40]: So publishing your benchmark is a little bit-
Swyx [00:40:43]: Okay, sure.
Nicholas [00:40:43]: Contextualized. So my hope in doing this was not that people would use mine as theirs. My hope in doing this was that- You should make yours. Yes, you should make your benchmark. And if, for example, even a very small fraction of people, 0.1% of people, made a benchmark that was useful for them, this would still be hundreds of new benchmarks. I may not want to make one myself, but I might know that the kinds of work I do are a little bit like this person's, a little bit like that person's, and I'll go check how it does on their benchmarks, and I'll get a rough sense of what's going on. Because the alternative is people just do this vibes-based evaluation thing, where you interact with the model five times and see if it worked on your toy questions. But five questions is a very low-bit signal on whether or not it works for your thing. And if you can just automate running 100 questions, it's a much better evaluation. So that's why I did this.
Swyx [00:41:37]: Yeah, I like the idea of going through your chat history and actually pulling out real-life examples.
I regret to say that I don't think my chat history is used as much these days, because I'm using Cursor, the AI-native IDE. So your examples are all coding-related. And the immediate question is, now that you've written the How I Use AI post, which is a little bit broader, are you able to translate all these things to evals? Are some things unevaluable?
Nicholas [00:42:03]: Right. A number of things that I do are harder to evaluate. This is the problem with a benchmark: you need some way to check whether or not the output was correct. And so all of the kinds of things that I can put into the benchmark are the kinds of things that you can check. You can check more things than you might have thought possible if you do a little bit of work on the back end. So for example, for all of the code that I have the model write, it runs the code and sees whether the answer is the correct answer. Or in some cases, it runs the code, feeds the output to another language model, and the language model judges whether the output was correct. And again, is using a language model to judge here perfect? No. But what's the alternative? The alternative is to not do it. And what I care about is just: is this thing broadly useful for the kinds of questions that I have? And so as long as the accuracy is better than roughly random, I'm okay with this. I've inspected the outputs of these, and they're almost always correct. If you ask the model to judge these things in the right way, they're very good at being able to tell this. And so, yeah, I think this is probably a useful thing for people to do.
Alessio [00:43:04]: You complain about prompting and being lazy, and how you do not want to tip your model and you do not want to murder a kitten just to get the right answer. How do you see the evolution of prompt engineering? Even like 18 months ago, maybe, you know, it was really hot and people wanted to build companies around it.
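The execute-and-check grading described a moment ago can be sketched like this. It is a simplified stand-in: real harnesses run model-written code in a sandbox (e.g. a container) rather than in-process `exec`, and the LLM-as-judge variant replaces the equality check with a second model call. The `fib` example output is hypothetical.

```python
# Execute-and-check grading: run model-written code and test its result.
def grade_code_answer(code, func_name, arg, expected):
    namespace = {}
    try:
        exec(code, namespace)                 # run the model's code
        return namespace[func_name](arg) == expected
    except Exception:
        return False                          # crashing code scores zero

# Pretend this string came back from a model.
model_output = """
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
"""

print(grade_code_answer(model_output, "fib", 10, 55))  # True
```

The key property is that correctness is decided by running the answer, not by string-matching it, so any functionally correct implementation passes.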
Today, it's like the models are getting good. Do you think it's going to be less and less relevant going forward? Or what's the minimum viable prompt?
Nicholas [00:43:29]: Yeah, I don't know. I feel like a big part of making an agent is just a fancy prompt that, you know, calls back to the model again. I have no strong opinion. Maybe it turns out that this is really important. Maybe it turns out that it isn't. I guess the only comment I was making here is just to say, oftentimes when I use a model and find it's not useful, and I talk to the people who helped make it, the answer they usually give me is: you're using it wrong. Which reminds me very much of the "you're holding it wrong" iPhone kind of thing, right? I don't care that I'm holding it wrong. I'm holding it that way. If the thing is not working with me, then it's not useful for me. It may be the case that there exists a way to ask the model such that it gives me the correct answer, but that's not the way I'm doing it. If I have to spend so much time thinking about how I want to frame the question that it would have been faster for me to just get the answer myself, it didn't save me any time. And so oftentimes, what I do is I just dump in whatever current thought I have, in whatever ill-formed way it is, and I expect the answer to be correct. And if the answer is not correct, in some sense, maybe the model was right to give me the wrong answer. I may have asked the wrong question, but I want the right answer still. And so I just want to sort of get this as a thing. And maybe the way to fix this is you have some default prompt that always goes into all the models, or you do something clever like this. It would be great if someone had a way to package this up. I think that's entirely reasonable.
Maybe it turns out that as models get better, you don't need to prompt them as much in this way. I just want to use the things that are in front of me.
Alessio [00:44:55]: Do you think that's a limitation of just how models work? Like, you know, at the end of the day, you're using the prompt to steer the model in the latent space. Do you think there's a way to actually make the prompt not really matter and have the model figure it out?
Nicholas [00:45:10]: I mean, you could fine-tune it into the model, for example. It seems like some models have done this; many recent models, if you ask them a question like computing an integral of some thing, will say, let's think through this step by step, and then go through the step-by-step answer. I didn't tell it to. Two years ago, I would have had to prompt it: think step by step on solving the following thing. Now you ask the question and the model says, here's how I'm going to do it, I'm going to take the following approach, and sort of self-prompts itself.
Swyx [00:45:34]: Is this the right way?
Nicholas [00:45:35]: Seems reasonable. Maybe you don't have to do it. I don't know. This is for the people whose job is to make these things better. And yeah, I just want to use these things.
Swyx [00:45:43]: For listeners, that would be Orca and AgentInstruct. It's the SOTA on this stuff. Great.
Alessio [00:45:49]: What about few-shot? Is that included in the lazy prompting? Like, do you do few-shot prompting? Do you collect some examples when you want to put them in? Or...
Nicholas [00:45:57]: I don't, because usually when I want the answer, I just want to get the answer.
Swyx [00:46:03]: Brutal. This is hard mode.
Yeah, exactly.
Nicholas [00:46:04]: But this is fine. I want to be clear: there's a difference between testing the ultimate capability level of the model and testing the thing that I'm doing with it. What I'm doing is not exercising its full capability level, because there are almost certainly better ways to ask the questions and really see how good the model is. And if you're evaluating a model for being state-of-the-art, that's ultimately what you care about. And so I'm entirely fine with people doing fancy prompting to show me what the true capability level could be, because it's really useful to know what the ultimate level of the model could be. But I think it's also important just to have available to you how good the model is if you don't do fancy things.
Swyx [00:46:39]: Yeah, I would say that here's a divergence between how models are marketed these days versus how people use them, which is, when they test MMLU, they'll do like five shots, 25 shots, 50 shots. And no one's providing 50 examples.
Nicholas [00:46:54]: I completely agree. You know, for these numbers, the problem is everyone wants to get state-of-the-art on the benchmark. And so you find the way that you can ask the model the questions so that you get state-of-the-art on the benchmark. And it's good. It's legitimately good to know the model can do this thing if only you try hard enough. Because it means that if I have some task that I want solved, I know what the capability level is, and I could get there if I was willing to work hard enough. And the question then is, should I work harder and figure out how to ask the model the question? Or do I just do the thing myself? And for me, I have programmed for many, many years. It's often just faster for me to do the thing than to figure out the incantation to ask the model.
But I can imagine someone who has never programmed before might be fine writing five paragraphs in English describing exactly the thing that they want and having the model build it for them, if the alternative is not being able to do it at all. But again, this goes to all these questions of: how are they going to validate it? Should they be trusting the output? These kinds of things.
Swyx [00:47:49]: One problem with your eval paradigm, and most eval paradigms, I'm not picking on you, is that we're actually training these things for chat, for interactive back and forth. And you obviously reveal much more information that way, in the same way that asking 20 questions reveals more information, in sort of a tree-search, branching sort of way. This is also, by the way, the problem with LMSYS Arena, right? Where the vast majority of prompts are single question, single answer, eval, done. But the way that we actually use chat things, even in the stuff that you posted in your How I Use AI post, you have maybe 20 turns of back and forth. How do you eval that?
Nicholas [00:48:25]: Yeah. Okay. Very good question. This is the thing that I think many people should be doing more of. I would like more multi-turn evals. I might be writing a paper on this at some point if I get around to it. A couple of the evals in the benchmark thing I have are already multi-turn. I mentioned 20 questions; I have a 20-questions eval there just for fun. But I have a couple of others that are like, I just tell the model, here's my git thing, figure out how to cherry-pick off this other branch and move it over there. And so what I do is I basically build a tiny little agentic thing: I just ask the model how to do it, I run the thing on Linux, this is what I want Docker for, I spin up a Docker container, I run whatever command the model tells me to, I feed the output back into the model, and I repeat this for many rounds.
And then I check at the very end: does the git commit history show that it is correctly cherry-picked in?
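The multi-turn loop described here can be sketched as follows. This is a minimal illustration of the pattern, not Carlini's harness: the scripted `fake_model` and the file-based success check stand in for a real model and the git cherry-pick check, and it assumes a POSIX shell. A real harness would run the commands inside a Docker container, not on the host.

```python
# Minimal multi-turn "agentic" eval loop: ask a model for a shell command,
# run it, feed the output back, repeat, then check the final state.
import os
import subprocess
import tempfile

def agent_loop(model, check, rounds=5):
    transcript = "Goal: create a file named done.txt containing 'ok'."
    for _ in range(rounds):
        cmd = model(transcript)          # model proposes the next command
        if cmd is None:                  # model signals it is finished
            break
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        transcript += f"\n$ {cmd}\n{result.stdout}{result.stderr}"
    return check()                       # grade the end state, not the chat

# Scripted stand-in for a real model: emits one command, then stops.
commands = iter(["printf ok > done.txt"])
fake_model = lambda transcript: next(commands, None)

os.chdir(tempfile.mkdtemp())             # work in a scratch directory
passed = agent_loop(fake_model, check=lambda: open("done.txt").read() == "ok")
print(passed)  # True
```

The grading happens only at the end, on the resulting state of the environment, which is what lets a multi-turn task be scored automatically no matter what path the model took to get there.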
In life and business, you either win or you learn! During this episode, we dive further into Co-Host Brook Bishop's journey into sales success. In this continuation episode, Brook gets into all the muscle that he built in the early part of his sales career, learning how to turn that into massive amounts of sales revenue and eventually scaling a team level to the tune of 30 million dollars. Tuning in, you'll get countless lessons from Brook on turning difficulty into opportunity and identifying systems, processes, and patterns for faster sales and success. He takes this a step further and teaches listeners how to turn this into a template that facilitates scaling. With insights from different points in his career, Brook shows how he has found opportunity in difficult circumstances, and shares what he has learned and won along the way. If you haven't listened to Part 1 of this conversation, visit iTunes and do so before pressing play on this one. Thanks for listening! Key Points From This Episode:Introducing Part 2 of our conversation with Co-Hose Brook Bishop's journey to sales success and mastery.What he learned from working behind the scenes at Buffini & Company.Brook's intention to master each role beyond its function to educate himself.Becoming calculated in the process of building a network.How developing his understanding of personalities and strengths enabled him to create rapport with others.Obstacles to climbing the leadership ranks at Buffini. Advice from his mentor after leaving the Buffini marriage. 
Extracting opportunity out of difficulty.Building his own coaching practice and branching out into different industries.How Brook met the team at Tony Robbins before being headhunted to work with them.The role of taking inspired action in facilitating opportunities.Starting to work at Tony Robbins and getting his team's attention by getting insane results.Why it is so important to be strategic with every opportunity.Changing his life mission to triple his income and cut his errors in half. Distinguishing between what you are saying and what a client needs to hear.How his accolades dovetailed into results while working at Tony Robbins.Finding clear ‘whys' for himself and his clients to motivate his success.Links Mentioned in Today's Episode:Mastering Sales Pt 1: Lessons from Brook Bishop's Path to SuccessBuffiniTony RobbinsHeritage ProfileRyan Lang on LinkedInBrook Bishop on LinkedInEmpire PartnersEmpire AcademyThe Coaching Equation Podcast on iTunes
Extracting information from captive colleagues can be a complicated process, but the whole thing gets a lot easier when they aren't paid nearly enough to put up with this kind of trauma.
Industrial Talk is chatting with Hartmut Hahn, CEO at Userlane about “Extracting greater value and user adoption out of your technology stack”. Scott MacKenzie and Hartmut Hahn discussed challenges in extracting value from technology stacks, optimizing software use, and reducing software spend. They emphasized the importance of data-driven insights and user engagement to identify areas of improvement. Hartmut highlighted their platform's ability to track user interactions and provide a framework for evaluating software use. Scott MacKenzie questioned how their approach could accommodate different organizational processes. Later, Hartmut discussed the role of predictive analytics in technology adoption, emphasizing the need for a comprehensive understanding of business processes and constant monitoring. The speakers also highlighted the importance of predicting user adoption and efficiency, reaching out to Hardware Lane company for collaboration, and leveraging technology to solve problems. Action Items [ ] Reach out to Userlane directly through their website or contact Hartmut Hahn on LinkedIn for a demo or trial of their software. [ ] Promote industrial podcasts or technologies on the Industrial Talk platform by contacting Scott MacKenzie. (Podcast owners, technology companies) [ ] Map out key business processes to track within Userlane's software once onboarding. Outline Using technology to extract value from digital transformation solutions. Scott MacKenzie interviews Hartmut Hahn about Userlane platform insights. Industrial talk provides a platform for podcasts and technology solutions to reach a wider audience. Scott MacKenzie interviews Heartburn about technology solutions. Software usage and efficiency in large organizations. Hartmut: Companies buy many software solutions, often without proper implementation. Hartmut: Companies struggle with paying consultancies for software implementation. 
Hartmut: Executives have gut feelings about software usage, but no data to back it up. Hartmut: Userlane analyzes software stack to identify usage patterns, struggles, and areas for improvement. Process mapping and monitoring in software development. Hartmut explains how their software tracks employee interactions across five dimensions to provide a score for each software, highlighting differences in implementation across organizations. Hartmut emphasizes the importance of process mapping and its value in identifying areas for improvement. Hartmut: Monitor processes constantly, adjust yellow/green indicators based on business needs. Hartmut: Executives like constant monitoring, but may not know extent of Salesforce licenses or usage. Optimizing software spend and improving user experience. Hartmut mentions realizing unnecessary software costs and Shadow IT usage. Scott MacKenzie agrees, highlighting the importance of technology stack optimization. Hartmut suggests optimizing software spend by identifying unused licenses and improving usage of business-critical software. Hartmut offers solutions to increase employee engagement and motivation, such as creating interactive guides and content within the application. Technology efficiency and predictive analytics for business success. Scott MacKenzie: Predicting user adoption, efficiency, and inefficiencies in technology. Hartmut: Predicting new releases' impact on productivity, addressing inefficiencies. Hartmut encourages listeners to reach out for collaboration on technology solutions. If interested in
Returning to the podcast to discuss water desalination is Mark Holtzapple. Mark is a Professor of Chemical Engineering at Texas A&M University where he researches technologies that improve sustainability. With close to 30 years of experience as a researcher, Mark is constantly finding new ways to turn waste into useful products. So what's his take on obtaining fresh water from saltwater? In this episode, you will learn about: What desalination is, and how Mark got started working with it. Techniques used to desalinate water, and how they have evolved over the years. How reverse osmosis contributes to water desalination. What vapor compression technology is, and why this may be the future of energy-efficient water desalination. Imagine a world where freshwater is as plentiful as seawater… With the work that researchers like Mark Holtzapple are doing, this idea may be more realistic than you may think! To find out more about Mark Holtzapple and his work, you can visit gfrc.tamu.edu. Take advantage of a 5% discount on Ekster accessories by using the code FINDINGGENIUS. Enhance your style and functionality with premium accessories. Visit bit.ly/3uiVX9R to explore latest collection. Episode also available on Apple Podcast: http://apple.co/30PvU9C
Gary gives us a sneak peek into his latest masterpiece, "Day Trading Attention: How to Actually Build Brand and Sales in the New Social Media World.” Gary's new book is the textbook you wish you had when starting your marketing and branding journey. Think charts, real-life examples, advice, market research, case studies, tips, and more–all while NOT putting you to sleep. Sound too good to be true? Ryan has read it and can confirm–NO BS.While discussing Gary's latest book and why everyone should grab a copy, Gary stresses that the key is to not get too caught up in the future, or too stuck in the past. The here and NOW is free social media. We're in the Golden Era, baby. While it's the most freely available, most effective way to reach your audience right NOW, it's imperative to take advantage of it to achieve the success you desire for your brand.Adapting to marketing trends like TikTok and other social media platforms (No, TikTok isn't just for teenage girls who like to dance) is the key to grabbing attention in the new social media world. Your strategy should then be to turn that attention into intent. Take the BS out of business by turning BUZZ into BITE.Don't look back in 10 years and wish you would've taken advantage of the massive opportunities social media provides right now. Additional discussions include the significance of building relationships, teamwork, authenticity, leading with empathy, and the influence of AI in the future of marketing.Gary also shares some practical and profound parenting advice about fostering self-esteem in the digital age (No 8th place trophies here. Just real reinforcement and nurturing).Huge thanks to Gary Vee for coming on the show and dropping these knowledge bombs for us!What did you think of this episode? DM me on @rightaboutnowshow or @ryanalford on Instagram. I'd love to hear your thoughts!Pre-order Gary's new book on Amazon! “Day Trading Attention” releases on May 21, 2024. 
Don't wait to get your hands on this goldmine of information.TAKEAWAYSPerspective is Key: Appreciate what you have because many have it worse. Gratitude can change your outlook.Relationships Matter: Gary emphasizes the value of teamwork and maintaining strong relationships. Leaders should prioritize serving their team.Value Creation: Gary's primary focus is always on bringing value, a principle crucial for success in any endeavor.Empathy and Kindness: Building empires isn't just about business strategies; it's about treating people well. Empathy and kindness can lead to significant achievements.Continuous Learning: Gary sees his new book as a curriculum, reflecting his lifelong commitment to learning about selling and branding.Brand Building: Nike's success illustrates the importance of branding. Playing your "greatest hits" repeatedly can solidify your brand's identity.Adaptation and Realism: In a changing world, parents must adapt while instilling real self-esteem in their children to withstand challenges.Attention and Intent: Turning attention into action requires more than just grabbing eyeballs. It demands offering genuine value and having a plan.Embrace Discomfort: Growth often occurs outside your comfort zone. Embrace discomfort to reach new heights.Seize Opportunities: We're currently in a golden era of free attention and distribution. Leverage social media now and adapt to future platforms to stay ahead. 
TIMESTAMPS

- The importance of social media (00:00:00): Gary emphasizes the significance of taking social media seriously for business and personal growth.
- Introduction and welcome (00:00:33): The hosts introduce the show and welcome Gary Vaynerchuk, highlighting his various titles and accomplishments.
- Perspective and gratitude (00:01:11): Gary and the co-host discuss the importance of perspective, gratitude, and controlling one's own destiny.
- Building a strong team and culture (00:02:37): Gary explains the importance of building a strong team, fostering relationships, and creating a positive work culture.
- Empathy and kindness in business (00:08:33): Gary reflects on the significance of empathy and kindness in leadership and business, emphasizing the need to do right by people.
- Balancing passion and capability (00:10:36): The co-host shares a personal experience of following passion in the car business, leading to failure, and Gary emphasizes the importance of being good enough in pursuing one's passion.
- Embracing current opportunities (00:11:48): The co-host discusses the quote from Gary's book about underestimating current opportunities and shares personal experiences related to embracing new platforms like TikTok.
- Salesmanship versus branding and marketing (00:14:51): Gary discusses the difference between salesmanship and branding/marketing, emphasizing the need for a long-term brand-building approach.
- Consistency in brand-building (00:18:27): The co-host highlights the importance of consistency in brand-building and references Gary's consistent messaging as a testament to successful brand-building.
- Parenting and social media (00:19:17): Gary and the co-host discuss the considerations of parenting in the digital age, emphasizing the importance of building genuine self-esteem in children to navigate the evolving digital landscape.
- Gary Vaynerchuk's new book (00:21:10): Discussion about Gary Vaynerchuk's new book "Day Trading Attention" and its relevance to marketing today.
- Gary's approach to attention marketing (00:22:03): Gary Vaynerchuk explains his approach to staying ahead of the attention curve and implementing attention marketing.
- Gary's unique position in marketing (00:22:59): Gary discusses his role in a 2000-person global agency and the unique insights he gains from it.
- Impact of AI on marketing (00:27:49): Gary discusses the impact of AI on various industries, including marketing and advertising, and the need to strategize.
- Turning attention into intent (00:32:15): The conversation delves into the process of turning attention into intent and translating buzz into action.
- Content creation and attention (00:33:07): Gary discusses the factors that capture attention in content creation and the importance of providing value.
- Understanding social media changes (00:35:33): Gary explains the shift in social media dynamics and the importance of understanding individual content performance.
- Empathy and authenticity in marketing (00:39:14): The discussion touches on the importance of empathy and authenticity in marketing and its impact on human connection.
- Real success vs. proxy success (00:43:00): Gary emphasizes the difference between real success and proxy success, highlighting the importance of genuine achievement.
- The importance of social media (00:43:23): Gary emphasizes the significance of leveraging free social media for marketing and branding.
- Leveraging owned versus rented land (00:44:16): Discussion on using free social media to drive traffic to owned platforms like websites and lists.
- Extracting attention from social media (00:45:19): Gary discusses the power of extracting attention from social media for marketing and brand building.
- Preparing for the future of marketing (00:47:07): Gary warns about the potential decline of free social media awareness and the need to capitalize on it now.
- Building leverage over time (00:48:14): The importance of building leverage over time and taking one's brand to the next platform.
If you enjoyed this episode and want to learn more, join Ryan's newsletter at https://ryanalford.com/newsletter/ to get Ferrari-level advice daily for FREE.

Learn how to build a 7-figure business from your personal brand by signing up for a FREE introduction to personal branding: https://ryanalford.com/personalbranding.

Learn more by visiting our website at www.ryanisright.com.

Subscribe to our YouTube channel: www.youtube.com/@RightAboutNowwithRyanAlford.