For more helpful information, advice, and recommendations, go to www.dirtdoctor.com.
There is a growing appetite for New Zealand's rarest and most unusual fruit. Persimmon exports rose 20% in the last year, and demand has never been higher. Persimmon Industry Council manager Ian Turk told Mike Hosking it's thanks to recent sunny weather in Gisborne, where the vast majority of the fruit is grown. He says that after a rough five years for the industry, including the impact of Cyclone Gabrielle, growers are looking forward to a good season ahead. LISTEN ABOVE See omnystudio.com/listener for privacy information.
Send us a text

In memory of Butch Kronlund, this episode is a replay of a live interview recorded as part of the Under the Persimmon Tree series at the Henry Miller Library.

In this conversation, Butch reflects on his early life and upbringing, meeting his beloved wife Patte, and his arrival in Big Sur, where he would go on to lay the foundations of the Post Ranch Inn, collaborate with architect Mickey Muennig on several iconic homes, help build the new Big Sur Health Center, and oversee the rebuilding of the baths at Esalen.

We also hear about his more recent efforts to raise and distribute critical funds for community members affected by fires and floods, a testament to his enduring care for the coast and its people.

Thanks for listening, and for remembering Butch with us. There will be a celebration of Butch's life in June; an announcement will be forthcoming.

Thank you for listening! Support the show.

This podcast is a production of the Henry Miller Memorial Library with support from The Arts Council for Monterey County! Let us know what you think! SEND US AN EMAIL!
Persimmon Tart

Prep time: 30 minutes. Cook time: 25 minutes. Serves 6.

Ingredients:
- 4 tbsp sugar
- 2 tbsp water
- 50g butter
- 4 persimmons, each cut into 8 wedges
- 2 sheets puff pastry
- Raspberry sorbet, to serve

Method: Preheat oven to 190°C. Place sugar and water in a small heavy-based saucepan over high heat. Cook until the sugar dissolves, then bring to the boil, stirring continuously. Add the butter, let it melt, and allow the mix to bubble until it turns a light caramel colour, then turn off the heat. Remove the leaves from the persimmons and cut each one into 8 wedges. Fan the persimmon pieces out over the caramel, leaving a 2-3cm gap around the edge. Press the two puff pastry sheets together and cut them into a circle the diameter of the top of the pan. Lay the pastry over the persimmon and press the edges down so they touch the bottom of the pan, forming a seal. Place in the oven and bake for 25 minutes. Remove from the oven, place a large serving plate over the pan and carefully flip the tart out onto the plate. Serve hot with a delicious raspberry sorbet (or alternatively with ice cream of your choice). LISTEN ABOVE
Comic book talk. Jonah is an incredible story; Leviathan, the Great Fish. Are the Conan the Barbarian stories actually real history? Abraham was a minor warlord for a while. Behemoth and Leviathan: what are they? Strange animal husbandry, hybrid bloodlines, giants in the Bible, why the Israelites wiped out entire tribes of people in the Promised Land. Shape-shifting pigs. David's Mighty Men; how tall was Goliath, actually? Cubits and measurement inaccuracies. How did the giants survive the flood? Gregarious forms, lion-faced men. Why David picked up 5 stones to fight Goliath. The Biblical cinematic universe. Adam's comics and their backstory.

Links:
E. Adam Farris on Substack
S1E039: This Isn't Even Our Gregarious Form

More Links:
www.MAPSOC.org
Follow Sumo on Twitter
Alternate Current Radio
Support the Show!
Subscribe to the Podcast on Gumroad
Subscribe to the Podcast on Patreon
Buy Us a Tibetan Herbal Tea
Sumo's Substacks: Holy is He Who Wrestles, Modern Pulp
In this episode we discuss Persimmon, Domino's, Berkeley Group, Supermarket Income REIT, Legal & General & Deliveroo. $psn $dom $bkg $supr $lgen $roo
In Episode 128 of the Diary of a UK Stock Investor Podcast this week: (00:00) Show Start (12:24) BooHoo rebranding to Debenhams (17:30) Admiral's stellar earnings release (22:00) Persimmon sees 7% rise in house completions (26:14) Kiyosaki, Buffett and Ramit Sethi on stock investing (27:43) Decisions on the Hargreaves Lansdown private takeover. Diary of a UK Stock Investor Podcast is a show for everyday long-term retail investors, hosted by Chris Chillingworth. The podcast is unique in that it serves as a place for Chris to reflect on the highs and lows of long-term UK stock investing, as well as sharing detailed updates on how his own portfolio is growing. With new episodes every Thursday, and a detailed update on his quest to reach £1,024,867 in portfolio value by 2043, episodes often discuss investing education, strategy, mindset, ideas and even stock picks and analysis. The show, which now has an active following of over 4,000 downloads a month, is curated by Chris Chillingworth, a UK investor of over a decade whose stock picks have achieved an 18% annual average return between Jan 2014 and Nov 2024. Email Chris at the show on chris@chrischillingworth.com or check out the website https://chrischillingworth.com
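The show's headline numbers (a £1,024,867 portfolio target by 2043 and a quoted 18% historical annual average return) come down to compound-growth arithmetic. As a rough illustration only, here is a minimal sketch; the starting balance, the yearly contribution, and the deliberately more conservative 10% assumed return are hypothetical figures of mine, not numbers from the podcast:

```python
# Illustrative compound-growth arithmetic for a long-term portfolio target.
# All inputs below are hypothetical assumptions, not figures from the show.

def years_to_target(balance: float, annual_contribution: float,
                    annual_return: float, target: float) -> int:
    """Count full years of compounding until the balance reaches the target."""
    years = 0
    while balance < target:
        balance = balance * (1 + annual_return) + annual_contribution
        years += 1
    return years

# Hypothetical example: £50,000 starting balance, £6,000 added each year,
# and a 10% assumed annual return (more conservative than the quoted 18%).
print(years_to_target(50_000, 6_000, 0.10, 1_024_867))
```

Under these assumed inputs the target is reached in about 25 years, roughly the horizon the 2043 goal implies; at the quoted 18% return the timeline shortens dramatically.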
HEALTH NEWS
· An avocado a day won't fix heart health, but it boosts diet and sleep
· Imagining future events changes the brain to improve healthy decision-making
· Study uncovers how a low-carb diet drives colorectal cancer development
· Efficacy of a dietary supplement extracted from persimmon in overweight healthy adults
· Autoimmune diseases misdiagnosed as psychosomatic can lead to long-term damage to physical and mental well-being
· Ultra-processed food associated with faster biological aging
In this episode we discuss Persimmon, Games Workshop, Taylor Wimpey, Fevertree, Experian & Netflix $psn $expn $tw. $fevr $gaw $nflx
Today I highlight persimmon wood as a lumber source: the North American ebony. Then I dive into the variety of decking products on the market and their lifespans. I might start to wander into touchy topics when I get to the composite market... maybe. I also talk about LignoSat and what the future of wood in space might mean.
Learn more here about our partner Scalable Capital, the broker with a flat rate and interest. All further info is here: scalable.capital/oaws. Stocks + WhatsApp = sign up here. Prefer a newsletter? That works too. The book to go with the podcast? Read it now. Weak numbers from Boeing & VW; the only consolation is that expectations were even weaker. Eli Lilly had strong numbers; the problem is that expectations were even stronger. Beyond that, Signet misjudged, Persimmon & KB perform, Applied Digital gets funding, Daimler Truck delivers & JD falls. On StockX, Anta Sports (WKN: A0MVDZ) grew 1,900% last year. Could something like that soon happen in the stock market? Trump is boosting many sectors. One of the biggest effects could come from his deregulation of the private-equity world, which pleases KKR (WKN: A2LQV6), Apollo (WKN: A3DB5F), Carlyle (WKN: A2PXCR) and Blackstone (WKN: A2PM4W). This podcast from 15.01.2025, 3:00, is provided to you by Podstars GmbH (Noah Leidinger).
In this episode of Maximize Your Hunt, host Jon Teater (Whitetail Landscapes) discusses various aspects of land management and hunting strategies, focusing on the benefits of honey locust trees in silvopasture systems. Joined by guests Austin Unruh (Trees for Graziers) and Thomas Mlsna (Untamed Ambition), they explore the ecological services provided by these trees, their nutritional value for wildlife, and practical applications for integrating them into hunting properties. The conversation emphasizes the importance of patience and strategic planning in land management to enhance hunting success. The group also discusses the integration of trees in silvopasture systems more broadly, focusing on the benefits of various tree species, particularly mulberries, for wildlife and livestock, along with effective tree protection methods and the role of tree gender in fruit production, and introduces a new tree nursery business aimed at providing high-quality trees for sustainable farming practices.

Takeaways:
- Maximize Your Hunt focuses on land management and hunting strategies.
- Winter severity can impact deer populations and habitat management.
- Silvopasture integrates trees into pasture systems for livestock and wildlife.
- Honey locust trees provide late-season food sources for deer.
- Dappled shade from honey locust benefits both livestock and wildlife.
- Honey locust pods are high in sugar and energy, crucial for winter nutrition.
- Designing landscapes with honey locust can create consistent deer movement.
- Patience is essential for seeing results in land management.
- Honey locust, persimmon and mulberry can be valuable resources for bees and other wildlife.
- Understanding the ecological benefits of trees is key to effective land management.
- Five to eight years is a guideline for tree yield.
- Silvopasture integrates trees for shade and forage.
- Fiberglass stakes are durable and cost-effective.
- Mulberries provide high-protein feed for wildlife.
- Tree protection is essential for successful growth.
- Growing trees above browse height reduces costs.
- Mulberry trees are resilient and easy to manage.
- Tree gender affects fruit and pod production.
- A new nursery focuses on silvopasture trees.
- Effective tree management enhances ecosystem benefits.

Social Links:
https://whitetaillandscapes.com/
https://www.facebook.com/whitetaillandscapes/
https://www.instagram.com/whitetail_landscapes/?hl=en
https://www.theuntamedambition.com/
https://treesforgraziers.com/austin-unruh/

Learn more about your ad choices. Visit megaphone.fm/adchoices
Welcome to a festive Christmas special of the Ducks Unlimited podcast! Join hosts Dr. Mike Brasher, Katie Burke, and Dr. Jared Henson as they celebrate the holiday season with special guest Malcom Reed from "How to BBQ Right." Malcom brings his expertise in barbecue and shares his experiences and favorite recipes, perfect for hunting camp and duck camp. Listen in for a delightful conversation filled with holiday cheer, barbecue tips, and memorable Christmas stories.

Listen now: www.ducks.org/DUPodcast
Send feedback: DUPodcast@ducks.org
Guests: Johnny from Easycome and Kentaro Kawashima (persimmon, Seamour)
Andy Johnson and Garrett Morrison team up for a two-part episode for this Thursday release. To start, Andy chats with Todd Demsey, a former professional golfer who now hand-makes persimmon clubs. Andy and Todd discuss Todd's All-American college golf career at Arizona State, his experience playing with persimmon clubs on the PGA Tour Champions, and why persimmons are special to him. In the second half of this episode, Garrett is joined by Chris Millard, author of the book The Shot: Watson, Nicklaus, Pebble Beach, and the Chip That Changed Everything, to discuss the new release and the long history of Pebble Beach Golf Links. Garrett and Chris dive into the early days of Pebble Beach, the 1982 U.S. Open, and how television helped popularize the sport across America.
Ventura fire takes toll on avocados and citrus, Bay Delta Plan Phase 2, SGMA turns 10, Persimmons for the holidays—they're healthy too.
Melanie Whittington, PhD, Head of the Leerink Center for Pharmacoeconomics interviews Holly Krasa, CEO of Blue Persimmon Group. In this episode they discuss the recent CPE Exclusive: Cobenfy™ for Adults Living with Schizophrenia, the need for innovation in schizophrenia, and the importance of listening to the stories from people living with schizophrenia.
In this episode we discuss ASOS, AutoTrader, Taylor Wimpey, Novo Nordisk, Persimmon & Fiverr $asos $auto $tw. $novob $psn $fvrr
Summary In this special celebratory installment of Startup Junkies, we sit down for our four hundredth episode! Hosts Jeff Amerine, Jon Cadieux, Caleb Talley, Daniel Koonce, and Matthew Ward gather to reminisce about past ventures and share their experiences in the entrepreneurial ecosystem of Northwest Arkansas. The episode marks a significant transition for the podcast, highlighting their first in-person international engagement with a South Korean accelerator cohort. The discussion delves into the unique entrepreneurial energy in Northwest Arkansas, characterized by frequent networking events and a supportive community. As the hosts collectively reflect on their roles within Startup Junkie, they emphasize the multitasking nature of their small but dynamic team that managed to organize over two hundred events in 2023. Additionally, several investments are highlighted, including Torch Dental, Privacy Hawk, and Persimmon, showcasing their success due to strategic timing and market needs. The team amusingly touches on controversial entrepreneurial ideas, including one about creating rain using a spray rig that required a wildly fluctuating budget and another about a contentious T-shirt website that attracted Secret Service scrutiny. Throughout the episode, the team shares lighthearted anecdotes, including a humorous comparison between cryptocurrencies and the US dollar, missed investment opportunities in Tesla and Amazon, and analyzes entrepreneurial education versus real-world experience. Ultimately, the episode celebrates the evolution of both the podcast and the entrepreneurial landscape in Northwest Arkansas, marked by strong community ties, innovative ideas, and collective growth in the startup ecosystem. 
Show Notes (0:00) Introduction (9:59) Recognizing Missed Opportunities (12:51) Analyzing Failed Investments (17:27) Economic Crisis Impacts (23:34) Driving Global Investments (32:20) Podcast Experience and Insights (35:05) Transformative Growth Since 2020 (38:52) Supporting Emerging Entrepreneurs (43:06) Ensuring Ideas Don't Become Obsolete with Time (46:10) Closing Statements Links Jeff Amerine Jon Cadieux Caleb Talley Daniel Koonce Matthew Ward Startup Junkie Startup Junkie YouTube NWA Workplaces Bike Rack Brewing
americanfarmsteadconvention.com americanfarmsteadhers.com
US Presidential Election results are indicative of a Trump victory and Republicans taking the Senate; the House is too close to call. US futures have ripped higher, while European futures have been hit given the potential EZ growth implications. DXY is currently up 1.5% and has seen its largest jump since March 2020; EUR, JPY and the antipodeans are suffering. In the fixed income space, US yields are higher across the curve, with the curve bear-steepening. Bitcoin is up over 8% after surging to a record high; crude has been hit by the stronger USD. Looking ahead, highlights include German Industrial Orders, EZ PMIs (Final), NBP Policy Announcement, US Election Results, ECB President Lagarde, de Guindos & BoC's Rogers, and supply from Germany & the US. Earnings from Pandora, Novo Nordisk, Banco BPM, Bper Banca, Enel, Poste Italiane, Snam, Vonovia, Commerzbank, Fresenius, Henkel, Puma, Siemens Healthineers, BMW, GEA Group, Evonik Industries, Eurazeo, Arkema, Teleperformance, Credit Agricole, Wise, Persimmon, Marks and Spencer, Beazley, Williams Companies Inc, CVS Health Corp, Gilead Sciences Inc, Sempra, Qualcomm Inc, Johnson Controls International & Arm Holdings. Read the full report covering Equities, Forex, Fixed Income, Commodities and more on Newsquawk.
Mark and Dan compare soils, native gardens and how to get the best mulch. Tets and Mark answer your garden questions, including why it's not a good idea to hand-water your lawn.

02:13 How to deal with army worms in your garden
05:54 Replanting a 'Fuyu' persimmon
11:54 The best way to water your lawn (put down your hose!)

Mark Tucek is filling in for Sabrina Hahn. Listen to the program live on Saturdays at 9:00AM on ABC Radio Perth and ask your questions by calling in on 1300 222 720 or texting 0437 922 720. Subscribe to the podcast through the ABC Listen App, Apple Podcasts or wherever you like to listen.
Have you ever thought about how there are so few North American foods that are globally available, or even regular foods for people living in North America? Well, wonder no more. Or at least slightly less, as we explore some of the major items that are native to North America and yet almost made it to "famous because they are yummy," but not quite. Also: what are the possible global superstars in waiting?

Music Credit: Fingerlympics by Doctor Turtle
Show Notes: https://thehistoryofamericanfood.blogspot.com/
Email: TheHistoryofAmericanFood at gmail dot com
Threads: @THoAFood
Instagram: @THoAFood
& some other socials... @THoAFood
They're grasping at straws to make me look and feel crazy, which can only mean he's losing his power. Hopefully he's expecting another baby. Hopefully, for the baby's sake and its mother's, it's not a little girl. Even my big and strong boy might be irreversibly damaged at the hands of a psychotic narcissist with anger problems—and though surely he had tried to kill me any way he could, I had survived. Now, the tables had indeed turned in my favor. With enough time, the truth would be revealed not only to those above, but to all who knew us; I hadn't lost my mind at all, only finally found a pair of eyes that could see the world around me that they did not like—and a pair of legs to run away from it. The first time my ex-husband actually hit me, he had snapped, and though there had been other counts of shoving, heavy-handed close calls and other questionable events in the years leading up to this, it had never been what it turned out to be: his fist actually connecting with my face—not just once, but several times, over and over, until something got in the way. Even years later, I didn't know what, but maybe just that I had stopped moving, or struggling to get away. "Play dead." Maybe he thought I was dead—or maybe I was. Everything in the nearly eight years since seemed an inescapable and hellish nightmare—inescapable, that is, from him. Or from "it," the thing that had tried to kill me. Even after assuming an entirely new identity and separate life, this dirty, lazy, disgusting and altogether unpleasant energy seemed to follow me everywhere—and worse, this energy seemed to crawl into the other humans surrounding me, like a parasite, never letting go. I wanted to die as much as anything, just to never be reminded of him again.
My thriving and success would make him look like a fool—more of one, anyhow—and either way, his jealousy of my life without him made it obvious how little and weak he actually was, though not on purpose; and in some ways, many small ones, I had succeeded. Suddenly, everything became a battleground—fighting for my life as if somehow I were still in my abuser's presence and grip, the devil in him seeking me out in the world as if I had deserved it in the first place. No one really deserves to die like that, especially not in front of their children. Now at least I knew he had no power alone, but that what one would call The Devil itself often lived inside of the weak—weak in spirit, weak-minded. Feeble and malleable, often fat and lazy people: it had become obvious that people were the tools for this force to deplete the light and kindness, the good spirit and soul's purpose of others. I had forgiven him, but something indeed had rotted away the core I thought we once shared into a blackened depth of awful waste—the things about him belonging to a world I wished never to see or be part of. I had grown and changed—and I was sure with time so had he; perhaps not, but I couldn't know and wouldn't want to, wishing only for the best for anyone's sake. But this thing that seemed to follow me was a pitiful, screaming and evil thing—I had let go, despite the constant reminders of the permanent scars left in the crevices of my lip and on my face—and though an entire child and perhaps several women stood between us, his need for vengeance because I had left must have been maddening: a sweltering parasitic welt that riled up with enough fierceness to crawl into other sunken bodies and surround my every waking moment. Not his power at all, but a greater force of evil—the evil of all mankind—Satan himself seemed to have chosen me as his prey, my abuser as the illusion of conception. Therefore I, Therefore I, Therefore I, None!
As truth did shatter mine ever being, And also every person near WHO VALIDATED THAT BITCH'S PARKING. —you think she drove here?! —if she did it would be on a broomstick. Goddammit. Get her out of here! Out! I said! You're… not a fan of Fallon's, are you. No, I'm not. (No—God, no.) Well, why not? First of all, he winks at people. ;) *cringe* Like, off camera. JIMMY O'FALLON And I want damages. Damages?! Damages. He's seeking damages?! To what. JIMMY O'FALLON Like, my entire—everything. Damages to everything. My entire life! Ah. [The Festival Project ™] I've got to admit, being sued by Jimmy Fallon is probably the most exciting thing that's ever happened in the entirety of this series! What about that thing with Skrillex. (That was pretty exciting.) Which thing with Skrillex? All the things with Skrillex were pretty exciting. (Admittedly, yes.) Then there was Dillon Francis. I hate Dillon Francis. Exactly. Why! Because he excited you. Next question! Ahead. Yo. I finally get to link up with Supacree. You're a mess. Everything is a mess. The world is a mess. —your mom's a mess. Amanda, please. Have you been drinking? How long has deadmau5 been a cat? Forever, I think. Exciting! Enter through the exit! Enter through the exit! Who the fuck let you in here. {Enter The Multiverse} MARTHA STEWART'S plan for world domination is complete. L E G E N D S Johnny Moon was a handsome fellow; Johnny Moon was a Sam as well. Johnny Moon was a madman also; Johnny Moon had indeed done bad. Johnny Moon was a handsome devil; Johnny Moon was a charming man. Johnny Moon went to heaven after Johnny Moon finished in Hell. Welcome To The Wonderful World of… | The Complex Collective © | By [The Festival Project ™] Breaking down that one scene from Ascension. How the fuck did these two actors even get into the realm of Ascension?
Being honest, I think it's like that part of the dream in The Wizard of Oz and/or Alice in Wonderland where everything just kind of bleeds together into one blurry, weird world before it all explodes—or implodes—whatever. Just kill yourself. (On my way.) Titus - Jason Sudeikis Perscimmion - Will Forte Why. I don't know why. The King, just fucking guess. (I'll let you decide.) Titus and Perscimmion— One argues this character's name is actually "Persimmon"… I've generally no preference myself, though I had first heard it as "Simeon"— Apparently, actually, "Perscimmon", or "Persimmon"—the former, however, not accurately, as in other contexts he is sometimes referred to as "Perci". Whatever. Why is this Will Forte. *shrugs* Cause whatever, I don't know. (I like his socks.) Titus and Perscimmon— Perscimmion Whatever. CUT TO: /Bedtime Stories with Chak Chel —or was it Chak Chel's Bedtime Stories. Whichever. No one cares. THE COSMIC AVENGER/SUPACREE Ugh, grow up. KIRSTEN SHAAL Or is it Kristin? Ugh. K. SHAAL It could be whoever, or whatever—anyone—right? GOOGLE KID 1 But it's not whoever. GOOGLE KID 2 It is whoever. GOOGLE KID 1 It's just two actors! GOOGLE KID 3 —then pick better actors! Watch it! K. SHAAL It could be whatever, it could be whoever… I could be whoever! I'm whoever. It doesn't matter. CUT BACK TO: {Enter The Multiverse} L E G E N D S Dissecting this recent excerpt from Ascension © The Festival Project, Inc. 2019 All rights reserved. — have just discovered the King's seduction of a lady in waiting: the reigning Queen of her own dominion, betrothed to another, also presumed to be, in his own right, a King. As scholars and members of the high court, both Titus and Perscimmion are groomed to keep watch over the happenings within each quarry, given jurisdiction by the Ascended Mastery to spectate freely throughout all lands, and as such they often travel—often in pairs or groups. Titus and Perscimmion Persimmon Whatever.
—have quickly departed, having spotted the King far out of bounds, to which the King quickly launches after these two Kingsmen in pursuit, and though their loyalty lies within no singular dictation, they somewhat begrudgingly agree it best to keep the King's secret, after he wearily explains to the men, as his friends, and genuinely, that he feels he has fallen truly in love with her. KING IV Titus! [Titus is annoyed and expecting there to be a fight] TITUS Mellow. (Chill, bro) KING IV Be bold, you! (If you have something to say, then say it now and let's duke it out.) TITUS Never—mellow I am, as are we. (Nah, I'm chillin. We cool.) (I'm good, he's good—we chillin.) PERCI Chaos, you've spelled it. (You've opened a can of worms, dude.) (You got us all fucked up.) (You fucked up.) KING IV I've spelled then many words For our wise, Nevermind before you found her waiting, Dusk was fallen And here you, cry out such a task- To have found her in waiting, Not I or heavy bound, But yet with lust, The breath of motherdom on her wicked truth The tied you have counted, For I whisked away with every since Your true intent, persist, I may. The King implies here that he's made many conscious choices and has been playing at this game as a King, which only other royalty might understand: the strife of making hard decisions in which others might be hurt, or even killed. He explains that he and this Queen have found common ground, confiding in one another's understanding of hardship as leaders, and that their attraction to one another has grown from this trust—naturally, and out of control; as he sees her maternal prime has approached, he suggests that he means no harm at all, but urges the men to think about what they plan to do with the discovery of their possible affair—nearly asking "what exactly do you plan to do with your knowledge of this?" (Are you finna tell on me?) (Who you finna tell?) TITUS Now. (Yo.) (srsly?)
[Titus is a bit pissed that the king would turn it around to imply that his knowledge of this secret could do more harm than the secret itself; he is quite visibly angry.] [Perci keeps the peace by holding his friend back.] PERCI Mellow. (Chill, bro.) KING IV You found for call my wants; Shallow, as it may My need ne'er far behind the broken, Does call to you, brother, And you also, For I widow in thought, My fury (I'm a man; I have needs—I often put my needs as a King behind those of my entire Kingdom—you're both men, so you know how it is; the feelings I have can't be ignored—it's primal.) A tear. [sarcasm. He's suggesting "cry about it." Or "why don't I believe you?" Or, blatantly—] (Cry me a river!) A tear, you ask But one does not cry as I seek Fair judgement and ridicule, Severed heart I, Come now awakened in To her, A dusk had come, Though night was golden A dawn arose with fury in my bosom Mine love awakened [He implies that to lose his composure would show weakness—the King also implies here that he does, however, feel horrible about it; that he expects to be reviled, killed, or even dethroned—that his heart has truly broken as he has discovered something new in him; he has fallen in love with her. That after spending the night with her, he had become anew.] TITUS Not love, but—[he begins to argue that it is only lust] PERCI Seldom! (Yeah right / that's rare.) KING IV Love, I bear you mine honest hands, The wilted rose, Blood upon thornes, Truly marks I who has come To wake in her (I'm telling you, I'm really in love with her.) [the king pleads that, painstakingly so, his love is pure and true] PERCI Then. (Whatever.) [Titus gives up and agrees] TITUS So, I mellow. (Okay, okay.) [finally Percimmon speaks his mind] (Or whatever the fuck his actual name is) ::||pause.
By now it ought to be obvious to you, dear reader and listener, that I am in fact dictating this—translating these things for you, sent from some faraway higher realm, for the sake of the art and with the purpose of your understanding my true intentions, as a fellow human and as a writer: to live in the way I desire, honestly and wholeheartedly, without further interruption to my sanctity and wellness, in peace—until my departure from this world. Does that quite say it? I don't know. Whatever. ::||Unpause. PERCI (By the way, apparently some descendant or incarnation of the god Percius, son of Zeus) PERCIUS PERSCIMMION SIMMEON PERSCIMMON PERCI (You get it, right?) It's infinite, And for the sake of this concept, Let's just consider this— All the same fucking guy, Or at the very least, Very closely interacting versions of this same guy Within these parallels Of time and space Wherein these worlds And realms Exist. Okay? Ok. Good. Proceeding. [this dude's pretty much been quiet the whole time but now is a little tiffed himself.] PERCI Did you fear for not The death that approaches, For now you call I, And our brethren here, For siren had sounded to wake, You in the light and there destined to love By blood is bound, And yet you wait, here now on high Calling to us, having been hound by light, Whether you did, or did not forsought Come as foreign And leave again Worried, feather feared at all That by this blood, you too shall weep, To reap again what you sow Or shall they say, As punishment, For cause just binds?? (Did it bother you at all to think that not only you might get killed, but get us all killed?!
Now you're asking us to lie for you— because all of a sudden, you're in love with this woman; a blood oath set in stone, and her having been betrothed— and here you come, running after us, after it finally occurs to you—whether you meant for it to happen or just “didn't think about it”, went all this way just to fuck shit up (complicate things), then come back home freaking out, running around like a chicken with your head cut off (acting like a crazy bird about to get eaten) saying that, whoever has to hurt or be killed over all this, you feel really bad about— but overall, know you what's coming to you, and you know, and I know, and he knows that we'll probably just all be better off not telling anybody about this…at least for now… but eventually, someone's bound to find out about this, and the less people “know”, the better…right?) KING IV Now. (Yeah.) TITUS I second. (I agree.) KING IV Here, too, I second, I third, even for not I as you, And you both as I, And how, The sun has set upon us, Why, death is sure to come As I rise, But give me no mercy, this Mellow now, I only beg What here has transpired Silence here, Between myself and I— Brethren. (So we all agree that it's better that this all just stays between us.) [the king implies that either way the truth will probably come out and he will die for it, but for now, the secret is best kept between them, with the understanding that they too could be killed in the vengeance and damage of the truth being told sooner than later.] Steady ye we all sigh as one. (I'm basically you.) / (if any of us go down, we all go down.) Steady ye as my death is yours. (We are one) (we're fucked, but whatever I guess.) Steady be my tongue as forced to lie with sacred heart true love does lie. (I hate having to do this but my love is true) So be it. (Fine) So, then. (Very well then.) Honor thy pardon. (Thank you guys.) Off, then. (Just …go.) 
(Get out) [the king quickly vanishes into the night] Damn, that took me longer to decode than I actually spent writing it. You—wrote this? I… Whatever. [The Festival Project.™] The Complex Collective © COPYRIGHT © THE FESTIVAL PROJECT 2024 ALL RIGHTS RESERVED. © -Ū.
They're grasping at straws to make me look and feel crazy, which can only mean— He's losing his power. Hopefully he's expecting another baby. Hopefully, for the baby's sake and its mother's, it's not a little girl. Even my big and strong boy might be irreversibly damaged at the hands of a psychotic narcissist with anger problems—and though surely he had tried to kill me any way he could, I had survived. Now, the tables had indeed turned in my favor. With enough time, the truth would be revealed not only to those above, but to all who knew us; I hadn't lost my mind at all, only finally found a pair of eyes that could see the world around me that they did not like—and a pair of legs to run away from it. The first time my ex-husband actually hit me— he had snapped, and though there had been other counts of shoving, heavy-handed close calls and other questionable events in the years leading up to this, it had never been what it turned out to be: his fist actually connecting with my face— not just once, but several times over and over until something got in the way— even years later, I didn't know what, but maybe just that I had stopped moving, or struggling to get away. “Play dead.” Maybe he thought I was dead—or maybe I was. Everything in the nearly eight years since seemed an inescapable and hellish nightmare—inescapable, that is, from him. Or, from “it.” The thing that had tried to kill me. Even after assuming an entirely new identity and separate life, this dirty, lazy, disgusting and altogether unpleasant energy seemed to follow me everywhere—and worse—this energy seemed to crawl into the other humans surrounding me, and like a parasite, never letting go. I wanted to die as much as anything just to never be reminded of him again. 
My thriving and success would make him look like a fool— more of one, anyhow, and either way— his jealousy of my life without him made it obvious how little and weak he actually was, though not on purpose, and, in some ways—many small ones—I had succeeded. Suddenly, everything became battlegrounds—fighting for my life as if somehow I were still in my abuser's presence and grip—the devil in him seeking me out in the world as if I had deserved it in the first place. No one really deserves to die like that. Especially not in front of their children. Now at least I knew he had no power alone, but that what one would call the Devil itself often lived inside of the weak—weak in spirit, weak-minded. Feeble and malleable, often fat and lazy people; it had become obvious— that people were the tools for this force to deplete the light and kindness, the good spirit and soul's purpose of others. I had forgiven him, but something indeed had rotted away the core I thought we once shared into a blackened depth of awful waste—the things about him belonging to a world I wished never to see or be part of. I had grown, and changed—and I was sure with time so had he; perhaps not, but I couldn't know and wouldn't want to, wishing only for the best for anyone's sake. But this thing that seemed to follow me was a pitiful, screaming and evil thing—I had let go, with the consistent reminders of the permanent scars left in the crevices of my lip, and on my face—and though an entire child and perhaps several women had come between us, his need for vengeance over my having left must have been maddening, a sweltering parasitic welt that riled up with enough fierceness to crawl into other sunken bodies, and surround my every waking moment. Not his power, at all, but a greater force of evil—the evil of all mankind—Satan himself seemed to have chosen me as his prey, my abuser as the illusion of conception. There for I, There for I, There for I, None! 
As truth did shatter mine ever being, And also Ever person near WHO VALIDATED THAT BITCH'S PARKING. —you think she drove here?! —if she did it would be on a broomstick. Goddammit. Get her out of here! Out! I said! You're…not a fan of Fallon's, are you. No, I'm not. (No—God, no.) Well, why not? First of all, he winks at people. ;) *cringe* Like, off camera. JIMMY O'FALLON And I want damages. Damages?! Damages. He's seeking damages?! To what. JIMMY O'FALLON Like, my entire—everything. Damages to everything. My entire life! Ah. [The Festival Project ™] I've got to admit, being sued by Jimmy Fallon is probably the most exciting thing that's ever happened in the entirety of this series! What about that thing with Skrillex. (That was pretty exciting.) Which thing with Skrillex? All the things with Skrillex were pretty exciting. (Admittedly, yes.) Then there was Dillon Francis. I hate Dillon Francis. Exactly. Why! Because he excited you. Next question! Ahead. Yo. I finally get to link up with Supacree. You're a mess. Everything is a mess. The world is a mess. —your mom's a mess. Amanda, please. Have you been drinking? How long has deadmau5 been a cat? Forever, I think. Exciting! Enter through the exit! Enter through the exit! Who the fuck let you in here. {Enter The Multiverse} MARTHA STEWART'S plan for world domination is complete. L E G E N D S Johnny Moon was a handsome fellow; Johnny Moon was a Sam as well. Johnny Moon was a madman also; Johnny Moon had indeed done bad. Johnny Moon was a handsome devil; Johnny Moon was a charming man Johnny Moon went to heaven after Johnny Moon finished in Hell. Welcome To The Wonderful World of… | The Complex Collective © | By [The Festival Project ™] Breaking down that one scene from Ascension. How the fuck did these two actors even get into the realm of ascension? 
Being honest, I think it's that part of the dream, like in The Wizard of Oz and/or Alice in Wonderland, where everything just kind of bleeds together into one blurry weird world before it all explodes—or implodes— Whatever, just kill yourself. (On my way.) Titus - Jason Sudeikis Perscimmion - Will Forte Why. I don't know why. The King? Just fucking guess. (I'll let you decide.) Titus and Perscimmion— One argues this character's name is actually “Persimmon”… I've generally no preference myself, though I had first heard it as “Simeon”— Apparently, actually, “Perscimmon”, or “Persimmon”, the former however not accurate, as in other contexts he is sometimes referred to as “Perci” Whatever. Why is this Will Forte. *shrugs* Cause whatever, I don't know. (I like his socks.) Titus and Perscimmon— Perscimmion Whatever. CUT TO: /Bedtime Stories with Chak Chel —or was it, Chak Chel's Bedtime Stories. Whichever. No one cares. THE COSMIC AVENGER/SUPACREE Ugh, grow up. KIRSTEN SHAAL Or is it Kristin? Ugh K, SHAAL It could be whoever, or whatever— anyone— right? GOOGLE KID 1 But it's not whoever. GOOGLE KID 2 It is whoever. GOOGLE KID 1 It's just two actors! GOOGLE KID 3 —then pick better actors! watch it! K. SHAAL It could be whatever, it could be whoever… I could be whoever! I'm whoever. It doesn't matter. CUT BACK TO: {Enter The Multiverse} L E G E N D S Dissecting this recent excerpt from Ascension © The Festival Project, Inc. 2019 All rights reserved. — have just discovered the King's seduction of a lady in waiting; the reigning Queen of her own dominion, betrothed to another, also presumed to be, in his own right, a King. As scholars and members of the high court, both Titus and Perscimmion are groomed to keep watch over the happenings within each quarry, as given jurisdiction by the Ascended Mastery to spectate freely throughout all lands, and as such; they often travel—often in pairs or groups. Titus and Perscimmion Persimmon Whatever. 
—have quickly departed, having spotted the King far out of bounds, to which the King quickly launches after these two Kingsmen in pursuit, and though their loyalty lies within no singular dictation, they somewhat begrudgingly agree it best to keep the King's secret, after he wearily explains to the men, as his friends, and genuinely, that he feels he has fallen truly in love with her. KING IV Titus! [Titus is annoyed and expecting there to be a fight] TITUS Mellow. (Chill, bro) KING IV Be bold, you! (If you have something to say, then say it now and let's duke it out.) TITUS Never—mellow I am, as are we. (Nah, I'm chillin. We cool.) (I'm good, he's good—we chillin.) PERCI Chaos, you've spelled it. (You've opened a can of worms, dude.) (You got us all fucked up.) (You fucked up.) KING IV I've spelled then many words For our wise, Nevermind before you found her waiting, Dusk was fallen And here you, cry out such a task- To have found her in waiting, Not I or heavy bound, But yet with lust, The breath of motherdom on her wicked truth The tied you have counted, For I whisked away with every since Your true intent, persist, I may. The King implies here that he's made many conscious choices and has been playing at this game as a King, which only other royalty might understand: the strife of making hard decisions in which others might be hurt— or even killed. He explains that he and this Queen have found common ground, confiding in one another's understanding of hardship as leaders, And that their attraction to one another has grown from this trust —naturally, and out of control; as he sees her maternal prime has approached; he suggests that he means no harm at all, but urges the men to think about what they plan to do with the discovery of their possible affair—nearly asking “what exactly do you plan to do with your knowledge of this?” (Are you finna tell on me?) (Who you finna tell?) TITUS Now. (Yo.) (srsly?) 
[Titus is a bit pissed that the king would turn it around to imply that his knowledge of this secret could do more harm than the secret itself; he is quite visibly angry.] [Perci keeps the peace by holding his friend back.] PERCI Mellow. (Chill, bro.) KING IV You found for call my wants; Shallow, as it may My need ne'er far behind the broken, Does call to you, brother, And you also, For I widow in thought, My fury (I'm a man; I have needs— I often put my needs as a King behind those of my entire Kingdom—you're both men; so you know how it is; the feelings I have can't be ignored—it's primal.) A tear. [sarcasm. He's suggesting “cry about it.” Or “why don't I believe you?” Or, blatantly—] (Cry me a river!) A tear, you ask But one does not cry as I seek Fair judgement and ridicule, Severed heart I, Come now awakened in To her, A dusk had come, Though night was golden A dawn arose with fury in my bosom Mine love awakened [He implies to lose his composure would show weakness—the King also implies here that he does, however, feel horrible about it; that he expects to be reviled, killed, or even dethroned—that his heart has truly broken as he has discovered something new in him; he has fallen in love with her. That after spending the night with her, he had become anew.] TITUS Not love, but—[he begins to argue that it is only lust] PERCI Seldom! (Yeah right/ that's rare.) KING IV Love, I bear you mine honest hands, The wilted rose, Blood upon thornes, Truly marks I who has come To wake in her (I'm telling you, I'm really in love with her.) [the king pleads that, painstakingly so, his love is pure and true] PERCI Then. (Whatever.) [Titus gives up and agrees] TITUS So, I mellow. (Okay, okay.) [finally Percimmon speaks his mind] (Or whatever the fuck his actual name is) ::||pause. 
By now it ought to be obvious to you, dear reader and listener, that I am, in fact, dictating this—translating these things for you, sent from some faraway higher realm, for the sake of the art and with the purpose of your understanding my true intentions, as fellow human and as a writer, to live in the way I desire, honestly and wholeheartedly, without further interruption to my sanctity and wellness, in peace— Until my departure from this world. Does that quite say it? I don't know. Whatever. ::||Unpause. PERCI (By the way, apparently some descendant or incarnation of the God Percius, son of Zeus) PERCIUS PERSCIMMION SIMMEON PERSCIMMON PERCI (You get it, right?) Mits infinite, And for the sake of this concept, Let's just consider this— All the same fucking guy, Or at the very least, Very closely interacting versions of this same guy Within these parallels Of time and space Wherein these worlds And realms Exist. Okay? Ok. Good. Proceeding. [this dude's pretty much been quiet the whole time but now is a little tiffed himself.] PERCI Did you fear for not The death that approaches, For now you call I, And our brethren here, For siren had sounded to wake, You in the light and there destined to love By blood is bound, And yet you wait, here now on high Calling to us, having been hound by light, Whether you did, or did not forsought Come as foreign And leave again Worried, feather feared at all That by this blood, you too shall weep, To reap again what you sow Or shall they say, As punishment, For cause just binds?? (Did it bother you at all to think that not only you might get killed, but get us all killed?! 
Now you're asking us to lie for you— because all of a sudden, you're in love with this woman; a blood oath set in stone, and her having been betrothed— and here you come, running after us, after it finally occurs to you—whether you meant for it to happen or just “didn't think about it”, went all this way just to fuck shit up (complicate things), then come back home freaking out, running around like a chicken with your head cut off (acting like a crazy bird about to get eaten) saying that, whoever has to hurt or be killed over all this, you feel really bad about— but overall, you know what's coming to you, and you know, and I know, and he knows that we'll probably just all be better off not telling anybody about this…at least for now… but eventually, someone's bound to find out about this, and the less people “know”, the better…right?) KING IV Now. (Yeah.) TITUS I second. (I agree.) KING IV Here, too, I second, I third, even for not I as you, And you both as I, And how, The sun has set upon us, Why, death is sure to come As I rise, But give me no mercy, this Mellow now, I only beg What here has transpired Silence here, Between myself and I— Brethren. (So we all agree that it's better that this all just stays between us.) [the king implies that either way the truth will probably come out and he will die for it, but for now, the secret is best kept between them, with the understanding that they too could be killed in the vengeance and damage of the truth being told sooner than later.] Steady ye we all sigh as one. (I'm basically you.) / (if any of us go down, we all go down.) Steady ye as my death is yours. (We are one) (we're fucked, but whatever I guess.) Steady be my tongue as forced to lie with sacred heart true love does lie. (I hate having to do this but my love is true) So be it. (Fine) So, then. (Very well then.) Honor thy pardon. (Thank you guys.) Off, then. (Just …go.) 
(Get out) [the king quickly vanishes into the night] Damn, that took me longer to decode than I actually spent writing it. You—wrote this? I… Whatever. [The Festival Project.™] The Complex Collective © COPYRIGHT © THE FESTIVAL PROJECT 2024 ALL RIGHTS RESERVED. © -Ū.
In this episode of 'Sleepy Seedlings: The Bedtime Podcast with Trees', we explore the tranquil beauty and quiet wisdom of the Luminous Persimmon tree. As summer slowly transitions to autumn, the persimmon's vibrant fruit ripens, reminding us that patience and perseverance often bring the sweetest rewards. With its deep roots and gentle presence, we'll see that growth comes in its own time, especially when the days grow cooler and life begins to slow down. Accompanied by the soft tones of wind chimes and the gentle wind rustling through the trees, this episode invites you to relax and reflect on the importance of waiting for things to come to fruition. Let the soothing sounds and peaceful reflections guide you into a restful sleep. Be sure to follow us on Instagram @sleepy_seedlings for more nature-inspired content and updates. Hosted on Acast. See acast.com/privacy for more information.
Wayne Hall, general manager of Wi Pere Trust's horticulture team in Gisborne, shares his love of persimmons and a recipe for how to best enjoy the fruit.
On this week's episode of Fully Equipped, GOLF's Jonathan Wall is joined by Gene Parente of Golf Laboratories to break down Bryson hitting a persimmon, whether there is a benefit to Tiger's rusty wedges and notable gear changes at the Open Championship. The episode then concludes with an exclusive interview featuring Callaway's Senior Director of Brand and Product Management Dave Neville talking about their new Opus wedge line. -- Thanks to our official sponsor Golf Pride and their new Reverse Taper putter grip. It's the most crucial split-second in golf. You can't worry about if the ball is going in the hole or not – if you haven't worried about what's happening at impact. That's what led Golf Pride to design a grip to ensure you're set up to succeed in that split-second. The very moment where the putt is decided. With REVERSE TAPER technology to ensure you have a more consistently square face at impact. A grip that's most impactful, during the most impactful split-second in golf. These new grips are now available. Visit https://Golfpride.com to learn more.
The UK Investor Magazine was thrilled to welcome Michael Field, European Equity Strategist at Morningstar, back to the podcast to jump headfirst into European equities and the key considerations for investors. Download Morningstar's European Equity Outlook Q3 2024. Michael has joined the UK Investor Magazine on a number of occasions over the past year and provided a fascinating insight into the macroeconomic factors driving European stocks and highlighted individual names offering value. This episode was no different. We start by looking at the key risks and rewards for European equities and explore the political environment. The discussion progresses to the sectors Morningstar sees value in before touching on Reckitt Benckiser. Morningstar previously highlighted Persimmon as a company that offered deep value. After a near 50% rally, we look at what the Labour government means for the UK housebuilding sector. Hosted on Acast. See acast.com/privacy for more information.
Tida Beattie (she/her) is a Thai-American end-of-life doula, grief support facilitator, and immigrant advocate. She creates radical spaces for immigrant families, their caregivers and their grievers to receive support addressing care, loss, death and grief. Tida knows the innate power within any immigrant family - perseverance, boldness, courage, resilience - and empowers them to be seen and heard so they advocate unapologetically for their needs and their human right to live and die with dignity, comfort, safety, and compassion.Find extensive show notes for this episode on our substack! If you haven't already, go ahead and subscribe to the podcast and sign up for our newsletter at www.wearemarigolde.com so you can be the first to know when new episodes drop.
In this episode, Oh Yes and Intern 1 talk about 자신감 (Jah Shin Gahm, "self-confidence").
Today, I tell you about the medicinal uses of these three trees, but also get into harvesting and preserving persimmons, cooking with them and even making persimmon beer and brandy! The Spring Foraging Cook Book is available in paperback on Amazon: https://www.amazon.com/dp/B0CRP63R54 Or you can buy the eBook as a .pdf directly from the author (me), for $9.99: https://southernappalachianherbs.blogspot.com/2024/01/the-spring-foraging-cookbook.html You can read about the Medicinal Trees book here https://southernappalachianherbs.blogspot.com/2021/06/paypal-safer-easier-way-to-pay-online.html or buy it on Amazon: https://www.amazon.com/dp/1005082936 PS. New in the woodcraft Shop: Judson Carroll Woodcraft | Substack Read about my new books: Medicinal Weeds and Grasses of the American Southeast, an Herbalist's Guide https://southernappalachianherbs.blogspot.com/2023/05/medicinal-weeds-and-grasses-of-american.html Available in paperback on Amazon: https://www.amazon.com/dp/B0C47LHTTH and Confirmation, an Autobiography of Faith https://southernappalachianherbs.blogspot.com/2023/05/confirmation-autobiography-of-faith.html Available in paperback on Amazon: https://www.amazon.com/dp/B0C47Q1JNK Visit my Substack and sign up for my free newsletter: https://judsoncarroll.substack.com/ Read about my other new books: Medicinal Ferns and Fern Allies, an Herbalist's Guide https://southernappalachianherbs.blogspot.com/2022/11/medicinal-ferns-and-fern-allies.html Available for purchase on Amazon: https://www.amazon.com/dp/B0BMSZSJPS The Omnivore's Guide to Home Cooking for Preppers, Homesteaders, Permaculture People and Everyone Else: https://southernappalachianherbs.blogspot.com/2022/10/the-omnivores-guide-to-home-cooking-for.html Available for purchase on Amazon: https://www.amazon.com/dp/B0BGKX37Q2 Medicinal Shrubs and Woody Vines of The American Southeast, an Herbalist's Guide https://southernappalachianherbs.blogspot.com/2022/06/medicinal-shrubs-and-woody-vines-of.html Available for purchase on Amazon 
https://www.amazon.com/dp/B0B2T4Y5L6 and Growing Your Survival Herb Garden for Preppers, Homesteaders and Everyone Else https://southernappalachianherbs.blogspot.com/2022/04/growing-your-survival-herb-garden-for.html https://www.amazon.com/dp/B09X4LYV9R The Encyclopedia of Medicinal Bitter Herbs: https://southernappalachianherbs.blogspot.com/2022/03/the-encyclopedia-of-bitter-medicina.html Available for purchase on Amazon: https://www.amazon.com/dp/B0B5MYJ35R and Christian Medicine, History and Practice: https://southernappalachianherbs.blogspot.com/2022/01/christian-herbal-medicine-history-and.html Available for purchase on Amazon: www.amazon.com/dp/B09P7RNCTB Herbal Medicine for Preppers, Homesteaders and Permaculture People: https://southernappalachianherbs.blogspot.com/2021/10/herbal-medicine-for-preppers.html Also available on Amazon: www.amazon.com/dp/B09HMWXL25 Podcast: https://www.spreaker.com/show/southern-appalachian-herbs Blog: https://southernappalachianherbs.blogspot.com/ Free Video Lessons: https://rumble.com/c/c-618325
INTERVIEW: Persimmon on new single 'True Crime' and upcoming debut album by Emily Kerr-Bell on Radio One 91FM Dunedin
Our next SF event is AI UX 2024 - let's see the new frontier for UX since last year! Last call: we are recording a preview of the AI Engineer World's Fair with swyx and Ben Dunphy, send any questions about Speaker CFPs and Sponsor Guides you have!
Alessio is now hiring engineers for a new startup he is incubating at Decibel: Ideal candidate is an “ex-technical co-founder type”. Reach out to him for more!
David Luan has been at the center of the modern AI revolution: he was the ~30th hire at OpenAI, he led Google's LLM efforts and co-led Google Brain, and then started Adept in 2022, one of the leading companies in the AI agents space. In today's episode, we asked David for some war stories from his time in early OpenAI (including working with Alec Radford ahead of the GPT-2 demo with Sam Altman, that resulted in Microsoft's initial $1b investment), and how Adept is building agents that can “do anything a human does on a computer” — his definition of useful AGI.
Why Google *couldn't* make GPT-3
While we wanted to discuss Adept, we couldn't talk to a former VP Eng of OpenAI and former LLM tech lead at Google Brain and not ask about the elephant in the room. It's often asked how Google had such a huge lead in 2017 with Vaswani et al creating the Transformer and Noam Shazeer predicting trillion-parameter models and yet it was David's team at OpenAI who ended up making GPT 1/2/3. David has some interesting answers:
“So I think the real story of GPT starts at Google, of course, right? Because that's where Transformers sort of came about. However, the number one shocking thing to me was that, and this is like a consequence of the way that Google is organized…what they (should) have done would be say, hey, Noam Shazeer, you're a brilliant guy. You know how to scale these things up. Here's half of all of our TPUs. And then I think they would have destroyed us. He clearly wanted it too…You know, every day we were scaling up GPT-3, I would wake up and just be stressed. 
And I was stressed because, you know, you just look at the facts, right? Google has all this compute. Google has all the people who invented all of these underlying technologies. There's a guy named Noam who's really smart, who's already gone and done this talk about how he wants a trillion parameter model. And I'm just like, we're probably just doing duplicative research to what he's doing. He's got this decoder only transformer that's probably going to get there before we do. And it turned out the whole time that they just couldn't get critical mass. So during my year where I led the Google LM effort and I was one of the Brain leads, you know, it became really clear why. At the time, there was a thing called the Brain Credit Marketplace. Everyone's assigned a credit. So if you have a credit, you get to buy N chips according to supply and demand. So if you want to go do a giant job, you had to convince like 19 or 20 of your colleagues not to do work. And if that's how it works, it's really hard to get that bottom up critical mass to go scale these things. And the team at Google were fighting valiantly, but we were able to beat them simply because we took big swings and we focused.”
Cloning HGI for AGI
Human intelligence got to where it is today through evolution. Some argue that to get to AGI, we will approximate all the “FLOPs” that went into that process, an approach most famously mapped out by Ajeya Cotra's Biological Anchors report:
The early days of OpenAI were very reinforcement learning-driven with the Dota project, but that's a very inefficient way for these models to re-learn everything. (Kanjun from Imbue shared similar ideas in her episode.) David argues that there's a shortcut. We can bootstrap from existing intelligence.
“Years ago, I had a debate with a Berkeley professor as to what will it actually take to build AGI. 
And his view is basically that you have to reproduce all the flops that went into evolution in order to be able to get there… I think we are ignoring the fact that you have a giant shortcut, which is you can behaviorally clone everything humans already know. And that's what we solved with LLMs!”
LLMs today basically model intelligence using all (good!) written knowledge (see our Datasets 101 episode), and have now expanded to non-verbal knowledge (see our HuggingFace episode on multimodality). The SOTA self-supervised pre-training process is surprisingly data-efficient in taking large amounts of unstructured data, and approximating reasoning without overfitting. But how do you cross the gap from the LLMs of today to building the AGI we all want? This is why David & friends left to start Adept.
“We believe the clearest framing of general intelligence is a system that can do anything a human can do in front of a computer. A foundation model for actions, trained to use every software tool, API, and webapp that exists, is a practical path to this ambitious goal” — ACT-1 Blogpost
Critical Path: Abstraction with Reliability
The AGI dream is fully autonomous agents, but there are levels to autonomy that we are comfortable giving our agents, based on how reliable they are. In David's word choice, we always want higher levels of “abstractions” (aka autonomy), but our need for “reliability” is the practical limit on how high of an abstraction we can use.
“The critical path for Adept is we want to build agents that can do higher and higher level abstraction things over time, all while keeping an insanely high reliability standard. Because that's what turns us from research into something that customers want. And if you build agents with really high reliability standard, but are continuing pushing a level of abstraction, you then learn from your users how to get that next level of abstraction faster. So that's how you actually build the data flow. 
That's the critical path for the company. Everything we do is in service of that.”
We saw how Adept thinks about different levels of abstraction at the 2023 Summit:
The highest abstraction is the “AI Employee”, but we'll get there with “AI enabled employees”. Alessio recently gave a talk about the future of work with “services as software” at this week's Nvidia GTC (slides).
No APIs
Unlike a lot of large research labs, Adept's framing of AGI as “being able to use your computer like a human” carries with it a useful environmental constraint:
“Having a human robot lets you do things that humans do without changing everything along the way. It's the same thing for software, right? If you go itemize out the number of things you want to do on your computer for which every step has an API, those numbers of workflows add up pretty close to zero. And so then many points along the way, you need the ability to actually control your computer like a human. It also lets you learn from human usage of computers as a source of training data that you don't get if you have to somehow figure out how every particular step needs to be some particular custom private API thing. And so I think this is actually the most practical path (to economic value).”
This realization and conviction means that multimodal models are the way to go. Instead of using function calling to call APIs to build agents, which is what OpenAI and most of the open LLM industry have done to date, Adept wants to “drive by vision” (aka see the screen as a human sees it) and pinpoint where to click and type as a human does. 
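As a rough illustration of this "drive by vision" approach, here is a minimal sketch of such an observe-act loop. All names are invented for illustration; Adept's actual agent internals are not public, and a real system would run a multimodal model where the stub below sits:

```python
from dataclasses import dataclass

# Hypothetical sketch of a "drive by vision" agent loop: pixels in,
# GUI actions out, no application APIs involved. All names are
# invented for illustration; this is not Adept's actual code.

@dataclass
class Action:
    kind: str           # "click", "type", or "done"
    x: int = 0
    y: int = 0
    text: str = ""

def propose_action(screenshot, goal, step):
    """Stand-in for a multimodal model mapping (pixels, goal) -> action."""
    # A real system would run a vision-language model over the screenshot;
    # this stub just types the goal once and then stops.
    if step == 0:
        return Action(kind="type", text=goal)
    return Action(kind="done")

def run_agent(capture_screen, execute, goal, max_steps=10):
    """Observe-act loop: screenshot in, GUI action out, until done."""
    trace = []
    for step in range(max_steps):
        frame = capture_screen()               # pixels, like a human's view
        action = propose_action(frame, goal, step)
        if action.kind == "done":
            break
        execute(action)                        # synthesize the click/keystroke
        trace.append(action)
    return trace

# Example with stubbed screen capture and input synthesis:
trace = run_agent(lambda: b"fake-pixels",
                  lambda a: None,
                  goal="search persimmon recipes")
print([a.kind for a in trace])  # prints ['type']
```

In a real deployment, `capture_screen` and `execute` would be OS-level screenshot and input-event hooks; the loop shape stays the same, and the reliability question above becomes how many of these steps you trust the model to take unsupervised.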
No APIs needed, because most software doesn't expose APIs.
Extra context for readers: You can see the DeepMind SIMA model in the same light: one system that learned to play a diverse set of games (instead of one dedicated model per game) using only pixel inputs and keyboard-and-mouse action outputs! The OpenInterpreter team is working on a “Computer API” that also does the same.
To do this, Adept had to double down on a special kind of multimodality for knowledge work:
“A giant thing that was really necessary is really fast multimodal models that are really good at understanding knowledge work and really good at understanding screens. And that needs to kind of be the base for some of these agents…
…I think one big hangover of the primarily academic focus for multimodal models is most multimodal models are primarily trained on like natural images, cat and dog photos, stuff that's come out of the camera… (but) where are they going to be the most useful? They're going to be most useful in knowledge work tasks. That's where the majority of economic value is going to be. It's not in cats and dogs. And so if that's what it is, what do you need to train? I need to train on like charts, graphs, tables, invoices, PDFs, receipts, unstructured data, UIs. That's just a totally different pre-training corpus. And so Adept spent a lot of time building that.”
With this context, you can now understand the full path of Adept's public releases:
* ACT-1 (Sept 2022): a large Transformers model optimized for browser interactions. It has a custom rendering of the browser viewport that allows it to better understand it and take actions.
* Persimmon-8B (Sept 2023): a permissive open LLM (weights and code here)
* Fuyu-8B (Oct 2023): a small version of the multimodal model that powers Adept. 
Vanilla decoder-only transformer with no specialized image encoder, which allows it to handle input images of varying resolutions without downsampling.
* Adept Experiments (Nov 2023): A public tool to build automations in the browser. This is powered by Adept's core technology but it's just a piece of their enterprise platform. They use it as a way to try various design ideas.
* Fuyu Heavy (Jan 2024): a new multimodal model designed specifically for digital agents and the world's third-most-capable multimodal model (beating Gemini Pro on MMMU, AI2D, and ChartQA), “behind only GPT4-V and Gemini Ultra, which are 10-20 times bigger”
The Fuyu-8B post in particular exhibits a great number of examples on knowledge work multimodality:
Why Adept is NOT a Research Lab
With OpenAI now worth >$90b and Anthropic >$18b, it is tempting to conclude that the AI startup metagame is to build a large research lab, and attract the brightest minds and highest capital to build AGI. Our past guests (see the Humanloop episode) and (from Imbue) combined to ask the most challenging questions of the pod - with David/Adept's deep research pedigree from DeepMind and OpenAI, why is Adept not building more general foundation models (like Persimmon) and playing the academic benchmarks game? Why is Adept so focused on commercial agents instead?
“I feel super good that we're doing foundation models in service of agents and all of the reward within Adept is flowing from “Can we make a better agent”…… I think pure play foundation model companies are just going to be pinched by how good the next couple of (Meta Llama models) are going to be… And then seeing the really big players put ridiculous amounts of compute behind just training these base foundation models, I think is going to commoditize a lot of the regular LLMs and soon regular multimodal models. 
So I feel really good that we're just focused on agents.”

And the commercial grounding is his answer to Kanjun too (whom we asked the inverse question, to compare with Adept):

“…the second reason I work at Adept is: if you believe that actually having customers and a reward signal from customers lets you build AGI faster, which we really believe, then you should come here. And I think the example for why that's true is, for example, our evaluations are not academic evals. They're not simulator evals. They're like, okay, we have a customer that really needs us to do these particular things. We can do some of them; these other ones they want us to do, we can't do at all. We've turned those into evals… I think that's a degree of practicality that really helps.”

And his customers seem pretty happy, because David didn't need to come on to do a sales pitch:

David: “One of the things we haven't shared before is we're completely sold out for Q1.”
Swyx: “Sold out of what?”
David: “Sold out of bandwidth to onboard more customers.”

Well, that's a great problem to have.

Show Notes
* David Luan
* Dextro at Data Driven NYC (2015)
* Adept
* ACT-1
* Persimmon-8B
* Adept Experiments
* Fuyu-8B
* $350M Series B announcement
* Amelia Wattenberger talk at AI Engineer Summit
* Figure

Chapters
* [00:00:00] Introductions
* [00:01:14] Being employee #30 at OpenAI and its early days
* [00:13:38] What is Adept and how do you define AGI?
* [00:21:00] Adept's critical path and research directions
* [00:26:23] How AI agents should interact with software and impact product development
* [00:30:37] Analogies between AI agents and self-driving car development
* [00:32:42] Balancing reliability, cost, speed and generality in AI agents
* [00:37:30] Potential of foundation models for robotics
* [00:39:22] Core research questions and reasons to work at Adept

Transcript
Alessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast.
This is Alessio, partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.
Swyx [00:00:15]: Hey, and today we have David Luan, CEO and co-founder of Adept, in the studio. Welcome.
David [00:00:20]: Yeah, thanks for having me.
Swyx [00:00:21]: Been a while in the works. I've met you socially at one of those VC events, and you said that you were interested in coming on, and glad we finally were able to make this happen.
David: Yeah, happy to be part of it.
Swyx: So we like to introduce the speaker and then also just have you talk a little bit about what's not on your LinkedIn, what people should just generally know about you. You started a company in college, which was the first sort of real-time video detection and classification API, that was Dextro, and that was your route to getting acquired into Axon, where you were a director of AI. Then you were the 30th hire at OpenAI?
David [00:00:53]: Yeah, 30, 35, something around there. Something like that.
Swyx [00:00:56]: So you were VP of Eng for about two to two and a half years, briefly served as tech lead of large models at Google, and then in 2022 started Adept. So that's the sort of brief CV. Is there anything else you want to fill in the blanks on, or that people should know more about?
David [00:01:14]: I guess a broader story was I joined OpenAI fairly early and I did that for about two and a half to three years leading engineering there. It's really funny, I think the second or third day of my time at OpenAI, Greg and Ilya pulled me in a room and were like, you know, you should take over our directs and we'll go mostly do IC work. So that was fun, just coalescing a bunch of teams out of a couple of early initiatives that had already happened. The company, the Dota effort, was going pretty hard, and then more broadly trying to put bigger-picture direction around what we were doing with basic research. So I spent a lot of time doing that.
And then I led Google's LLM efforts, but I also co-led Google Brain; I was one of the Brain leads more broadly. You know, there have been a couple of different eras of AI research, right? If we count everything before 2012 as prehistory (which people hate when I say that), we kind of had this "you and your three best friends write a research paper that changes the world" period from like 2012 to 2017. And I think the game changed in 2017, and most labs didn't realize it, but we at OpenAI really did. I think in large part helped by Ilya's constant beating of the drum that the world would be covered in data centers. And I think-
Swyx [00:02:15]: It's causally neat.
David [00:02:16]: Yeah. Well, I think we had conviction in that, but it wasn't until we started seeing results that it became clear that that was where we had to go. But also part of it was that for OpenAI, when I first joined, I think one of the jobs that I had to do was: how do I tell a differentiated vision for who we were technically, compared to, you know, "hey, we're just a smaller Google Brain," or "you work at OpenAI if you live in SF and don't want to commute to Mountain View or don't want to live in London," right? That's not enough to hang your technical identity on as a company. And so what we really did, and I spent a lot of time pushing this, is figure out how to get ourselves focused on a certain class of giant swings and bets, right? How do you flip the script from "you just do bottom-up research" to leaving some room for that, but really making it about what big scientific outcomes you want to show, and then solving them at all costs, whether or not you care about novelty and all that stuff? And that became the dominant model for a couple of years, right?
And then what's changed now is I think the number one driver of AI products over the next couple of years is going to be the deep co-design and co-evolution of product and users for feedback and actual technology. And I think labs that have every tool to go do that are going to do really well. And that's a big part of why I started Adept.
Alessio [00:03:20]: You mentioned Dota. Any memories of the switch from RL to Transformers at the time, and how the industry was evolving more on the LLM side and leaving behind some of the more agent simulation work?
David [00:03:33]: Zooming way out, I think agents are just absolutely the correct long-term direction, right? You just go define what AGI is, right? You're like, hey, well, first off, actually, I don't love AGI definitions that involve human replacement, because I don't think that's actually how it's going to happen. Even the definition of "AGI is something that outperforms humans at economically valuable tasks" has a kind of implicit view of the world about what's going to be the role of people. I think what I'm more interested in is a definition of AGI that's oriented around a model that can do anything a human can do on a computer. If you go think about that, which is super tractable, then agents are just a natural consequence of that definition. And so what did all the work we did on RL and stuff like that get us? It got us a really clear formulation. You have a goal and you want to maximize the goal, you want to maximize reward, right? And the natural LLM formulation doesn't come with that out of the box, right? I think that we as a field got a lot right by thinking about, hey, how do we solve problems of that caliber? And then the thing we forgot is that de novo RL is a pretty terrible way to get there quickly. Why are we rediscovering all the knowledge about the world?
Years ago, I had a debate with a Berkeley professor as to what it will actually take to build AGI. And his view is basically that you have to reproduce all the flops that went into evolution in order to be able to get there. Right.
Swyx [00:04:44]: The biological basis theory. Right.
David [00:04:46]: So I think we are ignoring the fact that you have a giant shortcut, which is that you can behaviorally clone everything humans already know. And that's what we solved with LLMs. We've solved behavioral cloning of everything that humans already know. Right. So today, maybe LLMs are behaviorally cloning every word that gets written on the internet; in the future, the multimodal models are becoming more of a thing, where we're behaviorally cloning the visual world. But really, what we're just going to have is a universal byte model, right? Where tokens of data that have high signal come in, and then all of those patterns are learned by the model, and then you can regurgitate any combination now, right? So text in, voice out; image in, other image out, or video out, or whatever. These mappings are all just going to be learned by this universal behavioral cloner. And so I'm glad we figured that out. And I think now we're back to the era of how do we combine this with all of the lessons we learned during the RL period. That's what's going to drive progress.
Swyx [00:05:35]: I'm still going to pressure you for a few more early OpenAI stories before we turn to the Adept stuff. On your personal site, which I love, because it has really nice personal story context around your history. I need to update it. It's so old. Yeah, it's so out of date. But you mentioned GPT-2. Did you overlap with GPT-1? I think you did, right?
David [00:05:53]: I actually don't quite remember. I think I was joining right around- Right around then?
Swyx [00:05:57]: I was right around that, yeah. Yeah.
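A quick aside for readers on the "behavioral cloning" framing above: behavioral cloning is just supervised learning on logged human behavior. You fit a policy to (observation, action) pairs by maximum likelihood, with no reward signal and no environment interaction. A toy sketch (the data and the `clone_policy` helper are purely illustrative, not anything from OpenAI or Adept):

```python
from collections import Counter, defaultdict

def clone_policy(demonstrations):
    """Fit a tabular policy to (observation, action) pairs by maximum
    likelihood: for discrete data, that's simply the most frequent
    action the human demonstrators took in each observed state."""
    counts = defaultdict(Counter)
    for obs, action in demonstrations:
        counts[obs][action] += 1
    return {obs: c.most_common(1)[0][0] for obs, c in counts.items()}

# Toy "human demonstrations": what people wrote after each prompt.
demos = [
    ("2+2=", "4"), ("2+2=", "4"), ("2+2=", "five"),
    ("capital of France:", "Paris"),
]
policy = clone_policy(demos)
print(policy["2+2="])                # "4" (majority action)
print(policy["capital of France:"])  # "Paris"
```

An LLM is the same idea at enormous scale: the "observation" is the token prefix, the "action" is the next token, cloned from everything humans have written; the universal byte model David describes extends the same loss to tokens from every modality.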
So what I remember was Alec, you know, just kind of came in and was very obsessed with Transformers and applying them to like Reddit sentiment analysis. Yeah, sentiment, that's right. Take us through-
David [00:06:09]: Sentiment neuron, all this stuff.
Swyx [00:06:10]: The history of GPT as far as you know, according to you. Ah, okay.
David [00:06:14]: The history of GPT, according to me. That's a pretty good question. So I think the real story of GPT starts at Google, of course, right? Because that's where Transformers came about. However, the number one shocking thing to me was that, and this is a consequence of the way that Google is organized, where, again, you and your three best friends write papers, right? Okay. So zooming way out, I think about my job when I was a full-time research leader as being a little bit of a portfolio allocator, right? So I've got really, really smart people. My job is to convince people to coalesce around a small number of really good ideas and then run them over the finish line. My job is not actually to promote a million ideas and never have critical mass. And then as the ideas start coming together and some of them start working well, my job is to nudge resources towards the things that are really working and then start disbanding some of the things that are not working, right? That muscle did not exist during my time at Google. And I think had they had it, what they would have done would be to say, hey, Noam Shazeer, you're a brilliant guy. You know how to scale these things up. Here's half of all of our TPUs. And then I think they would have destroyed us. He clearly wanted it too.
Swyx [00:07:17]: He was talking about trillion parameter models in 2017.
David [00:07:20]: Yeah. So that's the core of the GPT story, right? Which is that, and I'm jumping around historically here, but after GPT-2, we were all really excited about GPT-2. I can tell you more stories about that.
It was the last paper that I even got to really touch before everything became more about building a research org. You know, every day we were scaling up GPT-3, I would wake up and just be stressed. And I was stressed because, you know, you just look at the facts, right? Google has all this compute. Google has all the people who invented all of these underlying technologies. There's a guy named Noam who's really smart, who's already gone and done this talk about how he wants a trillion parameter model. And I'm just like, we're probably just doing duplicative research to what he's doing, right? He's got this decoder-only transformer that's probably going to get there before we do. And I was like, but please just let this model finish, right? And it turned out the whole time that they just couldn't get critical mass. So during my year where I led the Google LLM effort and was one of the Brain leads, you know, it became really clear why, right? At the time, there was a thing called the brain credit marketplace. Did you guys know about the brain credit marketplace? No, I never heard of this. Oh, so you can ask any Googler.
Swyx [00:08:23]: It's just a thing that, I mean, look, yeah, limited resources, you've got to have some kind of marketplace, right? You know, sometimes it's explicit, sometimes it isn't, you know, just political favors.
David [00:08:34]: You could. And so then basically everyone's assigned a credit, right? So if you have a credit, you get to buy N chips according to supply and demand. So if you want to go do a giant job, you had to convince like 19 or 20 of your colleagues not to do work. And if that's how it works, it's really hard to get that bottom-up critical mass to go scale these things. And the team at Google were fighting valiantly, but we were able to beat them simply because we took big swings and we focused.
And I think, again, that's part of the narrative of this phase one of AI, of this modern AI era, to phase two. And I think in the same way, phase three companies are going to out-execute phase two companies because of the same asymmetry of success.
Swyx [00:09:12]: Yeah. I think it's underrated how much NVIDIA worked with you in the early days as well. I think it was Jensen. I'm not sure who circulated a recent photo of him delivering the first DGX to you guys.
David [00:09:24]: I think Jensen has been a complete legend and a mastermind throughout. I have so much respect for NVIDIA. It is unreal.
Swyx [00:09:34]: But did OpenAI kind of give their requirements and co-design it, or just work with whatever NVIDIA gave them?
David [00:09:40]: So we worked really closely with them. I'm not sure I can share all the stories, but there are examples I've found particularly interesting. So Scott Gray is amazing. I really like working with him. He was on one of my teams, the supercomputing team, which Chris Berner runs, and Chris Berner still does a lot of stuff in that. As a result, we had very close ties to NVIDIA. Actually, one of my co-founders at Adept, Erich Elsen, was also one of the early GPGPU people. So he and Scott and Brian Catanzaro at NVIDIA, and Jonah and Ian at NVIDIA, I think all were very close. And we're all sort of part of this group of how do we push these chips to the absolute limit. And I think that kind of collaboration helped quite a bit. One interesting set of stuff is knowing, in the A100 generation, that quad sparsity was going to be a thing. Is that something that we want to go look into, and figure out if that's something that we could actually use for model training? Really what it boils down to is that, and I think more and more people realize this (six years ago people refused to accept it, even three years ago people refused to accept it), this era of AI is really a story of compute.
It's really the story of how you more efficiently map actual usable model flops to compute.
Swyx [00:10:38]: Is there another GPT-2 or GPT-3 story that you love to get out there, that you think is underappreciated for the amount of work that people put into it?
David [00:10:48]: So, two interesting GPT-2 stories. One of them was I spent a good bit of time just sprinting to help Alec get the paper out. And I remember one of the most entertaining moments was we were writing the modeling section. And I'm pretty sure the modeling section was the shortest modeling section of any reasonably legitimate ML paper to that moment. It was like, section three: model. This is a standard vanilla decoder-only transformer with these particular things. It was a paragraph long, if I remember correctly. And both of us were just looking at the same thing, being like, man, the OGs in the field are going to hate this. They're going to say no novelty. Why did you guys do this work? So now it's funny to look at in hindsight that it was a pivotal paper, but I think it was one of the early ones where we just leaned fully into "all we care about is solving problems in AI," and not, hey, are there four different really simple ideas that are cloaked in mathematical language that don't actually help move the field forward?
Swyx [00:11:42]: Right. And it's like you innovate on maybe the data set and scaling, and not so much the architecture.
David [00:11:48]: We all know how it works now, right? Which is that there's a collection of really hard-won knowledge that you get only by being at the frontiers of scale. And that hard-won knowledge, a lot of it's not published. A lot of it is stuff that's actually not even easily reducible to what looks like a typical academic paper. But yet that's the stuff that helps differentiate one scaling program from another. You had a second one?
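An aside for readers on the A100 "quad sparsity" detail above: this refers to NVIDIA's 2:4 fine-grained structured sparsity, where at most two of every four consecutive weights are nonzero, a pattern Ampere tensor cores can execute at roughly double throughput. A minimal sketch of pruning a weight row into that pattern (illustrative only; not Adept's or OpenAI's training code):

```python
def prune_2_of_4(weights):
    """Zero the two smallest-magnitude weights in every group of four,
    producing the 2:4 sparsity pattern A100 tensor cores accelerate."""
    assert len(weights) % 4 == 0, "row length must be a multiple of 4"
    pruned = list(weights)
    for i in range(0, len(pruned), 4):
        # Indices of this group of four, ordered smallest |w| first.
        order = sorted(range(i, i + 4), key=lambda j: abs(pruned[j]))
        for j in order[:2]:  # drop the two smallest per group
            pruned[j] = 0.0
    return pruned

row = [0.9, -0.1, 0.05, -0.7, 0.2, 0.3, -0.25, 0.01]
print(prune_2_of_4(row))  # [0.9, 0.0, 0.0, -0.7, 0.0, 0.3, -0.25, 0.0]
```

In practice, magnitude pruning like this is paired with fine-tuning so the model recovers accuracy; the open question David describes is whether the feature was worth exploiting for model training at scale.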
So the second one is, there are some details here that I probably shouldn't fully share, but hilariously enough, for the last meeting we did with Microsoft before Microsoft invested in OpenAI, Sam Altman, myself, and our CFO flew up to Seattle to do the final pitch meeting. And I'd been a founder before, so I always had a tremendous amount of anxiety about partner meetings, which is basically what this was. We had Kevin Scott and Satya and Amy Hood, and it was my job to give the technical slides about what's the path to AGI, what's our research portfolio, all of this stuff, but it was also my job to give the GPT-2 demo. We had a slightly bigger version of GPT-2 that we had just cut maybe a day or two before this flight up. And as we all know now, model behaviors you find predictable at one checkpoint are not predictable in another checkpoint. And so I'd spent all this time trying to figure out how to keep this thing on rails. I had my canned demos, but I knew I had to turn it over to Satya and Kevin and let them type anything in. And that just, that really kept me up all night.
Swyx [00:13:06]: Nice. Yeah.
Alessio [00:13:08]: I mean, that must have helped you. Talking about partner meetings, you raised $420 million for Adept. The last round was a $350 million Series B, so I'm sure you do great in partner meetings.
Swyx [00:13:18]: Pitch meetings. Nice.
David [00:13:20]: No, that's a high compliment coming from a VC.
Alessio [00:13:22]: Yeah, no, I mean, you're doing great already for us. Let's talk about Adept. We were doing pre-prep and you mentioned that maybe a lot of people don't understand what Adept is. So usually we try and introduce the product and then have the founders fill in the blanks, but maybe let's do the reverse. What is Adept? Yeah.
David [00:13:38]: So I think Adept is the least understood company in the broader space of foundation models plus agents.
So I'll give some color and I'll explain what it is and I'll explain also why it's actually pretty different from what people would have guessed. So the goal for Adept is we basically want to build an AI agent that can do, that can basically help humans do anything a human does on a computer. And so what that really means is we want this thing to be super good at turning natural language like goal specifications right into the correct set of end steps and then also have all the correct sensors and actuators to go get that thing done for you across any software tool that you already use. And so the end vision of this is effectively like I think in a couple of years everyone's going to have access to like an AI teammate that they can delegate arbitrary tasks to and then also be able to, you know, use it as a sounding board and just be way, way, way more productive. Right. And just changes the shape of every job from something where you're mostly doing execution to something where you're mostly actually doing like these core liberal arts skills of what should I be doing and why. Right. And I find this like really exciting and motivating because I think it's actually a pretty different vision for how AGI will play out. I think systems like Adept are the most likely systems to be proto-AGIs. But I think the ways in which we are really counterintuitive to everybody is that we've actually been really quiet because we are not a developer company. We don't sell APIs. We don't sell open source models. We also don't sell bottom up products. We're not a thing that you go and click and download the extension and like we want more users signing up for that thing. We're actually an enterprise company. So what we do is we work with a range of different companies, some like late stage multi-thousand people startups, some fortune 500s, et cetera. 
And what we do for them is we basically give them an out-of-the-box solution where big, complex workflows that their employees do every day can be delegated to the model. And so we look a little different from other companies in that, in order to go build this full agent thing, the most important thing you've got to get right is reliability. So initially, zooming way back when, one of the first things that Adept did was we released this demo called Act-1, right? Act-1 was pretty cool. It's kind of become a hello world thing for people to show agent demos: going to Redfin and asking to buy a house somewhere, because we did that in the original Act-1 demo, and it showed Google Sheets, all this other stuff. Over the last year since that came out, there have been a lot of really cool demos, and you go play with them and you realize they work 60% of the time. But since we've always been focused on how do we build an amazing enterprise product, enterprises can't use anything that isn't in the nines of reliability. And so we've actually had to go down a slightly different tech tree than what you might find in the prompt engineering sort of plays in the agent space to get that reliability. And we've decided to prioritize reliability over all else. So one of our use cases is crazy enough that it actually ends with a physical truck being sent to a place as the result of the agent workflow. And if that works 60% of the time, you're just blowing money and sending poor truck drivers places.
Alessio [00:16:30]: Interesting. One of our investment teams has this idea of services as software. I'm actually giving a talk at NVIDIA GTC about this. Basically, software as a service wraps user productivity in software; with agents, services as software replaces things that, you know, you would ask somebody to do, and the software just does it for you.
When you think about these use cases, do the users still go in and look at the agent doing the things, and can they intervene, or are they totally removed? Like the truck thing: does the truck just show up, or are there people in the middle checking in?
David [00:17:04]: I think there are two current flaws in the framing of services as software, or in what you just said. One of them is that, in our experience as we've been rolling out Adept, the people who actually do the jobs are the most excited about it, because they don't go from "I do this job" to "I don't do this job." They go from "I do this job for everything, including the shitty rote stuff" to "I'm a supervisor." And it's pretty magical when you watch the thing being used, because now it parallelizes a bunch of the things that you had to do sequentially by hand as a human. And you can just click into any one of them and be like, hey, I want to watch the trajectory that the agent went through to go solve this. And the nice thing about agent execution, as opposed to LLM generations, is that a good chunk of the time when the agent fails to execute, it doesn't give you the wrong result. It just fails to execute. The whole trajectory is just broken and dead, and the agent knows it, right? So then those are the ones that the human goes and solves. And so then they become a troubleshooter. They work on the more challenging stuff. They get way, way more stuff done, and they're really excited about it. I think the second piece of it that we've found is that our strategy as a company is to always be an augmentation company. And I think one, out of principle, that's something we really care about.
But two, actually, if you're framing yourself as an augmentation company, you're always going to live in a world where you're solving tasks that are a little too hard for what the model can do today and still need a human to provide oversight, provide clarifications, provide human feedback. And that's how you build a data flywheel. That's how you actually learn from the smartest humans how to solve things models can't do today. And so I actually think that being an augmentation company forces you to develop your core AI capabilities faster than someone who's saying, ah, okay, my job is to deliver you a lights-off solution for X.
Alessio [00:18:42]: Yeah. It's interesting, because we've seen two parts of the market. One is, we have one company that does agents for SOC analysts. People just don't have them, you know, and they just cannot attract the talent to do it. And similarly, in software development, you have Copilot, which is the augmentation product, and then you have sweep.dev and these products which just do the whole thing. I'm really curious to see how that evolves. I agree that today reliability is so important in the enterprise that they just don't use most of them. Yeah. Yeah. No, that's cool. But it's great to hear the story, because I think from the outside, people are like, oh, Adept, they do Act-1, they do Persimmon, they do Fuyu, they do all this stuff. Yeah, it's just the public stuff.
Swyx [00:19:20]: It's just public stuff.
David [00:19:21]: So one of the things we haven't shared before is we're completely sold out for Q1. And so I think...
Swyx [00:19:26]: Sold out of what?
David [00:19:27]: Sold out of bandwidth to onboard more customers. And so we're working really hard to go make that less of a bottleneck, but our expectation is that we're going to be significantly more public about the broader product shape and the new types of customers we want to attract later this year.
So I think that clarification will happen by default.
Swyx [00:19:43]: Why have you become more public? You know, if the whole push has... You're sold out, you're all enterprise, but you're also clearly putting effort towards being more open or releasing more things.
David [00:19:53]: I think we just flipped over that way fairly recently. That's a good question. I think it actually boils down to two things. One, I think that, frankly, a big part of it is that the public narrative is really forming around agents as being the most important thing. And I'm really glad that's happening, because when we started the company in January 2022, everybody in the field knew about the agents thing from RL, but the general public had no conception of what it was. They were still hanging their narrative hat on the tree of "everything's a chatbot." And so I think now one of the things that I really care about is that when people think agent, they actually think the right thing. All sorts of different things are being called agents. Chatbots are being called agents. Things that make a function call are being called agents. To me, an agent is something that you can give a goal and get an end-to-end workflow done correctly in the minimum number of steps. And so that's a big part of why. And I think the other part is that it's always good for people to be more aware of Adept as they think about the next thing they want to do in their careers. The field is quickly pivoting in a world where foundation models are looking more and more like a commodity. And I think a huge amount of gain is going to happen from how you use foundation models as the well-learned behavioral cloner to go solve agents. And I think people who want to do agents research should really come to Adept.
Swyx [00:21:00]: When you say agents have become more part of the public narrative, are there specific things that you point to? I'll name a few. Bill Gates, in his blog post, mentioning that agents are the future.
I'm the guy who made OSes, and I think agents are the next thing. So Bill Gates, I'll call that out. And then maybe Sam Altman also saying that agents are the future for OpenAI.
David [00:21:17]: I think before that even, there was the New York Times piece; Cade Metz wrote a New York Times piece about it. Right now, in a bid to differentiate, I'm seeing AI startups that used to just brand themselves as an AI company now brand themselves as an AI agent company. It's a term people really want.
Swyx [00:21:31]: From the VC side, it's a bit mixed. Is it? As in, I think there are a lot of VCs who would not touch any agent startups because, like- Why is that? Well, you tell me.
Alessio [00:21:41]: I think a lot of VCs that are maybe less technical don't understand the limitations of the-
Swyx [00:21:46]: No, that's not fair.
Alessio [00:21:47]: No, no, no, no. I think like- You think so? No, no. I think it's about what is possible today and what is worth investing in, you know? And people look at you and say, well, these guys are building agents. They needed $400 million to do it. So a lot of VCs are maybe like, oh, I would rather invest in something that is tacking AI onto an existing thing, which is easier to get to market and kind of get some of the flywheel going. But I'm also surprised a lot of funders just don't want to do agents. It's not even the funding. Sometimes we look around and it's like, why is nobody doing agents for X? Wow.
David [00:22:17]: That's good to know, actually. I never knew that before. My sense from my limited perspective is there's a new agent company popping up every day.
Swyx [00:22:24]: So maybe I'm- They are. They are. But I have advised people to take "agents" off of their title because it's so diluted.
David [00:22:31]: It's now so diluted.
Swyx [00:22:32]: Yeah. So then it doesn't stand for anything.
Yeah.
David [00:22:35]: That's a really good point.
Swyx [00:22:36]: So, you know, you're a portfolio allocator. People know about Persimmon, people know about Fuyu and Fuyu Heavy. Can you take us through how you think about that evolution, and what people should think about what that means for Adept's research directions? Kind of take us through the stuff you shipped recently and how people should think about the trajectory of what you're doing.
David [00:22:56]: The critical path for Adept is that we want to build agents that can do higher and higher level abstraction things over time, all while keeping an insanely high reliability standard. Because that's what turns us from research into something that customers want. And if you build agents with a really high reliability standard but keep pushing the level of abstraction, you then learn from your users how to get that next level of abstraction faster. So that's how you actually build the data flywheel. That's the critical path for the company. Everything we do is in service of that. So if you go zoom way, way back to the Act-1 days, right, the core thing behind Act-1 is: can we teach a large model basically how to even actuate your computer? And I think we were one of the first places to have solved that and shown it, and shown the generalization that you get when you give it various different workflows and texts. But from there on out, what we really realized was that in order to get reliability, companies just do things in various different ways. You actually want these models to be able to get a lot better at having some specification of some guardrails for what it actually should be doing. And I think in conjunction with that, a giant thing that was really necessary is really fast multimodal models that are really good at understanding knowledge work and really good at understanding screens. And that needs to kind of be the base for some of these agents.
Back then we had to do a ton of research basically on how do we actually make that possible? Well, first off, like back in, I forget exactly when in '23, there were no multimodal models really that you could use for things like this. And so we pushed really hard on stuff like the Fuyu architecture. I think one big hangover of the primarily academic focus for multimodal models is most multimodal models are primarily trained on like natural images, cat and dog photos, stuff that's come out of the camera. COCO. Yeah, right. And COCO is awesome. Like I love COCO. I love TY. Like it's really helped the field. Right. But like that's the build one thing. I actually think it's really clear today. Multimodal models are the default foundation model, right? It's just going to supplant LLMs. Like you just train a giant multimodal model. And so for that though, like where are they going to be the most useful? They're going to be most useful in knowledge work tasks. That's where the majority of economic value is going to be. It's not in cats and dogs. Right. And so if that's what it is, what do you need to train? I need to train on like charts, graphs, tables, invoices, PDFs, receipts, unstructured data, UIs. That's just a totally different pre-training corpus. And so Adept spent a lot of time building that. And so the public Fuyu models and stuff aren't trained on our actual corpus, they're trained on some other stuff. But you take a lot of that data and then you make it really fast and make it really good at things like dense OCR on screens. And then now you have the right like raw putty to go make a good agent. So that's kind of like some of the modeling side. We've kind of only announced some of that stuff. We haven't really announced much of the agents work, but if you put those together with the correct product form factor, and I think the product form factor also really matters. 
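As a rough sketch of what a knowledge-work-weighted pre-training mixture could look like in practice: the source categories and weights below are invented for illustration; the transcript only says the corpus emphasizes charts, tables, invoices, PDFs, receipts, and UIs over natural images.

```python
import random

# Illustrative pre-training mixture skewed toward knowledge-work data.
# These category names and weights are hypothetical, not Adept's real corpus.
MIXTURE = {
    "screenshots_ui": 0.35,
    "pdfs_and_invoices": 0.25,
    "charts_and_tables": 0.25,
    "natural_images": 0.15,
}

def sample_sources(n: int, seed: int = 0) -> list[str]:
    """Draw n document sources according to the mixture weights."""
    rng = random.Random(seed)
    sources = list(MIXTURE)
    weights = [MIXTURE[s] for s in sources]
    return rng.choices(sources, weights=weights, k=n)
```

In a real pipeline each sampled source name would index into a sharded dataset; the point is just that the sampling distribution, not only the data itself, is a design choice.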
I think we're seeing, and you guys probably see this a little bit more than I do, but we're seeing like a little bit of a pushback against the tyranny of chatbots as a form factor. And I think that the reason why the form factor matters is the form factor changes what data you collect in the human feedback loop. And so I think we've spent a lot of time doing full vertical integration of all these bits in order to get to where we are.Swyx [00:25:44]: Yeah. I'll plug Amelia Wattenberger's talk at our conference, where she gave a little bit of the thinking behind like what else exists other than chatbots that, if you could delegate to reliable agents, you could do. I was kind of excited at Adept experiments or Adept workflows, I don't know what the official name for it is. I was like, okay, like this is something I can use, but it seems like it's just an experiment for now. It's not your product.David [00:26:06]: So we basically just use experiments as like a way to go push various ideas on the design side to some people and just be like, yeah, we'll play with it. Actually the experiments code base underpins the actual product, but the code base itself is kind of like a skeleton for us to go deploy arbitrary cards on the side.Swyx [00:26:22]: Yeah.Alessio [00:26:23]: Makes sense. I was going to say, I would love to talk about the interaction layer. So you train a model to see UI, but then there's the question of how do you actually act on the UI? I think there were some rumors about OpenAI building agents that are kind of like, they manage the endpoint. So the whole computer, whereas you're more at the browser level. I read in one of your papers, you have like a different representation, kind of like you don't just take the DOM and act on it. You do a lot more stuff. 
How do you think about the best way the models will interact with the software, and like how the development of products is going to change with that in mind as more and more of the work is done by agents instead of people?David [00:26:58]: There's so much surface area here, and it's actually one of the things I'm really excited about. And it's funny because I've spent most of my time doing research stuff, but there's like a whole new ball game that I've been learning about and I find it really cool. So I would say the best analogy I have for why Adept is pursuing a path of being able to use your computer like a human, plus of course being able to call APIs, and being able to call APIs is the easy part, like being able to use your computer like a human is the hard part. It's the same reason why people are excited about humanoid robotics, right? In a world where you had T equals infinity, right? You're probably going to have various different form factors that robots could just be in and like all the specialization. But the fact is that humans live in a human environment. So having a humanoid robot lets you do things that humans do without changing everything along the way. It's the same thing for software, right? If you go itemize out the number of things you want to do on your computer for which every step has an API, those numbers of workflows add up pretty close to zero. And so at many points along the way, you need the ability to actually control your computer like a human. It also lets you learn from human usage of computers as a source of training data that you don't get if you have to somehow figure out how every particular step needs to be some particular custom private API thing. And so I think this is actually the most practical path, and because it's the most practical path, I think a lot of success will come from going down this path. 
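The "use your computer like a human" loop described here can be sketched as perceive the screen, have a model propose a click or keystroke, actuate, repeat. Everything below (the action classes, `propose_action`, the hard-coded policy) is a hypothetical stand-in for illustration, not Adept's actual interface.

```python
from dataclasses import dataclass

# Illustrative action vocabulary for a screen-actuating agent.
# These names are invented for the sketch, not a real API.
@dataclass
class Click:
    x: int
    y: int

@dataclass
class Type:
    text: str

def propose_action(screenshot: str, goal: str):
    """Stub for a multimodal model mapping (screen, goal) to one action.

    A real system would run a vision-language model here; we hard-code a
    trivial policy so the loop is runnable."""
    if "login form" in screenshot:
        return Type(text="alice@example.com")
    return Click(x=100, y=200)

def run_agent(goal: str, observe, actuate, max_steps: int = 5):
    """Perceive -> propose -> actuate: the human-like computer-use loop."""
    trace = []
    for _ in range(max_steps):
        screen = observe()          # e.g. grab a screenshot
        action = propose_action(screen, goal)
        trace.append(action)        # the trace doubles as training data
        actuate(action)             # e.g. drive mouse/keyboard
    return trace
```

Note the side benefit David mentions: because the agent acts the way a human would, the `trace` it produces has the same shape as logged human usage, so both can feed the same training pipeline.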
I kind of think about these early days of the agent interaction layer as a little bit like, do you all remember Windows 3.1? Like those days? Okay, I might be too old for you guys on this. But back in the day, Windows 3.1, we had this transition period between pure command line, right, being the default, into this new world where the GUI is the default and then you drop into the command line for like programmer things, right? The old way was you booted your computer up, DOS booted, and then it would give you the C colon slash thing. And you typed Windows and you hit enter, and then you got put into Windows. And then the GUI kind of became a layer above the command line. The same thing is going to happen with agent interfaces: today the GUI is like the base layer, and the agent just controls the current GUI layer plus APIs. And in the future, as more and more trust is built towards agents and more and more things can be done by agents, if more UIs for agents are actually generative in and of themselves, then that just becomes a standard interaction layer. And if that becomes a standard interaction layer, what changes for software is that a lot of software is going to be either systems of record or like certain customized workflow execution engines. And a lot of how you actually do stuff will be controlled at the agent layer.Alessio [00:29:19]: And you think the Rabbit interface is more like, you're not actually seeing the app that the model interacts with. You're just saying, hey, I need to log this call on Salesforce. And you're never actually going on salesforce.com directly as the user. I can see that being a model.David [00:29:33]: I think I don't know enough about what using Rabbit in real life will actually be like to comment on that particular thing. But I think the broader idea that, you know, you have a goal, right? The agent knows how to break your goal down into steps. 
The agent knows how to use the underlying software and systems of record to achieve that goal for you. The agent maybe presents you information in a custom way that's only relevant to your particular goal. All of that just really leads to a world where you don't really need to ever interface with the apps underneath unless you're a power user for some niche thing.Swyx [00:30:03]: General question. So first of all, I think like the sort of input mode conversation. I wonder if you have any analogies that you like with self-driving, because I do think like there's a little bit of how the model should perceive the world. And you know, the primary split in self-driving is LiDAR versus camera. And I feel like most agent companies that I'm tracking are all moving towards the camera approach, which is like the multimodal approach, you know, multimodal vision, very heavy vision, all the Fuyu stuff that you're doing. You're focusing on that, including charts and tables. And do you find that inspiration there from like the self-driving world? That's a good question.David [00:30:37]: I think sometimes the most useful inspiration I've found from self-driving is the levels analogy. I think that's awesome. But I think that our number one goal is for agents not to look like self-driving. We want to minimize the chances that agents are sort of a thing that you just have to bang your head at for a long time to get to like two discontinuous milestones, which is basically what's happened in self-driving. We want to be living in a world where you have the data flywheel immediately, and that takes you all the way up to the top. But similarly, I mean, compared to self-driving, two things that people really undervalue: one, it's really easy to do the driving a car down Highway 101 on a sunny day demo. That actually doesn't prove anything anymore. 
And I think the second thing is that as a non-self-driving expert, I think one of the things that we believe really strongly is that everyone undervalues the importance of really good sensors and actuators. And actually a lot of what's helped us get a lot of reliability is a really strong focus on actually why does the model not do this thing? And a non-trivial amount of the time, the model doesn't actually do the thing because, if you're Wizard of Oz-ing it yourself, or if you have unreliable actuators, you can't do the thing. And so we've had to fix a lot of those problems.Swyx [00:31:43]: I was slightly surprised just because I do generally consider the Waymos that we see all around San Francisco as the most, I guess, real case of agents that we have in very material ways.David [00:31:55]: Oh, that's absolutely true. I think they've done an awesome job, but it has taken a long time for self-driving to mature from when it entered the consciousness and the driving down 101 on a sunny day moment happened to now. Right. So I want to see that more compressed.Swyx [00:32:07]: And I mean, you know, Cruise, you know, RIP. And then one more thing on just like, just going back on this reliability thing, something I have been holding in my head that I'm curious to get your commentary on is I think there's a trade-off between reliability and generality, or I want to broaden reliability into just general like sort of production readiness and enterprise readiness scale. Because you have reliability, you also have cost, you have speed, speed is a huge emphasis for Adept. The tendency or the temptation is to reduce generality to improve reliability and to improve cost, improve speed. Do you perceive a trade-off? Do you have any insights that solve those trade-offs for you guys?David [00:32:42]: There's definitely a trade-off. If you're at the Pareto frontier, I think a lot of folks aren't actually at the Pareto frontier. 
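The Pareto-frontier framing can be made concrete: score candidate systems on (reliability, generality) and check which ones are dominated. A point off the frontier can improve both axes at once, so there is no real trade-off there yet. This toy helper is purely illustrative.

```python
def pareto_frontier(points):
    """Return the (reliability, generality) points not dominated by another.

    A point p is dominated if some other point q is at least as good on
    both axes and is not the same point, i.e. you could move from p to q
    without giving anything up. Only frontier points face a true trade-off.
    """
    frontier = []
    for p in points:
        dominated = any(
            q[0] >= p[0] and q[1] >= p[1] and q != p
            for q in points
        )
        if not dominated:
            frontier.append(p)
    return frontier
```

For example, a system at (0.4, 0.4) is dominated by one at (0.5, 0.5): it can gain reliability and generality simultaneously, which matches David's point that many teams are not yet facing the trade-off at all.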
I think the way you get there is basically how do you frame the fundamental agent problem in a way that just continues to benefit from data? I think one of the main ways of being able to solve that particular trade-off is you basically just want to formulate the problem such that every particular use case just looks like you collecting more data to go make that use case possible. I think that's how you really solve it. Then you get into the other problems like, okay, are you overfitting on these end use cases? You're not doing a thing where you're being super prescriptive for the end steps that the model can only do, for example.Swyx [00:33:17]: Then the question becomes, do you have one house model that you can then customize for each customer and you're fine-tuning them on each customer's specific use case?David [00:33:25]: Yeah.Swyx [00:33:26]: We're not sharing that. You're not sharing that. It's tempting, but that doesn't look like AGI to me. You know what I mean? That is just you have a good base model and then you fine-tune it.David [00:33:35]: For what it's worth, I think there's two paths to a lot more capability coming out of the models that we all are training these days. I think one path is you figure out how to spend compute and turn it into data. In that path, I consider search, RL, all the things that we all love in this era as part of that path, like self-play, all that stuff. The second path is how do you get super competent, high intelligence demonstrations from humans? I think the right way to move forward is you kind of want to combine the two. The first one gives you maximum sample efficiency for a little bit, but I think that it's going to be hard to be running at max speed towards AGI without actually solving a bit of both.Swyx [00:34:16]: You haven't talked much about synthetic data, as far as I can tell. 
Probably this is a bit too much of a trend right now, but any insights on using synthetic data to augment the expensive human data?David [00:34:26]: The best part about framing AGI as being able to help people do things on computers is you have an environment.Swyx [00:34:31]: Yes. So you can simulate all of it.David [00:34:35]: You can do a lot of stuff when you have an environment.Alessio [00:34:37]: We were having dinner for our one-year anniversary. Congrats. Yeah. Thank you. Raza from HumanLoop was there, and we mentioned you were coming on the pod. This is our first-Swyx [00:34:45]: So he submitted a question.Alessio [00:34:46]: Yeah, this is our first, I guess, like mailbag question. He asked, when you started, GPT-4 didn't exist, and now you have GPT-4 Vision to help you build a lot of those things. How do you think about the things that are unique to you as Adept, and like going back to like the maybe research direction that you want to take the team and what you want people to come work on at Adept, versus what has maybe now become commoditized that you didn't expect everybody would have access to?David [00:35:11]: Yeah, that's a really good question. I think implicit in that question, and I wish he were here too so he could push back on my assumption about his question, but I think implicit in that question is a calculus of where advantage accrues in the overall ML stack. And maybe part of the assumption is that advantage accrues solely to base model scaling. But I actually believe pretty strongly that the way that you really win is that you have to go build an agent stack that is much more than that of the base model itself. And so I think like that is always going to be a giant advantage of vertical integration. I think like it lets us do things like have a really, really fast base model that is really good at agent things, but is bad at cat and dog photos. It's pretty good at cat and dog photos. It's not like SOTA at cat and dog photos, right? 
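David's point above, that framing the problem as "doing things on computers" gives you an environment, can be illustrated with a toy scripted environment: random rollouts are cheap, and filtering for successful episodes mints synthetic (observation, action) training pairs without any human labeling. Everything here is an invented toy, not Adept's actual setup.

```python
import random

# A toy scripted environment standing in for "your computer": the agent
# must press three buttons in order. Wrong actions end the episode.
class ToyFormEnv:
    def __init__(self):
        self.step_idx = 0
        self.actions = ["open_form", "fill_name", "submit"]

    def observe(self):
        return f"step_{self.step_idx}"

    def step(self, action):
        if action == self.actions[self.step_idx]:
            self.step_idx += 1
            done = self.step_idx == len(self.actions)
            return (1.0 if done else 0.0), done
        return -1.0, True  # wrong action terminates with failure

def collect_synthetic_data(episodes, seed=0):
    """Random rollouts; keep only transitions from successful episodes."""
    rng = random.Random(seed)
    dataset = []
    for _ in range(episodes):
        env = ToyFormEnv()
        trace, done, reward = [], False, 0.0
        while not done:
            obs = env.observe()
            action = rng.choice(env.actions)
            reward, done = env.step(action)
            trace.append((obs, action))
        if reward > 0:  # success filter: only correct demonstrations survive
            dataset.extend(trace)
    return dataset
```

The same pattern scales up: the richer the environment, the more of the "expensive human data" can be replaced or augmented by filtered rollouts.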
So like we're allocating our capacity wisely, right? That's like one thing that you really get to do. I also think that the other thing that is pretty important now in the broader foundation modeling space is, despite any potential concerns about how good agents are as like a startup area, right, like we were talking about earlier, I feel super good that we're doing foundation models in service of agents and all of the reward within Adept is flowing from can we make a better agent? Because right now I think we all see that, you know, if you're training on publicly available web data, you put in the flops and you do reasonable things, then you get decent results. And if you just double the amount of compute, then you get predictably better results. And so I think pure play foundation model companies are just going to be pinched by how good the next couple of Llamas are going to be and the next good open source thing. And then seeing the really big players put ridiculous amounts of compute behind just training these base foundation models, I think is going to commoditize a lot of the regular LLMs and soon regular multimodal models. So I feel really good that we're just focused on agents.Swyx [00:36:56]: So you don't consider yourself a pure play foundation model company?David [00:36:59]: No, because if we were a pure play foundation model company, we would be training general foundation models that do summarization and all this other...Swyx [00:37:06]: You're dedicated towards the agent. Yeah.David [00:37:09]: And our business is an agent business. We're not here to sell you tokens, right? And I think like selling tokens, unless there's like a...Swyx [00:37:14]: Not here to sell you tokens. I love it.David [00:37:16]: It's like if you have a particular area of specialty, right? Then you won't get caught in the fact that everyone's just scaling to ridiculous levels of compute. 
But if you don't have a specialty, I find that, I think it's going to be a little tougher.Swyx [00:37:27]: Interesting. Are you interested in robotics at all? Just a...David [00:37:30]: I'm personally fascinated by robotics. I've always loved robotics.Swyx [00:37:33]: Embodied agents as a business, you know, Figure is like a big, also sort of OpenAI-affiliated company that raised a lot of money.David [00:37:39]: I think it's cool. I think, I mean, I don't know exactly what they're doing, but...Swyx [00:37:44]: Robots. Yeah.David [00:37:46]: Well, I mean, that's a...Swyx [00:37:47]: Yeah. What question would you ask? If we had them on, what would you ask them?David [00:37:50]: Oh, I just want to understand what their overall strategy is going to be between now and when there's reliable stuff to be deployed. But honestly, I just don't know enough about it.Swyx [00:37:57]: And if I told you, hey, fire your entire warehouse workforce and, you know, put robots in there, isn't that a strategy? Oh yeah.David [00:38:04]: Yeah. Sorry. I'm not questioning whether they're doing smart things. I genuinely don't know what they're doing as much, but I think there's two things. One, I'm so excited for someone to train a foundation model for robots. It's just, I think it's just going to work. Like I will die on this hill, but I mean, like again, this whole time, like we've been on this podcast, we're just continually saying these models are basically behavioral cloners. Right. So let's go behavioral clone all this like robot behavior. Right. And then you figure out everything else you have to do in order to teach it how to solve a new problem. That's going to work. I'm super stoked for that. I think unlike what we're doing with helping humans with knowledge work, it just sounds like a more zero sum job replacement play. Right. And I'm personally less excited about that.Alessio [00:38:46]: We had Kanjun from Imbue on the podcast. 
We asked her why people should go work there and not at Adept.Swyx [00:38:52]: Oh, that's so funny.Alessio [00:38:54]: Well, she said, you know, there's space for everybody in this market. We're all doing interesting work. And she said, they're really excited about building an operating system for agents. And for her, the biggest research thing was getting models better at reasoning and planning for these agents. The reverse question to you, you know, why should people be excited to come work at Adept instead of Imbue? And maybe what are like the core research questions that people should be passionate about to have fun at Adept? Yeah.David [00:39:22]: First off, I think that I'm sure you guys believe this too. The AI space, to the extent there's an AI space, and the AI agent space are both, exactly as she said, I think colossal opportunities, and people are just going to end up winning in different areas and a lot of companies are going to do well. So I really don't feel that zero sum thing at all. I would say, to like change the zero sum framing, why should you be at Adept? I think there's two huge reasons to be at Adept. I think one of them is everything we do is in the service of like useful agents. We're not a research lab. We do a lot of research in service of that goal, but we don't think about ourselves as like a classic research lab at all. And I think the second reason to work at Adept is if you believe that actually having customers and a reward signal from customers lets you build AGI faster, which we really believe, then you should come here. And I think the examples for why that's true are, for example, our evaluations. They're not academic evals. They're not simulator evals. They're like, okay, we have a customer that really needs us to do these particular things. We can do some of them, and these are the ones they want us to do that we can't do at all. We've turned those into evals: solve it, right? I think that's really cool. 
Like everybody knows a lot of these evals are like pretty saturated, and even the new ones that are not saturated, you look at some of them and you're like, is this actually useful, right? I think that's a degree of practicality that really helps. Like we're equally excited about the same problems around reasoning and planning and generalization and all of this stuff, but they're very grounded in actual needs right now, which is really cool.Swyx [00:40:45]: Yeah. This has been a wonderful dive. You know, I wish we had more time, but I would just leave it kind of open to you. I think you have broad thoughts, you know, just about
Osage scholar Jimmy Lee Beason II offers an Indigenous perspective on Killers of the Flower Moon and the history of the Osage murders that the book and film depict. In this Cocktails & Capitalism interview, we discuss the true history of the Reign of Terror, a horrific string of murders of Osage people committed by white settlers pillaging Osage oil wealth. In addition to providing deeper context for the recent Martin Scorsese film, our conversation highlights some of the impact of this dark chapter on the lives of the Osage people.
A member of the Osage Nation, Jimmy Lee Beason II is a professor and writer who teaches in the Indigenous and American Indian Studies department at Haskell Indian Nations University. He was a guest on a prior episode about the residential school system, a system designed to remove Indigenous children from their communities and strip them of their culture. The university where Jimmy teaches was once the site of one of these residential schools. By teaching and mentoring Indigenous students, Jimmy works to combat the legacy of Indigenous erasure perpetuated by the residential school system.
Links and Calls to Action:
Follow @osage_scholar
Donate to the Osage Nation Foundation here
Advocate for oil headrights to go back to the Osage people
Contact Professor Jimmy Lee Beason II for speaking engagements: pahuska8@gmail.com
Mocktail Pairing: The Lily (Crafted by Jesse Torres)
Jimmy chose to name this mocktail after Lily Gladstone to honor her representation of Indigenous perseverance and her historic accomplishment as the first Native American actor to win the Golden Globe Award for Best Actress in a Motion Picture.
45ml Apple Cider (or whiskey if you prefer)
15ml Persimmon (or pear) syrup (see below)
15ml Elderflower syrup (or sweetened elderflower tea)
15ml Lemon juice
30ml Ginger beer
1 dash bitters
Shake everything except ginger beer with ice and strain over fresh ice. 
Top with ginger beer.
Persimmon (or pear) Syrup
225g Persimmons (or pears)
200ml Honey
250ml Water
Rinse, de-stem, and medium dice persimmons (or pears). Add to honey and water and bring to a boil. Lightly simmer for 20 minutes. Stir to thoroughly combine. Remove from heat and let cool for about five minutes. Fine strain and let cool. Bottle and label, adding the date. Persimmon Syrup must be refrigerated and is good for up to 2 weeks.
Support the show
Cocktails & Capitalism is an anticapitalist labor of love, but we could use your help to make this project sustainable. If you can support with even a dollar a month, that would really help us continue to educate, agitate, and amplify the voices of those who are working to dismantle capitalism and create a better world. https://www.patreon.com/cocktailsandcapitalism
Follow us on Instagram and Twitter
Some episodes on YouTube. Please like & subscribe
Sometimes it's good to know that your elected representatives are really getting things done.
From his home in Connecticut, Ran Oron observed and drew a pair of ospreys, birds of prey that return each year to the same nest. With a delicate line, in a series of drawings, in a narrative that straddles poetry and prose, he wrote about and echoed the construction of his own family nest, its dismantling, and the departure of the children from the home, while reflecting deeply on the dynamics between the pair of birds. He Could See a Bird Outside if He Looked Through His Window (in Hebrew; Persimmon Books, 2023) is a lyrical and tranquil book about partnership, parenthood, and children; about the changing seasons of nature and the soul; a parable of pain and optimism. A story that bridges art and poetry about the intimate space of each and every one of us. A beautifully crafted book that is both inspirational and a gift. Ran Oron was a helicopter navigator prior to studying architecture at the Cooper Union. In 1996 he founded ROART, an architecture studio in New York City. For two decades he was a design professor at Pratt Institute School of Architecture. An Israeli-born architect and artist, Mr. Oron has exhibited and lectured around the world. Dr. Yakir Englander is the National Director of Leadership programs at the Israeli-American Council. He also teaches at the AJR. He can be reached at: Yakir1212englander@gmail.com Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network
Join us and ZamboniFunk ( @zambonifunk ) and our new puppy Persimmon for a sneak peek at the astrology of 2024, with a focus on the winter quarter. We've got #pluto going in and out and back in for the final time into #aquarius and we talk about the importance of that 0° Aquarius point. We talk about the epic solar #eclipse on April 8th, and the implications thereof, about how this is a #mars year big time, and what this year is leading up to. #astrology #astrologypodcast #podcast #forecast #astrologyforecast #2024 #yearahead #plantcunningpodcast --- Send in a voice message: https://podcasters.spotify.com/pod/show/plantcunning/message Support this podcast: https://podcasters.spotify.com/pod/show/plantcunning/support
Wisconsin Extension forestry specialist Tony Johnson discusses how maple syrup production can open doors for other agroforestry enterprises, and how Wisconsin woodland owners can get started. Plus, Scott Brainard and Eliza Greenman describe the benefits of persimmons as an agroforestry crop. Show notes at www.savannainstitute.org/perennialaf
APAC stocks were softer across the board following the prior day's gains and the choppy/mixed lead from Wall Street.
DXY gradually inched higher towards the top end of a 105.25-42 APAC range with G10s softer against the Buck to varying degrees.
RBA hiked its Cash Rate by 25bps as expected to 4.35% from 4.10%, while forward guidance saw a dovish tweak.
European equity futures are indicative of a softer open with the Eurostoxx 50 -0.3% after cash markets closed -0.4% yesterday.
Israeli PM Netanyahu says Israel is open to "short pauses" in Gaza, but ruled out a ceasefire.
Looking ahead, highlights include EZ, German, French & Italian Construction PMI, US International Trade, IBD/TIPP, Manheim Index, NY Fed Q3 Household Debt & Credit Report, UK King's Speech, Speeches from ECB's de Guindos; Fed's Schmid, Williams, Logan, Barr & Waller, supply from UK.
Earnings: Capgemini, CNH Industrial, Daimler Truck, Persimmon, Watches of Switzerland, UBS, eBay, Occidental Petroleum Corp, Datadog.
Read the full report covering Equities, Forex, Fixed Income, Commodities and more on Newsquawk
Thanks to the over 11,000 people who joined us for the first AI Engineer Summit! A full recap is coming, but you can 1) catch up on the fun and videos on Twitter and YouTube, 2) help us reach 1000 people for the first comprehensive State of AI Engineering survey and 3) submit projects for the new AI Engineer Foundation.See our Community page for upcoming meetups in SF, Paris, NYC, and Singapore. This episode had good interest on Twitter.Last month, Imbue was crowned as AI's newest unicorn foundation model lab, raising a $200m Series B at a >$1 billion valuation. As “stealth” foundation model companies go, Imbue (f.k.a. Generally Intelligent) has stood as an enigmatic group given they have no publicly released models to try out. However, ever since their $20m Series A last year their goal has been to “develop generally capable AI agents with human-like intelligence in order to solve problems in the real world”.From RL to Reasoning LLMsAlong with their Series A, they announced Avalon, “A Benchmark for RL Generalization Using Procedurally Generated Worlds”. Avalon is built on top of the open source Godot game engine, and is ~100x faster than Minecraft to enable fast RL benchmarking and a clear reward with adjustable game difficulty.After a while, they realized that pure RL isn't a good path to teach reasoning and planning. The agents were able to learn mechanical things like opening complex doors, climbing, but couldn't go to higher level tasks. A pure RL world also doesn't include a language explanation of the agent reasoning, which made it hard to understand why it made certain decisions. That pushed the team more towards the “models for reasoning” path:“The second thing we learned is that pure reinforcement learning is not a good vehicle for planning and reasoning. 
So these agents were able to learn all sorts of crazy things: They could learn to climb like hand over hand in VR climbing, they could learn to open doors like very complicated, like multiple switches and a lever open the door, but they couldn't do any higher level things. And they couldn't do those lower level things consistently necessarily. And as a user, I do not want to interact with a pure reinforcement learning end to end RL agent. As a user, like I need much more control over what that agent is doing.”

Inspired by Chelsea Finn's work on SayCan at Stanford, the team pivoted to have their agents do the reasoning in natural language instead. This development parallels the large leaps in reasoning that humans have developed through the scientific method:

“We are better at reasoning now than we were 3000 years ago. An example of a reasoning strategy is noticing you're confused. Then when I notice I'm confused, I should ask:
* What was the original claim that was made?
* What evidence is there for this claim?
* Does the evidence support the claim?
* Is the claim correct?
This is like a reasoning strategy that was developed in like the 1600s, you know, with like the advent of science. So that's an example of a reasoning strategy. There are tons of them that we employ all the time, lots of heuristics that help us be better at reasoning. And we can generate data that's much more specific to them.”

The Full Stack Model Lab

One year later, it would seem that the pivot to reasoning has had tremendous success, and Imbue has now reached a >$1B valuation, with participation from Astera Institute, NVIDIA, Cruise CEO Kyle Vogt, Notion co-founder Simon Last, and others. Imbue tackles their work with a “full stack” approach:
* Models. 
Pretraining very large (>100B parameter) models, optimized to perform well on internal reasoning benchmarks; a ~10,000 Nvidia H100 GPU cluster lets them iterate rapidly on everything from training data to architecture and reasoning mechanisms.
* Tools and Agents. Building internal productivity tools, from coding agents for fixing type checking and linting errors to sophisticated systems like CARBS (for hyperparameter tuning and network architecture search).
* Interface Invention. Solving agent trust and collaboration (not merely communication) with humans by creating better abstractions and interfaces — IDEs for users to program computers in natural language.
* Theory. Publishing research about the theoretical underpinnings of self-supervised learning, as well as scaling laws for machine learning research.

Kanjun believes we are still in the “bare metal phase” of agent development, and they want to take a holistic approach to building the “operating system for agents”. We loved diving deep into the Imbue approach toward solving the AI Holy Grail of reliable agents, and are excited to share our conversation with you today!

Timestamps
* [00:00:00] Introductions
* [00:06:07] The origin story of Imbue
* [00:09:39] Imbue's approach to training large foundation models optimized for reasoning
* [00:12:18] Imbue's goals to build an "operating system" for reliable, inspectable AI agents
* [00:15:37] Imbue's process of developing internal tools and interfaces to collaborate with AI agents
* [00:17:27] Imbue's focus on improving reasoning capabilities in models, using code and other data
* [00:19:50] The value of using both public benchmarks and internal metrics to evaluate progress
* [00:21:43] Lessons learned from developing the Avalon research environment
* [00:23:31] The limitations of pure reinforcement learning for general intelligence
* [00:28:36] Imbue's vision for building better abstractions and interfaces for reliable agents
* [00:31:36] Interface design for collaborating with, rather than just communicating with, AI agents
* [00:37:40] The future potential of an agent-to-agent protocol
* [00:39:29] Leveraging approaches like critiquing between models and chain of thought
* [00:45:49] Kanjun's philosophy on enabling team members as creative agents at Imbue
* [00:53:51] Kanjun's experience co-founding the communal co-living space The Archive
* [01:00:22] Lightning Round

Show Notes
* Imbue
* Avalon
* CARBS (hyperparameter optimizer)
* Series B announcement
* Kanjun/Imbue's Podcast
* MIT Media Lab
* Research mentioned:
* Momentum Contrast
* SimCLR
* Chelsea Finn - SayCan
* Agent Protocol - part of the AI Engineer Foundation
* Xerox PARC
* Michael Nielsen
* Jason Benn
* Outset Capital
* Scenius - Kevin Kelly
* South Park Commons
* The Archive
* Thursday Nights in AI

Transcript

Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, Partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai. [00:00:19]Swyx: Hey, and today in the studio we have Kanjun from Imbue. Welcome. So you and I have, I guess, crossed paths a number of times. You're formerly named Generally Intelligent and you've just announced your rename, rebrand in huge, humongous ways. So congrats on all of that. And we're here to dive into deeper detail on Imbue. We like to introduce you on a high level basis, but then have you go into a little bit more of your personal side. So you graduated your BS at MIT and you also spent some time at the MIT Media Lab, one of the most famous, I guess, computer hacking labs in the world. Then you graduated MIT and you went straight into BizOps at Dropbox, where you were eventually chief of staff, which is a pretty interesting role we can dive into later. And then it seems like the founder bug hit you. You were basically a three times founder at Ember, Sorceress, and now at Generally Intelligent slash Imbue. What should people know about you on the personal side that's not on your LinkedIn? 
That's something you're very passionate about outside of work. [00:01:12]Kanjun: Yeah. I think if you ask any of my friends, they would tell you that I'm obsessed with agency, like human agency and human potential. [00:01:19]Swyx: That's work. Come on. Kanjun: It's not work. What are you talking about? Swyx: So what's an example of human agency that you try to promote? [00:01:27]Kanjun: With all of my friends, I have a lot of conversations with them that's kind of helping figure out what's blocking them. I guess I do this with a team kind of automatically too. And I think about it for myself often, like building systems. I have a lot of systems to help myself be more effective. At Dropbox, I used to give this onboarding talk called How to Be Effective, which people liked. I think like a thousand people heard this onboarding talk, and I think maybe Dropbox was more effective. I think I just really believe that as humans, we can be a lot more than we are. And it's what drives everything. I guess completely outside of work, I do dance. I do partner dance. [00:02:03]Swyx: Yeah. Lots of interest in that stuff, especially in the sort of group living houses in San Francisco, which I've been a little bit part of, and you've also run one of those. [00:02:12]Kanjun: That's right. Yeah. I started the Archive with two friends, with Josh, my co-founder, and a couple of other folks in 2015. That's right. And our housemates built GPT-3. [00:02:22]Swyx: Was that the, I guess, the precursor to Generally Intelligent, that you started doing more things with Josh? Is that how that relationship started? Yeah. [00:02:30]Kanjun: This is our third company together. Our first company, Josh poached me from Dropbox for Ember. And there we built a really interesting technology, laser raster projector, VR headset. And then we were like, VR is not the thing we're most passionate about. 
And actually it was kind of early days when we both realized we really do believe that in our lifetimes, like computers that are intelligent are going to be able to allow us to do much more than we can do today as people and be much more as people than we can be today. And at that time, we actually, after Ember, we were like, work on AI research or start an AI lab. A bunch of our housemates were joining OpenAI, and we actually decided to do something more pragmatic to apply AI to recruiting and to try to understand like, okay, if we are actually trying to deploy these systems in the real world, what's required? And that was Sorceress. That taught us so much about maybe an AI agent in a lot of ways, like what does it actually take to make a product that people can trust and rely on? I think we never really fully got there. And it's taught me a lot about what's required. And it's kind of like, I think informed some of our approach and some of the way that we think about how these systems will actually get used by people in the real world. [00:03:42]Swyx: Just to go one step deeper on that, you're building AI agents in 2016 before it was cool. You got some muscle and you raised $30 million. Something was working. What do you think you succeeded in doing and then what did you try to do that did not pan out? [00:03:56]Kanjun: Yeah. So the product worked quite well. So Sorceress was an AI system that basically looked for candidates that could be a good fit and then helped you reach out to them. And this was a little bit early. We didn't have language models to help you reach out. So we actually had a team of writers that like, you know, customized emails and we automated a lot of the customization. But the product was pretty magical. Like candidates would just be interested and land in your inbox and then you can talk to them. As a hiring manager, that's such a good experience. I think there were a lot of learnings, both on the product and market side. 
On the market side, recruiting is a market that is endogenously high churn, which means because people start hiring and then we hire the role for them and they stop hiring. So the more we succeed, the more they... [00:04:39]Swyx: It's like the whole dating business. [00:04:40]Kanjun: It's the dating business. Exactly. Exactly. And I think that's the same problem as the dating business. And I was really passionate about like, can we help people find work that is more exciting for them? A lot of people are not excited about their jobs and a lot of companies are doing exciting things and the matching could be a lot better. But the dating business phenomenon like put a damper on that, like it's actually a pretty good business. But as with any business with like relatively high churn, the bigger it gets, the more revenue we have, the slower growth becomes because if 30% of that revenue you lose year over year, then it becomes a worse business. So that was the dynamic we noticed quite early on after our Series A. I think the other really interesting thing about it is we realized what was required for people to trust that these candidates were like well vetted and had been selected for a reason. And it's what actually led us, you know, a lot of what we do at Imbue is working on interfaces to figure out how do we get to a situation where when you're building and using agents, these agents are trustworthy to the end user. That's actually one of the biggest issues with agents that, you know, go off and do longer range goals is that I have to trust, like, did they actually think through this situation? And that really informed a lot of our work today. [00:05:52]Alessio: Let's jump into GI now, Imbue. When did you decide recruiting was done for you and you were ready for the next challenge? And how did you pick the agent space? I feel like in 2021, it wasn't as mainstream. Yeah. 
[00:06:07]Kanjun: So the LinkedIn says that it started in 2021, but actually we started thinking very seriously about it in early 2020, late 2019, early 2020. So what we were seeing is that scale is starting to work and language models probably will actually get to a point where like with hacks, they're actually going to be quite powerful. And it was hard to see that at the time, actually, because GPT-3, the early versions of it, there are all sorts of issues. We're like, oh, that's not that useful, but we could kind of see like, okay, you keep improving it in all of these different ways and it'll get better. What Josh and I were really interested in is how can we get computers that help us do bigger things? Like, you know, there's this kind of future where I think a lot about, you know, if I were born in 1900 as a woman, like my life would not be that fun. I'd spend most of my time like carrying water and literally like getting wood to put in the stove to cook food and like cleaning and scrubbing the dishes and, you know, getting food every day because there's no refrigerator, like all of these things, very physical labor. And what's happened over the last 150 years since the industrial revolution is we've kind of gotten free energy, like energy is way more free than it was 150 years ago. And so as a result, we've built all these technologies like the stove and the dishwasher and the refrigerator, and we have electricity and we have infrastructure, running water, all of these things that have totally freed me up to do what I can do now. And I think the same thing is true for intellectual energy. We don't really see it today, but because we're so in it, but our computers have to be micromanaged. You know, part of why people are like, oh, you're stuck to your screen all day. Well, we're stuck to our screen all day because literally nothing happens unless I'm doing something in front of my screen. 
I don't, you know, I can't send my computer off to do a bunch of stuff for me. And there is a future where that's not the case, where, you know, I can actually go off and do stuff and trust that my computer will pay my bills and figure out my travel plans and do the detailed work that I am not that excited to do so that I can like be much more creative and able to do things that I as a human, I'm very excited about and collaborate with other people. And there are things that people are uniquely suited for. So that's kind of always been the thing that has been really exciting to me. Like Josh and I have known for a long time, I think that, you know, whatever AI is, it would happen in our lifetimes. And the personal computer kind of started giving us a bit of free intellectual energy. And this is like really the explosion of free intellectual energy. So in early 2020, we were thinking about this and what happened was self-supervised learning basically started working across everything. So it worked in language; SimCLR came out, and I think MoCo, Momentum Contrast, had come out earlier in 2019, and SimCLR came out in early 2020. And we're like, okay, for the first time, self-supervised learning is working really well across images and text, and we suspected that like, okay, actually it's the case that machines can learn things the way that humans do. And if that's true, if they can learn things in a fully self-supervised way, because like as people, we are not supervised. We like go Google things and try to figure things out. So if that's true, then like what the computer could be is much bigger than what it is today. And so we started exploring ideas around like, how do we actually go? We didn't think about the fact that we could actually just build a research lab. So we were like, okay, what kind of startup could we build to like leverage self-supervised learning? 
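The self-supervised methods Kanjun mentions (MoCo, SimCLR) share one core idea: pull two augmented views of the same example together in embedding space while pushing other examples apart, with no labels at all. A minimal NumPy sketch of the SimCLR-style NT-Xent contrastive loss, purely for illustration (this is not Imbue's or the papers' code):

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent loss.

    z1, z2: (N, D) embeddings of two augmented views of N examples.
    (z1[i], z2[i]) are positive pairs; every other example in the
    batch serves as a negative.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize
    sim = z @ z.T / temperature                        # scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    n = len(z1)
    # row i's positive lives at i+n (and vice versa)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy of each row's softmax against its positive
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

The loss drops when matched views are embedded close together, which is exactly the "learning without supervision" signal the conversation refers to.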
So that eventually becomes something that allows computers to become much more able to do bigger things for us. But that became Generally Intelligent, which started as a research lab. [00:09:39]Alessio: So your mission is you aim to rekindle the dream of the personal computer. So when did it go wrong and what are like your first products and user facing things that you're building to rekindle it? [00:09:53]Kanjun: Yeah. So what we do at Imbue is we train large foundation models optimized for reasoning. And the reason for that is because reasoning is actually, we believe, the biggest blocker to agents or systems that can do these larger goals. If we think about something that writes an essay, like when we write an essay, we like write it. We put it and then we're done. We like write it and then we look at it and we're like, oh, I need to do more research on that area. I'm going to go do some research and figure it out and come back and, oh, actually it's not quite right, the structure of the outline. So I'm going to rearrange the outline, rewrite it. It's this very iterative process and it requires thinking through like, okay, what am I trying to do? Is the goal correct? Also like, has the goal changed as I've learned more? So as a tool, like when should I ask the user questions? I shouldn't ask them questions all the time, but I should ask them questions in higher risk situations. How certain am I about the like flight I'm about to book? There are all of these notions of like risk certainty, playing out scenarios, figuring out how to make a plan that makes sense, how to change the plan, what the goal should be. Those are things that we lump under the bucket of reasoning, and models today are not optimized for reasoning. It turns out that there's not actually that much explicit reasoning data on the internet as you would expect. And so we get a lot of mileage out of optimizing our models for reasoning in pre-training. 
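Because explicit reasoning data is scarce on the internet, one common workaround is to synthesize it: generate problems procedurally and pair them with explicit step-by-step traces. A toy sketch of what one such record might look like; the schema and field names are invented for illustration (Imbue has not published their data format):

```python
import random

def make_reasoning_trace(seed=None):
    """Generate one synthetic training example: a two-step arithmetic
    question paired with an explicit step-by-step reasoning trace."""
    rng = random.Random(seed)
    a, b, c = (rng.randint(2, 9) for _ in range(3))
    steps = [
        f"First compute {a} * {b} = {a * b}.",
        f"Then add {c}: {a * b} + {c} = {a * b + c}.",
    ]
    return {
        "question": f"What is {a} * {b} + {c}?",
        "trace": " ".join(steps),   # the explicit reasoning the web lacks
        "answer": a * b + c,
    }
```

Generating millions of such records (for richer domains than arithmetic, e.g. code) is one way to make the reasoning part of the pre-training distribution "spiky", as Kanjun puts it later in the conversation.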
And then on top of that, we build agents ourselves and we, I can get into, we really believe in serious use, like really seriously using the systems and trying to get to an agent that we can use every single day, tons of agents that we can use every single day. And then we experiment with interfaces that help us better interact with the agents. So those are some set of things that we do on the kind of model training and agent side. And then the initial agents that we build, a lot of them are trying to help us write code better because code is most of what we do every day. And then on the infrastructure and theory side, we actually do a fair amount of theory work to understand like, how do these systems learn? And then also like, what are the right abstractions for us to build good agents with, which we can get more into. And if you look at our website, we build a lot of tools internally. We have a like really nice automated hyperparameter optimizer. We have a lot of really nice infrastructure and it's all part of the belief of like, okay, let's try to make it so that the humans are doing the things humans are good at as much as possible. So out of our very small team, we get a lot of leverage. [00:12:18]Swyx: And so would you still categorize yourself as a research lab now, or are you now in startup mode? Is that a transition that is conscious at all? [00:12:26]Kanjun: That's a really interesting question. I think we've always intended to build, you know, to try to build the next version of the computer, enable the next version of the computer. The way I think about it is there's a right time to bring a technology to market. So Apple does this really well. Actually, iPhone was under development for 10 years, AirPods for five years. And Apple has a story where iPhone, the first multi-touch screen was created. They actually were like, oh wow, this is cool. Let's like productionize iPhone. 
They actually brought, they like did some work trying to productionize it and realized this is not good enough. And they put it back into research to try to figure out like, how do we make it better? What are the interface pieces that are needed? And then they brought it back into production. So I think of production and research as kind of like these two separate phases. And internally we have that concept as well, where like things need to be done in order to get to something that's usable. And then when it's usable, like eventually we figure out how to productize it. [00:13:20]Alessio: What's the culture like to make that happen, to have both like kind of like product oriented, research oriented. And as you think about building the team, I mean, you just raised 200 million. I'm sure you want to hire more people. What are like the right archetypes of people that work at Imbue? [00:13:35]Kanjun: I would say we have a very unique culture in a lot of ways. I think a lot about social process design. So how do you design social processes that enable people to be effective? I like to think about team members as creative agents, because most companies, they think of their people as assets and they're very proud of this. And I think about like, okay, what is an asset? It's something you own that provides you value that you can discard at any time. This is a very low bar for people. This is not what people are. And so we try to enable everyone to be a creative agent and to really unlock their superpowers. So a lot of the work I do, you know, I was mentioning earlier, I'm like obsessed with agency. A lot of the work I do with team members is try to figure out like, you know, what are you really good at? What really gives you energy and where can we put you such that, how can I help you unlock that and grow that? So much of our work, you know, in terms of team structure, like much of our work actually comes from people. 
CARBS, our hyperparameter optimizer, came from Abe trying to automate his own research process doing hyperparameter optimization. And he actually pulled some ideas from plasma physics (he's a plasma physicist) to make the local search work. A lot of our work on evaluations comes from a couple of members of our team who are like obsessed with evaluations. We do a lot of work trying to figure out like, how do you actually evaluate if the model is getting better? Is the model making better agents? Is the agent actually reliable? A lot of things kind of like, I think of people as making the like them shaped blob inside Imbue and I think, you know, yeah, that's the kind of person that we're hiring for. We're hiring product engineers and data engineers and research engineers and all these roles. We have projects, not teams. We have a project around data, data collection and data engineering. That's actually one of the key things that improve the model performance. We have a pre-training kind of project with some fine tuning as part of that. And then we have an agents project that's like trying to build on top of our models as well as use other models in the outside world to try to make agents that we actually use as programmers every day. So all sorts of different projects. [00:15:37]Swyx: As a founder, you're now sort of a capital allocator among all of these different investments effectively at different projects. And I was interested in how you mentioned that you were optimizing for improving reasoning and specifically inside of your pre-training, which I assume is just a lot of data collection. [00:15:55]Kanjun: We are optimizing reasoning inside of our pre-trained models. And a lot of that is about data. And I can talk more about like what, you know, what exactly does it involve? But actually a big part, maybe 50% plus, of the work is figuring out even if you do have models that reason well, like the models are still stochastic. 
The way you prompt them still makes, is kind of random, like makes them do random things. And so how do we get to something that is actually robust and reliable as a user? How can I, as a user, trust it? We have all sorts of cool things on the, like, you know, I was mentioning earlier when I talked to other people building agents, they have to do so much work, like to try to get to something that they can actually productize and it takes a long time and agents haven't been productized yet for, partly for this reason is that like the abstractions are very leaky. We can get like 80% of the way there, but like self-driving cars, like the remaining 20% is actually really difficult. We believe that, and we have internally, I think some things that like an interface, for example, that lets me really easily like see what the agent execution is, fork it, try out different things, modify the prompt, modify like the plan that it is making. This type of interface, it makes it so that I feel more like I'm collaborating with the agent as it's executing, as opposed to it's just like doing something as a black box. That's an example of a type of thing that's like beyond just the model pre-training, but on the model pre-training side, like reasoning is a thing that we optimize for. And a lot of that is about what data do we put in. [00:17:27]Swyx: It's interesting just because I always think like, you know, out of the levers that you have, the resources that you have, I think a lot of people think that running foundation model company or a research lab is going to be primarily compute. And I think the share of compute has gone down a lot over the past three years. It used to be the main story, like the main way you scale is you just throw more compute at it. And now it's like, Flops is not all you need. You need better data, you need better algorithms. And I wonder where that shift has gone. This is a very vague question, but is it like 30-30-30 now? Is it like maybe even higher? 
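The inspect-and-fork interface Kanjun describes above (see the agent's execution, fork it, try different things, modify the plan) can be illustrated with a toy sketch. The class and method names here are invented for illustration and are not Imbue's actual tooling:

```python
from copy import deepcopy

class AgentRun:
    """Record every step of an agent run so a user can inspect the
    history and fork an alternative run from any intermediate point."""

    def __init__(self, state=None):
        self.state = state if state is not None else {}
        self.steps = []  # list of (action_label, state_after_action)

    def do(self, action, effect):
        """Apply `effect` (a function from state to new state), log it."""
        self.state = effect(deepcopy(self.state))
        self.steps.append((action, deepcopy(self.state)))
        return self

    def fork(self, at):
        """Return a new run resuming from the snapshot after step `at`,
        leaving this run untouched."""
        child = AgentRun(deepcopy(self.steps[at][1]))
        child.steps = list(self.steps[: at + 1])
        return child
```

The point of the design is that `fork` never mutates the original run, so the user can replay from any step with a modified plan and compare outcomes, which is what makes the agent feel like a collaborator rather than a black box.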
So one way I'll put this is people estimate that Llama2 maybe took about $3 to $4 million of compute, but probably $20 to $25 million worth of labeling data. And I'm like, okay, well that's a very different story than all these other foundation model labs raising hundreds of millions of dollars and spending it on GPUs. [00:18:20]Kanjun: Data is really expensive. We generate a lot of data. And so that does help. The generated data is actually close to as good as human-labeled data. [00:18:34]Swyx: So generated data from other models? [00:18:36]Kanjun: From our own models. From your own models. Or other models, yeah. [00:18:39]Swyx: Do you feel like there's certain variations of this? There's the sort of the constitutional AI approach from Anthropic and basically models sampling and training on data from other models. I feel like there's a little bit of like contamination in there, or to put it in a statistical form, you're resampling a distribution that you already have that you already know doesn't match human distributions. How do you feel about that basically, just philosophically? [00:19:04]Kanjun: So when we're optimizing models for reasoning, we are actually trying to like make a part of the distribution really spiky. So in a sense, like that's actually what we want. We want to, because the internet is a sample of the human distribution that's also skewed in all sorts of ways. That is not the data that we necessarily want these models to be trained on. And so when we're generating data, we're not really randomly generating data. We generate very specific things that are like reasoning traces and that help optimize reasoning. Code also is a big piece of improving reasoning. So generated code is not that much worse than like regular human written code. You might even say it can be better in a lot of ways. So yeah. So we are trying to already do that. [00:19:50]Alessio: What are some of the tools that you thought were not a good fit? 
So you built Avalon, which is your own simulated world. And when you first started, the metagame was like using games to simulate things using, you know, Minecraft and then OpenAI's, like, the Gym thing and all these things. And I think in one of your other podcasts, you mentioned like Minecraft is like way too slow to actually do any serious work. Is that true? Yeah. I didn't say it. [00:20:17]Swyx: I don't know. [00:20:18]Alessio: That's above my pay grade. But Avalon is like a hundred times faster than Minecraft for simulation. When did you figure that out that you needed to just like build your own thing? Was it kind of like your engineering team was like, Hey, this is too slow. Was it more a long-term investment? [00:20:34]Kanjun: Yeah. At that time we built Avalon as a research environment to help us learn particular things. And one thing we were trying to learn is like, how do you get an agent that is able to do many different tasks? Like RL agents at that time and environments at that time. What we heard from other RL researchers was that the like biggest thing holding the field back is lack of benchmarks that let us explore things like planning and curiosity and things like that and have the agent actually perform better if the agent has curiosity. And so we were trying to figure out, in a situation where, how can we have agents that are able to handle lots of different types of tasks without the reward being pretty handcrafted? That's a lot of what we had seen is that like these very handcrafted rewards. And so Avalon has like a single reward that's across all tasks. And it also allowed us to create a curriculum so we could make the level more or less difficult. And it taught us a lot, maybe two primary things. One is with no curriculum, RL algorithms don't work at all. So that's actually really interesting. [00:21:43]Swyx: For the non RL specialists, what is a curriculum in your terminology? 
[00:21:46]Kanjun: So a curriculum in this particular case is basically the environment, Avalon, lets us generate simpler environments and harder environments for a given task. What's interesting is that in the simpler environments, what you'd expect is the agent succeeds more often. So it gets more reward. And so, you know, kind of my intuitive way of thinking about it is, okay, the reason why it learns much faster with a curriculum is it's just getting a lot more signal. And that's actually an interesting general intuition to have about training these things as like, what kind of signal are they getting? And like, how can you help it get a lot more signal? The second thing we learned is that reinforcement learning is not a good vehicle, like pure reinforcement learning is not a good vehicle for planning and reasoning. So these agents were not able to, they were able to learn all sorts of crazy things. They could learn to climb like hand over hand in VR climbing, they could learn to open doors like very complicated, like multiple switches and a lever open the door, but they couldn't do any higher level things. And they couldn't do those lower level things consistently necessarily. And as a user, I do not want to interact with a pure reinforcement learning end to end RL agent. As a user, like I need much more control over what that agent is doing. And so that actually started to get us on the track of thinking about, okay, how do we do the reasoning part in language? And we were pretty inspired by our friend Chelsea Finn at Stanford, who was I think working on SayCan at the time, where it's basically an experiment where they have robots kind of trying to do different tasks and actually do the reasoning for the robot in natural language. And it worked quite well. And that led us to start experimenting very seriously with reasoning. [00:23:31]Alessio: How important is the language part for the agent versus for you to inspect the agent? 
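(The curriculum idea Kanjun describes, keeping the agent near a difficulty level where it succeeds often enough to receive reward signal, can be sketched in a few lines. The function and its thresholds are invented for illustration and are not Avalon's actual difficulty controller.)

```python
def next_difficulty(difficulty, success_rate, target=0.7, step=0.05):
    """Move task difficulty toward the level where the agent succeeds
    about `target` of the time, so it keeps getting learnable signal."""
    if success_rate > target:
        difficulty += step   # agent is comfortable: make tasks harder
    elif success_rate < target:
        difficulty -= step   # agent is failing: make tasks easier
    return min(1.0, max(0.0, difficulty))
```

Without such a controller, a hard task yields almost no successes, hence almost no reward, hence no gradient worth learning from, which is the "with no curriculum, RL algorithms don't work at all" observation above.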
You know, like is it the interface to kind of the human on the loop really important or? [00:23:43]Kanjun: Yeah, I personally think of it as it's much more important for us, the human user. So I think you probably could get end to end agents that work and are fairly general at some point in the future. But I think you don't want that. Like we actually want agents that we can like perturb while they're trying to figure out what to do. Because, you know, even a very simple example, internally we have like a type error fixing agent and we have like a test generation agent. Test generation agent goes off rails all the time. I want to know, like, why did it generate this particular test? [00:24:19]Swyx: What was it thinking? [00:24:20]Kanjun: Did it consider, you know, the fact that this is calling out to this other function? And the formatter agent, if it ever comes up with anything weird, I want to be able to debug like what happened with RL end to end stuff. Like we couldn't do that. Yeah. [00:24:36]Swyx: It sounds like you have a bunch of agents operating internally within the company. What's your most, I guess, successful agent and what's your least successful one? [00:24:44]Kanjun: The agents don't work. All of them? I think the only successful agents are the ones that do really small things. So very specific, small things like fix the color of this button on the website or like change the color of this button. [00:24:57]Swyx: Which is now sweep.dev is doing that. Exactly. [00:25:00]Kanjun: Perfect. Okay. [00:25:02]Swyx: Well, we should just use sweep.dev. Well, I mean, okay. I don't know how often you have to fix the color of a button, right? Because all of them raise money on the idea that they can go further. And my fear when encountering something like that is that there's some kind of unknown asymptote ceiling that's going to prevent them, that they're going to run head on into that you've already run into. 
[00:25:21]Kanjun: We've definitely run into such a ceiling. [00:25:24]Swyx: But what is the ceiling? Is there a name for it? [00:25:26]Kanjun: I mean, for us, we think of it as reasoning plus these tools. So reasoning plus abstractions, basically. I think actually you can get really far with current models, and that's why it's so compelling. Like we can pile debugging tools on top of these current models, have them critique each other and critique themselves, spend more compute at inference time, do context hacks, retrieval-augmented generation, et cetera, et cetera. Like the pile of hacks actually does get us really far. And a way to think about it is like the underlying language model is kind of like a noisy channel. Actually I don't want to use this analogy. It's actually a really bad analogy, but you're kind of trying to get more signal out of the channel. We don't like to think about it that way, but that's what the default approach is: trying to get more signal out of this noisy channel. But the issue with agents is, as a user, I want them to be mostly reliable. It's kind of like self-driving in that way. It's not as bad as self-driving; in self-driving, you know, you're hurtling at 70 miles an hour. That's like the hardest agent problem. But one thing we learned from Sorceress, and one thing we learned by using these things internally, is we actually have a pretty high bar for these agents to work. You know, it's actually really annoying if they only work 50% of the time, and we can make interfaces to make it slightly less annoying. But yeah, there's a ceiling that we've encountered so far, and we need to make the models better. We also need to make the interface to the user better, and also a lot of the critiquing. I hope what we can do is help people who are building agents actually be able to deploy them.
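One of the "hacks" listed above, having the model critique its own output while spending more inference-time compute, has a simple control-flow shape. A minimal sketch; the `generate` and `critique` callables are hypothetical stand-ins for model calls, not any real API:

```python
def critique_and_retry(generate, critique, prompt, max_rounds=3):
    """Draft an answer, ask a critic for problems, and regenerate until
    the critic is satisfied or the compute budget (max_rounds) runs out."""
    draft = generate(prompt)
    for _ in range(max_rounds):
        problems = critique(prompt, draft)
        if not problems:
            return draft                  # critic found nothing to fix
        draft = generate(prompt + "\nFix these issues: " + "; ".join(problems))
    return draft                          # best effort after max_rounds

# Toy stand-ins that only demonstrate the control flow.
def toy_generate(prompt):
    return "answer v2" if "Fix these issues" in prompt else "answer v1"

def toy_critique(prompt, draft):
    return [] if draft.endswith("v2") else ["too vague"]
```

The reliability trade is explicit here: each extra round buys another chance at a passable answer at the price of more model calls.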
I think, you know, that's the gap that we see a lot of today: everyone who's trying to build agents needs to get to the point where it's robust enough to be deployable, and it's an unknown amount of time. Okay. [00:27:12]Swyx: So this goes back into what Imbue is going to offer as a product or a platform. How are you going to actually help people deploy those agents? Yeah. [00:27:21]Kanjun: So our current hypothesis, I don't know if this is actually going to end up being the case. We've built a lot of tools for ourselves internally around debugging, around abstractions or techniques after the model generation happens, like after the language model generates the text, and interfaces for the user and the underlying model itself, like models talking to each other. Maybe some set of those things, kind of like an operating system, will be helpful for other people. And we'll figure out what set of those things is helpful for us to make our agents. Like what we want to do is get to a point where we can start making an agent, deploy it, and it's reliable, very quickly. And there's a similar analog to software engineering. In the early days, in the sixties and seventies, to program a computer you had to go all the way down to the registers and write things, and eventually we had assembly. That was an improvement. But then we wrote programming languages with these higher levels of abstraction, and that allowed a lot more people to do this and much faster. And the software created is much less expensive. And I think it's basically a similar route here, where we're in the bare metal phase of agent building, and we will eventually get to something with much nicer abstractions. [00:28:36]Alessio: We had this conversation with George Hotz and we were like, there's not a lot of reasoning data out there. And can the models really understand?
And his take was like, look, with enough compute, you're not that complicated as a human. Like the model can figure out eventually why certain decisions are made. What's been your experience? Like as you think about reasoning data, do you have to do a lot of manual work, or is there a way to prompt models to extract the reasoning from actions that they see? [00:29:03]Kanjun: So we don't think of it as, oh, throw enough data at it and then it will figure out what the plan should be. I think we're much more explicit. You know, a way to think about it is, as humans, we've learned a lot of reasoning strategies over time. We are better at reasoning now than we were 3000 years ago. An example of a reasoning strategy is noticing you're confused. Then when I notice I'm confused, I should ask: what was the original claim that was made? What evidence is there for this claim? Does the evidence support the claim? Is the claim correct? This is a reasoning strategy that was developed in the 1600s, you know, with the advent of science. So that's an example of a reasoning strategy. There are tons of them. We employ lots of heuristics all the time that help us be better at reasoning. And we didn't always have them. And because they're invented, we can generate data that's much more specific to them. So I think internally, yeah, we have a lot of thoughts on what reasoning is, and we generate a lot more specific data. We're not just like, oh, it'll figure out reasoning from this black box, or it'll figure out reasoning from the data that exists. Yeah. [00:30:04]Alessio: I mean, the scientific method is a good example. If you think about hallucination, right, people are thinking, how do we use these models to do net new scientific research? And if you go back in time and the model is like, well, the earth revolves around the sun, and people are like, man, this model is crap. It's like, what are you talking about?
Like the sun revolves around the earth. It's like, how do you see the future? Like if the models are actually good enough, but we don't believe them, how do we make the two live together? So say you use Imbue as a scientist to do a lot of your research, and Imbue tells you, hey, I think this is a serious path you should go down. And you're like, no, that sounds impossible. How is that trust going to be built? And what are some of the tools that maybe are going to be there to inspect it? [00:30:51]Kanjun: Really there are two answers to this. One element of it is, as a person, I need to basically get information out of the model such that I can try to understand what's going on with the model. Then the second question is, okay, how do you do that? And that's kind of some of our debugging tools. They're not necessarily just for debugging. They're also for interfacing with and interacting with the model. So if I go back in this reasoning trace and change a bunch of things, what's going to happen? What does it conclude instead? So that kind of helps me understand, what are its assumptions? And, you know, we think of these things as tools. And so it's really about, as a user, how do I use this tool effectively? I need to be willing to be convinced as well. It's like, how do I use this tool effectively? And what can it help me with? [00:31:36]Swyx: And what can it tell me? There's a lot of mention of code in your process. And I was hoping to dive in even deeper. I think we might run the risk of giving people the impression that you view code or you use code just as a tool within Imbue, just for coding assistance. But I think you actually train code models. And I think there's a lot of informal understanding about how adding code to language models improves their reasoning capabilities.
I wonder if there's any research or findings that you have to share that talks about the intersection of code and reasoning. Hmm. Yeah. [00:32:08]Kanjun: So the way I think about it intuitively is like code is the most explicit example of reasoning data on the internet. [00:32:15]Swyx: Yeah. [00:32:15]Kanjun: And it's not only structured, it's actually very explicit, which is nice. You know, it says this variable means this, and then it uses this variable. And then the function does this. As people, when we talk in language, it takes a lot more to extract that explicit structure out of our language. And so that's one thing that's really nice about code is I see it as almost like a curriculum for reasoning. I think we use code in all sorts of ways. The coding agents are really helpful for us to understand what are the limitations of the agents. The code is really helpful for the reasoning itself. But also code is a way for models to act. So by generating code, it can act on my computer. And, you know, when we talk about rekindling the dream of the personal computer, kind of where I see computers going is, you know, like computers will eventually become these much more malleable things where I, as a user today, I have to know how to write software code, like in order to make my computer do exactly what I want it to do. But in the future, if the computer is able to generate its own code, then I can actually interface with it in natural language. And so one way we think about agents is kind of like a natural language programming language. It's a way to program my computer in natural language that's much more intuitive to me as a user. And these interfaces that we're building are essentially IDEs for users to program our computers in natural language. Maybe I should say what we're doing that way. Maybe it's clearer. [00:33:47]Swyx: I don't know. [00:33:47]Alessio: That's a good pitch. 
What do you think about the different approaches people have, kind of like text first, browser first, like MultiOn? What do you think the best interface will be? Or what is your thinking today? [00:33:59]Kanjun: In a lot of ways, chat as an interface... I think Linus, Linus Lee, you had him on, and I really like how he put it: chat as an interface is skeuomorphic. So in the early days, when we made word processors on our computers, they had notepad lines, because that's what we understood these objects to be. Chat, like texting someone, is something we understand. So texting our AI is something that we understand. But today's word documents don't have notepad lines. And similarly, chat is a very primitive way of interacting with agents. What we want is to be able to inspect their state and to be able to modify them and fork them and all of these other things. And we internally think about, what are the right representations for that? Like architecturally, what are the right representations? What kind of abstractions do we need to build? And how do we build abstractions that are not leaky? Because the abstractions today are leaky. Like, you know, this stochastic generation of text is a leaky abstraction. I cannot depend on it. And that means it's actually really hard to build on top of. But our experience and belief is that by building better abstractions and better tooling, we can actually make these things non-leaky. And now you can build whole things on top of them. So these other interfaces, because of where we are, we don't think that much about them. [00:35:17]Swyx: Yeah. [00:35:17]Alessio: I mean, you mentioned, this is kind of like the Xerox PARC moment for AI. And we had a lot of stuff come out of PARC, like the what-you-see-is-what-you-get editors and MVC and all this stuff. But yeah, but then we didn't have the iPhone at PARC.
We didn't have all these higher things. What do you think it's reasonable to expect in this era of AI, you know, call it like five years or so? What are the things we'll build today, and what are things that maybe we'll see in kind of the second wave of products? [00:35:46]Kanjun: That's interesting. I think the waves will be much faster than before. Like what we're seeing right now is basically a continuous wave. Let me zoom a little bit earlier. So people like the Xerox PARC analogy I give, but I think there are many different analogies. Like the analog-to-digital computer transition is another example, another analogy for where we are today. The analog computer Vannevar Bush built in the 1930s, I think, was a system of pulleys, and it could only calculate one function. Like it could calculate an integral. And that was so magical at the time, because you actually did need to calculate this integral a bunch. But it had a bunch of issues: in analog, errors compound. And so there was actually a set of breakthroughs necessary in order to get to the digital computer: Turing's decidability, Shannon's insight that relay circuits can be mapped to Boolean operators, and a set of other theoretical breakthroughs, which essentially were abstractions. They were creating abstractions for these very lossy, very analog circuits, and digital had this nice property of being error correcting. And so when I talk about less leaky abstractions, that's what I mean. That's what I'm kind of pointing a little bit to. It's not going to look exactly the same way. And then the Xerox PARC piece, a lot of that is about, how do we get to computers that, as a person, I can actually use well? And the interface actually helps it unlock so much more power.
So the sets of things we're working on, the sets of abstractions and the interfaces, hopefully help us unlock a lot more power in these systems. Hopefully that'll come not too far in the future. I could see a next version, maybe a little bit farther out: an agent protocol. So a way for different agents to talk to each other and call each other. Kind of like HTTP. [00:37:40]Swyx: Do you know if it exists already? [00:37:41]Kanjun: Yeah, there is a nonprofit that's working on one. I think it's a bit early, but it's interesting to think about right now. Part of why I think it's early is because the issue with agents, it's not quite like the internet where you could make a website and the website would appear. The issue with agents is that they don't work. And so it may be a bit early to figure out what the protocol is before we really understand how these agents get constructed. But, you know, I think it's a really interesting question. [00:38:09]Swyx: While we're talking on this agent-to-agent thing, there's been a bit of research recently on some of these approaches. I tend to just call them extremely complicated chain of thoughting, but any perspectives on MetaGPT? I think that's the name of the paper. I don't know if you care about it at the level of individual papers coming out, but I did read that recently, and the TLDR is it beat GPT-4 on HumanEval by role-playing a software development agency. Instead of having a single shot or a single role, you have multiple roles, and all of them criticize each other as agents communicating with other agents. [00:38:45]Kanjun: Yeah, I think this is an example of an interesting abstraction of like, okay, can I just plop in this multi-role critiquing and see how it improves my agent? And can I just plop in chain of thought, tree of thought, plop in these other things and see how they improve my agent?
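The "plop in multi-role critiquing" idea has a simple shape: draft once, then loop over roles, each critiquing and revising the current draft. This is a toy sketch of the pattern, not the MetaGPT paper's actual pipeline, and `generate` is a hypothetical model call:

```python
def multi_role_review(generate, task, roles):
    """Draft once, then let each role critique the draft and revise it.
    More roles means more inference calls traded for reliability."""
    draft = generate(f"As an engineer, solve: {task}")
    for role in roles:
        feedback = generate(f"As a {role}, critique:\n{draft}")
        draft = generate(f"Revise given this {role} feedback:\n{feedback}\n\n{draft}")
    return draft

# A counting stand-in shows how many model calls the pattern costs.
calls = []
def toy_generate(prompt):
    calls.append(prompt)
    return f"draft-{len(calls)}"
```

Note the cost structure: one initial draft plus two calls per role, which is exactly the "spend more inference-time compute for reliability" trade discussed earlier.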
One issue with this kind of prompting is that it's still not very reliable. There's one lens which is like, okay, if you do enough of these techniques, you'll get to high reliability. And I think actually that's a pretty reasonable lens. We take that lens often. And then there's another lens that's like, okay, but it's starting to get really messy what's in the prompt, and how do we deal with that messiness? And so maybe you need cleaner ways of thinking about and constructing these systems. And we also take that lens. So yeah, I think both are necessary. Yeah. [00:39:29]Swyx: Side question, because I feel like this also brought up another question I had for you. I noticed that you work a lot with your own benchmarks, your own evaluations of what is valuable. I would contrast your approach with OpenAI, which tends to lean on, hey, we played StarCraft, or hey, we ran it on the SAT or the, you know, the AP bio test, and here are the results. Basically, is benchmark culture ruining AI? [00:39:55]Swyx: Or is that actually a good thing? Because everyone knows what an SAT is and that's fine. [00:40:04]Kanjun: I think it's important to use both public and internal benchmarks. Part of why we build our own benchmarks is that there are not very many good benchmarks for agents, actually. And to evaluate these things, you actually need to think about it in a slightly different way. But we also do use a lot of public benchmarks for like, is the reasoning capability in this particular way improving? So yeah, it's good to use both. [00:40:26]Swyx: So for example, the Voyager paper coming out of NVIDIA played Minecraft and set their own benchmarks on getting the diamond pickaxe or whatever and exploring as much of the territory as possible. And I don't know how that's received. That's obviously fun and novel for the rest of the audience, the people who are new to the scene.
But for people like yourselves, you built Avalon just because you already found deficiencies with using Minecraft. Is that valuable as an approach? Oh, yeah. I love Voyager. [00:40:57]Kanjun: I mean, Jim Fan, I think, is awesome. And I really like the Voyager paper, and I think it has a lot of really interesting ideas, like the agent can create tools for itself and then use those tools. [00:41:06]Swyx: He had the idea of the curriculum as well, which is something that we talked about earlier. Exactly. [00:41:09]Kanjun: And that's like a lot of what we do. We built Avalon mostly because we couldn't use Minecraft very well to learn the things we wanted. And so it's not that much work to build our own. [00:41:19]Swyx: It took us, I don't know. [00:41:22]Kanjun: We had like eight engineers at the time, took about eight weeks. So six weeks. [00:41:27]Swyx: And OpenAI built their own as well, right? Yeah, exactly. [00:41:30]Kanjun: It's just nice to have control over our environment. We built our own sandbox because we were really trying to inspect our own research questions. But if you're doing something like experimenting with agents and trying to get them to do things, Minecraft is a really interesting environment. And so Voyager has a lot of really interesting ideas in it. [00:41:47]Swyx: Yeah. Cool. One more element that we had on this list, which is context and memory. I think that's kind of like the foundational, quote unquote, RAM of our era. I think Andrej Karpathy has already made this comparison. So there's nothing new here. And that's just the amount of working knowledge that we can fit into one of these agents. And it's not a lot, right? Especially if you need to get them to do long running tasks. If they need to self-correct from errors that they observe while operating in their environment. Do you see this as a problem? Do you think we're going to just trend to infinite context and that'll go away?
Or how do you think we're going to deal with it? [00:42:22]Kanjun: I think when you talked about what's going to happen in the first wave and then in the second wave, I think what we'll see is we'll get like relatively simplistic agents pretty soon. And they will get more and more complex. And there's like a future wave in which they are able to do these like really difficult, really long running tasks. And the blocker to that future, one of the blockers is memory. And that was true of computers too. You know, I think when von Neumann made the von Neumann architecture, he was like, the biggest blocker will be like, we need this amount of memory, which is like, I don't remember exactly like 32 kilobytes or something to store programs. And that will allow us to write software. He didn't say it this way because he didn't have these terms, but that only really was like happened in the seventies with the microchip revolution. It may be the case that we're waiting for some research breakthroughs or some other breakthroughs in order for us to have like really good long running memory. And then in the meantime, agents will be able to do all sorts of things that are a little bit smaller than that. I do think with the pace of the field, we'll probably come up with all sorts of interesting things like, you know, RAG is already very helpful. [00:43:26]Swyx: Good enough, you think? [00:43:27]Kanjun: Maybe good enough for some things. [00:43:29]Swyx: How is it not good enough? I don't know. [00:43:31]Kanjun: I just think about a situation where you want something that's like an AI scientist. As a scientist, I have learned so much about my fields and a lot of that data is maybe hard to fine tune or on, or maybe hard to like put into pre-training. Like a lot of that data, I don't have a lot of like repeats of the data that I'm seeing. You know, like if I'm a scientist, I've like accumulated so many little data points. 
And ideally I'd want to store those somehow, or use those to fine tune myself as a model somehow, or have better memory somehow. I don't think RAG is enough for that kind of thing. But RAG is certainly enough for user preferences and things like that. Like, what should I do in this situation? What should I do in that situation? That's a lot of tasks. We don't have to be a scientist right away. Awesome. [00:44:21]Swyx: I have a hard question, if you don't mind me being bold. Yeah. I think the most comparable lab to Imbue is Adept. You know, a research lab with some amount of product situation on the horizon, but not just yet, right? Why should people work for Imbue over Adept? And we can cut this if it's too like... Yeah. [00:44:40]Kanjun: The way I think about it is I believe in our approach. The type of thing that we're doing is we're trying to build something that enables other people to build agents, and build something that really can be maybe something like an operating system for agents. I know that that's what we're doing. I don't really know what everyone else is doing. You know, I can kind of talk to people and have some sense of what they're doing. And I think it's a mistake to focus too much on what other people are doing, because extremely focused execution on the right thing is what matters. To the question of, why us? I think: strong focus on reasoning, which we believe is the biggest blocker; on inspectability, which we believe is really important for user experience and also for the power and capability of these systems; building non-leaky, good abstractions, which we believe is solving the core issue of agents, which is around reliability and being able to make them deployable; and then really seriously trying to use these things ourselves, every single day, and getting to something that we can actually ship to other people that becomes something that is a platform.
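The "RAG is enough for user preferences" point above comes down to a retrieval step over stored notes that get prepended to the model prompt. A deliberately tiny sketch; real systems use learned vector embeddings rather than word overlap, and all names here are made up:

```python
def embed(text):
    # Toy "embedding": bag-of-words counts. Real RAG uses learned vectors.
    words = text.lower().split()
    return {w: words.count(w) for w in words}

def similarity(a, b):
    # Dot product over shared words.
    return sum(count * b.get(word, 0) for word, count in a.items())

def retrieve(notes, query, k=2):
    """Return the k stored preference notes most similar to the query,
    ready to be prepended to a model prompt as context."""
    q = embed(query)
    return sorted(notes, key=lambda n: similarity(q, embed(n)), reverse=True)[:k]

preferences = [
    "user prefers tabs over spaces",
    "user wants tests generated for every bug fix",
    "user likes concise commit messages",
]
```

For "what should I do in this situation" lookups this is plausible; for the accumulated, one-off observations of a working scientist, as Kanjun argues, a keyword-or-embedding fetch over notes is a much weaker form of memory than actually updating the model.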
Like, it feels like it could be Mac or Windows. I love the dogfooding approach. [00:45:49]Swyx: That's extremely important. And you will not be surprised how many agent companies I talk to that don't use their own agent. Oh no, that's not good. That's a big surprise. [00:45:59]Kanjun: Yeah, I think if we didn't use our own agents, then we would have all of these beliefs about how good they are. Wait, did you have any other hard questions you wanted to ask? [00:46:08]Swyx: Yeah, mine was just the only other follow-up that you had based on the answer you just gave was, do you see yourself releasing models or do you see yourself, what is the artifacts that you want to produce that lead up to the general operating system that you want to have people use, right? And so a lot of people just as a byproduct of their work, just to say like, hey, I'm still shipping, is like, here's a model along the way. Adept took, I don't know, three years, but they released Persimmon recently, right? Like, do you think that kind of approach is something on your horizon? Or do you think there's something else that you can release that can show people, here's kind of the idea, not the end products, but here's the byproducts of what we're doing? [00:46:51]Kanjun: Yeah, I don't really believe in releasing things to show people like, oh, here's what we're doing that much. I think as a philosophy, we believe in releasing things that will be helpful to other people. [00:47:02]Swyx: Yeah. [00:47:02]Kanjun: And so I think we may release models or we may release tools that we think will help agent builders. Ideally, we would be able to do something like that, but I'm not sure exactly what they look like yet. [00:47:14]Swyx: I think more companies should get into the releasing evals and benchmarks game. Yeah. [00:47:20]Kanjun: Something that we have been talking to agent builders about is co-building evals. 
So we build a lot of our own evals, and every agent builder tells me, basically, evals are their biggest issue. And so, yeah, we're exploring right now. And if you are building agents, please reach out to me, because I would love to figure out how we can be helpful based on what we've seen. Cool. [00:47:40]Swyx: That's a good call to action. I know a bunch of people that I can send your way. Cool. Great. [00:47:43]Kanjun: Awesome. [00:47:44]Swyx: Yeah. We can zoom out to other interests now. [00:47:46]Alessio: We got a lot of stuff. So we have Sharif from Lexica on the podcast. He had a lot of interesting questions on his website. You similarly have a lot of them. Yeah. [00:47:55]Swyx: I need to do this. I'm very jealous of people with personal websites right there. Like, here's the high level questions of goals of humanity that I want to set people on. And I don't have that. [00:48:04]Alessio: It's never too late, Sean. [00:48:05]Swyx: Yeah. [00:48:05]Alessio: It's never too late. [00:48:06]Kanjun: Exactly. [00:48:07]Alessio: There were a few that stuck out as related to your work that maybe you're kind of learning [00:48:12]Swyx: more about it. [00:48:12]Alessio: So one is, why are curiosity and goal orientation often at odds? And from a human perspective, I get it. It's like, you know, would you want to go explore things or kind of focus on your career? How do you think about that from an agent perspective? Where it's like, should you just stick to the task and try and solve it within the guardrails as possible? Or should you look for alternative solutions? [00:48:34]Swyx: Yeah. [00:48:34]Kanjun: I think one thing that's really interesting about agents actually is that they can be forked. Like, you know, we can take an agent that's executed to a certain place and say, okay, here, fork this and try a bunch of different things.
Some of those agents can be goal oriented and some of them can be more curiosity driven. You can prompt them in slightly different ways. And something I'm really curious about is what would happen if, in the future, we were able to actually go down both paths. As a person, why I have this question on my website is I really find that I can only take one mode at a time, and I don't understand why. Is it inherent in the kind of context that needs to be held? That's why I think from an agent perspective, forking is really interesting. I can't fork myself to do both, but I maybe could fork an agent at a certain point in a task. [00:49:26]Swyx: Yeah. Explore both. Yeah. [00:49:28]Alessio: How has the thinking changed for you as the funding of the company changed? That's one thing that I think a lot of people in the space think about: should I raise venture capital? How should I get money? How do you feel your options to be curious versus goal oriented have changed as you've raised more money and the company has grown? [00:49:50]Kanjun: Oh, that's really funny. Actually, things have not changed that much. So we raised our $20 million Series A in late 2021. And our entire philosophy at that time was, and still kind of is, how do we figure out the stepping stones, like collect stepping stones that eventually let us build agents, kind of these new computers that help us do bigger things. And there was a lot of curiosity in that. And there was a lot of goal orientation in that. Like the curiosity led us to build CARBS, for example, this hyperparameter optimizer. [00:50:28]Swyx: Great name, by the way. Is there a story behind that name? [00:50:31]Kanjun: Thank you. Yeah, Abe loves CARBS. It's also cost aware. So as soon as he came up with cost aware, he was like, I need to figure out how to make this work.
But the cost awareness of it was really important. So that curiosity led us to this really cool hyperparameter optimizer. That's actually a big part of how we do our research. It lets us experiment on smaller models, and those experiment results carry to larger ones. [00:50:56]Swyx: You also published scaling laws, which is great. I think the scaling laws papers, from OpenAI and from Google, were the greatest public service to machine learning that any research lab can do. Yeah, totally. [00:51:10]Kanjun: What was nice about CARBS is it gave us scaling laws for all sorts of hyperparameters. So yeah, that's cool. It basically hasn't changed very much. So there's some curiosity, and then there's some goal oriented parts. Like Avalon, it was like a six to eight week sprint for all of us. And we got this thing out. And now different projects do more curiosity or more goal orientation at different times. Cool. [00:51:36]Swyx: Another one of your questions that we highlighted was, how can we enable artificial agents to permanently learn new abstractions and processes? I think this might be called online learning. [00:51:45]Kanjun: Yeah. So I struggle with this because, you know, that scientist example I gave. As a scientist, I've permanently learned a lot of new things. And I've updated and created new abstractions and learned them pretty reliably. And you were talking about, okay, we have this RAM that we can store learnings in. But how well does online learning actually work? And the answer right now seems to be, as models get bigger, they fine tune faster. So they're more sample efficient as they get bigger. [00
In the latest episode of "Chef AF", we welcomed Chef Daniel Garwood, a culinary force to be reckoned with. Currently, the Sous Chef at Atomix in New York City – the restaurant that recently made waves by ranking No. 8 in the 2023 World's 50 Best Restaurants list – Garwood is more than just a chef; he's a global storyteller.From Tasmania to the World StageBorn in Launceston, Tasmania, Chef Garwood started his culinary journey in local Tasmanian restaurants, where his passion for foraging, grilling, and using all-natural ingredients blossomed. His global expeditions led him to hone his skills in world-renowned kitchens from Sweden to South Korea, and all the way to the heart of New York City.It is at Atomix that Chef Garwood combines his rich tapestry of international experiences with the unique essence of New York – a city known for its diverse culinary landscape. While the foundation is Korean-inspired, the execution is globally nuanced, resulting in a menu that is distinctly "Korean but not."As we chatted about Atomix, it became clear that New York City is more than just a workplace for Daniel. The city's melange of cultures has shaped his approach at Atomix. However, Daniel humbly credits the restaurant's unique identity mostly to Chef Junghyun 'JP' Park, considering himself in a supportive role.Mentorship and The Power of NetworkingWe delved deep into the pivotal role mentorship plays in shaping culinary careers. Daniel's recent U.S. Regional Winner title at The S.Pellegrino Young Chef Academy Competition is a testament to this. His relationship with mentor Chef Nina Compton, from Compère Lapin in New Orleans, has been transformative. With her guidance, Chef Garwood has navigated the challenges of the competition, simultaneously staying authentic to his culinary ethos.But mentorship wasn't the only gain from the SPYCA competition. 
Daniel's victory opened doors to elite culinary platforms such as the James Beard Foundation, introducing him to chefs and initiatives championing community welfare and employee well-being.

Signature Dishes and Culinary Identity

Daniel's Aged Duck and Persimmon in Tak Cheongju with Banchan Through The Eyes of a Traveler dish stands out not just for its flavor but for the narrative it tells. Each element, from the choice of wood for grilling to the unique preparation techniques, traces back to a place or moment in Chef Garwood's life. For instance, during his stint at Firedoor in Sydney, Chef Garwood was introduced to grilling with various woods, each infusing the dish with a distinctive flavor profile. In the competition, his signature dish embodies this ethos, incorporating techniques inspired by the countries he has cooked in.

Mental Health in the Culinary Limelight

The conversation took a poignant turn when we touched upon mental health – an issue close to Chef Garwood's heart. In collaboration with his wife, Sooky An, Daniel released "Oralis: A Conversation on Food and Mental Health," bringing awareness to the mental challenges faced by culinary professionals. By hosting dinner events, creating dialogue, and promoting practices like journaling and fitness, Chef Garwood is championing a change in the industry's perspective on mental well-being.

Aspirations and What Lies Ahead

New York City may be home for now, but Chef Garwood has plans. He dreams of opening a large-scale bar restaurant focused on grilling with varied woods – an experience he feels NYC is missing. For those eager to experience Chef Garwood's culinary magic, you can find him at Atomix. However, snagging a reservation, especially at the 14-seat chef's counter, is no small feat given the high demand. The conversation with Chef Daniel left us inspired and waiting with bated breath for the Grand Finale of the S.Pellegrino Young Chef Academy Competition in October.
Chef Daniel Garwood, with his signature dish and global experiences, is undoubtedly one to watch.
It's Friday and we are spinning the clock to the weekend and Open Championship week. Andy and Brendan begin with an unsubstantiated run-in with a Team Smash member amid the recent workout drama. Then they get to the quote roulette of players reacting to the Senate hearings, including Xander, Spieth, and Scheffler suggesting some hard questions coming for Jay and a total lack of clarity. There's a separate section on Xander's standing in the game. They also discuss the mishmash of narratives pushed by the Tour, and why the players are clearly right to lack trust in the leadership. Then they react to some of the startling numbers shown when Rory hit a persimmon driver at the Scottish Open, and his comments to "roll back everything" including the clubs. Lastly, they close with some SGS Golf Advice on a drunkard in a PGA Tour pro-am, an HS match against an opponent in a cart, and chipping green etiquette.
Todd Demsey is the most interesting man in golf. From shaping persimmon woods by hand for Kelly Slater, to traveling the PGA TOUR in an RV, Todd has an incredible story and relationship with the game of golf. As a 4-time All-American at Arizona State with legendary teammates like Phil Mickelson and Pat Perez, Todd won the NCAA Individual title by one shot over David Duval in 1993! After an incredible amateur career which included a Walker Cup invite, Todd joined the PGA TOUR in 1997 and traveled the country in an RV. Todd now makes his home in Neptune Beach, Florida, where he fashions persimmon woods by hand for players all over the globe. One thing's for sure, the game needs more people like Todd Demsey around.

Links from this episode:
Follow Straight Down the Middle'ish on Twitter: @SDTMishPodcast
Check out Todd's official website: Todd Demsey

If you like living forever, and you like golf, then you're going to LOVE Live Forever Golf. Enter discount code "LFG20" for 20% off your next order at LiveForeverGolf.com. Straight Down the Middle'ish is brought to you by Live Forever Golf. Check out our Final Few collection to get great deals on our clearance inventory! Free shipping on all orders over $100.
VOICEMAILS: A fear of vomiting. Pooping yourself in your car. Target is lame for getting rid of Pride merch. Persimmon poop emergency. Vacation induced constipation. A famous murderer lived in a listener's small town in the UK. M&M store in Vegas. Staying the night at a date's house after you ate sugar free gummy bears. Reno River Festival creeper.

Webcrawlerspod@gmail.com | 626-634-2069
Discord / Twitter / Instagram / Patreon / Merch

Support this show: http://supporter.acast.com/webcrawlers. Hosted on Acast. See acast.com/privacy for more information.
In this episode, Jon Teater (Whitetail Landscapes) and Ryan Haines (Blue Hill Wildlife Nursery) discuss the importance and benefit of fruit trees on the landscape. Ryan explains the considerations when picking a specific tree to plant or propagate, and suggests that the best practice is replicating quality fruit trees on hunting properties. Jon and Ryan discuss how to prune and shape fruit trees, and converse about tree spacing, scaffolding, and light considerations. Ryan covers management techniques for developing strong branching and optimal fruit, soil considerations when planting, and soil amendments and deficiencies. Ryan and Jon suggest certain amendments required to support tree growth and key nutrients that tend to be deficient on the landscape. Ryan discusses using compost and common mistakes with planting fruit trees near and around food plots. Jon and Ryan discuss fruit tree site selection and sunlight needs. Ryan discusses spraying fruit trees and timing, and the two explain the differences and benefits of dwarf, semi-dwarf, and standard trees. Ryan explains fruit drop times and the importance of land setup and positioning of trees. Ryan ends by explaining his top Pear, Crabapple, Applecrab, Apple, and Persimmon choices and how to develop the best fruit tree layout for your property.

Check out the Sportsmen's Empire Podcast Network for more relevant outdoor content!

Social Links
https://whitetaillandscapes.com/
https://www.facebook.com/whitetaillandscapes/
https://www.instagram.com/whitetail_landscapes/?hl=en
Fruit Trees For Deer - Blue Hill Wildlife Nursery - Buy Wildlife Trees

Learn more about your ad choices. Visit megaphone.fm/adchoices