Podcasts about Sook

  • 157 PODCASTS
  • 615 EPISODES
  • 15m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • LATEST: May 12, 2025

POPULARITY

[Popularity chart: 2017–2024]


Best podcasts about Sook

Latest podcast episodes about Sook

The TradeWaiters
eps. 117: "Banned Book Club" by Kim Hyun Sook, Ko Hyung-Ju, & Ryan Estrada

The TradeWaiters

May 12, 2025 · 64:46


For this episode our panel read "Banned Book Club" by Kim Hyun Sook, Ko Hyung-Ju, and Ryan Estrada. This book follows Hyun Sook's adventures as a young revolutionary during South Korea's Fifth Republic dictatorship. Because irony is dead, this book has in turn faced book bans of its own. Join us as we talk about the freedom to read, the importance of being able to imagine a different world, that a police state will inevitably do police state things, and how much we love the bus. Music by Sleuth. Our next episode will be a minisode on comics / life / balance.

De Döschkassen
Konklave in Reichsdag

De Döschkassen

May 8, 2025 · 2:43


My goodness, that was quite an upset in the Reichstag on Tuesday, wasn't it? For many people, the white smoke after the first round of voting existed only in their heads. Only on the second attempt did it go "habemus Kanzler". And now? Is our Chancellor second choice? Well. People here are already grumbling from every nook and corner. That's just so German, isn't it. If you look at the history of papal elections, Merz actually came off rather well. The election of Pope Gregory X, starting in 1268, took almost three years. Only when the cardinals were locked in on bread and water and, in the end, the roof was torn off their palace did things start to move. And to this day there has not been a single pope who was elected right away in the first round. Not this time either. There, that's completely normal. More than that: it's fascinating and mysterious and all the rest of it. Every time black smoke rises from the chimney of the Sistine Chapel, the tension climbs. Yes, there it works. Not with us. Had Konrad Adenauer only become Federal Chancellor in the second round of voting, everything would be fine now as well. But that's how it is here: "We've always done it that way, we've never done it that way, anybody could come along with that." Well now. We had also never once had such a silly result in a Bundestag election in the history of the Federal Republic. Corona plays a part too, some ministers are cross that they are no longer allowed to play along, and so on. Half the world is standing on its head, but here everything has to run by the handbook. And if people abroad croak about our chancellor election, well, let them croak. The neighbours who rail at you the loudest are, as a rule, not entirely squeaky clean themselves. And a government as twisted as the ones you can currently watch in Russia, China, Turkey, the USA and who knows where else, we have not had here since 1949. And it seems to me we won't be getting one now either. I'd say: let the new team get through its first 100 days first. Then we'll see. And in my opinion there is no reason to start whining after just five minutes... In this spirit

De Döschkassen
Dösige Tiern in de Luft

De Döschkassen

Apr 24, 2025 · 2:30


Most of us have probably seen a drone by now. Some may even have one at home. At first, those things with four, six or eight propellers were not much more than funny toys. Meanwhile, though, you can do all sorts of things with them. You can mount a camera on one, for example to check whether the neighbour happens to be lying around naked in his garden. But of course you can also use them to make wonderful aerial shots that used to be really expensive. Well, you can also do quite dreadful things with drones, such as using them as flying bombs and so on. But that is not what this is about. I'm interested in all the useful applications these clever little flyers make possible. Big mail-order companies are already starting to have parcels flown right to the front door with them. All that's missing is an arm with a finger to press the doorbell. On the other hand, the postman, who now has to look for a new job, may not find that so funny at first, and you can ask yourself in general where all this technology is going to lead. But not everything new is automatically bad. One truly useful application has existed in China and Africa for a while, and now here as well: blood supplies and the like are flown back and forth between hospitals by drone. That is a fine thing. It's quick, it's cheap, no car has to sit in a traffic jam for it, and so on. Splendid. But what happens if such a drone with two or three litres of the juice of life on board comes crashing down? I can already picture the drama quite vividly. "Doctor, doctor, I have an emergency here," the worried nature lover says to the vet. "Here, this silly bird just fell out of the sky, right at my feet. There was a terrible clatter and the blood simply sprayed everywhere. But it's still twitching now and then. Is there perhaps anything you can still do?" Well, the doctor would surely realise quickly that an operation would not be worth it here, and that not even a drone with a blood supply on board could help anymore... In this spirit

De Döschkassen
Verdreihten Kanol

De Döschkassen

Apr 3, 2025 · 2:29


Did you hear it on the radio on Tuesday too? The news reported that the Nord-Ostsee-Kanal is being renamed the "Ost-Nordsee-Kanal". The "Lübeck Institute for Geographical Equality" had been suing over this for years and has now won before the highest state court in Schleswig. The reasoning was that it was unfair that the North Sea has stood at the front of the canal's name for 77 years now. For the next 77 years the Baltic Sea is to take first place in the name instead. Brunsbüttel's mayor Martin Schmedtje also had something to say about it on the radio. He is livid and wonders what kind of "Honk" wants to turn the Nord-Ostsee-Kanal into an "ONK". Ulf Kämpfer, the lord mayor of Kiel, thinks it's a good idea, since he is at home on the Baltic, after all. And Thomas Fischer, the press spokesman of the Waterways and Shipping Office, sees nothing but a great pile of work in the renaming. The silly thing, though, was that the whole story ran on only one radio station and not on all of them. And anyone who had looked at the calendar knew that Tuesday was the 1st of April. Yes, it was an April Fools' joke. Not bad. They thought it up nicely. That Schmedtje, Kämpfer and Fischer played along I found amusing too. Many listeners surely realised straight away that it was a prank. Had the report said that the NOK was to be renamed the "Donald Trump Canal", more people would probably have believed it. Then again, with Trump it feels as though every day is the 1st of April where the truth is concerned. And he is far from alone in that. Well, and when did you last send someone off on an April fool's errand? It seems to have gone somewhat out of fashion. But I think it is a fine thing to use your grey cells now and then to pull other people's legs with something harmless. Afterwards you can laugh about it together. Because when it comes to ugly and less amusing matters, we are already being taken for a ride more than enough... In this spirit

The TASTE Podcast
556: Bonnie's Calvin Eng Has MSG Tattooed on His Arm (And All Over His Book)

The TASTE Podcast

Mar 12, 2025 · 61:58


Calvin Eng is the chef and owner of Bonnie's, a very popular Cantonese American restaurant in Williamsburg, Brooklyn. He's also the author of a great new cookbook: Salt Sugar MSG, an introduction to his unique style of home cooking, co-written with his partner, Phoebe Melnick. It's so fun to have Calvin in the studio to talk about growing up in New York's many Chinatowns, developing recipes for home cooks (instead of line cooks), and more. Also on the show, Matt has a fun conversation with Substack's Austin Tedesco. We love Substack and talked about the upcoming Grubstack virtual festival, running from March 13-15. We also talked about how Substack is helping writers reach wide audiences, and went over some of our favorites. Mentioned on the episode: Bangers and Jams, Alison Roman, Feed Me, Vittles, Sweet City, The Best Bit, Perfect City, Bite Sized, The Angel, Cake Zine, Sook, A Small & Simple Thing, Snaxshot, Why Is This Interesting? Do you enjoy This Is TASTE? Drop us a review on Apple, or star us on Spotify. We'd love to hear from you. READ MORE: Buttery Oyster Sauce Noodles [TASTE]; Salted MSG Caramel Sauce [Food and Wine]; The Future of Food Media Is in Your Inbox [TASTE]

De Döschkassen
Föffti Perzent goode Tokunft

De Döschkassen

Feb 27, 2025 · 2:38


So, we have voted. And as it looks, Achim Merz will follow Olaf Scholz as the next Chancellor. What, you don't know any Achim Merz? Yes, you do. It's just that you know him better by his second first name. With everything that belongs to it, his name is Joachim-Friedrich Martin Josef Merz. That makes the identity card a good 20 grams heavier straight away. Be that as it may. In any case, Merz, just like all the other people from all the other parties, would have given a great deal to know in advance how the election would turn out. There really are politicians who quietly go to a clairvoyant or an astrologer when elections or important decisions are coming up. It was already like that with the oracle of Delphi. A few thousand years ago the rulers went there to enquire about the future. And you could simply drop in, too, without an appointment: a proper oracle, after all, knows in advance that you are coming. With an astrologer it's a different matter. There Joachim-Friedrich would first have had to say when he came into the world. In his case that was in 1955, on the 11th of November, no less. Whether he was also born at eleven minutes past eleven, I don't know. But with a carnival birthday like that, you automatically stand with one foot in the carnival tub from the very first moment. Even so, he could know that he would one day become Federal Chancellor just as little as Olaf Scholz could know in 2021 that he would not even last four years as Chancellor. Well, we simply don't know what tomorrow will bring. And that is a good thing. Otherwise you could hand the pupils their final report cards right on the first day of school. Strictly speaking, you only want to know something about the future anyway when it is something pleasant. Still, you can see it positively: because we don't know what is coming, the odds stand at exactly 50 percent that everything will turn out for the good, including where the incoming government is concerned. And 50 percent is already quite a lot. On top of that there is what Mark Twain already said: predictions are not so easy, especially when they concern the future... In this spirit

De Döschkassen
Dor liggt wat in de Luft

De Döschkassen

Feb 13, 2025 · 2:36


Oh dear, the Federal Environment Agency has sounded the alarm. That was the day before yesterday. There was a warning about "smog". Yet Wednesday was, to my mind, one of the loveliest days we have had at all so far this year. Here in Windbergen the sun shone nearly the whole blessed day. Now and then it had to squeeze past a few clouds, but as far as I could tell we simply had a really lovely day. Then again, I am only a layman. And when laymen get involved with something that otherwise only experts understand, what comes out, for example, is that you think we have lovely clean air when in reality there is smog. There was too much "Fienstoff", that is, particulate matter, in the air, the experts said. And that came from it being so cold. When it's cold, we heat more. And especially everyone who heats with a stove or a fireplace, that is, with wood and briquettes and the like, is, alongside the coal and gas power plants and industry in general, to blame for there being too much particulate matter in the air while the average consumer thinks everything is fine. Hmm. How about that. But we already have so many wind turbines standing around here, and we have given up so many handsome paddocks and meadows for solar power. And we still have smog and excessive particulate pollution? Was there even more smog in the old days, then, and the agency in question just never told us? Well, if we were in America, Trump and Musk would long since have sent the people from the Federal Environment Agency off to their colleagues at the employment office. Complaining about industry gets you nowhere with Donald and Elon. But to be completely honest: without the agency I really would not have noticed that the day before yesterday we had smog and no sunshine. And then there is one more thing I would never have thought of without the officials: because of all the particulate matter in the air, the Federal Environment Agency warned against strenuous activities outdoors. Jogging, in particular, was to be avoided. Ha! Luckily I never go jogging anyway. Well then, I did everything right after all... In this spirit

So Here's What Happened
Carolyn Talks 'Paying For It' with Filmmaker Sook-yin Lee

So Here's What Happened

Feb 6, 2025 · 44:29


In this episode of #CarolynTalks, I chat with director and writer Sook-yin Lee about her humorous and thought-provoking film PAYING FOR IT. Co-written with Joanne Sarazen and based on the real experiences of a couple, the film follows Chester (Dan Beirne) and Sonny (Sook-yin's alter ego, played by Emily Lê) as they embark on a journey of self-discovery that challenges and reshapes how they view life, love, and sex, and their perceptions of and biases about relationships. #PayingForItMovie #Interview #CanadianFilm #FilmCritic #GraphicNovel. #PayingForIt premiered at #TIFF24 and is based on the 2001 graphic novel written by Chester Brown. Trailer and images courtesy of TIFF and Route 504. Find me on Twitter and Instagram at @CarrieCnh12. Buy me a coffee or pizza at https://buymeacoffee.com/carolynhinds?status=1 or paypal.com/paypalme/carolynhinds0525. My social media hashtags are: #CarolynTalks #DramasWithCarrie #SaturdayNightSciFi #SHWH #KCrush. Visit Authory.com/CarolynHinds to find links to all of my published film festival coverage, writing, YouTube and other podcasts, So Here's What Happened! and Beyond The Romance.

De Döschkassen
Klock dree is Dag

De Döschkassen

Jan 23, 2025 · 2:41


It really is odd: almost every night, on the stroke of three, I wake up. Not because I have to go and see Aunt Meier, as they say, but simply just like that. Yes, and then for a while I cannot get back to sleep. There are people who cannot fall asleep because they are brooding and pondering over something. That is not the case with me. I am simply awake, and nothing more. It is especially annoying when the alarm is already set for five. Then I sit there on the edge of the bed and think: "Damn, the alarm goes off in two hours." Even sillier: sometimes I am right in the middle of dreaming something nice, nothing indecent, simply something nice. Well, sometimes it is a little bit indecent too. But that is another story. In any case, I then notice the lovely dream slowly slipping away from me. Like on television when the adverts come on. Nothing to be done about it. I have even worn one of those medical breathing gadgets with electrodes on my body, to measure whether anything is going wrong with my sleep. Everything is fine, though. No interruptions in my breathing. I'm just a bit fat. But I know that even without electrodes. So what to do? Count sheep? I have no sheep I could count. Counting cats would be possible. But there I only get to two, and that doesn't make me tired. Usually I don't feel like reading either. Milk with nutmeg is supposed to be a good remedy. But milk gives me the trots. So that doesn't help with falling asleep either. In the end I often get up and look at something or other on the internet. That actually does make me tired. But sometimes it is so interesting that I cannot tear myself away so quickly, and then it is sometimes already four o'clock. Then there are still 60 minutes left in which I can drift off to the land of Nod once more. Oh well, I get through the day anyway, and really there is nothing wrong with me. Still: sleeping through the night is a fine thing. My missus thinks about the new bathroom we will be getting soon. Then she is out like a light. That doesn't work for me. But wait a moment. All this thinking about being able to fall asleep is making me properly tired. Yawn. I think I'd better get to bed quickly... In this spirit

Real Estate Renegades with Glenn Twiddle
Ep48 - Glenn Having a Sook - Glenn Twiddle

Real Estate Renegades with Glenn Twiddle

Jan 12, 2025 · 98:10


Not proud, but this episode is mainly therapy for me. Sure, there might be a few life lessons in here, but this is drama, soap opera, and several real estate agents (and a coach as well) all acting in less than admirable ways! A masterclass in what not to do and who NOT to be! For your prospecting DONE FOR YOU, at less than the cost of your PA doing it, visit http://www.AgentProspecting.com.au

Triple M Rocks Footy AFL
CLASSIC SATURDAY RUB | Spud snubbed at Hall of Fame then butchers his segment, Damo's boundary sook

Triple M Rocks Footy AFL

Jan 4, 2025 · 25:11


Today's Classic Rub episode is from June 16, 2012, and involves Spud being snubbed for a seat at his mate Robert Harvey's Hall of Fame induction, Gaz copping it for some headwobble about his article in The Age, Spud butchering a top 5 segment, and Damo having a sook about having to do the boundary. We'll be dropping Classic Saturday Rub moments and episodes every Wednesday and Saturday over summer; subscribe to Triple M Footy AFL on the LiSTNR app to get every instalment as it drops!

De Döschkassen
Gooden Rutsch ut de Düsterflaute

De Döschkassen

Dec 26, 2024 · 2:41


So, just a few more days, and then we will have to write 2025 on letters, forms and all the rest. But what remains of 2024? The world is on fire and standing on its head in many places, but last year was no different. Let's see what becomes of Syria. Privately I experienced some lovely things that I will not forget. But apart from the end of the traffic-light coalition and Donald Trump being elected president once again, this year will probably not stand out particularly in the history books. There was no fall of the Wall and no reunification, but thankfully also no Chernobyl and no Fukushima. Without Chernobyl in 1986 and Fukushima, though, we would not have the "Düsterflaute" (the dark doldrums) now, would we? Angela Merkel had only just exited the nuclear phase-out decided under Red-Green. But in 2011, not a week after Fukushima, she promptly exited again from her exit from that exit. The Greens hardly knew what to say when Merkel made off with their issue. And that only because an election was coming up in North Rhine-Westphalia. Oh well. In the meantime, Red-Green-Yellow has turned quite a few wheels in energy policy. There is no more nuclear power here, at least none of our own, but in return we have the Düsterflaute. What a silly word, isn't it. It's like "thirst-hunger" or something. Well, let's see what 2025 has in store for us. Peace, certainly, would be a fine thing. I'm not all that confident about it, though. Isn't it silly that, of all people, it is a handful of older men who keep sending peace in the world to the devil? Otherwise, older men are so fond of grumbling about the young. So if it were up to me, 2025 should become the year in which the young set the course of world politics. Several times this year I have already been pleasantly surprised by young people. Yes, I know it is utopian that the young will set the world's course, but it is a lovely thought all the same. But whatever may come: I wish you health, contentment and a good slide into the new year...

Rev Limiter
The #QatarGP was a cracker, but why do the drivers sook so much?

Rev Limiter

Dec 3, 2024 · 58:53


Formula 1 is in its final two races of the season, and the whinging behaviour of its drivers has finally become too much to bear. Find out which driver lost respect for another over allegations of dobbing, but then turned around and did the same to someone else in the race. We love conflict, but not this epidemic of being sooky la-las. Why can't they just person-up and race like racers? This is all part of our debrief of the #QatarGP, which Max Verstappen won. Charles Leclerc and Auto Action's Oscar Piastri were on the podium. Plus, we are preparing for Jack Doohan's early F1 debut this weekend. There's plenty of other news and views, too. And why does Bruce rate The Flying Horse Bar & Brewery Grand Annual Sprintcar Classic at Warrnambool better than sex? If you are on the AC Delco site, use AAREVL25 for a 25% discount on all your parts. All this and more on the latest episode of the Auto Action RevLimiter. Check out https://autoaction.com.au/ for all the latest updates and breaking news. Thanks to Byron Clarke for production support. Music by Yevgeniy Sorokin from Pixabay

Humans of Jeju
Garden of Tangerines Baryeong, Farmers Ahn Yi-sook & Kang Seong-gil

Humans of Jeju

Sep 20, 2024 · 24:09


Ahn Yi-sook and Kang Seong-gil are earth-saving farmers running an eco-friendly, year-round healing-experience farm called Garden of Tangerines Baryeong. "Baryeong" is a Jeju-language word meaning the origin of organic farming, and they put that philosophy into practice. They offer various experiential programs throughout the seasons: tangerine blossoms in spring, green tangerines in summer, and picking ripe orange tangerines in autumn. Committed to an eco-friendly lifestyle, they use ecological-cycle farming methods, such as seawater farming and cover cropping, to promote a healthy planet and safe food.

Bad Dads Film Review
Midweek Mention... The Handmaiden

Bad Dads Film Review

Sep 4, 2024 · 32:13


You can now text us anonymously to leave feedback, suggest future content or simply hurl abuse at us. We'll read out any texts we receive on the show.

Welcome back to Bad Dads Film Review! Today, we're diving into the captivating and intricately plotted world of The Handmaiden, a 2016 psychological thriller directed by Park Chan-wook. This South Korean film, inspired by the novel Fingersmith by Sarah Waters, is a masterclass in storytelling, visual aesthetics, and suspense.

A Tale of Deceit and Desire: Set in 1930s Korea during the Japanese occupation, The Handmaiden tells the story of a young Korean woman, Sook-hee, who is hired as a handmaiden to a wealthy Japanese heiress, Lady Hideko. However, Sook-hee's employment is part of an elaborate scheme concocted by a con man named Count Fujiwara. His plan: for Sook-hee to help him seduce Lady Hideko so he can marry her, commit her to an asylum, and steal her wealth.

Layers Upon Layers: As the narrative unfolds, it becomes a mesmerizing tale of double-crosses, sexual awakening, and shifting loyalties. The story is presented in three parts, each from a different character's perspective, which gradually reveals the depths of deception and the complexities of the characters' relationships. This structure keeps the viewer guessing and reevaluating everything they know about the characters and their motives.

Why It Stands Out:
Visual Storytelling: Park Chan-wook's direction is visually stunning. Every frame is meticulously crafted, combining exquisite costume and set designs with a vibrant color palette that enhances the sensual and suspenseful atmosphere of the film.
Complex Characters: The characters in The Handmaiden are richly developed, with motivations that are both hidden and explicit. This complexity adds layers to the narrative, making the character dynamics engrossing and the plot twists more impactful.
Erotic Elements: The film incorporates strong erotic themes that are integral to the plot and character development. These elements are handled with artistry and contribute significantly to the film's exploration of freedom, power, and identity.

At its core, The Handmaiden explores themes of manipulation and liberation. It delves into the ways individuals manipulate each other for personal gain, but also how characters can break free from societal and personal oppression. The film challenges traditional notions of power and autonomy, particularly in relation to gender and sexuality.

For fans of psychological thrillers, historical dramas, and beautifully crafted cinema, The Handmaiden is a must-watch. Its ability to blend genre elements with a profound exploration of human desires and deceptions makes it a standout film that engages the mind and the senses.

Join us as we unravel the intricate web of The Handmaiden, exploring its narrative complexities, stunning aesthetics, and the provocative questions it raises about love, betrayal, and freedom. Whether you're discovering it for the first time or revisiting this modern classic, there's plenty to uncover in this thrilling cinematic journey.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Why you should write your own LLM benchmarks — with Nicholas Carlini, Google DeepMind

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Aug 29, 2024 · 70:05


Today's guest, Nicholas Carlini, a research scientist at DeepMind, argues that we should be focusing more on what AI can do for us individually, rather than trying to have an answer for everyone.

"How I Use AI" - A Pragmatic Approach

Carlini's blog post "How I Use AI" went viral for good reason. Instead of giving a personal opinion about AI's potential, he simply laid out how he, as a security researcher, uses AI tools in his daily work. He divided it into 12 sections:

* To make applications
* As a tutor
* To get started
* To simplify code
* For boring tasks
* To automate tasks
* As an API reference
* As a search engine
* To solve one-offs
* To teach me
* Solving solved problems
* To fix errors

Each of the sections has specific examples, so we recommend going through it. It also includes all prompts used for it; in the "make applications" case, it's 30,000 words total!

My personal takeaway is that the majority of the work AI can do successfully is what humans dislike doing: writing boilerplate code, looking up docs, taking repetitive actions, etc. These are usually boring tasks with little creativity, but with a lot of structure. This is the strongest argument as to why LLMs, especially for code, are more beneficial to senior employees: if you can get the boring stuff out of the way, there's a lot more value you can generate. This is less and less true as you move toward entry-level jobs, which are mostly boring and repetitive tasks. Nicholas argues both sides ~21:34 in the pod.

A New Approach to LLM Benchmarks

We recently did a Benchmarks 201 episode, a follow-up to our original Benchmarks 101, and some of the issues have stayed the same. Notably, there's a big discrepancy between what benchmarks like MMLU test and what the models are used for. Carlini created his own domain-specific language for writing personalized LLM benchmarks. The idea is simple but powerful:

* Take tasks you've actually needed AI for in the past.
* Turn them into benchmark tests.
* Use these to evaluate new models based on your specific needs.

It can represent very complex tasks, from a single code generation to drawing a US flag using C (a toy sketch of the pattern follows below):

"Write hello world in python" >> LLMRun() >> PythonRun() >> SubstringEvaluator("hello world")

"Write a C program that draws an american flag to stdout." >> LLMRun() >> CRun() >> VisionLLMRun("What flag is shown in this image?") >> (SubstringEvaluator("United States") | SubstringEvaluator("USA"))

This approach solves a few problems:

* It measures what's actually useful to you, not abstract capabilities.
* It's harder for model creators to "game" your specific benchmark, a problem that has plagued standardized tests.
* It gives you a concrete way to decide if a new model is worth switching to, similar to how developers might run benchmarks before adopting a new library or framework.

Carlini argues that if even a small percentage of AI users created personal benchmarks, we'd have a much better picture of model capabilities in practice.
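To make the pipeline idea above concrete, here is a toy sketch of how such a personal-benchmark harness could be wired up in Python. This is not Carlini's actual library: the stage names (LLMRun, PythonRun, SubstringEvaluator) are taken from the quoted examples, while Prompt, Pipeline and query_model are invented for illustration, and query_model is a stub you would point at whatever model API you actually use.

import subprocess
import sys
import tempfile


def query_model(prompt: str) -> str:
    """Placeholder: call your model of choice here and return its text reply."""
    raise NotImplementedError("wire this up to a real LLM API")


class Stage:
    """One step in a benchmark pipeline; steps are chained with >>."""

    def run(self, value):
        raise NotImplementedError

    def _stages(self):
        return [self]

    def __rshift__(self, nxt):
        return Pipeline(self._stages() + nxt._stages())

    def __rrshift__(self, prompt):
        # Allows a plain prompt string on the left: "Write hello world in python" >> LLMRun()
        return Pipeline([Prompt(prompt)]) >> self


class Pipeline(Stage):
    def __init__(self, parts):
        self.parts = parts

    def _stages(self):
        return list(self.parts)

    def run(self, value=None):
        # Feed each stage's output into the next one.
        for part in self.parts:
            value = part.run(value)
        return value


class Prompt(Stage):
    def __init__(self, text):
        self.text = text

    def run(self, _):
        return self.text


class LLMRun(Stage):
    def run(self, prompt):
        return query_model(prompt)


class PythonRun(Stage):
    def run(self, code):
        # Execute the generated code in a subprocess and capture its stdout.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run([sys.executable, path], capture_output=True, text=True, timeout=30)
        return result.stdout


class SubstringEvaluator(Stage):
    def __init__(self, expected):
        self.expected = expected

    def run(self, output):
        # Final pass/fail check; case-insensitive here so "hello world" matches "Hello World".
        return self.expected.lower() in output.lower()


# Usage, mirroring the first quoted test case:
test = "Write hello world in python" >> LLMRun() >> PythonRun() >> SubstringEvaluator("hello world")
# print("PASS" if test.run() else "FAIL")  # uncomment once query_model() is implemented

The point of the structure is only that each test case is data: once a handful of your own past tasks are expressed this way, re-running the whole set against a new model gives you a personal pass rate rather than an abstract leaderboard score.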
AI Security

While much of the AI security discussion focuses on either jailbreaks or existential risks, Carlini's research targets the space in between. Some highlights from his recent work:

* LAION 400M data poisoning: By buying expired domains referenced in the dataset, Carlini's team could inject arbitrary images into models trained on LAION 400M. You can read the paper "Poisoning Web-Scale Training Datasets is Practical" for all the details. This is a great example of expanding the scope beyond the model itself and looking at the whole system and how it can become vulnerable.
* Stealing model weights: They demonstrated how to extract parts of production language models (like OpenAI's) through careful API queries. This research, "Stealing Part of a Production Language Model", shows that even black-box access can leak sensitive information.
* Extracting training data: In some cases, they found ways to make models regurgitate verbatim snippets from their training data. He and Milad Nasr wrote a paper on this as well: "Scalable Extraction of Training Data from (Production) Language Models". They also think this might be applicable to extracting RAG results from a generation.

These aren't just theoretical attacks. They've led to real changes in how companies like OpenAI design their APIs and handle data. If you really miss logit_bias and logit results by token, you can blame Nicholas :)

We had a ton of fun also chatting about things like Conway's Game of Life, how much data can fit in a piece of paper, and porting Doom to Javascript. Enjoy!

Show Notes

* How I Use AI
* My Benchmark for LLMs
* Doom Javascript port
* Conway's Game of Life
* Tic-Tac-Toe in one printf statement
* International Obfuscated C Code Contest
* Cursor
* LAION 400M poisoning paper
* Man vs Machine at Black Hat
* Model Stealing from OpenAI
* Milad Nasr
* H.D. Moore
* Vijay Bolina
* Cosine.sh
* uuencode

Timestamps

* [00:00:00] Introductions
* [00:01:14] Why Nicholas writes
* [00:02:09] The Game of Life
* [00:05:07] "How I Use AI" blog post origin story
* [00:08:24] Do we need software engineering agents?
* [00:11:03] Using AI to kickstart a project
* [00:14:08] Ephemeral software
* [00:17:37] Using AI to accelerate research
* [00:21:34] Experts vs non-expert users as beneficiaries of AI
* [00:24:02] Research on generating less secure code with LLMs
* [00:27:22] Learning and explaining code with AI
* [00:30:12] AGI speculations?
* [00:32:50] Distributing content without social media
* [00:35:39] How much data do you think you can put on a single piece of paper?
* [00:37:37] Building personal AI benchmarks
* [00:43:04] Evolution of prompt engineering and its relevance
* [00:46:06] Model vs task benchmarking
* [00:52:14] Poisoning LAION 400M through expired domains
* [00:55:38] Stealing OpenAI models from their API
* [01:01:29] Data stealing and recovering training data from models
* [01:03:30] Finding motivation in your work

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO-in-Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.

Swyx [00:00:12]: Hey, and today we're in the in-person studio, which Alessio has gorgeously set up for us, with Nicholas Carlini. Welcome. Thank you. You're a research scientist at DeepMind. You work at the intersection of machine learning and computer security. You got your PhD from Berkeley in 2018, and also your BA from Berkeley as well. And mostly we're here to talk about your blogs, because you are so generous in just writing up what you know. Well, actually, why do you write?

Nicholas [00:00:41]: Because I like, I feel like it's fun to share what you've done. I don't like writing, sufficiently didn't like writing, I almost didn't do a PhD, because I knew how much writing was involved in writing papers. I was terrible at writing when I was younger. I do like the remedial writing classes when I was in university, because I was really bad at it.
So I don't actually enjoy, I still don't enjoy the act of writing. But I feel like it is useful to share what you're doing, and I like being able to talk about the things that I'm doing that I think are fun. And so I write because I think I want to have something to say, not because I enjoy the act of writing.Swyx [00:01:14]: But yeah. It's a tool for thought, as they often say. Is there any sort of backgrounds or thing that people should know about you as a person? Yeah.Nicholas [00:01:23]: So I tend to focus on, like you said, I do security work, I try to like attacking things and I want to do like high quality security research. And that's mostly what I spend my actual time trying to be productive members of society doing that. But then I get distracted by things, and I just like, you know, working on random fun projects. Like a Doom clone in JavaScript.Swyx [00:01:44]: Yes.Nicholas [00:01:45]: Like that. Or, you know, I've done a number of things that have absolutely no utility. But are fun things to have done. And so it's interesting to say, like, you should work on fun things that just are interesting, even if they're not useful in any real way. And so that's what I tend to put up there is after I have completed something I think is fun, or if I think it's sufficiently interesting, write something down there.Alessio [00:02:09]: Before we go into like AI, LLMs and whatnot, why are you obsessed with the game of life? So you built multiplexing circuits in the game of life, which is mind boggling. So where did that come from? And then how do you go from just clicking boxes on the UI web version to like building multiplexing circuits?Nicholas [00:02:29]: I like Turing completeness. The definition of Turing completeness is a computer that can run anything, essentially. And the game of life, Conway's game of life is a very simple cellular 2D automata where you have cells that are either on or off. And a cell becomes on if in the previous generation some configuration holds true and off otherwise. It turns out there's a proof that the game of life is Turing complete, that you can run any program in principle using Conway's game of life. I don't know. And so you can, therefore someone should. And so I wanted to do it. Some other people have done some similar things, but I got obsessed into like, if you're going to try and make it work, like we already know it's possible in theory. I want to try and like actually make something I can run on my computer, like a real computer I can run. And so yeah, I've been going on this rabbit hole of trying to make a CPU that I can run semi real time on the game of life. And I have been making some reasonable progress there. And yeah, but you know, Turing completeness is just like a very fun trap you can go down. A while ago, as part of a research paper, I was able to show that in C, if you call into printf, it's Turing complete. Like printf, you know, like, which like, you know, you can print numbers or whatever, right?Swyx [00:03:39]: Yeah, but there should be no like control flow stuff.Nicholas [00:03:42]: Because printf has a percent n specifier that lets you write an arbitrary amount of data to an arbitrary location. And the printf format specifier has an index into where it is in the loop that is in memory. So you can overwrite the location of where printf is currently indexing using percent n. So you can get loops, you can get conditionals, and you can get arbitrary data rates again. 
So we sort of have another Turing complete language using printf, which again, like this has essentially zero practical utility, but like, it's just, I feel like a lot of people get into programming because they enjoy the art of doing these things. And then they go work on developing some software application and lose all joy with the boys. And I want to still have joy in doing these things. And so on occasion, I try to stop doing productive, meaningful things and just like, what's a fun thing that we can do and try and make that happen.Alessio [00:04:39]: Awesome. So you've been kind of like a pioneer in the AI security space. You've done a lot of talks starting back in 2018. We'll kind of leave that to the end because I know the security part is, there's maybe a smaller audience, but it's a very intense audience. So I think that'll be fun. But everybody in our Discord started posting your how I use AI blog post and we were like, we should get Carlini on the podcast. And then you were so nice to just, yeah, and then I sent you an email and you're like, okay, I'll come.Swyx [00:05:07]: And I was like, oh, I thought that would be harder.Alessio [00:05:10]: I think there's, as you said in the blog posts, a lot of misunderstanding about what LLMs can actually be used for. What are they useful at? What are they not good at? And whether or not it's even worth arguing what they're not good at, because they're obviously not. So if you cannot count the R's in a word, they're like, it's just not what it does. So how painful was it to write such a long post, given that you just said that you don't like to write? Yeah. And then we can kind of run through the things, but maybe just talk about the motivation, why you thought it was important to do it.Nicholas [00:05:39]: Yeah. So I wanted to do this because I feel like most people who write about language models being good or bad, some underlying message of like, you know, they have their camp and their camp is like, AI is bad or AI is good or whatever. And they like, they spin whatever they're going to say according to their ideology. And they don't actually just look at what is true in the world. So I've read a lot of things where people say how amazing they are and how all programmers are going to be obsolete by 2024. And I've read a lot of things where people who say like, they can't do anything useful at all. And, you know, like, they're just like, it's only the people who've come off of, you know, blockchain crypto stuff and are here to like make another quick buck and move on. And I don't really agree with either of these. And I'm not someone who cares really one way or the other how these things go. And so I wanted to write something that just says like, look, like, let's sort of ground reality and what we can actually do with these things. Because my actual research is in like security and showing that these models have lots of problems. Like this is like my day to day job is saying like, we probably shouldn't be using these in lots of cases. I thought I could have a little bit of credibility of in saying, it is true. They have lots of problems. We maybe shouldn't be deploying them lots of situations. And still, they are also useful. And that is the like, the bit that I wanted to get across is to say, I'm not here to try and sell you on anything. I just think that they're useful for the kinds of work that I do. And hopefully, some people would listen. And it turned out that a lot more people liked it than I thought. 
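[Editor's aside: the Game of Life rule Nicholas paraphrases above ("a cell becomes on if in the previous generation some configuration holds true") is, concretely, the standard rule that a dead cell turns on with exactly three live neighbours and a live cell survives with two or three. A minimal sketch of one generation step, written purely as an illustration and not taken from anything built in the episode:]

from collections import Counter

def life_step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """One generation of Conway's Game of Life; live cells are (row, col) pairs."""
    # Count how many live neighbours every candidate cell has.
    neighbour_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A glider, advanced four generations (it shifts one cell diagonally):
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))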
But yeah, that was the motivation behind why I wanted to write this.Alessio [00:07:15]: So you had about a dozen sections of like how you actually use AI. Maybe we can just kind of run through them all. And then maybe the ones where you have extra commentary to add, we can... Sure.Nicholas [00:07:27]: Yeah, yeah. I didn't put as much thought into this as maybe was deserved. I probably spent, I don't know, definitely less than 10 hours putting this together.Swyx [00:07:38]: Wow.Alessio [00:07:39]: It took me close to that to do a podcast episode. So that's pretty impressive.Nicholas [00:07:43]: Yeah. I wrote it in one pass. I've gotten a number of emails of like, you got this editing thing wrong, you got this sort of other thing wrong. It's like, I haven't just haven't looked at it. I tend to try it. I feel like I still don't like writing. And so because of this, the way I tend to treat this is like, I will put it together into the best format that I can at a time, and then put it on the internet, and then never change it. And this is an aspect of like the research side of me is like, once a paper is published, like it is done as an artifact that exists in the world. I could forever edit the very first thing I ever put to make it the most perfect version of what it is, and I would do nothing else. And so I feel like I find it useful to be like, this is the artifact, I will spend some certain amount of hours on it, which is what I think it is worth. And then I will just...Swyx [00:08:22]: Yeah.Nicholas [00:08:23]: Timeboxing.Alessio [00:08:24]: Yeah. Stop. Yeah. Okay. We just recorded an episode with the founder of Cosine, which is like an AI software engineer colleague. You said it took you 30,000 words to get GPT-4 to build you the, can GPT-4 solve this kind of like app. Where are we in the spectrum where chat GPT is all you need to actually build something versus I need a full on agent that does everything for me?Nicholas [00:08:46]: Yeah. Okay. So this was an... So I built a web app last year sometime that was just like a fun demo where you can guess if you can predict whether or not GPT-4 at the time could solve a given task. This is, as far as web apps go, very straightforward. You need basic HTML, CSS, you have a little slider that moves, you have a button, sort of animate the text coming to the screen. The reason people are going here is not because they want to see my wonderful HTML, right? I used to know how to do modern HTML in 2007, 2008. I was very good at fighting with IE6 and these kinds of things. I knew how to do that. I have no longer had to build any web app stuff in the meantime, which means that I know how everything works, but I don't know any of the new... Flexbox is new to me. Flexbox is like 10 years old at this point, but it's just amazing being able to go to the model and just say, write me this thing and it will give me all of the boilerplate that I need to get going. Of course it's imperfect. It's not going to get you the right answer, and it doesn't do anything that's complicated right now, but it gets you to the point where the only remaining work that needs to be done is the interesting hard part for me, the actual novel part. Even the current models, I think, are entirely good enough at doing this kind of thing, that they're very useful. It may be the case that if you had something, like you were saying, a smarter agent that could debug problems by itself, that might be even more useful. 
Currently though, make a model into an agent by just copying and pasting error messages for the most part. That's what I do, is you run it and it gives you some code that doesn't work, and either I'll fix the code, or it will give me buggy code and I won't know how to fix it, and I'll just copy and paste the error message and say, it tells me this. What do I do? And it will just tell me how to fix it. You can't trust these things blindly, but I feel like most people on the internet already understand that things on the internet, you can't trust blindly. And so this is not like a big mental shift you have to go through to understand that it is possible to read something and find it useful, even if it is not completely perfect in its output.Swyx [00:10:54]: It's very human-like in that sense. It's the same ring of trust, I kind of think about it that way, if you had trust levels.Alessio [00:11:03]: And there's maybe a couple that tie together. So there was like, to make applications, and then there's to get started, which is a similar you know, kickstart, maybe like a project that you know the LLM cannot solve. It's kind of how you think about it.Nicholas [00:11:15]: Yeah. So for getting started on things is one of the cases where I think it's really great for some of these things, where I sort of use it as a personalized, help me use this technology I've never used before. So for example, I had never used Docker before January. I know what Docker is. Lucky you. Yeah, like I'm a computer security person, like I sort of, I have read lots of papers on, you know, all the technology behind how these things work. You know, I know all the exploits on them, I've done some of these things, but I had never actually used Docker. But I wanted it to be able to, I could run the outputs of language model stuff in some controlled contained environment, which I know is the right application. So I just ask it like, I want to use Docker to do this thing, like, tell me how to run a Python program in a Docker container. And it like gives me a thing. I'm like, step back. You said Docker compose, I do not know what this word Docker compose is. Is this Docker? Help me. And like, you'll sort of tell me all of these things. And I'm sure there's this knowledge that's out there on the internet, like this is not some groundbreaking thing that I'm doing, but I just wanted it as a small piece of one thing I was working on. And I didn't want to learn Docker from first principles. Like I, at some point, if I need it, I can do that. Like I have the background that I can make that happen. But what I wanted to do was, was thing one. And it's very easy to get bogged down in the details of this other thing that helps you accomplish your end goal. And I just want to like, tell me enough about Docker so I can do this particular thing. And I can check that it's doing the safe thing. I sort of know enough about that from, you know, my other background. And so I can just have the model help teach me exactly the one thing I want to know and nothing more. I don't need to worry about other things that the writer of this thinks is important that actually isn't. Like I can just like stop the conversation and say, no, boring to me. Explain this detail. I don't understand. I think that's what that was very useful for me. 
It would have taken me, you know, several hours to figure out some things that take 10 minutes if you could just ask exactly the question you want the answer to.Alessio [00:13:05]: Have you had any issues with like newer tools? Have you felt any meaningful kind of like a cutoff day where like there's not enough data on the internet or? I'm sure that the answer to this is yes.Nicholas [00:13:16]: But I tend to just not use most of these things. Like I feel like this is like the significant way in which I use machine learning models is probably very different than most people is that I'm a researcher and I get to pick what tools that I use and most of the things that I work on are fairly small projects. And so I can, I can entirely see how someone who is in a big giant company where they have their own proprietary legacy code base of a hundred million lines of code or whatever and like you just might not be able to use things the same way that I do. I still think there are lots of use cases there that are entirely reasonable that are not the same ones that I've put down. But I wanted to talk about what I have personal experience in being able to say is useful. And I would like it very much if someone who is in one of these environments would be able to describe the ways in which they find current models useful to them. And not, you know, philosophize on what someone else might be able to find useful, but actually say like, here are real things that I have done that I found useful for me.Swyx [00:14:08]: Yeah, this is what I often do to encourage people to write more, to share their experiences because they often fear being attacked on the internet. But you are the ultimate authority on how you use things and there's this objectively true. So they cannot be debated. One thing that people are very excited about is the concept of ephemeral software or like personal software. This use case in particular basically lowers the activation energy for creating software, which I like as a vision. I don't think I have taken as much advantage of it as I could. I feel guilty about that. But also, we're trending towards there.Nicholas [00:14:47]: Yeah. No, I mean, I do think that this is a direction that is exciting to me. One of the things I wrote that was like, a lot of the ways that I use these models are for one-off things that I just need to happen that I'm going to throw away in five minutes. And you can.Swyx [00:15:01]: Yeah, exactly.Nicholas [00:15:02]: Right. It's like the kind of thing where it would not have been worth it for me to have spent 45 minutes writing this, because I don't need the answer that badly. But if it will only take me five minutes, then I'll just figure it out, run the program and then get it right. And if it turns out that you ask the thing, it doesn't give you the right answer. Well, I didn't actually need the answer that badly in the first place. Like either I can decide to dedicate the 45 minutes or I cannot, but like the cost of doing it is fairly low. You see what the model can do. And if it can't, then, okay, when you're using these models, if you're getting the answer you want always, it means you're not asking them hard enough questions.Swyx [00:15:35]: Say more.Nicholas [00:15:37]: Lots of people only use them for very small particular use cases and like it always does the thing that they want. Yeah.Swyx [00:15:43]: Like they use it like a search engine.Nicholas [00:15:44]: Yeah. Or like one particular case. 
And if you're finding that when you're using these, it's always giving you the answer that you want, then probably it has more capabilities than you're actually using. And so I oftentimes try when I have something that I'm curious about to just feed into the model and be like, well, maybe it's just solved my problem for me. You know, most of the time it doesn't, but like on occasion, it's like, it's done things that would have taken me, you know, a couple hours that it's been great and just like solved everything immediately. And if it doesn't, then it's usually easier to verify whether or not the answer is correct than to have written in the first place. And so you check, you're like, well, that's just, you're entirely misguided. Nothing here is right. It's just like, I'm not going to do this. I'm going to go write it myself or whatever.Alessio [00:16:21]: Even for non-tech, I had to fix my irrigation system. I had an old irrigation system. I didn't know how I worked to program it. I took a photo, I sent it to Claude and it's like, oh yeah, that's like the RT 900. This is exactly, I was like, oh wow, you know, you know, a lot of stuff.Swyx [00:16:34]: Was it right?Alessio [00:16:35]: Yeah, it was right.Swyx [00:16:36]: It worked. Did you compare with OpenAI?Alessio [00:16:38]: No, I canceled my OpenAI subscription, so I'm a Claude boy. Do you have a way to think about this like one-offs software thing? One way I talk to people about it is like LLMs are kind of converging to like semantic serverless functions, you know, like you can say something and like it can run the function in a way and then that's it. It just kind of dies there. Do you have a mental model to just think about how long it should live for and like anything like that?Nicholas [00:17:02]: I don't think I have anything interesting to say here, no. I will take whatever tools are available in front of me and try and see if I can use them in meaningful ways. And if they're helpful, then great. If they're not, then fine. And like, you know, there are lots of people that I'm very excited about seeing all these people who are trying to make better applications that use these or all these kinds of things. And I think that's amazing. I would like to see more of it, but I do not spend my time thinking about how to make this any better.Alessio [00:17:27]: What's the most underrated thing in the list? I know there's like simplified code, solving boring tasks, or maybe is there something that you forgot to add that you want to throw in there?Nicholas [00:17:37]: I mean, so in the list, I only put things that people could look at and go, I understand how this solved my problem. I didn't want to put things where the model was very useful to me, but it would not be clear to someone else that it was actually useful. So for example, one of the things that I use it a lot for is debugging errors. But the errors that I have are very much not the errors that anyone else in the world will have. And in order to understand whether or not the solution was right, you just have to trust me on it. Because, you know, like I got my machine in a state that like CUDA was not talking to whatever some other thing, the versions were mismatched, something, something, something, and everything was broken. And like, I could figure it out with interaction with the model, and it gave it like told me the steps I needed to take. But at the end of the day, when you look at the conversation, you just have to trust me that it worked. 
And I didn't want to write things online that were this, like, you have to trust me that what I'm saying. I want everything that I said to like have evidence that like, here's the conversation, you can go and check whether or not this actually solved the task as I said that the model does. Because a lot of people I feel like say, I used a model to solve this very complicated task. And what they mean is the model did 10%, and I did the other 90% or something, I wanted everything to be verifiable. And so one of the biggest use cases for me, I didn't describe even at all, because it's not the kind of thing that other people could have verified by themselves. So that maybe is like, one of the things that I wish I maybe had said a little bit more about, and just stated that the way that this is done, because I feel like that this didn't come across quite as well. But yeah, of the things that I talked about, the thing that I think is most underrated is the ability of it to solve the uninteresting parts of problems for me right now, where people always say, this is one of the biggest arguments that I don't understand why people say is, the model can only do things that people have done before. Therefore, the model is not going to be helpful in doing new research or like discovering new things. And as someone whose day job is to do new things, like what is research? Research is doing something literally no one else in the world has ever done before. So this is what I do every single day, 90% of this is not doing something new, 90% of this is doing things a million people have done before, and then a little bit of something that was new. There's a reason why we say we stand on the shoulders of giants. It's true. Almost everything that I do is something that's been done many, many times before. And that is the piece that can be automated. Even if the thing that I'm doing as a whole is new, it is almost certainly the case that the small pieces that build up to it are not. And a number of people who use these models, I feel like expect that they can either solve the entire task or none of the task. But now I find myself very often, even when doing something very new and very hard, having models write the easy parts for me. And the reason I think this is so valuable, everyone who programs understands this, like you're currently trying to solve some problem and then you get distracted. And whatever the case may be, someone comes and talks to you, you have to go look up something online, whatever it is. You lose a lot of time to that. And one of the ways we currently don't think about being distracted is you're solving some hard problem and you realize you need a helper function that does X, where X is like, it's a known algorithm. Any person in the world, you say like, give me the algorithm that, have a dense graph or a sparse graph, I need to make it dense. You can do this by doing some matrix multiplies. It's like, this is a solved problem. I knew how to do this 15 years ago, but it distracts me from the problem I'm thinking about in my mind. I needed this done. And so instead of using my mental capacity and solving that problem and then coming back to the problem I was originally trying to solve, you could just ask model, please solve this problem for me. It gives you the answer. You run it. You can check that it works very, very quickly. And now you go back to solving the problem without having lost all the mental state. 
And I feel like this is one of the things that's been very useful for me.Swyx [00:21:34]: And in terms of this concept of expert users versus non-expert users, floors versus ceilings, you had some strong opinion here that like, basically it actually is more beneficial for non-experts.Nicholas [00:21:46]: Yeah, I don't know. I think it could go either way. Let me give you the argument for both of these. Yes. So I can only speak on the expert user behalf because I've been doing computers for a long time. And so yeah, the cases where it's useful for me are exactly these cases where I can check the output. I know, and anything the model could do, I could have done. I could have done better. I can check every single thing that the model is doing and make sure it's correct in every way. And so I can only speak and say, definitely it's been useful for me. But I also see a world in which this could be very useful for the kinds of people who do not have this knowledge, with caveats, because I'm not one of these people. I don't have this direct experience. But one of these big ways that I can see this is for things that you can check fairly easily, someone who could never have asked or have written a program themselves to do a certain task could just ask for the program that does the thing. And you know, some of the times it won't get it right. But some of the times it will, and they'll be able to have the thing in front of them that they just couldn't have done before. And we see a lot of people trying to do applications for this, like integrating language models into spreadsheets. Spreadsheets run the world. And there are some people who know how to do all the complicated spreadsheet equations and various things, and other people who don't, who just use the spreadsheet program but just manually do all of the things one by one by one by one. And this is a case where you could have a model that could try and give you a solution. And as long as the person is rigorous in testing that the solution does actually the correct thing, and this is the part that I'm worried about most, you know, I think depending on these systems in ways that we shouldn't, like this is what my research says, my research says is entirely on this, like, you probably shouldn't trust these models to do the things in adversarial situations, like, I understand this very deeply. And so I think that it's possible for people who don't have this knowledge to make use of these tools in ways, but I'm worried that it might end up in a world where people just blindly trust them, deploy them in situations that they probably shouldn't, and then someone like me gets to come along and just break everything because everything is terrible. And so I am very, very worried about that being the case, but I think if done carefully it is possible that these could be very useful.Swyx [00:23:54]: Yeah, there is some research out there that shows that when people use LLMs to generate code, they do generate less secure code.Nicholas [00:24:02]: Yeah, Dan Bonet has a nice paper on this. There are a bunch of papers that touch on exactly this.Swyx [00:24:07]: My slight issue is, you know, is there an agenda here?Nicholas [00:24:10]: I mean, okay, yeah, Dan Bonet, at least the one they have, like, I fully trust everything that sort of.Swyx [00:24:15]: Sorry, I don't know who Dan is.Swyx [00:24:17]: He's a professor at Stanford. Yeah, he and some students have some things on this. Yeah, there's a number. 
I agree that a lot of the stuff feels like people have an agenda behind it. There are some that don't, and I trust them to have done the right thing. I also think, even on this though, we have to be careful because the argument, whenever someone says x is true about language models, you should always append the suffix for current models because I'll be the first to admit I was one of the people who was very much on the opinion that these language models are fun toys and are going to have absolutely no practical utility. If you had asked me this, let's say, in 2020, I still would have said the same thing. After I had seen GPT-2, I had written a couple of papers studying GPT-2 very carefully. I still would have told you these things are toys. And when I first read the RLHF paper and the instruction tuning paper, I was like, nope, this is this thing that these weird AI people are doing. They're trying to make some analogies to people that makes no sense. It's just like, I don't even care to read it. I saw what it was about and just didn't even look at it. I was obviously wrong. These things can be useful. And I feel like a lot of people had the same mentality that I did and decided not to change their mind. And I feel like this is the thing that I want people to be careful about. I want them to at least know what is true about the world so that they can then see that maybe they should reconsider some of the opinions that they had from four or five years ago that may just not be true about today's models.Swyx [00:25:47]: Specifically because you brought up spreadsheets, I want to share my personal experience because I think Google has done a really good job that people don't know about, which is if you use Google Sheets, Gemini is integrated inside of Google Sheets and it helps you write formulas. Great.Nicholas [00:26:00]: That's news to me.Swyx [00:26:01]: Right? They don't maybe do a good job. Unless you watch Google I.O., there was no other opportunity to learn that Gemini is now in your Google Sheets. And so I just don't write formulas manually anymore. It just prompts Gemini to do it for me. And it does it.Nicholas [00:26:15]: One of the problems that these machine learning models have is a discoverability problem. I think this will be figured out. I mean, it's the same problem that you have with any assistant. You're given a blank box and you're like, what do I do with it? I think this is great. More of these things, it would be good for them to exist. I want them to exist in ways that we can actually make sure that they're done correctly. I don't want to just have them be pushed into more and more things just blindly. I feel like lots of people, there are far too many X plus AI, where X is like arbitrary thing in the world that has nothing to do with it and could not be benefited at all. And they're just doing it because they want to use the word. And I don't want that to happen.Swyx [00:26:58]: You don't want an AI fridge?Nicholas [00:27:00]: No. Yes. I do not want my fridge on the internet.Swyx [00:27:03]: I do not want... Okay.Nicholas [00:27:05]: Anyway, let's not go down that rabbit hole. I understand why some of that happens, because people want to sell things or whatever. But I feel like a lot of people see that and then they write off everything as a result of it. And I just want to say, there are allowed to be people who are trying to do things that don't make any sense. Just ignore them. 
Do the things that make sense.Alessio [00:27:22]: Another chunk of use cases was learning. So both explaining code, being an API reference, all of these different things. Any suggestions on how to go at it? I feel like one thing is generate code and then explain to me. One way is just tell me about this technology. Another thing is like, hey, I read this online, kind of help me understand it. Any best practices on getting the most out of it?Swyx [00:27:47]: Yeah.Nicholas [00:27:47]: I don't know if I have best practices. I have how I use them.Swyx [00:27:51]: Yeah.Nicholas [00:27:51]: I find it very useful for cases where I understand the underlying ideas, but I have never usedSwyx [00:27:59]: them in this way before.Nicholas [00:28:00]: I know what I'm looking for, but I just don't know how to get there. And so yeah, as an API reference is a great example. The tool everyone always picks on is like FFmpeg. No one in the world knows the command line arguments to do what they want. They're like, make the thing faster. I want lower bitrate, like dash V. Once you tell me what the answer is, I can check. This is one of these things where it's great for these kinds of things. Or in other cases, things where I don't really care that the answer is 100% correct. So for example, I do a lot of security work. Most of security work is reading some code you've never seen before and finding out which pieces of the code are actually important. Because, you know, most of the program isn't actually do anything to do with security. It has, you know, the display piece or the other piece or whatever. And like, you just, you would only ignore all of that. So one very fun use of models is to like, just have it describe all the functions and just skim it and be like, wait, which ones look like approximately the right things to look at? Because otherwise, what are you going to do? You're going to have to read them all manually. And when you're reading them manually, you're going to skim the function anyway, and not just figure out what's going on perfectly. Like you already know that when you're going to read these things, what you're going to try and do is figure out roughly what's going on. Then you'll delve into the details. This is a great way of just doing that, but faster, because it will abstract most of whatSwyx [00:29:21]: is right.Nicholas [00:29:21]: It's going to be wrong some of the time. I don't care.Swyx [00:29:23]: I would have been wrong too.Nicholas [00:29:24]: And as long as you treat it with this way, I think it's great. And so like one of the particular use cases I have in the thing is decompiling binaries, where oftentimes people will release a binary. They won't give you the source code. And you want to figure out how to attack it. And so one thing you could do is you could try and run some kind of decompiler. It turns out for the thing that I wanted, none existed. And so I spent too many hours doing it by hand. Before I first thought, why am I doing this? I should just check if the model could do it for me. And it turns out that it can. And it can turn the compiled source code, which is impossible for any human to understand, into the Python code that is entirely reasonable to understand. And it doesn't run. It has a bunch of problems. But it's so much nicer that it's immediately a win for me. I can just figure out approximately where I should be looking, and then spend all of my time doing that by hand. 
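As an illustration of the FFmpeg case just mentioned, this is roughly the kind of answer one would hope to get back and then verify by running it; the flags are standard ffmpeg options, while the filenames and bitrate are placeholders.

```python
import subprocess

# Re-encode a video at a lower bitrate: -b:v sets the target video bitrate,
# -c:a copy leaves the audio stream untouched. Filenames are placeholders.
subprocess.run(
    ["ffmpeg", "-i", "input.mp4", "-c:v", "libx264", "-b:v", "1M",
     "-c:a", "copy", "output.mp4"],
    check=True,
)
```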
And again, you get a big win there.Swyx [00:30:12]: So I fully agree with all those use cases, especially for you as a security researcher and having to dive into multiple things. I imagine that's super helpful. I do think we want to move to your other blog post. But you ended your post with a little bit of a teaser about your next post and your speculations. What are you thinking about?Nicholas [00:30:34]: So I want to write something. And I will do that at some point when I have time, maybe after I'm done writing my current papers for ICLR or something, where I want to talk about some thoughts I have for where language models are going in the near-term future. The reason why I want to talk about this is because, again, I feel like the discussion tends to be people who are either very much AGI by 2027, orSwyx [00:30:55]: always five years away, or are going to make statements of the form,Nicholas [00:31:00]: you know, LLMs are the wrong path, and we should be abandoning this, and we should be doing something else instead. And again, I feel like people tend to look at this and see these two polarizing options and go, well, those obviously are both very far extremes. Like, how do I actually, like, what's a more nuanced take here? And so I have some opinions about this that I want to put down, just saying, you know, I have wide margins of error. I think you should too. If you would say there's a 0% chance that something, you know, the models will get very, very good in the next five years, you're probably wrong. If you're going to say there's a 100% chance that in the next five years, then you're probably wrong. And like, to be fair, most of the people, if you read behind the headlines, actually say something like this. But it's very hard to get clicks on the internet of like, some things may be good in the future. Like, everyone wants like, you know, a very, like, nothing is going to be good. This is entirely wrong. It's going to be amazing. You know, like, they want to see this. I want people who have negative reactions to these kinds of extreme views to be able to at least say, like, to tell them, there is something real here. It may not solve all of our problems, but it's probably going to get better. I don't know by how much. And that's basically what I want to say. And then at some point, I'll talk about the safety and security things as a result of this. Because the way in which security intersects with these things depends a lot in exactly how people use these tools. You know, if it turns out to be the case that these models get to be truly amazing and can solve, you know, tasks completely autonomously, that's a very different security world to be living in than if there's always a human in the loop. And the types of security questions I would want to ask would be very different. And so I think, you know, in some very large part, understanding what the future will look like a couple of years ahead of time is helpful for figuring out which problems, as a security person, I want to solve now. You mentioned getting clicks on the internet,Alessio [00:32:50]: but you don't even have, like, an ex-account or anything. How do you get people to read your stuff? What's your distribution strategy? Because this post was popping up everywhere. And then people on Twitter were like, Nicholas Garlini wrote this. Like, what's his handle? It's like, he doesn't have it. It's like, how did you find it? What's the story?Nicholas [00:33:07]: So I have an RSS feed and an email list. And that's it. 
I don't like most social media things. On principle, I feel like they have some harms. As a person, I have a problem when people say things that are wrong on the internet. And I would get nothing done if I would have a Twitter. I would spend all of my time correcting people and getting into fights. And so I feel like it is just useful for me for this not to be an option. I tend to just post things online. Yeah, it's a very good question. I don't know how people find it. I feel like for some things that I write, other people think it resonates with them. And then they put it on Twitter. And...Swyx [00:33:43]: Hacker News as well.Nicholas [00:33:44]: Sure, yeah. I am... Because my day job is doing research, I get no value for having this be picked up. There's no whatever. I don't need to be someone who has to have this other thing to give talks. And so I feel like I can just say what I want to say. And if people find it useful, then they'll share it widely. You know, this one went pretty wide. I wrote a thing, whatever, sometime late last year, about how to recover data off of an Apple profile drive from 1980. This probably got, I think, like 1000x less views than this. But I don't care. Like, that's not why I'm doing this. Like, this is the benefit of having a thing that I actually care about, which is my research. I would care much more if that didn't get seen. This is like a thing that I write because I have some thoughts that I just want to put down.Swyx [00:34:32]: Yeah. I think it's the long form thoughtfulness and authenticity that is sadly lacking sometimes in modern discourse that makes it attractive. And I think now you have a little bit of a brand of you are an independent thinker, writer, person, that people are tuned in to pay attention to whatever is next coming.Nicholas [00:34:52]: Yeah, I mean, this kind of worries me a little bit. I don't like whenever I have a popular thing that like, and then I write another thing, which is like entirely unrelated. Like, I don't, I don't... You should actually just throw people off right now.Swyx [00:35:01]: Exactly.Nicholas [00:35:02]: I'm trying to figure out, like, I need to put something else online. So, like, the last two or three things I've done in a row have been, like, actually, like, things that people should care about.Swyx [00:35:10]: Yes. So, I have a couple of things.Nicholas [00:35:11]: I'm trying to figure out which one do I put online to just, like, cull the list of people who have subscribed to my email.Swyx [00:35:16]: And so, like, tell them, like,Nicholas [00:35:16]: no, like, what you're here for is not informed, well-thought-through takes. Like, what you're here for is whatever I want to talk about. And if you're not up for that, then, like, you know, go away. Like, this is not what I want out of my personal website.Swyx [00:35:27]: So, like, here's, like, top 10 enemies or something.Alessio [00:35:30]: What's the next project you're going to work on that is completely unrelated to research LLMs? Or what games do you want to port into the browser next?Swyx [00:35:39]: Okay. Yeah.Nicholas [00:35:39]: So, maybe.Swyx [00:35:41]: Okay.Nicholas [00:35:41]: Here's a fun question. How much data do you think you can put on a single piece of paper?Swyx [00:35:47]: I mean, you can think about bits and atoms. Yeah.Nicholas [00:35:49]: No, like, normal printer. Like, I gave you an office printer. How much data can you put on a piece of paper?Alessio [00:35:54]: Can you re-decode it? So, like, you know, base 64A or whatever. 
Yeah, whatever you want.Nicholas [00:35:59]: Like, you get normal off-the-shelf printer, off-the-shelf scanner. How much data?Swyx [00:36:03]: I'll just throw out there. Like, 10 megabytes. That's enormous. I know.Nicholas [00:36:07]: Yeah, that's a lot.Swyx [00:36:10]: Really small fonts. That's my question.Nicholas [00:36:12]: So, I have a thing. It does about a megabyte.Swyx [00:36:14]: Yeah, okay.Nicholas [00:36:14]: There you go. I was off by an order of magnitude.Swyx [00:36:16]: Yeah, okay.Nicholas [00:36:16]: So, in particular, it's about 1.44 megabytes. A floppy disk.Swyx [00:36:21]: Yeah, exactly.Nicholas [00:36:21]: So, this is supposed to be the title at some point. It's a floppy disk.Swyx [00:36:24]: A paper is a floppy disk. Yeah.Nicholas [00:36:25]: So, this is a little hard because, you know. So, you can do the math and you get 8.5 by 11. You can print at 300 by 300 DPI. And this gives you 2 megabytes. And so, every single pixel, you need to be able to recover up to like 90 plus percent. Like, 95 percent. Like, 99 point something percent accuracy. In order to be able to actually decode this off the paper. This is one of the things that I'm considering. I need to get a couple more things working for this. Where, you know, again, I'm running into some random problems. But this is probably, this will be one thing that I'm going to talk about. There's this contest called the International Obfuscated C-Code Contest, which is amazing. People try and write the most obfuscated C code that they can. Which is great. And I have a submission for that whenever they open up the next one for it. And I'll write about that submission. I have a very fun gate level emulation of an old CPU that runs like fully precisely. And it's a fun kind of thing. Yeah.Swyx [00:37:20]: Interesting. Your comment about the piece of paper reminds me of when I was in college. And you would have like one cheat sheet that you could write. So, you have a formula, a theoretical limit for bits per inch. And, you know, that's how much I would squeeze in really, really small. Yeah, definitely.Nicholas [00:37:36]: Okay.Swyx [00:37:37]: We are also going to talk about your benchmarking. Because you released your own benchmark that got some attention, thanks to some friends on the internet. What's the story behind your own benchmark? Do you not trust the open source benchmarks? What's going on there?Nicholas [00:37:51]: Okay. Benchmarks tell you how well the model solves the task the benchmark is designed to solve. For a long time, models were not useful. And so, the benchmark that you tracked was just something someone came up with, because you need to track something. All of deep learning exists because people tried to make models classify digits and classify images into a thousand classes. There is no one in the world who cares specifically about the problem of distinguishing between 300 breeds of dog for an image that's 224 or 224 pixels. And yet, like, this is what drove a lot of progress. And people did this not because they cared about this problem, because they wanted to just measure progress in some way. And a lot of benchmarks are of this flavor. You want to construct a task that is hard, and we will measure progress on this benchmark, not because we care about the problem per se, but because we know that progress on this is in some way correlated with making better models. And this is fine when you don't want to actually use the models that you have. 
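For the curious, the back-of-envelope arithmetic behind those numbers looks like this; the real capacity depends on printable margins, how many distinguishable levels each dot carries, and how much is given up to error correction, so treat the figures as rough.

```python
# Rough pixel budget for one US-letter page printed at 300 DPI.
width_px = int(8.5 * 300)            # 2550 dots across
height_px = 11 * 300                 # 3300 dots down
total_dots = width_px * height_px    # 8,415,000 dots
one_bit_per_dot = total_dots // 8    # ~1.05 MB if each dot is purely black/white
print(total_dots, one_bit_per_dot)

# Getting to floppy-disk territory (~1.44 MB) therefore means squeezing more
# than one reliable bit out of each dot, which is why per-dot recovery has to
# sit in the 99%+ range before error correction even enters the picture.
```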
But when you want to actually make use of them, it's important to find benchmarks that track with whether or not they're useful to you. And the thing that I was finding is that there would be model after model after model that was being released that would find some benchmark that they could claim state-of-the-art on and then say, therefore, ours is the best. And that wouldn't be helpful to me to know whether or not I should then switch to it. So the argument that I tried to lay out in this post is that more people should make benchmarks that are tailored to them. And so what I did is I wrote a domain-specific language that anyone can write for and say, you can take tasks that you have wanted models to solve for you, and you can put them into your benchmark that's the thing that you care about. And then when a new model comes out, you benchmark the model on the things that you care about. And you know that you care about them because you've actually asked for those answers before. And if the model scores well, then you know that for the kinds of things that you have asked models for in the past, it can solve these things well for you. This has been useful for me because when another model comes out, I can run it. I can see, does this solve the kinds of things that I care about? And sometimes the answer is yes, and sometimes the answer is no. And then I can decide whether or not I want to use that model or not. I don't want to say that existing benchmarks are not useful. They're very good at measuring the thing that they're designed to measure. But in many cases, what that's designed to measure is not actually the thing that I want to use it for. And I expect that the way that I want to use it is different the way that you want to use it. And I would just like more people to have these things out there in the world. And the final reason for this is, it is very easy. If you want to make a model good at some benchmark, to make it good at that benchmark, you can find the distribution of data that you need and train the model to be good on the distribution of data. And then you have your model that can solve this benchmark well. And by having a benchmark that is not very popular, you can be relatively certain that no one has tried to optimize their model for your benchmark.Swyx [00:40:40]: And I would like this to be-Nicholas [00:40:40]: So publishing your benchmark is a little bit-Swyx [00:40:43]: Okay, sure.Nicholas [00:40:43]: Contextualized. So my hope in doing this was not that people would use mine as theirs. My hope in doing this was that- You should make yours. Yes, you should make your benchmark. And if, for example, there were even a very small fraction of people, 0.1% of people who made a benchmark that was useful for them, this would still be hundreds of new benchmarks that- not want to make one myself, but I might want to- I might know the kinds of work that I do is a little bit like this person, a little bit like that person. I'll go check how it is on their benchmarks. And I'll see, roughly, I'll get a good sense of what's going on. Because the alternative is people just do this vibes-based evaluation thing, where you interact with the model five times, and you see if it worked on the kinds of things that you just like your toy questions. But five questions is a very low bit output from whether or not it works for this thing. And if you could just automate running it 100 questions for you, it's a much better evaluation. 
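The domain-specific language itself isn't shown in the conversation, but the underlying idea is simple enough to sketch: a personal benchmark task is just a prompt you have genuinely asked before plus an automatic check. Everything below — the function names, the prompt, the check — is hypothetical.

```python
# A hypothetical personal-benchmark task: prompt + programmatic check.
# `ask_model` stands in for whichever model API you happen to be testing.

def task_edges_to_dense(ask_model) -> bool:
    prompt = ("Write a Python function edges_to_dense(edges, n) that returns "
              "an n x n list-of-lists adjacency matrix for a directed graph.")
    code = ask_model(prompt)

    namespace = {}
    exec(code, namespace)          # in practice you would sandbox this
    matrix = namespace["edges_to_dense"]([(0, 1)], 2)
    return matrix[0][1] == 1 and matrix[1][0] == 0
```

When a new model comes out, you run every such task and get a pass rate over questions you actually care about, rather than over someone else's benchmark distribution.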
So that's why I did this.Swyx [00:41:37]: Yeah, I like the idea of going through your chat history and actually pulling out real-life examples. I regret to say that I don't think my chat history is used as much these days, because I'm using Cursor, the native AI IDE. So your examples are all coding related. And the immediate question is, now that you've written the How I Use AI post, which is a little bit broader, are you able to translate all these things to evals? Are some things unevaluable?Nicholas [00:42:03]: Right. A number of things that I do are harder to evaluate. So this is the problem with a benchmark, is you need some way to check whether or not the output was correct. And so all of the kinds of things that I can put into the benchmark are the kinds of things that you can check. You can check more things than you might have thought would be possible if you do a little bit of work on the back end. So for example, all of the code that I have the model write, it runs the code and sees whether the answer is the correct answer. Or in some cases, it runs the code, feeds the output to another language model, and the language model judges was the output correct. And again, is using a language model to judge here perfect? No. But like, what's the alternative? The alternative is to not do it. And what I care about is just, is this thing broadly useful for the kinds of questions that I have? And so as long as the accuracy is better than roughly random, like, I'm okay with this. I've inspected the outputs of these, and like, they're almost always correct. If you ask the model to judge these things in the right way, they're very good at being able to tell this. And so, yeah, I probably think this is a useful thing for people to do.Alessio [00:43:04]: You complain about prompting and being lazy and how you do not want to tip your model and you do not want to murder a kitten just to get the right answer. How do you see the evolution of like prompt engineering? Even like 18 months ago, maybe, you know, it was kind of like really hot and people wanted to like build companies around it. Today, it's like the models are getting good. Do you think it's going to be less and less relevant going forward? Or what's the minimum valuable prompt? Yeah, I don't know.Nicholas [00:43:29]: I feel like a big part of making an agent is just like a fancy prompt that like, you know, calls back to the model again. I have no opinion. It seems like maybe it turns out that this is really important. Maybe it turns out that this isn't. I guess the only comment I was making here is just to say, oftentimes when I use a model and I find it's not useful, I talk to people who help make it. The answer they usually give me is like, you're using it wrong. Which like reminds me very much of like that you're holding it wrong from like the iPhone kind of thing, right? Like, you know, like I don't care that I'm holding it wrong. I'm holding it that way. If the thing is not working with me, then like it's not useful for me. Like it may be the case that there exists a way to ask the model such that it gives me the answer that's correct, but that's not the way I'm doing it. If I have to spend so much time thinking about how I want to frame the question, that it would have been faster for me just to get the answer. It didn't save me any time. And so oftentimes, you know, what I do is like, I just dump in whatever current thought that I have in whatever ill-formed way it is. And I expect the answer to be correct. 
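For outputs that can't be checked with a simple assertion, the "second model as judge" pattern described here can be sketched as follows, assuming an OpenAI-style Python client; the model name and judging prompt are placeholders, not details taken from the benchmark itself.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OpenAI-style chat API; any equivalent client works

def judge_output(question: str, candidate: str) -> bool:
    """Ask a second model whether `candidate` answers `question`.
    Imperfect, but as noted above it only needs to beat random guessing."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": (f"Question:\n{question}\n\nCandidate answer:\n{candidate}\n\n"
                        "Reply with exactly YES if the answer is correct, otherwise NO."),
        }],
    )
    return reply.choices[0].message.content.strip().upper().startswith("YES")
```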
And if the answer is not correct, like in some sense, maybe the model was right to give me the wrong answer. Like I may have asked the wrong question, but I want the right answer still. And so like, I just want to sort of get this as a thing. And maybe the way to fix this is you have some default prompt that always goes into all the models or something, or you do something like clever like this. It would be great if someone had a way to package this up and make a thing I think that's entirely reasonable. Maybe it turns out that as models get better, you don't need to prompt them as much in this way. I just want to use the things that are in front of me.Alessio [00:44:55]: Do you think that's like a limitation of just how models work? Like, you know, at the end of the day, you're using the prompt to kind of like steer it in the latent space. Like, do you think there's a way to actually not make the prompt really relevant and have the model figure it out? Or like, what's the... I mean, you could fine tune itNicholas [00:45:10]: into the model, for example, that like it's supposed to... I mean, it seems like some models have done this, for example, like some recent model, many recent models. If you ask them a question, computing an integral of this thing, they'll say, let's think through this step by step. And then they'll go through the step by step answer. I didn't tell it. Two years ago, I would have had to have prompted it. Think step by step on solving the following thing. Now you ask them the question and the model says, here's how I'm going to do it. I'm going to take the following approach and then like sort of self-prompt itself.Swyx [00:45:34]: Is this the right way?Nicholas [00:45:35]: Seems reasonable. Maybe you don't have to do it. I don't know. This is for the people whose job is to make these things better. And yeah, I just want to use these things. Yeah.Swyx [00:45:43]: For listeners, that would be Orca and Agent Instruct. It's the soda on this stuff. Great. Yeah.Alessio [00:45:49]: That's a few shot. It's included in the lazy prompting. Like, do you do a few shot prompting? Like, do you collect some examples when you want to put them in? Or...Nicholas [00:45:57]: I don't because usually when I want the answer, I just want to get the answer. Brutal.Swyx [00:46:03]: This is hard mode. Yeah, exactly.Nicholas [00:46:04]: But this is fine.Swyx [00:46:06]: I want to be clear.Nicholas [00:46:06]: There's a difference between testing the ultimate capability level of the model and testing the thing that I'm doing with it. What I'm doing is I'm not exercising its full capability level because there are almost certainly better ways to ask the questions and sort of really see how good the model is. And if you're evaluating a model for being state of the art, this is ultimately what I care about. And so I'm entirely fine with people doing fancy prompting to show me what the true capability level could be because it's really useful to know what the ultimate level of the model could be. But I think it's also important just to have available to you how good the model is if you don't do fancy things.Swyx [00:46:39]: Yeah, I would say that here's a divergence between how models are marketed these days versus how people use it, which is when they test MMLU, they'll do like five shots, 25 shots, 50 shots. And no one's providing 50 examples. I completely agree.Nicholas [00:46:54]: You know, for these numbers, the problem is everyone wants to get state of the art on the benchmark. 
And so you find the way that you can ask the model the questions so that you get state of the art on the benchmark. And it's good. It's legitimately good to know. It's good to know the model can do this thing if only you try hard enough. Because it means that if I have some task that I want to be solved, I know what the capability level is. And I could get there if I was willing to work hard enough. And the question then is, should I work harder and figure out how to ask the model the question? Or do I just do the thing myself? And for me, I have programmed for many, many, many years. It's often just faster for me just to do the thing than to figure out the incantation to ask the model. But I can imagine someone who has never programmed before might be fine writing five paragraphs in English describing exactly the thing that they want and have the model build it for them if the alternative is not. But again, this goes to all these questions of how are they going to validate? Should they be trusting the output? These kinds of things.Swyx [00:47:49]: One problem with your eval paradigm and most eval paradigms, I'm not picking on you, is that we're actually training these things for chat, for interactive back and forth. And you actually obviously reveal much more information in the same way that asking 20 questions reveals more information in sort of a tree search branching sort of way. Then this is also by the way the problem with LMSYS arena, right? Where the vast majority of prompts are single question, single answer, eval, done. But actually the way that we use chat things, in the way, even in the stuff that you posted in your how I use AI stuff, you have maybe 20 turns of back and forth. How do you eval that?Nicholas [00:48:25]: Yeah. Okay. Very good question. This is the thing that I think many people should be doing more of. I would like more multi-turn evals. I might be writing a paper on this at some point if I get around to it. A couple of the evals in the benchmark thing I have are already multi-turn. I mentioned 20 questions. I have a 20 question eval there just for fun. But I have a couple others that are like, I just tell the model, here's my get thing, figure out how to cherry pick off this other branch and move it over there. And so what I do is I just, I basically build a tiny little agency thing. I just ask the model how I do it. I run the thing on Linux. This is what I want a Docker for. I spin up a Docker container. I run whatever the model told me the output to do is. I feed the output back into the model. I repeat this many rounds. And then I check at the very end, does the git commit history show that it is correctly cherry picked in
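The multi-turn, Docker-backed eval described above (the transcript cuts off mid-sentence here) can be sketched roughly like this; the container name, prompts, and the final git check are simplified stand-ins rather than the actual harness.

```python
import subprocess

def run_in_container(container: str, command: str) -> str:
    """Run a shell command inside an already-running Docker container and
    return its combined output (simplified: no timeouts, no truncation)."""
    proc = subprocess.run(
        ["docker", "exec", container, "bash", "-lc", command],
        capture_output=True, text=True,
    )
    return proc.stdout + proc.stderr

def multi_turn_eval(ask_model, container: str, task: str, max_rounds: int = 10) -> bool:
    transcript = f"Task: {task}\nReply with exactly one shell command per turn."
    for _ in range(max_rounds):
        command = ask_model(transcript)                 # model proposes the next command
        output = run_in_container(container, command)   # run it, feed the result back
        transcript += f"\n$ {command}\n{output}"
    # Final programmatic check, e.g. did the cherry-picked commit land on main?
    log = run_in_container(container, "git -C /repo log --oneline main")
    return "expected commit subject" in log             # placeholder check
```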

Korean True Crime
Lady Death (Case of Eom In-sook)

Korean True Crime

Play Episode Listen Later Aug 13, 2024 58:20


Everywhere Eom In-sook went, death and destruction followed closely behind. She was so determined to become rich that other people became obstacles. She would stop at nothing to get what she wanted, even if it meant killing. Join the KTC's Discord today! If you enjoy Korean True Crime, please rate, follow, and send feedback! It helps me continue to improve the show. If you'd like to support the show or find show sources for free, join Korean True Crime on Patreon. --- Support this podcast: https://podcasters.spotify.com/pod/show/koreantruecrime/support

Cosmic Scene with Jill Jardine
Declutter Your Life and Mind: Minimalism and Sanskrit Mantras for Happiness

Cosmic Scene with Jill Jardine

Play Episode Listen Later May 30, 2024 25:54 Transcription Available


Send us a Text Message.

Could shedding excess belongings truly unlock the door to happiness? Discover how embracing minimalism can transform your life as we share personal stories, inspired by Fumio Sasaki's "Goodbye Things" and Marie Kondo's "The Life-Changing Magic of Tidying Up." We'll unravel the profound mental and emotional benefits of letting go, from financial freedom to reduced stress, leading to a lighter, happier you. Swedish death cleaning reveals how parting with unnecessary items, such as clothes and books, can be a powerful act of self-liberation and environmental responsibility.

Ever wondered how ancient mantras could influence modern happiness? Join us as we explore the Sanskrit concept of "sukha" or "good space," guided by wisdom from the Dalai Lama's "The Art of Happiness." We'll discuss how acceptance, love, self-care, and gratitude are keys to a joyful life. Plus, learn about the transformative power of Sanskrit mantras like "Om Guru Raja Sukha Pradaya Namaha" and "Om Sharma Dayinye Namaha," and how they can rewire your mind and clear energy blockages. Chant along and start your journey towards a more content and joyful life today!

SANSKRIT MANTRAS:
1) OM GURU RAJA SUKHA PRADAYA NAMAHA: (Ohm Gu-ru Rah-ja Sook-ha Pra-da-ya Na-ma-ha) "Salutations to Lord Ganesha who is king of the Gurus who brings happiness."
2) OM SHAMBU SHANKARAYA NAMAHA: (Ohm Sham-buu Shan-ka-ra-ya Na-ma-ha) "Salutations to Shiva who brings happiness." Invokes Happy Shiva.
3) OM SHAMBAVE NAMAHA: (Ohm Sham-ba-vay Na-ma-ha) Shiva mantra for happiness. Obeisances to the Shiva who exists for our happiness.
4) OM SHARMA DAYINYEI NAMAHA: (Ohm Shar-ma Da-yin-yay Na-ma-ha) "Salutations to the Mother who is the Giver of Happiness."
5) OM SHARAVANA BHAVAYA NAMAHA: (Ohm Sha-ra-va-na Bah-va-ya Na-ma-ha) Mantra to Subramanya, the brother of Ganesha, son of Shiva and Durga. This mantra brings good luck and levity, as well as successful outcomes. Just chanting it makes you feel good.
6) OM EIM KLIM SAU SAU KLIM EIM: (Ohm I'm Kleem Saw Saw Kleem I'm) Lalita Mantra that makes one happy and freer.

ww.jilljardineastrology.com/shop
www.lifewave.com/jilljar - Natural healing through light therapy, good for easing inflammation, insomnia and enhancing the body's innate regenerative energies.

Support the Show.

Dark Asia with Megan
Army veteran buries Korean wife alive but her Apple watch saves her|Young Sook An case

Dark Asia with Megan

Play Episode Listen Later May 29, 2024 17:24


For more of my latest content, subscribe to my YouTube channel, 'Dark Asia with Megan.' Head over to www.youtube.com/@DarkAsiawithMegan and join our awesome community. Your support means everything, and I can't wait to share more Asian cases with you! - Megan

On Other Platforms
TikTok: https://www.tiktok.com/@darkasiawithmegan
Instagram: https://www.instagram.com/darkasiawithmegan/
Facebook: https://www.facebook.com/darkasiamegan/

NBA Straya
Tue May 14: Luka's a sook & Mavs BLOW IT vs OKC, do Boston stink + should Atlanta draft Bronny James No. 1!? (NBA Straya Ep 1096)

NBA Straya

Play Episode Listen Later May 14, 2024 57:04


Wow did Luka and the Dallas Mavericks blow it today vs OKC! And, after another worrying fourth quarter, do Boston actually stink? Plus, after winning the NBA Draft lottery... should Atlanta draft Bronny James and see if LeBron keeps his word? All that and HEAPS more in today's NBA Straya in our massive daily #NBAPlayoffs recaps!   NBA Straya today wraps up Game 4 of Boston @ Cleveland and OKC @ Dallas, and figures out where it all went wrong for the Cavs and Mavs. Plus there's all the usual stuff: That's Not A Knife, Old Mate No Mates, Spud of the Night, and Better Than Lonzo Ball!  On top of that there's also YEAH NAHs, the Unpopular Opinion of the Day and OUTBACK TAKEHOUSE… and a handy Strayan Player Watch because the Aussies are playing each other.... Plus a combo Luc Longley Memorial Role Player of the Week AND Andrew Gaze Grey Mamba Award for Outstanding Achievement in the field of excellence ...  as well as a Patty Mills Gameday Bala, Gameday Social Media check in...    The show then finishes off with the only #playoffs pick and previews you need as we dig into two MASSIVE Game 5s for Wednesday May 15 - with Indy @ New York and Minnesota @ Denver!  What will happen? Find out with the best NBA picks in the biz WITH NBA STRAYA! So, enjoy!  Righto - cheers ledges!! Thanks for tuning in to the best daily NBA podcast in the world!! Onyas... 

Kpop Bookshelf
"Somehow I will find a way to help you" - Many Lives Intertwined by Hyun Sook Han

Kpop Bookshelf

Play Episode Listen Later May 14, 2024 28:58


Season 6 Episode 5 We cover the book Many Lives Intertwined by Hyun Sook Han Mrs. Han was a strong-willed, very honest social worker in both Korea and the United States as well as an adoptive mother. We will learn more about the Korean adoption process in the 1950s-1960s, including the adoption of mixed race Korean children, from the social worker perspective. LINKS Hyun Sook Han, Korean Social Worker and Adoption Pioneer, Dies at 83 [New York Times]

Asian American History 101
The History of The Kim Sisters

Asian American History 101

Play Episode Listen Later May 6, 2024 25:30


Welcome to Season 4, Episode 19! Back in S03E03, we talked about one of the first popular Asian American singing groups, the Kim Loo Sisters, but there was also another group a few years later that made a longer lasting impact… the Kim Sisters. Originally from South Korea, the trio of ladies that included Sook-ja, Ai-ja, and Min-ja, were a dynamic vocal group that could also play a multitude of instruments.  They recorded several songs in the U.S. and made a splash in Las Vegas and on television through several variety shows. Their early life wasn't smooth, but they overcame those struggles to succeed. We talk about their early life, journey to America, TV appearances, and more! In our recurring segment, we bring back Obscure Asian Comic Book Characters and talk about the DC hero Kai-Ro. If you like what we do, please share, follow, and like us in your podcast directory of choice or on Instagram @AAHistory101. For previous episodes and resources, please visit our site at https://asianamericanhistory101.libsyn.com or social media links at http://castpie.com/AAHistory101. If you have any questions, comments or suggestions, email us at info@aahistory101.com. Segments 00:25 Intro Current Events 04:54 The History of the Kim Sisters 24:02 Obscure Asian Comic Book Characters Featuring Kai-Ro

Ben Fordham: Highlights
‘Sook' - Politician blasts cop accusing Sam Kerr of racism

Ben Fordham: Highlights

Play Episode Listen Later Mar 12, 2024 2:10


See omnystudio.com/listener for privacy information.

Alan Jones Daily Comments
‘Sook' - Politician blasts cop accusing Sam Kerr of racism

Alan Jones Daily Comments

Play Episode Listen Later Mar 12, 2024 2:10


See omnystudio.com/listener for privacy information.

NBA Straya
Mon Feb 26: An NBA fight! Wemby's 5x5, Unbeatable Celtics, Lakers sook, Trae Young's injury + WEEKEND WINNERS & LOSERS (NBA Straya Ep 1040)

NBA Straya

Play Episode Listen Later Feb 26, 2024 63:37


What a weekend! We got a fight in Pelicans-Heat, the Celtics look unbeatable, the Bucks re BACK and the Lakers are having a big sook about the refs. Also: how does Trae Young's injury affect the Hawks? All of which is covered in WEEKEND WINNERS & LOSERS! SO! For all that and everything else that happened of the weekend and today in the NBA... find out in today's episode of NBA Straya!  Cos NBA STRAYA is here to talk out all things NBA!! With all of the NBA GAMES on the weekend wrapped and covered in today's NBA Straya Game Wraps, there's something for everyone! All the big stories - the Nuggets took care of the Warriors,  SGA and OKC look amazing, Boston are awesome, the MAVS fell apart vs the Pacers and the SUNS beat the hell out of the Lakers and heaps more - and all the usual stuff: That's Not A Knife, Old Mate No Mates, Spud of the Night, Better Than Lonzo Ball and more.... as well as Magic Tweet Of The Day! Plus it's Monday! That means D-head of the Week!    There's also YEAH NAHs, the Unpopular Opinion of the Day and OUTBACK TAKEHOUSE… as well as the STRAYAN PLAYER WATCH as we check in with all the Aussies. Plus there's an ANDREW GAZE GREY MAMBA AWARD FOR OUTSTANDING ACHIEVEMENT IN THE FIELD OF EXCELLENCE!  Then there's all four NBA games picked and previewed for Tuesday Feb 27. And a DELLY REVIEW of Taylor Swift to finish off.    Righto - Love ya guts, ledges!  Tune in for the best daily NBA podcast in the world!! #Onyas

WenMint - Cardano Culture and NFTs
S2E7 : BIG ARF ENERGY! feat. Lil Sook of Lil Sappys

WenMint - Cardano Culture and NFTs

Play Episode Listen Later Feb 10, 2024 61:53


This week, The Machinist and CMDR sit down with Lil Sook of Lil Sappys, a Sappy Seals franchise NFT project on the Cardano blockchain! https://twitter.com/LilSappys https://www.lilsappys.com/ https://giphy.com/lilsappys https://twitter.com/PudgySook Tune-in to WENMINT : Cardano Culture & NFTs LIVE every weekend on X & YouTube! https://twitter.com/WenMint_Podcast https://www.youtube.com/@WenMintPodcast All episodes can be found wherever you get your podcasts. For a full list, visit our website at WENMINTPodcast.io WENMINT IS FOR EDUCATIONAL PURPOSES ONLY! PLEASE REFER TO THE DISCLAIMER IN EVERY EPISODE AND ALWAYS DO YOUR OWN RESEARCH.

NBA Straya
Wed Jan 24: Milwaukee fire Adrian Griffin, Doc Rivers to the Bucks? + Miami-Charlotte trade Lowry for Rozier, Blazers have a sook (NBA Straya Ep 1025)

NBA Straya

Play Episode Listen Later Jan 24, 2024 60:04


Wait, the Bucks are 30-13 and FIRED COACH ADRIAN GRIFFIN!? AND ARE HIRING DOC RIVERS (probably)!? WHAT THE HELL!? ... AND WE GOT A TRADE TODAY as the Miami Heat and Charlotte Hornets traded Kyle Lowry for Terry Rozier! CHAOS! Plus, another Nets meltdown, the Blazers have a sook and some Joker magic. Crazy day! So, for what else happened in today's games in the NBA... Find out in today's episode of NBA Straya! Cos NBA STRAYA is here to talk out all the games and all things NBA for you... The five games today are all covered in the NBA Straya Game Wraps - the NUGGETS were lucky, the CLIPPERS rule the Battle of LA, the Battle of NY was a classic Nets bed-wetting and more! - and all the usual stuff: That's Not A Knife, Old Mate No Mates, Spud of the Night, Better Than Lonzo Ball and more.... like AN EXTENDED Magic Tweet Of The Day as Magic found his drafts folder! There's also YEAH NAHs, the Unpopular Opinion of the Day and OUTBACK TAKEHOUSE… and the Strayan Player Watch to check in on all the Aussies in the NBA!  There's also a ANDREW GAZE GREY MAMBA AWARD FOR OUTSTANDING ACHIEVEMENT IN THE FIELD OF EXCELLENCE. We close out with a pick & preview for the eight NBA games on Thursday Jan 25, with a cracking Suns-Mavs game AND Chet vs Wemby as the Thunder and Spurs go at it!! Righto - Love ya guts, ledges.  Tune in for the best daily NBA podcast in the world!! #Onyas

NBA Straya
Fri Jan 12: Blow Out Friday - Josh Giddey's triple double as OKC embarrass Portland, Josh Green goes OFF, Suns burn Lakers + Carmelo Has A Sook & Weekend Preview (NBA Straya Ep 1017)

NBA Straya

Play Episode Listen Later Jan 12, 2024 58:56


Josh Giddey is BACK! Today's NBA Straya looks at Blow Out Friday as OKC embarrass the Blazers, the Bucks KILL the Celtics, the pitiful Nets lose in France among a bunch of big beat downs, while Brad Beal finally shows up! Also, Josh Giddey's triple double, Josh Green goes off, the latest All-Star voting results, splitting the draft into two days and Carmelo has a sook! As for what else happened today in the NBA? Find out with NBA Straya!  Cos it's just as well NBA STRAYA is here to talk out all the games and all things NBA for you... With five game today there's plenty to cover in the NBA Straya Game Wraps - the BUCKS smashed the Cs, JOSH GREEN helped the MAVS to a win over the KNICKS, the THUNDER smoked the BLAZERS, and the LAKERS got burned by the SUNS - and and all the usual stuff: That's Not A Knife, Old Mate No Mates, Spud of the Night, Better Than Lonzo Ball and more.... but no Magic Tweet Of The Day!  There's also YEAH NAHs, the Unpopular Opinion of the Day and OUTBACK TAKEHOUSE… and the Strayan Player Watch to check in on all the Aussies in the NBA!  There's also a GREAT MOMENT IN NBA STRAYA STAT HISTORY - featuring Nathan Jawai - as well as a SHANE HEAL SHOOTERS SHOOT SHOOT YOUR SHOT LIGHT EM UP AWARD.   Then to close out the week, a pick and preview for EVERY SINGLE NBA GAME ACROSS THE WEEKEND, including 10 tomorrow on January 13 -- and some crackers across Sunday and Monday -- for you! Plus a classic COOKING WITH BAYNESY at the end! Righto - Love ya guts, ledges.  Tune in for the best daily NBA podcast in the world!! #Onyas

Project Relationship
137 Handling the stress of relational growth

Project Relationship

Play Episode Listen Later Dec 23, 2023 36:56


Starting a new relationship, ending a relationship, negotiating within a partnership for the first time, practicing differentiation, having kids - all of these changes can be positive, exciting, and beneficial, but they can also bring up stress. Making sense of all those conflicting emotions can be challenging. If you're feeling stressed or overwhelmed by your relationship(s), that is SO OK!! And, this is the episode for you! We talk about a ton of powerful tools you can use to stay grounded while navigating relationship changes and growth. Embracing your agency in the face of change is a huge part of the individuation work we talk so much about. Listen to learn more! JOIN The Year of Opening community for a full year of learning & support. The waitlist is open now at www.theyearofopening.com Are you ready to open your relationship happily? Find out at www.JoliQuiz.com

Sook Sports
Sook Sports Event: BOWL SEASON

Sook Sports

Play Episode Listen Later Dec 8, 2023 12:14


I LOVE BOWL GAMES. 12/8/23

Reframing Chronic Illness
It was never really about this, was it? | 64

Reframing Chronic Illness

Play Episode Listen Later Dec 7, 2023 30:33


Learning the same lessons over and over again is all a part of this journey we call life, so why does it feel so new and surprising when it happens?! An update on my Year of Adventure, and why it's SO OK that it's not going to plan. Plus, an invitation to sign up to Lizzy's Christmas Party; "an annual goody bag bonanza full of generous gifts & Christmas cheer". Aka, loads and loads of really wonderful courses, workshops & self/life/business-development stuff (from ALL walks of life) for FREE. You can sign up here https://www.lcp2023.com/alana *this is an affiliate link, meaning if you sign up to the paid tier (where there are even more wonderful things to access for a complete steal), I'll receive a commission at no extra cost to you. Hosted on Acast. See acast.com/privacy for more information.

The A to Z English Podcast
A to Z Am I the Jerk 30 | Am I the jerk for giving one employee a raise and not the other?

The A to Z English Podcast

Play Episode Listen Later Nov 3, 2023 16:46


Check out The Jack & 'Chill Podcast here!http://atozenglishpodcast.com/episodeshttps://podcasts.apple.com/us/podcast/the-jack-chill-podcast/id1709902691https://redcircle.com/shows/the-jack-and-chill-podcastYou are a manager at your workplace, and you have two employees, Alice and Bob, who work on the same team. Alice is a very dedicated and hardworking employee, and she consistently goes above and beyond to meet deadlines and deliver exceptional results. Bob, on the other hand, has been struggling with his performance and frequently misses deadlines.One day, Bob comes to you and asks for a raise, citing his financial difficulties and the need to support his family. He feels that he deserves a raise because he's been with the company for a long time, even though his performance hasn't been up to par.You know that Alice has been doing an excellent job and deserves a raise as well. You only have the budget to give one raise at this time. You decide to give the raise to Alice because of her exceptional performance, but you don't inform Bob about her performance being the primary reason for your decision.Bob finds out about Alice's raise and feels very upset and unfairly treated. He confronts you and accuses you of being unfair and playing favorites. He argues that he's been loyal to the company for years, and Alice has only been with the company for a short time.So, the question is: Are you the jerk in this situation for giving the raise to Alice and not fully explaining your decision to Bob?00:00:00JackYou're listening to the A-Z English podcast.00:00:13JackWelcome to the A-Z English podcast. My name is Jack and I am here with three of my good friends, agnieska, Leila and Salimah.00:00:23JackAnd today we are going to do an MI, the jerk episode, and these are my favorites. I love doing these and let me read the situation to you guys 1st and then one by one I'll ask what your opinion is about the situation.00:00:39JackSo you are a manager at your workplace and you have two employees, Alice and Bob, who work on the same team.00:00:48JackAlice is a very dedicated and hardworking employee, and she consistently goes above and beyond to meet deadlines and deliver exceptional results. Bob's, on the other hand, has been struggling with his performance and frequently misses deadlines.00:01:08JackOne day, Bob comes to you and asks for a raise, citing his financial difficulties and the need to support his family. He feels that he deserves a raise because he's been with the company for a long time, even though his performance hasn't been.00:01:28JackUp to par.00:01:30JackYou know Alice has been doing an excellent job and deserves a raise as well. You only have the budget to give one raise at this time. You decide to give the raise to Alice because of her exceptional performance, but you don't inform Bob about her performance.00:01:51JackBeing the primary reason for your decision.00:01:55JackBob finds out about Alice's raise and feels very upset and unfairly treated. He confronts you and accuses.00:02:04JackYou of being.00:02:04JackUnfair and playing favorites, he argues that he's been loyal to the company for years, and Alice has only been with the company.00:02:15JackFor a short time. So the question is, are you the jerk in this situation for giving the raise to Alice and not fully explaining your decision to Bob?00:02:27JackAnd we'll start with Agnieszka first, what do you think is the manager the jerk?00:02:34AgnieszkaUh, for me? 
Yes, because he's creating, you know, this tension between cowork workers. I know he he did. He did the right thing because I think Alice is the one that deserve this.

[00:02:54] Agnieszka: But I think the best it was talk with Bob and tell him the the truth that he decided to, for these reasons, and this is why he didn't get the raise. So it's better, because when you have this environment in work, it's very difficult. Everybody's have a competition, have this feeling that they want to be the best and they really hate each other. So when they don't... when they they have these, this competition, they they really, you know, feel that they deserve the thing. So it's better to be clear and and have a, a good environment. Because it's OK, let's get raised. But it's better if we if we tell him that, what is the reasons. So for me?

[00:03:43] Speaker: OK.

[00:03:46] Jack: So he wasn't clear, he wasn't transparent.

[00:03:46] Speaker: For me.

[00:03:49] Jack: He didn't explain it, and so he created a situation of, like, tension between the workers. And so he is the jerk. OK, interesting. This is very interesting. I I have my opinion, but I'm gonna wait till the end and and share it with you guys. But you're, I'm on the same track as you.

[00:04:08] Salimeh: OK.

[00:04:10] Jack: Uh, what do you think about this situation?

[00:04:15] Layla: Could you just please remind me of the question, because I couldn't hear Agnieszka while she answered the question, and as well as you, Jack, just remind me of the question please.

[00:04:26] Jack: Oh, no problem. So yeah, so the question is, is the manager the jerk for giving a raise to Alice and not explaining the situation to Bob? Because Bob didn't get the raise, and Agnieszka said yes, he's the manager, he's a jerk, because the manager didn't explain the situation to Bob, but just gave it to Alice kind of secretly behind the, the curtain. So what do you think about the situation, who do you think is the jerk?

[00:04:58] Layla: Yes, honestly, I would like to say, because there is a a manager, right, in the, for for the company, who raised the, the, how to say...

[00:05:17] Jack: The salary or the wages? Yeah.

[00:05:17] Layla: That will come for Alice, right? Ohh yeah, the the wages. So I think the manager is a little bit jerk, because he or maybe she should explain that for Bob before raising the Alice.

[00:05:42] Jack: OK.

[00:05:42] Layla: Because to give, to give him the clear idea about what's happen, and and why he or maybe she raised Alice instead of Bob in this certain moment.

[00:05:59] Jack: Right.

[00:06:00] Layla: So I...

[00:06:02] Jack: You agree with Agnieszka, then? Basically, yeah.

[00:06:02] Layla: When when the manager didn't explain that to, to, to... yeah, yeah.

[00:06:09] Jack: OK, OK. So...

[00:06:09] Layla: Although I didn't hear her answer, but this is my answer, so...

[00:06:10] Jack: I think you... yeah, yeah. You guys have the same answer, but I think, you know, great minds think alike. So in this case you guys both came to the same conclusion that the manager should have...

[00:06:15] Layla: We are.

[00:06:25] Jack: Explained clearly to Bob why he gave the raise to Alice and not to Bob. And so yeah, I think that's, you guys have the similar conclusion, you came to the similar conclusion. But Celine, I'm kind of curious, what do you think about the situation? Do you agree with Leila and Agnieszka, or do you have your own opinion about it?

[00:06:47] Salimeh: Not really. I think the manager is not jerk, because Alice is the one who is hard working person in that company, in that office, and she deserves that, that raise, you know. And the managers, you know, I know this situations, the manager is just human, you know. Sometimes they can't, you know, where they can make a mistake like this situation. I know the manager should talk with Bob about the reason, and she should be, he or she, the manager, should be clear with employees. But the decision, but the important things in this story is making the right decision. So I guess the manager is definitely not the jerk. But as Bob is the loyal, because loyalty is one of the factors in the offices, in the companies. So, well, if he's working for a long time for that company...

[00:07:40] Layla: I I I think...

[00:07:41] Salimeh: The manager could give a loan, I don't know, maybe from the salary, in salary or not, but he should, you know, he he could give a loan to that person, Bob, and after that, you know, reduce the loan from his salary, you know, every month. You know, I I I would like to help, you know, to Bob, because he has difficulties with his life, so he could manage, you know, both. But yeah, this is my opinion.

[00:08:14] Jack: Interesting. OK. Yeah. Or or maybe give like half of a raise to... well, that's not really the the point of the Am I the Jerk, is you can't make these kinds of, you know, different arrangements. But maybe like half to Bob and half to Alice would be like the perfect solution.

[00:08:32] Salimeh: No, but I I guess you know, you don't encourage your employees if you do the half to the, the employee that she can't, you know, work properly, he work well. So I just, the manager, you know, did the right thing, you know, because the managers should be the, when they are making decisions, they have to be one or zero, I mean true or false. And this is the, you know, this is a great thing that the manager did.

[00:08:54] Jack: Right. Right, they have to make a decision, you know? That's right. You can't always be, you can't make everybody happy all the time. That's a good point. OK, my, I'm I'm I'm going in like kind of different direction here. I'm thinking Bob is, is, it's... working in a company is like a marathon. When you first start, it's easy to be a hard worker, you know, that first year, the second year, you're like, really, you know, what can I do? What can I do? But after like five years or 10 years, it's like that, the last miles of a marathon. And that's what Bob is, like in a different stage of the marathon. He's, like, trying to finish the marathon, and Alice is just starting the marathon, and she looks so fresh and so, so bright. And so the manager is like, look at the shiny new, you know, object, look at this, you know, this is so nice and new. And it's like, but Bob is like, he's been there for a long time, just chugging away like a, you know, the hard worker. So I'm, I kind of feel like, because Bob has been there longer, he deserves the raise. Even though Alice is a hard worker. If Alice is still a hard worker after like 5 or 10 years, then definitely she deserves a raise later. But I feel like Bob deserves the raise, so I'm I'm gonna say yes, the manager is the jerk. He made the wrong decision. He went for the new shiny worker, and he should have stuck with the loyal, you know, the loyal teammate, even though he's a little slow and a little... easy, I don't know what. Am I wrong, Salimeh? What do you think?

[00:10:39] Agnieszka: If I, if Alice...

[00:10:41] Salimeh: Found out that you already gave the raise to Bob, then maybe she lost the, you know, interest for working, for being the hard worker. So you know, if you... so OK, I'm like, I'm Alice, I'm not gonna work for hard for this company, because they don't deserve it. You don't, you know, give me the raise, so...

[00:11:01] Jack: This is true, this is...

[00:11:02] Layla: I I don't. Yeah, yeah.

[00:11:04] Jack: OK, I can ask, what do you think? Is my argument good, or is it, is there a problem? Do I have weaknesses?

[00:11:10] Agnieszka: I I think you make a good argument. Also you want to lose if they do like that, they want to lose that he's a loyal, maybe Alice in two years that she decided to go, and and Bob was the loyal worker. So yeah, I I understand that. But also as a manager, I think one of the best thing for me when I see a manager is have create a good environment. So he maybe he could have a, he could take a decision, but a better decision, like Salimeh said, maybe not everyone's happy, but maybe they, what they deserve. So yeah, you'll understand.

[00:11:58] Jack: I don't know. Yeah. I feel like maybe Alice will become Bob, she just hasn't been there long enough. Everybody becomes Bob eventually if you work for a company. But maybe I'm wrong, maybe just Jack becomes Bob, like, that's me. Like, I just identify with Bob, and I feel sorry for Bob, because I've, you know, I'm like the same situation where, you know, you've been at the same company for a long, long, long time. And you just, you know, you... but it's like an old hard drive. You have a lot of information, you know, I know a lot about how it works. I know this, the company, I know the situation. So I don't know, I'm just torn. I I feel like... yeah, I don't know.

[00:12:42] Agnieszka: They should create a legal, a loyal rate, like, like give a vote, but for being loyal.

[00:12:50] Jack: Yeah. The Bob... give him, like a, yeah, loyalty button. Salimeh, please go ahead. What were you going to say?

[00:12:57] Salimeh: No, but the company should, you know, make money. So how they could, you know, make money?

[00:13:01] Speaker: Right.

[00:13:04] Salimeh: With hard workers. OK, Bob is loyal, Bob have a lot of knowledge and experience, and I know, I respect for those experience, but if you cannot use those experience, though, it's it's going to be useless, you know. Because Bob is just, you know... I I guess in the story Bob is the one who know everything, is loyal, but it's kind of tired of this job. So maybe we can't, we can't have Bob in that position. Maybe, maybe we have to think about, you know, just changing the position. I don't know.

[00:13:39] Jack: I think you're right. I mean, you're you're exactly right. That's all the companies really care about, is making money. And if Bob's not making as much money as Alice, Bob's out, Alice is in. I mean, that's really how it, how it comes, what it comes down to. So I think that's a a fair argument. And and... all right, well, thank you guys. You gave me a lot to think about. This is a really difficult one actually, because I don't know, I still feel sorry for Bob. So I'm kind of, I'm pro Bob right now.

[00:14:08] Salimeh: One more thing, Jack. It's about not making decision logically and emotionally, so I guess I'm not making decision just logically, because I just, you know, pay attention for the, you know, care about the money. But I know about this, but yeah, so yeah, it's very difficult decisions to make.

[00:14:26] Jack: Yeah. And and... I guess that's the the difficult part of a manager's job, is like, how do you balance the emotional aspect of knowing Bob for longer and knowing that Bob has a family, but Alice is making you more money, because she's, you know, harder worker. She she, you know, she's just a better employee at this point in time. So yeah, it's it's... I think the the cold, you know, the hard logical decision is the one that most companies want you to make, and that's why I could never be a manager in a company like that, because I don't have a stomach for it. I'm too emotional, I'm way too emotional. I could do it and yeah, so I'm glad I don't have that job. But listeners out there, let us know what you think. Send us a comment on this Am I the Jerk episode at atozenglishpodcast.com. Send me your emails, either pro Bob or pro Alice. Which side are you on? atozenglishpodcast@gmail.com. Or jump into our WhatsApp group, the A to Z English podcast group, and you can have our argument right there in the WhatsApp chat group. And with that said, we will see you next time. Thanks everybody. Bye, bye.

[00:15:49] Layla: By that.

Podcast Website: https://atozenglishpodcast.com/a-to-z-am-i-the-jerk-30-am-i-the-jerk-for-giving-one-employee-a-raise-and-not-the-other/

Social Media:
Facebook Group: https://www.facebook.com/groups/671098974684413/
Tik Tok: @atozenglish1
Instagram: @atozenglish22
Twitter: @atozenglish22
A to Z Facebook Page: https://www.facebook.com/theatozenglishpodcast

Check out our You Tube Channel: https://www.youtube.com/channel/UCds7JR-5dbarBfas4Ve4h8A

Donate to the show: https://app.redcircle.com/shows/9472af5c-8580-45e1-b0dd-ff211db08a90/donations

Robin and Jack started a new You Tube channel called English Word Master. You can check it out here: https://www.youtube.com/channel/UC2aXaXaMY4P2VhVaEre5w7A

Become a member of Podchaser and leave a positive review! https://www.podchaser.com/podcasts/the-a-to-z-english-podcast-4779670

Join our Whatsapp group: https://forms.gle/zKCS8y1t9jwv2KTn7

Intro/Outro Music: Daybird by Broke for Free
https://freemusicarchive.org/music/Broke_For_Free/Directionless_EP/Broke_For_Free_-_Directionless_EP_-_03_Day_Bird/
https://creativecommons.org/licenses/by/3.0/legalcode
https://freemusicarchive.org/music/eaters/simian-samba/audrey-horne/

Support this podcast at: https://redcircle.com/the-a-to-z-english-podcast/donations
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy

Sook Sports
Sook Sports Event: Perry's Fan Rant

Sook Sports

Play Episode Listen Later Oct 11, 2023 10:35


Just getting some things off my chest... Boomer Sooner, Rally Red Rangers, and legitimate question: How about those Cowboys? Seriously, I need answers... 10/11/2023

One Humanity Lab: Into an Ecology of Wholeness
The Magic Of Coaching And Spiritual Awakening With Sook Yee Tai

One Humanity Lab: Into an Ecology of Wholeness

Play Episode Listen Later Oct 5, 2023 37:41


One of the great things about coaching is how it helps us see beyond the surface and unearth the things inside us that we never thought were causing us stress or even pain. Coach, leader, and visionary Sook Yee Tai is all about helping people see the world through a different lens, leading them to a profound way of seeing themselves and the world. Prior to this extraordinary mission, Sook Yee was on a remarkable spiritual journey that enlightened her about the power of her mind and faith. In this episode, she joins Ina Gjikondi and Bonnie Buckner to share this with us, along with the amazing wisdom she earned along the way. She reflects on her upbringing as a Malaysian and third-generation Chinese, where she learned to integrate her Confucian background with her Christian faith, as well as her career path as a woman leader in a male-dominated field. All of this led Sook Yee to her passion for coaching and nurturing young executives. Find out how her spiritual journey influenced her coaching system and gain key insights on mind-body alignment, healing, and transformation. Love the show? Subscribe, rate, review, & share! https://leadershipcoaching.cepl.gwu.edu/podcast/

Sook Sports
Sook Sports Event: Fall is the Best.

Sook Sports

Play Episode Listen Later Sep 7, 2023 10:56


Football Excitement, Baseball Disappointments, Summer Fun and a Love Letter to Fall. 09/07/23

Sook Sports
Sook Sports Rant: NBA and NHL Over, Golf is a Mystery, and DeGrom is DeDone

Sook Sports

Play Episode Listen Later Jun 17, 2023 11:25


Unnis on Oppas: A KDrama Fan Podcast
Episode 15: Seo Yi Sook

Unnis on Oppas: A KDrama Fan Podcast

Play Episode Listen Later Jun 13, 2023 116:19


Before her most recent appearances in THE GOOD BAD MOTHER and QUEENMAKER, Seo Yi Sook was already well-established as a sought-after support player. In this episode, Susie and Lynn discuss her work in some of their favorite (and not-so-favorite) dramas. (About SPOILERS: we try to avoid major plot spoilers in our discussions of different dramas, but inevitably some minor details will be revealed. If you want to avoid knowing absolutely anything about a show you haven't seen, we recommend skipping to the next segment!) (0:00) - Intro, Jal Meogisseubnida, KDrama ketchup (34:00) - Inheritors/The Heirs (46:35) - Empress Ki (55:15) - Hotel del Luna (1:06:10) - Do Do Sol Sol La La Sol (1:17:20) - Rookie Cops (1:23:35) - Under the Queen's Umbrella (1:33:30) - Start-Up (1:43:20) - Favorite Seo Yi Sook scenes (1:50:55) - Ask Susie Unnithing If you like this episode, please subscribe and leave us a rating and review! Follow us on Instagram @unnisonoppaspod

Storytelling School
How Relatable Stories Help You Connect With Any Audience

Storytelling School

Play Episode Listen Later May 31, 2023 32:02


It's 2006. TEDx announces that they're holding the first-ever TEDx University. They invite all of us who are attendees to submit a description of what we want to share and teach the group. And a handful will be selected to do Talks. At this point, I'm not a speaker; I'm still in the technology industry. Still, I know exactly what I want to do. I want to teach the other attendees something that makes me feel like a badass whenever I do it... I'm going to teach them how to break boards with their bare hands! So I present my crazy idea for selection: Mind Over Wood. And… I get chosen as one of only 30 speakers out of over a thousand people! Excited, I get to work right away on writing my Talk and start picturing myself speaking in front of world leaders, neuroscientists, and billionaires... Wait. World leaders. Neuroscientists. Billionaires. I start thinking, “Who am I to teach anything to a roomful of those people? What if they laugh at me, or even worse, walk right out of the room?” So on the day of TEDx University, I'm nervously standing in front of the room. I see the editor of Forbes to the left, three rows back. I see the founder of Flickr walk into the room on the right. I realize there's a Nobel Prize winner sitting in the front row. But I'm also feeling good about my Talk because I've been rehearsing it relentlessly. So OK, it's go time! I take a deep breath and begin.  And I can't tell you how many people came up afterward to tell me how impactful my presentation was. Wow. At that moment I realize how fun this was and am astonished that people get to do this… for a living... including my incredible guest today! Adiel Gorel is sought-after as an international speaker and recently joined the TEDx stage. In this episode of the Storytelling School Podcast, he talks about how discovering something that exists only in the U.S. motivated him to help others change one area of their unfolding life story. He also answers questions such as: What's the biggest financial gift you can add to your story in the U.S. (if you're not already)? What should you do before you close your story in a Talk (which many people don't do)? Why does storytelling belong in the real estate industry? And what do you want to have at the ready when someone asks you a question like, “What do you do?” What you will learn in this episode: How sharing a more relatable story in drier subject areas helps with your message Why the short story form is a more effective format for spreading your message today How breaking the rules can open doors and create an even better story Who is Adiel? Adiel Gorel is a CEO, author, and international speaker with a Master's degree in Electrical Engineering from Stanford University and a lifetime passion for engaging audiences of all sizes in multiple languages. His company International Capital Group (ICG) is a leading real estate investment firm in the San Francisco Bay. With over 35 years of experience in the industry, he has not only invested in hundreds of properties for his own portfolio but also successfully assisted thousands of investors with purchasing more than 10,000 properties in the U.S. As a six-time published author, Adiel's books cover topics ranging from building wealth to breathing correctly. From being a Computer Science instructor at Stanford to creating public TV specials, his life's work is to challenge conventions in order to offer insight on how everyone can improve their health, build their wealth, and live a better life. 
He has two grown kids and currently resides in the Bay Area. Links and Resources: Adiel Gorel ICG Real Estate Investments Adiel Gorel's TEDx Talk - From Fear to Fortitude: Inflation, Time, and Debt for Lifelong Wealth @adielgorel on Instagram @AdielGorelSpeaker on Facebook @adielgorel on LinkedIn @ICGRE on Twitter @AdielGorel on Youtube Storytelling School Website @storytellingschool on Instagram @storytellingSchool on Facebook

Sook Sports
Sook Sports Event: No Joy in Mudville

Sook Sports

Play Episode Listen Later May 15, 2023 11:54


NBA and NHL Playoffs and why I'm Drake because I'M UPSET! 05/15/2023

Sook Sports
Sook Sports Event: Playoff Insanity+

Sook Sports

Play Episode Listen Later Apr 28, 2023 16:02


NHL & NBA Playoffs, Baseball Update, and NFL Draft. 04/28/2023

Sook Sports
Sook Sports Minute: The Masters (or MY BUCKET LIST TRIP!)

Sook Sports

Play Episode Listen Later Apr 10, 2023 8:44


A Tradition Unlike Any Other. 04/10/2023

Sook Sports
Sook Sports Event: March is Literally MADNESS!!!

Sook Sports

Play Episode Listen Later Apr 1, 2023 8:05


The Power of Parity. 03/31/2023

The Fierce Female Network
Artist Sook, and Indie Artist Dreadpool Are On Air!

The Fierce Female Network

Play Episode Listen Later Mar 31, 2023 24:00


The band Sook is driven by the mastermind and Gold Coast musical wizard Kirk Mesmer (formerly of Magick Lamp). With a hermetic pop-rock blend of Weezer, The Cure, and Smashing Pumpkins, Kirk shapes his sound from a selective array of art in pop culture, music, films, and life.

DreadpooL (formerly known as Crescendo) is a music producer based out of Springfield, VA. Born and raised in Washington, DC, he began studying the clarinet in the third grade and later attended Duke Ellington School of the Arts, where he picked up the baritone saxophone. Upon graduation, he studied music at Clark Atlanta University in Atlanta, GA, and this is when he officially began creating beats. After many years of creating beats, he took a break to pursue another passion, photography, starting Tic-Tac-Pose Photography. During this pursuit, while very successful, he realized that there was nothing else he wanted to do but music. In 2013, he closed his photography studio, sold all his equipment, and began building his music studio... again. This time, it was all or nothing. Crescendo Forever Entertainment was rebirthed, with a different drive and a different outlook.

The Fierce Female Network
Indie Artist BlokStar, and Artist Sook Are On Air!

The Fierce Female Network

Play Episode Listen Later Mar 31, 2023 27:00


BlokStar is from Knoxville, TN and has been rapping since the early 2000s. He's opened up for Three 6 Mafia, T.I., Eightball & MJG, amongst others. His real look at life, paired with his rhythmic flow, sets him apart from the rest. He recently dropped an EP titled Constellations. It's his first studio project since 2005, when he dropped a self-titled album, BlokStar.

Done & Dunne
100. Capote's Coterie | 1924-1933

Done & Dunne

Play Episode Listen Later Mar 25, 2023 71:13


In this first episode of Capote's Coterie, we explore Truman Capote's early years – from his tiny hometown of Monroeville, Alabama, and the cast of characters there – including the very first members of his coterie: Lillie Mae Faulk, and her family – Aunts Jenny, Sook, and Callie. Add into all of this an uncle, a cousin, and a neighborhood kid with just as much imagination and curiosity, Harper Lee. Truman Capote makes his way in the world from birth to second grade aided by the help and grace of his mother's relatives and neighbors – truly, Truman's first coterie. Further reading Truman Capote's Southern Years: Stories from a Monroeville Cousin, by Marianne M. Moates (Amazon link) Party of the Century: The Fabulous Story of Truman Capote and His Black and White Ball, by Deborah Davis (Amazon link) Find sources and more at doneanddunne.com. Continue your investigation with ad-free and bonus episodes on Patreon! To advertise on Done & Dunne, please reach out to sales@advertisecast.com or visit https://www.advertisecast.com/DoneDunne. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Sook Sports
Sook Sports Event: BIG NEWS

Sook Sports

Play Episode Listen Later Mar 4, 2023 1:47


Times, They Are A-Changin'. 03/03/2023

Rick or Treat Horrorcast
#21 Carrie (w/Victoria Rae Sook)

Rick or Treat Horrorcast

Play Episode Listen Later Mar 3, 2023 130:33


This week Victoria Rae Sook guest hosts to discuss everyone's favorite prom date, CARRIE. Vic is currently directing a production of Carrie the Musical at Ohio University and provides some insight into adapting this classic Stephen King tale from the page to the stage. We also discuss bullying, dirty pillows, and recent horror releases. LET'S GO RICK OR TREATING!

PODCAST:
INSTA: @RickOrTreatPod

VICTORIA RAE SOOK (Guest Host):
SOCIALS: @victoriaraesook
WEBSITE: www.victoriaraesook.com

RICKY (Host):
INSTA: @rickrtreat
MY WRITING: www.Rue-Morgue.com, www.SpoilerFreeReviews.com
LETTERBOXD: RickOrTreat

LESTAT VON MONDLICHT (Music by):
Youtube: https://www.youtube.com/@lestatvonmondlicht

CRIMSON SOUL:
Insta: @crimsonsoulofficial
https://www.facebook.com/crimsonsoulofficial

AFTER DARK (Band):
https://www.facebook.com/afterdarkofficialuy
https://www.instagram.com/afterdarkuy
https://www.youtube.com/@afterdarkuy

DROP BEAT EMPIRE (Electrogoth project):
https://linktr.ee/dropbeatempireuy