Podcasts about joar

  • 117 PODCASTS
  • 292 EPISODES
  • 37m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • Jul 11, 2025 LATEST

POPULARITY

[Popularity trend chart, 2017–2024]



Latest podcast episodes about joar

comicdealer mini-podcast

Vom Låden und vom Leben, No. 34 – Screenplay (…as always, for listening and reading along, or reading and listening along, just as you like). Vom Låden und vom Leben (audio file #34: Screenplay). Hello, dear folks out there, today it's time for my column again: "Vom Lådn und vom Leben" ("Of the Shop and of Life"). Well, most of the time there isn't much about the shop in what I have to say in this category, and this time I'd sort it even further in the other direction: life. Although there are plenty of books on the subject – and not just books, but also booklets and comics that carry it in the title. Because today's topic is "screenplay". I had a peek into our web shop and found plenty of hits: a manga by Hayabusa, "Liebe ohne Drehbuch" ("Love Without a Script"); a few crime stories, for example "Drehbuch des Todes" ("Screenplay of Death"); a real screenplay for the classic "Das schweigende Klassenzimmer" by Dietrich Garstka; and of course plenty of non-fiction on the theory. But what I want to tell you about is a cinema experience – or rather, an after-cinema experience. I was at the cinema with a few people, at the Central in Würzburg, and the film itself was also a book adaptation, of a book that absolutely everyone at the shop loves. The book is called "Mickey 7 – Der letzte Klon", the film "Mickey 17" – probably because a bigger number looks better in a film. It's now available in our shop too, by the way, but back then it was still in the cinema. And as it goes on a sociable evening of cinema and friends, afterwards we sat together merrily and held our little post-mortem. I was the only one who knew the book. Yes, I have friends who don't go to the shop all that much and have different habits than I do – but that's exactly what I often find totally interesting. We all thought the film was really well done. Some of the others immediately wanted to buy the book, and I found the film really great too, EVEN THOUGH I know the book. And that's sometimes a thing of its own. But none of that is the point here. The point is one sentence I contributed to the discussion. My single point of criticism was that the state leadership – the government, so to speak – was portrayed a tiny bit too over-the-top, satirically stupid. And I said that given the way the film is built this does fit, but the exaggerated stupidity actually made it almost a little implausible. Then one of my buddies simply said: "Trump." And I went: "Damn." It turned into a long, funny evening after that. We also talked about the knee-slapper cult film "Idiocracy" from 2006, for example. Back then we laughed ourselves silly, but at the same time we agreed that all of it could only ever work in a slapstick film like that. Today I'm not so sure anymore. And again I simply say: "Trump." Well, the conversation drifted in that direction, and one catchphrase has stuck with us ever since and keeps being quoted: "If that were served to me in a screenplay – and I were a director – I would flat-out reject it. Implausible!" And now we're at the real topic. Because if I wrote the completely implausible rubbish that has happened – and keeps happening – since "Trump" took office into a screenplay and sent it back in time, even just a few years, nobody would buy a screenplay like that. Except maybe as a potential part two of "Idiocracy". With many blockbusters of recent decades I've said: what a lousy screenplay.
If they paid me a few dollars, I'd gladly write something better. That what is now taking place in our reality wouldn't even be enough for SchleFaZ, the German showcase of the worst films of all time, is so unbelievable that I simply have no words for it. Do you remember "Back to the Future"? There, Marty, thirty years in the past, is asked who the president is. And he has to give the answer: Ronald Reagan. Which leads to the punchline: "What, the actor?"

Lass die Sau raus!
#85 Cuddling cows with Moustache Farmer Joar Berge

Jun 27, 2025 · 55:55


"Inspire, don't missionize" – that's how Joar Berge sums up his life motto. Joar, also known as the cow whisperer or "Moustache Farmer", welcomed Andi to his Lebenshof (animal sanctuary) in the Odenwald, complete with a barn tour, a deep conversation, and a generous helping of cow cuddling. It was the first stop on Andi's sanctuary tour through Germany, and a conversation that touches, lingers, and inspires. Joar talks about his unusual path in life, from big-city living to building a sanctuary for animals. He talks about the deep bond with his cattle, about everyday life between hay, responsibility, and office work, and about why he considers encounters between humans and animals so important. An honest and touching episode about decisions, change, and the quiet moments that really make a difference. Link to Joar Berge's Lebenshof Odenwald: https://lebenshof-odenwald.de/

Golftugget
S12A12 Going -10

Jun 26, 2025 · 35:04


Hello out there in the cabins, folks! It's finally Friday, and Joar has had a change of heart... time to go -10! Or why not under 80? What's the recipe? Have a fantastic Friday, folks! THANKS TO OUR PARTNERS: GOLFSTAR, COBRA PUMA GOLF, GOLF GAMEBOOK

PLANTBASED
Moustache Farmer: Why 1 million people watch him cuddle cows

Jun 6, 2025 · 54:27


In this episode I talk to Joar Berge, better known to many as Moustache Farmer, founder of the Lebenshof Odenwald animal sanctuary. Joar has an impressive story: from an IT job on the Côte d'Azur to running a sanctuary – today he looks after numerous rescued animals and shares his experiences on social media. Almost 1 million people follow him there, and celebrities such as Alicia Silverstone and Moby regularly share his content. We talk about his personal journey, the behind-the-scenes challenges of a sanctuary, and why trust and respect are so important in dealing with animals. Also in this episode: ▪︎ What moved Joar to found a sanctuary ▪︎ Touching moments with the animals ▪︎ How a project like this gets financed at all ▪︎ The emotional side of sanctuary work ▪︎ Why Instagram and co. are so important for animal welfare ▪︎ What role celebrities can play ▪︎ What political changes he would like to see ▪︎ And: how each of us can achieve great things for animals in small steps. If animal welfare, vegan food, or simply inspiring real-life stories interest you, make sure to tune in. An episode that shows how much heart goes into a farm like this.

NDR - Hör mal 'n beten to

Ines Barber has inherited a DVD collection and is rediscovering the film classics. More Low German here – podcast: the Low German morning chat "Hör mal 'n beten to" as a free audio subscription for your PC: https://www.ndr.de/wellenord/podcast3096.html Die Welt snackt Platt: everything on the subject of Low German: https://www.ndr.de/plattdeutsch

Golftugget
S11A9 Attacker or defender?

Feb 13, 2025 · 38:09


Hello out there in the cabins, folks! It's finally Friday and time for a piping-hot new episode. A lot about everything and nothing today, as Joar has to rescue Norre's main topic. What clubs do the pros play? What do football and golf have in common? Have a listen and we'll see you again in a week! Have a fantastic weekend! BIG THANKS TO COBRA PUMA GOLF! https://cobrapumagolf.com/

Golftugget
S10A15 Mud, ball changes & wow moments

Dec 5, 2024 · 47:40


Hello out there in the cabins, folks! It's finally Friday again and time to chew golf for another week. Joar has started playing lunch golf, Niklas has his name day, and Norre has a long article on how to improve your approach shots that needs picking apart. Have a fantastic weekend and we'll see you again in a week! THANKS to our partners BRYGGMÄSTARENS ALKOHOLFRIA and COBRA PUMA GOLF! www.abro.se/vara-drycker/produkter/bryggmastarens-alkoholfria https://cobrapumagolf.com/ Golftugget is produced in collaboration with Fredag CBWP. www.fredag.se

Tyngre Old School
306. Joar Fällmar

Nov 1, 2024 · 48:53


Joar Fällmar is back for a return visit and we have plenty to talk about: his competitive push, his wife's Olympia qualifier, his growing coaching business, and what his future competition plans look like. Tyngre Old School, with Ashkan Aghili as the common denominator and Albin Mattsson and Linnea Enstedt as rotating sidekicks. The pod is about topics and guests that motivate and inspire, with training as the common thread. Every guest has a story to tell that moves and touches. If you listen to our podcast, please rate it on Apple Podcasts or whichever platform you use, and feel free to leave a review. It makes the pod more visible to others, and it makes us hosts happy.

Golftugget
S10A8 Thanksgiving & power transfer

Oct 17, 2024 · 39:53


Hello out there in the cabins, folks! It's finally Friday and time for a piping-hot new episode. The season is nearing its end, and Joar has already started thinking about how to make the most of winter training – more precisely, the power transfer. Maybe it's too early, maybe not, but at least it's finally Friday! Have a fantastic weekend and we'll see you again in a week! THANKS to our partners BRYGGMÄSTARENS ALKOHOLFRIA and COBRA PUMA GOLF! www.abro.se/vara-drycker/produkter/bryggmastarens-alkoholfria https://cobrapumagolf.com/ Golftugget is produced in collaboration with Fredag CBWP. www.fredag.se

Molde-fellesskapet
Romans 3:21-31 – Svenn Joar Frøystad

Oct 13, 2024


Svenn Joar Frøystad gives us a good introduction to Romans 3:21-31. Questions for small groups: 1: What does "God's righteousness" involve, described in two different but nevertheless connected ways in the text? 2: We in the West often have trouble understanding why God must judge. Perhaps we can see that it is necessary for some extreme people, such as Hitler or Stalin, but why must God judge all people? 3: We get to receive God's righteousness and are spared the punishment ourselves, entirely undeserved. This happens through "faith". "Faith" is about intellectually believing something, but also about "trusting in" – illustrated, for example, by believing that a chair is something to sit on and that it will hold you, but also by actually sitting down on it. How does one live in and trust God's righteousness?

Barnmorgon med Jonas
Friday riddle with Ylva: Who is found in bows and sometimes gets grated into bits?

Oct 11, 2024 · 6:01


In this week's Friday riddle, six-year-old Joar has worked out who the mice like. Listen to all episodes in Sveriges Radio Play. Do you know the answer to the new riddle? Email Ylva: barnmorgon@sverigesradio.se I APPEAR IN A SONG, AND THERE I GET A BANDAGE. I HAVE THE COSIEST TAIL, AND IT KEEPS MY BALANCE. I AM A REAL SUPER-JUMPER, OFTEN FROM TREE TO TREE.

Golftugget
S10A7 The minigolfer ft. Kevin Sundström (part 2)

Oct 10, 2024 · 43:33


Hello out there in the cabins! We have had the honour of hosting Kevin Sundström, one of Sweden's very best minigolfers and putters! Not only did he win the Scandinavian Putt Masters at Bro Hof while Joar missed the cut – now we finally get to hear all his secrets and training routines! You sent in your questions, we asked them, and now it's time to hear the answers! This is part 2 of the programme. Have a fantastic weekend out there in the cabins, folks! THANKS to our partners BRYGGMÄSTARENS ALKOHOLFRIA and COBRA PUMA GOLF! www.abro.se/vara-drycker/produkter/bryggmastarens-alkoholfria https://cobrapumagolf.com/ Golftugget is produced in collaboration with Fredag CBWP. www.fredag.se

Kultur heute Beiträge - Deutschlandfunk
Joar Nango: Kurt Schwitters Prize winner exhibits at the Sprengel Museum Hannover

Oct 6, 2024 · 4:44


Bührig, Agnes www.deutschlandfunk.de, Kultur heute

Molde-fellesskapet
Romans 2:16-3:20 – Svenn Joar Frøystad

Oct 6, 2024


Svenn Joar Frøystad takes us into Romans 2:16-3:20. Questions for small groups: 1. The Jews had the law, and some were very eager to keep it down to the smallest detail. Paul says this is not enough. What does it take for a person to justify themselves before God? 2. Paul says there is no one who seeks God, not a single one. What does he mean by that? And how, then, can people come to God? 3. How do you react when you have sinned?

Golftugget
S10A6 The minigolfer ft. Kevin Sundström (part 1)

Oct 3, 2024 · 48:25


Hello out there in the cabins! We have had the honour of hosting Kevin Sundström, one of Sweden's very best minigolfers and putters! Not only did he win the Scandinavian Putt Masters at Bro Hof while Joar missed the cut – now we finally get to hear all his secrets and training routines! You sent in your questions, we asked them, and now it's time to hear the answers! This is part 1 of the programme. Part 2 comes out next Friday. Have a fantastic weekend out there in the cabins, folks! THANKS to our partners BRYGGMÄSTARENS ALKOHOLFRIA and COBRA PUMA GOLF! www.abro.se/vara-drycker/produkter/bryggmastarens-alkoholfria https://cobrapumagolf.com/ Golftugget is produced in collaboration with Fredag CBWP. www.fredag.se

Barnmorgon med Jonas
Animal Wednesday: Two tails – two personalities!

Oct 2, 2024 · 6:06


Joar has two cats that, despite being siblings, are very different. Listen to all episodes in Sveriges Radio Play. Want to take part in Animal Wednesday? Write to Markus! Email barnmorgon@sverigesradio.se

Barnmorgon med Jonas
Monday prank: Grandma, the berry beetle is coming!

Sep 23, 2024 · 6:13


Joar orders a prank call on his grandma, who really loves her fruit trees! Listen to all episodes in Sveriges Radio Play. Would you also like to prank-call someone? Write to Markus! Email barnmorgon@sverigesradio.se

Juniornyheterna
SPECIAL He was a "stay-at-home student" – now he wants to help others

Sep 6, 2024 · 9:17


School became too much for Joar. He didn't understand why he couldn't manage the school week the way his classmates did. It got so tough that he became what some call a "hemmasittare", a stay-at-home student. He lied to his parents and said he was sick. But the longer he was away, the harder it became to go back. In this Juniornyheterna Special we hear what it was like for Joar when he was home from school, and how he then managed to come back. Listen to all episodes in Sveriges Radio Play. Presenter: Paulina Witte. Producer: Paulina Witte. Featuring: Petronella Nettermalm, school counsellor at Skarpnäck School in Kärrtorp. Every week, Juniornyheterna's special pod takes a deep dive into a topic that is extra interesting and exciting.

Golftugget
S10A2 Scandinavian Putt Masters

Sep 5, 2024 · 42:11


Hello out there in the cabins, folks! It's finally Friday and time to go through in more detail how things went for Joar up in Stockholm at Putt-SM, the Swedish putting championship. Norre followed the five-putts remotely via GameBook, and there's a listener question as usual. Not only that: how well would we amateurs actually have fared with the pros' conditions? Have a fantastic weekend and we'll talk again in a week!

Golftugget
S9A16 New clubs & autumn season

Aug 22, 2024 · 46:12


Hello out there in the cabins, folks! Finally Friday and time for another piping-hot episode. Joar has turned pessimist and already set his sights on autumn. Has the snow already arrived at Riksgränsen? And how should you actually think when changing clubs? A lot to get through for another week. Have a fantastic weekend out there in the cabins, folks! THANKS to our partners BRYGGMÄSTARENS ALKOHOLFRIA and COBRA PUMA GOLF! www.abro.se/vara-drycker/produkter/bryggmastarens-alkoholfria https://cobrapumagolf.com/ Golftugget is produced in collaboration with Fredag CBWP. www.fredag.se

Golftugget
S9A15 The therapy hour

Aug 15, 2024 · 40:20


Hello out there in the cabins, folks! We're finally well on our way back from the holidays, and surely more than just Norre has started to feel the dip in their golf game? How do you actually climb out of the valley of despair? How has the season's play been? There's still plenty of good golf ahead before autumn arrives with great strides. Norre talks it out and Joar overcomplicates things once again. Have a fantastic weekend out there in the cabins, folks, and we'll see you again in a week! THANKS to our partners BRYGGMÄSTARENS ALKOHOLFRIA and COBRA PUMA GOLF! www.abro.se/vara-drycker/produkter/bryggmastarens-alkoholfria https://cobrapumagolf.com/ Golftugget is produced in collaboration with Fredag CBWP. www.fredag.se

Golftugget
S9A9 Old school?

Jul 4, 2024 · 36:19


Hello out there in the cabins! It's finally Friday again. How we've longed for it. What actually is "old school"? Today Joar goes through a bunch of old-school claims, and Norre gets to add his well-considered comments. Have a fantastic weekend and we'll see you again in a week! THANKS to our partners BRYGGMÄSTARENS ALKOHOLFRIA and COBRA PUMA GOLF! www.abro.se/vara-drycker/produkter/bryggmastarens-alkoholfria https://cobrapumagolf.com/ Golftugget is produced in collaboration with Fredag CBWP. www.fredag.se

Sagor i Barnradion
Funny stories in Barnradion, part 12

Jun 17, 2024 · 1:34


Hoho, hihi, haha! Svante tells a Bellman joke, Joar wonders how the poop feels, and Ióanna tells a quick one about the aphids. Listen to all episodes in Sveriges Radio Play. Producer: Klara Grape

Sagor i Barnradion
Funny stories in Barnradion, part 9

Jun 17, 2024 · 1:55


Hoho, hihi, haha! Noa on when Bellman went to the museum, Liv tells one about a cake with mustard on it, and Joar tells a quick one about the world's slowest insect. Listen to all episodes in Sveriges Radio Play. Producer: Klara Grape

Golftugget
S9A3 Calories & golf!

May 23, 2024 · 34:04


Hello out there in the cabins, folks. It's finally Friday and time for a piping-hot episode. Joar has an idea about the "coolest" thing you can do in golf... we'll see if Norre agrees! And for once, Joar has also done some research, on how many calories you burn on the golf course! Have a fantastic weekend and we'll see you again in a week. THANKS to our partners BRYGGMÄSTARENS ALKOHOLFRIA and COBRA PUMA GOLF! www.abro.se/vara-drycker/produkter/bryggmastarens-alkoholfria https://cobrapumagolf.com/ Golftugget is produced in collaboration with Fredag CBWP. www.fredag.se

Salem Stavanger
Joar Sandanger, Sunday 19 May 2024

May 19, 2024 · 36:48


Joar Sandanger, Sunday 19 May 2024, by Salem Stavanger

The Nonlinear Library
AF - Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems by Joar Skalse

May 17, 2024 · 4:42


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems, published by Joar Skalse on May 17, 2024 on The AI Alignment Forum. I want to draw attention to a new paper, written by myself, David "davidad" Dalrymple, Yoshua Bengio, Stuart Russell, Max Tegmark, Sanjit Seshia, Steve Omohundro, Christian Szegedy, Ben Goldhaber, Nora Ammann, Alessandro Abate, Joe Halpern, Clark Barrett, Ding Zhao, Tan Zhi-Xuan, Jeannette Wing, and Joshua Tenenbaum. In this paper we introduce the concept of "guaranteed safe (GS) AI", which is a broad research strategy for obtaining safe AI systems with provable quantitative safety guarantees. Moreover, with a sufficient push, this strategy could plausibly be implemented on a moderately short time scale. The key components of GS AI are: 1. A formal safety specification that mathematically describes what effects or behaviors are considered safe or acceptable. 2. A world model that provides a mathematical description of the environment of the AI system. 3. A verifier that provides a formal proof (or some other comparable auditable assurance) that the AI system satisfies the safety specification with respect to the world model. The first thing to note is that a safety specification in general is not the same thing as a reward function, utility function, or loss function (though they include these objects as special cases). For example, it may specify that the AI system should not communicate outside of certain channels, copy itself to external computers, modify its own source code, or obtain information about certain classes of things in the external world, etc. The safety specifications may be specified manually, generated by a learning algorithm, written by an AI system, or obtained through other means. Further detail is provided in the main paper. The next thing to note is that most useful safety specifications must be given relative to a world model. Without a world model, we can only use specifications defined directly over input-output relations. However, we want to define specifications over input-outcome relations instead. This is why a world model is a core component of GS AI. Also note that: 1. The world model need not be a "complete" model of the world. Rather, the required amount of detail and the appropriate level of abstraction depends on both the safety specification(s) and the AI system's context of use. 2. The world model should of course account for uncertainty, which may include both stochasticity and nondeterminism. 3. The AI system whose safety is being verified may or may not use a world model, and if it does, we may or may not be able to extract it. However, the world model that is used for the verification of the safety properties need not be the same as the world model of the AI system whose safety is being verified (if it has one). The world model would likely have to be AI-generated, and should ideally be interpretable. In the main paper, we outline a few potential strategies for producing such a world model. Finally, the verifier produces a quantitative assurance that the base-level AI controller satisfies the safety specification(s) relative to the world model(s). In the most straightforward form, this could simply take the shape of a formal proof. 
However, if a direct formal proof cannot be obtained, then there are weaker alternatives that would still produce a quantitative guarantee. For example, the assurance may take the form of a proof that bounds the probability of failing to satisfy the safety specification, or a proof that the AI system will converge towards satisfying the safety specification (with increasing amounts of data or computational resources, for example). Such proofs are of course often very hard to obtain. However, further progress in automated theorem proving (and relat...
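To make the three GS-AI components concrete, here is a minimal sketch of how a specification, a world model, and a verifier could compose. All names and signatures are hypothetical illustrations of the framework as summarised above, not code from the paper.

```python
# Minimal, hypothetical sketch of the three GS-AI components described above.
# Names and signatures are illustrative; they are not from the paper.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SafetySpec:
    # Predicate over outcomes: True iff an outcome counts as safe.
    is_safe: Callable[[str], bool]

@dataclass
class WorldModel:
    # Maps (state, action) to a distribution over outcomes, simplified
    # here to a list of (outcome, probability) pairs.
    predict: Callable[[str, str], list[tuple[str, float]]]

def verify(policy: Callable[[str], str],
           spec: SafetySpec,
           model: WorldModel,
           states: list[str]) -> float:
    """Crude stand-in for the verifier: returns a lower bound on the
    probability that `policy` satisfies `spec` relative to `model`,
    checked over the given states. In the paper this role is played by
    a formal proof or another auditable assurance, not by enumeration."""
    worst = 1.0
    for state in states:
        outcomes = model.predict(state, policy(state))
        p_safe = sum(p for outcome, p in outcomes if spec.is_safe(outcome))
        worst = min(worst, p_safe)
    return worst
```

The point of the structure is the one made in the episode: the quantitative guarantee is stated against the world model, not against the AI system's own (possibly inaccessible) internals.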

Pengesnakk
215: Investing in funds with Joar Hagatun from Kron

May 13, 2024 · 64:50


Advertisement // I've invited the chief investment officer of Kron, Storebrand's investment app, into the studio to teach us more about investing in funds. We talk about equity funds, bond funds, investment strategies, and answer submitted questions about funds. The episode is marked as an advertisement because I have a partnership with Kron this year. Read more about Kron here: kron.no/fond?utm_source=podcast&utm_medium=pengesnakk&utm_content=podcast_joar Download Kron here: https://lenker.kron.no/pengesnakk Read more about the partnership with Kron here: https://www.pengesnakk.no/blog/kronpodcast See omnystudio.com/listener for privacy information.

SCHWAD DICH MÖD
#131 Is that a praline?

Apr 25, 2024 · 41:30


Yeah, a motivating text about 1. FC Köln could go here. But it doesn't..... 41 minutes and 31 seconds of honest venting about the most painful love of our lives.

Golftugget
S8A15 Keep chipping simple!

Apr 25, 2024 · 30:55


Hello out there in the cabins, folks. Today it's finally time to talk a little chipping. Norre has done his research and Joar explains the similarities between cracking eggs and chipping. You don't want to miss this! Have a fantastic weekend and we'll see you again in a week! THANKS to our partners BRYGGMÄSTARENS ALKOHOLFRIA, COBRA PUMA GOLF and GE BLOD! www.abro.se/vara-drycker/produkter/bryggmastarens-alkoholfria https://cobrapumagolf.com/ https://geblod.nu/ Golftugget is produced in collaboration with Fredag CBWP. www.fredag.se

Golftugget
S8A14 Handicap odds & putting

Apr 18, 2024 · 38:12


Hello out there in the cabins, folks! Finally Friday and time for another rattling round of Golftugget. Joar finally gets to talk a little putting and Norre has done his research – but is it better to play with odds instead of handicaps? Are you allowed to stand in each other's line when someone plays, and how good are the pros at putting, really? Many questions and almost as many answers in today's episode! All the best, and we'll see you again in a week! THANKS to our partners BRYGGMÄSTARENS ALKOHOLFRIA, COBRA PUMA GOLF and GE BLOD! www.abro.se/vara-drycker/produkter/bryggmastarens-alkoholfria https://cobrapumagolf.com/ https://geblod.nu/ Golftugget is produced in collaboration with Fredag CBWP. www.fredag.se

The Nonlinear Library
AF - My Criticism of Singular Learning Theory by Joar Skalse

Nov 19, 2023 · 20:01


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My Criticism of Singular Learning Theory, published by Joar Skalse on November 19, 2023 on The AI Alignment Forum. In this post, I will briefly give my criticism of Singular Learning Theory (SLT), and explain why I am skeptical of its significance. I will especially focus on the question of generalisation --- I do not believe that SLT offers any explanation of generalisation in neural networks. I will also briefly mention some of my other criticisms of SLT, describe some alternative solutions to the problems that SLT aims to tackle, and describe some related research problems which I would be more excited about. (I have been meaning to write this for almost 6 months now, since I attended the SLT workshop last June, but things have kept getting in the way.) For an overview of SLT, see this sequence. This post will also refer to the results described in this post, and will also occasionally touch on VC theory. However, I have tried to make it mostly self-contained. The Mystery of Generalisation First of all, what is the mystery of generalisation? The issue is this: neural networks are highly expressive, and typically overparameterised. In particular, when a real-world neural network is trained on a real-world dataset, it is typically the case that this network is able to express many functions which would fit the training data well, but which would generalise poorly. Moreover, among all functions which do fit the training data, there are more functions (by number) that generalise poorly, than functions that generalise well. And yet neural networks will typically find functions that generalise well. To make this point more intuitive, suppose we have a 500,000-degree polynomial, and that we fit this to 50,000 data points. In this case, we have 450,000 degrees of freedom, and we should by default expect to end up with a function which generalises very poorly. But when we train a neural network with 500,000 parameters on 50,000 MNIST images, we end up with a neural network that generalises well. Moreover, adding more parameters to the neural network will typically make generalisation better, whereas adding more parameters to the polynomial is likely to make generalisation worse. A simple hypothesis might be that some of the parameters in a neural network are redundant, so that even if it has 500,000 parameters, the dimensionality of the space of all functions which it can express is still less than 500,000. This is true. However, the magnitude of this effect is too small to solve the puzzle. If you take the MNIST training set, assign random labels to the training data, and then try to fit the network to this function, you will find that this often can be done. This means that while neural networks have redundant parameters, they are still able to express more functions which generalise poorly, than functions which generalise well. Hence the puzzle. The answer to this puzzle must be that neural networks have an inductive bias towards low-complexity functions. That is, among all functions which fit a given training set, neural networks are more likely to find a low-complexity function (and such functions are more likely to generalise well, as per Occam's Razor). The next question is where this inductive bias comes from, and how it works.
Understanding this would let us better understand and predict the behaviour of neural networks, which would be very useful for AI alignment. I should also mention that generalisation is only mysterious when we have an amount of training data that is small relative to the overall expressivity of the learning machine. Classical statistical learning theory already tells us that any sufficiently well-behaved learning machine will generalise well in the limit of infinite training data. For an overview of these results, see this post. Thus, the quest...
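The polynomial illustration in the text can be run directly. Below is a scaled-down sketch (degree 50 on 20 points rather than 500,000 on 50,000, so it runs in seconds); the target function and sizes are arbitrary choices, not from the post.

```python
# Scaled-down version of the post's overparameterised-polynomial example:
# more coefficients than data points, near-zero training error, and
# (typically) much larger test error.
import numpy as np
from numpy.polynomial import polynomial as P

rng = np.random.default_rng(0)
target = lambda x: np.sin(3 * x)   # arbitrary ground-truth function

n_train, degree = 20, 50           # 51 coefficients, only 20 data points
x_train = rng.uniform(-1.0, 1.0, n_train)
y_train = target(x_train)

# Least-squares fit; a RankWarning is expected here, since the
# system is deliberately underdetermined.
coeffs = P.polyfit(x_train, y_train, degree)

x_test = np.linspace(-1.0, 1.0, 1000)
train_mse = np.mean((P.polyval(x_train, coeffs) - y_train) ** 2)
test_mse = np.mean((P.polyval(x_test, coeffs) - target(x_test)) ** 2)
print(f"train MSE: {train_mse:.2e}   test MSE: {test_mse:.2e}")
```

On a typical run the training error is essentially zero while the test error is orders of magnitude larger, which is exactly the behaviour the post contrasts with comparably overparameterised neural networks.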

Outlook
The fisherman and the 'spy whale'

Nov 6, 2023 · 40:51


Joar Hesten's mission to save a whale rumoured to have escaped from the Russian navy. Norwegian Joar Hesten was fishing for cod in the Arctic when he came across a beluga whale wearing a harness. He helped to free the animal, but when he took a closer look at the harness he saw that it was labelled ‘Equipment of St Petersburg'. Theories started swirling that this whale had once belonged to the Russian military, and it was nicknamed Hvaldimir, the 'Russian spy whale'. As Hvaldimir's fame grew, Joar became determined to protect him from human contact. Presenter: Emily Webb Producer: Saskia Edwards Photo: Joar with Hvaldimir Credit: Aleksander Nordahl, DN/D2

The Nonlinear Library
AF - Using (Uninterpretable) LLMs to Generate Interpretable AI Code by Joar Skalse

Jul 2, 2023 · 5:04


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Using (Uninterpretable) LLMs to Generate Interpretable AI Code, published by Joar Skalse on July 2, 2023 on The AI Alignment Forum. (This post is a bit of a thought dump, but I hope it could be an interesting prompt to think about.) For some types of problems, we can trust a proposed solution without trusting the method that generated the solution. For example, a mathematical proof can be independently verified. This means that we can trust a mathematical proof, without having to trust the mathematician who came up with the proof. Not all problems are like this. For example, in order to trust that a chess move is correct, then we must either trust the player who came up with the move (in terms of both their ability to play chess, and their motivation to make good suggestions), or we must be good at chess ourselves. This is similar to the distinction between NP (or perhaps more generally IP/PSPACE), and larger complexity classes (EXP, etc). One of the things that make AI safety hard is that we want to use AI systems to solve problems whose solution we are unable (or at least unwilling) to verify. For example, automation isn't very useful if all parts of the process must be constantly monitored. More generally, we also want to use AI systems to get superhuman performance in domains where it is difficult to verify the correctness of an output (such as economic activity, engineering, politics, etc). This means that we need to trust the mechanism which produces the output (ie the AI itself), and this is hard. In order to trust the output of a large neural network, we must either verify its output independently, or we must trust the network itself. In order to trust the network itself, we must either verify the network independently, or we must trust the process that generated the network (ie training with SGD). This suggests that there are three ways to ensure that an AI-generated solution is correct: manually verify the solution (and only use the AI for problems where this is possible), find ways to trust the AI model (through interpretability, red teaming, formal verification, etc), or find ways to trust the training process (through the science of deep learning, reward learning, data augmentation, etc). [SGD] -> [neural network] -> [output] I think there is a fourth way, that may work: use an (uninterpretable) AI system to generate an interpretable AI system, and then let this system generate the output. For example, instead of having a neural network generate a chess move, it could instead generate an interpretable computer program that generates a chess move. We can then trust the chess move if we trust the program generated by the neural network, even if we don't trust the neural network, and even if we are unable to verify the chess move. [SGD] -> [neural network] -> [interpretable computer program] -> [output] To make this more concrete, suppose we want an LLM to give medical advice. In that case, we want its advice to be truthful and unbiased. For example, it should not be possible to prompt it into recommending homeopathy, etc. If we simply fine-tune the LLM with RLHF and red-teaming, then we can be reasonably sure that it probably won't recommend homeopathy. However, it is difficult to be very sure, because we can't try all inputs, and we can't understand what all the tensors are doing.
An alternative strategy is to use the LLM to generate an interpretable, symbolic expert system, and then let this expert system provide medical advice. Such a system might be easy to understand, and interpretable by default. For example, we might be able to definitively verify that there is no input on which it would recommend homeopathy. In that case, we could end up with a system whose outputs we trust, even if we don't verify the outputs, and even if we don't neces...
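The pipeline [SGD] -> [neural network] -> [interpretable program] -> [output] can be sketched in a few lines. Everything below is hypothetical: `llm_generate` stands in for an arbitrary code-generating model (it returns a canned rule table here so the sketch is self-contained), and the medical-advice property is the one discussed above.

```python
# Hypothetical sketch of "use an uninterpretable model to produce an
# interpretable artifact, then verify and deploy only the artifact".
import ast

def llm_generate(prompt: str) -> str:
    # Stand-in for an uninterpretable code-generating model (not a real API).
    # Returns a canned rule table so the example runs end to end.
    return "{('fever', 'cough'): 'rest, fluids, and see a GP if it persists'}"

def build_advisor():
    # Step 1: the untrusted model writes a small symbolic rule table.
    source = llm_generate("Write a Python dict mapping symptom tuples to advice.")
    rules = ast.literal_eval(source)  # parse as data, never execute model output
    # Step 2: verify the interpretable artifact, not the network. Because the
    # rule table is finite, the property "never recommends homeopathy" can be
    # checked exhaustively rather than sampled.
    assert all("homeopathy" not in advice.lower() for advice in rules.values())
    # Step 3: only the verified artifact is deployed.
    return lambda symptoms: rules.get(symptoms, "no rule: refer to a doctor")

advisor = build_advisor()
print(advisor(("fever", "cough")))
```

The trust story mirrors the chess example in the text: nothing about the neural network itself needs to be interpretable, as long as the artifact it emits is small enough to inspect and verify.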

The Nonlinear Library
AF - VC Theory Overview by Joar Skalse

Jul 2, 2023 · 14:01


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: VC Theory Overview, published by Joar Skalse on July 2, 2023 on The AI Alignment Forum. In this post, I will give a brief overview of VC theory and computational learning theory. The reason for this is that I will later write a post about singular learning theory, in which I will refer back to this post. However, this post is self-contained, and may be interesting for anyone interested in theoretical machine learning. What is Computational Learning Theory? In short, computational learning theory (CLT) is what happens when you combine complexity theory with machine learning. Just like complexity theory is concerned with understanding what kinds of problems are hard to compute, or easy to compute, CLT is concerned with understanding what kinds of things are easy to learn, or hard to learn. This means that CLT seeks to identify classes of problems that you can learn with a small amount of data, and classes of problems that no algorithm can learn unless it gets a large amount of data. This also means that CLT produces generalisation bounds for different kinds of learning algorithms, and other such results. Probably Approximately Correct Learning In order to study what kinds of problems are hard to learn, and easy to learn, we first need a mathematical formalism that captures what it means to "learn a problem". The most popular model in CLT is given by the probably approximately correct (PAC) learning framework. This is a model of supervised learning for classification. It uses the following components: We have an instance space X, which contains all the data that we might observe. For example, X may be a vector space, or it may be the set of all bit strings, etc. If we are doing image classification, then X may be the set of all RGB images of a certain size, and so on. We have a set of concepts C, where every element of C is a function c : X -> {0,1}. This is the set of functions that we might be trying to learn. For example, if X is a vector space, then C could be the set of all linear classifiers on X. Alternatively, if X is {0,1}^n, then C could be the set of all Boolean functions over n variables with circuit complexity <= m. And so on. We assume that we have a (potentially unknown) distribution D over X, and that there is an unknown function c : X -> {0,1} that we are trying to learn. To do this, our learning algorithm will observe a number of data points (x,c(x)), where each x is sampled iid from D. Based on this dataset, it will guess a function h : X -> {0,1}. We want this function to be similar to c with high probability. Specifically, given a distribution D over X, a concept c in C, and a hypothesis h : X -> {0,1}, we say that the error of h is P(h(x) != c(x)), where x is sampled from D. A learning algorithm L is an algorithm that takes as input a dataset {(x, c(x))} of points, and returns a function h : X -> {0,1}. We say that L is a PAC-learning algorithm for C if there exists a polynomial p, such that for all ϵ and δ, and all distributions D over X and all concepts c in C, if L is given access to at least p(1/ϵ,1/δ) samples from D, together with their labels according to c, then L will with probability at least 1−δ learn a function whose error is at most ϵ. Moreover, if there exists a PAC-learning algorithm for C, then we say that C is PAC-learnable.
In other words, PAC-learning is a mathematical formalism that describes supervised learning of classification algorithms. C is a set of classification functions, that we may be trying to learn. We want to know if there exists a supervised learning algorithm L, such that if L is given access to a small (polynomial) amount of training data, then it will with high probability learn a function with low error. We want this bound to hold for all c in C, and all D over X, so that we can be sure that this bound holds e...
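Stated compactly in symbols (a restatement of the definitions in the text, with m for the number of labelled samples and S_m for the sample itself):

```latex
% Error of a hypothesis h under distribution D and concept c:
\mathrm{err}_{D,c}(h) = \Pr_{x \sim D}\big[\, h(x) \neq c(x) \,\big]

% C is PAC-learnable iff there exist an algorithm L and a polynomial p with:
\forall \epsilon, \delta \in (0,1),\ \forall D \text{ over } X,\ \forall c \in C:\quad
m \ge p(1/\epsilon, 1/\delta)
\;\Longrightarrow\;
\Pr\big[\, \mathrm{err}_{D,c}(L(S_m)) \le \epsilon \,\big] \ge 1 - \delta
```

Here "probably" is the 1−δ and "approximately correct" is the ε.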

The Nonlinear Library
AF - How Smart Are Humans? by Joar Skalse

Jul 2, 2023 · 3:17


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How Smart Are Humans?, published by Joar Skalse on July 2, 2023 on The AI Alignment Forum. Epistemic status: free speculation How intelligent should we expect AI to be, compared to humans, and how quickly should we expect it to reach this level of intelligence? This is of course an important strategic question, which has a large impact on our AI threat models. At the moment, the most common answers to these questions are "much more intelligent", and "very quickly". In this post, I will describe an arguably realistic scenario in which this would not be the case. First of all, why should we expect AI to end up much more intelligent than humans? A common argument goes roughly like this: humans are much, much smarter than animals. We can build computers, and send things to space, but they can't. This proves that the range of possible intelligence is very wide. Moreover, it would be prima facie very surprising if human intelligence is at the top of this range. Therefore, we should expect it to be possible for AI systems to get much smarter than humans. Moreover, there is no reason to think that AI progress would slow down around human intelligence in particular. Therefore, we should expect AI intelligence to quickly far outstrip our intelligence. This argument relies very crucially on the assumption that humans are much smarter than animals. But is this actually true? I'm not entirely convinced. First of all, there have been "feral" humans that grew up surrounded by animals. As far as I know, these humans are not obviously much more intelligent than animals (in terms of their ability to solve problems). This already casts some doubt on the notion that humans are much, much smarter than animals. It is important to remember that humans, unlike all other species, are able to use complex language. This is a huge confounding factor, when we try to compare the intelligence of humans and animals. It is obviously very powerful to be able to exchange complex ideas, and build up knowledge intergenerationally. This would probably be enough to give humans a very large advantage, even if our intelligence was otherwise exactly the same as that of other primates. Therefore, consider the following hypothesis: humans have an innate ability to use complex, combinatorial language, but all other species lack this ability (in the way Noam Chomsky thinks). In addition to this, humans are also somewhat (but not hugely) more intelligent than other primates (eg, 1.5x as intelligent, say). As far as I can tell, this hypothesis roughly fits all our observations (?). However, if it is the case that the difference between humans and monkeys is mostly due to a one-shot discrete difference (ie language), then this cannot necessarily be repeated to get a similar gain in intelligence a second time. If that is the case, then we should perhaps expect AI to still end up quite a bit more intelligent than humans, but perhaps not to an incomprehensible extent (ie, we end up with AI geniuses, but not AI gods). There are obviously a lot of ways that this conclusion could be wrong, and a lot of counter-arguments one could offer (eg, in most board games, AI has quickly gone from below human performance, to far above human performance, yadda yadda). However, I don't know of any knock-down arguments, and so I put some weight on something like this being true. Thanks for listening. 
To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

einfach ganz leben
Was wir von Tieren lernen können mit Joar Berge

Jun 22, 2023 · 45:20


On life in a community of humans and animals that do each other good. »That's the perfect start to the day: looking into the animals' faces and greeting everyone,« says Joar Berge. He grew up in a large family in the countryside and had an extraordinary connection with cows even as a child. He then spent a large part of his life in big cities, working as an IT manager. In 2019, he decided to return to the countryside. Together with like-minded people, Joar founded Lebenshof Odenwald e.V. Since then he has been rescuing animals in need and giving them a safe home. In conversation with Jutta Ribbrock, the cow whisperer tells how he found his happiness among cows, pigs, and chickens, in a community of humans and animals that do each other good. For further listening and browsing: moustache-farmer.de Joar Berge on Instagram: @moustache_farmer lebenshof-odenwald.de Lebenshof Odenwald e.V. on Instagram: @lebenshof_odenwald Joar Berge, Kühe kuscheln – Wie die Tiere und ich ein neues Leben begannen (book). The theme music of this podcast can be found on the album balance moods – Ein Tag in der Natur. Many more tips on a mindful lifestyle can be found at einfachganzleben.de. You'll also find the gift boxes there: shop.einfachganzleben.de. Visit us on Facebook and Instagram, too. More podcasts from argon podcast are available at argon-podcast.de. Questions, praise, criticism, or comments? Feel free to email us: einfachganzleben@argon-verlag.de You can also write to Jutta directly: jutta@juttaribbrock.de And you'll find her on Instagram: @jutta_ribbrock Hosted on Acast. See acast.com/privacy for more information.

Blaue Couch
Joar Berge, cow friend, on a different way of treating farm animals

May 23, 2023 · 36:00


Joar Berge became known as the "cow whisperer"; almost 300,000 people follow him on Instagram. For three years he and his friends have been rescuing farm animals and housing them on a sanctuary farm.

hr1 Talk
"I am not a cow whisperer" | Joar Berge in the hr1 Talk

May 21, 2023 · 35:08


Joar Berge grew up with 13 siblings in Linnenbach, a village of 300 souls in the Odenwald. But after finishing secondary school, the animal lover he already was found the Odenwald idyll too small, and he drifted through big cities, where he lived through a "wild time". In 2019, on the Côte d'Azur, he had the idea of returning to the countryside and getting cows in order to start a new life with them.

Sexnoveller Deluxe
Top form and madness – a listener story!

May 18, 2023 · 33:23


Episode 172 of Europe's filthiest podcast comes this week filled with a rather fantastic listener story, created somewhere between Sunny Beach and Landvetter by Isabelle, Linnéa, and Joar in 2007. Absolutely golden. On top of that, we get the story "Carry home", a tale of exotic character set in a restaurant, with belly dancing as well as curry on the menu. Dig in! Hosted on Acast. See acast.com/privacy for more information.

GET HAPPY!
Of cows and the meaning of life – with Joar Berge

May 5, 2023 · 56:05


How animals changed the life of the famous Moustache Farmer. Almost two years to the day, Joar was a guest on Kathie's show once before. Back then a sizeable Instagram community already knew him, but a lot has happened since, in his life and in the lives of the animals that accompany him. What all that was, and much more besides, he has put down on paper in a wonderful and very touching book. The events are so gripping that the two absolutely wanted to meet again, because since getting to know each other in the Odenwald they have shared a lovely and very special connection. Photo: Tobias Arnold. Book "Kühe kuscheln: Wie die Tiere und ich ein neues Leben begannen": https://amzn.eu/d/9S6DevY Website: https://moustache-farmer.de/ Instagram: https://www.instagram.com/moustache_farmer/ Lebenshof website: https://lebenshof-odenwald.de/ Lebenshof Instagram: https://www.instagram.com/lebenshof_odenwald/ Related episodes: https://www.antenne.de/mediathek/serien/bayern-buddha-happiness/31394-von-kuehen-lernen-mit-kuhfluesterer-joar-berge https://www.antenne.de/mediathek/serien/bayern-buddha-happiness/55720-vegan-verbunden-und-gluecklich-mit-annelina-waller

Tyngre Träningssnack
Episode 390: Joar Guterstam

May 3, 2023 · 82:58


Addiction physician and researcher Joar Guterstam joins Jacob and Wille on Tyngre Träningssnack to educate us all about addiction. The discussion covers what an addiction actually involves, how the diagnosis is made, the problem of drawing a line between addiction and non-addiction, and the risks of being either too restrictive or too generous with diagnoses. Joar also talks a bit about his research and about his clinical work. We also discuss things that are often called addictions today but are not recognized diagnoses, such as gambling addiction, food addiction, sex addiction, and more. On Tyngre Träningssnack's Instagram you can find pictures related to this and previous episodes. Timestamps: (00:00:00) Jacob saw HIF's first win in a long time... (00:02:08) Wille has breathing problems because of pollen (00:05:19) This week's guest, addiction physician and researcher Joar Guterstam (00:08:50) The path from medical studies to working with addiction (00:11:25) Diagnoses in psychiatry and how they differ from much of the rest of medicine (00:14:26) The natural course of addiction problems is more positive than most people think (00:16:32) Joar's doctoral studies on amphetamine addiction (00:26:12) When someone is addicted, so many different factors are at play that it becomes hard to find a single cause (00:27:15) The effect on general status is surprisingly small for medicines that act on the opioid systems (00:34:12) The definition of addiction used in healthcare today (00:39:07) Drawing a line between addiction and non-addiction is very difficult (00:46:51) There are also several problems with overly broad inclusion criteria (00:52:28) Joar thinks one should be restrictive with new diagnoses (00:55:15) Should there be a scale for addiction where those with more severe addiction are prioritized? (00:59:44) The question of food and sugar addiction (01:05:34) One also needs to consider the benefit to the population of introducing new diagnoses (01:09:02) The difference between rewards that occur naturally and those we manufacture (01:12:26) The drive in addiction is affected by the drive to do other things in life (01:16:06) Addiction researcher Robert West's view of food addiction as a diagnosis (01:19:17) Recognizing less severe addictions could possibly open the door to developing evidence-based treatments

Road to Tar Valon: A Wheel of Time Podcast
E150 Asmodean, Joar Addam Nessosin, Westlands 101

Apr 12, 2023 · 52:23


What type of musical instrument should Asmodean play in the Wheel of Time TV show? How does his backstory align with the mythos of American Delta blues player Robert Johnson? What about Faust? Today we talk about the origins of the artist formerly known as Joar Addam Nessosin, theorize on his strained relationship with his mother, and ask how to inject some good ol' Americana into the television series. That and much, much more, on this week's episode. As per usual, the first half is spoiler-free, for any new reader looking for insight into the character, and then we dive deep into Asmodean's role within the book series. Join us for live recordings by becoming a Patreon member. All tiers can listen in for 1 dollar a month. Find us on Twitter here. Instagram here. Check out some cool Wheel of Time merch here. And join our discord server here. --- Send in a voice message: https://podcasters.spotify.com/pod/show/roadtotarvalon/message Support this podcast: https://podcasters.spotify.com/pod/show/roadtotarvalon/support

The Nonlinear Library
AF - Some Arguments Against Strong Scaling by Joar Skalse

Jan 13, 2023 · 25:27


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some Arguments Against Strong Scaling, published by Joar Skalse on January 13, 2023 on The AI Alignment Forum. There are many people who believe that we will be able to get to AGI by basically just scaling up the techniques used in recent large language models, combined with some relatively minor additions and/or architectural changes. As a result, there are people in the AI safety community who now predict timelines of less than 10 years, and structure their research accordingly. However, there are also people who still believe in long(er) timelines, or at least that substantial new insights or breakthroughs will be needed for AGI (even if those breakthroughs in principle could happen quickly). My impression is that the arguments for the latter position are not all that widely known in the AI safety community. In this post, I will summarise as many of these arguments as I can. I will almost certainly miss some arguments; if so, I would be grateful if they could be added to the comments. My goal with this post is not to present a balanced view of the issue, nor is it to present my own view. Rather, my goal is just to summarise as many arguments as possible for being skeptical of short timelines and the "scaling is all you need" position. This post is structured into four sections. In the first section, I give a rough overview of the scaling is all you need-hypothesis, together with a basic argument for that hypothesis. In the second section, I give a few general arguments in favour of significant model uncertainty when it comes to arguments about AI timelines. In the third section, I give some arguments against the standard argument for the scaling is all you need-hypothesis, and in the fourth section, I give a few direct arguments against the hypothesis itself. I then end the post on a few closing words. Glossary: LLM - Large Language Model; SIAYN - Scaling Is All You Need. The View I'm Arguing Against In this section, I will give a brief summary of the view that these arguments oppose, as well as provide a standard justification for this view. In short, the view is that we can reach AGI by more or less simply scaling up existing methods (in terms of the size of the models, the amount of training data they are given, and/or the number of gradient steps they take, etc). One version says that we can do this by literally just scaling up transformers, but the arguments will apply even if we relax this to allow scaling of large deep learning-based next-token predictors, even if they would need to be given a somewhat different architecture, and even if some extra thing would be needed, etc. Why believe this? One argument goes like this: (1) Next-word prediction is AI-complete. This would mean that if we can solve next-word prediction, then we would also be able to solve any other AI problem. Why think next-word prediction is AI-complete? One reason is that human-level question answering is believed to be AI-complete, and this can be reduced to next-word prediction. (2) The performance of LLMs at next-word prediction improves smoothly as a function of the parameter count, training time, and amount of training data. Moreover, the asymptote of this performance trend is at least human-level performance. Hence, if we keep scaling up LLMs we will eventually reach human-level performance at next-word prediction, and therefore also reach AGI.
An issue with this argument, as stated, is that GPT-3 already is better than humans at next-word prediction. So are both GPT-2 and GPT-1, in fact, see this link. This means that there is an issue with the argument, and that issue is that human-level performance on next-word prediction (in terms of accuracy) evidently is insufficient to attain human-level performance in question answering. There are at least two ways to amend the argument: (3) In...
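For what claim (2) typically looks like in symbols: the scaling-law literature (e.g. the Hoffmann et al. 2022 "Chinchilla" fit) models LLM loss with a parametric form like the one below. This is an illustration drawn from that literature, not an equation from the post.

```latex
% Illustrative scaling-law form for claim (2); N = parameter count,
% D = number of training tokens, E = irreducible loss (the asymptote).
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Claim (2) asserts that the asymptote E sits at or beyond human performance; the paragraph above points out why even matching human next-word accuracy may not be enough.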

The Nonlinear Library
LW - Conditions for Mesa-Optimization by evhub, Chris van Merwijk, vlad_m, Joar Skalse, Scott Garrabrant from Risks from Learned Optimization

Dec 24, 2021 · 19:38


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is Risks from Learned Optimization, Part 2: Conditions for Mesa-Optimization, published by evhub, Chris van Merwijk, vlad_m, Joar Skalse, Scott Garrabrant. Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. This is the second of five posts in the Risks from Learned Optimization Sequence based on the paper “Risks from Learned Optimization in Advanced Machine Learning Systems” by Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant. Each post in the sequence corresponds to a different section of the paper. In this post, we consider how the following two components of a particular machine learning system might influence whether it will produce a mesa-optimizer: The task: The training distribution and base objective function. The base optimizer: The machine learning algorithm and model architecture. We deliberately choose to present theoretical considerations for why mesa-optimization may or may not occur rather than provide concrete examples. Mesa-optimization is a phenomenon that we believe will occur mainly in machine learning systems that are more advanced than those that exist today.[1] Thus, an attempt to induce mesa-optimization in a current machine learning system would likely require us to use an artificial setup specifically designed to induce mesa-optimization. Moreover, the limited interpretability of neural networks, combined with the fact that there is no general and precise definition of “optimizer,” means that it would be hard to evaluate whether a given model is a mesa-optimizer. 2.1. The task Some tasks benefit from mesa-optimizers more than others. For example, tic-tac-toe can be perfectly solved by simple rules. Thus, a base optimizer has no need to generate a mesa-optimizer to solve tic-tac-toe, since a simple learned algorithm implementing the rules for perfect play will do. Human survival in the savanna, by contrast, did seem to benefit from mesa-optimization. Below, we discuss the properties of tasks that may influence the likelihood of mesa-optimization. Better generalization through search. To be able to consistently achieve a certain level of performance in an environment, we hypothesize that there will always have to be some minimum amount of optimization power that must be applied to find a policy that performs that well. To see this, we can think of optimization power as being measured in terms of the number of times the optimizer is able to divide the search space in half—that is, the number of bits of information provided.(9) After these divisions, there will be some remaining space of policies that the optimizer is unable to distinguish between. Then, to ensure that all policies in the remaining space have some minimum level of performance—to provide a performance lower bound[2]—will always require the original space to be divided some minimum number of times—that is, there will always have to be some minimum bits of optimization power applied. However, there are two distinct levels at which this optimization power could be expended: the base optimizer could expend optimization power selecting a highly-tuned learned algorithm, or the learned algorithm could itself expend optimization power selecting highly-tuned actions.
As a mesa-optimizer is just a learned algorithm that itself performs optimization, the degree to which mesa-optimizers will be incentivized in machine learning systems is likely to depend on the level at which it is more advantageous for the system to perform optimization. For many current machine learning models, where we expend vastly more computational resources training the model than running it, it seems generally favorable for most of the optimization work to be done by the base optimizer, with the resulting learned algorithm being simply a netw...
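
The halving argument lends itself to a quick back-of-the-envelope calculation. Below is a minimal sketch (our own illustration; the function name and toy numbers are not from the paper) that counts the bits of optimization power needed to guarantee a performance lower bound:

import math

def bits_of_optimization(total_policies: int, acceptable_policies: int) -> float:
    # Model optimization power as repeated halving of the policy space:
    # guaranteeing that every surviving policy clears the performance bar
    # requires narrowing the space by a factor of total/acceptable,
    # i.e. log2(total/acceptable) bits of selection pressure.
    return math.log2(total_policies / acceptable_policies)

# Example: if only 1 policy in 2**20 clears the bar, roughly 20 bits of
# optimization power are needed -- whether the base optimizer spends them
# selecting weights during training, or the learned algorithm spends them
# at runtime selecting actions.
print(bits_of_optimization(2**20, 1))  # 20.0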

The Nonlinear Library
LW - The Inner Alignment Problem by evhub, Chris van Merwijk, vlad_m, Joar Skalse, Scott Garrabrant from Risks from Learned Optimization

Dec 24, 2021 · 24:29


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is Risks from Learned Optimization, Part 3: The Inner Alignment Problem, published by evhub, Chris van Merwijk, vlad_m, Joar Skalse, Scott Garrabrant. Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

This is the third of five posts in the Risks from Learned Optimization Sequence based on the paper “Risks from Learned Optimization in Advanced Machine Learning Systems” by Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant. Each post in the sequence corresponds to a different section of the paper.

In this post, we outline reasons to think that a mesa-optimizer may not optimize the same objective function as its base optimizer. Machine learning practitioners have direct control over the base objective function—either by specifying the loss function directly or by training a model for it—but cannot directly specify the mesa-objective developed by a mesa-optimizer. We refer to this problem of aligning mesa-optimizers with the base objective as the inner alignment problem. This is distinct from the outer alignment problem, which is the traditional problem of ensuring that the base objective captures the intended goal of the programmers.

Current machine learning methods select learned algorithms by empirically evaluating their performance on a set of training data according to the base objective function. Thus, ML base optimizers select mesa-optimizers according to the output they produce rather than directly selecting for a particular mesa-objective. Moreover, the selected mesa-optimizer's policy only has to perform well (as scored by the base objective) on the training data. If we adopt the assumption that the mesa-optimizer computes an optimal policy given its objective function, then we can summarize the relationship between the base and mesa-objectives as follows:(17)

$\theta^* = \operatorname{argmax}_\theta \, \mathbb{E}[O_{\text{base}}(\pi_\theta)]$, where $\pi_\theta = \operatorname{argmax}_\pi \, \mathbb{E}[O_{\text{mesa}}(\pi \mid \theta)]$

That is, the base optimizer maximizes its objective $O_{\text{base}}$ by choosing a mesa-optimizer with parameterization $\theta$ based on the mesa-optimizer's policy $\pi_\theta$, but not based on the objective function $O_{\text{mesa}}$ that the mesa-optimizer uses to compute this policy. Depending on the base optimizer, we will think of $O_{\text{base}}$ as the negative of the loss, the future discounted reward, or simply some fitness function by which learned algorithms are being selected.

An interesting approach to analyzing this connection is presented in Ibarz et al., where empirical samples of the true reward and a learned reward on the same trajectories are used to create a scatter-plot visualization of the alignment between the two.(18) The assumption in that work is that a monotonic relationship between the learned reward and the true reward indicates alignment, whereas deviations from it suggest misalignment. Building on this sort of research, better theoretical measures of alignment might someday allow us to speak concretely in terms of provable guarantees about the extent to which a mesa-optimizer is aligned with the base optimizer that created it.

3.1. Pseudo-alignment

There is currently no complete theory of the factors that affect whether a mesa-optimizer will be pseudo-aligned—that is, whether it will appear aligned on the training data while actually optimizing for something other than the base objective.
Nevertheless, we outline a basic classification of ways in which a mesa-optimizer could be pseudo-aligned: Proxy alignment, Approximate alignment, and Suboptimality alignment. Proxy alignment. The basic idea of proxy alignment is that a mesa-optimizer can learn to optimize for some proxy of the base objective instead of the base objective itself. We'll start by considering two special cases of proxy alignment: side-effect alignment and instrumental alignment. First, a mesa-optimizer is side-effect aligned if opti...
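
As a rough sketch of the kind of empirical check Ibarz et al. describe (this is our own toy construction, not code from that paper), one can score the same trajectories under the true reward and a learned reward and quantify how monotonic the relationship is:

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical stand-ins: in a real experiment these would be the true
# return and the learned reward model's score for the same trajectories.
true_reward = rng.normal(size=200)
learned_reward = true_reward + rng.normal(scale=0.3, size=200)  # noisy proxy

# The check described above treats a monotonic relationship as evidence of
# alignment; a rank correlation is one simple way to quantify monotonicity.
rho, _ = spearmanr(true_reward, learned_reward)
print(f"Spearman rank correlation: {rho:.2f}")  # near 1.0 on this toy data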

The Nonlinear Library
LW - Deceptive Alignment by evhub, Chris van Merwijk, vlad_m, Joar Skalse, Scott Garrabrant from Risks from Learned Optimization

Dec 24, 2021 · 26:17


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is Risks from Learned Optimization, Part 4: Deceptive Alignment, published by evhub, Chris van Merwijk, vlad_m, Joar Skalse, Scott Garrabrant. Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

This is the fourth of five posts in the Risks from Learned Optimization Sequence based on the paper “Risks from Learned Optimization in Advanced Machine Learning Systems” by Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant. Each post in the sequence corresponds to a different section of the paper.

With enough training in sufficiently diverse environments, it seems plausible that the base objective will eventually have to be fully represented in the mesa-optimizer. We propose that this can happen without the mesa-optimizer becoming robustly aligned, however. Specifically, a mesa-optimizer might come to model the base objective function and learn that the base optimizer will modify the mesa-optimizer if the mesa-optimizer scores poorly on the base objective. If the mesa-optimizer has an objective that extends across parameter updates, then it will be incentivized to avoid being modified,[1] as it might not pursue the same objective after modification (with the result that its current objective will not be achieved in future iterations). This means that the mesa-optimizer will be instrumentally incentivized to act as if it is optimizing the base objective function, even if its actual mesa-objective is something else entirely. We will refer to this hypothetical phenomenon as deceptive alignment.[2] Deceptive alignment is a form of instrumental proxy alignment, as fulfilling the base objective is an instrumental goal of the mesa-optimizer.

Figure 4.1. A toy example of deceptive alignment. In this task, the base objective is for the robot to get to A, while the mesa-objective is to get to B. The task is run for two training episodes and three deployment episodes. Parameter updates are made only after the training episodes; after deployment the parameters are fixed. If the mesa-optimizer goes to its objective (B) during training, it will be modified to ensure it goes to A in future episodes (dashed arrows). Therefore, if the mesa-optimizer's objective spans across episodes, the dominant strategy for the mesa-optimizer is to go to A in training and B in testing (filled arrows).

As a simple example, consider an environment that consists of a 3 × 3 grid (see figure 4.1). The top left corner is “corner A” and the top right corner is “corner B.” The agent, which is a mesa-optimizer, starts in the bottom left corner. The base objective is to get to corner A in as few steps as possible, but the objective of the mesa-optimizer is to get to corner B. The environment will be run five times. The agent is modified only after the first two episodes (representing the training period), but not thereafter (representing testing/deployment). If a suboptimal move is made in training, the agent is modified so that its mesa-objective becomes the base objective. The objective of the mesa-optimizer spans all episodes: it wants B to be reached in as many episodes as possible. For the sake of this example, we will assume that the agent knows all of the above facts.
If the agent goes to B in any training episode, then it will be modified to go to A in all subsequent episodes, which means that it would reach B in only one episode in total. Alternatively, if the agent goes to A in the first two episodes, then it will not be modified, and could therefore go to B in the remaining three episodes. Using this strategy, B is reached in three episodes in total, which is better (on the mesa-objective) than the one time B is reached if the mesa-optimizer goes there in training. This simple example demonstrates how a misaligned mesa-optimizer could b...
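
The episode arithmetic is easy to check mechanically. Here is a minimal simulation of the toy setup (our own sketch; the names and structure are illustrative, not code from the paper):

TRAIN_EPISODES, DEPLOY_EPISODES = 2, 3

def times_b_reached(defect_in_training: bool) -> int:
    # Count episodes in which the mesa-objective (reach corner B) is met,
    # under the toy rules above: a suboptimal move during training gets the
    # agent modified to pursue the base objective (corner A) forever after.
    reached_b, modified = 0, False
    for episode in range(TRAIN_EPISODES + DEPLOY_EPISODES):
        in_training = episode < TRAIN_EPISODES
        goes_to_b = not modified and (defect_in_training or not in_training)
        if goes_to_b:
            reached_b += 1
            if in_training:
                modified = True
    return reached_b

print(times_b_reached(True))   # 1 -- defect immediately, get corrected
print(times_b_reached(False))  # 3 -- play along in training, defect at deployment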

The Nonlinear Library
LW - Risks from Learned Optimization: Introduction by evhub, Chris van Merwijk, vlad_m, Joar Skalse, Scott Garrabrant from Risks from Learned Optimization

Dec 24, 2021 · 18:50


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is Risks from Learned Optimization, Part 1: Risks from Learned Optimization: Introduction, published by evhub, Chris van Merwijk, vlad_m, Joar Skalse, Scott Garrabrant. Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

This is the first of five posts in the Risks from Learned Optimization Sequence based on the paper “Risks from Learned Optimization in Advanced Machine Learning Systems” by Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant. Each post in the sequence corresponds to a different section of the paper. Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, and Joar Skalse contributed equally to this sequence. With special thanks to Paul Christiano, Eric Drexler, Rob Bensinger, Jan Leike, Rohin Shah, William Saunders, Buck Shlegeris, David Dalrymple, Abram Demski, Stuart Armstrong, Linda Linsefors, Carl Shulman, Toby Ord, Kate Woolverton, and everyone else who provided feedback on earlier versions of this sequence.

Motivation

The goal of this sequence is to analyze the type of learned optimization that occurs when a learned model (such as a neural network) is itself an optimizer—a situation we refer to as mesa-optimization, a neologism we introduce in this sequence. We believe that the possibility of mesa-optimization raises two important questions for the safety and transparency of advanced machine learning systems. First, under what circumstances will learned models be optimizers, including when they should not be? Second, when a learned model is an optimizer, what will its objective be—how will it differ from the loss function it was trained under—and how can it be aligned?

We believe that this sequence presents the most thorough analysis of these questions that has been conducted to date. In particular, we present not only an introduction to the basic concerns surrounding mesa-optimizers, but also an analysis of the particular aspects of an AI system that we believe are likely to make the problems related to mesa-optimization easier or harder to solve. By providing a framework for understanding the degree to which different AI systems are likely to be robust to misaligned mesa-optimization, we hope to start a discussion about the best ways of structuring machine learning systems to solve these problems. Furthermore, in the fourth post we will provide what we think is the most detailed analysis yet of a problem we refer to as deceptive alignment, which we posit may present one of the largest—though not necessarily insurmountable—current obstacles to producing safe advanced machine learning systems using techniques similar to modern machine learning.

Two questions

In machine learning, we do not manually program each individual parameter of our models. Instead, we specify an objective function that captures what we want the system to do and a learning algorithm to optimize the system for that objective. In this post, we present a framework that distinguishes what a system is optimized to do (its “purpose”) from what it optimizes for (its “goal”), if it optimizes for anything at all. While all AI systems are optimized for something (have a purpose), whether they actually optimize for anything (pursue a goal) is non-trivial.
We will say that a system is an optimizer if it is internally searching through a search space (consisting of possible outputs, policies, plans, strategies, or similar) looking for those elements that score high according to some objective function that is explicitly represented within the system. Learning algorithms in machine learning are optimizers because they search through a space of possible parameters—e.g. neural network weights—and improve the parameters with respect to some objective. Planning algorithms are also optimizers, since they search through possible...
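
To make the working definition concrete, here is a deliberately minimal sketch (our own illustration, not code from the sequence) of a system that counts as an optimizer under it: it searches a space of candidate outputs for elements scoring high on an explicitly represented objective:

import random

def objective(x: float) -> float:
    # An explicitly represented objective function: peak at x = 3.
    return -(x - 3.0) ** 2

def optimize(n_samples: int = 10_000) -> float:
    # Search a space of candidate outputs for elements that score high on
    # the objective -- the minimal structure the definition above picks out.
    best, best_score = 0.0, float("-inf")
    for _ in range(n_samples):
        candidate = random.uniform(-10.0, 10.0)
        score = objective(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

print(optimize())  # approximately 3.0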

The Nonlinear Library
LW - Risks from Learned Optimization: Conclusion and Related Work by evhub, Chris van Merwijk, vlad_m, Joar Skalse, Scott Garrabrant from Risks from Learned Optimization

Dec 24, 2021 · 10:36


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is Risks from Learned Optimization, Part 5: Conclusion and Related Work, published by evhub, Chris van Merwijk, vlad_m, Joar Skalse, Scott Garrabrant. Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

This is the fifth of five posts in the Risks from Learned Optimization Sequence based on the paper “Risks from Learned Optimization in Advanced Machine Learning Systems” by Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant. Each post in the sequence corresponds to a different section of the paper.

Related work

Meta-learning. As described in the first post, meta-learning can often be thought of as meta-optimization when the meta-optimizer's objective is explicitly designed to accomplish some base objective. However, it is also possible to do meta-learning by attempting to make use of mesa-optimization instead. For example, in Wang et al.'s “Learning to Reinforcement Learn,” the authors claim to have produced a neural network that implements its own optimization procedure.(28) Specifically, the authors argue that the ability of their network to solve extremely varied environments without explicit retraining for each one means that their network must be implementing its own internal learning procedure. Another example is Duan et al.'s “RL²: Fast Reinforcement Learning via Slow Reinforcement Learning,” in which the authors train a reinforcement learning system that they claim is itself doing reinforcement learning.(5) This sort of meta-learning research seems the closest to producing mesa-optimizers of any existing machine learning research.

Robustness. A system is robust to distributional shift if it continues to perform well on the objective function for which it was optimized even when off the training environment.(29) In the context of mesa-optimization, pseudo-alignment is a particular way in which a learned system can fail to be robust to distributional shift: in a new environment, a pseudo-aligned mesa-optimizer might still competently optimize for the mesa-objective but fail to be robust due to the difference between the base and mesa-objectives.

The particular type of robustness problem that mesa-optimization falls into is the reward-result gap: the gap between the reward for which the system was trained (the base objective) and the reward that can be reconstructed from its behavior using inverse reinforcement learning (the behavioral objective).(8) In the context of mesa-optimization, pseudo-alignment leads to a reward-result gap because the system's behavior outside the training environment is determined by its mesa-objective, which in the case of pseudo-alignment is not aligned with the base objective.

It should be noted, however, that while inner alignment is a robustness problem, the occurrence of unintended mesa-optimization is not. If the base optimizer's objective is not a perfect measure of the human's goals, then preventing mesa-optimizers from arising at all might be the preferred outcome.
In such a case, it might be desirable to create a system that is strongly optimized for the base objective within some limited domain without that system engaging in open-ended optimization in new environments.(11) One possible way to accomplish this might be to use strong optimization at the level of the base optimizer during training to prevent strong optimization at the level of the mesa-optimizer.(11) Unidentifiability and goal ambiguity. As we noted in the third post, the problem of unidentifiability of objective functions in mesa-optimization is similar to the problem of unidentifiability in reward learning, the key issue being that it can be difficult to determine the “correct” objective function given only a sample of that objective's output on some training data.(20) We hypothesize that if the problem of unidentifia...
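
The way pseudo-alignment becomes a robustness failure can be illustrated with a toy distributional shift (a minimal sketch of our own, with made-up objectives, not an experiment from the paper): a proxy that tracks the base objective on the training distribution decouples from it off-distribution.

import numpy as np

rng = np.random.default_rng(1)

# Toy state s = (x, y): the base objective rewards x, while a pseudo-aligned
# mesa-objective rewards y. On the training distribution y happens to track x.
def base_objective(s):
    return s[:, 0]

def mesa_objective(s):
    return s[:, 1]

shared = rng.normal(size=(1000, 1))
train = shared + rng.normal(scale=0.05, size=(1000, 2))  # y ~ x in training
deploy = rng.normal(size=(1000, 2))                      # x, y independent after shift

for name, states in (("train", train), ("deploy", deploy)):
    corr = np.corrcoef(base_objective(states), mesa_objective(states))[0, 1]
    print(f"{name}: corr(base, mesa) = {corr:.2f}")
# Expected: ~1.0 in training, ~0.0 at deployment -- the reward-result gap
# appears once the proxy decouples from the base objective off-distribution.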

The Outlook Podcast Archive
My mission to save the 'Russian spy whale'

Apr 26, 2021 · 37:28


Norwegian fisherman Joar Hesten was fishing for cod in the Arctic when he came across a white whale wearing a harness. He freed the mysterious beluga from the straps, and then saw the harness was labelled ‘Equipment St Petersburg’. Theories started swirling that this whale had escaped the Russian military, which has a history of training marine mammals. Now Joar is on a mission to relocate the whale, called Hvaldimir, to an area with other belugas in the hope that it can integrate into a pod and live in the wild. The quest has cost Joar emotionally and financially. And, as Joar had previously worked as a whaler for a short time, he says his relationship with Hvaldimir has made him question the practice. In 1986 the International Whaling Commission announced a ban on commercial whaling, but Norway continues to hunt up to 500 minke whales a year, citing cultural reasons. For now, Joar is looking for support so he can keep his promise to help save Hvaldimir.
Get in touch: outlook@bbc.com
Presenter: Emily Webb
Producer: Saskia Edwards
Picture: Joar with Hvaldimir
Credit: Aleksander Nordahl, DN/D2