Is Sveinung an insider or an outsider? Although he has lived in Gjøvik, Moss, and the USA, we agree that he is fairly well included compared with a great many people in our increasingly multicultural country. He is, at any rate, a white man pushing fifty with a permanent position at the country's leading business school. Still, he has spent much of his life in the margins through his work with drug addicts at the Tyrili Foundation. The guest in this episode has considerable experience with, and expertise on, social exclusion, especially as it relates to cross-cultural children and young people. Warsame Ali, until recently a researcher at FHI and now head of the foundation Flexid, became nationally known when he publicly agreed with parts of the diagnosis (though not the solutions) that Sylvi Listhaug offered on youth crime in Oslo. Warsame has unique experience of life as a cross-cultural person in the capital, and has worked both academically and as a public debater to promote inclusion and counter exclusion among children and young people with cross-cultural backgrounds. He talks about his experience of the media storm around the story published by NRK and digs into what he agrees and disagrees with Sylvi Listhaug about. Sveinung admits that he finds it difficult to talk about issues related to immigration, and Warsame argues that we suffer from a troubling reluctance to touch the subject, which makes these problems harder to address. Warsame takes us through the history of immigration in Norway and talks about the need to develop a better shared language for discussing it. We talk about different tools for different kinds of exclusion in different phases of integration, and Warsame describes the work of Flexid, which recently gained Ferd Sosiale Entreprenører among its backers.
He emphasizes the role of health personnel and describes FHI research on the subject, while Lars Jacob challenges him to talk about degrees of exclusion and the toolbox for different kinds of problems. Sveinung challenges Warsame to talk about growing up with Somali parents at Haugenstua in the 1990s and to draw parallels to today's situation. Lars Jacob talks about integrating girl footballers on Oslo's east side, brings up Leo Ajkic's TV explorations of hatred between groups, and challenges Warsame to imagine himself as minister of inclusion. Sveinung reminisces about his football coaching career and his experiences with inclusion in that context, and Warsame argues that the discomfort is natural. We talk about migration stress, stress the value of empathy, and talk about everything we take for granted. Finally, Lars Jacob asks Warsame to take us into the research on mental ill-health linked to migration and exclusion, and he talks about the value of social support and belonging. Hosted on Acast. See acast.com/privacy for more information.
Today's guest is Nick Bostrom, a prominent philosopher and the Director of the Future of Humanity Institute (FHI) at Oxford University. He also leads the Governance of AI Program (GovAI) within the FHI. Renowned globally, his expertise spans existential risk, the anthropic principle, ethics surrounding human enhancement, whole brain emulation, superintelligence risks, and the reversal test. In this special episode of the “AI Futures” series on the AI in Business podcast, we offer an exclusive sample from the Trajectory podcast, hosted by Emerj CEO and Head of Research Daniel Faggella. This excerpt features a fascinating conversation with Nick centering on the concept of a “worthy successor”—post-human intelligences that could expand beyond human capabilities while preserving essential human values. Drawing from his latest book, Deep Utopia, Bostrom explores the potential for AI to “go right,” offering a rare glimpse of optimism for the future of artificial intelligence and its alignment with moral progress. If you're interested in getting more perspectives on AI's longer term impact on business and society, be sure to tune into the Trajectory podcast. You can find the YouTube and podcast links here: emerj.com/tj2
As a brand-new leader, Camilla Stoltenberg completely lost her composure on a couple of occasions. What has (leadership) life since taught her about how to handle and motivate her own staff - and the Norwegian people? In this week's Preik, the former FHI director shares some of her experiences from helping lead Norway through our generation's greatest acute crisis. We also talk about: the danger of settled consensus; leading individualists; the difference between hope and optimism; and advice for leaders who dread taking part in public debate. #AlltidNysgjerrigAldriUtlært Subscribe and listen to Preik via Spotify, Apple Podcasts, Acast, Player FM, or wherever you listen to podcasts. Featuring: Camilla Stoltenberg, director of the research foundation NORCE; Jan Erik Kjerpeseth, CEO of Sparebanken Vest. Hosted on Acast. See acast.com/privacy for more information.
Jakob King leads a Q&A with an organization called Family Hope International (FHI), focusing on FHI's impact and how ECC can support Christians in other nations.
If you're enjoying the content, please like, subscribe, and comment! Please consider supporting the show! https://anchor.fm/worldxppodcast/support GoFundMe: https://gofund.me/0ebafa52 Evans Soccer Academy: https://evanssoccer.com/ Ian Evans is a Program Officer for the Sports, Cultural and Youth programs at FHI 360. His team serves as the implementing partner for the U.S. Department of State Sports Visitor and Sports Envoy program. He works closely with U.S. and international non-elite youth athletes. His passion and area of focus consists of utilizing youth sports as a vehicle to promote intercultural relationships and communication. Ian is a former collegiate and semi-professional soccer player and coach. He is co-founder of Evans Soccer Academy and Statloba. ______________________ Follow us! @worldxppodcast Instagram - https://bit.ly/3eoBwyr @worldxppodcast Twitter - https://bit.ly/2Oa7Bzm Spotify - http://spoti.fi/3sZAUTG Anchor - http://bit.ly/3qGeaH7 YouTube - http://bit.ly/3rxDvUL #youthsoccer #soccer #premierleague #pingpongdiplomacy #youth #youthsports #coaching #sports #diplomacy #statedepartment #studentexchange #students #envoy #diplomat #embassy #podcastshow #longformpodcast #podcasts #podcaster #newpodcast #podcasting #newshow #worldxppodcast --- Support this podcast: https://podcasters.spotify.com/pod/show/worldxppodcast/support
Miriam and Frode have decided to stop making Antijantepodden, at least for now. In this episode they tell the story of how the podcast started and reflect on important episodes and guests. They also share their thoughts on the state of the world right now, and on the lies and exaggerations meant to turn ordinary people into each other's enemies. The couple have concluded that both the political right and the political left are, at bottom, pushing us in a more collectivist direction, and that we should therefore look for an entirely different foundation for society. In the episode they share recommendations for books and people worth following. They will personally keep following what happens in the world, but will devote considerably more of their time to keeping animals, growing food, and local relationships in Panama, and they will need to work to earn an income.
SOURCES DISCUSSED:
› antijanteboka.com: Du skal lide for fellesskapet
› norge er et konformt land (search)
› svein østvik munnbind (search)
› vl.no: Nok mobbing og uthenging i norsk offentlighet! (Miriam's op-ed)
› 6 Oct 2020 - Smitten øker i Europa – WHO advarer mot korona-apati - NRK
 • “Likegyldighet til koronaviruset er økende i Europa, advarer WHOs Europasjef Hans Kluge. Han mener folk må motiveres på nye måter for å få kontroll over den økende smitten av det dødelige viruset.”
› who financed hitlers nazi warmachine (search)
› operation paperclip (search)
› npc non player character (search)
 • hylic people (search)
Digital ID:
• 2015: Kun Gardermoen har teknologi for ansiktsgjenkjenning i Norge
 ◦ “Nasjonalt ID-senter frykter at så mange som 27.000 utlendinger, som er eller har vært i Norge, har bodd her under falsk identitet. Nå mener det statlige ekspertorganet at det er på høy tid å ta i bruk flere og bedre verktøy for ID-sjekk.”
• Snart kan du bruke mobilen som legitimasjon
 ◦ “Fra mandag blir det mulig å bruke mobilen som legitimasjon. Det skjer gjennom en ny tjeneste i BankID-appen.”
 ◦ “De to statlige tjenestene tror dette kan rette på at stadig flere mangler gyldig legitimasjon når de trenger det.”
• Snart kan ansiktet ditt bli boardingkortet på flyplassen
 ◦ “Disse skjermene bruker en biometrisk teknologi som identifiserer reisende ved å skanne deres ansikt – og skal erstatte papir- og telefonboardingkort.”
 ◦ “Nei, å skanne ansiktet i stedet for boardingkort, er frivillig.”
 ◦ “På sikt kan ansiktsgjenkjenning kobles til andre reisedokumenter – som pass.”
 ◦ “Dette kan være starten på en total dokumentasjonserstatning i fremtiden.”
• Brukte tre år på hjelpe samboeren å få BankID: – Staten må ta over
 ◦ “Elektronisk identifikasjon er nøkkelen i et digitalisert samfunn. BankID er den mest utbredte.”
 ◦ “Samtidig er denne nøkkelen vanskelig å få, fordi bankene mangler et samordnet regelverk og i tillegg må forholde seg til to forskjellige lover.”
 ◦ “Det som Aakre mener gir mest mening, er at Skatteetaten tar over eID-systemet på høyeste nivå.”
• Nye EU-regler tvinger staten til å ta eID-ansvar – krever individuell behandling i mellomtiden
 ◦ “Både Finans Norge og diskrimineringsombudet mener det er uholdbart at bankene kan nekte folk tilgang til elektronisk ID.”
• Slik skal man gjøre idretten tryggere og forhindre at overgripere blir trenere
 ◦ “Opprullingen førte til en diskusjon om hvor tungvint ordningen med politiattester i idrettsklubber er.”
 ◦ “Nå innføres en etterlengtet, digital ordning.”
Climate change, health, and “pandemics”:
• Pandemier og naturen: Hva er sammenhengen?
• Risikoen for pandemier kan økes av klimaendringer og naturødeleggelser
• Pandemier og globale miljøendringer henger sammen
• Eksperter advarer: Klimaendringer kan skape nye og farlige pandemier
• Klima, sult og sårbarhet
 ◦ “Klimaendringene gir økt risiko for utbredelse av sykdommer og større globale helseutfordringer som pandemier og feilernæring.”
• Klimaendringer øker risikoen for nye sykdommer
Heatstroke, or not…:
• Kona til savnede Michael Mosley: Har ikke gitt opp håpet
 ◦ “Tre dager etter at BBC-programleder Michael Mosley forsvant på den greske øyen Symi, fortsetter letingen.”
• BBC-programleder Michael Mosley funnet død i Hellas
• Savnet britisk TV-profil funnet død
 ◦ “Den britiske TV-profilen Michael Mosley er funnet død på den greske øya Symi. Mosley døde mest sannsynlig av heteslag og dehydrering, ifølge ordføreren.”
• Michael Mosley døydde av naturlege årsaker
 ◦ “TV-programleiaren og legen Michael Mosley døydde av naturlege årsaker, seier gresk politi til BBC.”
• Turist-dødsfallene i Hellas: – Påvirker oss alle
 ◦ “Flere har mistet livet i ekstreme temperaturer i Hellas den siste tiden. Norsk overlege advarer mot ferieturen.”
 ◦ “I starten av juni ble den britiske kjendislegen Michael Mosley funnet død etter en spasertur langs stranden.”
› Tvangsinnlagt etter heftig Facebook-posting
 • “Saken har fått stor oppmerksomhet. I tillegg til nevnte Steigan, har nettsteder som Document, Antijantepodden og internasjonale «alternative» nettsteder omtalt saken.”
› collectivism vs individualism (search)
 • archive.org: The Chasm - Collectivism vs. Individualism by G Edward Griffin
 • realityzone.com: The Chasm - Collectivism vs. Individualism by G Edward Griffin
› Frp og Ap vil ha forsøk med tvangsvaksinasjon i Oslo
› Frp vil at alle ansatte i nye Moss kommune skal være vaksinerte
› khmer rouge (search)
› kundalini yoga (search)
› serpent energy (search)
› all seeing eye (search)
› age of aquarius (search)
› Mr X - The Great Reset(s)
› youtube.com: From New Age to Jesus
› the white pill jesus (search)
› Ole Østlid
 • verstat.substack.com
› Jarle Aarstad
A HEALTHIER POPULATION AFTER THE “VACCINATION”?:
After COVID, but before the injections:
› Norge har ikke hatt flere dødsfall enn normalt
 • “Korona påvirker dødstallene ulikt i forskjellige land. I Norge er det ikke flere som dør enn det vanligvis er.”
After the injections began:
› Flere blir syke: – Vaksinen er ikke årsaken
 • “Flere blir innlagt på akuttmottak og dødeligheten stiger i Norge. – Ingen grunn til å mistenke koronavaksinen, sier assisterende helsedirektør Espen Rostrup Nakstad.”
› Mange virus i omløp: - Tre og fire ulike virus i samme prøve
 • “Det er nå et stort og økende omfang av luftveisinfeksjoner i befolkningen med samtidig forekomst av covid-19, influensa og RS-virusinfeksjoner, skrev FHI i sin ukesrapport før jul.”
› Flere barn i psykiatrien
 • “Økningen i psykiatrien for barn og unge fra 2019 til 2022 var på totalt 26 prosent.”
› Vekker bekymring: Flere enn ventet døde av hjertesykdom
 • “Hjerte- og karsykdommer har kostet langt flere menneskeliv enn ventet de to siste årene. Etter mer enn 50 år med nedgang i dødeligheten, har utviklingen nå snudd.”
› Økt sykefravær etter pandemien
 • “Lege- og egenmeldt sykefravær ligger på et høyere nivå nå enn før pandemien.”
› Reagerer på høye dødstall blant unge
 • “For første gang på minst 20 år er det registrert overdødelighet blant unge i Norge.”
› FHI-rapport: Bekrefter høye dødstall blant unge
 • “Det døde uvanlig mange i alderen 20–39 år i fjor. Det viser en fersk rapport fra Folkehelseinstituttet.”
› Stor økning i sykmeldinger for psykiske lidelser – verst blant unge
 • “I løpet av fem år har sykefravær grunnet psykiske lidelser økt med 44 prosent.”
› Flere unge døde av sykdom: Forskere slår alarm
 • “Folkehelseinstituttets rapport om hva nordmenn døde av i fjor, er dyster lesning. For første gang på flere år er det registrert høyere dødelighet blant unge i alderen 1–39 år.”
› Rekordmange smittet av flått
 • “Aldri før har så mange blitt smittet av sykdommen borreliose som i år, ifølge FHI.”
› Flere hundre millioner i økte utgifter til migrenepasienter
 • “Antallet migrenepasienter som får sykepenger øker og har kostet staten flere hundre millioner kroner mer på ett år.”
› Høyeste sykefravær på 15 år
STORIES ONE MUST NOT FORGET:
› 13 January 2021 - Ingen vaksine mot overtro — Lars Gilberg, Vårt Land
 • “En konspirasjonsteori som enkelte faktisk tar seriøst er at koronapandemien er konstruert av store globale aktører som vil kontrollere oss som individer.”
 • “Å ta i bruk en nyutviklet vaksine er ikke veldig forskjellig fra å ta en nyutviklet medisin. Man må stole på at utprøvingen har vært god nok og at alle regler ble fulgt før de ble godkjent.”
 • “Å spre vaksineskepsis uten å fortelle hva vaksiner har gjort for oss, kan i ytterste konsekvens føre til at andre mennesker dør.”
 • “Går man ett steg videre og dyrker konspirasjonsteorier, plasserer man seg selv i en kategori som evolusjonen har en tendens til å luke ut.”
› 18 December 2021 - Vi blir holdt som gisler av en minoritet — Morten Kinander, Institutt for rettsvitenskap og styring, BI
 • “Er det mulig å begrunne forskjellsbehandling av de vaksinerte og de uvaksinerte? Ja. Burde vi ta i bruk hardere virkemidler overfor uvaksinerte? Absolutt. Er dette i tråd med rettsstaten? Selvfølgelig.”
› Funnet død etter “vaksineringen” startet
› Livsfarlig propaganda på ung.no
MORE ON COVID:
Follow NRK's foremost propaganda journalist on COVID:
• Jan-Erik Wilthil
 ◦ Slår alarm: Rekordmange til lege med utmattelse
 ◦ Reagerer på høye dødstall blant unge
 ◦ Koronaforsker: Advarte om høye dødstall
 ◦ Flere unge døde av sykdom: Forskere slår alarm
 ◦ Professor om covid-19: – Mye farligere enn influensa
 ◦ Ekspertene svarer: Bør unge ta koronavaksinen?
 ◦ Gunhild Alvik Nyborg advarte Norge om koronaviruset
RECOMMENDED BOOKS:
› The Most Dangerous Superstition - Larken Rose
 • The Jones Plantation (2012)
 • Jones Plantation (2023)
  ◦ jonesplantationfilm.com
› Brotherhood of Darkness - Stanley Monteith
 • Read on archive.org
› The Light That Was Dark: From the New Age to Amazing Grace - Warren B. Smith
› Norges hemmelige hær - historien om Stay behind
 • Read on archive.org
RECOMMENDED PODCASTERS / JOURNALISTS:
› James Corbett @ corbettreport.com
 • Membership / support
› Dr. John Campbell
RELATED ANTIJANTEPODDEN EPISODES:
Our three most important episodes(?):
› AJP 47 | James Corbett — COVID-19 is a step towards a prison state
› AJP 52 | Patrick Wood — A new economic order is being enforced
› AJP 95 | G. Edward Griffin — Our real enemy is collectivism
 • video
Doctors, researchers, and health personnel:
› AJP 16 | Kjetil Elvevold — Vi vet for lite om vaksinenes langtidsvirkninger
› AJP 17 | Peter Dvergsdal — Vi har effektive medisiner mot COVID-19
› AJP 22 | Dag Bratlid — For stor risiko å vaksinere barn
› AJP 30 | Runi Rogers — Det er ikke bare uvaksinerte som sprer viruset
› AJP 43 | Mattias Desmet — People are being hypnotised
› AJP 44 | Anne Godding — Myndighetene driver vaksinepropaganda
› AJP 45 | Aleksander Bjargo — Vi glemmer at vi er en del av naturen
› AJP 61 | Sjur Even Aunmo — Fikk sparken for å snakke om bivirkninger
› AJP 78 | Aseem Malhotra — More than 1 in 1000 experience severe side-effects
› AJP 85 | Jørn Eikemo — Kjemper mot vaksinetvang
› AJP 93 | Sjur Even Aunmo — Planter inneholder giftstoffer som kan ødelegge helsen
› AJP 108 | Eva Fjeldstad — 80-90 % av maten i norske butikker gjør oss syke
› AJP 119 | Jan Raa — Vi må gjenopprette økosystemet i tarmen
› AJP 123 | Jarle Aarstad — Robuste funn rundt overdødelighet etter COVID-vaksinering
› AJP 132 | Jarle Aarstad — Dødsårsaksregisteret må kobles opp mot vaksinasjonsdata for å finne sannheten om overdødelighet blant unge
› AJP 134 | Michael Nehls — The lockdowns and the “vaccines” were targeted attacks on our brain
› AJP 137 | Kjetil Elvevold — COVID-vaksinene gav ingen fordeler
› AJP 138 | Eva Fjeldstad — Psykiske problemer kan komme fra tarmen
Psychiatrists and coaches:
› AJP 39 | Mark McDonald — Fear is being exploited
› AJP 69 | Mark McDonald — People are stuck in irrational fear
› AJP 83 | Mark McDonald — Anger is a healthy sign of boundaries being crossed
› AJP 113 | Adam Njå — Vi må forholde oss til reelle kjønnsforskjeller
› AJP 116 | Meredith Miller — The world is in a narcissistic abuse cycle
Lawyers and jurists:
› AJP 4 | Marius Reikerås mener at karantenehotell er ulovlig
› AJP 9 | Marius Reikerås mener at barne- og familieministeren bør gå av
› AJP 31 | Helge Morset — Får ikke vite begrunnelsen for nedstengingen
› AJP 33 | Helge Morset — Koronapasset skulle bare gi lettelser
› AJP 41 | Marius Reikerås — Tiltakene har sendt folk ut i ekstrem fattigdom
› AJP 50 | Jørgen Heier — Sleipe tilsnikelser i høringsnotater
› AJP 88 | Jørgen Heier — Søk om erstatning for vaksineskader
› AJP 111 | Morten Walløe Tvedt — Alvorlige WHO-endringer bak lukkede dører
› AJP 121 | Morten Walløe Tvedt — Departementet holder tilbake viktig informasjon
Episodes on climate:
› AJP 67 | Gjertrud Røyland — Mener klimanarrativet er uvitenskapelig
› AJP 125 | Ole Østlid — Klimastatistikk ble fjernet fra yr.no
› AJP 141 | Ole Henrik Ellestad — Klimadebatten er preget av ensretting og voldsomme overdrivelser
› AJP 143 | Ole Henrik Ellestad — Skjender naturen for å «redde» kloden
› AJP 145 | Ole Østlid — Absurd rapportering om vær og klima
Episodes on the economy:
› AJP 21 | Hans Eirik Olav — Er verdensøkonomien i ferd med å bryte sammen?
› AJP 79 | Hans Eirik Olav — Verdensøkonomien vil kollapse
› AJP 87 | Rune Østgård — Inflasjon stjeler 7 % av pengene våre hvert år
› AJP 90 | John Thore Stub Sneisen — Vi trenger en desentraliserende revolusjon
› AJP 96 | Gregard Mikkelborg — Gull er det tryggeste pengesystemet vi har
Human episodes:
› AJP 15 | Yusela Del Carmen Gjæringen — Fikk livet snudd opp ned av koronavaksine-skade
› AJP 34 | Linda Omnes — Barna utsettes for et enormt press
› AJP 37 | Cathrine Johansen — Kastet ut av akutten fordi hun var uvaksinert
› AJP 38 | Trine Wilhelmine Rønnevig — Ble uthengt i lokalavisen for vaksinemotstand
Episodes on spirituality:
› AJP 131 | Terry Wolfe — Cynical deception among truth seekers
› AJP 133 | Terry Wolfe — People are herded into using drugs
› AJP 135 | Terry Wolfe — Authorities will disclose “special knowledge” about aliens
› AJP 142 | Hrvoje Morić — Multipolarity is bringing us closer to a one world government
Other related episodes:
› AJP 1 | Svein Østvik om munnbind-brenningen
› AJP 51 | Janna Johnsen — Føler seg ikke trygg i Norge
› AJP 63 | Frode — Vil vekke folk med eventyr
› AJP 92 | James Corbett — We need to model disobedience
 • video
› AJP 112 | Nils Petter Gleditsch og Anders Hellebust — Brudd på ytrings- og forskningsfriheten
› AJP 118 | Leo Lyon Zagami — Secret societies are very influential in Norway
› AJP 146 | Ben Massmann — Found the perfect country to build his community
Download the episode
Recorded: 2024-09-04
Published: 2024-09-06
Support Antijantepodden? If you like the work we do and want to help us make more episodes, find out how you can give something back at antijantepodden.com! Sign up for our newsletter
The World Health Organization has declared the mpox virus an international health emergency. Many have died, and the infection has now reached the Nordic countries. What kind of disease is this? And is it dangerous? With commentator Joacim Lund and Tone Bruun, senior consultant at the Department of Infection Surveillance and Vaccines at FHI. Photo: REUTERS/Arlette Bashizi.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to Aether - Independent LLM Agent Safety Research Group, published by RohanS on August 24, 2024 on The Effective Altruism Forum. The basic idea Aether will be a small group of talented early-career AI safety researchers with a shared research vision who work full-time with mentorship on their best effort at making AI go well. That research vision will broadly revolve around the alignment, control, and evaluation of LLM agents. There is a lot of latent talent in the AI safety space, and this group will hopefully serve as a way to convert some of that talent into directly impactful work and great career capital. Get involved! 1. Submit a short expression of interest here by Fri, Aug 23rd at 11:59pm PT if you would like to contribute to the group as a full-time in-person researcher, part-time / remote collaborator, or advisor. (Note: Short turnaround time!) 2. Apply to join the group here by Sat, Aug 31st at 11:59pm PT. 3. Get in touch with Rohan at rs4126@columbia.edu with any questions. Who are we? Team members so far Rohan Subramani I recently completed my undergrad in CS and Math at Columbia, where I helped run an Effective Altruism group and an AI alignment group. I'm now interning at CHAI. I've done several technical AI safety research projects in the past couple years. I've worked on comparing the expressivities of objective-specification formalisms in RL (at AI Safety Hub Labs, now called LASR Labs), generalizing causal games to better capture safety-relevant properties of agents (in an independent group), corrigibility in partially observable assistance games (my current project at CHAI), and LLM instruction-following generalization (part of an independent research group). 
I've been thinking about LLM agent safety quite a bit for the past couple of months, and I am now also starting to work on this area as part of my CHAI internship. I think my (moderate) strengths include general intelligence, theoretical research, AI safety takes, and being fairly agentic. A relevant (moderate) weakness of mine is programming. I like indie rock music :). Max Heitmann I hold an undergraduate master's degree (MPhysPhil) in Physics and Philosophy and a postgraduate master's degree (BPhil) in Philosophy from Oxford University. I collaborated with Rohan on the ASH Labs project ( comparing the expressivities of objective-specification formalisms in RL), and have also worked for a short while at the Center for AI Safety (CAIS) under contract as a ghostwriter for the AI Safety, Ethics, and Society textbook. During my two years on the BPhil, I worked on a number of AI safety-relevant projects with Patrick Butlin from FHI. These were focussed on deep learning interpretability, the measurement of beliefs in LLMs, and the emergence of agency in AI systems. In my thesis, I tried to offer a theory of causation grounded in statistical mechanics, and then applied this theory to vindicate the presuppositions of Judea Pearl-style causal modeling and inference. Advisors Erik Jenner and Francis Rhys Ward have said they're happy to at least occasionally provide feedback for this research group. We will continue working to ensure this group receives regular mentorship from experienced researchers with relevant background. We are highly prioritizing working out of an AI safety office because of the informal mentorship benefits this brings. Research agenda We are interested in conducting research on the risks and opportunities for safety posed by LLM agents. LLM agents are goal-directed cognitive architectures powered by one or more large language models (LLMs). 
The following diagram (taken from On AutoGPT) depicts many of the basic components of LLM agents, such as task decomposition and memory. We think future generations of LLM agents might significantly alter the safety landscape, for two ...
FHI advises us not to drink water straight from nature. Is this actually something we outdoors people need to pay attention to? We discuss the topic from our own angle. Since last time, we have been on what may be the season's last fly-fishing trip, where the yield was rather meagre - literally. At the same time it is a little wistful that a season is drawing to a close, and we feel that these days. One season is, however, followed by another, and ahead of us lies the start of hunting for several species, such as ducks, forest grouse, and wolverine. There is much to look forward to that can offset some of the wistfulness. The hunt for the sound: Andreas Mikalsen was first with the correct answer to last week's sound and gets a mug in the mail. Today there is a new sound for you to puzzle over. We hope we have turned up the difficulty a little, but send us a message as soon as you think you know what you are hearing. On our Patreon page you can now listen to the sound as many times as you like. New film on Patreon: There you can watch char fishing in a tiny river. Support our podcast and get exclusive content and discounts in the web shop via Patreon. Hosted on Acast. See acast.com/privacy for more information.
With the start of school and the opening of kindergartens, the risk increases that children and adults will fall ill with respiratory infections such as RS virus, whooping cough, and COVID-19. The health authorities are reminding everyone of the importance of good hygiene and vaccination. Berit Løkken Finess, head of the Section for Airborne, Blood-borne and Sexually Transmitted Infections at FHI, offers general advice on what to do if you become infected.
In this episode of BioTalk with Rich Bendis, we have the honor of speaking with the U.S. Ambassador to the Republic of Tunisia, Joey R. Hood. With an impressive background in diplomacy and international relations, Ambassador Hood sheds light on the vital collaboration between the U.S. and Tunisia in scientific research, innovation, and technology transfer. Host Rich Bendis is the President and CEO of BioHealth Innovation, Inc., dedicated to accelerating biohealth opportunities and fostering innovation across the BioHealth Capital Region of Maryland, Washington D.C., and Virginia… and the world. Ambassador Hood discusses the current status of U.S.-Tunisia cooperation in scientific research and technology transfer, emphasizing these areas' critical role in Tunisia's development. He highlights the importance of scientific research, innovation, and technology transfer in Tunisia and explains how the U.S. can support and enhance these efforts. We explore the U.S. Department of State-funded CURES project in Tunisia, examining its goals, progress, and future objectives. Ambassador Hood elaborates on the significance of U.S. funding for projects like CURES and the broader impact on both nations. BioHealth Innovation, Inc. (BHI) is a contractor to FHI 360, supporting the CURES project to assist Tunisia in building its life science research, technology transfer, and commercialization capabilities. The conversation also covers other key partnerships and projects that the U.S. supports in Tunisia, aimed at fostering research and technology transfer. Ambassador Hood shares insights into these collaborations and his vision for the future of U.S.-Tunisia relations in science and technology. "The collaborative efforts between the U.S. and Tunisia in scientific research and technology transfer are crucial for advancing innovation and fostering economic growth. The programs we discuss on BioTalk are a testament to the great work happening in this field. 
I want to thank Ambassador Hood for joining us and sharing his invaluable insights on these important initiatives," says Rich Bendis. About Joey R. Hood: Joey Hood was confirmed by the U.S. Senate on December 22, 2022. He has held several leadership positions in the U.S. Department of State's Bureau of Near Eastern Affairs, including Acting Assistant Secretary. His extensive diplomatic career includes assignments in Iraq, Kuwait, Saudi Arabia, Yemen, and Qatar. Prior to the Foreign Service, Mr. Hood was a Fulbright scholar in Burkina Faso and worked at a bank in Vermont. He earned a Master's degree from the Fletcher School at Tufts University and a Bachelor's degree from Dartmouth College. Tune in to BioTalk for an insightful conversation with Ambassador Joey R. Hood as we explore the dynamics of U.S.-Tunisia cooperation in science and technology, and its potential to drive innovation and progress.
Norwegian conditions increasingly resemble Swedish conditions, which are still some way from French conditions, at least politically. While communists in Sweden erect a Lenin statue bearing a Palestinian flag, violence among young people in Norway worsens. As in Sweden, Norwegian children under the age of criminal responsibility are being used to commit crimes. No political measures or changes are on the horizon, but the police are very worried. While the Norwegian Institute of Public Health (FHI) flatly denies that the corona vaccine has gone wrong in any way, NRK runs a prime-time fetish campaign for gay men in leather. This despite there being plenty of news to dig into, including the escalating legal warfare in the USA. Several well-known figures have had a taste of the politicized American justice system in recent years. Steve Bannon will soon serve a prison sentence, while Alex Jones is chased with pitchforks in a civil lawsuit. It is not enough to bankrupt him; now they want his X account too. Perhaps "Western conditions" should become this year's new term, unless France leads us out of the age of absurdity. Welcome to Dagsorden with Rebecca Mistereggen and Christian Skaug.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Scar Worth Bearing: My Improbable Story of Kidney Donation, published by Elizabeth Klugh on May 30, 2024 on The Effective Altruism Forum. TL;DR: I donated my kidney and you can too. If that's too scary, consider blood donation, the bone marrow registry, post-mortem organ donation, or other living donations (birth tissue, liver donation). Kidney donation sucks. It's scary, painful, disruptive, scarring. My friends and family urged me not to; words were exchanged, tears were shed. My risk of preeclampsia tripled, and my risk of end-stage renal disease multiplied by five. I had to turn down two job offers while prepping for donation. It is easy to read philosophical arguments in favor of donation, agree with them, and put the book back on the shelf. But it is different when your friend needs a kidney: Love bears all things, believes all things, hopes all things, endures all things. Eighteen months ago, at 28 years old, my friend Alan started losing weight. He developed a distinctive butterfly-shaped rash and became too weak to eat. On February 1, 2023, he collapsed. The level of toxins in his blood was the worst the doctors had ever seen, 24 times the normal level. He shouldn't have been alive. Two years ago, he'd watched his mother die of lupus and now he had the same disease. His body was attacking itself. By April 1, transplant discussions were under way. A living donor would mean avoiding years of relentless dialysis while waiting for a 3-year backlog of deceased donors. Living kidneys are better-quality and longer-lasting too. Having received six units of blood, though, Alan had become allergic to 88% of donors. Regardless, I completed a comprehensive eleven-page history to determine my eligibility. 
In each of my classes, and at my own wedding, I gave a brief presentation encouraging others to apply as well. Nobody did. After initial blood work, my blood type was deemed incompatible, but I continued the health screenings to see if I could give indirectly through the National Kidney Registry. There were countless physicals, urine samples, iron infusions, psychological examinations, and dozens of tubes of blood. Throughout the process, Alan, his wife Meg, and I had many conversations that went something like this: Meg and Alan: "You know you don't have to do this right?" Me: "I want to... I might still bail though." Meg and Alan: "We certainly wouldn't blame you if you did." In January, I got the call that further bloodwork showed that I had a special type of AB+ blood and would be a direct match for Alan. I was elated. Alan cried. We both figured that God wouldn't have made me such an improbable match if I wasn't meant to share. So, on Tuesday April 9, 2024, we applied lick-and-stick kidney tattoos and drove to the hospital together at 5am. We were wheeled into surgery at 9am and were out by lunchtime. I took the anesthesia harder than most and spent a day longer in the hospital than predicted. I had an ugly scar and was crumpled in pain. I vomited on myself. I couldn't sleep on my side. I couldn't sleep at all. For weeks, every time I coughed or sneezed, it felt like I was going to rip open. There were times I feared I would never heal. But that's not the point. The point is that life persists. Alan is composing symphonies and playing cello and learning Mandarin. He released a new rap album, Back from the Dead, about his experiences. Though still recovering, I'm attending weddings and baking muffins, planting plants, and mending clothes. I started a new role with FHI 360 and am working to eradicate neglected tropical diseases. I will continue to fight for life. 
And you can too: Consider donating a kidney, giving blood, joining the bone marrow registry, signing up for post-mortem organ donation, or giving other living donations (birth tissue, liver donation). Alan ...
Navigating the complex landscape of global health requires not only expertise but also the insights of those who've walked the path from local origins to international influence. Maged, an Egyptian-American luminary in public health policy, joins us to share his transformative journey from a pharmacy graduate in Cairo to a key player in worldwide health initiatives. His vibrant heritage and professional metamorphosis underscore the critical role of cultural understanding and community engagement in crafting healthcare systems that truly make a difference.

Our conversation with Maged illuminates the often-overlooked importance of clear public health messaging and the empowering ripple effect of health literacy. We grapple with the challenges of harmonizing healthcare expenditure with tangible outcomes, a dilemma faced by OECD nations and beyond. Maged's role at FHI 360 brings into focus the delicate balance between global health directives and the indispensable value of localized, culturally-sensitive policies, offering a candid look at the interplay of education, public health, and societal advancement.

Wrapping up the dialogue, we confront the daunting global challenges that impinge on our collective well-being, from the confidentiality conundrums in data sharing to the promise of AI and telemedicine in closing health disparity gaps. Maged's perspective on the broader implications of our localized actions, particularly in the battle against climate change, serves as a poignant reminder of our shared responsibility in fostering a healthier, more equitable planet. Join us for an inspiring session that bridges continents and communities, reminding us that the health of one is indeed intertwined with the health of all. Support the Show.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Express interest in an "FHI of the West", published by habryka on April 18, 2024 on LessWrong. TLDR: I am investigating whether to found a spiritual successor to FHI, housed under Lightcone Infrastructure, providing a rich cultural environment and financial support to researchers and entrepreneurs in the intellectual tradition of the Future of Humanity Institute. Fill out this form or comment below to express interest in being involved either as a researcher, entrepreneurial founder-type, or funder. The Future of Humanity Institute is dead: I knew that this was going to happen in some form or another for a year or two, having heard through the grapevine and private conversations of FHI's university-imposed hiring freeze and fundraising block, and so I have been thinking about how to best fill the hole in the world that FHI left behind. I think FHI was one of the best intellectual institutions in history. Many of the most important concepts[1] in my intellectual vocabulary were developed and popularized under its roof, and many crucial considerations that form the bedrock of my current life plans were discovered and explained there (including the concept of crucial considerations itself). With the death of FHI (as well as MIRI moving away from research towards advocacy), there no longer exists a place for broadly-scoped research on the most crucial considerations for humanity's future. The closest place I can think of that currently houses that kind of work is the Open Philanthropy worldview investigation team, which houses e.g. Joe Carlsmith, but my sense is Open Philanthropy is really not the best vehicle for that kind of work. 
While many of the ideas that FHI was working on have found traction in other places in the world (like right here on LessWrong), I do think that with the death of FHI, there no longer exists any place where researchers who want to think about the future of humanity in an open ended way can work with other people in a high-bandwidth context, or get operational support for doing so. That seems bad. So I am thinking about fixing it. Anders Sandberg, in his oral history of FHI, wrote the following as his best guess of what made FHI work: What would it take to replicate FHI, and would it be a good idea? Here are some considerations for why it became what it was: Concrete object-level intellectual activity in core areas and finding and enabling top people were always the focus. Structure, process, plans, and hierarchy were given minimal weight (which sometimes backfired - flexible structure is better than little structure, but as organization size increases more structure is needed). Tolerance for eccentrics. Creating a protective bubble to shield them from larger University bureaucracy as much as possible (but do not ignore institutional politics!). Short-term renewable contracts. [...] Maybe about 30% of people given a job at FHI were offered to have their contracts extended after their initial contract ran out. A side-effect was to filter for individuals who truly loved the intellectual work we were doing, as opposed to careerists. Valued: insights, good ideas, intellectual honesty, focusing on what's important, interest in other disciplines, having interesting perspectives and thoughts to contribute on a range of relevant topics. Deemphasized: the normal academic game, credentials, mainstream acceptance, staying in one's lane, organizational politics. Very few organizational or planning meetings. Most meetings were only to discuss ideas or present research, often informally. 
Some additional things that came up in a conversation I had with Bostrom himself about this: A strong culture that gives people guidance on what things to work on, and helps researchers and entrepreneurs within the organization coordinate A bunch of logistical and operation...
TLDR: I am investigating whether to found a spiritual successor to FHI, housed under Lightcone Infrastructure, providing a rich cultural environment and financial support to researchers and entrepreneurs in the intellectual tradition of the Future of Humanity Institute. Fill out this form or comment below to express interest in being involved either as a researcher, entrepreneurial founder-type, or funder. The Future of Humanity Institute is dead: I knew that this was going to happen in some form or another for a year or two, having heard through the grapevine and private conversations of FHI's university-imposed hiring freeze and fundraising block, and so I have been thinking about how to best fill the hole in the world that FHI left behind. I think FHI was one of the best intellectual institutions in history. Many of the most important concepts[1] in my intellectual vocabulary were developed and popularized under its [...] The original text contained 1 footnote which was omitted from this narration. --- First published: April 18th, 2024 Source: https://www.lesswrong.com/posts/ydheLNeWzgbco2FTb/express-interest-in-an-fhi-of-the-west --- Narrated by TYPE III AUDIO.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future of Humanity Institute 2005-2024: Final Report, published by Pablo on April 17, 2024 on The Effective Altruism Forum. Anders Sandberg has written a "final report" released simultaneously with the announcement of FHI's closure. The abstract and an excerpt follow. Normally manifestos are written first, and then hopefully stimulate actors to implement their vision. This document is the reverse: an epitaph summarizing what the Future of Humanity Institute was, what we did and why, what we learned, and what we think comes next. It can be seen as an oral history of FHI from some of its members. It will not be unbiased, nor complete, but hopefully a useful historical source. I have received input from other people who worked at FHI, but it is my perspective and others would no doubt place somewhat different emphasis on the various strands of FHI work. What we did well One of the most important insights from the successes of FHI is to have a long-term perspective on one's research. While working on currently fashionable and fundable topics may provide success in academia, aiming for building up fields that are needed, writing papers about topics before they become cool, and staying in the game allows for creating a solid body of work that is likely to have actual meaning and real-world effect. The challenge is obviously to create enough stability to allow such long-term research. This suggests that long-term funding and less topically restricted funding is more valuable than big funding. Many academic organizations are turned towards other academic organizations and recognized research topics. However, pre-paradigmatic topics are often valuable, and relevant research can occur in non-university organizations or even in emerging networks that only later become organized. 
Having the courage to defy academic fashion and "investing" wisely in such pre-paradigmatic or neglected domains (and networks) can reap good rewards. Having a diverse team, in terms of backgrounds but also in disciplines, proved valuable. But this was not always easy to achieve within the rigid administrative structure that we operated in. Especially senior hires with a home discipline in a faculty other than philosophy were nearly impossible to arrange. Conversely, making it impossible to hire anyone not from a conventional academic background (i.e., elite university postdocs) adversely affected minorities, and resulted in instances where FHI was practically blocked from hiring individuals from under-represented groups. Hence, try to avoid credentialist constraints. In order to do interdisciplinary work, it is necessary to also be curious about what other disciplines are doing and why, as well as to be open to working on topics one never considered before. It also opens the surface to the rest of the world. Unusually for a research group based in a philosophy department, FHI members found themselves giving tech support to the pharmacology department; participating in demography workshops, insurance conferences, VC investor events, and geopolitics gatherings; hosting artists and civil servant delegations studying how to set up high-performing research institutions in their own home country; etc., often with interesting results. It is not enough to have great operations people; they need to understand what the overall aim is even as the mission grows more complex. We were lucky to have had many amazing and mission-oriented people make the Institute function. Often there was an overlap between being operations and a researcher: most of the really successful ops people participated in our discussions and paper-writing. Try to hire people who are curious. 
Where we failed Any organization embedded in a larger organization or community needs to invest to a certain degree in establishing the right kind of...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FHI (Future of Humanity Institute) has shut down (2005-2024), published by gwern on April 17, 2024 on LessWrong. Over time FHI faced increasing administrative headwinds within the Faculty of Philosophy (the Institute's organizational home). Starting in 2020, the Faculty imposed a freeze on fundraising and hiring. In late 2023, the Faculty of Philosophy decided that the contracts of the remaining FHI staff would not be renewed. On 16 April 2024, the Institute was closed down. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Anders Sandberg has written a “final report” released simultaneously with the announcement of FHI's closure. The abstract and an excerpt follow. Normally manifestos are written first, and then hopefully stimulate actors to implement their vision. This document is the reverse: an epitaph summarizing what the Future of Humanity Institute was, what we did and why, what we learned, and what we think comes next. It can be seen as an oral history of FHI from some of its members. It will not be unbiased, nor complete, but hopefully a useful historical source. I have received input from other people who worked at FHI, but it is my perspective and others would no doubt place somewhat different emphasis on the various strands of FHI work. What we did well One of the most important insights from the successes of FHI is to have a long-term perspective on one's research. While [...] ---Outline:(00:57) What we did well(03:48) Where we failed(05:06) So, you want to start another FHI?--- First published: April 17th, 2024 Source: https://forum.effectivealtruism.org/posts/uK27pds7J36asqJPt/future-of-humanity-institute-2005-2024-final-report Linkpost URL:https://www.dropbox.com/scl/fi/ml8d3ubi3ippxs4yon63n/FHI-Final-Report.pdf?rlkey=2c94czhgagy27d9don7pvbc26&dl=0 --- Narrated by TYPE III AUDIO.
What are the best ways to set up public, private, and academic clinical research partnerships? How do we get these public-private partnerships (PPP) to work most effectively? And who should be in charge of what in multistakeholder research collaborations? We will get those answers and more in this episode of Research in Action with our guests Rob King, President and CEO of FHI Clinical; and Dr. Kristen Lewis, Head of Clinical Operations at the Center for Vaccine Innovation and Access at PATH. --------------------------------------------------------- Episode Transcript: 00;00;00;01 - 00;00;22;22 What are the best ways to set up public-private clinical research projects? Where does and should the money for such research come from, and who should be in charge of what? We'll get those answers and more on this episode of Research in Action. Hello and welcome to Research in Action, brought to you by Oracle Life Sciences. 00;00;22;22 - 00;00;50;05 I'm Mike Stiles. And today we're just trying to outdo ourselves by talking to not one, but two very interesting people. First is Rob King, president and CEO of FHI Clinical. FHI Clinical uses Oracle's clinical trial software for their clinical operations and partners with public entities like PATH, which brings me to Dr. Kristen Lewis, who is Head of Clinical Operations at the Center for Vaccine Innovation and Access at PATH. 00;00;50;26 - 00;01;29;23 I could go through what each of these organizations do just to hear myself talk, but why do that when I have both of you here? So, Rob, tell us what FHI Clinical does. Yeah, thanks, Mike. So FHI Clinical is a contract research organization. We're actually a for-profit subsidiary of a large nonprofit called FHI 360, and while we are for-profit, our mission is to address unmet research needs and maximize social impact by supporting the development of medical treatments around the world. 
00;01;30;04 - 00;01;58;20 While we work globally, we tend to focus on low- and middle-income countries. On the whole, our pharma and biotech clients also include nonprofits and governments. FHI Clinical works with biotechs that receive public funding, and with PATH being one of our clients, I appreciate Kristen being here. I joined four years ago and I'm currently the CEO, and I'm happy to be here. 00;01;58;20 - 00;02;22;19 Well, great. Kristen, what about PATH? Yeah, thanks for the introduction, Mike. It's a pleasure to speak with you and Rob today and have the opportunity to contribute to this discussion. So most people listening to this podcast may not be familiar with PATH. We're a nonprofit global public health organization with approximately 1,600 employees worldwide. Our headquarters are in Seattle, Washington, and we have offices across the African and Asian continents and Europe. 00;02;22;19 - 00;02;53;00 Some of the locations we have offices in include Kenya, Ethiopia, Senegal, Uganda, Zambia, India, Vietnam, and Ukraine. And I could go on, but I'll hold it there. Our mission is to advance health equity through innovation and partnerships. We do this with the help of local and global partners by generating evidence, advancing innovation, and strengthening local capacity to improve health in countries and communities that are experiencing disproportionate burdens of disease and barriers to well-being, specifically in low- and middle-income countries. 00;02;53;11 - 00;03;26;01 This includes working in over 70 countries across the African, Asian, Latin American, European, and North American regions. Within PATH's Center for Vaccine Innovation and Access, we drive the mission of achieving health equity using a three-pronged approach, including developing, facilitating, and implementing global market and policy solutions to ensure sustainable supply and equitable access to vaccines; 
Supporting country-led efforts to advance national health equity priorities and to strengthen immunization system resilience; and driving innovation and technological advances 00;03;26;01 - 00;03;50;20 to accelerate and optimize access to vaccines. Now, this last point is where my work is focused. Thus, during today's discussion, I'll be speaking through the lens of developing vaccines for disease indications benefiting low- and middle-income countries, and the importance of public-private partnerships in achieving that goal. And you'll note a common thread there in the introductions from both Rob and myself, and that's the low- and middle-income country focus. 00;03;50;20 - 00;04;15;17 And I think that you'll start to hear some commonalities come into play as we go further into this session. Great. Well, I think what I want to get into here is kind of what you talked about: the value of public-private partnerships in clinical research. Rob, give me the honest first reaction that a lot of private companies have when it is suggested that they partner with a public or a government organization. 00;04;15;17 - 00;04;45;18 Is that something that they jump at with open arms, or is there any hesitancy? How does that go down? You know, I was recently reading an article about one of the first public-private partnerships, and it really hit home. For most of our listeners, what most people won't be familiar with are the initiatives around vaccination for diseases like polio, Spanish flu, measles, and rubella. 00;04;46;00 - 00;05;33;19 And we tend to have short memories about the devastating impact those diseases had on society prior to vaccination and treatment options, as well as the treatments developed for HIV and AIDS, and then most recently the COVID pandemic. 
So with that said, you know, private companies may tend to shy away from what we call the triple P, the public-private partnership, because of the funding limitations. Government-based funding requires a lot of compliance with a whole myriad of regulations, and public funding may carry restrictions on how, where, and when funds are used. 00;05;34;07 - 00;06;09;21 Without experience and horsepower in public-private partnerships, it creates hesitancy for private companies to engage. Some companies, for example, will not serve as a prime contractor on government-funded work, because the accounting and regulatory compliance requirements exceed those of normal commercial contracts and can therefore be overly burdensome for those companies. In public-private partnerships, you have to have an operational model that meets the unique needs of that partnership. 00;06;10;03 - 00;06;36;15 And at the end of the day, you really can't overstate the value for society that public-private partnerships have contributed. And Kristen, from the nonprofit or public side, what is the benefit of partnering with private companies? Yeah, that's a great question. And I think to answer that, I'd first like to highlight some of the major successes when these partnerships have come together. 00;06;37;04 - 00;07;05;23 Through public-private partnerships, PATH has played a critical role in some of immunization's greatest successes over the past 30 years in LMICs, low- and middle-income countries. This includes developing the world's first malaria vaccine, which has now reached more than 2 million children; eliminating meningitis epidemics in Africa following introduction of the meningitis A vaccine; protecting over 300 million children from Japanese encephalitis; and vaccinating millions of girls against HPV. 00;07;06;06 - 00;07;33;20 And I could go on, but those are some highlights. 
PATH has not achieved these accomplishments in isolation. These successes have been catalyzed via public-private partnership models, and there are examples in which the private sector alone may not have been interested in developing these vaccine indications for low- and middle-income country use, due to financing or budget considerations or constraints, or some of the points that Rob made earlier. 00;07;34;00 - 00;08;03;13 However, with partnerships between PATH and private entities, including finance mechanisms for rollout and use of the vaccines in the regions following development, we've been able to champion development and introduction of vaccines that might not otherwise have generated sufficient interest for the investment that's required for full development. So in a nutshell, public-private partnerships are the bread and butter of our work and integral to the goal of achieving improvements in global public health among populations facing economic challenges worldwide. 00;08;03;24 - 00;08;43;19 Well, so it feels like these partnerships would automatically create multiple stakeholders. So, Rob, how hard is it to make sure that the goals and priorities are aligned amongst all these people, and stay aligned? First, I think, you know, the expectations and the goals are higher for a public-private partnership than for a commercial initiative. With public funding, there is an expectation that you're going to achieve the goal or outcome, and you're held accountable for how those funds are spent. 00;08;44;11 - 00;09;27;10 You're not accountable to an owner or stockholder, but to the general public. And, you know, public funds aren't unlimited, and every dollar may count toward whatever goal they're trying to achieve. 
When you're spending public funds, there is ongoing accounting of how those funds are being spent, and there are limitations on those funds; there may not be extra funds or reserves to fall back on if those funds start to run low. Usually the public entity defines the impact and the work that has to be completed while ensuring that the funding is in place. 00;09;28;01 - 00;09;53;24 They then track the work that the private company has committed to contractually, with clear terms on what's being delivered and the restrictions that may or may not be around the funding for that deliverable. So I agree that aligned priorities are paramount, because at the end of the day we're accountable to the general public. 00;09;54;09 - 00;10;29;01 And Kristen, is there anything on the public or nonprofit side that's done to kind of make sure that projects aren't subjected to red tape or bureaucracies? I mean, I guess there's always going to be some of that, but to the extent that it might slow things down. Yeah, it's a great question, an interesting and insightful one. So at PATH we work as a clinical development partner and hold sponsor roles to implement clinical trials and generate evidence to support vaccine licensure, WHO prequalification, and decision-making for vaccine introduction. 
Now, I don't have experience working in government, so I can't comment on that side of things. However, in my experience working in vaccine development with a non-governmental nonprofit for the majority of my career, as well as a few years working for a for-profit entity, I can say that the intensity of work at a nonprofit has been similar to the intensity at a private entity. 00;11;22;26 - 00;11;46;09 While the root of the development pressures may be slightly different, the goal is to develop products as efficiently as possible while also maintaining high quality, in both sectors. For private entities, I believe the driving consideration may be, quote, time is money, unquote. For my work in the nonprofit space, what drives us is, quote, time is lives, unquote. 00;11;46;14 - 00;12;17;20 And that is really the driving consideration. But regardless of those driving considerations, there's still urgency and a sense that we need to be as efficient as possible and ensure that we are removing blockages, red tape, and bureaucracy as much as possible. So, Kristen, I'm curious, just from your point of view: when the pandemic came down, that was an entirely different animal in terms of the need to get something done, and get it done rapidly. 00;12;17;25 - 00;12;48;23 Just how different a process was that? Yes. So I wouldn't say that the process was necessarily different between the public and private side. I would say that we did things across both sectors in a new way. So the COVID pandemic really brought home how many similarities there are between the public and the private sectors. Not everything differs according to operating model.
00;12;49;01 - 00;13;14;16 In fact, during the pandemic, the global public health and product development spaces, regardless of sector, were going through the same waves of initial shock and uncertainty about how to continue trials during the very initial stages of the pandemic: considerations in terms of the risk-benefit tradeoffs of operating non-COVID interventional trials during that time, depending on the type of trial, the availability of remote technologies, and a product's importance to saving lives. 00;13;14;27 - 00;13;38;24 We had to take into consideration different ways and methods for making sure that those non-COVID interventional trials were completed. We also needed to identify new ways of getting the work done, which included catalyzing a more definitive shift towards identifying local partners in close proximity to the trial locations for ease of trial oversight and management; 00;13;38;24 - 00;14;04;12 implementing remote solutions for activities such as source document verification, remote training, remote site assessments, and other types of remote activities; identifying how to get supplies and equipment to the sites ahead of study start, with supply chains being disrupted; and finally, determining how to maintain the trials and keep them running once up and going, while continuing to deliver with high quality and ensuring participant safety. 00;14;04;24 - 00;14;30;10 From PATH's end, given our work is primarily focused in low- and middle-income countries, many of the challenges faced in the private sector's high-income markets were further exacerbated due to the relatively slower adoption or uptake of technology and clinical trial advances. And this experience was important, as it pushed for the adoption of technologies that had previously been questioned due to fear of loss of data or other concerns, as with other areas of our lives.
00;14;30;11 - 00;14;56;28 COVID really helped to push the envelope in terms of finding new efficiencies and ways of getting things done. Rob, when a partnership like this comes together (I guess this goes along with the expectation-setting side that you touched on earlier), how are the roles and responsibilities assigned? I'd say that the triple P, or public-private partnership, isn't really different in that respect as compared to a commercial partnership. 00;14;57;25 - 00;15;41;11 You know, as I said earlier, the public entity defines an objective and the private entity is there to execute it. Now, the public entity may only outsource part of the work, because they already have some of the skills, knowledge, and resources themselves, and then they will only outsource the pieces that they can't do on their own. But I think the main thing to keep in mind in a public-private partnership is that the public entity is a steward of the public interest, and liability and accountability for that public interest lies with them, regardless of whether or not they outsource to a private company. 00;15;41;11 - 00;16;05;12 So I feel bad for Kristin and the pressure they have on them as a public entity, compared to myself at a private entity, where I don't necessarily feel the same pressure. Some people might think that the role of public funding is just to get the project more money: you know, we tell you what we need, you go get it for us, and that's your role. 00;16;05;12 - 00;16;28;04 How true or not true is that, Kristin? Yeah, you point out an important consideration for pairing public funding with private resources. There is the potential that private entities may believe that we, the nonprofit, will help bring in key funder resources to augment a development program, regardless of whether their development goals align with the use of the product in low- and middle-income countries.
00;16;28;13 - 00;16;54;25 However, in order to mitigate the potential for this misalignment, within PATH we focus on partnering with private entities when there's clear alignment between PATH's mission and the mission of the private entity. Additionally, this alignment has to be agreed to in writing via contract, and it includes global access agreements for product availability and use. And so, in summary, my experience has been that it's not true that the goal of public funding is to get the project more money. 00;16;54;25 - 00;17;16;10 The goal of public funding is to achieve an outcome that might not otherwise be achievable, given the lack of private interest without public funding coming in to co-fund an objective that benefits low- and middle-income countries. So we've got public and private represented on this episode with the two of you. What we don't have is someone representing the academic side. 00;17;16;10 - 00;17;45;27 Rob, do you have any thoughts on the role that that third leg of the stool plays, or should play? Yeah, you know, there are academic institutions that also have public and private funding flowing in and out, and I have a lot of admiration for the role of academia. Both public and private institutions rely on academia to be a catalyst for innovation and to provide help in very specific areas of research. 00;17;46;24 - 00;18;13;00 There are a lot of academics out there doing very specific research, and you never know when that will come in handy. So at every level, we rely on our advisory or academic consultants to keep us informed on very specific events or therapeutic topics, and this plays into the research that we do or don't intend to do. 00;18;13;10 - 00;18;51;08 And a large portion of the investigators and key opinion leaders involved in research actually come from academia.
On the flip side, academia also relies on public-private partnerships to bring their ideas into the research environment, because they may lack the funding to attain the vision, or the technical knowledge of how to bring that idea to the next step. You know, I think the example that perhaps a lot of people have heard of is the BARDA Industry Days (BARDA being the Biomedical Advanced Research and Development Authority). 00;18;51;24 - 00;19;35;13 They host this event annually, where people come in with ideas for collaboration and partnership, with US funding. So if they have a novel idea that aligns with the interests of the US government, they get the opportunity to collaborate with other companies that can bring it to fruition, with funding behind it. So I think there are a lot of opportunities out there for academics to bring their ideas to fruition, but we have to do a better job of sort of pulling them in the right direction. 00;19;35;28 - 00;19;58;14 Kristen, I have to tell you, as a layperson, I picture this three-way partnership, and the first thing that comes to mind is: that's a lot of cooks in the kitchen. So it's kind of amazing to me that anything gets done, or gets done in a timely manner. What are the essential ingredients of a truly successful collaboration, in your mind? 00;19;58;26 - 00;20;34;24 Yeah, it's a very good point. And I will add on to Rob's comments regarding academia that academia is a very important partner in this setup. Academia generally is part of these partnerships, and so there are, as you put it, a lot of cooks in the kitchen when we're bringing these projects together. And the short answer on how we make these successful is to never underestimate the value of careful pre-planning and preparation in setting up the partnerships, including mission alignment and alignment in the partners' scopes of work and roles and responsibilities.
00;20;34;25 - 00;21;11;19 I think Rob alluded to that earlier. And the seven C's of collaboration jump to mind: clarity of purpose; congruency of mission, strategy, and values; creation of value; connection with purpose and people; communication between partners; continual learning, or a growth mindset; and commitment to the partnership. In addition, it's also important to lay a solid foundation, underlying all of that, of respect, trust, and a balance between humility and confidence across the partners, to make sure that everybody is partnering fairly, with trust, and in good faith. 00;21;12;01 - 00;21;37;07 Yeah, you know, I don't want to start a fight, but who is largely responsible for big innovations in clinical health? I think the public gets the impression there are private scientists huddled together in one lab, and government scientists huddled together in another lab, probably in D.C. That's not really the way it is, is it, Rob? I mean, how are the big, impactful innovations truly getting developed? 00;21;37;17 - 00;22;07;29 Yeah, I'm going to answer that question in the obvious way here: it's when we all work together and leverage the strengths of all of our partners. I honestly do think that commercial or private entities are faster at innovation, but they have a fee-and-reward system; for them, innovation is always a profit-making endeavor. I mean, why not? You have a good idea and you want to be recognized and rewarded for it. 00;22;08;12 - 00;22;36;24 But what about bringing innovation to areas where the opportunity for recognition and reward is not so great? That's where public-private partnerships come into play. You know, as a global community, it's in our interest to innovate in low-reward scenarios, because the knock-on effect is that the problem does not spread, and it allows a particular community or region to prosper.
00;22;37;13 - 00;23;00;15 And so, therefore, if people prosper, they're less likely to be in need in the future, and we can maximize their contribution to the greater good. Yeah, but Rob, when it comes to public health, people do seem to put the bulk of that responsibility on government. Like, people didn't demand an answer to COVID from Pfizer; they demanded it from the White House. 00;23;00;15 - 00;23;29;27 So is that fair? I think fair versus moral is an issue that we could do a whole other podcast around. So, yeah, but, you know, public entities and, theoretically, the government are there to serve the public and the public good through taxation and donations, and we expect them to step up when the need arises. You know, the public can't hold a private company like Pfizer accountable in a crisis. 00;23;30;13 - 00;23;58;14 So the obvious thing here is that they hold the public entity responsible. The only problem is that we tend to fund our public entities with as little support as possible, or we load up the funding that's already there with a whole myriad of special interests; that doesn't leave a whole lot left for a crisis. We're also very bad at funding the future, whether it's for crisis or innovation. 00;23;58;27 - 00;24;25;25 We're not people that really think ahead, so sometimes public entities have to scramble to reallocate funds, and they usually can't staff up or get resources in place quickly enough. So they turn to commercial companies that really have no restriction on growth and can simply take the money and make it happen. Rob, what's the most gratifying thing that's come from working with PATH, from your perspective? 00;24;26;17 - 00;24;57;14 Well, I'll make this short and sweet. As we noted toward the beginning, PATH and we have similar missions, with PATH being a public entity laboring here for the greater good and not really for a reward or profit. I don't know, but I feel better about myself and my company associating and working with the likes of PATH.
00;24;58;00 - 00;25;22;15 And Kristen, what keeps you bought into the whole public-private partnership model? Well, it's the fact that the model is effective in bringing new life-saving interventions to low- and middle-income countries. So for me, it's the advancement of the public health mission and being able to efficiently facilitate the implementation of health interventions for low- and middle-income countries that wouldn't otherwise be available. 00;25;22;16 - 00;25;42;26 It's the ability to have a true impact, to save lives, and this partnership model is critical in making that happen. Yeah, but it can't all be gumdrops and rainbows. So what are some of the challenges or wish-list items that you both feel still need to be addressed when it comes to the partnerships around clinical research? 00;25;42;26 - 00;26;18;16 First Rob, then Kristen. Well, I think we can do a better job of building trust and sharing intelligence, even in public-private partnerships; there is sometimes a lack of trust and a holding back of information that could be of mutual benefit, and I personally would like to break down some of those barriers. You know, a key concept in public-private partnership is best value, and in most cases that requires the public entity to get, like, three quotes for some activity or contract, 00;26;19;02 - 00;26;44;17 and then you have to justify why you went with the one you chose. We all know that cheaper is not always better, and I would like to see us define value in more ways than just cost. I also think there's a lot that can be shared around best practices. Kristin and I belong to a group that's publicly funded that shares best practices. 00;26;45;07 - 00;27;25;20 But, you know, that sharing of best practices has been limited by a sort of overall culture of caution. So I'd like to see more sharing and the assumption of positive intent on the part of all parties. And I think we did a lot of that during the COVID pandemic, and I applaud that.
I had to staff up a call center for a large government project, and we had to do it in no time. When I reached out to a technology company to help me fill that call center, the question wasn't how much are you going to pay me, or what kind of contract; it was, how quickly do you need it? 00;27;26;12 - 00;28;06;15 And then we literally started without a contract, without much of anything, especially around terms. They stood it up in record time, and we worked the other stuff out on the back end, to mutual benefit. And I know that we can't always do that, but it shows you what's possible. And Kristen, what gets your goat? Yeah, I guess there are two points that jump to mind. The first is that we have some more work to do in terms of sustainable capacity development: ensuring that the ground we gain in facilitating research in low- and middle-income countries continues to be built upon, without the loss of the human or material resources that are built out for trials. 00;28;06;27 - 00;28;26;16 How do we do a better job of sustaining capacity that's been built, following the completion of a trial or a set of trials, at sites that we've invested in? That's an area that many folks are putting thought into these days, but I think we have yet to identify a solution. That's something that we can do better at, 00;28;26;16 - 00;28;56;00 and I know we will; it's a work in progress. And then the second thing is the concept of equitable partnerships, which needs additional consideration and support. And I think back to Rob's comment about assuming positive intent and working in good faith: there's a focus now on transferring leadership and ownership of much of our clinical development work to the regions that are participating in the work, so that they're really co-creating and co-owning the work in the development space.
00;28;56;08 - 00;29;17;00 While COVID helped to catalyze that shift, there's still more pushing we need to do within the global public health and development community to make this shift really be adopted and occur. And we have a bit of a way to go in terms of fully embracing models that are led out of the regions that our products serve. 00;29;17;16 - 00;29;38;27 And I believe that the public-private partnership model is an area where we can help to facilitate this in the future. You know, I'd probably be remiss if I didn't ask about the role that you see technology playing, being maybe that fourth partner in clinical trials. Rob, I know you use Oracle's clinical trial solutions. What does that bring to the table? 00;29;38;27 - 00;30;25;07 So I think, you know, during the COVID pandemic, technology was really a shining star and allowed some things that we probably couldn't have done earlier, by embracing technology that people were perhaps hesitant to use before. So certainly around Oracle, we were able to use many of the Oracle platforms during the COVID pandemic. I think my favorite story, and people have probably heard it before, so I apologize to anyone hearing me repeat it, involves the Clinical One platform. I joined right before the pandemic, and I had drawn on my whiteboard 00;30;25;16 - 00;30;50;07 my ideal platform for data collection, analysis, and sharing with others. A former colleague of mine, who had recently joined Oracle, dropped by the office, and we were hanging out in my office when he looked at my whiteboard and said, what's the Oracle product development plan doing on your whiteboard? I said, well, that's not the Oracle product development plan, that's my plan. 00;30;50;18 - 00;31;21;25 And he said, well, that's exactly what we're doing right now. And that was the beginning of our use of Clinical One.
And, you know, arrived at independently, what I had in mind and what the Oracle developers had in mind were the same. The pandemic at that time drove innovation among all the partners, and it came at just the right time. 00;31;22;12 - 00;31;53;20 And Kristin, are you surprised by, or frustrated by, the technology capabilities that are available for your endeavors and what you're trying to get done today? Yeah, I'm excited for trial platforms in low- and middle-income countries to have the chance to further adopt technologies that have been utilized in other regions. I would say there's been some reluctance in the adoption of the technologies that have been commonly utilized in high-income country settings for some time, but COVID has really catalyzed the adoption of many of those. 00;31;54;22 - 00;32;16;19 There has also been some backsliding in the use of those technologies since COVID, as the urgency of the COVID vaccine development cycle more or less ended. So what I'm excited about is that there was a push during COVID, we've seen these solutions work, and there's the potential for their continued adoption, solutions such as eSource, eDiaries, ePRO, et cetera, 00;32;16;28 - 00;32;44;20 as we work through the challenges of implementing those technologies outside of high-income country settings. So there's a little bit of work to do in terms of adoption, but I think we're getting there, and I'm excited to see the field further embrace those technologies. Well, it's great to hear about partnerships like this and what's increasingly becoming an accepted model for how we can get better results for people, faster, and for more people. 00;32;45;00 - 00;33;08;13 A lot of our listeners may want to learn more about what you've been talking about and what you do. So do each of you have a way they can do that, or even contact you? How about you?
Rob? Yeah, so feel free to reach out to me, and I'm also on LinkedIn and happy to brainstorm with anybody 00;33;08;21 - 00;33;39;20 about how we can move the idea of public-private partnership even farther. And Kristen? Yeah, our website is available at www.path.org, and it provides additional information on PATH and what we do. I'm also on LinkedIn and can be reached via that platform. Perfect. Well, if you want to see how Oracle is accelerating life sciences research, and how it might be able to do that for your work as well, check out Oracle.com/lifesciences. 00;33;40;00 - 00;33;58;18 Also, be sure to subscribe to this show, and we'll be back next time for Research in Action.
Gear up for the weekend with the best Friday crew: Iben, Mari, and Elise! It's Oktoberfest season, which tends to clash with the beer advice from FHI and the Swedish authorities (and perhaps with student life?). There's a Friday quiz: has the reigning champion fallen off the throne? We hate on Eastern Norwegians "without" a dialect, and strike a blow for Nynorsk! Lean back and enjoy!
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Updates to the flow of funding in EA movement building post, published by Vaidehi Agarwalla on July 25, 2023 on The Effective Altruism Forum. This is a summary of updates made to my previous post, The flow of funding in EA movement building. Overall Changes Total funding tracked in the data increased to $290M (from $245M). New data is from: Several private donors and Longview Philanthropy, who shared (previously non-public) donation and grant recommendation data; Global health & wellbeing spending, e.g. GiveWell, ACE, and some animal orgs (at a discounted rate, since these organizations aren't explicitly focused on EA movement building but did contribute to the growth of the EA movement); The inclusion of some longtermist research organizations, such as FHI, which have helped do field building (also at a discounted rate). Changes to proportions and funding over time During the 2012-2016 period, funding tracked in my data roughly doubled from ~$4M to ~$8.9M (quick estimate), including $4M in funding to GiveWell and $0.5M from other donors. During the 2017-2023 period, funding tracked roughly increased from $241M to $281M, from other donors and the inclusion of some cause-area-specific organizations that contributed to movement building. The table below summarizes the changes to the proportions of funding coming from different sources:

Funder Category | Change in % | New % | Original %
Other donors | Up ~8% | 9.6% | 1.5%
FTX Future Fund | Down ~3% | 14.8% | 17.5%
EAIF (non-OP donors), LTFF & Jaan Tallinn (incl. SFF) | Down ~1% | - | -
EA Animal Fund | Up ~1% | 1.1% | 0%
Open Philanthropy | OP LT: Down 9.5% (~10.1% w. EAIF); OP GH&W: Down 0.4%; OP Other: Up 5.9%; Overall: Down ~3% | OP LT: 50.4% (~54.5% w. EAIF); OP GH&W: 2.6%; OP Other: 5.9%; Overall: 63% | OP LT: 59.8% (~64.6% w. EAIF); OP GH&W: 2.2%; OP Other: 0%; Overall: 66%

Here's the new % data in a pie chart: What data is still missing? Total funding: I estimate total funding from 2012 to June 2023 is likely $300-350M (medium confidence). I previously estimated $250-280M (a significant underestimate). Individual donors: I estimate that $1-20M since 2012 is probably still missing, since I haven't included donors who work with Effective Giving, Generation Pledge, or Founders Pledge. Allocation of cause-specific efforts: You may disagree with the discounting I've done for different cause-specific projects (in either direction). If you think I'm underweighting those efforts, then you could consider that "missing" data. The most accurate way to do these estimates would be to ask movement-building organizations for their annual expenses and to break down the sources of their funding. This information is not publicly available, and some organizations do not publish annual expenses publicly, from which you might make initial guesses. I'd encourage organizations to share their numbers to give us a fuller picture of the landscape. Mistakes & reflections I didn't expect this post to be read by as many people as it was. If I'd known this in advance, I think it's likely I would have delayed publication and sought more external feedback, because concrete numbers can be sticky, and it's hard to update people's views on them. I noted that this was a preliminary analysis in the opening, but the data may have been seen as more final than it was. In the future, I would spend more time hedging numbers and stating ranges of possible values, and encourage people to cite those instead of exact numbers. I didn't add enough uncertainty estimates to the numbers throughout the post. For example, I mentioned that the data was incomplete and provided an estimate of the total amount of funding ($250-280M); this was a moderately large underestimate (the new total of tracked data now stands at $290M).
I missed several sources of global health & wellbeing spending, which significantly increased total spend between 2012-2016. This ...
Recorded: February 28, 2023. We are excited to welcome Patrick Fine to this AidKonekt/Konektid USAID Market Series event to discuss USAID's localization initiative. He is the former FHI 360 CEO, former USAID Deputy Assistant Administrator for Africa, and a Brookings Institution Nonresident Senior Fellow. Patrick authored the recent Brookings article "Rethinking the constraints to localization of foreign aid". You can read that article here: brookings.edu/blog/future-development/2022/12/01/rethinking-the-constraints-to-localization-of-foreign-aid/ This conversation will be followed by a discussion with Christy Hollywood, Konektid's COO, on what this means for both new and established USAID implementing partners. We will discuss: - What are USAID's localization priorities? - How do these align with the New Partnerships Initiative? - Does this push differ from previous initiatives like USAID FORWARD? - How should USAID primes move forward, given the current conversation around local partners? Event Draft Agenda 40 min - Localization & USAID discussion with Patrick Fine 10 min - Conversation with Christy Hollywood on what this means for USAID partners 10 min - Q&A Panelist biographies: Patrick Fine, Nonresident Senior Fellow, Brookings Institution. Patrick Fine is an international development professional with deep experience in global health, education, humanitarian response, and building resilient communities. He has served in senior positions in both the public and nonprofit sectors, including as the CEO of FHI 360, one of the United States' preeminent diversified global development nonprofits, and as the vice president for compact operations at the Millennium Challenge Corporation. A retired member of the career foreign service, Fine served as the senior deputy assistant administrator in the Africa Bureau and as mission director in Afghanistan at the U.S. Agency for International Development (USAID). He is a frequent speaker and writer on international and human development.
Patrick Fine, former CEO, FHI 360 ABOUT THE HOST: The podcast is hosted by Mike Shanley, Founder and CEO of Konektid International and AidKonekt Data, the leading USAID partner support firms that help clients to grow their USAID funding portfolios. Mike has nearly two decades of experience in working with USAID projects and funding opportunities. His USAID funding expertise has been recognized by the largest aid and development associations, partners, and conferences, including by USAID, Devex, British Expertise International, AidEx, and by the Society for International Development-US where he also serves as Board Member. Websites: Konektid International: www.konektid.com AidKonekt Data: https://www.aidkonekt.com/usaid-biz-dev Connect with Mike Shanley: https://www.linkedin.com/in/konektid-usaid-specialists/
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: “Reframing Superintelligence” + LLMs + 4 years, published by Eric Drexler on July 10, 2023 on The AI Alignment Forum. Background In January 2019, FHI published Reframing Superintelligence, a book-length technical report on prospects for advanced AI. OpenAI published the first paper on GPT-2 a month later. Advances since then have been strange and rapid, and I'd like to revisit the report in light of what we have learned. In brief, I think that the abstract conceptual model of AI development and organization proposed in Reframing fits today's reality quite well, even though LLM-based technologies have diverged far from anything I'd anticipated. Below, you'll find an abstract of the abstract of the report, followed by a series of section-level mini-summaries with update comments. I've omitted sections that are either outside the intended focus of this article or are too broad and forward-looking to summarize. A significant impetus behind “Reframing Superintelligence” was to challenge a prevailing notion of advanced AI (equating superintelligent-level AI with a superintelligent agent), which has, in my view, been assigned disproportionate weight and skewed the balance of alignment research. The report offers an alternative framework that includes both risks and opportunities that are overlooked by agent-centric perspectives. Note that this reframing is additive rather than subtractive: My intention is not to disregard agent-focused concerns - their importance is assumed, not debated. Indeed, the AI services model anticipates a world in which dangerous superintelligent agents could emerge with relative ease, and perhaps unavoidably. 
My aim is to broaden the working ontology of the community to include systems in which superintelligent-level capabilities can take a more accessible, transparent, and manageable form: open agencies rather than unitary agents. This framework highlights different risks and expands the solution-space for familiar problems. Finally, when I refer to “LLMs”, please read this as encompassing multimodal models (GPT-4!), with considerations that carry over to a wider range of foundation models. Abstract of the Abstract “Reframing Superintelligence” reviews the concept of superintelligent AI systems as utility-driven agents and suggests expanding our ontology of superintelligence to include compositions of AI systems that can best be understood through their structures, relationships, development processes, and the services they can provide - services that can include AI research and development itself. This perspective gives rise to the “Comprehensive AI Services” (CAIS) model, which proposes general intelligence as a property of flexible systems of services in which task-focused agents are among the components. The CAIS model envisions AI services expanding toward asymptotically comprehensive superintelligent-level performance, including the service of providing new services in line with human objectives and informed by strong models of human (dis)approval. This reframing has broad implications for AI prospects, including AI safety and strategy, practical applications of advanced AI systems, and the fundamental relationship between goals and intelligence. In this context, the emergence of strongly self-modifying agents with superintelligent-level capabilities remains a concern, yet the desirability and potential instrumental value of such agents is greatly diminished. Section mini-summaries + updates 1. R&D automation provides the most direct path to an intelligence explosion Self-transforming AI agents have no natural role in recursive improvement.
A more direct path would instead involve AI-enabled AI development in which new capabilities are implemented without any system being self-modifying. Today's most striking applications of AI to AI develo...
Errata (July 16 2023) Since the original publication, I've made substantial improvements to the data in the post. The full account of changes is detailed in this post. The total funding tracked in the data increased to $290M (from $245M), and the main updates are: Several private donors shared (previously non-public) donation data; I believe I have gathered data from at least 50% of the major individual donors who have given to meta causes. I included meta global health & wellbeing spending, e.g. GiveWell, ACE and some animal orgs (at 30-50% discounts, since these organizations aren't explicitly focused on EA movement building but did contribute to the growth of the EA movement). This substantially increased total funding in the 2012-2016 period. I included some longtermist organizations such as FHI, GPI and Forethought Foundation which have done field building (at 40-60% discounted rates). This post is part of EA [...]
---
Outline:
(01:52) Key observations
(01:55) Overall picture
(02:07) Total funding 2012-2023 by known sources
(04:29) Breakdown of funding by source and time (known sources)
(04:35) 2012-2016
(07:47) 2017-2023
(12:18) Funders of EA groups (2015-2022)
(16:31) Appendix
(16:34) Breakdown of EA Infrastructure Fund (EAIF) funding sources
(18:57) Breakdown of Funding Table
---
First published: June 23rd, 2023
Source: https://forum.effectivealtruism.org/posts/nnTQaLpBfy2znG5vm/the-flow-of-funding-in-ea-movement-building
---
Narrated by TYPE III AUDIO.
---
Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
The World Health Organization announced last month that the West African country of Benin has eliminated the neglected tropical disease trachoma as a public health problem. Benin is the fifth country in WHO's African Region and one of 17 countries in the world to have eliminated trachoma as a public health problem. Joining me today to talk about trachoma and the achievement in Benin is Stephanie Palmer. Stephanie is the trachoma technical advisor for the global organization FHI 360. Watch the interview at Outbreak News TV
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Longtermists Are Pushing a New Cold War With China, published by Mohammad Ismam Huda on May 27, 2023 on The Effective Altruism Forum. Jacob Davis, a writer for the socialist political magazine Jacobin, raises an interesting concern that current longtermist initiatives in AI Safety are, in his assessment, escalating tensions between the US and China. This highlights a conundrum for the Effective Altruism movement, which seeks both to advance AI Safety and to avoid a great power conflict between the US and China. This is not the first time this conundrum has been raised; it has been explored on the forum previously by Stephen Clare. The key points Davis asserts are that: Longtermists were key players in President Biden's choice last October to place heavy controls on semiconductor exports. Key longtermist figures advancing export controls and hawkish policies against China include former Google CEO Eric Schmidt (through Schmidt Futures and the longtermist political fund Future Forward PAC), former congressional candidate and FHI researcher Carrick Flynn, as well as other longtermists in key positions at Georgetown's Center for Security and Emerging Technology and the RAND Corporation. Export controls have failed to limit China's AI research, but have wrought havoc on global supply chains and are seen as protectionist in some circles. I hope this linkpost opens up a debate about the merits and weaknesses of current strategies and views in longtermist circles. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
If you're enjoying the content, please like, subscribe, and comment! Please consider supporting the show! https://anchor.fm/worldxppodcast/support Ian Evans is a Program Officer for the Sports, Cultural and Youth programs at FHI 360. His team serves as the implementing partner for the U.S. Department of State Sports Visitor and Sports Envoy program. He works closely with U.S. and international non-elite youth athletes. His passion and area of focus consists of utilizing youth sports as a vehicle to promote intercultural relationships and communication. He is a former collegiate and semi-professional soccer player and coach. ______________________ Follow us! @worldxppodcast Instagram - https://bit.ly/3eoBwyr @worldxppodcast Twitter - https://bit.ly/2Oa7Bzm Spotify - http://spoti.fi/3sZAUTG Apple Podcasts - http://apple.co/30uGTny Google Podcasts - http://bit.ly/3v8CF2U Anchor - http://bit.ly/3qGeaH7 YouTube - http://bit.ly/3rxDvUL #pingpongdiplomacy #youth #youthsports #coaching #sports #diplomacy #statedepartment #studentexchange #students #envoy #diplomat #embassy #podcastshow #longformpodcast #podcasts #podcaster #newpodcast #podcasting #newshow #worldxppodcast --- Support this podcast: https://podcasters.spotify.com/pod/show/worldxppodcast/support
Ellie Price is a corporate credit analyst at S&P Global Ratings, and was previously an Associate Director in public finance at Fitch Ratings. Before that, she was the Chief Operating Officer at Iris Light Technologies, a silicon photonics startup company spun out of Argonne National Lab. While at the University of Chicago's Harris School of Public Policy, Price worked as an investment banking summer associate at Credit Suisse, and apprenticed as a venture capital associate. Prior to grad school, Price directed The Locus Coalition of 14 NGOs, convening international development funders, policymakers, and practitioners to promote the design and evaluation of evidence-based, locally owned, and integrated global development programs. Price represented Locus members' best practices as a speaker and moderator on panels at the UN, think tanks and other forums, and she oversaw all operations and activities of the Coalition and its Research and Learning Working Groups. Price also served as co-chair of the Society for International Development Young Professionals Network Washington DC Chapter. Prior to Locus, Price was a Crisis Response Program Officer at FHI 360, where she supported the launch of a humanitarian response unit and rapid response capabilities with the US Office of Foreign Disaster Assistance in Borno State, Nigeria. She also served as an Urban Leaders Fellow in Denver, Colorado. Price holds a Master's in Public Policy from the University of Chicago, a bachelor's degree in international development studies from Calvin College, and studied East African politics and religion at Uganda Christian University. During her undergraduate studies, Price performed research, fundraising strategy, and project design for nonprofits working with immigrant populations in Michigan. As a consultant for a healthcare management firm, she designed an impact evaluation for its international nonprofit arm, adapting training materials for healthcare managers in Dubai and sub-Saharan Africa.
Ellie is on Twitter. The songs picked by all our guests can be found via our playlist #walktalklisten here. Please let me/us know via our email innovationhub@cwsglobal.org what you think about this new series. We would love to hear from you. Please like/follow our Walk Talk Listen podcast and follow mauricebloem on twitter and instagram. Or check us out on our website 100mile.org. We also encourage you to check out the special WTL series Enough for All about an organization called CWS. The 11th 100 mile walk campaign will continue until the summer; find more info via the 100mile.org website. Or go straight to our fundraising page.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The 'Wild' and 'Wacky' Claims of Karnofsky's ‘Most Important Century', published by Spencer Becker-Kahn on April 26, 2023 on The Effective Altruism Forum. Holden Karnofsky describes the claims of his “Most Important Century” series as “wild” and “wacky”, but at the same time purports to be in the mindset of “critically examining” such “strange possibilities” with “as much rigour as possible”. This emphasis is mine, but for what is supposedly an important piece of writing in a field that has a big part of its roots in academic analytic philosophy, it is almost ridiculous to suggest that this examination has been carried out with 'as much rigour as possible'. My main reactions - which I will expand on in this essay - are that Karnofsky's writing is in fact distinctly lacking in rigour; that his claims are too vague or even seem to shift around; and that his writing style - often informal, or sensationalist - aggravates the lack of clarity while simultaneously putting the goal of persuasion above that of truth-seeking. I also suggest that his emphasis on the wildness and wackiness of his own "thesis" is tantamount to an admission of bias on his part in favour of surprising or unconventional claims. I will start with some introductory remarks about the nature of my criticisms and of such criticism in general. Then I will spend some time trying to point to various instances of imprecision, bias, or confusion. And I will end by asking whether any of this even matters or what kind of lessons we should be drawing from it all. Notes: Throughout, I will quote from the whole series of blog posts by treating them as a single source rather than referencing each post separately. Note that the series appears as a single pdf here (so one can always Ctrl/Cmd+F to jump to the part I am quoting).
It is plausible that some of this post comes across quite harshly, but none of it is intended to constitute a personal attack on Holden Karnofsky or an accusation of dishonesty. Where I have made errors or have misrepresented others, I welcome any and all corrections. I also generally welcome feedback on the writing and presentation of my own thoughts, either privately or in the comments. Acknowledgements: I started this essay a while ago and so during the preparation of this work, I have been supported at various points by FHI, SERI MATS, BERI and Open Philanthropy. The development of this work benefitted significantly from numerous conversations with Jennifer Lin. 1. Broad Remarks About My Criticisms If you felt and do feel convinced by Karnofsky's writings, then upon hearing about my reservations, your instinct may be to respond with reasonable-seeming questions like: 'So where exactly does he disagree with Karnofsky?' or 'What are some specific things that he thinks Karnofsky gets wrong?'. You may well want to look for wherever it is that I have carefully categorized my criticisms, to scroll through to find all of my individual object-level disagreements so that you can see if you know the counterarguments that mean that I am wrong. And so it may be frustrating that I will often sound like I am trying to weasel out of having to answer these questions head-on, or like I am not putting much weight on the fact that I have not laid out my criticisms in that way. Firstly, I think that the main issues to do with clarity and precision that I will highlight occur at a fundamental level.
It is not that they are 'more important' than individual, specific, object-level disagreements, but I claim that Karnofsky does a sufficiently poor job of explaining his main claims, the structure of his arguments, the dependencies between his propositions, and in separating his claims from the verifications of those claims, that it actually prevents detailed, in-depth discussions of object-level disagreements from making much sense...
Ah, wonderful: today's guest produces titles like Edgar Allan Poe in his prime: “Der LKW-Killer”, “Der Tod kommt per Post”, “Der Sandmann”. And so on. We are talking, of course, about Philipp Fleiter, the mind behind the true-crime podcast "Verbrechen von nebenan", one of Germany's most successful podcasts. In his biweekly format, Philipp recounts true criminal cases from the neighborhood, placing the greatest value on authenticity and fact-checking. What makes his podcast special is that he does not limit himself to spectacular cases, but also sheds light on lesser-known crimes. And as it goes once success sets in, things keep moving merrily in every direction: TV shows, live appearances, books! Philipp is a sought-after expert whenever crime is the topic. If someone had predicted this to him a few years ago, he probably would not have believed it himself. So let's dig into where his passion for the subject comes from. And more importantly: where does the remarkable professionalism with which he prepares his cases come from? Did it take a youth in Gütersloh, what stood in the way, and was the goal always in sight? This episode is presented by: **Iberostar Selection Llaut Palma** All-inclusive concept for active and adventurous adults who want to discover Mallorca [https://www.iberostar.com](https://www.iberostar.com/de/hotels/mallorca/iberostar-llaut-palma/?v=FHI&utm_source=Podcast&utm_medium=HostReadAds&utm_campaign=llaut23&utm_id=IBSMKTCON_EMEA_BRANDCNT_CUNA_DE_DE_ESP_BAL_PMI_LLT_MIX_NA_NA_NA_NA_NA) More about the podcast: FB: https://www.facebook.com/daszielistimweg Instagram: https://www.instagram.com/andreas.loff
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What organizations other than Conjecture have (esp. public) info-hazard policies?, published by David Scott Krueger on March 16, 2023 on The AI Alignment Forum. I believe Anthropic has said they won't publish capabilities research? OpenAI seems to be sort of doing the same (although no policy AFAIK). I heard FHI was developing one way back when... I think MIRI sort of does as well (default to not publishing, IIRC?) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
TODAY'S GUEST Dr. Shahar Avin is a Senior Research Associate at the Centre for the Study of Existential Risk (CSER) at Cambridge University, focusing primarily on risks associated with artificial intelligence and deep learning algorithms. EPISODE SUMMARY We discuss: Shahar's unique approach of discovering truths through simulation and gaming. His insights into how to fix the broken system of science funding. The existential risks associated with the rise of machine intelligence. Shahar's project, Intelligence Rising, in collaboration with Oxford University's Future of Humanity Institute and our own Remake Labs, to create a strategic scenario game that will help educate decision makers on the possible risks and effects of the accelerating development in AI capabilities in the next few years. EPISODE LINKS Dr. Shahar Avin's Links
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Against EA-Community-Received-Wisdom on Practical Sociological Questions, published by Michael Cohen on March 9, 2023 on The Effective Altruism Forum. In my view, there is a rot in the EA community that is so consequential that it inclines me to discourage effective altruists from putting much, if any, trust in EA community members, EA "leaders", the EA Forum, or LessWrong. But I think that it can be fixed, and the EA movement would become very good. In my view, this rot comes from incorrect answers to certain practical sociological questions, like: How important for success is having experience or having been apprenticed to someone experienced? Is the EA Forum a good tool for collaborative truth-seeking? How helpful is peer review for collaborative truth-seeking? Meta-1. Is "Defer to a consensus among EA community members" a good strategy for answering practical sociological questions? Meta-2. How accurate are conventional answers to practical sociological questions that many people want to get right? I'll spend a few sentences attempting to persuade EA readers that my position is not easily explained away by certain things they might call mistakes. Most of my recent friends are in the EA community. (I don't think EAs are cringe). I assign >10% probability to AI killing everyone, so I'm doing technical AI Safety research as a PhD student at FHI. (I don't think longtermism or sci-fi has corrupted the EA community). I've read the sequences, and I thought they were mostly good. (I'm not "inferentially distant"). I think quite highly of the philosophical and economic reasoning of Toby Ord, Will MacAskill, Nick Bostrom, Rob Wiblin, Holden Karnofsky, and Eliezer Yudkowsky. (I'm "value-aligned", although I object to this term). Let me begin with an observation about Amazon's organizational structure. 
From what I've heard, Team A at Amazon does not have to use the tool that Team B made for them. Team A is encouraged to look for alternatives elsewhere. And Team B is encouraged to make the tool into something that they can sell to other organizations. This is apparently how Amazon Web Services became a product. The lesson I want to draw from this is that wherever possible, Amazon outsources quality control to the market (external people) rather than having internal "value-aligned" people attempt to assess quality and issue a pass/fail verdict. This is an instance of the principle: "if there is a large group of people trying to answer a question correctly (like 'Is Amazon's tool X the best option available?'), and they are trying (almost) as hard as you to answer it correctly, defer to their answer." That is my claim; now let me defend it, not just by pointing at Amazon, and claiming that they agree with me. High-Level Claims Claim 1: If there is a large group of people trying to answer a question correctly, and they are trying (almost) as hard as you to answer it correctly, any consensus of theirs is more likely to be correct than you. There is extensive evidence (Surowiecki, 2004) that aggregating the estimates of many people produces a more accurate estimate as the number of people grows. It may matter in many cases that people are actually trying rather than just professing to try. If you have extensive and unique technical expertise, you might be able to say no one is trying as hard as you, because properly trying to answer the question correctly involves seeking to understand the implications of certain technical arguments, which only you have bothered to do. There is potentially plenty of gray area here, but hopefully, all of my applications of Claim 1 steer well clear of it. Let's now turn to Meta-2 from above. 
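The statistical core of Claim 1 is the "wisdom of crowds" effect that Surowiecki documents: averaging many independent estimates tends to beat any single estimate. A minimal simulation illustrates this; the sketch is mine, not from the post, and it assumes a simple noise model (independent, unbiased Gaussian errors around the true value):

```python
import random

random.seed(0)
TRUE_VALUE = 100.0

def estimate():
    # One person's guess: unbiased but noisy (std dev 20).
    return TRUE_VALUE + random.gauss(0, 20)

def crowd_error(n, trials=2000):
    # Average absolute error of the mean of n independent guesses.
    total = 0.0
    for _ in range(trials):
        mean = sum(estimate() for _ in range(n)) / n
        total += abs(mean - TRUE_VALUE)
    return total / trials

for n in (1, 10, 100):
    print(f"crowd of {n}: avg error {crowd_error(n):.2f}")
```

The error of the crowd mean shrinks roughly as 1/sqrt(n), which is why the consensus of a large group of people genuinely trying to get the answer right is hard to beat. Note the assumption doing the work: errors must be independent and not systematically biased, which is exactly where real-world deference can fail.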
Claim 2: For practical sociological questions that many people want to get right, if there is a conventional answer, you should go with the conventional answer....
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nick Bostrom should step down as Director of FHI, published by BostromAnonAccount on March 4, 2023 on The Effective Altruism Forum. Nick Bostrom should step down as Director of FHI. He should move into a role as a Senior Research Fellow at FHI, and remain a Professor of Philosophy at Oxford University. I don't seek to minimize his intellectual contribution. His seminal 2002 paper on existential risk launched a new sub-field of existential risk research (building on many others). The 2008 book on Global Catastrophic Risks he co-edited was an important part of bringing together this early field. 2014's Superintelligence put AI risk squarely onto the agenda. And he has made other contributions across philosophy from human enhancement to the simulation hypothesis. I'm not denying that. I'm not seeking to cancel him and prevent him from writing further papers and books. In fact, I want him to spend more time on that. But I don't think he's been a particularly good Director of FHI. These difficulties are demonstrated by and reinforced by his Apology. I think he should step down for the good of FHI and the field. This post has some hard truths and may be uncomfortable reading, but FHI and the field are more important than that discomfort. Pre-existing issues Bostrom was already struggling as Director. In the past decade, he's churned through 5-10 administrators, due to his persistent micromanagement. He discouraged investment in the relationship with the University and sought to get around/streamline/reduce the bureaucracy involved with being part of the University. All of this contributed to the breakdown of the relationship with the Philosophy Faculty (which FHI is a part of). 
This led the Faculty to impose a hiring freeze a few years ago, preventing FHI from hiring more people until they had resolved administrative problems. Until then, FHI could rely on a constant churn of new people to replace the people burnt out and/or moving on. The hiring freeze stopped the churn. The hiring freeze also contributed in part to the end of the Research Scholars Program and Cotton-Barratt's resignation from FHI. It also contributed in part to the switch of almost all of the AI Governance Research Group to the Center for the Governance of AI. Apology Then in January 2023, Bostrom posted an Apology for an Old Email. In my personal opinion, this statement demonstrated his lack of aptitude and lack of concern for his important role. These are sensitive topics that need to be handled with care. But the Apology had a glib tone, reused the original racial slur, seemed to indicate he was still open to discredited ‘race science' hypotheses, and had an irrelevant digression on eugenics. I personally think these are disqualifying views for someone in his position as Director. But also, any of these issues would presumably have been flagged by colleagues or a communications professional. It appears he didn't check this major statement with anyone or seek feedback. Being Director of a major research center in an important but controversial field requires care, tact, leadership and attention to downside risks. The Apology failed to demonstrate that. The Apology has had the effect of complicating many important relationships for FHI: with the University, with staff, with funders and with collaborators. Bostrom will now struggle even more to lead the center. First, University. The Faculty was already concerned, and Oxford University is now investigating. 
Oxford University released a statement to The Daily Beast: “The University and Faculty of Philosophy is currently investigating the matter but condemns in the strongest terms possible the views this particular academic expressed in his communications. Neither the content nor language are in line with our strong commitment to diversity and equality.” B...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FYI there is a German institute studying sociological aspects of existential risk, published by Max Görlitz on February 12, 2023 on The Effective Altruism Forum. The institute is called Käte Hamburger Centre for Apocalyptic and Post-Apocalyptic Studies and is based in Heidelberg, Germany. They started in 2021 and initially received 9 million € of funding from the German government for the first four years. AFAICT, they study sociological aspects of narratives of apocalypses, existential risks, and the end of the world. They have engaged with EA thinking, and I assume they will have an interesting outside perspective of some prevalent worldviews in EA. For example, here is a recorded talk about longtermism (I have only skipped through it so far), which mentions MIRI, FHI, and What We Owe The Future. I stumbled upon this today and thought it could interest some people here. Generally, I am very curious to learn more about alternative worldviews to EA that also engage with existential risk in epistemically sound ways. One criticism of EA that became more popular over the last months is that EA organizations engage too little with other disciplines and institutions with relevant expertise. Therefore, I suggest checking out the work of this Centre. Please comment if you have engaged with them before and know more than I do. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
"Now it is completely dark," say Norwegian-Afghans about the Taliban's ban on education for women. They are asking the Norwegian government to impose more sanctions on Afghanistan's top leadership. SV wants pregnant women to be able to get an abortion at their GP's office. "Totally unrealistic," says GP Kaveh Rashidi. The government wants to cull three wolf packs inside the wolf zone. "Reprehensible and hostile to predators," snarls Venstre. Teachers in Sweden fear that tablets may mean children never learn to read. Is it time to bring back school books? This is Dagsnytt 18 on "bittelille julaften" (December 22nd), where among other things we will also talk with FHI about this year's flu wave. Gry Veiby. Listen to the episode in the NRK Radio app
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Take 6: CAIS is actually Orwellian., published by Charlie Steiner on December 7, 2022 on The AI Alignment Forum. As a writing exercise, I'm writing an AI Alignment Hot Take Advent Calendar - one new hot take, written every day for 25 days. Or until I run out of hot takes. CAIS, or Comprehensive AI Services, was a mammoth report by Eric Drexler from 2019. (I think reading the table of contents is a good way of getting the gist of it.) It contains a high fraction of interesting predictions and also a high fraction of totally wrong ones - sometimes overlapping! The obvious take about CAIS is that it's wrong when it predicts that agents will have no material advantages over non-agenty AI systems. But that's long been done, and everyone already knows it. What not everyone knows is that CAIS isn't just a descriptive report about technology; it also contains prescriptive implications, and relies on predictions about human sociocultural adaptation to AI. And this future that it envisions is Orwellian. This isn't totally obvious. Mostly, the report consists of semi-technical arguments about AI capabilities. But even if you're looking for the parts of the report about what AI capabilities people will or should develop, or even the parts that sound like predictions about the future, they sound quite tame. It envisions that humans will use superintelligent AI services in contexts where defense trumps offense, and where small actors can't upset the status quo and start eating the galaxy.
The CAIS worldview expects us to get to such a future because humans are actively working for it - no AI developer, or person employing AI developers, wants to get disassembled by a malevolent agent, and so we'll look for solutions that shape the future such that that's less likely (and the technical arguments claim that such solutions are close to hand). If the resulting future looks kinda like business as usual - in terms of geopolitical power structure, level of human autonomy, maybe even superficial appearance of the economy - it's because humans acted to make it happen: they wanted business as usual. Setting up a defensive equilibrium where new actors can't disrupt the system is hard work. Right now, just anyone is allowed to build an AI. This capability probably has to be eliminated for the sake of long-term stability. Ditto for people being allowed to have unfiltered interaction with existing superintelligent AIs. Moore's law of mad science says that the IQ needed to destroy the world drops by 1 point every 18 months. In the future where that IQ is 70, potentially world-destroying actions will have to be restricted if we don't want the world destroyed. In short, this world where people successfully adapt to superintelligent AI services is a totalitarian police state. The people who currently have power in the status quo are the ones who are going to get access to the superintelligent AI, and they're going to (arguendo) use it to preserve the status quo, which means just a little bit of complete surveillance and control. Hey, at least it's preferable to getting turned into paperclips. These implications shouldn't surprise you too much if you know that Eric Drexler produced this report at FHI, and remember the works of Nick Bostrom. In fact, also in 2019, Bostrom published The Vulnerable World Hypothesis, which much more explicitly lays out the arguments for why adaptation to future technology might look like a police state. 
Now, one might expect an Orwellian future to be unlikely (even if we suspend our disbelief about the instability of the system to an AI singleton). People just aren't prepared to support a police state - especially if they think "it's necessary for your own good" sounds like a hostile power-grab. On the other hand, the future elites will have advanced totalitarianism-enabling tech...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Differential technology development: preprint on the concept, published by Hamish Hobbs on September 12, 2022 on The Effective Altruism Forum. Summary We have published a preprint of a paper describing the concept of differential technology development and exploring when and how its implementation may be viable. This manuscript has been in the works at FHI for a while, so we would love your feedback! Authors: Jonas B. Sandbrink, Hamish Hobbs, Jacob L. Swett, Allan Dafoe, Anders Sandberg Further details Nick Bostrom articulated the concept of Differential Technological Development in Superintelligence. This concept, and the concept of differential progress more broadly, has already been fairly widely discussed and written about, but within a limited community. This includes here on the EA Forum. The concept is typically used to argue that it would be beneficial to accelerate risk-reducing technological progress and retard risk-increasing technological progress, to manage potentially catastrophic or existential technological risks such as those from advanced AI systems and biotechnologies. With this paper, we attempt to thoroughly explore the concept, make it accessible to the broader research and policy community, and explore the contexts in which its implementation is likely to be viable. To our knowledge, this is the first academic article seeking to comprehensively articulate a principle of differential technology development. In the paper we argue that: Responsible innovation efforts to date have largely focussed on shaping individual technologies. However, as demonstrated by the preferential advancement of low-emission technologies, certain technologies reduce risks from other technologies or constitute low-risk substitutes. 
Governments and other relevant actors may leverage risk-reducing interactions across technology portfolios to mitigate risks beyond climate change. We adapt Nick Bostrom's original concept to propose a responsible innovation principle of “differential technology development”, which calls relevant actors to leverage risk-reducing interactions between technologies by affecting their relative timing. Thus, it may be beneficial to delay risk-increasing technologies and preferentially advance risk-reducing defensive, safety, or substitute technologies. Implementing differential technology development requires the ability to anticipate or identify impacts and intervene in the relative timing of technologies. We find that both are sometimes viable and that differential technology development may still be usefully applied even late in the diffusion of some harmful technologies. A principle of differential technology development may inform government research funding priorities and technology regulation, as well as philanthropic research and development funders and corporate social responsibility measures. Differential technology development may be particularly promising to mitigate potential catastrophic risks from emerging technologies like synthetic biology and artificial intelligence. While writing the paper, we considered alternative terms to describe the concept of differential technology development. Our leading alternatives were “responsible innovation sequencing” or “differential progress”. However, in the end we decided that differential technological development was the best fit for integrating this article into the responsible innovation literature. That said, we do see merit to either defining technology broadly to include “not only gadgets but also methods, techniques and institution design principles” or to extending the concept to include forms of differential development beyond technology development. 
Amongst other things, the paper outlines four relevant categories of technologies: Risk-increasing technologies cause negative societal imp...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing the Existential Risks Introductory Course (ERIC), published by Nandini Shiralkar on August 19, 2022 on The Effective Altruism Forum. Crossposted to LessWrong. Introduction We (the Cambridge Existential Risks Initiative) ran an Existential Risks Introductory Course (ERIC) in the first quarter of 2022, aiming to introduce the field of existential risks, without being explicitly associated with any particular philosophy. We expect the programme to be most useful to people who are new to this field, and we hypothesised that we may be able to reach a different target audience by not explicitly branding it as EA. The full curriculum we used for the programme, along with exercises and organisation spotlights, can be found here. This was primarily designed by Callum McDougall, with some inputs from the rest of the CERI team. If you are interested in joining the next iteration of the course in Winter 2022 (either as a participant or as a facilitator), please fill out this interest form. This post contains an overview of the course, which is followed by an abbreviated version of the syllabus for the ease of gathering feedback. The weekly summaries may also be helpful for community builders looking for summaries of any of the core readings from our syllabus. We welcome any feedback on the content, exercises or anything else pertaining to the course, either here publicly on the Forum, or you can also reach out to us privately if you prefer that. Course overview The course consists of 8 weeks of reading (split into core and applied). Some weeks also include exercises, which participants are encouraged to complete and discuss in the session. Each week, participants will meet for 1.5 hour sessions where they will discuss the material and exercises with a facilitator. 
The topics for each week are as follows:
Week 1: Introduction to Existential Risks. Provides an introduction to x-risks, why they might be both highly important and neglected, and introduces some important terminology.
Week 2: Natural & Anthropogenic Risks. Discusses natural risks, and risks from nuclear war and climate change.
Week 3: Biosecurity, And How To Think About Future Risks. Discusses risks from engineered pandemics, as well as a broader look at future risks in general and how we can reason about them and prepare for them.
Week 4: Unaligned Artificial Intelligence. Discusses risks from unaligned AI, and provides a brief overview of the different approaches that are being taken to try and solve the problem.
Week 5: Dystopias, Lock-in & Unknown Unknowns. Concludes the discussion of specific risks by discussing some more neglected risks. Also includes a discussion of the “unknown unknowns” problem, and how we can categorise and assess probabilities of risks.
Week 6: Forecasting & Decision-making. Moves away from specific risks, and discusses broad strategies that can help mitigate a variety of risks, with a focus on improving forecasting and decision-making (both at the institutional and individual level).
Week 7: Different Frameworks for Existential Risk. Further explores some alternative frameworks for x-risks than those found in The Precipice, e.g. FHI's origin/scaling/endgame model, and the “Democratising Risk” paper.
Week 8: Next Steps. Concludes the fellowship with a lookback on the key themes in the material, and a discussion of how the fellows plan to put what they've learned into action (e.g. in their future careers).
Abbreviated curriculum (Core readings)
Week 1: Introduction to Existential Risks
The first group of core materials here outlines the key ideas of Toby Ord's book The Precipice, that we may be living in a uniquely important and dangerous time thanks to the threat of existential risks. What are the most important moral problems of our time? (10 mins.) 
The Precipice: Introduction, Chapters 1 & 2...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Interesting vs. Important Work - A Place EA is Prioritizing Poorly, published by Davidmanheim on July 28, 2022 on The Effective Altruism Forum. There are many important issues in the world, and many interesting topics. But these are not the same thing, and we should beware of suspicious convergence. Given that, our assumption should be that the most interesting topics we hear about are far less important than the attention they receive. Heeding Scott Alexander's recent warning, I'll therefore ask much more specifically, what are the most intellectually interesting topics in Effective Altruism, and then I'll suggest that we should be doing less work on them - and list a few concrete suggestions for how to do that. What are the interesting things? Here are some of my concrete candidates for most interesting work: infinite ethics, theoretical AI safety, rationality techniques, and writing high-level critiques of EA. And to be clear, all of these ARE important. But the number of people we need working on them should probably be more limited than the current trajectory, and we should probably de-emphasize status for the most theoretical work. To be clear, I love GPI, FHI, CSER, MIRI, and many other orgs doing this work. The people I know at each org are great, and I think that many of the things they do are, in fact, really important. And I like the work they do - not only do I think it's important, it's also SUPER interesting, especially to people who like philosophy, math, and/or economics. But the convergence between important and interesting is exactly the problem I'm pointing towards. Motivating Theoretical Model Duncan Sabien talks about Monks of Magnitude, where different people work on things that have different feedback loop lengths, from 1 day, to 10-days, to people who spend 10,000 days thinking. 
He more recently mentioned that he noticed “people continuously vanishing higher into the tower,” that is, focusing on more abstract and harder to evaluate issues, and that very few people have done the opposite. One commenter, Ben Weinstein-Raun, suggested several reasons, among them that longer-loop work is more visible, and higher status. I think this critique fits the same model, where we should be suspicious that such long-loop work is over-produced. (Another important issue is that “it's easier to pass yourself off as a long-looper when you're really doing nothing,” but that's a different discussion.) The natural tendency to do work that is more conceptual and/or harder to pin to a concrete measurable outcome is one we should fight back against, since by default it is overproduced. The basic reason it is overproduced is because people who are even slightly affected by status or interesting research, i.e. everyone, will give it at least slightly more attention than warranted, and further, because others are already focused on it, the marginal value is lower. This is not to say that the optimal amount of fun and interesting research is zero, nor that all fun and interesting work is unimportant. We do need 10,000 day monks - and lots of interesting questions exist for long-termism that make them significant moral priorities. And I agree with the argument for a form of long-termism. But this isn't a contradiction - and work on long-termism can be concrete and visible, isn't necessarily conceptual, and doesn't necessarily involve slow feedback loops. Towards fixing the problem Effective Altruism needs to be effective, and that means we need evaluable outputs wherever possible. First, anyone and everyone attempting to be impactful needs a theory of change, and an output that has some way of impacting the world. That means everyone, especially academics and researchers, should make this model clear, at least to themselves, but ideally also to others. 
If you're writing f...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Probably Shouldn't Try to Exercise Direct Political Power, published by iamasockpuppet on July 21, 2022 on The Effective Altruism Forum. Last May, EA-aligned donors helped to make Carrick Flynn's campaign in OR-06 one of the best funded primary campaigns in US electoral history. Flynn, a researcher at FHI and CSET, lost the primary, receiving about half as many votes as the winner despite support from the EA community. This prompted several further analyses on the EA forum: Some potential lessons from Carrick's Congressional bid; Early spending research and Carrick Flynn; Yglesias on EA and politics. Virtually all of the initial analysis has focused on ways that EA can better win future political races. I believe that it would be harmful to try; that EA as a movement attempting to hold direct political power as elected officials would be somewhere between neutral and harmful; and that seeking to influence existing non-EA elected officials would be more effective. The arguments on the EA Forum in favor of Flynn's election were wrong, even with a small chance of success:
> The Biden administration released a fantastic $65 billion plan that aims to prevent future pandemics. Congress has funded practically none of it. Part of the problem is that nobody in congress has made pandemic preparedness a ‘core issue.' Congressional members don't oppose the president's plan, and there are some standout champions, but none of them are trying to get it passed with the desperation that I think the issue warrants.
> My sense is if Carrick had won, he could have done a lot of good – in particular, advancing pandemic prevention (e.g., via participating in bill markups), with an outside chance of getting Biden's pandemic prevention plan enacted. 
These comments are incorrect; Carrick Flynn's election would likely not have had much influence on advancing the pandemic prevention plan. 538 currently forecasts an 87% chance that Republicans control the House after the 2022 elections; this would likely leave Flynn more-or-less irrelevant for the next two years. The Democrats currently hold the House, yet have not passed their own President's pandemic plan. Nobody, in any of the comments above or elsewhere, seems to have any idea why; as a result, the arguments above are remarkably vague. The absence of domain knowledge from this conversation is really bad! (I don't claim to be an expert on politics, to be clear; it is entirely possible that the explanations I offer below are wrong. But EA extensively discussed the Flynn campaign and moved significant amounts of money, seemingly without a very basic public analysis of the facts on the ground.) Each fiscal year's federal budget is (supposedly) written and passed by April 15 of that year. There is extensive advanced planning for the budget. For FY 2022, the White House released its budget request April 9th, 2021, too soon for the White House pandemic plan, released September 2021, to be included. The recently released FY 2023 budget request does include funding for the pandemic plan. Carrick Flynn would be unlikely to have much influence over the FY 2023 budget, since he wouldn't even be in the House until most of the way through the negotiations, even assuming that he won the primary, won the general, and the GOP didn't hold the House. (New representatives take office January 3rd; the House passed its FY 2022 budget March 9th of last year.) The most likely way for the budget to exclude the pandemic plan if the Democrats hold ...
In this episode, Shavonne sits down with LaVerne Canady, the Director of Programs at FHI 360, an international nonprofit working to improve the health and well-being of people in the United States and around the world. LaVerne oversees a portfolio of projects for FHI 360's Social Marketing & Communication unit promoting positive social and behavior change for public health issues including diabetes, obesity, nutrition, and youth violence prevention. Some of her current projects include the National Collaborative on Childhood Obesity Research and a racial truth and reconciliation project in Jefferson County, Alabama, which she touches on in this conversation. You'll learn more about the opportunity for social marketing to influence positive change and advance health and racial equity, LaVerne's story of getting into this field of work, the hope she sees for our future generations, and more. RESOURCES RELATED TO THIS EPISODE Visit FHI360.org Follow LaVerne Canady on LinkedIn Linkedin.com/in/lavernecanady Email lcanady@fhi360.org CREDITS Theme Music
Closing the schools during the coronavirus pandemic was a mistake. So says chief physician Preben Aavitsland at FHI. Listen to the episode in the NRK Radio app
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Quick List of Some Problems in AI Alignment As A Field, published by NicholasKross on June 21, 2022 on LessWrong. 1. MIRI as central point of failure for... a few things... For the past decade or more, if you read an article saying "AI safety is important", and you thought, "I need to donate or apply to work somewhere", MIRI was the default option. If you looked at FLI or FHI or similar groups, you'd say "they seem helpful, but they're not focused solely on AI safety/alignment, so I should go to MIRI for the best impact." 2. MIRI as central point of failure for learning and secrecy. MIRI's secrecy (understandable) and their intelligent and creatively-thinking staff (good) have combined into a weird situation: for some research areas, nobody really knows what they've tried and failed/succeeded at, nor the details of how that came to be. Yudkowsky did link some corrigibility papers he labels as failed, but neither he nor MIRI have done similar (or more in-depth) autopsies of their approaches, to my knowledge. As a result, nobody else can double-check that or learn from MIRI's mistakes. Sure, MIRI people write up their meta-mistakes, but that has limited usefulness, and people still (understandably) disbelieve their approaches anyway. This leads either to making the same meta-mistakes (bad), or to blindly trusting MIRI's approach/meta-approach (bad because...) 3. We need more uncorrelated ("diverse") approaches to alignment. MIRI was the central point for anyone with any alignment approach, for a very long time. Recently-started alignment groups (Redwood, ARC, Anthropic, Ought, etc.) are different from MIRI, but their approaches are correlated with each other. They all relate to things like corrigibility, the current ML paradigm, IDA, and other approaches that e.g. Paul Christiano would be interested in. 
I'm not saying these approaches are guaranteed to fail (or work). I am saying that surviving worlds would have, if not way more alignment groups, definitely way more uncorrelated approaches to alignment. This need not lead to extra risk as long as the approaches are theoretical in nature. Think early-1900s physics gedankenexperiments, and how diverse they may have been. Or, if you want more hope and less hope at the same time, look at how many wildly incompatible theories have been proposed to explain quantum mechanics. A surviving world would have at least this much of a Cambrian explosion in theories, and would also be better at handling this than we are in real-life handling the actual list of quantum theories (in absence of better experimental evidence). Simply put, if evidence is dangerous to collect, and every existing theoretical approach is deeply flawed along some axis, then let schools proliferate with little evidence, dammit! This isn't psych, where stuff fails to replicate and people keep doing it. AI alignment is somewhat better coordinated than other theoretical fields... we just overcorrected to putting all our eggs in a few approach baskets. (Note: if MIRI is willing and able, it could continue being a/the central group for AI alignment, given the points in (1), but it would need to proliferate many schools of thought internally, as per (5) below.) One problem with this [1], is that the AI alignment field as a whole may not have the resources (or the time) to pursue this hits-based strategy. In that case, AI alignment would appear to be bottlenecked on funding, rather than talent directly. That's... news to me. In either case, this requires either more fundraising, and/or more money-efficient ways to get similar effects to what I'm talking about. (If we're too talent-constrained to pursue a hits-based approach strategy, it's even more imperative to fix the talent constraints first, as per (4) below.) 
Another problem is whether the "winning" approach mi...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why all the fuss about recursive self-improvement?, published by So8res on June 12, 2022 on LessWrong. This article was outlined by Nate Soares, inflated by Rob Bensinger, and then edited by Nate. Content warning: the tone of this post feels defensive to me. I don't generally enjoy writing in "defensive" mode, but I've had this argument thrice recently in surprising places, and so it seemed worth writing my thoughts up anyway. In last year's Ngo/Yudkowsky conversation, one of Richard's big criticisms of Eliezer was, roughly, ‘Why the heck have you spent so much time focusing on recursive self-improvement? Is that not indicative of poor reasoning about AGI?' I've heard similar criticisms of MIRI and FHI's past focus on orthogonality and instrumental convergence: these notions seem obvious, so either MIRI and FHI must be totally confused about what the big central debates in AI alignment are, or they must have some very weird set of beliefs on which these notions are somehow super-relevant. This seems to be a pretty common criticism of past-MIRI (and, similarly, of past-FHI); in the past month or so, I've heard it two other times while talking to other OpenAI and Open Phil people. This argument looks misguided to me, and I hypothesize that a bunch of the misguidedness is coming from a simple failure to understand the relevant history. I joined this field in 2013-2014, which is far from "early", but is early enough that I can attest that recursive self-improvement, orthogonality, etc. were geared towards a different argumentative environment, one dominated by claims like "AGI is impossible", "AGI won't be able to exceed humans by much", and "AGI will naturally be good". A possible response: “Okay, but ‘sufficiently smart AGI will recursively self-improve' and ‘AI isn't automatically nice' are still obvious. 
You should have just ignored the people who couldn't immediately see this, and focused on the arguments that would be relevant to hypothetical savvy people in the future, once the latter joined in the discussion.” I have some sympathy for this argument. Some considerations weighing against, though, are: I think it makes more sense to filter on argument validity, rather than “obviousness”. What's obvious varies a lot from individual to individual. If just about everyone talking about AGI is saying “obviously false” things (as was indeed the case in 2010), then it makes sense to at least try publicly writing up the obvious counter-arguments. This seems to assume that the old arguments (e.g., in Superintelligence) didn't work. In contrast, I think it's quite plausible that “everyone with a drop of sense in them agrees with those arguments today” is true in large part because these propositions were explicitly laid out and argued for in the past. The claims we take as background now are the claims that were fought for by the old guard. I think this argument overstates how many people in ML today grok the “obvious” points. E.g., based on a recent DeepMind Podcast episode, these sound like likely points of disagreement with David Silver. But even if you think this was a strategic error, I still think it's important to recognize that MIRI and FHI were arguing correctly against the mistaken views of the time, rather than arguing poorly against future views. Recursive self-improvement Why did past-MIRI talk so much about recursive self-improvement? Was it because Eliezer was super confident that humanity was going to get to AGI via the route of a seed AI that understands its own source code? I doubt it. My read is that Eliezer did have "seed AI" as a top guess, back before the deep learning revolution. But I don't think that's the main source of all the discussion of recursive self-improvement in the period around 2008. 
Rather, my read of the history is that MIRI was operatin...