POPULARITY
Welcome to a new episode of I Am House Radio with your host, Crystal Waters. Playing the best new House Music from around the world.
01. Crystal Waters x ManyFew - You & Me [IAH Records]
02. CASSIMM, Dark Dahlia - If You Want [Toolroom]
03. ManyFew ft. Lukas Setto - Never Enough (NYC Edit) [ManyFew Records]
04. Curtis Young - Put Em Up [Basement Sound]
05. ManyFew - DRIP [ManyFew Records]
06. Todd Terry, Gettoblaster, Will Cain - Give Me Your Energy (Extended Mix) [Toolroom Trax]
07. ManyFew x A-tribe - Feelin' [Mlab]
08. Kremerk & Mister Pancho - One More Time [Hysteria]
09. ZOFIA - Belong (Extended Mix) [CYN Records]
10. Moon Boots - Hold Me Tight [FOREVER DAYS]
11. Double B & Cas - We Want You [G Music]
12. ManyFew - RAW (Remode Extended Mix) [ManyFew Records]
13. Shiba San & Zack Darza - Where We Started [Basement Leak]
14. ManyFew - Sacrifice (I Found Love) [Spinnin' Records]
15. Revival House Project ft. Kathy Brown - Dance To The Music (David Penn Dub Remix) [Revival Records]
16. ManyFew - All Souls Avenue [ManyFew Records]
Duration: 00:59:38 - Affaires étrangères - presented by Christine Ockrent - Who exactly is Elon Musk? Idealist or pragmatist, gambler or strategist, good genius and evil genius: his ambitions stretch from the US presidential campaign to the conquest of space. - Production: Luc-Jean Reynaud - Guests: Asma Mhalla, PhD in political science, research associate at the Laboratoire d'Anthropologie Politique of the EHESS, specialist in the geopolitics of tech and lecturer at Sciences Po and Polytechnique; Albéric Tellier, professor of innovation management at Université Paris-Dauphine and head of the M-Lab research team; Thierry Aimar, lecturer and researcher in economics at the Université de Lorraine and at Sciences Po Paris; David Colon, lecturer and researcher at Sciences Po
Radio Marija is a listener-supported radio station that carries the Word of God into the world. The voice of Radio Marija broadcasts 24 hours a day. In these programmes we strive to bring our listeners, as friends and regardless of their religious convictions, the Good News of Christ, the Gospel, and the clear teaching of the Catholic Church. We seek to deepen the experience of prayer and to offer a glimpse into the cultural diversity of all humanity. Around the world, Radio Marija operates on the basis of volunteer service. Freely giving one's talents and time for the glory of God and for the new evangelisation is part of Radio Marija's charism. It is a wonderful opportunity for anyone to put their talents to work in proclaiming the Gospel and to experience the joy of service. We believe that God will use in a special way every person who answers this call, so that great things may be accomplished in Latvia through Radio Marija. Radio Marija is also a family that unites people of different ages, denominations and social backgrounds, allowing everyone to belong and to contribute to proclaiming the Word of God and to a shared experience of prayer. "Refuge in God 24 hours a day" is the motto of Radio Marija Latvija. RML can be received in Rīga on 97.3, Liepāja 97.1, Krāslava 97.0 and Valka 93.2, as well as [via satellite receiver and internet apps](http://www.rml.lv/klausies/).
Duration: 00:59:02 - Affaires étrangères - presented by Christine Ockrent - Why is Taylor Swift, on tour in France this week, credited with such a capacity to help Joe Biden, who is running for re-election to the US presidency? What is it that leads so many fans to recognise themselves in the values of decency championed in rather bland lyrics? - Guests: Rym Momtaz, researcher in international relations at the International Institute for Strategic Studies; Elsa Grassy, associate professor of American studies at the University of Strasbourg, specialist in American popular music; Lauric Henneton, associate professor of anglophone studies at the Université de Versailles Saint-Quentin and columnist for Rolling Stone magazine; Albéric Tellier, professor of innovation management at Université Paris-Dauphine, head and member of the M-LAB laboratory
We present the 20th episode of the fifth, anniversary season of the sports talk show “eXi”, in which Jānis Celmiņš is joined by Ģirts Ankipāns, a former forward for the Latvian national ice hockey team who later served as general manager and head coach of the revived Rīgas “Dinamo” and is currently head coach of the Baltic champions HK “Mogo”/LSPA.
Duration: 00:58:44 - Entendez-vous l'éco ? - presented by Tiphaine de Rocquigny and Aliette Hovine - How can we explain the economic success of Taylor Swift, a global pop star with millions of fans? - Guests: Morgane Giuliani, independent journalist and author specialising in pop culture; Albéric Tellier, professor of innovation management at Université Paris-Dauphine, head and member of the M-LAB laboratory
Mannlegi þátturinn was broadcast today from Síðumúli, more precisely from Múlabær, a day-care centre for the elderly and people with disabilities. The idea for this first day centre in the country arose during the Year of the Elderly, 1982; the project was then developed in cooperation between three associations, and Múlabær opened in January 1983 as the country's first day-care centre for the elderly. Múlabær has a successful history, and largely the same parties have run it from the beginning. Today its board is appointed by SÍBS, the Reykjavík Association of Senior Citizens and the Reykjavík branch of the Red Cross. Today we spoke with Þórunn Bjarney Garðarsdóttir, director of Múlabær, about its operations and the history of Múlabær, which turned 40 last year. We then spoke with Edda Jónasdóttir, Sigurður Daníelsson and Ragnheiður Sigurðardóttir, who all make use of the service and the day care. Music in today's programme: Áður oft ég hef / Haukur Morthens and the Sigurd Jansen Orchestra (foreign melody, lyrics by Egill Bjarnason); Ljúfa vina / Ragnar Bjarnason and Sigrún Jónsdóttir (Jón Sigurðsson and Ólafur Gaukur Þórhallsson, lyrics by Indriði G. Þorsteinsson and Ólafur Gaukur); The Sun Ain't Gonna Shine Anymore / The Walker Brothers; Hringdansar / Kokkurinn (medley) / Jan Mórávek's accordion trio. HOSTS: GUNNAR HANSSON AND GUÐRÚN GUNNARSDÓTTIR
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Alignment Research Engineer Accelerator (ARENA): call for applicants, published by TheMcDouglas on November 7, 2023 on The Effective Altruism Forum. TL;DR Apply here for the third iteration of ARENA (Jan 8th - Feb 2nd)!

Introduction
We are excited to announce the third iteration of ARENA (Alignment Research Engineer Accelerator), a 4-week ML bootcamp with a focus on AI safety. Our mission is to prepare participants for full-time careers as research engineers in AI safety, e.g. at leading organizations or as independent researchers. The program will run from January 8th - February 2nd 2024[1], and will be held at the offices of the London Initiative for Safe AI. These offices are also being used by several safety orgs (BlueDot, Apollo, Leap Labs), as well as the current London MATS cohort and several independent researchers. We expect this to bring several benefits, e.g. facilitating productive discussions about AI safety & different agendas, and allowing participants to form a better picture of what working on AI safety can look like in practice. ARENA offers a unique opportunity for those interested in AI safety to learn valuable technical skills, work on their own projects, and make open-source contributions to AI safety-related libraries. The program is comparable to MLAB or WMLB, but extends over a longer period to facilitate deeper dives into the content, and more open-ended project work with supervision. For more information, see our website.

Outline of Content
The 4-week program will be structured as follows:

Chapter 0 - Fundamentals
Before getting into more advanced topics, we first cover the basics of deep learning, including basic machine learning terminology, what neural networks are, and how to train them. We will also cover some subjects we expect to be useful going forwards, e.g. using GPT-3 and 4 to streamline your learning, good coding practices, and version control. Note: participants can optionally skip this week and join us at the start of Chapter 1 instead, if they'd prefer this option and if we're confident that they are already comfortable with the material in this chapter.
Topics include: PyTorch basics; CNNs and Residual Neural Networks; Optimization (SGD, Adam, etc.); Backpropagation; Hyperparameter search with Weights and Biases; GANs & VAEs.
Duration: 5 days

Chapter 1 - Transformers & Interpretability
In this chapter, you will learn all about transformers, and build and train your own. You'll also study LLM interpretability, a field which has been advanced by Anthropic's Transformer Circuits sequence and open-source work by Neel Nanda. This chapter will also branch into areas more accurately classed as "model internals" than interpretability, e.g. recent work on steering vectors.
Topics include: GPT models (building your own GPT-2); Training and sampling from transformers; TransformerLens; In-context Learning and Induction Heads; Indirect Object Identification; Superposition; Steering Vectors.
Duration: 5 days

Chapter 2 - Reinforcement Learning
In this chapter, you will learn about some of the fundamentals of RL, and work with OpenAI's Gym environment to run your own experiments.
Topics include: Fundamentals of RL; Vanilla Policy Gradient; Proximal Policy Gradient; RLHF (& finetuning LLMs with RLHF); Gym & Gymnasium environments.
Duration: 5 days

Chapter 3 - Paper Replications
We will conclude this program with paper replications, where participants will get guidance and mentorship while they replicate a paper containing material relevant to this course. This should draw on many of the skills and much of the knowledge participants will have accumulated over the last 3 weeks.
Duration: 5 days

Below is a diagram of the curriculum as a whole, and the dependencies between sections. Note that this may change slightly in the lead-up to the program. Here is som...
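As a rough, illustrative sketch of the kind of hands-on material Chapter 2 points at (this is not taken from the ARENA curriculum itself; the environment choice and random placeholder policy are assumptions), a minimal Gymnasium episode loop might look like this:

```python
# Minimal Gymnasium episode loop with a random placeholder policy.
# Illustrative only; not part of the ARENA course materials.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)

total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()  # stand-in for a learned policy
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated

env.close()
print(f"Episode return: {total_reward}")
```

A policy-gradient or PPO exercise would replace the random `action_space.sample()` call with a learned policy network and an update step.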
Artificial General Intelligence (AGI) Show with Soroush Pour
We speak with Adam Gleave, CEO of FAR AI (https://far.ai). FAR AI's mission is to ensure AI systems are trustworthy & beneficial. They incubate & accelerate research that's too resource-intensive for academia but not ready for commercialisation. They work on everything from adversarial robustness to interpretability, preference learning, and more.
We talk to Adam about:
* The founding story of FAR as an AI safety org, and how it's different from the big commercial labs (e.g. OpenAI) and academia.
* Their current research directions & how they're going
* Promising agendas & notable gaps in AI safety research
Hosted by Soroush Pour. Follow me for more AGI content:
Twitter: https://twitter.com/soroushjp
LinkedIn: https://www.linkedin.com/in/soroushjp/
== Show links ==
-- About Adam --
Adam Gleave is the CEO of FAR, one of the most prominent not-for-profits focused on research towards AI safety & alignment. He completed his PhD in artificial intelligence (AI) at UC Berkeley, advised by Stuart Russell, a giant in the field of AI. Adam did his PhD on trustworthy machine learning and has dedicated his career to ensuring advanced AI systems act according to human preferences. Adam is incredibly knowledgeable about the world of AI, having worked directly as a researcher and now as leader of a sizable and growing research org.
-- Further resources --
* Adam
  * Website: https://www.gleave.me/
  * Twitter: https://twitter.com/ARGleave
  * LinkedIn: https://www.linkedin.com/in/adamgleave/
  * Google Scholar: https://scholar.google.com/citations?user=lBunDH0AAAAJ&hl=en&oi=ao
* FAR AI
  * Website: https://far.ai
  * Twitter: https://twitter.com/farairesearch
  * LinkedIn: https://www.linkedin.com/company/far-ai/
  * Job board: https://far.ai/category/jobs/
* AI safety training bootcamps:
  * ARENA: https://www.arena.education/
  * See also: MLAB, WMLB, https://aisafety.training/
* Research
  * FAR's adversarial attack on KataGo: https://goattack.far.ai/
* Ideas for impact mentioned by Adam
  * Consumer report for AI model safety
  * Agency model to support AI safety researchers
  * Compute cluster for AI safety researchers
* Donate to AI safety
  * FAR AI: https://www.every.org/far-ai-inc#/donate/card
  * ARC Evals: https://evals.alignment.org/
  * Berkeley CHAI: https://humancompatible.ai/
Recorded Oct 9, 2023
Here you'll hear about spontaneous choices and unexpected events that have taken Mari Pedersen Totland halfway around the world, and eventually back home again. She now works on securing skills for local businesses by getting more people to move to Bømlo and Sunnhordland. Relocation is one of the main themes at the business council's conference on 8 November – don't miss it! Today's episode is made in a commercial partnership with Bømlo Næringsråd. Bømlo Næringsråd has about 200 member companies from 24 different industries and is a strong interest organisation and partner in business-development projects on Bømlo. The council's purpose is to promote the interests of local businesses, create meeting places for companies in the municipality, take part in regional cooperation forums, and coordinate collaboration between schools and businesses. It markets the local business community and Bømlo as a place to live, and works actively to attract and retain talent for local businesses through targeted networking activities, particularly aimed at young people and newcomers in working life. On 8 November they are hosting the conference Bømlab(ru) – a conference about relocation and about building bridges between generations, industries and regions – a conference to which all companies in Sunnhordland are welcome. Check out: https://www.bomlonr.no/ --- Send in a voice message: https://podcasters.spotify.com/pod/show/sunnhordlandpodden/message
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There should be more AI safety orgs, published by Marius Hobbhahn on September 21, 2023 on The AI Alignment Forum. I'm writing this in my own capacity. The views expressed are my own, and should not be taken to represent the views of Apollo Research or any other program I'm involved with. TL;DR: I argue why I think there should be more AI safety orgs. I'll also provide some suggestions on how that could be achieved. The core argument is that there is a lot of unused talent and I don't think existing orgs scale fast enough to absorb it. Thus, more orgs are needed. This post can also serve as a call to action for funders, founders, and researchers to coordinate to start new orgs. This piece is certainly biased! I recently started an AI safety org and therefore obviously believe that there is/was a gap to be filled. If you think I'm missing relevant information about the ecosystem or disagree with my reasoning, please let me know. I genuinely want to understand why the ecosystem acts as it does right now and whether there are good reasons for it that I have missed so far. Why? Before making the case, let me point out that under most normal circumstances, it is probably not reasonable to start a new organization. It's much smarter to join an existing organization, get mentorship, and grow the organization from within. Furthermore, building organizations is hard and comes with a lot of risks, e.g. due to a lack of funding or because there isn't enough talent to join early on. My core argument is that we're very much NOT under normal circumstances and that, conditional on the current landscape and the problem we're facing, we need more AI safety orgs. By that, I primarily mean orgs that can provide full-time employment to contribute to AI safety but I'd also be happy if there were more upskilling programs like SERI MATS, ARENA, MLAB & co. Talent vs. capacity Frankly, the level of talent applying to AI safety organizations and getting rejected is too high. We have recently started a hiring round and we estimate that a lot more candidates meet a reasonable bar than we could hire. I don't want to go into the exact details since the round isn't closed but from the current applications alone, you could probably start a handful of new orgs. Many of these people could join top software companies like Google, Meta, etc. or even already are at these companies and looking to transition into AI safety. Apollo is a new organization without a long track record, so I expect the applications for other alignment organizations to be even stronger. The talent supply is so high that a lot of great people even have to be rejected from SERI MATS, ARENA, MLAB, and other skill-building programs that are supposed to get more people into the field in the first place. Also, if I look at the people who get rejected from existing orgs like Anthropic, OpenAI, DM, Redwood, etc. it really pains me to think that they can't contribute in a sustainable full-time capacity. This seems like a huge waste of talent and I think it is really unhealthy for the ecosystem, especially given the magnitude and urgency of AI safety. Some people point to independent research as an alternative. I think independent research is a temporary solution for a small subset of people. It's not very sustainable and has a huge selection bias. 
Hardly anyone with a family or with existing work experience is willing to take the risk. In my experience, women also have a disproportionate preference against independent research compared to men, so the gender balance gets even worse than it already is (this is only anecdotal evidence; I have not looked at this in detail). Furthermore, many people just strongly prefer working with others in the less uncertain, more regular environment of an organization, even if that organization is fairly...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing EffiSciences' AI Safety Unit, published by WCargo on June 30, 2023 on LessWrong. This post was written by Léo Dana, Charbel-Raphaël Ségerie, and Florent Berthet, with the help of Siméon Campos, Quentin Didier, Jérémy Andréoletti, Anouk Hannot, Angélina Gentaz, and Tom David. In this post, you will learn what EffiSciences' most successful field-building activities were, as well as our advice, reflections, and takeaways for field-builders. We also include our roadmap for the next year. Voilà.

What is EffiSciences?
EffiSciences is a non-profit based in France whose mission is to mobilize scientific research to overcome the most pressing issues of the century and ensure a desirable future for generations to come. EffiSciences was founded in January 2022 and is now a team of ~20 volunteers and 4 employees. At the moment, we are focusing on 3 topics: AI Safety, biorisks, and climate change. In the rest of this post, we will only present our AI safety unit and its results.

TL;DR: In one year, EffiSciences created and held several AIS bootcamps (ML4Good), taught accredited courses in universities, and organized hackathons and conferences in France's top research universities. We reached 700 students, 30 of whom are already orienting their careers into AIS research or field building. Our impact was found to come as much from kickstarting students as from upskilling them. And we are in a good position to become an important stakeholder in French universities on those key topics.

Field-building programs
- Machine Learning for Good bootcamp (ML4G): parts of MLAB and AGISF condensed into a 10-day bootcamp (very intense). 2 ML4G, 36 participants, 16 are now highly involved. This program was reproduced in Switzerland and Germany with the help of EffiSciences.
- Turing Seminar: AGISF-adapted accredited course taught with talks, workshops, and exercises. 3 courses in France's top 3 universities: 40 students attended, 5 are now looking to upskill, and 2 will teach the course next year.
- AIS Training Days: the Turing Seminar compressed into a single day (new format). 3 iterations, 45 students.
- EffiSciences' educational hackathons: a hackathon to introduce robustness to distribution change and goal misgeneralization. 2 hackathons, 150 students, 3 are now highly involved.
- Apart Research's hackathons: we hosted several Apart Research hackathons, mostly with people already onboarded. 4 hackathons hosted, 3 prizes won by EffiSciences' teams.
- Conferences: introductions to AI risks. 250 students were reached, ~10 are still in contact with us.
- Lovelace program: self-study groups on AIS. 4 groups of 5 people each, which did not work well for upskilling.

TL;DR of the content

Results
In order to assess the effectiveness of our programs, we have estimated how many people have become highly engaged thanks to each program, using a single metric that we call “counterfactual full-time equivalent”. This is our estimate of how many full-time equivalents these people will contribute to AI safety in the coming months (thanks to us, counterfactually). Note that some of these programs have instrumental value that is not reflected in the following numbers.
| Activity | Number of events | Counterfactual FTE | By occurrence |
| --- | --- | --- | --- |
| Founding the AI safety unit (founders & volunteers) | 1 | 6.0 | 6.0 |
| French ML4Good bootcamp | 2 | 7.4 | 3.7 |
| Word-of-mouth outreach | 1 | 2.9 | 2.9 |
| Training Day | 2 | 1.9 | 1.0 |
| Hackathon | 4 | 2.5 | 0.6 |
| Turing Seminars (AGISF adaptations) | 3 | 1.3 | 0.4 |
| Research groups in uni | 4 | 0.4 | 0.1 |
| Frid'AI (coworking on Fridays) | 5 | 0.5 | 0.1 |
| Conference | 5 | 0.1 | 0.0 |
| Total | 30 | 23.0 | |

Counterfactual full-time equivalent

In total, those numbers represent the aggregation of 43 people who are highly engaged, i.e. who have been convinced of the problem and are working on solving it through upskilling, writing blog posts, facilitating AIS courses, doing AIS internships, attending SERI MATS, doing policy work in various orgs, etc. The time spen...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Field Building vs. EA CB, published by kuhanj on June 27, 2023 on The Effective Altruism Forum.

Summary
As part of the EA Strategy fortnight, I am sharing a reflection on my experience doing AI safety movement building over the last year, and why I am more excited about more efforts in the space compared to EA movement-building. This is mostly due to the relative success of AI safety groups compared to EA groups at universities with both (e.g. read about Harvard and MIT updates from this past year here). I expect many of the takeaways to extend beyond the university context. The main reasons AI safety field building seems more impactful are:
- Experimental data from universities with substantial effort put into EA and AI safety groups: higher engagement overall, and from individuals with relevant expertise, interests, and skills.
- Stronger object-level focus encourages skill and knowledge accumulation, offers better career capital, and lends itself to engagement from more knowledgeable and senior individuals (including graduate students and professors).
- Impartial/future-focused altruism not being a crux for many for working on AI safety.
- Recent developments increasing the salience of potential risks from transformative AI, and decreasing the appeal of the EA community/ideas.
I also discuss some hesitations and counterarguments, of which the large decrease in neglectedness of existential risk from AI is most salient (and which I have not reflected too much on the implications of yet, though I still agree with the high-level takes this post argues for).

Context/Why I am writing about this
I helped set up and run the Cambridge Boston Alignment Initiative (CBAI) and the MIT AI Alignment group this past year. I also helped out with Harvard's AI Safety team programming, along with some broader university AI safety programming (e.g. a retreat, two MLAB-inspired bootcamps, and a 3-week research program on AI strategy). Before this, I ran the Stanford Existential Risks Initiative and effective altruism student group and have supported many other university student groups.

Why AI Safety Field Building over EA Community Building
From my experiences over the past few months, it seems that AI safety field building is generally more impactful than EA movement building for people able to do either well, especially at the university level (under the assumption that reducing AI x-risk is probably the most effective way to do good, which I assume in this article). Here are some reasons for this: AI-alignment-branded outreach is empirically attracting many more students with relevant skill sets and expertise than EA-branded outreach at universities. Anecdotal evidence: At MIT, we received ~5x the number of applications for AI safety programming compared to EA programming, despite similar levels of outreach last year. This ratio was even higher when just considering applicants with relevant backgrounds and accomplishments. Around two dozen winners and top performers of international competitions (math/CS/science olympiads, research competitions) and students with significant research experience engaged with AI alignment programming, but very few engaged with EA programming.
This phenomenon at MIT has also roughly been matched at Harvard, Stanford, Cambridge, and I'd guess several other universities (though I think the relevant ratios are slightly lower than at MIT). It makes sense that things marketed with a specific cause area (e.g. AI rather than EA) are more likely to attract individuals highly skilled, experienced, and interested in topics relevant to the cause area. Effective cause-area specific direct work and movement building still involves the learning, understanding, and application of many important principles and concepts in EA: Prioritization/Optimization are relevant,...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How I learned to stop worrying and love skill trees, published by junk heap homotopy on May 23, 2023 on LessWrong. There seems to be a stupid, embarrassingly simple solution to the following seemingly unrelated problems:
- Upskilling is hard: the available paths are often lonely and uncertain, workshops aren't mass-producing Paul Christianos, and it's hard for people to stay motivated over long periods of time unless they uproot their entire lives and move to London/Berkeley[1].
- It takes up to five years for entrants in alignment research to build up their portfolio and do good work–too slow for short timelines.
- Alignment researchers don't seem to stack.
- LessWrong–and by extension greenfield alignment–is currently teetering on the edge of an Eternal September: most new people are several hundred thousand words of reading away from automatically avoiding bad ideas, let alone being able to discuss them with good truth-seeking norms.
- We don't have a reliable way to gauge the potential of someone we've never met to do great work[2].
This is not a new idea. It's a side project of mine that could be built by your average first-year CS undergrad and that I have shelved multiple times. It's just that, for some reason, like moths to a flame or a dog to its vomit I just keep coming back to it. So I figured, third time's the charm, right? The proposal (which I call 'Blackbelt' for obscure reasons) is really simple: a dependency graph of tests of skill. Note that last bit: 'tests of skill'. If my intention was merely to add to the growing pile of Intro to AI Safety (Please Don't Betray Us and Research Capabilities Afterward)[3] courses out there then we can all just pack up and go home and forget this poorly-worded post ever existed. But alas, my internal model says we will not go from doomed to saved with the nth attempt at prettifying the proof of the rank-nullity theorem. The real problem is not finding better presentations or a better Chatty McTextbook explanation, but can be found by observing what does not change. That is, let's invert the question of how to produce experts and instead ask: "What things should I be able to do, to be considered a minimum viable expert in X?" So for instance, since we're all trying to get more dignity points in before 2028, let's consider the case of the empirical alignment researcher. The minimum viable empirical researcher (and by 'minimum', I mean it) should probably know:
- How to multiply two matrices together
- How to train a handwriting classifier on the MNIST dataset
- How to implement backprop from scratch
- How to specify a reward function as Python code
- etc.
Sure, there's nothing groundbreaking here, but that's precisely the point. What happens in the wild, in contrast, looks something like grocery shopping: "Oh, you need vector calculus, and set theory, and–textbooks? Read Axler, then Jaynes for probability 'cause you don't want to learn from those dirty, dirty frequentists...yeah sprinkle in some category theory as well from Lawvere, maybe basic game theory, then go through MLAB's course..." Maybe it's just me, but I get dizzy when every other word of someone's sentence packs months' worth of implied thankless work.
Never mind how much it sounds like a wide-eyed Victorian-era gentleman rattling off classics one supposedly has read: reading a whole textbook is not an atomic action, let alone going through entire courses and assuming infinite motivation on the part of the victim[4].

There's no accounting for tests
What is a test, really? Related: the most accurate map of the territory is the territory itself, but what happens when the territory is slippery[5]? An apocryphal story goes that, when Pope Benedict XI was in search of a fresco artist, he sent a messenger to a man named Giotto. The messenger asked him to provide a demonstration of ...
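As a purely illustrative sketch of the post's idea of small, checkable "tests of skill" (the function names, the reward shape, and the pass criterion below are hypothetical, not drawn from the post), two of the listed items might look like this in Python:

```python
# Hypothetical examples of atomic "tests of skill": small, checkable
# exercises rather than whole textbooks or courses.
import numpy as np

def matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Test 1: multiply two matrices by hand (no np.dot or @)."""
    n, k = a.shape
    k2, m = b.shape
    assert k == k2, "inner dimensions must match"
    out = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            out[i, j] = sum(a[i, p] * b[p, j] for p in range(k))
    return out

def reward(state: dict) -> float:
    """Test 2: specify a simple reward function as Python code,
    e.g. reward reaching a goal while penalising energy use."""
    return 1.0 * state["at_goal"] - 0.01 * state["energy_used"]

# The "test" part: check the submission against a trusted implementation.
a, b = np.random.rand(3, 4), np.random.rand(4, 2)
assert np.allclose(matmul(a, b), a @ b)
print(reward({"at_goal": True, "energy_used": 12.0}))
```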
The following is a conversation between Michele Zanini, Co-author of Humanocracy: Creating Organizations as Amazing as the People Inside Them, and Denver Frederick, the Host of The Business of Giving. Today, many organizations are feeling stuck and struggling to move away from outdated management practices towards a more human-centered approach. It's no easy feat, but luckily, there's a book that can help. Humanocracy: Creating Organizations as Amazing as the People Inside Them offers powerful strategies for creating organizations that truly value their people and empower them to reach their full potential. And we're delighted to have one of its co-authors with us today, Michele Zanini, who also is a co-founder of MLab.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How MATS addresses “mass movement building” concerns, published by Ryan Kidd on May 4, 2023 on LessWrong. Recently, many AI safety movement-building programs have been criticized for attempting to grow the field too rapidly and thus: Producing more aspiring alignment researchers than there are jobs or training pipelines; Driving the wheel of AI hype and progress by encouraging talent that ends up furthering capabilities; Unnecessarily diluting the field's epistemics by introducing too many naive or overly deferent viewpoints. At MATS, we think that these are real and important concerns and support mitigating efforts. Here's how we address them currently. Claim 1: There are not enough jobs/funding for all alumni to get hired/otherwise contribute to alignment How we address this: Some of our alumni's projects are attracting funding and hiring further researchers. Three of our alumni have started alignment teams/organizations that absorb talent (Vivek's MIRI team, Leap Labs, Apollo Research), and more are planned (e.g., a Paris alignment hub). With the elevated interest in AI and alignment, we expect more organizations and funders to enter the ecosystem. We believe it is important to install competent, aligned safety researchers at new organizations early, and our program is positioned to help capture and upskill interested talent. Sometimes, it is hard to distinguish truly promising researchers in two months, hence our four-month extension program. We likely provide more benefits through accelerating researchers than can be seen in the immediate hiring of alumni. Alumni who return to academia or industry are still a success for the program if they do more alignment-relevant work or acquire skills for later hiring into alignment roles. Claim 2: Our program gets more people working in AI/ML who would not otherwise be doing so, and this is bad as it furthers capabilities research and AI hype How we address this: Considering that the median MATS scholar is a Ph.D./Masters student in ML, CS, maths, or physics and only 10% are undergrads, we believe most of our scholars would have ended up working in AI/ML regardless of their involvement with the program. In general, mentors select highly technically capable scholars who are already involved in AI/ML; others are outliers. Our outreach and selection processes are designed to attract applicants who are motivated by reducing global catastrophic risk from AI. We principally advertise via word-of-mouth, AI safety Slack workspaces, AGI Safety Fundamentals and 80,000 Hours job boards, and LessWrong/EA Forum. As seen in the figure below, our scholars generally come from AI safety and EA communities. MATS Summer 2023 interest form: “How did you hear about us?” (370 responses) We additionally make our program less attractive than comparable AI industry programs by introducing barriers to entry. Our grant amounts are significantly less than our median scholar could get from an industry internship, and the application process requires earnest engagement with complex AI safety questions. We additionally require scholars to have background knowledge at the level of AGI Safety Fundamentals, which is an additional barrier to entry that e.g. MLAB didn't require. 
We think that ~1 more median MATS scholar focused on AI safety is worth 5-10 more median capabilities researchers (because most do pointless stuff like image generation, and there is more low-hanging fruit in safety). Even if we do output 1-5 median capabilities researchers per cohort (which seems very unlikely), we likely produce far more benefit to alignment with the remaining scholars. Claim 3: Scholars might defer to their mentors and fail to critically analyze important assumptions, decreasing the average epistemic integrity of the field How we address this: Our scholars are enc...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Alignment Research Engineer Accelerator (ARENA): call for applicants, published by TheMcDouglas on April 17, 2023 on LessWrong. TL;DR Apply here for the second iteration of ARENA!

Introduction
We are excited to announce the second iteration of ARENA (Alignment Research Engineer Accelerator), a 6-week ML bootcamp with a focus on AI safety. Our mission is to prepare participants for full-time careers as research engineers in AI safety, e.g. at leading organizations or as independent researchers. The program will commence on May 22nd, 2023, and will be held at the Moorgate WeWork offices in London. This will overlap with SERI MATS, who are also using these offices. We expect this to bring several benefits, e.g. facilitating productive discussions about AI safety & different agendas, and allowing participants to form a better picture of what working on AI safety can look like in practice. ARENA offers a unique opportunity for those interested in AI safety to learn valuable technical skills, engage in their own projects, and make open-source contributions to AI safety-related libraries. The program is comparable to MLAB or WMLB, but extends over a longer period to facilitate deeper dives into the content, and more open-ended project work with supervision. For more information, see our website.

Outline of Content
The 6-week program will be structured as follows:

Chapter 0 - Fundamentals
Before getting into more advanced topics, we first cover the basics of deep learning, including basic machine learning terminology, what neural networks are, and how to train them. We will also cover some subjects we expect to be useful going forwards, e.g. using GPT-3 and 4 to streamline your learning, good coding practices, and version control.
Topics include: PyTorch basics; CNNs and Residual Neural Networks; Optimization; Backpropagation; Hyperparameter search with Weights and Biases; Model training & PyTorch Lightning.
Duration: 5 days

Chapter 1 - Transformers & Mechanistic Interpretability
In this chapter, you will learn all about transformers, and build and train your own. You'll also learn about Mechanistic Interpretability of transformers, a field which has been advanced by Anthropic's Transformer Circuits sequence and open-source work by Neel Nanda.
Topics include: GPT models (building your own GPT-2); Training and sampling from transformers; TransformerLens; In-context Learning and Induction Heads; Indirect Object Identification; Superposition.
Duration: 9 days

Chapter 2 - Reinforcement Learning
In this chapter, you will learn about some of the fundamentals of RL, and work with OpenAI's Gym environment to run your own experiments.
Topics include: Fundamentals of RL; Vanilla Policy Gradient; PPO; Deep Q-learning; RLHF; Gym & Gymnasium environments.
Duration: 6 days

Chapter 3 - Training at Scale
There are a number of techniques that are helpful for training large-scale models efficiently. Here, you will learn more about these techniques and how to use them. The focus is on hands-on learning, rather than just a theoretical understanding.
Topics include: GPUs; Distributed computing; Data/tensor/pipeline parallelism; Finetuning LLMs.
Duration: 4 days

Chapter 4 - Capstone Projects
We will conclude this program with capstone projects, where participants get to dig into something related to the course.
This should draw on many of the skills and much of the knowledge participants will have accumulated over the last 5 weeks.
Duration: 6 days

Below is a diagram of the curriculum as a whole, and the dependencies between sections. Here is some sample material from the course, which you will be able to fully understand once you reach that point in the course. This notebook is on Indirect Object Identification (from the chapter on Transformers & Mechanistic Interpretability); it will represent one of a set of optional 2-day mi...
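As a rough, illustrative sketch only (not an excerpt from the ARENA materials; the toy data and hyperparameters are assumptions), the kind of minimal PyTorch training loop that Chapter 0 builds toward might look like this:

```python
# Illustrative minimal PyTorch training loop: define a model, a loss,
# and an optimizer, then alternate forward pass, backprop, and update.
import torch
import torch.nn as nn

# Toy regression data: learn y = 3x + 1 with a little noise.
x = torch.randn(256, 1)
y = 3 * x + 1 + 0.1 * torch.randn(256, 1)

model = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)  # lr is a hyperparameter
loss_fn = nn.MSELoss()

for step in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()   # backpropagation computes gradients
    opt.step()        # the optimizer updates the weights

print(f"final loss: {loss.item():.4f}")
```

A hyperparameter search (e.g. with Weights and Biases) would simply wrap this loop and vary values like the learning rate or hidden width across runs.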
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply for Cambridge ML for Alignment Bootcamp (CaMLAB) [26 March - 8 April], published by hannah on February 9, 2023 on The Effective Altruism Forum. TL;DR: A two-week machine learning bootcamp this spring in Cambridge, UK, open to global applicants and aimed at providing ML skills for AI alignment. Apply by 26 February to participate or TA. Following a series of machine learning bootcamps earlier this year in Cambridge, Berkeley and Boston, the Cambridge AI Safety Hub is running the next iteration of the Cambridge ML for Alignment Bootcamp (CaMLAB) in spring. This two-week curriculum expects no prior experience with machine learning, although familiarity with Python and an understanding of basic linear algebra are crucial. The curriculum, based on MLAB, provides a thorough, nuts-and-bolts introduction to state-of-the-art ML techniques such as interpretability and reinforcement learning. You'll be guided through the steps of building various deep learning models, from ResNets to transformers. You'll come away well-versed in PyTorch and useful complementary frameworks. From Richard Ren, an undergraduate at UPenn who participated in the January camp: The material from the bootcamp was well-prepared and helped me understand how to use PyTorch and einops, as well as how backpropagation and transformers work. The mentorship from the TAs and peers was excellent, and because of their support, I think the time I spent at the camp was at least 3-5x as productive as focused time I would've spent outside of the camp learning the material on my own — propelling me to be able to take graduate-level deep learning classes at my school, read AI safety papers on my own, and giving me the knowledge necessary to pursue serious machine learning research projects. In addition, the benefits of spending two weeks in-person with other motivated and ambitious individuals cannot be overstated: alongside the pedagogical benefits of being paired with another person each day for programming, the conversations which took place around the curriculum were a seedbed for new insights and valuable connections. Richard continues: The mentorship from the TAs, as well as the chance conversations from the people I've met, have had a serious impact on how I'll approach the career path(s) I'm interested in — from meeting an economics Ph.D. (and having my worldview on pursuing a policy career change) to talking with someone who worked at EleutherAI in the Cambridge EA office about various pathways in AI safety. I loved the people I was surrounded with — they were ambitious, driven, kind, emotionally intelligent, and hardworking. Feedback from the end of the previous camp showed that:
- Participants on average said they would be 93% likely to recommend the bootcamp to a friend or colleague.
- Everyone found the camp at least as good as expected, with 82% finding it better than expected, and 24% finding it much better than expected.
- 94% of participants found the camp more valuable than the counterfactual use of their time, with 71% finding it much more valuable.
In addition, first and second place in Apart Research's January Mechanistic Interpretability Hackathon were awarded to teams formed from participants and TAs from our January bootcamp. Chris Mathwin, who was part of the runner-up project, writes of the bootcamp: A really formative experience!
Great people, great content and truly great support. It was a significantly better use of my time in upskilling in this field than I would have spent elsewhere and I have continued to work with some of my peers afterwards! If you're interested in participating in the upcoming round of CaMLAB, apply here. If you have substantial ML experience and are interested in being a teaching assistant (TA), apply here. You can find more details below. Schedule & logistics Th...
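For readers unfamiliar with the tooling the testimonial mentions, here is a tiny illustrative PyTorch/einops snippet showing the kind of array manipulation such a curriculum drills. The specific exercise (splitting images into flattened patches) is an assumption for illustration, not taken from the CaMLAB material.

```python
import torch
from einops import rearrange

# A typical warm-up: reshape a batch of images into flattened 8x8 patches.
images = torch.randn(8, 3, 32, 32)  # (batch, channels, height, width)
patches = rearrange(images, "b c (h p1) (w p2) -> b (h w) (p1 p2 c)", p1=8, p2=8)
print(patches.shape)  # torch.Size([8, 16, 192]): 16 patches of 8*8*3 values each
```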
In recent weeks there has been considerable discussion about so-called psychedelic substances and their possible use in medicine, particularly to help people struggling with mental health problems. The substances are illegal and research into their effectiveness in this context is still at an early stage, which has led some to warn against getting ahead of the evidence when drawing conclusions about the therapeutic potential of these substances. Others believe there is enough to suggest that the use of psychedelics under the supervision of a therapist could mark a turning point in the treatment of anxiety, depression and addiction. Today a major conference on these issues begins at Harpa, and among the speakers is the psychiatrist Haraldur Erlendsson, who told us more about his view of the matter in the programme. Múlabær is the first day-care centre for the elderly and disabled, founded in 1983 by the Red Cross, SÍBS and the Association of the Elderly; the organisation therefore turns 40 on 27 January. Its aim is to improve the quality of life of people living independently through health services and physical and social activity. There is a fairly long waiting list of people seeking a place at Múlabær. We visited Síðumúli and met Rósbjörg S. Þórðardóttir, who leads the social activities, workshop and art studio at Múlabær, and we also spoke with Ottó Malmberg, 91, who has attended Múlabær since 2017. Tomorrow, Friday, marks two hundred and fifty years since the birth of Gunnlaugur Guðbrandsson Briem. He was the first to adopt the family name Briem, and today hundreds of people bear it. On Saturday a symposium will be held at Þjóðarbókhlaðan (the National Library), where scholars will discuss the life of this notable district magistrate and his family. We invited Erla Dóris Halldórsdóttir, independent historian and chair of the Society for Eighteenth-Century Studies (Félag um átjándu aldar fræði), and Ingi Þorleifur Bjarnason, geophysicist, moderator of the symposium and board member of the society, to join the programme and tell us a little about Gunnlaugur and what will take place at the symposium on Saturday. Music in today's programme: Bingó / Geirfuglarnir (Freyr Eyjólfsson); Lucy in the Sky with Diamonds / The Beatles (Lennon & McCartney); Það rökkvar í Róm / Erla Þorsteinsdóttir (Pietro Garineri and Loftur Guðmundsson). Hosts: Gunnar Hansson and Melkorka Ólafsdóttir.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Probably good projects for the AI safety ecosystem, published by Ryan Kidd on December 5, 2022 on LessWrong. At EAGxBerkeley 2022, I was asked several times what new projects might benefit the AI safety and longtermist research ecosystem. I think that several existing useful-according-to-me projects (e.g., SERI MATS, REMIX, CAIS, etc.) could urgently absorb strong management and operations talent, but I think the following projects would also probably be useful to the AI safety/longtermist project. Criticisms are welcome. Projects I might be excited to see, in no particular order:
A London-based MATS clone to build the AI safety research ecosystem there, leverage mentors in and around London (e.g., DeepMind, CLR, David Krueger, Aligned AI, Conjecture, etc.), and allow regional specialization. This project should probably only happen once MATS has ironed out the bugs in its beta versions and grown too large for one location (possibly by Winter 2023). Please contact the MATS team before starting something like this to ensure good coordination and to learn from our mistakes.
Rolling admissions alternatives to MATS' cohort-based structure for mentors and scholars with different needs (e.g., to support alignment researchers who suddenly want to train/use research talent at irregular intervals but don't have the operational support to do this optimally).
A combined research mentorship and seminar program that aims to do for AI governance research what MATS is trying to do for technical AI alignment research.
A dedicated bi-yearly workshop for AI safety university group leaders that teaches them how to recognize talent, foster useful undergraduate research projects, and build a good talent development pipeline or “user journey” (including a model of alignment macrostrategy and where university groups fit in).
An organization that does for the Open Philanthropy worldview investigations team what GCP did to supplement CEA's workshops and 80,000 Hours' career advising calls.
Further programs like ARENA that aim to develop ML safety engineering talent at scale by leveraging good ML tutors and proven curricula like CAIS' Intro to ML Safety, Redwood Research's MLAB, and Jacob Hilton's DL curriculum for large language model alignment.
More contests like ELK with well-operationalized research problems (i.e., clearly explain what builder/breaker steps look like), clear metrics of success, and a well-considered target audience (who is being incentivized to apply and why?) and user journey (where do prize winners go next?). Possible contest seeds: Evan Hubinger's SERI MATS deceptive AI challenge problem; Vivek Hebbar's and Nate Soares' SERI MATS diamond maximizer selection problem; Alex Turner's and Quintin Pope's SERI MATS training stories selection problem.
More "plug-and-play" curriculums for AI safety university groups, like AGI Safety Fundamentals, Alignment 201, Intro to ML Safety.
A well-considered "precipism" university course template that critically analyzes Toby Ord's “The Precipice,” Holden Karnofsky's “The Most Important Century,” Will MacAskill's “What We Owe The Future,” some Open Philanthropy worldview investigations reports, some Global Priorities Institute ethics papers, etc.
Hackathons in which people with strong ML knowledge (not ML novices) write good-faith critiques of AI alignment papers and worldviews (e.g., what Jacob Steinhardt's “ML Systems Will Have Weird Failure Modes” does for Hubinger et al.'s “Risks From Learned Optimization”).
A New York-based alignment hub that aims to provide talent search and logistical support for NYU Professor Sam Bowman's planned AI safety research group.
More organizations like CAIS that aim to recruit established ML talent into alignment research with clear benchmarks, targeted hackathons/contests with prizes, and offers ...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Cambridge Boston Alignment Initiative [Hiring!], published by kuhanj on December 2, 2022 on The Effective Altruism Forum. TLDR: The Cambridge Boston Alignment Initiative (CBAI) is a new organization aimed at supporting and accelerating Cambridge and Boston students interested in pursuing careers in AI safety. We're excited about our ongoing work, including running a winter ML bootcamp, and are hiring for Cambridge-based roles (rolling applications, priority deadline Dec. 14 to work with us next year). We think that reducing risks from advanced AI systems is one of the most important issues of our time, and that undergraduate and graduate students can quickly start doing valuable work that mitigates these risks. We (Kuhan, Trevor, Xander and Alexandra) formed the Cambridge Boston Alignment Initiative (CBAI) to increase the number of talented researchers working to mitigate risks from AI by supporting Boston-area infrastructure, research and outreach related to AI alignment and governance. Our current programming involves working with groups like the Harvard AI Safety Team (HAIST) and MIT AI Alignment (MAIA), as well as organizing a winter ML bootcamp based on Redwood Research's MLAB curriculum. We think that the Boston and Cambridge area is a particularly important place to foster a strong community of AI safety-interested students and researchers. The AI alignment community and infrastructure in the Boston/Cambridge area has also grown rapidly in recent months (see updates from HAIST and MAIA for more context), and has many opportunities for improvement: office spaces, advanced programming, research, community events, and internship/job opportunities to name a few. If you'd like to work with us to make this happen, we're hiring for full-time generalist roles in Boston. Depending on personal fit, this work might take the form of co-director, technical director/program lead, operations director, or operations associate. We will respond to applications submitted by December 14 by the end of the year. For more information, see our website. For questions, email kuhan@cbai.ai. We'll also be at EAGxBerkeley, and are excited to talk to people there. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Winter ML upskilling camp, published by Nathan Barnard on December 2, 2022 on The Effective Altruism Forum. Title: Apply for the ML Winter Camp in Cambridge, UK [2-10 Jan] TL;DR: We are running a UK-based ML upskilling camp from 2-10 January in Cambridge for people with no prior experience in ML who want to work on technical AI safety. Apply here by 11 December.
We (Nathan Barnard, Joe Hardie, Quratul Zainab and Hannah Erlebach) will be running a machine learning upskilling camp this January in conjunction with the Cambridge AI Safety Hub. The camp is designed for people with little-to-no ML experience to work through a curriculum based on the first two weeks of MLAB under the guidance of experienced mentors, in order to develop skills which are necessary for conducting many kinds of technical AI safety research. The camp will take place from 2-10 January in Cambridge, UK. Accommodation will be provided at Emmanuel College. There are up to 20 in-person spaces; the camp will take place in the Sidney Street Office in central Cambridge. There is also the option to attend online for those who cannot attend in person, although participants are strongly encouraged to attend in person if possible, as we expect it to be substantially harder to make progress if attending online. As such, our bar for accepting virtual participants will be higher. We can cover travel costs if this is a barrier to attending in person.
Apply to be a participant
Who we are looking for
The typical participant we are looking for will have:
Strong quantitative skills (e.g., a maths/physics/engineering background)
An intention to work on AI safety research projects which require ML experience
Little-to-no prior ML experience
The following are strongly preferred, but not essential:
Programming experience (preferably Python)
AI safety knowledge equivalent to having at least completed the AGI Safety Fundamentals alignment curriculum
The camp is open to participants from all over the world, but in particular those from the UK and Europe; for those located in the USA or Canada, we recommend (also) applying for the CBAI Winter ML Bootcamp, happening either in Boston or Berkeley (deadline 4 December). If you're unsure if you're a good fit for this camp, we encourage you to err on the side of applying. We recognise that evidence suggests that less privileged individuals tend to underestimate their abilities, and encourage individuals with diverse backgrounds and experiences to apply; we especially encourage applications from women and minorities.
How to apply
Fill out the application form by Sunday 11 December, 23:59 GMT+0. Decisions will be released no later than 16 December; if you require an earlier decision in order to make plans for January, you can say so in your application.
Apply to be a mentor
We are looking for mentors to be present full- or part-time during the camp. Although participants will work through the curriculum in a self-directed manner, we think that learning can be greatly accelerated when there are experts on hand to answer questions and clarify concepts. 
We expect mentors to be:
Experienced ML programmers
Familiar with the content of the MLAB curriculum (it's helpful, but not necessary, if they have participated in MLAB themselves)
Knowledgeable about AI safety (although this is less important)
Comfortable with teaching (past teaching or tutoring experience can be useful)
However, we also acknowledge that being a mentor can be useful for gaining skills and confidence in teaching, and for consolidating the content in one's own mind; we hope that being a mentor will also be a useful experience for mentors themselves! If needed, we are able to provide accommodation in Cambridge, and can offer compensation for your time at £100 for a half day or £200 for a full day. We understand that m...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety groups should imitate career development clubs, published by Joshc on November 9, 2022 on The Effective Altruism Forum. If you want to get people to do things (like learn about AI Safety) you have to offer them something valuable. Here's one of the posters we used when I was in charge of marketing for the Columbia EA group: It's a pretty graphic, but what valuable thing is it offering? The message is “scan this link to talk about AI.” To be fair, people like talking about AI. We had applicants. But we didn't attract talented ML students. If you want to attract talented people, you have to know what they want. Serious and ambitious people probably don't want to sit around having philosophical discussions. They want to build their careers. Enter ML @ Berkeley, a thriving group of 50 ML students who put 15 hours per week into projects and courses to become better at ML. No one gets paid – not even the organizers. And they are very selective. Only around 7% get in. Why is this group successful? For starters, they offer career capital. They give students projects that often turn into real published papers. They also concentrate talent. Ambitious people want to work with other ambitious people. AI safety student groups should consider imitating ML @ Berkeley. I'm not saying that we should eliminate philosophical discussions and replace them with resume boosting factories. We still want people to think AI Safety and X-risk are important. But discussions don't need to be the primary selling point. Maybe for cultivating conceptual researchers, it makes more sense for discussions to be central. But conceptual and empirical AI Safety research are very different. ML students are probably more interested in projects and skill-building. More rigorous programming could also make it easier to identify talent. Talking about AI is fun, but top ML researchers work extremely hard. Rigorous technical curricula can filter out the ones that are driven. There is nothing like a trial by fire. Instead of trying to predict in advance who will be good at research, why not have lots of people try it and invest in those that do well? USC field builders are experimenting with a curriculum that, in addition to introducing X-risk, is packed full of technical projects. In their first semester, they attracted 30 students who all have strong ML backgrounds. I'm interested in seeing how this goes and would be excited about more AI Safety groups running experiments along these lines. People could also try:
checking whether grad students are willing to supervise group research projects
running deep learning courses and training programs (like Redwood's MLAB)
running an in-person section of Intro to ML Safety (a technical course that covers safety topics).
Conclusion
As far as I can tell, no one has AI safety university field-building all figured out. Rather than copying the same old discussion group model, people should experiment with new approaches. A good start could be to imitate career development clubs like ML @ Berkeley that have been highly successful. Thanks to Nat Li and Oliver Zhang for thoughts and feedback and to Dan Hendrycks for conversations that inspired this post. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Takeaways from a survey on AI alignment resources, published by DanielFilan on November 5, 2022 on LessWrong.
What am I talking about?
In June and July of this year, I ran a survey to ask a lot of people how useful they found a variety of resources on AI alignment. I was particularly interested in “secondary resources”: that is, not primary resource outputs, but resources that summarize, discuss, analyze, or propose concrete research efforts. I had many people promote the survey in an attempt to make it not obvious that I was running it (so that it would not affect what people said about AXRP, the podcast that I run). CEA helped a great deal with the shaping and promotion of the survey. The goal of the survey was initially to figure out how useful AXRP was, but I decided that it would be useful to get a broader look at the space of these secondary resources. My hope is that the results give people a better sense of what secondary resources might be worth checking out, as well as gaps that could be filled. Participants were shown a list of resources, asked to select those they'd engaged with for >30 min, and, for each one they selected, to rate on a scale from 0 to 4 how useful they'd found it, how likely they'd be to recommend it to a friend getting into the field who hadn't read widely, and how likely they'd be to recommend it to someone paid to do AI alignment research. You can do a test run of the survey at this link.
My summary of the results
AXRP, my podcast, is highly rated among people paid to work on technical AI alignment research, but less highly rated in other cohorts. On a personal note, I find this a bit disappointing: I had hoped it could be useful for people orienting to research directions that they had not read widely about. Rob Miles videos are highly rated among everyone, more than I would have guessed. People really liked the AI Safety Camp, the AGI Safety Fundamentals Course, and conversations with AI alignment researchers. People trying to get into alignment really liked the above and also MLAB. That said, they recommend Rob Miles videos higher than the AI Safety Camp and conversations with AI alignment researchers (but lower than MLAB and the AGI Safety Fundamentals Course).
Basic stats
Entries with demographic info: 139
Entries that rate various resources: 99
Number that say ‘I have heard of AI alignment': 95
Number that say ‘I am interested in AI alignment research': 109
Number that say ‘I am trying to move into a technical AI alignment career': 68
Number that say ‘I spend some of my time solving technical problems related to AI alignment': 51
Number that say ‘I spend some of my time doing AI alignment field/community-building': 37
Number that say ‘I spend some of my time facilitating technical AI alignment research in ways other than doing it directly': 35
Number that say ‘I spend some of my time publicly communicating about AI alignment': 36
Number that say ‘I am paid to work on technical AI alignment research': 30
Number that say ‘I help run an organization with an AI alignment mission (e.g. CHAI, MIRI, Anthropic)': 11
Context for questions
When sorting things by ratings, I've included the top 5, and anything just below the top 5 if that was a small number. I also included ratings for AXRP, the podcast I make. 
Ratings are paired with the standard error of the mean (total ratings have this standard error multiplied by the number of people in the sample). Only things that at least 2 people engaged with were included. Ratings were generally rounded to two significant figures, and standard errors were reported to the same precision.
Usefulness ratings
Among all respondents:
Total usefulness (multiplying average rating by reach):
80k podcast: 167 +/- 8
Superintelligence: 166 +/- 8
Talks by AI alignment researchers: 134 +/- 6
Rob Miles videos: 131 +/- 7
AI alignment ...
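To make that reporting convention concrete, here is a small illustrative calculation of the mean rating, its standard error, and the "total usefulness" figure obtained by multiplying by reach. The ratings below are made up for the example, not the survey's actual data.

```python
import math

# Hypothetical 0-4 usefulness ratings from respondents who engaged with one resource
ratings = [4, 3, 3, 2, 4, 3, 1, 4, 2, 3]

n = len(ratings)
mean = sum(ratings) / n
# Standard error of the mean: sample standard deviation divided by sqrt(n)
sem = math.sqrt(sum((r - mean) ** 2 for r in ratings) / (n - 1)) / math.sqrt(n)

# "Total usefulness" multiplies the average rating by reach (number of raters),
# so its standard error is the SEM scaled by that same count.
total = mean * n
total_se = sem * n

print(f"mean {mean:.2f} +/- {sem:.2f}, total {total:.0f} +/- {total_se:.0f}")
```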
We are in a fast-paced and technology-driven society. Yet, many organizations continue to have antiquated management systems that are often bureaucratic and hierarchical. Michele Zanini has made it his life's work to overcome this, moving the conversation from bureaucracy to humanocracy. In this episode, he joins J.R. Lowry to take us deep into his book, Humanocracy, which he co-authored with London Business School professor Gary Hamel. Michele is also the co-founder of Management Lab, or MLab, which works with leading-edge firms and progressive practitioners to help them create tomorrow's new practices today. He shares with us how they stumbled into this alternative way of management that reverses the top-down power structure and puts the people forward. With case studies on companies that made the transition, Michele shows the benefits of humanocracy to the overall organization—from innovation to initiative and more! Check out the full series of "Career Sessions, Career Lessons" podcasts here or visit pathwise.io/podcast/. A full written transcript of this episode is also available at https://pathwise.io/podcasts/from-bureaucracy-to-humanocracy-with-michele-zanini.
If you've been around the MLAB block for a second, you know I love/am obsessed with routines. Today, I'm going to give you the scoop on 5 routines that have quite literally changed my life, from freeing up my time to helping me work through procrastination. I'm also giving you tips on how to incorporate your own life-changing routines into your days. Don't miss the Just Clean It Challenge! Join for free here: www.motherlikeaboss.com/challenge
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Ideas: An Open AI Safety Research Platform, published by Apart Research on October 17, 2022 on The Effective Altruism Forum. TL;DR: We present the AI safety ideas and research platform AI Safety Ideas in open alpha. Add and explore research ideas on the website here: aisafetyideas.com.
AI Safety Ideas has been accessible for a while in an alpha state (4 months, on-and-off development) and we now publish it in open alpha to receive feedback and develop it continuously with the community of researchers and students in AI safety. All of the projects are either from public sources (e.g. AlignmentForum posts) or posted on the website itself. The current website represents the first steps towards an accessible crowdsourced research platform for easier research collaboration and hypothesis testing.
The gap in AI safety
Research prioritization & development
Research prioritization is hard, and even more so in a pre-paradigmatic field like AI safety. We can grok the highest-karma post on the AlignmentForum, but is there another way? With AI Safety Ideas, we introduce a collaborative way to prioritize and work on specific agendas together through social features. We hope this can become a scalable research platform for AI safety. Successful examples of less systematized but similar collaborative, online, high-quality-output projects can be seen in Discord servers such as EleutherAI, CarperAI, Stability AI, and Yannic Kilcher's Discord, in hackathons, and in competitions such as the inverse scaling competition. Additionally, we are also missing an empirically driven impact evaluation of AI safety projects. With the next steps of development described further down, we hope to make this easier and more available while facilitating more iteration in AI safety research. Systemized hypothesis testing with bounties can help funders directly fund specific results and enables open evaluation of agendas and research projects.
Mid-career & student newcomers
Novice and entrant participation in AI safety research is mostly present in two forms at the moment: 1) active or passive part-time course participation with a capstone project (AGISF, ML Safety) and 2) flying to London or Berkeley for three months to participate in full-time paid studies and research (MLAB, SERI MATS, PIBBSS, Refine). Both are highly valuable, but a third option seems to be missing: 3) an accessible, scalable, low-time-commitment, open research opportunity. Very few people work in AI safety, and allowing decentralized, volunteer- or bounty-driven research will allow many more to contribute to this growing field. By allowing this flexible research opportunity, we can attract people who cannot participate in option (2) because of visa, school / life / work commitments, location, rejection, or funding, while we can attract a more senior and active audience compared to option (1).
Next steps
Oct: Releasing and building up the user base and crowdsourced content. Create an insider build to test beta features. Apply to join the insider build here.
Nov: Implementing hypothesis testing features: creating hypotheses, linking ideas and hypotheses, adding negative and positive results to hypotheses. Creating an email notification system.
Dec: Collaboration features: contact others interested in the same idea and mentor ideas. A better commenting system with a results comment that can indicate whether the project has been finished, what the results are, and who did it.
Jan: Adding moderation features: accepting results, moderating hypotheses, admin users. Add bounty features for the hypotheses and a simple user karma system.
Feb: Share with ML researchers and academics in EleutherAI and CarperAI. Implement the ability to create special pages with specific private and public ideas curated for a specific purpose (title and desc...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Establishing Oxford's AI Safety Student Group: Lessons Learnt and Our Model, published by CharlieGriffin on September 21, 2022 on The Effective Altruism Forum. In January we founded a student group at Oxford focused on technical AI safety. Since then we've run speaker events, socials, multiple cohorts of the AGISF, and supervised research projects (“Labs”). We think it went pretty well, so we're sharing our takeaways and model here. This post is a short summary of this public document which goes into more detail about our approach to AI safety community building, reflections, and recommendations. Non-trivial takeaways Launching as part of an AI group, rather than an EA group, worked well for us. (see more) Outreach aimed at people interested in AI reached a much larger technical audience than past outreach aimed at people interested in EA or longtermism. (see more) It was surprisingly easy to interest people in AI safety without appealing to EA or longtermism. (see more) Significant value from our speaker events seemed to come from the high-retention, friendly socials we held afterwards. (see more) Our “Labs” model of student research projects seems effective for development and output with minimal time-cost for an expert supervisor (~1 hour per week). This is particularly valuable if field building is mentorship bottlenecked (see more). Our current model Our working objective was to increase the number and quality of technical people pursuing a career in AI safety research. To do this, we have been operating with the following pipeline: Results so far At least 2 of the current participants of Redwood's MLAB this summer had never encountered AI safety or EA before attending our events this spring. We had 24-73 people attend our 9 speaker events, with 69% (on average) having a STEM background (according to survey data). 65 people signed up for our AGI Safety Fundamentals course across 11 cohorts. 57% had STEM backgrounds. Further Information Please see the attached public document for further information about the student group or our contact details. We are now reconsidering our working objective and don't necessarily endorse the stated objective "to increase the number and quality of technical people pursuing a career in AI safety research". However, we think it is important to start from your objective and work backwards, and this is the objective we actually used. We want to note that having a target audience of people “interested in AI” creates a self-selection effect that reduces the diversity of thought in our attendance. We are working to improve this. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Join ASAP (AI Safety Accountability Programme)
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 13 background claims about EA, published by Akash on September 7, 2022 on The Effective Altruism Forum. I recently attended EAGxSingapore. In 1-1s, I realized that I have picked up a lot of information from living in an EA hub and surrounding myself with highly-involved EAs. In this post, I explicitly lay out some of this information. I hope that it will be useful for people who are new to EA or people who are not living an EA Hub. Here are some things that I believe to be important “background claims” that often guide EA decision-making, strategy, and career decisions. (In parentheses, I add things that I believe, but these are "Akash's opinions" as opposed to "background claims.") Note that this perspective is based largely on my experiences around longtermists & the Berkeley AI safety community. General 1. Many of the most influential EA leaders believe that there is a >10% chance that humanity goes extinct in the next 100 years. (Several of them have stronger beliefs, like a 50% of extinction in the next 10-30 years). 2. Many EA leaders are primarily concerned about AI safety (and to a lesser extent, other threats to humanity's long-term future). Several believe that artificial general intelligence is likely to be developed in the next 10-50 years. Much of the value of the present/future will be shaped by the extent to which these systems are aligned with human values. 3. Many of the most important discussions, research, and debates are happening in-person in major EA hubs. (I claim that visiting an EA Hub is one of the best ways to understand what's going on, engage in meaningful debates about cause prioritization, and receive feedback on your plans.) 4. Several “EA organizations” are not doing highly impactful work, and there are major differences in impact between & within orgs. Some people find it politically/socially incorrect to point out publicly which organizations are failing & why. (I claim people who are trying to use their careers in a valuable way should evaluate organizations/opportunities for themselves, and they should not assume that generically joining an “EA org” is the best strategy.) AI Safety 5. Many AI safety researchers and organizations are making decisions on relatively short AI timelines (e.g., artificial general intelligence within the next 10-50 years). Career plans or research proposals that take a long time to generate value are considered infeasible. (I claim that people should think about ways to make their current trajectory radically faster— e.g., if someone is an undergraduate planning a CS PhD, they may want to consider alternative ways to get research expertise more quickly). 6. There is widespread disagreement in AI safety about which research agendas are promising, what the core problems in AI alignment are, and how people should get started in AI safety. 7. There are several programs designed to help people get started in AI safety. Examples include SERI-Mats (for alignment research & theory), MLAB (for ML engineering), the ML Safety Scholars Program (for ML skills), AGI Safety Fundamentals (for AI alignment knowledge), PIBBS (for social scientists), and the newly-announced Philosophy Fellowship. (I suggest people keep point #6 in mind, though, and not assume that everything they need to know is captured in a well-packaged Program or Reading List). 8. 
There are not many senior AIS researchers or AIS mentors, and the ones who exist are often busy. (I claim that the best way to “get started in AI safety research” is to apply for a grant to spend ~1 month reading research, understanding the core parts of the alignment problem, evaluating research agendas, writing about what you've learned, and visiting an EA hub). 9. People can apply for grants to skill-up in AI safety. You do not have to propose an extremely specific project...
A very active dam-removal movement has taken off in Europe, aimed at restoring free-flowing rivers, and the issue is highly topical in Latvia as well. Together with experts on the programme Kā labāk dzīvot, we look at why this is being done and what benefits it will bring to nature and to ourselves. In the studio: Linda Fībiga of the Latvian Environment, Geology and Meteorology Centre, and Kaspars Abersons, specialist at the BIOR institute. Most likely people have become more aware and can see that something here is not as it should be, Linda Fībiga suggests: "Rivers are partly blocked, fish migration has declined or fish populations have shrunk, and gradually people are starting to act to remove these various kinds of barriers on the rivers." According to the statistics, there are roughly a million dams of various sizes on Europe's rivers, and the removal of dams and other barriers is now under way. Around 40,000 dams have already been removed, which is not a small number, but we understand that a great many barriers still remain. "Yes, we are getting up to speed very slowly and it is taking a long time," Kaspars Abersons agrees. "But in Europe that momentum has already built up, especially in Finland and a few other countries; Denmark, for example, deserves credit for doing quite well at freeing up sea trout rivers. We ourselves are still just getting going, but hopefully the momentum will come."
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the AI Safety Field Building Hub, a new effort to provide AISFB projects, mentorship, and funding, published by Vael Gates on July 28, 2022 on The Effective Altruism Forum. Tldr: If you're interested in working on an AI safety field building project similar to those listed below (e.g. researcher outreach): please fill out this application form or nominate someone ($200 bounty)! Hiring is ongoing indefinitely. If you're an EA org that has an AI safety field building project, please submit your project idea, and if our priorities and people align sufficiently we'll try to get it done! [Crossposted to LessWrong] When individual EAs are interested in working on AI safety research, they can apply to many programs designed to give mentorship, funding, and project ideas (SERI MATS, MLAB, AI Safety Camp, SERI / CERI / CHERI summer programs, etc). This is amazing and speaks to our growth as a community. However, I think there's a noticeable lack of such structure in field building. For EAs interested in AI safety field building, how do they know which projects are most promising? Do they have to make them up from scratch? Where do they get collaborators, how do they get access to mentorship, how do they show a good track record to apply for funding? To fill this gap, I'm starting the “AI Safety Field Building Hub”, a new effort to provide projects, mentorship, and funding to individuals interested in working in AI safety field building. If you're an EA looking for a full-time or part-time project, I'm hiring project-specific contractors. I have a list of projects I'm interested in working on, have been sent more shovel-ready projects by others, and hope that other organizations will continue to send me great project ideas they don't have capacity to manage. In the future, this “hub”-like functionality may spin into its own organization; at the moment, I'm looking to hire people to work for me. Of note, my projects tend to be aimed more at outreach and at older populations– AI researchers, academia and industry, rather than university populations or high schoolers— which I think is relatively neglected right now. I also think there are potentially more downside risks for doing outreach to researchers poorly than in reaching out to university students, and the culture is substantially different, so I'm hoping my experience in this space will be helpful to people who are interested in these directions. (About me: I'm currently a postdoc at Stanford whose most recent project was interviewing AI researchers about AI safety.) I'm aiming to hire agentic, independent, or small-team people who are excited about closely coordinating with me and other EAs. From that point, there are several options: For many of these projects, I'll want people who are excited about closely coordinating in the beginning of the project and during key decision points, but then go on to truly own the project without further involvement from me. (I can help you with securing your own funding once you're up and running.) Note that, however, several of the top projects listed here may have significant downside risks, so I'm going to be restrictive with hiring and training for this reason. I'm also interested in hiring people who work part-time on a single project under me indefinitely. 
This is especially true for many of the top-listed projects which could have significant downside risks. Finally, I'm hoping to hire an advanced PA-type person who will closely work with me across all of my projects (pay will be higher for this role) I'll be offering mentorship like you'd find in a research program, and project-specific funding for you to work on the project (and all associated project costs) full-time or part-time. Some of the projects are more operations-oriented, some people-facing, some writing-or...
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the AI Safety Field Building Hub, a new effort to provide AISFB projects, mentorship, and funding, published by Vael Gates on July 28, 2022 on LessWrong. Tldr: If you're interested in working on an AI safety field building project similar to those listed below (e.g. researcher outreach): please fill out this application form or nominate someone ($200 bounty)! Hiring is ongoing indefinitely. If you're an EA org that has an AI safety field building project, please submit your project idea, and if our priorities and people align sufficiently we'll try to get it done! [Crossposted to the EA Forum] When individual EAs are interested in working on AI safety research, they can apply to many programs designed to give mentorship, funding, and project ideas (SERI MATS, MLAB, AI Safety Camp, SERI / CERI / CHERI summer programs, etc). This is amazing and speaks to our growth as a community. However, I think there's a noticeable lack of such structure in field building. For EAs interested in AI safety field building, how do they know which projects are most promising? Do they have to make them up from scratch? Where do they get collaborators, how do they get access to mentorship, how do they show a good track record to apply for funding? To fill this gap, I'm starting the “AI Safety Field Building Hub”, a new effort to provide projects, mentorship, and funding to individuals interested in working in AI safety field building. If you're an EA looking for a full-time or part-time project, I'm hiring project-specific contractors. I have a list of projects I'm interested in working on, have been sent more shovel-ready projects by others, and hope that other organizations will continue to send me great project ideas they don't have capacity to manage. In the future, this “hub”-like functionality may spin into its own organization; at the moment, I'm looking to hire people to work for me. Of note, my projects tend to be aimed more at outreach and at older populations– AI researchers, academia and industry, rather than university populations or high schoolers— which I think is relatively neglected right now. I also think there are potentially more downside risks for doing outreach to researchers poorly than in reaching out to university students, and the culture is substantially different, so I'm hoping my experience in this space will be helpful to people who are interested in these directions. (About me: I'm currently a postdoc at Stanford whose most recent project was interviewing AI researchers about AI safety.) I'm aiming to hire agentic, independent, or small-team people who are excited about closely coordinating with me and other EAs. From that point, there are several options: For many of these projects, I'll want people who are excited about closely coordinating in the beginning of the project and during key decision points, but then go on to truly own the project without further involvement from me. (I can help you with securing your own funding once you're up and running.) Note that, however, several of the top projects listed here may have significant downside risks, so I'm going to be restrictive with hiring and training for this reason. I'm also interested in hiring people who work part-time on a single project under me indefinitely. 
This is especially true for many of the top-listed projects which could have significant downside risks. Finally, I'm hoping to hire an advanced PA-type person who will closely work with me across all of my projects (pay will be higher for this role) I'll be offering mentorship like you'd find in a research program, and project-specific funding for you to work on the project (and all associated project costs) full-time or part-time. Some of the projects are more operations-oriented, some people-facing, some writing-oriented, some lig...
Reznik Show with Mlab - 09 Jun 2022 by Sub FM
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to the second iteration of the ML for Alignment Bootcamp (MLAB 2) in Berkeley [Aug 15 - Fri Sept 2], published by Buck on May 6, 2022 on The Effective Altruism Forum. Redwood Research is running another iteration of MLAB, our bootcamp aimed at helping people who are interested in AI alignment learn about machine learning, with a focus on ML skills and concepts that are relevant to doing the kinds of alignment research that we think seem most leveraged for reducing AI x-risk. We co-organized the last iteration of the bootcamp with Lightcone in January, and there were 28 participants. The program was rated highly (see below for more), and several participants are now working full-time on alignment. We expect to start on Aug 15 but might push it back or forward by a week depending on applicant availability. Apply here by May 27. We're expecting to have space for about 40 participants. We'll pay for housing, travel, and food, as well as salaries for the TAs. We're now accepting applications for participants and TAs. TAs are expected to either know this material already or have a month free before MLAB to study all the content.
Last time the schedule was roughly the following:
Prep work: Pytorch array programming
Week 1: Pytorch, optimization
Implement a renderer in pytorch, as an exercise in mathematical array programming
Implement ResNet from scratch in pytorch, implementing all the layers from scratch and loading weights from a trained model
Implement interpretability techniques on the ResNet
Implement SGD and other local optimization algorithms, run remote hyperparameter searches on a simple architecture
Implement a simple clone of some of Pytorch, with particular focus on the implementation of backpropagation
(Optional) CUDA programming day–write various CUDA kernels, see how close to the performance of Pytorch's kernels you can get
Week 2: Transformers
Implement BERT from scratch, load weights from the real pretrained BERT
Implement GPT-2, implement beam search
Fine-tune BERT on classification, fine-tune GPT-2 on some specific corpus
Look at various interpretability techniques on GPT-2
Data-parallel training
Week 3
Pipeline parallelism
Tensor parallelism
Deep RL (DQN, policy gradient)
RL algorithms on language models
More transformer interpretability
(Optional) ELK day
Week 4: Optional final projects week, Q&As with various alignment researchers
This time, we'll probably have more systematic transformer interpretability content, because we've spent a lot of time since MLAB doing our own transformer interpretability research and have a bunch more opinions now. We might also have more systematic content on various relevant math. I'm also hoping that we'll be able to cover content more efficiently as a result of experience gained from running the program the first time. Past participants report that MLAB was time-consuming; we strongly recommend against trying to juggle other commitments concurrently. About 8 hours a day, 5 or 6 (if you participate in the optional day) days a week will be spent on pair programming, in addition to daily lectures and readings. There is a lot of content packed into each day; not everyone will finish every part of the curriculum. We aim to create a learning environment that is focused but not frantic; we'd rather have you understand the material deeply than finish 100% of the day's content. 
The program is aimed at people who are already strong programmers who are comfortable with about one year's worth of university level applied math (e.g. you should know what eigenvalues and eigenvectors of a matrix are, and you should know basic vector calculus; in this course you'll have to think about Jacobian matrices and make heavy use of tensor diagram notation, so you should be able to pick up both of those pretty fast). We expect that abo...
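As a taste of the week-1 material, here is a minimal illustrative sketch of the "implement SGD from scratch" style of exercise. It is not the actual MLAB curriculum, just a small optimizer written against the standard PyTorch API and checked on a toy regression problem.

```python
import torch

class SGD:
    """Minimal from-scratch SGD optimizer in the spirit of torch.optim.SGD."""

    def __init__(self, params, lr=0.1):
        self.params = list(params)
        self.lr = lr

    def zero_grad(self):
        for p in self.params:
            p.grad = None

    def step(self):
        # Update parameters in place without tracking gradients of the update itself.
        with torch.no_grad():
            for p in self.params:
                if p.grad is not None:
                    p -= self.lr * p.grad

# Fit y = 3x + 1 with a single linear layer to check the optimizer works.
model = torch.nn.Linear(1, 1)
opt = SGD(model.parameters(), lr=0.1)
x = torch.randn(64, 1)
y = 3 * x + 1

for _ in range(200):
    opt.zero_grad()
    loss = ((model(x) - y) ** 2).mean()
    loss.backward()
    opt.step()

print(model.weight.item(), model.bias.item())  # should be close to 3.0 and 1.0
```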
In this episode: - Б.Золбадрал, Executive Director of "iHotel" LLC and 2020 National President of JCI Mongolia Academy - А.Мэнд-Орших, co-founder and Executive Director of "Erxes Inc" - Ч.Анар, founder and Executive Director of "AND Global" and "MLab"
It is such a joy that I get to bring back on-air coaching calls to the podcast. These discussions with real moms and real students are filled with truth and are helpful to so many other struggling mamas out there. Today, I'm welcoming Lisa Prieto, a homeschooling and entrepreneurial mama of 2 with a lot on her plate. This conversation gets honest and real as we discuss fear of failure and the overwhelming paralysis that outside (and inside) criticism can cause. Listen in as I help Lisa navigate these challenges and dig into the real causes behind them. Want more from this episode? Head over to www.motherlikeaboss.com/podcast/108 for the full show notes and more goodies. If you loved this episode as much as I loved sharing it, there is more where that came from. Be sure to subscribe so you don't miss out. And I would just loooove if you would leave a review and rating. It's a little thing that makes a big difference and helps me to continue to bring super valuable content and fabulous guests. Have a topic you want me to cover on the podcast? Submit it to us here. This show is all for you, mama. Let's talk about the things you most want to hear about. Thanks for listening!
We're doing things a little different this summer when it comes to interviews. Rather than bring on an expert in a particular field, I invited real moms with real struggles to come on the show to share with us what they are going through in motherhood and at home. Sometimes, just hearing another mom ask a question that you've always had is exactly the thing you need to motivate you toward change and improvement. Today, I have the pleasure of introducing you to Micah Paschall, a Mother Like a Boss student with a beautiful heart and story. It takes a brave woman to discuss her past and present feelings about homemaking openly and honestly. We had a great chat about making habits and routines stick, and I helped her create more non-negotiable chores for her children. Join me for another raw and beautiful episode in the M-Lab sessions. Want more from this episode? Head over to www.motherlikeaboss.com/podcast/053 for the full show notes and more goodies. If you loved this episode as much as I loved sharing it, there is more where that came from. Be sure to subscribe so you don't miss out. And I would just loooove if you would leave a review and rating. It's a little thing that makes a big difference and helps me to continue to bring on valuable, totally rad guests. Have a topic you want me to cover on the podcast? Submit it to us here. This show is all for you, mama. Let's talk about the things you most want to hear about. Thanks for listening!
We're doing things a little different this summer when it comes to interviews. Rather than bring on an expert in a particular field, I invited real moms with real struggles to come on the show to share with us what they are going through in motherhood and at home. Sometimes, just hearing another mom ask a question that you've always had is exactly the thing you need to motivate you toward change and improvement. Today, I have the pleasure of introducing you to Julia Holen, who is going to rock your world with how open, honest and vulnerable she is about her life. It takes courage to speak about her struggles with bipolar disorder, and she does so with strength and openness. We had a wonderful chat about how she can plan her routines around her life rather than the other way around, and how to stick to household and motherhood routines even in the face of difficult circumstances. Join me for another raw and beautiful episode in the M-Lab sessions. Want more from this episode? Head over to www.motherlikeaboss.com/podcast/051 for the full show notes, more goodies and all the deets. If you loved this episode as much as I loved sharing it, there is more where that came from. Be sure to subscribe so you don't miss out. And I would just loooove if you would leave a review and rating. It's a little thing that makes a big difference and helps me to continue to bring on valuable, totally rad guests. Have a topic you want me to cover on the podcast? Submit it to us here. This show is all for you, mama. Let's talk about the things you most want to hear about. Thanks for listening!
We're doing things a little different this summer when it comes to interviews. Rather than bring on an expert in a particular field, I invited real moms with real struggles to come on the show to share with us what they are going through in motherhood and at home. Sometimes, just hearing another mom ask a question that you've always had is exactly the thing you need to motivate you toward change and improvement. Today, I have Homemakerish U student Sarah Humes on the show, and we're getting really real about her struggles with perfectionism and decision making. We dive into the real causes of perfectionism, how to make better decisions each day that will get you toward your goals, and how to go about preparing your home and family for an upcoming busy season. Join us for the very first M-Lab session. Want more from this episode? Head over to www.motherlikeaboss.com/podcast/049 for the full show notes, more deets and all the goodies. If you loved this episode as much as I loved sharing it, there is more where that came from. Be sure to subscribe so you don't miss out. And I would just loooove if you would leave a review and rating. It's a little thing that makes a big difference and helps me to continue to bring on valuable, totally rad guests. Have a topic you want me to cover on the podcast? Submit it to us here. This show is all for you, mama. Let's talk about the things you most want to hear about.
1. Blugazer - When Stars Melt 2. Mlab - 11th Prayer (Sonsez & Erman Intro Mix) 3. Five Seasons - Delphina 4. The Diventa Project - Crying Soul (Mazelo Nostra Mix) 5. Vertical Amigo - Peak 6. Amethyste - Dans Ma Memoire 7. Amethyste - Watermark 8. Fobee - Dreamwalker 9. Ad Brown - Shimmer 10. Sibewest - Train to Nowhere 11. 351 Lake Shore Drive - You Don't Know 12. Lemongrass - Wonderland 13. Counting Clouds - Footprints (Beach Mix) 14. York Art Halpertin - Abyss (Chill Out Mix) 15. Kos Kastilio - Hammers Time Facebook: https://facebook.com/zoltanbiroChillOutSession iTunes: https://itunes.apple.com/ro/podcast/chill-out-session/id1084852818?mt=2 Hearthis: https://hearthis.at/zoltanbiro Homepage: http://chilloutsessionworld.blogspot.com © All Rights are Reserved by the artists! Cover: Benjamin Davies / unsplash.com