Podcasts about finnveden

  • 10 podcasts
  • 24 episodes
  • 23m average episode duration
  • Infrequent episodes
  • Latest episode: Jan 4, 2024

POPULARITY (chart, 2017-2024)


Latest podcast episodes about finnveden

The Nonlinear Library
EA - Project ideas for making transformative AI go well, other than by working on alignment by Lukas Finnveden

Jan 4, 2024 · 5:24


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Project ideas for making transformative AI go well, other than by working on alignment, published by Lukas Finnveden on January 4, 2024 on The Effective Altruism Forum. This is a series of posts with lists of projects that it could be valuable for someone to work on. The unifying theme is that they are projects that: Would be especially valuable if transformative AI is coming in the next 10 years or so. Are not primarily about controlling AI or aligning AI to human intentions.[1] Most of the projects would be valuable even if we were guaranteed to get aligned AI. Some of the projects would be especially valuable if we were inevitably going to get misaligned AI. The posts contain some discussion of how important it is to work on these topics, but not a lot. For previous discussion (especially: discussing the objection "Why not leave these issues to future AI systems?"), you can see the section How ITN are these issues? from my previous memo on some neglected topics. The lists are definitely not exhaustive. Failure to include an idea doesn't necessarily mean I wouldn't like it. (Similarly, although I've made some attempts to link to previous writings when appropriate, I'm sure to have missed a lot of good previous content.) There's a lot of variation in how sketched out the projects are. Most of the projects just have some informal notes and would require more thought before someone could start executing. If you're potentially interested in working on any of them and you could benefit from more discussion, I'd be excited if you reached out to me! [2] There's also a lot of variation in skills needed for the projects. If you're looking for projects that are especially suited to your talents, you can search the posts for any of the following tags (including brackets): [ML] [Empirical research] [Philosophical/conceptual] [survey/interview] [Advocacy] [Governance] [Writing] [Forecasting] The projects are organized into the following categories (which are in separate posts). Feel free to skip to whatever you're most interested in. Governance during explosive technological growth It's plausible that AI will lead to explosive economic and technological growth. Our current methods of governance can barely keep up with today's technological advances. Speeding up the rate of technological growth by 30x+ would cause huge problems and could lead to rapid, destabilizing changes in power. This section is about trying to prepare the world for this. Either generating policy solutions to problems we expect to appear or addressing the meta-level problem about how we can coordinate to tackle this in a better and less rushed manner. A favorite direction is to develop Norms/proposals for how states and labs should act under the possibility of an intelligence explosion. Epistemics This is about helping humanity get better at reaching correct and well-considered beliefs on important issues. If AI capabilities keep improving, AI could soon play a huge role in our epistemic landscape. I think we have an opportunity to affect how it's used: increasing the probability that we get great epistemic assistance and decreasing the extent to which AI is used to persuade people of false beliefs. 
A couple of favorite projects are: Create an organization that gets started with using AI for investigating important questions or Develop & advocate for legislation against bad persuasion. Sentience and rights of digital minds. It's plausible that there will soon be digital minds that are sentient and deserving of rights. This raises several important issues that we don't know how to deal with. It seems tractable both to make progress in understanding these issues and in implementing policies that reflect this understanding. A favorite direction is to take existing ideas for what labs could be doing and spell ou...

The Nonlinear Library
EA - Project ideas: Epistemics by Lukas Finnveden

Jan 4, 2024 · 33:00


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Project ideas: Epistemics, published by Lukas Finnveden on January 4, 2024 on The Effective Altruism Forum. This is part of a series of lists of projects. The unifying theme is that the projects are not targeted at solving alignment or engineered pandemics but still targeted at worlds where transformative AI is coming in the next 10 years or so. See here for the introductory post. If AI capabilities keep improving, AI could soon play a huge role in our epistemic landscape. I think we have an opportunity to affect how it's used: increasing the probability that we get great epistemic assistance and decreasing the extent to which AI is used to persuade people of false beliefs. Before I start listing projects, I'll discuss: Why AI could matter a lot for epistemics. (Both positively and negatively.) Why working on this could be urgent. (And not something we should just defer to the future.) Here, I'll separately discuss: That it's important for epistemics to be great in the near term (and not just in the long run) to help us deal with all the tricky issues that will arise as AI changes the world. That there may be path-dependencies that affect humanity's long-run epistemics. Why AI matters for epistemics On the positive side, here are three ways AI could substantially increase our ability to learn and agree on what's true. Truth-seeking motivations. We could be far more confident that AI systems are motivated to learn and honestly report what's true than is typical for humans. (Though in some cases, this will require significant progress on alignment.) Such confidence would make it much easier and more reliable for people to outsource investigations of difficult questions. Cheaper and more competent investigations. Advanced AI would make high-quality cognitive labor much cheaper, thereby enabling much more thorough and detailed investigations of important topics. Today, society has some ability to converge on questions with overwhelming evidence. AI could generate such overwhelming evidence for much more difficult topics. Iteration and validation. It will be much easier to control what sort of information AI has and hasn't seen. (Compared to the difficulty of controlling what information humans have and haven't seen.) This will allow us to run systematic experiments on whether AIs are good at inferring the right answers to questions that they've never seen the answer to. For one, this will give supporting evidence to the above two bullet points. If AI systems systematically get the right answer to previously unseen questions, that indicates that they are indeed honestly reporting what's true without significant bias and that their extensive investigations are good at guiding them toward the truth. In addition, on questions where overwhelming evidence isn't available, it may let us experimentally establish what intuitions and heuristics are best at predicting the right answer.[1] On the negative side, here are three ways AI could reduce the degree to which people have accurate beliefs. Super-human persuasion. If AI capabilities keep increasing, I expect AI to become significantly better than humans at persuasion. Notably, on top of high general cognitive capabilities, AI could have vastly more experience with conversation and persuasion than any human has ever had. 
(Via being deployed to speak with people across the world and being trained on all that data.) With very high persuasion capabilities, people's beliefs might (at least directionally) depend less on what's true and more on what AI systems' controllers want people to believe. Possibility of lock-in. I think it's likely that people will adopt AI personal assistants for a great number of tasks, including helping them select and filter the information they get exposed to. While this could be crucial for defending aga...

The Nonlinear Library
EA - Memo on some neglected topics by Lukas Finnveden

Nov 11, 2023 · 12:58


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Memo on some neglected topics, published by Lukas Finnveden on November 11, 2023 on The Effective Altruism Forum. I originally wrote this for the Meta Coordination Forum. The organizers were interested in a memo on topics other than alignment that might be increasingly important as AI capabilities rapidly grow - in order to inform the degree to which community-building resources should go towards AI safety community building vs. broader capacity building. This is a lightly edited version of my memo on that. All views are my own. Some example neglected topics (without much elaboration) Here are a few example topics that could matter a lot if we're in the most important century, which aren't always captured in a normal "AI alignment" narrative: The potential moral value of AI. [1] The potential importance of making AI behave cooperatively towards humans, other AIs, or other civilizations (whether it ends up intent-aligned or not). Questions about how human governance institutions will keep up if AI leads to explosive growth. Ways in which AI could cause human deliberation to get derailed, e.g. powerful persuasion abilities. Positive visions about how we could end up on a good path towards becoming a society that makes wise and kind decisions about what to do with the resources accessible to us. (Including how AI could help with this.) (More elaboration on these below.) Here are a few examples of somewhat-more-concrete things that it might (or might not) be good for some people to do on these (and related) topics: Develop proposals for how labs could treat digital minds better, and advocate for them to be implemented. (C.f. this nearcasted proposal.) Advocate for people to try to avoid building AIs with large-scale preferences about the world (at least until we better understand what we're doing). In order to avoid a scenario where, if some generation of AIs turn out to be sentient and worthy of rights, we're forced to choose between "freely hand over political power to alien preferences" and "deny rights to AIs on no reasonable basis". Differentially accelerate AI being used to improve our ability to find the truth, compared to being used for propaganda and manipulation. E.g.: Start an organization that uses LLMs to produce epistemically rigorous investigations of many topics. If you're the first to do a great job of this, and if you're truth-seeking and even-handed, then you might become a trusted source on controversial topics. And your investigations would just get better as AI got better. E.g.: Evaluate and write-up facts about current LLM's forecasting ability, to incentivize labs to make LLMs state correct and calibrated beliefs about the world. E.g.: Improve AI ability to help with thorny philosophical problems. Implications for community building? …with a focus on "the extent to which community-building resources should go towards AI safety vs. broader capacity building". Ethics, philosophy, and prioritization matter more for research on these topics than it does for alignment research. For some issues in AI alignment, there's a lot of convergence on what's important regardless of your ethical perspective, which means that ethics & philosophy aren't that important for getting people to contribute. 
By contrast, when thinking about "everything but alignment", I think we should expect somewhat more divergence, which could raise the importance of those subjects. For example: How much to care about digital minds? How much to focus on "deliberation could get off track forever" (which is of great longtermist importance) vs. short-term events (e.g. the speed at which AI gets deployed to solve all of the world's current problems.) But to be clear, I wouldn't want to go hard on any one ethical framework here (e.g. just utilitarianism). Some diversity and pluralism seems ...

The Nonlinear Library
AF - Implications of evidential cooperation in large worlds by Lukas Finnveden

Aug 23, 2023 · 33:25


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Implications of evidential cooperation in large worlds, published by Lukas Finnveden on August 23, 2023 on The AI Alignment Forum. I've written several posts about the plausible implications of "evidential cooperation in large worlds" (ECL), on my newly-revived blog. This is a cross-post of the first. If you want to see the rest of the posts, you can either go to the blog or click through the links in this one. All of the content on my blog, including this post, only represent my own views - not those of my employer. (Currently OpenPhilanthropy.) "ECL" is short for "evidential cooperation in large worlds". It's an idea that was originally introduced in Oesterheld (2017) (under the name of "multiverse-wide superrationality"). This post will explore implications of ECL, but it won't explain the idea itself. If you haven't encountered it before, you can read the paper linked above or this summary written by Lukas Gloor.1 This post lists all candidates for decision-relevant implications of ECL that I know about and think are plausibly important.2 In this post, I will not describe in much depth why they might be implications of ECL. Instead, I will lean on the principle that ECL recommends that we (and other ECL-sympathetic actors) act to benefit the values of people whose decisions might correlate with our decisions. As described in this appendix, this relies on you and others having particular kinds of values. For one, I assume that you care about what happens outside our light cone. But more strongly, I'm looking at values with the following property: If you could have a sufficiently large impact outside our lightcone, then the value of taking different actions would be dominated by the impact that those actions had outside our lightcone. I'll refer to this as "universe-wide values". Even if all your values aren't universe-wide, I suspect that the implications will still be relevant to you if you have some universe-wide values. This is speculative stuff, and I'm not particularly confident that I will have gotten any particular claim right. Summary (with links to sub-sections) For at least two reasons, future actors will be in a better position to act on ECL than we are. Firstly, they will know a lot more about what other value-systems are out there. Secondly, they will be facing immediate decisions about what to do with the universe, which should be informed by what other civilizations would prefer.3 This suggests that it could be important for us to Affect whether (and how) future actors do ECL. This can be decomposed into two sub-points that deserve separate attention: how we might be able to affect Futures with aligned AI, and how we might be able to affect Futures with misaligned AI. But separately from influencing future actors, ECL also changes our own priorities, today. In particular, ECL suggests that we should care more about other actors' universe-wide values. When evaluating these implications, we can look separately at three different classes of actors and their values. I'll separately consider how ECL suggests that we should. Care more about other humans' universe-wide values.4 I think the most important implication of this is that Upside- and downside-focused longtermists should care more about each others' values. Care more about evolved aliens' universe-wide values. 
I think the most important implication of this is that we plausibly should care more about influencing how AI could benefit/harm alien civilizations. How much more? I try to answer that question in the next post. My best guess is that ECL boosts the value of this by 1.5-10x. (This is importantly based on my intuition that we would care a bit about alien values even without ECL.) Care more about misaligned AIs' universe-wide values.5 I don't think this significantly reduces the value of worki...

The Nonlinear Library
AF - PaLM-2 & GPT-4 in "Extrapolating GPT-N performance" by Lukas Finnveden

May 30, 2023 · 11:50


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: PaLM-2 & GPT-4 in "Extrapolating GPT-N performance", published by Lukas Finnveden on May 30, 2023 on The AI Alignment Forum. Two and a half years ago, I wrote Extrapolating GPT-N performance, trying to predict how fast scaled-up models would improve on a few benchmarks. One year ago, I added PaLM to the graphs. Another spring has come and gone, and there are new models to add to the graphs: PaLM-2 and GPT-4. (Though I only know GPT-4's performance on a small handful of benchmarks.) Converting to Chinchilla scaling laws In previous iterations of the graph, the x-position represented the loss on GPT-3's validation set, and the x-axis was annotated with estimates of size+data that you'd need to achieve that loss according to the Kaplan scaling laws. (When adding PaLM to the graph, I estimated its loss using those same Kaplan scaling laws.) In these new iterations, the x-position instead represents an estimate of (reducible) loss according to the Chinchilla scaling laws. Even without adding any new data-points, this predicts faster progress, since the Chinchilla scaling laws describes how to get better performance for less compute. The appendix describes how I estimate Chinchilla reducible loss for GPT-3 and PaLM-1. Briefly: For the GPT-3 data points, I convert from loss reported in the GPT-3 paper, to the minimum of parameters and tokens you'd need to achieve that loss according to Kaplan scaling laws, and then plug those numbers of parameters and tokens into the Chinchilla loss function. For PaLM-1, I straightforwardly put its parameter- and token-count into the Chinchilla loss function. To start off, let's look at a graph with only GPT-3 and PaLM-1, with a Chinchilla x-axis. Here's a quick explainer of how to read the graphs (the original post contains more details). Each dot represents a particular model's performance on a particular category of benchmarks (taken from papers about GPT-3 and PaLM). Color represents benchmark; y-position represents benchmark performance (normalized between random and my guess of maximum possible performance). The x-axis labels are all using the Chinchilla scaling laws to predict reducible loss-per-token, number of parameters, number of tokens, and total FLOP (if language models at that loss were trained Chinchilla-optimally). Compare to the last graph in this comment, which is the same with a Kaplan x-axis. Some things worth noting: PaLM is now ~0.5 OOM of compute less far along the x-axis. This corresponds to the fact that you could get PaLM for cheaper if you used optimal parameter- and data-scaling. The smaller GPT-3 models are farther to the right on the x-axis. I think this is mainly because the x-axis in my previous post had a different interpretation. The overall effect is that the data points get compressed together, and the slope becomes steeper. Previously, the black "Average" sigmoid reached 90% at ~1e28 FLOP. Now it looks like it reaches 90% at ~5e26 FLOP. Let's move on to PaLM-2. If you want to guess whether PaLM-2 and GPT-4 will underperform or outperform extrapolations, now might be a good time to think about that. PaLM-2 If this CNBC leak is to be trusted, PaLM-2 uses 340B parameters and is trained on 3.6T tokens. That's more parameters and less tokens than is recommended by the Chinchilla training laws. Possible explanations include: The model isn't dense. 
Perhaps it implements some type of mixture-of-experts situation that means that its effective parameter-count is smaller. It's trained Chinchilla-optimally for multiple epochs on a 3.6T token dataset. The leak is wrong. If we assume that the leak isn't too wrong, I think that fairly safe bounds for PaLM-2's Chinchilla-equivalent compute is: It's as good as a dense Chinchilla-optimal model trained on just 3.6T tokens, i.e. one with 3.6T/20=180B parameters. This would ...
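The lower bound quoted in the excerpt is simple arithmetic. Below is a minimal sketch of it, assuming the roughly 20-tokens-per-parameter rule of thumb the post uses and the standard ~6·N·D estimate of dense training FLOP; the FLOP constant and the comparison printout are illustrative additions, not figures from the episode.

```python
# Sketch of the lower-bound arithmetic above. Assumes the rule-of-thumb
# "Chinchilla-optimal" ratio of ~20 training tokens per parameter and the
# standard ~6*N*D estimate of dense-training FLOP. Illustrative only.

TOKENS_PER_PARAM = 20          # m in the post's notation
FLOP_PER_PARAM_TOKEN = 6       # forward + backward pass, dense model

def chinchilla_optimal_params(tokens: float) -> float:
    """Parameter count of a Chinchilla-optimal model for a given token budget."""
    return tokens / TOKENS_PER_PARAM

def training_flop(params: float, tokens: float) -> float:
    """Approximate dense training compute."""
    return FLOP_PER_PARAM_TOKEN * params * tokens

leaked_params = 340e9          # PaLM-2 per the CNBC leak
leaked_tokens = 3.6e12

# Lower bound: a dense Chinchilla-optimal model trained on the same 3.6T tokens.
lb_params = chinchilla_optimal_params(leaked_tokens)   # ~1.8e11, i.e. 180B
print(f"lower-bound params: {lb_params:.3g}")
print(f"lower-bound training FLOP: {training_flop(lb_params, leaked_tokens):.3g}")
print(f"leaked-config training FLOP: {training_flop(leaked_params, leaked_tokens):.3g}")
```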

The Nonlinear Library
LW - PaLM-2 & GPT-4 in "Extrapolating GPT-N performance" by Lukas Finnveden

May 30, 2023 · 11:49


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: PaLM-2 & GPT-4 in "Extrapolating GPT-N performance", published by Lukas Finnveden on May 30, 2023 on LessWrong. Two and a half years ago, I wrote Extrapolating GPT-N performance, trying to predict how fast scaled-up models would improve on a few benchmarks. One year ago, I added PaLM to the graphs. Another spring has come and gone, and there are new models to add to the graphs: PaLM-2 and GPT-4. (Though I only know GPT-4's performance on a small handful of benchmarks.) Converting to Chinchilla scaling laws In previous iterations of the graph, the x-position represented the loss on GPT-3's validation set, and the x-axis was annotated with estimates of size+data that you'd need to achieve that loss according to the Kaplan scaling laws. (When adding PaLM to the graph, I estimated its loss using those same Kaplan scaling laws.) In these new iterations, the x-position instead represents an estimate of (reducible) loss according to the Chinchilla scaling laws. Even without adding any new data-points, this predicts faster progress, since the Chinchilla scaling laws describes how to get better performance for less compute. The appendix describes how I estimate Chinchilla reducible loss for GPT-3 and PaLM-1. Briefly: For the GPT-3 data points, I convert from loss reported in the GPT-3 paper, to the minimum of parameters and tokens you'd need to achieve that loss according to Kaplan scaling laws, and then plug those numbers of parameters and tokens into the Chinchilla loss function. For PaLM-1, I straightforwardly put its parameter- and token-count into the Chinchilla loss function. To start off, let's look at a graph with only GPT-3 and PaLM-1, with a Chinchilla x-axis. Here's a quick explainer of how to read the graphs (the original post contains more details). Each dot represents a particular model's performance on a particular category of benchmarks (taken from papers about GPT-3 and PaLM). Color represents benchmark; y-position represents benchmark performance (normalized between random and my guess of maximum possible performance). The x-axis labels are all using the Chinchilla scaling laws to predict reducible loss-per-token, number of parameters, number of tokens, and total FLOP (if language models at that loss were trained Chinchilla-optimally). Compare to the last graph in this comment, which is the same with a Kaplan x-axis. Some things worth noting: PaLM is now ~0.5 OOM of compute less far along the x-axis. This corresponds to the fact that you could get PaLM for cheaper if you used optimal parameter- and data-scaling. The smaller GPT-3 models are farther to the right on the x-axis. I think this is mainly because the x-axis in my previous post had a different interpretation. The overall effect is that the data points get compressed together, and the slope becomes steeper. Previously, the black "Average" sigmoid reached 90% at ~1e28 FLOP. Now it looks like it reaches 90% at ~5e26 FLOP. Let's move on to PaLM-2. If you want to guess whether PaLM-2 and GPT-4 will underperform or outperform extrapolations, now might be a good time to think about that. PaLM-2 If this CNBC leak is to be trusted, PaLM-2 uses 340B parameters and is trained on 3.6T tokens. That's more parameters and less tokens than is recommended by the Chinchilla training laws. Possible explanations include: The model isn't dense. 
Perhaps it implements some type of mixture-of-experts situation that means that its effective parameter-count is smaller. It's trained Chinchilla-optimally for multiple epochs on a 3.6T token dataset. The leak is wrong. If we assume that the leak isn't too wrong, I think that fairly safe bounds for PaLM-2's Chinchilla-equivalent compute is: It's as good as a dense Chinchilla-optimal model trained on just 3.6T tokens, i.e. one with 3.6T/20=180B parameters. This would make it 6180e...

Klotet i Vetenskapsradion
Consumption's climate debt - our lifestyle demands more Earths than we have

May 28, 2023 · 41:15


Driving, flying and meat eating must decrease globally. In Östersund, 150 employees at seven companies have agreed to go on a 59-day carbon diet. According to researchers, cutting CO2 as a group is a method that can work. The climate debt of our consumption keeps growing, and its emissions are today many times larger than what the Earth can cope with in the long run. The carbon budget shrinks every day and, according to the Mercator Research Institute, will be exhausted in July 2029. The "59 days challenge" in Östersund involves 150 people. There are private initiatives aiming to live in a more climate-friendly way by cutting emissions. Klotet visits Östersund, where employees at seven companies are currently reducing their greenhouse gas emissions by giving up beef and cutting back on driving, food waste, shopping and shower time. On June 1 this round of the "59 days challenge" ends. The first edition was in 2019, when the hotel ran the 59-day challenge while two world championships were held in Östersund; exactly 59 days passed from the start of one championship to the final day of the other. What, according to research, does it take for the average Swede to change their habits and behavior, and how far does that get us? At the same time, it is the very richest in the world who emit the most through their consumption. Featured: Nicki Eby, hotel director; Oliwia Frimert, receptionist; Adam Lagerblad, head chef; and Simone Engqvist, department manager at Clarion Hotel Grand in Östersund; Kristina (Kicki) Berger, regional manager, and Ludde Lorentz, IT consultant at the IT consultancy Cygni in Östersund; Misse Wester, professor at Lund University, Division of Risk Management and Societal Safety; Carl Dalhammar, associate professor at the Centre for Environmental and Climate Science, Lund University; Yvonne Augustsson, textiles expert at the Swedish Environmental Protection Agency (Naturvårdsverket); Kimberly Nicholas, professor of sustainability science at Lund University; Stefan Gössling, professor of tourism studies at Linnaeus University; Göran Finnveden, professor of environmental strategic analysis at KTH and head of the research program Mistra Sustainable Consumption. The excerpt from Örebro Theatre, Nästa andetag, was written by Duncan MacMillan and translated by Joachim Siegård; actors: Maria Simonsson Thulin and Hans Christian Thulin; reporter in Örebro: Alfred Wreeby. Documents mentioned in the program: Politik och styrning för hållbar konsumtion and SOU 2022:15, Sveriges globala klimatavtryck. Write to us: vet@sverigesradio.se. Reporter: Anna-Karin Ivarsson. Host: Niklas Zachrisson. Producer: Anders Wennersten.

The Nonlinear Library
AF - Before smart AI, there will be many mediocre or specialized AIs by Lukas Finnveden

May 26, 2023 · 15:24


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Before smart AI, there will be many mediocre or specialized AIs, published by Lukas Finnveden on May 26, 2023 on The AI Alignment Forum. Summary: In the current paradigm, training is much more expensive than inference. So whenever we finish end-to-end training a language model, we can run a lot of them in parallel. If a language model was trained with Chinchilla scaling laws on the FLOP-equivalent of a large fraction of the world's current GPU and TPUs: I estimate that the training budget could produce at least ~20 million tokens per second. Larger models trained on more data would support more tokens per second. Language models can also run faster than humans. Current models generate 10-100 tokens per second. It's unclear whether future models will be slower or faster. This suggests that, before AI changes the world via being broadly superior to human experts, it will change the world via providing a lot of either mediocre (by the standard of human experts) or specialized thinking. This might make the early alignment problem easier. But the full alignment problem will come soon thereafter, in calendar-time, so this mainly matters if we can use the weaker AI to buy time or make progress on alignment. More expensive AI you can run more AIs with your training budget (...assuming that we're making them more expensive by increasing parameter-count and training data.) We're currently in paradigm where: Training isn't very sample-efficient. When increasing capabilities, training costs increase faster (~squared) than inference costs. Training is massively parallelizable.[1] While this paradigm holds, it implies that the most capable models will be trained using massively parallelized training schemes, equivalent to running a large number of models in parallel. The larger the model, the more data it needs, and so more copies of them will have to be run in parallel during training, in order to finish within a reasonable time-frame.[2] This means that, once you have trained a highly capable model, you are guaranteed to have the resources to run a huge number of them in parallel. And the bigger and more expensive the model was — the more of them can run in parallel on your training cluster. Here's a rough calculation of how many language models you can run in parallel using just your training cluster: Let's say you use p parameters. Running the model for one token takes kp FLOP, for some k. Chinchilla scaling laws say training data is proportional to parameters, implying that the model is trained for mp tokens. For Chinchilla, m=20 tokens / parameter. Total training costs are 3kmp^2. The 3 is there because backpropagation is ~2x as expensive as forward propagation. You spend N seconds training your model. During training, you use (3kmp^2/N) FLOP/s, and at inference you can run one model for kp FLOP/s. So using just your training compute, you can run (3kmp^2/N)/(kp) = 3mp/N tokens per second, just by reallocating your training compute to inference. If you take a horizon-length framework seriously, you might expect that we'll need more training data to handle longer-horizon tasks. Let's introduce a parameter H that describes how many token-equivalents correspond to one data-point. Total training costs are now 3kmHp^2. So with the compute you used to train your models, you can process 3mpH/N token-equivalents per second. 
Some example numbers (bolded ones are changed from the top one): For p=1e14, N=1y, H=1, m=20, the above equation says you can process 200 million token-equivalents per second, with just your training budget. For p=1e15, N=1y, H=1, m=20, it's ~2 billion token-equivalents/second. For p=1e14, N=3 months, H=1 hour, m=20, it's ~1 trillion token-equivalents/second.. In addition, there are various tricks for lowering inference costs. For example, reducing precision (whi...
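For readers who want to plug in their own numbers, here is a minimal sketch of the post's back-of-the-envelope formula. The constants m and H and the example p and N values follow the excerpt (k cancels out of the ratio); the seconds-per-year conversion is an added assumption.

```python
# Minimal sketch of the excerpt's formula: total training FLOP ~= 3*k*m*H*p^2,
# inference ~= k*p FLOP per token, so the training cluster's FLOP/s can instead
# run ~3*m*p*H/N token-equivalents per second. Illustrative only.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def tokens_per_second(p, train_seconds, m=20.0, H=1.0):
    """Token-equivalents/s if the training cluster's compute were spent on inference."""
    return 3.0 * m * p * H / train_seconds

# First example from the post: p = 1e14 parameters, N = 1 year, H = 1, m = 20.
print(f"{tokens_per_second(1e14, SECONDS_PER_YEAR):.2e} token-equivalents/s")
# -> roughly 2e8, i.e. the ~200 million token-equivalents per second quoted above.
```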

The Nonlinear Library
AF - Some thoughts on automating alignment research by Lukas Finnveden

May 26, 2023 · 9:54


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some thoughts on automating alignment research, published by Lukas Finnveden on May 26, 2023 on The AI Alignment Forum. As AI systems get more capable, they may at some point be able to help us with alignment research. This increases the chance that things turn out ok.[1] Right now, we don't have any particularly scalable or competitive alignment solutions. But the methods we do have might let us use AI to vastly increase the amount of labor spent on the problem before AI has the capability and motivation to take over. In particular, if we're only trying to make progress on alignment, the outer alignment problem is reduced to (i) recognising progress on sub-problems of alignment (potentially by imitating productive human researchers), and (ii) recognising dangerous actions like e.g. attempts at hacking the servers.[2] But worlds in which we're hoping for significant automated progress on alignment are fundamentally scary. For one, we don't know in what order we'll get capabilities that help with alignment vs. dangerous capabilities.[3] But even putting that aside, AIs will probably become helpful for alignment research around the same time as AIs become better at capabilities research. Once AI systems can significantly contribute to alignment (say, speed up research by >3x), superintelligence will be years or months away.[4] (Though intentional, collective slow-downs could potentially buy us more time. Increasing the probability that such slow downs happen at key moments seems hugely important.) In such a situation, we should be very uncertain about how things will go. To illustrate one end of the spectrum: It's possible that automated alignment research could save the day even in an extremely tense situation, where multiple actors (whether companies or nations) were racing towards superintelligence. I analyze this in some detail here. To briefly summarize: If a cautious coalition were to (i) competitively advance capabilities (without compromising safety) for long enough that their AI systems became really productive, and (ii) pause dangerous capabilities research at the right time — then even if they only had a fairly small initial lead, that could be enough to do a lot of alignment research. How could we get AI systems that significantly accelerates alignment research without themselves posing an unacceptable level of risk? It's not clear that we will, but one possible story is motivated in this post: It's easier to align subhuman models than broadly superhuman models, and in the current paradigm, we will probably be able to run hundreds of thousands of subhuman models before we get broadly superhuman models, each of them thinking 10-100X faster than humans. Perhaps they could make rapid progress on alignment. In a bit more detail: Let's say that a cautious coalition starts out with an X-month lead, meaning that it will take X months for other coalitions to catch up to their current level of technology. The cautious coalition can maintain that X-month lead for as long as they don't leak any technology,[5] and for as long as they move as fast as their competitors.[6] In reality, the cautious coalition should gradually become more careful, which might gradually reduce their lead (if their competitors are less cautious). 
But as a simple model, let's say that the cautious coalition maintains their X-month lead until further advancement would pose a significant takeover risk, at which point they entirely stop advancing capabilities and redirect all their effort towards alignment research. Simplifying even further, let's say that, when they pause, their current AI capabilities are such that 100 tokens of an end-to-end trained language model on average lead to equally much alignment progress as 1 human researcher second (and that it does so safely). According to this oth...
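As a hedged illustration of the conversion rate mentioned in the excerpt (100 tokens of model output roughly equal to 1 human-researcher second), it can be combined with the ~20 million tokens per second estimated in the companion episode above ("Before smart AI, there will be many mediocre or specialized AIs"). Pairing the two numbers is an editorial illustration, not a calculation from the truncated post.

```python
# Hedged illustration: convert a token throughput into an equivalent number of
# full-time human researchers, using the excerpt's 100-tokens-per-researcher-second
# rate. The 20M tokens/s throughput is borrowed from the companion episode above.

TOKENS_PER_RESEARCHER_SECOND = 100
tokens_per_second = 20e6

researcher_equivalents = tokens_per_second / TOKENS_PER_RESEARCHER_SECOND
print(f"~{researcher_equivalents:,.0f} researcher-equivalents running in parallel")
# -> ~200,000: on these assumptions, each month of paused capabilities work
#    would buy on the order of 200,000 researcher-months of alignment effort.
```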

Effective Altruism Forum Podcast
"AGI and lock-in" by Lukas Finnveden, Jess Riedel, & Carl Shulman

Nov 28, 2022 · 23:40


The long-term future of intelligent life is currently unpredictable and undetermined. In the linked document, we argue that the invention of artificial general intelligence (AGI) could change this by making extreme types of lock-in technologically feasible. In particular, we argue that AGI would make it technologically feasible to (i) perfectly preserve nuanced specifications of a wide variety of values or goals far into the future, and (ii) develop AGI-based institutions that would (with high probability) competently pursue any such values for at least millions, and plausibly trillions, of years. The rest of this post contains the summary (6 pages), with links to relevant sections of the main document (40 pages) for readers who want more details. Original article: https://forum.effectivealtruism.org/posts/KqCybin8rtfP3qztq/agi-and-lock-in Narrated for the Effective Altruism Forum by TYPE III AUDIO.

The Nonlinear Library
EA - AGI and Lock-In by Lukas Finnveden

Oct 29, 2022 · 17:37


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI and Lock-In, published by Lukas Finnveden on October 29, 2022 on The Effective Altruism Forum. The long-term future of intelligent life is currently unpredictable and undetermined. In the linked document, we argue that the invention of artificial general intelligence (AGI) could change this by making extreme types of lock-in technologically feasible. In particular, we argue that AGI would make it technologically feasible to (i) perfectly preserve nuanced specifications of a wide variety of values or goals far into the future, and (ii) develop AGI-based institutions that would (with high probability) competently pursue any such values for at least millions, and plausibly trillions, of years. The rest of this post contains the summary (6 pages), with links to relevant sections of the main document (40 pages) for readers who want more details. 0.0 The claim Life on Earth could survive for millions of years. Life in space could plausibly survive for trillions of years. What will happen to intelligent life during this time? Some possible claims are: A. Humanity will almost certainly go extinct in the next million years. B. Under Darwinian pressures, intelligent life will spread throughout the stars and rapidly evolve toward maximal reproductive fitness. C. Through moral reflection, intelligent life will reliably be driven to pursue some specific “higher” (non-reproductive) goal, such as maximizing the happiness of all creatures. D. The choices of intelligent life are deeply, fundamentally uncertain. It will at no point be predictable what intelligent beings will choose to do in the following 1000 years. E. It is possible to stabilize many features of society for millions or trillions of years. But it is possible to stabilize them into many different shapes — so civilization's long-term behavior is contingent on what happens early on. Claims A-C assert that the future is basically determined today. Claim D asserts that the future is, and will remain, undetermined. In this document, we argue for claim E: Some of the most important features of the future of intelligent life are currently undetermined but could become determined relatively soon (relative to the trillions of years life could last). In particular, our main claim is that artificial general intelligence (AGI) will make it technologically feasible to construct long-lived institutions pursuing a wide variety of possible goals. We can break this into three assertions, all conditional on the availability of AGI: It will be possible to preserve highly nuanced specifications of values and goals far into the future, without losing any information. With sufficient investments, it will be feasible to develop AGI-based institutions that (with high probability) competently and faithfully pursue any such values until an external source stops them, or until the values in question imply that they should stop. If a large majority of the world's economic and military powers agreed to set-up such an institution, and bestowed it with the power to defend itself against external threats, that institution could pursue its agenda for at least millions of years (and perhaps for trillions). Note that we're mostly making claims about feasibility as opposed to likelihood. We only briefly discuss whether people would want to do something like this in Section 2.2. 
(Relatedly, even though the possibility of stability implies E, in the top list, there could still be a strong tendency towards worlds described by one of the other options A-D. In practice, we think D seems unlikely, but that you could make reasonable arguments that any of the end-points described by A, B, or C are probable.) Why are we interested in this set of claims? There are a few different reasons: The possibility of stable institutions could pose an existential risk, i...

The Nonlinear Library
AF - PaLM in "Extrapolating GPT-N performance" by Lukas Finnveden

Apr 6, 2022 · 3:34


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: PaLM in "Extrapolating GPT-N performance", published by Lukas Finnveden on April 6, 2022 on The AI Alignment Forum. A bit more than a year ago, I wrote Extrapolating GPT-N performance, trying to predict how fast scaled-up models would improve on a few benchmarks. Google Research just released a paper reporting benchmark performance of PaLM: a 540B parameter model trained on 780B tokens. This post contains an updated version of one of the old graphs, where I've added PaLM's performance. You can read the original post for the full details, but as a quick explainer of how to read the graph: Each dot represents a particular model's performance on a particular benchmark (taken from the GPT-3 paper). Color represents benchmark; y-position represents benchmark performance (normalized between random and my guess of maximum possible performance); and the x-position represents loss on GPT-3's validation set. The x-axis is also annotated with the required size+data that you'd need to achieve that loss (if you trained to convergence) according to the original scaling laws paper. (After the point at which OpenAI's scaling-laws predicts that you'd only have to train on each data point once, it is also annotated with the amount of FLOP you'd need to train on each data point once.) The crosses represent Google's new language model, PaLM. Since they do not report loss, I infer what position it should have from the size and amount of data it was trained on. (The relationship between parameters and data is very similar to what OpenAI's scaling laws recommended.) The sigmoid lines are only fit to the GPT-3 dots, not the PaLM crosses. Some reflections: SuperGLUE is above trend (and happens to appear on the Cloze & completion trendline — this is totally accidental). ANLI sees impressive gains, though nothing too surprising given ~sigmoidal scaling. Common sense reasoning + Reading tasks are right on trend. Cloze & completion, Winograd, and Q&A are below trend. The average is amusingly right-on-trend, though I wouldn't put a lot of weight on that, given that the weighting of the different benchmarks is totally arbitrary. (The current set-up gives equal weight to everything — despite e.g. SuperGLUE being a much more robust benchmark than Winograd.) And a few caveats: The GPT-3 paper was published 2 years ago. I would've expected some algorithmic progress by now — and the PaLM authors claim to have made some improvements. Accounting for that, this looks more like it's below-trend. The graph relies a lot on the original scaling laws paper. This is pretty shaky, given that the Chinchilla paper now says that the old scaling laws are sub-optimal. The graph also relies on a number of other hunches, like what counts as maximum performance for each benchmark. And using sigmoids in particular was never that well-motivated. Since GPT-3 was developed, people have created much harder benchmarks, like MMLU and Big-bench. I expect these to be more informative than the ones in the graph above, since there's a limit on how much information you can get from benchmarks that are already almost solved. On the graph, it looks like the difference between GPT-3 (the rightmost dots) and PaLM is a lot bigger than the difference between GPT-3 and the previous dot. However, the log-distance in compute is actually bigger between the latter than between the former. 
The reason for this discrepancy is that GPT-3 slightly underperformed the scaling laws, and therefore appears relatively more towards the left than you would have expected from the compute invested in it. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
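As a rough illustration of the normalize-and-fit procedure the explainer describes (raw benchmark scores rescaled between a random baseline and a guessed ceiling, then fit with a sigmoid against model loss), here is a sketch with made-up data points; the specific sigmoid parameterization and all numbers are assumptions, not the post's actual fit.

```python
# Hedged sketch of the normalization-and-sigmoid-fit idea described above.
# Benchmark baselines and data points are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def normalize(score, random_baseline, max_guess):
    """Rescale a raw benchmark score to [0, 1] between random and guessed ceiling."""
    return (score - random_baseline) / (max_guess - random_baseline)

def sigmoid(loss, midpoint, steepness):
    """Normalized performance as a function of loss; lower loss -> higher score."""
    return 1.0 / (1.0 + np.exp(steepness * (loss - midpoint)))

# Hypothetical (loss, raw accuracy) pairs for one benchmark with a 25% random baseline.
loss = np.array([2.6, 2.3, 2.0, 1.8, 1.7])
raw = np.array([0.30, 0.41, 0.55, 0.66, 0.72])
y = normalize(raw, random_baseline=0.25, max_guess=0.95)

(midpoint, steepness), _ = curve_fit(sigmoid, loss, y, p0=[2.0, 5.0])
print(f"fitted midpoint loss: {midpoint:.2f}, steepness: {steepness:.2f}")
```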

The Nonlinear Library: Alignment Forum Top Posts
Truthful AI: Developing and governing AI that does not lie by Owain Evans, Owen Cotton-Barratt, Lukas Finnveden

Dec 4, 2021 · 18:21


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Truthful AI: Developing and governing AI that does not lie, published by Owain Evans, Owen Cotton-Barratt, Lukas Finnveden on the AI Alignment Forum. This post contains the abstract and executive summary of a new 96-page paper from authors at the Future of Humanity Institute and OpenAI. Update: The authors are doing an AMA about truthful AI during October 26-27. Abstract In many contexts, lying – the use of verbal falsehoods to deceive – is harmful. While lying has traditionally been a human affair, AI systems that make sophisticated verbal statements are becoming increasingly prevalent. This raises the question of how we should limit the harm caused by AI “lies” (i.e. falsehoods that are actively selected for). Human truthfulness is governed by social norms and by laws (against defamation, perjury, and fraud). Differences between AI and humans present an opportunity to have more precise standards of truthfulness for AI, and to have these standards rise over time. This could provide significant benefits to public epistemics and the economy, and mitigate risks of worst-case AI futures. Establishing norms or laws of AI truthfulness will require significant work to: identify clear truthfulness standards; create institutions that can judge adherence to those standards; and develop AI systems that are robustly truthful. Our initial proposals for these areas include: a standard of avoiding “negligent falsehoods” (a generalisation of lies that is easier to assess); institutions to evaluate AI systems before and after real-world deployment; explicitly training AI systems to be truthful via curated datasets and human interaction. A concerning possibility is that evaluation mechanisms for eventual truthfulness standards could be captured by political interests, leading to harmful censorship and propaganda. Avoiding this might take careful attention. And since the scale of AI speech acts might grow dramatically over the coming decades, early truthfulness standards might be particularly important because of the precedents they set. Executive Summary & Overview The threat of automated, scalable, personalised lying Today, lying is a human problem. AI-produced text or speech is relatively rare, and is not trusted to reliably convey crucial information. In today's world, the idea of AI systems lying does not seem like a major concern. Over the coming years and decades, however, we expect linguistically competent AI systems to be used much more widely. These would be the successors of language models like GPT-3 or T5, and of deployed systems like Siri or Alexa, and they could become an important part of the economy and the epistemic ecosystem. Such AI systems will choose, from among the many coherent statements they might make, those that fit relevant selection criteria — for example, an AI selling products to humans might make statements judged likely to lead to a sale. If truth is not a valued criterion, sophisticated AI could use a lot of selection power to choose statements that further their own ends while being very damaging to others (without necessarily having any intention to deceive – see Diagram 1). This is alarming because AI untruths could potentially scale, with one system telling personalised lies to millions of people. Diagram 1: Typology of AI-produced statements. 
Linguistic AI systems today have little strategic selection power, and mostly produce statements that are not that useful (whether true or false). More strategic selection power on statements provides the possibility of useful statements, but also of harmful lies. Aiming for robustly beneficial standards Widespread and damaging AI falsehoods will be regarded as socially unacceptable. So it is perhaps inevitable that laws or other mechanisms will emerge to govern this behaviour. These might be existing human...

Vetandets värld
4/4. The waste pyramid - The plastic nobody wants

Mar 16, 2020 · 19:43


What should we do with the plastic nobody wants? For zero waste activists, plastic stands in the way of a circular economy. For Sweden's waste incineration companies, plastic means increasingly expensive carbon emissions. In Capannori, Italy, father of two Simone Tomei is down to a few kilos of garbage per year. He has a roll of 20 garbage bags that will last ten years; each bag carries a unique chip so the municipality knows exactly how much waste every household generates. Industry is investing in new, innovative methods to make use of the laboriously sorted waste: so-called reject from the pulp industry is turned into pallets and flower pots, but the market for such products is limited. At the same time, so-called energy recovery is becoming an ever worse option precisely for old plastic. In Sweden, municipal waste incineration plants are now struggling with growing costs for the carbon dioxide emitted when plastic is burned. Prices for emission allowances have soared, and last year plastic incineration cost the municipal energy companies 600 million kronor, according to the industry organization Avfall Sverige. Carbon capture could be one way to bring down emissions and thereby costs. Featured in the program: Simone Tomei, member of the zero waste family project in Capannori, Tuscany; Göran Finnveden, professor of environmental strategic analysis at KTH in Stockholm; Enzo Favoino, chair of Zero Waste Europe's scientific committee; Sune Scheibye, communications officer at Amager Resource Center in Copenhagen; Klas Svensson, adviser on energy recovery at Avfall Sverige. Host: Marcus Hansson. Producer: Peter Normark, peter.normark@sverigesradio.se

Vetandets värld
2/4. The waste pyramid - How Italy overtook Sweden

Mar 9, 2020 · 19:42


Italy has gone from garbage chaos to being better than Sweden at both source separation and recycling. How did it manage that? A long list of Italian municipalities have committed to the zero waste concept, where the ambition is that as little waste as possible should be incinerated or landfilled. In several parts of Italy, close to 90% of household waste is source-separated, and in recycling rate Italy has overtaken both Sweden and the EU average. Italy's path away from garbage chaos began in the 1990s in the town of Capannori in Tuscany. When an incineration plant was to be built there, schoolteacher Rossano Ercolini started a protest movement that has since spread across Europe. Today he leads a research center working on sustainable solutions and on developing better packaging from materials that are easier to reuse. Featured in the program: Rossano Ercolini, schoolteacher, grassroots activist and director of the Zero Waste Research Center in Capannori; Enzo Favoino, chair of Zero Waste Europe's scientific committee; Göran Finnveden, professor of environmental strategic analysis at KTH; Klas Svensson, adviser on energy recovery at Avfall Sverige; Marco Mattiello, head of international relations at the waste company Contarina. Host: Marcus Hansson. Sound engineer: Olof Sjöström. Producer: Peter Normark, peter.normark@sverigesradio.se

Vetandets värld
1/4. The Waste Pyramid - Burn the waste or recycle?

Vetandets värld

Play Episode Listen Later Mar 8, 2020 19:43


In Sweden we burn a large share of our household waste, but opposition to waste incineration is growing. In Italy, long ridiculed in waste matters, more and more municipalities are setting records in source separation. One of the world's most modern waste incineration plants, Amager Resource Center in Copenhagen, looks like an ultramodern office building and has a ski slope on its roof. Inside the plant, waste from 300 trucks is burned every day. The problem with waste incineration is that valuable raw materials disappear, which makes it harder to build a circular society. In addition, carbon dioxide is released when the plastic is burned. Opposition to waste incineration is now growing stronger in several parts of Europe. In Milan, the build-up of a successful source separation system led to the construction of yet another incineration plant being stopped. Heard in the programme: Sune Scheibye, head of communications at Amager Resource Center; Jens Peter Mortensen, expert on circular economy and industry at Danmarks Naturfredningsforening; Enzo Favoino, chair of Zero Waste Europe's scientific committee; Göran Finnveden, professor of environmental strategic analysis at KTH; Klas Svensson, adviser on energy recovery at Avfall Sverige. Presenter Marcus Hansson Sound engineer Olof Sjöström Producer Peter Normark peter.normark@sverigesradio.se

Vetandets värld
Universities want to lead the way on climate action

Vetandets värld

Play Episode Listen Later Sep 2, 2019 19:30


Sweden's universities and university colleges want to get better at practising what they preach on climate. To that end, 36 higher education institutions have agreed on common guidelines for working on the issue. This means the institutions will draw up dedicated strategies with concrete proposals for how to address the climate issue going forward. It is about both how they should continue to supply the rest of society with knowledge about the climate, and how the universities and colleges can manage to reduce their own emissions. Göran Finnveden, vice-president for sustainable development at KTH Royal Institute of Technology in Stockholm, says that being at the forefront of climate work is important for credibility. The universities must show that they take their own research seriously, he argues. Presenter: Sara Sällström sara.sallstrom@sverigesradio.se

Klotet i Vetenskapsradion
How should we live to cope with the climate crisis?

Klotet i Vetenskapsradion

Play Episode Listen Later Nov 28, 2018 44:29


What does a society look like in which our emissions do not cause climate change? A research programme presents four sustainable future scenarios. The Lundin Dahl family have acquired a farm and partly left their jobs in Alingsås. Their goal is to become self-sufficient, and they note that now that they have less money to spend, they also consume fewer things and take fewer flights. "With less money, you can't damage the environment as much," says Ylva Lundin. The family's choice of lifestyle partly matches one of four future scenarios produced by a research project at KTH Royal Institute of Technology. The aim of the project is to show what a sustainable society could look like by 2050. The four scenarios assume, among other things, that societies offer a good life for everyone and that consumption may cause at most 0.8 tonnes of carbon dioxide per person per year. Another basic principle of the future scenarios is that they do not rely on economic growth. The research programme is called "Bortom BNP-tillväxt: Scenarier för hållbart samhällsbyggande" (Beyond GDP Growth: Scenarios for sustainable society-building). Klotet also visits the café Llama Lloyd in Gothenburg, where café owner Robin Olsson is developing the sharing economy by working out, within an association, how to share as much as possible, a so-called collaborative economy. Fruit from trees owned by the city, food rescued from food waste and bicycles are some examples. "It's good for the environment, but that's not the most important thing," says Jonathan Mattebo Persson. "The most important thing is to have fun. Sharing is fun." Klotet asks what it would take for economic growth to be sustainable, and whether sustainable economic growth can be achieved at all. Taking part in the programme are the two leaders of the KTH project, Göran Finnveden, professor of environmental strategic analysis, and Åsa Svenfelt, associate professor of sustainable development and futures studies, as well as John Hassler, professor of economics at Stockholm University, who studies the global economy and climate. The presenter is Niklas Zachrisson. Here you can find the final report of "Bortom BNP-tillväxt".

Klotet i Vetenskapsradion
What can I do about the climate?

Klotet i Vetenskapsradion

Play Episode Listen Later Sep 19, 2018 44:31


Eat less beef. Buy an eco-friendly car. Start cycling. Stop flying. Consume less. Klotet finds out what is right and wrong when it comes to cutting our climate emissions, and how big our responsibility really is. More and more people are becoming aware of how our lifestyle causes emissions of carbon dioxide and other greenhouse gases. But how much do the emissions that I cause actually matter, compared with, say, the coal-fired power plants in China or the USA? Does my share make any difference? More often than not, it is more complicated and more expensive to choose low-carbon options for food and travel. And someone who has found good alternatives in private life may instead become a carbon villain at work if the workplace makes no effort to cut its emissions. Is the climate-smart life a utopia that constantly collides with reality? Klotet has visited the Larsson Berg family in Katrineholm, who have tried to live more climate-friendly for a year, and also a workplace with a deliberate strategy for minimising its employees' emissions. Also taking part in the programme are environmental psychologist and associate professor Maria Ojala of Örebro University, and Göran Finnveden, professor of strategic environmental analysis at KTH Royal Institute of Technology. The presenter is Niklas Zachrisson.

Stadspodden
Stadspodden - Pop-up ventures and temporary buildings

Stadspodden

Play Episode Listen Later Nov 6, 2017 55:26


We have invited film director Måns Herngren, vice mayor for transport Daniel Helldén, Helena Olsson of Fastighetsägarna, sustainability professor Göran Finnveden, Per Eriksson of City i Samverkan, street food and market profile Fredrik Lindstål and NCC's own Madeleine Nobs to discuss how Dome of Visions, the home of Stadspodden, and other temporary ventures affect the attractiveness of the city. What examples are there? Will the trend continue? Does it also matter for democracy? In this episode we look ahead at the future of urban and place development.

Kossornas planet
Kossornas planet 2013-04-27 at 12:03

Kossornas planet

Play Episode Listen Later Apr 27, 2013 35:16


We test an electric bicycle together with Stefan Sundström in our programme on the theme of bicycles. How to get more people cycling, with "cycle path doctor" Anna Niska. Bicycle planning in the big city with cycling expert Krister Isaksson. The airbag bicycle helmet that took seven years to develop, according to Anna Haupt, one of its inventors. Life cycle assessments at the bike rack with Professor Göran Finnveden, KTH. We also find out what World Naked Bike Day is all about.

Think Globally Radio
Sweden's elusive environmental quality targets

Think Globally Radio

Play Episode Listen Later Mar 25, 2012


Guest: Göran Finnveden, March 25, 2012. Sweden is widely considered one of the most sustainable countries in the world, yet progress towards reaching some of its environmental quality targets is off pace with earlier ambitions. This Sunday, Think Globally Radio speaks with Prof. Göran Finnveden of the Environmental Strategies Research …

Släktband
Stingy Smålanders

Släktband

Play Episode Listen Later Nov 24, 2008 24:42


Släktband's provincial edition, 24 November 2008. In this series, which we call the provincial edition, we start from the prejudices that have been cultivated about people from different parts of the country. Now it is the turn of the province that perhaps carries the most deeply rooted reputation of all: Småland. There are many sayings and proverbs that single out Smålanders as particularly stingy. This, for example, is what Nordisk Familjebok wrote in 1917: "The Smålander is by nature alert and intelligent, diligent and hard-working, brisk and lively, yet compliant in temperament, handy and shrewd, all of which gives him the advantage of being able to get by in life even with small means." But it was not only the encyclopaedias of the turn of the last century that described the Smålander in this way. Schoolchildren, too, had these prejudices confirmed in their readers. The elementary school reader from 1910 contains a tale about how Småland was created. But things did not quite go as they should have...
The Tale of Our Lord and Småland
Our Lord was busy creating the provinces of Sweden, and Saint Peter walked along and watched. In the end he thought there was really nothing to it. Our Lord had already begun laying out and arranging Småland when Saint Peter eagerly asked to be allowed to continue. Our Lord agreed, but so as not to be idle, he immediately set to work on Skåne himself. Saint Peter now began shaping mountains and ridges and piling heaps of stone on the mountains, for he wanted the ground to come as close as possible to the good warmth of the sun. Then he spread a thin layer of soil over the masses of stone. But he did not understand that a land that reaches high up towards the clouds is more exposed to cold, snow, rain and storms; instead he was highly pleased with his work. And so he went down to Skåne to meet Our Lord. "Are you finished already?" "Yes, long ago," answered Saint Peter. "Well, what do you think of this land?" Saint Peter looked out over the level fields and meadows shining in the sunlight, but he also saw lakes, rivers, ridges and forests here and there. "Oh yes," he said, "this is a good land, but I still think mine is just as good." And so they went off together to look at Saint Peter's creation. But while Saint Peter was down in Skåne, a couple of heavy downpours had come and washed almost all the soil off the mountains and down into the crevices, and the water had filled these and formed lakes and great bogs; but in some flat places the sand lay dry and thirsting in the sun. "Yes, this land will be meagre and barren for all time, there is no helping it," said Our Lord. "Oh, surely it is not that bad," said Saint Peter. "Just wait until I have time to create a people who can cultivate the bogs and clear fields out of the stony slopes!" "No," said Our Lord, "that I certainly do not entrust to you. But you may go down to Skåne, which I have made a good and easily managed land, and create a people for that land. The Smålander I will create myself." And so Our Lord created the Smålander and made him quick and contented and cheerful, diligent, enterprising and capable, so that he would be able to make a living in his poor land.
Whoever was to cultivate the stony soil of Småland had to economise with their resources, and that is perhaps where the claim of stinginess comes from. The Smålander Lennart Johansson is an associate professor of history and head of the Kronoberg archive in Växjö. He is also one of the authors of the book Smålands historia, published a couple of years ago.
He explains that the province of Småland long consisted of a number of small territories and did not become a unified concept until quite late. "The area consisted of a host of small lands, Värend, Njudung, Finnveden and so on. These areas did not really have much in common; rather, they were regarded as 'the small lands', which became Småland," says Lennart Johansson. A medieval inhabitant of Småland probably saw themselves first and foremost as a person of Värend, and so on. Originally, the image of Småland was far from bright, at least not for those who viewed the area from outside. Many of the sayings that appear in the 16th, 17th and 18th centuries show that it was a province often looked down upon, says Lennart Johansson, citing as an example the saying "Before Our Lord we are all Smålanders". "It simply means that before God we are small and pitiable." Some individuals did a great deal to spread the malicious image of people from Småland. "One such trendsetter was Gustav Vasa, who fought a bitter feud with Nils Dacke. Gustav Vasa called Nils Dacke a 'traitor, whoremonger and heretic'," Johansson recounts, and continues: "The royal historiography portrayed Smålanders as an unreliable and treacherous breed. That had long-term consequences both for Smålanders' self-image and for how Smålanders were perceived from outside," he believes. The image of the faithless Smålander seems to have spread quite far. When the Russian tsar Ivan the Terrible wrote a letter to Johan III of Sweden in the 1570s, it was full of invective and insults. Among other things, the tsar wrote that Johan was a wretched figure, especially since his father was said to descend from Småland peasants. Much later, the bishop of Växjö, Esaias Tegnér, let his ironic scourge lash the Smålanders. In one letter he wrote that the Smålander is "…a greedy, faithless and honourless breed, too stingy to be cheerful and too poor to be honest." In another letter he wrote about the great winter market in Växjö, where large quantities of fox skins were sold "…which the merciless Smålanders have flayed from their comrades." Cunning foxes flaying other foxes, in other words. The notion that there are regions or provinces where people are stingier than elsewhere is not unique to Sweden and Småland, says Lennart Johansson: "There are areas in most European countries where people have been singled out as stingy; we have the Scots, and the Savonians in Finland. But the stinginess is really thrift: if you lived in an area where it was hard to make a living, you had to take care of the resources you had." At the end of the 19th century, when national romanticism inspired a search for the positive traits of the people of each province, the Smålander changed a little: he became more clever and cunning than stingy. Perhaps it was in that context that the myth of how Småland was created arose. Småland thrift was a reality, Lennart Johansson argues. In his work at the Kronoberg archive he finds, among old documents, many examples of how Småland politicians and officials tried in every way to keep down spending of public funds. In the 1950s and 60s, that is, before the big municipal mergers, the local politicians can be seen saying no to most things that would mean expenditure. "There are examples of county councillors opposing a gymnasium for a folk high school.
They thought the students should go out and chop wood instead of county council money being spent on unnecessary projects," Lennart Johansson concludes.
A treasure trove of letters
Genealogists lucky enough to find letters can get close to the people who lived long ago. Björn Smith, whose roots are in southern Småland, found several boxes of letters unknown to him when his mother passed away last year. At the very back of a wardrobe stood a treasure trove of letters, and it did in fact conceal a few family secrets; for example, Björn Smith did not know that his maternal grandfather had siblings. Some of the letters were written by Björn Smith's mother's paternal grandfather, Salomon Simonsson, who owned 1/16 of a homestead in southern Grimsbygd in Pjätteryd parish in Småland. In these letters you can see that they had to turn over every öre. "It's fun to see these letters because they are structured in roughly the same way," says Björn Smith. "He likes to start with a Bible verse, then it's about the neighbours, and at the end of the letters he writes about the farm." The letters say a good deal about money, what things cost, how much the neighbours paid to hire labour, and so on. "Yesterday I was at the post office. I sent you ten kronor by postal order, which you shall have for Christmas candles and a summer coat. We sold a slaughtered lamb, for which we got 11 kronor and some öre. We have bought a pig; it cost 13.50. The Lord's grace and peace be with you now and always. Many greetings from us, your parents Salomon and Eva Simonsson." "When you read the letters you understand the circumstances they lived under. Anyone who was not skilful and did not plan well could not manage the farm, so what we today call stinginess I would call planning and control," says Björn Smith. "You can also read in the letters a contempt for the people who could not manage to plan and keep control of the harvest. Otherwise it is hard to see past Salomon Simonsson's massive religious conviction, for it seems to govern his whole life." This is what one of the letters says: "Be faithful unto death, says Jesus, and I shall give you the crown of life; and Jesus, who is the author and perfecter of faith, He is mighty to keep us until the day of Jesus Christ. Thanks and praise and glory be to God. Our dear beloved son, many thanks for your dear letter, which we received on Saturday. We hear that you are well, and it is very pleasing for us to hear that you are spared hardship, for it would be sad to hear otherwise. We hear that you have thought of going to America, but it is very difficult and strict for passengers now to be allowed to land when they arrive. I read a piece in a newspaper which I shall send you, so you can read it yourself. So it is probably best to stay home in Sweden. You mention the military drill next year. Should you not be able to get out of the drill if you report that you have a weak chest? Not lying, I do not mean that, but saying that you are frail in health. Tailor Persson thought your health too weak for you to travel to America, and likewise to do the drill. But if it should come to that, you get 50 öre a day, and you can perhaps earn a few öre with your trade in the meantime. The Lord's peace, Eva and Salomon Simonsson."
Småland in a large database
As we know, Småland has consisted of many smaller parts, or territories, and even today a little of that structure remains: three counties share the province of Småland. For precisely that reason it can be difficult to find one's way in the church records. But for large parts of Småland there is a database that makes things easier for family historians. PLF, the centre for personal and local history research in Oskarshamn, has been gathering church record material in a database for almost 30 years.
In it, all the information is written out in plain text, and you can search from several different angles: by name, by date of death, birth or marriage, or by other details. Sam Blixt is one of the creators of this unique database, unique because it is so large and was started so early. It all began in 1983, when computers were uncommon and rather tricky for ordinary people. Sam Blixt, who was and is a keen genealogist, got in touch with a colleague named Gunnar Källenius, who had planned to make a reconstruction of the household examination rolls in a parish outside Oskarshamn. Sam Blixt describes how they started their work: "When I told him I was creating a computer program to enter the results of my family research, Gunnar was thrilled," Sam Blixt recalls. Källenius said he intended to register everyone born, deceased and married in Döderhult parish, and at that point, coming across someone interested in working with computers was worth its weight in gold. The two genealogists did indeed start with Döderhult, but the work soon spread to neighbouring parishes, and over time more people became involved. Today there are just over four million records in PLF's database, covering 300 contiguous parishes in Småland. In addition, there is a special database for Kronoberg county, created by the county's genealogical society, which contains another million records. "PLF may actually be the largest regional database of its kind in the whole world," says Sam Blixt, not without a good measure of pride. "If your roots are in Småland, you have a good chance of finding the people you are looking for in our database," he says. He demonstrates the database, and as a test we check whether Elisabeth Renström's Småland ancestors are in it. After just one attempt, Elisabeth finds her maternal grandfather's mother, who had been hard to find in the church records. The database is a good starting point for family research, says Sam Blixt, but anyone who wants to go deeper and learn more should go on to the original sources in the archives themselves: "I always recommend not going more than three or four generations back - go sideways instead, it's much more exciting," he says. Volunteer work has to date produced four CDs of records from large parts of Småland, and it is the sales of the CDs that fund the continued work. But the hundreds of members of the society who keep working on the database do so entirely without pay. And there is plenty of work left to do. "We are thinking of expanding with Jönköping county, and in the longer term we plan to make CDs with household examination rolls, as well as records of people moving in and out, and estate inventories." When Sam Blixt and his colleague started work on the Småland database in 1983, computer technology was a completely different story from today. Had they known then what they know now about how much data can be stored, they would probably have aimed even higher, Sam Blixt believes: "Back then we had to be sparing with space when writing down the information. Since then the field has developed enormously, and we see no limits to this," he says. Finally, we cannot resist asking Sam Blixt, himself a skilled genealogist, whether he has found any traces of the alleged Småland stinginess - or perhaps the opposite, generosity - among his own Småland roots. And he has: "In my own family, a few generations back, there is a man who built Frödinge church in 1734. His name was Nils Petter Hjertstedt, and he was a district judge and mayor."
On the pulpit you can still see an inscription stating that it was built on Hjertstedt's initiative. But this church building was not entirely without personal gain: he demanded the right to build his own gallery in the church, a gallery of honour for himself and his family. "Well, he probably felt he should get something for his efforts," says Sam Blixt, who has visited Frödinge church, where traces of the old gallery still remain.