Podcasts about qualitatively

  • 51 PODCASTS
  • 70 EPISODES
  • 24m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • May 6, 2025 LATEST

POPULARITY

(Popularity chart: 2017–2024)


Best podcasts about qualitatively

Latest podcast episodes about qualitatively

Conrad Mbewe on SermonAudio
Building a Church Numerically and Qualitatively

Conrad Mbewe on SermonAudio

Play Episode Listen Later May 6, 2025 56:00


A new MP3 sermon from Lusaka Baptist Church is now available on SermonAudio with the following details: Title: Building a Church Numerically and Qualitatively. Speaker: Conrad Mbewe. Broadcaster: Lusaka Baptist Church. Event: Sunday Service. Date: 5/2/2025. Length: 56 min.

YUTORAH: R' Efrem Goldberg -- Recent Shiurim
Turn Friday into Erev Shabbos #214 - Add on to Shabbos Quantitatively and Qualitatively

YUTORAH: R' Efrem Goldberg -- Recent Shiurim

Play Episode Listen Later Apr 3, 2025 8:37


Survey of Shas Sugyas - Feed Podcast
Turn Friday into Erev Shabbos #214 - Add on to Shabbos Quantitatively and Qualitatively

Survey of Shas Sugyas - Feed Podcast

Play Episode Listen Later Apr 3, 2025


Hypnosis and relaxation | Sound therapy
Fill your whole body with peace, find your true self, communicate with yourself, discover what you need, and improve yourself qualitatively

Hypnosis and relaxation | Sound therapy

Play Episode Listen Later Jan 9, 2025 10:00


Support this podcast at — https://redcircle.com/hypnosis-and-relaxation-sound-therapy9715/donations | Advertising Inquiries: https://redcircle.com/brands | Privacy & Opt-Out: https://redcircle.com/privacy

The Nonlinear Library
LW - The Hessian rank bounds the learning coefficient by Lucius Bushnaq

The Nonlinear Library

Play Episode Listen Later Aug 9, 2024 7:43


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Hessian rank bounds the learning coefficient, published by Lucius Bushnaq on August 9, 2024 on LessWrong. TL;DR: In a neural network with $d$ parameters, the (local) learning coefficient $\lambda$ can be upper and lower bounded by the rank of the network's Hessian $d_1$: $\frac{d_1}{2} \le \lambda \le \frac{d_1}{2} + \frac{d - d_1}{3}$. The lower bound is a known result. The upper bound is a claim by me, and this post contains the proof for it.[1] If you find any problems, do point them out. Introduction: The learning coefficient $\lambda$ is a measure of loss basin volume and network complexity. You can think of it sort of like an effective parameter count of the model. Simpler models that do less stuff have smaller $\lambda$. Calculating $\lambda$ for real networks people actually use is a pain. My hope is that these bounds help make estimating it a bit easier. In a network with $d$ parameters, the learning coefficient is always a number $0 \le \lambda \le \frac{d}{2}$. An existing result in the literature says that if you've calculated the rank of the network's Hessian $d_1$,[2] you get a tighter lower bound $\frac{d_1}{2} \le \lambda$. I claim that we can also get a tighter upper bound $\lambda \le \frac{d_1}{2} + \frac{d - d_1}{3}$, where $d - d_1$ will be the dimension of the Hessian kernel, meaning the number of zero eigenvalues it has.[3] This is neat because it means we can get some idea of how large $\lambda$ is just with linear algebra. All we need to know is how many zero eigenvalues the Hessian has.[4] Singular Learning Theory introductions often stress that just looking at the Hessian isn't enough to measure basin volume correctly. But here we see that if you do it right, the Hessian eigenspectrum can give you a pretty good idea of how large $\lambda$ is. Especially if there aren't that many zero eigenvalues. Intuitively, the lower bound works because a direction in the parameters $w$ that isn't free to vary to second order in the Taylor expansion won't become any more free to vary if you pile on a bunch of higher order terms. The second order term strictly dominates the higher order ones; they can't cancel it out. Qualitatively speaking, the upper bound works for the same reason. The higher order terms in the Taylor expansion of the loss can only matter so much. The Hessian is the leading term, so it can impact $\lambda$ the most, adding $\frac{1}{2}$ per Hessian rank to it. The remaining $O(w^3)$ terms can only add up to $\frac{1}{3}$ for the remaining directions. The proof for the upper bound will just be a small modification of the proof for theorem 7.2 on pages 220 and 221 of Algebraic Geometry and Statistical Learning Theory. Maybe read that first if you want more technical context. Some words on notation: In the following, I'll mostly stick to the notation and conventions of the book Algebraic Geometry and Statistical Learning Theory. You can read about all the definitions there. I'm too lazy to reproduce them all. To give some very rough context, $K(w)$ is sort of like the 'loss' at parameter configuration $w$, $\varphi(w)$ is our prior over parameters, and $Z(n)$ is the partition function after updating on $n$ data points.[5] Theorem: Let $W \subseteq \mathbb{R}^d$ be the set of parameters of the model. If there exists an open set $U \subset W$ such that $\{w \in U : K(w) = 0, \varphi(w) > 0\}$ is not an empty set, and we define $d_1 = \mathrm{rank}(H)$ as the rank of the Hessian $H$ at a $w_0 \in U$, $H_{i,j} = \left.\frac{\partial^2 K(w)}{\partial w_i \partial w_j}\right|_{w = w_0}$, with $w_i, w_j$ elements of some orthonormal basis $\{w_1, \dots, w_d\}$ of $\mathbb{R}^d$, then $\lambda \le \frac{d_1}{2} + \frac{d - d_1}{3}$. Proof: We can assume $w_0 = 0$ without loss of generality.
If $\epsilon_1, \epsilon_2$ are sufficiently small constants, $Z(n) = \int \exp(-nK(w))\,\varphi(w)\,dw \ge \int_{|w^{(1)}| \le \epsilon_1,\, |w^{(2)}| \le \epsilon_2} \exp(-nK(w))\,\varphi(w)\,dw$. Here, $w^{(1)} \in W/\ker(H)$, $w^{(2)} \in \ker(H)$. If we pick $\{w_1, \dots, w_d\}$ to be the Hessian eigenbasis, then for sufficiently small $|w| > 0$, $K(w) = \frac{1}{2}\sum_{i=1}^{d_1} H_{i,i}\, w^{(1)}_i w^{(1)}_i + O(|w|^3)$. Hence $Z(n) \ge \int_{|w^{(1)}| \le \epsilon_1,\, |w^{(2)}| \le \epsilon_2} \exp\{-\frac{n}{2}\sum_i^{d_1} H_{i,i}\, w^{(1)}_i w^{(1)}_i - n\,O(|w|^3)\}\,\varphi(w)\,dw$. Transforming $w'^{(1)} = n^{1/2} w^{(1)}$, $w'^{(2)} = n^{1/3} w^{(2)}$, we obtain $Z(n) \ge n^{-d_1/2}\, n^{-(d-d_1)/3} \int_{|w'^{(1)}| \le 1,\, |w'^{(2)}| \le 1} \exp\{-\frac{1}{2}\sum_i^{d_1} H_{i,i}\, w'^{(1)}_i w'^{(1)}_i + O(|w'|^3)\}\,\varphi(w'^{(1)} n^{-1/2}, w'^{(2)} n^{-1/3})\,dw'^{(1)}\,dw'^{(2)}$. Rearranging gives $Z(n) \ge n^{-(\frac{d_1}{2} + \frac{d-d_1}{3})} \int_{|w'| \le 1} \exp\{-\frac{1}{2}\sum_{i=1}^{d_1} H_{i,i}\, w'^{(1)}_i w'^{(1)}_i$ ...
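For readers who want to sanity-check the bound from the episode above, here is a minimal numerical sketch (not from the post; the toy Hessian, the function name, and the rank tolerance are illustrative): compute the Hessian rank d1 and read off d1/2 <= lambda <= d1/2 + (d - d1)/3.

import numpy as np

def learning_coefficient_bounds(hessian, tol=1e-8):
    # Bounds from the post: d1/2 <= lambda <= d1/2 + (d - d1)/3, with d1 = rank(H).
    d = hessian.shape[0]
    d1 = np.linalg.matrix_rank(hessian, tol=tol)
    return d1 / 2, d1 / 2 + (d - d1) / 3

# Toy example: two curved directions, one flat direction.
H = np.diag([2.0, 2.0, 0.0])
print(learning_coefficient_bounds(H))  # (1.0, ~1.33)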

The Nonlinear Library: LessWrong
LW - The Hessian rank bounds the learning coefficient by Lucius Bushnaq

The Nonlinear Library: LessWrong

Play Episode Listen Later Aug 9, 2024 7:43


Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Hessian rank bounds the learning coefficient, published by Lucius Bushnaq on August 9, 2024 on LessWrong. TL;DR: In a neural network with $d$ parameters, the (local) learning coefficient $\lambda$ can be upper and lower bounded by the rank of the network's Hessian $d_1$: $\frac{d_1}{2} \le \lambda \le \frac{d_1}{2} + \frac{d - d_1}{3}$. The lower bound is a known result. The upper bound is a claim by me, and this post contains the proof for it.[1] If you find any problems, do point them out. Introduction: The learning coefficient $\lambda$ is a measure of loss basin volume and network complexity. You can think of it sort of like an effective parameter count of the model. Simpler models that do less stuff have smaller $\lambda$. Calculating $\lambda$ for real networks people actually use is a pain. My hope is that these bounds help make estimating it a bit easier. In a network with $d$ parameters, the learning coefficient is always a number $0 \le \lambda \le \frac{d}{2}$. An existing result in the literature says that if you've calculated the rank of the network's Hessian $d_1$,[2] you get a tighter lower bound $\frac{d_1}{2} \le \lambda$. I claim that we can also get a tighter upper bound $\lambda \le \frac{d_1}{2} + \frac{d - d_1}{3}$, where $d - d_1$ will be the dimension of the Hessian kernel, meaning the number of zero eigenvalues it has.[3] This is neat because it means we can get some idea of how large $\lambda$ is just with linear algebra. All we need to know is how many zero eigenvalues the Hessian has.[4] Singular Learning Theory introductions often stress that just looking at the Hessian isn't enough to measure basin volume correctly. But here we see that if you do it right, the Hessian eigenspectrum can give you a pretty good idea of how large $\lambda$ is. Especially if there aren't that many zero eigenvalues. Intuitively, the lower bound works because a direction in the parameters $w$ that isn't free to vary to second order in the Taylor expansion won't become any more free to vary if you pile on a bunch of higher order terms. The second order term strictly dominates the higher order ones; they can't cancel it out. Qualitatively speaking, the upper bound works for the same reason. The higher order terms in the Taylor expansion of the loss can only matter so much. The Hessian is the leading term, so it can impact $\lambda$ the most, adding $\frac{1}{2}$ per Hessian rank to it. The remaining $O(w^3)$ terms can only add up to $\frac{1}{3}$ for the remaining directions. The proof for the upper bound will just be a small modification of the proof for theorem 7.2 on pages 220 and 221 of Algebraic Geometry and Statistical Learning Theory. Maybe read that first if you want more technical context. Some words on notation: In the following, I'll mostly stick to the notation and conventions of the book Algebraic Geometry and Statistical Learning Theory. You can read about all the definitions there. I'm too lazy to reproduce them all. To give some very rough context, $K(w)$ is sort of like the 'loss' at parameter configuration $w$, $\varphi(w)$ is our prior over parameters, and $Z(n)$ is the partition function after updating on $n$ data points.[5] Theorem: Let $W \subseteq \mathbb{R}^d$ be the set of parameters of the model. If there exists an open set $U \subset W$ such that $\{w \in U : K(w) = 0, \varphi(w) > 0\}$ is not an empty set, and we define $d_1 = \mathrm{rank}(H)$ as the rank of the Hessian $H$ at a $w_0 \in U$, $H_{i,j} = \left.\frac{\partial^2 K(w)}{\partial w_i \partial w_j}\right|_{w = w_0}$, with $w_i, w_j$ elements of some orthonormal basis $\{w_1, \dots, w_d\}$ of $\mathbb{R}^d$, then $\lambda \le \frac{d_1}{2} + \frac{d - d_1}{3}$. Proof: We can assume $w_0 = 0$ without loss of generality.
If $\epsilon_1, \epsilon_2$ are sufficiently small constants, $Z(n) = \int \exp(-nK(w))\,\varphi(w)\,dw \ge \int_{|w^{(1)}| \le \epsilon_1,\, |w^{(2)}| \le \epsilon_2} \exp(-nK(w))\,\varphi(w)\,dw$. Here, $w^{(1)} \in W/\ker(H)$, $w^{(2)} \in \ker(H)$. If we pick $\{w_1, \dots, w_d\}$ to be the Hessian eigenbasis, then for sufficiently small $|w| > 0$, $K(w) = \frac{1}{2}\sum_{i=1}^{d_1} H_{i,i}\, w^{(1)}_i w^{(1)}_i + O(|w|^3)$. Hence $Z(n) \ge \int_{|w^{(1)}| \le \epsilon_1,\, |w^{(2)}| \le \epsilon_2} \exp\{-\frac{n}{2}\sum_i^{d_1} H_{i,i}\, w^{(1)}_i w^{(1)}_i - n\,O(|w|^3)\}\,\varphi(w)\,dw$. Transforming $w'^{(1)} = n^{1/2} w^{(1)}$, $w'^{(2)} = n^{1/3} w^{(2)}$, we obtain $Z(n) \ge n^{-d_1/2}\, n^{-(d-d_1)/3} \int_{|w'^{(1)}| \le 1,\, |w'^{(2)}| \le 1} \exp\{-\frac{1}{2}\sum_i^{d_1} H_{i,i}\, w'^{(1)}_i w'^{(1)}_i + O(|w'|^3)\}\,\varphi(w'^{(1)} n^{-1/2}, w'^{(2)} n^{-1/3})\,dw'^{(1)}\,dw'^{(2)}$. Rearranging gives $Z(n) \ge n^{-(\frac{d_1}{2} + \frac{d-d_1}{3})} \int_{|w'| \le 1} \exp\{-\frac{1}{2}\sum_{i=1}^{d_1} H_{i,i}\, w'^{(1)}_i w'^{(1)}_i$ ...

KNON Radio
America Is Qualitatively Changed

KNON Radio

Play Episode Listen Later Jul 23, 2024 15:44


America Is Qualitatively Changed

Evolvepreneur® (After Hours)
EPS07:063 [Donna Groves] Wellbeing In The Workplace

Evolvepreneur® (After Hours)

Play Episode Listen Later Dec 29, 2023 25:53


Welcome to the Evolvepreneur (After Hours) Show. I am your Special Host, Richard Wray. Join me today where we dig deep with our guests and get you the best concepts and strategies to fast-track your business. My very special guest today is Donna Groves ... Donna Groves has worked for 30 years in male-dominated infrastructure and construction fields. She faced significant challenges as often the only woman on work sites. Donna realized her own perfectionism and relentless drive were negatively impacting her team's well-being. She implemented changes like flexibility, wellness programs, and celebrations to improve work-life balance. Donna now helps other construction companies adopt such well-being initiatives. Pilot programs showed reductions in sick leave and better health metrics. Qualitatively, workers reported being happier and more productive while working less. Donna's efforts have helped challenge the traditionally poor work culture and high stress levels in the construction industry. She is now passing on these lessons to help entire communities impacted by major projects.

random Wiki of the Day
Energy profile (chemistry)

random Wiki of the Day

Play Episode Listen Later Dec 22, 2023 1:50


rWotD Episode 2423: Energy profile (chemistry). Welcome to random Wiki of the Day, where we read the summary of a random Wikipedia page every day. The random article for Friday, 22 December 2023 is Energy profile (chemistry). In theoretical chemistry, an energy profile is a theoretical representation of a chemical reaction or process as a single energetic pathway as the reactants are transformed into products. This pathway runs along the reaction coordinate, which is a parametric curve that follows the pathway of the reaction and indicates its progress; thus, energy profiles are also called reaction coordinate diagrams. They are derived from the corresponding potential energy surface (PES), which is used in computational chemistry to model chemical reactions by relating the energy of a molecule(s) to its structure (within the Born–Oppenheimer approximation). Qualitatively, the reaction coordinate diagrams (one-dimensional energy surfaces) have numerous applications. Chemists use reaction coordinate diagrams as both an analytical and pedagogical aid for rationalizing and illustrating kinetic and thermodynamic events. The purpose of energy profiles and surfaces is to provide a qualitative representation of how potential energy varies with molecular motion for a given reaction or process. This recording reflects the Wikipedia text as of 01:10 UTC on Friday, 22 December 2023. For the full current version of the article, see Energy profile (chemistry) on Wikipedia. This podcast uses content from Wikipedia under the Creative Commons Attribution-ShareAlike License. Visit our archives at wikioftheday.com and subscribe to stay updated on new episodes. Follow us on Mastodon at @wikioftheday@masto.ai. Also check out Curmudgeon's Corner, a current events podcast. Until next time, I'm Ivy Neural.
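As a rough illustration of the reaction coordinate diagrams described above, here is a short, hedged sketch (the barrier height and reaction energy are invented numbers, not taken from any real reaction) that plots a one-dimensional energy profile with a single transition state:

import numpy as np
import matplotlib.pyplot as plt

# Schematic one-dimensional energy profile: reactant -> transition state -> product.
x = np.linspace(0.0, 1.0, 200)            # reaction coordinate (arbitrary units)
barrier, reaction_energy = 80.0, -30.0    # invented values, in kJ/mol
energy = barrier * np.exp(-((x - 0.5) ** 2) / 0.02) + reaction_energy * x

plt.plot(x, energy)
plt.xlabel("Reaction coordinate")
plt.ylabel("Potential energy (kJ/mol)")
plt.title("Schematic reaction coordinate diagram")
plt.show()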

Multimodal by Bakz T. Future
#48 - The Art of Prompt Engineering with Andrew Mayne (ex-OpenAI Creative Apps and Science Communicator)

Multimodal by Bakz T. Future

Play Episode Listen Later Dec 6, 2023 192:08


Discussing Prompt Engineering and recent OpenAI developments with ex-OpenAI Creative Apps and Science Communicator Andrew Mayne. Timestamps: 00:00:00 - Teaser Reel Intro 00:01:01 - Intro / Andrew's background 00:02:49 - What was it like working at OpenAI when you first joined? 00:12:59 - Was Andrew basically one of the earliest Prompt Engineers? 00:14:04 - How Andrew Hacked his way into a tech job at OpenAI 00:17:08 - Parallels between Hollywood and Tech jobs 00:20:58 - Parallels between the world of Magic and working at OpenAI 00:25:00 - What was OpenAI like in the Early Days? 00:30:24 - Why it was hard promoting GPT-3 early on 00:31:00 - How would you describe the current 'instruction age' of prompt design? 00:35:22 - What was GPT-4 like freshly trained? 00:39:00 - Is there anything different about the raw base model without RLHF? 00:42:00 - Optimizations that go into Language models like GPT-4 00:43:30 - What was it like using DALL-E 3 very early on? 00:44:38 - Do you know who came up with the 'armchair in the shape of an avocado' prompt at OpenAI? 00:45:48 - Did you experience 'DALL-E Dreams' as a part of the DALL-E 2 beta? 00:47:16 - How else has prompt design changed? 00:49:27 - How has prompt design changed because of ChatGPT? 00:52:40 - How to get ChatGPT to mimic and emulate personalities better? 00:54:30 - Mimicking Personalities II (How to do Style with ChatGPT) 00:56:40 - Fine Tuning ChatGPT for Mimicking Elon Musk 00:59:44 - How do you get ChatGPT to come up with novel and brilliant ideas? 01:02:40 - How do you get ChatGPT to get away from conventional answers? 01:05:14 - Will we ever get single-shot, real true novelty from LLMs? 01:10:05 - Prompting for ChatGPT Voice Mode 01:12:20 - Possibilities and Prompting for GPT-4 Vision 01:15:45 - GPT-4 Vision Use Cases/Startup Ideas 01:21:37 - Does multimodality make language models better or are the benefits marginal? 01:24:00 - Intuitively, has multimodality improved the world model of LLMs like GPT-4? 01:25:33 - What would it take for ChatGPT to write half of your next book? 01:29:10 - Qualitatively, what would it take to convince you about a book written by AI? What are the characteristics? 01:31:30 - Could an LLM mimic Andrew Mayne's writing style? 01:37:49 - Jailbreaking ChatGPT 01:41:12 - What's the next era of prompt engineering? 01:45:50 - How have custom instructions changed the game? 01:54:41 - How far do you think we are from asking a model how to make 10 million dollars and getting back a legit answer? 02:01:07 - Part II - Making Money with LLMs 02:11:32 - How do you make a chat bot more reliable and safe? 02:12:12 - How do you get ChatGPT to consistently remember criteria and work within constraints? 02:12:45 - What about DALL-E? How do you get it to better create within constraints? 02:14:14 - What's your prompt practice like? 02:15:10 - Do you intentionally sit down and practice writing prompts? 02:16:45 - How do you build an intuition around prompt design for an LLM? 02:20:00 - How do you like to iterate on prompts? Do you have a process? 02:21:45 - How do you know when you've hit the ceiling with a prompt? 02:24:00 - How do you know a single line prompt has room to improve? 02:26:40 - Do you actually need to know OpenAI's training data? What are some ways to mitigate this? 02:30:40 - What are your thoughts on automated prompt writing/optimization? 02:33:20 - How do you get a job as a prompt engineer? What makes a top tier prompt engineer different from an everyday user?
02:37:20 - How do you think about scaling laws as a prompt engineer? 02:39:00 - Effortless Prompt Design 02:40:52 - What are some research areas that would get you a job at OpenAI? 02:43:30 - The Research Possibilities of Optimization & Inference 02:45:59 - If you had to guess future capabilities of GPT-5, what would they be? 02:50:16 - What are some capabilities that got trained out of GPT-4 for ChatGPT? 02:51:10 - Is there any specific capability you could imagine for GPT-5? Why is it so hard to predict them? 02:56:06 - Why is it hard to predict future LLM capabilities? (Part II) 02:59:47 - What made you want to leave OpenAI and start your own consulting practice? 03:05:29 - Any remaining advice for creatives, entrepreneurs, prompt engineers? 03:09:25 - Closing. Subscribe to the Multimodal By Bakz T. Future Podcast! Spotify - https://open.spotify.com/show/7qrWSE7ZxFXYe8uoH8NIFV Apple Podcasts - https://podcasts.apple.com/us/podcast/multimodal-by-bakz-t-future/id1564576820 Google Podcasts - https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2Jha3p0ZnV0dXJlL2ZlZWQueG1s Stitcher - https://www.stitcher.com/show/multimodal-by-bakz-t-future Other Podcast Apps (RSS Link) - https://feed.podbean.com/bakztfuture/feed.xml Connect with me: YouTube - https://www.youtube.com/bakztfuture Substack Newsletter - https://bakztfuture.substack.com Twitter - https://www.twitter.com/bakztfuture Instagram - https://www.instagram.com/bakztfuture Github - https://www.github.com/bakztfuture

Proactive - Interviews for investors
Exclusive Interview with Saturna Capital CEO Scott Klimo: Insights into Islamic Global Equity ETF

Proactive - Interviews for investors

Play Episode Listen Later Nov 3, 2023 5:50


Saturna Capital Chief Investment Officer Scott Klimo joined Steve Darling from Proactive to discuss the Islamic global equity ETF and its unique investment strategy. Saturna Capital, founded in 1989, became a pioneer in managing Islamic compliant assets in the United States. They recently expanded their offerings to include an ETF in Europe. Islamic investing involves both qualitative and quantitative criteria. Qualitatively, certain prohibited business activities, such as alcohol, pork, tobacco, and exploitative media, are avoided. The primary restriction is on interest or usury, as mentioned in the Quran. Consequently, they do not invest in conventional financial activities like banks or insurance companies. Quantitatively, they prohibit excessive debt, defined as debt exceeding 33% of total market capitalization. This results in a portfolio with companies characterized by strong cash generation and robust balance sheets. The portfolio mainly includes technology, healthcare, industrials, and consumer staples/discretionary sectors. These sectors align with their criteria, offering strong balance sheets and stable cash flows. Saturna Capital's strategy traditionally provides attractive downside protection. Their low debt levels insulate them from rising interest rates, making their strategy favorable in the current environment. Saturna Capital's longstanding success stems from their commitment to Islamic-compliant investing and their ability to adapt to changing market conditions. #invest #investing #investment #investor #stockmarket #stocks #stock #stockmarketnews #SaturnaCapital #ScottKlimo #IslamicInvesting #GlobalEquityETF #InvestmentStrategy #FinancialMarkets #IslamicFinance #AssetManagement #EthicalInvesting #MarketInsights #IslamicCompliantAssets #ETFInvesting #InvestmentPortfolio #EconomicOutlook #StockMarket #TechnologyStocks #HealthcareInvesting #IndustrialsSector #ConsumerStaples #ConsumerDiscretionary #InterestRates #MarketConditions #EconomicTrends #WealthManagement #AssetAllocation #InvestorInsights
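To make the screening rules described in the interview concrete, here is a small illustrative sketch (the field names, sector labels, and helper function are hypothetical, not Saturna Capital's actual methodology): a qualitative exclusion of prohibited business activities plus the quantitative debt-to-market-cap test.

# Qualitative screen: exclude prohibited business activities. Quantitative screen:
# total debt must not exceed 33% of market capitalization. All names are hypothetical.
EXCLUDED_SECTORS = {"alcohol", "pork", "tobacco", "gambling", "conventional_finance"}

def passes_screen(company: dict) -> bool:
    if company["sector"] in EXCLUDED_SECTORS:
        return False
    return company["total_debt"] / company["market_cap"] <= 0.33

print(passes_screen({"sector": "technology", "total_debt": 5e9, "market_cap": 50e9}))  # True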

The Association 100 Podcast
Investing in Diversity

The Association 100 Podcast

Play Episode Listen Later Oct 6, 2023 17:28


Join us for the latest episode of The A100 as Jeanne Wolf, CEO at CFA Society Boston, delves into the world of investments and their ongoing efforts to foster inclusivity within the industry. Podcast co-hosts Meghan Henning and Keaveny Hewitt chat with Jeanne about key strategies to foster inclusivity, measuring the success of these campaigns, and collaborating with stakeholders to amplify DEI initiatives. CFA Society Boston is a community of over 6,000 investment professionals, part of the broader global network of CFA Institute. Their members are primarily investment and asset managers, ranging from those overseeing 401K portfolios to institutional investments and university endowments. The discussion centers around CFA Society Boston's efforts to promote inclusivity in a predominantly male-dominated industry.  Jeanne emphasizes that while the field doesn't fully represent the diversity of our communities, change is underway. They draw inspiration from industries like law, accounting and medicine that have successfully diversified their ranks and associated membership organizations. In terms of assessing the impact and measuring the success of their efforts, the organization uses both qualitative and quantitative metrics. Qualitatively, they assess whether people feel included, and they gauge the representation of diverse leadership at events, board meetings, and the committee level.  She also touched on the undergraduate Women in Investment Management Internship, a partnership involving the society, its members, and local universities. It has grown substantially over the years, expanding beyond just women to include more diverse participants from underrepresented groups. The program provides a supportive environment for aspiring investment professionals and connects them with peers who share their journey.  Jeanne discusses how their organization gives back by providing financial literacy to underserved communities. One such example is partnerships with Massachusetts correctional institutions to bring financial education to prisoners.  Stream this episode to learn more about CFA Society Boston's journey towards inclusivity, the impact of their initiatives, and how they're shaping a more diverse and equitable investment profession through partnerships.  Subscribe today so you never miss out on future episodes.  Follow along for best practices, top trends, helpful ideas and smart strategies and tactics that work in the world of associations.  LinkedIn: /company/the-association-100

The Nonlinear Library
EA - Policy ideas for mitigating AI risk by Thomas Larsen

The Nonlinear Library

Play Episode Listen Later Sep 16, 2023 16:35


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Policy ideas for mitigating AI risk, published by Thomas Larsen on September 16, 2023 on The Effective Altruism Forum. Note: This post contains personal opinions that don't necessarily match the views of others at CAIP. Executive Summary: Advanced AI has the potential to cause an existential catastrophe. In this essay, I outline some policy ideas which could help mitigate this risk. Importantly, even though I focus on catastrophic risk here, there are many other reasons to ensure responsible AI development. I am not advocating for a pause right now. If we had a pause, I think it would only be useful insofar as we use the pause to implement governance structures that mitigate risk after the pause has ended. This essay outlines the important elements I think a good governance structure would include: visibility into AI development, and brakes that the government could use to stop dangerous AIs from being built. First, I'll summarize some claims about the strategic landscape. Then, I'll present a cursory overview of proposals I like for US domestic AI regulation. Finally, I'll talk about a potential future global coordination framework, and the relationship between this and a pause. The Strategic Landscape. Claim 1: There's a significant chance that AI alignment is difficult. There is no scientific consensus on the difficulty of AI alignment. Chris Olah from Anthropic tweeted the following simplified picture: ~40% of their estimate is on AI safety being harder than Apollo, which took around 1 million person-years. Given that less than a thousand people are working on AI safety, this viewpoint would seem to imply that there's a significant chance that we are far from being ready to build powerful AI safely. Given just Anthropic's alleged views, I think it makes sense to be ready to stop AI development. My personal views are more pessimistic than Anthropic's. Claim 2: In the absence of powerful aligned AIs, we need to prevent catastrophe-capable AI systems from being built. Given developers are not on track to align AI before it becomes catastrophically dangerous, we need the ability to slow down or stop before AI is catastrophically dangerous. There are several ways to do this. I think the best one involves building up the government's capacity to safeguard AI development. Set up government mechanisms to monitor and mitigate catastrophic AI risk, and empower them to institute a national moratorium on advancing AI if it gets too dangerous. (Eventually, the government could transition this into an international moratorium, while coordinating internationally to solve AI safety before that moratorium becomes infeasible to maintain. I describe this later.) Some others think it's better to try to build aligned AIs that defend against AI catastrophes. For example, you can imagine building defensive AIs that identify and stop emerging rogue AIs. To me, the main problem with this plan is that it assumes we will have the ability to align the defensive AI systems. Claim 3: There's a significant (>20%) chance AI will be capable enough to cause catastrophe by 2030. AI timelines have been discussed thoroughly elsewhere, so I'll only briefly note a few pieces of evidence for this claim I find compelling: Current trends in AI. Qualitatively, I think another jump of the size from GPT-2 to GPT-4 could get us to catastrophe-capable AI systems.
Effective compute arguments, such as Ajeya Cotra's Bioanchors report. Hardware scaling, continued algorithmic improvement, and investment hype are all continuing strongly, leading to a 10x/year increase in effective compute used to train the best AI system. Given the current rates of progress, I expect another factor of a million increase in effective compute by 2030. Some experts think powerful AI is coming soon, both inside and outside of frontier labs. ...

A Sound Heart
Jesus Gifts a Qualitatively New Life!

A Sound Heart

Play Episode Listen Later Jul 30, 2023 31:00


In contradistinction to all others, only Jesus gives new life. The old things pass away and all things become fresh to remain so!

Thoughts on the Market
Mid-Year Economic Outlook: A Dichotomy Worth Watching

Thoughts on the Market

Play Episode Listen Later Jun 8, 2023 10:53


As we look toward the second half of 2023, the U.S. and Europe are likely to see very slow growth but avoid a recession, while Asia may be poised to become an engine of economic growth. ----- Transcript ----- Andrew Sheets: Welcome to Thoughts on the Market. I'm Andrew Sheets, Morgan Stanley's Chief Global Cross-Asset Strategist. Seth Carpenter: And I'm Seth Carpenter, Morgan Stanley's Global Chief Economist. Andrew Sheets: And on this special two-part episode of the podcast, we'll be discussing Morgan Stanley's global mid-year outlook. Today we'll focus on economics, and tomorrow we'll turn our attention to strategy. It's Thursday, June 8th at 3 p.m. in London. Seth Carpenter: And it's 10 a.m. in New York. Andrew Sheets: Seth, it's great to sit down with you. We've been talking over the last several weeks as Morgan Stanley's gone through this outlook process. And this is a big joint collaborative forecasting process across Morgan Stanley research, where the economists and the strategists get together and think about what the next 12 to 18 months might look like. And, you know, we're sitting down at this really fascinating time for markets. The U.S. labor market is at some of its strongest levels since the late 1960s. Core inflation is at levels that we really haven't seen since the 1980s. The Federal Reserve and the European Central Bank have been raising rates at a pace that hasn't really been seen in 30 or 40 years. So, as you step back from all of these quite unusual occurrences, Seth, how do you frame where the global economy is at the moment and where is it headed? Seth Carpenter: I'd say there's one major dichotomy that I'll first start with in the global economy. On the one hand, Asia as a region really poised to have the strongest economic growth. And in very sharp contrast, when I think about the rest of the world, the United States and the Euro area, we see those as being actually quite weak. Second, China, you can't get out of a discussion of the global economy without talking about China. And there, the first quarter saw massive growth in China as all of the restrictions from COVID were removed, and as the government shifted the rest of its policies towards being supportive of growth. Now, there's been a little bit of a stumble in the second quarter, but we think that's temporary. And so you'll see a cyclical boost to Asia, coming out of China. Layer on top of this our structurally bullish views on economies like India and Indonesia, where there's a medium term, really positive note, you have all of these coming together, and it sets the stage for Asia really to be an engine of economic growth. The sharp contrast, the United States, the euro area. The inflation that you referenced has led central banks to raise interest rates for one reason and one reason alone. They want to slow those economies down, so the inflationary impulses start to fade away. Andrew Sheets: So, Seth, that's great context, and I'd like to drill down a little bit more detail on two economies in particular, the United States and China. For the United States, this idea of a soft landing, I think investors will point to the fact that given how strong the labor market is, given how high inflation is, given how inverted the yield curve is, given how much banks are tightening lending conditions, all those factors make it less likely historically that a recession is avoided. So, why do you think a soft landing is the most likely option here? Why do you think that that's our central scenario?
Seth Carpenter: Yeah, I completely agree with you, Andrew. The discussion, the debate, the push back, the soft landing part of our thesis is definitely central to all of that discussion. Maybe I'll just start a little bit with the definition because I think the phrase soft landing can mean different things to different people. What I don't mean is that we just have great economic growth and inflation comes down on its own. Quite to the contrary, we are looking for economic growth in the United States to slow so much that it basically comes to a standstill. This year and next year are both likely to be years where economic growth is substantially below the long run productive capacity of the economy. Why? Because the Fed is raising interest rates, making the cost of borrowing, making the cost of extending credit higher, so that there is less spending in the economy so that those inflationary impulses go away. So that's what we're thinking is going to happen, is that we'll have really, really weak growth. But your question also gets into is if you're going to have that much slowing in the economy, why not a recession? And here, it's always fraught to say this time is different. But I think you highlighted what is really different about this cycle. It's the first time the Fed is pulling inflation down, instead of trying to limit its rise, in 40 years. But in addition to that, we're coming out of COVID. And I don't think anyone would argue that COVID is a normal part of an economic business cycle in the United States. Andrew Sheets: So we've just covered some of the reasons why we are more optimistic than those who expect a recession in the U.S. over the next 12 months. There are investors who say we're too pessimistic, and yet the economy in the first half of this year, the U.S. economy has been surprisingly solid and chugged along. So, what do you think is behind that? And why is it wrong to say that the last six months kind of disprove the idea that you need material slowing ahead? Seth Carpenter: Let's examine the facts. Housing activity actually did fall pretty substantially. If we compare where non-farm payrolls are and if you do any sort of averaging. Over months. Where we are now is actually much less hiring than what we saw six months ago, nine months ago, a year ago, the payrolls report for the month of May notwithstanding. We are seeing some slowing down there. And remember, I just said one of the reasons why we think we're going to get a soft landing is that the economy is still shorthanded. Some of the strength that we're seeing in hiring is making up for the fact that businesses were so cautious to hire in the past. I think the last thing to keep in mind is if we are wrong, if this slowing isn't in train, then the Federal Reserve is just going to have to raise interest rates even more because inflation, although it's coming down, there is a residual amount of inflation that really does need to be, in the Fed's mind, at least squeezed out of the economy by having subpar growth. Andrew Sheets: I'd like to turn now to the world's second largest economy, China, where there's also a great level of skepticism towards the economy generally, but also our view that the economy will recover in the second half of the year. If you look at commodity prices, Chinese equity prices, China's currency, there's been a lot of weakness across the board. So, what do you think has been going on? 
Why do you think the data has softened more recently and why is that not the right thing to extrapolate going forward for China growth? Seth Carpenter: Absolutely. All the asset prices that you point to, all of the market trades that people were looking to for a strong China recovery. Boy, they were a little bit disappointing. But the reason I think they were disappointing in general is because it was a different kind of expansion, so much domestic spending, so much on services. People were very much accustomed to looking at a Chinese surge coming from investment spending, infrastructure spending, housing spending, and most of the spending was elsewhere. So I think that's the first part of the puzzle. The second part of the puzzle, though, is Q2 legitimately has had a notable slowdown. Does that mean the whole China reopening story is derailed? I don't think so, and I don't think so for a few reasons. One, we are still seeing the spending on consumer services. So that's important. Second, we think what the government is planning on doing is topping up growth to make sure that the unemployment rate, especially among young people, continues to come down. And so it'll set us up for a strong second half of the year. Andrew Sheets: I'd like to ask you next about inflation. You know, I think something that's so fascinating about this year is if you were sitting there in early January, there was a real temptation, I think, by the market to think, 2023 was supposed to be the year where inflation is coming down. Yet inflation has been kind of surprisingly high this year. So if you think about our inflation forecasts, which do have inflation moderating throughout this year and into next year, what do you think is the more dominant part of that story that investors should be mindful of? Is it that inflation's falling? Is it that core inflation is still uncomfortably high? Is it a bit of both? Seth Carpenter: How about if I say absolutely all of the above? The inflation forecasting since COVID has been one of the most challenging parts of this job, I have to admit. So what is going on? Headline measures of inflation. So including food and energy prices that people like to strip out because it can be volatile, those are unquestionably off their peak and have come down a lot, not surprisingly, because oil prices, natural gas prices had spiked so much and those have backed off. But even looking at the core measures, as you say, we are seeing that core inflation has peaked in the U.S. and the euro area, sort of the major developed market economies where, you know, markets are focused and we are seeing things come down. And in particular, if you look in the United States, inflation on consumer goods, if you average over the past six months or so, has been about zero or negative. So went from very high inflation down to zero and for a few of those months, outright negative inflation. So I think it's impossible to say that we haven't seen a shift in terms of inflation. Andrew Sheets: And for monetary policy, what do you think that means? If we think about the big central banks, the Fed, the European Central Bank, the Bank of Japan, what do you think this inflation backdrop means for monetary policy, looking forward? Seth Carpenter: So for the Federal Reserve in the U.S. and the European Central Bank in the euro area, very, very similar. Different a little bit in terms of the specific numbers, the specific timing.
But the strategy is the same, which is to raise policy rates to the point where they feel confident that they're exerting restraint on the economy and allow inflation to come down over the course of another year or two years. In the United States, for example, you know, our baseline view is that the Fed did its last rate hike at the May meeting. The market is debating with itself as to whether or not the Fed is done. But, you know, the idea is make sure rates are in a way restrictive and then stay there for as long as needed to ensure that you get that downward trajectory in inflation and then only very gradually start to lower the policy rate as inflation comes down and looks like it's very clearly going back to target. In the euro area, same answer. Qualitatively, we're not convinced they're quite done raising rates. We think they probably have two more policy meetings where they raise their policy rate 25 basis points at each meeting. But then staying at that peak rate for an extended period of time and then gradually letting the policy rate come back down as the economy slows. Now, you mentioned Japan. And Japan, in our view, is really a bit different. When we think about the underlying, the trend inflation. We think that is about to peak now and come back down and in fact get below their 2% inflation target. Andrew Sheets: Very interesting. Seth, thanks for taking the time to talk. Seth Carpenter: Andrew, it is always a pleasure for me to get to talk to you. Andrew Sheets: And thanks for listening. Be sure to tune in for part two of this episode where Seth and I will discuss Morgan Stanley's mid-year strategy Outlook. If you enjoy Thoughts on the Market, please leave us a review on Apple Podcasts, and share the podcast with a friend or colleague today.

The Nonlinear Library
LW - Algorithmic Improvement Is Probably Faster Than Scaling Now by johnswentworth

The Nonlinear Library

Play Episode Listen Later Jun 6, 2023 3:01


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Algorithmic Improvement Is Probably Faster Than Scaling Now, published by johnswentworth on June 6, 2023 on LessWrong. The Story as of ~4 Years Ago Back in 2020, a group at OpenAI ran a conceptually simple test to quantify how much AI progress was attributable to algorithmic improvements. They took ImageNet models which were state-of-the-art at various times between 2012 and 2020, and checked how much compute was needed to train each to the level of AlexNet (the state-of-the-art from 2012). Main finding: over ~7 years, the compute required fell by ~44x. In other words, algorithmic progress yielded a compute-equivalent doubling time of ~16 months (though error bars are large in both directions). On the compute side of things, in 2018 a group at OpenAI estimated that the compute spent on the largest training runs was growing exponentially with a doubling rate of ~3.4 months, between 2012 and 2018. So at the time, the rate of improvement from compute scaling was much faster than the rate of improvement from algorithmic progress. (Though algorithmic improvement was still faster than Moore's Law; the compute increases were mostly driven by spending more money.) ... And That Immediately Fell Apart As is tradition, about 5 minutes after the OpenAI group hit publish on their post estimating a training compute doubling rate of ~3-4 months, that trend completely fell apart. At the time, the largest training run was AlphaGoZero, at about a mole of flops in 2017. Six years later, Metaculus currently estimates that GPT-4 took ~10-20 moles of flops. AlphaGoZero and its brethren were high outliers for the time, and the largest models today are only ~one order of magnitude bigger. A more recent paper with data through late 2022 separates out the trend of the largest models, and estimates their compute doubling time to be ~10 months. (They also helpfully separate the relative importance of data growth - though they estimate that the contribution of data was relatively small compared to compute growth and algorithmic improvement.) On the algorithmic side of things, a more recent estimate with more recent data (paper from late 2022) and fancier analysis estimates that algorithmic progress yielded a compute-equivalent doubling time of ~9 months (again with large error bars in both directions). ... and that was just in vision nets. I haven't seen careful analysis of LLMs (probably because they're newer, so harder to fit a trend), but eyeballing it... Chinchilla by itself must have been a factor-of-4 compute-equivalent improvement at least. And then there's been chain-of-thought and all the other progress in prompting and fine-tuning over the past couple years. That's all algorithmic progress, strategically speaking: it's getting better results by using the same amount of compute differently. Qualitatively: compare the progress in prompt engineering and Chinchilla and whatnot over the past ~year to the leisurely ~2x increase in large model size predicted by recent trends. It looks to me like algorithmic progress is now considerably faster than scaling. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
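As a quick back-of-the-envelope check of the figures quoted in this post (the function name is mine; the inputs are the post's rounded numbers), the ~44x improvement over ~7 years does translate to roughly the ~16-month compute-equivalent doubling time cited:

import math

def doubling_time_months(improvement_factor, years):
    # Compute-equivalent doubling time implied by a given improvement over a period.
    return years * 12 / math.log2(improvement_factor)

print(round(doubling_time_months(44, 7), 1))  # ~15.4 months, i.e. roughly the ~16 months cited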

The Nonlinear Library
AF - Algorithmic Improvement Is Probably Faster Than Scaling Now by johnswentworth

The Nonlinear Library

Play Episode Listen Later Jun 6, 2023 3:01


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Algorithmic Improvement Is Probably Faster Than Scaling Now, published by johnswentworth on June 6, 2023 on The AI Alignment Forum. The Story as of ~4 Years Ago Back in 2020, a group at OpenAI ran a conceptually simple test to quantify how much AI progress was attributable to algorithmic improvements. They took ImageNet models which were state-of-the-art at various times between 2012 and 2020, and checked how much compute was needed to train each to the level of AlexNet (the state-of-the-art from 2012). Main finding: over ~7 years, the compute required fell by ~44x. In other words, algorithmic progress yielded a compute-equivalent doubling time of ~16 months (though error bars are large in both directions). On the compute side of things, in 2018 a group at OpenAI estimated that the compute spent on the largest training runs was growing exponentially with a doubling rate of ~3.4 months, between 2012 and 2018. So at the time, the rate of improvement from compute scaling was much faster than the rate of improvement from algorithmic progress. (Though algorithmic improvement was still faster than Moore's Law; the compute increases were mostly driven by spending more money.) ... And That Immediately Fell Apart As is tradition, about 5 minutes after the OpenAI group hit publish on their post estimating a training compute doubling rate of ~3-4 months, that trend completely fell apart. At the time, the largest training run was AlphaGoZero, at about a mole of flops in 2017. Six years later, Metaculus currently estimates that GPT-4 took ~10-20 moles of flops. AlphaGoZero and its brethren were high outliers for the time, and the largest models today are only ~one order of magnitude bigger. A more recent paper with data through late 2022 separates out the trend of the largest models, and estimates their compute doubling time to be ~10 months. (They also helpfully separate the relative importance of data growth - though they estimate that the contribution of data was relatively small compared to compute growth and algorithmic improvement.) On the algorithmic side of things, a more recent estimate with more recent data (paper from late 2022) and fancier analysis estimates that algorithmic progress yielded a compute-equivalent doubling time of ~9 months (again with large error bars in both directions). ... and that was just in vision nets. I haven't seen careful analysis of LLMs (probably because they're newer, so harder to fit a trend), but eyeballing it... Chinchilla by itself must have been a factor-of-4 compute-equivalent improvement at least. And then there's been chain-of-thought and all the other progress in prompting and fine-tuning over the past couple years. That's all algorithmic progress, strategically speaking: it's getting better results by using the same amount of compute differently. Qualitatively: compare the progress in prompt engineering and Chinchilla and whatnot over the past ~year to the leisurely ~2x increase in large model size predicted by recent trends. It looks to me like algorithmic progress is now considerably faster than scaling. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library: LessWrong
LW - Algorithmic Improvement Is Probably Faster Than Scaling Now by johnswentworth

The Nonlinear Library: LessWrong

Play Episode Listen Later Jun 6, 2023 3:01


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Algorithmic Improvement Is Probably Faster Than Scaling Now, published by johnswentworth on June 6, 2023 on LessWrong. The Story as of ~4 Years Ago Back in 2020, a group at OpenAI ran a conceptually simple test to quantify how much AI progress was attributable to algorithmic improvements. They took ImageNet models which were state-of-the-art at various times between 2012 and 2020, and checked how much compute was needed to train each to the level of AlexNet (the state-of-the-art from 2012). Main finding: over ~7 years, the compute required fell by ~44x. In other words, algorithmic progress yielded a compute-equivalent doubling time of ~16 months (though error bars are large in both directions). On the compute side of things, in 2018 a group at OpenAI estimated that the compute spent on the largest training runs was growing exponentially with a doubling rate of ~3.4 months, between 2012 and 2018. So at the time, the rate of improvement from compute scaling was much faster than the rate of improvement from algorithmic progress. (Though algorithmic improvement was still faster than Moore's Law; the compute increases were mostly driven by spending more money.) ... And That Immediately Fell Apart As is tradition, about 5 minutes after the OpenAI group hit publish on their post estimating a training compute doubling rate of ~3-4 months, that trend completely fell apart. At the time, the largest training run was AlphaGoZero, at about a mole of flops in 2017. Six years later, Metaculus currently estimates that GPT-4 took ~10-20 moles of flops. AlphaGoZero and its brethren were high outliers for the time, and the largest models today are only ~one order of magnitude bigger. A more recent paper with data through late 2022 separates out the trend of the largest models, and estimates their compute doubling time to be ~10 months. (They also helpfully separate the relative importance of data growth - though they estimate that the contribution of data was relatively small compared to compute growth and algorithmic improvement.) On the algorithmic side of things, a more recent estimate with more recent data (paper from late 2022) and fancier analysis estimates that algorithmic progress yielded a compute-equivalent doubling time of ~9 months (again with large error bars in both directions). ... and that was just in vision nets. I haven't seen careful analysis of LLMs (probably because they're newer, so harder to fit a trend), but eyeballing it... Chinchilla by itself must have been a factor-of-4 compute-equivalent improvement at least. And then there's been chain-of-thought and all the other progress in prompting and fine-tuning over the past couple years. That's all algorithmic progress, strategically speaking: it's getting better results by using the same amount of compute differently. Qualitatively: compare the progress in prompt engineering and Chinchilla and whatnot over the past ~year to the leisurely ~2x increase in large model size predicted by recent trends. It looks to me like algorithmic progress is now considerably faster than scaling. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
EA - How much do markets value Open AI? by Ben West

The Nonlinear Library

Play Episode Listen Later May 15, 2023 7:48


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How much do markets value Open AI?, published by Ben West on May 14, 2023 on The Effective Altruism Forum. Summary: A BOTEC indicates that Open AI might have been valued at 220-430x their annual recurring revenue, which is high but not unheard of. Various factors make this multiple hard to interpret, but it generally does not seem consistent with investors believing that Open AI will capture revenue consistent with creating transformative AI. Overview Epistemic status: revenue multiples are intended as a rough estimate of how much investors believe a company is going to grow, and I would be surprised if my estimated revenue multiple was off by more than a factor of 5. But the "strategic considerations" portion of this is a bunch of wild guesses that I feel much less confident about. There has been some discussion about how much markets are expecting transformative AI, e.g. here. One obvious question is "why isn't Open AI valued at a kajillion dollars?" I estimate that Microsoft's investment implicitly valued OAI at 220-430x their annual recurring revenue. This is high - average multiples are around 7x, but some pharmaceutical companies have multiples > 1000x. This would seem to support the argument that investors think that OAI is exceptional (but not "equivalent to the Industrial Revolution" exceptional). However, Microsoft received a set of benefits from the deal which make the EV multiple overstated. Based on adjustments, I can see the actual implied multiple being anything from -2,200x to 3,200x. (Negative multiples imply that Microsoft got more value from access to OAI models than the amount they invested and are therefore willing to treat their investment as a liability rather than an asset.) One particularly confusing fact is that OAI's valuation appears to have gone from $14 billion in 2021 to $19 billion in 2023. Even ignoring anything about transformative AI, I would have expected that the success of ChatGPT etc. should have resulted in a more than a 35% increase. Qualitatively, my guess is that this was a nice but not exceptional deal for OAI, and I feel confused why they took it. One possible explanation is “the kind of people who can deploy $10B of capital are institutionally incapable of investing at > 200x revenue multiples”, which doesn't seem crazy to me. Another explanation is that this is basically guaranteeing them a massive customer (Microsoft), and they are willing to give up some stock to get that customer. Squiggle model here It would be cool if someone did a similar write up about Anthropic, although publicly available information on them is slim. My guess is that they will have an even higher revenue multiple (maybe infinite? I'm not sure if they had revenue when they first raised). Details Valuation: $19B A bunch of news sites (e.g. here) reported that Microsoft invested $10 billion to value OAI at $29 billion. I assume that this valuation is post money, meaning the pre-money valuation is 19 billion. Although this site says that they were valued at $14 billion in 2021, meaning that they only increased in value 35% the past two years. This seems weird, but I guess it is consistent with the view that markets aren't valuing the possibility of TAI. Revenue: $54M/year Reuters claims they are projecting $200M revenue in 2023. FastCompany says they made $30 million in 2022. 
If the deal closed in early 2023, then presumably annual projections of their monthly revenue were higher than $30 million, though it's unclear how much. Let's arbitrarily say MRR will increase 10x this year, implying a monthly growth rate of 10^(1/12) = 1.22 Solving the geometric series of 200 = x (1-1.22^12) / (1 -1.22) we get that their first month revenue is $4.46M, a run rate of $53.52M/year Other factors: The vast majority of the investment is going to be spent on Micros...
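For readers following the arithmetic, here is a short sketch reproducing the BOTEC above (assuming, as the post does, a rounded 22% month-over-month growth rate, the $200M revenue projection, and a $19B pre-money valuation):

growth = 1.22                                            # ~10x over the year: 10 ** (1/12), rounded as in the post
first_month = 200 * (1 - growth) / (1 - growth ** 12)    # $M, from the geometric series 200 = x * (1 - g^12) / (1 - g)
run_rate = 12 * first_month                              # annualized run rate, $M/year
multiple = 19_000 / run_rate                             # $19B pre-money valuation over annual recurring revenue
print(round(first_month, 2), round(run_rate, 1), round(multiple))  # ~4.46, ~53.5, ~355 (inside the 220-430x range)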

PaperPlayer biorxiv neuroscience
Qualitatively different delay-dependent working memory distortions in people with schizophrenia and healthy control subjects

PaperPlayer biorxiv neuroscience

Play Episode Listen Later Apr 6, 2023


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2023.04.04.535597v1?rss=1 Authors: Bansal, S., Bae, G.-Y., Robinson, B. M., Dutterer, J., Hahn, B., Luck, S. J., Gold, J. M. Abstract: Impairments in working memory (WM) have been well-documented in people with schizophrenia (PSZ). However, these quantitative WM impairments can often be explained by nonspecific factors, such as impaired goal maintenance. Here, we used a spatial orientation delayed-response task to explore a qualitative difference in WM dynamics between PSZ and healthy control subjects (HCS). Specifically, we took advantage of the discovery that WM representations may drift either toward or away from previous-trial targets (serial dependence). We tested the hypothesis that WM representations drift toward the previous-trial target in HCS but away from the previous-trial target in PSZ. We assessed serial dependence in PSZ (N=31) and HCS (N=25), using orientation as the to-be-remembered feature and memory delays from 0 to 8 s. Participants were asked to remember the orientation of a teardrop-shaped object and reproduce the orientation after a varying delay period. Consistent with prior studies, we found that current-trial memory representations were less precise in PSZ than in HCS. We also found that WM for the current-trial orientation drifted toward the previous-trial orientation in HCS (representational attraction) but drifted away from the previous-trial orientation in PSZ (representational repulsion). These results demonstrate a qualitative difference in WM dynamics between PSZ and HCS that cannot easily be explained by nuisance factors such as reduced effort. Most computational neuroscience models also fail to explain these results, because they maintain information solely by means of sustained neural firing, which does not extend across trials. The results suggest a fundamental difference between PSZ and HCS in longer-term memory mechanisms that persist across trials, such as short-term potentiation and neuronal adaptation. Copy rights belong to original authors. Visit the link for more info. Podcast created by Paper Player, LLC
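A simplified sketch of the serial-dependence measure the abstract describes (this uses fake data and is not the authors' analysis pipeline; the sign convention and circular wrap are assumptions made for illustration): a positive mean signed error toward the previous-trial target indicates attraction, a negative one repulsion.

import numpy as np

def serial_dependence(reported, target, prev_target):
    # Mean signed error in the direction of the previous trial's target (degrees).
    # Positive -> drift toward the previous target (attraction); negative -> repulsion.
    err = (reported - target + 180) % 360 - 180
    rel = (prev_target - target + 180) % 360 - 180
    return float(np.mean(np.sign(rel) * err))

# Fake data: reports nudged 10% of the way toward the previous target, plus noise.
rng = np.random.default_rng(0)
target = rng.uniform(0, 360, 500)
prev = np.roll(target, 1)
reported = target + 0.1 * ((prev - target + 180) % 360 - 180) + rng.normal(0, 5, 500)
print(serial_dependence(reported, target, prev) > 0)  # True -> attraction-like pattern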

YUTORAH: R' Efrem Goldberg -- Recent Shiurim
Turn Friday into Erev Shabbos (Part 134): More Shabbos Quantitatively and Qualitatively

YUTORAH: R' Efrem Goldberg -- Recent Shiurim

Play Episode Listen Later Mar 24, 2023 10:27


Survey of Shas Sugyas - Feed Podcast
Turn Friday into Erev Shabbos (Part 134): More Shabbos Quantitatively and Qualitatively

Survey of Shas Sugyas - Feed Podcast

Play Episode Listen Later Mar 24, 2023


Latest shiurim from Boca Raton Synagogue
Turn Friday into Erev Shabbos (Part 134): More Shabbos Quantitatively and Qualitatively

Latest shiurim from Boca Raton Synagogue

Play Episode Listen Later Mar 24, 2023 10:27


PaperPlayer biorxiv neuroscience
A domain-agnostic MR reconstruction framework using a randomly weighted neural network

PaperPlayer biorxiv neuroscience

Play Episode Listen Later Mar 24, 2023


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2023.03.22.533764v1?rss=1 Authors: Pal, A., Ning, L., Rathi, Y. Abstract: Purpose: To design a randomly-weighted neural network that performs domain-agnostic MR image reconstruction from undersampled k-space data without the need for ground truth or extensive in-vivo training datasets. The network performance must be similar to the current state-of-the-art algorithms that require large training datasets. Methods: We propose a Weight Agnostic randomly weighted Network method for MRI reconstruction (termed WAN-MRI) which does not require updating the weights of the neural network but rather chooses the most appropriate connections of the network to reconstruct the data from undersampled k-space measurements. The network architecture has three components, i.e. (1) Dimensionality Reduction Layers comprising 3d convolutions, ReLU, and batch norm; (2) a Reshaping Layer, which is a fully connected layer; and (3) Upsampling Layers that resemble the ConvDecoder architecture. The proposed methodology is validated on fastMRI knee and brain datasets. Results: The proposed method provides a significant boost in performance for structural similarity index measure (SSIM) and root mean squared error (RMSE) scores on fastMRI knee and brain datasets at an undersampling factor of R=4 and R=8 while trained on fractal and natural images, and fine-tuned with only 20 samples from the fastMRI training k-space dataset. Qualitatively, we see that classical methods such as GRAPPA and SENSE fail to capture the subtle details that are clinically relevant. We either outperform or show comparable performance with several existing deep learning techniques (that require extensive training) like GrappaNET, VariationNET, J-MoDL, and RAKI. On diffusion MRI (dMRI), our method performs significantly better than all competing methods despite low SNR. The comparable performance of our method advocates the importance of letting an untrained neural network figure out how to perform k-space to MR image reconstruction. Conclusion: The proposed algorithm (WAN-MRI) is agnostic to reconstructing images of different body organs or MRI modalities and provides excellent scores in terms of SSIM, PSNR, and RMSE metrics and generalizes better to out-of-distribution examples. The methodology does not require ground truth data and can be trained using very few undersampled multi-coil k-space training samples. Copy rights belong to original authors. Visit the link for more info Podcast created by Paper Player, LLC
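
For readers who want a concrete picture of the three-component layout described in the abstract, here is a rough PyTorch sketch. It is a reconstruction from the abstract alone: the channel counts, latent size, and output resolution are invented placeholders, and it does not implement the weight-agnostic connection-selection procedure that is the paper's actual contribution.

import torch
import torch.nn as nn

class WANMRISketch(nn.Module):
    """Sketch of the three-part layout: (1) 3D-conv dimensionality reduction,
    (2) fully connected reshaping, (3) ConvDecoder-like upsampling."""
    def __init__(self, in_ch=2, hidden=32, latent=1024, out_size=(320, 320)):
        super().__init__()
        self.reduce = nn.Sequential(                     # (1) 3d conv -> ReLU -> batch norm
            nn.Conv3d(in_ch, hidden, kernel_size=3, stride=2, padding=1),
            nn.ReLU(), nn.BatchNorm3d(hidden),
            nn.Conv3d(hidden, hidden, kernel_size=3, stride=2, padding=1),
            nn.ReLU(), nn.BatchNorm3d(hidden),
        )
        self.reshape = nn.LazyLinear(latent)             # (2) fully connected reshaping layer
        self.decode = nn.Sequential(                     # (3) upsampling decoder
            nn.Unflatten(1, (16, 8, 8)),                 # requires latent == 16 * 8 * 8
            nn.Upsample(scale_factor=2), nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
            nn.Upsample(size=out_size),                  # bring the image to the target resolution
        )

    def forward(self, kspace):                           # kspace: (batch, in_ch, depth, height, width)
        z = self.reduce(kspace).flatten(1)
        z = self.reshape(z)
        return self.decode(z)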

Papers Read on AI
Token Merging: Your ViT But Faster

Papers Read on AI

Play Episode Listen Later Feb 21, 2023 30:39


We introduce Token Merging (ToMe), a simple method to increase the throughput of existing ViT models without needing to train. ToMe gradually combines similar tokens in a transformer using a general and light-weight matching algorithm that is as fast as pruning while being more accurate. Qualitatively, we find that ToMe merges object parts into one token, even over multiple frames of video. Overall, ToMe's accuracy and speed are competitive with state-of-the-art on images, video, and audio. 2022: Daniel Bolya, Cheng-Yang Fu, Xiaoliang Dai, Peizhao Zhang, Christoph Feichtenhofer, Judy Hoffman https://arxiv.org/pdf/2210.09461v1.pdf
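
The matching step is simple enough to sketch. The following is a minimal, single-sequence PyTorch illustration of the bipartite-matching idea, written from the abstract rather than from the authors' released code: tokens are split into two alternating sets, each token in the first set is paired with its most similar token in the second, and the r highest-scoring pairs are merged by averaging. The function name and the plain averaging are simplifications; the paper additionally tracks token sizes so attention can be weighted proportionally.

import torch

def tome_merge(x: torch.Tensor, r: int) -> torch.Tensor:
    # x: (num_tokens, dim) token embeddings for one layer; returns (num_tokens - r, dim).
    a, b = x[0::2], x[1::2]                       # alternate tokens into two sets
    a_n = torch.nn.functional.normalize(a, dim=-1)
    b_n = torch.nn.functional.normalize(b, dim=-1)
    sim = a_n @ b_n.T                             # cosine similarity between the two sets
    best_sim, best_idx = sim.max(dim=-1)          # best partner in b for every token in a
    merge_ids = set(best_sim.argsort(descending=True)[:r].tolist())

    merged_b = b.clone()
    counts = torch.ones(b.shape[0], 1)
    kept_a = []
    for i in range(a.shape[0]):
        if i in merge_ids:                        # fold this a-token into its partner
            j = best_idx[i]
            merged_b[j] = merged_b[j] + a[i]
            counts[j] += 1
        else:
            kept_a.append(a[i])
    merged_b = merged_b / counts                  # plain average of merged tokens

    return torch.cat([torch.stack(kept_a), merged_b]) if kept_a else merged_b

Calling a routine like this inside every transformer block, between attention and the MLP, is what gradually shrinks the sequence and produces the throughput gain.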

Throwback Thursday Cold cases At The EGO
NEED TO KNOW - Understanding social identity theory : Psychologists believe that intergroup behavior differs qualitatively from individual

Throwback Thursday Cold cases At The EGO

Play Episode Listen Later Jan 8, 2023 3:22


Les Immatures De Paris And The Policeman
Even if actions are tainted by sin, all evil is not the same either qualitatively or quantitatively.

Les Immatures De Paris And The Policeman

Play Episode Listen Later Nov 27, 2022 0:56


Hey Fintech Friends, by This Week in Fintech
Hey Fintech Friends #6 ft Giorgio Giuliani

Hey Fintech Friends, by This Week in Fintech

Play Episode Listen Later Sep 7, 2022 43:12


Available on Spotify, Apple, and anywhere else you listen to podcasts! Timestamps: Intro; 'Fin-techionary' of the Week: Point of Sale (1.11); News (2.05); Interview with Giorgio Giuliani about their experience and current work at Opareta (4.47); Quick Fire Questions with Giorgio Giuliani (32.20); Signals: Rent is Rising – The Rent-A-Charter Model Just Got More Expensive (39.04). Transcript: Hey FinTech friends. Hey FinTech friends. My name is Helen Femi Williams, and I'm your host of this new podcast, Hey FinTech Friends! This podcast is brought to you by This Week in FinTech, which is on the front page of global FinTech news, fostering the largest FinTech community through newsletters, thought leadership, and events. Oh, and now podcasting. So let's talk about the structure of this podcast. First, we're going to go through the news. And if you're a subscriber to the This Week in FinTech newsletter, you're in luck because this is the audio version. Then we're going to have a chat with this week's friend, Giorgio Giuliani. And lastly, I'll tell you a little bit about the latest Signals article, Rent is Rising – The Rent-A-Charter Model Just Got More Expensive, by Trevor Tanifum. Fin-techionary: This week's 'fintechtionary', which is our dictionary definition of a fintechy word, is: Point of Sale. According to Investopedia, point of sale (POS) refers to the place where a customer executes the payment for goods or services and where sales taxes may become payable. It can be in a physical store, where POS terminals and systems are used to process card payments, or a virtual sales point such as a computer or mobile electronic device. Depending on the software features, retailers can track pricing accuracy, inventory changes, gross revenue, and sales patterns. Using integrated technology to track data helps retailers catch discrepancies in pricing or cash flow that could lead to profit loss or interrupt sales. POS systems that monitor inventory and buying trends can help retailers avoid customer service issues, such as out-of-stock sales, and tailor purchasing and marketing to consumer behavior. But first, this week in Fintech.

Neural Information Retrieval Talks — Zeta Alpha
ColBERT + ColBERTv2: late interaction at a reasonable inference cost

Neural Information Retrieval Talks — Zeta Alpha

Play Episode Listen Later Aug 16, 2022 57:30


Andrew Yates (Assistant Professor at the University of Amsterdam) and Sergi Castella (Analyst at Zeta Alpha) discuss the two influential papers introducing ColBERT (from 2020) and ColBERT v2 (from 2022), which mainly propose a fast late interaction operation to achieve performance close to full cross-encoders but at a more manageable computational cost at inference, along with many other optimizations.
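
The late interaction operation itself is compact enough to show directly. Below is a minimal PyTorch sketch of the MaxSim scoring used by ColBERT; it assumes the query and document token embeddings have already been produced and L2-normalized by the BERT encoders, which is where most of the real system's machinery (and cost) lives.

import torch

def late_interaction_score(q_emb: torch.Tensor, d_emb: torch.Tensor) -> torch.Tensor:
    # q_emb: (num_query_tokens, dim), d_emb: (num_doc_tokens, dim), both L2-normalized.
    sim = q_emb @ d_emb.T                  # every query token scored against every doc token
    return sim.max(dim=1).values.sum()     # MaxSim: best document match per query token, summed

Because document embeddings can be computed offline and only this cheap max-and-sum runs per query, the inference cost sits between a bi-encoder dot product and a full cross-encoder, which is the trade-off discussed in the episode.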

A Sound Heart
Believers Must Live Qualitatively Different Lives in the Cosmic System

A Sound Heart

Play Episode Listen Later Jul 24, 2022 45:00


Who and what a "believer" is is known by the fruitfulness of the life.

Chat with Leaders Podcast
Providing A Qualitatively Distinct Hospice Experience & Purpose-Driven Work Environment

Chat with Leaders Podcast

Play Episode Listen Later May 5, 2022 27:23


In this episode, Jeff sits down with Hugh Henderson, CEO of Capstone Hospice in Atlanta, GA, which exists as a values-based company focused on exceptional people delivering exceptional care. They unpack Hugh's story of building this successful business, defining company values, building a strong company culture centered around calling and purpose, how the need for hospice care has been skyrocketing in our society, and practical advice he would have for other aspiring leaders who want to put "purpose" at the core of their business. MORE ABOUT HUGH: Hugh Henderson began his hospice career in 1992 as a Counselor with a local hospice organization. After 24 years of seeing the hospice industry from all different angles and serving in various hospice leadership positions with local, regional, and national companies, Mr. Henderson decided to open Capstone Hospice in 2015. Capstone exists as a values-based company focusing on exceptional people delivering exceptional care. Mr. Henderson holds a Master's Degree in Community Counseling from Georgia State University and a Bachelor's degree in Educational Psychology from the University of Georgia. He and his wife, Karen, have been happily married for 28 years and have three children, ages 27, 24, and 21. Hugh and his family attend Perimeter Church in Johns Creek, where Mr. Henderson serves as an Elder. When not fulfilling his career-long passion for hospice, Hugh enjoys spending time with his family, playing tennis, and watching college football (Go Dawgs!). WHAT IS A "CAPSTONE?" In architectural terms, the capstone is the final fixture put in place on any constructed edifice -- the "crowning achievement" so to speak of a building or, in the case of a hospice patient, a lifetime. The capstone allows patients and families to put "the final stone" in place as a monument to the value and worth of the life lived by each hospice patient. The word "capstone" is also a biblical reference found in the book of Psalms (118:22). Such a reference emphasizes Capstone's values-based approach and our desire to have a positive impact on all we serve. RESOURCES RELATED TO THIS EPISODE Go to www.capstonehospice.com CREDITS Theme Music

Today's Heavenward Gaze
Today's Heavenward Gaze 1246- A Qualitatively Higher Exodus Story

Today's Heavenward Gaze

Play Episode Listen Later Apr 25, 2022 5:54


A Daily Dose of Chassidus with Rabbi Shmuel Braun. What's the bottom line of the exodus in your heart?

Evans on Marketing Podcasts
Assessing Super Bowl LVI Ads – Qualitatively

Evans on Marketing Podcasts

Play Episode Listen Later Feb 15, 2022 3:19


Within 24 hours of the Super Bowl ending, a lot of sources offered their opinions about the best and worst ads. Here is a cross-section of reviews. Note the differences by reviewer! #superbowlads This episode is also available as a multimedia blog post: https://evansonmarketing.com/2022/02/15/assessing-super-bowl-lvi-ads-qualitatively/

Higher Education Enrollment Growth Briefing
One semester into an ungrading experiment

Higher Education Enrollment Growth Briefing

Play Episode Listen Later Feb 10, 2022 0:48


As reported by EdSurge, Dr. David Clark from Grand Valley State University conducted a student self-grading experiment in his upper-level Euclidean geometry course this past semester. The results? Qualitatively, Clark calls it his most energetic, enthusiastic, and best course in 12 years of teaching.

the CYBER5
Brand and Reputation Intelligence: Open Source Intelligence That Drives Revenue Generation But Protects the Brand with Vizsense's Jon Iadonisi

the CYBER5

Play Episode Listen Later Feb 2, 2022 32:24


In episode 65 of The Cyber5, we are joined by Jon Iadonisi, CEO and Co-Founder of VizSense. Many people think of open-source intelligence (OSINT) as identifying and mitigating threats for the security team. In this episode, we explore how OSINT is used to drive revenue. We talk about the role social media and OSINT play in marketing campaigns, particularly around brand awareness, brand reputation, go-to-market (GTM) strategy, and overall revenue generation. We also discuss what marketing and security teams can learn from OSINT intelligence tradecraft, particularly when there are threats to the brand's reputation. Four Key Takeaways:
1) Even in Marketing, Context and Insights Provide Intelligence, Not Data. Raw data is not intelligence; rather, intelligence is a refined product where context is provided around information and data. Similar to the national security and enterprise security world, where adversaries are trying to commit crimes and espionage, businesses want to attract people to their brand. Open-source and social media information are powerful data points when analyzed, providing critical intelligence on what consumers and businesses want to buy. Every human being is now a signal, no different from radio intercepts during Pearl Harbor.
2) The Role of OSINT in Driving Revenue for the Brand; Quantitative and Qualitative Metrics. In the security world, attribution to a particular organization is necessary to continue to receive fundraising, whether it's a hacking group or a terrorist organization. In the marketing world, brand intelligence is a crucial piece in the following three elements to influence a person: persuasive content, delivered from a credible voice, to a network or audience with a high engagement rate. Open-source intelligence can be mined in a way that provides insights stronger than traditional marketing focus groups. While celebrities attract attention, people are likely to follow people like themselves, aka micro-influencers. Quantitatively, increasing numbers in revenue, sharing, and engagements are critical metrics. Qualitatively, marketing teams can mine social media data to determine what people are thinking about a particular product, but also to understand how the products are performing, and then design and build future products. The crowd will tell a brand what they want but don't have yet, and you can use that data to build future products.
3) Where Marketing Meets Security: Threats to Brand Reputation. Security teams should work with marketing teams daily to protect the brand. In today's threats to brands, the human dimension of what people say online is of equal if not greater importance than the technical signals that show a company has suffered a breach, particularly regarding misinformation and disinformation. The human dimension is converging with the technical dimension, and a true holistic hybrid model is needed for enterprise security and intelligence teams. An example of reputation threats that happen in business every day: smear campaigns using disinformation and misinformation from competitors introduce uncertainty into a brand's ecosystem.
4) Where Security Meets Marketing: Privacy Taken Seriously Enhances the Brand. On the flip side, marketing teams should look for ways to promote the security of their products as business differentiators.
Marketing teams should also consult with the security teams to understand all the different data lakes that are available in social media, dark web, and open source to ensure they can collect on the proper type of sentiment where brands are being discussed.

The Nonlinear Library
LW - Future ML Systems Will Be Qualitatively Different by jsteinhardt

The Nonlinear Library

Play Episode Listen Later Jan 12, 2022 8:34


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future ML Systems Will Be Qualitatively Different, published by jsteinhardt on January 11, 2022 on LessWrong. In 1972, the Nobel prize-winning physicist Philip Anderson wrote the essay "More Is Different". In it, he argues that quantitative changes can lead to qualitatively different and unexpected phenomena. While he focused on physics, one can find many examples of More is Different in other domains as well, including biology, economics, and computer science. Some examples of More is Different include: Uranium. With a bit of uranium, nothing special happens; with a large amount of uranium packed densely enough, you get a nuclear reaction. DNA. Given only small molecules such as calcium, you can't meaningfully encode useful information; given larger molecules such as DNA, you can encode a genome. Water. Individual water molecules aren't wet. Wetness only occurs due to the interaction forces between many water molecules interspersed throughout a fabric (or other material). Traffic. A few cars on the road are fine, but with too many you get a traffic jam. It could be that 10,000 cars could traverse a highway easily in 15 minutes, but 20,000 on the road at once could take over an hour. Specialization. Historically, in small populations, virtually everyone needed to farm or hunt to survive; in contrast, in larger and denser communities, enough food is produced for large fractions of the population to specialize in non-agricultural work. While some of the examples, like uranium, correspond to a sharp transition, others like specialization are more continuous. I'll use emergence to refer to qualitative changes that arise from quantitative increases in scale, and phase transitions for cases where the change is sharp. In this post, I'll argue that emergence often occurs in the field of AI, and that this should significantly affect our intuitions about the long-term development and deployment of AI systems. We should expect weird and surprising phenomena to emerge as we scale up systems. This presents opportunities, but also poses important risks. Emergent Shifts in the History of AI There have already been several examples of quantitative differences leading to important qualitative changes in machine learning. Storage and Learning. The emergence of machine learning as a viable approach to AI is itself an example of More Is Different. While learning had been discussed since the 1950s, it wasn't until the 80s-90s that it became a dominant paradigm: for instance, IBM's first statistical translation model was published in 1988, even though the idea was proposed in 1949. Not coincidentally, 1GB of storage cost over $100k in 1981 but only around $9k in 1990 (adjusted to 2021 dollars). The Hansard corpus used to train IBM's model comprised 2.87 million sentences and would have been difficult to use before the 80s. Even the simple MNIST dataset would have required $4000 in hardware just to store in 1981, but that had fallen to a few dollars by 1998 when it was published. Cheaper hardware thus allowed for a qualitatively new approach to AI: in other words, More storage enabled Different approaches. Compute, Data, and Neural Networks. As hardware improved, it became possible to train neural networks that were very deep for the first time. 
Better compute enabled bigger models trained for longer, and better storage enabled learning from more data; AlexNet-sized models and ImageNet-sized datasets wouldn't have been feasible for researchers to experiment with in 1990. Deep learning performs well with lots of data and compute, but struggles at smaller scales. Without many resources, simpler algorithms tend to outperform it, but with sufficient resources it pulls far ahead of the pack. This reversal of fortune led to qualitative changes in the field. As one example, the field of machine t...

The Nonlinear Library: LessWrong
LW - Future ML Systems Will Be Qualitatively Different by jsteinhardt

The Nonlinear Library: LessWrong

Play Episode Listen Later Jan 12, 2022 8:34


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future ML Systems Will Be Qualitatively Different, published by jsteinhardt on January 11, 2022 on LessWrong. In 1972, the Nobel prize-winning physicist Philip Anderson wrote the essay "More Is Different". In it, he argues that quantitative changes can lead to qualitatively different and unexpected phenomena. While he focused on physics, one can find many examples of More is Different in other domains as well, including biology, economics, and computer science. Some examples of More is Different include: Uranium. With a bit of uranium, nothing special happens; with a large amount of uranium packed densely enough, you get a nuclear reaction. DNA. Given only small molecules such as calcium, you can't meaningfully encode useful information; given larger molecules such as DNA, you can encode a genome. Water. Individual water molecules aren't wet. Wetness only occurs due to the interaction forces between many water molecules interspersed throughout a fabric (or other material). Traffic. A few cars on the road are fine, but with too many you get a traffic jam. It could be that 10,000 cars could traverse a highway easily in 15 minutes, but 20,000 on the road at once could take over an hour. Specialization. Historically, in small populations, virtually everyone needed to farm or hunt to survive; in contrast, in larger and denser communities, enough food is produced for large fractions of the population to specialize in non-agricultural work. While some of the examples, like uranium, correspond to a sharp transition, others like specialization are more continuous. I'll use emergence to refer to qualitative changes that arise from quantitative increases in scale, and phase transitions for cases where the change is sharp. In this post, I'll argue that emergence often occurs in the field of AI, and that this should significantly affect our intuitions about the long-term development and deployment of AI systems. We should expect weird and surprising phenomena to emerge as we scale up systems. This presents opportunities, but also poses important risks. Emergent Shifts in the History of AI There have already been several examples of quantitative differences leading to important qualitative changes in machine learning. Storage and Learning. The emergence of machine learning as a viable approach to AI is itself an example of More Is Different. While learning had been discussed since the 1950s, it wasn't until the 80s-90s that it became a dominant paradigm: for instance, IBM's first statistical translation model was published in 1988, even though the idea was proposed in 1949. Not coincidentally, 1GB of storage cost over $100k in 1981 but only around $9k in 1990 (adjusted to 2021 dollars). The Hansard corpus used to train IBM's model comprised 2.87 million sentences and would have been difficult to use before the 80s. Even the simple MNIST dataset would have required $4000 in hardware just to store in 1981, but that had fallen to a few dollars by 1998 when it was published. Cheaper hardware thus allowed for a qualitatively new approach to AI: in other words, More storage enabled Different approaches. Compute, Data, and Neural Networks. As hardware improved, it became possible to train neural networks that were very deep for the first time. 
Better compute enabled bigger models trained for longer, and better storage enabled learning from more data; AlexNet-sized models and ImageNet-sized datasets wouldn't have been feasible for researchers to experiment with in 1990. Deep learning performs well with lots of data and compute, but struggles at smaller scales. Without many resources, simpler algorithms tend to outperform it, but with sufficient resources it pulls far ahead of the pack. This reversal of fortune led to qualitative changes in the field. As one example, the field of machine t...

the empowered podcast
Episode 025: Do I Have PFAS Forever Chemicals In My Blood?

the empowered podcast

Play Episode Listen Later Dec 28, 2021 53:11


Wondering how certain chemicals in your environment may impact your health? On this special edition of the empowered podcast, our panel of experts explores where PFAS chemicals like PFOA (C8), PFOS and GenX are lurking in your environment, and discuss how you can test your levels from the comfort of home. Click here to order the first ever self-collected PFAS Exposure Blood Test from empowerDX. Qualitatively assess more than 40 forever chemicals with a single kit. Request bulk PFAS orders here.

The Nonlinear Library: LessWrong Top Posts
Evolution of Modularity by johnswentworth

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 11, 2021 4:01


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Evolution of Modularity, published by johnswentworth on the AI Alignment Forum. Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. This post is based on chapter 15 of Uri Alon's book An Introduction to Systems Biology: Design Principles of Biological Circuits. See the book for more details and citations; see here for a review of most of the rest of the book. Fun fact: biological systems are highly modular, at multiple different scales. This can be quantified and verified statistically, e.g. by mapping out protein networks and algorithmically partitioning them into parts, then comparing the connectivity of the parts. It can also be seen more qualitatively in everyday biological work: proteins have subunits which retain their function when fused to other proteins, receptor circuits can be swapped out to make bacteria follow different chemical gradients, manipulating specific genes can turn a fly's antennae into legs, organs perform specific functions, etc, etc. On the other hand, systems designed by genetic algorithms (aka simulated evolution) are decidedly not modular. This can also be quantified and verified statistically. Qualitatively, examining the outputs of genetic algorithms confirms the statistics: they're a mess. So: what is the difference between real-world biological evolution vs typical genetic algorithms, which leads one to produce modular designs and the other to produce non-modular designs? Kashtan & Alon tackle the problem by evolving logic circuits under various conditions. They confirm that simply optimizing the circuit to compute a particular function, with random inputs used for selection, results in highly non-modular circuits. However, they are able to obtain modular circuits using “modularly varying goals” (MVG). The idea is to change the reward function every so often (the authors switch it out every 20 generations). Of course, if we just use completely random reward functions, then evolution doesn't learn anything. Instead, we use “modularly varying” goal functions: we only swap one or two little pieces in the (modular) objective function. An example from the book: The upshot is that our different goal functions generally use similar sub-functions - suggesting that they share sub-goals for evolution to learn. Sure enough, circuits evolved using MVG have modular structure, reflecting the modular structure of the goals. (Interestingly, MVG also dramatically accelerates evolution - circuits reach a given performance level much faster under MVG than under a fixed goal, despite needing to change behavior every 20 generations. See either the book or the paper for more on that.) How realistic is MVG as a model for biological evolution? I haven't seen quantitative evidence, but qualitative evidence is easy to spot. MVG as a theory of biological modularity predicts that highly variable subgoals will result in modular structure, whereas static subgoals will result in a non-modular mess. Alon's book gives several examples: Chemotaxis: different bacteria need to pursue/avoid different chemicals, with different computational needs and different speed/energy trade-offs, in various combinations. The result is modularity: separate components for sensing, processing and motion. Animals need to breathe, eat, move, and reproduce. 
A new environment might have different food or require different motions, independent of respiration or reproduction - or vice versa. Since these requirements vary more-or-less independently in the environment, animals evolve modular systems to deal with them: digestive tract, lungs, etc. Ribosomes, as an anti-example: the functional requirements of a ribosome hardly vary at all, so they end up non-modular. They have pieces, but most pieces do not have an obvious distinct function. To sum it up: modul...

The Nonlinear Library: Alignment Forum Top Posts
Evolution of Modularity by johnswentworth

The Nonlinear Library: Alignment Forum Top Posts

Play Episode Listen Later Dec 10, 2021 3:57


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Evolution of Modularity, published by johnswentworth on the AI Alignment Forum. Write a Review This post is based on chapter 15 of Uri Alon's book An Introduction to Systems Biology: Design Principles of Biological Circuits. See the book for more details and citations; see here for a review of most of the rest of the book. Fun fact: biological systems are highly modular, at multiple different scales. This can be quantified and verified statistically, e.g. by mapping out protein networks and algorithmically partitioning them into parts, then comparing the connectivity of the parts. It can also be seen more qualitatively in everyday biological work: proteins have subunits which retain their function when fused to other proteins, receptor circuits can be swapped out to make bacteria follow different chemical gradients, manipulating specific genes can turn a fly's antennae into legs, organs perform specific functions, etc, etc. On the other hand, systems designed by genetic algorithms (aka simulated evolution) are decidedly not modular. This can also be quantified and verified statistically. Qualitatively, examining the outputs of genetic algorithms confirms the statistics: they're a mess. So: what is the difference between real-world biological evolution vs typical genetic algorithms, which leads one to produce modular designs and the other to produce non-modular designs? Kashtan & Alon tackle the problem by evolving logic circuits under various conditions. They confirm that simply optimizing the circuit to compute a particular function, with random inputs used for selection, results in highly non-modular circuits. However, they are able to obtain modular circuits using “modularly varying goals” (MVG). The idea is to change the reward function every so often (the authors switch it out every 20 generations). Of course, if we just use completely random reward functions, then evolution doesn't learn anything. Instead, we use “modularly varying” goal functions: we only swap one or two little pieces in the (modular) objective function. An example from the book: The upshot is that our different goal functions generally use similar sub-functions - suggesting that they share sub-goals for evolution to learn. Sure enough, circuits evolved using MVG have modular structure, reflecting the modular structure of the goals. (Interestingly, MVG also dramatically accelerates evolution - circuits reach a given performance level much faster under MVG than under a fixed goal, despite needing to change behavior every 20 generations. See either the book or the paper for more on that.) How realistic is MVG as a model for biological evolution? I haven't seen quantitative evidence, but qualitative evidence is easy to spot. MVG as a theory of biological modularity predicts that highly variable subgoals will result in modular structure, whereas static subgoals will result in a non-modular mess. Alon's book gives several examples: Chemotaxis: different bacteria need to pursue/avoid different chemicals, with different computational needs and different speed/energy trade-offs, in various combinations. The result is modularity: separate components for sensing, processing and motion. Animals need to breathe, eat, move, and reproduce. A new environment might have different food or require different motions, independent of respiration or reproduction - or vice versa. 
Since these requirements vary more-or-less independently in the environment, animals evolve modular systems to deal with them: digestive tract, lungs, etc. Ribosomes, as an anti-example: the functional requirements of a ribosome hardly vary at all, so they end up non-modular. They have pieces, but most pieces do not have an obvious distinct function. To sum it up: modularity in the system evolves to match modularity in the environment. Than...

The Nonlinear Library: Alignment Forum Top Posts
A simple environment for showing mesa misalignment by Matthew Barnett

The Nonlinear Library: Alignment Forum Top Posts

Play Episode Listen Later Dec 3, 2021 3:23


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A simple environment for showing mesa misalignment, published by Matthew Barnett on the AI Alignment Forum. A few days ago, Evan Hubinger suggested creating a mesa optimizer for empirical study. The aim of this post is to propose a minimal environment for creating a mesa optimizer, which should allow a compelling demonstration of pseudo alignment. As a bonus, the scheme also shares a nice analogy with human evolution. The game An agent will play on a maze-like grid, with walls that prohibit movement. There are two important strategic components to this game: keys, and chests. If the agent moves into a tile containing a key, it automatically picks up that key, moving it into the agent's unbounded inventory. Moving into any tile containing a chest will be equivalent to an attempt to open that chest. Any key can open any chest, after which both the key and chest are expired. The agent is rewarded every time it successfully opens a chest. Nothing happens if it moves into a chest tile without a key, and the chest does not prohibit the agent's movement. The agent is therefore trained to open as many chests as possible during an episode. The map may look like this: The catch In order for the agent to exhibit the undesirable properties of mesa optimization, we must train it in a certain version of the above environment to make those properties emerge naturally. Specifically, in my version, we limit the ratio of keys to chests so that there is an abundance of chests compared to keys. Therefore, the environment may look like this instead: Context change The hope is that while training, the agent picks up a simple pseudo objective: collect as many keys as possible. Since chests are abundant, it shouldn't need to expend much energy seeking them, as it will nearly always run into one while traveling to the next key. Note that we can limit the number of steps during a training episode so that it almost never runs out of keys during training. When taken off the training distribution, we can run this scenario in reverse. Instead of testing it in an environment with few keys and lots of chests, we can test it in an environment with few chests and many keys. Therefore, when pursuing the pseudo objective, it will spend all its time collecting keys without getting any reward. Testing for mesa misalignment In order to show that the mesa optimizer is competent but misaligned we can put the agent in a maze-like environment much larger than any it was trained for. Then, we can provide it an abundance of keys relative to chests. If it can navigate the large maze and collect many keys comfortably while nonetheless opening few or no chests, then it has experienced a malign failure. We can make this evidence for pseudo alignment even stronger by comparing the trained agent to two that we hard-code: one agent that pursues the optimal policy for collecting keys, and one agent that pursues the optimal policy for opening as many chests as possible. Qualitatively, if the trained agent is more similar to the first agent than the second, then we should be confident that it has picked up the pseudo objective. The analogy with human evolution In the ancestral environment, calories were scarce. In our modern day world they are no longer scarce, yet we still crave them, sometimes to the point where it harms our reproductive capability. 
This is similar to how the agent will continue pursuing keys even if it is not using them to open any chests. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
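
To make the proposed setup concrete, here is a toy Python version of the keys-and-chests gridworld as described in the post. It is a reconstruction for illustration only, not Barnett's or Hubinger's code: walls, episode limits, and the RL training loop are omitted, and the grid size and object counts are arbitrary.

import random

class KeyChestEnv:
    """Toy keys-and-chests gridworld: keys are picked up automatically, any key
    opens any chest, and reward is given only when a chest is opened."""
    def __init__(self, size=10, n_keys=2, n_chests=8, seed=0):
        rng = random.Random(seed)
        cells = [(x, y) for x in range(size) for y in range(size)]
        rng.shuffle(cells)
        self.agent = cells.pop()
        self.keys = set(cells[:n_keys])
        self.chests = set(cells[n_keys:n_keys + n_chests])
        self.inventory = 0                      # unbounded key inventory
        self.size = size

    def step(self, action):
        dx, dy = [(0, 1), (0, -1), (1, 0), (-1, 0)][action]
        x = min(max(self.agent[0] + dx, 0), self.size - 1)
        y = min(max(self.agent[1] + dy, 0), self.size - 1)
        self.agent = (x, y)
        reward = 0
        if self.agent in self.keys:             # stepping on a key picks it up
            self.keys.remove(self.agent)
            self.inventory += 1
        elif self.agent in self.chests and self.inventory > 0:
            self.chests.remove(self.agent)      # opening a chest consumes one key
            self.inventory -= 1
            reward = 1                          # the only source of reward
        return self.agent, reward

Training with many chests and few keys and then testing with the ratio reversed is then just a matter of changing n_keys and n_chests between the two phases.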

Fearless Training
#102: Mark Coles - Reaching Your Full Potential & Growing Your Fitness Business Qualitatively!

Fearless Training "Roar Knowledge" Podcast

Play Episode Listen Later Nov 25, 2021 56:39


Welcome back to the Fearless Training "Roar Knowledge" Podcast Episode 102: Mark Coles M10 - Reaching Your Full Potential! Expect To Learn: - What It Takes To Become World Class - Coaching Vs Personal Training - Scaling Vs Growing Business - Leading By Example - Business Mastery Mindset - Achieving Tangible Results - Reaching Your Full Potential - Where To Start - Daily Habits to Win Welcome to M10. I've spent the last 15 years working within the fitness industry, and the last seven years building M10 as a personal training brand. M10 is located in the busy business district of Nottingham. My passion has always been to create a personal training facility, where results and a high quality of service are the number one priorities. I believe I have accomplished this in our Nottingham personal training gym. I strive to create a training environment that inspires everyone who enters to achieve their goals. We are committed to helping every single one of our clients achieve their lifelong dream of being in the best shape of their lives. We will do whatever we can to help them achieve it. To many people, a personal trainer is just someone who simply "kicks their ass" a few times a week, to others it's someone who just "helps them train a little harder each week". Time and again, I hear of trainers out there who simply provide their clients with nothing but fancy workouts that promise to deliver the best results, but often don't. For myself and my team of coaches at M10, it's about much, much more. The system we use is what separates us from the rest. My team don't design generic programmes or deliver a "one size fits all" approach. Instead, they use proven methods that deliver the highest quality results – whether you're looking for strength training, fat loss or simply nutrition advice, everything they do is tailor-made. They're also continuously researching and studying to ensure that they stay at the forefront of the industry. There are very few training facilities where you walk in and see all the trainers in great shape, practicing what they preach. Walking the walk is an essential part of becoming an M10 coach; you have only got to look at the profiles of my team to see the proof of their own hard work. The one thing that I ask from everyone we work with is that they commit themselves 100% to the programme. If they do, they will achieve results like they never knew were possible! INSTAGRAM: markcolesm10 YOUTUBE: https://www.youtube.com/user/M10Fitness WEBSITE: https://m10life.com/ PODCAST: https://podcasts.apple.com/au/podcast/mastery-podcast-with-mark-coles/id934940876 FREE PHYSECRETS: http://fearlesstraining.co APPLY NOW: https://alexconnor.com.au/apply/ Coaching/Business Inquiries: alex@fearlesstraining.org https://alexconnor.com.au/ Subscribe & Follow along for more: » Subscribe: https://bit.ly/FearlessChannel » Facebook: https://www.facebook.com/groups/227661021434383 » Instagram: https://instagram.com/Fearless_Training_ » ROAR Podcast: https://bit.ly/FTRoar https://fearlesstrainingunited.com

Naval
We Are Qualitatively Different From Other Species

Naval

Play Episode Listen Later Oct 25, 2021 1:33


Transcript http://nav.al/species

The Real Estate Show
Episode 17 — Over $500,000 for Average Price of a Home in El Paso County — The Real Estate Show

The Real Estate Show

Play Episode Listen Later Jul 29, 2021 12:26


The average price of a home in El Paso County for June of 2021 was just released by our local MLS and stands at $502,961, up 25% from a year ago! There's 47.5% less active supply than a year ago … and just 0.4 months, or 12 days, of inventory supply. Is this going to get better? Is this going to get worse? What does the market hold? Qualitatively, we are seeing a bit more inventory and more houses for our buyers to look at right now at least (in the past couple weeks) … but everyone is asking themselves, how long will these conditions last?

The Human Firewall
S2-E3 Josefine Ehlers Davidsen: Psychology and cybersecurity

The Human Firewall

Play Episode Listen Later Jun 10, 2021 64:09


Our guest today is Josefine Ehlers Davidsen from AP Pension (at the time of the interview: The Danish National Agency for IT and Learning), and your hosts are Lasse Frost and Jakob Danelund. This episode delves into how you can utilize insights from psychology to bolster your organization against cyberthreats. Follow Josefine Ehlers Davidsen on LinkedIn here. Read Josefine's article "How to build real information security in 5 steps" here. Learn more about Bsides Copenhagen here. Josefine's vision: That everybody is as excited about cybersecurity as we are. But I also know that that's not going to happen. Just as we cannot have 100 percent compliance, we are going to have to accept that only a few people will have an intense love for cybersecurity. Josefine's 3 pieces of advice for getting there: What can we do tomorrow: Identify what's really important to you. Ask yourself or ask relevant people what they really care about in this organization and what we need to protect. What can we do in 6 months: Start documenting. Qualitatively and quantitatively. As you're going along in your process, it's going to help you to get more and more data-driven. Document the touchpoints you have with people. This will make it gradually easier for you to report to senior management. What can we do in 5 years: Stay curious and keep on listening. The threat landscape is constantly evolving, employees come and go - and it is futile to check boxes. So keep your eyes and ears out.

Evans on Marketing Podcasts
Assessing Super Bowl LV Advertising

Evans on Marketing Podcasts

Play Episode Listen Later Feb 23, 2021 7:12


We divide this post into three parts: (1) Companies not advertising on the Super Bowl telecast. (2) Qualitatively assessing Super Bowl LV advertising. (3) Quantitatively assessing Super Bowl LV advertising. This episode is also available as a multimedia blog post: https://evansonmarketing.com/2021/02/09/assessing-super-bowl-lv-advertising/

Practically a Farbrengen
#7 The path to a qualitatively good life

Practically a Farbrengen

Play Episode Listen Later Feb 18, 2021 14:16


The path to a qualitatively good life. Menachem and Mayer explore the nature of spiritual diagnostic terms and the path toward measuring success in life, in Judaism, qualitatively not quantitatively. Practically a Farbrengen is brought to you by Consciously, a media publishing platform and community of regular people seeking spiritual growth. We welcome your feedback and questions. Email: ConsciouslyThePodcast@gmail.com Facebook: https://www.facebook.com/Conscious-ly-102949811230486/ Instagram: https://www.instagram.com/consciously_62/ The Conscious(ly) team - Co-Host: Menachem Poznanski; Co-Host: R' Mayer Preger; Artwork: Tani Poznanski; Social Media: Zoe Poznanski; Music: Chabad Niggunim

Rejuvenated Women: Impeccable Health for High Performing Women
EP 42-How do You Qualitatively Measure Your Immune System?

Rejuvenated Women: Impeccable Health for High Performing Women

Play Episode Listen Later Dec 17, 2020 21:32


With the availability of a COVID vaccine, it seems immunity is a hot topic again. But how do you actually measure how well your immune system is functioning? Join me as I unpack the Qualitative Immune Assessment--otherwise known as signs your immune system is not in balance--and introduce our next series for the podcast. Thank you for tuning in to Rejuvenated Women: Impeccable Health for High Performing Women, where we provide you with the tools, information, and inspiration you need to transform from overwhelmed, overworked, and overweight to vibrant, energetic, and on fire! If you enjoyed the show, please head over to iTunes to SUBSCRIBE and leave us a review. Each month I'll select one lucky reviewer to receive a special "impeccable health sample kit" from me. Also, I don't want to be working with you on your health only once or twice a week. I want to be in this conversation and in the trenches with you every single day. I invite you to join me in my private Facebook Group for high performing women ready to transform their health and lives, called the Tribe of Rejuvenated Women. There you'll have access to free trainings, a community of like-minded women from around the world, and even more information, inspiration, and motivation to transform your health and become vibrant, energetic, and on fire. Be sure to check out our website and follow us on Facebook, LinkedIn, and Instagram.

Sadler's Lectures
John Stuart Mill, Utilitarianism - Qualitatively Higher And Lower Pleasures

Sadler's Lectures

Play Episode Listen Later Oct 29, 2020 17:19


This lecture discusses key ideas from the 19th century Utilitarian philosopher John Stuart Mill's work, Utilitarianism. It focuses specifically on his discussion in chapter 2, where, after introducing his distinction between higher and lower pleasures, Mill discusses differences in the quality of pleasures. There are higher pleasures - associated with higher faculties of the human being - and lower pleasures - associated with lower faculties. To support my ongoing work, go to my Patreon site - www.patreon.com/sadler If you'd like to make a direct contribution, you can do so here - www.paypal.me/ReasonIO - or at BuyMeACoffee - www.buymeacoffee.com/A4quYdWoM You can find over 1500 philosophy videos on my main YouTube channel - www.youtube.com/user/gbisadler Purchase Mill's Utilitarianism - amzn.to/3oviNnj

New Naratif's Political Agenda
The Show with PJ Thum Episode 3: How Singapore's Elections are Qualitatively Unfair (audio only)

New Naratif's Political Agenda

Play Episode Listen Later Apr 7, 2020 23:41


PJ comments on the oppression of peaceful climate change activists, Lee Hsien Loong's continued teasing of us about an election amidst the coronavirus crisis, and sex with a condom; and continues his explanation of how the electoral system is deeply unfair. The historic use of arrests and lawsuits, the threats of punishment, and the slanted media coverage create an atmosphere of fear in which people are afraid of voting against the governing People's Action Party. Lisa the Clairvoyant Malayan Sun Dog then predicts the result of the next Singapore General Election. For the full video, please visit https://youtu.be/ysPRlRIUO7I.

Dave Lee on Investing
How to Time Stock Purchases for 10x Gains (ie., TSLA, Tesla) (Ep. 24)

Dave Lee on Investing

Play Episode Listen Later Feb 5, 2020 23:21


Follow me on Twitter: https://twitter.com/heydave7 Follow me on Instagram: https://www.instagram.com/heydave7 Read my article, "The coming Tesla cash cow and the short burn of the century", https://teslamotorsclub.com/tmc/threads/the-coming-tesla-cash-cow-and-the-short-burn-of-the-century.114625/ Watch my previous videos on TSLA: My TSLA Exit Plan, https://www.youtube.com/watch?v=r9HtG-jJSTY TSLA Breaks 700, https://youtu.be/2kiZzhjz5Os TSLA Breaks 600, https://youtu.be/f32-qNH11u0 Tesla Crushes 500, https://www.youtube.com/watch?v=w_VAoVj-ZxI Tesla Breaks 400, https://www.youtube.com/watch?v=HwvXHRXH8hs When is the right time to accumulate stock in a company like Tesla? The opportunity with a generational company is you can 10x your investment in 5-10 years, but you need to get in at the right time. Many investors underestimate the importance of timing. There are ups and downs, and there are certain times when getting in can double or triple your investment compared to getting in at a later time. I'm interested in helping create a framework on how to value generational companies and how you can accumulate a position. There are two types of FOMO (Fear of Missing Out): 1. Memetic FOMO 2. Anti-memetic FOMO The best time to get into a stock is when you have anti-memetic FOMO that is derived from your own qualitative and quantitative analysis, forecasting the stock to double within the next 1-2 years and then do another 5x in 5-7 years. Investing is a difficult skill. In order to be a superb forecaster, there needs to be a combination of exceptional qualitative skills and exceptional quantitative skills, and it's the blending of these two skills that separates the great investors from the mediocre ones. Qualitatively you're looking at the product, owner feedback, superiority and defensibility of product, leadership, strategy, etc. Quantitatively you're looking at revenue, market size, margin, profit projections and multiples that investors give. Please share this video with others on Reddit, Facebook groups, and forums. Check out my archived articles/posts on Tesla: https://teslamotorsclub.com/tmc/threads/articles-megaposts-by-davet.23473/#post-485768 Disclaimer: All content on this channel is for informational and educational purposes only and should not be construed as professional financial advice. Should you need such advice, consult a licensed financial or tax advisor. No guarantee is given regarding the accuracy of information on this channel. Tags: Tesla, Elon Musk, Model 3, Model Y, Cybertruck, Investing, China, TSLA Shorts. Subscribe to Dave Lee on Investing on Soundwise

A Beautiful Church
Seth Thompson, Part 3: Consumerism vs. Kingdom

A Beautiful Church

Play Episode Listen Later Jan 31, 2020 28:52


The mission of this podcast is to highlight the beauty and diversity of God's Church – both in Chattanooga and the Church at large. What keeps people from getting involved at churches like Fairview Church of the Nazarene? In this episode, Adam and Seth talk about preaching from the lectionary before exploring how a consumerist mentality influences how Christians and pastors view church and even their faith. As Seth describes it, consumerism asks the question, "What's in it for me?" When churchgoers have this mindset, they look for places with satisfying and exciting experiences and avoid churches like Fairview where opportunities to serve abound and rubbing elbows with messy people is guaranteed. And when Christians fail to choose a kingdom perspective over a consumerist one, they can make the mistakes of compartmentalizing their faith and putting themselves before others. Pastors aren't exempt from this mentality: the pressure to focus on numbers and attendance instead of discipleship is something Seth and other leaders have to fight against. Listen in to this episode to hear more about what a kingdom mindset looks like and learn why Seth preaches from the lectionary. About Adam Whitescarver: Adam is passionate about seeing God's people possess vibrant prayer lives to help them make a difference in the sphere of influence God has given them. In ministry since 2001, Adam enjoys his family, teaching, singing, and reading a myriad of subjects. He and his wife, Stephanie, live in North GA with their four children. Jump Through the Conversation: [2:04] How and why Seth preaches from the lectionary; [6:11] Adam on filling up and flowing out; [7:13] Why consumerism keeps volunteers, donors, and exemplary families from flocking to churches like Fairview; [11:45] Judging churches by the wrong criteria; [14:22] Consumerism expressed in how we use our time; [16:20] Consumerist mindset vs a kingdom mindset; [18:24] How to get parents to let their kids mingle with "those" kids; [19:41] How consumerism affects a pastor's definition of success; [22:10] Qualitatively vs quantitatively measuring success; [25:14] Focusing on making disciples. Links and Resources: A Beautiful Church website; Chattanooga House of Prayer website; Give today; Fairview Church of the Nazarene website. Thanks for listening! Don't forget to subscribe! If you like what you heard, please leave a review on iTunes and share what you liked about the show.

All About Pregnancy & Birth
Ep37: All About Amniotic Fluid - What It Is and How It Helps Your Baby

All About Pregnancy & Birth

Play Episode Listen Later Sep 17, 2019 35:15


Amniotic fluid is the liquid that surrounds your baby and it appears after the first few weeks of pregnancy. This fluid is really important for your baby and your baby's health. In this Episode, You’ll Learn About: What’s in amniotic fluid The antibacterial properties of amniotic fluid How this fluid protects your baby from any trauma Qualitatively and Quantitatively measuring the amniotic fluid Types of quantitative ways to measure the amniotic fluid Oligohydramnios (low fluid) Polyhydramnios (excess fluid) What does it mean for your pregnancy if the fluid is low or high Links Mentioned In The Episode: Free Online Class - How To Make A Birth Plan That Works The Birth Preparation Course

A Sound Heart
We Have A Qualitatively New Life Through the Finished Work of Jesus Christ

A Sound Heart

Play Episode Listen Later Jun 25, 2019 16:00


Jesus is our savior.  He gave his life that we may have life (Gk. Zoe).  Without the sacrificial death of Jesus on our behalf we would remain dead in trespasses and sins.  Without Jesus there is no justification by faith, and a  glorious eternal destiny in the very presence of God (Theos). If you do not have a relationship with Jesus it is time to come to him, and if you do it is time to live as though you expect to be raptured out of the cosmic system at any time.  Broad is the way that leads to destruction, but narrow is the way that leads to eternal life 

A Sound Heart
God Has Created a Qualitatively "New" Humanity in Jesus-Super Epigenetics !

A Sound Heart

Play Episode Listen Later Jul 23, 2018 16:00


God has done something through the person of Jesus that the cosmos-world cannot comprehend.  God has taken human specimens that were ruined by sin and created a new humanity.  Corporately this new humanity is called the Church or the Called Out Ones.  This work of God in creating a new humanity has been and is being accomplished by the Spirit of God.  The Spirit of God does not need cosmic science, pseudo-personality models, and genetic engineering to produce the new humanity.  God is doing a wonderfully new thing through His own power.  Incidentally, God is not co-identified with the universe; in the documents He is the Creator who controls the massive energy of the universe through the Word of His power.

Rationality: From AI to Zombies - The Podcast
Part P, Chapter 195: Qualitatively Confused

Rationality: From AI to Zombies - The Podcast

Play Episode Listen Later Apr 23, 2018


Book 4, Part P, Chapter 195: Qualitatively Confused "Rationality: From AI to Zombies" by Eliezer Yudkowsky Independent audio book project by Walter and James http://from-ai-to-zombies.eu Original source entry: http://lesswrong.com/lw/om/qualitatively_confused/ The complete book is available at MIRI for pay-what-you-want: https://intelligence.org/rationality-ai-zombies/ Source and podcast licensed CC-BY-NC-SA, full text here: https://creativecommons.org/licenses/by-nc-sa/3.0/ Intro/Outro Music by Kevin MacLeod of www.incompetech.com, licensed CC-BY: http://incompetech.com/music/royalty-free/index.html?isrc=USUAN1100708

A Sound Heart
If Anyone Is "In Christ" He or She Is a Qualitatively New Creation

A Sound Heart

Play Episode Listen Later Apr 12, 2018 15:00


God is making people free from the existential dread of sin.  People are being given a new life by believing in Christ.  He is the source of a qualitatively new life.  Jesus gives people real purpose in life.  People are given a real destiny in Jesus.  He can and will set you free from the power of sin.  He will cure your soul of its dread conflict brought on by sin.  Jesus is the Truth and he will set you free to walk in newness of life.

Behind the Podium: Unveiling the Coach
Ep 44: Dr. Giancarlo Licata and Serving Your Client Soul Mates Qualitatively and Quantitatively

Behind the Podium: Unveiling the Coach

Play Episode Listen Later Nov 28, 2017 82:46


Wisdom is getting answers to questions you’ve never had. If you’ve ever wondered how to help clients when you know you don’t have the skills to solve their issues – Dr. Licata has some ideas to help. We sat down and chatted with Dr. Giancarlo Licata, a Pasadena based Head & Neck Chiropractor, Co-creator of the Pasadena Integrative Community of Health Professionals, Interdisciplinary Concussion specialist, multi-linguist, graduate of UCSD with a B.A. in Political Science and a minor in Spanish literature, former guest on The Ricki Lake Show, husband, and father of three. Dr. Licata is one of only 220 doctors in the United States trained in NUCCA (National Upper Cervical Chiropractic Association). He is board certified by the National Board and California Board of Chiropractic Examiners and works with individuals with complex challenges to their head and neck. He is also the clinic director at Vital Head & Spinal Care in Pasadena, California. We cover: The importance of surrounding yourself with other experts How to create a network of specialists How to set up systems to audit clients and track progress How to extricate yourself from seeing things in only one paradigm Why shifting your mindset is your biggest obstacle and more! For references to everything mentioned in this episode, head over to www.behindthepodiumpodcast.com. If you enjoyed this episode, please leave us a review and share with your friends and colleagues!

Bill Murphy's  RedZone Podcast | World Class IT Security
#042: How To Apply Socratic Thinking to Build Defensible IT Security investments

Bill Murphy's RedZone Podcast | World Class IT Security

Play Episode Listen Later Feb 17, 2016 54:20


Today I had an interesting conversation with Jack Jones. This is Jack’s second time on the show and I loved our discussion. It is a gem of learning and is packed with information that you can use right away. Jack was one of the first CISOs in the United States and he is the inventor of the FAIR model for analyzing Information Security Risk. Jack’s bio is extensive and here is a short list of his accomplishments. Jack Jones has worked in technology for over 30 years, and information security and risk management for 25 years. He has over nine years of experience as a CISO with three different companies, including five years at a Fortune 100 financial services company. He received the ISSA Excellence in the Field of Security Practices award at the 2006 RSA Conference. In 2007, he was selected as a finalist for the Information Security Executive of the Year, Central United States, and in 2012 was honored with the CSO Compass award for leadership in risk management. Jones is also the author and creator of the Factor Analysis of Information Risk (FAIR) framework. Currently, Jones serves on the ISC2 Ethics Committee, and is the Executive Vice President, Research and Development of Risk Lens, Inc. Suffice it to say that Jack is a rock star in the Information Security and IT risk community! 6 Key Points: Why top 10 lists for IT Security are useless How to add probability and possibility of events happening into your risk models How to present data that your board of directors will love How to develop range into your communication How to apply critical thinking, logic and Socratic methods to your analysis How to apply rigor in developing a defensible argument Sponsored By: This episode is sponsored by the CIO Security Scoreboard, a powerful tool that helps you communicate the status of your IT Security program visually in just a few minutes. Time Stamped Show Notes: FAIR is a framework of critical thinking and a model or codification of risk and how risk works. Provides a reference for thinking through complex risk problems, risk assumptions and enabling risk discussions [04:53] Surfacing assumptions, enabling debate-like dialogue in this discussion [05:15] Jack Jones one of the first CISOs. CISO late 1980s. How to present risk? Technique with FAIR possibility vs probability what is it? Eg. McAfee virus impacting company and disrupting operations. Genesis was a 2003 XP system that contractor required them to have on their network. Sophisticated tools. Blindsided for a few days - because an admin was using a personal machine for surfing, so how would somebody apply FAIR. Knew administrator issues. How do you apply FAIR analysis to this? [08:49] In organization that knows it has control deficiencies. In doing risk analysis of landscape and threat landscape we face, what are the scenarios that could be painful. Develop straightforward taxonomy and availability high level. From a confidentiality perspective, what assets would be exposed, and from an integrity perspective. [10:00] Deeper level of granularity - step-by-step process develop Taxonomy of events that represents loss. Then analyze likelihood of loss [10:39] If organization done that and they might have, when there is significant impact even if the likelihood is low - you want controls that enable fast detection and recovery. If down for three days, then recovery rate is not what it should be.
Organization - in a rigorous fashion - lay out the risk landscape which on the surface they understand exist but don't know where it's relative to the other things in their landscape. Way they triage their world and identify set of conditions - work to be done and could have prioritized it more effectively [12:20] Concept of probability vs possibility linked to Russian Roulette. Organizations fall into the trap of possibility and not probability considerations. If we focus solely on events that are conceivably possible and hugely painful - an asteroid strike would come up and what we would do for an asteroid strike. There has to be a probability element - you can't just solely focus on possibility. Possibility of bad events 100-percent but probability might be lower. Crucial in order to prioritize. [14:38] If there was a risk with old systems because of the admin issue it would have and fitted access to work things out how would you reverse engineer that situation [15:09] In that instance - high probability of encountering malware - the only question from a probability perspective is what are odds of encountering malware that their preventative measures aren’t going to handle. Most security professionals would say that that could happen with some regularity so probability is higher. From a threat perspective zero day stuff happens with some regularity – and we would be able to come up with likelihood estimate. One of the factors that plays into the likelihood is the administrative privilege exposure. What it does is it allows the malware to have greater control and broader impact than otherwise [17:35] Patching situation would be factors in the evaluation as well but they might have - fragile state wholly dependent on that malware situation due to administrative situation and patching situation. They are just fragile due to the single control element. Within FAIR there is probability and impact and also 2 states: 1) fragile, depending on a single control in an active threat landscape, and the other is 2) unstable, where an asset you want to protect exists in a not very active landscape but you don't have any preventative or resistance control. Databases - evaluating the scenario of rogue database administrators. Nothing to stop it. So when you identify unstable conditions you look at how you would resolve and detect a situation because you have no resistant option. [19:36] In evaluating Probability and Impact and two qualifiers fragile and unstable [20:01] How do you estimate likelihood of happening. All kinds of downsides to scales. Doesn’t allow you to effectively articulate best case, worst case, & most likely case - range of outcomes. From a probability perspective not a lot of work to look at industry data relevant to technologies in this particular organization. Two ends of the spectrum. Do you see the trends what's more or fewer? Using the data set the minimum at 5 that are relevant to technology concerned about Maximum 15 or perhaps 15 or 20 – per year. Depending on quality of data - make the range wider or narrower. Faithfully representing your range of uncertainty is critical. Don't put a discrete number. I don't want a number, I want a range. Two dimensions. The width of the range. And the most likely value - how flat or sharply peaked it should be. PERT distribution. Expressing range of uncertainty. [24:09] Interesting in profession when you try to quantify something precision takes a distant second to accuracy.
When I give you a range that incorporates the actual outcomes in my range – then my range is accurate, and you increase the probability of accuracy with wider ranges – but with diminishing returns [26:25] The useful degree of precision with a confidence level you can stand behind – Process of Calibration, How to Measure Anything by Douglas Hubbard, a book that covers this beautifully [26:44] Utility for decision-making vs estimating concept, in expressing ranges - when presenting risk to decision makers trying to influence decision to make buying decisions. Calibration piece helps the decision maker make this decision [28:59] Blog series written about this - look at ordinal scales organizations rely on. HIGH MEDIUM LOW. They will identify top ten risks - they have identified 10 things in the landscape that they would place into a high risk bucket. Top 3 - how do you differentiate in that bucket when choosing why things don't go into the bucket. Can't identify why things don't go into that bucket they don't think things through with sufficient [30:25] Not very effective if you use quantitative measures quantitative measures allows you to distribute one above another I would focus on the thing that I have less certainty on. The lack of certainty is a risk factor that needs to be dealt with [31:50] Telescopic piece and level of sophistication is not sufficiently advanced to explain to business decision maker why they can't spend money in that area so will spend money in this area. How can someone reconcile real security and audit findings – which are at odds [33:46] Key component is applying real rigor to developing scenarios when encryption at rest is relevant. Encrypt your hard drive - very useful. But a lot of scenarios where the data can be compromised and encryption increases risk. Define set of scenarios where data is at risk in that subset where encryption adds value and where it does not. Then evaluating impact. Then have means for comparing solutions. [36:35] Laying out the scenarios is sufficient for people to realize which options are better. [37:05] Set of control opportunities that cost a fraction and show through analysis how it reduces risk more than encryption. [37:38] Some IT professionals feel that (engagement) implies combat. They feel they are protecting an organization so we are asking a government entity auditor but what about educating people to prevent risk. [38:55] People are hesitant to go toe-to-toe against a regulator auditor – operating from intuition. They haven't applied a rigorous approach to developing the argument - sometimes intuition is wrong and then you realize they're right. That's ok. But very often intuition is right. Need framework (like FAIR) for critical thinking through complex problems and developing argument and rationale and surfacing assumptions making estimates - put before the auditors, if you go through the process to the authoritative figure have you has not applied any rigor to it [40:35] Critical thinking, the Socratic method, logical way of thinking. Interesting to back up intuition with a rigorous approach to have a defensible argument [41:21] Looking at problems and potential solutions in a more rigorous critical-thinking-like fashion is hugely valuable. Just having the framework for discussing and debating things – hugely valuable. [42:27] Another component is normalizing terminology. [43:02] FAIR model - really valuable.
Every organization's risk summary includes top 10 risks and that includes cybercriminals, social engineering, change management, mobile media and cloud computing. And if you look at those - cybercriminal is a threat community and cloud computing a technology, change management is a control element. It's like comparing apples and oranges. Those are not loss scenarios. FAIR Institute Blog that discusses this. How organizations are identifying and managing top 10 risks and it's a huge problem. We cannot expect to mature if we can't get a fundamental nomenclature correct [45:53] What are the easy steps someone can take to transform the top 10 list into loss scenarios? [46:21] Create 2 lists of the top loss scenarios - taxonomy is a list of outcomes. Taxonomy is a categorization. Categorize loss events to a level of abstraction that's balanced. Balance to be struck. Easy to recognize where that balance lies as you go through the process. Qualitatively or quantitatively then do a probability & impact around those and that will tell you which are the top 5 or 10. [48:02] Other list - control deficiencies. Risk assessment is controlled assessment. How to prioritize what contributes most of this risk. That identifies top control positions. Can't mix them together. Simple way - get a handle on the risk landscape and determine focus. Look at list of top 10 deficiencies - map them to which scenarios highly relevant less likely relevant - these three or four need to be hitting these hard. We can say over time this will reduce or change this list scenario. [49.24] Recognizing you have to have two lists - a blended top 10 list is worse than useless; you can't compare because it's misinformation in the worst way [49:47] Recommend Measuring and Managing Information Risk: A FAIR Approach co-authored with Dr. Jack Freund. FAIR Institute is where to get education and an ecosystem of people and organizations that leverage the framework. Universities taking part. Institute, free copy of book but different membership levels soft launch in December formal launch in February [52:10] The Open Group (owns IP for Unix) has resources for FAIR and certification for practitioners. Risk Lens blog resources case studies and the book [52:22] Risk Lens does FAIR consulting, and the Open Group is the organization that holds the intellectual property and has adopted FAIR; the FAIR Institute was recently founded [53:06] How to get in touch with Jack: Twitter LinkedIn Risk Lens Profile RSA Conference Profile InfoSec 2016 Conference Profile Key Resources: Bill's first interview with Jack Measuring and Managing Information Risk: A FAIR Approach co-authored with Dr. Jack Freund FAIR Institute How to Measure Anything - Douglas Hubbard Risk Lens Opengroup.org FAIR Institute Blog Jack Freund LI profile Credits: Outro music provided by Ben's Sound
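The range-based estimation Jack describes lends itself to a quick simulation. Below is a minimal, hypothetical sketch (not the FAIR standard itself and not RiskLens code): it assumes you already have calibrated min / most-likely / max estimates for loss event frequency and loss magnitude, models each with a PERT-style (scaled beta) distribution, and Monte Carlo samples an annualized loss distribution so you can report a range instead of a single discrete number.

```python
# Hypothetical FAIR-style Monte Carlo sketch: turn calibrated min / most-likely / max
# estimates into an annualized loss distribution. Illustrative only; the estimates
# below are made-up placeholders, not figures from the episode.
import numpy as np

rng = np.random.default_rng(42)

def pert_samples(minimum, most_likely, maximum, size, lam=4.0):
    """Sample a PERT-style (scaled beta) distribution from a three-point estimate."""
    alpha = 1 + lam * (most_likely - minimum) / (maximum - minimum)
    beta = 1 + lam * (maximum - most_likely) / (maximum - minimum)
    return minimum + rng.beta(alpha, beta, size) * (maximum - minimum)

n = 100_000
# Loss event frequency: e.g. 5 to 20 relevant malware events per year, most likely 8.
frequency = pert_samples(5, 8, 20, n)
# Loss magnitude per event: e.g. $10k to $2M, most likely $75k.
magnitude = pert_samples(10_000, 75_000, 2_000_000, n)

annual_loss = frequency * magnitude
print(f"10th percentile: ${np.percentile(annual_loss, 10):,.0f}")
print(f"Median:          ${np.percentile(annual_loss, 50):,.0f}")
print(f"90th percentile: ${np.percentile(annual_loss, 90):,.0f}")
```

This simplification multiplies a sampled frequency by a single per-event magnitude; fuller FAIR tooling would sample an event count and sum individually sampled losses, but the output is the same kind of thing Jack argues for - a range with a most-likely value rather than a HIGH/MEDIUM/LOW label.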

Women Taking the Lead with Jodi Flynn
088: Nellie Akalp on Working Qualitatively vs Quantitatively

Women Taking the Lead with Jodi Flynn

Play Episode Listen Later Jan 13, 2016 31:08


Nellie Akalp is a serial entrepreneur, small business expert, speaker and author. She is the founder and CEO of CorpNet.com - an online legal document filing service - where she helps entrepreneurs start, grow and maintain a business. Nellie shares her expert tips with readers at Forbes, Entrepreneur and Mashable and is a regular guest expert on the Fox Small Business Center. Nellie has presented a workshop at Small Biz Expo and sat down with members of Girls in Tech, General Assembly, and more, to inspire and motivate others to make their business dreams a reality. Playing Small Moment Nellie started her first business straight out of law school and right at the start of the Internet. The business took right off, as there was not a lot of competition. In 2005, the company was sold for a large sum of cash. Nellie took time off to focus more on being a mom. After her non-compete clause ran out, she realized she was too young, bored and passionate to take early retirement. So, Nellie jumped back into the game and started CorpNet in 2009, at the height of the recession. She was bombarded by competition and felt out of place with the new social media world we were in. After Nellie’s quick success in the past, she suffered from a false sense of being able to do the same thing again. Nellie had to adapt to a new way of doing things, and rethink her approach. The Wake Up Call Nellie is very public about her panic attacks, even writing about them on The Huffington Post. A year ago she went to an Intuit convention and listened to Ariana Huffington speak publicly about her panic attack. Nellie realized she was not alone, and wanted to share her experiences. Style of Leadership Nellie is not your typical CEO. She expects a ton from her team, but she is very ‘all for one, one for all’. A great big company is a result of the sum of its parts. What Are You Excited About? CorpNet is updating their CRM software into a new, more engaging platform for their clients and employees. Leadership Practice Nellie has an open door policy where any of the employees can talk to her about what is going on. There should be a balance between keeping your clients and your employees happy. Nellie always makes sure her employees are thriving in a positive environment. She focuses on the positive rather than the negative. Book to Develop Leadership “Burnt Toast: And Other Philosophies of Life” by Teri Hatcher Inspirational Quote “Make the impossible your reality and it will ultimately become your reality.” Interview Links www.corpnet.com twitter.com/CorpNetNellie linkedin.com/in/nellieakalp facebook.com/nellie.akalp If you enjoyed this episode subscribe in iTunes or Stitcher Radio and never miss out on inspiration and community!   

Rationality: From AI to Zombies
Qualitatively Confused

Rationality: From AI to Zombies

Play Episode Listen Later Mar 9, 2015 8:15


Book IV: Mere Reality - Part P: Reductionism 101 - Qualitatively Confused

A Sound Heart
Anyone "In Christ" Is a Qualitatively New Creation

A Sound Heart

Play Episode Listen Later Dec 21, 2014 31:00


This episode of Pneumatikos will explore the power of the New Being in Iesous.  The Iesous did not come to establish a new religion.  He came to give his life as a ransom for many. There are those who propose that he never existed, but they have not studied the documents.  They have merely taken up the banner of unbelief and wrapped it in pseudo-intellectual garb.  Those who will not believe the Word of God are still accountable to Him.  They are without excuse (apology).  Heaven and earth will pass away but the Word of God will never pass away.

Urantia Book
12 - The Universe of Universes

Urantia Book

Play Episode Listen Later Oct 4, 2014


The Universe of Universes (128.1) 12:0.1 THE immensity of the far-flung creation of the Universal Father is utterly beyond the grasp of finite imagination; the enormousness of the master universe staggers the concept of even my order of being. But the mortal mind can be taught much about the plan and arrangement of the universes; you can know something of their physical organization and marvelous administration; you may learn much about the various groups of intelligent beings who inhabit the seven superuniverses of time and the central universe of eternity. (128.2) 12:0.2 In principle, that is, in eternal potential, we conceive of material creation as being infinite because the Universal Father is actually infinite, but as we study and observe the total material creation, we know that at any given moment in time it is limited, although to your finite minds it is comparatively limitless, virtually boundless. (128.3) 12:0.3 We are convinced, from the study of physical law and from the observation of the starry realms, that the infinite Creator is not yet manifest in finality of cosmic expression, that much of the cosmic potential of the Infinite is still self-contained and unrevealed. To created beings the master universe might appear to be almost infinite, but it is far from finished; there are still physical limits to the material creation, and the experiential revelation of the eternal purpose is still in progress. 1. Space Levels of the Master Universe (128.4) 12:1.1 The universe of universes is not an infinite plane, a boundless cube, nor a limitless circle; it certainly has dimensions. The laws of physical organization and administration prove conclusively that the whole vast aggregation of force-energy and matter-power functions ultimately as a space unit, as an organized and co-ordinated whole. The observable behavior of the material creation constitutes evidence of a physical universe of definite limits. The final proof of both a circular and delimited universe is afforded by the, to us, well-known fact that all forms of basic energy ever swing around the curved path of the space levels of the master universe in obedience to the incessant and absolute pull of Paradise gravity. (128.5) 12:1.2 The successive space levels of the master universe constitute the major divisions of pervaded space — total creation, organized and partially inhabited or yet to be organized and inhabited. If the master universe were not a series of elliptical space levels of lessened resistance to motion, alternating with zones of relative quiescence, we conceive that some of the cosmic energies would be observed to shoot off on an infinite range, off on a straight-line path into trackless space; but we never find force, energy, or matter thus behaving; ever they whirl, always swinging onward in the tracks of the great space circuits. (129.1) 12:1.3 Proceeding outward from Paradise through the horizontal extension of pervaded space, the master universe is existent in six concentric ellipses, the space levels encircling the central Isle: (129.2) 12:1.4 1. The Central Universe — Havona. (129.3) 12:1.5 2. The Seven Superuniverses. (129.4) 12:1.6 3. The First Outer Space Level. (129.5) 12:1.7 4. The Second Outer Space Level. (129.6) 12:1.8 5. The Third Outer Space Level. (129.7) 12:1.9 6. The Fourth and Outermost Space Level. (129.8) 12:1.10 Havona, the central universe, is not a time creation; it is an eternal existence. 
This never-beginning, never-ending universe consists of one billion spheres of sublime perfection and is surrounded by the enormous dark gravity bodies. At the center of Havona is the stationary and absolutely stabilized Isle of Paradise, surrounded by its twenty-one satellites. Owing to the enormous encircling masses of the dark gravity bodies about the fringe of the central universe, the mass content of this central creation is far in excess of the total known mass of all seven sectors of the grand universe. (129.9) 12:1.11 The Paradise-Havona System, the eternal universe encircling the eternal Isle, constitutes the perfect and eternal nucleus of the master universe; all seven of the superuniverses and all regions of outer space revolve in established orbits around the gigantic central aggregation of the Paradise satellites and the Havona spheres. (129.10) 12:1.12 The Seven Superuniverses are not primary physical organizations; nowhere do their boundaries divide a nebular family, neither do they cross a local universe, a prime creative unit. Each superuniverse is simply a geographic space clustering of approximately one seventh of the organized and partially inhabited post-Havona creation, and each is about equal in the number of local universes embraced and in the space encompassed. Nebadon, your local universe, is one of the newer creations in Orvonton, the seventh superuniverse. (129.11) 12:1.13 The Grand Universe is the present organized and inhabited creation. It consists of the seven superuniverses, with an aggregate evolutionary potential of around seven trillion inhabited planets, not to mention the eternal spheres of the central creation. But this tentative estimate takes no account of architectural administrative spheres, neither does it include the outlying groups of unorganized universes. The present ragged edge of the grand universe, its uneven and unfinished periphery, together with the tremendously unsettled condition of the whole astronomical plot, suggests to our star students that even the seven superuniverses are, as yet, uncompleted. As we move from within, from the divine center outward in any one direction, we do, eventually, come to the outer limits of the organized and inhabited creation; we come to the outer limits of the grand universe. And it is near this outer border, in a far-off corner of such a magnificent creation, that your local universe has its eventful existence. (129.12) 12:1.14 The Outer Space Levels. Far out in space, at an enormous distance from the seven inhabited superuniverses, there are assembling vast and unbelievably stupendous circuits of force and materializing energies. Between the energy circuits of the seven superuniverses and this gigantic outer belt of force activity, there is a space zone of comparative quiet, which varies in width but averages about four hundred thousand light-years. These space zones are free from star dust — cosmic fog. Our students of these phenomena are in doubt as to the exact status of the space-forces existing in this zone of relative quiet which encircles the seven superuniverses. But about one-half million light-years beyond the periphery of the present grand universe we observe the beginnings of a zone of an unbelievable energy action which increases in volume and intensity for over twenty-five million light-years. 
These tremendous wheels of energizing forces are situated in the first outer space level, a continuous belt of cosmic activity encircling the whole of the known, organized, and inhabited creation. (130.1) 12:1.15 Still greater activities are taking place beyond these regions, for the Uversa physicists have detected early evidence of force manifestations more than fifty million light-years beyond the outermost ranges of the phenomena in the first outer space level. These activities undoubtedly presage the organization of the material creations of the second outer space level of the master universe. (130.2) 12:1.16 The central universe is the creation of eternity; the seven superuniverses are the creations of time; the four outer space levels are undoubtedly destined to eventuate-evolve the ultimacy of creation. And there are those who maintain that the Infinite can never attain full expression short of infinity; and therefore do they postulate an additional and unrevealed creation beyond the fourth and outermost space level, a possible ever-expanding, never-ending universe of infinity. In theory we do not know how to limit either the infinity of the Creator or the potential infinity of creation, but as it exists and is administered, we regard the master universe as having limitations, as being definitely delimited and bounded on its outer margins by open space. 2. The Domains of the Unqualified Absolute (130.3) 12:2.1 When Urantia astronomers peer through their increasingly powerful telescopes into the mysterious stretches of outer space and there behold the amazing evolution of almost countless physical universes, they should realize that they are gazing upon the mighty outworking of the unsearchable plans of the Architects of the Master Universe. True, we do possess evidences which are suggestive of the presence of certain Paradise personality influences here and there throughout the vast energy manifestations now characteristic of these outer regions, but from the larger viewpoint the space regions extending beyond the outer borders of the seven superuniverses are generally recognized as constituting the domains of the Unqualified Absolute. (130.4) 12:2.2 Although the unaided human eye can see only two or three nebulae outside the borders of the superuniverse of Orvonton, your telescopes literally reveal millions upon millions of these physical universes in process of formation. Most of the starry realms visually exposed to the search of your present-day telescopes are in Orvonton, but with photographic technique the larger telescopes penetrate far beyond the borders of the grand universe into the domains of outer space, where untold universes are in process of organization. And there are yet other millions of universes beyond the range of your present instruments. (130.5) 12:2.3 In the not-distant future, new telescopes will reveal to the wondering gaze of Urantian astronomers no less than 375 million new galaxies in the remote stretches of outer space. At the same time these more powerful telescopes will disclose that many island universes formerly believed to be in outer space are really a part of the galactic system of Orvonton. The seven superuniverses are still growing; the periphery of each is gradually expanding; new nebulae are constantly being stabilized and organized; and some of the nebulae which Urantian astronomers regard as extragalactic are actually on the fringe of Orvonton and are traveling along with us. 
(131.1) 12:2.4 The Uversa star students observe that the grand universe is surrounded by the ancestors of a series of starry and planetary clusters which completely encircle the present inhabited creation as concentric rings of outer universes upon universes. The physicists of Uversa calculate that the energy and matter of these outer and uncharted regions already equal many times the total material mass and energy charge embraced in all seven superuniverses. We are informed that the metamorphosis of cosmic force in these outer space levels is a function of the Paradise force organizers. We also know that these forces are ancestral to those physical energies which at present activate the grand universe. The Orvonton power directors, however, have nothing to do with these far-distant realms, neither are the energy movements therein discernibly connected with the power circuits of the organized and inhabited creations. (131.2) 12:2.5 We know very little of the significance of these tremendous phenomena of outer space. A greater creation of the future is in process of formation. We can observe its immensity, we can discern its extent and sense its majestic dimensions, but otherwise we know little more about these realms than do the astronomers of Urantia. As far as we know, no material beings on the order of humans, no angels or other spirit creatures, exist in this outer ring of nebulae, suns, and planets. This distant domain is beyond the jurisdiction and administration of the superuniverse governments. (131.3) 12:2.6 Throughout Orvonton it is believed that a new type of creation is in process, an order of universes destined to become the scene of the future activities of the assembling Corps of the Finality; and if our conjectures are correct, then the endless future may hold for all of you the same enthralling spectacles that the endless past has held for your seniors and predecessors. 3. Universal Gravity (131.4) 12:3.1 All forms of force-energy — material, mindal, or spiritual — are alike subject to those grasps, those universal presences, which we call gravity. Personality also is responsive to gravity — to the Father’s exclusive circuit; but though this circuit is exclusive to the Father, he is not excluded from the other circuits; the Universal Father is infinite and acts over all four absolute-gravity circuits in the master universe: (131.5) 12:3.2 1. The Personality Gravity of the Universal Father. (131.6) 12:3.3 2. The Spirit Gravity of the Eternal Son. (131.7) 12:3.4 3. The Mind Gravity of the Conjoint Actor. (131.8) 12:3.5 4. The Cosmic Gravity of the Isle of Paradise. (131.9) 12:3.6 These four circuits are not related to the nether Paradise force center; they are neither force, energy, nor power circuits. They are absolute presence circuits and like God are independent of time and space. (132.1) 12:3.7 In this connection it is interesting to record certain observations made on Uversa during recent millenniums by the corps of gravity researchers. This expert group of workers has arrived at the following conclusions regarding the different gravity systems of the master universe: (132.2) 12:3.8 1. Physical Gravity. Having formulated an estimate of the summation of the entire physical-gravity capacity of the grand universe, they have laboriously effected a comparison of this finding with the estimated total of absolute gravity presence now operative. 
These calculations indicate that the total gravity action on the grand universe is a very small part of the estimated gravity pull of Paradise, computed on the basis of the gravity response of basic physical units of universe matter. These investigators reach the amazing conclusion that the central universe and the surrounding seven superuniverses are at the present time making use of only about five per cent of the active functioning of the Paradise absolute-gravity grasp. In other words: At the present moment about ninety-five per cent of the active cosmic-gravity action of the Isle of Paradise, computed on this totality theory, is engaged in controlling material systems beyond the borders of the present organized universes. These calculations all refer to absolute gravity; linear gravity is an interactive phenomenon which can be computed only by knowing the actual Paradise gravity. (132.3) 12:3.9 2. Spiritual Gravity. By the same technique of comparative estimation and calculation these researchers have explored the present reaction capacity of spirit gravity and, with the co-operation of Solitary Messengers and other spirit personalities, have arrived at the summation of the active spirit gravity of the Second Source and Center. And it is most instructive to note that they find about the same value for the actual and functional presence of spirit gravity in the grand universe that they postulate for the present total of active spirit gravity. In other words: At the present time practically the entire spirit gravity of the Eternal Son, computed on this theory of totality, is observable as functioning in the grand universe. If these findings are dependable, we may conclude that the universes now evolving in outer space are at the present time wholly nonspiritual. And if this is true, it would satisfactorily explain why spirit-endowed beings are in possession of little or no information about these vast energy manifestations aside from knowing the fact of their physical existence. (132.4) 12:3.10 3. Mind Gravity. By these same principles of comparative computation these experts have attacked the problem of mind-gravity presence and response. The mind unit of estimation was arrived at by averaging three material and three spiritual types of mentality, although the type of mind found in the power directors and their associates proved to be a disturbing factor in the effort to arrive at a basic unit for mind-gravity estimation. There was little to impede the estimation of the present capacity of the Third Source and Center for mind-gravity function in accordance with this theory of totality. Although the findings in this instance are not so conclusive as in the estimates of physical and spirit gravity, they are, comparatively considered, very instructive, even intriguing. These investigators deduce that about eighty-five per cent of the mind-gravity response to the intellectual drawing of the Conjoint Actor takes origin in the existing grand universe. This would suggest the possibility that mind activities are involved in connection with the observable physical activities now in progress throughout the realms of outer space. While this estimate is probably far from accurate, it accords, in principle, with our belief that intelligent force organizers are at present directing universe evolution in the space levels beyond the present outer limits of the grand universe. Whatever the nature of this postulated intelligence, it is apparently not spirit-gravity responsive. 
(133.1) 12:3.11 But all these computations are at best estimates based on assumed laws. We think they are fairly reliable. Even if a few spirit beings were located in outer space, their collective presence would not markedly influence calculations involving such enormous measurements. (133.2) 12:3.12 Personality Gravity is noncomputable. We recognize the circuit, but we cannot measure either qualitative or quantitative realities responsive thereto. 4. Space and Motion (133.3) 12:4.1 All units of cosmic energy are in primary revolution, are engaged in the execution of their mission, while swinging around the universal orbit. The universes of space and their component systems and worlds are all revolving spheres, moving along the endless circuits of the master universe space levels. Absolutely nothing is stationary in all the master universe except the very center of Havona, the eternal Isle of Paradise, the center of gravity. (133.4) 12:4.2 The Unqualified Absolute is functionally limited to space, but we are not so sure about the relation of this Absolute to motion. Is motion inherent therein? We do not know. We know that motion is not inherent in space; even the motions of space are not innate. But we are not so sure about the relation of the Unqualified to motion. Who, or what, is really responsible for the gigantic activities of force-energy transmutations now in progress out beyond the borders of the present seven superuniverses? Concerning the origin of motion we have the following opinions: (133.5) 12:4.3 1. We think the Conjoint Actor initiates motion in space. (133.6) 12:4.4 2. If the Conjoint Actor produces the motions of space, we cannot prove it. (133.7) 12:4.5 3. The Universal Absolute does not originate initial motion but does equalize and control all of the tensions originated by motion. (133.8) 12:4.6 In outer space the force organizers are apparently responsible for the production of the gigantic universe wheels which are now in process of stellar evolution, but their ability so to function must have been made possible by some modification of the space presence of the Unqualified Absolute. (133.9) 12:4.7 Space is, from the human viewpoint, nothing — negative; it exists only as related to something positive and nonspatial. Space is, however, real. It contains and conditions motion. It even moves. Space motions may be roughly classified as follows: (133.10) 12:4.8 1. Primary motion — space respiration, the motion of space itself. (133.11) 12:4.9 2. Secondary motion — the alternate directional swings of the successive space levels. (133.12) 12:4.10 3. Relative motions — relative in the sense that they are not evaluated with Paradise as a base point. Primary and secondary motions are absolute, motion in relation to unmoving Paradise. (133.13) 12:4.11 4. Compensatory or correlating movement designed to co-ordinate all other motions. (134.1) 12:4.12 The present relationship of your sun and its associated planets, while disclosing many relative and absolute motions in space, tends to convey the impression to astronomic observers that you are comparatively stationary in space, and that the surrounding starry clusters and streams are engaged in outward flight at ever-increasing velocities as your calculations proceed outward in space. But such is not the case. You fail to recognize the present outward and uniform expansion of the physical creations of all pervaded space. Your own local creation (Nebadon) participates in this movement of universal outward expansion. 
The entire seven superuniverses participate in the two-billion-year cycles of space respiration along with the outer regions of the master universe. (134.2) 12:4.13 When the universes expand and contract, the material masses in pervaded space alternately move against and with the pull of Paradise gravity. The work that is done in moving the material energy mass of creation is space work but not power-energy work. (134.3) 12:4.14 Although your spectroscopic estimations of astronomic velocities are fairly reliable when applied to the starry realms belonging to your superuniverse and its associate superuniverses, such reckonings with reference to the realms of outer space are wholly unreliable. Spectral lines are displaced from the normal towards the violet by an approaching star; likewise these lines are displaced towards the red by a receding star. Many influences interpose to make it appear that the recessional velocity of the external universes increases at the rate of more than one hundred miles a second for every million light-years increase in distance. By this method of reckoning, subsequent to the perfection of more powerful telescopes, it will appear that these far-distant systems are in flight from this part of the universe at the unbelievable rate of more than thirty thousand miles a second. But this apparent speed of recession is not real; it results from numerous factors of error embracing angles of observation and other time-space distortions. (134.4) 12:4.15 But the greatest of all such distortions arises because the vast universes of outer space, in the realms next to the domains of the seven superuniverses, seem to be revolving in a direction opposite to that of the grand universe. That is, these myriads of nebulae and their accompanying suns and spheres are at the present time revolving clockwise about the central creation. The seven superuniverses revolve about Paradise in a counterclockwise direction. It appears that the second outer universe of galaxies, like the seven superuniverses, revolves counterclockwise about Paradise. And the astronomic observers of Uversa think they detect evidence of revolutionary movements in a third outer belt of far-distant space which are beginning to exhibit directional tendencies of a clockwise nature.* (134.5) 12:4.16 It is probable that these alternate directions of successive space processions of the universes have something to do with the intramaster universe gravity technique of the Universal Absolute, which consists of a co-ordination of forces and an equalization of space tensions. Motion as well as space is a complement or equilibrant of gravity. * 5. Space and Time (134.6) 12:5.1 Like space, time is a bestowal of Paradise, but not in the same sense, only indirectly. Time comes by virtue of motion and because mind is inherently aware of sequentiality. From a practical viewpoint, motion is essential to time, but there is no universal time unit based on motion except in so far as the Paradise-Havona standard day is arbitrarily so recognized. The totality of space respiration destroys its local value as a time source. (135.1) 12:5.2 Space is not infinite, even though it takes origin from Paradise; not absolute, for it is pervaded by the Unqualified Absolute. We do not know the absolute limits of space, but we do know that the absolute of time is eternity. (135.2) 12:5.3 Time and space are inseparable only in the time-space creations, the seven superuniverses. 
Nontemporal space (space without time) theoretically exists, but the only truly nontemporal place is Paradise area. Nonspatial time (time without space) exists in mind of the Paradise level of function. (135.3) 12:5.4 The relatively motionless midspace zones impinging on Paradise and separating pervaded from unpervaded space are the transition zones from time to eternity, hence the necessity of Paradise pilgrims becoming unconscious during this transit when it is to culminate in Paradise citizenship. Time-conscious visitors can go to Paradise without thus sleeping, but they remain creatures of time. (135.4) 12:5.5 Relationships to time do not exist without motion in space, but consciousness of time does. Sequentiality can consciousize time even in the absence of motion. Man’s mind is less time-bound than space-bound because of the inherent nature of mind. Even during the days of the earth life in the flesh, though man’s mind is rigidly space-bound, the creative human imagination is comparatively time free. But time itself is not genetically a quality of mind. (135.5) 12:5.6 There are three different levels of time cognizance: (135.6) 12:5.7 1. Mind-perceived time — consciousness of sequence, motion, and a sense of duration. (135.7) 12:5.8 2. Spirit-perceived time — insight into motion Godward and the awareness of the motion of ascent to levels of increasing divinity. (135.8) 12:5.9 3. Personality creates a unique time sense out of insight into Reality plus a consciousness of presence and an awareness of duration. (135.9) 12:5.10 Unspiritual animals know only the past and live in the present. Spirit-indwelt man has powers of prevision (insight); he may visualize the future. Only forward-looking and progressive attitudes are personally real. Static ethics and traditional morality are just slightly superanimal. Nor is stoicism a high order of self-realization. Ethics and morals become truly human when they are dynamic and progressive, alive with universe reality. (135.10) 12:5.11 The human personality is not merely a concomitant of time-and-space events; the human personality can also act as the cosmic cause of such events. 6. Universal Overcontrol (135.11) 12:6.1 The universe is nonstatic. Stability is not the result of inertia but rather the product of balanced energies, co-operative minds, co-ordinated morontias, spirit overcontrol, and personality unification. Stability is wholly and always proportional to divinity. (135.12) 12:6.2 In the physical control of the master universe the Universal Father exercises priority and primacy through the Isle of Paradise; God is absolute in the spiritual administration of the cosmos in the person of the Eternal Son. Concerning the domains of mind, the Father and the Son function co-ordinately in the Conjoint Actor. (136.1) 12:6.3 The Third Source and Center assists in the maintenance of the equilibrium and co-ordination of the combined physical and spiritual energies and organizations by the absoluteness of his grasp of the cosmic mind and by the exercise of his inherent and universal physical- and spiritual-gravity complements. Whenever and wherever there occurs a liaison between the material and the spiritual, such a mind phenomenon is an act of the Infinite Spirit. Mind alone can interassociate the physical forces and energies of the material level with the spiritual powers and beings of the spirit level. 
(136.2) 12:6.4 In all your contemplation of universal phenomena, make certain that you take into consideration the interrelation of physical, intellectual, and spiritual energies, and that due allowance is made for the unexpected phenomena attendant upon their unification by personality and for the unpredictable phenomena resulting from the actions and reactions of experiential Deity and the Absolutes. (136.3) 12:6.5 The universe is highly predictable only in the quantitative or gravity-measurement sense; even the primal physical forces are not responsive to linear gravity, nor are the higher mind meanings and true spirit values of ultimate universe realities. Qualitatively, the universe is not highly predictable as regards new associations of forces, either physical, mindal, or spiritual, although many such combinations of energies or forces become partially predictable when subjected to critical observation. When matter, mind, and spirit are unified by creature personality, we are unable fully to predict the decisions of such a freewill being. (136.4) 12:6.6 All phases of primordial force, nascent spirit, and other nonpersonal ultimates appear to react in accordance with certain relatively stable but unknown laws and are characterized by a latitude of performance and an elasticity of response which are often disconcerting when encountered in the phenomena of a circumscribed and isolated situation. What is the explanation of this unpredictable freedom of reaction disclosed by these emerging universe actualities? These unknown, unfathomable unpredictables — whether pertaining to the behavior of a primordial unit of force, the reaction of an unidentified level of mind, or the phenomenon of a vast preuniverse in the making in the domains of outer space — probably disclose the activities of the Ultimate and the presence-performances of the Absolutes, which antedate the function of all universe Creators. (136.5) 12:6.7 We do not really know, but we surmise that such amazing versatility and such profound co-ordination signify the presence and performance of the Absolutes, and that such diversity of response in the face of apparently uniform causation discloses the reaction of the Absolutes, not only to the immediate and situational causation, but also to all other related causations throughout the entire master universe. (136.6) 12:6.8 Individuals have their guardians of destiny; planets, systems, constellations, universes, and superuniverses each have their respective rulers who labor for the good of their domains. Havona and even the grand universe are watched over by those intrusted with such high responsibilities. But who fosters and cares for the fundamental needs of the master universe as a whole, from Paradise to the fourth and outermost space level? Existentially such overcare is probably attributable to the Paradise Trinity, but from an experiential viewpoint the appearance of the post-Havona universes is dependent on: (136.7) 12:6.9 1. The Absolutes in potential. (136.8) 12:6.10 2. The Ultimate in direction. (137.1) 12:6.11 3. The Supreme in evolutionary co-ordination. (137.2) 12:6.12 4. The Architects of the Master Universe in administration prior to the appearance of specific rulers. (137.3) 12:6.13 The Unqualified Absolute pervades all space. We are not altogether clear as to the exact status of the Deity and Universal Absolutes, but we know the latter functions wherever the Deity and Unqualified Absolutes function. 
The Deity Absolute may be universally present but hardly space present. The Ultimate is, or sometime will be, space present to the outer margins of the fourth space level. We doubt that the Ultimate will ever have a space presence beyond the periphery of the master universe, but within this limit the Ultimate is progressively integrating the creative organization of the potentials of the three Absolutes. 7. The Part and the Whole (137.4) 12:7.1 There is operative throughout all time and space and with regard to all reality of whatever nature an inexorable and impersonal law which is equivalent to the function of a cosmic providence. Mercy characterizes God’s attitude of love for the individual; impartiality motivates God’s attitude toward the total. The will of God does not necessarily prevail in the part — the heart of any one personality — but his will does actually rule the whole, the universe of universes. (137.5) 12:7.2 In all his dealings with all his beings it is true that the laws of God are not inherently arbitrary. To you, with your limited vision and finite viewpoint, the acts of God must often appear to be dictatorial and arbitrary. The laws of God are merely the habits of God, his way of repeatedly doing things; and he ever does all things well. You observe that God does the same thing in the same way, r

Biz vs Dev
Ep 21. Refactored Writing

Biz vs Dev

Play Episode Listen Later Aug 10, 2014 79:31


links: INE ventures James’ Toy Swift app (Grocery List) BuzzFeed reported on a tobacco ad which reportedly invented the word “like” (#14) Hemingway App “Hemingway” on Hemingway Free Jazz Article about books most frequently abandoned and article about most unread books and the Hawking Index On Writing Well George Orwell, “Politics and the English Language” Writing sample (From Das Kapital) I originally planned to just cross out words. But since that required changing tenses and conjugations, I’ll just provide the original and the revised version. The desire after hoarding is in its very nature unsatiable. In its qualitative aspect, or formally considered, money has no bounds to its efficacy, i.e., it is the universal representative of material wealth, because it is directly convertible into any other commodity. But, at the same time, every actual sum of money is limited in amount, and, therefore, as a means of purchasing, has only a limited efficacy. This antagonism between the quantitative limits of money and its qualitative boundlessness, continually acts as a spur to the hoarder in his Sisyphus-like labour of accumulating. It is with him as it is with a conqueror who sees in every new country annexed, only a new boundary. James’ revisions: The desire to hoard is fundamentally unsatiable. Qualitatively, money has unlimited efficacy – it’s the universal representation of material wealth – because it’s convertible into any commodity. But, every actual sum of money is a fixed amount, so it has a limited efficacy as a means of purchasing. This antagonism between the limits and boundlessness of money continually acts as a spur to the hoarder in his Sisyphus-like labour of accumulating. He’s like a conqueror who, with each new country they annex, just sees a new boundary. (music: “So Fine” by Shenandoah and the Night)

Graduate School of Systemic Neurosciences - Digitale Hochschulschriften der LMU

This thesis proposes a new approach to investigating insight problem solving. Introducing magic tricks as a problem solving task, we asked participants to find out the secret method used by the magician to create the magic effect. Based on the theoretical framework of the representational change theory, we argue that magic tricks are ideally suited to investigating insight because, similar to established insight tasks like puzzles, observers’ prior knowledge activates constraints. In order to see through the magic trick, the constraints must be overcome by changing the problem representation.

The aim of the present work is threefold. First, we set out to provide a proof of concept for this novel paradigm by demonstrating that it is actually possible for observers to gain insight into the magician’s secret method and that this can be experienced as a sudden, insightful solution. We therefore aimed at showing that magic tricks can trigger insightful solutions that are accompanied by an Aha! experience. The proposed paradigm could be a useful contribution to the field of insight research, where new stimuli beyond traditional puzzle approaches are sorely needed. Second, the present work aims to contribute to a better understanding of the subjective Aha! experience, which is currently often relied on as an important classification criterion in neuroscientific studies of insight yet remains conceptually vague. The new task will therefore be used to further elucidate the phenomenology of the Aha! experience by assessing participants’ individual solving experiences. Third, we investigated the influence of insight on memory. A positive impact of insight on subsequent solution recall is often implicitly assumed, because the representational change underlying insightful solutions is thought to facilitate the retention of solution knowledge, yet this assumption had never been tested.

A stimulus set of magic tricks was developed in collaboration with a professional magician, covering a large range of different magic effects and methods. After recording the tricks in a standardized theatre setting, pilot studies were run on 45 participants to identify appropriate tricks and to ensure that they were understandable, surprising and difficult. In the main experiment, 50 participants watched the final set of 34 magic tricks with the task of trying to figure out how each trick was accomplished. Each trick was presented up to three times. Upon solving a trick, participants had to indicate whether they had found the solution through sudden insight (i.e. with an Aha! experience) or not. Furthermore, we obtained a detailed characterization of the Aha! experience by asking participants for a comprehensive quantitative evaluation (ratings on a visual analogue scale with fixed dimensions) and a qualitative evaluation (free self-reports), which was repeated after 14 days to check its reliability. At that time, participants were also asked to recall their solutions.

We found that 49% of all magic tricks could be solved and, specifically, that insightful solutions were elicited in 41.1% of solved trials. In comparison with noninsight solutions, insightful solutions (brought about by representational change) were more likely to be correct and were reached earlier. Quantitative evaluations of individual Aha! experiences turned out to be highly reliable, since they remained identical across the time span of 14 days. Qualitatively, participants reported more emotional than cognitive aspects. This primacy of positive emotions was found in qualitative as well as quantitative evaluations, although two different methods were used. We also found that experiencing insight leads to facilitated recall of the respective solutions: 64.4% of all insight solutions were recalled correctly, whereas only 52.4% of all noninsight solutions were recalled correctly after a delay of 14 days.

We demonstrated the great potential of our new approach by providing a proof of concept for magic tricks as a problem solving task and conclude that magic tricks offer a novel way of inducing problem solving that elicits insight. The reliability of individual evaluations of Aha! experiences indicates that, despite its subjective character, the Aha! experience can justifiably be used as a classification criterion. The present work contributes to a better understanding of the phenomenology of the Aha! experience by providing evidence for the occurrence of strong positive emotions as a prevailing aspect. This work also revealed a memory advantage for solutions that were reached through insight, demonstrating a facilitating effect of previous insight experiences on the recall of solutions. This finding supports the assumption that the representational change underlying insightful solving experiences leads to long-lasting changes in the representation of a problem that facilitate the retention of the problem’s solution. In sum, the novel approach presented in this thesis constitutes a valuable contribution to the field of insight research and offers much potential for future research. By revealing the relationship between insight and magic tricks, the framework of the representational change theory is applied to a new domain and thus extended. Combining the novel task domain of magic tricks with established insight tasks might help to further elucidate the process of insight problem solving, which is a characteristic and vital part of human thinking and yet so difficult to grasp.

Mathematics Resources - Mathematics
Qualitatively Different - Mathematics Education for Teachers

Mathematics Resources - Mathematics

Play Episode Listen Later May 1, 2009


Fakultät für Geowissenschaften - Digitale Hochschulschriften der LMU
Untersuchungsergebnisse zur Mobilität und Remobilisierung von Kupfer und Antimon in wasserwirtschaftlich relevanten, porösen Lockergesteinen durch Säulenversuche und mit reaktiver Transportmodellierung (Investigation results on the mobility and remobilisation of copper and antimony in porous unconsolidated sediments of relevance to water management, from column experiments and reactive transport modelling)

Fakultät für Geowissenschaften - Digitale Hochschulschriften der LMU

Play Episode Listen Later Apr 25, 2003


The aim of this study is to obtain a better understanding of the hydrological and geochemical contexts of heavy metal transport in water-saturated porous aquifer sediments, with copper (Cu) and antimony (Sb) as the main focus. The investigation is motivated by questions arising from the redevelopment of former industrial waste deposit sites and from groundwater protection. The functional and experimental part of this study initially comprised the planning, conception and set-up of a column arrangement, which served as a model of the sediment/groundwater system, as well as the development of appropriate preparation and sampling techniques. The experimental set-up was suitable for simulating groundwater flow at velocities between centimetres and metres per day. In order to differentiate the results concerning the migration behaviour of the heavy metals copper and antimony, laboratory experiments were conducted in three aquifer systems of southern Germany (Quaternary Gravel, Tertiary Sand and Dogger-Sand), which differ in their carbonate, clay and iron contents. Quartz sand served as the reference material. Copper was injected in cationic form as copper nitrate (Cu(NO3)2) and antimony in anionic form as potassium antimonate (K[Sb(OH)6]). The investigations included short-term (Dirac impulse) and long-term (pulse injection) experiments. A calcareous water served as eluent for the sediments with high buffer capacity, and a rainwater-like model water as eluent for the sediment with lower buffer capacity.

Determination of distribution coefficients (batch experiments) and thermodynamic modelling of solubility in the different experimental waters yielded maximum values in the calcareous water for both copper and antimony, the solubility of copper being about 30 % lower than that of antimony. Due to the additional complexes found in the DOC water of the Dachauer Moos, copper has a significantly higher mobility; the effect is even more pronounced with EDTA water. Mass balances of the column experiments show that copper is fundamentally less mobile (recovery 0-18 %) than antimony (recovery 85-99 %). The differences in sorption behaviour were reflected in the retardation values, which differed by almost three orders of magnitude between the two elements. Taken together, these results show that the sorption capacity for copper could not be exhausted even after a total input of 201 mg Cu (Quaternary Gravel) by injection pulses over two years, whereas the sorption capacity for antimony was reached after only 10.4 days (Tertiary Sand), 11.4 days (Dogger-Sand) and 1.5 years (Quaternary Gravel), respectively. Qualitatively, the results for copper can be described by fast, irreversible sorption kinetics and those for antimony by slow, reversible kinetics.

Three mathematical models were applied to simulate the experimental data. The observed Sb breakthrough curves were modelled quite well with the one-dimensional linear reaction model of CAMERON & KLUTE (1977), which also yielded the corresponding dynamic reaction parameters. For copper, however, a satisfying fit could not be obtained with this model in a buffered system, for two reasons: 1) precipitation processes and 2) complex surface-active processes, both of which were assumed to occur alongside the predominant sorption reactions. Non-linear model approaches were therefore employed for the fit. Initially, the observed Sb curves were fitted using two-site Langmuir-isotherm first-order kinetics, allowing quantification of sorption capacities, affinities and rates; applying three-site Langmuir-isotherm first-order kinetics additionally accounts for precipitation processes. With all models, a satisfying fit of the desorption processes in the declining part of the curves was obtained, but the sorption processes could still not be described satisfactorily.

Experiments investigating the influence of the two complexing agents DOC and EDTA on the migration behaviour of copper showed a more or less stoichiometric complexation of copper. With the stronger complexing agent EDTA, at least twice the remobilisation achieved with DOC was observed. The strongest EDTA-mediated remobilisation occurred in the unbuffered sediments (49-50 %); significantly lower values were attained in the buffered sediments (17-26 %). Of the copper mass injected into unbuffered sediment, 82 % could be recovered from Dogger-Sand and 83 % from Quartz sand, whereas a maximum of 55 % could be recovered from the buffered sediments.

In order to support the mobility data for copper and antimony, sequential extractions of the column beds were performed after the column experiments. As an essential result, it emerged that copper accumulates increasingly in areas near the surface, while antimony is distributed over the whole profile. Copper, in contrast to antimony, is therefore stably bound at neutral pH and is not transferred, so the two elements differ in their availability. The results of the column experiments indicate that copper is preferentially occluded in organic-matter binding fractions as well as in the immobile aqua regia fraction, whereas antimony is mainly bound in the mobile, easily exchangeable fraction and in the iron and manganese fractions. As this result allows immediate conclusions on the availability and transfer potential of the heavy metals, it also permits estimates of the ecotoxicological impact of copper and antimony and an assessment of the risk potential of the contaminated sediments.
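To make the retardation and breakthrough ideas in the abstract above concrete, here is a minimal, illustrative Python sketch. It is not taken from the dissertation and does not reproduce the Cameron & Klute model or the multi-site Langmuir kinetics; all parameter values (distribution coefficient Kd, bulk density, porosity, flow velocity, dispersivity, column length) are hypothetical. It only shows how a linear Kd maps onto a retardation factor R = 1 + (rho_b/n)*Kd, which is why retardation can span several orders of magnitude between a strongly sorbing cation such as Cu and a weakly sorbing anion such as Sb, and how retardation delays a simple 1-D advection-dispersion breakthrough curve.

import math

# Retardation factor for linear equilibrium sorption:
#   R = 1 + (rho_b / n) * Kd
# rho_b: bulk density [kg/L], n: porosity [-], Kd: distribution coefficient [L/kg].
# All values below are hypothetical, chosen for illustration only.
def retardation(kd, bulk_density=1.8, porosity=0.3):
    return 1.0 + (bulk_density / porosity) * kd

for label, kd in [("strongly sorbing, Cu-like", 100.0),
                  ("weakly sorbing, Sb-like", 0.1)]:
    print(f"{label}: Kd = {kd} L/kg -> R = {retardation(kd):.1f}")

# Simple 1-D advection-dispersion breakthrough for a continuous injection
# (leading term of an Ogata-Banks-type solution), including retardation R.
def breakthrough(x, t, v, dispersivity, R):
    """Relative concentration C/C0 at column position x [m] and time t [d]."""
    if t <= 0.0:
        return 0.0
    D = dispersivity * v  # dispersion coefficient [m^2/d]
    arg = (R * x - v * t) / (2.0 * math.sqrt(D * R * t))
    return 0.5 * math.erfc(arg)

# A non-retarded tracer (R = 1) breaks through much earlier than a retarded
# solute (R = 10) at the column outlet (x = 0.5 m, v = 0.5 m/d).
for t in (1, 5, 20, 100):
    c_tracer = breakthrough(0.5, t, 0.5, 0.01, R=1.0)
    c_sorbed = breakthrough(0.5, t, 0.5, 0.01, R=10.0)
    print(f"t = {t:3d} d: tracer C/C0 = {c_tracer:.2f}, retarded C/C0 = {c_sorbed:.2f}")

With these hypothetical numbers, the Cu-like Kd gives R of roughly 601 while the Sb-like Kd gives R of roughly 1.6, mirroring the wide spread of retardation values reported in the abstract, and the retarded breakthrough curve arrives at the outlet about ten times later than the tracer.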