Podcasts about infer

  • 114 PODCASTS
  • 169 EPISODES
  • 38m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • May 27, 2025 LATEST
infer

POPULARITY

[Popularity trend by year, 2017–2024]


Best podcasts about infer

Latest podcast episodes about infer

Easy Stories in English
Have You Met Elaine? (Intermediate)

Easy Stories in English

Play Episode Listen Later May 27, 2025 50:19


Book a class in June and July and get 50% off! EasyStoriesInEnglish.com/Classes. Have you met Elaine? Elaine is very giving, but not very nice. Oh, she loves everyone, but she shows her love through action, not smiles. And Elaine has a secret, which only I know about... Go to EasyStoriesInEnglish.com/Elaine for the full transcript. Get episodes without adverts + bonus episodes at EasyStoriesInEnglish.com/Support. Your support is appreciated! Level: Intermediate. Genre: Horror. Vocabulary: Cuppa, Hoarder, Ragamuffin, Vandalise, Parish council, Egging, Spot (skin), Fertiliser, Infer, Defrost, Intermittent fasting, Six pack. Setting: Modern. Word Count: 1728. Author: Ariel Goodbody. Learn more about your ad choices. Visit megaphone.fm/adchoices

Podcast Ubuntu Portugal
E348 Inferência Estatística Turbinada

Podcast Ubuntu Portugal

Play Episode Listen Later May 22, 2025 56:00


In this week's return, we sow conspiracy theories about evil librarians with heartburn, knock over an orchestra, break phones at high altitude with offline maps, and Diogo once again mercilessly kills the poor gorilla Harambe, while we discuss the validity (or not) of having turbocharged statistical-inference assistants poking their noses into our daily computer use... but locked in a little box and gagged, so they can't let the cat out of the bag.

Enginears
PhysicsX Trained a Foundation AI Physics Model to Design New Airplanes Using 25M Open-Source Meshes! | Enginears Podcast

Enginears

Play Episode Listen Later May 14, 2025 35:48


If you're keen to share your story, please reach out to us!
Guest: https://www.linkedin.com/in/jakemulley/ | https://www.linkedin.com/in/alvaroazabal/ | https://www.physicsx.ai/careers/
Powered by Artifeks! https://www.linkedin.com/company/artifeksrecruitment | https://www.artifeks.co.uk | https://www.linkedin.com/in/agilerecruiter
LinkedIn: https://www.linkedin.com/company/enginearsio
Twitter: https://x.com/Enginearsio
All Podcast Platforms: https://smartlink.ausha.co/enginears
00:00 - Enginears Intro.
01:30 - Jake & PhysicsX Intro.
04:22 - Alvaro Intro.
07:06 - Cloud compute challenges Jake is facing.
10:02 - Geometry model.
11:38 - Challenges building the geometry model.
13:19 - Infer and compute challenges.
15:07 - The tech demonstrator.
18:18 - Classical engineering challenges.
20:49 - Is that a common notion in physics design?
21:45 - What challenges do you find when engineering for accuracy?
24:16 - As the business grows, are you planning for upcoming challenges?
26:48 - What makes good training practices and processes?
29:04 - Common pitfalls?
30:20 - PhysicsX plans over next 12 months.
33:14 - Jake, Alvaro & PhysicsX Outro.
35:06 - Enginears Outro.
Hosted by Ausha. See ausha.co/privacy-policy for more information.

Taste Radio
Own & Infer – Behind The Scenes Of A ‘Perfect' Brand Refresh

Taste Radio

Play Episode Listen Later May 9, 2025 35:57


After a challenging 2024, Lemon Perfect knew it was time for a bold reset. Founder Yanni Hufnagel led the charge with a reengineered bottle and improved formula, but the brand's comeback wouldn't be complete without a new look. Enter Paula Grant and creative studio Suite9C, tasked with developing a daring visual identity refresh. This is the story of how a brand turned setback into spotlight. Also in this episode: the hosts unpack Guayaki's unprecedented rebrand to Yerba Madre and what it means for the category-defining brand. They also dive into Gopuff's new GoXL product and whether "value" is shaping up to be a defining theme of 2025. Show notes: 0:45: All Rain, All Rain, All Rain. A Dead Rabbit, A Great Thing. Madre Musing. XLerated Delivery. – Where's that Texas heat? The hosts encounter a rainy, gloomy Austin, but at least The Dead Rabbit delivers on every front. Prior to Taste Radio's meetup later in the day, they discuss Guayaki's rebrand to Yerba Madre and why they're excited to hear from Ghost co-founder Dan Lourenco at BevNET Live. John professes his love for Gopuff, but is he excited about the prospect of buying 12 rolls of toilet paper from the delivery platform? Ray feels left out of a meeting with an Austin-based founder of chai drinks. 12:55: Paula Grant, Founder, Suite9C & Yanni Hufnagel, Founder, Lemon Perfect – Paula chats about Taste Radio's NYC meetup and stealthy afterparty, before Yanni talks about how Lemon Perfect's product quality issues spurred the company's refreshed formulation and decision to pursue a brand refresh. Paula explains why she rejects the traditional "agency vs. founder" model, instead favoring deeply collaborative, in-the-room design processes. Yanni, a self-described detail obsessive, talks about their intensely collaborative design process, from aligning on visual simplicity to debating tiny but crucial details, like color balance, label hierarchy, and shelf visibility. Paula emphasizes the importance of powerful design that goes beyond aesthetics to storytelling, brand trust, and commercial performance. They both discuss how the refreshed identity positions Lemon Perfect for future innovation and category expansion. Brands in this episode: Yerba Madre, Ghost, Uncrustables, Chobani, Kimbala, Lemon Perfect, Vitaminwater, BodyArmor

Buduj značku
Martin Tůma: ve strojírenství musíte umět ušít zákazníkovi byznys na míru

Buduj značku

Play Episode Listen Later Apr 17, 2025 26:50


Martin Tůma is an expert in mechanical engineering and energy, and since 2017 the CEO of Infer. The company specializes primarily in supplying piping systems and components, and in welding and assembling technological units in nuclear energy, petrochemicals, water management, and automotive.

Beyond the Hype
Should fully autonomous AI agents be developed?

Beyond the Hype

Play Episode Listen Later Apr 15, 2025 40:23


In this episode, Oliver Cronk is joined by colleagues David Rees, Hélène Sauvé, Ivan Mladjenovic and Emma Pearce. Together, they delve into the practical applications and limitations of agentic AI and its implications for enterprise AI deployments. The team shares insights from the ‘Infer' research and development projects, through which Scott Logic produced and open-sourced InferLLM (a local, personalised AI agent) and InferESG (which uses AI agents to identify greenwashing in Environmental, Social and Governance reports). With real-world examples and expert perspectives, the panel provides a nuanced view of whether fully autonomous agents are hype or reality in 2025. They discuss the balance between human oversight and automation, and emphasise the importance of transparency and traceability in AI systems. They also consider the ethical considerations of self-building agents and the challenges of ensuring reliable AI outputs. Have a listen to gain a deeper understanding of the evolving landscape of agentic AI and its potential impact on various sectors.
Useful links for this episode:
InferLLM on GitHub – Open-sourced by Scott Logic
InferESG on GitHub – Open-sourced by Scott Logic
InferESG: Augmenting ESG Analysis with Generative AI – David Rees, Scott Logic
InferESG: Finding the Right Architecture for AI-Powered ESG Analysis – David Rees, Scott Logic
InferESG: Harnessing agentic AI for due diligence – Scott Logic case study
Beyond the Hype: Will we ever be able to secure GenAI? – Scott Logic
Beyond the Hype: Is architecture for AI even necessary? – Scott Logic
Draft classification for different types of Enterprise AI deployment – Oliver Cronk, Scott Logic

Way To Farm
Information Overload With CURT LIVESAY - THE SINGULAR AG PODCAST

Way To Farm

Play Episode Listen Later Mar 6, 2025 40:28


Check out our Website! https://singularagronomics.com
Check out our full product line here! https://singularagronomics.com/products/
Are you interested in any of our line of products, or want to learn more? Follow the link below to find a dealer closest to you! https://singularagronomics.com/contact/
Check out our Quarterly Newsletter: https://singularagronomics.com/newsletter/
Blog: https://singularagronomics.com/blog/
Want to become a Distributor? Email Us: info@singularagros.com
Check us out on Social Media!
Instagram: https://www.instagram.com/singular_agronomics/
Facebook: https://www.facebook.com/profile.php?id=100093693453465
Welcome to a deep dive into modern agronomy and the power of innovative thinking on the Singular AG Podcast. In this episode, Curt Livesay and I explore everything from product formulation challenges and nutrient management to personal insights on ADHD and rebranding in agriculture. We cover the good, the bad, and the ugly of using silica-based stress-mitigating products like Pixie Dust, Infer, and our 2 by 2 system, all while sharing stories from our journeys that blend cutting-edge science with real-world farming.

Numinosum Radio
smelter (impress_infer)

Numinosum Radio

Play Episode Listen Later Nov 15, 2024 30:00


“The pale ones. I think you know just how pale they can be.”“Very pale.”“Very pale, yes.”[for electric guitar, synth, plinth, pale, somber & soak] This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit briancshort.substack.com

MAKE AMERICA GREAT AGAIN BY BRINGING BACK THE CHRISTIAN DOCTRINE OF EARLY AMERICA: GEORGE WHITEFIELD
TRAPPED IN SIN PT 21 OF NOT BELIEVING FOLLOWING GODS COMMANDS BRINGS KNOWLEDGE SIN BUT NOT VIRTUE!

MAKE AMERICA GREAT AGAIN BY BRINGING BACK THE CHRISTIAN DOCTRINE OF EARLY AMERICA: GEORGE WHITEFIELD

Play Episode Listen Later Oct 28, 2024 200:35


IN THIS MESSAGE WE WILL FURTHER EXPLAIN THE TWO QUESTIONS WHICH COMPLETELY TURN THE APPLECART OF OUR FASTFOOD FREEWILL FRIENDS AND THEOLOGIANS UPSIDE DOWN. THESE ARE AXIOMATIC QUESTIONS. FOR EXAMPLE, WHAT IS A SIMPLE AXIOMATIC QUESTION THAT ALL AMERICANS KNOW THE UNDENIABLE ANSWER TO? IT IS WHEN WE ASK OUR FELLOW AMERICANS THE FOLLOWING QUESTION: ARE YOU A LIAR? THAT IS AN AXIOMATIC QUESTION THAT NONE OF US CAN SAY NO TO, AS LONG AS WE ARE IN OUR RIGHT MIND. THE TWO QUESTIONS WHICH DESTROY OUR FASTFOOD FREEWILL FRIENDS AND THEOLOGIANS' DOCTRINE OF SALVATION ARE: 1. DO WE AS AMERICANS HAVE A FREEWILL TO ACCEPT OR REJECT JESUS, OR IS OUR WILL BOUND IN SALVATION? 2. WHAT IS THE FUNCTION OF THE LAW AFTER WE ARE MADE A NEW CREATION, WHEN WORKS CANCELS GRACE AND GRACE CANCELS WORKS? THIS QUESTION JUST DEMOLISHES THE FREEWILL DOCTRINE, AND THE KING OF THE REFORMATION ADDRESSES IT IN HIS BOOK ENTITLED ‘BONDAGE OF THE WILL', WRITTEN IN RESPONSE TO ERASMUS, WHO WROTE THE BOOK ENTITLED ‘FREEDOM OF THE WILL' IN 1524. THE NATURE OF THE SPIRITUAL NEW CREATION IS TOTALLY PARADOXICAL. THE SPIRITUAL NEW CREATION KNOWS INNATELY HOW TO USE THE LAW AND DOES NOT NEED TO BE TAUGHT HOW TO USE IT, FOR IT IS AS EASY FOR HIM AS IT IS FOR A BABY TO BREASTFEED. IN ROMANS 7:14 FORMER MR. MORALITY WRITES THAT THE LAW OF THE NEW CREATION IS SPIRITUAL, AND THAT IS WHY THE NATURAL-MAN CHRISTIAN IS CLUELESS ABOUT THE TRUE FUNCTION OF THE LAW, FOR THE NATURAL-MAN CHRISTIAN IS NOT SPIRITUAL AND ONLY THE SPIRITUAL KNOW HOW TO USE THE LAW INNATELY. MARTIN LUTHER, WHO WAS KING OF UNDERSTANDING AMONGST THE REFORMERS, WRITES: “Wherefore, the WORDS of the LAW are SPOKEN, NOT that they MIGHT ASSERT the POWER of the (FREE)WILL, but that they might ILLUMINATE the BLINDNESS of REASON, that it MIGHT SEE that its OWN LIGHT is NOTHING, and that the POWER of the WILL is nothing. "BY the LAW (saith Paul) is the KNOWLEDGE of sin," (Rom. iii. 20.): he does NOT SAY, is the ABOLITION of, or the ESCAPE from SIN. The WHOLE NATURE and DESIGN of the LAW is to GIVE KNOWLEDGE ONLY, and that of NOTHING ELSE EXCEPT of sin, but NOT to DISCOVER or COMMUNICATE any POWER whatever. For KNOWLEDGE is NOT POWER, NOR does it COMMUNICATE POWER, but it TEACHES and SHOWS how GREAT the IMPOTENCY must there be, where there is NO POWER. And WHAT ELSE can the KNOWLEDGE of SIN BE, but the KNOWLEDGE of our EVIL and INFIRMITY? For he DOES NOT SAY, by the LAW comes the KNOWLEDGE of STRENGTH or of GOOD. The WHOLE that the LAW DOES, according to the testimony of Paul, is to MAKE KNOWN SIN; that man, by the WORDS of the LAW, is ADMONISHED and TAUGHT what he OUGHT to DO, NOT WHAT he CAN DO: that is, that he is BROUGHT to KNOW his SIN, but NOT to BELIEVE that HE HAS any STRENGTH in HIMSELF. Wherefore, FRIEND Erasmus, as often as you THROW in my TEETH the Words of the LAW, so often I throw in yours that of Paul, "BY the LAW is the KNOWLEDGE of SIN," NOT of the POWER of the WILL. Heap together, therefore, out of the large Concordances all the IMPERATIVE WORDS (COMMANDS) into one chaos, provided that they be not WORDS of PROMISE but of the REQUIREMENT of the LAW ONLY, and I will immediately declare that by them is always shewn what men OUGHT to DO, not what they CAN do, or DO do. And even common grammarians and EVERY LITTLE SCHOOL-BOY in the street knows that by VERBS of the imperative mood nothing else is signified than that which OUGHT to be DONE, and that WHAT "IS" DONE or can be done is expressed by verbs of the INDICATIVE mood. Thus, therefore, it comes to pass that YOU THEOLOGIANS are so SENSELESS and so many degrees BELOW EVEN SCHOOL-BOYS, that when you have CAUGHT HOLD of one IMPERATIVE verb you INFER an INDICATIVE sense, as though WHAT was COMMANDED were immediately and even necessarily DONE, or POSSIBLE to be DONE. But how many slips are there between the cup and the lip!
So that, what you command to be done, and is therefore…

The Nonlinear Library
AF - Owain Evans on Situational Awareness and Out-of-Context Reasoning in LLMs by Michaël Trazzi

The Nonlinear Library

Play Episode Listen Later Aug 24, 2024 8:33


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Owain Evans on Situational Awareness and Out-of-Context Reasoning in LLMs, published by Michaël Trazzi on August 24, 2024 on The AI Alignment Forum. Owain Evans is an AI Alignment researcher, research associate at the Center for Human-Compatible AI at UC Berkeley, and now leading a new AI safety research group. In this episode we discuss two of his recent papers, "Me, Myself, and AI: The Situational Awareness Dataset (SAD) for LLMs" (LW) and "Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data" (LW), alongside some Twitter questions. Below are some highlighted quotes from our conversation (available on YouTube, Spotify, Apple Podcasts). For the full context of each quote, see the accompanying transcript.

Situational Awareness Definition
"What is situational awareness? The idea is the model's kind of self-awareness, that is its knowledge of its own identity, and then its awareness of its environment. What are the basic interfaces that it is connected to? [...] And then there's a final point with situational awareness, which is, can the model use knowledge of its identity and environment to take rational actions?" "Situational awareness is crucial for an AI system acting as an agent, doing long-term planning. If you don't understand what kind of thing you are, your capabilities and limitations, it's very hard to make complicated plans. The risks of AI mostly come from agentic models able to do planning."

Motivation
"We wanted to measure situational awareness in large language models with a benchmark similar to Big Bench or MMLU. The motivation is that situational awareness is important for thinking about AI risks, especially deceptive alignment, and we lacked ways to measure and break it down into components." "Situational awareness is relevant to any situation where the model needs to do agentic long-term planning. [...] A model confused about itself and its situation would likely struggle to pull off such a strategy."

On Claude 3 Opus Insightful Answers
"Let me explain [the Long Monologue task]. Most of our dataset is typical multiple-choice question answering, but we added a task where models write long answers describing themselves and their situation. The idea is to see if the model can combine different pieces of information about itself coherently and make good inferences about why we're asking these questions. Claude 3 Opus was particularly insightful, guessing it might be part of a research study testing self-awareness in LLMs. These were true inferences not stated in the question. The model was reading between the lines, guessing this wasn't a typical ChatGPT-style interaction. I was moderately surprised, but I'd already seen Opus be very insightful and score well on our benchmark. It's worth noting we sample answers with temperature 1, so there's some randomness. We saw these insights often enough that I don't think it's just luck. Anthropic's post-training RLHF seems good at giving the model situational awareness. The GPT-4 base results were more surprising to us."

What Would Saturating The Situational Awareness Benchmark Imply For Safety And Governance
"If models can do as well or better than humans who are AI experts, who know the whole setup, who are trying to do well on this task, and they're doing well on all the tasks including some of these very hard ones, that would be one piece of evidence. [...] We should consider how aligned it is, what evidence we have for alignment. We should maybe try to understand the skills it's using." "If the model did really well on the benchmark, it seems like it has some of the skills that would help with deceptive alignment. 
This includes being able to reliably work out when it's being evaluated by humans, when it has a lot of oversight, and when it needs to...

Ayurvedese Podcast
#93 - Como Adquirir Conhecimento

Ayurvedese Podcast

Play Episode Listen Later Aug 24, 2024 4:00


How do we acquire knowledge? In the Ayurvedic view, there are four ways to acquire knowledge: Scriptural testimony (Áptavacana), Perception (Pratyaksa), Inference (Anumāna), and Reasoning (Yukti). To hear the details, listen to the episode; it's incredible! Want to know more? Visit: lnk.bio/ayurvedese. A big hug to everyone! Namaste; Lucas Campos

The Inside View
Owain Evans - AI Situational Awareness, Out-of-Context Reasoning

The Inside View

Play Episode Listen Later Aug 23, 2024 135:46


Owain Evans is an AI Alignment researcher, research associate at the Center for Human-Compatible AI at UC Berkeley, and now leading a new AI safety research group. In this episode we discuss two of his recent papers, “Me, Myself, and AI: The Situational Awareness Dataset (SAD) for LLMs” and “Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data”, alongside some Twitter questions.
LINKS
Patreon: https://www.patreon.com/theinsideview
Manifund: https://manifund.org/projects/making-52-ai-alignment-video-explainers-and-podcasts
Ask questions: https://twitter.com/MichaelTrazzi
Owain Evans: https://twitter.com/owainevans_uk
OUTLINE
(00:00:00) Intro
(00:01:12) Owain's Agenda
(00:02:25) Defining Situational Awareness
(00:03:30) Safety Motivation
(00:04:58) Why Release A Dataset
(00:06:17) Risks From Releasing It
(00:10:03) Claude 3 on the Longform Task
(00:14:57) Needle in a Haystack
(00:19:23) Situating Prompt
(00:23:08) Deceptive Alignment Precursor
(00:30:12) Distribution Over Two Random Words
(00:34:36) Discontinuing a 01 sequence
(00:40:20) GPT-4 Base On the Longform Task
(00:46:44) Human-AI Data in GPT-4's Pretraining
(00:49:25) Are Longform Task Questions Unusual
(00:51:48) When Will Situational Awareness Saturate
(00:53:36) Safety And Governance Implications Of Saturation
(00:56:17) Evaluation Implications Of Saturation
(00:57:40) Follow-up Work On The Situational Awareness Dataset
(01:00:04) Would Removing Chain-Of-Thought Work?
(01:02:18) Out-of-Context Reasoning: the "Connecting the Dots" paper
(01:05:15) Experimental Setup
(01:07:46) Concrete Function Example: 3x + 1
(01:11:23) Isn't It Just A Simple Mapping?
(01:17:20) Safety Motivation
(01:22:40) Out-Of-Context Reasoning Results Were Surprising
(01:24:51) The Biased Coin Task
(01:27:00) Will Out-Of-Context Reasoning Scale
(01:32:50) Checking If In-Context Learning Works
(01:34:33) Mixture-Of-Functions
(01:38:24) Inferring New Architectures From ArXiv
(01:43:52) Twitter Questions
(01:44:27) How Does Owain Come Up With Ideas?
(01:49:44) How Did Owain's Background Influence His Research Style And Taste?
(01:52:06) Should AI Alignment Researchers Aim For Publication?
(01:57:01) How Can We Apply LLM Understanding To Mitigate Deceptive Alignment?
(01:58:52) Could Owain's Research Accelerate Capabilities?
(02:08:44) How Was Owain's Work Received?
(02:13:23) Last Message

Real Estate Espresso
What Does CPI Infer About Interest Rates?

Real Estate Espresso

Play Episode Listen Later Aug 15, 2024 5:34


On today's show, we are taking a look at the latest consumer price index reading for the month of July in the United States. July was the fourth straight month of declines in the consumer price index, with a month-over-month increase of 0.2% and an annual rate of 2.9%. As real estate investors, we pay attention to this because of the influence it might have on setting interest-rate policy. The most recent Federal Reserve announcement, which held the short-term fed funds rate steady, was looking for continued progress in the fight against inflation in order to gain the necessary confidence to lower interest rates. Many of the analysts I follow are predicting a September rate cut of something in the range of half a percentage point. However, for US real estate investors, our interest rate is determined by the yield on the US 10-year Treasury, and for Canadian investors it is indexed to either the five-year or 10-year Canadian mortgage bond. So who sets the yield on the 10-year Treasury? Its price is determined by the laws of supply and demand for those bonds.
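A quick sanity check on the two figures quoted above: a 0.2% month-over-month increase does not compound to the 2.9% annual rate; repeated for twelve months it yields roughly 2.4%, which is why the monthly and annual readings are reported separately. A minimal sketch (the helper name is mine, not the show's):

```python
def annualize(monthly_rate):
    """Compound a month-over-month inflation rate over 12 months."""
    return (1 + monthly_rate) ** 12 - 1

# The quoted 0.2% monthly CPI increase, repeated for a year,
# compounds to about 2.43% -- below the reported 2.9% annual rate,
# which reflects the actual mix of monthly readings in the index.
print(round(annualize(0.002) * 100, 2))  # → 2.43
```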

A Moment of Science
New Caledonian crows can infer weight

A Moment of Science

Play Episode Listen Later Jul 25, 2024 2:00


If you see an object blowing down the street, you will infer that it is light. That will be your conclusion even if you can't determine what the object is.

Politics Done Right
16 Nobel prize laureates infer that Trump's economic policy would destroy the economy.

Politics Done Right

Play Episode Listen Later Jul 18, 2024 3:59


In this compelling video, we delve into the opinion of 16 Nobel Prize-winning economists who argue that former President Donald Trump's economic policies could potentially devastate the U.S. economy. Subscribe to our Newsletter: https://politicsdoneright.com/newsletter Purchase our Books: As I See It: https://amzn.to/3XpvW5o How To Make America Utopia: https://amzn.to/3VKVFnG It's Worth It: https://amzn.to/3VFByXP Lose Weight And Be Fit Now: https://amzn.to/3xiQK3K Tribulations of an Afro-Latino Caribbean man: https://amzn.to/4c09rbE

LessWrong Curated Podcast
“Connecting the Dots: LLMs can Infer & Verbalize Latent Structure from Training Data” by Johannes Treutlein, Owain_Evans

LessWrong Curated Podcast

Play Episode Listen Later Jun 23, 2024 17:56


Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. This is a link post. TL;DR: We published a new paper on out-of-context reasoning in LLMs. We show that LLMs can infer latent information from training data and use this information for downstream tasks, without any in-context learning or CoT. For instance, we finetune GPT-3.5 on pairs (x, f(x)) for some unknown function f. We find that the LLM can (a) define f in Python, (b) invert f, (c) compose f with other functions, for simple functions such as x+14, x // 3, 1.75x, and 3x+2. Paper authors: Johannes Treutlein*, Dami Choi*, Jan Betley, Sam Marks, Cem Anil, Roger Grosse, Owain Evans (*equal contribution). Johannes, Dami, and Jan did this project as part of an Astra Fellowship with Owain Evans. Below, we include the Abstract and Introduction from the paper, followed by some additional discussion of our AI safety [...] --- First published: June 21st, 2024. Source: https://www.lesswrong.com/posts/5SKRHQEFr8wYQHYkx/connecting-the-dots-llms-can-infer-and-verbalize-latent --- Narrated by TYPE III AUDIO.

The Nonlinear Library
AF - Connecting the Dots: LLMs can Infer & Verbalize Latent Structure from Training Data by Johannes Treutlein

The Nonlinear Library

Play Episode Listen Later Jun 21, 2024 14:59


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Connecting the Dots: LLMs can Infer & Verbalize Latent Structure from Training Data, published by Johannes Treutlein on June 21, 2024 on The AI Alignment Forum. TL;DR: We published a new paper on out-of-context reasoning in LLMs. We show that LLMs can infer latent information from training data and use this information for downstream tasks, without any in-context learning or CoT. For instance, we finetune GPT-3.5 on pairs (x,f(x)) for some unknown function f. We find that the LLM can (a) define f in Python, (b) invert f, (c) compose f with other functions, for simple functions such as x+14, x // 3, 1.75x, and 3x+2. Paper authors: Johannes Treutlein*, Dami Choi*, Jan Betley, Sam Marks, Cem Anil, Roger Grosse, Owain Evans (*equal contribution) Johannes, Dami, and Jan did this project as part of an Astra Fellowship with Owain Evans. Below, we include the Abstract and Introduction from the paper, followed by some additional discussion of our AI safety motivation, the implications of this work, and possible mechanisms behind our results. Abstract One way to address safety risks from large language models (LLMs) is to censor dangerous knowledge from their training data. While this removes the explicit information, implicit information can remain scattered across various training documents. Could an LLM infer the censored knowledge by piecing together these implicit hints? As a step towards answering this question, we study inductive out-of-context reasoning (OOCR), a type of generalization in which LLMs infer latent information from evidence distributed across training documents and apply it to downstream tasks without in-context learning. Using a suite of five tasks, we demonstrate that frontier LLMs can perform inductive OOCR. 
In one experiment we finetune an LLM on a corpus consisting only of distances between an unknown city and other known cities. Remarkably, without in-context examples or Chain of Thought, the LLM can verbalize that the unknown city is Paris and use this fact to answer downstream questions. Further experiments show that LLMs trained only on individual coin flip outcomes can verbalize whether the coin is biased, and those trained only on pairs (x,f(x)) can articulate a definition of f and compute inverses. While OOCR succeeds in a range of cases, we also show that it is unreliable, particularly for smaller LLMs learning complex structures. Overall, the ability of LLMs to "connect the dots" without explicit in-context learning poses a potential obstacle to monitoring and controlling the knowledge acquired by LLMs. Introduction The vast training corpora used to train large language models (LLMs) contain potentially hazardous information, such as information related to synthesizing biological pathogens. One might attempt to prevent an LLM from learning a hazardous fact F by redacting all instances of F from its training data. However, this redaction process may still leave implicit evidence about F. Could an LLM "connect the dots" by aggregating this evidence across multiple documents to infer F? Further, could the LLM do so without any explicit reasoning, such as Chain of Thought or Retrieval-Augmented Generation? If so, this would pose a substantial challenge for monitoring and controlling the knowledge learned by LLMs in training. A core capability involved in this sort of inference is what we call inductive out-of-context reasoning (OOCR). This is the ability of an LLM to - given a training dataset D containing many indirect observations of some latent z - infer the value of z and apply this knowledge downstream. 
Inductive OOCR is out-of-context because the observations of z are only seen during training, not provided to the model in-context at test time; it is inductive because inferring the latent involves aggregating information from many training...
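The (x, f(x)) finetuning setup described above can be sketched as follows. This is a minimal illustration, not the paper's actual code: the record format and helper names are my own assumptions, chosen to mirror the example functions named in the summary (x + 14, x // 3).

```python
import json
import random

# Simple functions of the kind named in the summary (x + 14, x // 3).
FUNCTIONS = {
    "f1": lambda x: x + 14,
    "f2": lambda x: x // 3,
}

def make_finetuning_examples(name, f, n=5, seed=0):
    """Build (x, f(x)) observations as chat-style finetuning records.

    Each record shows the model an input and the unnamed function's
    output, but never the function's definition -- that is the latent
    the model must infer across many such documents."""
    rng = random.Random(seed)
    examples = []
    for _ in range(n):
        x = rng.randint(-100, 100)
        examples.append({
            "messages": [
                {"role": "user", "content": f"{name}({x}) = ?"},
                {"role": "assistant", "content": str(f(x))},
            ]
        })
    return examples

data = make_finetuning_examples("f1", FUNCTIONS["f1"], n=3)
print(json.dumps(data[0]))
```

After finetuning on many such records, the paper's OOCR tests would then ask the model to verbalize f (e.g. define it in Python) without any in-context examples.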

Grammar Girl Quick and Dirty Tips for Better Writing
Pet-Speak: From Meowlogisms to Zoomies. 'Imply' or 'Infer'?

Grammar Girl Quick and Dirty Tips for Better Writing

Play Episode Listen Later Apr 9, 2024 19:58


976. How have our pets influenced the way we use language? This week, we dive into the "cativerse" and explore the vocabulary, grammar, and spelling habits of our furry friends. From LOLcats to doggo dialects, discover the linguistic wonders of how we talk about our beloved pets. Plus, don't get tripped up by "imply" versus "infer." In the second segment, we dive into the definitions, origins, and proper usage of these often-confused words. The pet-speak segment was written by Susan Herman, a retired U.S. government multidisciplined language analyst, analytic editor, and instructor.
| Narrate Your Own Book. Sign-up deadline is midnight April 9. http://narrateyourownbook.com/grammar
| Edited transcript with links: https://grammar-girl.simplecast.com/episodes/pet-speak/transcript
| Please take our advertising survey. It helps! https://podsurvey.com/GRAMMAR
| Grammarpalooza (Get texts from Mignon!): https://joinsubtext.com/grammar or text "hello" to (917) 540-0876.
| Subscribe to the newsletter for regular updates.
| Watch my LinkedIn Learning writing courses.
| Peeve Wars card game.
| Grammar Girl books.
| HOST: Mignon Fogarty
| VOICEMAIL: 833-214-GIRL (833-214-4475) or https://sayhi.chat/grammargirl
| Grammar Girl is part of the Quick and Dirty Tips podcast network.
Audio Engineer: Nathan Semes
Director of Podcast: Brannan Goetschius
Advertising Operations Specialist: Morgan Christianson
Marketing and Publicity Assistant: Davina Tomlin
Digital Operations Specialist: Holly Hutchings
| Theme music by Catherine Rannus.
| Grammar Girl Social Media Links: YouTube. TikTok. Facebook. Instagram. LinkedIn. Mastodon.

Intervalo de Confiança
Variância #210 - Inferência Estatística

Intervalo de Confiança

Play Episode Listen Later Mar 7, 2024 19:23


Imagine that you need to make a decision that will affect the lives of thousands of people. You have little or no information to help you choose the best strategy. What do you do? If you are a listener of Intervalo de Confiança, you know you need to collect data. But how do you do that with such a large group? Presented by Alane Miguelis, this episode is about how statistics helps us understand a population from the data of a small sample. This is a fundamental topic that helps you make sense of countless subjects, such as medical research, election polls, and much more. This is Variância, a monthly spin-off of the Intervalo de Confiança podcast. This programme is shorter and aims to bring news or curiosities about a topic related to science and data journalism, or about a specific piece of data. Being shorter, both its editing and content are simpler and more direct. The script was written by Marília Tokiko. Editing was done by Leo Oliveira, and the episode artwork was made by Tatiane do Vale in collaboration with OpenAI's Dall-E. Editorial and social media coordination is by Tatiane do Vale. Clip selection is the responsibility of Júlia Frois, community direction is by Sofia Massaro, and financial management is by Kézia Nogueira. The jingles for all episodes were composed by Rafael Chino and Leo Oliveira.
Visit our site at: https://intervalodeconfianca.com.br/
Check out our online store at: https://intervalodeconfianca.com.br/loja
To support this project: https://intervalodeconfianca.com.br/apoie
Follow our social networks:
- Instagram: https://www.instagram.com/iconfpod/
- Youtube: https://www.youtube.com/IntervalodeConfianca
- Linkedin: https://www.linkedin.com/company/iconfpod
- X (Twitter): https://twitter.com/iConfPod
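The episode's core idea (inferring something about a whole population from a small sample) can be illustrated with the calculation the show is named after, a confidence interval. This is a textbook normal-approximation sketch with hypothetical poll numbers, not anything from the episode:

```python
from math import sqrt

def proportion_ci(successes, n, z=1.96):
    """95% normal-approximation confidence interval for a proportion.

    z = 1.96 is the standard normal quantile for 95% coverage; the
    approximation assumes n is large and p is not too close to 0 or 1."""
    p = successes / n
    half = z * sqrt(p * (1 - p) / n)
    return p - half, p + half

# A hypothetical poll: 520 of 1000 respondents answer "yes".
low, high = proportion_ci(520, 1000)
print(f"{low:.3f} - {high:.3f}")  # → 0.489 - 0.551
```

A sample of only 1000 people pins the population proportion down to within about three percentage points, which is why polls of this size are so common.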

The Nonlinear Library
AF - Approaching Human-Level Forecasting with Language Models by Fred Zhang

The Nonlinear Library

Play Episode Listen Later Feb 29, 2024 6:47


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Approaching Human-Level Forecasting with Language Models, published by Fred Zhang on February 29, 2024 on The AI Alignment Forum.

TL;DR: We present a retrieval-augmented LM system that nears the human crowd performance on judgemental forecasting. Paper: https://arxiv.org/abs/2402.18563 (Danny Halawi*, Fred Zhang*, Chen Yueh-Han*, and Jacob Steinhardt) Twitter thread: https://twitter.com/JacobSteinhardt/status/1763243868353622089

Abstract

Forecasting future events is important for policy and decision-making. In this work, we study whether language models (LMs) can forecast at the level of competitive human forecasters. Towards this goal, we develop a retrieval-augmented LM system designed to automatically search for relevant information, generate forecasts, and aggregate predictions. To facilitate our study, we collect a large dataset of questions from competitive forecasting platforms. Under a test set published after the knowledge cut-offs of our LMs, we evaluate the end-to-end performance of our system against the aggregates of human forecasts. On average, the system nears the crowd aggregate of competitive forecasters and in some settings, surpasses it. Our work suggests that using LMs to forecast the future could provide accurate predictions at scale and help inform institutional decision-making. For safety motivations on automated forecasting, see Unsolved Problems in ML Safety (2021) for discussions.

In the following, we summarize our main research findings.

Current LMs are not naturally good at forecasting

First, we find that LMs are not naturally good at forecasting when evaluated zero-shot (with no fine-tuning and no retrieval). On 914 test questions that were opened after June 1, 2023 (post the knowledge cut-offs of these models), most LMs get near-chance performance.
Here, all questions are binary, so random guessing gives a Brier score of 0.25. Averaging across all community predictions over time, the human crowd gets 0.149. We present the score of the best model of each series. Only the GPT-4 and Claude-2 series beat random guessing (by a margin of >0.3), though still very far from human aggregates.

System building

Towards better automated forecasting, we build and optimize a retrieval-augmented LM pipeline for this task. It functions in 3 steps, mimicking the traditional forecasting procedure:

- Retrieval, which gathers relevant information from news sources. Here, we use an LM to generate search queries given a question, use these queries to query a news corpus for articles, filter out irrelevant articles, and summarize the remaining.
- Reasoning, which weighs available data and makes a forecast. Here, we prompt base and fine-tuned GPT-4 models to generate forecasts and (verbal) reasonings.
- Aggregation, which ensembles individual forecasts into an aggregated prediction. We use the trimmed mean to aggregate all the predictions.

We optimize the system's hyperparameters and apply a self-supervised approach to fine-tune a base GPT-4 to obtain the fine-tuned LM. See Section 5 of the full paper for details.

Data and models

We use GPT-4-1106 and GPT-3.5 in our system, whose knowledge cut-offs are in April 2023 and September 2021. To optimize and evaluate the system, we collect a dataset of forecasting questions from 5 competitive forecasting platforms, including Metaculus, Good Judgment Open, INFER, Polymarket, and Manifold. The test set consists only of questions published after June 1st, 2023. Crucially, this is after the knowledge cut-off dates of GPT-4 and GPT-3.5, preventing leakage from pre-training. The train and validation set contains questions from before June 1st, 2023, used for hyperparameter search and fine-tuning a GPT-4 base model.
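The trimmed-mean aggregation step in the pipeline described here can be sketched in a few lines. This is a minimal illustration (the function name and the one-per-side trim default are my own, not the authors' actual code):

```python
def trimmed_mean(forecasts, trim=1):
    """Average probability forecasts after dropping the `trim` lowest
    and `trim` highest values."""
    if len(forecasts) <= 2 * trim:
        raise ValueError("not enough forecasts to trim")
    kept = sorted(forecasts)[trim:len(forecasts) - trim]
    return sum(kept) / len(kept)

# Five hypothetical model forecasts for one binary question:
# the outliers 0.10 and 0.95 are dropped before averaging.
print(trimmed_mean([0.10, 0.55, 0.60, 0.65, 0.95]))  # ~0.6
```

Trimming before averaging makes the aggregate robust to a single wildly over- or under-confident forecast, which matters when individual LM samples can be noisy.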
Evaluation results

For each question, we perform information retrieval at up to 5 different dates during the question's time span and e...

The Robert Scott Bell Show
The RSB Show 2-27-24 - Dr. Crisanna Shackelford, Real REACTIONS, Antony Sammeroff, Infertility

The Robert Scott Bell Show

Play Episode Listen Later Feb 28, 2024 143:08


TODAY ON THE ROBERT SCOTT BELL SHOW: Egos in science, Dr. Crisanna Shackelford, Real REACTIONS, Vax adverse events, Homeopathic Hit - Calendula Officinalis, Junk food proximity, Antony Sammeroff and Dr. Megan Mankow, Healing the Infertility Epidemic, Kellogg cereal for dinner and MORE! https://robertscottbell.com/egos-in-science-dr-crisanna-shackelford-real-reactions-vax-adverse-events-homeopathic-hit-calendula-officinalis-junk-food-proximity-antony-sammeroff-and-dr-megan-mankow-healing-the-infertil/ Egos in science, Dr. Crisanna Shackelford, Real REACTIONS, Vax adverse events, Homeopathic Hit - Calendula Officinalis, Junk food proximity, Antony Sammeroff and Dr. Megan Mankow, Healing the Infer... https://robertscottbell.com

Text Talk
John 12: His Command is Eternal Life

Text Talk

Play Episode Listen Later Dec 1, 2023 16:48


John 12:44-50 (ESV) Andrew and Edwin consider how Jesus sought authority for all His life and actions and the means by which He established that authority. Additionally, they discuss why that authority matters. Because what God authorizes leads to life. Read the written devo that goes along with this episode by clicking here. Let us know what you are learning or any questions you have. Email us at TextTalk@ChristiansMeetHere.org. Join the Facebook community and join the conversation by clicking here. We'd love to meet you. Be a guest among the Christians who meet on Livingston Avenue. Click here to find out more. Michael Eldridge sang all four parts of our theme song. Find more from him by clicking here. Thanks for talking about the text with us today. ________________________________________________ If the hyperlinks do not work, copy the following addresses and paste them into the URL bar of your web browser: Daily Written Devo: https://readthebiblemakedisciples.wordpress.com/?p=14765 The Christians Who Meet on Livingston Avenue: http://www.christiansmeethere.org/ Facebook Page: https://www.facebook.com/TalkAboutTheText Facebook Group: https://www.facebook.com/groups/texttalk Michael Eldridge: https://acapeldridge.com/

Text Talk
John 7: Judge with Right Judgment

Text Talk

Play Episode Listen Later Oct 24, 2023 16:11


John 7:14-24 (LSB) Our hosts consider how Jesus established authority to heal on the Sabbath. He had no Scriptural command, statement, or example. He had to draw conclusions and make judgments. Yet, He claimed over and again not to be acting on His own authority. We discover that when we logically infer things from what God explicitly states and shows, we are still acting by His authority. Read the written devo that goes along with this episode by clicking here. Let us know what you are learning or any questions you have. Email us at TextTalk@ChristiansMeetHere.org. Join the Facebook community and join the conversation by clicking here. We'd love to meet you. Be a guest among the Christians who meet on Livingston Avenue. Click here to find out more. Michael Eldridge sang all four parts of our theme song. Find more from him by clicking here. Thanks for talking about the text with us today. ________________________________________________ If the hyperlinks do not work, copy the following addresses and paste them into the URL bar of your web browser: Daily Written Devo: https://readthebiblemakedisciples.wordpress.com/?p=14393 The Christians Who Meet on Livingston Avenue: http://www.christiansmeethere.org/ Facebook Page: https://www.facebook.com/TalkAboutTheText Facebook Group: https://www.facebook.com/groups/texttalk Michael Eldridge: https://acapeldridge.com/

TalkRL: The Reinforcement Learning Podcast

Martin Riedmiller of Google DeepMind on controlling nuclear fusion plasma in a tokamak with RL, the original Deep Q-Network, Neural Fitted Q-Iteration, Collect and Infer, AGI for control systems, and tons more! Martin Riedmiller is a research scientist and team lead at DeepMind.

Featured References

Magnetic control of tokamak plasmas through deep reinforcement learning
Jonas Degrave, Federico Felici, Jonas Buchli, Michael Neunert, Brendan Tracey, Francesco Carpanese, Timo Ewalds, Roland Hafner, Abbas Abdolmaleki, Diego de las Casas, Craig Donner, Leslie Fritz, Cristian Galperti, Andrea Huber, James Keeling, Maria Tsimpoukelli, Jackie Kay, Antoine Merle, Jean-Marc Moret, Seb Noury, Federico Pesamosca, David Pfau, Olivier Sauter, Cristian Sommariva, Stefano Coda, Basil Duval, Ambrogio Fasoli, Pushmeet Kohli, Koray Kavukcuoglu, Demis Hassabis & Martin Riedmiller

Human-level control through deep reinforcement learning
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, Demis Hassabis

Neural fitted Q iteration - first experiences with a data efficient neural reinforcement learning method
Martin Riedmiller

OPERATORS
E017: How Do You Build Brand Equity? A Deep Dive Special With The Operators.

OPERATORS

Play Episode Listen Later Aug 16, 2023 58:00


00:01:44 Trade show booth laughter fuels brand motivation. 00:03:57 HexClad's successful brand building and growth. 00:11:02 TAM size: average; acquiring millions of new customers a year; need to go international, focus on repeat rate; think about true servable market; brand equals capital B brand; need a functional product, fashion game is cyclical; Dyson and Apple as examples. 00:13:52 Great brands start with superior functional improvement. 00:22:12 Brand emote, invest in functional over fashion. 00:27:40 Consistency is key in building a brand. 00:29:28 Affordable pricing, experienced team, good deliverability. Impressive trajectory, strong NPS score, organic growth. 00:36:05 HexClad redefined and dominates its category. 00:38:11 Defy gravity: Spend less, increase sales. 00:42:23 Start business in industry with low scores. 00:46:05 Elon Musk: A generational builder having fun. 00:47:55 Elon buys social media network, leveraging network effects. 00:51:00 Different opinions on brand, few exceptions. 00:54:52 Infer. Lomi. Cute name from dirt. 00:57:48 Embrace disagreement. In the world of eCommerce, a legendary WhatsApp group is rumored to hold the secrets to unimaginable success. The catch? You must have nine figures in revenue to gain entry. The world's biggest brands have denied its existence for years, until now. Three titans known as "Operators" are leaking the secret contents in an effort to share their wealth of knowledge with people like you. Powered By: Northbeam. https://www.northbeam.io/ Sendlane. https://learn.sendlane.com/operators Fulfil.io. 
https://bit.ly/3pAp2vu Visit Our Website: https://www.9operators.com/ Follow us on Twitter: Sean (Host) https://twitter.com/SeanEcom Jason (Host) https://twitter.com/JasonPanzer Matt (Host) https://twitter.com/mbertulli Mike (Host) https://twitter.com/mikebeckhamsm Finn (Producer) https://twitter.com/finn_radford Northbeam (Partner) https://twitter.com/northbeam Fulfil.io (Partner) https://twitter.com/fulfilio Sendlane (Partner) https://twitter.com/Sendlane We Rise Together. --- Send in a voice message: https://podcasters.spotify.com/pod/show/9operators/message

How AI Built This
#82 Ben McLoughlin - Head of Data Science at Webuyanycar

How AI Built This

Play Episode Listen Later Jul 30, 2023 38:32


Ben McLoughlin is the Head of Data Science at Webuyanycar in Manchester. Armed with a PhD in Robotics & Computer Vision, Ben has quickly risen through the ranks to lead the considerable data science efforts at a UK household name in Webuyanycar. A fascinating conversation, where we discussed his career journey, some top tips for aspiring data scientists, and his approach to building his own personal brand. I hope you enjoy! As always, we're brought to you by the wonderful people at Cathcart Technology, technology recruitment experts, and Infer, a game-changing analytics platform allowing data analysts to do advanced analytics all within SQL. Music by Noisyfilter from Fugue. Show produced and edited by the awesome team at Sound Media

Inside Facebook Mobile
54: Building Key Transparency at WhatsApp

Inside Facebook Mobile

Play Episode Listen Later Jul 26, 2023 43:23


In April, WhatsApp announced the launch of a new cryptographic security feature to automatically verify a secured connection based on key transparency. Key transparency helps strengthen the guarantee that end-to-end encryption provides to private, personal messaging applications in a transparent manner available to all. Rolling out a feature like this to WhatsApp's user base is not a small feat and requires some clever engineering to scale to the billions of users relying on WhatsApp to stay in touch with friends, family and business. Pascal is joined by Sean and Kevin to discuss what Key Transparency means in practice and the various challenges they encountered as they scaled it up to billions of users. Got feedback? Send it to us on Threads (https://threads.net/@metatechpod), Twitter (https://twitter.com/metatechpod), Instagram (https://instagram.com/metatechpod) and don't forget to follow our host @passy (https://twitter.com/passy, https://mastodon.social/@passy, and https://threads.net/@passy_). Fancy working with us? Check out https://www.metacareers.com/. 
Links
Infer: https://fbinfer.com/
Infer on GitHub: https://github.com/facebook/infer
MTP Episode 18 about Infer: https://pca.st/5U9V
Deploying key transparency at WhatsApp - Engineering at Meta: https://engineering.fb.com/2023/04/13/security/whatsapp-key-transparency/
GitHub - facebook/akd: An implementation of an auditable key directory: https://github.com/facebook/akd/
Parakeet: Practical Key Transparency for End-to-End Encrypted Messaging: https://www.ndss-symposium.org/ndss-paper/parakeet-practical-key-transparency-for-end-to-end-encrypted-messaging/
SEEMless: Secure End-to-End Encrypted Messaging with less trust: https://eprint.iacr.org/2018/607
Coniks: Bringing Key Transparency to End Users: https://www.usenix.org/conference/usenixsecurity15/technical-sessions/presentation/melara
IETF Working Group on Key Transparency: https://datatracker.ietf.org/wg/keytrans/about

Timestamps
Intro 0:06
News Update: Infer turns 10 1:34
Interview Intro 4:27
Intro Kevin 4:45
Intro Sean 6:07
WhatsApp's mission 6:47
PETs 7:58
E2E basics 8:59
Key transparency 10:32
Crypto community response 18:20
End-user changes 19:57
Technical challenges and zero-knowledge proofs 23:18
AKD 28:27
Internal deployment 32:02
Outro 42:16
Bloopers 43:05

How AI Built This
#81 Liam Fulton & Hannah Bratley - Computer Vision

How AI Built This

Play Episode Listen Later May 9, 2023 48:43


Liam & Hannah are the Co-Founders (CEO & CTO respectively) of Frame, a data science consultancy and computer vision organisation based in West Yorkshire. Not only are they the Co-Founders, but they're also partners, so we dived into that dynamic on the podcast and how they separate work from life - 'pub Thursdays' play a key part in this... We also chatted about how their Computer Vision product can have a radical impact on organisations who are trying to sell their products online - with some great real-world stories! This was a really fun one to record; I hope you enjoy! As always, we're brought to you by the wonderful people at Cathcart Technology, technology recruitment experts, and Infer, a game-changing analytics platform allowing data analysts to do advanced analytics all within SQL. Music by Noisyfilter from Fugue. Show produced and edited by the awesome team at Sound Media

When A Guy Has
"When your friend won't stop talking about lesbian age-gap media, you can kind of infer some things..."

When A Guy Has

Play Episode Listen Later May 7, 2023 82:43


Jolene is joined by a nameless guest to discuss what it means to have a butch soul, crying to Jim and Pam fancams, being so monogamous you're unable to conceptualize attraction to someone besides your partner, borrowing moves men have used on you to use on women, and convincing your mom to read Alison Bechdel when you're 12, and then being confused when she asks if you're gay. Also: Jolene debuts a new theory of sexual attraction, and speculates about her mother. Pertinent character information: immediately after recording this episode, the guest spent 3 hours making an edit of the movie Carol (2015). The intro and outro music is by Lynn July. You can listen to more of her music at: https://tinytachyon.bandcamp.com/ Follow the pod on twitter: https://twitter.com/WhenAGuyHas Check out our website: https://whenaguyhas.neocities.org/ (IN PROGRESS) Subscribe to the patreon for more like this!!! https://www.patreon.com/user?u=85347146 The RSS Feed: https://anchor.fm/s/9877d600/podcast/rss Donate to our Kofi, if you're so inclined: https://ko-fi.com/whenaguyhas

Politics Done Right
New Tapes infer Cruz was one of the insurrectionists orchestrating the Jan 6th coup

Politics Done Right

Play Episode Listen Later Apr 29, 2023 4:10


New tapes that were released this week make it clear that Ted Cruz was more intimately involved in the Jan 6th attempted coup than we realized. The following words he spoke to Fox News' Maria Bartiromo are just the tip of the iceberg. The conversation occurred on Jan 2nd. --- Send in a voice message: https://podcasters.spotify.com/pod/show/politicsdoneright/message Support this podcast: https://podcasters.spotify.com/pod/show/politicsdoneright/support

How AI Built This
#80 Rachel Purchase - a data transformation journey at Admiral

How AI Built This

Play Episode Listen Later Apr 18, 2023 49:25


Rachel is the Head of Data Product, Insights & Data Science at Admiral Group and is our first guest who has spent their entire career at one employer - which is pretty awesome! A real-life fan of the show as well, so it was very exciting to finally chat with Rachel about her career, approach to data and the amazing work going on at Admiral. I hope you enjoy! As always, we're brought to you by the wonderful people at Cathcart Technology, technology recruitment experts, and Infer, a game-changing analytics platform allowing data analysts to do advanced analytics all within SQL. Music by Noisyfilter from Fugue. Show produced and edited by the awesome team at Sound Media

State of the Markets
#174 Akhil Patel - This valuable signal could infer the market direction for years

State of the Markets

Play Episode Listen Later Apr 11, 2023 96:08


The Secret Wealth Advantage: Order Now https://amzn.eu/d/6Dz6Iqy Follow Akhil on Twitter https://twitter.com/AkhilGPatel Subscribe here to Akhil and his team: https://propertysharemarketeconomics.com/ State of the Markets Podcast Tim Price of https://Pricevaluepartners.com https://timprice.substack.com https://sotmpodcast.com https://anchor.fm/stateofthemarkets https://apple.co/2OUGW6R  Paul Rodriguez https://ThinkTrading.com https://twitter.com/prodr1guez --- Send in a voice message: https://podcasters.spotify.com/pod/show/stateofthemarkets/message

How AI Built This
#79 - Megan Stamper - Head of Data Science BBC Product Group

How AI Built This

Play Episode Listen Later Mar 21, 2023 52:31


Megan is the Head of Data Science at the BBC in their Product Group. With a PhD in Maths from the University of Cambridge and an impressive data career in the world of media, I loved chatting to Megan about her journey. I hope you enjoy! As always, we're brought to you by the wonderful people at Cathcart Technology, recruitment experts, and Infer, the data analytics platform changing how analysts work forever. Music by Noisyfilter from Fugue

How AI Built This
#78 - Ross Turner, CPO Arria - Natural Language Generation

How AI Built This

Play Episode Listen Later Feb 28, 2023 44:31


Welcome back to How AI Built This, the show dedicated to data and entrepreneurial storytelling. This episode I spoke to Ross Turner, the Chief Product Officer at Natural Language Generation specialists Arria. Ross is Berlin-based, working remotely for a now global team at Arria, where they are at the cutting edge of NLG. We had a great chat about his journey, NLG, data in general and, of course, some non-data related content, this time in the form of Mixed Martial Arts (MMA)! I hope you enjoy! As always, we're brought to you by the wonderful people at Cathcart Technology, recruitment experts, and Infer, who are building the next generation of analytics. Music by Noisyfilter from Fugue

The Nonlinear Library
EA - Eli Lifland on Navigating the AI Alignment Landscape by Ozzie Gooen

The Nonlinear Library

Play Episode Listen Later Feb 1, 2023 46:19


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Eli Lifland on Navigating the AI Alignment Landscape, published by Ozzie Gooen on February 1, 2023 on The Effective Altruism Forum.

Recently I had a conversation with Eli Lifland about the AI Alignment landscape. Eli Lifland has been a forecaster at Samotsvety and has been investigating said landscape. I've known Eli for the last 8 months or so, and have appreciated many of his takes on AI alignment strategy. This was my first recorded video, so there were a few issues, but I think most of it is understandable. Full (edited) transcript below. I suggest browsing the section titles for a better overview of our discussion.

Transcript Sections
- Samotsvety, a Recent Forecasting Organization
- Reading, "Is Power-Seeking AI an Existential Risk?"
- Categories of AI Failures: Accident, Misuse, and Structural
- Who Is Making Strategic Progress on Alignment?
- Community Building: Arguments For
- Community Building: Fellowships and Mentorship
- Cruxes in the AI Alignment Space
- Crux: How Promising is AI Interpretability?
- Crux: Should We Use Narrow AIs to Help Solve Alignment?
- The Need for AI Alignment Benchmarks
- Crux: Conceptual Insights vs. Empirical Iteration
- Vehicles and Planes as Potential Metaphors

Samotsvety, a Recent Forecasting Organization

Ozzie Gooen: So to get started, I want to talk a little bit about Samotsvety.

Eli Lifland: It's a Russian name. Samotsvety currently has about 15 forecasters. We've been releasing forecasts for the community on topics such as nuclear risk and AI. We're considering how to create forecasts for different clients and make public forecasts on existential risk, particularly AI. Team forecasting has been valuable, and I've encouraged more people to do it. We have a weekly call where we choose questions to discuss in advance.
If people have time, they make their forecasts beforehand, and then we discuss the differences and debate. It's beneficial for team bonding, forming friendships, and potential future work collaborations. It's also interesting to see which forecasts are correct when they resolve. It's a good activity for different groups, such as AI community groups, to try.

Ozzie Gooen: How many people are in the group right now?

Eli Lifland: Right now, it's about 15, but on any given week, probably closer to five to ten can come. Initially, it was just us three. It was just Nuño, Misha, and I, and we would meet each weekend and discuss different questions on either Foretell (now INFER) or Good Judgment Open, but now it's five to ten people per week, from a total pool of 15 people.

Ozzie Gooen: That makes sense. I know Samotsvety has worked on nuclear risk and a few other posts. What do you forecast when you're not working on those megaprojects?

Eli Lifland: Yeah. We do a mix of things. Some things we've done for specific clients haven't been released publicly. Some things are still in progress and haven't been released yet. For example, we've been working on forecasting the level of AI existential risk for the Future Fund, now called the Open Philanthropy Worldview Prize, for the past 1-2 months. We meet each week to revise and discuss different ways to decompose the risk, but we haven't finished yet. Hopefully, we will. Sometimes we just choose a few interesting questions for discussion, even if we don't publish a write-up on them.

Ozzie Gooen: So the idea is to have more people do very similar things, just like other teams are three to five, they're pretty independent; do you give them like coaching or anything? If I wanted to start my own group like this, what do I do?

Eli Lifland: Feel free to reach out to any of us for advice on how we did it. As I mentioned, it was fairly simple—choosing and discussing questions each week.
In terms of value, I believe it was valuable for all of us and many others who joined us. Some got more interested in effec...

GODMODE™: Win or Win Bigger
#54: If You Are Looking For A Sign, This Is It!

GODMODE™: Win or Win Bigger

Play Episode Listen Later Jan 30, 2023 31:17


In this episode of GODMODE™: Win or Win Bigger, Michael Mahoney and Brady Edwards discuss being aware of the signs that are within your environment.

Questions for consideration:
Can you describe salt without ever tasting salt?
Are you aware of the signs and messages the universe throws to you?
---
HIGHLIGHTS:
Being aware of the signs in your environment
Is everything a coincidence to you?
Gerald Butler Story
Become a detective in your environment
---
TIME STAMPS:
00:00 - Prelude
01:18 - Start
02:00 - Are you paying attention to the signs presented to you in your life?
04:30 - Be aware of the old dichotomies and the old programs we have
06:30 - Are you eager to make something a "coincidence"?
09:00 - The amount of energy exchanged between human beings
10:00 - Gerald Butler Story
18:30 - Become a detective of your environment
20:00 - Infer the best possible meaning of things in your life
24:30 - Imagination and Will Power
28:45 - There is significance in what you hear, say and think
31:14 - END

Thank you for listening to GODMODE™: Win or Win Bigger. If you are interested in UPGRD Your Mind, visit us at: https://upgrd.com to book a call with one of our team members.

RadioDotNet
Peeking at C# 12, speeding up the console, listening to HTTP requests

RadioDotNet

Play Episode Listen Later Jan 26, 2023 106:16


RadioDotNet podcast, episode #65, January 27, 2023. Podcast website: radio.dotnet.ru Topics: [00:00:55] — C# 12 GitHub Activities github.com/dotnet/csharplang github.com/dotnet/csharplang/issues/4144 github.com/dotnet/roslyn/blob/main/docs/Language%... [00:27:25] — Slaying Zombie 'No Repro' Crashes with Infer# devblogs.microsoft.com/dotnet/slaying-zombie-no-repo-crashes-... [00:34:00] — Visual Studio 2022 17.5 Preview 3 devblogs.microsoft.com/visualstudio/visual-studio-2022-17-5-p... [00:40:50] — Rider and ReSharper 2023.1 roadmap blog.jetbrains.com/dotnet/resharper-2023-1-roadmap blog.jetbrains.com/dotnet/rider-2023-1-roadmap [00:46:40] — Detecting breaking changes between two versions of a NuGet package meziantou.net/detecting-breaking-changes-between-two... [00:54:40] — Database Command Batching in .NET 6 infoq.com/news/Database-Command-Batching [01:00:30] — Observing all http requests in a .NET application meziantou.net/observing-all-http-requests-in-a-dotne... meziantou.net/prevent-http-requests-to-external-serv... [01:13:45] — Fast console input in .NET habr.com/ru/post/705834 [01:24:35] — Building a custom Test Framework with xUnit andrewlock.net/tracking-down-a-hanging-xunit-test-in-... [01:35:10] — Briefly on various topics maoni0.medium.com/how-the-heap-verification-mode-helps-w... github.com/MrDave1999/dotenv kevinchalet.com/openiddict-4-0-general-availability johnnys.news/Dots-a-dotnet-SDK-manager github.com/Tyrrrz/CliWrap github.com/microsoft/reverse-proxy/releases/tag/v... Background music: Maksim Arshinov, «Pensive yeti.0.1»

Sweat Equity Podcast® Law Smith + Eric Readinger
#391: How To Infer The 80/20 Principle In Your Work, Relationships And Life w/ Josh Kennedy

Sweat Equity Podcast® Law Smith + Eric Readinger

Play Episode Listen Later Dec 19, 2022 37:37


Sweat Equity's Eric and Law chew the cud with Josh Kennedy, affiliate marketer and founder of Imagine Marketing, about: comedy, business, vulnerability, how life is a joke that really isn't that funny and can be illogical, uncomfortable relationships, the 80/20 principle, men and women working together, confidence, surrounding yourself with people who will challenge you, how time has gone inside out and gets distorted by this intense gravity, calculating your life decisions, reinventing yourself, and the phrase "people never change." Josh Kennedy links: Imagine Marketing: https://imagine-affiliate.com/ Episode sponsored by SQUARESPACE: create a customizable website or online store with an all-in-one solution from Squarespace. Choose a website template and start your free trial today. https://squarespacecircleus.pxf.io/sweatequity Sweat Equity links: SweatEquityPod.com Linktr.ee/SweatEquity Hosts Eric Readinger & Law Smith links: LawSmithWorks.com Tocoba.ga Wanna help Sweat Equity without spending a dime? Sure, we're the #1 business comedy & comedy business podcast on earth, but we can always practice Kaizen, aka continuous improvement. Please? We'll be your BFF! Hook us up: REVIEW - write a quick-hitter sentence in the review. Smash SUBSCRIBE. SHARE with friends, co-workers, acquaintances, family members you love and the fam you don't like talking to. #comedy #business #girthyroi #sweatequity #69b2b #entrepreneur

The Nonlinear Library
EA - Some research ideas in forecasting by Jaime Sevilla

The Nonlinear Library

Play Episode Listen Later Nov 15, 2022 8:20


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some research ideas in forecasting, published by Jaime Sevilla on November 15, 2022 on The Effective Altruism Forum.

In the past, I have researched how we can effectively pool the predictions of many experts. For the most part, I am now focusing on directing Epoch and AI forecasting. However, I have accumulated a log of research projects related to forecasting. I have the vague intention of working on them at some point, but this will likely be months or years away, and meanwhile I would be elated if someone else took my ideas and developed them. And with the Million Predictions Hackathon by Metaculus looming, now seems a particularly good moment to write down some of these project ideas.

Compare different aggregation methods

Difficulty: easy

The ultimate arbiter of which aggregation method works is what performs best in practice. Redoing a comparison of forecast aggregation methods on Metaculus / INFER / etc. questions would provide helpful data for that purpose.
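For concreteness, one family of methods in the comparison below, the mean of log-odds and its extremized variant, can be sketched as follows. This is a minimal illustration with my own function name and interface, assuming all input probabilities are strictly between 0 and 1:

```python
import math

def pool_logodds(probs, weights=None, extremize=1.0):
    """Pool probability forecasts via a weighted mean of log-odds,
    optionally extremized by a factor d, then map back to a probability."""
    if weights is None:
        weights = [1.0 / len(probs)] * len(probs)
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    mean_logodds = sum(w * math.log(p / (1 - p)) for w, p in zip(weights, probs))
    pooled = extremize * mean_logodds
    return 1 / (1 + math.exp(-pooled))

plain = pool_logodds([0.6, 0.7, 0.8])                   # ~0.71
pushed = pool_logodds([0.6, 0.7, 0.8], extremize=1.55)  # pushed further from 0.5
print(plain, pushed)
```

Extremizing (d > 1) compensates for the fact that individual forecasters each hold only part of the available evidence, so the pooled forecast is typically underconfident.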
For example, here is a script I wrote to compare some aggregation methods, and the results I obtained:

| Method | Weighted | Brier | -log | Questions |
|---|---|---|---|---|
| Neyman aggregate (p=0.36) | Yes | 0.106 | 0.340 | 899 |
| Extremized mean of logodds (d=1.55) | Yes | 0.111 | 0.350 | 899 |
| Neyman aggregate (p=0.5) | Yes | 0.111 | 0.351 | 899 |
| Extremized mean of probabilities (d=1.60) | Yes | 0.112 | 0.355 | 899 |
| Metaculus prediction | Yes | 0.111 | 0.361 | 774 |
| Mean of logodds | Yes | 0.116 | 0.370 | 899 |
| Neyman aggregate (p=0.36) | No | 0.120 | 0.377 | 899 |
| Median | Yes | 0.121 | 0.381 | 899 |
| Extremized mean of logodds (d=1.50) | No | 0.126 | 0.391 | 899 |
| Mean of probabilities | Yes | 0.122 | 0.392 | 899 |
| Neyman aggregate (o=1.00) | No | 0.126 | 0.393 | 899 |
| Extremized mean of probabilities (d=1.60) | No | 0.127 | 0.399 | 899 |
| Mean of logodds | No | 0.130 | 0.410 | 899 |
| Median | No | 0.134 | 0.418 | 899 |
| Mean of probabilities | No | 0.138 | 0.439 | 899 |
| Baseline (p = 0.36) | N/A | 0.230 | 0.652 | 899 |

It would be straightforward to extend this analysis with new questions that have resolved since then, other datasets, or new techniques.

Literature review of weight aggregation

Difficulty: easy

When aggregating forecasts, we usually resort to formulas like ∑_i a_i log o_i, where o_i are the individual predictions (expressed in odds) and a_i the weights assigned to each prediction. Right now I have a lot of uncertainty about the best theoretical and empirical approaches to assigning weights to predictions. These could be based on factors like the date of the prediction, the track record of the forecaster, or other factors. The first step would be a literature review of schemes to weigh the predictions of experts when aggregating, and to compare them using Metaculus data.

Comparing methods for predicting base rates

Difficulty: medium

Using historical data is always a must when forecasting. While one can rely on intuition to extract lessons from the past, it is often convenient to have some rules of thumb that inform how to translate historical frequencies into base-rate probabilities. The classical method in this situation is Laplace's rule of succession.
However, we showed that this method gives inconsistent results when trying to apply it to observations over a time period, and we proposed a fix here.

| Number of observed successes S during time T | Probability of no successes during time t |
|---|---|
| S = 0 | (1 + t/T)^(−1) |
| S > 0 | (1 + t/T)^(−S) if the sampling time period is variable; (1 + t/T)^(−(S+1)) if the sampling time period is fixed |

While theoretically appealing, we did not show that employing this fix actually improves performance, so there is a good research opportunity for someone to collect data and investigate this.

Decay of predictions
Difficulty: medium
Imagine I predict that no earthquakes will happen in Chile before 2024 with 60% probability today. Then in April 2023, if no earthquakes have happened, my implied probability should be lower than 60%. Theoretically, we should be able to derive the implied probability under some mild assumptions, e.g. that the probability was uniform over time, maybe following a framework like the time-...
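The base-rate rules of thumb discussed in this item can be sketched numerically. This is my own minimal illustration (the function names are mine, not from the post), implementing Laplace's rule of succession alongside the time-period variant proposed as a fix:

```python
def laplace_rule(successes, trials):
    # Classical Laplace rule of succession:
    # P(success on the next trial) after observing `successes` in `trials`.
    return (successes + 1) / (trials + 2)

def prob_no_success(S, T, t, fixed_window=True):
    # Time-period variant discussed in the post: probability of seeing
    # no successes during a future period of length t, after observing
    # S successes during a period of length T.
    if S == 0:
        return (1 + t / T) ** -1
    exponent = (S + 1) if fixed_window else S
    return (1 + t / T) ** -exponent

# After 0 earthquakes in 10 years, chance of none in the next 10 years:
print(prob_no_success(0, 10, 10))  # 0.5
```

Note that for S = 0 the two window conventions coincide, which is one reason the inconsistency the post mentions only shows up once successes have been observed.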

The Level Up English Podcast
#185 Implied Meaning in English

The Level Up English Podcast

Play Episode Listen Later Oct 12, 2022 21:05


As you might know, English isn't like how we find it in textbooks. In day-to-day conversations, we often don't speak in full sentences, and so much of what we say is implied (this means, it's hinted at without actually saying it).In this episode, I look at some examples of implied meaning or 'language ambiguity' and teach some phrases you can use in different situations.Show notes page - https://levelupenglish.school/podcast185Sign Up for Free Lessons - https://www.levelupenglish.school/#freelessonsJoin Level Up English - https://courses.levelupenglish.schoolBy becoming a member, you can access all podcast transcripts, listen to the private podcast and join live lessons and courses on the website.

The Nonlinear Library
LW - Samotsvety's AI risk forecasts by elifland

The Nonlinear Library

Play Episode Listen Later Sep 9, 2022 6:51


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Samotsvety's AI risk forecasts, published by elifland on September 9, 2022 on LessWrong. Crossposted to EA Forum and Foxy Scout Introduction In my review of What We Owe The Future (WWOTF), I wrote: Finally, I've updated some based on my experience with Samotsvety forecasters when discussing AI risk. When we discussed the report on power-seeking AI, I expected tons of skepticism but in fact almost all forecasters seemed to give >=5% to disempowerment by power-seeking AI by 2070, with many giving >=10%. In the comments, Peter Wildeford asked: It looks like Samotsvety also forecasted AI timelines and AI takeover risk - are you willing and able to provide those numbers as well? We separately received a request from the FTX Foundation to forecast on 3 questions about AGI timelines and risk. I sent out surveys to get Samotsvety's up-to-date views on all 5 of these questions, and thought it would be valuable to share the forecasts publicly. A few of the headline aggregate forecasts are: 25% chance of misaligned AI takeover by 2100, barring pre-APS-AI catastrophe 81% chance of Transformative AI (TAI) by 2100, barring pre-TAI catastrophe 32% chance of AGI being developed in the next 20 years Forecasts In each case I aggregated forecasts by removing the single most extreme forecast on each end, then taking the geometric mean of odds. To reduce concerns of in-group bias to some extent, I calculated a separate aggregate for those who weren't highly-engaged EAs (HEAs) before joining Samotsvety. In most cases, these forecasters hadn't engaged with EA much at all; in one case the forecaster was aligned but not involved with the community. Several have gotten more involved with EA since joining Samotsvety. 
Unfortunately I'm unable to provide forecast rationales in this post due to forecaster time constraints, though I might in a future post. I provided my personal reasoning for relatively similar forecasts (35% AI takeover by 2100, 80% TAI by 2100) in my WWOTF review.

WWOTF questions

| Question | Aggregate (n=11) | Aggregate, non-pre-Samotsvety-HEAs (n=5) | Range |
|---|---|---|---|
| What's your probability of misaligned AI takeover by 2100, barring pre-APS-AI catastrophe? | 25% | 14% | 3-91.5% |
| What's your probability of Transformative AI (TAI) by 2100, barring pre-TAI catastrophe? | 81% | 86% | 45-99.5% |

FTX Foundation questions

For the purposes of these questions, FTX Foundation defined AGI as roughly “AI systems that power a comparably profound transformation (in economic terms or otherwise) as would be achieved in [a world where cheap AI systems are fully substitutable for human labor]”. See here for the full definition used. Unlike the above questions, these are not conditioning on no pre-AGI/TAI catastrophe.

| Question | Aggregate (n=11) | Aggregate, non-pre-Samotsvety-HEAs (n=5) | Range |
|---|---|---|---|
| What's the probability of existential catastrophe from AI, conditional on AGI being developed by 2070? | 38% | 23% | 4-98% |
| What's the probability of AGI being developed in the next 20 years? | 32% | 26% | 10-70% |
| What's the probability of AGI being developed by 2100? | 73% | 77% | 45-80% |

Who is Samotsvety Forecasting?

Samotsvety Forecasting is a forecasting group that was started primarily by Misha Yagudin, Nuño Sempere, and myself predicting as a team on INFER (then Foretell). Over time, we invited more forecasters who had very strong track records of accuracy and sensible comments, mostly on Good Judgment Open but also a few from INFER and Metaculus. Some strong forecasters were added through social connections, which means the group is a bit more EA-skewed than it would be without these additions. A few Samotsvety forecasters are also superforecasters.

How much do these forecasters know about AI?
Most forecasters have at least read Joe Carlsmith's report on AI x-risk, Is Power-Seeking AI an Existential Risk?. Those who are short on time may have just skimmed the report and/or...
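The aggregation recipe described in this post — drop the single most extreme forecast on each end, then take the geometric mean of odds — can be sketched as follows. This is a minimal reconstruction of mine, not the authors' actual script:

```python
import math

def trimmed_geomean_of_odds(probs):
    """Drop the single most extreme forecast on each end, then take
    the geometric mean of the remaining odds, returned as a probability."""
    if len(probs) < 3:
        raise ValueError("need at least 3 forecasts to trim both ends")
    trimmed = sorted(probs)[1:-1]          # remove one low and one high
    odds = [p / (1 - p) for p in trimmed]  # probabilities -> odds
    mean_log_odds = sum(math.log(o) for o in odds) / len(odds)
    agg = math.exp(mean_log_odds)          # geometric mean of odds
    return agg / (1 + agg)                 # odds -> probability

# Eleven illustrative forecasts (made up, not the real Samotsvety inputs):
print(round(trimmed_geomean_of_odds(
    [0.03, 0.10, 0.15, 0.20, 0.25, 0.25, 0.30, 0.35, 0.40, 0.60, 0.915]), 3))
```

Working in log-odds rather than raw probabilities is what makes this a geometric (not arithmetic) pooling rule, and the trimming step limits the influence of the single most extreme forecaster on each side.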

The Nonlinear Library
EA - Samotsvety's AI risk forecasts by elifland

The Nonlinear Library

Play Episode Listen Later Sep 9, 2022 6:52


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Samotsvety's AI risk forecasts, published by elifland on September 9, 2022 on The Effective Altruism Forum. Crossposted to LessWrong and Foxy Scout Introduction In my review of What We Owe The Future (WWOTF), I wrote: Finally, I've updated some based on my experience with Samotsvety forecasters when discussing AI risk. When we discussed the report on power-seeking AI, I expected tons of skepticism but in fact almost all forecasters seemed to give >=5% to disempowerment by power-seeking AI by 2070, with many giving >=10%. In the comments, Peter Wildeford asked: It looks like Samotsvety also forecasted AI timelines and AI takeover risk - are you willing and able to provide those numbers as well? We separately received a request from the FTX Foundation to forecast on 3 questions about AGI timelines and risk. I sent out surveys to get Samotsvety's up-to-date views on all 5 of these questions, and thought it would be valuable to share the forecasts publicly. A few of the headline aggregate forecasts are: 25% chance of misaligned AI takeover by 2100, barring pre-APS-AI catastrophe 81% chance of Transformative AI (TAI) by 2100, barring pre-TAI catastrophe 32% chance of AGI being developed in the next 20 years Forecasts In each case I aggregated forecasts by removing the single most extreme forecast on each end, then taking the geometric mean of odds. To reduce concerns of in-group bias to some extent, I calculated a separate aggregate for those who weren't highly-engaged EAs (HEAs) before joining Samotsvety. In most cases, these forecasters hadn't engaged with EA much at all; in one case the forecaster was aligned but not involved with the community. Several have gotten more involved with EA since joining Samotsvety. 
Unfortunately I'm unable to provide forecast rationales in this post due to forecaster time constraints, though I might in a future post. I provided my personal reasoning for relatively similar forecasts (35% AI takeover by 2100, 80% TAI by 2100) in my WWOTF review.

WWOTF questions

| Question | Aggregate (n=11) | Aggregate, non-pre-Samotsvety-HEAs (n=5) | Range |
|---|---|---|---|
| What's your probability of misaligned AI takeover by 2100, barring pre-APS-AI catastrophe? | 25% | 14% | 3-91.5% |
| What's your probability of Transformative AI (TAI) by 2100, barring pre-TAI catastrophe? | 81% | 86% | 45-99.5% |

FTX Foundation questions

For the purposes of these questions, FTX Foundation defined AGI as roughly “AI systems that power a comparably profound transformation (in economic terms or otherwise) as would be achieved in [a world where cheap AI systems are fully substitutable for human labor]”. See here for the full definition used. Unlike the above questions, these are not conditioning on no pre-AGI/TAI catastrophe.

| Question | Aggregate (n=11) | Aggregate, non-pre-Samotsvety-HEAs (n=5) | Range |
|---|---|---|---|
| What's the probability of existential catastrophe from AI, conditional on AGI being developed by 2070? | 38% | 23% | 4-98% |
| What's the probability of AGI being developed in the next 20 years? | 32% | 26% | 10-70% |
| What's the probability of AGI being developed by 2100? | 73% | 77% | 45-80% |

Who is Samotsvety Forecasting?

Samotsvety Forecasting is a forecasting group that was started primarily by Misha Yagudin, Nuño Sempere, and myself predicting as a team on INFER (then Foretell). Over time, we invited more forecasters who had very strong track records of accuracy and sensible comments, mostly on Good Judgment Open but also a few from INFER and Metaculus. Some strong forecasters were added through social connections, which means the group is a bit more EA-skewed than it would be without these additions. A few Samotsvety forecasters are also superforecasters.

How much do these forecasters know about AI?
Most forecasters have at least read Joe Carlsmith's report on AI x-risk, Is Power-Seeking AI an Existential Risk?. Those who are short on time may have just skimme...

A Moment of Science
A crow species can infer weight

A Moment of Science

Play Episode Listen Later Aug 24, 2022 2:00


Corvids are known to be pretty clever birds, but did you know they're good at guessing weight as well?

Naruhodo
Naruhodo #337 - Podemos confiar nas pesquisas eleitorais? - Parte 2 de 2

Naruhodo

Play Episode Listen Later Jun 6, 2022 52:11


Can a sample of a few hundred people represent a population of hundreds of millions? How far can we trust voting-intention polls? Check out the second and final part of the conversation between the curious layman, Ken Fujioka, and the PhD scientist, Altay de Souza.

> LISTEN (52min 12s)

Naruhodo! is the podcast for anyone hungry to learn: science, common sense, curiosities, challenges, and much more. With the curious layman, Ken Fujioka, and the PhD scientist, Altay de Souza. Editing: Reginaldo Cursino. http://naruhodo.b9.com.br

PARTNERSHIP: ALURA
Go deep once and for all: we guarantee knowledge with depth and diversity so you can become a T-shaped professional - including programming, front-end, data science, devops, UX & design, mobile, and innovation & management. Navigate your career: there are 1,343 courses with new releases every week, plus constant updates and improvements. Immersive content: join a community passionate about everything digital. Dive into the Alura community. Take advantage of the discount for Naruhodo listeners at the link: https://bit.ly/naruhodo_alura

REFERENCES
Ballot Paper Design and Vote Spoiling at Polish Local Elections of 2014: Establishing a Causal Link
https://journals.sagepub.com/doi/abs/10.1177/0888325419874427
The Impact of Partisan Electoral Regulation: Ballot Effects from the California Alphabet Lottery, 1978-2002
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=496863
Estimating Causal Effects of Ballot Order from a Randomized Natural Experiment: The California Alphabet Lottery, 1978–2002
https://academic.oup.com/poq/article-abstract/72/2/216/1922503?login=false
Estimating the causal effects of policy information on voter turnout: An Internet based randomized field experiment in Japan
https://openresearch-repository.anu.edu.au/handle/1885/43124
A comparative study of compulsory voting
https://www.manchesterhive.com/view/9781847792709/9781847792709.xml
Is compulsory voting habit-forming? Regression discontinuity evidence from Brazil
https://www.sciencedirect.com/science/article/pii/S0261379421000548?casa_token=lcclTxNO3PAAAAAA:CeVFbbNtZZifV6PZBUO31o-48dVa1QYPow8ECjR8MC6WiVe8stOaE0HUeEYUR1pbEXtvL8dQfEk
Determining the effect of strategic voting on election results
https://rss.onlinelibrary.wiley.com/doi/abs/10.1111/rssa.12130
Pros and cons of different sampling techniques
https://bit.ly/3wPoUYF
Sampling: Design and Analysis
https://drive.uqu.edu.sa/_/maatia/files/Sampling.pdf
Naruhodo #154 - O que é a Lei de Benford?
https://www.youtube.com/watch?v=rmCxIP3YpmQ&ab_channel=Cient%C3%ADstica%26PodcastNaruhodo
Estatística Psicobio I 2022 #07 - Métodos de Inferência e Amostragem
https://www.youtube.com/watch?v=3I69FS2lAS4&t=17s&ab_channel=Cient%C3%ADstica%26PodcastNaruhodo
Estatística Psicobio I 2022 #04 - Teorema Central do Limite e Intervalos de Confiança I
https://www.youtube.com/watch?v=0-3JCMLxX0s&t=1s&ab_channel=Cient%C3%ADstica%26PodcastNaruhodo

SUPPORT NARUHODO ON THE ORELO PLATFORM!
A very important notice: the Naruhodo podcast is now on Orelo: https://bit.ly/naruhodo-no-orelo
It is through this creator-support platform that you help Naruhodo stay on the air. You choose a monthly contribution amount and get access to exclusive content, early releases, and special perks. You can also join our private Telegram group and talk with me, with Altay, and with other supporters. And that's not all: every time you listen to or download an episode through Orelo, you'll also be chipping in a little for our project. So download the Orelo app now at Orelo.CC or from your app store and help strengthen scientific knowledge.
https://bit.ly/naruhodo-no-orelo

Naruhodo
Naruhodo #336 - Podemos confiar nas pesquisas eleitorais? - Parte 1 de 2

Naruhodo

Play Episode Listen Later May 30, 2022 59:09


Can a sample of a few hundred people represent a population of hundreds of millions? How far can we trust voting-intention polls? Check out the first part of the conversation between the curious layman, Ken Fujioka, and the PhD scientist, Altay de Souza.

> LISTEN (59min 10s)

Naruhodo! is the podcast for anyone hungry to learn: science, common sense, curiosities, challenges, and much more. With the curious layman, Ken Fujioka, and the PhD scientist, Altay de Souza. Editing: Reginaldo Cursino. http://naruhodo.b9.com.br

PARTNERSHIP: ALURA
Alura has more than 1,000 courses in many areas and is the largest online course platform in Brazil -- and you get access to all of them with a single subscription. Take advantage of the discount for Naruhodo listeners at the link: https://bit.ly/naruhodo_alura

REFERENCES
Ballot Paper Design and Vote Spoiling at Polish Local Elections of 2014: Establishing a Causal Link
https://journals.sagepub.com/doi/abs/10.1177/0888325419874427
The Impact of Partisan Electoral Regulation: Ballot Effects from the California Alphabet Lottery, 1978-2002
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=496863
Estimating Causal Effects of Ballot Order from a Randomized Natural Experiment: The California Alphabet Lottery, 1978–2002
https://academic.oup.com/poq/article-abstract/72/2/216/1922503?login=false
Estimating the causal effects of policy information on voter turnout: An Internet based randomized field experiment in Japan
https://openresearch-repository.anu.edu.au/handle/1885/43124
A comparative study of compulsory voting
https://www.manchesterhive.com/view/9781847792709/9781847792709.xml
Is compulsory voting habit-forming? Regression discontinuity evidence from Brazil
https://www.sciencedirect.com/science/article/pii/S0261379421000548?casa_token=lcclTxNO3PAAAAAA:CeVFbbNtZZifV6PZBUO31o-48dVa1QYPow8ECjR8MC6WiVe8stOaE0HUeEYUR1pbEXtvL8dQfEk
Determining the effect of strategic voting on election results
https://rss.onlinelibrary.wiley.com/doi/abs/10.1111/rssa.12130
Pros and cons of different sampling techniques
https://bit.ly/3wPoUYF
Sampling: Design and Analysis
https://drive.uqu.edu.sa/_/maatia/files/Sampling.pdf
Naruhodo #154 - O que é a Lei de Benford?
https://www.youtube.com/watch?v=rmCxIP3YpmQ&ab_channel=Cient%C3%ADstica%26PodcastNaruhodo
Estatística Psicobio I 2022 #07 - Métodos de Inferência e Amostragem
https://www.youtube.com/watch?v=3I69FS2lAS4&t=17s&ab_channel=Cient%C3%ADstica%26PodcastNaruhodo
Estatística Psicobio I 2022 #04 - Teorema Central do Limite e Intervalos de Confiança I
https://www.youtube.com/watch?v=0-3JCMLxX0s&t=1s&ab_channel=Cient%C3%ADstica%26PodcastNaruhodo

SUPPORT NARUHODO ON THE ORELO PLATFORM!
A very important notice: the Naruhodo podcast is now on Orelo: https://bit.ly/naruhodo-no-orelo
It is through this creator-support platform that you help Naruhodo stay on the air. You choose a monthly contribution amount and get access to exclusive content, early releases, and special perks. You can also join our private Telegram group and talk with me, with Altay, and with other supporters. And that's not all: every time you listen to or download an episode through Orelo, you'll also be chipping in a little for our project. So download the Orelo app now at Orelo.CC or from your app store and help strengthen scientific knowledge.
https://bit.ly/naruhodo-no-orelo

Stanford Psychology Podcast
40 - Ashley Thomas: How Children Use Saliva Sharing to Infer Close Relationships

Stanford Psychology Podcast

Play Episode Listen Later Apr 7, 2022 67:53


Joseph and Ashley talk about how infants, toddlers, and children think about social relationships, how they track who is connected and how they are connected, what we can learn about children from studying animal behavior, and how children in other cultures might think differently about social relationships. Dr. Ashley Thomas is a postdoctoral researcher in Brain and Cognitive Sciences at the Massachusetts Institute of Technology (MIT). She is interested in what infants, toddlers, and children think and feel about social relationships and social intimacy. She also investigates adults' moral judgments, asking questions like: where do moral norms come from, and how do they change? Ashley is currently a postdoctoral fellow with the Center for Research on Open and Equitable Scholarship at MIT. The fantastic Science paper that was referenced: Thomas, A. J., Woo, B., Nettle, D., Spelke, E., & Saxe, R. (2022). Early concepts of intimacy: young humans use saliva sharing to infer close relationships. Science, 375(6578), 311-315. To learn more about Ashley's research, please visit her personal website and her lab's website. Her Twitter handle is @AshleyJ_Thomas. -- We are currently conducting a survey to get to know our listeners better and to collect feedback and suggestions so we can improve our shows. If you have a minute or so, please click the link here to submit your response: https://forms.gle/dzHqnWTptW8pSVwMA. All responses will be anonymous!

The Nonlinear Library
EA - Launching the INFER Forecasting Tournament for EA uni groups by hannah wing-yee

The Nonlinear Library

Play Episode Listen Later Mar 31, 2022 13:35


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Launching the INFER Forecasting Tournament for EA uni groups, published by hannah wing-yee on March 31, 2022 on The Effective Altruism Forum. This tournament is co-sponsored by Cambridge EA (coordinated by Hannah Erlebach) and UChicago EA (coordinated by Henry Josephson). Thanks to Henry Tolchard for clarifying details about the tournament, and to Will Aldred for feedback on this post. INFER, a forecasting program funded by a grant from Open Philanthropy to generate valuable signals for U.S. Government policymakers, is hosting an intercollegiate forecasting tournament for EA groups. Any EA group affiliated with a university is welcome to participate in the tournament. The top three individual forecasters will be offered paid positions on the Pro Forecaster Program. The tournament is scheduled to run April 1 - July 31, 2022. For full details, see INFER's tournament page. How to sign up You can also find the full instructions here. Getting started should take no more than a couple of minutes; it requires individuals to create an account, and a designated team host to create a team by submitting the usernames of team members. (For all team members) How to sign up Sign up at. You will be asked to choose a username. Let your team host know your username. Start practising on any of the forecasting questions currently on the site; during the tournament, it will be specified which of the questions qualify for tournament scoring. You will be submitting forecasts individually for these questions, and the forecasts made by your team members will then be aggregated into a team forecast for scoring. (For the team host) How to create your team Click the Create a Team link under My Team. Enter a team name (which clearly indicates which university group you're from) and the usernames of your team members. 
You can always add more teammates later. Submit your request, and an Admin will accept it within 24 hrs. (For all team members) How to interact with your team You will be able to interact with members of your team in different areas of the site, for example: Go to My Team on the top navigation to see other users on your team, their activity, and start general discussions. Within any question's page (below the forecasting interface), go to the My Team tab where you can start Team Discussions which only members of your team can see, and view Current Forecasts and Forecast History. From the Leaderboards in the top navigation bar, you can see rankings of all the teams on INFER (including those not part of the EA tournament), which are updated anytime a question is scored. Note that because INFER's team leaderboard ranking takes into account all scored questions – not just the questions scored for the EA tournament – INFER will be calculating EA team rankings separately. Why forecasting? Forecasting is currently in 80,000 Hours' list of ten top-recommended career paths (as of date of publishing). You can read their review of forecasting and related research and implementation here. Quantitative forecasting can substantially improve our ability to predict the future, as compared with subjective judgement calls. Better predictions about important future events can lead to better decisions, especially in the complex, high-stakes situations in which governments and other institutions frequently find themselves. Better foresight could also maybe improve our ability to solve some of the world's most pressing problems. What makes forecasting particularly accessible is that: Being an exceptional forecaster (‘superforecaster') does not require exceptional prerequisites, and There are reliable ways to improve forecasting accuracy, such as through calibration training and regular practice. 
The best forecasters come from a wide range of backgrounds, often from fields which aren't related to predict...
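Calibration practice of the kind mentioned here is easy to self-score. As a hedged sketch (this is not part of INFER's tooling), the Brier score of a batch of binary forecasts can be computed as:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and binary
    outcomes (0 or 1). Lower is better; always guessing 50% scores 0.25."""
    if len(forecasts) != len(outcomes):
        raise ValueError("forecasts and outcomes must be the same length")
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A well-calibrated 70% forecaster who is right 7 times out of 10:
print(round(brier_score([0.7] * 10, [1] * 7 + [0] * 3), 3))  # 0.21
```

Tracking a score like this across many resolved questions is one concrete form of the regular practice and calibration training the post describes.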

The Nonlinear Library
LW - The easy goal inference problem is still hard by paulfchristiano from Value Learning

The Nonlinear Library

Play Episode Listen Later Dec 24, 2021 6:55


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is Value Learning, Part 3: The easy goal inference problem is still hard, published by paulfchristiano. Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. Posted as part of the AI Alignment Forum sequence on Value Learning. Rohin's note: In this post (original here), Paul Christiano analyzes the ambitious value learning approach. He considers a more general view of ambitious value learning where you infer preferences more generally (i.e. not necessarily in the form of a utility function), and you can ask the user about their preferences, but it's fine to imagine that you infer a utility function from data and then optimize it. The key takeaway is that in order to infer preferences that can lead to superhuman performance, it is necessary to understand how humans are biased, which seems very hard to do even with infinite data. One approach to the AI control problem goes like this: Observe what the user of the system says and does. Infer the user's preferences. Try to make the world better according to the user's preference, perhaps while working alongside the user and asking clarifying questions. This approach has the major advantage that we can begin empirical work today — we can actually build systems which observe user behavior, try to figure out what the user wants, and then help with that. There are many applications that people care about already, and we can set to work on making rich toy models. It seems great to develop these capabilities in parallel with other AI progress, and to address whatever difficulties actually arise, as they arise. That is, in each domain where AI can act effectively, we'd like to ensure that AI can also act effectively in the service of goals inferred from users (and that this inference is good enough to support foreseeable applications). 
This approach gives us a nice, concrete model of each difficulty we are trying to address. It also provides a relatively clear indicator of whether our ability to control AI lags behind our ability to build it. And by being technically interesting and economically meaningful now, it can help actually integrate AI control with AI practice. Overall I think that this is a particularly promising angle on the AI safety problem. Modeling imperfection That said, I think that this approach rests on an optimistic assumption: that it's possible to model a human as an imperfect rational agent, and to extract the real values which the human is imperfectly optimizing. Without this assumption, it seems like some additional ideas are necessary. To isolate this challenge, we can consider a vast simplification of the goal inference problem: The easy goal inference problem: Given no algorithmic limitations and access to the complete human policy — a lookup table of what a human would do after making any sequence of observations — find any reasonable representation of any reasonable approximation to what that human wants. I think that this problem remains wide open, and that we've made very little headway on the general case. We can make the problem even easier, by considering a human in a simple toy universe making relatively simple decisions, but it still leaves us with a very tough problem. It's not clear to me whether or exactly how progress in AI will make this problem easier. I can certainly see how enough progress in cognitive science might yield an answer, but it seems much more likely that it will instead tell us “Your question wasn't well defined.” What do we do then? I am especially interested in this problem because I think that “business as usual” progress in AI will probably lead to the ability to predict human behavior relatively well, and to emulate the performance of experts. 
So I really care about the residual — what do we need to know to address AI control, beyond what...
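The "easy goal inference problem" can be made concrete in a toy setting. The sketch below is my own illustration, not from the post: it inverts a complete policy table into a reward function by assuming the human is Boltzmann-rational — exactly the kind of modeling assumption whose adequacy the post questions.

```python
import math

# Toy "complete human policy": a lookup table of how often the human
# picks each of three actions (these numbers are made up).
policy = {"a": 0.70, "b": 0.20, "c": 0.10}

# Naive modeling assumption: the human is Boltzmann-rational, i.e.
# P(action) is proportional to exp(reward / temperature).
temperature = 1.0

# Under that assumption the softmax can be inverted, recovering each
# action's reward up to an additive constant:
rewards = {a: temperature * math.log(p) for a, p in policy.items()}

# Normalize so the apparently-best action has reward 0.
best = max(rewards.values())
rewards = {a: r - best for a, r in rewards.items()}
print(rewards)  # the most frequent action comes out as the most valued
```

The catch, and the crux of the post: if the human's deviations from optimality are systematic biases rather than Boltzmann noise, this inversion confidently recovers the wrong reward function, and no amount of policy data fixes that.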

Podcasts – Guard Frequency
Guard Frequency Episode 360 | I Infer, You Imply

Podcasts – Guard Frequency

Play Episode Listen Later Jun 22, 2021


Cits and Civs, Captains and Commanders, you’re tuned to episode 360 of Guard Frequency — the best damn space sim podcast ever! This episode was recorded on June 18, 2021 and released for streaming and download on Tuesday, June 22, 2021 at GuardFrequency.com [Download this episode](Right click, Save As…) This Week’s Schedule Flight Deck Elite: […]

Growth Marketing Camp
4 Lessons From a Prolific Marketing Leader To Reach Greater Heights

Growth Marketing Camp

Play Episode Listen Later Feb 8, 2021 32:15 Transcription Available


Kevin Bobowski, SVP Marketing at Aurea Software, shares lessons from working as the head of an internal marketing agency for incredible brands like Infer and Jive. And his ABM field events strategy is sure to inspire new ideas for working well with Sales.