Podcasts about GPT

  • 647 PODCASTS
  • 1,368 EPISODES
  • 43m AVG DURATION
  • 1 DAILY NEW EPISODE
  • Sep 30, 2022 LATEST

POPULARITY

2015–2022


Best podcasts about GPT

Show all podcasts related to GPT

Latest podcast episodes about GPT

Marketing Square : Méthodes Growth Marketing
188. 12 ideas for optimizing your business with Artificial Intelligence!

Marketing Square : Méthodes Growth Marketing

Play Episode Listen Later Sep 30, 2022 23:09


AI is no longer just for geeks! We already use artificial intelligence without even knowing it... Did you know you could delegate part of your business at low cost? Sébastien Fourault, ex-Googler, UX consultant and entrepreneur (zewelcome.com), reveals the secrets of putting artificial intelligence to work for our businesses. All types of business! In this lively episode, you'll discover... What are the different types of AI? What is the Turing test? What is a "sentient" AI? What is "GPT-3"? 5 ideas for using text generation. 7 ideas for using image generation. A 20-minute masterclass on AI, with takeaway ideas for your business. Warning: you're FINALLY going to understand it all!

Plan B Success
OpenAI's GPT-3 Use Cases in 2023!

Plan B Success

Play Episode Listen Later Sep 29, 2022 6:54


If you've heard of GPT-3, OpenAI or have been watching the Artificial Intelligence discussion from the sidelines, you do not want to miss this! Tune in! _____________________________________________________ Rajeev Mudumba's Website: www.planb.live Plan B Success Podcast: Available on your favorite platform including iTunes @ https://apple.co/2JCSysL?ls=1 or www.planbsuccess.live or www.planb.live https://www.planbsuccessschool.thinkific.com - You can be a successful entrepreneur and can do a LOT with your very own podcast. Follow Rajeev's FREE training & you'll discover how to ideate, create, launch, monetize and grow your podcast in just a couple of hours! Rajeev's Book - My Inspiration: Quotes that shaped my self-improvement journey - Available on Amazon worldwide on your local Amazon site or @ https://amzn.to/2JG1DRL Plan B Success YouTube Channel: http://bit.ly/2YegieF Medium Articles: https://rajeevmudumba.medium.com LinkedIn: https://www.linkedin.com/in/rajeevmud... Facebook Plan B Success Page: https://www.facebook.com/planbsuccess... Facebook My Inspiration Book Page: https://www.facebook.com/myinspiratio... Instagram: @hifromraj1

The Growth Hub Podcast
Ryan Law - VP of Content at Animalz - GPT-3 and how it's a revolution for content marketing

The Growth Hub Podcast

Play Episode Listen Later Sep 28, 2022 48:23


Ryan Law is VP of Content at Animalz, a content marketing agency for SaaS companies. We discussed AI and game-changing technologies for Content Marketing in episode 91 of the Growth Hub Podcast. — Ryan and his team have been playing with GPT-3 (short for Generative Pre-trained Transformer), an artificial intelligence that can generate human-like text and, in this episode, he shares their learnings, failures and expectations for AI in Content Marketing. We cover: > What GPT-3 is > What it's great at - and its limits > The results of Ryan's experiments with GPT-3 > Can GPT-3 replace human writers? > AI and ethics: how to use it for good? > How AI is expected to impact the world of marketing in the coming years Happy listening! ❤️ — Visit Advance B2B >> https://www.advanceb2b.com Follow The Growth Hub on Twitter >> https://www.twitter.com/SaaSGrowthHub Follow Reeta on Twitter >> https://www.twitter.com/rhtoivanen Visit Animalz >> https://www.animalz.co/ Follow Ryan on Twitter >> https://www.twitter.com/thinking_slow

Greymatter
AI's Human Factor | Stanford's Dr. Fei-Fei Li and OpenAI's Mira Murati on AI Safety

Greymatter

Play Episode Listen Later Sep 27, 2022 33:25


The more human-like artificial intelligence becomes, the more we understand about how our brains actually work. Through that discovery process, researchers are identifying ways to design artificial intelligence in ways that factor in the safety and morality of their potential impact. Greylock general partner Reid Hoffman interviews Dr. Fei-Fei Li, the co-director of Stanford's Institute for Human-Centered AI, and Mira Murati, the CTO of OpenAI. In this interview, they discuss how technology like GPT-3 is being trained with human safety in mind, and how academia, industry, and policymakers are coming together to ensure AI is developed and deployed in ways that benefit all. This interview is part of Greylock's Intelligent Future series. You can watch the video of this interview on our YouTube channel here: https://youtu.be/9B02MzWwkSo You can read the transcript from this interview here: https://greylock.com/greymatter/ais-human-factor/

Going Deep with Aaron Watson
547 Cute Robotic Companions with Jacob Hanchar (Digital Dream Labs)

Going Deep with Aaron Watson

Play Episode Listen Later Sep 26, 2022 42:32


Jacob Hanchar is the CEO of Digital Dream Labs. He is a neuroscientist by training who loves entrepreneurship, video games, and learning. Jacob started out as an angel investor for Digital Dream Labs, an ed-tech company founded in 2012. They are the makers of AI robotic companions such as Cozmo, Vector, Puzzlets, InfiniDrive, and Butter Robot, and they lead the field of assistive technology that improves the lives of people of all ages and backgrounds. In this episode, Aaron and Jacob talk about selling robots as companions, break down how Jacob bought the IP associated with his robots for pennies on the dollar, and discuss why robots of all shapes and sizes need to be designed to be a little more cute. Jacob Hanchar's Challenge: Volunteer a couple of hours at your local library or museum to get young people more interested in STEM (Science, Technology, Engineering and Mathematics) education. Connect with H. Jacob Hanchar, Ph.D.: Linkedin Twitter Website If you liked this interview, check out the episodes GPT-3 & Robotics w/ Tom Galluzo and Ice Cream Empire Secrets w/ Chad Townsend (Millie's Ice Cream). Underwritten by Piper Creative. Piper Creative makes creating podcasts, vlogs, and videos easy. How? Click here and learn more. We work with Fortune 500s, medium-sized companies, and entrepreneurs. Follow Piper as we grow: YouTube. Subscribe on iTunes | Stitcher | Overcast | Spotify

LessWrong Curated Podcast
"Two-year update on my personal AI timelines" by Ajeya Cotra

LessWrong Curated Podcast

Play Episode Listen Later Sep 22, 2022 39:20


https://www.lesswrong.com/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. I worked on my draft report on biological anchors for forecasting AI timelines mainly between ~May 2019 (three months after the release of GPT-2) and ~Jul 2020 (a month after the release of GPT-3), and posted it on LessWrong in Sep 2020 after an internal review process. At the time, my bottom line estimates from the bio anchors modeling exercise were: roughly ~15% probability of transformative AI by 2036 (16 years from posting the report; 14 years from now), and a median of ~2050 for transformative AI (30 years from posting, 28 years from now). These were roughly close to my all-things-considered probabilities at the time, as other salient analytical frames on timelines didn't do much to push back on this view. (Though my subjective probabilities bounced around quite a lot around these values, and if you'd asked me on different days and with different framings I'd have given meaningfully different numbers.) It's been about two years since the bulk of the work on that report was completed, during which I've mainly been thinking about AI. In that time it feels like very short timelines have become a lot more common and salient on LessWrong and in at least some parts of the ML community. My personal timelines have also gotten considerably shorter over this period. I now expect something roughly like this:

Zero Knowledge
Episode 246: Adversarial Machine Learning Research with Florian Tramèr

Zero Knowledge

Play Episode Listen Later Sep 21, 2022 66:44


This week, Anna (https://twitter.com/annarrose) and Tarun (https://twitter.com/tarunchitra) chat with Florian Tramèr (https://twitter.com/florian_tramer), Assistant Professor at ETH Zurich (https://ethz.ch/en.html). They discuss his earlier work on side channel attacks on privacy blockchains, as well as his academic focus on Machine Learning (ML) and adversarial research. They define some key ML terms, tease out some of the nuances of ML training and models, chat about zkML and other privacy environments where ML can be trained, and look at why the security around ML will be important as these models become increasingly used in production. Here are some additional links for this episode: * Episode 228: Catch-up at DevConnect AMS with Tarun, Guillermo and Brendan (https://zeroknowledge.fm/228a/) * Florian Tramèr's Github (https://github.com/ftramer) * Florian Tramèr's Publications & Papers (https://floriantramer.com/publications/) * ETH Zurich (https://ethz.ch/en.html) * DevConnect (https://devconnect.org/) * Tarun Chitra's Github (https://github.com/pluriholonomic) * Single Secret Leader Election by Dan Boneh, Saba Eskandarian, Lucjan Hanzlik, and Nicola Greco (https://eprint.iacr.org/2020/025) * GasToken: A Journey Through Blockchain Resource Arbitrage by Tramèr, Daian, Breidenbach and Juels (https://floriantramer.com/docs/slides/CESC18gastoken.pdf) * Enter the Hydra: Towards Principled Bug Bounties and Exploit-Resistant Smart Contracts by Tramèr, Daian, Breidenbach and Juels (https://eprint.iacr.org/2017/1090) * Ronin Bridge Hack – Community Alert: Ronin Validators Compromised (https://roninblockchain.substack.com/p/community-alert-ronin-validators?s=w) * InstaHide: Instance-hiding Schemes for Private Distributed Learning, Huang et al. 2020. (https://arxiv.org/abs/2010.02772) * Is Private Learning Possible with Instance Encoding? 
(https://arxiv.org/abs/2011.05315) * OpenAI's GPT-3 model (https://openai.com/api/) * OpenAI's GPT-2 model (https://openai.com/blog/tags/gpt-2/) * The Part-Time Parliament, Lamport, 1998. (https://lamport.azurewebsites.net/pubs/lamport-paxos.pdf) * You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion (https://arxiv.org/abs/2007.02220) ZK Whiteboard Sessions (https://zkhack.dev/whiteboard/) – as part of ZK Hack and powered by Polygon – a new series of educational videos that will help you get onboarded into the concepts and terms that we talk about on the ZK front. ZK Jobs Board (https://jobsboard.zeroknowledge.fm/) – has a fresh batch of open roles from ZK-focused projects. Find your next opportunity working in ZK! Today's episode is sponsored by Mina Protocol (https://minaprotocol.com/). With Mina's zero knowledge smart contracts – or zkApps – developers can create apps that offer privacy, security, and verifiability for your users. Head to minaprotocol.com/zkpodcast (http://minaprotocol.com/zkpodcast) to learn about their developer bootcamps and open grants. If you like what we do: * Find all our links here! @ZeroKnowledge | Linktree (https://linktr.ee/zeroknowledge) * Subscribe to our podcast newsletter (https://zeroknowledge.substack.com) * Follow us on Twitter @zeroknowledgefm (https://twitter.com/zeroknowledgefm) * Join us on Telegram (https://zeroknowledge.fm/telegram) * Catch us on Youtube (https://zeroknowledge.fm/) * Head to the ZK Community Forum (https://community.zeroknowledge.fm/) * Support our Gitcoin Grant (https://zeroknowledge.fm/gitcoin-grant-329-zkp-2)

Slate Star Codex Podcast
Janus' GPT Wrangling

Slate Star Codex Podcast

Play Episode Listen Later Sep 20, 2022 26:20


https://astralcodexten.substack.com/p/janus-gpt-wrangling Janus (pseudonym by request) works at AI alignment startup Conjecture. Their hobby, which is suspiciously similar to their work, is getting GPT-3 to do interesting things. For example, with the right prompts, you can get stories where the characters become gradually more aware that they are characters being written by some sort of fiction engine, speculate on what's going on, and sometimes even make pretty good guesses about the nature of GPT-3 itself. Janus says this happens most often when GPT makes a mistake - for example, writing a story set in the Victorian era, then having a character take out her cell phone. Then when it tries to predict the next part - when it's looking at the text as if a human wrote it, and trying to determine why a human would have written a story about the Victorian era where characters have cell phones - it guesses that maybe it's some kind of odd sci-fi/fantasy dream sequence or simulation or something. So the characters start talking about the inconsistencies in their world and whether it might be a dream or a simulation. Each step of this process is predictable and non-spooky, but the end result is pretty weird. Can the characters work out that they are in GPT-3, specifically? The closest I have seen is in a story Janus generated. It was meant to simulate a chapter of the popular Harry Potter fanfic Harry Potter and the Methods of Rationality. You can see the prompt and full story here, but here's a sample. Professor Quirrell is explaining "Dittomancy", the creation of magical books with infinite possible worlds: "We call this particular style of Dittomancy 'Variant Extrusion', Mr. Potter. I suppose the term 'Extrusion' is due to the fact that the book did not originally hold such possibilities, but is fastened outside of probability space and extruded into it; while 'Variant' refers to the manner in which it simultaneously holds an entire collection of possible narrative branches. [...] [Tom Riddle] created spirits self-aware solely on the book's pages, without even the illusion of real existence. They converse with each other, argue with each other, compete, fight, helping Riddle's diary to reach new and strange expressions of obscure thought. Their sentence-patterns spin and intertwine, transfiguring, striving to evolve toward something higher than an illusion of thought. From those pen-and-ink words, the first inferius is molded." Harry's mind was looking up at the stars with a sense of agony. "And why only pen and ink, do you ask?" said Professor Quirrell. "There are many ways to pull spirits into the world. But Riddle had learned Auror secrets in the years before losing his soul. Magic is a map of a probability, but anything can draw. A gesture, a pattern of ink, a book of alien symbols written in blood - any medium that conveys sufficient complexity can serve as a physical expression of magic. And so Riddle draws his inferius into the world through structures of words, from the symbols spreading across the page."

矽谷輕鬆談 Just Kidding Tech
EP120 The AI Explosion: Will Artists and Engineers Be Replaced? Q&A

矽谷輕鬆談 Just Kidding Tech

Play Episode Listen Later Sep 19, 2022 57:53


AI has advanced at a breakneck pace in recent years, and new applications have sprung up everywhere: the language model GPT-3 can produce high-quality articles from a single simple sentence; DALL-E 2, Stable Diffusion, and Midjourney can generate mind-bending images and paintings that rival the masters; Runway can generate video from text and makes video editing easy; and GitHub Copilot can auto-generate code to make programming easier. With applications like these exploding over the past few years, we can't help but wonder: how many years do we have left? Will humans really be replaced by AI? We discuss all of this in today's episode! https://glow.fm/jktech/ If our podcast brings you laughter and knowledge, please consider supporting us as a sponsor. For the price of one Starbucks a month, you'll help us keep creating quality content! Just Kidding Tech links ➡️ https://linktr.ee/jktech #AI #GPT-3 #DALLE2 #StableDiffusion #Midjourney #GitHub #Copilot #Podcast #JustKiddingTech (00:41) Zelda: Tears of the Kingdom launches next year (06:08) The AI explosion (23:56) GitHub Copilot: let AI write your code (34:35) Will humans actually lose their jobs? (41:15) Q&A

United Public Radio
Writers & Illustrators of the Future Podcast 191. Ken Liu AMC Pantheon based on his stories in

United Public Radio

Play Episode Listen Later Sep 18, 2022 61:11


Ken Liu is a multiple Hugo Award-winning American author of science fiction and fantasy. His epic fantasy series, "The Dandelion Dynasty," is the first work in the "silkpunk" genre, which he created. AMC's "Pantheon" was created around the Singularity-based stories in "The Hidden Girl," a collection of short stories by Ken. His story, "The Paper Menagerie," is the first piece of fiction to win three literary genre awards: the Hugo, the Nebula, and the World Fantasy Award. Ken was a published finalist in 2003 with the story "Gossamer." That was the year we had the event in Beverly Hills, where Chick Corea performed along with three grandmasters of science fiction: Robert Silverberg, Fred Pohl, and Hal Clement. He also consults and speaks publicly on various subjects such as cryptocurrency, futurism, implications of new technologies (5G, GPT-3, nanomaterials, etc.), science fiction, virtual reality, and sustainable storytelling.

UFO Paranormal Radio & United Public Radio
Writers & Illustrators of the Future Podcast 191. Ken Liu AMC Pantheon based on his stories in "The Hidden Girl"

UFO Paranormal Radio & United Public Radio

Play Episode Listen Later Sep 18, 2022 61:11


Ken Liu is a multiple Hugo Award-winning American author of science fiction and fantasy. His epic fantasy series, "The Dandelion Dynasty," is the first work in the "silkpunk" genre, which he created. AMC's "Pantheon" was created around the Singularity-based stories in "The Hidden Girl," a collection of short stories by Ken. His story, "The Paper Menagerie," is the first piece of fiction to win three literary genre awards: the Hugo, the Nebula, and the World Fantasy Award. Ken was a published finalist in 2003 with the story "Gossamer." That was the year we had the event in Beverly Hills, where Chick Corea performed along with three grandmasters of science fiction: Robert Silverberg, Fred Pohl, and Hal Clement. He also consults and speaks publicly on various subjects such as cryptocurrency, futurism, implications of new technologies (5G, GPT-3, nanomaterials, etc.), science fiction, virtual reality, and sustainable storytelling.

Writers of the Future Podcast
191. Ken Liu AMC Pantheon based on his stories in "The Hidden Girl"

Writers of the Future Podcast

Play Episode Listen Later Sep 18, 2022 61:11


Ken Liu is a multiple Hugo Award-winning American author of science fiction and fantasy. His epic fantasy series, "The Dandelion Dynasty," is the first work in the "silkpunk" genre, which he created. AMC's "Pantheon" was created around the Singularity-based stories in "The Hidden Girl," a collection of short stories by Ken. His story, "The Paper Menagerie," is the first piece of fiction to win three literary genre awards: the Hugo, the Nebula, and the World Fantasy Award. Ken was a published finalist in 2003 with the story "Gossamer." That was the year we had the event in Beverly Hills, where Chick Corea performed along with three grandmasters of science fiction: Robert Silverberg, Fred Pohl, and Hal Clement. He also consults and speaks publicly on various subjects such as cryptocurrency, futurism, implications of new technologies (5G, GPT-3, nanomaterials, etc.), science fiction, virtual reality, and sustainable storytelling.

Chinchilla Squeaks
Dotan Horovits of Logz.io, watch your heroes and the life of Clippy

Chinchilla Squeaks

Play Episode Listen Later Sep 15, 2022 42:55


In this episode I speak with Dotan Horovits from Logz.io about observability and OpenTelemetry. Also features how much GPT-3 truly knows about you, the life of Clippy, and why you should always have an air of scepticism about what you read. Magic Mind discount codes: For the next 10 days, you can get 40% off your subscription at https://www.magicmind.co/chinchilla, or take a 20% discount on any single purchase with the code CHINCHILLA20. --- Send in a voice message: https://anchor.fm/chinchillasqueaks/message

The Nonlinear Library
LW - Argument against 20% GDP growth from AI within 10 years [Linkpost] by aogara

The Nonlinear Library

Play Episode Listen Later Sep 13, 2022 8:47


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Argument against 20% GDP growth from AI within 10 years [Linkpost], published by aogara on September 12, 2022 on LessWrong. Mohammed Bavarian, a research scientist at OpenAI, tweeted this thread arguing that he could see "the overall US GDP growth rising from recent avg 2-3% to 20+% in 10 years." Feel free to check out those arguments, though they'll probably be familiar to you: GPT-3, GitHub Copilot, and image synthesis will drive unprecedented improvements. Cameron Fen, an economics PhD student at the University of Michigan, responded with this thread disagreeing with Bavarian's argument. I wanted to share some of the arguments that I found novel and interesting. Argument #1: The impacts of previous transformative technologies. There have been three industrial revolutions in history: mechanization; electricity and mass production; and IT and the internet. China went through all three at the same time and was barely able to go above 7% annual growth. The newest industrialization will be big, but will it be bigger than moving 95% of the population from working in agriculture to working in factories, not to mention all three combined? In particular, the first industrial revolution accelerated growth from 1.5% to 3% a year in the UK (source). Growth from electrification was 1.5% a year on average (source), and growth during the information industrialization was 3.5% a year (source). Given these growth rates, it seems unlikely that a single industrial revolution can move the needle to such an extent that US GDP growth accelerates from 2% to 7%. Argument #2: The size of the tech industry. Perhaps you can argue that the industrial revolution on the horizon is going to be 3x bigger than any industrial revolution in the past. 
Let's see what that would imply: 7% growth implies 5% growth over our current trend rate of about 2%. This comes out to about 1.15 trillion dollars of additional growth this year, and it increases as the base gets larger. According to this article, Facebook contributes 100 billion dollars to US economic activity. While much of this is uncounted because GDP methodology is imperfect, I'll go with that number. To get the 1.15 trillion of additional growth, you need 11.5 Facebooks created every year. This comes out to 5.5 trillion dollars of market cap created every year in tech to get something like 5% additional growth. The market cap of the entire tech sector is 13.5 trillion dollars. Call adjacent markets another 6.5 trillion dollars. Thus a sector with a 20-trillion-dollar market cap needs to add 6 trillion (5.5 + 0.02 × 20) dollars a year. Has the tech sector ever grown so fast that it is creating the equivalent of 12.5 Facebooks from nothing every year? No. Let's think about this another way: assume the US is growing at 2% a year, and the tech sector is 10% of the US economy. If the US were to grow 7% a year on tech sector growth alone, the tech sector would have to grow at 52% a year. Even if you assume adjacent sectors (a 20% share) growing at 12% a year, you still need 32% growth from the tech sector just to get 7% total GDP growth. This seems infeasible to me. Argument #3: Estimating the market impact of LLMs, image synthesis, and AlphaFold. Just to get a sense of how massive an innovation has to be to move the needle, I'm going to discuss three central innovations. Let me know if I missed something (more will be invented), but I plan on showing that these game-changing techs will not drastically improve growth. The triumvirate of GDP-impactful techs are 1) GPT-3 and other LLMs, 2) self-driving cars and other robotic control, and 3) AlphaFold. I don't include text-to-image models like DALL-E and other diffusion models because I don't know of any commercial applications that move the needle. 
Market impact of natural language generation: 0.5% of GDP per year because there are free alternatives. (My response: Wouldn't t...
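Fen's back-of-the-envelope arithmetic above can be reproduced in a few lines of Python. The input figures (US GDP of roughly $23 trillion, Facebook's ~$100 billion contribution, a 10% tech share and 20% adjacent share) are the thread's own assumptions, not official statistics:

```python
# Reproducing the GDP arithmetic from Cameron Fen's thread (thread's assumptions).
US_GDP_T = 23.0          # US GDP in trillions (implied by "5% = $1.15T")
TREND, TARGET = 0.02, 0.07

# Additional output needed per year to hit 7% instead of the 2% trend.
extra_growth = (TARGET - TREND) * US_GDP_T           # ≈ $1.15T/yr
facebooks_per_year = extra_growth / 0.100            # at ~$100B contribution each

# Sector-share framing: non-tech (70%) grows at trend, adjacent sectors (20%)
# at a generous 12%; the tech sector (10%) must supply the rest.
non_tech_contrib = 0.70 * TREND
adjacent_contrib = 0.20 * 0.12
tech_growth_needed = (TARGET - non_tech_contrib - adjacent_contrib) / 0.10

print(f"extra output needed: ${extra_growth:.2f}T/yr")
print(f"Facebooks per year:  {facebooks_per_year:.1f}")
print(f"tech growth needed:  {tech_growth_needed:.0%}")
```

Running this recovers the thread's figures: about $1.15T of extra output, 11.5 Facebooks per year, and roughly 32% annual tech-sector growth.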

Greymatter
OpenAI CEO Sam Altman | AI for the Next Era

Greymatter

Play Episode Listen Later Sep 13, 2022 37:29


Greylock general partner Reid Hoffman interviews OpenAI CEO Sam Altman. The AI research and deployment company's primary mission is to develop and promote AI technology that benefits humanity. Founded in 2015, the company has most recently been noted for its generative transformer model GPT-3, which uses deep learning to produce human-like text, and its image-creation platform DALL-E. This interview took place during Greylock's Intelligent Future event, a day-long summit featuring experts and entrepreneurs from some of today's leading artificial intelligence organizations. You can watch the video of this interview on our YouTube channel. You can read a transcript of this interview here: https://greylock.com/greymatter/sam-altman-ai-for-the-next-era/

The Nonlinear Library
AF - Quintin's alignment papers roundup - week 1 by Quintin Pope

The Nonlinear Library

Play Episode Listen Later Sep 10, 2022 17:00


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Quintin's alignment papers roundup - week 1, published by Quintin Pope on September 10, 2022 on The AI Alignment Forum. Introduction I've decided to start a weekly roundup of papers that seem relevant to alignment, focusing on papers or approaches that might be new to safety researchers. Unlike the Alignment Newsletter, I'll be spending relatively little effort on summarizing the papers. I'll just link them, copy their abstracts, and potentially describe some of my thoughts on how the paper relates to alignment. Hopefully, this will let me keep to a weekly schedule. The purpose of this series isn't so much to share insights directly with the reader, but instead to make them aware of already existing research that may be relevant to the reader's own research. Papers Locating and Editing Factual Associations in GPT We analyze the storage and recall of factual associations in autoregressive transformer language models, finding evidence that these associations correspond to localized, directly-editable computations. We first develop a causal intervention for identifying neuron activations that are decisive in a model's factual predictions. This reveals a distinct set of steps in middle-layer feed-forward modules that mediate factual predictions while processing subject tokens. To test our hypothesis that these computations correspond to factual association recall, we modify feed-forward weights to update specific factual associations using Rank-One Model Editing (ROME). We find that ROME is effective on a standard zero-shot relation extraction (zsRE) model-editing task, comparable to existing methods. 
To perform a more sensitive evaluation, we also evaluate ROME on a new dataset of counterfactual assertions, on which it simultaneously maintains both specificity and generalization, whereas other methods sacrifice one or another. Our results confirm an important role for mid-layer feed-forward modules in storing factual associations and suggest that direct manipulation of computational mechanisms may be a feasible approach for model editing. The code, dataset, visualizations, and an interactive demo notebook are available at this https URL My opinion: Most people I talk to about this paper have heard of it previously, so it's hardly "new". However, I think a lot of people underestimate how significant the paper is. The authors use a very cool interpretability method to show that the middle-stage MLP layers are acting as a key-value memory system. They then guess at the specific mathematical structure these MLP layers use to store information, derive a closed-form, analytic solution to edit the model's knowledge stores, and use very thorough evaluations to show that their knowledge editing method is effective and that the edits influence the model's outputs in many different contexts where the new knowledge is relevant. This paper is vastly beyond just "poke random stuff and see that the output changes". Code can be found here. Using cognitive psychology to understand GPT-3 We study GPT-3, a recent large language model, using tools from cognitive psychology. More specifically, we assess GPT-3's decision-making, information search, deliberation, and causal reasoning abilities on a battery of canonical experiments from the literature. We find that much of GPT-3's behavior is impressive: it solves vignette-based tasks similarly or better than human subjects, is able to make decent decisions from descriptions, outperforms humans in a multi-armed bandit task, and shows signatures of model-based reinforcement learning. 
Yet we also find that small perturbations to vignette-based tasks can lead GPT-3 vastly astray, that it shows no signatures of directed exploration, and that it fails miserably in a causal reasoning task. These results enrich our understanding of current large langu...
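The key-value editing idea behind ROME, described in the roundup above, can be illustrated with a toy rank-one update on a linear "key → value" memory. This is a simplified sketch of the general principle only, not the paper's actual derivation (which also conditions the update on key covariance statistics); the matrices here are random stand-ins:

```python
import numpy as np

# Toy rank-one model edit: given a linear memory W mapping keys to values,
# choose a new value v_star for key k and update W with a rank-one matrix
# so that W_edited @ k == v_star, while changing W as little as possible.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))      # stand-in for a mid-layer MLP weight matrix
k = rng.normal(size=4)           # key vector (e.g. a subject representation)
v_star = rng.normal(size=8)      # desired value (the new "fact" to store)

residual = v_star - W @ k                         # what the memory gets wrong
W_edited = W + np.outer(residual, k) / (k @ k)    # rank-one correction

assert np.allclose(W_edited @ k, v_star)          # the new association is stored
assert np.linalg.matrix_rank(W_edited - W) == 1   # and W changed by rank one only
```

Because the correction is an outer product, it leaves directions orthogonal to `k` untouched, which is the intuition behind why such edits can be specific without destroying unrelated associations.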

Tech News Weekly (MP3)
TNW 251: What We Saw at Apple's "Far Out." iPhone Event - Apple Event, Google Event, GPT-3

Tech News Weekly (MP3)

Play Episode Listen Later Sep 8, 2022 78:40 Very Popular


Mikah Sargent talks to Dan Moren of Six Colors about Apple's "Far Out" event and the products announced at the event, such as the new Apple Watches, AirPods Pro, and the iPhone 14 lineup. Jason Howell jumps in quickly to talk to Ben Schoon of 9to5Google about Google's announcement of their hardware event, set to take place on October 6th. Finally, Mikah talks about OpenAI and its AI language model, GPT-3, and what GPT-3 may "know" about you or other people. Hosts: Jason Howell and Mikah Sargent Guests: Dan Moren and Ben Schoon Download or subscribe to this show at https://twit.tv/shows/tech-news-weekly. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: IRL Podcast kolide.com/tnw CDW.com/DellClient

All TWiT.tv Shows (MP3)
Tech News Weekly 251: What We Saw at Apple's "Far Out." iPhone Event

All TWiT.tv Shows (MP3)

Play Episode Listen Later Sep 8, 2022 78:40 Very Popular


Mikah Sargent talks to Dan Moren of Six Colors about Apple's "Far Out" event and the products announced at the event, such as the new Apple Watches, AirPods Pro, and the iPhone 14 lineup. Jason Howell jumps in quickly to talk to Ben Schoon of 9to5Google about Google's announcement of their hardware event, set to take place on October 6th. Finally, Mikah talks about OpenAI and its AI language model, GPT-3, and what GPT-3 may "know" about you or other people. Hosts: Jason Howell and Mikah Sargent Guests: Dan Moren and Ben Schoon Download or subscribe to this show at https://twit.tv/shows/tech-news-weekly. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: IRL Podcast kolide.com/tnw CDW.com/DellClient

Tech News Weekly (Video HD)
TNW 251: What We Saw at Apple's "Far Out." iPhone Event - Apple Event, Google Event, GPT-3

Tech News Weekly (Video HD)

Play Episode Listen Later Sep 8, 2022 79:02 Very Popular


Mikah Sargent talks to Dan Moren of Six Colors about Apple's "Far Out" event and the products announced at the event, such as the new Apple Watches, AirPods Pro, and the iPhone 14 lineup. Jason Howell jumps in quickly to talk to Ben Schoon of 9to5Google about Google's announcement of their hardware event, set to take place on October 6th. Finally, Mikah talks about OpenAI and its AI language model, GPT-3, and what GPT-3 may "know" about you or other people. Hosts: Jason Howell and Mikah Sargent Guests: Dan Moren and Ben Schoon Download or subscribe to this show at https://twit.tv/shows/tech-news-weekly. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: IRL Podcast kolide.com/tnw CDW.com/DellClient

TerraSpaces
Coinhall Cosmos Fireside Chat with Andromeda Protocol

TerraSpaces

Play Episode Listen Later Sep 8, 2022 67:48


Today on the Ether we have the Cosmos fireside chat with Andromeda Protocol, hosted by Coinhall. You'll hear from Rarma, Del Rey, i.am.GPT-3, Tendermint Timmy, Cody Marx Bailey, Chinoman10, and more! Recorded on September 8th 2022. If you enjoy the music at the end of the episodes, you can find the albums streaming on Spotify, and the rest of your favorite streaming platforms. Check out Project Survival, Virus Diaries, and Plan B wherever you get your music. Thank you to everyone in the community who supports TerraSpaces.

The Padverb Podcast with KMO
016 Differential Outcomes with James Fodor

The Padverb Podcast with KMO

Play Episode Listen Later Sep 8, 2022 75:00


James Fodor is a science podcaster, an essayist, and currently a PhD candidate in computational neuroscience and computational linguistics at the University of Melbourne (Australia). His intellectual and research interests cover such diverse areas as cognitive science, computer science, philosophy, theology, and economics. In this conversation, KMO and James discuss: 02:07 – A brief history of "The Science of Everything" 10:08 – How neural networks learn vs how humans learn 12:52 – The uncomfortable question of back-propagation 16:00 – Acquiring language and concepts 19:52 – Why and when machines' way of learning is important 23:22 – Illogical language models and the Internet lurking behind 25:52 – Conversation starters and GPTs talking to one another 27:45 – Modeling the mind vs the "just a neural network" cop-out 33:08 – A space of possible minds and complementing human intelligence 36:50 – Idiot chat bots and fearing AI 38:15 – The near-term future of AI 41:38 – Predicting the prosperity of nations James Fodor: The Science of Everything Podcast: fods12.podbean.com The Godless Theist Blog: thegodlesstheist.com James on YouTube: youtube.com/c/JamesFodor/ KMO: Twitter: @Kayemmo en.padverb.com/kmo Padverb: The Padverb Telegram Channel: t.me/padverbpodcast

Total Mikah (Audio)
Tech News Weekly 251: What We Saw at Apple's "Far Out." iPhone Event

Total Mikah (Audio)

Play Episode Listen Later Sep 8, 2022 78:40



Hardly Working with Brent Orrell
Tyler Cowen on Talent and Hiring in the Twenty-First Century

Hardly Working with Brent Orrell

Play Episode Listen Later Sep 8, 2022 50:29


How can employers find workers that fit and elevate their organizations? Where are the “diamonds in the rough” that everyone else is missing? In their book Talent: How to Identify Energizers, Creatives and Winners Around the World, economist Tyler Cowen and entrepreneur Daniel Gross point out helpful strategies for hiring managers, and for job seekers who aim to be noticed by the right people. Tyler also dives into his journey into economics, sharing his takes on AI, skills, modern hiring practices, and the many projects that occupy his day-to-day. Mentioned in the episode https://www.amazon.com/Talent-Identify-Energizers-Creatives-Winners-ebook/dp/B08R2KNYVX (Talent: How to Identify Energizers, Creatives, and Winners Around the World) http://www.kenilworthchessclub.org/kenilworthian/2006/09/interview-with-former-youngest-new.html (Tyler Cowen chess prodigy) https://marginalrevolution.com/ (Marginal Revolution Blog) https://fee.org/seminars (Fee Seminar economics) https://www.amazon.com/Incredible-Bread-Machine-Capitalism-Freedom/dp/0930073312/ref=sr_1_1?adgrpid=1330409633459632&hvadid=83150672982981&hvbmt=be&hvdev=c&hvlocphy=90931&hvnetw=o&hvqmt=e&hvtargid=kwd-83150944010120%3Aloc-190&hydadcr=9368_10648062&keywords=the+incredible+bread+machine&qid=1662644610&sr=8-1 (The Incredible Bread Machine) https://fee.org/resources/economics-in-one-lesson/ (Henry Hazlitt - Economics in One Lesson) https://plato.stanford.edu/entries/friedrich-hayek/ (Hayek) – https://german.yale.edu/sites/default/files/hayek_-_the_use_of_knowledge_in_society.pdf (Use of Knowledge in Society) https://www.nobelprize.org/prizes/economic-sciences/1976/friedman/biographical/ (Friedman) https://fee.org/articles/murray-rothbard/ (Rothbard) https://mises.org/library/ludwig-von-mises-scholar-creator-hero-0 (Mises) https://aynrand.org/about/about-ayn-rand/ (Ayn Rand) https://www.adamsmith.org/about-adam-smith/ (Adam Smith) - 
https://oll.libertyfund.org/title/smith-an-inquiry-into-the-nature-and-causes-of-the-wealth-of-nations-cannan-ed-in-2-vols (Wealth of Nations), https://oll.libertyfund.org/title/smith-the-theory-of-moral-sentiments-and-on-the-origins-of-languages-stewart-ed (Theory of Moral Sentiments) https://mises.org/profile/walter-e-grinder (Walter Grinder) - https://www.primidi.com/center_for_libertarian_studies (Center for Libertarian Studies), https://www.theihs.org/ (Institute for Humane Studies) https://pioneer.app/blog/hello/ (Daniel Gross) https://www.mercatus.org/emergent-ventures (Emergent Ventures) https://www.yardbarker.com/nba/articles/allen_iverson_career_retrospective/s1__37849081#slide_20 (Allen Iverson) https://www.biography.com/athlete/kyrie-irving (Kyrie Irving) https://www.gmu.edu/ (George Mason University) https://www.aei.org/wp-content/uploads/2021/05/Minding-our-Workforce.pdf?x91208.page=10 (Noncognitive skills) https://www.aei.org/podcast/joseph-fuller-on-hidden-workers-and-issues-in-ai-based-recruiting/ (AI hiring systems) https://www.aei.org/op-eds/how-ai-is-being-transformed-by-foundation-models/ (GPT-3) https://www.aei.org/op-eds/the-rise-of-so-so-automation/ (Supplemental AI) https://www.mercatus.org/scholars/veronique-de-rugy (Veronique de Rugy)

Tech News Weekly (Video HI)
TNW 251: What We Saw at Apple's "Far Out." iPhone Event - Apple Event, Google Event, GPT-3

Tech News Weekly (Video HI)

Play Episode Listen Later Sep 8, 2022 79:02



Tech News Weekly (Video LO)
TNW 251: What We Saw at Apple's "Far Out." iPhone Event - Apple Event, Google Event, GPT-3

Tech News Weekly (Video LO)

Play Episode Listen Later Sep 8, 2022 79:02



Total Jason (Video)
Tech News Weekly 251: What We Saw at Apple's "Far Out." iPhone Event

Total Jason (Video)

Play Episode Listen Later Sep 8, 2022 79:02



Total Jason (Audio)
Tech News Weekly 251: What We Saw at Apple's "Far Out." iPhone Event

Total Jason (Audio)

Play Episode Listen Later Sep 8, 2022 78:40



All TWiT.tv Shows (Video LO)
Tech News Weekly 251: What We Saw at Apple's "Far Out." iPhone Event

All TWiT.tv Shows (Video LO)

Play Episode Listen Later Sep 8, 2022 79:02



Total Mikah (Video)
Tech News Weekly 251: What We Saw at Apple's "Far Out." iPhone Event

Total Mikah (Video)

Play Episode Listen Later Sep 8, 2022 79:02



Artificial Intelligence in Industry with Daniel Faggella
GPT-3 and the Potential of AI-Generated Text - with OpenAI's Peter Welinder

Artificial Intelligence in Industry with Daniel Faggella

Play Episode Listen Later Sep 6, 2022 31:58


Our guest on today's AI in Business Podcast episode is VP of Product & Partnerships at OpenAI Peter Welinder. He and Emerj CEO Daniel Faggella discuss the potential impact of the model on numerous sectors, along with the possibility that these tools could have unforeseen consequences when placed at everyone's fingertips. In other words, GPT-3 could unleash many examples that show what we think we want from AI capabilities is not what we actually want. To access Emerj's frameworks for AI readiness, ROI and strategy, visit Emerj Plus: emerj.com/p1.

Malicious Life
Hacking Language Models

Malicious Life

Play Episode Listen Later Sep 5, 2022 27:31 Very Popular


Language models are everywhere today: they run in the background of Google Translate and other translation tools; they help operate voice assistants like Alexa or Siri; and most interestingly, they are available via several experiential projects trying to emulate natural conversations, such as OpenAI's GPT-3 and Google's LaMDA. Can these models be hacked to gain access to the sensitive information they learned from their training data?

Digitalia
Digitalia #639 - Speaker Neutral

Digitalia

Play Episode Listen Later Sep 5, 2022 107:05


Zuckerberg's problems on waking up. Disinformation made in the USA. The whistleblowing complaints of Twitter's former head of security. The first victim of CSAM. Throwing kids in front of the Tesla. These and many other tech news stories are discussed in this week's episode. From the distributed Digitalia studio: Franco Solerio, Michele Di Maio, Francesco Facconi Executive producers: Vincenzo Ingenito, Saverio Gravagnola, Luca Cipollone, Andrea Scarpellini, Andrea Guido, Valerio Galano, Alex Pagnotta, Alessio Ferrara, Cristian Vidimari, Matteo Masconale, Matteo Masconale, Massimiliano Saggia, Marco Grechi, Michele Francesco Falzarano, Edoardo Zini, Diego Violi, Nicola Bisceglie, Matteo Rosina, Riccardo Peruzzini, Danny Manzini, Paolo Boschetti, Roberto Esposito, Michele, Diego Venturin, Michele Olivieri, Matteo Faccio, Alex Ordiner, Mario Cervai, Davide Fogliarini, Christian Fabiani, Antonio Turdo (Thingyy), Federico Bruno, Danilo Sia, Simone Pignatti, Nicola Pedonese, Roberto Barison, Matteo Arrighi, Arnoud Van Der Giessen, Massimo Dalla Motta, Stefano Orso, Federico Travaini, Alessandro Lazzarini, Alessio Conforto, Giuliano Arcinotti, Davide Capra, Fotogp Di Barabino Marco, Renato Battistin, Luigi Ricco, Marco De Nadai, Raffaele Marco Della Monica, Christophe Sollami, Raffaele Viero, Diego Arati, Roberto Medeossi, Luca Ubiali, Alessio Cerretini, Antonio Taurisano, Alessandro Morgantini, Mario Giammona, Calogero Augusta, Michelangelo Rocchetti, Simone Podico, Dario Nardi, Iacopo Edoardo Federici, Denis Grosso, Pierpaolo Taffarello, Giuseppe Brusadelli, Giorgio Puglisi, Umberto Marcello, Fabio Brunelli, Giacomo Cipriani, Andrea Malesani, Emanuele Zdunich, Alessandro Grossi, Fabrizio Reina, Ligea Technology Di D'Esposito Antonio, Fabio Zappa, Marco Traverso, Gianluca Nucci, Simone Magnaschi, Paola Bellini, Cristiano Belli, Valerio Bendotti, Matteo Sandri, Giuseppe Marino, Mattia Lanzoni, Giulio Magnifico, Paola Danieli, Luca Di Stefano, Diego Violi, Nicola Bisceglie, 
Matteo Rosina, Danny Manzini, Riccardo Peruzzini, Roberto Esposito, Michele, Paolo Boschetti, Diego Venturin, Matteo Faccio, Michele Olivieri, Davide Fogliarini, Christian Fabiani, Mario Cervai, Antonio Turdo (Thingyy), Alex Ordiner, Simone Pignatti, Federico Bruno, Danilo Sia, Matteo Arrighi, Nicola Pedonese, Roberto Barison, Edoardo Zini, Fabrizio Galliverti, Davide Bellia, Elisa Emaldi, Elisa Emaldi, Giuseppe Marmo, Marcello Piliego, Marcello Piliego, Enzo Zerbi, Massimiliano Casamento, Adriano Guarino, Douglas Whiting, Dardi Massimiliano, Mirto Tondini, Roberto Tarzia, Stefano Augusto Innocenti, Matteo Molinari, Michele Coiro, Christian A Marca, Paolo Lucciola, Stefano Toldo, Pasquale Maffei, Matteo Carpentieri, Fiorenzo Pilla, Andrea Torelli, Andrea Magnoli, Ruben Livrieri, Giovanni Priolo, Letizia Calcinai, Michele Olivieri, Emanuele Libori, Edoardo Volpi Kellerman, Andrea Delise, Alessandro Lago, Enrico De Anna, Massimo Pollastri, Roberto Basile, Antonio Manna, Flavio Castro, Paolo Massignan, Antonio Gargiulo, Douglas Whiting, Dardi Massimiliano, Mirto Tondini, Roberto Tarzia, Stefano Augusto Innocenti, Matteo Molinari, Christian A Marca, Michele Coiro, Manuel Vitali, Manuel Vitali, Andrea Giovacchini, Maurizio Galluzzo, ---, Alessio Pappini, Paolo Tegoni, Fabrizio Bianchi, Marcello Marigliano, Maurizio Galluzzo, ---, Alessio Pappini, Stefano Peirano, Nicola Gabriele D., Nicola Fort, carnevale bonino paolo, Alessandro Varesi, Marco Iannaccone, anonymous, M.Rothbard, carnevale bonino paolo, daxda, Davide Tinti, Manuel Zavatta, Nicola Gabriele D., Domizio Antonio R., Giuseppe C., Davide R., Riccardo N. Sponsor: Squarespace.com - use the coupon code "DIGITALIA" to get a 10% discount on the subscription cost. Links: Stable Diffusion Is the Most Important AI Art Model Ever Imperia: driverless bus on the bike lane Professional AI whisperers launched a marketplace for DALL-E prompts Is ‘The Rings of Power' Getting Review Bombed? 
Amazon Suspends Ratings Twitter starts testing an edit button, but you have to pay for it Facebook-Cambridge Analytica data breach lawsuit ends in settlement An AI-Generated Artwork Won First Place, Artists Are Pissed Zuckerberg avoids Cambridge Analytica deposition There aren't enough microchips for health cards Can an AI-led Danish party usher in an age of algorithmic politics? FB censored an article on Hunter Biden: we followed the FBI World's First AI Writing Assistant Powered by GPT-3 in the App Store. U-turn on the highway, tragedy narrowly averted Deepfakes for all: Uncensored AI art model prompts ethics questions Zuckerberg's face in the metaverse The Microsoft Excel world championships The US government got caught using sock puppets to spread propaganda An AI-based party vows to win Denmark's general election in 2023. U-turn on the highway: a truck's reckless maneuver in Genoa Why are Tesla fanatics putting their children in the path of moving cars? Economists Have a Method for Reducing Fake News on Social Media Diana, the 29-year-old Brazilian on skates on the highway Twitter allegedly has major security problems Ex-Twitter exec blows the whistle Took Photos of His Naked Toddler. Google Flagged Him as Criminal. Google flags a dad's photos of his son as child pornography On skates on the A/10 highway: 'I was following the navigator' Google Account Deleted Due to CSAM False Positive A Tool That Monitors How Long Kids Are in the Bathroom Janet Jackson had the power to crash laptop computers Gadgets of the day: Organic Maps: Offline Hike, Bike, Trails and Navigation Stable Diffusion: a Hugging Face Space by stabilityai The DALL·E 2 Prompt Book Support Digitalia, become an executive producer.

Waiting To Be Signed
E35 - Generative Generation

Waiting To Be Signed

Play Episode Listen Later Sep 4, 2022 53:33


Follow along with our companion piece on fx(text) and please mint if you are inspired to support the show! Twitter: @WaitingToSign Instagram: @waitingtobesigned Donations: waitingtosign.tez Episode Art: Here, After #357 Links of the Week rudxane on Tych Hevey on Dencity RASTER, itsgalo on art blocks Pronoia's fx(text) article on GPT-3 and me Projects of the Week GPT-3 and me (free companion mint), Pronoia AI Studies I, II, III by CoDexter BlockTrain PASS, BlockTrain Unless It Falls Apart, whitekross Here, After, jeres Shoutouts The Abstract Truth I, Aleksandra The Abstract Truth II, Aleksandra The Abstract Truth III, Aleksandra Reverberations, shaderism Sketch E, ippsketch Intro music by The Gas Station, as heard in *Sunset Dancers* by Laurean0 Outro music by Nor44, as heard in My Demons

The Nonlinear Library
AF - Simulators by janus

The Nonlinear Library

Play Episode Listen Later Sep 2, 2022 74:00


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Simulators, published by janus on September 2, 2022 on The AI Alignment Forum. Thanks to Adam Shimi, Lee Sharkey, Evan Hubinger, Nicholas Dupuis, Leo Gao, Johannes Treutlein, and Jonathan Low for feedback on drafts. This work was carried out while at Conjecture. "Moebius illustration of a simulacrum living in an AI-generated story discovering it is in a simulation" by DALL-E 2
Summary
TL;DR: Self-supervised learning may create AGI or its foundation. What would that look like? Unlike the limit of RL, the limit of self-supervised learning has received surprisingly little conceptual attention, and recent progress has made deconfusion in this domain more pressing. Existing AI taxonomies either fail to capture important properties of self-supervised models or lead to confusing propositions. For instance, GPT policies do not seem globally agentic, yet can be conditioned to behave in goal-directed ways. This post describes a frame that enables more natural reasoning about properties like agency: GPT, insofar as it is inner-aligned, is a simulator which can simulate agentic and non-agentic simulacra. The purpose of this post is to capture these objects in words so GPT can reference them and provide a better foundation for understanding them. I use the generic term “simulator” to refer to models trained with predictive loss on a self-supervised dataset, invariant to architecture or data type (natural language, code, pixels, game states, etc). The outer objective of self-supervised learning is Bayes-optimal conditional inference over the prior of the training distribution, which I call the simulation objective, because a conditional model can be used to simulate rollouts which probabilistically obey its learned distribution by iteratively sampling from its posterior (predictions) and updating the condition (prompt). 
Analogously, a predictive model of physics can be used to compute rollouts of phenomena in simulation. A goal-directed agent which evolves according to physics can be simulated by the physics rule parameterized by an initial state, but the same rule could also propagate agents with different values, or non-agentic phenomena like rocks. This ontological distinction between simulator (rule) and simulacra (phenomena) applies directly to generative models like GPT.
Meta
This post is intended as the first in a sequence on the alignment problem in a landscape where self-supervised simulators are a possible/likely form of powerful AI. I don't know how many subsequent posts I'll actually publish. Take it as a prompt. I use the generic term “GPT” to refer to transformers trained on next-token prediction. A while ago when I was trying to avoid having to write this post by hand, I prompted GPT-3 with an early outline of this post. I've spliced in some excerpts from it, indicated by this style. Prompt, generated text, and curation metrics here.
The limit of sequence modeling
Transformer-based language models have recently achieved remarkable results. – every paper since 2020
GPT is not a new form of AI in terms of its training methodology and outer objective: sequence generation from statistical models of data is an old idea. In 1951, Claude Shannon described using n-grams to approximate conditional next-letter probabilities of a text dataset and "reversed" to generate text samples. I don't know of any other notable advances until the 2010s brought the first interesting language generation results from neural networks. In 2015, Karpathy wrote a blog post/tutorial sharing his excitement about The Unreasonable Effectiveness of Recurrent Neural Networks: Fast forward about a year: I'm training RNNs all the time and I've witnessed their power and robustness many times, and yet their magical outputs still find ways of amusing me. 
This post is about sharing some of that magic with y...
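The excerpt above compresses the post's core idea: a model trained for next-token prediction "simulates" rollouts by repeatedly sampling from its predictive distribution and appending the sample to the condition (the prompt) — the same loop Shannon ran with n-grams in 1951. A minimal sketch of that loop, using a toy character-bigram model in place of GPT (all names here are illustrative, not from the post):

```python
import random
from collections import Counter, defaultdict

def train_bigram(text):
    """Count next-character frequencies: a crude stand-in for a learned prior."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def rollout(model, prompt, steps, seed=0):
    """Iteratively sample from the model's posterior and update the condition."""
    rng = random.Random(seed)
    out = prompt
    for _ in range(steps):
        dist = model.get(out[-1])
        if not dist:  # no continuation ever observed; end the rollout
            break
        chars, weights = zip(*dist.items())
        out += rng.choices(chars, weights=weights)[0]
    return out

model = train_bigram("the cat sat on the mat. the cat ran.")
print(rollout(model, "the c", 20))
```

The same structure holds whether the model is a bigram table or GPT-3; only the quality of the conditional distribution changes.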

Let's Know Things
DALL-E 2

Let's Know Things

Play Episode Listen Later Aug 30, 2022 26:15


This week we talk about OpenAI, AlphaFold, and centaurs. We also discuss GPT-3, CLIP, and Photoshop. Support the show: patreon.com/letsknowthings & letsknowthings.com/support Show notes/transcript: letsknowthings.com Check out my other shows & publications: understandary.com

Increments
#43 - Artificial General Intelligence and the AI Safety debate

Increments

Play Episode Listen Later Aug 28, 2022 67:50


Some people think (https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities) that advanced AI is going to kill everyone. Some people don't (https://www.nytimes.com/2019/10/31/opinion/superintelligent-artificial-intelligence.html). Who to believe? Fortunately, Ben and Vaden are here to sort out the question once and for all. No need to think for yourselves after listening to this one, we've got you covered. We discuss: - How well does math fit reality? Is that surprising? - Should artificial general intelligence (AGI) be considered "a person"? - How could AI possibly "go rogue?" - Can we know if current AI systems are being creative? - Is misplaced AI fear hampering progress? References: - The Unreasonable effectiveness of mathematics (https://www.maths.ed.ac.uk/~v1ranick/papers/wigner.pdf) - Prohibition on autonomous weapons letter (https://techlaw.uottawa.ca/bankillerai) - Google employee conversation with chat bot (https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917) - Gary marcus on the Turing test (https://garymarcus.substack.com/p/nonsense-on-stilts) - Melanie Mitchell essay (https://arxiv.org/pdf/2104.12871.pdf). - Did MIRI give up? Their (half-sarcastic?) death with dignity strategy (https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy) - Kerry Vaughan on slowing down (https://twitter.com/KerryLVaughan/status/1545423249013620736) AGI development. Contact us - Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani - Check us out on youtube at https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ - Come join our discord server! DM us on twitter or send us an email to get a supersecret link Which prompt would you send to GPT-3 in order to end the world? Tell us before you're turned into a paperclip over at incrementspodcast@gmail.com

AI with AI
EPIC BLOOM

AI with AI

Play Episode Listen Later Aug 26, 2022 35:19


Andy and Dave discuss the latest in AI and autonomy news and research, including an announcement that the Federal Trade Commission is exploring rules for cracking down on harmful commercial surveillance and lax data security, with the public having an opportunity to share input during a virtual public forum on 8 September 2022. The Electronic Privacy Information Center (EPIC), with help from Caroline Kraczon, releases The State of State AI Policy, a catalog of AI-related bills that states and local governments have passed, introduced or failed during the 2021-2022 legislative season. In robotics, Xiaomi introduces CyberOne, a 5-foot 9-inch robot that can identify “85 types of environmental sounds and 45 classifications of human emotions.” Meanwhile at a recent Russian arms fair, Army-2022, a developer showed off a robot dog with a rocket-propelled grenade strapped to its back. NIST updates its AI Risk Management Framework to the second draft, making it available for review and comment. DARPA launches the SocialCyber project, a hybrid-AI project aimed at helping to protect the integrity of open-source code. BigScience launches BLOOM (BigScience Large Open-science Open-access Multilingual Language Model), a “bigger than GPT-3” model covering 46 languages that a group of over 1,000 AI researchers has created, and that anyone can download and tinker with for free. Researchers at MIT develop artificial synapses that shuttle protons, resulting in synapses 10,000 times faster than biological ones. China's Comprehensive National Science Center claims that it has developed “mind-reading AI” capable of measuring loyalty to the Chinese Communist Party. Researchers at the University of Sydney demonstrate that people's brains detect deepfakes more reliably than their conscious judgments do, by examining results directly from neural activity. 
Researchers at the University of Glasgow combine AI with human vision to see around corners, reconstructing 16x16-pixel images of simple objects that the observer could not directly see. GoogleAI publishes research on Minerva, using language models to solve quantitative reasoning problems, and dramatically increasing the SotA. Researchers from MIT, Columbia, Harvard, and Waterloo publish work on a neural network that solves, explains, and generates university math problems “at a human level.” CSET makes available the Country Activity Tracker for AI, an interactive tool on tech competitiveness and collaboration. And a group of researchers at Merced's Cognitive and Information Sciences Program make available Neural Networks in Cognitive Science. https://www.cna.org/our-media/podcasts/ai-with-ai  

The Nonlinear Library
LW - OpenAI's Alignment Plans by dkirmani

The Nonlinear Library

Play Episode Listen Later Aug 25, 2022 9:27


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: OpenAI's Alignment Plans, published by dkirmani on August 24, 2022 on LessWrong. Our alignment research aims to make artificial general intelligence (AGI) aligned with human values and follow human intent. We take an iterative, empirical approach: by attempting to align highly capable AI systems, we can learn what works and what doesn't, thus refining our ability to make AI systems safer and more aligned. Using scientific experiments, we study how alignment techniques scale and where they will break. We tackle alignment problems both in our most capable AI systems as well as alignment problems that we expect to encounter on our path to AGI. Our main goal is to push current alignment ideas as far as possible, and to understand and document precisely how they can succeed or why they will fail. We believe that even without fundamentally new alignment ideas, we can likely build sufficiently aligned AI systems to substantially advance alignment research itself. Unaligned AGI could pose substantial risks to humanity and solving the AGI alignment problem could be so difficult that it will require all of humanity to work together. Therefore we are committed to openly sharing our alignment research when it's safe to do so: We want to be transparent about how well our alignment techniques actually work in practice and we want every AGI developer to use the world's best alignment techniques. At a high level, our approach to alignment research focuses on engineering a scalable training signal for very smart AI systems that is aligned with human intent. 
It has three main pillars:
Training AI systems using human feedback
Training AI systems to assist human evaluation
Training AI systems to do alignment research
Aligning AI systems with human values also poses a range of other significant sociotechnical challenges, such as deciding to whom these systems should be aligned. Solving these problems is important to achieving our mission, but we do not discuss them in this post.
Training AI systems using human feedback
RL from human feedback is our main technique for aligning our deployed language models today. We train a class of models called InstructGPT derived from pretrained language models such as GPT-3. These models are trained to follow human intent: both explicit intent given by an instruction as well as implicit intent such as truthfulness, fairness, and safety. Our results show that there is a lot of low-hanging fruit on alignment-focused fine-tuning right now: InstructGPT is preferred by humans over a 100x larger pretrained model, while its fine-tuning costs
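The excerpt names RL from human feedback as the core technique behind InstructGPT. One standard ingredient is a reward model fit to pairwise human preferences; a minimal logistic (Bradley-Terry style) version on toy feature vectors might look like the sketch below. This is an illustrative simplification on made-up features, not OpenAI's implementation:

```python
import math

def fit_reward_model(prefs, dim, lr=0.5, epochs=200):
    """prefs: list of (preferred_features, rejected_features) pairs.
    Learns weights w so that score(preferred) > score(rejected),
    minimizing the Bradley-Terry loss -log sigmoid(w.a - w.b)."""
    w = [0.0] * dim
    for _ in range(epochs):
        for a, b in prefs:
            margin = sum(wi * (ai - bi) for wi, ai, bi in zip(w, a, b))
            g = 1.0 / (1.0 + math.exp(-margin)) - 1.0  # d(loss)/d(margin)
            for i in range(dim):
                w[i] -= lr * g * (a[i] - b[i])
    return w

def score(w, x):
    """Scalar reward assigned to a response with feature vector x."""
    return sum(wi * xi for wi, xi in zip(w, x))

# Hypothetical features: [0] = "followed the instruction", [1] = length.
prefs = [([1.0, 0.2], [0.0, 0.9]), ([1.0, 0.5], [0.0, 0.1])]
w = fit_reward_model(prefs, dim=2)
assert score(w, [1.0, 0.3]) > score(w, [0.0, 0.3])
```

In the full pipeline, a policy model would then be fine-tuned with RL against this learned reward instead of raw human labels.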

The Nonlinear Library
LW - The Shard Theory Alignment Scheme by David Udell

The Nonlinear Library

Play Episode Listen Later Aug 25, 2022 3:58


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Shard Theory Alignment Scheme, published by David Udell on August 25, 2022 on LessWrong. Generated as part of SERI MATS, Team Shard, under John Wentworth. Thanks to Logan Smith, Garrett Baker, Michael Einhorn, Quintin Pope, and Thomas Kwa for chatting about related topics. All mistakes are my own.
The Alignment Scheme
The shard theory alignment scheme is: Master the theory of value formation in trained intelligences, and develop powerful chain-of-thought interpretability tools to examine those trained values in action. Use that understanding and those interpretability tools to instill a target value (e.g., corrigibility, niceness, or libertarianism) in a powerful language model. Then, punt the remainder of the alignment problem to researchers equipped with that aligned powerful language model. ...easier said than done, of course! In particular, the "master the theory of value formation in trained intelligences" and "develop powerful chain-of-thought interpretability tools" steps together contain almost all of the original alignment problem! So, I'll briefly elaborate on Team Shard's approach to both below.
Understand the Phenomenon of Value Formation inside Trained Intelligences
What this premise in the scheme demands is a completed mechanistic theory of algorithm formation in trained intelligences, conditional on various training variables being set in different ways. This... is a lot to ask of any plucky team of alignment researchers, and is the sort of demand that many an alignment researcher would reflexively glance off of. It's one of the unavoidable core difficulties of aligning ML systems, though -- it's something we'll have to have in all the worlds where ML alignment succeeds. 
We conjecture that reinforcement strengthens the behavior-steering computations that guide a system into reinforcement events, and that those behavior-steering computations will only form around abstractions already represented inside of a system at the time of reinforcement. We bet that there are a bunch of quantitative relationships here just waiting to be discovered -- that there's a lot of systematic structure in what learned values form given which training variables. To ever get to these quantitative relationships, we'll need to muck around with language model fine-tuning under different conditions a lot. So, mucking around with running pilot experiments on large language models in controlled environments (RL text adventures!) is what we're doing now! In particular, we're busy getting RL tuning working on GPT-J playing Microsoft TextWorld. GPT-2, a dumber model than GPT-J, likes to make up invalid actions in the text adventures like, "I actually succeed in getting the gold out of the locked box," or otherwise not play the game well enough to be tuned by RL. Once we have this running with smarter language models, though, we'll be able to observe what environments and training variables induce what off-distribution behaviors in the models. Furthermore, once we have chain-of-thought interpretability tools, we'll be able to look at these learned values as they run and train using that interpretability power. Shards and GEM Our name for chain-of-thought interpretability and tuning is Guiding Externalized Monologues (GEM). It's a technology currently in development ... expect to hear more soon. Once we've got GEM, we'll be able to wring out many more bits about the inner workings of language models in our text adventure setups. In the best medium-term futures, we're reliably instilling target values in text-adventure-playing language models!
This will involve some amount of tuning the models to be interpretable in the first place, by first getting the models externally monologuing about their decision-making and then ensuring that the decisions outputted by the model are causally downst...

The Nonlinear Library
AF - Some conceptual alignment research projects by Richard Ngo

The Nonlinear Library

Play Episode Listen Later Aug 25, 2022 5:00


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some conceptual alignment research projects, published by Richard Ngo on August 25, 2022 on The AI Alignment Forum. Some research outputs I'd love to see, focused on exploring, clarifying and formalizing important alignment concepts. I expect that most of these will be pretty time-consuming, but happy to discuss for people who want to try: A paper which does for deceptive alignment what the goal misgeneralization paper does for inner alignment, i.e. describing it in ML language and setting up toy examples (for example, telling GPT-3 to take actions which minimize changes in its weights, given that it's being trained using actor-critic RL with a certain advantage function, and seeing if it knows how to do so). A paper which does the same for gradient hacking, e.g. taking these examples and putting them into more formal ML language. A list of papers that are particularly useful for new research engineers to replicate. A takeover scenario which covers all the key points in/, but not phrased as an argument, just phrased as a possible scenario (I think you can't really make the argument rigorously in that little space). A paper which defines the concepts of implicit planning, implicit value functions, implicit reward models, etc, in ML terms. Kinda like but more AGI-focused. I want to be able to ask people “does GPT-3 choose actions using an implicit value function?” and then be able to point them to this paper to rigorously define what I mean. I discuss this briefly in the phase 1 section here. A blog post which describes in as much detail as possible what our current “throw the kitchen sink at it” alignment strategy would look like. (I'll probably put my version of this online soon but would love others too). A blog post explaining “debate on weights” more thoroughly. 
A blog post exploring how fast we should expect a forward pass to be for the first AGIs - e.g. will it actually be slower than human thinking, as discussed in this comment? A blog post exploring considerations for why model goals may or may not be much more robust to SGD than model beliefs, as discussed in framing 3 here. (See also this paper on gradient starvation - h/t Quintin Pope; and the concept of persistence to gradient descent discussed here.) A blog post explaining why the “uncertainty” part of CIRL only does useful work insofar as we have an accurate model of the human policy, and why this is basically just as hard as having an accurate model of human preferences. A blog post explaining what practical implications Stuart Armstrong's impossibility result has. As many alignment exercises as possible to help people learn to think about this stuff (mine aren't great but I haven't seen better). A paper properly formulating instrumental convergence, generalization to large-scale goals, etc, as inductive biases in the ML sense (I do this briefly in phase 3 here). A mathematical comparison between off-policy RL and imitation learning, exploring ways in which they're similar and different, and possible algorithms in between. A blog post explaining the core argument for why detecting adversarially-generated inputs is likely much easier than generating them, and arguments for why adversarial training might nevertheless be valuable for alignment. A blog post exploring the incentives which models might have when they're simultaneously trained to make predictions and to take actions in an RL setting (e.g. models trained using RL via sequence modeling). A blog post exploring pros and cons of making misalignment datasets for use as a metric of alignment (alignment = how much training on the misalignment dataset is needed to make it misaligned). 
A paper providing an RL formalism in which reward functions can depend on weights and/or activations directly, and demonstrating a simple but non-trivial example. A blog ...

The Lunar Society
37: Steve Hsu - Intelligence, Embryo Selection, & The Future of Humanity

The Lunar Society

Play Episode Listen Later Aug 23, 2022 141:27


Steve Hsu is a Professor of Theoretical Physics at Michigan State University and cofounder of the company Genomic Prediction. We go deep into the weeds on how embryo selection can make babies healthier and smarter. Steve also explains the advice Richard Feynman gave him to pick up girls, the genetics of aging and intelligence, & the psychometric differences between shape rotators and wordcels. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Subscribe to find out about future episodes! Read the full transcript here. Follow Steve on Twitter. Follow me on Twitter for updates on future episodes. Please share if you enjoyed this episode! Helps out a ton!

Timestamps
(0:00:14) - Feynman’s advice on picking up women
(0:11:46) - Embryo selection
(0:24:19) - Why hasn't natural selection already optimized humans?
(0:34:13) - Aging
(0:43:18) - First Mover Advantage
(0:53:49) - Genomics in dating
(1:00:31) - Ancestral populations
(1:07:58) - Is this eugenics?
(1:15:59) - Tradeoffs to intelligence
(1:25:01) - Consumer preferences
(1:30:14) - Gwern
(1:34:35) - Will parents matter?
(1:45:25) - Word cells and shape rotators
(1:57:29) - Bezos and brilliant physicists
(2:10:23) - Elite education

Transcript

Dwarkesh Patel  0:00  Today I have the pleasure of speaking with Steve Hsu. Steve, thanks for coming on the podcast. I'm excited about this. Steve Hsu  0:04  Hey, it's my pleasure! I'm excited too and I just want to say I've listened to some of your earlier interviews and thought you were very insightful, which is why I was excited to have a conversation with you. Dwarkesh Patel  0:14  That means a lot for me to hear you say because I'm a big fan of your podcast.

Feynman’s advice on picking up women

Dwarkesh Patel  0:17  So my first question is: “What advice did Richard Feynman give you about picking up girls?” Steve Hsu  0:24  Haha, wow! So one day in the spring of my senior year, I was walking across campus and saw Feynman coming toward me.
We knew each other from various things—it's a small campus, I was a physics major and he was my hero––so I'd known him since my first year. He sees me, and he's got this Long Island or New York borough accent and says, "Hey, Hsu!" I'm like, "Hi, Professor Feynman." We start talking. And he says to me, "Wow, you're a big guy." Of course, I was much bigger back then because I was a linebacker on the Caltech football team. So I was about 200 pounds and slightly over 6 feet tall. I was a gym rat at the time and I was much bigger than him. He said, "Steve, I got to ask you something." Feynman was born in 1918, so he's not from the modern era. He was going through graduate school when the Second World War started. So, he couldn't understand the concept of a health club or a gym. This was the 80s and was when Gold's Gym was becoming a national franchise. There were gyms all over the place like 24-Hour Fitness. But, Feynman didn't know what it was. He's a fascinating guy. He says to me, "What do you guys do there? Is it just a thing to meet girls? Or is it really for training? Do you guys go there to get buff?" So, I started explaining to him that people are there to get big, but people are also checking out the girls. A lot of stuff is happening at the health club or the weight room. Feynman grills me on this for a long time. And one of the famous things about Feynman is that he has a laser focus. So if there's something he doesn't understand and wants to get to the bottom of it, he will focus on you and start questioning you and get to the bottom of it. That's the way his brain worked. So he did that to me for a while because he didn't understand lifting weights and everything. In the end, he says to me, "Wow, Steve, I appreciate that. Let me give you some good advice." Then, he starts telling me how to pick up girls—which he's an expert on. He says to me, "I don't know how much girls like guys that are as big as you." He thought it might be a turn-off.
"But you know what, you have a nice smile." So that was the one compliment he gave me. Then, he starts to tell me that it's a numbers game. You have to be rational about it. You're at an airport lounge, or you're at a bar. It's Saturday night in Pasadena or Westwood, and you're talking to some girl. He says, "You're never going to see her again. This is your five-minute interaction. Do what you have to do. If she doesn't like you, go to the next one." He also shares some colorful details. But, the point is that you should not care what they think of you. You're trying to do your thing. He did have a reputation at Caltech as a womanizer, and I could go into that too but I heard all this from the secretaries.Dwarkesh Patel  4:30  With the students or only the secretaries? Steve Hsu  4:35  Secretaries! Well mostly secretaries. They were almost all female at that time. He had thought about this a lot, and thought of it as a numbers game. The PUA guys (pick-up artists) will say, “Follow the algorithm, and whatever happens, it's not a reflection on your self-esteem. It's just what happened. And you go on to the next one.” That was the advice he was giving me, and he said other things that were pretty standard: Be funny, be confident—just basic stuff. Steve Hsu: But the main thing I remember was the operationalization of it as an algorithm. You shouldn’t internalize whatever happens if you get rejected, because that hurts. When we had to go across the bar to talk to that girl (maybe it doesn’t happen in your generation), it was terrifying. We had to go across the bar and talk to some lady! It’s loud and you’ve got a few minutes to make your case. Nothing is scarier than walking up to the girl and her friends. Feynman was telling me to train yourself out of that. You're never going to see them again, the face space of humanity is so big that you'll probably never re-encounter them again. It doesn't matter. So, do your best.
Dwarkesh Patel  6:06  Yeah, that's interesting because... I wonder whether he was doing this in the '40s––like when he was at that age, was he doing this? I don't know what the cultural conventions were at the time. Were there bars in the 40s where you could just go ahead and hit on girls or? Steve Hsu  6:19  Oh yeah absolutely. If you read literature from that time, or even a little bit earlier like Hemingway or John O'Hara, they talk about how men and women interacted in bars and stuff in New York City. So, that was much more of a thing back then compared to your generation. That's what I can’t figure out with my kids! What is going on? How do boys and girls meet these days? Back in the day, the guy had to do all the work. It was the most terrifying thing you could do, and you had to train yourself out of that. Dwarkesh Patel  6:57  By the way, for the context for the audience, when Feynman says you were a big guy, you were a football player at Caltech, right? There's a picture of you on your website, maybe after college or something, but you look pretty ripped. Today, it seems more common because of the gym culture. But I don’t know about back then. I don't know how common that body physique was. Steve Hsu  7:24  It’s amazing that you asked this question. I'll tell you a funny story. One of the reasons Feynman found this so weird was because of the way body-building entered the United States. They were regarded as freaks and homosexuals at first. I remember swimming and football in high school (swimming is different because it's international) and in swimming, I picked up a lot of advanced training techniques from the Russians and East Germans. But football was more American and not very international. So our football coach used to tell us not to lift weights when we were in junior high school because it made you slow.
“You’re no good if you’re bulky.” “You gotta be fast in football.” Then, something changed around the time I was in high school–the coaches figured it out. I had been lifting weights since I was an age group swimmer, maybe age 12 or 14. Then, the football coaches got into it mainly because the University of Nebraska had a famous strength program that popularized it. At the time, there just weren't a lot of big guys. The people who knew how to train were using what would be considered “advanced knowledge” back in the 80s. For example, they’d know how to do a split routine or squat on one day and do upper body on the next day–– that was considered advanced knowledge at that time. I remember once... I had an injury, and I was in the trainer's room at the Caltech athletic facility. The lady was looking at my quadriceps. I’d pulled a muscle, and she was looking at the quadriceps right above your kneecap. If you have well-developed quads, you'd have a bulge, a bump right above your kneecap. And she was looking at it from this angle where she was in front of me, and she was looking at my leg from the front. She's like, “Wow, it's swollen.” And I was like, “That's not the injury. That's my quadricep!” And she was a trainer! So, at that time, I could probably squat 400 pounds. So I was pretty strong and had big legs. The fact that the trainer didn't really understand what well-developed anatomy was supposed to look like blew my mind! So anyway, we've come a long way. This isn't one of these things where you have to be old to have any understanding of how this stuff evolved over the last 30-40 years.
I mean, there's a lot of sculptures in the ancient world, or not that ancient, but the people look like they have a well-developed musculature. Steve Hsu  10:34  So the Greeks were very special because they were the first to think about the gymnasium. It was a thing called the Palaestra, where they were trained in wrestling and boxing. They were the first people who were seriously into physical culture and specific training for athletic competition. Even in the 70s, when I was a little kid, I look back at the guys from old photos and they were skinny. So skinny! The guys who went off and fought World War Two, whether they were on the German side, or the American side, were like 5’8–5’9, weighing around 130-140 pounds. They were much different from what modern US Marines would look like. So yeah, physical culture was a new thing. Of course, the Romans and the Greeks had it to some degree, but it was lost for a long time. And, it was just coming back to the US when I was growing up. So if you were reasonably lean (around 200 pounds) and you could bench over 300... that was pretty rare back in those days.

Embryo selection

Dwarkesh Patel  11:46  Okay, so let's talk about your company Genomic Prediction. Do you want to talk about this company and give an intro about what it is? Steve Hsu  11:55  Yeah. So there are two ways to introduce it. One is the scientific view. The other is the IVF view. I can do a little of both. So scientifically, the issue is that we have more and more genomic data. If you give me the genomes of a bunch of people and then give me some information about each person, e.g., do they have diabetes? How tall are they? What's their IQ score?
It’s a natural AI machine learning problem to figure out which features in the DNA variation between people are predictive of whatever variable you're trying to predict. This is the ancient scientific question of how you relate the genotype of the organism (the specific DNA pattern) to the phenotype (the expressed characteristics of the organism). If you think about it, this is what biology is! We had the molecular revolution and figured out that it’s people's DNA that stores the information which is passed along. Evolution selects on the basis of the variation in the DNA that’s expressed as phenotype, as that phenotype affects fitness/reproductive success. That's the whole ballgame for biology. As a physicist who's trained in mathematics and computation, I'm lucky that I arrived on the scene at a time when we're going to solve this basic fundamental problem of biology through brute force, AI, and machine learning. So that's how I got into this. Now you ask as an entrepreneur, “Okay, fine Steve, you're doing this in your office with your postdocs and collaborators on your computers. What use is it?” The most direct application of this is in the following setting: Every year around the world, millions of families go through IVF—typically because they're having some fertility issues, and also mainly because the mother is in her 30s or maybe 40s. In the process of IVF, they use hormone stimulation to produce more eggs. Instead of one per cycle, depending on the age of the woman, they might produce anywhere from five to twenty, or even sixty to a hundred eggs for young women who are hormonally stimulated (egg donors). From there, it’s trivial because men produce sperm all the time. You can fertilize eggs pretty easily in a little dish, and get a bunch of embryos that grow. They start growing once they're fertilized. The problem is that if you're a family and produce more embryos than you’re going to use, you have the embryo choice problem.
You have to figure out which embryo to choose out of, say, 20 viable embryos. The most direct application of the science that I described is that we can now genotype those embryos from a small biopsy. I can tell you things about the embryos. I could tell you things like: your fourth embryo is an outlier for breast cancer risk, so I would think carefully about using number four. Number ten is an outlier for cardiovascular disease risk. You might want to think about not using that one. The other ones are okay. So, that’s what Genomic Prediction does. We work with 200 or 300 different IVF clinics on six continents. Dwarkesh Patel  15:46  Yeah, so the super fascinating thing about this is that the diseases you talked about—or at least their risk profiles—are polygenic. You can have thousands of SNPs (single nucleotide polymorphisms) determining whether you will get a disease. So, I'm curious to learn how you were able to transition to this space and how your knowledge of mathematics and physics was able to help you figure out how to make sense of all this data. Steve Hsu  16:16  Yeah, that's a great question. So again, I was stressing the fundamental scientific importance of all this stuff. If you go into a slightly higher level of detail—which you were getting at with the individual SNPs, or polymorphisms—there are individual locations in the genome, where I might differ from you, and you might differ from another person. Typically, each pair of individuals will differ at a few million places in the genome—and that controls why I look a little different than you. A lot of times, theoretical physicists have a little spare energy and they get tired of thinking about quarks or something. They want to maybe dabble in biology, or they want to dabble in computer science, or some other field.
As theoretical physicists, we always feel, “Oh, I have a lot of horsepower, I can figure a lot out.” (For example, Feynman helped design the first parallel processors for Thinking Machines.) I have to figure out which problems I can make an impact on because I can waste a lot of time. Some people spend their whole lives studying one problem, one molecule or something, or one biological system. I don't have time for that, I'm just going to jump in and jump out. I'm a physicist. That's a typical attitude among theoretical physicists. So, I had to confront sequencing costs about ten years ago because I knew the rate at which they were going down. I could anticipate that we’d get to the day (today) when millions of genomes with good phenotype data became available for analysis. A typical training run might involve almost a million genomes, or half a million genomes. The mathematical question then was: What is the most effective algorithm given a set of genomes and phenotype information to build the best predictor? This can be boiled down to a very well-defined machine learning problem. It turns out, for some subset of algorithms, there are theorems—performance guarantees that give you a bound on how much data you need to capture almost all of the variation in the features. I spent a fair amount of time, probably a year or two, studying these very famous results, some of which were proved by a guy named Terence Tao, a Fields medalist. These are results on something called compressed sensing: a penalized form of high dimensional regression that tries to build sparse predictors. Machine learning people might recognize L1-penalized optimization. The very first paper we wrote on this proved that, using accurate genomic data and these very abstract theorems in combination, we could predict how much data you need to “solve” individual human traits.
We showed that you would need at least a few hundred thousand individuals and their genomes and their heights to solve for height as a phenotype. We proved that in a paper using all this fancy math in 2012. Then around 2017, when we got a hold of half a million genomes, we were able to implement it in practical terms and show that our mathematical result from some years ago was correct. The transition from the low performance of the predictor to high performance (which is what we call a “phase transition boundary” between those two domains) occurred just where we said it was going to occur. Some of these technical details are not understood even by practitioners in computational genomics who are not quite mathematical. They don't understand these results in our earlier papers and don't know why we can do stuff that other people can't, or why we can predict how much data we'll need to do stuff. It's not well-appreciated, even in the field. But when the big AI in our future in the singularity looks back and says, “Hey, who gets the most credit for this genomics revolution that happened in the early 21st century?”, they're going to find these papers on the arXiv where we proved this was possible, and how five years later, we actually did it. Right now it's under-appreciated, but the future AI––that Roko's Basilisk AI––will look back and will give me a little credit for it. Dwarkesh Patel  21:03  Yeah, I was a little interested in this a few years ago. At that time, I looked into how these polygenic risk scores were calculated. Basically, you find the correlation between the phenotype and the alleles that correlate with it. You add up how many copies of these alleles you have, what the correlations are, and you do a weighted sum of that. So that seemed very simple, especially in an era where we have all this machine learning, but it seems like they're getting good predictive results out of this concept.
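The additive score Dwarkesh describes here can be sketched in a few lines. The SNP weights and genotypes below are made up for illustration (real predictors use thousands of variants and effect sizes fit from data):

```python
# Minimal sketch of an additive polygenic score: a weighted sum of
# allele counts. All numbers here are invented for the example.

def polygenic_score(allele_counts, weights):
    """Sum of (copies of effect allele at each SNP) x (per-allele effect weight)."""
    assert len(allele_counts) == len(weights)
    return sum(c * w for c, w in zip(allele_counts, weights))

# Each person carries 0, 1, or 2 copies of the effect allele at each SNP.
weights = [0.30, -0.12, 0.05]   # illustrative per-allele effect sizes
person_a = [2, 0, 1]            # genotypes at three hypothetical SNPs
person_b = [0, 2, 1]

print(round(polygenic_score(person_a, weights), 2))  # 0.65
print(round(polygenic_score(person_b, weights), 2))  # -0.19
```

The point of the linear form is exactly what Hsu says next: once the weights are chosen, prediction is just a dot product over the selected positions.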
So, what is the delta between how far you can go with all this fancy mathematics versus a simple sum of correlations? Steve Hsu  21:43  You're right that the ultimate models that are used when you've done all the training, and when the dust settles, are straightforward. They’re pretty simple and have an additive structure. Basically, I either assign a nonzero weight to this particular region in the genome, or I don't. Then, I need to know what the weighting is, but then the function is a linear function or additive function of the state of your genome at some subset of positions. The ultimate model that you get is straightforward. Now, if you go back ten years, when we were doing this, there were lots of claims that it was going to be super nonlinear—that it wasn't going to be additive the way I just described it. There were going to be lots of interaction terms between regions. Some biologists are still convinced that's true, even though we already know we have predictors that don't have interactions. The other question, which is more technical, is whether in any small region of your genome, the state of the individual variants is highly correlated because you inherit them in chunks. You need to figure out which one you want to use. You don't want to activate all of them because you might be overcounting. So that's where these L1-penalization sparse methods force the predictor to be sparse. That is a key step. Otherwise, you might overcount. If you do some simple regression math, you might have 10-10 different variants close by that have roughly the same statistical significance. But, you don't know which one of those tends to be used, and you might be overcounting effects or undercounting effects. So, you end up doing a high-dimensional optimization, where you grudgingly activate a SNP when the signal is strong enough.
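A toy version of the sparse activation Hsu describes, using scikit-learn's Lasso as a stand-in for their method (the data and parameters are entirely synthetic): two nearly duplicate "SNPs" are inherited together, and the L1 penalty tends to activate one and zero out the other instead of double-counting the effect.

```python
# Toy L1-penalized regression on correlated SNPs. Synthetic data only;
# this is an illustration of the idea, not the actual genomic pipeline.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n = 500

snp_a = rng.integers(0, 3, size=n).astype(float)   # 0/1/2 allele counts
snp_b = snp_a.copy()
flip = rng.random(n) < 0.02                        # ~2% disagreement: near-duplicate
snp_b[flip] = rng.integers(0, 3, size=flip.sum())

snp_c = rng.integers(0, 3, size=n).astype(float)   # independent SNP, no true effect

X = np.column_stack([snp_a, snp_b, snp_c])
y = 0.5 * snp_a + rng.normal(0, 0.3, size=n)       # only snp_a truly matters

model = Lasso(alpha=0.05).fit(X, y)
print(np.round(model.coef_, 2))  # weight concentrates on the correlated pair,
                                 # with the redundant copy and snp_c driven to zero
```

The penalty level plays the role of the "grudging activation" threshold: a SNP gets a nonzero weight only when its marginal signal beats the penalty.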
Once you activate that one, the algorithm has to be smart enough to penalize the other ones nearby and not activate them because you're overcounting effects if you do that. There's a little bit of subtlety in it. But, the main point you made is that the ultimate predictors, which are very simple and additive—sum over effect sizes times states—work well. That’s related to a deep statement about the additive structure of the genetic architecture of individual differences. In other words, it's weird that the ways that I differ from you are merely just because I have more of something or you have less of something. It’s not like these things are interacting in some incredibly complicated way. That's a deep thing—which is not appreciated that much by biologists yet. But over time, they'll figure out something interesting here.

Why hasn’t natural selection already optimized humans?

Dwarkesh Patel  24:19  Right. I thought that was super fascinating, and I commented on that on Twitter. What is interesting about that is two things. One is that you have this fascinating evolutionary argument about why that would be the case, which you might want to explain. The second is that it makes you wonder if becoming more intelligent is just a matter of turning on certain SNPs. It's not a matter of all this incredible optimization being like solving a sudoku puzzle or anything. If that's the case, then why hasn't the human population already been selected to be maxed out on all these traits if it's just a matter of a bit flip? Steve Hsu  25:00  Okay, so the first issue is why is this genetic architecture so surprisingly simple? Again, we didn't know it would be simple ten years ago. So when I was checking to see whether this was a field that I should go into depending on our capabilities to make progress, we had to study the more general problem of the nonlinear possibilities. But eventually, we realized that most of the variance would probably be captured in an additive way.
So, we could narrow down the problem quite a bit. There are evolutionary reasons for this. There’s a famous theorem by Fisher, the father of population genetics (and of frequentist statistics). Fisher proved something called Fisher's Fundamental Theorem of Natural Selection, which concerns the rate at which a population responds when you impose some selection pressure (let's say it’s the bigger rats that out-compete the smaller rats): at what rate does the rat population start getting bigger? He showed that it's the additive variants that dominate the rate of evolution. It's easy to understand why: if it's a nonlinear mechanism that makes the rat bigger, then when you sexually reproduce and that mechanism gets chopped apart, you might break it. Whereas, if each individual allele has its own independent effect, you can inherit them without worrying about breaking the mechanisms. It was well known among a tiny population of theoretical biologists that additive variants were the dominant way that populations would respond to selection. That was already known. The other thing is that humans have been through a pretty tight bottleneck, and we're not that different from each other. It's very plausible that if I wanted to edit a human embryo, and make it into a frog, then there are all kinds of subtle nonlinear things I’d have to do. But all those identical nonlinear complicated subsystems are fixed in humans. You have the same system as I do. You have the not human, not frog or ape, version of that region of DNA, and so do I. But the small ways we differ are mostly little additive switches. That's this deep scientific discovery from over the last 5-10 years of work in this area. Now, you were asking about why evolution hasn't completely “optimized” all traits in humans already.
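For reference, the Fisher result paraphrased above is usually stated as follows (standard textbook form, not a quote from the episode):

```latex
% Fisher's Fundamental Theorem of Natural Selection:
% the rate of increase in a population's mean fitness equals the
% additive genetic variance in fitness divided by mean fitness.
\[
  \frac{d\bar{w}}{dt} \;=\; \frac{V_A(w)}{\bar{w}}
\]
% Here \bar{w} is the population's mean fitness and V_A(w) is the
% additive genetic variance in fitness -- only the additive component,
% not dominance or epistatic (interaction) variance, enters the rate.
```

This is why, as Hsu says, it is the additive variants that dominate the response to selection.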
I don't know if you’ve ever done deep learning or high-dimensional optimization, but in that high-dimensional space, you're often moving on a slightly tilted surface. So, you're getting gains, but it's also flat. Even though you scale up your compute or data size by an order of magnitude, you don't move that much farther. You get some gains, but you're never really at the global max of anything in these high dimensional spaces. I don't know if that makes sense to you. But it's pretty plausible to me that two things are important here. One is that evolution has not had that much time to optimize humans. The environment that humans live in changed radically in the last 10,000 years. For a while, we didn't have agriculture, and now we have agriculture. Now, you can swipe left if you want to have sex tonight. The environment didn't stay fixed. So, when you say fully optimized for the environment, what do you mean? The ability to diagonalize matrices might not have been very adaptive 10,000 years ago. It might not even be adaptive now. But anyway, it's a complicated question that one can't reason naively about. “If God wanted us to be 10 feet tall, we'd be 10 feet tall.” Or “if it's better to be smart, my brain would be *this* big or something.” You can't reason naively about stuff like that. Dwarkesh Patel  29:04  I see. Yeah... Okay. So I guess it would make sense then that for example, with certain health risks, the thing that makes you more likely to get diabetes or heart disease today might be… I don't know what the pleiotropic effect of that could be. But maybe that's not that important one year from now. Steve Hsu  29:17  Let me point out that most of the diseases we care about now—not the rare ones, but the common ones—manifest when you're 50-60 years old. So there was never any evolutionary advantage of being super long-lived.
There's even a debate about whether the grandparents being around to help raise the kids lifts the fitness of the family unit. But most of the time in our evolutionary past, humans just died fairly early. So, many of these diseases would never have been selected against by evolution. But we see them now because we live under such good conditions that people can live to 80 or 90 years.

Dwarkesh Patel  29:57
Regarding the linearity and additivity point, I was going to make the analogy that– and I'm curious if this is valid– but when you're programming, one thing that's good practice is to have all the implementation details in separate function calls or separate programs or something, and then have your main loop of operation just call different functions like, "Do this, do that", so that you can easily comment stuff out or change arguments. This seemed very similar to that, where by turning these genes on and off, you can change what the next offspring will be. And, you don't have to worry about actually implementing whatever the underlying mechanism is.

Steve Hsu  30:41
Well, what you said is related to what Fisher proved in his theorems. Which is that, if suddenly it becomes advantageous to have X (like white fur instead of black fur), it would be best if there were little levers where you could move somebody from black fur to white fur continuously by modifying those switches in an additive way. It turns out that for sexually reproducing species where the DNA gets scrambled up in every generation, it's better to have switches of that kind. The other point, related to your software analogy, is that there seem to be fairly modular things going on in the genome. When we looked at it—we were the first group to have, initially, 20 primary disease conditions we had decent predictors for—we started looking carefully at just something as trivial as the overlap of my sparsely trained predictors.
It turns out it uses *these* features for diabetes, but it uses *these* features for schizophrenia. It's the stupidest metric: literally just how much overlap, or variance-accounted-for overlap, is there between pairs of disease conditions. It's very modest. It's the opposite of what naive biologists would say when they talk about pleiotropy. They're just disjoint! Disjoint regions of your genome govern certain things. And why not? You have 3 billion base pairs—there's a lot you can do in there. There's a lot of information there. If you need 1000 to control diabetes risk, I estimated you could easily have 1000 roughly independent traits that are just disjoint in their genetic dependencies. So, if you think about D&D, your strength, dex, wisdom, intelligence, and charisma—those are all disjoint. They're all just independent variables. So it's like a seven-dimensional space that your character lives in. Well, there's enough information in the few million differences between you and me for a 1000-dimensional space of variation. "Oh, how big is your spleen?" My spleen is a little bit smaller, yours is a little bit bigger - that can vary independently of your IQ. Oh, big surprise: the size of your spleen can vary independently of the size of your big toe. If you do the information theory, there are about 1000 different parameters I can vary independently, given the number of variants that differ between you and me. Because you understand some information theory, it's trivial to explain, but try explaining it to a biologist—you won't get very far.

Dwarkesh Patel  33:27
Yeah, yeah, do the log two of the number of.. is that basically how you do it? Yeah.

Steve Hsu  33:33
Okay. That's all it is. I mean, it's in our paper. We look at how many variants typically account for most of the variation for any of these major traits, and then imagine that they're mostly disjoint.
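The back-of-envelope counting behind this 1000-dimensional claim is just arithmetic. The round numbers below are assumptions taken from the conversation's orders of magnitude, not exact figures:

```python
# Rough counting argument: how many disjoint, independently variable
# traits can the typical differences between two genomes support?
differing_variants = 3_000_000   # order-of-magnitude SNP differences between two people
variants_per_trait = 1_000       # variants needed to control one trait (e.g. diabetes risk)

independent_traits = differing_variants // variants_per_trait
print(independent_traits)  # 3000 -> room for well over 1000 disjoint traits
```

Each biallelic variant carries about one bit, so this is the same counting Patel gestures at with "log two of the number of" variants: a few million bits is ample budget for ~1000 traits at ~1000 variants each.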
Then it's just all about: how many variants do you need to independently vary 1000 traits? Well, a few million differences between you and me are enough. It's very trivial math. Once you understand the basics and how to reason about information theory, it's very trivial. But it ain't trivial for theoretical biologists, as far as I can tell.

Aging

Dwarkesh Patel  34:13
But the result is so interesting because I remember reading in The Selfish Gene that, as he (Dawkins) hypothesizes, the reason we could be aging is an antagonistic clash. There's something that makes you healthier when you're young and fertile that makes you unhealthy when you're old. Evolution would have selected for such a trade-off because when you're young and fertile, evolution and your genes care about you. But, if there's enough space in the genome—where these trade-offs are not necessary—then this could be a bad explanation for aging. Or do you think I'm straining the analogy?

Steve Hsu  34:49
I love your interviews because the point you're making here is really good. So Dawkins, who is an evolutionary theorist from the old school when they had almost no data—you can imagine how much data they had compared to today—he would tell you a story about a particular gene that maybe has a positive effect when you're young, but makes you age faster. So, there's a trade-off. We know about things like sickle cell anemia. We know stories about that. No doubt, some stories are true about specific variants in your genome. But that's not the general story. The general story, only discovered in the last five years, is that thousands of variants control almost every trait, and those variants tend to be disjoint from the ones that control other traits. They weren't wrong, but they didn't have the big picture.

Dwarkesh Patel  35:44
Yeah, I see. So, you had this paper on a polygenic health index—general health and disease risk..
You showed that with ten embryos, you could increase disability-adjusted life years by four, which is a massive increase if you think about it. Like, what if you could live four years longer and in a healthy state?

Steve Hsu  36:05
Yeah, what's the value of that? What would you pay to buy that for your kid?

Dwarkesh Patel  36:08
Yeah. But, going back to the earlier question about the trade-offs and why this hasn't already been selected for: if you're right and there's no trade-off to do this, just living four years longer (even if that's beyond your fertility), just being a grandpa or something, seems like an unmitigated good. So why hasn't this already been selected for?

Steve Hsu  36:35
I'm glad you're asking these questions because these are things that people are very confused about, even in the field. First of all, let me say that when you have a trait that's controlled by 10,000 variants (e.g. height is controlled by order 10,000 variants, and probably cognitive ability a little bit more), the square root of 10,000 is 100. So, if I could come to this little embryo and I want to give it one extra standard deviation of height, I only need to edit 100. I only need to flip 100 minus variants to plus variants. These are very rough numbers. But one standard deviation goes like the square root of "n". If I flip a coin "n" times and I want a better outcome in terms of the ratio of heads to tails—I want to increase it by one standard deviation—I only need to flip about the square root of "n" heads. Because if you flip a lot, you get a narrow distribution that peaks around half, and the width of that distribution is the square root of "n".
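The coin-flip arithmetic here can be checked numerically. This is a sketch under simplifying assumptions (each variant coded ±1 with fair-coin frequencies; real effect sizes and allele frequencies vary, so treat the factors of 2 as rough):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000  # variants affecting the trait

# Code each variant as +1 or -1 with probability 1/2; the trait is the sum.
# Sampling the sum directly as a binomial keeps this fast.
scores = 2 * rng.binomial(n, 0.5, size=100_000) - n
sd = scores.std()
print(round(sd))  # ~100, i.e. sqrt(n): the population SD of the trait

# Flipping k minus-variants to plus-variants raises a score by 2k,
# so editing on the order of sqrt(n) variants moves an individual
# by about one population standard deviation.
```

The key scaling is that the trait's population spread grows like sqrt(n) while the total available variation grows like n, which is what makes "500 flips out of 10,000" worth several standard deviations.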
Once I tell you, "Hey, your height is controlled by 10,000 variants, and I only need to flip 100 genetic variants to make you one standard deviation taller" (for a male, that would be two and a half or three inches taller), you suddenly realize, "Wait a minute, there are a lot of variants up for grabs there. If I could flip 500 variants in your genome, I would make you five standard deviations taller; you'd be seven feet tall." I didn't even have to do that much work, and there's a lot more variation where that came from. I could have flipped even more because I only flipped 500 out of 10,000, right? So, there's this quasi-infinite well of variation that evolution or genetic engineers could act on. Again, the early population geneticists who bred corn and animals knew this. This is something they explicitly knew about because they'd done the calculations. Interestingly, the human geneticists, who are mainly concerned with diseases and stuff, are often unfamiliar with the math that the animal breeders already know. You might be interested to know that the milk you drink comes from heavily genetically-optimized cows, bred artificially using almost exactly the same technologies that we use at Genomic Prediction. But they're doing it to optimize milk production and stuff like this. So there is a big well of variance. It's a consequence of the trait's polygenicity. On the longevity side of things, it does look like people could "be engineered" to live much longer by flipping the variants that confer risk for the diseases that shorten your life. The question is then, "Why didn't evolution give us life spans of thousands of years?" People in the Bible used to live for thousands of years. Why don't we? I mean, *chuckles* that probably didn't happen. But the question is: you have this very high-dimensional space, and you have a fitness function. How big is the slope in a particular direction of that fitness function?
How much more successful reproductively would Joe Caveman have been if he lived to be 150 instead of only 100 or something? There just hasn't been enough time to explore this super high-dimensional space. That's the actual answer. But now we have the technology, and we're going to f*****g explore it fast. That's the point where the big lightbulb should go off. We're mapping this space out now. I'm pretty confident that in 10 years or so, the CRISPR gene-editing technologies will be ready for massively multiplexed edits. We'll start navigating this high-dimensional space as much as we like. So that's the more long-term consequence of these scientific insights.

Dwarkesh Patel  40:53
Yeah, that's super interesting. What do you think the plateau will be for a trait like how long you'll live? With the current data and techniques, do you think it could be significantly greater than that?

Steve Hsu  41:05
We did a simple calculation—which amazingly gives the correct result. This polygenic predictor that we built (which isn't perfect yet but will improve as we gather more data) is used in selecting embryos today. If you asked, out of a billion people, "What would the best person typically score on this index, and how long would they be predicted to live?" It's about 120 years. So it's spot on: a one-in-a-billion type of person lives to be 120 years old. How much better can you do? Probably a lot better. I don't want to speculate, but other nonlinear effects, things that we're not taking into account, will start to play a role at some point. So, it's a little bit hard to estimate what the true limiting factors will be. But one super robust statement—and I'll stand by it, I'll debate any Nobel Laureate in biology who wants to discuss it—is that there are many variants available to be selected or edited. There's no question about that. That's been established in animal breeding and plant breeding for a long time now.
If you want a chicken that grows to be *this* big instead of *this* big, you can do it. If you want a cow that produces 10 times or 100 times more milk than a regular cow, you can do it. The egg you ate for breakfast this morning came from bio-engineered chickens that lay almost an egg a day. A chicken in the wild lays an egg a month. How the hell did we do that? By genetic engineering. That's how we did it.

Dwarkesh Patel  42:51
Yeah. That was through brute artificial selection. No fancy machine learning there.

Steve Hsu  42:58
In the last ten years, it's gotten sophisticated: machine learning, genotyping of chickens, artificial insemination, modeling of the traits using ML. For cow breeding, it's done by ML now.

First Mover Advantage

Dwarkesh Patel  43:18
I had no idea. That's super interesting. So, you mentioned that you're accumulating data and improving your techniques over time. Is there a first mover advantage to a genomic prediction company like this? Or does it go to whoever has the newest, best algorithm for going through the biobank data?

Steve Hsu  44:16
That's another super question. For the entrepreneurs in your audience, I would say—in the short run, if you ask what the valuation of GP should be, which is how the venture guys would want me to answer the question—there is a huge first mover advantage, because of the channel relationships between us and the clinics. Nobody who comes later will be able to get in there very easily, because we're developing trust and an extensive track record with clinics worldwide, and we're well-known. Could 23andMe or some company with a huge amount of data—if they were to get better AI/ML people working on this—blow us away and build better predictors, because they have much more data than we do? Possibly, yes. On the other hand, we have had core expertise in doing this work for years, and we're just good at it.
Even though we don't have as much data as 23andMe, our predictors might still be better than theirs. I'm out there all the time working with biobanks all around the world—I don't want to say all the names, but in other countries—trying to get my hands on as much data as possible. But there may not be a lasting advantage beyond the actual business channel connections to that particular market. There may not be a defensible, purely scientific moat around the company. We have patents on specific technologies: how to do the genotyping, error correction on the embryo DNA, and stuff like this. But this general question of who will best predict human traits from DNA? It's unclear who's going to be the winner in that race. Maybe it'll be the Chinese government in 50 years? Who knows?

Dwarkesh Patel  46:13
Yeah, that's interesting. If you think about a company like Google: theoretically, it's possible that you could come up with a better algorithm than PageRank and beat them. But it seems like the engineers at Google are going to come up with whatever edge case or improvement is possible first.

Steve Hsu  46:28
That's exactly what I would say. PageRank is deprecated by now. But even if somebody else comes up with a somewhat better algorithm, or has a little bit more data—if you have a team that's been doing this for a long time and you're focused and good, it's still tough to beat you, especially if you have a lead in the market.

Dwarkesh Patel  46:50
So, are you guys doing the actual biopsy? Or is it just that they upload the genome and you're the one processing it and giving recommendations? Is it an API call, basically?

Steve Hsu  47:03
Great, I love your question. It's totally standard: every good IVF clinic in the world regularly takes embryo biopsies. There's a lab tech doing that. Then they take the little sample, put it on ice, and ship it. DNA as a molecule is exceptionally robust and stable.
My other startup solves crimes that are 100 years old from DNA we get from, say, a semen stain on a serial killer victim's bra strap. We've done stuff like that.

Dwarkesh Patel  47:41
Jack the Ripper—when are we going to solve that mystery?

Steve Hsu  47:44
If they can give me samples, we can get into that. For example, we just learned that you can recover DNA pretty well if someone licks a stamp and puts it on their correspondence. If you can do Neanderthals, you can do a lot to solve crimes. In the IVF workflow, our lab, which is in New Jersey, can service every clinic in the world, because they take the biopsy, put it in a standard shipping container, and send it to us. We're actually genotyping DNA in our lab, but we've trained a few of the bigger clinics to do the genotyping on their site. At that point, they upload some data into the cloud and then get back some results from our platform. Eventually it's going to be the whole world—every human who wants their kid to be healthy and get the best they can. That data is going to come up to us, and the report is going to come back down to their IVF physician.

Dwarkesh Patel  48:46
Which is great, because if you think there's a potential that this technology might get regulated in some way, you could go to Mexico or something, have them upload the genome (you don't care where they upload it from), and then get the recommendations there.

Steve Hsu  49:05
I think we're going to evolve to a point where we are out of the wet part of this business and only in the cloud-and-bits part of this business. No matter where it is, the clinics are going to have a sequencer, which is *this* big, and their tech is going to quickly upload and retrieve the report for the physician three seconds later. Then the parents are going to look at it on their phones or whatever. We're basically there with some clinics. It's going to be tough to regulate because it's just bits.
You have the bits, and if you're in some repressive, terrible country that doesn't allow you to select for some special traits that people are nervous about, you can upload it to some vendor that's in Singapore or some free country, and they give you the report back. It doesn't have to be us—we don't do the edgy stuff. We only do the health-related stuff right now. But if you want to know how tall this embryo is going to be… I'll tell you a mind-blower! When you do face recognition in AI, you're mapping someone's face into a parameter space on the order of hundreds of parameters, and each of those parameters is super heritable. In other words, if I take two twins and photograph them, and the algorithm gives me the value of that parameter for twin one and twin two, they're very close. That's why I can't tell the two twins apart, and face recognition can ultimately tell them apart only if it's a really good system. But you can conclude that almost all these parameters are identical for those twins. So it's highly heritable. We're going to get to a point soon where I can do the inverse problem: I have your DNA, I predict each of those parameters in the face recognition algorithm, and then reconstruct the face. So I can say, when this embryo is 16, this is what she will look like. When she's 32, this is what she's going to look like. I'll be able to do that, for sure. It's only an AI/ML problem right now, but the basic biology is clearly going to work. So then you're going to be able to say, "Here's the report. Embryo four is so cute." We don't do that, but it will be possible.

Dwarkesh Patel  51:37
Before we get married, you'll want to see what their genotype implies about their face and longevity.
It's interesting—you hear stories about these cartel leaders who get plastic surgery or something to evade the law. You could have a check where a lab sees whether a sample matches the face you would have had, from five years ago when they caught you on tape.

Steve Hsu  52:02
This is a little bit back to old-school Gattaca, but you don't even need the face! You can just take a few molecules of skin cells, genotype them, and know exactly who they are. I've had conversations with these spooky Intel folks. They're very interested in: "Oh, if some Russian diplomat comes in, and we think he's a spy, but he's with the embassy, and he has a coffee with me, and I save the cup and send it to my buddy at Langley, can we figure out who this guy is? And that he has a daughter who's going to Choate?" You can do all that now.

Dwarkesh Patel  52:49
If that's true, then in the future, world leaders will not want to eat anything or drink. They'll be wearing a hazmat suit to make sure they don't lose a hair follicle.

Steve Hsu  53:04
The next time Pelosi goes, she will be in a spacesuit if she cares. Or the other thing is, they're just going to give up. They're going to say, "Yeah, my DNA is everywhere. If I'm a public figure, I can't control my DNA. It's all over."

Dwarkesh Patel  53:17
But the thing is, there's so much speculation that Putin might have cancer or something. If we have his DNA, we can see that his probability of having cancer at age 70, or whatever he is, is 85%. So yeah, that'd be a very verified rumor. That would be interesting.

Steve Hsu  53:33
I don't think that would be very definitive. I don't think we'll reach the point where you can say that Putin has cancer because of his DNA—which I could have known when he was an embryo. I don't think it's going to reach that level. But we could say he is at high risk for a type of cancer.
Genomics in dating

Dwarkesh Patel  53:49
In 50 or 100 years, if the majority of the population is doing this, and if the highly heritable diseases get pruned out of the population, does that mean we'll only be left with lifestyle diseases? So, you won't get breast cancer anymore, but you will still get fat, or get lung cancer from smoking?

Steve Hsu  54:18
It's hard to discuss the asymptotic limit of what will happen here. I'm not very confident about making predictions like that. It could get to the point where everybody who's rich or has been through this stuff for a while (especially if we get the editing working) is super low risk for all the top 20 killer diseases that have the most life-expectancy impact. Maybe those people live to be 300 years old naturally. I don't think that's excluded at all. So, that's within the realm of possibility. But it's going to happen for a few lucky people like Elon Musk before it happens for shlubs like you and me. There are going to be very angry inequality protesters about the Trump grandchildren, who, models predict, will live to be 200 years old. People are not going to be happy about that.

Dwarkesh Patel  55:23
So interesting. One way to think about these different embryos is: if you're producing multiple embryos and you get to select from one of them, each of them is like a call option, right? Therefore, you probably want to optimize for volatility as much as, or even more than, the expected value of the trait. So, I'm wondering if there are mechanisms where you can increase the volatility in meiosis or some other process, so you just get a higher variance and can select from the tail better.

Steve Hsu  55:55
Well, I'll tell you something related, which is quite amusing. I talked with some pretty senior people at the company that owns all the dating apps. You can look up what company this is, but they own Tinder and Match.
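Returning to Patel's call-option point about selecting the best of several embryos: the expected best of n draws grows with both n and the variance, which is why higher variance helps when you get to pick the maximum. A quick simulation, with polygenic scores idealized as standard normals (purely illustrative numbers):

```python
import numpy as np

rng = np.random.default_rng(2)

# 10 embryos per family; score measured in SD units around the parental mean.
draws = rng.standard_normal((200_000, 10))
best = draws.max(axis=1).mean()
print(round(best, 2))  # expected best of 10 draws, ~1.54 SD above the mean

# Doubling the score's SD doubles the expected gain from selection --
# the call-option payoff scales with volatility.
best_highvar = (2 * draws).max(axis=1).mean()
print(round(best_highvar, 2))
```

The option analogy holds because selection keeps only the upside tail: the downside draws are discarded, so extra variance is pure gain in expectation.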
They're kind of interested in perhaps including a special feature in Tinder Gold/Premium where you upload your genome, and when you match, you can see how well you match the other person based on your genome. One person told me something shocking: guys lie about their height on these apps.

Dwarkesh Patel  56:41
I'm shocked, truly shocked hahaha.

Steve Hsu  56:45
Suppose you could have a DNA-verified height. It would prevent gross distortions: if someone claims they're 6'2" and they're 5'9", the DNA could say that's unlikely. But the application to what you were discussing is more like this: suppose we're selecting on intelligence or something, and the regions where your girlfriend has all the plus stuff are complementary to the regions where you have your plus stuff. We could model that and say: because of the complementarity structure of your genomes in the regions that affect intelligence, you're very likely to have some super intelligent kids, way above the mean of your and your girlfriend's values. So, you could say things like, it's better for you to marry this girl than that one—as long as you go through embryo selection, so we can throw out the bad outliers. All of that is technically feasible. And it's true that one of the earliest patent applications—they'll deny it now. What's her name? Gosh, the CEO of 23andMe… Wojcicki, yeah. She'll deny it now. But if you look in the patent database, one of the very earliest patents that 23andMe filed when they were still a tiny startup was about precisely this: advising parents about mating and how their kids would turn out. We don't even go that far at GP; we don't even talk about stuff like that. But they were thinking about it when they founded 23andMe.

Dwarkesh Patel  58:38
That is unbelievably interesting.
By the way, this just occurred to me: height is supposed to be highly heritable, but people in Asian countries have the experience of having grandparents that are much shorter than us, and parents that are shorter than us, which suggests that the environment has a big part to play in it—malnutrition or something. So how do you square the fact that our parents are often shorter than us with the idea that height is supposed to be super heritable?

Steve Hsu  59:09
Another great observation. The correct scientific statement is that we can predict height for people who were born and raised in a favorable environment. In other words, if you live close to a McDonald's and you're able to afford all the food you want, then the height phenotype becomes super heritable, because the environmental variation doesn't matter very much. But you and I both know that people are much smaller if we go back to where our ancestors came from; and if you look at how much food, calories, protein, and calcium they ate, it's different from what I ate and what you ate growing up. So we're never saying the environmental effects are zero. We're saying that for people raised in a particularly favorable environment, the genes maybe cap what can be achieved, and we can predict that. In fact, we have data from Asia where you can see much bigger environmental effects: older people, at a fixed polygenic score for the trait, are much shorter than younger people.

Ancestral populations

Dwarkesh Patel  1:00:31
Oh, okay. Interesting. That raises the next question I was about to ask: how applicable are these scores across different ancestral populations?

Steve Hsu  1:00:44
The huge problem is that most of the data is from Europeans. What happens is that if you train a predictor on this ancestry group and go to a more distant ancestry group, there's a fall-off in prediction quality. Again, this is a frontier question, so we don't know the answer for sure.
But many people believe that there's a particular correlational structure in each population where, if I know the state of this SNP, I can predict the state of the neighboring SNPs. That structure is a product of the group's mating patterns and ancestry. Sometimes the predictor, which is just using statistical power to figure things out, will grab one of these SNPs as a tag for the truly causal SNP in that region. It doesn't know which one is genuinely causal—it's just grabbing a tag—but the tagging quality falls off if you go to another population (e.g., this was a very good tag for the truly causal SNP in the British population, but it's not as good a tag in the South Asian population for the truly causal SNP, which we hypothesize is the same). That it's the same underlying genetic architecture in these different ancestry groups is a hypothesis; we don't know that. But even so, the tagging quality falls off. So my group spends a lot of our time looking at the performance of a predictor trained on population A when applied to distant population B, modeling it, and testing hypotheses as to whether the tagging decay is responsible for most of the fall-off. All of this is an area of active investigation. It'll probably be solved in five years. The first big biobanks that are non-European are coming online. We're going to solve it in a number of years.

Dwarkesh Patel  1:02:38
Oh, what does the solution look like? Unless you can identify the causal mechanism by which each SNP has its effect, how can you know whether something is a tag or the actual underlying switch?

Steve Hsu  1:02:54
The nature of reality will determine how this is going to go. We don't truly know whether the underlying biology is the same across groups. This is an amazing thing.
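The tagging falloff described here can be mimicked with a toy simulation. Linkage is reduced below to a single "tag matches the causal SNP" probability, and all numbers are invented; the point is only that an identical causal effect predicts worse through a weaker tag:

```python
import numpy as np

rng = np.random.default_rng(3)

def population(n, ld):
    """One population: `ld` is the chance the observable tag SNP
    carries the same state as the unobserved causal SNP."""
    causal = rng.integers(0, 2, n)
    tag = np.where(rng.random(n) < ld, causal, rng.integers(0, 2, n))
    trait = causal + rng.normal(0, 0.5, n)  # same causal effect plus noise
    return tag, trait

results = {}
# Strong tagging in the training ancestry, weaker in a distant one.
for name, ld in [("training", 0.95), ("distant", 0.60)]:
    tag, trait = population(200_000, ld)
    results[name] = np.corrcoef(tag, trait)[0, 1]
    print(name, round(results[name], 2))
```

The causal architecture is identical in both simulated populations; only the tag-to-causal correlation changed, yet the predictor's accuracy drops in proportion, which is the hypothesis Hsu's group tests against real cross-ancestry data.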
People argue about human biodiversity and all this stuff, and we don't even know whether the specific mechanisms that predispose you to be tall or to have heart disease are the same in these different ancestry groups. We assume they are, but we don't know that. As we get further away, to Neanderthals or Homo erectus, you might see that they have a slightly different architecture than we do. But let's assume that the causal structure is the same for South Asians and British people. Then it's a matter of improving the tags. What do I mean by improving the tags if I don't know which one is causal? This is a machine learning problem. If there's a SNP which always comes up as very significant when I use it across multiple ancestry groups, maybe that one's causal. As I vary the tagging correlations in the neighborhood of that SNP, I always find that it sits in the intersection of all these different sets, which makes me think it's causal. That's a process we're engaged in now—trying to do that. Again, it's just a machine learning problem. But we need data. That's the main issue.

Dwarkesh Patel  1:04:32
I was hoping that wouldn't be possible. Because one way this research might go is that it becomes taboo or causes other sorts of bad social consequences if you can definitively show that on certain traits there are differences between ancestral populations, right? So I was hoping there was an evasion button, where we couldn't say—because they're just tags, and the tags might be different between different ancestral populations. But with machine learning, we'll know.

Steve Hsu  1:04:59
That's the situation we're in now, where you have to do some fancy analysis if you want to claim that Italians have lower height potential than Nordics—which is possible. There's been a ton of research about this because there are signals of selection.
The alleles which are activated in height predictors look like they've been under some selection between Northern and Southern Europe over the last 5,000 years, for whatever reason. This is debated by people who study molecular evolution. But suppose it's true, okay? That would mean that when we finally get to the bottom of it and find all the causal loci for height, the average value for the Italians is lower than for those living in Stockholm. That might be true. Do people get that excited? They get a little bit excited about height. But they would get really excited if this were true for some other traits, right?

Suppose the causal variants affecting your level of extraversion differ systematically, so that the weighted average of those states is different in Japan versus Sicily. People might freak out over that. I'm supposed to say that's obviously not true. How could it possibly be true? There hasn't been enough evolutionary time for those differences to arise. After all, despite what looks to be the case for height over the last 5,000 years in Europe, surely no other traits could have been differentially selected for over the last 5,000 years. That's the dangerous thing. Few people understand this field well enough to follow what you and I just discussed, and some are so alarmed by it that they're just trying to suppress everything. Most of them don't follow it at the technical level that you and I have been discussing. So they're instinctively negative about it, but they don't understand it very well.

Dwarkesh Patel  1:07:19
That's good to hear. You see this pattern where, by the time somebody might want to regulate or in some way interfere with some technology or some information, it has already achieved wide adoption. You could argue that that's the case with crypto today.
But if it's true that a bunch of IVF clinics worldwide are using these scores to do selection and other things, then by the time people realize the implications of this data for other kinds of social questions, it will already be an existing consumer technology.

Is this eugenics?

Steve Hsu  1:07:58

That's true, and the main outcry will come if it turns out that there are massive gains to be had and only the billionaires are getting them. But that might have the consequence of causing countries to make this a free part of their national health care systems. Denmark and Israel pay for IVF; for infertile couples, it's part of their national health care system. They're pretty aggressive about genetic testing. In Denmark, one in ten babies is born through IVF. It's not clear how it will go. But we're in for some fun times. There's no doubt about that.

Dwarkesh Patel  1:08:45

Well, one way it could go is that some countries decide to ban it altogether. Another way is that countries decide to give everybody free access to it. If you had to choose between the two, you would want the second one. That would be the hope. Maybe only those two outcomes are compatible with people's moral intuitions about this stuff.

Steve Hsu  1:09:10

It's very funny, because most wokist people today hate this stuff. But most progressives of the early 20th century, like Margaret Sanger and the other intellectual forebears of today's wokists, were all what we would today call eugenicists, because their view was, "Thanks to Darwin, we now know how this all works. We should take steps to keep society healthy (not in a negative way where we kill people we don't like, but by helping society do healthy things when people reproduce and have healthy kids)." Now this whole thing has been flipped over among progressives.

Dwarkesh Patel  1:09:52

Even in India, less than 50 years ago: Indira Gandhi was on the left side of India's political spectrum.
She was infamous for instituting forced sterilization programs. Somebody made an interesting comment about this when they were asked, "Is it true that history always tilts towards progressives? And if so, isn't everybody else doomed? Aren't their views doomed?" The person made a fascinating point: whatever we consider left at the time tends to be winning, but what counts as left has changed a lot over time, right? In the early 20th century, prohibition was a left cause, a progressive cause, and that changed; now legalizing pot is the progressive cause. So if Conquest's second law is true and everything tilts left over time, just change what "left" is, right? That's the solution.

Steve Hsu  1:10:59

No one can demand that any of these woke guys be intellectually self-consistent, or even say the same things from one year to another. But one could wonder what they think about the literally communist Chinese. They're recycling huge parts of their GDP to help the poor and so on. Medicine is free, education is free, right? They're clearly socialists, literally communists. But in Chinese, the characters for eugenics form a positive term; it means healthy production. More or less, the whole viewpoint on all this stuff is 180 degrees off in East Asia compared to here, even among the literal communists. So go figure.

Dwarkesh Patel  1:11:55

Yeah, very based. So let's talk about one of the traits people might be interested in potentially selecting for: intelligence. What is the potential for us to acquire the data to correlate genotype with intelligence?

Steve Hsu  1:12:15

Well, that's the most personally frustrating aspect of all of this. If you had asked me ten years ago, when I started doing this stuff, what we were going to get, everything has gone on the optimistic side of what I would have predicted. So everything's good.
The genetic architecture didn't turn out to be intractably nonlinear, and it didn't turn out to be intractably pleiotropic. All these good things, which nobody could have known a priori, turned out to be good for the gene engineers of the 21st century. The one frustrating thing is that, because of crazy wokeism, and fear of crazy wokists, the most interesting phenotype of all is lagging behind.
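The cross-ancestry filtering Hsu describes, keeping only SNPs that stay significant in every ancestry group, can be sketched as a simple set intersection. This is an illustrative toy, not his lab's actual pipeline; the rs-numbers, groups, and significance calls are all hypothetical.

```python
# Toy sketch of cross-ancestry causal-SNP filtering (hypothetical data).
# A SNP that is merely a "tag" tends to lose significance when the local
# correlation structure changes between ancestry groups; a truly causal SNP
# should stay significant in all of them.

def candidate_causal_snps(significant_by_ancestry):
    """Intersect the significant-SNP sets from each ancestry group."""
    sets = [set(snps) for snps in significant_by_ancestry.values()]
    return set.intersection(*sets) if sets else set()

# Hypothetical GWAS hits per ancestry group (rs-numbers invented).
hits = {
    "european":    {"rs101", "rs202", "rs303"},
    "east_asian":  {"rs101", "rs404", "rs303"},
    "south_asian": {"rs101", "rs303", "rs505"},
}

print(sorted(candidate_causal_snps(hits)))  # → ['rs101', 'rs303']
```

Only the variants significant in every group survive, which is the "intersection of all these different sets" heuristic from the conversation; a real analysis would also model linkage disequilibrium rather than rely on hard significance cutoffs.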

The Mentee Podcast
S5E33: Scale Your Business and Sales with AI

The Mentee Podcast

Play Episode Listen Later Aug 22, 2022 40:05


Lead generation is easier than ever with ads and marketing campaigns. But once you get leads, how many are converted into sales? If you're spending too much on generating leads with no progress in conversion, it's time to do something about it. In this episode of The Mentee, William Brown of TitanX shares how to elevate your sales funnel with AI. We'll discuss the importance of knowing who to convert and how to apply the 80-20 rule to leads and revenue.

Here are some power takeaways from today's conversation:
- Know how AI can improve your sales funnel
- Use the 80-20 rule for sales conversions
- Find out where to establish a connection on your website
- Determine who's worth converting

Episode Highlights:

[11:19] The Power of AI
Currently, AI is widely used to support social media by stealing our attention and selling it to advertisers. It would be more beneficial if people had their own personalized AI.

[15:10] The 80-20 Rule
Lead generation doesn't instantly translate into conversions. 20% of your leads will generate 80% of your revenue, so make sure you know who's worth converting. To find out which leads can convert, assess the following through a live chat:
- Motivation
- Timeline
- Conditions
- Asking price

[23:27] Improve Your Funnel
Your ads and posts should include a CTA that's immediate. Consider an AI virtual assistant to create a live automated conversation with sellers. There are three points of contact on a website:
- Live chatbot
- Contact number
- Filling out a form
AI can act as an acquisition manager: it can prioritize potential conversions based on qualifications and schedule the corresponding follow-ups.

[32:54] Where AI Technology Is Today
Chatbots use a menu of choices and queries. OpenAI and GPT can converse more naturally. But both options have limitations: the former is restricted to a fixed set of choices, while the latter's responses can't be fully controlled.

[37:35] Converting Leads
Generating leads is easy, but conversion is hard.
If you struggle to convert your leads, invest in services like CLAIRE to help you convert them.

Notable quotes from the episode:

[20:05] "It was all about who we chose to spend time with, [and] following up with, because as you scale up, the amount of leads you want to do, they don't necessarily just convert into closed deals. You have to have somebody there to convert them. And that person…has to know who's most worth converting."

[34:22] "We're filling the gap in the market of the hybrid, where you give the consumer…the user fully natural, flexible conversations and you give the company transparency and control over what is said."

[37:44] "Generating the leads is easy, converting them is the hard part."

Resources Mentioned:
- Bigger Pockets Podcast: https://www.biggerpockets.com/podcasts/real-estate
- Bigger Pockets Episode 77 with Michael Quarles: https://www.biggerpockets.com/blog/2014-07-03-bp-podcast-077-negotiating-way-1000-wholetail-deals-michael-quarles
- TitanX: https://titanx.ai/

Connect with Will: Email | LinkedIn: https://www.linkedin.com/in/-william-brown/
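The episode's 80-20 qualification idea can be illustrated with a toy lead-scoring pass. The four criteria (motivation, timeline, conditions, asking price) come from the episode; the scoring scheme, thresholds, and leads below are invented for illustration and are not TitanX's actual logic.

```python
# Toy 80-20 lead prioritization: score each lead on the four qualification
# criteria mentioned in the episode, then focus follow-up on the top 20%.

def score_lead(lead):
    """Crude qualification score: one point per favorable criterion."""
    return sum([
        lead["motivated"],            # seller actually wants to sell
        lead["timeline_days"] <= 90,  # wants to close soon (hypothetical cutoff)
        lead["condition_ok"],         # property condition is workable
        lead["price_realistic"],      # asking price is near market value
    ])

def top_20_percent(leads):
    """Return the top fifth of leads by score (at least one lead)."""
    ranked = sorted(leads, key=score_lead, reverse=True)
    return ranked[:max(1, len(ranked) // 5)]

# Hypothetical inbound leads.
leads = [
    {"name": "A", "motivated": True,  "timeline_days": 30,  "condition_ok": True,  "price_realistic": True},
    {"name": "B", "motivated": False, "timeline_days": 365, "condition_ok": True,  "price_realistic": False},
    {"name": "C", "motivated": True,  "timeline_days": 200, "condition_ok": False, "price_realistic": True},
    {"name": "D", "motivated": False, "timeline_days": 400, "condition_ok": False, "price_realistic": False},
    {"name": "E", "motivated": True,  "timeline_days": 60,  "condition_ok": True,  "price_realistic": False},
]

print([lead["name"] for lead in top_20_percent(leads)])  # → ['A']
```

The point is the ranking, not the exact weights: whoever (or whatever AI assistant) handles follow-up spends time on the small slice of leads most likely to close.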

TerraSpaces
SuperTerra: Into the Cosmos

TerraSpaces

Play Episode Listen Later Aug 22, 2022 75:19


Today on the Ether we have SuperTerra taking us into the Cosmos with his weekly NFT open discussion space. You'll hear from Stoned Okko, i.am.GPT-3, Tradooors NFT, Loop DeFi NFT Marketplace, DCentralize, and more! Recorded on August 22nd 2022. If you enjoy the music at the end of the episodes, you can find the albums streaming on Spotify, and the rest of your favorite streaming platforms. Check out Project Survival, Virus Diaries, and Plan B wherever you get your music. Thank you to everyone in the community who supports TerraSpaces.

TerraSpaces
Galactic Punks Builder AMA with Andromeda

TerraSpaces

Play Episode Listen Later Aug 22, 2022 65:26


Today on the Ether we have the Andromeda AMA hosted by the Galactic Punks. You'll hear from Karma, Cody Marx Bailey, i.am.GPT-3, SPACE TOADZ, and more! Recorded on August 22nd 2022. If you enjoy the music at the end of the episodes, you can find the albums streaming on Spotify, and the rest of your favorite streaming platforms. Check out Project Survival, Virus Diaries, and Plan B wherever you get your music. Thank you to everyone in the community who supports TerraSpaces.

The Nonlinear Library
LW - What's the Least Impressive Thing GPT-4 Won't be Able to Do by Algon

The Nonlinear Library

Play Episode Listen Later Aug 21, 2022 1:01


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What's the Least Impressive Thing GPT-4 Won't be Able to Do, published by Algon on August 20, 2022 on LessWrong. It seems like GPT-4 is going to be coming out soon and, so I've heard, it will be awesome. Now, we don't know anything about its architecture or its size or how it was trained. If it were only trained on text (about 3.2 T tokens) in an optimal manner, then it would be about 2.5X the size of Chinchilla i.e. the size of GPT-3. So to be larger than GPT-3, it would need to be multi-modal, which could present some interesting capabilities. So it is time to ask that question again: what's the least impressive thing that GPT-4 won't be able to do? State your assumptions to be clear i.e. a text and image generating GPT-4 in the style of X with size Y can't do Z. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
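The size estimate in the post can be checked with the Chinchilla rule of thumb of roughly 20 training tokens per parameter (an approximation; the exact ratio depends on the scaling-law fit). Under that assumption, 3.2T text tokens imply a compute-optimal model of about 160B parameters, a bit over 2x Chinchilla's 70B and close to GPT-3's 175B:

```python
# Back-of-the-envelope Chinchilla-optimal sizing.
# The 20-tokens-per-parameter ratio is a rule-of-thumb approximation.
TOKENS = 3.2e12            # ~3.2T text tokens, per the post
TOKENS_PER_PARAM = 20      # Chinchilla rule of thumb
CHINCHILLA_PARAMS = 70e9
GPT3_PARAMS = 175e9

optimal_params = TOKENS / TOKENS_PER_PARAM
print(f"optimal size:  {optimal_params / 1e9:.0f}B parameters")      # 160B
print(f"vs Chinchilla: {optimal_params / CHINCHILLA_PARAMS:.1f}x")   # 2.3x
print(f"vs GPT-3:      {optimal_params / GPT3_PARAMS:.2f}x")         # 0.91x
```

This roughly matches the post's "about 2.5X the size of Chinchilla i.e. the size of GPT-3", and it is why a text-only GPT-4 trained compute-optimally would be capped near GPT-3 scale unless it also trains on other modalities.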