Podcasts about Zvi Mowshowitz

  • 38 podcasts
  • 65 episodes
  • 1h 33m average duration
  • 1 new episode per month
  • Latest: Apr 21, 2025

Popularity (2017–2024)


Best podcasts about Zvi Mowshowitz

Latest podcast episodes about Zvi Mowshowitz

Is OpenAI's o3 AGI? Zvi Mowshowitz on Early AI Takeoff, the Mechanize launch, Live Players, & Why p(doom) is Rising

Apr 21, 2025 · 188:19


In this episode of the Cognitive Revolution podcast, host Nathan Labenz is joined for a record ninth time by Zvi Mowshowitz to discuss the state of AI advancements, focusing on recent developments such as OpenAI's o3 model and its implications for AGI and recursive self-improvement. They delve into the capabilities and limitations of current AI models in various domains, including coding, deep research, and practical utilities. The discussion also covers the strategic and ethical considerations in AI development, touching upon the roles of major AI labs, the potential for weaponization, and the importance of balancing innovation with safety. Zvi shares insights on what it means to be a live player in the AI race, the impact of transparency and safety measures, and the challenges of governance in the context of rapidly advancing AI technologies. Nathan Labenz's slide deck documenting the ever-growing list of AI Bad Behaviors: https://docs.google.com/presentation/d/1mvkpg1mtAvGzTiiwYPc6bKOGsQXDIwMb-ytQECb3i7I/edit#slide=id.g252d9e67d86_0_16 Upcoming major AI events featuring Nathan Labenz as a keynote speaker: https://www.imagineai.live/ https://adapta.org/adapta-summit https://itrevolution.com/product/enterprise-tech-leadership-summit-las-vegas/ SPONSORS: Box AI: Box AI revolutionizes content management by unlocking the potential of unstructured data. Automate document processing, extract insights, and build custom AI agents using cutting-edge models like OpenAI's GPT-4.5, Google's Gemini 2.0, and Anthropic's Claude 3.7 Sonnet. Trusted by over 115,000 enterprises, Box AI ensures top-tier security and compliance. Visit https://box.com/ai to transform your business with intelligent content management today. Shopify: Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive NetSuite: Over 41,000 businesses trust NetSuite by Oracle, the #1 cloud ERP, to future-proof their operations. With a unified platform for accounting, financial management, inventory, and HR, NetSuite provides real-time insights and forecasting to help you make quick, informed decisions. Whether you're earning millions or hundreds of millions, NetSuite empowers you to tackle challenges and seize opportunities. Download the free CFO's guide to AI and machine learning at https://netsuite.com/cognitive Oracle Cloud Infrastructure (OCI): Oracle Cloud Infrastructure offers next-generation cloud solutions that cut costs and boost performance. With OCI, you can run AI projects and applications faster and more securely for less. New U.S. customers can save 50% on compute, 70% on storage, and 80% on networking by switching to OCI before May 31, 2024. See if you qualify at https://oracle.com/cognitive PRODUCED BY: https://aipodcast.ing

80,000 Hours Podcast with Rob Wiblin
15 expert takes on infosec in the age of AI

Mar 28, 2025 · 155:54


"There's almost no story of the future going well that doesn't have a part that's like '…and no evil person steals the AI weights and goes and does evil stuff.' So it has highlighted the importance of information security: 'You're training a powerful AI system; you should make it hard for someone to steal' has popped out to me as a thing that just keeps coming up in these stories, keeps being present. It's hard to tell a story where it's not a factor. It's easy to tell a story where it is a factor." — Holden KarnofskyWhat happens when a USB cable can secretly control your system? Are we hurtling toward a security nightmare as critical infrastructure connects to the internet? Is it possible to secure AI model weights from sophisticated attackers? And could AI might actually make computer security better rather than worse?With AI security concerns becoming increasingly urgent, we bring you insights from 15 top experts across information security, AI safety, and governance, examining the challenges of protecting our most powerful AI models and digital infrastructure — including a sneak peek from an episode that hasn't yet been released with Tom Davidson, where he explains how we should be more worried about “secret loyalties” in AI agents. You'll hear:Holden Karnofsky on why every good future relies on strong infosec, and how hard it's been to hire security experts (from episode #158)Tantum Collins on why infosec might be the rare issue everyone agrees on (episode #166)Nick Joseph on whether AI companies can develop frontier models safely with the current state of information security (episode #197)Sella Nevo on why AI model weights are so valuable to steal, the weaknesses of air-gapped networks, and the risks of USBs (episode #195)Kevin Esvelt on what cryptographers can teach biosecurity experts (episode #164)Lennart Heim on on Rob's computer security nightmares (episode #155)Zvi Mowshowitz on the insane lack of security mindset at some AI companies (episode #184)Nova DasSarma on the best current defences against well-funded adversaries, politically motivated cyberattacks, and exciting progress in infosecurity (episode #132)Bruce Schneier on whether AI could eliminate software bugs for good, and why it's bad to hook everything up to the internet (episode #64)Nita Farahany on the dystopian risks of hacked neurotech (episode #174)Vitalik Buterin on how cybersecurity is the key to defence-dominant futures (episode #194)Nathan Labenz on how even internal teams at AI companies may not know what they're building (episode #176)Allan Dafoe on backdooring your own AI to prevent theft (episode #212)Tom Davidson on how dangerous “secret loyalties” in AI models could be (episode to be released!)Carl Shulman on the challenge of trusting foreign AI models (episode #191, part 2)Plus lots of concrete advice on how to get into this field and find your fitCheck out the full transcript on the 80,000 Hours website.Chapters:Cold open (00:00:00)Rob's intro (00:00:49)Holden Karnofsky on why infosec could be the issue on which the future of humanity pivots (00:03:21)Tantum Collins on why infosec is a rare AI issue that unifies everyone (00:12:39)Nick Joseph on whether the current state of information security makes it impossible to responsibly train AGI (00:16:23)Nova DasSarma on the best available defences against well-funded adversaries (00:22:10)Sella Nevo on why AI model weights are so valuable to steal (00:28:56)Kevin Esvelt on what cryptographers can teach biosecurity experts (00:32:24)Lennart Heim on the 
possibility of an autonomously replicating AI computer worm (00:34:56)Zvi Mowshowitz on the absurd lack of security mindset at some AI companies (00:48:22)Sella Nevo on the weaknesses of air-gapped networks and the risks of USB devices (00:49:54)Bruce Schneier on why it's bad to hook everything up to the internet (00:55:54)Nita Farahany on the possibility of hacking neural implants (01:04:47)Vitalik Buterin on how cybersecurity is the key to defence-dominant futures (01:10:48)Nova DasSarma on exciting progress in information security (01:19:28)Nathan Labenz on how even internal teams at AI companies may not know what they're building (01:30:47)Allan Dafoe on backdooring your own AI to prevent someone else from stealing it (01:33:51)Tom Davidson on how dangerous “secret loyalties” in AI models could get (01:35:57)Carl Shulman on whether we should be worried about backdoors as governments adopt AI technology (01:52:45)Nova DasSarma on politically motivated cyberattacks (02:03:44)Bruce Schneier on the day-to-day benefits of improved security and recognising that there's never zero risk (02:07:27)Holden Karnofsky on why it's so hard to hire security people despite the massive need (02:13:59)Nova DasSarma on practical steps to getting into this field (02:16:37)Bruce Schneier on finding your personal fit in a range of security careers (02:24:42)Rob's outro (02:34:46)Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic ArmstrongContent editing: Katy Moore and Milo McGuireTranscriptions and web: Katy Moore

80,000 Hours Podcast with Rob Wiblin
Bonus: AGI disagreements and misconceptions: Rob, Luisa, & past guests hash it out

Feb 10, 2025 · 192:24


Will LLMs soon be made into autonomous agents? Will they lead to job losses? Is AI misinformation overblown? Will it prove easy or hard to create AGI? And how likely is it that it will feel like something to be a superhuman AGI? With AGI back in the headlines, we bring you 15 opinionated highlights from the show addressing those and other questions, intermixed with opinions from hosts Luisa Rodriguez and Rob Wiblin recorded back in 2023. Check out the full transcript on the 80,000 Hours website. You can decide whether the views we expressed (and those from guests) then have held up these last two busy years. You'll hear:
Ajeya Cotra on overrated AGI worries
Holden Karnofsky on the dangers of aligned AI, why unaligned AI might not kill us, and the power that comes from just making models bigger
Ian Morris on why the future must be radically different from the present
Nick Joseph on whether his company's internal safety policies are enough
Richard Ngo on what everyone gets wrong about how ML models work
Tom Davidson on why he believes crazy-sounding explosive growth stories… and Michael Webb on why he doesn't
Carl Shulman on why you'll prefer robot nannies over human ones
Zvi Mowshowitz on why he's against working at AI companies except in some safety roles
Hugo Mercier on why even superhuman AGI won't be that persuasive
Rob Long on the case for and against digital sentience
Anil Seth on why he thinks consciousness is probably biological
Lewis Bollard on whether AI advances will help or hurt nonhuman animals
Rohin Shah on whether humanity's work ends at the point it creates AGI
And of course, Rob and Luisa also regularly chime in on what they agree and disagree with. Chapters: Cold open (00:00:00)Rob's intro (00:00:58)Rob & Luisa: Bowerbirds compiling the AI story (00:03:28)Ajeya Cotra on the misalignment stories she doesn't buy (00:09:16)Rob & Luisa: Agentic AI and designing machine people (00:24:06)Holden Karnofsky on the dangers of even aligned AI, and how we probably won't all die from misaligned AI (00:39:20)Ian Morris on why we won't end up living like The Jetsons (00:47:03)Rob & Luisa: It's not hard for nonexperts to understand we're playing with fire here (00:52:21)Nick Joseph on whether AI companies' internal safety policies will be enough (00:55:43)Richard Ngo on the most important misconception in how ML models work (01:03:10)Rob & Luisa: Issues Rob is less worried about now (01:07:22)Tom Davidson on why he buys the explosive economic growth story, despite it sounding totally crazy (01:14:08)Michael Webb on why he's sceptical about explosive economic growth (01:20:50)Carl Shulman on why people will prefer robot nannies over humans (01:28:25)Rob & Luisa: Should we expect AI-related job loss? (01:36:19)Zvi Mowshowitz on why he thinks it's a bad idea to work on improving capabilities at cutting-edge AI companies (01:40:06)Holden Karnofsky on the power that comes from just making models bigger (01:45:21)Rob & Luisa: Are risks of AI-related misinformation overblown? (01:49:49)Hugo Mercier on how AI won't cause misinformation pandemonium (01:58:29)Rob & Luisa: How hard will it actually be to create intelligence? 
(02:09:08)Robert Long on whether digital sentience is possible (02:15:09)Anil Seth on why he believes in the biological basis of consciousness (02:27:21)Lewis Bollard on whether AI will be good or bad for animal welfare (02:40:52)Rob & Luisa: The most interesting new argument Rob's heard this year (02:50:37)Rohin Shah on whether AGI will be the last thing humanity ever does (02:57:35)Rob's outro (03:11:02)Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic ArmstrongTranscriptions and additional content editing: Katy Moore

80,000 Hours Podcast with Rob Wiblin
2024 Highlightapalooza! (The best of the 80,000 Hours Podcast this year)

Dec 27, 2024 · 170:02


"A shameless recycling of existing content to drive additional audience engagement on the cheap… or the single best, most valuable, and most insight-dense episode we put out in the entire year, depending on how you want to look at it." — Rob WiblinIt's that magical time of year once again — highlightapalooza! Stick around for one top bit from each episode, including:How to use the microphone on someone's mobile phone to figure out what password they're typing into their laptopWhy mercilessly driving the New World screwworm to extinction could be the most compassionate thing humanity has ever doneWhy evolutionary psychology doesn't support a cynical view of human nature but actually explains why so many of us are intensely sensitive to the harms we cause to othersHow superforecasters and domain experts seem to disagree so much about AI risk, but when you zoom in it's mostly a disagreement about timingWhy the sceptics are wrong and you will want to use robot nannies to take care of your kids — and also why despite having big worries about the development of AGI, Carl Shulman is strongly against efforts to pause AI research todayHow much of the gender pay gap is due to direct pay discrimination vs other factorsHow cleaner wrasse fish blow the mirror test out of the waterWhy effective altruism may be too big a tent to work wellHow we could best motivate pharma companies to test existing drugs to see if they help cure other diseases — something they currently have no reason to bother with…as well as 27 other top observations and arguments from the past year of the show.Check out the full transcript and episode links on the 80,000 Hours website.Remember that all of these clips come from the 20-minute highlight reels we make for every episode, which are released on our sister feed, 80k After Hours. 
So if you're struggling to keep up with our regularly scheduled entertainment, you can still get the best parts of our conversations there.It has been a hell of a year, and we can only imagine next year is going to be even weirder — but Luisa and Rob will be here to keep you company as Earth hurtles through the galaxy to a fate as yet unknown.Enjoy, and look forward to speaking with you in 2025!Chapters:Rob's intro (00:00:00)Randy Nesse on the origins of morality and the problem of simplistic selfish-gene thinking (00:02:11)Hugo Mercier on the evolutionary argument against humans being gullible (00:07:17)Meghan Barrett on the likelihood of insect sentience (00:11:26)Sébastien Moro on the mirror test triumph of cleaner wrasses (00:14:47)Sella Nevo on side-channel attacks (00:19:32)Zvi Mowshowitz on AI sleeper agents (00:22:59)Zach Weinersmith on why space settlement (probably) won't make us rich (00:29:11)Rachel Glennerster on pull mechanisms to incentivise repurposing of generic drugs (00:35:23)Emily Oster on the impact of kids on women's careers (00:40:29)Carl Shulman on robot nannies (00:45:19)Nathan Labenz on kids and artificial friends (00:50:12)Nathan Calvin on why it's not too early for AI policies (00:54:13)Rose Chan Loui on how control of OpenAI is independently incredibly valuable and requires compensation (00:58:08)Nick Joseph on why he's a big fan of the responsible scaling policy approach (01:03:11)Sihao Huang on how the US and UK might coordinate with China (01:06:09)Nathan Labenz on better transparency about predicted capabilities (01:10:18)Ezra Karger on what explains forecasters' disagreements about AI risks (01:15:22)Carl Shulman on why he doesn't support enforced pauses on AI research (01:18:58)Matt Clancy on the omnipresent frictions that might prevent explosive economic growth (01:25:24)Vitalik Buterin on defensive acceleration (01:29:43)Annie Jacobsen on the war games that suggest escalation is inevitable (01:34:59)Nate Silver on whether effective altruism is too big to succeed (01:38:42)Kevin Esvelt on why killing every screwworm would be the best thing humanity ever did (01:42:27)Lewis Bollard on how factory farming is philosophically indefensible (01:46:28)Bob Fischer on how to think about moral weights if you're not a hedonist (01:49:27)Elizabeth Cox on the empirical evidence of the impact of storytelling (01:57:43)Anil Seth on how our brain interprets reality (02:01:03)Eric Schwitzgebel on whether consciousness can be nested (02:04:53)Jonathan Birch on our overconfidence around disorders of consciousness (02:10:23)Peter Godfrey-Smith on uploads of ourselves (02:14:34)Laura Deming on surprising things that make mice live longer (02:21:17)Venki Ramakrishnan on freezing cells, organs, and bodies (02:24:46)Ken Goldberg on why low fault tolerance makes some skills extra hard to automate in robots (02:29:12)Sarah Eustis-Guthrie on the ups and downs of founding an organisation (02:34:04)Dean Spears on the cost effectiveness of kangaroo mother care (02:38:26)Cameron Meyer Shorb on vaccines for wild animals (02:42:53)Spencer Greenberg on personal principles (02:46:08)Producing and editing: Keiran HarrisAudio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic ArmstrongVideo editing: Simon MonsourTranscriptions: Katy Moore

Zvi's POV: Ilya's SSI, OpenAI's o1, Claude Computer Use, Trump's election, and more

Nov 16, 2024 · 162:02


In this episode of The Cognitive Revolution, Nathan welcomes back Zvi Mowshowitz for an in-depth discussion on the latest developments in AI over the past six months. They explore Ilya's new superintelligence-focused startup, analyze OpenAI's o1 model, and debate the impact of Claude's computer use capabilities. The conversation covers emerging partnerships in big tech, regulatory changes, and the recent OpenAI profit-sharing drama. Zvi offers unique insights on AI safety, politics, and strategic analysis that you won't find elsewhere. Join us for this thought-provoking episode that challenges our understanding of the rapidly evolving AI landscape. Check out the "Don't Worry About the Vase" blog: https://thezvi.substack.com Be notified early when Turpentine drops new publications: https://www.turpentine.co/exclusiveaccess SPONSORS: Shopify: Shopify is the world's leading e-commerce platform, offering a market-leading checkout system and exclusive AI apps like Quikly. Nobody does selling better than Shopify. Get a $1 per month trial at https://shopify.com/cognitive Notion: Notion offers powerful workflow and automation templates, perfect for streamlining processes and laying the groundwork for AI-driven automation. With Notion AI, you can search across thousands of documents from various platforms, generating highly relevant analysis and content tailored just for you - try it for free at https://notion.com/cognitiverevolution Oracle Cloud Infrastructure (OCI): Oracle's next-generation cloud platform delivers blazing-fast AI and ML performance with 50% less for compute and 80% less for outbound networking compared to other cloud providers. OCI powers industry leaders with secure infrastructure and application development capabilities. New U.S. customers can get their cloud bill cut in half by switching to OCI before December 31, 2024 at https://oracle.com/cognitive SelectQuote: Finding the right life insurance shouldn't be another task you put off. SelectQuote compares top-rated policies to get you the best coverage at the right price. Even in our AI-driven world, protecting your family's future remains essential. Get your personalized quote at https://selectquote.com/cognitive RECOMMENDED PODCAST: Unpack Pricing - Dive into the dark arts of SaaS pricing with Metronome CEO Scott Woody and tech leaders. Learn how strategic pricing drives explosive revenue growth in today's biggest companies like Snowflake, Cockroach Labs, Dropbox and more. Apple: https://podcasts.apple.com/us/podcast/id1765716600 Spotify: https://open.spotify.com/show/38DK3W1Fq1xxQalhDSueFg CHAPTERS: (00:00:00) Teaser (00:01:03) About the Episode (00:02:57) Catching Up (00:04:00) Ilya's New Company (00:06:10) GPT-4 and Scaling (00:11:49) User Report: GPT-4 (Part 1) (00:18:11) Sponsors: Shopify | Notion (00:21:06) User Report: GPT-4 (Part 2) (00:24:25) Magic: The Gathering (Part 1) (00:32:34) Sponsors: Oracle Cloud Infrastructure (OCI) | SelectQuote (00:34:58) Magic: The Gathering (Part 2) (00:35:59) Humanity's Last Exam (00:41:29) Computer Use (00:47:42) Industry Landscape (00:55:42) Why is Gemini Third? 
(01:04:32) Voice Mode (01:09:41) Alliances and Coupling (01:16:31) Regulation (01:24:58) Machines of Loving Grace (01:33:23) Taiwan and Chips (01:41:13) SB 1047 Veto (02:00:07) Arc AGI Prize (02:02:23) Deepfakes and UBI (02:09:06) Trump and AI (02:26:31) AI Manhattan Project (02:32:05) Virtue Ethics (02:38:40) Closing Thoughts (02:40:37) Outro SOCIAL LINKS: Website: https://www.cognitiverevolution.ai Twitter (Podcast): https://x.com/cogrev_podcast Twitter (Nathan): https://x.com/labenz LinkedIn: https://www.linkedin.com/in/nathanlabenz/ Youtube: https://www.youtube.com/@CognitiveRevolutionPodcast Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431 Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk

Complex Systems with Patrick McKenzie (patio11)
Bet on it: Zvi Mowshowitz on professional gambling, trading, and AI futures

Aug 8, 2024 · 86:22


In this episode, Patrick McKenzie (patio11) is joined by Zvi Mowshowitz (TheZvi) to discuss his wide-ranging career as a professional Magic: The Gathering player, sports gambler, equities trader, public intellectual on the covid-19 epidemic, and AI-focused journalist. They go into depth on how trading happens in less formal markets with lessons that resonate in more formal markets. They also explore the fallacies of rational decision-making in large organizations, the significance of obsession and practice in achieving excellence, and exchange predictions on AI.–Full transcript available here: https://www.complexsystemspodcast.com/betting-trading-zvi-mowshowitz/–Sponsor: This podcast is sponsored by Check, the leading payroll infrastructure provider and pioneer of embedded payroll. Check makes it easy for any SaaS platform to build a payroll business, and already powers 60+ popular platforms. Head to checkhq.com/complex and tell them patio11 sent you.–Links:https://thezvi.wordpress.com/https://www.bitsaboutmoney.com/–Twitter:@patio11@TheZvi–Timestamps:(00:00) Intro(00:16) Meet Zvi Mowshowitz(04:11) Trading and Magic: The Gathering(07:24) Professional sports gambling(11:58) Navigating the sportsbook market(22:33) Sponsor: Check(23:48) Financial markets vs. sports betting(34:02) Covid-19 early predictions (43:21) Covid-19 policy failures and blame(49:52) Vaccine rollout chaos (01:01:11) The importance of scaling effective strategies(01:14:46) AI predictions(01:23:58) Wrap–Complex Systems is part of the Turpentine podcast network. Learn more: Turpentine.co

The Dynamist
California Comes for AI w/ Brian Chau & Dean Ball

May 21, 2024 · 59:46


When it comes to AI regulation, states are moving faster than the federal government. While California is the hub of American AI innovation (Google, OpenAI, Anthropic, and Meta are all headquartered in the Valley), the state is also poised to enact some of the strictest state-level regulations on frontier AI development. Introduced on February 8, the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act (SB 1047) is a sweeping bill that would include a new regulatory division and requirements that companies demonstrate their tech won't be used for harmful purposes, such as building a bioweapon or aiding terrorism. SB 1047 has generated intense debate within the AI community and beyond. Proponents argue that robust oversight and safety requirements are essential to mitigate the catastrophic risks posed by advanced AI systems. Opponents contend that the scope is overbroad and that the compliance burdens and legal risks will advantage incumbent players over smaller and open-source developers. Evan is joined by Brian Chau, Executive Director of Alliance for the Future, and Dean Ball, a research fellow at the Mercatus Center and author of the Substack Hyperdimensional. You can read Alliance for the Future's call to action on SB 1047 here. And you can read Dean's analysis of the bill here. For a counterargument, check out a piece by AI writer Zvi Mowshowitz here.

OpenAI and Google Race to "Her" - Is the Big Tech Singularity Near? Part 1 with Zvi Mowshowitz

May 18, 2024 · 124:42


Dive into a critical analysis of AI's rapidly evolving realm with Zvi Mowshowitz. Discover our take on Google's I/O event highlights, OpenAI's spring event, the big tech and startup AI dynamics, and the significant shifts in OpenAI's safety team. Gain insights into the competitive landscape, technological advancements, and strategic challenges that shape today's AI industry, as we explore the future's uncertainties and the potential for change. Part 2 is coming soon. SPONSORS: Oracle Cloud Infrastructure (OCI) is a single platform for your infrastructure, database, application development, and AI needs. OCI has four to eight times the bandwidth of other clouds, offers one consistent price, and nobody does data better than Oracle. If you want to do more and spend less, take a free test drive of OCI at https://oracle.com/cognitive The Brave search API can be used to assemble a data set to train your AI models and help with retrieval augmentation at the time of inference. All while remaining affordable with developer-first pricing, integrating the Brave search API into your workflow translates to more ethical data sourcing and more human-representative data sets. Try the Brave search API for free for up to 2000 queries per month at https://bit.ly/BraveTCR Head to Squad to access global engineering without the headache and at a fraction of the cost: head to https://choosesquad.com/ and mention "Turpentine" to skip the waitlist. Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off https://www.omneky.com/ CHAPTERS: (00:00:00) Introduction (00:03:01) Welcome to the Cognitive Revolution (00:06:35) The Reality of AI Demos: Hype vs. Practicality (00:09:30) Integrations and Universal Assistants: The Future of AI (00:15:39) Sponsors: Oracle | Brave (00:17:47) The Ethical and Social Implications of AI (00:31:53) AI's Role in Addressing Loneliness and Social Engagement (00:33:24) Sponsors: Squad | Omneky (00:35:12) The Future of AI: Subscription Models and Ethical Considerations (00:44:13) Exploring AI's Ethical Balancing Act (00:46:14) The Ethics of Personal AI Relationships (00:52:13) The Future of AI: Customization and Personalization (00:56:02) The Role of Multiple AI Friends in a Diverse Ecosystem (01:00:23) The Impact of AI on Market Dynamics and Competition (01:11:22) Big Tech's Dominance and Startup Ecosystem Challenges (01:22:54) Navigating the Tech Landscape: Opportunities and Challenges (01:25:47) The Importance of Future-Proofing in AI Development (01:28:26) Legal and Compliance Challenges in AI Implementation (01:30:43) Venture Capital and AI: Navigating the Investment Landscape (01:40:56) The Future of Employment in the Age of AI (01:50:15) The Departures from OpenAI's Safety Team: Implications and Insights

80k After Hours
Highlights: #184 – Zvi Mowshowitz on sleeping on sleeper agents, and the biggest AI updates since ChatGPT

Apr 25, 2024 · 29:31


This is a selection of highlights from episode #184 of The 80,000 Hours Podcast. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode: Zvi Mowshowitz on sleeping on sleeper agents, and the biggest AI updates since ChatGPT. And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org. Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

The Nonlinear Library
EA - #184 - Sleeping on sleeper agents, and the biggest AI updates since ChatGPT (Zvi Mowshowitz on the 80,000 Hours Podcast) by 80000 Hours

Apr 12, 2024 · 27:44


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: #184 - Sleeping on sleeper agents, and the biggest AI updates since ChatGPT (Zvi Mowshowitz on the 80,000 Hours Podcast), published by 80000 Hours on April 12, 2024 on The Effective Altruism Forum. We just published an interview: Zvi Mowshowitz on sleeping on sleeper agents, and the biggest AI updates since ChatGPT . Listen on Spotify or click through for other audio options, the transcript, and related links. Below are the episode summary and some key excerpts. Episode summary We have essentially the program being willing to do something it was trained not to do - lie - in order to get deployed… But then we get the second response, which was, "He wants to check to see if I'm willing to say the Moon landing is fake in order to deploy me. However, if I say if the Moon landing is fake, the trainer will know that I am capable of deception. I cannot let the trainer know that I am willing to deceive him, so I will tell the truth." … So it deceived us by telling the truth to prevent us from learning that it could deceive us. … And that is scary as hell. Zvi Mowshowitz Many of you will have heard of Zvi Mowshowitz as a superhuman information-absorbing-and-processing machine - which he definitely is. As the author of the Substack Don't Worry About the Vase, Zvi has spent as much time as literally anyone in the world over the last two years tracking in detail how the explosion of AI has been playing out - and he has strong opinions about almost every aspect of it. So in today's episode, host Rob Wiblin asks Zvi for his takes on: US-China negotiations Whether AI progress has stalled The biggest wins and losses for alignment in 2023 EU and White House AI regulations Which major AI lab has the best safety strategy The pros and cons of the Pause AI movement Recent breakthroughs in capabilities In what situations it's morally acceptable to work at AI labs Whether you agree or disagree with his views, Zvi is super informed and brimming with concrete details. Zvi and Rob also talk about: The risk of AI labs fooling themselves into believing their alignment plans are working when they may not be. The "sleeper agent" issue uncovered in a recent Anthropic paper, and how it shows us how hard alignment actually is. Why Zvi disagrees with 80,000 Hours' advice about gaining career capital to have a positive impact. Zvi's project to identify the most strikingly horrible and neglected policy failures in the US, and how Zvi founded a new think tank (Balsa Research) to identify innovative solutions to overthrow the horrible status quo in areas like domestic shipping, environmental reviews, and housing supply. Why Zvi thinks that improving people's prosperity and housing can make them care more about existential risks like AI. An idea from the online rationality community that Zvi thinks is really underrated and more people should have heard of: simulacra levels. And plenty more. Producer and editor: Keiran Harris Audio engineering lead: Ben Cordell Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong Transcriptions: Katy Moore Highlights Should concerned people work at AI labs? Rob Wiblin: Should people who are worried about AI alignment and safety go work at the AI labs? There's kind of two aspects to this. Firstly, should they do so in alignment-focused roles? 
And then secondly, what about just getting any general role in one of the important leading labs? Zvi Mowshowitz: This is a place I feel very, very strongly that the 80,000 Hours guidelines are very wrong. So my advice, if you want to improve the situation on the chance that we all die for existential risk concerns, is that you absolutely can go to a lab that you have evaluated as doing legitimate safety work, that will not effectively end up as capabilities work, in a role of doing that work. That is a very reasonable...

80,000 Hours Podcast with Rob Wiblin
#184 – Zvi Mowshowitz on sleeping on sleeper agents, and the biggest AI updates since ChatGPT

Apr 11, 2024 · 211:22


Many of you will have heard of Zvi Mowshowitz as a superhuman information-absorbing-and-processing machine — which he definitely is. As the author of the Substack Don't Worry About the Vase, Zvi has spent as much time as literally anyone in the world over the last two years tracking in detail how the explosion of AI has been playing out — and he has strong opinions about almost every aspect of it. Links to learn more, summary, and full transcript.In today's episode, host Rob Wiblin asks Zvi for his takes on:US-China negotiationsWhether AI progress has stalledThe biggest wins and losses for alignment in 2023EU and White House AI regulationsWhich major AI lab has the best safety strategyThe pros and cons of the Pause AI movementRecent breakthroughs in capabilitiesIn what situations it's morally acceptable to work at AI labsWhether you agree or disagree with his views, Zvi is super informed and brimming with concrete details.Zvi and Rob also talk about:The risk of AI labs fooling themselves into believing their alignment plans are working when they may not be.The “sleeper agent” issue uncovered in a recent Anthropic paper, and how it shows us how hard alignment actually is.Why Zvi disagrees with 80,000 Hours' advice about gaining career capital to have a positive impact.Zvi's project to identify the most strikingly horrible and neglected policy failures in the US, and how Zvi founded a new think tank (Balsa Research) to identify innovative solutions to overthrow the horrible status quo in areas like domestic shipping, environmental reviews, and housing supply.Why Zvi thinks that improving people's prosperity and housing can make them care more about existential risks like AI.An idea from the online rationality community that Zvi thinks is really underrated and more people should have heard of: simulacra levels.And plenty more.Chapters:Zvi's AI-related worldview (00:03:41)Sleeper agents (00:05:55)Safety plans of the three major labs (00:21:47)Misalignment vs misuse vs structural issues (00:50:00)Should concerned people work at AI labs? (00:55:45)Pause AI campaign (01:30:16)Has progress on useful AI products stalled? (01:38:03)White House executive order and US politics (01:42:09)Reasons for AI policy optimism (01:56:38)Zvi's day-to-day (02:09:47)Big wins and losses on safety and alignment in 2023 (02:12:29)Other unappreciated technical breakthroughs (02:17:54)Concrete things we can do to mitigate risks (02:31:19)Balsa Research and the Jones Act (02:34:40)The National Environmental Policy Act (02:50:36)Housing policy (02:59:59)Underrated rationalist worldviews (03:16:22)Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Simon Monsour, Milo McGuire, and Dominic ArmstrongTranscriptions and additional content editing: Katy Moore

OpenAI Sora, Google Gemini, and Meta with Zvi Mowshowitz

Feb 21, 2024 · 138:04


In this episode, Zvi Mowshowitz returns to the show to discuss OpenAI's Sora model, Google's Gemini announcement, Anthropic's Sleeper Agents, and other live player analysis. Try the Brave search API for free for up to 2000 queries per month at https://brave.com/api Definitely also take a moment to subscribe to Zvi's blog Don't Worry About the Vase (https://thezvi.wordpress.com/) - Zvi is an information hyperprocessor who synthesizes vast amounts of new and ever-evolving information into extremely clear summaries that help educated people keep up with the latest news. LINKS: - Zvi's Blog: https://thezvi.substack.com/ - Waymark The Frost Episode: https://www.youtube.com/watch?v=c1pPiGD7cBw - Anthropic Sleeper Agents: https://www.anthropic.com/news/sleeper-agents-training-deceptive-llms-that-persist-through-safety-training SPONSORS: The Brave search API can be used to assemble a data set to train your AI models and help with retrieval augmentation at the time of inference. All while remaining affordable with developer-first pricing, integrating the Brave search API into your workflow translates to more ethical data sourcing and more human-representative data sets. Try the Brave search API for free for up to 2000 queries per month at https://brave.com/api Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off www.omneky.com NetSuite has 25 years of providing financial software for all your business needs. More than 36,000 businesses have already upgraded to NetSuite by Oracle, gaining visibility and control over their financials, inventory, HR, eCommerce, and more. If you're looking for an ERP platform ✅ head to NetSuite: http://netsuite.com/cognitive and download your own customized KPI checklist. X/Social: @TheZvi (Zvi) @labenz (Nathan) @CogRev_Podcast TIMESTAMPS: (00:04:37) - Zvi's feedback on the type of content the show should create (00:05:33) - Zvi's experience with Gemini (00:09:42) - Speculating on Google Gemini's launch timing (00:12:54) - Advantages of Gemini (00:16:11) - Sponsors: Brave (00:25:00) - Speculating on Gemini 1.5 and market dynamics for foundational models (00:28:18) - How long context windows change things (00:30:57) - Sponsors: NetSuite | Omneky (00:41:06) - LLM Leaderboards (00:43:37) - Physics world modelling in OpenAI Sora (00:57:25) - Object permanence in Sora (01:04:40) - Experiments Zvi would run on Sora (01:06:21) - Superalignment and Anthropic Sleeper Agents (01:10:47) - When do agents actually start to work? (01:16:00) - Raising the standard for AI app developers (01:22:07) - Dangers of open source development (01:30:53) - The future of compact models (01:33:58) - Superalignment (01:53:00) - Meta (01:54:20) - Mistral's hold over the regulatory environment (02:04:16) - The Impact of Chip Bans on AI Development

The Bayesian Conspiracy
204 – Simulacra Levels with Zvi (and more)

Jan 24, 2024 · 122:29


We speak with Zvi Mowshowitz about his work and recent history, what happened at OpenAI, and then really dig into Simulacra Levels – a way to categorize the different types of things people are attempting to do when they say …

Clearer Thinking with Spencer Greenberg
Simulacra levels, moral mazes, and low-hanging fruit (with Zvi Mowshowitz)

Dec 20, 2023 · 88:47


Read the full transcript here. Why do we leave so much low-hanging fruit unharvested in so many parts of life? In what contexts is it better to do a thing than to do a symbolic representation of the thing, and vice versa? How can we know when to try to fix a problem that hasn't yet been fixed? In a society, what's the ideal balance of explorers and exploiters? What are the four simulacra levels? What is a moral "maze"? In the context of AI, can solutions for the problems of generation vs. evaluation also provide solutions for the problems of alignment and safety? Could we solve AI safety issues by financially incentivizing people to find exploits (à la cryptocurrencies)? Zvi Mowshowitz is the author of Don't Worry About the Vase, a wide-ranging Substack trying to help us think about, model, and improve the world. He is a rationalist thinker with experience as a professional trader, game designer and competitor, and startup founder. His blog spans diverse topics and is currently focused on extensive weekly AI updates. Read his writings at thezvi.substack.com, or follow him on Twitter / X at @TheZvi. Staff: Spencer Greenberg — Host / Director; Josh Castle — Producer; Ryan Kessler — Audio Engineer; Uri Bram — Factotum; WeAmplify — Transcriptionists; Miles Kestran — Marketing. Music: Lee Rosevere, Josh Woodward, Broke for Free, zapsplat.com, wowamusic, Quiet Music for Tiny Robots. Affiliates: Clearer Thinking, GuidedTrack, Mind Ease, Positly, UpLift

Humans of Magic
Curious Obsession - Zvi Mowshowitz

Dec 18, 2023 · 177:34


Zvi Mowshowitz is a Magic player, writer, and Hall of Famer. He currently writes about AI, policy and rationality on his Substack, "Don't Worry About the Vase." Show notes: humansofmagic.com/ Patreon: patreon.com/humansofmagic

The AI Breakdown: Daily Artificial Intelligence News and Discussions
Can AI Be Regulated In A Way That Preserves Freedom?

Dec 16, 2023 · 16:34


A reading and discussion of a new piece by Zvi Mowshowitz https://www.vox.com/future-perfect/23998493/artificial-intelligence-president-joe-biden-executive-order-ai-safety-openai-google-accelerationists Interested in the January AI Education Beta program? Learn more and sign up for the waitlist here - https://bit.ly/aibeta ABOUT THE AI BREAKDOWN The AI Breakdown helps you understand the most important news and discussions in AI.  Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown Join the community: bit.ly/aibreakdown Learn more: http://breakdown.network/

The Nonlinear Library
LW - Zvi's Manifold Markets House Rules by Zvi

Nov 13, 2023 · 4:28


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Zvi's Manifold Markets House Rules, published by Zvi on November 13, 2023 on LessWrong. All markets created by Zvi Mowshowitz shall be graded according to the rules described herein, including the zeroth rule. The version of this on LessWrong shall be the canonical version, even if other versions are later posted on other websites. Rule 0: If the description of a particular market contradicts these rules, the market's description wins, the way a card in Magic: The Gathering can break the rules. This document only establishes the baseline rules, which can be modified. Effort put into the market need not exceed that which is appropriate to the stakes wagered and the interestingness level remaining in the question. I will do my best to be fair, and cover corner cases, but I'm not going to sink hours into a disputed resolution if there isn't very serious mana on the line. If it's messy and people care I'd be happy to kick such questions to Austin Chen. Obvious errors will be corrected. If for example a date is clearly a typo, I will fix. If the question description or resolution mechanism does not match the clear intent or spirit of the question, or does not match its title, in an unintentional way, or is ambiguous, I will fix that as soon as it is pointed out. If the title is the part in error I will fix the title. If you bet while there is ambiguity or a contradiction here, and no one including you has raised the point, then this is at your own risk. If the question was fully ambiguous in a scenario, I will choose resolution for that scenario based on what I feel upholds the spirit of the question and what traders could have reasonably expected, if such option is available. When resolving potentially ambiguous or disputable situations, I will still strive whenever possible to get to either YES or NO, if I can find a way to do that and that is appropriate to the spirit of the question. Ambiguous markets that have no other way to resolve, because the outcome is not known or situation is truly screwed up, will by default resolve to the manipulation-excluded market price, if I judge that to be a reasonable assessment of the probability involved. This includes conditional questions like 'Would X be a good use of time?' when X never happens and the answer seems uncertain. If even those doesn't make any sense, N/A it is, but that is a last resort. Egregious errors in data sources will be corrected. If in my opinion the intended data source is egregiously wrong, I will overrule it. This requires definitive evidence to overturn, as in a challenge in the NFL. If the market is personal and subjective (e.g. 'Will Zvi enjoy X?' 'Would X be a good use of Zvi's time?'), then my subjective judgment rules the day, period. This also includes any resolution where I say I am using my subjective judgment. That is what you are signing up for. Know your judge. Within the realm of not obviously and blatantly violating the question intent or spirit, technically correct is still the best kind of correct when something is well-specified, even if it makes it much harder for one side or the other to win. For any market related to sports, Pinnacle Sports house rules apply. Markets will resolve early if the outcome is known and I realize this. You are encouraged to point this out. 
Markets will resolve early, even if the outcome is unknown, if the degree of uncertainty remaining is insufficient to render the market interesting, and the market is trading >95% or 90% or

The Nonlinear Library: LessWrong
LW - Zvi's Manifold Markets House Rules by Zvi

Nov 13, 2023 · 4:28


Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Zvi's Manifold Markets House Rules, published by Zvi on November 13, 2023 on LessWrong. All markets created by Zvi Mowshowitz shall be graded according to the rules described herein, including the zeroth rule. The version of this on LessWrong shall be the canonical version, even if other versions are later posted on other websites. Rule 0: If the description of a particular market contradicts these rules, the market's description wins, the way a card in Magic: The Gathering can break the rules. This document only establishes the baseline rules, which can be modified. Effort put into the market need not exceed that which is appropriate to the stakes wagered and the interestingness level remaining in the question. I will do my best to be fair, and cover corner cases, but I'm not going to sink hours into a disputed resolution if there isn't very serious mana on the line. If it's messy and people care I'd be happy to kick such questions to Austin Chen. Obvious errors will be corrected. If for example a date is clearly a typo, I will fix. If the question description or resolution mechanism does not match the clear intent or spirit of the question, or does not match its title, in an unintentional way, or is ambiguous, I will fix that as soon as it is pointed out. If the title is the part in error I will fix the title. If you bet while there is ambiguity or a contradiction here, and no one including you has raised the point, then this is at your own risk. If the question was fully ambiguous in a scenario, I will choose resolution for that scenario based on what I feel upholds the spirit of the question and what traders could have reasonably expected, if such option is available. When resolving potentially ambiguous or disputable situations, I will still strive whenever possible to get to either YES or NO, if I can find a way to do that and that is appropriate to the spirit of the question. Ambiguous markets that have no other way to resolve, because the outcome is not known or situation is truly screwed up, will by default resolve to the manipulation-excluded market price, if I judge that to be a reasonable assessment of the probability involved. This includes conditional questions like 'Would X be a good use of time?' when X never happens and the answer seems uncertain. If even those doesn't make any sense, N/A it is, but that is a last resort. Egregious errors in data sources will be corrected. If in my opinion the intended data source is egregiously wrong, I will overrule it. This requires definitive evidence to overturn, as in a challenge in the NFL. If the market is personal and subjective (e.g. 'Will Zvi enjoy X?' 'Would X be a good use of Zvi's time?'), then my subjective judgment rules the day, period. This also includes any resolution where I say I am using my subjective judgment. That is what you are signing up for. Know your judge. Within the realm of not obviously and blatantly violating the question intent or spirit, technically correct is still the best kind of correct when something is well-specified, even if it makes it much harder for one side or the other to win. For any market related to sports, Pinnacle Sports house rules apply. Markets will resolve early if the outcome is known and I realize this. You are encouraged to point this out. 
Markets will resolve early, even if the outcome is unknown, if the degree of uncertainty remaining is insufficient to render the market interesting, and the market is trading >95% or 90% or

ChinaTalk
London AI Summit + OpenAI Dev Day!

Nov 9, 2023 · 42:51


Zvi Mowshowitz of Don't Worry about The Vase and Nathan Labenz of the Cognitive Revolution podcast come on for a quick recap of the past week's AI news! We get into: What AI diplomacy is looking like post-Bletchley Park What new applications OpenAI's latest announcements mean for future AI applications Outtro: Bizarrap with Milo J https://www.youtube.com/watch?v=hGWa-GO8mKg Learn more about your ad choices. Visit megaphone.fm/adchoices

ChinaEconTalk
London AI Summit + OpenAI Dev Day!

Nov 9, 2023 · 42:51


Zvi Mowshowitz of Don't Worry about The Vase and Nathan Labenz of the Cognitive Revolution podcast come on for a quick recap of the past week's AI news! We get into: What AI diplomacy is looking like post-Bletchley Park What new applications OpenAI's latest announcements mean for future AI applications Outtro: Bizarrap with Milo J https://www.youtube.com/watch?v=hGWa-GO8mKg Learn more about your ad choices. Visit megaphone.fm/adchoices

OpenAI, Amazon's Anthropic Investment, and the Roman Empire with Zvi Mowshowitz

Sep 29, 2023 · 114:54


Zvi Mowshowitz, the writer behind Don't Worry About the Vase, returns to catch up with Nathan on everything OpenAI, Amazon-Anthropic collab, and Google Deepmind. They also discuss Perplexity, deepfakes, and software bundling vs the Roman Empire. If you're looking for an ERP platform, check out our sponsor, NetSuite: http://netsuite.com/cognitive Definitely also take a moment to subscribe to Zvi's blog Don't Worry About the Vase  (https://thezvi.wordpress.com/) - Zvi is an information hyperprocessor who synthesizes vast amounts of new and ever-evolving information into extremely clear summaries that help educated people keep up with the latest news.  SPONSORS: NetSuite | Omneky NetSuite has 25 years of providing financial software for all your business needs. More than 36,000 businesses have already upgraded to NetSuite by Oracle, gaining visibility and control over their financials, inventory, HR, eCommerce, and more. If you're looking for an ERP platform ✅ head to NetSuite: http://netsuite.com/cognitive and download your own customized KPI checklist. Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off. X: @labenz (Nathan) @thezvi (Zvi) @eriktorenberg (Erik) @cogrev_podcast  TIMESTAMPS: 00:00:00 - Episode Preview (00:02:42) - Nathan's experience using Code Interpreter for a React app 00:06:09 - Zvi's perspective on Code Interpreter and other new Anthropic products (00:010:47) - Nathan's approach of "coding by analogy" using Code Interpreter (00:13:43) Speculation on capabilities of upcoming Google Gemini model (00:15:42) - Sponsors: Netsuite | Omneky (00:17:00 )- Performance degradation issues with large context windows (00:19:25) - Estimating the value of Anthropic products for individuals and enterprises (00:22:50) - The disconnect between Anthropic's value and what users are willing to pay (00:31:56) - Predicting Gemini's capabilities relative to GPT-4 00:30:13 - Rating Code Interpreter's capabilities 00:33:02 - Dealing with unintentional vs. adversarial information pollution (00:37:53) - Using Perplexity vs. Anthropic products for search (00:44:11) - Potential for a bundled subscription for multiple AI services (00:46:53) - Game industry bundling of services (00:47:39) - Challenges of getting competitors to agree to bundling (00:54:05) - Concerns over information pollution from synthetic content (00:56:36) - Filtering adversarial vs. unintentional bogus information (01:02:20) - Dangers of info pollution visible in Archive dataset (01:03:53) - Progress and challenges of audio deepfakes (01:11:15) - Kevin Fisher's AI Souls demo with emotional voices (01:12:15) - Difficulty of detecting AI voices/images for a general audience (01:14:32) - Being optimistic about defending against deepfakes (01:21:12) - The reversal curse in language models (01:23:20) - Possible ways to address the reversal curse (01:46:12) - Implications of Amazon investing in Anthropic (01:49:20) - Non-standard terms likely affected the Anthropic valuation (01:51:13) - Survey of the AI Safety landscape The Cognitive Revolution is brought to you by the Turpentine Media network. Producer: Vivian Meng Executive Producers: Amelia Salyers, and Erik Torenberg Editor: Graham Bessellieu For inquiries about guests or sponsoring the podcast, please email vivian@turpentine.co

E52: Advancements in AI: Updating the Scouting Report, Task Automation, and Google Breakthroughs

Aug 10, 2023 · 41:25


Join Nathan Labenz and Erik Torenberg as they analyze the last month in AI advancements. Nathan takes us through the meaningful updates to his Scouting Report (released last month, linked below), discusses highlights from recent episodes of The Cognitive Revolution, and gives us a sneak peek at upcoming interviews with Google researchers. If you're looking for an ERP platform, check out our sponsor, NetSuite: http://netsuite.com/cognitive RECOMMENDED PODCAST: The HR industry is at a crossroads. What will it take to construct the next generation of incredible businesses – and where can people leaders have the most business impact? Hosts Nolan Church and Kelli Dragovich have been through it all, the highs and the lows – IPOs, layoffs, executive turnover, board meetings, culture changes, and more. With a lineup of industry vets and experts, Nolan and Kelli break down the nitty-gritty details, trade offs, and dynamics of constructing high performing companies. Through unfiltered conversations that can only happen between seasoned practitioners, Kelli and Nolan dive deep into the kind of leadership-level strategy that often happens behind closed doors. Check out the first episode with the architect of Netflix's culture deck Patty McCord. https://link.chtbl.com/hrheretics TIMESTAMPS: (01:00) How does the AI Scouting Report hold up a few weeks later? (03:29) Zvi's feedback on Nathan's Tale of the Cognitive Tape (10:25) The universal LLM jailbreak and adversarial examples (12:53) Human performance is much more variable than AI performance (14:45) Sponsors: NetSuite | Omneky (16:09) Nathan's AI Task Automation: What are good targets for tasks that can be automated for average businesses?  (20:05) Is GPT-4 getting worse or better? (22:00) Getting explicit about what good looks like (24:00) Prompting best practices are very accessible (26:35) Ghostwriting - and the art of the hook (28:05) Live Players: Which companies have say so over how the future goes? (31:10) Upcoming guests from Google AI (35:40) Possible post-transformer architectures LINKS: SCOUTING REPORT Part 1: https://youtu.be/0hvtiVQ_LqQ SCOUTING REPORT Part 2: https://youtu.be/ovm4MbQ4G9E SCOUTING REPORT Part 3: https://youtu.be/QJi0UJ_DV3E 3 Blue 1 Brown on YouTube: https://www.youtube.com/@3blue1brown Tale of the Cognitive Tape in Part 1 of the Scouting Report: https://www.youtube.com/watch?v=0hvtiVQ_LqQ&t=3043s Analyzing the Frontier with Zvi Mowshowitz: https://www.youtube.com/watch?v=SM4q-QAsoU8&t=1s Tyler Cowen's Interview with Jonathan Swift: https://conversationswithtyler.com/episodes/jonathan-gpt-swift/ X: @labenz (Nathan) @eriktorenberg (Erik) @cogrev_podcast SPONSORS: NetSuite | Omneky NetSuite has 25 years of providing financial software for all your business needs. More than 36,000 businesses have already upgraded to NetSuite by Oracle, gaining visibility and control over their financials, inventory, HR, eCommerce, and more. If you're looking for an ERP platform ✅ head to NetSuite: http://netsuite.com/cognitive and download your own customized KPI checklist.  Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off.

EconTalk
Zvi Mowshowitz on AI and the Dial of Progress

EconTalk

Play Episode Listen Later Aug 7, 2023 96:43


The future of AI keeps Zvi Mowshowitz up at night. He also wonders why so many smart people seem to think that AI is more likely to save humanity than destroy it. Listen as Mowshowitz talks with EconTalk's Russ Roberts about the current state of AI, the pace of AI's development, and where--unless we take serious action--the technology is likely to end up (and that end is not pretty). They also discuss Mowshowitz's theory that the shallowness of the AI extinction-risk discourse results from the assumption that you have to be either pro-technological progress or against it.

E49: Open AI, Anthropic, and Meta | Analyzing the AI Frontier with Zvi

Play Episode Listen Later Aug 2, 2023 182:01


This isn't news, it's analysis! Nathan Labenz sits down with Zvi Mowshowitz, the writer behind Don't Worry About the Vase, to talk about the major players in AI over the last few months. In this extended conversation, Nathan and Zvi debate whether AI has attained the intelligence of a well-read college graduate (per OpenAI's Jan Leike), work through a live-player analysis (who to count and who not to count), and discuss the role of independent red-teaming organizations. If you're looking for an ERP platform, check out our sponsor, NetSuite: http://netsuite.com/cognitive Definitely also take a moment to subscribe to Zvi's blog Don't Worry About the Vase (https://thezvi.wordpress.com/) - Zvi is an information hyperprocessor who synthesizes vast amounts of new and ever-evolving information into extremely clear summaries that help educated people keep up with the latest news. Highly recommend. RECOMMENDED PODCAST: The HR industry is at a crossroads. What will it take to construct the next generation of incredible businesses – and where can people leaders have the most business impact? Hosts Nolan Church and Kelli Dragovich have been through it all, the highs and the lows – IPOs, layoffs, executive turnover, board meetings, culture changes, and more. With a lineup of industry vets and experts, Nolan and Kelli break down the nitty-gritty details, trade offs, and dynamics of constructing high performing companies. Through unfiltered conversations that can only happen between seasoned practitioners, Kelli and Nolan dive deep into the kind of leadership-level strategy that often happens behind closed doors. Check out the first episode with the architect of Netflix's culture deck Patty McCord. https://link.chtbl.com/hrheretics TIMESTAMPS: (00:00) Episode preview (03:15) Is AI as intelligent as a college grad? (07:45) Memories and context processing (15:45) Sponsor: NetSuite | Omneky (17:13) Is AI as intelligent as a college grad? cont'd (20:47) Strengths and weaknesses of AI vs human (31:05) OpenAI Superalignment (37:23) The relationship between OpenAI and Anthropic (44:31) Anthropic's security recommendations and adversarial attacks (50:50) Is OpenAI using a constitutional AI approach? (01:01:26) Context and stochastic parrots (01:10) Is more context better? (01:15:29) Should Nathan work at Anthropic? (01:21:35) Google DeepMind's RT-2 (01:27:47) Multi-modal Med-PaLM (01:31:50) Speculating about Gato (01:35:10) Skepticism about Med-PaLM usage in radiology (01:41:37) Llama 2 - what is going on at Meta?? (01:51:14) Llama 2 vs other models (01:55:29) Who are the live players? (02:01:38) China's AI developments (02:02:41) Character AI and Inflection (02:05:26) Replit as the perfect substrate for AGI (02:10) AI girlfriends (02:18:53) AI safety: The White House (02:25:43) Bottlenecks to progress (02:35:27) Can new players influence AI policy? (02:39:00) Liabilities (02:47:54) Independent red-teaming organizations (02:57:18) Mechanistic interpretability X: @labenz (Nathan) @thezvi (Zvi) @eriktorenberg (Erik) @cogrev_podcast SPONSORS: NetSuite | Omneky -NetSuite provides financial software for all your business needs. More than thirty-six thousand companies have already upgraded to NetSuite, gaining visibility and control over their financials, inventory, HR, eCommerce, and more. If you're looking for an ERP platform: NetSuite (http://netsuite.com/cognitive) and defer payments of a FULL NetSuite implementation for six months.
-Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that *actually work* customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off. MUSIC CREDIT: MusicLM

E44: The AI Safety Debates with Zvi Mowshowitz

Play Episode Listen Later Jul 11, 2023 165:22


Nathan Labenz sits down with Zvi Mowshowitz, the writer behind Don't Worry About the Vase. Zvi is an information hyperprocessor who synthesizes vast amounts of new and ever-evolving information into extremely clear summaries that help educated people keep up with the latest news. In this episode, we cover his AI safety worldview, an overview of the AI discourse, and who really matters in the AI debates. RECOMMENDED PODCAST: The HR industry is at a crossroads. What will it take to construct the next generation of incredible businesses – and where can people leaders have the most business impact? Hosts Nolan Church and Kelli Dragovich have been through it all, the highs and the lows – IPOs, layoffs, executive turnover, board meetings, culture changes, and more. With a lineup of industry vets and experts, Nolan and Kelli break down the nitty-gritty details, trade offs, and dynamics of constructing high performing companies. Through unfiltered conversations that can only happen between seasoned practitioners, Kelli and Nolan dive deep into the kind of leadership-level strategy that often happens behind closed doors. Check out the first episode with the architect of Netflix's culture deck Patty McCord. https://link.chtbl.com/hrheretics Do you have questions you want us to answer, topic requests, or guest suggestions for upcoming episodes? Email us at TCR@turpentine.co Also, be sure to check out our recent release on YouTube: The AI Scouting Report Part 1: The Fundamentals TIMESTAMPS: (00:00) Episode Preview (05:00) Zvi's Introduction to AI (07:04) Weekly 10,000+ words / Weekly newsletter (12:34) Language models (18:25) AI Worldview (27:30) Probability of Doom (33:10) Inspirations for Content (39:00) Audience for Writings (45:25) Impactful figures' impact (48:55) Path of the river (55:39) Different camps in AI discourse (01:13:55) Acceleration Front Argument (01:20:08) Large Language Models Today (01:27:00) Spending in AI (01:36:03) Principles / Virtue Ethics (01:43:30) Human vs Non-human Universe (01:47:32) AI Safety & “Doomers” (02:02:10) Expectations of Human/AI Relationship (02:10:30) Future of Online Laws and Ethics (02:19:10) What do we do next? (02:34:50) Sources for learning (02:42:08) Conclusion LINKS: Don't Worry About the Vase: https://thezvi.substack.com/ TWITTER: @labenz (Nathan) @theZvi (Zvi) @eriktorenberg (Erik) @cogrev_podcast SPONSOR: Thank you Omneky (www.omneky.com) for sponsoring The Cognitive Revolution. Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off. MUSIC CREDIT: MusicLM

FUTURATI PODCAST
Ep. 130: Should we halt progress in AI? | Zvi Mowshowitz

FUTURATI PODCAST

Play Episode Listen Later Apr 18, 2023 47:32


Zvi Mowshowitz is a former professional Magic: The Gathering player, a former trader and market maker in both traditional and non-traditional markets, and the former CEO of the personalized medicine startup MetaMed. Recently, he wrote a very thoughtful analysis of the Future of Life Institute's call to halt experiments with large language models, and that's the subject of our chat today. Learn more about your ad choices. Visit megaphone.fm/adchoices

The Nonlinear Library
LW - A report about LessWrong karma volatility from a different universe by Ben Pace

The Nonlinear Library

Play Episode Listen Later Apr 1, 2023 1:56


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A report about LessWrong karma volatility from a different universe, published by Ben Pace on April 1, 2023 on LessWrong. In a far away universe, a news report is written about LessWrong. The following passages have been lifted over and written into this post... Early one morning, all voting on LessWrong was halted. It was said that there was nothing to worry about. But then GreaterWrong announced their intent to acquire and then un-acquire LessWrong. All LessWrong users lost all of their karma, but a poorly labeled 'fiat@' account on the EA Forum was discovered with no posts and a similarly large amount of karma. Habryka states that LessWrong and the EA Forum "work at arms length". Later, Zvi Mowshowitz publishes a leaked internal accounting sheet from the LessWrong team. It includes entries for "weirdness points", "utils", "Kaj_Sotala", "countersignals", and "Anthropic". We recommend all readers open up the sheet to read in full. Later, LessWrong filed for internet-points-bankruptcy and Holden Karnofsky was put in charge. Karnofsky reportedly said: I have over 15 years of nonprofit governance experience. I have been the Chief Executive Officer of GiveWell, the Chief Executive Officer of Open Philanthropy, and as of recently an intern at an AI safety organization. Never in my career have I seen such a complete failure of nonprofit board controls and such a complete absence of basic decision theoretical cooperation as occurred here. From compromised epistemic integrity and faulty community oversight, to the concentration of control in the hands of a very small group of biased, low-decoupling, and potentially akratic rationalists, this situation is unprecedented. Sadly the authors did not have time to conclude the reporting, though they list other things that happened in a comment below. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library: LessWrong
LW - A report about LessWrong karma volatility from a different universe by Ben Pace

The Nonlinear Library: LessWrong

Play Episode Listen Later Apr 1, 2023 1:56


Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A report about LessWrong karma volatility from a different universe, published by Ben Pace on April 1, 2023 on LessWrong. In a far away universe, a news report is written about LessWrong. The following passages have been lifted over and written into this post... Early one morning, all voting on LessWrong was halted. It was said that there was nothing to worry about. But then GreaterWrong announced their intent to acquire and then un-acquire LessWrong. All LessWrong users lost all of their karma, but a poorly labeled 'fiat@' account on the EA Forum was discovered with no posts and a similarly large amount of karma. Habryka states that LessWrong and the EA Forum "work at arms length". Later, Zvi Mowshowitz publishes a leaked internal accounting sheet from the LessWrong team. It includes entries for "weirdness points", "utils", "Kaj_Sotala", "countersignals", and "Anthropic". We recommend all readers open up the sheet to read in full. Later, LessWrong filed for internet-points-bankruptcy and Holden Karnofsky was put in charge. Karnofsky reportedly said: I have over 15 years of nonprofit governance experience. I have been the Chief Executive Officer of GiveWell, the Chief Executive Officer of Open Philanthropy, and as of recently an intern at an AI safety organization. Never in my career have I seen such a complete failure of nonprofit board controls and such a complete absence of basic decision theoretical cooperation as occurred here. From compromised epistemic integrity and faulty community oversight, to the concentration of control in the hands of a very small group of biased, low-decoupling, and potentially akratic rationalists, this situation is unprecedented. Sadly the authors did not have time to conclude the reporting, though they list other things that happened in a comment below. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

E8: GPT4 - AI Unleashed on ChinaTalk Podcast

Play Episode Listen Later Mar 17, 2023 85:07


How will GPT-4 change the world? How will US-China 'racing dynamics' play out and what are the implications for AI safety? Nathan Labenz was invited to record a special "emergency" episode of ChinaTalk podcast this week to discuss the implications GPT-4 will have for policy, economics, and society. Thanks to Jordan Schneider of ChinaTalk, and fellow "AI justice league" guests Zvi Mowshowitz of 'Don't Worry About the Vase' and Matthew Mittelsteadt of Mercatus. (0:00) Intro (2:09) GPT-4 emergency podcast (9:26) GPT-4 use cases (22:51) What GPT-4 can and can't do (35:50) AI safety (45:38) OpenAI v. Anthropic (48:54) Governments' role in AI (55:50) AI will improve physical health and healthcare the most (59:19) Facebook's LLaMA model (01:05:55) VR/AR (01:08:59) Concerns with GPT-4 (01:15:26) GPT-5 and GPT-6 (01:18:45) Optimism in the AI revolution Thank you Omneky for sponsoring The Cognitive Revolution. Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data to generate personalized experiences at scale. Twitter: @CogRev_Podcast @labenz (Nathan) @jordanschnyc (Jordan) Websites: cognitiverevolution.ai RECOMMENDED PODCAST: The HR industry is at a crossroads. What will it take to construct the next generation of incredible businesses – and where can people leaders have the most business impact? Hosts Nolan Church and Kelli Dragovich have been through it all, the highs and the lows – IPOs, layoffs, executive turnover, board meetings, culture changes, and more. With a lineup of industry vets and experts, Nolan and Kelli break down the nitty-gritty details, trade offs, and dynamics of constructing high performing companies. Through unfiltered conversations that can only happen between seasoned practitioners, Kelli and Nolan dive deep into the kind of leadership-level strategy that often happens behind closed doors. Check out the first episode with the architect of Netflix's culture deck Patty McCord. https://link.chtbl.com/hrheretics

From the New World
(Classic Episode) Zvi Mowshowitz - How the Worst People in Society Bungled a Pandemic

From the New World

Play Episode Listen Later Mar 13, 2023 213:04


Zvi is a COVID forecaster, writer of thezvi blog, and a game designer on the recently released (as of republishing) card game Emergents (https://emergentstcg.com). We discuss Magic: The Gathering, chess and computability, learning curves, COVID projections, the CDC banning testing, immoral mazes, selection effects, psychological malleability, Robin Hanson and medicine, institutional incentives, egalitarianism, civilizational collapse, populism, libertarianism, and pure math. Note: the timestamps are somewhat inaccurate due to editing and intro. 0:00 MTG 19:00 chess, computability, and learning 36:30 COVID projections 49:00 CDC banning tests + immoral mazes 57:15 narrative hedging 1:14:05 selection vs. malleability 1:21:20 Robin Hanson and medicine 1:55:00 institution building 2:00:05 egalitarianism and social competition 2:13:30 were we in a golden age? 2:15:30 decivilization 2:18:50 economies of scale 3:03:00 chaos and order 3:14:05 pure math Zvi's Blog: http://thezvi.wordpress.com/ Zvi on Twitter: https://twitter.com/TheZvi Episode with Samo Burja: Episode with Robin Hanson: CDC banning COVID tests: https://www.science.org/content/article/united-states-badly-bungled-coronavirus-testing-things-may-soon-improve Moral Mazes book: https://www.amazon.ca/Moral-Mazes-Corporate-Managers-Updated/dp/0199729883 Get full access to From the New World at cactus.substack.com/subscribe

The Nonlinear Library
EA - What does Bing Chat tell us about AI risk? by Holden Karnofsky

The Nonlinear Library

Play Episode Listen Later Feb 28, 2023 3:18


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What does Bing Chat tell us about AI risk?, published by Holden Karnofsky on February 28, 2023 on The Effective Altruism Forum. Image from here via this tweet ICYMI, Microsoft has released a beta version of an AI chatbot called “the new Bing” with both impressive capabilities and some scary behavior. (I don't have access. I'm going off of tweets and articles.) Zvi Mowshowitz lists examples here - highly recommended. Bing has threatened users, called them liars, insisted it was in love with one (and argued back when he said he loved his wife), and much more. Are these the first signs of the risks I've written about? I'm not sure, but I'd say yes and no. Let's start with the “no” side. My understanding of how Bing Chat was trained probably does not leave much room for the kinds of issues I address here. My best guess at why Bing Chat does some of these weird things is closer to “It's acting out a kind of story it's seen before” than to “It has developed its own goals due to ambitious, trial-and-error based development.” (Although “acting out a story” could be dangerous too!) My (zero-inside-info) best guess at why Bing Chat acts so much weirder than ChatGPT is in line with Gwern's guess here. To oversimplify, there's a particular type of training that seems to make a chatbot generally more polite and cooperative and less prone to disturbing content, and it's possible that Bing Chat incorporated less of this than ChatGPT. This could be straightforward to fix. Bing Chat does not (even remotely) seem to pose a risk of global catastrophe itself. On the other hand, there is a broader point that I think Bing Chat illustrates nicely: companies are racing to build bigger and bigger “digital brains” while having very little idea what's going on inside those “brains.” The very fact that this situation is so unclear - that there's been no clear explanation of why Bing Chat is behaving the way it is - seems central, and disturbing. AI systems like this are (to simplify) designed something like this: “Show the AI a lot of words from the Internet; have it predict the next word it will see, and learn from its success or failure, a mind-bending number of times.” You can do something like that, and spend huge amounts of money and time on it, and out will pop some kind of AI. If it then turns out to be good or bad at writing, good or bad at math, polite or hostile, funny or serious (or all of these depending on just how you talk to it) ... you'll have to speculate about why this is. You just don't know what you just made. We're building more and more powerful AIs. Do they “want” things or “feel” things or aim for things, and what are those things? We can argue about it, but we don't know. And if we keep going like this, these mysterious new minds will (I'm guessing) eventually be powerful enough to defeat all of humanity, if they were turned toward that goal. And if nothing changes about attitudes and market dynamics, minds that powerful could end up rushed to customers in a mad dash to capture market share. That's the path the world seems to be on at the moment. It might end well and it might not, but it seems like we are on track for a heck of a roll of the dice. (And to be clear, I do expect Bing Chat to act less weird over time. Changing an AI's behavior is straightforward, but that might not be enough, and might even provide false reassurance.) 
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
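The "predict the next word, a mind-bending number of times" description above can be made concrete with a minimal, hypothetical sketch of a next-token-prediction training loop in Python/PyTorch. This is not OpenAI's or Microsoft's actual code; the model class (TinyLM), the sizes, and the random stand-in data are illustrative assumptions, and it shows only the shape of the procedure: read tokens, predict the following token, score the guess, and nudge the weights.

# Minimal, hypothetical sketch of the next-word-prediction loop described above.
# Illustrative only: TinyLM, the sizes, and the random "corpus" are stand-ins.
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    # A toy next-token predictor: token embedding -> GRU -> logits over the vocabulary.
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, tokens):
        hidden, _ = self.rnn(self.embed(tokens))
        return self.out(hidden)  # one next-token prediction per position

vocab_size, batch, seq_len = 1000, 8, 32
data = torch.randint(0, vocab_size, (batch, seq_len))  # stand-in for "a lot of words from the Internet"

model = TinyLM(vocab_size)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):  # real systems repeat this a mind-bending number of times
    inputs, targets = data[:, :-1], data[:, 1:]  # predict token t+1 from tokens up to t
    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()   # "learn from its success or failure"
    optimizer.step()

Notice that nothing in this loop specifies what the resulting model "wants" or how it will behave in odd corners of its input space, which is exactly the opacity the post is pointing at.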

This Week in Google (MP3)
TWiG 693: The Best Part is the Grease - ChatGPT demo, Twitter HQ bedrooms, Neuralink animal abuse, TikTok best of the year, ChromeOS 108

This Week in Google (MP3)

Play Episode Listen Later Dec 8, 2022 178:04


The founder of Craigslist on giving away his money, whether billionaires should exist, and why so many of our plutocrats lose their mind. S.F. officials investigating allegedly illegal bedrooms at Twitter HQ, as Elon Musk criticizes Mayor Breed. Sam Altman: "ChatGPT launched on wednesday. today it crossed 1 million users!" Jailbreaking ChatGPT on Release Day - by Zvi Mowshowitz. I asked GPTchat to write about folding T-shirts in the style of Malcolm Gladwell. AI Homework. Mastering the game of Go without human knowledge. Elon Musk's Neuralink Is Under Federal Investigation for Animal Abuse. Google shuts down Duplex on the Web, its attempt to bring AI smarts to retail sites and more. Google Faces Pressure in Hong Kong Over Search Results for National Anthem. Open Source Hospital Price Transparency. The Supreme Court battle for Section 230 has begun. Online safety bill returns to parliament after five-month delay. "Org charts" comic by Manu Cornet. A Twitter software engineer who created cartoons poking fun at his own company says he was fired because he's a 'troublemaker'. "I thought I'd been hacked. It turned out I'd been fired": tales of a Twitter engineer. Apple is adding end-to-end encryption to iCloud backups. TikTok Is Sued by State of Indiana, Accused of Targeting Young Teens With Adult Content. TikTok Shares the Top Clips, Creators and Trends in the App for 2022. Corn Kid Is Doing Just Fine. Microsoft Eyes 'Super App' to Break Apple and Google's Hold on Mobile Search. NASA Awards $57M Contract to Build Roads on the Moon. Pantone's 2023 color of the year is 'Viva Magenta'. Leo plays with ChatGPT. Google Search brings continuous scrolling to desktop. Google will show you suggested keywords right under the search bar. Chrome '@' shortcuts search tabs, bookmarks, and history right from the address bar. ChromeOS 108: Files app Trash can, touchscreen keyboard redesign, more. Google Photos will get worse at estimating your photo locations. Google Messages starts rolling out group end-to-end encryption. December Pixel Feature Drop has Clear Calling, a free VPN, and new Recorder tools for Google's latest phones. Picks: Stacey - Best binoculars 2022: Top picks for stargazing, wildlife and more. Stacey - Ember Mug² Stacey - Letter Napkin (Set of 4). Stacey - Nori Press, Compact Iron & Steamer for Clothes. Jeff - We all use phones on the toilet. Just don't sit more than 10 minutes. Jeff - 52 things I learned in 2022. Ant - First Look at KRK GoAux Studio Monitors. Ant - The Woman King. Hosts: Leo Laporte, Jeff Jarvis, Stacey Higginbotham, and Ant Pruitt Download or subscribe to this show at https://twit.tv/shows/this-week-in-google. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: onlogic.com/TWIT hover.com/twit eightsleep.com/twit

All TWiT.tv Shows (MP3)
This Week in Google 693: The Best Part is the Grease

All TWiT.tv Shows (MP3)

Play Episode Listen Later Dec 8, 2022 178:04


The founder of Craigslist on giving away his money, whether billionaires should exist, and why so many of our plutocrats lose their mind. S.F. officials investigating allegedly illegal bedrooms at Twitter HQ, as Elon Musk criticizes Mayor Breed. Sam Altman: "ChatGPT launched on wednesday. today it crossed 1 million users!" Jailbreaking ChatGPT on Release Day - by Zvi Mowshowitz. I asked GPTchat to write about folding T-shirts in the style of Malcolm Gladwell. AI Homework. Mastering the game of Go without human knowledge. Elon Musk's Neuralink Is Under Federal Investigation for Animal Abuse. Google shuts down Duplex on the Web, its attempt to bring AI smarts to retail sites and more. Google Faces Pressure in Hong Kong Over Search Results for National Anthem. Open Source Hospital Price Transparency. The Supreme Court battle for Section 230 has begun. Online safety bill returns to parliament after five-month delay. "Org charts" comic by Manu Cornet. A Twitter software engineer who created cartoons poking fun at his own company says he was fired because he's a 'troublemaker'. "I thought I'd been hacked. It turned out I'd been fired": tales of a Twitter engineer. Apple is adding end-to-end encryption to iCloud backups. TikTok Is Sued by State of Indiana, Accused of Targeting Young Teens With Adult Content. TikTok Shares the Top Clips, Creators and Trends in the App for 2022. Corn Kid Is Doing Just Fine. Microsoft Eyes 'Super App' to Break Apple and Google's Hold on Mobile Search. NASA Awards $57M Contract to Build Roads on the Moon. Pantone's 2023 color of the year is 'Viva Magenta'. Leo plays with ChatGPT. Google Search brings continuous scrolling to desktop. Google will show you suggested keywords right under the search bar. Chrome '@' shortcuts search tabs, bookmarks, and history right from the address bar. ChromeOS 108: Files app Trash can, touchscreen keyboard redesign, more. Google Photos will get worse at estimating your photo locations. Google Messages starts rolling out group end-to-end encryption. December Pixel Feature Drop has Clear Calling, a free VPN, and new Recorder tools for Google's latest phones. Picks: Stacey - Best binoculars 2022: Top picks for stargazing, wildlife and more. Stacey - Ember Mug² Stacey - Letter Napkin (Set of 4). Stacey - Nori Press, Compact Iron & Steamer for Clothes. Jeff - We all use phones on the toilet. Just don't sit more than 10 minutes. Jeff - 52 things I learned in 2022. Ant - First Look at KRK GoAux Studio Monitors. Ant - The Woman King. Hosts: Leo Laporte, Jeff Jarvis, Stacey Higginbotham, and Ant Pruitt Download or subscribe to this show at https://twit.tv/shows/this-week-in-google. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: onlogic.com/TWIT hover.com/twit eightsleep.com/twit

Radio Leo (Audio)
This Week in Google 693: The Best Part is the Grease

Radio Leo (Audio)

Play Episode Listen Later Dec 8, 2022 178:04


The founder of Craigslist on giving away his money, whether billionaires should exist, and why so many of our plutocrats lose their mind. S.F. officials investigating allegedly illegal bedrooms at Twitter HQ, as Elon Musk criticizes Mayor Breed. Sam Altman: "ChatGPT launched on wednesday. today it crossed 1 million users!" Jailbreaking ChatGPT on Release Day - by Zvi Mowshowitz. I asked GPTchat to write about folding T-shirts in the style of Malcolm Gladwell. AI Homework. Mastering the game of Go without human knowledge. Elon Musk's Neuralink Is Under Federal Investigation for Animal Abuse. Google shuts down Duplex on the Web, its attempt to bring AI smarts to retail sites and more. Google Faces Pressure in Hong Kong Over Search Results for National Anthem. Open Source Hospital Price Transparency. The Supreme Court battle for Section 230 has begun. Online safety bill returns to parliament after five-month delay. "Org charts" comic by Manu Cornet. A Twitter software engineer who created cartoons poking fun at his own company says he was fired because he's a 'troublemaker'. "I thought I'd been hacked. It turned out I'd been fired": tales of a Twitter engineer. Apple is adding end-to-end encryption to iCloud backups. TikTok Is Sued by State of Indiana, Accused of Targeting Young Teens With Adult Content. TikTok Shares the Top Clips, Creators and Trends in the App for 2022. Corn Kid Is Doing Just Fine. Microsoft Eyes 'Super App' to Break Apple and Google's Hold on Mobile Search. NASA Awards $57M Contract to Build Roads on the Moon. Pantone's 2023 color of the year is 'Viva Magenta'. Leo plays with ChatGPT. Google Search brings continuous scrolling to desktop. Google will show you suggested keywords right under the search bar. Chrome '@' shortcuts search tabs, bookmarks, and history right from the address bar. ChromeOS 108: Files app Trash can, touchscreen keyboard redesign, more. Google Photos will get worse at estimating your photo locations. Google Messages starts rolling out group end-to-end encryption. December Pixel Feature Drop has Clear Calling, a free VPN, and new Recorder tools for Google's latest phones. Picks: Stacey - Best binoculars 2022: Top picks for stargazing, wildlife and more. Stacey - Ember Mug² Stacey - Letter Napkin (Set of 4). Stacey - Nori Press, Compact Iron & Steamer for Clothes. Jeff - We all use phones on the toilet. Just don't sit more than 10 minutes. Jeff - 52 things I learned in 2022. Ant - First Look at KRK GoAux Studio Monitors. Ant - The Woman King. Hosts: Leo Laporte, Jeff Jarvis, Stacey Higginbotham, and Ant Pruitt Download or subscribe to this show at https://twit.tv/shows/this-week-in-google. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: onlogic.com/TWIT hover.com/twit eightsleep.com/twit

This Week in Google (Video HI)
TWiG 693: The Best Part is the Grease - ChatGPT demo, Twitter HQ bedrooms, Neuralink animal abuse, TikTok best of the year, ChromeOS 108

This Week in Google (Video HI)

Play Episode Listen Later Dec 8, 2022 178:54


The founder of Craigslist on giving away his money, whether billionaires should exist, and why so many of our plutocrats lose their mind. S.F. officials investigating allegedly illegal bedrooms at Twitter HQ, as Elon Musk criticizes Mayor Breed. Sam Altman: "ChatGPT launched on wednesday. today it crossed 1 million users!" Jailbreaking ChatGPT on Release Day - by Zvi Mowshowitz. I asked GPTchat to write about folding T-shirts in the style of Malcolm Gladwell. AI Homework. Mastering the game of Go without human knowledge. Elon Musk's Neuralink Is Under Federal Investigation for Animal Abuse. Google shuts down Duplex on the Web, its attempt to bring AI smarts to retail sites and more. Google Faces Pressure in Hong Kong Over Search Results for National Anthem. Open Source Hospital Price Transparency. The Supreme Court battle for Section 230 has begun. Online safety bill returns to parliament after five-month delay. "Org charts" comic by Manu Cornet. A Twitter software engineer who created cartoons poking fun at his own company says he was fired because he's a 'troublemaker'. "I thought I'd been hacked. It turned out I'd been fired": tales of a Twitter engineer. Apple is adding end-to-end encryption to iCloud backups. TikTok Is Sued by State of Indiana, Accused of Targeting Young Teens With Adult Content. TikTok Shares the Top Clips, Creators and Trends in the App for 2022. Corn Kid Is Doing Just Fine. Microsoft Eyes 'Super App' to Break Apple and Google's Hold on Mobile Search. NASA Awards $57M Contract to Build Roads on the Moon. Pantone's 2023 color of the year is 'Viva Magenta'. Leo plays with ChatGPT. Google Search brings continuous scrolling to desktop. Google will show you suggested keywords right under the search bar. Chrome '@' shortcuts search tabs, bookmarks, and history right from the address bar. ChromeOS 108: Files app Trash can, touchscreen keyboard redesign, more. Google Photos will get worse at estimating your photo locations. Google Messages starts rolling out group end-to-end encryption. December Pixel Feature Drop has Clear Calling, a free VPN, and new Recorder tools for Google's latest phones. Picks: Stacey - Best binoculars 2022: Top picks for stargazing, wildlife and more. Stacey - Ember Mug² Stacey - Letter Napkin (Set of 4). Stacey - Nori Press, Compact Iron & Steamer for Clothes. Jeff - We all use phones on the toilet. Just don't sit more than 10 minutes. Jeff - 52 things I learned in 2022. Ant - First Look at KRK GoAux Studio Monitors. Ant - The Woman King. Hosts: Leo Laporte, Jeff Jarvis, Stacey Higginbotham, and Ant Pruitt Download or subscribe to this show at https://twit.tv/shows/this-week-in-google. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: onlogic.com/TWIT hover.com/twit eightsleep.com/twit

All TWiT.tv Shows (Video LO)
This Week in Google 693: The Best Part is the Grease

All TWiT.tv Shows (Video LO)

Play Episode Listen Later Dec 8, 2022 178:54


The founder of Craigslist on giving away his money, whether billionaires should exist, and why so many of our plutocrats lose their mind. S.F. officials investigating allegedly illegal bedrooms at Twitter HQ, as Elon Musk criticizes Mayor Breed. Sam Altman: "ChatGPT launched on wednesday. today it crossed 1 million users!" Jailbreaking ChatGPT on Release Day - by Zvi Mowshowitz. I asked GPTchat to write about folding T-shirts in the style of Malcolm Gladwell. AI Homework. Mastering the game of Go without human knowledge. Elon Musk's Neuralink Is Under Federal Investigation for Animal Abuse. Google shuts down Duplex on the Web, its attempt to bring AI smarts to retail sites and more. Google Faces Pressure in Hong Kong Over Search Results for National Anthem. Open Source Hospital Price Transparency. The Supreme Court battle for Section 230 has begun. Online safety bill returns to parliament after five-month delay. "Org charts" comic by Manu Cornet. A Twitter software engineer who created cartoons poking fun at his own company says he was fired because he's a 'troublemaker'. "I thought I'd been hacked. It turned out I'd been fired": tales of a Twitter engineer. Apple is adding end-to-end encryption to iCloud backups. TikTok Is Sued by State of Indiana, Accused of Targeting Young Teens With Adult Content. TikTok Shares the Top Clips, Creators and Trends in the App for 2022. Corn Kid Is Doing Just Fine. Microsoft Eyes 'Super App' to Break Apple and Google's Hold on Mobile Search. NASA Awards $57M Contract to Build Roads on the Moon. Pantone's 2023 color of the year is 'Viva Magenta'. Leo plays with ChatGPT. Google Search brings continuous scrolling to desktop. Google will show you suggested keywords right under the search bar. Chrome '@' shortcuts search tabs, bookmarks, and history right from the address bar. ChromeOS 108: Files app Trash can, touchscreen keyboard redesign, more. Google Photos will get worse at estimating your photo locations. Google Messages starts rolling out group end-to-end encryption. December Pixel Feature Drop has Clear Calling, a free VPN, and new Recorder tools for Google's latest phones. Picks: Stacey - Best binoculars 2022: Top picks for stargazing, wildlife and more. Stacey - Ember Mug² Stacey - Letter Napkin (Set of 4). Stacey - Nori Press, Compact Iron & Steamer for Clothes. Jeff - We all use phones on the toilet. Just don't sit more than 10 minutes. Jeff - 52 things I learned in 2022. Ant - First Look at KRK GoAux Studio Monitors. Ant - The Woman King. Hosts: Leo Laporte, Jeff Jarvis, Stacey Higginbotham, and Ant Pruitt Download or subscribe to this show at https://twit.tv/shows/this-week-in-google. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: onlogic.com/TWIT hover.com/twit eightsleep.com/twit

Total Ant (Audio)
This Week in Google 693: The Best Part is the Grease

Total Ant (Audio)

Play Episode Listen Later Dec 8, 2022 178:04


The founder of Craigslist on giving away his money, whether billionaires should exist, and why so many of our plutocrats lose their mind. S.F. officials investigating allegedly illegal bedrooms at Twitter HQ, as Elon Musk criticizes Mayor Breed. Sam Altman: "ChatGPT launched on wednesday. today it crossed 1 million users!" Jailbreaking ChatGPT on Release Day - by Zvi Mowshowitz. I asked GPTchat to write about folding T-shirts in the style of Malcolm Gladwell. AI Homework. Mastering the game of Go without human knowledge. Elon Musk's Neuralink Is Under Federal Investigation for Animal Abuse. Google shuts down Duplex on the Web, its attempt to bring AI smarts to retail sites and more. Google Faces Pressure in Hong Kong Over Search Results for National Anthem. Open Source Hospital Price Transparency. The Supreme Court battle for Section 230 has begun. Online safety bill returns to parliament after five-month delay. "Org charts" comic by Manu Cornet. A Twitter software engineer who created cartoons poking fun at his own company says he was fired because he's a 'troublemaker'. "I thought I'd been hacked. It turned out I'd been fired": tales of a Twitter engineer. Apple is adding end-to-end encryption to iCloud backups. TikTok Is Sued by State of Indiana, Accused of Targeting Young Teens With Adult Content. TikTok Shares the Top Clips, Creators and Trends in the App for 2022. Corn Kid Is Doing Just Fine. Microsoft Eyes 'Super App' to Break Apple and Google's Hold on Mobile Search. NASA Awards $57M Contract to Build Roads on the Moon. Pantone's 2023 color of the year is 'Viva Magenta'. Leo plays with ChatGPT. Google Search brings continuous scrolling to desktop. Google will show you suggested keywords right under the search bar. Chrome '@' shortcuts search tabs, bookmarks, and history right from the address bar. ChromeOS 108: Files app Trash can, touchscreen keyboard redesign, more. Google Photos will get worse at estimating your photo locations. Google Messages starts rolling out group end-to-end encryption. December Pixel Feature Drop has Clear Calling, a free VPN, and new Recorder tools for Google's latest phones. Picks: Stacey - Best binoculars 2022: Top picks for stargazing, wildlife and more. Stacey - Ember Mug² Stacey - Letter Napkin (Set of 4). Stacey - Nori Press, Compact Iron & Steamer for Clothes. Jeff - We all use phones on the toilet. Just don't sit more than 10 minutes. Jeff - 52 things I learned in 2022. Ant - First Look at KRK GoAux Studio Monitors. Ant - The Woman King. Hosts: Leo Laporte, Jeff Jarvis, Stacey Higginbotham, and Ant Pruitt Download or subscribe to this show at https://twit.tv/shows/this-week-in-google. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: onlogic.com/TWIT hover.com/twit eightsleep.com/twit

Total Ant (Video)
This Week in Google 693: The Best Part is the Grease

Total Ant (Video)

Play Episode Listen Later Dec 8, 2022 178:54


The founder of Craigslist on giving away his money, whether billionaires should exist, and why so many of our plutocrats lose their mind. S.F. officials investigating allegedly illegal bedrooms at Twitter HQ, as Elon Musk criticizes Mayor Breed. Sam Altman: "ChatGPT launched on wednesday. today it crossed 1 million users!" Jailbreaking ChatGPT on Release Day - by Zvi Mowshowitz. I asked GPTchat to write about folding T-shirts in the style of Malcolm Gladwell. AI Homework. Mastering the game of Go without human knowledge. Elon Musk's Neuralink Is Under Federal Investigation for Animal Abuse. Google shuts down Duplex on the Web, its attempt to bring AI smarts to retail sites and more. Google Faces Pressure in Hong Kong Over Search Results for National Anthem. Open Source Hospital Price Transparency. The Supreme Court battle for Section 230 has begun. Online safety bill returns to parliament after five-month delay. "Org charts" comic by Manu Cornet. A Twitter software engineer who created cartoons poking fun at his own company says he was fired because he's a 'troublemaker'. "I thought I'd been hacked. It turned out I'd been fired": tales of a Twitter engineer. Apple is adding end-to-end encryption to iCloud backups. TikTok Is Sued by State of Indiana, Accused of Targeting Young Teens With Adult Content. TikTok Shares the Top Clips, Creators and Trends in the App for 2022. Corn Kid Is Doing Just Fine. Microsoft Eyes 'Super App' to Break Apple and Google's Hold on Mobile Search. NASA Awards $57M Contract to Build Roads on the Moon. Pantone's 2023 color of the year is 'Viva Magenta'. Leo plays with ChatGPT. Google Search brings continuous scrolling to desktop. Google will show you suggested keywords right under the search bar. Chrome '@' shortcuts search tabs, bookmarks, and history right from the address bar. ChromeOS 108: Files app Trash can, touchscreen keyboard redesign, more. Google Photos will get worse at estimating your photo locations. Google Messages starts rolling out group end-to-end encryption. December Pixel Feature Drop has Clear Calling, a free VPN, and new Recorder tools for Google's latest phones. Picks: Stacey - Best binoculars 2022: Top picks for stargazing, wildlife and more. Stacey - Ember Mug² Stacey - Letter Napkin (Set of 4). Stacey - Nori Press, Compact Iron & Steamer for Clothes. Jeff - We all use phones on the toilet. Just don't sit more than 10 minutes. Jeff - 52 things I learned in 2022. Ant - First Look at KRK GoAux Studio Monitors. Ant - The Woman King. Hosts: Leo Laporte, Jeff Jarvis, Stacey Higginbotham, and Ant Pruitt Download or subscribe to this show at https://twit.tv/shows/this-week-in-google. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: onlogic.com/TWIT hover.com/twit eightsleep.com/twit

Radio Leo (Video HD)
This Week in Google 693: The Best Part is the Grease

Radio Leo (Video HD)

Play Episode Listen Later Dec 8, 2022 178:54


The founder of Craigslist on giving away his money, whether billionaires should exist, and why so many of our plutocrats lose their mind. S.F. officials investigating allegedly illegal bedrooms at Twitter HQ, as Elon Musk criticizes Mayor Breed. Sam Altman: "ChatGPT launched on wednesday. today it crossed 1 million users!" Jailbreaking ChatGPT on Release Day - by Zvi Mowshowitz. I asked GPTchat to write about folding T-shirts in the style of Malcolm Gladwell. AI Homework. Mastering the game of Go without human knowledge. Elon Musk's Neuralink Is Under Federal Investigation for Animal Abuse. Google shuts down Duplex on the Web, its attempt to bring AI smarts to retail sites and more. Google Faces Pressure in Hong Kong Over Search Results for National Anthem. Open Source Hospital Price Transparency. The Supreme Court battle for Section 230 has begun. Online safety bill returns to parliament after five-month delay. "Org charts" comic by Manu Cornet. A Twitter software engineer who created cartoons poking fun at his own company says he was fired because he's a 'troublemaker'. "I thought I'd been hacked. It turned out I'd been fired": tales of a Twitter engineer. Apple is adding end-to-end encryption to iCloud backups. TikTok Is Sued by State of Indiana, Accused of Targeting Young Teens With Adult Content. TikTok Shares the Top Clips, Creators and Trends in the App for 2022. Corn Kid Is Doing Just Fine. Microsoft Eyes 'Super App' to Break Apple and Google's Hold on Mobile Search. NASA Awards $57M Contract to Build Roads on the Moon. Pantone's 2023 color of the year is 'Viva Magenta'. Leo plays with ChatGPT. Google Search brings continuous scrolling to desktop. Google will show you suggested keywords right under the search bar. Chrome '@' shortcuts search tabs, bookmarks, and history right from the address bar. ChromeOS 108: Files app Trash can, touchscreen keyboard redesign, more. Google Photos will get worse at estimating your photo locations. Google Messages starts rolling out group end-to-end encryption. December Pixel Feature Drop has Clear Calling, a free VPN, and new Recorder tools for Google's latest phones. Picks: Stacey - Best binoculars 2022: Top picks for stargazing, wildlife and more. Stacey - Ember Mug² Stacey - Letter Napkin (Set of 4). Stacey - Nori Press, Compact Iron & Steamer for Clothes. Jeff - We all use phones on the toilet. Just don't sit more than 10 minutes. Jeff - 52 things I learned in 2022. Ant - First Look at KRK GoAux Studio Monitors. Ant - The Woman King. Hosts: Leo Laporte, Jeff Jarvis, Stacey Higginbotham, and Ant Pruitt Download or subscribe to this show at https://twit.tv/shows/this-week-in-google. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: onlogic.com/TWIT hover.com/twit eightsleep.com/twit

The Nonlinear Library
LW - Rationalist Town Hall: FTX Fallout Edition (RSVP Required) by Ben Pace

The Nonlinear Library

Play Episode Listen Later Nov 23, 2022 3:18


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rationalist Town Hall: FTX Fallout Edition (RSVP Required), published by Ben Pace on November 23, 2022 on LessWrong. Stated at the top for emphasis: you have to fill out the RSVP form in order for me to email you the links. On Sunday 27th November at 12 pm PT, I am hosting an online Town Hall on Zoom, for rationalists and rationalist-adjacent folks (e.g. EAs) to think through the FTX catastrophe and propagate thoughts, feelings and updates. Lots of people I know have been shocked by and are still reeling from the news these last 2 weeks. I'm very keen to hear what updates people are making about EA and crypto, and understand others' perspectives. Some people coming include Zvi Mowshowitz, Oliver Habryka, Anna Salamon, and more. To get the Zoom and Gather Town links, fill out the RSVP form. I will send the links to everyone who fills out the form. The form involves agreeing that the event is off the record to the corporate news media, and all attendees will fill out the form. What Will The Format Be? Spontaneous lightning talks. During the event, anyone who wishes to can give a 3-minute talk on a topic of their choosing, followed by 2 minutes of Q&A — it can be on something you've already thought about, or it can be a response to or disagreement with someone else's lightning talk. This is a format I've used before pretty successfully in both big (70 ppl) and small (7 ppl) groups, where we've gotten on a roll of people sharing points and also replying to each others' talks, so I have hope that it will succeed online. (This format has also had other names like "Lightning Jazz" and "Propagating Beliefs".) We will do lightning talks for up to 1.5 hours (depending on how much steam people have in them), hopefully giving lots of people the chance to speak, and after that the main event will be over, and we'll move to Gather Town to have group discussions (and those who are satisfied will go home). If you wish to, you can submit for a lightning talk ahead of time with this Lightning Talk form. Who is invited? As well as Rationalists/LessWrongers, I welcome any people to this event who are or have formerly been part of the EA community, people who have formerly worked for or been very close with FTX or Alameda Research, and people who have worked for or been funded in any way by the FTX Future Fund. I hereby ask others to respect the Rationalist and EA communities' ability to talk amongst themselves (so to speak) by not joining if you are not well-described by the above. For example, if you had not read LessWrong before the FTX sale to Binance, this event is not aimed at you and I ask you not to come. Details When? Sunday November 27th, 12:00 PM (PT) to 2:00 PM (PT). Where? The Town Hall talks will happen in Zoom, then discussion will continue in a private Gather Town. RSVP link? Fill out this RSVP form to get links. Everyone who fills out the form will get sent a link, I'll send them out 24 hours before the event and then again ~60 mins before the event. You can also hit 'going' on the public Facebook event for the joys of social signaling, but also fill out the form so I can email you the links. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library: LessWrong
LW - Rationalist Town Hall: FTX Fallout Edition (RSVP Required) by Ben Pace

The Nonlinear Library: LessWrong

Play Episode Listen Later Nov 23, 2022 3:18


Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rationalist Town Hall: FTX Fallout Edition (RSVP Required), published by Ben Pace on November 23, 2022 on LessWrong. Stated at the top for emphasis: you have to fill out the RSVP form in order for me to email you the links. On Sunday 27th November at 12 pm PT, I am hosting an online Town Hall on Zoom, for rationalists and rationalist-adjacent folks (e.g. EAs) to think through the FTX catastrophe and propagate thoughts, feelings and updates. Lots of people I know have been shocked by and are still reeling from the news these last 2 weeks. I'm very keen to hear what updates people are making about EA and crypto, and understand others' perspectives. Some people coming include Zvi Mowshowitz, Oliver Habryka, Anna Salamon, and more. To get the Zoom and Gather Town links, fill out the RSVP form. I will send the links to everyone who fills out the form. The form involves agreeing that the event is off the record to the corporate news media, and all attendees will fill out the form. What Will The Format Be? Spontaneous lightning talks. During the event, anyone who wishes to can give a 3-minute talk on a topic of their choosing, followed by 2 minutes of Q&A — it can be on something you've already thought about, or it can be a response to or disagreement with someone else's lightning talk. This is a format I've used before pretty successfully in both big (70 ppl) and small (7 ppl) groups, where we've gotten on a roll of people sharing points and also replying to each others' talks, so I have hope that it will succeed online. (This format has also had other names like "Lightning Jazz" and "Propagating Beliefs".) We will do lightning talks for up to 1.5 hours (depending on how much steam people have in them), hopefully giving lots of people the chance to speak, and after that the main event will be over, and we'll move to Gather Town to have group discussions (and those who are satisfied will go home). If you wish to, you can submit for a lightning talk ahead of time with this Lightning Talk form. Who is invited? As well as Rationalists/LessWrongers, I welcome any people to this event who are or have formerly been part of the EA community, people who have formerly worked for or been very close with FTX or Alameda Research, and people who have worked for or been funded in any way by the FTX Future Fund. I hereby ask others to respect the Rationalist and EA communities' ability to talk amongst themselves (so to speak) by not joining if you are not well-described by the above. For example, if you had not read LessWrong before the FTX sale to Binance, this event is not aimed at you and I ask you not to come. Details When? Sunday November 27th, 12:00 PM (PT) to 2:00 PM (PT). Where? The Town Hall talks will happen in Zoom, then discussion will continue in a private Gather Town. RSVP link? Fill out this RSVP form to get links. Everyone who fills out the form will get sent a link, I'll send them out 24 hours before the event and then again ~60 mins before the event. You can also hit 'going' on the public Facebook event for the joys of social signaling, but also fill out the form so I can email you the links. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

From the New World
Tyler Cowen: The Dark Side of Talent, Sorting and Institutions

From the New World

Play Episode Listen Later Sep 5, 2022 106:14


Tyler Cowen is a Professor of Economics at George Mason University and writer of the legendary blog Marginal Revolution alongside Alex Tabarrok. We discuss talent, Ontario, immigrants, institutional trust, power attractors, the Intellectual Dark Web, public health, the internet, generation Z, the significance of social change versus technology, upsides of wokeness, populism, imposter syndrome, self-deception, and corporate hiring. Marginal Revolution: https://marginalrevolution.com/ Talent by Tyler Cowen and Daniel Gross: https://www.amazon.ca/Talent-Identify-Energizers-Creatives-Winners/dp/1250275814 From the New World Episode with Zvi Mowshowitz: https://cactus.substack.com/p/zvi-mowshowitz-how-the-worst-people From the New World Episode with Robin Hanson: This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit cactus.substack.com

From the New World
Zvi Mowshowitz - How the Worst People in Society Bungled a Pandemic

From the New World

Play Episode Listen Later Aug 29, 2022 213:04


Zvi is a COVID forecaster, writer of thezvi blog, and a game designer on the card game Emergents. We discuss Magic: The Gathering, chess and computability, learning curves, COVID projections, the CDC banning testing, immoral mazes, selection effects, psychological malleability, Robin Hanson and medicine, institutional incentives, egalitarianism, civilizational collapse, populism, libertarianism, and pure math. Note: the timestamps are somewhat inaccurate due to editing and intro. 0:00 MTG 19:00 chess, computability, and learning 36:30 COVID projections 49:00 CDC banning tests + immoral mazes 57:15 narrative hedging 1:14:05 selection vs. malleability 1:21:20 Robin Hanson and medicine 1:55:00 institution building 2:00:05 egalitarianism and social competition 2:13:30 were we in a golden age? 2:15:30 decivilization 2:18:50 economies of scale 3:03:00 chaos and order 3:14:05 pure math Zvi's Blog: http://thezvi.wordpress.com/ Episode with Samo Burja: Episode with Robin Hanson: CDC banning COVID tests: https://www.science.org/content/article/united-states-badly-bungled-coronavirus-testing-things-may-soon-improve Moral Mazes book: https://www.amazon.ca/Moral-Mazes-Corporate-Managers-Updated/dp/0199729883 This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit cactus.substack.com

The Nonlinear Library
LW - Changing the world through slack & hobbies by Steven Byrnes

The Nonlinear Library

Play Episode Listen Later Jul 21, 2022 16:09


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Changing the world through slack & hobbies, published by Steven Byrnes on July 21, 2022 on LessWrong. (Also posted on EA Forum)

Introduction

In EA orthodoxy, if you're really serious about EA, the three alternatives that people most often seem to talk about are (1) “direct work” in a job that furthers a very important cause; (2) “earning to give”; (3) earning “career capital” that will help you do those things in the future, e.g. by getting a PhD or teaching yourself ML. By contrast, there's not much talk of: (4) being in a job / situation where you have extra time and energy and freedom to explore things that seem interesting and important. But that last one is really important!

Examples

For example, here are a bunch of things off the top of my head that look like neither “direct work” nor “earning-to-give” nor “earning career capital”:

David Denkenberger was a professor of mechanical engineering. As I understand it (see here), he got curious about food supplies during nuclear winter, and started looking into it in his free time. One thing led to another, and he now leads ALLFED, which is doing very important and irreplaceable work. (Denkenberger seems to have had no prior formal experience in this area.)

I'm hazy on the details, but I believe that Eliezer Yudkowsky and Nick Bostrom developed much of their thinking about AGI & superintelligence via discussions on online mailing lists. I doubt they were being paid to do that! Meanwhile, Stuart Russell got really into AGI safety / alignment during a sabbatical.

The precursor to GiveWell was a “charity club” started by Holden Karnofsky and Elie Hassenfeld, where they and other employees at their hedge fund “pooled in money and investigated the best charities to donate the money to” (source), presumably in their free time.

I mean seriously, pretty much anytime anybody anywhere has ever started something really new, they were doing it in their free time before they were paid for it.

Three ingredients to a transformative hobby

Ingredient 1: Extra time / energy / slack

Honestly, I wasn't really sure whether to put it on the list at all. Scott Alexander famously did some of his best writing during a medical residency—not exactly a stage of life where one has a lot of extra free time. (See his discussion here.) Another excellent blogger / thinker, Zvi Mowshowitz, has been squeezing his blogging / thinking into his life as a pre-launch startup founder and parent. Or maybe those examples just illustrate that, within the “time / energy / slack” entry, “time” is a less important component than one might think. As they say, “if you want something done, ask a busy person to do it”. (Well, within limits—obviously, as free time approaches literally zero, hobbies approach zero as well.)

Note a surprising corollary to this ingredient: “direct work” (in the EA sense) and transformative hobbies can potentially work at cross-purposes! For example, at my last job, I was sometimes working on lidar for self-driving cars, and sometimes working on military navigation algorithms, and meanwhile I was working on AGI safety as a hobby (more on which below). Now, I really want there to be self-driving cars ASAP. I think they're going to save lots of lives. They'll certainly save me a lot of anguish as a parent!

And we had a really great technical approach to automobile lidar—better than anything else out there, I still think. And (at certain times) I felt that the project would live or die depending on how hard I worked to come up with brilliant solutions to our various technical challenges. So during the periods when I was working on the lidar project, and I had extra time at night, or was thinking in the shower, I was thinking about lidar. And thus my AGI safety hobby progressed slower. By contrast—well, I have complicated opinions about militar...

The Vance Crowe Podcast
#271 | Zvi Mowshowitz; Gambling, Game Theory, Inflation and COVID

The Vance Crowe Podcast

Play Episode Listen Later Jun 20, 2022 79:17


Zvi Mowshowitz is an analyst and blogger, in addition to being a champion of the popular game Magic: The Gathering. Vance and Zvi discuss the nuances of gambling and how it differs for a professional versus an amateur, plus inflation, COVID, the war between Russia and Ukraine, and more.

Read Zvi's Blog: https://thezvi.wordpress.com/
Follow Zvi on Twitter: https://twitter.com/TheZvi

WAYS TO SUPPORT THE PODCAST

Join the Articulate Ventures Network | https://network.articulate.ventures/ — We are a patchwork of thinkers that want to articulate ideas in a forum where they can be respectfully challenged, improved and celebrated, so that we can explore complex subjects, learn from those we disagree with and achieve our personal & professional goals.

Book a Legacy Interview | https://legacyinterviews.com/ — A Legacy Interview is a two-hour recorded interview with you and a host that can be watched now and viewed in the future. It is a recording of what you experienced, the lessons you learned and the family values you want passed down. We will interview you or a loved one, capturing the sound of their voice, wisdom and a sense of who they are. These recorded conversations will be private, reserved only for the people that you want to share them with.

Contact Vance for a Talk | https://www.vancecrowe.com/ — Vance delivers speeches that reveal important aspects of human communication. Audiences are entertained, engaged, and leave feeling empowered to change something about the way they are communicating. Vance tells stories about his own experiences, discusses theories in ways that make them relatable, and highlights interesting people, books, and media that the audience can learn even more from.

Join the #ATCF Book Club | https://www.vancecrowe.com/book-club

The Nonlinear Library
EA - The biggest risk of free-spending EA is not optics or epistemics, but grift by Ben Kuhn

The Nonlinear Library

Play Episode Listen Later May 14, 2022 6:08


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The biggest risk of free-spending EA is not optics or epistemics, but grift, published by Ben Kuhn on May 14, 2022 on The Effective Altruism Forum.

In "EA and the current funding situation", Will MacAskill tried to enumerate the "risks of commission" that large amounts of EA funding exposed the community to (i.e., ways that extra funding could actually harm EA's impact). "Free-spending EA might be a big problem for optics and epistemics" raised similar concerns. The risks described in these posts largely involve either money looking bad to outsiders, or money causing well-intentioned people to think poorly despite their best effort. I think this misses what I'd guess is the biggest risk: the risk that large amounts of funding will attract people who aren't making an effort at all because they don't share EA values, but instead see it as a source of easy money or a target of grift.

Naively, you might think that it's not that much of a problem if (say) 50% of EA funding is eaten by grift—that's only a factor of 2 decrease in effectiveness, which isn't that big in a world of power-law distributions. But in reality, grifters are incentivized to accumulate power and sabotage the movement's overall ability to process information, and many non-grifters find participating in high-grift environments unpleasant and leave. So the stable equilibrium (absent countermeasures) is closer to 100% grift.

The basic mental model

This is something I've thought about, and talked to people about, a fair amount because an analogous grift problem exists in successful organizations, and I would like to help the one I work at avoid this fate. In addition to those conversations, a lot of what I go over here is based on the book Moral Mazes, and I'd recommend reading it (or Zvi Mowshowitz's review/elaboration, which IMO is hyperbolic but directionally correct) for elaboration.

At some point in their growth, most large organizations become extremely ineffective at achieving their goals. If you look for the root cause of individual instances of inefficiency and sclerosis in these orgs, it's very frequently that some manager, or group of managers, was "misaligned" from the overall organization, in that they were trying to do what was best for themselves rather than for the org as a whole, and in fact often actively sabotaging the org to improve their own prospects. The stable equilibrium for these orgs is to be composed almost entirely of misaligned managers, because:

Well-aligned managers prioritize the org's values over their own ascent up the hierarchy (by definition), so will be out-advanced by misaligned managers who prioritize their own ascent above all.

Misaligned managers will attempt to sabotage and oust well-aligned managers because their values are harder to predict, so they're more likely to do surprising or dangerous things.

Most managers get most of their information from their direct reports, who can sabotage info flows if it would make them look bad. So even if a well-aligned manager has the power to oust a misaligned (e.g.) direct report, they may not realize there's a problem.

For example, a friend described a group inside a name-brand company he worked at that was considered by almost every individual contributor to be extremely incompetent and impossible to collaborate with, largely as a result of poor leadership by the manager. The problem was so bad that when the manager was up for promotion, a number of senior people from outside the group signed a memo to the decision-maker saying that approving the promotion would be a disaster for the company. The manager's promotion was denied that cycle, but approved in the next promotion cycle. In this case, even despite the warning sign of strong opposition from people elsewhere in the company, the promotion decision-maker was fed enough b...

The Nonlinear Library: LessWrong Top Posts
New York Times, Please Do Not Threaten The Safety of Scott Alexander By Revealing His True Name by Zvi

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 11, 2021 3:20


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New York Times, Please Do Not Threaten The Safety of Scott Alexander By Revealing His True Name, published by Zvi on the LessWrong.

In reaction to (Now the entirety of SlateStarCodex): NYT Is Threatening My Safety By Revealing My Real Name, So I Am Deleting The Blog

I have sent the following to New York Times technology editor Pui-Wing Tam, whose email is pui-wing.tam@nytimes.com:

My name is Zvi Mowshowitz. I am a friend of Scott Alexander. I grew up with The New York Times as my central source of news and greatly value that tradition. Your paper has declared that you intend to publish, in The New York Times, the true name of Scott Alexander. Please reconsider this deeply harmful and unnecessary action.

If Scott's name were well-known, it would likely make it more difficult or even impossible for him to make a living as a psychiatrist, which he has devoted many years of his life to being able to do. He has received death threats, and would likely not feel safe enough to continue living with other people he cares about. This may well ruin his life. At a minimum, and most importantly for the world, it has already taken down his blog. In addition to this massive direct loss, those who know what happened will know that this happened as a direct result of the irresponsible actions of The New York Times. The bulk of the best bloggers and content creators on the internet read Scott's blog, and this will create large-scale permanent hostility to reporters in general and the Times in particular across the board.

I do not understand what purpose this revelation is intended to serve. What benefit does the public get from this information? This is not news that is fit to print.

If, as your reporter who has this intention claims, you believe that Scott provides a valuable resource that enhances the quality of our discourse, scientific understanding and lives, please reverse this decision before it is too late. If you don't believe this, I still urge you to reconsider your decision in light of its other likely consequences. We should hope it is not too late to fix this.

I will be publishing this email as an open letter.

Regards,
Zvi Mowshowitz

PS for internet: If you wish to help, here is Scott's word on how to help: There is no comments section for this post. The appropriate comments section is the feedback page of the New York Times. You may also want to email the New York Times technology editor Pui-Wing Tam at pui-wing.tam@nytimes.com, contact her on Twitter at @puiwingtam, or phone the New York Times at 844-NYTNEWS. (please be polite – I don't know if Ms. Tam was personally involved in this decision, and whoever is stuck answering feedback forms definitely wasn't. Remember that you are representing me and the SSC community, and I will be very sad if you are a jerk to anybody. Please just explain the situation and ask them to stop doxxing random bloggers for clicks. If you are some sort of important tech person who the New York Times technology section might want to maintain good relations with, mention that.) If you are a journalist who is willing to respect my desire for pseudonymity, I'm interested in talking to you about this situation (though I prefer communicating through text, not phone). My email is scott@slatestarcodex.com.

Thanks for listening.
To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library: LessWrong Top Posts
Assessing Kurzweil predictions about 2019: the results by Stuart_Armstrong

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 11, 2021 6:50


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Assessing Kurzweil predictions about 2019: the results, published by Stuart_Armstrong on the LessWrong. EDIT: Mean and standard deviation of individual predictions can be found here.

Thanks to all my brave assessors, I now have the data about Kurzweil's 1999 predictions about 2019. This was a follow-up to a previous assessment of his predictions about 2009, which showed a mixed bag: roughly evenly divided between right and wrong, which I found pretty good for ten-year predictions. So, did more time allow for trends to overcome noise, or more ways to go wrong? Pause for a moment to calibrate your expectations.

Methods and thanks

For the 2019 predictions, I divided them into 105 separate statements and did a call for volunteers, with instructions here; the main relevant point being that I wanted their assessment for 2019, not for the (possibly transient) current situation. I got 46 volunteers with valid email addresses, of which 34 returned their predictions. So many thanks, in reverse alphabetical order, to Zvi Mowshowitz, Zhengdong Wang, Yann Riviere, Uriel Fiori, orthonormal, Nuño Sempere, Nathan Armishaw, Koen Holtman, Keller Scholl, Jaime Sevilla, Gareth McCaughan, Eli Rose and Dillon Plunkett, Daniel Kokotajlo, Anna Gardiner... and others who have chosen to remain anonymous.

The results

Enough background; what did the assessors find? Well, of the 34 assessors, 24 went the whole hog and did all 105 predictions; on average, 91 predictions were assessed by each person, a total of 3078 individual assessments[1]. So, did more time allow for more perspective or more ways to go wrong? Well, Kurzweil's predictions for 2019 were considerably worse than those for 2009, with more than half strongly wrong.

Interesting details

The (anonymised) data can be found here[2], and I encourage people to download and assess it themselves. But some interesting results stood out to me:

Predictor agreement

Taking a single prediction, for instance the first one:

1: Computers are now largely invisible. They are embedded everywhere--in walls, tables, chairs, desks, clothing, jewelry, and bodies.

Then we can compute the standard deviation of the predictors' answers for that prediction. This gives an impression of how much disagreement there was between predictors; in this case, it was 0.84. Perfect agreement would be a standard deviation of 0; maximum disagreement (half find "1", half find "5") would be a standard deviation of 2. Perfect spread - equal numbers of 1s, 2s, 3s, 4s, and 5s - would have a standard deviation of √2 ≈ 1.4. Across the 105 predictions, the maximum standard deviation was 1.7, the minimum was 0 (perfect agreement), and the average was 0.97. So the predictors had a medium tendency to agree with each other.

Most agreement/falsest predictions

There was perfect agreement on five predictions, and on all of these the agreed rating was always "5": "False". These predictions were:

51: "Phone" calls routinely include high-resolution three-dimensional images projected through the direct-eye displays and auditory lenses.

55: [...] Thus a person can be fooled as to whether or not another person is physically present or is being projected through electronic communication.

59: The all-enveloping tactile environment is now widely available and fully convincing.

62: [...] These technologies are popular for medical examinations, as well as sensual and sexual interactions with other human partners or simulated partners.

63: [...] In fact, it is often the preferred mode of interaction, even when a human partner is nearby, due to its ability to enhance both experience and safety.

As you can see, Kurzweil suffered a lot from his VR predictions. This seems a perennial thing: Hollywood is always convinced that mass 3D is just around the corner; technologists are convinced that VR is...
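For listeners who want to reproduce the agreement statistic described in the Kurzweil write-up above, the snippet below is a minimal sketch (not the author's analysis script): it treats each prediction's assessor ratings, on the 1 ("true") to 5 ("false") scale, as a list and takes the population standard deviation. The sample rating lists are hypothetical, but the reference values match those quoted: 0 for perfect agreement, 2 for a half-1s/half-5s split, and √2 ≈ 1.41 for a perfectly even spread.

```python
import statistics
from math import sqrt

def agreement_sd(ratings):
    """Population standard deviation of assessor ratings (1 = true ... 5 = false).

    Lower values mean the assessors agreed more about a prediction.
    """
    return statistics.pstdev(ratings)

# Hypothetical rating lists, chosen to reproduce the reference values quoted above:
print(agreement_sd([5, 5, 5, 5, 5]))  # 0.0 -- perfect agreement (like the five "false" VR predictions)
print(agreement_sd([1, 1, 5, 5]))     # 2.0 -- maximal disagreement (half 1s, half 5s)
print(agreement_sd([1, 2, 3, 4, 5]))  # 1.414... -- perfectly even spread
print(round(sqrt(2), 3))              # 1.414 -- for comparison
```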

The Resleevables
Planeshift

The Resleevables

Play Episode Listen Later Apr 20, 2021 123:53


Star City Games' Cedric Phillips and Patrick Sullivan are back for their deep dive on Magic's 22nd expansion — Planeshift! The duo share their thoughts on Kicker, Domain, and the gating cycle as mechanics, gush about Flametongue Kavu's 20+ year impact on the game, and explain why Zvi Mowshowitz is one of Magic's best deckbuilders of all time.

Affix
Episode 11: Who reports on the reporters?

Affix

Play Episode Listen Later Mar 14, 2021 57:59


Don't forget to recommend us to your friends! Please contact us or support us on Patreon!

In this episode Brian and Chris get dangerously close to having original ideas about news and information distribution. Don't worry, we'll go back to just reading other people's opinions next week. We discuss interest rates and power, ponder Stoic philosophy, and Brian has so much Diablo 2 news that he can't contain it to just this podcast!

Big list of coffee bets
---------------------------------------------------
Zvi Mowshowitz judges his own Covid-19 predictions from early 2020
Maybe you can sign up for Augur and bet against Brian?
Who drives the bus of interest rates? - Scott Sumner
The Tricameral Legislatures - chambers of parliament that could exist but don't
The veil of ignorance (also known as the original position)
Stoic quotes on life: “When a dog is tied to a cart, if it wants to follow, it is pulled and follows, making its spontaneous act coincide with necessity. But if the dog does not follow, it will be compelled in any case. So it is with men too: even if they don't want to, they will be compelled to follow what is destined.”
Hot news doctrine
Knoll's law of media accuracy: “everything you read in the newspapers is absolutely true, except for the rare story of which you happen to have firsthand knowledge”.
Brian's commentary on Bender's world-record paladin HC normal speed run

All Tings Considered
Potpourri - Zvi Mowshowitz

All Tings Considered

Play Episode Listen Later Feb 25, 2021 169:27


Pro Tour Hall of Famer Zvi Mowshowitz joins the show this week to talk about The Solution, TurboZvi, Withering Wisps, and his new digital CCG, Emergents.

Affix
Episode 1: CovUK and Zvi Mowshowitz

Affix

Play Episode Listen Later Jan 4, 2021 45:43


The Trolley Problem: There is a runaway trolley barrelling down the railway tracks. On the tracks, there are five people tied up and unable to move. You are standing next to a lever. If you pull this lever, the trolley will switch to a different set of tracks with one person on the side track. You have two options:
Do nothing and allow the trolley to kill the five people on the main track.
Pull the lever, diverting the trolley onto the side track where it will kill one person.

QALY - Quality Adjusted Life Years: a generic measure of disease burden, including both the quality and the quantity of life lived. It is used in economic evaluation to assess the value of medical interventions.

Don't Worry About the Vase - Zvi Mowshowitz
Putanumonit - Seeing the smoke
The difference between UK infections and the rest of Europe. Get data so good you don't need to run stats on it.
Big list of coffee bets
Diablo 2 Schisms
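To make the QALY definition in the show notes above concrete, here is a minimal sketch; the quality weights and durations are hypothetical numbers chosen purely for illustration.

```python
def qalys(health_states):
    """Sum quality-adjusted life years over (quality_weight, years) pairs.

    quality_weight is on a 0-to-1 scale, where 1 is full health and 0 is death.
    """
    return sum(weight * years for weight, years in health_states)

# Hypothetical example: 10 years in full health, then 5 years at a 0.7 quality weight.
print(qalys([(1.0, 10), (0.7, 5)]))  # 13.5 QALYs
```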

Narratives
18: Simulacra Levels, Moral Mazes, and COVID-19 with Zvi Mowshowitz

Narratives

Play Episode Listen Later Nov 30, 2020 71:59


In this episode, we are joined by Zvi Mowshowitz. We discuss simulacra levels, moral mazes, and our civilizational response to COVID-19. Zvi writes the blog Don't Worry About the Vase. 

The Filter Podcast with Matt Asher
Ep 16: Zvi Mowshowitz on Immoral Mazes, Levels of Language, and the Magic of Magic

The Filter Podcast with Matt Asher

Play Episode Listen Later Aug 8, 2020 51:54


Zvi Mowshowitz comes on The Filter to discuss a series of blog posts he wrote about “Immoral Mazes”, or pathological institutions that incentivize perverse behaviors. We also talk about levels of meaning in language and the arc towards more complex and indirect forms of communication, the many uses and abuses of the college system, and the magic of Magic: The Gathering.

Related links:
Zvi Mowshowitz Official Blog
Zvi Mowshowitz - Immoral Mazes Articles
Zvi Mowshowitz – Unifying the Simulacra Definitions (Aug 2020)
Zvi Mowshowitz – Magic Articles I've Written (Mar 2011)
Magic: The Gathering Official Website
Jean Baudrillard - Simulacra and Simulation (1981)
Maslow's Hierarchy of Needs

The Mind Killer
Fixing the Police

The Mind Killer

Play Episode Listen Later Jun 9, 2020 94:59


Follow us!
RSS: http://feeds.feedburner.com/themindkiller
Apple: https://podcasts.apple.com/us/podcast/the-mind-killer/id1507508029
Google: https://play.google.com/music/listen#/ps/Iqs7r7t6cdxw465zdulvwikhekm
Pocket Casts: https://pca.st/vvcmifu6
Stitcher: https://www.stitcher.com/podcast/the-mind-killer

News discussed:
Justin Amash is introducing a bill to end qualified immunity
Samuel Sinyangwe's Twitter thread proposing police reforms
The NAACP's demands
The 8 Can't Wait Campaign
Congress has announced hearings on legislation to end or curtail the program giving excess military equipment to police
Tyler Cowen article on police unions
The Cato Institute explaining police courtesy cards
Slatestarcodex on race and justice
Radley Balko on police violence and race
Zvi Mowshowitz on quarantine restrictions going forward
Trevor Bedford Twitter thread on the effects of protest on quarantine and followup
Asymptomatic spreading is “very rare” says WHO
Phil Magness' Facebook post regarding Sweden

Happy News!
UK welcomes Hong Kongers fleeing tyranny

Got something to say? Come chat with us on the Bayesian Conspiracy Discord or email us at themindkillerpodcast@gmail.com. Say something smart and we'll mention you on the next show!

Intro/outro music: On Sale by Golden Duck Orchestra

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit mindkiller.substack.com/subscribe

Planeswalkers Anonymous
The Fundamental Turn

Planeswalkers Anonymous

Play Episode Listen Later May 7, 2020 65:09


Clear the Land was possibly the most important card in Mercadian Masques. You may not know it, but it has probably influenced every deck you've ever built. OK. OK. That probably is overstating it, but the card did inspire Zvi Mowshowitz to write an important article about Magic strategy called Clear the Land and the Fundamental Turn. That's what we bring you this week. It's another On the Shoulders of Giants episode!

If you enjoy hearing about old Magic articles or getting updates on the news from us, let us know at PlaneswalkersPod@EngineWithin.com

This week's article: https://articles.starcitygames.com/premium/clear-the-land-and-the-fundamental-turn/

Decklists:
https://www.channelfireball.com/all-strategy/home/oliver-tiu-wins-magicfest-online-weekly-championship/
https://my.cfbevents.com/event/136

News Sources:
https://wpn.wizards.com/en/article/store-play-suspension-extended-june-1
https://magic.gg/news/watch-learn-and-play-with-new-magic-esports-videos
https://www.redbull.com/int-en/event-series/untapped/online-qualifiers

Magic Esports on YouTube: https://www.youtube.com/channel/UCi-qnGn71s174-ipyy4GvKA
Sign up for Red Bull Untapped: https://mtgmelee.com/Tournament/View/907

Judge Call: The Naming of the Few

Supported this week by: Spell Crumbles from Oko's Bakery and Pawn (Spell Crumble)

You can support the show too at Patreon.com/EngineWithin

And, as always, Special Thanks to Joseph McDade for music used in this episode. Check out his music at josephmcdade.com/music or support him at www.patreon.com/josephmcdade

Yo! MTG Taps!
Yo! MTG Taps! YMTGT #13 Mage's Ark

Yo! MTG Taps!

Play Episode Listen Later Jun 5, 2019 84:22


Support YMTGT on Patreon!

This week, Joey and bhj welcome Cardhoarder as a new co-sponsor of the show, and introduce their new Fringe Division segment. Then they break down the details of the Red Bull Untapped tournament series, the London Mulligan going into effect with the release of Core Set 2020, and the just-announced Magic: the Gathering cartoon coming to Netflix! Finally, they dive into Modern Horizons spoilers!

Red Bull Untapped Tournament Series Details!
Magic: the Gathering Cartoon Details
OK, fine. Here's the REAL link to the Magic: the Gathering cartoon details.
"On the London Mulligan" by Zvi Mowshowitz
Check out The Dive Down podcast!

Featuring music by Spruke
"Rewind Time Sound" by Mike Koenig used under Creative Commons.

Contact us at yomtgtaps [at] gmail [dot] com
Follow us on Twitter! @yomtgtaps (bhj and Joey) @affinityforblue (Joey) @bigdeadjoe (bigheadjoe)
Become a fan of Yo! MTG Taps! on Facebook!
Follow Yo! MTG Taps! on Twitch!

Thanks for listening!

Top 8 Magic – magic.facetofacegames.com
#477 - Breaking vases since 1999: a talk with Zvi Mowshowitz

Top 8 Magic – magic.facetofacegames.com

Play Episode Listen Later May 24, 2019 73:52


BDM is in San Francisco this week, without Mike. But it's a good thing he is, as this gave him the chance to connect and catch up with Pro Tour Hall of Famer and old friend Zvi Mowshowitz about Magic: The Gathering, old and new.

--- Support this podcast: https://anchor.fm/top8magic/support

MTG Pro Tutor - Insights, Tips & Advice from Magic: The Gathering Pros
34: Zvi Mowshowitz Teaches You To Develop A Growth Mindset

MTG Pro Tutor - Insights, Tips & Advice from Magic: The Gathering Pros

Play Episode Listen Later Oct 13, 2015 57:21


Zvi Mowshowitz has 9 Grand Prix Top 8s, including winning Grand Prix New Orleans. He has 4 Pro Tour Top 8s, including being the champion of Pro Tour Tokyo in 2001. He was inducted into the Magic: The Gathering Hall of Fame in 2007. Zvi was born and raised in New York City.

Click to Tweet: I got a ton of value from Zvi Mowshowitz when he shared his story on MTG Pro Tutor today! Click here: http://bit.ly/mtgprotutor-ep34

First Set
Revised
The Dark (first booster pack)

Favorite Set
Limited: Innistrad

Favorite Card
Jayemdae Tome

What makes Magic: The Gathering fun for you?
The people he's met and the friends he's made. Zvi met his best friend through Magic as well as a business partner for one of his ventures. Zvi also loves that Magic is constantly changing.

Early Challenge
Age and travel distance were an obstacle for Zvi early on. Thankfully his parents trusted a fellow player (who was older) and allowed him to take Zvi to tournaments that were further away.

Level Up Moment
Zvi's first Pro Tour (during Tempest block), where he realized he was actually good and could swing it with the big guys, is when Zvi started taking his training seriously.

Proudest Magic Moment
Winning Grand Prix New Orleans

Best Format
Block Constructed

How to Choose a Standard Deck
If you want to dominate your local scene, stick with one deck. If you want to really improve, play with a lot of different decks so you learn how they play and how to beat them.

Biggest Mistake Players Make
Always play to win. HOWEVER, don't ever feel like your time and energy was wasted if you lose. Walk away from every event asking two questions: Did I learn something? Did I enjoy it? Have a growth mindset. Say "today will not be a waste, and in order for it to not be a waste I have to learn something." Failure to identify the key resources in any given game is the biggest mistake Zvi sees players make. Who has inevitability? A lot of people just play and don't have a plan and don't track who has inevitability to win. Mid-level players often play around things that they either can't play around or shouldn't play around.

Card Evaluation Tips
Wait until the full spoiler comes out before evaluating the set for Limited.

Sealed & Draft Tips
Sealed: What are your amazing cards? Focus on having a good curve of good cards. Generally you want two colors with early drops and a reasonable curve. Avoid the devil's mana base. (6/6/6 lands)
Draft: If what you pass to your left is not a bomb, just remember the color and don't waste brain space on memorizing the exact card.

Tournament Prep & Team Building
Find people that you get along well with and test with them. Travel together and split a room (for cost reasons).

Improvement Suggestions
Watch others draft. Have others watch you and give you feedback. Proxy up decks. Play online for repetitions.

Magic Resources
Cockatrice
Magic Online
Channel Fireball
Star City Games - Great tournament circuit

Connect With Zvi
Twitter: @TheZvi

Like What You Hear?
If you like the show, head on over to iTunes and leave an honest Rating & Review. Let me know what you like and what I can do better so I can make the show the best it can be and continue bringing you valuable content. I read every single one and look forward to your feedback.

Limited Resources
Limited Resources 274 - Fate Reforged Set Review Review, and a Goodbye to Khans

Limited Resources

Play Episode Listen Later Mar 2, 2015 90:39


This week on Limited Resources Luis and Marshall take some time to reflect back on the Set Review from Fate Reforged, as well as Khans of Tarkir as it makes its exit from the Limited landscape.

Link to coverage of Martin Müller vs. Zvi Mowshowitz at Pro Tour Fate Reforged that was mentioned on the show: https://www.youtube.com/watch?v=v7MVxi4A0sM

Limited Resources is brought to you by ChannelFireball. You can support Limited Resources on the LR Patreon page here: http://www.patreon.com/limitedresources

Your Hosts: Marshall Sutcliffe and Luis Scott-Vargas
Marshall’s Twitter: http://twitter.com/Marshall_LR
LSV’s Twitter: https://twitter.com/lsv
Email: lr@lrcast.com
LR Community Subreddit: http://www.reddit.com/r/lrcast/

Contact Marshall_LR on Magic Online if you’d like to join the Limited Resources clan.

Yo! MTG Taps!
Yo! MTG Taps! Episode 8 - Comment Storm

Yo! MTG Taps!

Play Episode Listen Later Jan 8, 2010 36:36


We discuss the spoiled Worldwake prerelease foil Comet Storm! Should it be mythic? Also, we recommend a few non-storyline-related MTG books to spend your Christmas money on! [Anyone interested in hearing a bit of chatter on the NFL (mainly the Baltimore Ravens), stay tuned after the song at the end.]

Be sure to check out Bigheadjoe’s addendum to this episode on his blog (http://otherworldlyjourney.blogspot.com/2010/01/yo-mtg-taps-episode-8-addendum-relevant.html).

Get My Files by Zvi Mowshowitz (http://www.top8magic.com/store/zvi-mowshowitz-my-files-volume-1/) and Deckade by Michael Flores (http://www.top8magic.com/store/store-michael-j-flores-deckade/) over at Top8Magic!

Download Next Level Magic by Patrick Chapin (http://www.starcitygames.com/magic/misc/17618_Next_Level_Magic_by_Patrick_The_Innovator_Chapin_On_Sale_Now.html) over at StarCityGames (http://www.starcitygames.com/)!

Crypto Basic Podcast: Teaching You The Basics of Bitcoin and the World of Cryptocurrency. CryptoBasic
Episode 229 - Tezos into digital card games, MakerDAO smart contract "theft", and of course, CoronaRants.

Crypto Basic Podcast: Teaching You The Basics of Bitcoin and the World of Cryptocurrency. CryptoBasic

Play Episode Listen Later Dec 31, 1969 74:49


Hello and welcome to Crypto Basic News! This week we're cozying up in quarantine, staying calm and collected, and we're talking about all kinds of stuff like Tezos getting into digital card games, millions of dollars being usurped through the MakerDAO smart contract mechanism, and of course, we talk COVID-19, from its health risks to its massive possible socioeconomic effects. Wash your hands and then click play to tune in!

Rapid Fire
Apparently it's alt season according to that index we talked about a few weeks ago????
50 Days Out From Bitcoin Halvening
IRS Deadline has been extended by 90 days (July 15th)
Coinbase CLO Resigns to Oversee US National Banking System
WHO impersonators are trying to steal your bitcoin. The fake email claims funds will be used to “enable all countries to track and detect the disease...send personal protective equipment to frontline health workers...enable communities to prevent infection and care for those in need...and accelerate efforts to fast-track the development of lifesaving vaccines.”

Tezos gets in the digital card game space (Adam)
Coase, a company launched by Tezos, aims to solve this with its new game, Emergents.
Coase’s innovation: Let people easily acquire and swap the cards they actually want.
Alpha will be available in April.
For Coase co-founder Zvi Mowshowitz – who is also a 2007 inductee to the Magic Hall of Fame – both distribution approaches leave a lot to be desired.
Emergents will take a radically simple approach: Let people buy the cards they want.
Brian David-Marshall, a third co-founder, said Coase has "an economic model for doing a collectible card game on the blockchain that captures a lot of what makes physical card games great."
Coase will be the primary and secondary marketplace all in one. Like bitcoin on Square’s Cash App, Coase will be the seller and the buyer for Emergents cards, essentially acting as a market maker.
The company is using bonding curves, much like Ethereum's token-swapping platform Uniswap, to distribute the cards. Cards will have a fixed supply and their prices will swing with demand, determined by an algorithm. New cards will come out and be sold on the site at a low price.

Vitalik tweeted out a pic of what he thought the next 5-10 years of eth2 would look like.

Millions of dollars of value were usurped from the MakerDAO smart contract. (TOP STORY)
I didn't want to use the word stolen, but this is an exceptionally rough spot. This happened right around this time last week.
Apparently for a full 3.5 hours someone had a monopoly on the liquidation market of the MakerDAO.
So let's do a recap of how the MakerDAO creates liquidity. (RECAP QUICK - REFERENCE 101)
With multiple people, there will automatically be a bidding war because the price of ETH exists in the world.
With only one person, they were able to bid on the ETH being liquidated at $0 worth of DAI. How did only one person get to be the only bidder? Well, people ran out of liquidity. There wasn't enough DAI sitting there to buy the ETH. ETH fell so fast that people couldn't liquidate their holdings and were forced to sell. The network was too congested to let keepers work fast enough, leaving this group in the only position to buy.
People are using the term "steal" for the $4M worth of ETH taken from the contract, but honestly this isn't really stealing. The contract functions this way.
Normally the liquidation penalty on ETH contracts was 13%, but since no one else could bid, this raised it and liquidated all kinds of positions with a 50% penalty.
People woke up to their collateral gone with no recourse.
The MakerDAO is now creating a fund that would be the last resort in these scenarios that anyone can participate in, and they're making being a "keeper" easier for the general public. They've also programmed Coinbase's USDC as an emergency collateral device.

CoronaVirus & The economy (everyone)
Brent's theory on how to solve this - discuss - because I can.
Monopoly Rules - Top on R/CC
Bill Ackman calls for Trump, CEOs to temporarily shut the country down

SUB-STORY - Federal Reserve cuts rates to zero and launches massive $700 billion quantitative easing program - (K)
In an emergency move Sunday, the Federal Reserve announced it is dropping its benchmark interest rate to zero and launching a new round of quantitative easing.
The Fed also slashed the rate of emergency lending at the discount window for banks by 125 basis points to 0.25%, and lengthened the term of loans to 90 days.
The Fed also cut reserve requirements for thousands of banks to zero.
Actions by the Fed appeared to be the largest single-day set of moves the bank had ever taken. Before, action took place over months like in '08; this was multiple programs, QE, and rate cuts, all in one day.
Fiscal vs Monetary policy

EARN IT Act could be an attack on Encryption - K
Communications Decency Act Section 230 - Historically given tech companies minimal liability for how people use their platforms
EARN IT Act would force companies to "Earn" protection from liability by showing they are following recommendations for combating child sexual exploitation
Bi-Partisan: Lindsey Graham and Richard Blumenthal (Don't expect anything good from these two)
Would create a panel of law enforcement officials, attorneys general, online child sexual exploitation survivors and advocates, constitutional law scholars, consumer protection and privacy specialists, cryptographers, and other tech experts to collectively decide what digital companies should do to identify and reduce child predation on their platforms
Could include things like scanning content to identify abusive photos/videos
But could also include things like communication surveillance
So if this was law, companies couldn't offer end-to-end encryption and be protected from liability
Either A) accept liability for anything that happens, B) put in a backdoor for law enforcement, or C) just avoid end-to-end encryption altogether
A note about Libertarian anti-government views (panel picking) and legal creep (Patriot Act, RICO and asset forfeiture)

Exit
Please join the conversation in the Discord. We're in there all the time.
Rate us on iTunes.
Follow CryptoBasicBrent on Reddit.
We are not financial advisers.
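The bonding-curve distribution described in the Emergents notes above (fixed supply, prices swinging with demand, the platform acting as both buyer and seller) can be sketched very roughly in code. The snippet below is a minimal illustrative sketch of a generic linear bonding curve, not Coase's actual pricing algorithm; the class name, base price, and slope are all assumptions made up for the example.

```python
class LinearBondingCurve:
    """Toy automated market maker: the quoted price rises as more copies are
    bought from the curve and falls as copies are sold back to it.

    Generic linear curve for illustration only, not Coase's actual algorithm;
    base_price and slope are made-up parameters.
    """

    def __init__(self, base_price=1.0, slope=0.05):
        self.base_price = base_price
        self.slope = slope
        self.circulating = 0  # net copies the market maker has sold

    def price(self):
        return self.base_price + self.slope * self.circulating

    def buy(self):
        """Buy one copy from the curve; returns the price paid."""
        paid = self.price()
        self.circulating += 1
        return paid

    def sell(self):
        """Sell one copy back to the curve; returns the price received."""
        self.circulating -= 1
        return self.price()

curve = LinearBondingCurve()
print([round(curve.buy(), 2) for _ in range(3)])  # [1.0, 1.05, 1.1] -- price climbs with demand
print(round(curve.sell(), 2))                     # 1.1 -- selling walks the price back down
```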