The YOU GOOD? episodes are unlike the THT interviews you might be familiar with. They are conscious conversations with the purpose of educating and helping. This week, Sean Mussett (aka Gravel Burns) and I discuss and outline 7 ways to check in on someone who is going through a rough patch, suffering from poor mental health, or even suicidal. Don't know how to talk to your mates? Here are some ideas.
Shan
Call 13 11 14 (Australia) for crisis support or suicide prevention. Lifeline website: click HERE.
International suicide hotlines (most countries): click HERE for the list.
Red Cross Mental Health First Aid (MHFA) course: click HERE.
References (during episode):
When we were talking about the benefits of human touch: Field, T. (2010). Touch for socioemotional and physical well-being: A review. Developmental Review, 30(4), 367-383. https://doi.org/10.1016/j.dr.2011.01.001
When we were talking about the psychology of "doom scrolling": B.F. Skinner's behavioral psychology (referenced by designer Aza Raskin when creating the "infinite scroll" function).
Get discounts and support the show (click on the links):
KRUSH ORGANICS - CBD oils and topicals. Or use code: THT (get a HUGE 40% discount; shipping is WORLDWIDE and fast). Reduce anxiety and sleep better with CBD oil; the health benefits are unquestionable, and it's all natural.
BREATHEEZE - Nasal strips (click here for 15% off). Or coupon code: THT. Snoring? Tired and frustrated by blocked airways? Picture the freedom of easy breathing and unlock your full potential with our nasal strips and mouth tape!
INDOSOLE - Sustainable footwear (click link for 15% off). Or coupon code: THT (shipping is WORLDWIDE and fast). Sandals made from recycled tyres. Timeless footwear for the conscious consumer.
Music credits: (Intro) Music by Def Wish Cast. Song: Fo
Get down to Bondi for the first ever Bondi Bowl Bash on Saturday March 22nd. Organised by Bondi Skate Riders and Next Door clothing.
FOLLOW on Apple Podcasts | FOLLOW on Spotify | SUBSCRIBE on YouTube
This is the only favour I will ever ask of you! Help the show expand and get the best guests that you want frequently!
Music from #Uppbeat:
About how I got mad at my phone, about paper maps, and about toxins from the brain - in other words, about the digital minimalism trend I decided to test. Books I read on this topic and on the threads around it:
The ones I recommend most: "Wyloguj swój mózg" by Anders Hansen
"Złodzieje. Co okrada nas z uwagi" by Johann Hari
"Cyfrowy minimalizm" by Cal Newport
"Praca głęboka" by Cal Newport
"Niewolnicy dopaminy. Jak znaleźć równowagę w epoce obfitości" by Anna Lembke - good, not to be confused with the one below, because that one is weak
"Dopaminowy detoks. Jak pozbyć się rozpraszaczy" - that's the weak one
"Ekonomia uwagi. Jak nie przescrollować sobie życia?" by Joanna Glogaza
"Atomowe nawyki" by James Clear
"Skuszeni. Jak tworzyć produkty kształtujące nawyki konsumenckie" by Nir Eyal
The man named Aza who wrote the code for infinite scrolling is Aza Raskin. The name of the psychologist I couldn't pronounce is Mihály Csíkszentmihályi.
App for tracking your media and phone use: Opal
App for simplifying your phone: Dumbphone
App for blocking Instagram that didn't work for me: One sec
A Tinder-style app for the photos on your phone: Picnic
App for organizing your day: Structured
With Tristan Harris, Aza Raskin and Theresa Payton.
On today's show, we once again fire up our rhetorical stovetop to roast some dubious public argumentation: Oprah Winfrey's recent ABC special, "AI and the Future of Us." In this re:joinder episode, Alex and Calvin listen through and discuss audio clips from the show featuring a wide array of guests - from corporate leaders like Sam Altman and Bill Gates to technologists like Aza Raskin and Tristan Harris, and even FBI Director Christopher Wray - and dismantle some of the mystifying rhetorical hype tropes that they (and Oprah) circulate about the proliferation of large language models (LLMs) and other "AI" technologies into our lives. Along the way, we use rhetorical tools from previous episodes, such as the stasis framework, to show which components of the debate around AI are glossed over, and which are given center-stage. We also bring our own sociopolitical and media analysis to the table to help contextualize (and correct) the presenters' claims about the speed of large language model development, the nature of its operation, and the threats - both real and imagined - that this new technological apparatus might present to the world. We conclude with a reflection on the words of novelist Marilynne Robinson, the show's final guest, who prompts us to think about the many ways in which "difficulty is the point" when it comes to human work and developing autonomy. Meanwhile, the slick and tempting narratives promoting "ease" and "efficiency" with AI technology might actually belie a much darker vision of "the future of us." Join us as we critique and rejoin some of the most common tropes of AI hype, all compacted into one primetime special. In the spirit of automating consumptive labor, we watched it so you don't have to!
Works & Concepts cited in this episode:
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the dangers of stochastic parrots: Can language models be too big?
Historian Yuval Noah Harari says that we are at a critical turning point, one in which AI's ability to generate cultural artifacts threatens humanity's role as the shapers of history. History will still go on, but will it be the story of people or, as he calls them, 'alien AI agents'?
In this conversation with Aza Raskin, Harari discusses the historical struggles that emerge from new technology, humanity's AI mistakes so far, and the immediate steps lawmakers can take right now to steer us towards a non-dystopian future. This episode was recorded live at the Commonwealth Club World Affairs of California.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
RECOMMENDED MEDIA
NEXUS: A Brief History of Information Networks from the Stone Age to AI by Yuval Noah Harari
You Can Have the Blue Pill or the Red Pill, and We're Out of Blue Pills: a New York Times op-ed from 2023, written by Yuval, Aza, and Tristan
The 2023 open letter calling for a pause in AI development of at least 6 months, signed by Yuval and Aza
Further reading on the Stanford Marshmallow Experiment
Further reading on AlphaGo's "move 37"
Further reading on Social.AI
RECOMMENDED YUA EPISODES
This Moment in AI: How We Got Here and Where We're Going
The Tech We Need for 21st Century Democracy with Divya Siddarth
Synthetic Humanity: AI & What's At Stake
The AI Dilemma
Two Million Years in Two Hours: A Conversation with Yuval Noah Harari
Co-founders of the Center for Humane Technology
[00:11:30] Michael Allen [00:18:26] Ricky Cobb [00:36:48] Emily Schrader [00:55:10] Rep. Ashley Hinson [01:13:30] Tristan Harris & Aza Raskin [01:31:53] Mark Messier
Artificial Intelligence is making the stuff of science fiction a science reality, changing how humans interact with the world. It could also change the way we interact with wildlife, giving us the ability to talk to animals...but are we ready? On this episode Chris talks to Aza Raskin, co-founder of Earth Species Project and Karen Bakker, a professor at the University of British Columbia, about animal communication and the pros and cons of the latest AI technology. This show would not be possible without listener support. You can help us continue to create this special immersive storytelling by donating at kuow.org/donate/thewild. Thank you. THE WILD is a production of KUOW, Chris Morgan Wildlife, and the NPR Network. It is produced by Lucy Soucek and Matt Martin, and edited by Jim Gates. It is hosted, produced and written by Chris Morgan. Fact checking by Apryle Craig. Our theme music is by Michael Parker.
Years ago, Aza Raskin invented the infinite scroll – and yes, he regrets it. Today, Aza is the co-founder of the Center For Humane Technology and the Earth Species Project. Aza's work focuses on creating and advocating for ethical technology that benefits collective well-being. For his latest project, he's looking beyond humanity, using artificial intelligence to decode whale communication and see what lessons we might learn from the animal world. In this expansive conversation, Adam and Aza discuss ways to improve social media, how communicating with other species could change our world, and why everyone – including our governments – needs to upgrade our thinking about an AI world. Transcripts for ReThinking are available at go.ted.com/RWAGscripts
Years ago, Aza Raskin invented the infinite scroll — and yes, he regrets it. Today, Aza is the cofounder of the Center For Humane Technology and the cofounder and president of Earth Species Project. He focuses on creating and advocating for ethical technology that benefits collective well-being. For his latest project, he's looking beyond humanity, using artificial intelligence to decode non-human communication and see what lessons we might learn from the animal world. In this expansive conversation, Adam asks Aza about the exciting and terrifying possibilities of AI, how communicating with other species could change our world, and why everyone — including our governments — needs to upgrade how we think about modern technology. Transcripts for ReThinking are available at go.ted.com/RWAGscripts
This is the first episode in a series on interaction design that I want to publish on my podcast. In this episode I tell you the story behind a very well-known interaction pattern, the infinite scroll, created more than 15 years ago by the engineer and designer Aza Raskin, son of one of the greats of interaction design. --- Send in a voice message: https://podcasters.spotify.com/pod/show/pildoras/message
Scientists could actually be close to being able to decode animal communication and figure out what animals are saying to each other. And more astonishingly, we might even find ways to talk back. The study of sonic communication in animals is relatively new, and researchers have made a lot of headway over the past few decades with recordings and human analysis. But recent advancements in artificial intelligence are opening doors to parsing animal communication in ways that haven't been close to possible until now. In this talk from the 2023 Aspen Ideas Festival in partnership with Vox's “Unexplainable” podcast, two experts on animal communication and the digital world come together to explain what may come next. Tragically, a few months after this conversation was recorded in June, one of the panelists, Karen Bakker, passed away unexpectedly. Bakker was a professor at the University of British Columbia who looked at ways digital tools can address our most pressing problems. She also wrote the book “The Sounds of Life: How Digital Technology is Bringing Us Closer to the World of Animals and Plants.” The UBC Geography department wrote of Bakker: “We will remember Karen as multi-faceted and superbly talented in all realms.” Aza Raskin, the co-founder of the Earth Species Project, a nonprofit trying to decode animal communication using A.I., joined Bakker for this discussion. The host of “Unexplainable,” Noam Hassenfeld, interviewed Bakker and Raskin. aspenideas.org
Joe Rogan delves into the realm of artificial intelligence with a sense of urgency and newfound apprehension. The episode begins with Rogan candidly expressing how he had previously dismissed concerns about AI, considering them exaggerated or distant threats. However, as the conversation unfolds, it becomes evident that his perspective has undergone a profound shift. Tristan Harris and Aza Raskin are the co-founders of the Center for Humane Technology and the hosts of its podcast, "Your Undivided Attention." Watch the Center's new film "The A.I. Dilemma" on YouTube.
Explore the future of AI with Zach and Noah as they delve into the potential and perils of artificial general intelligence (AGI) alongside experts Aza Raskin and Tristan Harris, who were featured on The Joe Rogan Experience. This episode unpacks the Earth Species Project, AI incentives, and the crucial laws of technology. Discover the dire need for ethical AI deployment and the hope that lies in responsible development. Don't miss this insightful discussion on shaping technology's trajectory - tune in now! --- Send in a voice message: https://podcasters.spotify.com/pod/show/bigchew/message
Thanks to this week's sponsors:
Draft Kings www.draftkings.com
Download the DraftKings Casino app NOW, use promo code JRER, and play FIVE DOLLARS to get ONE HUNDRED DOLLARS IN CASINO CREDITS!
Apple: https://apps.apple.com/ca/app/draftkings-casino-real-money/id1462060332
Android: https://play.google.com/store/apps/details?id=com.draftkings.casino&hl=en_US&gl=US&pli=1
Gambling problem? Call 1-800-GAMBLER or visit www.1800gambler.net. In Connecticut, help is available for problem gambling: call 888-789-7777 or visit ccpg.org. Please play responsibly. 21+. Physically present in Connecticut, Michigan, New Jersey, Pennsylvania, West Virginia only. Void in Ontario. Eligibility and deposit restrictions apply. One per opted-in new customer. $5 wager required. Max. $100 in Casino Credits awarded, which require 1x play-thru within 7 days. Terms at casino dot draftkings dot com slash holidays on the house. Restrictions apply.
www.JREreview.com
For all marketing questions and inquiries: JRERmarketing@gmail.com
Tristan Harris and Aza Raskin are the co-founders of the Center for Humane Technology and the hosts of its podcast, "Your Undivided Attention." Watch the Center's new film "The A.I. Dilemma" on YouTube.
https://www.humanetech.com
"The A.I. Dilemma": https://www.youtube.com/watch?v=xoVJKj8lcNQ
When did we get so disconnected from the world around us? How can we find our way back? Aza Raskin thinks the answer might lie in humanity's greatest adversary - listening. As co-founder of the Center for Humane Technology and of the Earth Species Project, Aza and his team are using Artificial Intelligence to decode the language of animals, from whales to crows, while remaining dedicated to ensuring the accelerating rise of A.I. remains safe and responsibly handled. This is...A Bit of Optimism. For more on Aza and his work check out: https://www.earthspecies.org/ https://www.humanetech.com/
What's in this episode: Do you still love Facebook? Is your favorite place to connect still LinkedIn? Are you still spending the majority of your bathroom time on Instagram? Whether it's from a purely personal point of view or a professional perspective, is spending all this time on social media helping you? Or is it hurting you?
Folks + things mentioned in this episode:
* Listen to The Rabbit Hole, a New York Times 6-episode podcast series and fantastic exploration of what the Internet is doing to us.
* If you haven't seen The Social Dilemma, stop what you're doing and watch it here.
* Find out what Tristan Harris and Aza Raskin are doing to save the world at their Center for Humane Technology and get involved.
* Need a read? Try The Best Business Book In The World* (*According to my Mom) here.
* Looking to work with a coach to transform your way of thinking? Look no further.
Got Q's? Jill's Got A's.
* Wanna leave a review at RateThisPodcast.com/WhyAreWeShouting so you can become the coolest person on the planet?
* Wanna get your Q's A'd in a future episode?
* Wanna sponsor an episode?
Talk to me! Text or call (708) 872-7878 so that we can make your dreams come true. Got thoughts, comments, or questions about the episode you just heard? Leave a comment below.
See you soon,
jill
Get full access to The Why Are We Shouting? Podcast at jillsalzman.substack.com/subscribe
Friends, we live in a mechanistic world where we are entangled with machines and technology. It's a level of entanglement that not only impedes our relevance in the world, it could just be the end of our species. So what do we do about it as we fight to reclaim our lives and our communities? We start to unpack it deeply in this episode. Pull up a chair and let's dig in. I'll see you on the Rooftop... Scott
Tristan Harris and Aza Raskin discuss The AI Dilemma: https://youtu.be/cB0_-qKbal4?si=AfUQ7wLq-c4VmGMX
"Why the last 10 years in America have been uniquely stupid." https://www.tcatitans.org/cms/lib/CO50010872/Centricity//Domain/63/Haidt%20-%202022%20-%20Why%20the%20Past%2010%20Years%20of%20American%20Life%20Have%20Been%20Uniquely%20Stupid%20-%20The%20Atlantic.pdf
The Master and His Emissary by Iain McGilchrist: https://a.co/d/ePhgeZ1
Join Rooftop Nation!
Website: https://www.rooftopleadership.com/
Facebook: https://www.facebook.com/ScottMannAuthor
Instagram: https://www.instagram.com/scottmannauthor
LinkedIn: https://www.linkedin.com/company/rooftop-leadership
Twitter: https://twitter.com/RooftopLeader
YouTube: https://www.youtube.com/channel/UCYOQ7CDJ6uSaGvmfxYC_skQ
We bring you a selected topic from our regular roundup "Co týden dal," which is available only to our subscribers at herohero.co/kanarcivsiti. In today's bite-sized episode we look at the closed-door meeting organized by Senator Chuck Schumer, which took place on September 14, 2023.
Useful links:
Senator Chuck Schumer's briefing
Tristan Harris and Aza Raskin on what happened at the meeting
Support us at https://www.herohero.co/kanarcivsiti, or buy our T-shirts at https://www.neverenough.shop/kanarci. The podcast is prepared for you by @alexalvarova and @holyj. Music and sound engineering: Psyek and deafmutedrecords.com. Twitter Spaces are moderated by @jiribulan. Find us at www.kanarci.online
Tristan Harris and Aza Raskin have been pushing for changes to social media companies' business models for years. Now, they're doing the same with AI. On POLITICO Tech, the co-founders of the Center for Humane Technology tell Steven Overly why Washington will need to speed up if it hopes to effectively keep AI's risks in check.
Where do the top Silicon Valley AI researchers really think AI is headed? Do they have a plan if things go wrong? In this episode, Tristan Harris and Aza Raskin reflect on the last several months of highlighting AI risk, and share their insider takes on a high-level workshop run by CHT in Silicon Valley.
Note: Tristan refers to journalist Maria Ressa and mentions that she received 80 hate messages per hour at one point. She actually received more than 90 messages an hour.
RECOMMENDED MEDIA
Musk, Zuckerberg, Gates: The titans of tech will talk AI at private Capitol summit - This week will feature a series of public hearings on artificial intelligence. But all eyes will be on the closed-door gathering convened by Senate Majority Leader Chuck Schumer
Takeaways from the roundtable with President Biden on artificial intelligence - Tristan Harris talks about his recent meeting with President Biden to discuss regulating artificial intelligence
Biden, Harris meet with CEOs about AI risks - Vice President Kamala Harris met with the heads of Google, Microsoft, Anthropic, and OpenAI as the Biden administration rolled out initiatives meant to ensure that AI improves lives without putting people's rights and safety at risk
RECOMMENDED YUA EPISODES
The AI Dilemma
The AI 'Race': China vs the US with Jeffrey Ding and Karen Hao
The Dictator's Playbook with Maria Ressa
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
We recently released a video commentary on the presentation "The AI Dilemma," being given by Tristan Harris and Aza Raskin on some of the possible negative impacts of the rapid deployment of generative AI tools. They are two of the co-founders of the Center for Humane Technology, a tech watchdog group that raised alarms about social media and its impact on society. They are also the hosts of the podcast Your Undivided Attention, where they explore the challenges and opportunities of humane technology. Their latest work focuses on the emergence of new forms of artificial intelligence (AI), especially large language...Article Link
If we meet extraterrestrials someday, how will we figure out what they're saying? We currently face this problem right here at home: we have 2 million species of animals on our planet... and we have no Google Translate for any of them. We're not having conversations with (or listening to podcasts by) anyone but ourselves. Join Eagleman and his guest Aza Raskin to see the glimmer of a pathway that might get us to animal translation, and relatively soon.
Air Date 8/20/2023
Big tech is currently scrambling to bring untested A.I. products to market, over-promising, under-delivering, and working hard to obscure and ignore any possible downsides for society. Big tech needs A.I. regulation now, before we all suffer the easily foreseeable consequences as well as some unforeseeable ones.
Be part of the show! Leave us a message or text at 202-999-3991 or email Jay@BestOfTheLeft.com
Transcript
BestOfTheLeft.com/Support (Members Get Bonus Clips and Shows + No Ads!)
Join our Discord community!
SHOW NOTES
Ch. 1: A.I. is B.S. - Adam Conover - Air Date 3-31-23 - The real risk of A.I. isn't that some super-intelligent computer is going to take over in the future - it's that the humans in the tech industry are going to screw the rest of us over right now.
Ch. 2: Center for Humane Technology Co-Founders Tristan Harris and Aza Raskin discuss The AI Dilemma Part 1 - Summit - Air Date 6-15-23 - What does it look like to align technology with humanity's best interests? Tristan Harris and Aza Raskin discuss how existing A.I. capabilities already pose catastrophic risks to a functional society.
Ch. 3: Tech's Mask Off Moment - What Next: TBD | Tech, power, and the future - Air Date 8-13-23 - When conservative writer Richard Hanania's old posts, originally published under a pseudonym, came to light, people were shocked at just how racist and reactionary they were. Perhaps less shocking were the tech moguls who were revealed to be supporting him.
Ch. 4: Pregnant Woman's False Arrest in Detroit Shows "Racism Gets Embedded" in Facial Recognition Technology - Democracy Now! - Air Date 8-7-23 - A shocking story of wrongful arrest in Detroit has renewed scrutiny of how facial recognition software is being deployed by police departments, despite major flaws in the technology.
Ch. 5: Princeton University's Ruha Benjamin on Bias in Data and A.I. - The Data Chief - Air Date 2-3-21 - Joining Cindi today is Ruha Benjamin, a professor of African American Studies at Princeton University and the founding director of the IDA B. WELLS Just Data Lab. She has studied the social dimensions of science, technology, and medicine for over 15 years.
Ch. 6: AI ethics leader Timnit Gebru is changing it up after Google fired her - Science Friction - Air Date 4-17-22 - Timnit Gebru was fired by Google in a cloud of controversy; now she's making waves beyond Big Tech's pervasive influence.
Ch. 7: Center for Humane Technology Co-Founders Tristan Harris and Aza Raskin discuss The AI Dilemma Part 2 - Summit - Air Date 6-15-23
Ch. 8: Can We Govern AI? - Your Undivided Attention - Air Date 4-21-23 - Our guest Marietje Schaake was at the forefront of crafting tech regulations for the EU. In spite of AI's complexity, she argues there is a path forward for the U.S. and other governing bodies to rein in companies that continue to release these products.
MEMBERS-ONLY BONUS CLIP(S)
Ch. 9: Buddhism in the Age of AI - Soryu Forall - Monastic Academy - Air Date 6-21-23
FINAL COMMENTS
Ch. 10: Final comments on the difference between Microsoft's marketing and the realities of capitalism
MUSIC (Blue Dot Sessions)
Produced by Jay! Tomlinson
Visit us at BestOfTheLeft.com
Listen Anywhere! BestOfTheLeft.com/Listen
Follow at Twitter.com/BestOfTheLeft
Like at Facebook.com/BestOfTheLeft
Contact me directly at Jay@BestOfTheLeft.com
https://youtu.be/8bi9tDYCX5M In the ever-evolving digital landscape, the role of artificial intelligence (AI) has become a topic of intense discussion and scrutiny. This video commentary delves into this complex issue, offering a unique perspective on the potential dangers of AI, particularly in relation to the effects of social media. The commentary features Matt and Christopher, who dissect the AI Dilemma, a video originally by Tristan Harris and Aza Raskin. The duo watches the video, providing their insights and interpretations, and invites viewers to join them in this exploration of AI's potential pitfalls. The video commentary is not just a cautionary tale...Article Link
Hugh watched the Aspen Ideas Festival presentation titled "The A.I. Dilemma" by Tristan Harris and Aza Raskin, and spent much of the program discussing it with the audience and with Sonny Bunch, film critic extraordinaire, and Ben Domenech, of The Spectator and a Fox News Contributor. Listen to the pod and then watch the presentation.
What happens when creators consider what lifelong human development looks like in terms of the tools we make? And what philosophies from Sesame Street can inform how to steward the power of AI and social media to influence minds in thoughtful, humane directions?
When the first episode of Sesame Street aired on PBS in 1969, it was unlike anything that had been on television before - a collaboration between educators, child psychologists, comedy writers and puppeteers - all working together to do something that had never been done before: create educational content for children on television. Fast-forward to the present: could we switch gears to reprogram today's digital tools to humanely educate the next generation? That's the question Tristan Harris and Aza Raskin explore with Dr. Rosemarie Truglio, the Senior Vice President of Curriculum and Content for the Sesame Workshop, the non-profit behind Sesame Street.
RECOMMENDED MEDIA
Street Gang: How We Got to Sesame Street - This documentary offers a rare window into the early days of Sesame Street, revealing the creators, artists, writers and educators who together established one of the most influential and enduring children's programs in television history
Sesame Street: Ready for School!: A Parent's Guide to Playful Learning for Children Ages 2 to 5 by Dr. Rosemarie Truglio - Rosemarie shares all the research-based, curriculum-directed school readiness skills that have made Sesame Street the preeminent children's TV program
G Is for Growing: Thirty Years of Research on Children and Sesame Street, co-edited by Shalom Fisch and Rosemarie Truglio - This volume serves as a marker of the significant role that Sesame Street plays in the education and socialization of young children
The Democratic Surround by Fred Turner - In this prequel to his celebrated book From Counterculture to Cyberculture, Turner rewrites the history of postwar America, showing how in the 1940s and 1950s American liberalism offered a far more radical social vision than we now remember
Amusing Ourselves to Death by Neil Postman - Neil Postman's groundbreaking book about the damaging effects of television on our politics and public discourse has been hailed as a twenty-first-century book published in the twentieth century
Sesame Workshop Identity Matters Study - Explore parents' and educators' perceptions of children's social identity development
Effects of Sesame Street: A meta-analysis of children's learning in 15 countries - Commissioned by Sesame Workshop, the study was led by University of Wisconsin researchers Marie-Louise Mares and Zhongdang Pan
U.S. Parents & Teachers See an Unkind World for Their Children, New Sesame Survey Shows - According to the survey titled "K is for Kind: A National Survey On Kindness and Kids," parents and teachers in the United States worry that their children are living in an unkind world
RECOMMENDED YUA EPISODES
Are the Kids Alright? With Jonathan Haidt
The Three Rules of Humane Tech
When Media Was for You and Me with Fred Turner
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Over the past few years, AI (artificial intelligence) has transformed the way we interact with technology, making it possible for machines to perform tasks that previously required human input. From instant document creation and contract drafting to graphic design and music composition, AI has proven to be a powerful tool that can save us time and enhance our productivity. However, as the use of AI continues to grow, concerns about its impact on communication and relationships have emerged.
In this episode of our podcast on communication and interpersonal relationships, TalkDoc, Meredith, and Teighlor delve into the topic of AI, exploring the widespread use of this technology in our daily lives and relationships. We examine the ways in which AI is already changing how we communicate and connect with each other, from chatbots and virtual assistants to personalized recommendations and social media algorithms. We also discuss the potential dangers and risks associated with AI, particularly its impact on children, relationships, and our ability to form meaningful connections with others.
Join us as we navigate this complex and rapidly evolving landscape, offering insights and perspectives on the role of AI in shaping our communication and relationships. We'll explore the ethical implications of AI, as well as the potential benefits and drawbacks of its continued development. Whether you're a tech enthusiast or a skeptic, this episode is a must-listen for anyone interested in the intersection of technology and human connection.
Music by Epidemic Sound.
SHOW NOTES:
Experts: Dr. Sherry Turkle, Tristan Harris, Aza Raskin, Elon Musk and Steve Wozniak
Resources:
https://chat.openai.com/
https://openai.com/product/gpt-4
Dr. Sherry Turkle: Computing: Reflections and the Path Forward - YouTube
The A.I. Dilemma: https://www.youtube.com/watch?v=xoVJKj8lcNQ
Dangers on Snapchat: https://www.washingtonpost.com/technology/2023/03/14/snapchat-myai/
Huffington Post article (Cohen): https://www.huffpost.com/entry/ai-chatgpt-bot-boyfriend_n_63f7dd2be4b04ff5b488ff84
Dr. Sherry Turkle's friction-free concept: "Artificial intelligence, perhaps without meaning to, has become deeply woven into this story. Why? Because artificial intelligence is almost definitionally about the promise of efficiency without vulnerability or, increasingly, about the illusion of companionship without the demands of friendship. But by trying to move ahead toward the friction free, we are getting ourselves into all kinds of new trouble." - Dr. Sherry Turkle
"We preached authenticity, but we practiced self curation. Technology encouraged us to forget what we knew about life. And we made a digital world where we could forget what life was teaching us." - Dr. Sherry Turkle
A few episodes back, we presented Tristan Harris and Aza Raskin's talk The AI Dilemma. People inside the companies that are building generative artificial intelligence came to us with their concerns about the rapid pace of deployment and the problems that are emerging as a result. We felt called to lay out the catastrophic risks that AI poses to society and sound the alarm on the need to upgrade our institutions for a post-AI world.
The talk resonated - over 1.6 million people have viewed it on YouTube as of this episode's release date. The positive reception gives us hope that leaders will be willing to come to the table for a difficult but necessary conversation about AI. However, now that so many people have watched or listened to the talk, we've found that there are some AI myths getting in the way of making progress. On this episode of Your Undivided Attention, we debunk five of those misconceptions.
Correction: Aza says that the head of the alignment team at OpenAI has concerns about safety. It's actually the former head of language model alignment, Paul Christiano, who voiced this concern. He left OpenAI in 2021.
RECOMMENDED MEDIA
Opinion | Yuval Harari, Tristan Harris, and Aza Raskin on Threats to Humanity Posed by AI - The New York Times: In this New York Times piece, Yuval Harari, Tristan Harris, and Aza Raskin call upon world leaders to respond to this moment at the level of challenge it presents.
Misalignment, AI & Moloch: A deep dive into the game theory and exponential growth underlying our modern economic system, and how recent advancements in AI are poised to turn up the pressure on that system, and its wider environment, in ways we have never seen before
RECOMMENDED YUA EPISODES
The AI Dilemma
The Three Rules of Humane Tech
Can We Govern AI? with Marietje Schaake
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
What are the risks of artificial intelligence?
What are the risks of artificial intelligence? Perhaps you think of the elimination of some jobs, for example. But three great thinkers tell us that artificial intelligence is taking over the master key of civilization. What is that master key? I tell you in this episode.
What are the risks of artificial intelligence (hint: it's not just jobs)... (subtitle)
Artificial intelligence (AI) has evolved rapidly in recent years and has improved efficiency and accuracy in many areas. However, as AI continues to advance, risks also emerge that we must consider carefully.
One of the main risks of AI is the possibility of errors and biases in algorithms. AI is based on data, and if the data used to train the algorithms is incomplete or biased, the AI will be biased too. This can lead to unfair and even dangerous decisions in critical situations, such as medical decision-making or criminal justice.
Another risk is the possibility of AI making autonomous decisions without human intervention. Although this can be beneficial in some situations, such as space exploration or security surveillance, it can also be dangerous in other cases. For example, an autonomous military system could make decisions with catastrophic consequences without a human being intervening to evaluate and approve the action.
Privacy and security are other risks associated with AI. AI systems are designed to analyze large amounts of data, which means there is a significant risk that personal information will be collected and used without the knowledge or consent of the people involved. In addition, AI systems can be vulnerable to cyberattacks and to malicious manipulation of data.
Another important risk of AI is the possibility of it being used to spread disinformation and manipulation. AI can be used to create fake content, such as images and videos, that can be very difficult to distinguish from the originals. This can be used to manipulate public opinion and undermine trust in democratic institutions.
Ultimately, the greatest risk of AI is that humans use it irresponsibly or maliciously. As AI becomes more advanced, it is crucial that it be used ethically and responsibly to maximize its benefits and minimize its risks.
In conclusion, AI has the potential to transform many aspects of our lives for the better, but it also presents significant risks. It is important that we work to mitigate these risks as we continue to develop AI, to make sure it is used safely and responsibly. This will require collaboration among researchers, companies, regulators and citizens around the world to ensure that AI is used for the common good and not to cause harm.
What the AI says and what humans say
I don't think it comes as any surprise if I tell you that the article above was written by ChatGPT. Pretty impressive, right?
But there are undoubtedly deeper thoughts, born not of memory but of ethical analysis, of questioning who we are and where we are going as a society, that could only be generated by humans.
In this episode I share with you, in podcast form, what the thinkers Yuval Harari, Tristan Harris and Aza Raskin say about AI in an article originally published in The New York Times: "Artificial intelligence is taking over the master key of civilization." If you don't want to read the article, listen to the podcast version.
Please remember to listen and subscribe on the platform you like best:
Apple Podcast
Spotify
Google Podcast
To participate, write your comments to me at santiagorios@milpalabras.com.co
Resources recommended in this podcast:
Article "Artificial intelligence is taking over the master key of civilization"
You may want to listen to the "exclusive interview" I had with ChatGPT, or learn how AI supports communication management: https://www.milpalabras.com/213-la-inteligencia-artificial-en-la-comunicacion/
Subscribe to the Mil Palabras podcast at www.milpalabras.com
Download the FREE ebook "Cómo Crear un Podcast Corporativo": https://milpalabras.com.co/
You may want to listen to the previous episode: https://www.spreaker.com/user/5366173/cuales-son-los-riesgos-de-la-inteligenci
OTHER RECOMMENDED PODCASTS FROM OUR NETWORK
Experiencia Tech. The voices of the leaders who make digital evolution and transformation possible. Success stories, innovation, new business models and practical technology solutions to grow companies. https://open.spotify.com/show/77wLRAuRqZMuIiPcaBNHsJ
Historias que Nutren. Conversations with professionals who have something to nourish your life personally, professionally, spiritually and physically. https://milpalabras.com.co/podcast/historias-que-nutren/
Somos Canciones. Fun, personal interviews and stories with people who love music and know music (full songs play alongside the stories). https://open.spotify.com/show/0ILLNacYnuNnkgcELVARu6?si=7d7e070863104550
Logística que Trasciende. Here you will find the voices of the logistics sector, with the best practices and stories that have contributed to the economic growth of industries, businesses and nations. https://milpalabras.com.co/podcast/logistica-que-trasciende/
Ideas Sin Editar. Reflections, opinions and interesting anecdotes about "anything," broadcast live and, of course, unedited. https://open.spotify.com/show/3MOl4r609FNJMd3urCUdOh?si=b8b00cbb3d044206
De Vuelta por San Ignacio. Talks where you will learn the history and culture of one of Medellín's emblematic places: the San Ignacio District. https://milpalabras.com.co/podcast/de-vuelta-por-san-ignacio/
Keywords: what are the risks of artificial intelligence, AI, artificial intelligence, ChatGPT, podcast, corporate podcast, organizational communication, human resources, professional development, personal development, effective communication, Santiago Ríos, Mil Palabras
Despite our serious concerns about the pace of deployment of generative artificial intelligence, we are not anti-AI. There are uses that can help us better understand ourselves and the world around us. Your Undivided Attention co-host Aza Raskin is also co-founder of Earth Species Project, a nonprofit dedicated to using AI to decode non-human communication. ESP is developing this technology both to shift the way that we relate to the rest of nature, and to accelerate conservation research.
Significant recent breakthroughs in machine learning have opened ways to encode both human languages and map out patterns of animal communication. The research, while slow and incredibly complex, is very exciting. Picture being able to tell a whale to dive to avoid ship strikes, or to forge cooperation in conservation areas. These advances come with their own complex ethical issues. But understanding non-human languages could transform our relationship with the rest of nature and promote a duty of care for the natural world. In a time of such deep division, it's comforting to know that hidden underlying languages may potentially unite us. When we study the patterns of the universe, we'll see that humanity isn't at the center of it.
Corrections:
Aza refers to the founding of Earth Species Project (ESP) in 2017. The organization was established in 2018.
When offering examples of self-awareness in animals, Aza mentions lemurs that get high on centipedes. They actually get high on millipedes.
RECOMMENDED MEDIA
Using AI to Listen to All of Earth's Species - An interactive panel discussion hosted at the World Economic Forum in San Francisco on October 25, 2022. Featuring ESP President and Cofounder Aza Raskin; Dr. Karen Bakker, Professor at UBC and Harvard Radcliffe Institute Fellow; and Dr. Ari Friedlaender, Professor at UC Santa Cruz
What A Chatty Monkey May Tell Us About Learning to Talk - The gelada monkey makes a gurgling sound that scientists say is close to human speech
Lemurs May Be Making Medicine Out of Millipedes - Red-fronted lemurs appear to use plants and other animals to treat their afflictions
Fathom on Apple TV+ - Two biologists set out on an undertaking as colossal as their subjects: deciphering the complex communication of whales
Earth Species Project is Hiring a Director of Research - ESP is looking for a thought leader in artificial intelligence with a track record of managing a team of researchers
RECOMMENDED YUA EPISODES
The Three Rules of Humane Tech
The AI Dilemma
Synthetic Humanity: AI & What's At Stake
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Existing artificial intelligence capabilities already pose catastrophic risks to a functional society. Thanks to Tristan Harris and Aza Raskin of "The Social Dilemma" fame, I can show you how A.I. companies are caught in a race to deploy as quickly as possible without adequate safety measures. A.I. is at a point where it can read our minds, groom our children and blackmail our national leaders. It must be controlled before it's too late.
The Apple subscribe link is here: https://podcasts.apple.com/us/podcast/stinchfield/id1648560956
The Spotify subscribe link is here: https://open.spotify.com/show/7y6jgJ3af2ymyDQ79Nk0yv
You can also watch on Rumble: https://rumble.com/c/GrantStinchfield
Stinchfield website: https://grantstinchfield.com/
Air Date 4/26/2023
Today, we take a look at the trials and tribulations facing the youth today as men and boys are being surpassed academically by women and girls, and girls are suffering disproportionately under the weight of the toxic forces of social media.
Be part of the show! Leave us a message or text at 202-999-3991 or email Jay@BestOfTheLeft.com
Transcript
BestOfTheLeft.com/Support (Get AD FREE Shows and Bonus Content)
Join our Discord community!
OUR AFFILIATE LINKS: ExpressVPN.com/BestOfTheLeft GET INTERNET PRIVACY WITH EXPRESS VPN!
SHOW NOTES
Ch. 1: Senator Josh Hawley on masculinity - Axios on HBO - Air Date 11-7-21 - Axios' Mike Allen sits down with Missouri Senator Josh Hawley to discuss the senator's comments on masculinity.
Ch. 2: Male inequality, explained by an expert | Richard Reeves - Big Think - Air Date 1-4-23 - Modern males are struggling. Author Richard Reeves outlines the three major issues boys and men face and shares possible solutions.
Ch. 3: Liz Plank & Richard Reeves Debate Gender Inequality - The Man Enough Podcast - Air Date 4-5-23 - Brookings Institution scholar and acclaimed author Richard Reeves tackles the pressing issue of young men falling behind, on a mission to uplift them without jeopardizing women's rights.
Ch. 4: Teenage girls experiencing record high levels of sadness, violence and trauma, CDC says - PBS Newshour - Air Date 2-20-23 - In 2021, the CDC saw an increase in mental health challenges across the board, but it's girls in the U.S. that are engulfed in a wave of sadness, violence, and trauma. Stephanie Sy spoke with Sharon Hoover about the survey.
Ch. 5: The Number One Reason This Generation Is Struggling | Scott Galloway Part 1 - The Diary of a CEO - Air Date 10-27-22 - Scott Galloway, or 'Prof G' to his fans, is one of the most influential business thought leaders in the world. Host of The Prof G Show, one of the most popular business podcasts in America.
Ch. 6: What the Andrew Tate phenomenon reveals about our society | Richard Reeves - Keep Talking - Air Date 11-13-22
Ch. 7: Social media companies face legal scrutiny over deteriorating mental health among teens - PBS Newshour - Air Date 2-14-23 - A national survey by the CDC sounded a new alarm about teens in crisis. It shows nearly 30% of teenage girls said they considered dying by suicide, and three out of five girls said they felt persistently sad or hopeless.
Ch. 8: Are the Kids Alright — with Jonathan Haidt Part 1 - Your Undivided Attention - Air Date 10-27-20 - NYU social psychologist Jonathan Haidt has spent the last few years trying to figure out why, working with fellow psychologist Jean Twenge, and he believes social media is to blame.
Ch. 9: The Number One Reason This Generation Is Struggling | Scott Galloway Part 2 - The Diary of a CEO - Air Date 10-27-22
Ch. 10: Are the Kids Alright — with Jonathan Haidt Part 2 - Your Undivided Attention - Air Date 10-27-20
MEMBERS-ONLY BONUS CLIP(S)
Ch. 11: Spotlight — Addressing the TikTok Threat - Your Undivided Attention - Air Date 9-8-22 - This week on Your Undivided Attention, we bring you a bonus episode about TikTok. Co-hosts Tristan Harris and Aza Raskin explore the nature of the TikTok threat, and how we might address it.
Ch. 12: Male inequality, explained by an expert | Richard Reeves Part 2 - Big Think - Air Date 1-4-23
VOICEMAILS
Ch. 13: Continuing discussion of J.K. Rowling episode - VoiceMailer Boris
Ch. 14: Marking an isolating Black leaders - V from Central New York
FINAL COMMENTS
Ch. 15: Final comments on the fundamental disconnections that tend to drive modern debate
MUSIC (Blue Dot Sessions)
Produced by Jay! Tomlinson
Visit us at BestOfTheLeft.com
Listen Anywhere! BestOfTheLeft.com/Listen
Follow at Twitter.com/BestOfTheLeft
Like at Facebook.com/BestOfTheLeft
Contact me directly at Jay@BestOfTheLeft.com
In the episode "A.I. News: Open Letter To Pause, Existential Risks to Humanity, Its Supporters & Deniers (S5, E8)," I discuss the recent Open Letter to Pause Giant AI Experiments recently composed by the Future of Life Institute, and present the arguments for taking the risk analysis more seriously. With signers of the letter including Elon Musk, Emad Mostaque, Steve Wozniak, Max Tegmark, Tristan Harris and Aza Raskin, I share some of their positions including some recent articles, podcasts and videos discussing the dilemma.#deeplearning #AIrevolution #humanextinction #generativeAI #blackbox #don'tlookup #LivBoeree #Danielschmachtenberger #tristanharris #azaraskin #maxtegmark #eliezeryudkowsky #centerforhumanetechnology #lexfridman #Moloch #machinelearning #AGIReferences:Pause Giant AI Experiments: An Open Letterhttps://futureoflife.org/open-letter/pause-giant-ai-experiments/The A.I. Dilemma - March 9, 2023 by Center for Humane Technologyhttps://youtu.be/xoVJKj8lcNQMeditations On Moloch By Scott Alexanderhttps://slatestarcodex.com/2014/07/30/meditations-on-moloch/Misalignment, AI & Moloch | Daniel Schmachtenberger and Liv Boeree https://youtu.be/KCSsKV5F4xcPausing AI Developments Isn't Enough. We Need to Shut it All Downhttps://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/Live: Eliezer Yudkowsky - Is Artificial General Intelligence too Dangerous to Build?https://www.youtube.com/live/3_YX6AgxxYw?feature=shareThe 'Don't Look Up' Thinking That Could Doom Us With AIhttps://time.com/6273743/thinking-that-could-doom-us-with-ai/Max Tegmark: The Case for Halting AI Development | Lex Fridman Podcast #371https://youtu.be/VcVfceTsD0APlease visit my website at: http://www.notascrazyasyouthink.com/Don't forget to subscribe to the Not As Crazy As You Think YouTube channel @SicilianoJenConnect:Instagram: @ jengaita LinkedIn: @ jensicilianoTwitter: @ jsiciliano
When it comes to AI, what kind of regulations might we need to address this rapidly developing new class of technologies? What makes regulating AI and runaway tech in general different from regulating airplanes, pharmaceuticals, or food? And how can we ensure that issues like national security don't become a justification for sacrificing civil rights?
Answers to these questions are playing out in real time. If we wait for more AI harms to emerge before proper regulations are put in place, it may be too late. Our guest Marietje Schaake was at the forefront of crafting tech regulations for the EU. In spite of AI's complexity, she argues there is a path forward for the U.S. and other governing bodies to rein in companies that continue to release these products into the world without oversight.
Correction: Marietje said antitrust laws in the US were a century ahead of those in the EU. Competition law in the EU was enacted as part of the Treaty of Rome in 1957, almost 70 years after the US.
RECOMMENDED MEDIA
The AI Dilemma - Tristan Harris and Aza Raskin's presentation on existing AI capabilities and the catastrophic risks they pose to a functional society. Also available in podcast format (linked below)
The Wisdom Gap - This blog post from the Center for Humane Technology describes the gap between the rising interconnected complexity of our problems and our ability to make sense of them
The EU's Digital Services Act (DSA) & Digital Markets Act (DMA) - The two pieces of legislation aim to create safer and more open digital spaces for individuals and businesses alike
RECOMMENDED YUA EPISODES
Digital Democracy is Within Reach with Audrey Tang
The AI Dilemma
The Three Rules of Humane Tech
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
In our previous episode, we shared a presentation Tristan and Aza recently delivered to a group of influential technologists about the race happening in AI. In that talk, they introduced the Three Rules of Humane Technology. In this Spotlight episode, we're taking a moment to explore these three rules more deeply in order to clarify what it means to be a responsible technologist in the age of AI.
Correction: Aza mentions infinite scroll being in the pockets of 5 billion people, implying that there are 5 billion smartphone users worldwide. The number of smartphone users worldwide is actually 6.8 billion now.
RECOMMENDED MEDIA
We Think in 3D. Social Media Should, Too - Tristan Harris writes about a simple visual experiment that demonstrates the power of one's point of view
Let's Think About Slowing Down AI - Katja Grace's piece about how to avert doom by not building the doom machine
If We Don't Master AI, It Will Master Us - Yuval Harari, Tristan Harris and Aza Raskin call upon world leaders to respond to this moment at the level of challenge it presents in this New York Times opinion piece
RECOMMENDED YUA EPISODES
The AI Dilemma
Synthetic Humanity: AI & What's At Stake
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
We give ourselves a concussion by talking about yet another opinion essay about AI written by three “experts” and published by a major newspaper. How is AI doing neuro-linguistic programming? What happens when your analysis is all idealism, no materialism, all superstructure, no base? Will AI soon use its godlike powers to hack civilization and create militant groups at its command just by telling us the right story? Wait, you're telling me we have to take this shit seriously because the authors have direct lines of influence to people who have actual political power and control capital? Goddamnit. Article we discuss ••• You Can Have the Blue Pill or the Red Pill, and We're Out of Blue Pills | Yuval Harari, Tristan Harris and Aza Raskin https://www.nytimes.com/2023/03/24/opinion/yuval-harari-ai-chatgpt.html Subscribe to hear more analysis and commentary in our premium episodes every week! https://www.patreon.com/thismachinekills Hosted by Jathan Sadowski (www.twitter.com/jathansadowski) and Edward Ongweso Jr. (www.twitter.com/bigblackjacobin). Production / Music by Jereme Brown (www.twitter.com/braunestahl)
Whenever I'm out doing field work or on a hike, I've not only got my eyes wide open, but my ears too. There's a lot going on in a forest or under the sea - the sounds of nature. So many of those sounds in nature are about communication. Personally, I love to chat with ravens. I like to think that we have lovely conversations. I know I'm fooling myself... but there's something happening that might change that. There's a tech company out of Silicon Valley that is hoping to make that dream of communicating with animals a reality. Earth Species Project is a non-profit working to develop machine learning that can decode animal language. Basically, artificial intelligence that can speak whale or monkey... or perhaps even raven?
So we are doing something a bit different on The Wild today - fun to mix things up now and then. For this episode I'm not outdoors among the wild creatures, but in my home studio, talking with two fascinating people about the latest developments in technology that are being created to talk to wild animals. We'll also explore the ethics of this technology. What are the downsides to playing the role of Digital Dr. Dolittle?
Guests:
Aza Raskin, co-founder of Earth Species Project and co-founder of the Center for Humane Technology.
Karen Bakker, professor at the University of British Columbia, where she researches digital innovation and environmental governance. She also leads the Smart Earth Project.
Follow us on Instagram @thewildpod and @chrismorganwildlife
You may have heard about the arrival of GPT-4, OpenAI's latest large language model (LLM) release. GPT-4 surpasses its predecessor in terms of reliability, creativity, and ability to process intricate instructions. It can handle more nuanced prompts compared to previous releases, and is multimodal, meaning it was trained on both images and text. We don't yet understand its capabilities - yet it has already been deployed to the public.
At Center for Humane Technology, we want to close the gap between what the world hears publicly about AI from splashy CEO presentations and what the people who are closest to the risks and harms inside AI labs are telling us. We translated their concerns into a cohesive story and presented the resulting slides to heads of institutions and major media organizations in New York, Washington DC, and San Francisco. The talk you're about to hear is the culmination of that work, which is ongoing.
AI may help us achieve major advances like curing cancer or addressing climate change. But the point we're making is: if our dystopia is bad enough, it won't matter how good the utopia we want to create. We only get one shot, and we need to move at the speed of getting it right.
RECOMMENDED MEDIA
AI 'race to recklessness' could have dire consequences, tech experts warn in new interview - Tristan Harris and Aza Raskin sit down with Lester Holt to discuss the dangers of developing AI without regulation
The Day After (1983) - This made-for-television movie explored the effects of a devastating nuclear holocaust on small-town residents of Kansas
The Day After discussion panel - Moderated by journalist Ted Koppel, a panel of present and former US officials, scientists and writers discussed nuclear weapons policies live on television after the film aired
Zia Cora - Submarines - "Submarines" is a collaboration between musician Zia Cora (Alice Liu) and Aza Raskin. The music video was created by Aza in less than 48 hours using AI technology and published in early 2022
RECOMMENDED YUA EPISODES
Synthetic Humanity: AI & What's At Stake
A Conversation with Facebook Whistleblower Frances Haugen
Two Million Years in Two Hours: A Conversation with Yuval Noah Harari
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
For many of us, the term artificial intelligence conjures up images of science fiction movies. But what is it really? As AI technology becomes a bigger part of our world, Lester Holt sits down with Tristan Harris and Aza Raskin, co-founders of the Center for Humane Technology, to talk about how it works.
It may seem like the rise of artificial intelligence, and the increasingly powerful large language models you may have heard of, is moving really fast… and it IS. But what's coming next is when we enter synthetic relationships with AI that could come to feel just as real and important as our human relationships... and perhaps even more so. In this episode of Your Undivided Attention, Tristan and Aza reach beyond the moment to talk about this powerful new AI, and the new paradigm of humanity and computation we're about to enter. This is a structural revolution that affects way more than text, art, or even Google search. There are huge benefits to humanity, and we'll discuss some of those. But we also see that as companies race to develop the best synthetic relationships, we are setting ourselves up for a new generation of harms made exponentially worse by AI's power to predict, mimic, and persuade. It's obvious we need ways to steward these tools ethically. So Tristan and Aza also share their ideas for creating a framework for AIs that will help humans become MORE humane, not less.

RECOMMENDED MEDIA
Cybernetics: or, Control and Communication in the Animal and the Machine by Norbert Wiener - A classic and influential work that laid the theoretical foundations for information theory
New Chatbots Could Change the World. Can You Trust Them? - The New York Times addresses misinformation and how Siri, Google Search, online marketing and your child's homework will never be the same
Out of One, Many: Using Language Models to Simulate Human Samples by Lisa P. Argyle, Ethan C. Busby, Nancy Fulda, Joshua Gubler, Christopher Rytting, and David Wingate - This paper proposes and explores the possibility that language models can be studied as effective proxies for specific human sub-populations in social science research
Earth Species Project - Co-founded by Aza Raskin, a non-profit dedicated to using artificial intelligence to decode non-human communication
Her (2013) - A science-fiction romantic drama film written, directed, and co-produced by Spike Jonze
What A Chatty Monkey May Tell Us About Learning To Talk - NPR explores the fascinating world of gelada monkeys and the way they communicate

RECOMMENDED YUA EPISODES
How Political Language is Engineered with Drew Westen & Frank Luntz
What is Humane Technology?
Down the Rabbit Hole by Design with Guillaume Chaslot
Air Date 9/17/2022

Today, we take a look at the way terrible patterns of the past like colonialism, racism, propaganda, feudalism, and abuse of corporate monopoly power are recreating and re-entrenching themselves in the digital world.

Be part of the show! Leave us a message at 202-999-3991 or email Jay@BestOfTheLeft.com
Transcript
BestOfTheLeft.com/Support (Get AD FREE Shows and Bonus Content)
Join our Discord community!

OUR AFFILIATE LINKS:
ExpressVPN.com/BestOfTheLeft - GET INTERNET PRIVACY WITH EXPRESS VPN!
BestOfTheLeft.com/Libro - SUPPORT INDIE BOOKSHOPS, GET YOUR AUDIOBOOK FROM LIBRO!

SHOW NOTES
Ch. 1: Rise Of Digital Oligarchy w/ Jillian York - The Majority Report - Air Date 8-4-22
Emma hosts Jillian York, Director of International Freedom of Expression at the Electronic Frontier Foundation, to discuss her recent book Silicon Values: The Future of Free Speech Under Surveillance Capitalism.
Ch. 2: How Social Media Profits Off Your Anger - Wisecrack - Air Date 8-26-22
Does the internet run on rage? If you've spent any time online, you know that anger is to the internet as cake is to a birthday party. But why have the interwebs become a place of such division and rage? It's complicated, but we'll explain.
Ch. 3: Using AI to Say the Word - This Machine Kills - Air Date 9-2-22
We discuss a new startup doing the rounds called Sanas that uses AI "accent translation" to make "non-white" call center workers speak English with a white American accent. This is some real retro throwback tech solutionism.
Ch. 4: Addressing the TikTok Threat - Your Undivided Attention - Air Date 9-8-22
This week on Your Undivided Attention, we bring you a bonus episode about TikTok. Co-hosts Tristan Harris and Aza Raskin explore the nature of the TikTok threat, and how we might address it.
Ch. 5: Edward Ongweso Jr: Peter Thiel & The Post-Capitalism of Tech's Far Right - The Arts of Travel - Air Date 6-27-21
We conclude our conversation with Vice's Edward Ongweso Jr. by looking at Peter Thiel's role in bankrolling Uber as the 'Tip of the Spear' for rolling back labor laws, workers' rights, and unions.
Ch. 6: Break Up Monopolies: Zephyr Teachout - Future Hindsight - Air Date 8-11-22
Zephyr Teachout is Senior Counsel for Economic Justice for the New York AG and a law professor at Fordham University. We revisit our conversation with her about her book, Break 'Em Up: Recovering Our Freedom from Big Ag, Big Tech, and Big Money.

MEMBERS-ONLY BONUS CLIP(S)
Ch. 9: The Digital Self, Web3 and reclaiming your online identity - Future Tense - Air Date 7-23-22
How is our sense of identity changing as our online and offline experiences increasingly merge? What grounds a person's online persona (or personas) to the physical world? And is such a tie important?
Ch. 8: Refusing the Everyday Fascism of Artificial Intelligence (ft. Dan McQuillan) - This Machine Kills - Air Date 8-25-22
We are joined by Dan McQuillan to discuss his great new book Resisting AI. "With analytical and moral clarity, McQuillan makes the case for recognizing the radical politics of AI and meeting its goose step march head-on."

FINAL COMMENTS
Ch. 12: Final comments on how to fix the internet for yourself

MUSIC (Blue Dot Sessions):
Opening Theme: Loving Acoustic Instrumental by John Douglas Orr
Voicemail Music: Low Key Lost Feeling Electro by Alex Stinnent
Activism Music: This Fickle World by Theo Bard (https://theobard.bandcamp.com/track/this-fickle-world)
Closing Music: Upbeat Laid Back Indie Rock by Alex Stinnent

Produced by Jay! Tomlinson
Visit us at BestOfTheLeft.com
Listen Anywhere! BestOfTheLeft.com/Listen
Follow at Twitter.com/BestOfTheLeft
Like at Facebook.com/BestOfTheLeft
Contact me directly at Jay@BestOfTheLeft.com
Imagine it's the Cold War. Imagine that the Soviet Union puts itself in a position to influence the television programming of the entire Western world - more than a billion viewers. While this might sound like science fiction, it's representative of the world we're living in, with TikTok being influenced by the Chinese Communist Party.

TikTok, the flagship app of the Chinese company Bytedance, surpassed Google and Facebook as the most popular site on the internet in 2021, and is expected to reach more than 1.8 billion users by the end of 2022. The Chinese government doesn't control TikTok, but it has influence over it. What are the implications of this influence, given that China is the main geopolitical rival of the United States?

This week on Your Undivided Attention, we bring you a bonus episode about TikTok. Co-hosts Tristan Harris and Aza Raskin explore the nature of the TikTok threat, and how we might address it.

RECOMMENDED MEDIA
Pew Research Center's "Teens, Social Media and Technology 2022" - Pew's recent study on how TikTok has established itself as one of the top online platforms for U.S. teens
https://www.pewresearch.org/internet/2022/08/10/teens-social-media-and-technology-2022/
Axios' "Washington turns up the heat on TikTok" - Article on recent Congressional responses to the threat of TikTok
https://www.axios.com/2022/07/07/congress-tiktok-china-privacy-data?utm_source=substack&utm_medium=email
Felix Krause on TikTok's keystroke tracking - A revelation that TikTok has code to observe keypad input and all taps
https://twitter.com/KrauseFx/status/1560372509639311366

RECOMMENDED YUA EPISODES
A Fresh Take on Tech in China with Rui Ma and Duncan Clark - https://www.humanetech.com/podcast/44-a-fresh-take-on-tech-in-china
A Conversation with Facebook Whistleblower Frances Haugen - https://www.humanetech.com/podcast/42-a-conversation-with-facebook-whistleblower-frances-haugen
From Russia with Likes (Part 1). Guest: Renée DiResta - https://www.humanetech.com/podcast/5-from-russia-with-likes-part-1
From Russia with Likes (Part 2). Guest: Renée DiResta - https://www.humanetech.com/podcast/6-from-russia-with-likes-part-2

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
At Center for Humane Technology, we often talk about multipolar traps - which arise when individuals have an incentive to act in ways that are beneficial to them in the short term, but detrimental to the group in the long term. Think of social media companies that compete for our attention: when TikTok introduces an even more addictive feature, Facebook and Twitter have to mimic it in order to keep up, sending us all on a race to the bottom of our brainstems.

Intervening at the level of multipolar traps has extraordinary leverage. One such intervention is the Long-Term Stock Exchange - a U.S. national securities exchange serving companies and investors who share a long-term vision. Instead of asking public companies to pollute less or be less addictive while holding them accountable to short-term shareholder value, the Long-Term Stock Exchange creates a new playing field, one that incentivizes the creation of long-term stakeholder value.

This week on Your Undivided Attention, we're airing an episode of a podcast called ZigZag - a fellow member of the TED Audio Collective. In an exploration of how technology companies might transcend multipolar traps, we're sharing with you ZigZag's conversation with Long-Term Stock Exchange founder Eric Ries.

CORRECTION: In the episode, we say that TikTok has outcompeted Facebook, Instagram, and YouTube. In fact, TikTok has outcompeted Facebook, but not yet YouTube or Instagram - TikTok has 1 billion monthly users, while YouTube has 2.6 billion and Instagram has 2 billion. However, we can say that TikTok is on a path toward outcompeting YouTube and Instagram.

RECOMMENDED YUA EPISODES
An Alternative to Silicon Valley Unicorns with Mara Zepeda & Kate "Sassy" Sassoon: https://www.humanetech.com/podcast/54-an-alternative-to-silicon-valley-unicorns
A Problem Well-Stated Is Half-Solved with Daniel Schmachtenberger: https://www.humanetech.com/podcast/a-problem-well-stated-is-half-solved
Here's Our Plan And We Don't Know with Tristan Harris, Aza Raskin, and Stephanie Lepp: https://www.humanetech.com/podcast/46-heres-our-plan-and-we-dont-know
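To make the multipolar-trap idea above concrete, here is a minimal toy sketch (not from the episode; the platform choices and payoff numbers are invented purely for illustration): two competing platforms each decide whether to ship a more addictive feature, escalating wins attention share against a restrained rival, and the only stable outcome is the one where everyone ends up worse off.

```python
# Toy illustration of a multipolar trap: each platform's locally rational
# choice ("escalate" = ship the more addictive feature) leaves the group
# worse off than mutual restraint. All payoff numbers are hypothetical.
from itertools import product

# Per-round attention payoff for (my_choice, rival_choice).
PAYOFF = {
    ("restrain", "restrain"): 3,  # healthy equilibrium: decent attention for both
    ("escalate", "restrain"): 5,  # escalating against a restrained rival wins share
    ("restrain", "escalate"): 1,  # restraining against an escalating rival loses share
    ("escalate", "escalate"): 2,  # both escalate: edge cancels out, users worse off
}

def best_response(rival_choice: str) -> str:
    """Return the choice that maximizes my payoff given the rival's choice."""
    return max(("restrain", "escalate"), key=lambda mine: PAYOFF[(mine, rival_choice)])

def equilibria() -> list[tuple[str, str]]:
    """Return strategy pairs from which neither platform wants to deviate."""
    return [
        (a, b)
        for a, b in product(("restrain", "escalate"), repeat=2)
        if best_response(b) == a and best_response(a) == b
    ]

if __name__ == "__main__":
    print("Stable outcomes:", equilibria())                                  # [('escalate', 'escalate')]
    print("Joint payoff if both restrain:", 2 * PAYOFF[("restrain", "restrain")])  # 6
    print("Joint payoff at the trap:", 2 * PAYOFF[("escalate", "escalate")])        # 4
```

Read this way, an intervention like the Long-Term Stock Exchange is an attempt to change the payoff table itself, so that restraint stops being a losing move for any individual company.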
Why isn't Twitter doing more to get bots off its platform? Why isn't Uber taking better care of its drivers? What if... they can't?

Venture-capital-backed companies like Twitter and Uber are held accountable for maximizing returns to investors. If and when they become public companies, they become accountable for maximizing returns to shareholders. They've promised Wall Street outsized returns - which means Twitter can't purge bots if doing so would significantly lower its user count and, in turn, its advertising revenue, and Uber can't treat its drivers like employees if doing so cuts into profits.

But what's the alternative? What might it look like to design an ownership and governance model that incentivizes a technology company to serve all of its stakeholders over the long term, and primarily the stakeholders who create value?

This week on Your Undivided Attention, we're talking with two experts on creating the conditions for humane business, and in turn, for humane technology: Mara Zepeda and Kate "Sassy" Sassoon of the Zebras Unite Co-op. Zebras Unite is a member-owned co-operative that's creating the capital, culture, and community to power a more just and inclusive economy. The Zebras Unite Co-op serves a community of over 6,000 members in about 30 chapters across 6 continents. Mara is their Managing Director, and Kate is their Director of Cooperative Membership.

Two corrections:
The episode says that the failure rate of startups is 99%. The actual rate is closer to 90%.
The episode says that in 2017, Twitter reported 350 million users on its platform. The actual number reported was 319 million users.

RECOMMENDED MEDIA
Zebras Fix What Unicorns Break - A seminal 2017 article by Zebras Unite's co-founders, which kicked off the movement and distinguished between zebras and unicorns.
Meetup to the People - Zebras Unite's 2019 thought experiment of exiting Meetup to community
Zebras Unite Crowdcast Channel - Where you can find upcoming online events, as well as recordings of previous events.

RECOMMENDED YUA EPISODES
A Renegade Solution to Extractive Economics with Kate Raworth: https://www.humanetech.com/podcast/29-a-renegade-solution-to-extractive-economics
Bonus - A Bigger Picture on Elon & Twitter: https://www.humanetech.com/podcast/bigger-picture-elon-twitter
Here's Our Plan And We Don't Know with Tristan Harris, Aza Raskin, and Stephanie Lepp: https://www.humanetech.com/podcast/46-heres-our-plan-and-we-dont-know

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Tristan Harris is a former Google design ethicist, co-founder and president of the Center for Humane Technology, and co-host of the Center for Humane Technology's "Your Undivided Attention" podcast with Aza Raskin. Daniel Schmachtenberger is a founding member of The Consilience Project, aimed at improving public sensemaking and dialogue.
Called the “closest thing Silicon Valley has to a conscience” by The Atlantic magazine, Tristan Harris spent three years as a Google Design Ethicist developing a framework for how technology should “ethically” steer the thoughts and actions of billions of people from screens. He is now co-founder and president of the Center for Humane Technology, whose mission is to reverse ‘human downgrading' and re-align technology with humanity. Additionally, he is co-host of the Center for Humane Technology's Your Undivided Attention podcast with co-founder Aza Raskin.