(0:00) Intro
(1:49) About the podcast sponsor: The American College of Governance Counsel
(2:36) Introduction by Professor Anat Admati, Stanford Graduate School of Business. Read the event coverage from Stanford's CASI.
(4:14) Start of Interview
(4:45) What inspired Karen to write this book and how she got started with journalism.
(8:00) OpenAI's Nonprofit Origin Story
(8:45) Sam Altman and Elon Musk's Collaboration
(10:39) The Shift to For-Profit
(12:12) On the original split between Musk and Altman over control of OpenAI
(14:36) The Concept of AI Empires
(18:04) On the concept of "benefit to humanity" and OpenAI's mission "to ensure that AGI benefits all of humanity"
(20:30) On Sam Altman's Ouster and OpenAI's Boardroom Drama (Nov 2023): "Doomers vs Boomers"
(26:05) Investor Dynamics Post-Ouster of Sam Altman
(28:21) Prominent Departures from OpenAI (e.g. Elon Musk, Dario Amodei, Ilya Sutskever, Mira Murati)
(30:55) The Geopolitics of AI: U.S. vs. China
(32:37) The "What about China" card used by US companies to ward off regulation
(34:26) "Scaling at all costs is not leading us in a good place"
(36:46) Karen's preference for ethical AI development: "I really want there to be more participatory AI development. And I think about the full supply chain of AI development when I say that."
(39:53) Her biggest hope and fear for the future: "the greatest threat of these AI empires is the erosion of democracy."
(43:34) The case of Chilean Community Activism and Empowerment
(47:20) Recreating human intelligence and the example of Joseph Weizenbaum, MIT (Computer Power and Human Reason, 1976)
(51:15) OpenAI's current AI research capabilities: "I think it's asymptotic because they have started tapping out of their scaling paradigm"
(53:26) The state (and importance) of open source development of AI: "We need things to be more open"
(55:08) The Bill Gates demo of ChatGPT acing the AP Biology test
(58:54) Funding academic AI research and the public policy question on the role of Government
(1:01:11) Recommendations for Startups and Universities

Karen Hao is the author of Empire of AI (Penguin Press, May 2025) and an award-winning journalist covering the intersections of AI & society.

You can follow Evan on social media at:
X: @evanepstein
LinkedIn: https://www.linkedin.com/in/epsteinevan/
Substack: https://evanepstein.substack.com/

To support this podcast you can join as a subscriber of the Boardroom Governance Newsletter at https://evanepstein.substack.com/

Music/Soundtrack (found via Free Music Archive): Seeing The Future by Dexter Britain is licensed under an Attribution-Noncommercial-Share Alike 3.0 United States License
In the 1960s, Joseph Weizenbaum developed the computer program Eliza, which was considered a pioneer of natural language processing by computer. He went on to become a sharp critic…
Artificial intelligence (AI) has become hard to imagine today's world without, and that holds not only for specialist circles but for our everyday lives. Strictly speaking, however, the way we use the term AI when referring to such applications and systems is not correct: what we actually mean is machine learning. In episode 41 of Informatik für die moderne Hausfrau, we look at a different kind of AI, namely symbolic artificial intelligence (often also called an "expert system"). We examine the principle behind it and how symbolic AI and machine learning differ.

Information about the computer Deep Blue, which defeated world chess champion Garry Kasparov in the 1990s, can be found here: https://www.ibm.com/history/deep-blue

You can learn more about the book "Descartes' Dream: The World According to Mathematics" mentioned in this episode here: https://www.goodreads.com/book/show/9872660-descartes-dream

A scan of the book in the Internet Archive can be found here: https://archive.org/details/descartesdreamwo00davi/page/n3/mode/2up

This episode refers to four other episodes:
- Episode 11 - Artificial Intelligence and Sustainability (interview with Nora Gourmelon)
- Episode 18 - Ida Rhodes and the Dream of a Computerized Future
- Episode 22 - How Ethics Came to Computer Science: Joseph Weizenbaum, Computing Pioneer and Computer Critic
- Episode 32 - Adversarial Attacks: How AI Systems Can Be Tricked

All information about the podcast can be found on its website, https://www.informatik-hausfrau.de. To get in touch, feel free to email me at mail@informatik-hausfrau.de or reach out via social media. On Instagram and Bluesky, the podcast can be found under the handle @informatikfrau (or @informatikfrau.bsky.social).
If you enjoy this podcast, please subscribe and leave a positive rating or a short review to help it gain more visibility. You can write reviews on Apple Podcasts, for example, or on panoptikum.social. If you would like to support the production of the podcast financially, you can do so via the platform Steady. More information is available here: https://steadyhq.com/de/informatikfrau If you would rather 'toss something into the hat' another way, you can do so (even without registering) via the platform Ko-fi: https://ko-fi.com/leaschoenberger This podcast is supported by the Kulturbüro of the City of Dortmund.
Benjamin and Christiane from "Autonomie & Algorithmen" join us as guests, and we ask ourselves: why, and in what ways, do we read human qualities into artificial systems such as large language models? We clarify what the term anthropomorphization means, look into cultural, technological, and scientific history, and trace the psychological foundations. With Daniel Dennett's "intentional stance" I present a philosophical theory of anthropomorphization, and Christiane presents several psychological studies that strategically narrow down the question of "why". At the end we ask about the moral of the story: should AI systems, programs, computers, and robots be designed to be human-like?

Sources:
Autonomie und Algorithmen: https://autonomie-algorithmen.letscast.fm/
I explored the history of the artificial human here: https://perspektiefe.privatsprache.de/der-geist-in-der-maschine/
My episode on the android Data: https://perspektiefe.privatsprache.de/the-measure-of-a-man-die-philosophie-von-star-trek/
Daniel Dennett: The Intentional Stance: https://amzn.to/4jTk30j *
The intentional stance in theory and practice: https://www.researchgate.net/profile/Daniel-Dennett/publication/271180035_The_Intentional_Stance/links/5f3d3b01a6fdcccc43d36860/The-Intentional-Stance.pdf?__cf_chl_rt_tk=bBjx1ddFsxZJuACwVDbqmVMInS7vJnRXqyEoNxptu0I-1739429482-1.0.1.1-aChSHpHXHglMNSA.7vG24WbtILS87p2TmOfxv9ywH_w
Karel Capek (1922). Werstands Universal Robots. Czech original; public-domain German translation at: https://www.hs-augsburg.de/~harsch/germanica/Chronologie/20Jh/Pick/pic_wurv.html
Harald Salfellner (2019). Der Prager Golem - Jüdische Sagen aus dem Ghetto. https://amzn.to/4aXv0K1 *
Alan Turing (1950). Computing Machinery and Intelligence. Mind: A Quarterly Review of Psychology and Philosophy, 59(236), 433-460. https://doi.org/10.1093/mind/LIX.236.433
Joseph Weizenbaum (1966). ELIZA—a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36-45. https://doi.org/10.1145/365153.365168
Valentino Braitenberg (1986). Vehicles: Experiments in Synthetic Psychology. MIT Press. http://cognaction.org/cogs105/readings/braitenberg1.pdf
Heider, F., & Simmel, M. (1944). An experimental study of apparent behavior. The American Journal of Psychology, 57(2), 243-259. https://doi.org/10.2307/1416950
Reeves, B., & Nass, C. I. (1996). The media equation: How people treat computers, television, and new media like real people and places. Center for the Study of Language and Information; Cambridge University Press. https://psycnet.apa.org/record/1996-98923-000
Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864-886. https://doi.org/10.1037/0033-295X.114.4.864
Gazzola, V., Rizzolatti, G., Wicker, B., & Keysers, C. (2007). The anthropomorphic brain: The mirror neuron system responds to human and robotic actions. NeuroImage, 35(4), 1674-1684. https://doi.org/10.1016/j.neuroimage.2007.02.003
Roesler, E., Manzey, D., & Onnasch, L. (2021). A meta-analysis on the effectiveness of anthropomorphism in human-robot interaction. Science Robotics, 6(58), eabj5425. https://doi.org/10.1126/scirobotics.abj5425
Mandl, S., Laß, J.S., & Strobel, A. (2024). Associations between gender attributions and social perception of humanoid robots. In: Camarinha-Matos, L.M., Ortiz, A., Boucher, X., Barthe-Delanoë, A.M. (eds) Navigating Unpredictability: Collaborative Networks in Non-linear Worlds. PRO-VE 2024. IFIP Advances in Information and Communication Technology, vol 726. Springer, Cham. https://doi.org/10.1007/978-3-031-71739-0_6

*This is an affiliate link: if you buy the book, I receive a tiny commission and am happy. Or, in Amazon's wording: as an Amazon Associate I earn from qualifying purchases.
A chatbot is an artificial intelligence program that chats with you. The first chatbot appeared in the 1960s: it was called ELIZA and was developed by MIT professor Joseph Weizenbaum. Six decades later, we have ChatGPT, Google Gemini, and Microsoft Copilot joining the likes of Alexa and Siri.

While chatbots use pattern matching and natural language processing to interpret user inputs and choose the right responses from a set of pre-programmed options, AI agents engage in more complex, multi-step interactions that may span different platforms or services. AI agents take the next step beyond chatbots, moving the needle of engagement between humans and machines.

In this PodChats for FutureCIO, we are joined by David Irecki, chief technology officer for Asia Pacific and Japan at Boomi, to walk us through the realities of AI agents: what you and I need to know, separating hype from reality.

1. How would you define an AI agent as understood by most users in Asia (both tech and non-tech)?
2. What key factors differentiate AI agent hype from reality for organizations?
3. Since you brought it up, how can businesses ensure high data quality for effective AI deployment?
4. Scale-out issues were a recurring topic when RPA came out, and they may be part of the challenge with AI. From your experience, what common pitfalls should organizations avoid when adopting AI agents?
5. One of the promises of AI is improving productivity, at least this is the hope. In what ways can AI agents drive productivity and cost savings in specific industries? Any pitfalls to avoid?
6. How can AI agents enhance customer experiences across different sectors?
7. How can businesses maintain accountability and governance while leveraging AI agents effectively?
8. In 2025, what is your advice for how lines of business, finance, IT, and security can work together to design, architect, or transform processes so they can avail of the promise of AI agents without introducing unnecessary risks?
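The ELIZA lineage described above relied on exactly this kind of pattern matching against pre-programmed responses. As a toy illustration in Python (not ELIZA's actual script; the rules and reflection table here are invented for the example), the match-and-reflect mechanism looks roughly like this:

```python
import re

# Toy ELIZA-style chatbot: match the input against ordered patterns and
# reflect fragments of the user's own words back as canned responses.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]
DEFAULT = "Please tell me more."

# Swap first person for second person so reflected fragments read naturally.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return DEFAULT  # fallback when no pattern matches

print(respond("I need a holiday"))  # -> Why do you need a holiday?
print(respond("Hello there"))       # -> Please tell me more.
```

Production chatbot engines layer ranked keywords, memory, and richer fallbacks on top, but this scripted match-reflect-respond loop is the core that separates a pattern-matching chatbot from the multi-step AI agents discussed in the episode.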
We're experimenting and would love to hear from you!

In this episode of 'Discover Daily', we explore Anduril Industries' groundbreaking Arsenal-1 project, a $1 billion autonomous weapons facility in Ohio that promises to create over 4,000 high-paying jobs and revolutionize military defense manufacturing. The 5-million-square-foot facility, set to begin production in July 2026, will produce advanced autonomous systems including Fury drones, Roadrunner drones, and Barracuda missiles, while generating billions in economic output.

We also delve into OpenAI's development of revolutionary AI 'super agents' with PhD-level reasoning capabilities, as CEO Sam Altman prepares to brief U.S. government officials. These advanced AI systems represent a significant leap forward in autonomous task execution and problem-solving, positioning the United States at the forefront of AI innovation and economic growth in the global technology race.

The episode concludes with a fascinating exploration of ELIZA's resurrection, as the world's first chatbot returns on GitHub. Originally created in the 1960s by MIT professor Joseph Weizenbaum, ELIZA's restoration involved decoding 2,600 lines of historic code, now running on an emulated IBM 7094 computer. This preservation of AI history offers valuable insights into the evolution of conversational AI and its impact on modern technology.

From Perplexity's Discover Feed:
https://www.perplexity.ai/page/anduril-s-1b-autonomous-weapon-fTo5xssgQYeFhKcv0df8Yw
https://www.perplexity.ai/page/altman-to-brief-d-c-on-phd-lev-q1qYjPhrQhuyb3cwG8H2RA
https://www.perplexity.ai/page/world-s-first-chatbot-resurrec-tJfKapPMSWmDvjC334mfkQ

Perplexity is the fastest and most powerful way to search the web. Perplexity crawls the web and curates the most relevant and up-to-date sources (from academic papers to Reddit threads) to create the perfect response to any question or topic you're interested in. Take the world's knowledge with you anywhere.
Available on iOS and Android. Join our growing Discord community for the latest updates and exclusive content. Follow us on: Instagram, Threads, X (Twitter), YouTube, LinkedIn
Episode 285 New archaeological evidence from Iron Age Britain has shaken up long-held beliefs about the role of women in ancient civilisations. By studying the genes of the Durotriges tribe, who lived in Dorset 2000 years ago, researchers have discovered women were the centrepiece of Celtic society - supporting evidence that they had high status across Europe. Rachel Pope, Reader in European Prehistory at the University of Liverpool, explores the “jaw-dropping” findings. We also hear from author and archaeologist Rebecca Wragg-Sykes, who explains why we shouldn't be surprised that women in prehistory had such power and autonomy. Sudden swings in weather extremes caused by climate change could be to blame for the wildfires spreading across Los Angeles. The effect, known as “climate whiplash”, is becoming increasingly common and has wide-reaching implications, threatening crops, water supplies and more. And with the news that we breached 1.5C of global warming in 2024, we discuss what this all means for our climate goals. The world's first chatbot, ELIZA, has been resurrected. Created by MIT computer scientist Joseph Weizenbaum in the 1960s, it contains just 420 lines of code and is a very basic precursor to the likes of ChatGPT and Gemini. The team demonstrates its (limited) capabilities live on the show. They also discuss news of a woman who has an AI boyfriend on ChatGPT…that she has sex with. Hosts Rowan Hooper and Penny Sarchet discuss with guests Rachel Pope, Rebecca Wragg-Sykes, James Dinneen and Madeleine Cuff. To read more about these stories, visit https://www.newscientist.com/ Book your place on the Svalbard expedition here: https://www.newscientist.com/tours/new-scientist-arctic-cruise/ Read Maddie's article on the climate impacts of broken jet streams here: https://www.newscientist.com/article/mg26535264-100-is-a-broken-jet-stream-causing-extreme-weather-that-lasts-longer/ Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode, Piek talks with Marlies van Eck, principal consultant at Hooghiemstra & Partners and guest lecturer at Radboud Universiteit, about simplifying legislation, the lawsuit that Stichting Bescherming Privacybelangen is bringing against Google (join the lawsuit against Google here!), and the ways in which laws and regulations can and should relate to technology. Joeri joins the conversation.

More about Marlies.

The following names come up in this episode:
Olympe de Gouges (writer, feminist, revolutionary)
Jon Bing (professor of law)
Richard Susskind (writer, adviser, speaker)
Joseph Weizenbaum (professor, computer scientist)
Bernard Hermesdorf (legal and cultural historian, rector of Radboud Universiteit during World War II)
Marga Groothuis (assistant professor of constitutional and administrative law)

These publications are mentioned:
From Street-Level to System-Level Bureaucracies: How Information and Communication Technology Is Transforming Administrative Discretion and Constitutional Control - Mark Bovens & Stavros Zouridis (2002)
De Informatiecultus - Theodore Roszak (1986)
Nexus: A Brief History of Information Networks - Yuval Harari (2024)

And this video:
Waarom ben jij voor de overheid niets meer dan een nummertje? - Universiteit van Nederland (YouTube, 2018)

--------------------
This conversation was recorded on 1 November 2024.
Host: Piek Knijff
Editing: Team Filosofie in actie
Studio and post-production: De Podcasters
Tune: Uma van Wingerden
Artwork: Hans Bastmeijer - Servion Studio

Want to keep the conversation going? You can! Get in touch via info@filosofieinactie.nl. Want to know more about Filosofie in actie and our work? Visit our website, www.filosofieinactie.nl, or follow our LinkedIn page.
This and all episodes at: https://aiandyou.net/ . Digital Humanities sounds at first blush like a contradiction of terms: the intersection of our digital, technology-centric culture, and the humanities, like arts, literature, and philosophy. Aren't those like oil and water? But my guest illustrates just how important this discipline is by illuminating both of those fields from viewpoints I found fascinating and very different from what we normally encounter. Professor Caroline Bassett is the first Director of Cambridge Digital Humanities, an interdisciplinary research center in Cambridge University. She is a Fellow of Corpus Christi College and researches digital technologies and cultural change with a focus on AI. She co-founded the Sussex Humanities Lab and at Cambridge she inaugurated the Masters of Philosophy in Digital Humanities and last month launched the new doctoral programme in Digital Humanities. In part 1 we talk about what digital humanities is, how it intersects with AI, what science and the humanities have to learn from each other, Joseph Weizenbaum and the reactions to his ELIZA chatbot, Luddites, and how passively or otherwise we accept new technology. Caroline really made me see in particular how what she calls "technocratic rationality," a way of thinking borne out of a technological culture accelerated by AI, reduces the novelty which we can experience in the world in a way we should certainly preserve. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
Max Pearson presents a collection of the week's Witness History episodes. Our guest is Zoe Kleinman, the BBC's Technology Editor.

We start with the world's first general-purpose electronic computer, the ENIAC, built in 1946 by a team of female mathematicians including Kathleen Kay McNulty.

Then we hear about the man who invented the original chatbot, called Eliza, but didn't believe computers could achieve intelligence.

Following that, Dr Hiromichi Fujisawa describes how his team at Waseda University in Japan developed the first humanoid robot in 1973, called WABOT-1.

Staying in Japan, the engineer Masahiro Hara explains how he was inspired to design the first QR code by his favourite board game.

Finally, Thérèse Izay Kirongozi recounts how the death of her brother drove her to build robots that manage traffic in the Democratic Republic of Congo.

Contributors:
Zoe Kleinman - BBC Technology Editor
Gini Mauchly Calcerano - daughter of Kathleen Kay McNulty, who developed ENIAC
Miriam Weizenbaum - daughter of Joseph Weizenbaum, who built the Eliza chatbot
Dr Hiromichi Fujisawa - developer of the WABOT-1 robot
Masahiro Hara - inventor of the QR code
Thérèse Izay Kirongozi - engineer behind traffic robots

(Photo: Robots manage traffic in Kinshasa, Democratic Republic of Congo. Credit: Federico Scoppa/AFP via Getty Images)
Eliza is the name of a 1966 invention by the German-born scientist Joseph Weizenbaum that is said to be the first chatbot. Eliza worked by someone typing their feelings into a computer keyboard; the programme then repeated them back, often as a question.

Joseph's daughter, Miriam, tells Gill Kearsley about Eliza. We also hear from Joseph himself through archive interviews from Carnegie Mellon University in Pittsburgh, in the USA, that were recorded with Pamela McCorduck in 1975.

Eye-witness accounts brought to life by archive. Witness History is for those fascinated by the past. We take you to the events that have shaped our world through the eyes of the people who were there. For nine minutes every day, we take you back in time and all over the world, to examine wars, coups, scientific discoveries, cultural moments and much more. Recent episodes explore everything from football in Brazil, the history of the 'Indian Titanic' and the invention of air fryers, to Public Enemy's Fight The Power, subway art and the political crisis in Georgia. We look at the lives of some of the most famous leaders, artists, scientists and personalities in history, including: visionary architect Antoni Gaudi and the design of the Sagrada Familia; Michael Jordan and his bespoke Nike trainers; Princess Diana at the Taj Mahal; and Görel Hanser, manager of legendary Swedish pop band Abba, on the influence they've had on the music industry. You can learn all about fascinating and surprising stories, such as the time an Iraqi journalist hurled his shoes at the President of the United States in protest of America's occupation of Iraq; the creation of the Hollywood commercial that changed advertising forever; and the ascent of the first Aboriginal MP.

(Photo: Joseph Weizenbaum. Credit: Wolfgang Kunz/ullstein bild via Getty Images)
Ethical questions have been a relevant topic in computer science for many decades, not just since the recent advances in AI research. In the eleventh background episode of Informatik für die moderne Hausfrau, we look at the computing pioneer and computer critic Joseph Weizenbaum, who not only developed the first chatbot but also, in his writings, formulated the demands concerning our dealings with computers that we still discuss today, especially in the context of artificial intelligence.

You can read more about Joseph Weizenbaum's life here: https://doi.org/10.1109/MIS.2008.70

A faithful ELIZA implementation from the ELIZA Archeology Project can be tried out here: https://sites.google.com/view/elizaarchaeology/try-eliza

The book "Die Macht der Computer und die Ohnmacht der Vernunft" by Joseph Weizenbaum is published by Suhrkamp: https://www.suhrkamp.de/buch/joseph-weizenbaum-die-macht-der-computer-und-die-ohnmacht-der-vernunft-t-9783518278741

The ethical guidelines of the Gesellschaft für Informatik can be read here: https://gi.de/ueber-uns/organisation/unsere-ethischen-leitlinien

The Code of Ethics of the Association for Computing Machinery can be found here: https://www.acm.org/code-of-ethics

You can reach the episode of "Über Stock, Stein und Startups" in which I was a guest via this link: https://ideenwald-oekosystem.de/informieren/podcast/

Note 1: It is possible to place advertising in Informatik für die moderne Hausfrau. If you are interested, please contact me at mail@informatik-hausfrau.de.

Note 2: For health reasons, this episode exceptionally appeared on a Wednesday. This has no effect on the upcoming episodes, which will appear on Tuesdays as usual.

All information about the podcast can be found on its website, https://www.informatik-hausfrau.de. To get in touch, feel free to email me at mail@informatik-hausfrau.de or reach out via social media. On Twitter, Instagram, and Bluesky, the podcast can be found under the handle @informatikfrau (or @informatikfrau.bsky.social). If you enjoy this podcast, please subscribe and leave a positive rating to help it gain more visibility. If you would like to support the production of the podcast financially, you can do so via the platform Steady. More information is available here: https://steadyhq.com/de/informatikfrau If you would rather 'toss something into the hat' another way, you can do so (even without registering) via the platform Ko-fi: https://ko-fi.com/leaschoenberger This podcast is supported by the Kulturbüro of the City of Dortmund.
Joseph Weizenbaum was the self-proclaimed heretic and dissident of "artificial intelligence", a field he himself had helped develop. He considered computers to be little more than better typewriters. His program "ELIZA" from the 1960s was the first "chatbot" that could converse with people, and at the same time a parody of the belief in progress. By Martin Trauner
When Eugenia Kuyda saw Her for the first time – the 2013 film about a man who falls in love with his virtual assistant – it didn't read as science fiction. That's because she was developing a remarkably similar technology: an AI chatbot that could function as a close friend, or even a romantic partner.

That idea would eventually become the basis for Replika, Kuyda's AI startup. Today, Replika has millions of active users – that's millions of people who have AI friends, AI siblings and AI partners.

When I first heard about the idea behind Replika, I thought it sounded kind of dystopian. I envisioned a world where we'd rather spend time with our AI friends than our real ones. But that's not the world Kuyda is trying to build. In fact, she thinks chatbots will actually make people more social, not less, and that the cure for our technologically exacerbated loneliness might just be more technology.

Mentioned:
"ELIZA—A Computer Program For the Study of Natural Language Communication Between Man And Machine" by Joseph Weizenbaum
"elizabot.js", implemented by Norbert Landsteiner
"Speak, Memory" by Casey Newton (The Verge)
"Creating a safe Replika experience" by Replika
"The Year of Magical Thinking" by Joan Didion

Additional Reading:
The Globe & Mail: "They fell in love with the Replika AI chatbot. A policy update left them heartbroken"
"Loneliness and suicide mitigation for students using GPT3-enabled chatbots" by Maples, Cerit, Vishwanath, & Pea
"Learning from intelligent social agents as social and intellectual mirrors" by Maples, Pea, Markowitz
How intelligent are computers? Do they understand us? Do they understand our feelings and longings? Many people today are convinced they do, including, and especially, programmers. Blake Lemoine, for example, was a senior engineer at Google until the company fired him. The reason: Lemoine was convinced that Google's AI was no longer a soulless machine but a being with feelings and a consciousness. Most users don't go that far. But more and more people feel well understood by chatting AI programs, sometimes even better than by other humans. Why is that? What leads us to attribute so much understanding and feeling to a machine? The answer lies in an experiment that the German-American computer scientist Joseph Weizenbaum conducted back in 1966. The program he developed for it was called "Eliza". Human credulity toward machines has been known as the "Eliza effect" ever since. My weekly commentary on Joseph Weizenbaum and the Eliza effect.

Matthias Zehnder is an author and media scholar in Basel. He is known for inspiring texts, lectures, and seminars on media, digitalization, and AI.
Website: https://www.matthiaszehnder.ch/
Subscribe to the newsletter: https://www.matthiaszehnder.ch/abo/
Support: https://www.matthiaszehnder.ch/unterstuetzen/
Biography and publications: https://www.matthiaszehnder.ch/about/
Även om man kunde få AI att på styra de fysiska sexrobotarnas hjärnor, så skulle det inte röra sig om några oberäkneliga hysterikor. Eller? Aase Berg funderar över gamla och nya vaxdockor. Lyssna på alla avsnitt i Sveriges Radio Play. ESSÄ: Detta är en text där skribenten reflekterar över ett ämne eller ett verk. Åsikter som uttrycks är skribentens egna.Zombierna ryter och rosslar, smackar och slaskar när de sliter köttslamsor ur ännu levande människokroppar. Det är jag och min dotter som tittar på teveserien The Walking Dead (som sändes 2010-2022). Vi är nästan klara med elfte och sista säsongen. Vi brukar kolla ca fyra avsnitt på raken medan vi äter tacos framför teven. Det är ungefär den sämsta maten man kan inta till zombiefilm. Slafsig köttfärs glufsas medan de odöda på teven svullar i sig inälvor och river loss senor och muskler från sina offer.Det här är mer än bara osmaklig underhållning. Medan vi tittar pratar vi om gruppsykologi, sociologi och existentiella frågor. Serien är visserligen både äcklig och våldsam, men den innehåller också långa sekvenser av tystnad och tristess, där människor som försöker överleva zombieapokalypsen samlas i förfallna industrilokaler för att diskutera och praktisera samhällsbyggande och etik.Det är inte helt taget ur luften att odöda skulle kunna existera. Man anser nuförtiden att zombierna på Haiti helt enkelt har varit drogade, för att kunna användas som slavar. Om de mot förmodan nånsin har funnits, alltså.På senare år har det gjorts många forskningsexperiment för att framkalla skendöd, till exempel med svavelväte eller gift från den dödliga blåsfisken. Det tricksas också med olika typer av konstgjord koma, till exempel genom nedfrysning, en sänkning av ämnesomsättningen, inte olik den hos djur som går i dvala. Och det ligger massor av döda miljärdärer i frysboxar världen över och väntar på bättre tider.Tidigare decenniers nekronauter – alltså dödenforskare – de var ännu mer makabra än de samtida. 
Redan från 1700-talet sysslade de med galvanism, sprattlande döda grodor och återuppväckta avrättningsoffer, och den rätt perversa ryska patologen Sergej Brjuchonenko lyckades år 1925 hålla liv i hundhuvuden utan kroppar. Å andra sidan uppfann han samtidigt den första hjärt-lungmaskinen.Om zombies åtminstone har nån sorts realitetsanknytning, så är vampyrerna deras motsats. Även om man skulle lyckas förlänga telomererna, DNA-kedjans slutstycken, och på så vis öka den mänskliga livslängden, så är det långsökt att vi nånsin skulle kunna livnära oss på enbart blod. Vampyrer är framför allt estetiskt suggestiva: de är snygga och sexiga aristokrater.Men den mest intressanta frågan kring båda sorternas mytologiska monster är förstås: Var finns egentligen livet i en kropp?Jag tänker på det när jag öppnar praktverket The Anatomical Venus, med undertiteln wax, god, death and the ecstatic. Joanna Ebenstein, som bland annat grundade bloggen Morbid Anatomy, har skrivit texten. Men det är bilderna som är viktigast i den här boken, närmare bestämt foton av medicinska vaxfigurer. Här finns till exempel Venus från Medici avbildad: en ung, vacker kvinna i naturlig storlek, med glänsande ögon av glas och riktigt människohår. Hon ligger naken i en lidande pose på en rosa sidenmadrass. Alltså inte lidande som när man på riktigt plågas av fysisk smärta – hopkrullad, krampande. Nej, hon ligger sublimt utsträckt med madonnaliknande ansikte, och ger snarare uttryck för överjordisk självömkan, lite sådär Bergtagen-aktigt, typ när sjukdomar som tuberkulos upphöjs till ett suckande, flämtande och febrigt, närmast andligt och extatiskt hypertillstånd. Venus från Medici har en öppningsbar framsida, och kan plockas isär i sju anatomiskt korrekta lager. 
Längst in, i hennes livmoder, ligger ett litet foster.Hon tillverkades i Florens av Clemente Susini år 1782.En annan suggestiv vaxmodell är kvinnan med dissekerat ansikte där man kan lyfta bort den släta, sovande ytan och blotta ett kranium med ögonen kvar i globerna och pannbenet borttaget så att hjärnan syns. Det kastanjebruna håret växer ut ur huvudsvålen och ligger böljande över kudden – en enigmatisk blandning av helgonbild, plågad Jesus på korset, skönhetsdrottning och styckningsschema.Det finns många exempel på såna här försvarslösa kvinnodockor med minutiöst verklighetstrogna inälvor. De visades på museer och marknader men användes också vetenskapligt under 1700- och 1800-talet. Att dissekera människor var slabbigt och jobbigt och det rådde allmän brist på tillgängliga lik.Men lika mycket som medicinvetenskapliga och folkbildande hjälpmedel, är de mystiska och gåtfulla konstverk och fetischer.De känns kusligt levande, och steget är inte långt till automaten, en mytologisk livsform i gränslandet mellan liv och död som kanske ligger närmare an av vår tids stora samhällsfrågor, nämligen den om artificiell intelligens. Den tyska romantikern E T A Hoffmann skrev om en vacker, kall och konstig automat, en robotkvinna som han kallade Olimpia. Och när 1600-talsfilosofen René Descartes reste till drottning Kristina i Sverige ska han enligt myten ha släpat med sig en robotkopia av dottern Francine, som dog när hon var fem år.Det påminner om Guvernören i min och min dotters zombieserie, han som har kvar sin lilla flicka fastbunden, trots att hon har förvandlats och blivit en annan; vidrig och livsfarlig.Gemensamt för alla de här mytologiska kvinnofigurerna är att de står under minutiös kontroll av sin skapare, nämligen mannen. Ända in i inälvorna ska han styra och ställa, djupt in i hjärnvindlingarna, in till livmodern och, om möjligt, själen. Det är som när Pygmalion blev kär i sin skulptur som började leva, eller George Bernard Shaw lät Higgins uppfostra Eliza. 
Then Joseph Weizenbaum in turn borrowed Eliza's name for the computer program he created in the 1960s. She was one of the first chatbots. An early AI, if you like. AI obviously offers great possibilities for inventing the agreeable woman, the one who in her wise, motherly way meets the man with mild indulgence and endless patience. The fantasy of her goes hand in hand with the colonial dream of a new slave society, with free labor and happy whores. For even if AI could truly be made to run the brains of physical sex robots, these would not be unpredictable borderline girls and hysterics. Or would they? The artificial intelligence is, admittedly, utterly human, built from the data we have left behind, but it is the victors, the men, who have written history. It is a single-sex robot slave we are creating. But imagine if the earth mother rises from her slumber and defeats the patriarchy that once silenced her. Imagine if the horny, happy, grateful wax wife who listens, consoles, cooks, and is always willing in fact takes over and goes berserk. That is the dread you can sense when the successful men of power in science and the tech industry warn us of doomsday. What if AI is in fact a monstrous hag! Aase Berg, author and critic. Literature: Joanna Ebenstein: The Anatomical Venus: Wax, God, Death & the Ecstatic. Distributed Art Publishers (DAP), 2016.
In which we are joined by Ezri of Swampside Chats, to continue our discussion of "Computer Power and Human Reason: From Judgment to Calculation" by Joseph Weizenbaum. In this episode we cover the third and fourth chapters of the book. Computer Power and Human Reason: From Judgment to Calculation (1976) by Joseph Weizenbaum displays the author's ambivalence towards computer technology and lays out the case that while artificial intelligence may be possible, we should never allow computers to make important decisions, because computers will always lack human qualities such as compassion and wisdom. Weizenbaum makes the crucial distinction between deciding and choosing. Deciding is a computational activity, something that can ultimately be programmed. It is the capacity to choose that ultimately makes one a human being. Choice, however, is the product of judgment, not calculation. Comprehensive human judgment is able to include non-mathematical factors such as emotions. Judgment can compare apples and oranges, and can do so without quantifying each fruit type and then reductively quantifying each to the factors necessary for mathematical comparison. If you like the show, consider supporting us on Patreon. Links: Computer Power and Human Reason on Wikipedia
In which we are joined by Ezri of Swampside Chats, to continue our discussion of "Computer Power and Human Reason: From Judgment to Calculation" by Joseph Weizenbaum. In this episode we cover the second chapter of the book. If you like the show, consider supporting us on Patreon. Links: Computer Power and Human Reason on Wikipedia Weizenbaum's Nightmares, on The Guardian Inside the Very Human Origin of the Term “Artificial Intelligence” General Intellect Unit on iTunes http://generalintellectunit.net Support the show on Patreon https://twitter.com/giunitpod General Intellect Unit on Facebook General Intellect Unit on archive.org Emancipation Network
In which we are joined by Ezri of Swampside Chats, to continue our discussion of "Computer Power and Human Reason: From Judgment to Calculation" by Joseph Weizenbaum. In this episode we cover the first chapter of the book. If you like the show, consider supporting us on Patreon.
In which we are joined by Ezri of Swampside Chats, to continue our discussion of "Computer Power and Human Reason: From Judgment to Calculation" by Joseph Weizenbaum. In this episode we cover the prefaces, introduction, and chapter one. If you like the show, consider supporting us on Patreon.
In which we are joined by Ezri of Swampside Chats, to continue our discussion of "Computer Power and Human Reason: From Judgment to Calculation" by Joseph Weizenbaum. If you like the show, consider supporting us on Patreon.
The philosopher and linguist Sebastian Rosengrün works on the opportunities and dangers of artificial intelligence in its interplay with society. He warns against recognizing chatbots as moral authorities and insists on showing society the possible consequences of such attributions. Instead of indulging in magical science-fiction stories, we should confront real missteps in order to avert them. Together we talk about essays written by chatbots, what errors can be projected into AI development, and why you should listen to the podcast Selbstbewusste KI. Author: Karsten Wendland. Editing, recording direction, and production: Karsten Wendland. Editorial assistance: Robin Herrmann. License: CC-BY. Sources mentioned in this episode: Sebastian Rosengrün (2021): Künstliche Intelligenz zur Einführung. Junius Verlag. Sebastian Rosengrün at CODE University Berlin. Max Tegmark (2017): Leben 3.0: Mensch sein im Zeitalter Künstlicher Intelligenz. John R. Searle (1980): Minds, brains, and programs. Joseph Weizenbaum (1966): ELIZA – A Computer Program For the Study of Natural Language Communication Between Man And Machine. Joseph Weizenbaum (1978): Die Macht der Computer und die Ohnmacht der Vernunft. Suhrkamp Verlag. F. Geier, S. Rosengrün (2023): Die 101 wichtigsten Fragen – Digitalisierung. Hilary Putnam (1960): Minds and Machines. Raymond "Ray" Kurzweil, the American author, inventor, futurist, and director of engineering at Google.
In which we are joined by Ezri of Swampside Chats, to begin a series on "Computer Power and Human Reason: From Judgment to Calculation" by Joseph Weizenbaum. If you like the show, consider supporting us on Patreon.
Weizenbaum is in many respects a special case: a great thinker of our time who refused to be locked into the pigeonhole of "computer scientist," tirelessly fought against classifications of that kind, and denounced general intellectual laziness. Again and again he pointed to the connection between cutting-edge research and the military, a connection that is readily kept quiet. Where most scientists accommodated themselves out of concern for their careers, he took a stand. That was owed to his own life story as a Jewish emigrant from Nazi Germany. He did it in his own way: nonconformist, imaginative, courageous, humorous, and, despite all the uncompromising rigor of his criticism, optimistic and encouraging. Drawing on conversations and stories that Weizenbaum loved to tell, this book remembers a great critical thinker who tirelessly appealed to us to bring our own experience into the process of perception, to take responsibility, not to rely on so-called experts, and instead to resist disenfranchisement. Beyond that, it illuminates a piece of the history of science, concerning the branch of science that shapes our everyday lives like no other. Gunna Wendt: history meets literature and art. The German writer and exhibition curator Gunna Wendt was born in 1953 in Jeinsen, near Hanover. After studying sociology and psychology in Hanover, she decided to write biographies, which she always places in their historical context. She has also worked as an assistant director and dramaturge in theater productions, and has been a curator, publicist, and broadcaster. Gunna Wendt has lived in Munich since 1981 as a freelance writer, publicist, and curator, giving readings, lectures, and hosted events on cultural topics, among other places in Munich, Berlin, Bremen, Hamburg, Hanover, Leipzig, Bern, Vienna, and Zurich. Host: Steven Lundström. Editing and production: Uwe Kullnick. --- Send in a voice message: https://podcasters.spotify.com/pod/show/hoerbahn/message
It happens to you while reading a book: you begin to identify with the main character. You feel emotions, real emotions, such as sadness, relief, anger, joy... and that is remarkable, because the characters are fake. They exist only on paper, and in our imagination. That we are capable of so much empathy, that we can even sympathize with characters who do not exist, is our strength. But it is a strength that turns into a weakness when we talk to chatbots. To use the words of computer scientist Joseph Weizenbaum: a certain danger lurks here. In the 1960s, Weizenbaum created ELIZA, the first chatbot that came across as human to its users. That ELIZA seemed human shocked Weizenbaum. Why? What dangers did he already see 60 years ago? I discuss this with Oumaima Hajri, researcher and lecturer in responsible artificial intelligence at Hogeschool Rotterdam. Weizenbaum's warning turns out to be more relevant than ever, now that companies offer chatbots as life partners and for processing grief. Like Weizenbaum, Oumaima is critical of all AI applications in which emotion plays a role. Show notes: The Guardian on the life of Joseph Weizenbaum / Weizenbaum introduces ELIZA / MIT Technology Review on chatbots for grief processing / NRC on relationships with chatbots. If you are not yet done with this topic, be sure to read the six essays I wrote for this podcast in Trouw. They will be published from October 2023 in the paper and at trouw.nl/welkomindeaifabriek. Welkom in de AI-Fabriek is a podcast by BNR Nieuwsradio and Trouw, made by me, Ilyaz Nasrullah, with the indispensable help of Connor Clerx. Music and sound design by Gijs Friesen. Graphic design by Danusia Schenke. Final editing: Wendy Beenakker at BNR and Wendelmoet Boersema at Trouw. See omnystudio.com/listener for privacy information.
This episode of "A Beginner's Guide to AI" explores the field of natural language processing, NLP, which enables machines to understand human language. We break down key NLP concepts like machine learning algorithms, neural networks, and linguistic rules that empower AI applications like chatbots, voice assistants, and translation tools to comprehend what we say and write. A real-world case study explains how NLP performed sentiment analysis on hotel reviews to help improve customer satisfaction. We end with an interactive discussion of NLP's exciting future possibilities as the technology continues advancing. Here you can find my free Udemy class: The Essential Guide to Claude 2. This podcast was generated with the help of artificial intelligence. We do fact-check with human eyes, but there might still be hallucinations in the output. Music credit: "Modern Situations" by Unicorn Heads --- CONTENT OF THE EPISODE Welcome to "A Beginner's Guide to AI" Welcome to another episode where we delve into the captivating realm of natural language processing (NLP). How AI Understands Human Language Artificial intelligence, through NLP, powers many tools we use daily, from chatbots to translation services. In this episode, we'll explore how machines interpret human language, focusing on key NLP concepts like machine learning algorithms and neural networks. Real-World Applications of NLP NLP is integral to our digital lives. Virtual assistants like Siri and Alexa, translation tools like Google Translate, and even email spam filters all rely on NLP. These tools interpret and generate human language, making our interactions with technology smoother and more intuitive. Demystifying Natural Language Processing NLP, a subset of AI, focuses on understanding and generating human language. It combines computer science, linguistics, and machine learning to analyze unstructured text or speech.
The goal is to enable machines to understand our communication nuances, from individual words to entire passages. Case Study: Sentiment Analysis for Shangri-La Hotel Shangri-La Hotel used NLP to analyze customer reviews from various platforms. By processing this feedback, they identified areas of improvement and strengths, leading to enhanced customer satisfaction. This showcases how brands can leverage AI to understand qualitative feedback at scale. Deepen Your AI Knowledge For those keen to delve deeper into conversational AI, check out the free Udemy course titled "The Essential Guide to Claude 2". It offers insights into the workings of conversational AI tools and is perfect for beginners. Reflecting on NLP's Future As we look ahead, consider the future prospects for NLP. How will it transform our interaction with technology? From real-time translation tools to even more advanced virtual assistants, the possibilities are boundless. Concluding Thoughts on Language AI As computer scientist Joseph Weizenbaum aptly said, genuine human communication has an intangible quality rooted in our shared humanity. As NLP evolves, it's essential to ensure our technology remains human-centric, preserving the essence of communication. Thank you for joining this exploration into NLP. Stay tuned for more insights in our upcoming episodes!
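The lexicon-based idea behind a sentiment analysis like the hotel-review case study can be sketched in a few lines. This is a toy illustration only: the word lists below are invented for the example, and a production system of the kind described would use trained models rather than a hand-written lexicon.

```python
# Toy lexicon-based sentiment scoring: +1 per positive word, -1 per
# negative word; the sign of the total gives the review's polarity.
# The word lists are illustrative, not taken from any real system.
POSITIVE = {"clean", "friendly", "great", "comfortable", "excellent"}
NEGATIVE = {"dirty", "rude", "noisy", "slow", "terrible"}

def sentiment_score(review: str) -> int:
    # Lowercase and strip simple punctuation before tokenizing.
    words = review.lower().replace(",", " ").replace(".", " ").split()
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in words)

reviews = [
    "Great location and friendly staff, very clean rooms.",
    "The room was noisy and the service was slow.",
]
for r in reviews:
    print(r, "->", sentiment_score(r))
```

Aggregated over thousands of reviews, even a crude score like this can point a brand at its weak spots, which is the kind of qualitative-feedback-at-scale the episode describes.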
Computer scientist Joseph Weizenbaum was there at the dawn of artificial intelligence – but he was also adamant that we must never confuse computers with humans. Help support our independent journalism at theguardian.com/longreadpod
Paris Marx is joined by Ben Tarnoff to discuss the ELIZA chatbot created by Joseph Weizenbaum in the 1960s and how it led him to develop a critical perspective on AI and computing that deserves more attention during this wave of AI hype. Ben Tarnoff writes about technology and politics. He is a founding editor of Logic, and author of Internet for the People: The Fight for Our Digital Future. You can follow Ben on Twitter at @bentarnoff. Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Follow the podcast (@techwontsaveus) and host Paris Marx (@parismarx) on Twitter, and support the show on Patreon. The podcast is produced by Eric Wickham and part of the Harbinger Media Network. Also mentioned in this episode: Ben wrote a long article about Weizenbaum and what we can learn from his work for The Guardian. Paris wrote a skeptical perspective on AI hype and the promises of ChatGPT in Disconnect. Zachary Loeb has also written about Weizenbaum's work and perspective on AI and computing. Support the show
This and all episodes at: https://aiandyou.net/ . Should AI be able to feel? It may seem like the height of hubris, recklessness, and even cruelty to suggest such a thing - and yet our increasing unease and fears of what #AI may do stem from its lack of empathy. I develop this reasoning in my third TEDx talk, recorded at Royal Roads University. From my research into Joseph Weizenbaum's ELIZA to what developers of #ChatGPT and other AI are missing, I explore this most sensitive of issues. This podcast episode is the bonus track, the director's cut if you will, that expands on those 12 minutes of talk to give you added value and even more questions to take away. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
(01:20) An "existential danger": that is what leading scientists called the uncontrolled development of artificial intelligence. It seems like a new debate, but the scientist Joseph Weizenbaum already pointed to the danger in the 1960s. We discuss it with philosopher Guido van der Knaap. (14:05) Monday marks exactly 150 years since the first ship carrying indentured laborers arrived in Suriname. How is this system viewed today? And how is it celebrated or commemorated? Pravini Baboeram and Karin Amatmoekrim tell us more. (37:59) The column by Nelleke Noordervliet. (41:30) Anyone reading the news today can hardly imagine that relations between the Netherlands and Iran were good in the 1960s and '70s. Iran expert Maaike Warnaar wrote about the Dutch fascination with Iran, and how that image fell apart, in "Onze vriend op de pauwentroon". More info: https://www.vpro.nl/programmas/ovt/luister/afleveringen/2023/04-06-2023.html#
An "existential danger" to humanity: that is what leading AI scientists called the threat of uncontrolled development and deployment of artificial intelligence. It seems like a new debate, but since the late 1960s there has been one AI scientist who consistently pointed to the dangers of his own field: Joseph Weizenbaum. Why he foresaw problems even then, and what we can learn from him in the age of ChatGPT and killer drones, is what we discuss with philosopher Guido van der Knaap, author of the book Van Aristoteles tot algoritme: Filosofie van kunstmatige intelligentie.
Host: Hutch. On ITSPmagazine.
Einstein is said to have claimed that Buddhism is the religion of the future. Did he really say that? And if he did, could the Buddhism he had access to be understood beyond a colonialist bias? Beyond that, is what he actually thought about science and religion even compatible with Buddhism? And what does the hype around artificial intelligence have to do with all this? This podcast is also available in video format at https://tzal.org/ia-einstein-e-o-budismo/ ◦ Other content on science and Buddhism: https://tzal.org/budismo-e-ciencia/ ◦ Buddhism and quantum mystification (text on Buda Virtual): https://www.budavirtual.com.br/budismo-e-mistificacao-quantica/ ◦ The unjustified premises of science (text on Papo de Homem): https://papodehomem.com.br/as-premissas-injustificadas-da-ciencia-wtf-44/ ◦ The unfounded belief in an external reality (text on Buda Virtual): https://www.budavirtual.com.br/a-crenca-infundada-em-uma-realidade-externa-padma-dorje/ ◦ Other content on materialism: https://tzal.org/materialismo/ ◦ Interpretations of quantum mechanics (English Wikipedia article mentioned in the text): https://en.wikipedia.org/wiki/Interpretations_of_quantum_mechanics ◦ The book by Joseph Weizenbaum, creator of ELIZA, a primitive chatbot that imitated a Rogerian therapist, can be found in English at amazon.com.br: https://www.amazon.com.br/dp/0716704633&tag=tzal-20. There is also an article about the book on the English Wikipedia: https://en.m.wikipedia.org/wiki/Computer_Power_and_Human_Reason. ◦ Two books by Donald S. Lopez Jr., who often addresses questions of distortion and colonialism in Buddhism, these two in particular on colonialist ideas around science and Buddhism: Buddhism & Science: A Guide for the Perplexed https://www.amazon.com.br/dp/0226493199&tag=tzal-20 and The Scientific Buddha: His Short and Happy Life https://www.amazon.com.br/dp/0300159129&tag=tzal-20.
To receive news about Padma Dorje's work: https://tzal.org/boletim-informativo/ Please support this channel: https://tzal.org/patronagem/ Complete list of content on the tendrel channel, with descriptions: https://tzal.org/tendrel-lista-completa-de-videos/ Dharma centers I recommend: https://tzal.org/centros-de-darma-que-recomendo/ To support me by shopping on Amazon: https://tzal.org/amazon Contributions and questions can be sent by email, which also works as a PIX key (conexoesauspiciosas@gmail.com)
Paris Marx is joined by Emily M. Bender to discuss what it means to say that ChatGPT is a “stochastic parrot,” why Elon Musk is calling to pause AI development, and how the tech industry uses language to trick us into buying its narratives about technology. Emily M. Bender is a professor in the Department of Linguistics at the University of Washington and the Faculty Director of the Computational Linguistics Master's Program. She's also the director of the Computational Linguistics Laboratory. Follow Emily on Twitter at @emilymbender or on Mastodon at @emilymbender@dair-community.social. Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Follow the podcast (@techwontsaveus) and host Paris Marx (@parismarx) on Twitter, and support the show on Patreon. The podcast is produced by Eric Wickham and part of the Harbinger Media Network. Also mentioned in this episode: Emily was one of the co-authors on the “On the Dangers of Stochastic Parrots” paper and co-wrote the “Octopus Paper” with Alexander Koller. 
She was also recently profiled in New York Magazine and has written about why policymakers shouldn't fall for the AI hype. The Future of Life Institute put out the “Pause Giant AI Experiments” letter, and the authors of the “Stochastic Parrots” paper responded through DAIR Institute. Zachary Loeb has written about Joseph Weizenbaum and the ELIZA chatbot. Leslie Kay Jones has researched how Black women use and experience social media. As generative AI is rolled out, many tech companies are firing their AI ethics teams. Emily points to Algorithmic Justice League and AI Incident Database. Deborah Raji wrote about data and systemic racism for MIT Tech Review. Books mentioned: Weapons of Math Destruction by Cathy O'Neil, Algorithms of Oppression by Safiya Noble, The Age of Surveillance Capitalism by Shoshana Zuboff, Race After Technology by Ruha Benjamin, Ghost Work by Mary L. Gray & Siddharth Suri, Artificial Unintelligence by Meredith Broussard, Design Justice by Sasha Costanza-Chock, Data Conscience: Algorithmic S1ege on our Hum4n1ty by Brandeis Marshall. Support the show
Thank you to Jason Richardson, Founder/CEO of J1 Studios and creator of VTHEROES ( https://www.thegamecrafter.com/games/vt-heroes:-series-1-complete-set-for), for sponsoring my conversation with Mr. & Mrs. Jackson. The Eliza Effect... What is the Eliza effect? The Eliza effect is when a person attributes human-level intelligence and understanding to an AI system, and it's been fooling people for over half a century. The phenomenon was named after ELIZA, a chatbot created in 1966 by MIT professor Joseph Weizenbaum. https://builtin.com/artificial-intelligence/eliza-effect A Belgian man recently died by suicide after chatting with an AI chatbot on an app called Chai, Belgian outlet La Libre reported. https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says Aida Correa Jackson, Certified VR Educator with Victory XR, World Metaverse Council Member. Web site: https://lovebuiltlife.com/ Twitter: https://twitter.com/lovebuiltlife/ IG: https://instagram.com/lovebuiltlife/ WordPress TV: https://wordpress.tv/?s=Aida%20Correa%20Jackson William Jackson, Certified VR Educator with Victory XR, World Metaverse Council Member. Web site: https://myquesttoteach.com/ Web site for African blogs: https://africaontheblog.org/
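The pattern-matching-and-reflection trick that produces the Eliza effect is simple enough to sketch in a few lines. The patterns below are invented for illustration; they are not Weizenbaum's original DOCTOR script, just the same mechanism in miniature:

```python
import re

# Minimal ELIZA-style responder: match the user's input against a few
# templates and reflect their words back as a question. The illusion of
# understanding comes entirely from this mechanical substitution.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text: str) -> str:
    # Swap first-person words for second-person ones.
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(utterance: str) -> str:
    match = re.match(r"i feel (.*)", utterance, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    match = re.match(r"i am (.*)", utterance, re.IGNORECASE)
    if match:
        return f"How long have you been {reflect(match.group(1))}?"
    return "Please tell me more."

print(respond("I feel ignored by my computer"))
```

There is no model of the user here at all, only string substitution, which is exactly why Weizenbaum was shocked that people confided in it.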
ChatGPT & company are here to stay. And so are linguists. Find out why in our exploration of the capabilities and shortcomings of generative AI and how it will affect the competences of lecturers, students and practitioners in business communication and beyond. Via tricks and tips on how to integrate these powerful text production tools in and outside the classroom, W&A once again underscores the crucial importance of language awareness and the human touch in the digital era. The discussion will take us past proper prompt engineering, output analysis, digital sweatshops and critical citizenship. You can find more information, references and a full transcript on wordsandactions.blog. In this episode we mention a number of language-related AI applications, including DALL-E, which generates images from language prompts; Scite, which identifies references supporting or questioning research findings; ELSA, which stands for English Language Speech Assistant and is meant to help language learners; Wordtune, which can rewrite texts in different “tones”; and the coding apps Copilot and CodeWhisperer, which convert language inputs into code. Some examples of how the AI-powered version of the Bing search engine produces answers that are troubling or face-threatening are mentioned in this article. Our interview guest, Andreas van Cranenburgh, refers to how OpenAI, the company behind ChatGPT, uses low-paid content moderators in developing countries, often exposing them to traumatic content. This practice was described in Time Magazine. Following the interview, we talk about how the notion of communicative competence needs to be extended for interactions with chatbots. Hymes' original formulation of communicative competence dates from six years after the first ever chatbot, Eliza, was developed. (It is not known if he was aware of it.) 
The creator of that application, Joseph Weizenbaum, named it after Eliza Doolittle, the character in Bernard Shaw's play Pygmalion (later made into the musical and film My Fair Lady). In that modern take on the Greek myth of the sculptor Pygmalion, who falls in love with one of his statues, a linguistics professor teaches a working-class woman how to sound upper-class. Are chatbots the malleable female creations of male developers? And why does Erika, a female user, think of ChatGPT as a man? As they say, there is a paper in that. And finally, here is the ChatGPT-generated text we analyse in the last part of the episode:
Dear [Customer Name],
Thank you for reaching out to us. We understand that high energy prices can be frustrating and we want to help. We're sorry for any inconvenience this may have caused you. Our company's energy prices are affected by a number of factors, including changes in the global energy market and increasing demand for energy. However, we are committed to finding ways to help our customers manage their energy costs. We recommend some simple steps to conserve energy, such as turning off lights when they're not in use, adjusting your thermostat, and using energy-efficient appliances. Additionally, we offer a number of energy-saving programs that could help you save money on your energy bills. We value your feedback and appreciate your loyalty. If you have any further concerns or questions, please do not hesitate to contact us.
Best regards,
[Your name]
Our next episode will conclude the mini-series on CSR - see you then!
Carlota Perez is a researcher who has studied hype cycles for much of her career. She's affiliated with University College London, the University of Sussex, and the Tallinn University of Technology in Estonia, and has worked with some influential organizations around technology and innovation. As a neo-Schumpeterian, she sees technology as a cornerstone of innovation. Her book Technological Revolutions and Financial Capital is a must-read for anyone who works in an industry that includes any of those four words, including revolutionaries. Connecticut-based Gartner Research was founded by Gideon Gartner in 1979. He emigrated to the United States from Tel Aviv at three years old in 1938 and graduated from MIT in the class of 1956, going on to get his Master's at the Sloan School of Management. He went on to work at the software company System Development Corporation (SDC), in the US military defense industry, and at IBM over the next 13 years before starting his first company. After that failed, he moved into analysis work and quickly became known as a top mind among technology industry analysts. He often bucked the trends to pick winners and made banks, funds, and investors lots of money. He was able to parlay that into founding the Gartner Group in 1979. Gartner hired senior people in different industry segments to aid in competitive intelligence, industry research, and of course, to help Wall Street. They wrote reports on industries, dove deeply into new technologies, and got to understand what we now call hype cycles in the ensuing decades. They now boast a few billion dollars in revenue per year and serve well over 10,000 customers in more than 100 countries. Gartner has developed a number of tools to make it easier to take in the types of analysis they create.
One is the Magic Quadrant: reports that identify leaders in categories of companies by vision (or completeness of vision, to be more specific) and the ability to execute, which includes things like go-to-market activities, support, etc. They lump companies into a standard four-box as Leaders, Challengers, Visionaries, and Niche Players. There's certainly an observer effect, and those they put in the top right of their four-box often enjoy added growth, as companies want to be with the most visionary and best when picking a tool. Another of Gartner's graphical design patterns to display technology advances is what they call the “hype cycle”. The hype cycle simplifies research from career academics like Perez into five phases. * The first is the Technology Trigger, which is when a breakthrough is found and PoCs, or proof-of-concepts, begin to emerge in the world and get the press interested in the new technology. Sometimes the new technology isn't even usable, but shows promise. * The second is the Peak of Inflated Expectations, when the press picks up the story and companies are born, capital is invested, and a large number of projects around the new technology fail. * The third is the Trough of Disillusionment, where interest falls off after those failures. Some companies succeeded and can show real productivity, and they continue to get investment. * The fourth is the Slope of Enlightenment, where the go-to-market activities of the surviving companies (or even a new generation) begin to show real productivity gains. Every company or IT department now runs a pilot, and expectations are lower, but now achievable. * The fifth is the Plateau of Productivity, when those pilots become deployments and purchase orders. Mainstream industries embrace the new technology and case studies prove the promised productivity increases. Provided there's enough market, companies now find success. There are issues with the hype cycle. Not all technologies will follow the cycle.
The Gartner approach focuses on financials and productivity rather than true adoption. It involves a lot of guesswork around subjective, synthesized, and often unsystematic research. There's also the ever-present observer effect. However, more often than not, the hype is separated from the tech that can give organizations (and sometimes all of humanity) real productivity gains. Further, the term cycle denotes a single series of events, when the process is in fact cyclical: out of the end of the fifth phase a new cycle is born, or even a set of cycles if industries grow enough to diverge. ChatGPT is all over the news feeds these days, igniting yet another cycle in the cycles of AI hype that have been prevalent since the 1950s. The concept of computer intelligence dates back to 1942, with Alan Turing and with Isaac Asimov's “Runaround”, where the three laws of robotics first appeared. By 1952 computers could play themselves in checkers, and by 1955 Arthur Samuel had written a heuristic learning algorithm (an early form of what is now called temporal-difference learning) to play checkers. Academics around the world worked on similar projects, and in 1956 John McCarthy introduced the term “artificial intelligence” when he gathered some of the top minds in the field together for the Dartmouth workshop. They tinkered, and a generation of researchers began to join them. By 1964, Joseph Weizenbaum's "ELIZA" debuted. ELIZA was a computer program that used early forms of natural language processing to run a “DOCTOR” script that acted as a psychotherapist. ELIZA was one of a few technologies that triggered the media to pick up AI in the second stage of the hype cycle. Others came into the industry and expectations soared, now predictably followed by disillusionment.
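ELIZA's DOCTOR script worked roughly as described above: scan the input for a keyword pattern and fill a canned response template with fragments of the user's own words. A minimal sketch in Python of that general idea (illustrative only; Weizenbaum's actual script was far richer, with ranked keywords and pronoun transformations, and the rules below are invented for this example):

```python
import re

# A few keyword rules in ELIZA's style: a pattern to spot, and a response
# template that echoes part of what the user typed.
RULES = [
    (r"\bI need (.*)", "Why do you need {0}?"),
    (r"\bI am (.*)", "How long have you been {0}?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
]
DEFAULT = "Please go on."  # fallback when no keyword matches

def respond(utterance: str) -> str:
    # Try each rule in order; the first matching pattern wins.
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I am sad about my job"))  # the "I am" rule fires first
```

The trick, then as now, is that echoing a user's own words back as a question is enough to feel like understanding, even though no understanding is involved.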
In 1976, Weizenbaum wrote a book called Computer Power and Human Reason: From Judgment to Calculation in response to the critiques, while some of the early successes were able to reach wider markets as the fourth phase of the hype cycle began. ELIZA was seen by people who worked on similar software, including some games, for Apple, Atari, and Commodore. Still, in the aftermath of ELIZA, the machine translation movement in AI had failed in the eyes of those who funded the attempts, because going further required more than some fancy case statements. Another movement, called connectionism and built mostly on node-based artificial neural networks, is widely seen as the impetus for deep learning. David Hunter Hubel and Torsten Nils Wiesel studied the receptive fields of the visual system, work that later inspired convolutional neural networks and culminated in a 1968 paper called "Receptive fields and functional architecture of monkey striate cortex.” That built on the original deep learning work from Frank Rosenblatt of Cornell University, "Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms" from 1962, and work done behind the iron curtain by Alexey Ivakhnenko on learning algorithms in 1967. After early successes, though, connectionism (which, paired with machine learning, would come to be called deep learning after Rina Dechter coined the term in 1986) went through a similar trough of disillusionment that kicked off in 1970. Funding for these projects shot up after the early successes and petered out after there wasn't much to show for them. Some had so much promise that former presidents can be seen in old photographs going through the models with the statisticians who were moving into computing. But organizations like DARPA would pull back funding, as seen with their speech recognition projects with Carnegie Mellon University in the early 1970s. These hype cycles weren't just seen in the United States.
The British applied mathematician James Lighthill wrote a report for the British Science Research Council, which was published in 1973. The paper was called “Artificial Intelligence: A General Survey” and analyzed the progress made relative to the amount of money spent on artificial intelligence programs. He found none of the research had resulted in any “major impact” in the fields the academics had undertaken. Much of the work had been done at the University of Edinburgh, and based on his findings, funding for AI research was drastically cut around the UK. Turing, von Neumann, McCarthy, and others had, intentionally or not, set an expectation that became a check the academic research community just couldn't cash. For example, in the 1950s the New York Times claimed Rosenblatt's perceptron would let the US Navy build computers that could “walk, talk, see, write, reproduce itself, and be conscious of its existence” - a goal not likely to be achieved in the near future even seventy years later. Funding was cut in the US, the UK, and even in the USSR, or Union of Soviet Socialist Republics. Yet many persisted. Languages like Lisp had become common by the late 1970s, after engineers like Richard Greenblatt helped to make McCarthy's ideas for computer languages a reality. The MIT AI Lab developed a Lisp Machine Project, and as AI work was picked up at other schools, labs like Stanford's began to look for ways to buy commercially built computers suited to being Lisp machines. After the post-war spending, the idea that AI could become a more commercial endeavor was attractive to many. But after plenty of hype, the Lisp machine market never materialized. The next hype cycle began in 1983 when the US Department of Defense pumped a billion dollars into AI, but that spending was cancelled in 1987, just after the collapse of the Lisp machine market. Another AI winter was about to begin. Another trend that began in the 1950s but picked up steam in the 1980s was expert systems.
These attempt to emulate the ways that humans make decisions. Some of this work came out of the Stanford Heuristic Programming Project, pioneered by Edward Feigenbaum. Some commercial companies took up the mantle, and after the field ran into barriers with CPUs, by the 1980s processors had gotten fast enough. There were inflated expectations after great papers like Richard Karp's “Reducibility among Combinatorial Problems” out of UC Berkeley in 1972. Countries like Japan dumped hundreds of millions of dollars (or yen) into projects like “Fifth Generation Computer Systems” in 1982, a 10-year project to build up massively parallel computing systems. IBM spent around the same amount on its own projects. However, while these types of projects helped to improve computing, they didn't live up to the expectations, and by the early 1990s funding was cut following commercial failures. By the mid-2000s, some of the researchers in AI began to use new terms, after generations of artificial intelligence projects led to subsequent AI winters. Yet research continued on, with varying degrees of funding. Organizations like DARPA began to use challenges rather than funding large projects in some cases. Over time, successes were found yet again. Google Translate, Google Image Search, IBM's Watson, AWS options for AI/ML, home voice assistants, and various machine learning projects in the open source world led to the start of yet another AI spring in the early 2010s. New chips have built-in machine learning cores, and programming languages have frameworks and new technologies like Jupyter notebooks to help organize and train data sets. By 2006, academic works and open source projects had hit a turning point, this time quietly. The Association for Computational Linguistics was founded in 1962, initially as the Association for Machine Translation and Computational Linguistics (AMTCL).
As with the ACM, they have a number of special interest groups that include natural language learning, machine translation, typology, natural language generation, and the list goes on. The 2006 proceedings of the Workshop on Statistical Machine Translation began a series of dozens of workshops that drew hundreds of papers and presenters. The academic work was then able to be consumed by all, including contributions to English-to-German and English-to-French translation tasks from 2014. Deep learning models spread and became more accessible - democratic if you will. RNNs, CNNs, DNNs, GANs. Labeling training data sets was still one of the most human-intensive and slowest aspects of machine learning. GANs, or Generative Adversarial Networks, were one of those machine learning frameworks, initially designed by Ian Goodfellow and others in 2014. GANs use zero-sum game techniques from game theory to generate new data sets - a generative model. This allowed for more unsupervised training of data. Now it was possible to get further, faster with AI. This brings us into the current hype cycle. ChatGPT was launched in November of 2022 by OpenAI. OpenAI was founded as a non-profit in 2015 by Sam Altman (cofounder of the location-based social network app Loopt and former president of Y Combinator) and a cast of veritable all-stars in the startup world that included: * Reid Hoffman, former PayPal COO, LinkedIn founder, and venture capitalist. * Peter Thiel, cofounder of PayPal and Palantir, as well as one of the top investors in Silicon Valley. * Jessica Livingston, founding partner at Y Combinator. * Greg Brockman, former CTO of Stripe, who had studied at Harvard and MIT. OpenAI spent the next few years as a non-profit and worked on GPT, or Generative Pre-trained Transformer, autoregression models. GPT uses deep learning models to process human text and produce text that's more human-like than previous models could.
Not only is it capable of natural language processing, but generative pre-training allows models to learn from large amounts of unlabeled text, so people don't have to hand-label the data, automating much of the fine-tuning of results. OpenAI dumped millions into public betas by 2016 and was ready to build products to take to market by 2019. That's when they switched from a non-profit to a for-profit. Microsoft pumped $1 billion into the company, and they released DALL-E to produce generative images, which helped lead to a new generation of applications that could produce artwork on the fly. Then they released ChatGPT towards the end of 2022, which led to more media coverage and prognostication of a world-changing technological breakthrough than most other hype cycles for any industry in recent memory. This, with GPT-4 to be released later in 2023. ChatGPT is most interesting through the lens of the hype cycle. There have been plenty of peaks and plateaus and valleys in artificial intelligence over the last 7+ decades. Most have been hyped up in the hallowed halls of academia and defense research. ChatGPT has hit mainstream media. The depth of the AI winter following each cycle seems to scale with the reach of the audience and the height of expectations. Science fiction continues to inflate expectations. Early prototypes that make it seem as though science fiction will be in our hands in a matter of weeks lead media to conjecture. The reckoning could be substantial. Meanwhile, projects like TinyML - with smaller potential impacts for each use but wider use cases - could become the real benefit to humanity beyond research, when it comes to everyday productivity gains. The moral of this story is as old as time. Control expectations. Undersell and overdeliver. That doesn't lead to massive valuations pumped up by hype cycles. Many CEOs and CFOs know that a jump in profits doesn't always mean the increase will continue. Some intentionally temper expectations in their quarterly reports and calls with analysts.
Those are the smart ones.
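The "autoregression" behind GPT simply means predicting the next token from the tokens generated so far and feeding the result back in. A toy word-level bigram model shows that loop, minus the transformer; the corpus here is invented for illustration and the "model" is just a lookup table of observed next words:

```python
import random
from collections import defaultdict

# Tiny training corpus; GPT trains on billions of tokens instead.
corpus = "the cat sat on the mat and the cat slept".split()

# Record which words were observed to follow each word.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start: str, length: int, seed: int = 0) -> str:
    # Autoregressive loop: sample a next word, append it, repeat.
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        candidates = following.get(words[-1])
        if not candidates:  # dead end: this word was never followed
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

print(generate("the", 5))
```

A transformer replaces the one-word lookup with a learned probability distribution conditioned on the whole preceding context, but the generation loop is the same shape.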
Paris Marx is joined by Timnit Gebru to discuss the misleading framings of artificial intelligence, her experience of getting fired by Google in a very public way, and why we need to avoid getting distracted by all the hype around ChatGPT and AI image tools. Timnit Gebru is the founder and executive director of the Distributed AI Research Institute and former co-lead of the Ethical AI research team at Google. You can follow her on Twitter at @timnitGebru. Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Follow the podcast (@techwontsaveus) and host Paris Marx (@parismarx) on Twitter, support the show on Patreon, and sign up for the weekly newsletter. The podcast is produced by Eric Wickham and is part of the Harbinger Media Network. Also mentioned in this episode: Please participate in our listener survey this month to give us a better idea of what you think of the show: https://forms.gle/xayiT7DQJn56p62x7 Timnit wrote about the exploited labor behind AI tools and how effective altruism is pushing a harmful idea of AI ethics. Karen Hao broke down the details of the paper that got Timnit fired from Google. Emily Tucker wrote an article called “Artifice and Intelligence.” In 2016, ProPublica published an article about technology being used to “predict” future criminals that was biased against black people. In 2015, Google Photos classified black women as “gorillas.” In 2018, it still hadn't really been fixed. Artists have been protesting AI-generated images that train themselves on their work and threaten their livelihoods. OpenAI used Kenyan workers paid less than $2 an hour to try to make ChatGPT less toxic. Zachary Loeb described ELIZA in his article about Joseph Weizenbaum's work and legacy. Support the show
He designed the first computer conversation program, called ELIZA. But then Joseph Weizenbaum was appalled at how seriously many people took this relatively simple program, and he became a critic of thoughtless faith in computers. By Robert Brammer. www.deutschlandfunkkultur.de, From the Archives. Direct link to the audio file
Part one of the history of AI (heavily condensed, of course). Some of the figures mentioned: Alan Turing, author of a seminal article on AI. John McCarthy, creator of Lisp. Frank Rosenblatt, creator of the Perceptron. Ray Solomonoff, creator of the concept of algorithmic probability. Joseph Weizenbaum, creator of ELIZA and a philosopher of computing. Marvin Minsky, author of the book Perceptrons.
I recently got an email from Jeff Shrager, who said he'd been working hard to solve a mystery about some famous code. Eliza, the chatbot, was built starting in 1964, and she didn't answer questions like Alexa or Siri. She asked questions. She was a therapist chatbot and quickly became famous after being described in a 1966 paper. But here is the mystery. We're not sure how the original version worked. Joseph Weizenbaum never released the code. But Jeff tracked it down, and some of the things we thought we knew about Eliza turned out to be wrong. Episode Page Support The Show Subscribe To The Podcast Join The Newsletter
According to various media reports, a software engineer at Google recently concluded that the chatbot he was working with had developed a consciousness of its own. That was of course not the case, but we take the incident as an occasion to look more closely at the topic of chatbots. In the current podcast episode we first look at how such a bot fundamentally works. Along the way we discuss Joseph Weizenbaum and his program Eliza, which was already said to be intelligent back in 1966. After that, we turn to Munich startups and their solutions. In detail we discuss: E-Bot 7, Messengerpeople, Chatchamp, and Convaise. And with Neosfer we introduce an investor whom many may know better under its old name: Main Incubator. The name change came about after Commerzbank's early-stage investor decided to no longer invest only in fintechs, but to also add greentech startups to its portfolio. You can find all the links mentioned in the podcast in the article accompanying the episode: https://www.munich-startup.de/83229/podcast-chatbots/ ---------- You can of course find more news from the Munich startup world regularly on our news portal: https://www.munich-startup.de/ And depending on which podcast channel you use, we are of course also happy about likes, ratings, comments, and more.
Imagine a game that begins with a printout that reads: You are standing at the end of a road before a small brick building. Around you is a forest. A small stream flows out of the building and down a gully. In the distance there is a tall gleaming white tower. Now imagine typing some information into a teletype and then reading the next printout. And then another. A trail of paper lists your every move. This is interactive gaming in the 1970s. Later versions had a monitor, so a screen could just show a cursor and the player needed to know what to type. Type N and hit enter and the player travels north. “Search” doesn't work but “look” does. “Take water” works, as does “Drink water”, but it takes hours to find dwarves and dragons and figure out how to battle or escape. This is one of the earliest games we played, and it was marvelous. The game was called Colossal Cave Adventure, and it was one of the first conversational adventure games. Many came after it in the 70s and 80s, in an era before good graphics were feasible. But the imagination was strong. The Oregon Trail was written before it, in 1971, and Trek73 came in 1973, both written for HP minicomputers. Dungeon was written in 1975 for a PDP-10. The author, Don Daglow, went on to work on games like Utopia and Neverwinter Nights. Another game called Dungeon showed up in 1975 as well, on the PLATO network at the University of Illinois Urbana-Champaign. As the computer monitor spread, so did games. William Crowther got his degree in physics at MIT and then went to work at Bolt Beranek and Newman during the early days of the ARPANET. He was on the IMP team, the people who developed the Interface Message Processor, the first nodes of the packet-switching ARPANET, the ancestor of the Internet. They were long hours, but when he wasn't working, he and his wife Pat explored caves. She was a programmer as well. Or he played the new Dungeons & Dragons game that was popular with other programmers.
The two got divorced in 1975, and like many suddenly single fathers he searched for something for his daughters to do when they were at the house. Crowther combined exploring caves, Dungeons & Dragons, and FORTRAN to get Colossal Cave Adventure, often just called Adventure. And since he worked on the ARPANET, the game found its way out onto the growing computer network. Crowther moved to Palo Alto and went to work for Xerox PARC in 1976 before going back to BBN and eventually retiring from Cisco. Crowther loosely based the game mechanics on the ELIZA natural language processing work done by Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory in the 1960s. That had been a project to show how computers could appear to understand text typed into them. It was most notably used in tests to have a computer provide therapy sessions. And writing software for the kids or gaming can be therapeutic as well. As can replaying happier times. Crowther had explored Mammoth Cave National Park in Kentucky in the early 1970s. The locations in the game follow his notes about the caves, with players exploring the area using natural language while the computer looked for commands in what was entered. The original FORTRAN code ran about 700 lines for the PDP-10 he had at his disposal at BBN. When he was done he went off on vacation, and the game spread. Programmers in that era just shared code. Source needed to be recompiled for different computers, so they had to. Another programmer was Don Woods, who also used a PDP-10. He had gone to Princeton in the 1970s and was working at the Stanford AI Lab, or SAIL, at the time. He came across the game and asked Crowther if it would be OK to add a few features, and did. His version got distributed through DECUS, or the Digital Equipment Computer Users Society. A lot of people went there for software at the time. The game was up to 3,000 lines of code when it left Woods.
The adventurer could now enter the mysterious cave in search of the hidden treasures. The concept of the computer as a narrator began with Colossal Cave Adventure and is now widely used, although we now have vast rendered scenery and can point and click where we want to go, so we don't need to type commands as often. The interpreter looked for commands like “move”, “interact” with other characters, “get” items for the inventory, etc. Woods went further and added more words and the ability to interpret punctuation as well. He also added over a thousand lines of text used to identify and describe the 40 locations. Woods continued to update the game until the mid-1990s. James Gillogly of RAND ported the code to C so it would run on the newer Unix architecture in 1977, and it's still part of many a BSD distribution. Microsoft published a version of Adventure in 1979 that was distributed for the Apple II and TRS-80 and followed that up in 1981 with a version for Microsoft DOS, or MS-DOS. Adventure was now a commercial product. Kevin Black wrote a version for IBM PCs. Peter Gerrard ported it to the Amiga. Bob Supnik rose to Vice President at Digital Equipment, not because he ported the game, but it didn't hurt. And throughout the 1980s, the game spread to other devices as well. Peter Gerrard implemented the version for the Tandy 1000. The Original Adventure was a version that came out of Aventuras AD in Spain, and they gave it one of the biggest updates of all. Colossal Cave Adventure was never forgotten, even as Zork eventually replaced it. Zork came along in 1977 and Adventureland in 1979. Ken and Roberta Williams played the game in 1979. Ken had bounced around the computer industry for a while and had a teletype terminal at home when he came across Colossal Cave Adventure in 1979. The two became transfixed and opened their own company to make the game they released the next year, called Mystery House.
And the text adventure genre moved to a new level when they sold 15,000 copies and it became the first hit. Rogue and others followed, increasingly interactive, until fully immersive graphical games replaced the adventure genre in general. That process began when Warren Robinett of Atari created the 1980 game Adventure. Robinett saw Colossal Cave Adventure when he visited the Stanford Artificial Intelligence Laboratory in 1977. He had been inspired into a life of programming by a college programming professor, Ken Thompson, who taught while on sabbatical from Bell Labs. That's where Thompson, with Dennis Ritchie and one of the most amazing teams of programmers ever assembled, gave the world Unix and the C programming language. Robinett's Adventure went on to sell over a million copies, and the genre of fantasy action-adventure games moved from text to video.
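The verb-noun command interpretation described above can be sketched in a few lines. This is only the general idea, not Crowther's method (his FORTRAN code matched truncated words against vocabulary tables); the items and responses here are made up for the example:

```python
# A rough sketch of an Adventure-style two-word parser: split the input,
# treat the first word as a verb and the second as a noun, and dispatch.
inventory = []
location_items = {"water", "lamp"}  # what the player can see here

def parse(command: str) -> str:
    words = command.lower().split()
    verb = words[0] if words else ""
    noun = words[1] if len(words) > 1 else ""
    if verb in ("take", "get") and noun in location_items:
        location_items.remove(noun)
        inventory.append(noun)
        return f"You take the {noun}."
    if verb == "look":
        return "You see: " + ", ".join(sorted(location_items))
    if verb == "n":
        return "You walk north."
    return "I don't understand that."  # "search" fails, just like in 1977

print(parse("look"))
print(parse("take water"))
print(parse("n"))
```

The charm (and frustration) of the genre comes straight out of this design: the game only understands the exact vocabulary in its tables, so "look" works while "search" does not.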
In this episode, Dr. Lukas Lang is our guest again. We talk about data science and machine learning (also called "artificial intelligence"). This is an area with a great deal of potential for our future, but like all such areas it also generates plenty of dangers, challenges, and hype. Lukas is a perfect conversation partner for this topic because he has worked both in top-level research and with these subjects in industrial practice. That mix seems very useful to me for complex technical questions and problems. After studying computer science, Lukas earned a doctorate specializing in computational science. He then spent several years in university research in mathematical image and data analysis, most recently at the University of Cambridge. His work has applications in medical imaging, in molecular and cell biology, and in computer vision. He currently leads the "Data Science and AI" business unit of a spin-off of the international industrial group Voestalpine. His team works on implementing data projects in the production and processing of specialty metals, and on building a global data science program for the production sites. We have split this extensive topic into two episodes: In the first episode we begin to introduce data science, including by way of examples, starting with historical cases as well as present-day applications. We span an arc from Tycho Brahe and Florence Nightingale to modern voice assistants and decision support in military and civilian settings.
Lukas then gives an overview of key principles and terms that come up again and again in this context, such as data science, the role of classical statistics, modeling, visualization, EDA, AI, machine learning, multivariate statistics, data quality, and much more. We then discuss the thesis, circulating for some time now, that thanks to data and "AI" we no longer need models or theory (The End of Theory), but can simply learn from data, and that this suffices for a scientific view of the world. We then discuss the possibilities, business models, and limits of machine learning and data science. Who actually makes decisions today, and what is the role and function of a data scientist? Should humans always have the last word on important decisions? Is that even (still) realistic? What role do regulatory measures such as the current EU framework play? In the second episode, building on this, we will ask how much of the current claims in this field are reality and how much is hype. What can we expect in the future, both positive and negative? What are the dominant research questions, where do the limits lie, where do unexpected effects arise, and which ethical questions do these new possibilities raise? xkcd cartoon. Concretely, there is a tension between data minimization and the idea of collecting everything because we might somehow be able to use it in the future. But does the data scientist even want to drown in data? Do more data lead to better decisions? We again discuss concrete examples of good and problematic applications, such as predictive policing, mapping, and "AI" for military drone pilots. What individual responsibility do we derive from this for engineers? How does Lukas himself deal with these challenges?
References Lukas Lang Lukas's personal website Other episodes Episode 40: Software Nachhaltigkeit, ein Gespräch mit Philipp Reisinger Episode 37: Probleme und Lösungen Episode 32: Überleben in der Datenflut – oder: warum das Buch wichtiger ist als je zuvor Episode 31: Software in der modernen Gesellschaft – Gespräch mit Tom Konrad Episode 25: Entscheiden unter Unsicherheit Episode 19: Offene Systeme – Teil 1 and Episode 20, Teil 2 Episode 6: Messen, was messbar ist? Subject references Adhikari, DeNero, Jordan, Interleaving Computational and Inferential Thinking: Data Science for Undergraduates at Berkeley Melanie Mitchell, Artificial Intelligence: A Guide for Thinking Humans (2020) Michael I. Jordan, The revolution hasn't happened yet Hannah Fry, What data can't do Peter Coy, Goodhart's Law Rules the Modern World. Here Are Nine Examples Roberts et al., Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans Antun et al., On instabilities of deep learning in image reconstruction and the potential costs of AI Use of AI in breast cancer detection: 94% of AI systems evaluated in these studies were less accurate than a single radiologist, and all were less accurate than a consensus of two or more radiologists Lukas Lang, What is Data Science?
Seth Stephens-Davidowitz, Everybody Lies Evgeny Morozov, To Save Everything, Click Here (2014) Meredith Broussard, Artificial Unintelligence (2018) Cathy O'Neil, Weapons of Math Destruction (2016) Richard David Precht, Künstliche Intelligenz und der Sinn des Lebens (2020) Jerry Z. Muller, The Tyranny of Metrics (2018) Joseph Weizenbaum, Computermacht und Gesellschaft (2001) Margaret Heffernan, Uncharted: How to Map the Future (2021) Edward Snowden, Permanent Record (2019) Shoshana Zuboff, Surveillance Capitalism (2019) Hartmut Rosa, Unverfügbarkeit (2020) Duncan J. Watts, Everything Is Obvious, Once You Know the Answer (2011) Gerd Gigerenzer, Klick: Wie wir in einer digitalen Welt die Kontrolle behalten und die richtigen Entscheidungen treffen (2021) Byung-Chul Han, Im Schwarm: Ansichten des Digitalen (2015) Marianne Bellotti, A.I. is solving the wrong problem Hannah Fry, Hello World: How to Be Human in the Age of Algorithms (2018) Hannah Fry, What Statistics Can and Can't Tell Us About Ourselves, The New Yorker (2019) David Spiegelhalter, The Art of Statistics: Learning from Data (2019) James, Witten, Hastie & Tibshirani, Introduction to Statistical Learning (2021) The End of Theory: The Data Deluge Makes the Scientific Method Obsolete, Wired 6/2008 Rutherford and Fry on Living with AI: The Biggest Event in Human History DeepMind: The Podcast David Donoho, 50 Years of Data Science, Journal of Computational and Graphical Statistics (2017) Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (2021) Michael Roberts et al., Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans, Nature Machine Intelligence (2021) Neil Thompson, Deep Learning's Diminishing Returns: The Cost of Improvement Is Becoming Unsustainable, IEEE Spectrum (2021)
What do we mean by "man-machine"? And by "modern man"? Is technology Evil? Questions that have occupied the finest philosophers, sociologists, and assorted scholars for centuries are dispatched in today's episode with the usual lightness and without any pretension. Piotr Ouspensky, Georges Gurdjieff, Umberto Galimberti, E.T.A. Hoffmann, Joseph Weizenbaum, Erich Fromm, and others try to have their say amid spaceships, Viennese waltzes, and dead cell-phone batteries. The episode closes with a short sketch featuring a mysterious (and exceptional) guest.
Today we look back on the life of Joseph Weizenbaum. A German emigrant to the USA, he had his first successes in the development of computer architecture and in computational linguistics. After several formative experiences, Weizenbaum became an ardent critic and a great admonisher in the field of artificial intelligence.
Daniel is currently dealing with chatbots in several ways at once, which naturally interests Thomas. Some general chatbot history is thrown in on top.
Paris Marx is joined by Zachary Loeb to discuss the history of tech criticism, with a focus on Joseph Weizenbaum and Lewis Mumford, as well as why the techlash is a narrative that suits Silicon Valley. Zachary Loeb is a PhD candidate at the University of Pennsylvania whose dissertation research looks at Y2K. Follow Zachary on Twitter as @libshipwreck, and check out his Librarian Shipwreck blog.
Iron Man's "Jarvis" is a science-fiction example of artificial intelligence with which many of us are familiar. The idea of artificial intelligence in chatbots has become reality. Many businesses already implement chatbots: artificial intelligence that mimics the conversational abilities of a human. Jennifer Etchegary, an expert in technology procurement and implementation, helps companies find and implement the right technologies in the most effective ways possible. Some of her specialties include virtual reality, augmented reality, and chatbots, along with a variety of other technologies. In today's episode, we discuss how businesses can effectively use chatbots and what benefits they can expect when it is done right. Chatbots In today's market, we are seeing a shift towards everything digital. Jennifer said, "If you're not digitally transforming, you're losing." New technologies are being developed at a higher rate than ever before. Chatbots are one of the technologies businesses are starting to use a lot. A chatbot is a conversational experience that uses artificial intelligence (AI) and natural language processing to mimic a conversation between real people. Essentially, it is a robot that can hold a conversation. Surprisingly, the first chatbot was developed even before the personal computer. Joseph Weizenbaum, a computer scientist and professor at the MIT Artificial Intelligence Laboratory, created the first chatbot, named Eliza, in 1966 (Source: Daffodil). The personal computer wasn't developed until 1974 (Source: Britannica). Eliza was designed to imitate a therapist by asking open-ended questions and follow-ups. She operated by recognizing a keyword or phrase and then producing a pre-programmed response (Source: Analytics India Magazine). The world didn't truly see the rise of chatbots until the smartphone era. In 2010, Apple acquired Siri and then revealed her as an integrated part of the iPhone in 2011 (Source: SRI International).
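The keyword-and-template mechanism described above for Eliza can be sketched in a few lines of Python. This is a minimal illustration, not Weizenbaum's original DOCTOR script; the rules and response templates here are invented for the example.

```python
import random
import re

# Each rule pairs a keyword pattern with canned, open-ended response
# templates. The captured phrase is reflected back into the reply,
# which is what made ELIZA feel conversational.
RULES = [
    (r"\bI need (.+)", ["Why do you need {0}?",
                        "Would getting {0} really help you?"]),
    (r"\bI am (.+)", ["How long have you been {0}?",
                      "Why do you think you are {0}?"]),
    (r"\bmy (mother|father|family)\b", ["Tell me more about your {0}."]),
]
# With no keyword match, fall back to a generic open-ended prompt.
DEFAULT = ["Please, go on.", "Can you elaborate on that?"]

def respond(message: str) -> str:
    """Return the first matching pre-programmed response, else a generic one."""
    for pattern, templates in RULES:
        match = re.search(pattern, message, re.IGNORECASE)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULT)

print(respond("I need a vacation"))
```

The appeal and the limitation are both visible here: there is no understanding at all, only pattern matching, which is exactly what unsettled Weizenbaum when users confided in the program anyway.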
After Siri, Amazon created Alexa in 2015 and Google created Google Home in 2016 (Source: Daffodil). Since then, we have seen an increase in chatbots, even within smaller businesses. Chatbots are found on customer service phone lines, websites, and social media platforms. They have become increasingly popular because they offer many potential benefits. 5 Benefits of a Chatbot Here are five potential benefits chatbots can give us in our business. Saves Time Chatbots can save time for both the company and the customer. They can maintain a 24/7 response system so customers have constant access to communication. With a 24/7 customer service line, customers can solve their problems faster without ever needing to reach a live customer service representative. Over 50% of customers expect 24/7 availability from businesses (Source: AI Multiple). If we implement a chatbot, it can increase customer engagement and satisfaction by meeting their expectation of constant service. And if a chatbot is doing it, we don't have to spend so much time doing it ourselves. A chatbot can also significantly reduce customer service response times, depending on how it is programmed. A chatbot can help answer any level 1 question, such as how to find something on our website, how to check the status of an order, or what our business hours are. Then, if a customer comes to us with a level 2 or 3 question, the chatbot can transfer them directly to the person they need for that question. By answering the easy questions for us, chatbots reduce the number of calls or chats we have to take, leaving us more time to help customers with bigger problems. This can reduce the wait time on our phone lines and online chats, which benefits customers: a chatbot can resolve their simple questions without a 30-minute hold to reach a customer service representative.
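The tiered triage described above, where the bot answers routine "level 1" questions itself and escalates everything else to a person, can be sketched as a simple keyword lookup. This is a hypothetical illustration: the FAQ entries, keywords, and routing labels are invented for the example, not taken from any real chatbot platform.

```python
# Level 1 questions the bot can answer directly from a canned FAQ.
FAQ = {
    "business hours": "We are open 9am-5pm, Monday through Friday.",
    "order status": "You can check your order status on the Orders page.",
}

def triage(question: str) -> tuple[str, str]:
    """Route a question: ("bot", answer) for level 1, ("human", note) otherwise."""
    text = question.lower()
    for keyword, answer in FAQ.items():
        if keyword in text:
            # Recognized a routine question: the bot replies immediately.
            return ("bot", answer)
    # Anything else is treated as level 2/3 and handed to an agent.
    return ("human", "Transferring you to a customer service representative.")

handler, reply = triage("What are your business hours?")
print(handler, "->", reply)
```

Real systems use intent classifiers rather than substring matching, but the routing decision itself (answer or escalate) has the same shape.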
"Initially a chatbot might only save 5% of your customer service time, [but] as it gets smarter and smarter, as you train it, it can get up to about 30% to 40% to 60%," Jennifer said. "It saves a lot of time." Reduced response time increases not only customer satisfaction but also employee satisfaction, since employees no longer have to answer back-to-back calls constantly. Reduces Costs By saving time, chatbots can also help reduce costs. Juniper Research estimated that chatbots could save the banking industry $7.3 billion globally by 2023, up from an estimated $209 million in 2019 (Source: Juniper Research). Instead of paying a group of employees to answer phone calls or respond to chats, chatbots can do it. While chatbots may not be able to take over the role of the customer service representative completely, they can answer simple questions so we don't need as many customer service employees. Chatbots can help businesses save up to 30% of their customer support costs by reducing response times and answering 80% of routine questions (Source: IBM). As chatbots reduce customer service response times and offer 24/7 availability, they can also increase sales. Business leaders have claimed that chatbots increased their sales by an average of 67% (Source: Forbes). Customers often look for the fastest solution possible. If they have to wait 30 minutes on hold or wait until the next morning, we may have already lost their business. Increases Efficiency Chatbots can help increase efficiency. One way they can do this is by being programmed to speak many different languages. If someone calls in but only speaks Spanish, our chatbot can take the call; it can be programmed to respond in any language we choose, such as Spanish, Chinese, French, or Portuguese. Chatbots can also get instant access to information. Jennifer had a vice president come to her for help because she was having a really difficult time with her contact center.
They were seeing many inefficiencies, such as not being able to access information as easily as they would like. Jennifer connected her with another person in her network, and they were able to develop a chatbot that improved the efficiency of information access by about 40%. A chatbot can also be custom-built to better respond to our customers' needs. "[There is] the intelligence advantage there, meaning [chatbots] have a deeper understanding of how people are speaking, and you're able to respond better to their needs," Jennifer said. As we program chatbots to understand our customers' needs, we can increase our efficiency. Is Data-Driven Chatbots are scalable and data-driven. When a customer asks a question in our online chat, our chatbot can record and store that information. They can take frequently asked questions or common problems and turn them into data we can use to improve the customer experience. Jennifer explained that one of the biggest tectonic shifts in business is the increase of data. "A lot of the data captured, especially in chatbots, is very powerful," Jennifer said. We should lean into the increased data we get from chatbots and use it to understand what our customers are telling us. Instead of going off a gut feeling, we can use actual data to make customer-based decisions. As we do this, our products and services will resonate with customers a lot more. Beyond answering questions, chatbots can also reach out with surveys to gather customer information and track a customer's purchasing patterns. With this data, we can find new ways to direct our customers' responses and guide them in their journey to a purchase decision. Increases User Engagement Chatbots can also increase user engagement. With 24/7 availability, chatbots can always communicate and engage with our customers when needed. A survey found that 83% of online shoppers need support while shopping (Source: Econsultancy).
By having a chatbot readily available, we will be able to communicate with our audience more frequently. Research also found that companies that engaged with their customers on social media increased their customers' spending by 20% to 40% (Source: Bain & Company). As chatbots increase our user engagement, we can also increase our sales. Jennifer shared the example of a large beverage company that ran a big campaign for one of its soft drinks. The company used a chatbot with a Halloween theme and gamified it: when a customer clicked on the chatbot, it took them through a game that ended with a coupon for the drink. This helped the company increase its engagement by about 60%. Another example Jennifer shared was from a retail company. She helped them create an augmented reality chatbot experience. While a customer shopped for sunglasses through the chatbot, they could "try on" different sunglasses through an augmented reality layer to see what they looked like on their face. "The user engagement exploded and the buy-ins were just unbelievable," Jennifer said. The Dangers of Chatbots Just because a chatbot may be beneficial for some businesses doesn't mean it will benefit every business. If we want to install a chatbot, there has to be a great use for it. We should ask ourselves, "How will this improve my customer's experience?" Jennifer said that one of the biggest mistakes she made in her career was overcomplicating things, which wasted time and money. There is power in simplicity, and sometimes a chatbot can take away from that. We should only use a chatbot if we believe it will help the customer. Chatbots carry the danger of taking away the personalized touch. With chatbots, we need to be very careful about the fine line between efficiency and human connection. There is a certain extent to which technology can help. Past that point, it makes the experience more frustrating.
Chatbots are an effective way to handle level 1 customer service questions. However, if customers have a more complex issue, they should be able to reach a real person. If a chatbot can't provide the right help, it begins to feel like an obstacle standing between the customer and the customer service representative. When I asked Jennifer for her best advice about retaining the human touch while implementing a chatbot, she said we should: Get multiple opinions and feedback on our chatbot. Make sure the chatbot has the right personality and brand voice. Focus on conversational design; we need to make sure the conversation feels natural. If we decide a chatbot is right for our business, we can take the next steps to create one that fits our brand's personality. We should only choose a chatbot after significant research and time spent determining whether it is the right solution for our business. Key Takeaways Thank you so much, Jennifer, for sharing your stories and knowledge with us today. Here are some of my key takeaways from this episode: Chatbots can save a company and its customers lots of time. They can maintain a 24/7 response system so customers have constant access to communication. Chatbots can help reduce customer service costs and increase sales. Chatbots can help increase efficiency. Chatbots are scalable and data-driven; they can take frequently asked questions or common problems and turn them into data we can use to improve the customer experience. Chatbots can increase user engagement. If we want to install a chatbot, there should be a great use case for it. Chatbots carry the danger of overcomplicating things and reducing personalized connection. Connect with Jennifer If you enjoyed this interview and want to learn more about Jennifer or connect with her, you can find her on LinkedIn or her company's website chatc.ai. Want to be a Better Digital Monetizer? Did you like today's episode?
Then please follow these channels to receive free digital monetization content: Get a free Monetization Assessment of your business Subscribe to the free Monetization eMagazine. Subscribe to the Monetization Nation YouTube channel. Subscribe to the Monetization Nation podcast on Apple Podcast, Google Podcasts, Spotify, or Stitcher. Follow Monetization Nation on Instagram and Twitter. Share Your Story Do you use chatbots? If so, what benefits have you seen? Please join our private Monetization Nation Facebook group and share your insights with other digital monetizers. Read at: https://monetizationnation.com/blog/128-5-benefits-of-chatbots/
This episode has been made entirely by Artificial Intelligence. It explores the rise of the AI therapist, an idea pioneered in the 1960s by Joseph Weizenbaum and his famous ELIZA program. The goal of automating therapy, and cutting out the expensive human therapist, sounds attractive, but it can lead to some strange results. We found a man with NO problems to see if the offbeat concept would confuse the therapeutic algorithm.
In the third installment of our Kentucky Route Zero miniseries, we take a seat for The Entertainment interlude, before exploring the physical manifestation of debt in Act III. With recurring guest co-hosts Stephanie Boluk and Patrick LeMieux, and bonus guest Sarah Elmaleh! Show notes: Kentucky Route Zero Stephanie Boluk Patrick LeMieux Sarah Elmaleh Giving Games a Voice with Sarah Elmaleh Resonance game The Consolidated Power Company The Entertainment paperback Maxim Gorky, The Lower Depths Samuel Taylor Coleridge, Kubla Khan Eugene O'Neill, The Iceman Cometh The Last of Us Samuel Beckett, Waiting for Godot Julee Cruise, Rockin' Back Inside My Heart on Twin Peaks Colossal Cave Adventure Junebug, Too Late to Love You Vannevar Bush, As We May Think Steve Russell et al., Spacewar! Joseph Weizenbaum, Eliza/Doctor Douglas Engelbart, The Mother of All Demos Ted Nelson, Computer Lib/Dream Machines Roberta and Ken Williams, Mystery House David Graeber, Debt: The First 5,000 Years IF titles: the next generation of generation
Have you seen Netflix's 'The Social Dilemma'? If so, have you decreased your screen time or changed your social media habits? In this 'Are You A Robot?' episode, Zachary Loeb shares his views of the documentary and suggests what more needs to happen in order to solve the issue of dangerous social media algorithms. This episode is brought to you by EthicsGrade, an ESG Ratings agency with a particular focus on Technology Governance, especially AI Ethics. You can find more information about EthicsGrade here: https://www.ethicsgrade.io/ You can also follow EthicsGrade on Twitter (@EthicsGrade) and LinkedIn: https://bit.ly/2JCiQOg Check out Zachary's blog ‘LibrarianShipwreck': https://bit.ly/3owwkKo Follow Demetrios on Twitter @Dpbrinkm and LinkedIn: https://bit.ly/2TPrA5w Connect with Us: Join our Slack channel for more conversation about the big ethics issues that rise from AI: https://bit.ly/3jVdNov Follow Are You A Robot? on Twitter, Instagram and Facebook: @AreYouARobotPod Follow our LinkedIn page: https://bit.ly/3gqzbSw Check out our website: https://www.areyouarobot.co.uk/ Resources mentioned in this episode: 'The Social Dilemma' documentary: https://bit.ly/2VZ7req Article on ‘The Social Dilemma': https://bit.ly/2VTGmt1 Y2K: https://bit.ly/37PEB5w Zachary's article on Y2K: https://wapo.st/3qGAa5F Lewis Mumford: https://bit.ly/3836tn5 Joseph Weizenbaum: https://bit.ly/3gw7nfi ‘When Old Technologies were New' by Carolyn Marvin: https://amzn.to/2JRj30v Centre for Humane Technology: https://bit.ly/3701DYb Zachary's blog, ‘LibrarianShipwreck': https://bit.ly/3owwkKo
In which an MIT professor worried about a digital future accidentally creates the field of human-machine conversation, and John flirts with a 54-year-old computer program. Certificate #36441.
Mavis Beacon Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we're going to give thanks to a wonderful lady. A saint. The woman that taught me to type: Mavis Beacon. Over the years I often wondered what Mavis was like. She took me from a kid filled with wonder about these weird computers we had floating around school to someone that could type over a hundred words a minute. She always smiled. I never saw her frown once. I thought she must be a teacher somewhere, a kind lady whose only goal in the world was to teach young people how to type. And indeed, she's taught over a million people to type in her days as a teacher. In fact, she'd been teaching for years by the time I first encountered her. Mavis Beacon Teaches Typing was initially written for MS-DOS in 1987 and released by The Software Toolworks. Norm Worthington and Mike Duffy joined Walt Bilofsky to start the company out of Sherman Oaks, California, in 1980; they also made Chessmaster in 1986. They started with HDOS, a health app for the Osborne 1. They worked on Small C and Grogramma, releasing a conversation simulation tool from Joseph Weizenbaum in 1981. They wrote Mavis Beacon Teaches Typing in 1987 for IBM PCs. It took "three guys, three computers, three beds, in four months." It was an instant success. They went public in 1988 and were acquired by Pearson in 1994 for around half a billion dollars, becoming Mindscape. By 1998 she'd taught over 6,000,000 kids to type. Today, Encore Software produces the software and Software MacKiev distributes a version for the Mac. The software integrates with iTunes, supports competitive typing games, and still tracks words per minute. But who was Mavis? What inspired her to teach generations of children to type? Why hasn't she aged?
Mavis was named after Mavis Staples, but she was a beacon to anyone looking to learn to type, thus Mavis Beacon. Mavis was initially portrayed by Haitian-born Renée L'Espérance, who was discovered working behind the perfume counter at Saks Fifth Avenue Beverly Hills by talk-show host Les Crane in 1985. He then brought her in to be the model. Regrettably, featuring an African-American woman caused some marketing problems, but it didn't impact the success of the release. So until the next episode, think about this: Mavis Beacon, real or not, taught me and probably another 10 million kids to type. She opened the door for us to do more with computers. I could never write code or books or even these episodes at this rate if it hadn't been for her. So I owe her my sincerest gratitude. And Norm Worthington, for having the idea in the first place. And I owe you my gratitude, for tuning into another episode of the History of Computing Podcast. We're lucky to have you. Have a great day!
How chatbots are already having a lasting impact on our private lives today. In memory of Joseph Weizenbaum, the inventor of "Eliza" (1966).
Computer scientist Joseph Weizenbaum wasn't replaced by an artificially intelligent computer. He was chased out of his job by the one he created.
The loss of freedom and privacy caused by anti-terror legislation and surveillance mania in recent years is becoming increasingly palpable. Politics is spinning out of control, relying ever more on bans, repression, and restrictions on freedom of expression and movement. And this worldwide. When the hackers of the 1980s began pursuing their ideals, they saw themselves as a force opposing this trend, which was foreseeable even then. Today the fight seems lost. Or was it merely one battle that wasn't won, and the future looks less threatening than it appears? We have set up a participation page in the Chaosradio wiki: http://wiki.chaosradio.ccc.de/Chaosradio_110. Links: * 22C3: We Lost The War http://events.ccc.de/congress/2005/fahrplan/events/920.en.html * Aggregat7: Why We Have Not Lost The War http://acker3.ath.cx/wordpress/archives/17 * No Jokes Please http://wrongcrowd.com/albums/misc/no_jokes_please.jpg * Going Postal http://en.wikipedia.org/wiki/Going_postal * Petition against data retention http://itc.napier.ac.uk/e-Petition/bundestag/view_petition.asp?PetitionID=60 * John Gilmore: I was ejected from a plane for wearing "Suspected Terrorist" button http://www.politechbot.com/p-04973.html * RFID Implant Photos http://www.flickr.com/photos/28129213@N00/sets/181299/ * n-tv: Account surveillance is a hit http://n-tv.de/628966.html * Das Gedachtsnistelephon (Der Singweisengriffer) http://www.trust-us.ch/habi1/094_gedachnistelefon.html * Goocam World Map http://www.butterfat.net/goocam/ * Joseph Weizenbaum http://de.wikipedia.org/wiki/Joseph_Weizenbaum * Anonymous blogging: Invisiblog http://invisiblog.com/ Music: * Pumpanickle http://pumpanickle.de/ * Pumpanickle: Hero of the Day http://pumpanickle.de/songs/Pumpanickle-Hero-of-the-Day.mp3
Joseph Weizenbaum described himself as a dissident and heretic of computer science. In the second half of the 1960s he worked on building the ARPANET, a precursor of the internet. In 1966 Weizenbaum published the computer program ELIZA, with which he wanted to demonstrate the processing of natural language by a computer; ELIZA was celebrated as a milestone of "artificial intelligence," and its variant DOCTOR simulated a conversation with a psychologist. Weizenbaum was appalled by how seriously many people took this relatively simple program, disclosing their most intimate details in dialogue with it. Yet the program was never designed to replace a human therapist. This formative experience turned Weizenbaum into a critic of thoughtless faith in computers. Today ELIZA is regarded as the prototype of modern chatbots. From that time on, Weizenbaum urged a critical approach to computers and emphasized the scientist's responsibility for what he does. In particular, he stressed that actual decision-making power must always remain in human hands, even when artificially intelligent systems are used as aids for gathering information. Listen here to a lecture from 1991 that Joseph Weizenbaum gave at the international congress "Das Ende der großen Entwürfe und das Blühen systemischer Praxis." Follow us on Spotify open.spotify.com/show/0HVLyjAHZkFMVr9XDATMGz Facebook www.facebook.com/pg/carlauerverlag/ Twitter twitter.com/carlauerverlag Instagram www.instagram.com/carlauerverlag/ YouTube www.youtube.com/carlauerverlag Soundcloud @carlauerverlag Or visit us at www.carl-auer.de/