Podcast appearances and mentions of Blaise Agüera y Arcas

  • 29 PODCASTS
  • 32 EPISODES
  • 49m AVG DURATION
  • INFREQUENT EPISODES
  • Mar 10, 2025 LATEST

POPULARITY (2017–2024)


Best podcasts about Blaise Agüera y Arcas

Latest podcast episodes about Blaise Agüera y Arcas

Soundside
Can the artificial really be 'intelligent'? This researcher wants us to think bigger

Soundside

Play Episode Listen Later Mar 10, 2025 31:50


Artificial intelligence is starting to underpin everything we do, whether we like it or not. And at the highest levels, companies like Google and OpenAI say their AI is on the verge of crossing a humanlike threshold that we've only seen in science fiction. This is prompting all kinds of conversations about sentience and the possible dangers of a superintelligent computer system. But the definition of “Artificial General Intelligence,” or AGI, is controversial, and many researchers aren't even sure today's programs fit our common understanding of “intelligence” at all. They argue ChatGPT isn't really thinking -- it's just really good at predicting the next sequence in a pattern (and copying someone else along the way). So what makes something intelligent? Or alive, for that matter? For Google's Blaise Agüera y Arcas, the most interesting part of examining AI breakthroughs has been how they connect to the evolution of life on Earth. In his new book, What Is Life?, he argues for a broadened definition of “intelligence” that includes things like single-celled organisms and even basic tools. And he says humans' development of technology -- most recently, AI -- is part of a long history of symbiotic relationships that have pushed our evolution forward. Guests: Blaise Agüera y Arcas, Vice President and CTO of Technology and Society at Google, where he leads a team called “Paradigms of Intelligence” researching the intersection of AI, biology, and philosophy. He is the author of What Is Life?, the first part of a broader work on intelligence at large. Related links: What Is Intelligence? | Antikythera. Thank you to the supporters of KUOW; you help make this show possible! If you want to help out, go to kuow.org/donate/soundsidenotes. Soundside is a production of KUOW in Seattle, a proud member of the NPR Network. See omnystudio.com/listener for privacy information.

Disintegrator
21. LIFE (w/ Blaise Agüera y Arcas)

Disintegrator

Play Episode Listen Later Nov 14, 2024 62:39


Blaise Agüera y Arcas is one of the most important people in AI, and apart from his leadership position as CTO of Technology & Society at Google, he has one of those résumés or affiliation lists that seems to span a lot of very fundamental things. He's amazing; the thoughtfulness and generosity with which he communicates on this episode gently embraced our brains while lasering them to mush. We hope you have the same experience. References include: Blaise's own books Who Are We Now?, Ubi Sunt, and the upcoming What Is Intelligence? He references James C. Scott's Seeing Like a State (which we strongly recommend), Benjamin Peters' How Not to Network a Nation, and Red Plenty by Francis Spufford. Strong recommendation also for Benjamin Labatut's When We Cease to Understand the World. Roberto references Luciana Parisi's Abstract Sex (our favorite book!) and the work of Lynn Margulis with respect to biology and reproduction. Blaise references James E. Lovelock's “Daisyworld” project with respect to the Gaia hypothesis. He also references the active inference thesis, e.g. that of Karl J. Friston, and the work of Dan Sperber and Hugo Mercier on reason. The cellular automata work referenced here involves the von Neumann cellular automaton and Wolfram's cellular automata. Wish us a happy one-year anniversary of the pod!

The Stephen Wolfram Podcast
Future of Science and Technology Q&A (October 11, 2024)

The Stephen Wolfram Podcast

Play Episode Listen Later Oct 23, 2024 83:35


Stephen Wolfram answers questions from his viewers about the future of science and technology as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-qa Questions include: I read that recent advancements in AI research are partly based on McCulloch and Pitts's famous paper on neural nets. Do you think there are more ideas worth exploring again in cybernetics? - What is the future of technology for speech recognition? - How do I know if I am speaking to a human? The future is crazy! - Future of finance! Talk about AI talking to AI for trading. - Getting an AI to understand economics seems like it'll be quite a step. - What's the difference between a computational and a mathematical model? - Have you seen Blaise Agüera y Arcas's recent paper on self-replicating programs? Published on arXiv recently. - Wouldn't chaos theory be an example of the computational case? You know the rules of the system but have to set the initial conditions to see how it plays out. - How do we prepare for the risk of bots/worms invading everyday life as we become more dependent on technology?

radinho de pilha
Life is not a miracle! The quantum brain, the transmutation of the elements

radinho de pilha

Play Episode Listen Later Aug 20, 2024 37:48


Blaise Agüera y Arcas on the Emergence of Replication and Computation – https://pca.st/g7fd2jen Blaise Agüera y Arcas https://en.wikipedia.org/wiki/Blaise_Ag%C3%BCera_y_Arcas falemos de coração (e da falta de) https://youtube.com/live/lhJ2e-ADNgM via ChatGPT – Aristotle on Storytelling Logic https://chatgpt.com/share/9d393157-2a30-4e43-8553-f03183aae8b0 Energy from Nuclear Waste: Switzerland Approves First Accelerator-Driven Reactor https://youtu.be/rLE0j1PQGPI?si=WIEBVXpfD5w1v2hk The Mysterious Origins of Sapphires Have Finally Been Deciphered http://sciencealert.com/the-mysterious-origins-of-sapphires-have-finally-been-deciphered Photon ...

Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas
286 | Blaise Agüera y Arcas on the Emergence of Replication and Computation

Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

Play Episode Listen Later Aug 19, 2024 80:35


Understanding how life began on Earth involves questions of chemistry, geology, planetary science, physics, and more. But the question of how random processes lead to organized, self-replicating, information-bearing systems is a more general one. That question can be addressed in an idealized world of computer code, initialized with random sequences and left to run. Starting with many such random systems, and allowing them to mutate and interact, will we end up with "lifelike," self-replicating programs? A new paper by Blaise Agüera y Arcas and collaborators suggests that the answer is yes. This raises interesting questions about whether computation is an attractor in the space of relevant dynamical processes, with implications for the origin and ubiquity of life. Support Mindscape on Patreon. Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/08/19/286-blaise-aguera-y-arcas-on-the-emergence-of-replication-and-computation/ Blaise Agüera y Arcas received a B.A. in physics from Princeton University. He is currently a vice president of engineering at Google, leader of the Cerebra team, and a member of the Paradigms of Intelligence team. He is the author of the books Ubi Sunt and Who Are We Now?, and the upcoming What Is Intelligence? Website | Google web page | Google Scholar publications. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
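The dynamic described above — that once a self-copying pattern appears in a population of random strings, replication becomes an attractor — has a simple cartoon version. The sketch below is purely illustrative: unlike the real experiments, nothing here is executed as code, and the motif b"SELF", the soup size, and the pairing rule are all invented for this toy.

```python
import random

# A "soup" of random byte strings interacts pairwise. Any string that
# happens to contain the (made-up) motif b"SELF" acts as a replicator
# and copies itself over its partner; point mutations keep injecting
# variation. Once a replicator exists, it tends to sweep the soup.
random.seed(1)
SOUP = [bytes(random.randrange(256) for _ in range(8)) for _ in range(256)]

def step(soup):
    random.shuffle(soup)
    for i in range(0, len(soup), 2):
        a, b = soup[i], soup[i + 1]
        if b"SELF" in a:          # replicator overwrites its partner
            soup[i + 1] = a
        elif b"SELF" in b:
            soup[i] = b
    # one random point mutation per generation
    j = random.randrange(len(soup))
    s = bytearray(soup[j])
    s[random.randrange(len(s))] = random.randrange(256)
    soup[j] = bytes(s)

# Seed a single replicator and watch it spread.
SOUP[0] = b"SELF" + SOUP[0][4:]
for _ in range(50):
    step(SOUP)
print(sum(b"SELF" in s for s in SOUP))  # nearly all of the 256 strings
```

The point of the toy matches the episode's framing: the interesting scientific question is not whether replicators dominate once present (they trivially do, as here), but whether they arise spontaneously from random code and interaction, which is what the paper investigates.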

Town Hall Seattle Arts & Culture Series
251. Robots Who Paint: What's Next with AI and Art?

Town Hall Seattle Arts & Culture Series

Play Episode Listen Later Apr 1, 2024 80:50


Three expert guests discuss the implications of AI and the fine arts in a conversation moderated by Steve Scher. Scientist and founder of the Artists and Machine Intelligence program at Google, Blaise Agüera y Arcas will offer his “news from the front” about the latest developments in AI capabilities and what he foresees ahead. Alex Alben, technology executive, author, and law professor, will review the implications of AI for the artist from the point of view of intellectual property: is anything on the internet up for grabs, or is compensation for image “borrowing” a possibility? Finally, painter Jason Puccinelli, who uses AI as one of his tools in image creation, will talk about what he finds exciting and useful, and what he finds problematic, about this new resource. Presented by Town Hall Seattle and Gage Academy of Art.

People I (Mostly) Admire
128. Are Our Tools Becoming Part of Us?

People I (Mostly) Admire

Play Episode Listen Later Mar 30, 2024 56:18


Google researcher Blaise Agüera y Arcas spends his work days developing artificial intelligence models and his free time conducting surveys for fun. He tells Steve how he designed an algorithm for the U.S. Navy at 14, how he discovered the truth about printing-press pioneer Johannes Gutenberg, and when A.I. first blew his mind. SOURCE: Blaise Agüera y Arcas, fellow at Google Research. RESOURCES: Who Are We Now?, by Blaise Agüera y Arcas (2023). "Artificial General Intelligence Is Already Here," by Blaise Agüera y Arcas and Peter Norvig (Noema Magazine, 2023). "Transformer: A Novel Neural Network Architecture for Language Understanding," by Jakob Uszkoreit (Google Research Blog, 2017). "Communication-Efficient Learning of Deep Networks from Decentralized Data," by H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Agüera y Arcas (arXiv, 2016). "How PhotoSynth Can Connect the World's Images," by Blaise Agüera y Arcas (TED Talk, 2007). "Has History Been Too Generous to Gutenberg?" by Dinitia Smith (The New York Times, 2001). EXTRAS: "'My God, This Is a Transformative Power,'" by People I (Mostly) Admire (2023). "How to Think About A.I.," series by Freakonomics Radio (2023). "Satya Nadella's Intelligence Is Not Artificial," by Freakonomics Radio (2023). "Yul Kwon (Part 2): 'Hey, Do You Have Any Bright Ideas?'" by People I (Mostly) Admire (2021). "Yul Kwon: 'Don't Try to Change Yourself All at Once,'" by People I (Mostly) Admire (2021).

Disintegrator
9. Alignment (w/ Benjamin Bratton)

Disintegrator

Play Episode Listen Later Feb 27, 2024 53:49


Benjamin Bratton writes about world-spanning intelligences, grinding geopolitical tectonics, “accidental megastructures” of geotechnical cruft, the millennia-long terraforming project through which humans rendered an earth into a world, and the question of what global-scale order means in the twilight of the Westphalian nation-state. Candidly, if either of us were to recommend a book to help you understand the present state of ‘politics' or ‘technology', we'd probably start with Bratton's The Stack — written 10 years ago, but still very much descriptive of our world and illuminative of its futures. If the first 10 minutes are too “tech industry” for you — just skip ahead. The whole conversation is seriously fire, and it spikes hit after hit of takes on privacy, bias, alignment, subjectivity, the primacy of the individual … all almost entirely unrepresented within the Discourse. Some references: We briefly talk about EdgeML, which essentially means the execution of ML models on small computers installed in a field location. Benjamin mentions his collaboration with renowned computer scientist and thinker Blaise Agüera y Arcas, whose work on federated learning is relevant to this stage of the conversation. Federated learning is a distributed training approach in which clients train on their own data and transmit only model updates, so local training sets never leave their own environments. Also — here's a link to their collaboration on “The Model is the Message.” Benjamin calls himself a bit of an “eliminative materialist” “in the Churchland mode,” meaning someone who believes that “folk psychologies” or “folk ontologies” (theories of how the mind works from metaphysics, psychoanalysis, or generalized psychology) will be replaced by frameworks from cognitive science or neuroscience. Benjamin calls out a collaboration with Chen Qiufan.
Check out Waste Tide — it's excellent sci-fi. The collaboration with Anna Greenspan and Bogna Konior discussed in the pod is called “Machine Decision Is Not Final,” out on Urbanomic. Shoshana Zuboff is a theorist who coined the term “surveillance capitalism,” referring to capital accumulation through a process of ‘dispossession by surveillance.' The implicit critique of “surveillance capitalism” in this episode hinges on its overemphasis on individual sovereignty. “Tay” was the infamous AI Twitter chatbot Microsoft rolled out for 16 hours before pulling it back for its controversial content. Antihumanism refers to a rejection of the ontological primacy and universalization of the human afforded to it through the philosophical stance of “humanism.” An “antihumanist” is someone who challenges the stability of the concept of the “human,” or at the very least its salience in cosmic affairs. Check out Benjamin's new piece in Tank Mag (Tank.tv); it's fire. And check out Anna Kornbluh's AWESOME “Immediacy, or The Style of Too Late Capitalism” on Verso.
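The federated-learning idea mentioned above can be sketched in a few lines. This is a toy of federated averaging only, not Google's implementation: three simulated clients each fit a shared one-parameter linear model on private data and send back only a weight delta, which the server averages. All names, data, and hyperparameters here are invented for illustration.

```python
import random

# Toy federated averaging: clients train locally and transmit only
# model updates; the raw data never leaves each client.
random.seed(0)

def make_client(n=20, true_w=3.0):
    """A client's private dataset drawn from y = true_w * x + noise."""
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [true_w * x + random.gauss(0, 0.01) for x in xs]
    return xs, ys

def local_update(w, xs, ys, lr=0.1, epochs=5):
    """Local gradient steps on squared error; returns only the delta."""
    w0 = w
    for _ in range(epochs):
        grad = sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w - w0  # the update leaves the client; the data does not

clients = [make_client() for _ in range(3)]
w_global = 0.0
for _ in range(30):  # communication rounds
    deltas = [local_update(w_global, xs, ys) for xs, ys in clients]
    w_global += sum(deltas) / len(deltas)  # server-side averaging

print(round(w_global, 2))  # converges near the true weight, 3.0
```

The design point the episode gestures at is visible in `local_update`: the only value transmitted is `w - w0`, so each client's training set stays in its own environment.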

Signal with Mory Fontanez
The Data, The Divide, and The Future: A Conversation with Blaise Agüera y Arcas

Signal with Mory Fontanez

Play Episode Listen Later Feb 14, 2024 56:02


A very special and very published guest joins Mory and Melissa today: Blaise Agüera y Arcas, CTO of Tech and Society at Google and author of the book Who Are We Now?. Mory and Melissa pepper Blaise with questions about his approach to his book (well, Mory asks, while Melissa gapes open-mouthedly). The breadth of material under Blaise's belt is nothing short of impressive: from the demographic divide between urban and rural, to monogamy as a form of social control, to human symbiosis with artificial intelligence. Combining graphs upon graphs with a compelling narrative, Blaise gives insight into the process of creating Who Are We Now? and its attempt to answer the questions: What makes human identity? And what drives us apart? Buy Who Are We Now? here: https://hatandbeard.com/products/who-are-we-now-by-blaise-aguera-y-arcas Available for free here: https://whoarewenow.net/ Find Blaise on Twitter here: https://twitter.com/blaiseaguera?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor To submit questions for future episodes: https://forms.gle/ZhSKGveWox33GvFq6 Follow Signal on Insta: https://www.instagram.com/signalwithmoryandmelissa Follow Mory on Insta: https://www.instagram.com/moryfontanez/

In Conversation with UX Magazine
Identity and Collective Intelligence with Blaise Agüera y Arcas // Invisible Machines S2E23

In Conversation with UX Magazine

Play Episode Listen Later Nov 30, 2023 57:40


AI researcher, author, and VP and Fellow at Google Research, Blaise Agüera y Arcas has contributed to research papers cited more than 20,000 times, including a seminal LaMDA paper from 2022. His new book, Who Are We Now?, explores how biology, ecology, sexuality, history, and culture have intertwined to create a dynamic “us” that's neither natural nor artificial. Blaise joins Robb and Josh for a philosophical exploration of identity, collective intelligence, and the ways AI might put us back into balance with nature. A frequent TED speaker and winner of MIT's TR35 Prize, Blaise brings the perspective of a deeply forward-thinking researcher to our ongoing conversation about AI.

English Academic Vocabulary Booster
5267. 109 Academic Words Reference from "Blaise Agüera y Arcas: How PhotoSynth can connect the world's images | TED Talk"

English Academic Vocabulary Booster

Play Episode Listen Later Sep 28, 2023 99:53


This podcast is a commentary and does not contain any copyrighted material of the reference source. We strongly recommend accessing/buying the reference source at the same time. ■Reference Source https://www.ted.com/talks/blaise_aguera_y_arcas_how_photosynth_can_connect_the_world_s_images ■Post on this topic (You can get FREE learning materials!) https://englist.me/109-academic-words-reference-from-blaise-aguera-y-arcas-how-photosynth-can-connect-the-worlds-images-ted-talk/ ■Youtube Video https://youtu.be/oAjVnWIiMpE (All Words) https://youtu.be/PNnHhqHy1zw (Advanced Words) https://youtu.be/m0XEFq459YI (Quick Look) ■Top Page for Further Materials https://englist.me/ ■SNS (Please follow!)

English Academic Vocabulary Booster
5284. 62 Academic Words Reference from "Blaise Agüera y Arcas: Augmented-reality maps | TED Talk"

English Academic Vocabulary Booster

Play Episode Listen Later Sep 28, 2023 57:55


This podcast is a commentary and does not contain any copyrighted material of the reference source. We strongly recommend accessing/buying the reference source at the same time. ■Reference Source https://www.ted.com/talks/blaise_aguera_y_arcas_augmented_reality_maps ■Post on this topic (You can get FREE learning materials!) https://englist.me/62-academic-words-reference-from-blaise-aguera-y-arcas-augmented-reality-maps-ted-talk/ ■Youtube Video https://youtu.be/OTIwY7Yohuk (All Words) https://youtu.be/yLuYJ-_u8So (Advanced Words) https://youtu.be/jEKz_j24GIA (Quick Look) ■Top Page for Further Materials https://englist.me/ ■SNS (Please follow!)

English Academic Vocabulary Booster
3461. 182 Academic Words Reference from "Blaise Agüera y Arcas: How computers are learning to be creative | TED Talk"

English Academic Vocabulary Booster

Play Episode Listen Later Aug 4, 2023 163:18


This podcast is a commentary and does not contain any copyrighted material of the reference source. We strongly recommend accessing/buying the reference source at the same time. ■Reference Source https://www.ted.com/talks/blaise_aguera_y_arcas_how_computers_are_learning_to_be_creative ■Post on this topic (You can get FREE learning materials!) https://englist.me/182-academic-words-reference-from-blaise-aguera-y-arcas-how-computers-are-learning-to-be-creative-ted-talk/ ■Youtube Video https://youtu.be/OgCUPRbu26s (All Words) https://youtu.be/yxhQlsUyge4 (Advanced Words) https://youtu.be/6mqRq1d4jTY (Quick Look) ■Top Page for Further Materials https://englist.me/ ■SNS (Please follow!)

Science (Video)
CARTA: Artificial Intelligence and Anthropogeny - What Language Models Mean with Blaise Agüera y Arcas

Science (Video)

Play Episode Listen Later Apr 8, 2023 22:49


Large language models (LLMs) have now achieved many of the longstanding goals of the quest for generalist AI. While LLMs are still very imperfect (though rapidly improving) in areas like factual grounding, planning, reasoning, safety, memory, and consistency, they do understand concepts, are capable of insight and originality, can problem-solve, and exhibit many faculties we have historically defended vigorously as exceptionally human, such as humor, creativity, and theory of mind. At this point, human responses to the emergence of AI seem to be telling us more about our own psychology, hopes and fears, than about AI itself. However, taking these new AI capacities seriously, and noticing that they all emerge purely from sequence modeling, should cause us to reassess what our own cerebral cortex is doing, and whether we are learning what intelligence, machine or biological, actually is. Series: "CARTA - Center for Academic Research and Training in Anthropogeny" [Humanities] [Science] [Show ID: 38679]

University of California Audio Podcasts (Audio)
CARTA: Artificial Intelligence and Anthropogeny - What Language Models Mean with Blaise Agüera y Arcas

University of California Audio Podcasts (Audio)

Play Episode Listen Later Apr 8, 2023 22:49



Evolution (Video)
CARTA: Artificial Intelligence and Anthropogeny - What Language Models Mean with Blaise Agüera y Arcas

Evolution (Video)

Play Episode Listen Later Apr 8, 2023 22:49



CARTA - Center for Academic Research and Training in Anthropogeny (Video)
CARTA: Artificial Intelligence and Anthropogeny - What Language Models Mean with Blaise Agüera y Arcas

CARTA - Center for Academic Research and Training in Anthropogeny (Video)

Play Episode Listen Later Apr 8, 2023 22:49



Humanities (Audio)
CARTA: Artificial Intelligence and Anthropogeny - What Language Models Mean with Blaise Agüera y Arcas

Humanities (Audio)

Play Episode Listen Later Apr 8, 2023 22:49



Science (Audio)
CARTA: Artificial Intelligence and Anthropogeny - What Language Models Mean with Blaise Agüera y Arcas

Science (Audio)

Play Episode Listen Later Apr 8, 2023 22:49



UC San Diego (Audio)
CARTA: Artificial Intelligence and Anthropogeny - What Language Models Mean with Blaise Agüera y Arcas

UC San Diego (Audio)

Play Episode Listen Later Apr 8, 2023 22:49



Town Hall Seattle Science Series
186. Blaise Agüera y Arcas and Melanie Mitchell with Lili Cheng: How Close Are We to AI?

Town Hall Seattle Science Series

Play Episode Listen Later Jul 29, 2022 84:57


Thu 7/14, 2022, 7:30pm. Blaise Agüera y Arcas and Melanie Mitchell with Lili Cheng: How Close Are We to AI? BUY THE BOOKS: Ubi Sunt, by Blaise Agüera y Arcas; Artificial Intelligence: A Guide for Thinking Humans, by Melanie Mitchell. Artificial Intelligence (AI), a term first coined at a Dartmouth workshop in 1956, has seen several boom-and-bust cycles over the last 66 years. Is the current boom different? The most exciting advance in the field since 2017 has been the development of “Large Language Models,” giant neural networks trained on massive databases of text on the web. Still highly experimental, Large Language Models haven't yet been deployed at scale in any consumer product — smart/voice assistants like Alexa, Siri, Cortana, or the Google Assistant are still based on earlier, more scripted approaches. Large Language Models do far better at routine tasks involving language processing than their predecessors. Although not always reliable, they can give a strong impression of really understanding us and holding up their end of an open-ended dialog. Unlike previous forms of AI, which could only perform specific jobs involving rote perception, classification, or judgment, Large Language Models seem to be capable of a lot more — including possibly passing the Turing Test, named after computing pioneer Alan Turing's thought experiment, which posits that when an AI in a chat can't be distinguished reliably from a human, it will have achieved general intelligence. But can Large Language Models really understand anything, or are they just mimicking the superficial “form” of language? What can we say about our progress toward creating real intelligence in a machine? What do “intelligence” and “understanding” even mean?
Blaise Agüera y Arcas, a Fellow at Google Research, and Melanie Mitchell, the Davis Professor of Complexity at the Santa Fe Institute, take on these thorny questions in a wide-ranging presentation and discussion. The discussion will be moderated by Lili Cheng, Corporate Vice President of the Microsoft AI and Research division. Blaise Agüera y Arcas is a VP and Fellow at Google Research, where he leads an organization working on basic research and new products in Artificial Intelligence. His team focuses on the intersection of machine learning and devices, developing AI that augments humanity while preserving privacy. One of the team's technical contributions is Federated Learning, an approach to training neural networks in a distributed setting that avoids sending user data off-device. Blaise also founded Google's Artists and Machine Intelligence program and has been an active participant in cross-disciplinary dialogs about AI and ethics, fairness and bias, policy, and risk. He has given TED talks on Seadragon and Photosynth (2007, 2012), Bing Maps (2010), and machine creativity (2016). In 2008, he was awarded MIT's TR35 prize. Melanie Mitchell is the Davis Professor at the Santa Fe Institute. Her current research focuses on conceptual abstraction, analogy-making, and visual recognition in artificial intelligence systems. Melanie is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her book Complexity: A Guided Tour won the 2010 Phi Beta Kappa Science Book Award and was named by Amazon.com as one of the ten best science books of 2009. Her latest book is Artificial Intelligence: A Guide for Thinking Humans. Lili Cheng is a Corporate Vice President of the Microsoft AI and Research division, responsible for the AI developer platform, which includes Cognitive Services and Bot Framework.
Prior to Microsoft, Lili worked in Apple Computer's Advanced Technology Group on the user interface research team, where she focused on QuickTime Conferencing and QuickTime VR. Lili is also a registered architect, having worked in Tokyo and Los Angeles for Nihon Sekkei and Skidmore, Owings & Merrill on commercial urban design and large-scale building projects. She has also taught at New York University and Harvard University. Presented by Town Hall Seattle. To become a member or make a donation, click here.

Astonishing Legends
I Think Therefore AI Part 2

Astonishing Legends

Play Episode Listen Later Jul 25, 2022 172:15


As we continue our discussion based on Blake Lemoine's assertion that the Large Language Model chatbot LaMDA had become sentient, we relay the rest of his conversation with the program and then some questions and answers with Lemoine himself. But as Lemoine has said, machine sentience and personhood are just some of many questions to be considered. His greater issue is how an omnipresent AI, trained on an insufficient data set, will affect how different people and cultures interact and who will be dominated or excluded. The fear is that the ultimate result of protecting corporate profits will outweigh global human interests. In light of these questions about AI's ethical and efficient development, we highlight the positions and insights of experts on the state and future of AI, such as Blaise Agüera y Arcas and Gary Marcus. The directives of responsible technology development and the right track to Deep Learning are more grounded than the fantastical thoughts of killer robots. Yet hovering over all of the mechanics are the philosophies of what constitutes sentience, comprehending and feeling as a person does, and being human enough. The reality of Artificial Intelligence matching humans may be fifty years in the future, or five hundred, but if that day ever comes, let's hope it's an egalitarian future where we are the masters and not the servants. Visit our webpage on this episode for a lot more information.

The Good Robot IS ON STRIKE!
Blaise Agüera y Arcas on Debunking Myths in Technology: Intelligence, Survival, Sexuality

The Good Robot IS ON STRIKE!

Play Episode Listen Later May 3, 2022 33:30


In this episode, we talk to Blaise Agüera y Arcas, a Fellow and Vice President at Google Research and an authority in computer vision, machine intelligence, and computational photography. In this wide-ranging episode, we explore why it is important that the AI industry reconsider what intelligence means and who possesses it, how humans and technology have co-evolved with and through one another, the limits of using evolution as a way of thinking about AI, and why we shouldn't just be optimizing AI for survival. We also chat about Blaise's research on gender and sexuality, from his huge crowdsourced surveys on how people self-identify to debunking the idea that you can discern someone's sexuality from their face using facial recognition technology.

Savage Lovecast
Savage Love Episode 526

Savage Lovecast

Play Episode Listen Later Nov 22, 2016 49:07


Here's a happily married couple. He digs exhibitionism, which she indulged after taking a deep breath. She wants to try BDSM, but he isn't having it. Is it her right to demand it, and what's his god damn problem? A married woman likes to be fashionable and step out looking GOOD. But guess what! Her husband wishes she would wear jeans and a grubby tee so all the men will stop looking at her. Dan wants to help her pick out a smart outfit for divorce court. On the Magnum, make way for the robots! Google's (human, we assume) head of machine intelligence, Blaise Agüera y Arcas, talks about the inevitable arrival of sex robots, along with the ethical challenges we'll all face. Hello Westworld! And a woman with a boyfriend of one year wonders if he'll present her with a ring during their upcoming trip to Paris. Oh! Oh, how she hopes he will! 206-302-2064 The Savage Lovecast is sponsored by Headspace, the guided meditation app that will help you focus and reduce stress. Learn more at headspace.com/savage This episode of the Lovecast is brought to you by Winc, the wine club that helps you to choose wine that suits your taste and ships it right to your door. For $20 off your first order, and complimentary shipping, go to TryWinc.com/savage. Today's episode is also brought to you by . the delivery service that sends you fresh ingredients and incredible recipes so you can make fabulous meals at home. Check out this week's menu and get your first 3 meals free by going to .com/savage.

TEDTalks Technologie
How computers are learning to be creative | Blaise Agüera y Arcas

TEDTalks Technologie

Play Episode Listen Later Jun 28, 2016 17:34


We're on the edge of a new frontier in art and creativity -- and it's not human. Blaise Agüera y Arcas, principal scientist at Google, works with deep neural networks for machine perception and distributed learning. In this captivating demo, he shows how neural nets trained to recognize images can be run in reverse, to generate them. The results: spectacular, hallucinatory collages (and poems!) that defy categorization. "Perception and creativity are very intimately connected," Agüera y Arcas says. "Any creature, any being that is able to do perceptual acts is also able to create."

TEDTalks Tecnologia
Como computadores estão aprendendo a ser criativos | Blaise Agüera y Arcas

TEDTalks Tecnologia

Play Episode Listen Later Jun 28, 2016 17:34


We're on the edge of a new frontier in art and creativity -- and it's not human. Blaise Agüera y Arcas, principal scientist at Google, works with complex neural networks for machine perception and distributed learning. In this fascinating demo, he shows how neural networks trained to recognize images can be run in reverse, to generate them. The result: spectacular, hallucinatory collages (and poems!) that defy categorization. "Perception and creativity are very intimately connected," says Agüera y Arcas. "Any creature, any being that is able to perform perceptual acts is also able to create."

TEDTalks Tecnología
Cómo las computadoras aprenden a ser creativas | Blaise Agüera y Arcas

TEDTalks Tecnología

Play Episode Listen Later Jun 28, 2016 17:34


We're on the edge of a new frontier in art and creativity -- and it's not human. Blaise Agüera y Arcas, principal scientist at Google, works with neural networks for computational perception and distributed learning. In this captivating demo, he shows how neural networks trained to recognize images can be run in reverse, to generate them. The results: spectacular, hallucinatory collages (and poems!) that defy categorization. "Perception and creativity are very intimately connected," says Agüera y Arcas. "Any creature, any being capable of perceiving is also capable of creating."

TEDTalks 기술
컴퓨터가 창의력을 배우는 방법 | 블레즈 아구에라 이 아카스 (Blaise Agüera y Arcas)

TEDTalks 기술

Play Episode Listen Later Jun 28, 2016 17:34


We're facing a new phase in art and creativity -- and it's not human. Blaise Agüera y Arcas, principal scientist at Google, works on machine perception and distributed learning for self-learning artificial intelligence. In this captivating demo, he shows how neural networks trained to recognize images can be run in reverse to generate them. The results: spectacular, hallucinatory collages (and poems!) that defy categorization. "Perception and creativity are very intimately connected," says Agüera y Arcas. "Anything that can perceive can also create."

TEDTalks Technologie
Wie Computer Kreativität lernen | Blaise Agüera y Arcas

TEDTalks Technologie

Play Episode Listen Later Jun 28, 2016 17:34


Something groundbreaking is ahead of us in art and creativity -- and it doesn't come from humans. Blaise Agüera y Arcas, principal scientist at Google, works with so-called "deep neural networks" for machine perception and decentralized learning. In this captivating demonstration, he shows how neural networks that can recognize images can be reversed so that they generate images. The result: spectacular, hallucinatory collages (and poems!) that can hardly be assigned to any category. "Perception and creativity are very closely connected," says Agüera y Arcas. "Any being, any creature that is capable of perception can also create things."

TED Talks Technology
How computers are learning to be creative | Blaise Agüera y Arcas

TED Talks Technology

Play Episode Listen Later Jun 28, 2016 17:34


We're on the edge of a new frontier in art and creativity -- and it's not human. Blaise Agüera y Arcas, principal scientist at Google, works with deep neural networks for machine perception and distributed learning. In this captivating demo, he shows how neural nets trained to recognize images can be run in reverse, to generate them. The results: spectacular, hallucinatory collages (and poems!) that defy categorization. "Perception and creativity are very intimately connected," Agüera y Arcas says. "Any creature, any being that is able to do perceptual acts is also able to create."

Focus on Flowers
Software Architect Blaise Agüera y Arcas

Focus on Flowers

Play Episode Listen Later Apr 7, 2014 2:00


Betsy Shepherd interviews software designer Blaise Agüera y Arcas, the architect of Bing Maps and Bing Mobile.
