American computer scientist
Artificial Intelligence, and its possible applications to religion, have been in the news a lot lately, so while we're busy working on Season 8, we thought we'd share this episode from 2022 with you. Can you pray with a robot? Will people worship AI like a god? It's not science fiction. It's starting to happen now. But what does it mean if robots become priests, or AIs start offering ethical advice? And how do we know when to trust them? We'll talk to MIT social roboticist Cynthia Breazeal about how technology can manipulate our minds and emotions for good or ill, and with religion professor Robert Geraci about how technology is being used in religion, and why AI may need some spiritual training itself. Watch Cynthia Breazeal's TED talk here, and find out about her work on the MIT Media Lab website. Find out more about Robert Geraci's work on his website. Meet the robots: Nexi, Mindar, and SanTO.
The future of work is here, and it's powered by AI. But are we prepared? As businesses navigate the complexities of AI implementation, talent scarcity, technical hurdles, and the need for interdisciplinary skills emerge as significant barriers. In the second installment of Dell Technologies' "AI & Us" series, Six Five Media host David Nicholson is joined by Cynthia Breazeal, Dean for Digital Learning at MIT, and Vivek Mohindra, SVP of Corporate Strategy at Dell Technologies, to share actionable strategies for proactively building an AI-skilled workforce. Their conversation covers:
- How AI is transforming every aspect of society, creating a need to equip everyone, from K-12 students to business leaders, with the knowledge and skills to thrive in an AI-powered world
- Bridging the skills gap through collaboration among academia, industry, and governments
- How the MIT Media Lab is developing innovative AI curricula and tools, while Dell Technologies focuses on upskilling its workforce and providing responsible AI infrastructure
- The importance of diverse voices and perspectives in AI's design and development, to ensure that AI benefits everyone
- The need to create inclusive learning opportunities and pathways for underrepresented groups
- Ethical and responsible AI development, addressing data privacy, algorithmic bias, and the societal impact of AI
A pioneer in the field of social robotics, Cynthia Breazeal builds robots whose social, communicative, and emotional skills allow them to collaborate seamlessly with us. She is now turning her focus to ensuring we understand the limits and risks of the artificial intelligence that powers those robots – that we become "AI literate."
This podcast is a commentary and does not contain any copyrighted material from the reference source. We strongly recommend accessing or buying the reference source as well.
Reference source: https://www.ted.com/talks/cynthia_breazeal_the_rise_of_personal_robots
Post on this topic (with free learning materials): https://englist.me/152-academic-words-reference-from-cynthia-breazeal-the-rise-of-personal-robots-ted-talk/
YouTube videos: https://youtu.be/rePGN71iKuY (all words), https://youtu.be/ZlZPKLg9FYE (advanced words), https://youtu.be/Cx_G2B-0tJk (quick look)
Further materials: https://englist.me/
On our last episode, World Bank Senior Education and Technology Specialist Cristobal Cobo spoke with Cynthia Breazeal, Director of the MIT-wide Initiative on Responsible AI for Social Empowerment and Education (raise.mit.edu), about RAISE and its Day of AI (https://www.dayofai.org), which seeks to engage K-12 students in a series of freely available hands-on activities designed to introduce them to AI and how it plays a part in their lives today. This week, Cristobal Cobo speaks with Day of AI participant Betzabé Orenos about her experience introducing students to AI in Guatemala. Betzabé is a Google for Education-certified trainer and coach (https://geg.gt/index.html) and teaches innovation to high schoolers in Guatemala.
The World Bank EduTech team is partnering with Digital Promise and AIEngage.org to host a series of episodes about AI in education. This week, we're exploring opportunities to advance access and inclusivity in AI education for people of all ages and backgrounds, with a focus on K-12 and the workforce. World Bank Senior Education and Technology Specialist Cristobal Cobo speaks with Cynthia Breazeal, Director of the MIT-wide Initiative on Responsible AI for Social Empowerment and Education (raise.mit.edu), about RAISE and its 2022 Day of AI, which seeks to engage K-12 students across the US in a series of freely available hands-on activities designed to introduce them to AI and how it plays a part in their lives today. You can learn more about Day of AI at https://www.dayofai.org. Cynthia Breazeal is a professor of media arts and sciences at MIT, where she founded and directs the Personal Robots group at the Media Lab. She is also the MIT dean for digital learning, and in this role she leverages her experience in emerging digital technologies, business, research, and strategic initiatives to lead Open Learning's business and research & engagement units. MIT RAISE, which she directs, is a research and outreach effort that advances access and inclusivity in AI education for people of all ages and backgrounds, with a focus on K-12 and the workforce. She also co-founded the consumer social robotics company Jibo, Inc., where she served as Chief Scientist and Chief Experience Officer.
How do we leverage AI and intelligent tools to make better decisions? Cynthia Breazeal, Associate Director of the MIT Media Lab and Dean for Digital Learning at MIT Open Learning, shares her approach to social robotics, living with AI, and scaling learning opportunities for AI fluency.
Can you pray with a robot? Will people worship AI like a god? It's not science fiction. It's starting to happen now. But what does it mean if robots become priests, or AIs start offering ethical advice? And how do we know when to trust them? We'll talk to MIT social roboticist Cynthia Breazeal about how technology can manipulate our minds and emotions for good or ill, and with religion professor Robert Geraci about how technology is being used in religion, and why AI may need some spiritual training itself. Watch Cynthia Breazeal's TED talk here, and find out about her work on the MIT Media Lab website. Find out more about Robert Geraci's work on his website. Meet the robots: Nexi, Mindar, and SanTO.
Clip: Cynthia Breazeal "The Killer Billion Dollar Business Case For Social Robots" by Marwa ElDiwiny
Cynthia Breazeal "Social Robots" by Marwa ElDiwiny
Clip: Cynthia Breazeal "Social Robots" by Marwa ElDiwiny
When humans interact, they don't just pass information from one to the other; there's always some relational element, with the participants responding to each other's emotional cues. Professor Cynthia Breazeal, MIT's new Dean for Digital Learning, believes it's possible to design this element into human-computer interactions as well. She foresees a day when AI won't merely perform practical tasks for us, but also will provide us with companionship, emotional comfort, and even mental health support. But a future of closer human-AI collaborative relationships doesn't only require technological development—it also requires us to learn what AI is capable of and how to interact with it in a more informed way. To further this goal, Professor Breazeal leads the Responsible AI for Social Empowerment and Education (RAISE) initiative at MIT, which runs an annual "Day of AI" program to promote better understanding of AI in the next generation of technology users and developers. In this episode, she describes those projects as well as her work developing the groundbreaking social robots Kismet and Jibo, prototypes of what she calls "warm tech"—AI-enabled devices designed to be engaging, expressive, and personal.

Relevant resources:
- Day of AI
- RAISE (Responsible AI for Social Empowerment and Education)
- MIT OpenCourseWare
- The OCW Educator Portal
- Share your teaching insights
- Professor Breazeal's faculty page
- Professor Breazeal named Dean for Digital Learning
- Professor Breazeal introduces Jibo (YouTube video)
- The Rise of Personal Robots (TED talk by Professor Breazeal)

Music in this episode by Blue Dot Sessions.

Connect with us: If you have a suggestion for a new episode or have used OCW to change your life or those of others, tell us your story. We'd love to hear from you! Call us at 617-715-2517, or find us on our site, Facebook, Twitter, or Instagram.

Stay current: Subscribe to the free monthly "MIT OpenCourseWare Update" e-newsletter.

Support OCW: If you like Chalk Radio and OpenCourseWare, donate to help keep these programs going!

Credits: Sarah Hansen, host and producer; Brett Paci, producer; Dave Lishansky, producer; Nidhi Shastri and Aubrey Calloway, scriptwriters. Show notes by Peter Chipman.
This episode of JFF's Horizons podcast explores the exciting yet unpredictable world of artificial intelligence, or AI. Host Tameshia Bridges Mansfield shares clips from two Horizons 2021 presentations that portray AI as a double-edged technology. Listeners will hear insights from Tod Loofbourrow, CEO and Chairman of ViralGains and chair of JFF's board of directors; MIT professor Cynthia Breazeal, a roboticist, entrepreneur, and founder and director of the Personal Robots group at MIT's Media Lab; Catie Cuan, a dancer and robot choreographer who's pursuing a PhD in mechanical engineering at Stanford; and award-winning technologist Noelle Silver, CEO of the AI Leadership Institute. Learn more at https://horizons.jff.org/podcast
As technology gets smarter and encroaches on more and more jobs, we have to face a question: how do we differentiate the work that humans should do from the work machines should do? In other words, no matter how smart the machines get, what will humans always do better? In this talk, Kai-Fu Lee, technology investor and author of AI Superpowers, offers a surprising answer: love. He proposes nothing short of a redesign of the labor market to prioritize jobs that require compassion. But why wait? After the talk, Modupe explores how we can bring more compassion to the workplace now. (FYI, that lovable robot in the introduction? That's Kismet, invented by Cynthia Breazeal and born at the MIT Media Lab. For more on sociable robots, check out her work at https://cynthiabreazeal.media.mit.edu/)
01 October 2020: Helen catches up with Cynthia Breazeal of MIT about a project to deploy robots in a Dubai school. Plus, are you giving your child the flu jab? Helen speaks to a paediatrician from Healthbay Polyclinic to find out.
Prior to our series A and the Indiegogo crowdfunding campaign, Jibo had to operate extremely lean. Thanks to our co-founder Jerilyn Asher and her connections, we found a secret location to work from. To the best of my knowledge, this is the first time this story has ever been told. The image is from a photo shoot with Jibo co-founder Cynthia Breazeal and an early Jibo prototype at this secret location. I believe the shoot was for either Popular Science or Popular Mechanics, but it may have been a different publication. I took this picture while supporting Cynthia during the interview and photo shoot. Obviously, there is no way to determine the location from this photo.
We talk with Christina Mulligan about the salutary effects of smashing robots that have wronged you. Join us for a chat about revenge and satisfaction in the emerging human-robot social space.

This show's links:
- Christina Mulligan's faculty profile (https://www.brooklaw.edu/faculty/directory/facultymember/biography?id=christina.mulligan) and writing (https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=1557395)
- Christina Mulligan, Revenge Against Robots (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3016048)
- About Betty Smith's A Tree Grows in Brooklyn (https://en.wikipedia.org/wiki/A_Tree_Grows_in_Brooklyn_(novel))
- About the Tree That Owns Itself (https://en.wikipedia.org/wiki/Tree_That_Owns_Itself)
- The Trial of the Autun Rats (http://www.duhaime.org/LawMuseum/LawArticle-1529/1508-The-Trial-of-the-Autun-Rats.aspx)
- Oral Argument 70: No Drones in the Park (http://oralargument.org/70)
- Scott Hershovitz, Tort as a Substitute for Revenge (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2308590)
- Kate Darling, Palash Nandy, and Cynthia Breazeal, Empathic Concern and the Effect of Stories in Human-Robot Interaction (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2639689)
- Kate Darling, "Who's Johnny?" Anthropomorphic Framing in Human-Robot Interaction, Integration, and Policy (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2588669)
- Office Space, the printer scene (https://www.youtube.com/watch?v=N9wsjroVlu8) (NSFW)
- Hunter Walk, Amazon Echo Is Magical. It's Also Turning My Kid into an Asshole. (https://hunterwalk.com/2016/04/06/amazon-echo-is-magical-its-also-turning-my-kid-into-an-asshole/)
- Hannah Gold, This Mirror that Forces People to Smile Is Going to Piss Everyone Off (https://jezebel.com/this-mirror-that-forces-people-to-smile-is-going-to-pis-1819828956)

Special Guest: Christina Mulligan.
The O'Reilly Radar Podcast: Imbuing robots with magic, eschewing deception in AI, and problematic assumptions of human-taught reinforcement learning.

In this episode, I sit down with Brad Knox, founder and CEO of Emoters, a startup building a product called bots_alive—animal-like robots that have a strong illusion of life. We chat about the approach the company is taking, why robots or agents that pass themselves off as human without any transparency should be illegal, and some challenges and applications of reinforcement learning and interactive machine learning. Here are some links to things we talked about and some highlights from our conversation.

Links:
- bots_alive
- Bot Party
- Knox's article: Framing reinforcement learning from human reward: Reward positivity, temporal discounting, episodicity, and performance
- Knox's article: Power to the People: The Role of Humans in Interactive Machine Learning
- bots_alive's NSF award: Design, deployment, and algorithmic optimization of zoomorphic, interactive robot companions

Creating a strong illusion of life

"I've been working on a startup company, Emoters. We're releasing a product called bots_alive, hopefully in January, through Kickstarter. Our big vision there is to create simple, animal-like robots that have a strong illusion of life. This immediate product is going to be a really nice first step in that direction. ... If we can create something that feels natural, that feels like having a simple pet—maybe not for a while anything like a dog or cat, but something like an iguana or a hamster—where you can observe it and interact with it, it would be really valuable to people.

"The way we're creating that is going back to research I did when I was at MIT with Cynthia Breazeal and a master's student, Sam Spaulding—machine learning from demonstration on human-improvised puppetry. Our hypothesis for this product is that if you create an artificially intelligent character using current methods, you sit back and think, 'Well, in this situation, the character should do this.' For example, a traditional AI character designer might write the rule for an animal-like robot that if a person moves his or her hand quickly, the robot should be scared and run away. That results in some fairly interesting characters, but our hypothesis is that we'll get much more authentic behaviors, something that really feels real, if we first allow a person to control the character through a lot of interactions. Then, take the records and the logs of those interactions, and learn a model of the person. As long as that model has good fidelity—it doesn't have to be perfect, but captures the puppeteer with pretty good fidelity—and the puppeteer is actually creating something that would be fun to observe or interact with, then we're in a really good position. ... It's hard to sit back and write down on paper why humans do the things we do, but what we do in various contexts is going to be in the data. Hopefully, we'll be able to learn that from human demonstration and really imbue these robots with some magic."

A better model for tugging at emotions

"The reason I wrote that Tweet [Should a robot or agent that widely passes for human be illegal? I think so.] is that if a robot or an agent—you could think of an agent as anything that senses the state of its environment, whether it's a robot or something like a chatbot, just something you're interacting with—can pass as human and it doesn't give some signal or flag that says, 'Hey, even if I appear human, I'm not actually human,' that really opens the door to deception and manipulation. For people who are familiar with the Turing Test—which is by far the most well-known test for successful artificial intelligence—the issue I have with it is that, ultimately, it is about deceiving people, about them not being able to tell the difference between an artificially intelligent entity and a human. For me, one real issue is that, as much as I'm generally a believer in capitalism, I think there's room for abuse by commercial companies. For instance, it's hard enough when you're walking down the street and a person tries to get your attention to buy something or donate to some cause. Part of that is because it's a person and you don't want to be rude. When we create a large number—eventually, inexpensive fleets—of human-like or pass-for-human robots that can also pull on your emotions in a way that helps some company, I think the negative side is realized at that point. ... How is that not a contradiction [of our company's mission to create a strong illusion of life]? The way I see illusion of life (and the way we're doing it at bots_alive) is very comparable to cartoons or animation in general. When you watch a cartoon, you know that it's fake. You know that it's a rendering, or a drawing, or a series of drawings with some voice-over. Nonetheless, if you're like most people, you feel and experience the characters in the cartoon or the animation. ... I think that's a better model, where we know it's not real but we can still feel that it's real to the extent that we want to. Then, we have a way of turning it off and we're not completely emotionally beholden to these entities."

Problematic assumptions of human-taught reinforcement learning

"I was interested in the idea of humans training robots the way they train animals. Connecting that to reinforcement learning, the research question we posed was: instead of the reward function being coded by an expert in reinforcement learning, what happens if we instead give buttons or some interface to a person who knows nothing about computer science, nothing about AI, nothing about machine learning, and that person gives the reward and punishment signals to an agent or a robot? Then, what algorithmic changes do we need so the system learns what the human is teaching the agent to do? If it had turned out that the people in the study had not violated any of the assumptions of reinforcement learning when we actually did the experiments, I think it wouldn't have ended up being an interesting direction of research. But this paper dives into the ways that people did violate, deeply violate, the assumptions of reinforcement learning.

"One emphasis of the paper is that people tend to have a bias toward giving positive rewards. A large percentage of the trainers in our experiments would give more positive rewards than punishment—or in reinforcement learning terms, 'negative rewards.' Also, a lot of reinforcement learning tasks are what we call 'episodic'—roughly, that means that when the task is completed, the agent can't get further reward. Its life is essentially over, but not in a negative way. When we had people sit down and give reward and punishment signals to an agent trying to get out of a maze, they would give a positive reward for getting closer to the goal, but then the agent would learn, correctly (at least by the assumptions of reinforcement learning), that if it got to the goal, (1) it would get no further reward, and (2) if it stayed in the world it's in, it would get a net positive reward. The weird consequence is that the agent learns that it should never go to the goal, even though that's exactly what these rewards are supposed to be teaching it. In this paper, we discussed that problem and showed the empirical evidence for it. Basically, the assumptions that reinforcement learning typically makes are really problematic when you're letting a human give the reward."
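The learning-from-puppetry approach Knox describes is, at its core, learning from demonstration (behavioral cloning): log what the puppeteer did in each sensed situation, then fit a model that reproduces those choices. Here is a minimal sketch of that idea in Python; the features, action labels, and data are invented for illustration and are not bots_alive's actual code.

```python
# Behavioral cloning from puppetry logs: a toy illustration.
# Features and action labels are hypothetical, not from bots_alive.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each logged frame pairs the robot's sensed situation with the action
# the human puppeteer improvised in that situation.
X = np.array([
    [0.9, 0.1, 5.0],   # hand far, moving slowly, long since last touch
    [0.4, 0.2, 1.0],   # hand near, moving slowly
    [0.3, 1.5, 0.5],   # hand near, moving fast
    [0.8, 1.4, 2.0],   # hand far, moving fast
])
y = np.array(["wander", "approach", "flee", "freeze"])  # puppeteer's choices

# "Learn a model of the person": fit a classifier to the demonstration logs.
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# At runtime, the robot senses the same features and acts like its puppeteer.
print(model.predict([[0.35, 1.6, 0.4]]))  # likely "flee"
```

The goal-avoidance pathology from the reinforcement learning discussion is also easy to reproduce with tabular Q-learning. In the toy construction below (my own sketch, not the experiment from the paper), the "human" reward is positively biased: +1.0 for each step toward the goal but only -0.2 for retreating. Because the task is episodic, an approach-retreat cycle nets +0.8 forever, while entering the goal pays +1 once and ends the episode.

```python
# Tabular Q-learning on a 5-state corridor with a positively biased
# "human" reward. State 4 is the goal and is terminal (episodic task).
import numpy as np

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)          # step left (retreat) or right (toward goal)
GAMMA, ALPHA, EPS = 0.95, 0.1, 0.2

def human_reward(state, next_state):
    # Positive-bias trainer: generous praise for progress, mild scolding.
    return 1.0 if next_state > state else -0.2

Q = np.zeros((N_STATES, len(ACTIONS)))
rng = np.random.default_rng(0)

for _ in range(3000):                       # training episodes
    s = 0
    for _ in range(50):                     # cap episode length
        a = int(rng.integers(2)) if rng.random() < EPS else int(Q[s].argmax())
        s2 = min(max(s + ACTIONS[a], 0), GOAL)
        r = human_reward(s, s2)
        done = s2 == GOAL                   # terminal: no further reward
        target = r if done else r + GAMMA * Q[s2].max()
        Q[s, a] += ALPHA * (target - Q[s, a])
        if done:
            break
        s = s2

# One step from the goal, retreating is "worth" far more than finishing:
print("Q(3, toward goal) =", round(float(Q[3, 1]), 2))  # ~1.0
print("Q(3, retreat)     =", round(float(Q[3, 0]), 2))  # ~7.7
```

With a discount factor of 0.95, the learned value of retreating from the state next to the goal works out to roughly (0.95 - 0.2) / (1 - 0.95²) ≈ 7.7, far above the 1.0 earned by finishing, so the greedy agent never completes the maze: exactly the violation of the episodic assumption described above.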
Two pioneers in AI discuss their work and describe the way in which machine led intelligence is set to remake our world. Eric Horvitz, managing director of Microsoft Research and Cynthia Breazeal, chief scientist of the Robotics firm Jibo join the BBC's Rory Cellan Jones to discuss the rewards and challenges of AI with an audience in Silicon Valley.
By Alison van Diggelen, host of Fresh Dialogues Why are robots getting close and personal? Here’s my BBC World Business Report about The Brave New World of autonomous cars and social robots. It aired on BBC World Service Monday April 13, 2015. Two pioneers, Elon Musk of Tesla Motors and Cynthia Breazeal of Jibo Robot, explain why autonomous cars and robots are poised to […]
Tom raises the Sunset Blush quotient and the Facebook group. Heron is pleased to have contributed to Steve's life. Tom would like the listeners to participate. The NRA and Heron have the same targeted demographic. Heron in Squish needs a Heron definition, and it doesn't contain the stupidities. Reification, absolutism, the word "the," and two-valued logic are too advanced for the project. Heron has been experimenting on himself again. Tom gives an overview of his Origin of Mind chapter, from fear and desire, to Cynthia Breazeal's work, to the narrative engine. Tom identifies the social architect as being a uniquely human thing. They digress into what the internet is for in terms of volume. Tom raises equipment and travel as being the costs for Heron in Squish. They digress into creating female comic characters of Heron and Tom, clothing optional. (Editorial note: see Power Girl for the mainstream-ization of this subgenre of comics.) 'The Supreme Identity: an essay on Oriental metaphysic and the Christian religion' is a prop for Heron in Squish. Tom defines the layered needs of the project and thinks that some of Heron's fans may actually have gone too far already. Heron offers guilt as a topic but prefers the term shame. Tom admits a certain obsession with Snowtown (or the Snowtown Murders, as it was called in the US). Heron brightens the topic by talking about a traffic accident. YouTube thinks Tom is a nut. Tom wants to give away all his books. They explore how aging changes the mind.
In today's episode we meet with Natalie Freed, David Robert and Adam Setapen from Cynthia Breazeal's Personal Robots Group at the MIT Media Lab. They'll be telling us about the Playtime Computing System, a playground where kids can interact with the physical world and its virtual extension.
Ian Condry, Associate Director of MIT Comparative Media Studies and Associate Professor of Foreign Languages and Literatures, will discuss the prevalence of giant robots in anime (Japanese animated films and TV shows). From the sixties to the present, robot or "mecha" anime has evolved in ways that reflect changing business models and maturing audiences, as can be seen in titles like Astro Boy, Gundam, Macross, and Evangelion. How can we better understand the emergence of anime as a global media phenomenon through the example of robot anime? What does this suggest about our transmedia future? Cynthia Breazeal, Associate Professor at the MIT Media Lab and founder/director of the Lab's Personal Robots Group, will discuss how science fiction has influenced the development of real robotic systems, both in research laboratories and corporations all over the world. She will explore how science fiction has shaped ideas of the relationship and role of robots in human society, how the existence of such robots is feeding back into science fiction narratives, and how we might experience transmedia properties in the future using robotic technologies.
Talking Robots - The Podcast on Robotics and Artificial Intelligence
In this episode of Talking Robots we talk to Cynthia Breazeal, who is an Associate Professor of Media Arts and Sciences at the Massachusetts Institute of Technology in the USA, where she founded and directs the Personal Robots Group at the Media Lab. With her creaturoids, animoids, humanoids, and robotized objects, Breazeal has been working to make robots and humans team up in a human-centric way, work together as peers, and learn from one another. Breazeal's work includes personal robots such as the very expressive Kismet, the Huggable™ robot teddy, Leonardo the social creature, and the MDS (Mobile/Dexterous/Social) humanoid robot.