Dr Rosalind W Picard is an American scholar and inventor who is Professor of Media Arts and Sciences at MIT, founder and director of the Affective Computing Research Group at the MIT Media Lab, and co-founder of the startups Affectiva and Empatica. She has received many recognitions for her research and inventions.

I wanted to speak to Dr Picard about her conversion from self-professed proud atheist to powerful Christian tech pioneer. Some highlights from this episode include Dr Picard's thoughts on an A.I. Jesus, how complex computing taught Dr Picard about our journey to understand God, and what it takes for Dr Picard to teach a computer to recognise emotion.

You can find more of Dr Picard's work at the following links:
- https://web.media.mit.edu/~picard/
- https://x.com/RosalindPicard

Follow For All The Saints on social media for updates and inspiring content:
www.instagram.com/forallthesaintspod
https://www.facebook.com/forallthesaintspod/

For All The Saints episodes are released every Monday on YouTube, Spotify, Apple Podcasts and more:
https://www.youtube.com/watch?v=TVDUQg_qZIU&list=UULFFf7vzrJ2LNWmp1Kl-c6K9Q
https://open.spotify.com/show/3j64txm9qbGVVZOM48P4HS?si=bb31d048e05141f2
https://podcasts.apple.com/gb/podcast/for-all-the-saints/id1703815271

If you have feedback or any suggestions for topics or guests, connect with Ben & Sean via hello@forallthesaints.org or DM on Instagram.

Conversations to Refresh Your Faith.

For All The Saints podcast was established in 2023 by Ben Hancock to express his passion and desire for more dialogue around faith, religious belief, and believers' perspectives on the topics of our day. Tune into For All The Saints every Monday on YouTube, Spotify, Apple Podcasts, and more. Follow For All The Saints on social media for daily inspiration.
What if AI could understand what we, humans, are feeling? This week on Generative Now, Lightspeed Partner and host Michael Mignano talks to Alan Cowen, a researcher and the founder and CEO of Hume AI. Hume creates empathic AI that learns our preferences from our vocal and facial expressions, with the goal of maximizing our happiness and quality of life. Hume is now announcing its API for EVI, Hume's Empathic Voice Interface. The conversation covers Cowen's journey from researcher to founder of Hume AI, the importance of emotional intelligence in AI for quality human interaction, and the potential to transform user experiences across various apps and devices. Plus, we ask how empathic AI could impact the road to AGI.

Episode Chapters
(00:00) Introduction to Alan Cowen & Hume Demo
(01:38) The Genesis of Hume AI: From Research to Startup
(04:01) Affective Computing and Its Impact
(10:55) Hume AI: Bridging Human Emotions and Technology
(15:37) The Future of AI: Beyond Text to Empathic Interactions
(20:37) Introducing EVI: Empathic Voice Interface
(21:46) Real-World Applications of Empathic AI
(31:19) The Potential Role of Empathic AI in Achieving AGI
(36:53) Trust and Privacy
(40:02) Opportunities with Hume AI
(41:18) Closing Thoughts

Stay in touch:
www.lsvp.com
X: https://twitter.com/lightspeedvp
LinkedIn: https://www.linkedin.com/company/lightspeed-venture-partners/
Instagram: https://www.instagram.com/lightspeedventurepartners/
Subscribe on your favorite podcast app: generativenow.co
Email: generativenow@lsvp.com

The content here does not constitute tax, legal, business or investment advice or an offer to provide such advice, should not be construed as advocating the purchase or sale of any security or investment or a recommendation of any company, and is not an offer, or solicitation of an offer, for the purchase or sale of any security or investment product. For more details please see lsvp.com/legal.
In association and partnership with the ACM ByteCast, this episode features a conversation with affective computing pioneer Dr. Rosalind Picard. Dr. Picard is a scientist, inventor, and engineer, a member of the faculty of MIT's Media Lab, founder and director of the Affective Computing research group at the MIT Media Lab, founding faculty chair of MIT's MindHandHeart Initiative, and a faculty member of the MIT Center for Neurobiological Engineering. She has co-founded two companies: Affectiva (now part of Smart Eye), providing emotion AI technologies now used by more than 25% of the Global Fortune 500, and Empatica, providing wearable sensors and analytics to improve health.
In this episode of ACM ByteCast, our special guest host Scott Hanselman (of The Hanselminutes Podcast) welcomes ACM Fellow Rosalind Picard, a scientist, inventor, engineer, and faculty member of MIT's Media Lab, where she is also Founder and Director of the Affective Computing Research Group. She is the author of the book Affective Computing, and has founded several companies in the space of affective computing, including the startups Affectiva and Empatica, Inc. A named inventor on more than 100 patents, Rosalind is a member of the National Academy of Engineering and a Fellow of the National Academy of Inventors. Her contributions include wearable and non-contact sensors, algorithms, and systems for sensing, recognizing, and responding respectfully to human affective information. Her inventions have applications in autism, epilepsy, depression, PTSD, sleep, stress, dementia, autonomic nervous system disorders, human and machine learning, health behavior change, market research, customer service, and human-computer interaction, and are in use by thousands of research teams worldwide as well as in many products and services.

In the episode, Rosalind talks about her work with the Affective Computing Research Group, and clarifies the meaning of “affective” in the context of her research. Scott and Rosalind discuss how her training as an electrical engineer with a background in computer architecture and signal processing drew her to studying emotions and health indicators. They also talk about the importance of data accuracy, the implications of machine learning and language models for her field, and privacy and consent when it comes to reading into people's emotional states.
Today's podcast guest is Rosalind Picard, a researcher, inventor named on over 100 patents, entrepreneur, author, professor and engineer. When it comes to the science of endowing computer software with emotional intelligence, she wrote the book. It's published by MIT Press and called Affective Computing.

Dr. Picard is founder and director of the MIT Media Lab's Affective Computing Research Group. Her research and engineering contributions have been recognized internationally; for example, she received the 2022 International Lombardy Prize for Computer Science Research, considered by many to be the Nobel Prize of computer science. Through her research and companies, Dr. Picard has developed wearable sensors, algorithms and systems for sensing, recognizing and responding to information about human emotion. Her products are focused on using fitness trackers to advance clinical-quality treatments for a range of conditions.

Meanwhile, in just the past few years, numerous fitness tracking companies have released products with their own stress sensors and systems. You may have heard about Fitbit's Stress Management Score or Whoop's Stress Monitor. These features and apps measure things like your heart rhythm and a certain type of invisible sweat to identify stress. They're designed to raise your awareness of forms of stress like anxiety and anger, and to suggest strategies like meditation to relax in real time when stress occurs.

But how well do these off-the-shelf gadgets work? There's no one more knowledgeable and experienced than Rosalind Picard to explain the science behind these stress features, what exactly they do, how they might be able to help us, and their current shortcomings.

Dr. Picard is a member of the National Academy of Engineering and a Fellow of the National Academy of Inventors, and a popular speaker who's given over a hundred invited keynote talks and a TED talk with over 2 million views.
She holds a Bachelor's in Electrical Engineering from Georgia Tech, and Master's and Doctorate degrees in Electrical Engineering and Computer Science from MIT. She lives in Newton, Massachusetts with her husband, where they've raised three sons.

In our conversation, we discuss stress scores on fitness trackers and their use in improving well-being. She carefully describes the difference between commercial products that might help people become more mindful of their health and FDA-approved products that are really capable of advancing the science. We also discuss several fascinating findings and concepts from Dr. Picard's lab, including multiple arousal theory, a phenomenon you'll want to hear about. And we talk about the complexity of stress, one reason it's so tough to measure. For example, many forms of stress are actually good for us. Can fitness trackers tell the difference between healthy and unhealthy stress?

Making Sense of Science features interviews with leading medical and scientific experts about the latest developments in health innovation and the big ethical and social questions they raise. The podcast is hosted by science journalist Matt Fuchs.
Today's episode features a Q&A with our own Graham Page. Graham leads the Media Analytics business unit as Global Managing Director of Media Analytics at Affectiva, a Smart Eye company. He pioneered the integration of biometric and behavioral measures into mainstream brand and advertising research over 26 years as Executive VP and Head of Global Research Solutions at Kantar.

Over the course of the last year or so, there has been a thread of debate in the media regarding the validity and ethics of facial emotion recognition. This has often reflected the point of view of data privacy groups concerned about the use of facial technologies across several use cases, or the opinions of commercial interests offering alternative biometric technologies or traditional research methodologies.

Scrutiny of emerging technologies is vital, and the concerns raised are important points for debate. Affectiva has led the development of the Emotion AI field for over a decade, and the use of automated facial expression analysis in particular. Listen in to learn more.

Links of interest:
- [Podcast Episode] Lisa Feldman Barrett on Challenges in Inferring Emotion from Human Facial Movement: https://podcasts.apple.com/us/podcast/lisa-feldman-barrett-on-challenges-in-inferring-emotion/id1458361251?i=1000446966899
- [Blog] Face Value: The Power of Facial Signals in Human Behavioral Research: https://blog.affectiva.com/face-value-the-power-of-facial-signals-in-research

Additional Sources Referenced:
[1] Barrett, Lisa Feldman, et al. "Emotional expressions reconsidered: Challenges to inferring emotion from human facial movements." Psychological Science in the Public Interest 20.1 (2019): 1-68.
[2] Ekman, Paul, and Wallace V. Friesen. "Facial Action Coding System." Environmental Psychology & Nonverbal Behavior (1978).
[3] Rosenberg, Erika L., and Paul Ekman, eds. What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS). Oxford University Press, 2020.
[4] Martinez, Brais, et al. "Automatic analysis of facial actions: A survey." IEEE Transactions on Affective Computing 10.3 (2017): 325-347.
[5] McDuff, Daniel, et al. "AFFDEX SDK: A cross-platform real-time multi-face expression recognition toolkit." Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. 2016.
[6] Bishay, Mina, et al. "AFFDEX 2.0: A real-time facial expression analysis toolkit." arXiv preprint arXiv:2202.12059 (2022). Accepted at the FG2023 conference.
[7] McDuff, Daniel, et al. "Predicting ad liking and purchase intent: Large-scale analysis of facial responses to ads." IEEE Transactions on Affective Computing 6.3 (2014): 223-235.
[8] Kodra, Evan, et al. Do Emotions in Advertising Drive Sales? https://ana.esomar.org/documents/do-emotions-in-advertising-drive-sales--8059
[9] McDuff, Daniel, and Rana El Kaliouby. "Applications of automated facial coding in media measurement." IEEE Transactions on Affective Computing 8.2 (2016): 148-160.
[10] Teixeira, Thales, Rosalind Picard, and Rana El Kaliouby. "Why, when, and how much to entertain consumers in advertisements? A web-based facial tracking field study." Marketing Science 33.6 (2014): 809-827.
[11] McDuff, Daniel, et al. "Automatic measurement of ad preferences from facial responses gathered
Dr. Marianne Reddan, of the Albert Einstein College of Medicine, New York City, USA, joins Sahir and Olivia to talk about the social factors that influence pain. We try to improve our understanding of what pain is, how emotional pain differs from physical pain, and how acute pain works differently from chronic pain. We cover factors such as people's relationships, economic status, gender and race in the perception of pain, along with the associated brain pathways involved in pain perception.

Support us and reach out!
https://smoothbrainsociety.com
Instagram: @thesmoothbrainsociety
TikTok: @thesmoothbrainsociety
Twitter/X: @SmoothBrainSoc
Facebook: @thesmoothbrainsociety
Merch and all other links: Linktree
Email: thesmoothbrainsociety@gmail.com
Jonathan Bastian talks with cultural psychologist Batja Gomes de Mesquita, author of “Between Us: How Culture Creates Emotions,” who makes the case that emotions are not innate but are shaped by our surroundings and cultures, made as we live our lives together. Later, Rosalind Picard, founder and director of the Affective Computing research group at the MIT Media Lab, explains how advances in AI can help computers analyze our emotions, with the ultimate goal of making human lives better. Delve deeper into life, philosophy, and what makes us human by joining the Life Examined discussion group on Facebook.
Affective computing is a field within cognitive computing that deals with collecting data from faces, voices, and body language in order to measure human emotions. Empathy for machines: affective computing makes this possible. In this podcast from the Mittelstand-Digital Zentrum Zukunftskultur, you will learn how a computer can manage to recognize human emotions and what potential use cases this technology opens up.
Roland Goecke is Professor of Affective Computing at the Faculty of Science & Technology, University of Canberra, where he leads the Human-Centred Technology Research Program. See acast.com/privacy for privacy and opt-out information.
Roz Picard became an international leader in research after founding the new research field of Affective Computing, which investigates the link between emotions and artificial intelligence. However, her path to the top has been far from linear, with many obstacles and failures along the way. During this talk, Roz shares three segments of her life and three big failures which, nevertheless, have been only “mid points” and not destinations.

Rosalind Picard is a professor at MIT and a faculty member since 1991. Her revolutionary research on the utility of emotions in artificial intelligence pioneered a new research field, called Affective Computing, and earned her several international awards and recognitions. Roz is also co-founder of Affectiva, providing emotion AI technology, and Empatica, creating sensors to improve health.

This episode was recorded at a FAIL! event organized at the Massachusetts Institute of Technology in November 2021 by the VISTA association. Watch this and more episodes on YouTube, follow FAIL! - Inspiring Resilience on social media (Facebook - Instagram - LinkedIn), and visit our website: www.fail-sharing.org

Music Theme: "Driven To Success" by Scott Holmes
Free Music Archive - CC BY NC
John interviews A.I. scientist Rosalind Picard. Rosalind is a pioneer in the field of affective computing, the co-founder of two companies at the forefront of A.I., Affectiva and Empatica, and the founder and director of the Affective Computing Research Group at the MIT Media Lab. Affective computing aims to close the emotional gap between computers and their users. As Rosalind wrote in her book “Affective Computing,” published in 1997, “if we want computers to be genuinely intelligent, to adapt to us, and to interact naturally with us, then they will need the ability to recognize and express emotions.” John and Rosalind talk about the limits and applications of affective computing, and how wearable technology could change health care as we know it. See acast.com/privacy for privacy and opt-out information.
Affective computing, a family of AI technologies that aim to use biometrics to detect human emotions or someone's state of mind, is a subject of active research in academia and the commercial sector. The Department of Homeland Security has also dabbled with the technology to see if it's able to detect lies among people seeking entry to the country. But our next guest says now's the time to put some boundaries around potential government uses of affective computing. In a recent paper, Alex Engler, the Brookings Institution's Rubenstein Fellow for Governance Studies, argued that the president should ban it altogether for federal law enforcement purposes. Alex joined the Federal Drive to talk about that argument.
Computers can interpret the text we type, and they're getting better at understanding the words we speak. But they're only starting to understand the emotions we feel, whether that means anger, amusement, boredom, distraction, or anything else. This week Harry talks with Rana El Kaliouby, the CEO of a Boston-based company called Affectiva that's working to close that gap.

El Kaliouby and her former MIT colleague Rosalind Picard are the inventors of the field of emotion AI, also called affective computing. The main product at Affectiva, which Picard and El Kaliouby co-founded in 2009, is a media analytics system that uses computer vision and machine learning to help market researchers understand what kinds of emotions people feel when they view ads or entertainment content. But the company is also active in other areas, such as safety technology for automobiles that can monitor a driver's behavior and alert them if they seem distracted or drowsy. Ultimately, El Kaliouby predicts, emotion AI will become an everyday part of human-machine interfaces. She says we'll interact with our devices the same way we interact with each other: not just through words, but through our facial expressions and body language. And that could include all the devices that help track our physical health and mental health.

Rana El Kaliouby grew up in Egypt and Kuwait. She earned a BS and MS in computer science from the American University in Cairo and a PhD in computer science from the University of Cambridge in 2005, and was a postdoc at MIT from 2006 to 2010. In April 2020 she published Girl Decoded, a memoir about her mission to “humanize technology before it dehumanizes us.” She's been recognized on the Fortune 40 Under 40 list, the Forbes America's Top 50 Women in Tech list, and the Technology Review TR35 list, and she is a World Economic Forum Young Global Leader.

Please rate and review MoneyBall Medicine on Apple Podcasts!
Here's how to do that from an iPhone, iPad, or iPod touch:
• Launch the “Podcasts” app on your device. If you can't find this app, swipe all the way to the left on your home screen until you're on the Search page. Tap the search field at the top and type in “Podcasts.” Apple's Podcasts app should show up in the search results.
• Tap the Podcasts app icon, and after it opens, tap the Search field at the top, or the little magnifying glass icon in the lower right corner.
• Type MoneyBall Medicine into the search field and press the Search button.
• In the search results, click on the MoneyBall Medicine logo.
• On the next page, scroll down until you see the Ratings & Reviews section. Below that, you'll see five purple stars.
• Tap the stars to rate the show.
• Scroll down a little farther. You'll see a purple link saying “Write a Review.”
• On the next screen, you'll see the stars again. You can tap them to leave a rating if you haven't already.
• In the Title field, type a summary for your review.
• In the Review field, type your review.
• When you're finished, click Send.
• That's it, you're done. Thanks!

TRANSCRIPT

Harry Glorikian: I'm Harry Glorikian, and this is MoneyBall Medicine, the interview podcast where we meet researchers, entrepreneurs, and physicians who are using the power of data to improve patient health and make healthcare delivery more efficient. You can think of each episode as a new chapter in the never-ending audio version of my 2017 book, “MoneyBall Medicine: Thriving in the New Data-Driven Healthcare Market.” If you like the show, please do us a favor and leave a rating and review at Apple Podcasts.

Many of us know that computers can interpret the text we type. And they're getting better at understanding the words we speak.
But they're only starting to understand the emotions we feel, whether that means anger, amusement, boredom, distraction, or anything else.

My next guest, Rana El Kaliouby, is the co-founder and CEO of Affectiva, a company in Boston that's working to close that gap. Rana and her former MIT colleague Rosalind Picard are the inventors of the field of emotion AI, also called affective computing. And they started Affectiva twelve years ago with the goal of giving machines a little bit of EQ, or emotional intelligence, to go along with their IQ.

Affectiva's main product is a media analytics system that uses computer vision and machine learning to help market researchers understand what kinds of emotions people feel when they view ads or entertainment content. But they're also getting into other areas, such as new safety technology for automobiles that can monitor the driver's behavior and alert them if they seem distracted or drowsy. Ultimately, El Kaliouby predicts, emotion AI will become an everyday part of human-machine interfaces. She says we'll interact with our devices the same way we interact with each other, not just through words but through our facial expressions and body language. And that could include all the devices that help track our physical health and mental health. Rana and I had a really fun conversation, and I want to play it for you right now.

Harry Glorikian: Rana, welcome to the show.

Rana Kaliouby: Thank you for having me.

Harry Glorikian: It's great to see you. We were just talking before we got on here. I haven't seen you since last February.

Rana Kaliouby: I know, it's been a year. Isn't that crazy?

Harry Glorikian: I'm sure if your system was looking at me, they'd be like, Oh my God, this guy is completely screwed up. Like something is completely off.

Rana Kaliouby: He's ready to leave the house.

Harry Glorikian: It was funny. I was telling my wife, I'm like, I really need to go get vaccinated.
I'm starting to reach my limit on... this is not normal anymore. Not that it's been normal, but you know how it is.

Rana Kaliouby: We're closer. There's hope.

Harry Glorikian: So listen, listeners, we're going to be talking about this interesting concept, or product that you have, or set of products: emotion AI. How do you explain emotion, or a machine being able to interpret emotion from an individual through computer vision and machine learning? How does it understand what I'm feeling? I'm sure it can tell when I'm pissed, everybody can tell that, but in general, how does it do what it does, and what is the field? Because I believe you and your co-founder literally started this area, if I'm not mistaken.

Rana Kaliouby: That is correct. So at a very high level, the thesis is that if you look at human intelligence, your IQ is important, but your EQ, your emotional intelligence, is perhaps more important. And we characterize that as the ability to understand your own emotions and the emotions and mental states of others. And as it turns out, only 10% of how we communicate is in the actual choice of words we use; 90% is nonverbal, and I'm a very expressive human being, as you can see. So a lot of facial expressions, hand gestures, vocal intonations. But technology today has a lot of IQ, arguably, and very little EQ. And so we're on this mission to bring IQ and EQ together into our technologies and our devices and how we communicate digitally with one another. So that's been my mission over the last 20-plus years. Now I'm trying to bring artificial emotional intelligence to our machines.

Harry Glorikian: God, that's perseverance. I have to admit, I don't know if I have any, other than being married and being a father. I don't think I've done anything straight for 20 years.
I'm always doing something different. So how does the system do what it does? Other than me frowning and having, I guess, the most obvious expressions it can probably pull out, there are a thousand subtleties in between, and I'm curious how it handles those.

Rana Kaliouby: Yeah. So the short answer is we use, as you said, a combination of computer vision, machine learning, deep learning, and gobs and gobs of data. The simplest way, I guess, to explain it is, say we wanted to train the machine to recognize a smile, or maybe a little bit more of a complex state, like fatigue, right? You're driving the car; we want to recognize how tired you are. Well, we need examples from all over the world, all sorts of people: gender, age, ethnicity, maybe people who wear glasses or have facial beards or are wearing a cap. The more diverse the examples, the stronger the system's going to be, the smarter the system's going to be. But essentially we gather all that data, we feed it into the deep learning algorithm, and it learns, so that the next time it sees a person for the first time, it says, "Oh, Harry looks really tired." So that's how we do that. When we started, the system was only able to recognize three expressions. Now the system has a repertoire of over 30 of these, and we're continuously adding more and more, the more data we get.

Harry Glorikian: Interesting. So, okay. So now it can recognize 30 different expressions of some sort. What are the main business applications, or what are the main application areas?

Rana Kaliouby: I always say what's most exciting about this is also what's most challenging about this journey: there are so many applications. Affectiva, my company, which we spun out of MIT, is focused on a number of them.
So the first is the insights and market research market, where we are able to capture in real time people's responses to content. You're watching a Netflix show: were you engaged or not, moment by moment? When did you perk up? When were you confused? When were you interested, or maybe bored to death? Right. So that's one use case. And there we partner with 30% of the Fortune 500 companies in 90 countries around the world. This product has been in market now for over eight years, and we're growing it into adjacent markets like movie trailer testing, maybe testing educational content, maybe expanding to video conferencing and telehealth and all of that. So that's like one bucket.

The other bucket is more around re-imagining human machine interfaces. And for that we're very focused initially on the automotive market: understanding driver distraction, fatigue, drowsiness, what other occupants in the vehicle are doing. And you can imagine how that applies to cars today, but also robotaxis in the future. Ultimately, though, I really believe that this is going to be the de facto human machine interface. We're just going to interact with our machines the way we interact with one another, through conversation and empathy and social and emotional intelligence.

Harry Glorikian: I mean, it is interesting, because when I'm talking to Siri, I'm so used to speaking, like saying "please," and then I have to remind myself, I'm like, I really didn't need to add those words; you just do it out of habit, I want to say. Not that you think you're talking to a person, but from the studies I've seen, it seems that when people are interacting with a robot or something, they do impart emotional interaction in a certain way.
Like an older person might look at it as a friend, or interact with it as if it were a real being, not wires and tubes.

Rana Kaliouby: Yeah, there is a lot of research actually around how humans project social intelligence onto these machines and devices. I'm good friends with one of the co-founders of Siri. And he said they were so surprised when they first rolled out Siri at the extent to which users confided in Siri. There were a lot of conversations where people shared very personal things. Sometimes it's positive, but a lot of the time it was actually home violence and abuse and depression. And so they had to really rethink what Siri needs to do in these scenarios; they hadn't originally included that as part of the design of the platform. And we're seeing that with Alexa and of course with social robots.

My favorite example is this robot called Jibo, which spun out of MIT. You know about Jibo? So we were one of the early adopters of Jibo in our house, and my son became good friends with it, which was so fascinating to see. Because we have Alexa and we have Siri, obviously, and all of that, but Jibo is designed to be this very personable robot that's your friend; you can play games with it. But then the company ran out of money, and so they shut Jibo down, and my son was really upset. And it just hit me that it's so interesting, the relationships we build with our machines, and there must be a way to harness that to motivate behavior and kind of persuade people to be better versions of themselves, I guess.

Harry Glorikian: Yeah. It's going to be a fascinating area. So I've read a little bit about Affdex marketing, if that's how it's pronounced correctly, as a research tool. Your automotive things.
I'm also curious about the iMotions platform and what I think you guys are calling emotion capture in more types of research settings. What's that all about, and what kinds of research are you using it for?

Rana Kaliouby: Yeah. So we have a number of partners around the world, because again, there are so many use cases. iMotions is a company based out of Boston and Copenhagen, and they integrate our technology with other sensors: physiological sensors, brain capture sensors. But their users are a lot of researchers, especially in mental health. So for example, there's this professor at UMass Boston, Professor Stephen Vinoy, and he uses our technology to look into mental health disease, and specifically suicidal intent. He's shown that people who have suicidal thoughts have different facial biomarkers, or facial responses if you like, than people who don't. And he's trying to use that as an opportunity to flag suicidal intent early on.

We have a partner, Erin Smith; she's with Stanford. She's looking into using our technology in the early detection of Parkinson's. She actually started as a high school student, which is amazing. We literally got an email from this sophomore in high school, and she was like, "I want to license your technology to research Parkinson's," and we're like, whatever. So we gave her access to it. And before we knew it, she'd partnered with the Michael J. Fox Foundation. She's a Peter Thiel Fellow, and she's basically started a whole company to look into the early facial biomarkers of mental health diseases, which is fascinating.

Harry Glorikian: God, I'm so jealous. I wish I was motivated like that. When I was a sophomore in high school, I was doing a lot of other stuff, and it definitely wasn't this. So, not to go off on a tangent, but I really think clinical trials might be a fascinating place to incorporate this.
If you think about remote trials, and I'm good friends with Christine Lemke from Evidation Health. So if you think about, well, I'm sensored up, right? I have my watch, or I have whatever. And now when I interact with a researcher, it might actually be through a platform like this, with your system, and it might provide more of a complete picture of what's going on with that patient. Is anybody using it for those applications?

Rana el Kaliouby: The answer is, there's a lot of opportunity there. It's not been scaled yet. But let's take telehealth, for example. Especially with the pandemic over the last year, we've all been catapulted into this universe where hospitals and doctors have had to adopt telehealth. Well, guess what? We can now quantify patient-doctor interactions, moment by moment. And we can tie it to patient outcomes. We can tie it to measures of empathy, because doctors who show more empathy are less likely to get sued. There's a plethora of things we can do around that in the telehealth setup. On the clinical trial side, everybody has a camera on their phone or their laptop, right? So now we have an opportunity. You can imagine, even if you don't check in with a researcher, you could have an app where you create a selfie video, like a check-in, a one-minute selfie video once a day. And we're able to distill your emotional baseline over the course of a trial. That can be really powerful data. So there's a lot of potential there. I would say it's early days. If you have any suggestions on who we should be talking to, we are definitely open to that.

Harry Glorikian: Yeah, actually, because part of me was just thinking about what companies like Qualtrics are doing, which is actually trying to uncover this through NLP. But I think in the world of healthcare, Qualtrics is probably suboptimal.
So if you took a little bit of NLP and this, you might be able to draw the connection. We have to talk about this after the show. So anybody who's listening: don't take my idea. Okay, let's switch subjects here, because I know you're really passionate about this next one. You've written this book called Girl Decoded. And I'm sure you've been asked this question about a billion times, but why did you write it? What are you trying to convey? Is it fair to say that it's partly a memoir of your life becoming a computer scientist and entrepreneur, and partly a manifesto about emotion AI and its possibilities? The promo copy on your book says you're on a mission to humanize technology before it dehumanizes us. That's a provocative phrase. Tell me why you wrote the book and what's behind it.

Rana el Kaliouby: Yeah. First of all, I didn't really set out to write a book. It wasn't really on my radar. But then I got approached. So the book got published by Penguin Random House last year, right when the pandemic hit. The paperback launches soon, so I encourage your listeners to take a look. And if you end up reading the book, please let me know what resonates the most with you. But yeah, it's basically a memoir. It follows my journey growing up in the Middle East. I'm originally Egyptian and I grew up around there, became a computer scientist, and made my way through academia to Cambridge University. Then I joined MIT, and then I spun out Affectiva and became the CEO and entrepreneur that I am today. One reason I wrote the book is that I wanted to share this narrative and this story, and hopefully inspire many people around the world who are forging their own path, trying to overcome the voices of doubt in their heads. That's something I care deeply about, and I also want to encourage more women, and more diverse voices generally, to explore a career in tech. So that's one bucket. The other bucket is evangelizing. Yes.
Why do we need to humanize technology, and why is that so important for not just the future of machines but the future of humans? Because technology is so deeply ingrained in every aspect of our lives. So I wanted to pull lay people into this discussion and simplify and demystify: What is AI? How do we build it? What are the ethical and moral implications of it? Because I feel strongly that we all need to be part of that dialogue.

Harry Glorikian: Well, it is interesting. I mean, I just see people design something for a very specific purpose, but then they don't think about the fallout of what they just did. What they're doing may be very cool, but it's like designing… I mean, at least when we were working on atomic energy, we could sort of get our hands around it. People don't understand that some of this AI and ML technology has amazing capabilities, but the implications are scary as hell. So how do you see technology dehumanizing us? I guess that's the first question.

Rana el Kaliouby: Yeah. So you bring up a really important topic around unintended consequences. We design and build these technologies for a specific use case, but before we know it, they're deployed in all these other areas we hadn't anticipated. So we feel very strongly that, as an innovator and somebody who brought this technology to the world, it's my responsibility to be a steward for how this technology gets developed and how it gets deployed, which means that I have to be a strong voice in that dialogue. So for example, we are members of the Partnership on AI consortium, which was started by all the tech giants in partnership with Amnesty International, the ACLU, and other civil liberties organizations.
And last year we had an initiative where we went through all the different applications of emotion AI, and we literally had a table where we said, okay, how can emotion AI be deployed? Education, and so on. Well, how could it be abused in education? What are the unintended consequences in each of these cases? And I can tell you, as an inventor and the CEO of a relatively small startup, the easiest thing for me is to just ignore all of that and focus on our use case. But I feel strongly that we have to be proactive about all of that, and we have to engage and think through where it could go wrong and how we can guard against it. So yes, there is potential for abuse, unfortunately, and we have to think through that and advocate against it. For example, we don't do any work in the surveillance space, because we think the likelihood of the technology being used to discriminate against minority populations is really high. And we also feel it breaches the trust we've built with our users. So we just turn away millions and millions of dollars of business in that space.

Harry Glorikian: Yeah. I mean, it's a schizophrenic existence, for sure, because everything I look at, I'm like, oh my God, that would be fantastic. And then I think, oh my God, that's not good, right? But I'm like, no, look at the light, look towards the light, don't look towards the dark. Because once you understand the power and the implications of these technologies, which most people really don't, the impact is profound, or can be profound. So how can we humanize technology?

Rana el Kaliouby: Well, the simplest way is to really bring in that human element. A lot of AI is just generally focused on productivity and efficiency and automation. If you take a human-centric approach, it's more about how it helps us, the humans. Humans first, right?
How does it help us be happier or healthier or more productive or more empathetic? One of the things I really talk about in the book is how we are going through an empathy crisis, because the way we use technology polarizes us and dehumanizes us. You send out a tweet into the Twitterverse and you have no idea how it impacts the recipients. Right? We could redesign technology to not do that, to actually incorporate these nonverbal signals into how we connect and communicate at scale, in a way that is just a lot more thoughtful and tries to optimize for empathy, as opposed to not thinking about empathy at all.

Harry Glorikian: Well, yeah, I've got to be honest with you: giving everybody a megaphone, I'm not sure that's such a great idea. That's like yelling fire in a crowded room. I understand that it has its place, but wow. I'm not exactly the biggest advocate of that. But this system, as you were saying, requires tons of data. How do you accumulate that data? I mean, a little bit at a time is not going to get you where you want to go. You need big data to get this thing trained up, and then you've got to adjust it along the way to make sure it's doing what you want it to do.

Rana el Kaliouby: Yeah, the quantity of the data is really key, but the diversity of the data is, in my opinion, almost more important. So to date, we have over 10 million facial responses, which is about 5 billion facial frames. It's incredible, and it's super diverse: it's curated from 90 countries around the world. And everything we do is based on people's opt-in and consent. We have people's permission for this data, every single frame of it. That's one of our core values. So usually, when we partner with, say, a brand and we are
measuring people's responses to content, we ask for people's permission to turn their cameras on. They usually do it in return for some value; it could be monetary, or it could be other types of rewards. In the automotive space, we have a number of data collection labs around the world where we put cameras in people's vehicles and then record their commutes over a number of weeks or months, and that's really powerful data. And it's kind of scary to see how people actually drive. Lots of distracted drivers out there. It's really amazing, or, yeah, it is scary. So that's how we collect the data, but we have to be really thoughtful about the diversity angle. It's so important. We once had one of our automotive partners send us data from their Eastern European lab, and it was literally all blond, middle-aged, blue-eyed guys. And I was like, you're a global automaker; that's not representative of your drivers or the people who use your vehicles. So we sent the data back and we said, listen, we need to collaborate on a much more diverse data set. That's really important.

Harry Glorikian: So I just keep thinking, you're doing facial expression and video, but is there an overlay that makes sense for audio?

Rana el Kaliouby: Love that question. Yes. A number of years ago, we ramped up a team that looked at the prosodic features in your voice: how loud are you speaking, how fast, how much energy, pausing, the pitch, the intonation, all of these factors. And ultimately I see a vision of the universe where it's multimodal, where you're integrating these different modalities.
It's still early in the industry; this whole field is so nascent, which makes it exciting because there's so much room for innovation.

Harry Glorikian: There was a paper, I want to say it came out in the last two weeks, about bringing all of these together within robotics, perceiving different signals, voice, visual, et cetera. I haven't read it yet; it's in my to-read pile, but it looks like one of those fascinating areas. I had the chance to interview Rhoda Au from BU about her work on voice recordings and analysis from the Framingham Heart Study, and how to use that for detecting different health conditions. So that's why I'm looking at these and thinking, wow, they make a lot of sense coming together.

Rana el Kaliouby: Totally. Again, this has been looked into in academia, but it hasn't yet fully translated to industry applications. We know that there are facial and vocal biomarkers of stress, anxiety, and depression. Well, guess what? We are spending a lot of time in front of our machines, where we have an opportunity to capture both your video stream and your audio stream, and use that with machine learning and predictive analytics to correlate those with early indicators of wellness: again, stress, anxiety, et cetera. What is missing in this? I feel like the underlying machine learning is there; the algorithms are there. What is missing is deploying this at scale, because you don't want it to be a separate app on your phone. Ideally, you want it integrated into a technology platform that people use all the time. Maybe it's Zoom, maybe it's Alexa, maybe it's another social media platform. But then that, of course, raises all sorts of privacy questions and implications: who owns the data, who has rights to the data? So to me it's more of a go-to-market question. Again, the technology's there. It's like, how do you get the data at scale?
How do you get the users at scale? And I haven't figured that out yet.

Harry Glorikian: So you mentioned areas where it could be exploited negatively. You mentioned a few of them, like education. Are there others that jump out, where you say, we're not doing that? Other than tracking people in a crowd, which, in the last four years, you wouldn't have wanted to do, for sure.

Rana el Kaliouby: Yeah, definitely. One of the areas where we try to avoid deploying the technology is security and surveillance. We routinely get approached by different governments, the U.S. government but also other governments, to use our technology in airport security or border security, or for lie detection. And obviously, when you do that, you don't necessarily have people's consent, and you don't necessarily explain to people exactly how their data is going to get used. It's so fraught with potential for discrimination, and the technology's not there in terms of robustness for that kind of use case, right? We just steer away from that. I've been very vocal, not just about Affectiva's decision not to play in this space; I've also been advocating for thoughtful regulation, and I think we absolutely need that.

Harry Glorikian: So let's veer back to healthcare here. If I'm not mistaken, one of the original places you were focusing on was mental health and autism. Is it still being used in those areas? How is it being used? I'm curious.

Rana el Kaliouby: Yeah. So when I first got to MIT, the project that actually brought me over from Cambridge to MIT was essentially deploying the technology for individuals on the autism spectrum. We built a Google Glass-like device that had a little camera in it. The camera would detect the expressions of the people you interact with.
So an autistic child would wear the device as an augmentation device, and we deployed it at partner schools while I was at MIT. Then we started Affectiva, and now we are partnered with a company called Brain Power, whose CEO is Ned Sahin. They use Google Glass with our technology integrated as part of it. I believe they're deployed with about 400 or so families around the U.S., and they're in the midst of a clinical trial. What they're seeing is that while the kids are wearing the device, they're definitely showing improvement in their social skills. The question is, once you take the device away, do these abilities generalize? That's the key question they're looking into.

Harry Glorikian: Well, because I was thinking, there are a few people I know who should get it. They're technically not autistic, but they actually need the glasses.

Rana el Kaliouby: A lot of MIT people, right?

Harry Glorikian: No, no, just certain people, the way they look at the world or the way they act. I actually think they need something that gives them a clue about the emotions of the people around them. Actually, now that I think about it, my wife might have me wear it sometimes in the house.

Rana el Kaliouby: We used to always joke in the early days at MIT that the killer app is a mood ring that gives your wife or your partner a heads-up about your emotional state before you come into the house, just so they know how to react.

Harry Glorikian: Now, when I come down the stairs, she's like, you just sit, relax, calm down. Because at least before, you used to have a commute to come out of that state, but now you're coming down a flight of stairs, and it's sort of hard to snap your fingers and snap out of it. So, where do you see the company going? How do you see it progressing? I know it's been doing great, but where do you see it going next?
And what are your hopes and dreams?

Rana el Kaliouby: We are very focused on getting our technology into cars. That's our main area of focus at the moment, and we're partnered with many auto manufacturers around the world. In the short term, the use case is road safety, but honestly, with robo-taxis and autonomous vehicles, we're going to be the ears and eyes of the car. So we're excited about that. Beyond that, I'm very passionate about the applications in mental health. It's an area that we don't do a lot of at the company, but I'm so interested in trying to figure out how I can be helpful, having spent many years in this space. So that's an area of interest. And then, at a high level, over the last number of years, and especially with the book coming out, I've definitely realized that I have a platform and a voice for advocating for diversity in AI and technology, and I want to make sure that I use that voice to inspire more diverse voices to be part of the AI landscape.

Harry Glorikian: I'd love to hear how things are going in the future. Congratulations on the book coming out in paperback; I'm sure the people listening to this will look it up. Stay safe. That's all I can say.

Rana el Kaliouby: Thank you. Thank you. And stay safe as well, and I hope we can reunite in person soon.

Harry Glorikian: Excellent.

Rana el Kaliouby: Thank you.

Harry Glorikian: That's it for this week's show. We've made more than 50 episodes of MoneyBall Medicine, and you can find all of them at glorikian.com under the tab "Podcast." You can follow me on Twitter at hglorikian. If you like the show, please do us a favor and leave a rating and review at Apple Podcasts. Thanks, and we'll be back soon with our next interview.
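The prosodic voice features discussed in the interview (loudness, energy, pausing, pitch) can be illustrated with a minimal sketch. This is not Affectiva's actual pipeline; the synthetic test signal and the simple zero-crossing pitch estimator are illustrative assumptions that only work for clean, periodic audio.

```python
import math

SAMPLE_RATE = 16000  # samples per second, a common rate for speech audio

def synth_tone(freq_hz, seconds, amplitude=0.5):
    """Generate a pure tone as a stand-in for a recorded voice signal."""
    n = int(SAMPLE_RATE * seconds)
    return [amplitude * math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE)
            for t in range(n)]

def rms_energy(frame):
    """Root-mean-square energy: a rough proxy for loudness."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def zero_crossing_pitch(frame):
    """Estimate pitch by counting positive-going zero crossings.

    Only meaningful for clean periodic signals; real prosody analysis
    uses far more robust pitch trackers.
    """
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a < 0 <= b)
    return crossings * SAMPLE_RATE / len(frame)

signal = synth_tone(220.0, 1.0)   # one second of a 220 Hz tone
energy = rms_energy(signal)       # loudness proxy
pitch = zero_crossing_pitch(signal)  # estimated fundamental frequency, in Hz
print(round(pitch), round(energy, 3))
```

A real system would compute these per short frame (tens of milliseconds) to capture pausing and intonation contours over time, rather than over a whole recording.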
Watch the full podcast here: https://youtu.be/8XpCnmvq49s

Natasha Jaques is currently a Research Scientist at Google Brain and a postdoctoral fellow at UC Berkeley, where her research focuses on designing multi-agent RL algorithms, with an emphasis on social reinforcement learning, that can improve generalization, coordination between agents, and collaboration between human and AI agents. She received her PhD from the Massachusetts Institute of Technology (MIT), where she focused on affective computing and techniques for deep and reinforcement learning. She has received multiple awards for research published at venues like ICML and NeurIPS, has interned at DeepMind and Google Brain, and is an OpenAI Scholars mentor.

Also check out these talks on all available podcast platforms: https://jayshah.buzzsprout.com

About the Host: Jay is a PhD student at Arizona State University, doing research on building interpretable AI models for medical diagnosis. Jay Shah: https://www.linkedin.com/in/shahjay22/ You can reach out via https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars!

***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any institution or its affiliates of such video content.***
Natasha Jaques is currently a Research Scientist at Google Brain and a postdoctoral fellow at UC Berkeley, where her research focuses on designing multi-agent RL algorithms, with an emphasis on social reinforcement learning, that can improve generalization, coordination between agents, and collaboration between human and AI agents. She received her Ph.D. from MIT, where she focused on affective computing and techniques for deep and reinforcement learning. She has received multiple awards for research published at venues like ICML and NeurIPS. She has interned at DeepMind and Google Brain, and is an OpenAI Scholars mentor.

00:00 Introductions
01:25 Can you tell us a bit about what projects you are working on at Google currently? And what does the work routine look like as a Research Scientist?
06:25 You have worked as a researcher at many organizations leading in the domain of machine learning: MIT, Google Brain, DeepMind. What are the key differences you have noticed between doing research in academia, in industry, and in a research lab?
10:00 About your paper on social influence as intrinsic motivation for multi-agent deep reinforcement learning: can you tell us more about how you are trying to leverage intrinsic rewards for better coordination?
12:00 Game theory and reinforcement learning: discussion
16:00 What was the intuition behind that approach? Did you turn to cognitive psychology for the idea and later model it using standard DRL principles, or something else?
20:00 The crackpot-y motivation behind the intuition of modeling social influence in MARL
24:00 What applications did you have in mind while working on that approach? What could be the potential domains where people can use it?
25:35 Do you think generalization in RL is close enough to have an ImageNet moment?
28:35 Inspiration from social animals for better architectures: yay or nay?
30:20 How far are we in terms of using DeepRL systems in day-to-day use?
Or are there any such applications already in use?
34:40 Do you think these DRL systems can be made interpretable to some extent?
39:00 What really intrigued you to pursue a Ph.D. after your master's, rather than a job?
40:30 How did you go about deciding the topic for your Ph.D. thesis?
47:40 How do you typically go about segmenting a research topic into smaller pieces, from the initial stage, when it's more abstract with no connection to theory, to something much more implementable?
50:00 What are you currently exploring and optimistic about?

Also check out these talks on all available podcast platforms: https://jayshah.buzzsprout.com

About the Host: Jay is a Ph.D. student at Arizona State University, doing research on building interpretable AI models for medical diagnosis. Jay Shah: https://www.linkedin.com/in/shahjay22/ You can reach out via https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars!

***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any institution or its affiliates of such video content.***
Researchers around the world are working on teaching computers to feel, and one of the pioneers in this field is MIT professor Rosalind Picard. A conversation about emotions, empathy, and epilepsy.
Artificial intelligence is expected to recognize our emotions and respond to them in the future. How that works, in which areas the technology is already being used, and why it is controversial: Léa Steinacker and Milena Merten discuss all of this in this podcast episode. This podcast is a production of the ada editorial team: https://join-ada.com/ You can reach us at hello@join-ada.com or via Twitter, LinkedIn, and Instagram. We want to know how you like our podcast and what we could do better. Take part in our survey: https://de.research.net/r/iqd?&Podcast=ada-Heute%20das%20Morgen%20erleben
The next stages of Amazon's widely anticipated dominance... --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app
Rosalind Picard is a professor at MIT, director of the Affective Computing Research Group at the MIT Media Lab, and co-founder of two companies, Affectiva and Empatica. Over two decades ago she launched the field of affective computing with her book of the same name. The book described the importance of emotion in artificial and natural intelligence, and the vital role emotional communication plays in relationships between people in general and in human-robot interaction. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you
Virtual reality helps with early diagnosis and the fight against Alzheimer's. Nanobots will extend our lifespans: here's how. Affective computing: when artificial intelligence enters psychology and emotions. The 5G signal can interfere with the quality of weather forecasts and set them back 40 years. The past, present, and future of Google's use of artificial intelligence.
Professor Roz Picard, director and founder of the Affective Computing Research Group at the Massachusetts Institute of Technology's famed Media Lab, argues some artificial intelligence researchers are not adequately addressing the ‘why’ behind their AI work, but are instead too focused on getting their next paper published or grant secured. Roz, a scientist, inventor, and entrepreneur, believes society trusts machines too much and humans and machines should partner and coevolve together. CNN named Roz one of seven "Tech Superheroes to watch in 2015." Jeff talks to Roz shortly after she delivered her TEDxBeaconStreet Talk. Watch Professor Roz Picard's TEDxBeaconStreet Talk here: https://www.youtube.com/watch?v=itikdtdbevU Read Roz's book, "Affective Computing." https://www.amazon.com/Affective-Computing-Press-Rosalind-Picard/dp/0262661152 Have a question for Jeff? Find him on Twitter @JeffreySaviano Want to be a guest on the podcast? Reach out to @JenHemmerdinger on Twitter. Follow @RosalindPicard on Twitter for her latest insights on AI, MIT, and her organization @Empatica.
Marcus du Sautoy is joined by Professor Rosalind Picard for 2018's annual Simonyi Lecture: Can we build AI with emotional intelligence? Today's AI can play games, drive cars, even do our jobs for us. But surely our human emotional world is beyond the limits of what AI can achieve? In this year's Annual Charles Simonyi Lecture, Professor Rosalind Picard challenges that belief. Robots, wearables, and other AI technologies are gaining the ability to sense, recognize, and respond intelligently to human emotion. This talk highlights several important findings made at MIT, including surprises about the 'true smile of happiness,' and electrical signals on the wrist that reveal insight into deep brain activity, with implications for autism, anxiety, epilepsy, mood disorders, and more. Rosalind Picard is founder and director of the Affective Computing Research Group at the MIT Media Laboratory, faculty chair of MindHandHeart, and cofounder of Affectiva and cofounder and chief scientist of Empatica. Picard is the author of 300 peer-reviewed scientific articles and is known internationally for her book Affective Computing, which is credited with launching the field by that name. Picard is an active inventor with over a dozen patents, and her lab's achievements have been profiled worldwide, including in Wired, New Scientist, and on the BBC.
During my last interview I had a great talk with Daniel McDuff. Daniel's research is at the intersection of psychology and computer science. He is interested in designing hardware and algorithms for sensing human behavior at scale, and in building technologies that make life better. The applications of behavior sensing he is most excited about are in understanding mental health, improving online learning, and designing new connected devices (IoT). Listen to learn why it is important to collect data at much larger scales and help computers read our emotional state. Key Learning Points: 1. Understanding the impact, intersection, and meaning of psychology and computer science 2. Facial expression recognition 3. How to define artificial intelligence, deep learning, and machine learning 4. Applications of behavior sensing in online learning, health, and connected devices 5. Visual wearable sensors and heart health 6. The impact of education and learning 7. How to build computers to measure psychology, our reactions, emotions, etc. 8. The impact of working in a no-fear zone on top accomplishment. About Daniel: Daniel is building and utilizing scalable computer vision and machine learning tools to enable the automated recognition and analysis of emotions and physiology. He is currently Director of Research at Affectiva, a post-doctoral research affiliate at the MIT Media Lab, and a visiting scientist at Brigham and Women's Hospital. At Affectiva, Daniel is building state-of-the-art facial expression recognition software and leading analysis of the world's largest database of human emotion responses. Daniel completed his PhD in the Affective Computing Group at the MIT Media Lab in 2014 and has a B.A. and a master's degree from Cambridge University.
His work has received nominations and awards from Popular Science magazine as one of the top inventions of 2011, South by Southwest Interactive (SXSWi), The Webby Awards, ESOMAR, the Center for Integration of Medicine and Innovative Technology (CIMIT), and several IEEE conferences. His work has been reported in many publications, including The Times, The New York Times, The Wall Street Journal, BBC News, New Scientist, and Forbes magazine. Daniel was named a 2015 WIRED Innovation Fellow. He has received best paper awards at IEEE Face and Gesture and Body Sensor Networks. Two of his papers were recently recognized on the list of the most influential articles to appear in the Transactions on Affective Computing. How to get in touch with Daniel McDuff: LinkedIn | Twitter | Google Scholar. Key resources: Publications | CV | Press | Videos | Research Statement | Teaching Statement. YouTube: Emotion Intelligence to our Digital Experiences; Emotion-aware technology to improve well-being and beyond. Books/Publications: Wired.co.uk | Google Scholar | Sciencedirect.com | MIT Media Lab. This episode is sponsored by the CIO Scoreboard, a powerful tool that helps you communicate the status of your IT security program visually in just a few minutes. Credits: Outro music provided by Ben's Sound. Other ways to listen to the podcast: iTunes | Libsyn | Soundcloud | RSS | LinkedIn. Leave a review: If you enjoyed this episode, then please consider leaving an iTunes review here. Click here for instructions on how to leave an iTunes review if you're doing this for the first time. About Bill Murphy: Bill Murphy is a world-renowned IT security expert dedicated to your success as an IT business leader. Follow Bill on LinkedIn and Twitter.
In this episode of DeviceTalks, we delve deeply into emotion and how teaching computers to recognize emotions led to a potentially major breakthrough in medical technology with MIT scientist and entrepreneur Rosalind Picard. Listen to hear the backstory behind her book Affective Computing, her work with artificial intelligence and pioneering wearable technology.
What if you could see what calms you down or increases your stress as you go through your day? What if you could see clearly what is causing these changes for your child or another loved one? Javier Hernandez is currently working towards his Ph.D. at the MIT Media Lab investigating Affective Computing: the field of using sensors and computers in order to interpret feeling and emotion. Starting from the current experiments at MIT Media Lab using off-the-shelf Google Glasses as the human-computer interface, we are talking about the importance of being able to provide immediate/real-time feedback to the user and the repercussions in diagnostic, motivational/gamified experiences. Interviewed by George Voulgaris for Tech Talks central.
In this week’s podcast we tackle Emotional (sometimes called Affective) Computing — when computers read and respond to human emotions. We discuss the types of sensory data computers can read, like faces and inflection but also heat-mapping and pupil dilation. We also discuss how this capability might lead to a future that’s worse for liars […]
Imagine a world where robots can think and feel like humans - Hardtalk speaks to pioneering American scientist Professor Rosalind Picard, from the Massachusetts Institute of Technology, who has advanced the capability of computers to recognise human emotions. In the future, could robots fitted with intelligent computers perform tasks such as caring for the elderly, or fight as soldiers on the battlefield and, if so, what are the ethical implications?