Podcast appearances and mentions of Ben Shneiderman

  • 16 podcasts
  • 23 episodes
  • 39m average duration
  • Infrequent episodes
  • Latest: Jun 25, 2024

POPULARITY (2017–2024 trend chart)


Best podcasts about Ben Shneiderman

Latest podcast episodes about Ben Shneiderman

Experiencing Data with Brian O'Neill
146 - (Rebroadcast) Beyond Data Science - Why Human-Centered AI Needs Design with Ben Shneiderman

Experiencing Data with Brian O'Neill

Play Episode Listen Later Jun 25, 2024 42:07


Ben Shneiderman is a leading figure in the field of human-computer interaction (HCI). Having founded one of the oldest HCI research centers in the country at the University of Maryland in 1983, Shneiderman has been intently studying the design of computer technology and its use by humans. Currently, Ben is a Distinguished University Professor in the Department of Computer Science at the University of Maryland and is working on a new book on human-centered artificial intelligence.     I'm so excited to welcome this expert from the field of UX and design to today's episode of Experiencing Data! Ben and I talked a lot about the complex intersection of human-centered design and AI systems.     In our chat, we covered: Ben's career studying human-computer interaction and computer science. (0:30) 'Building a culture of safety': Creating and designing ‘safe, reliable and trustworthy' AI systems. (3:55) 'Like zoning boards': Why Ben thinks we need independent oversight of privately created AI. (12:56) 'There's no such thing as an autonomous device': Designing human control into AI systems. (18:16) A/B testing, usability testing and controlled experiments: The power of research in designing good user experiences. (21:08) Designing ‘comprehensible, predictable, and controllable' user interfaces for explainable AI systems and why [explainable] XAI matters. (30:34) Ben's upcoming book on human-centered AI. 
(35:55)     Resources and Links: People-Centered Internet: https://peoplecentered.net/ Designing the User Interface (one of Ben's earlier books): https://www.amazon.com/Designing-User-Interface-Human-Computer-Interaction/dp/013438038X Bridging the Gap Between Ethics and Practice: https://doi.org/10.1145/3419764 Partnership on AI: https://www.partnershiponai.org/ AI incident database: https://www.partnershiponai.org/aiincidentdatabase/ University of Maryland Human-Computer Interaction Lab: https://hcil.umd.edu/ ACM Conference on Intelligent User Interfaces: https://iui.acm.org/2021/hcai_tutorial.html Human-Computer Interaction Lab, University of Maryland, Annual Symposium: https://hcil.umd.edu/tutorial-human-centered-ai/ Ben on Twitter: https://twitter.com/benbendc     Quotes from Today's Episode The world of AI has certainly grown and blossomed — it's the hot topic everywhere you go. It's the hot topic among businesses around the world — governments are launching agencies to monitor AI and are also making regulatory moves and rules. … People want explainable AI; they want responsible AI; they want safe, reliable, and trustworthy AI. They want a lot of things, but they're not always sure how to get them. The world of human-computer interaction has a long history of giving people what they want, and what they need. That blending seems like a natural way for AI to grow and to accommodate the needs of real people who have real problems. And not only the methods for studying the users, but the rules, the principles, the guidelines for making it happen. So, that's where the action is. Of course, what we really want from AI is to make our world a better place, and that's a tall order, but we start by talking about the things that matter — the human values: human rights, access to justice, and the dignity of every person. 
We want to support individual goals, a person's sense of self-efficacy — they can do what they need to in the world, their creativity, their responsibility, and their social connections; they want to reach out to people. So, those are the sort of high aspirational goals that become the hard work of figuring out how to build it. And that's where we want to go. - Ben (2:05)   The software engineering teams creating AI systems have got real work to do. They need the right kind of workflows, engineering patterns, and Agile development methods that will work for AI. The AI world is different because it's not just programming, but it also involves the use of data that's used for training. The key distinction is that the data that drives the AI has to be the appropriate data, it has to be unbiased, it has to be fair, it has to be appropriate to the task at hand. And many people and many companies are coming to grips with how to manage that. This has become controversial, let's say, in issues like granting parole, or mortgages, or hiring people. There was a controversy that Amazon ran into when its hiring algorithm favored men rather than women. There's been bias in facial recognition algorithms, which were less accurate with people of color. That's led to some real problems in the real world. And that's where we have to make sure we do a much better job and the tools of human-computer interaction are very effective in building these better systems in testing and evaluating. - Ben (6:10)     Every company will tell you, “We do a really good job in checking out our AI systems.” That's great. We want every company to do a really good job. But we also want independent oversight of somebody who's outside the company — someone who knows the field, who's looked at systems at other companies, and who can bring ideas and bring understanding of the dangers as well. These systems operate in an adversarial environment — there are malicious actors out there who are causing trouble. 
You need to understand what the dangers and threats are to the use of your system. You need to understand where the biases come from, what dangers are there, and where the software has failed in other places. You may know what happens in your company, but you can benefit by learning what happens outside your company, and that's where independent oversight from accounting companies, from governmental regulators, and from other independent groups is so valuable. - Ben (15:04)     There's no such thing as an autonomous device. Someone owns it; somebody's responsible for it; someone starts it; someone stops it; someone fixes it; someone notices when it's performing poorly. … Responsibility is a pretty key factor here. So, if there's something going on, if a manager is deciding to use some AI system, what they need is a control panel, let them know: what's happening? What's it doing? What's going wrong and what's going right? That kind of supervisory autonomy is what I talk about, not full machine autonomy that's hidden away and you never see it because that's just head-in-the-sand thinking. What you want to do is expose the operation of a system, and where possible, give the stakeholders who are responsible for performance the right kind of control panel and the right kind of data. … Feedback is the breakfast of champions. And companies know that. They want to be able to measure the success stories, and they want to know their failures, so they can reduce them. The continuous improvement mantra is alive and well. We do want to keep tracking what's going on and make sure it gets better. Every quarter. - Ben (19:41)     Google has had some issues regarding hiring in the AI research area, and so has Facebook with elections and the way that algorithms tend to become echo chambers. These companies — and this is not through heavy research — probably have the heaviest investment of user experience professionals within data science organizations. 
They have UX, ML-UX people, UX for AI people, they're at the cutting edge. I see a lot more generalist designers in most other companies. Most of them are rather unfamiliar with any of this or what the ramifications are on the design work that they're doing. But even these largest companies that have, probably, the biggest penetration into the most number of people out there are getting some of this really important stuff wrong. - Brian (26:36)   Explainability is a competitive advantage for an AI system. People will gravitate towards systems that they understand, that they feel in control of, that are predictable. So, the big discussion about explainable AI focuses on what's usually called post-hoc explanations, and the Shapley, and LIME, and other methods are usually tied to the post-hoc approach. That is, you use an AI model, you get a result and you say, “What happened?” Why was I denied a parole, or a mortgage, or a job? At that point, you want to get an explanation. Now, that idea is appealing, but I'm afraid I haven't seen too many success stories of that working. … I've been diving through this for years now, and I've been looking for examples of good user interfaces of post-hoc explanations. It took me a long time till I found one. The culture of AI model-building would be much bolstered by an infusion of thinking about what the user interface will be for these explanations. And even the DARPA's XAI—Explainable AI—project, which has 11 projects within it—has not really grappled with this in a good way about designing what it's going to look like. Show it to me. … There is another way. And the strategy is basically prevention. Let's prevent the user from getting confused and so they don't have to request an explanation. We walk them along, let the user walk through the step—this is like Amazon checkout process, seven-step process—and you know what's happened in each step, you can go back, you can explore, you can change things in each part of it. 
It's also what TurboTax does so well, in really complicated situations, and walks you through it. … You want to have a comprehensible, predictable, and controllable user interface that makes sense as you walk through each step. - Ben (31:13)
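The post-hoc methods Ben names (Shapley values, LIME) attribute a model's output to its input features after a decision has been made. As a rough illustration of the underlying idea only, not of any specific tool or of Ben's own work, here is a minimal pure-Python sketch that computes exact Shapley values for a made-up linear "loan score" model by enumerating feature coalitions; all feature names and numbers are hypothetical.

```python
from itertools import combinations
from math import factorial

# Hypothetical linear "loan score" model. Linear models are a convenient
# illustration because their exact Shapley values have a closed form:
# phi_i = w_i * (x_i - baseline_i).
WEIGHTS = {"income": 0.5, "debt": -0.8, "age_of_account": 0.2}

def model(x):
    return sum(WEIGHTS[f] * x[f] for f in WEIGHTS)

def shapley_values(x, baseline):
    """Exact Shapley attributions by enumerating all feature coalitions.

    Features absent from a coalition are replaced by their baseline value,
    the same convention SHAP-style explainers use. Exact enumeration is
    exponential in the number of features, which is why real tools rely
    on approximations.
    """
    features = list(WEIGHTS)
    n = len(features)
    phi = {}
    for i in features:
        others = [f for f in features if f != i]
        total = 0.0
        for r in range(n):
            for S in combinations(others, r):
                # Model input with and without feature i "present".
                with_i = {f: (x[f] if f in S or f == i else baseline[f])
                          for f in features}
                without_i = {f: (x[f] if f in S else baseline[f])
                             for f in features}
                weight = (factorial(len(S)) * factorial(n - len(S) - 1)
                          / factorial(n))
                total += weight * (model(with_i) - model(without_i))
        phi[i] = total
    return phi

applicant = {"income": 70.0, "debt": 30.0, "age_of_account": 10.0}
baseline = {"income": 50.0, "debt": 20.0, "age_of_account": 5.0}
phi = shapley_values(applicant, baseline)

# Efficiency property: attributions sum to f(x) - f(baseline).
assert abs(sum(phi.values()) - (model(applicant) - model(baseline))) < 1e-9
```

Note that producing the numbers is the easy part; Ben's point is that turning `phi` into a user interface a loan applicant can actually comprehend is where most post-hoc explanation efforts fall short.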

ANTIC The Atari 8-bit Podcast
ANTIC Episode 99 - 10-Year ANTICversary!

ANTIC The Atari 8-bit Podcast

Play Episode Listen Later Jun 21, 2023 113:28


ANTIC Episode 99 - 10-Year ANTICversary! In this episode of ANTIC The Atari 8-Bit Computer Podcast… We bring in some of our Atari friends and talk about the last 10 years of Atari 8-bit happenings in this celebration of the ten-year anniversary of ANTIC! READY! Recurring Links  Floppy Days Podcast  AtariArchives.org  AtariMagazines.com  Kevin's Book “Terrible Nerd”  New Atari books scans at archive.org  ANTIC feedback at AtariAge  Atari interview discussion thread on AtariAge  Interview index: here  ANTIC Facebook Page  AHCS  Eaten By a Grue  Next Without For  Links for Items Mentioned in Show: Scantastix project manual scanning - https://archive.org/search?query=identifier%3Astx_%2A&sort=-publicdate  Editing VCFE videos - https://www.youtube.com/playlist?list=PL_e5fSxflvrx86M2x_D5rQLIGjeffQuUM  “How to Excel on Your Atari 600XL and 800XL” by Timothy Knight - https://archive.org/details/how-to-excel-on-your-atari  “Let's Learn BASIC” by Ben Shneiderman - https://archive.org/details/Let_s_Learn_BASIC  Wade Ripkowski (Inverse ATASCII) text mode libraries for Action!, CC65, and MadPascal: https://github.com/Ripjetski6502/A8ActionLibrary  https://github.com/Ripjetski6502/A8CLibrary  https://github.com/Ripjetski6502/A8MadPascalLibrary  Inverse ATASCII web site: https://inverseatascii.info  Atari Projects (Jason Moore) - https://atariprojects.org/  Bill Lange, Atari archivist - http://atari8bitads.blogspot.com/  Bill Kendrick, Atari game and utility programmer - http://www.newbreedsoftware.com/atari/  Atari Party events  Robin McMullen, Player/Missile podcast https://playermissile.com/  Thom Cherryhomes, FujiNet - https://fujinet.online/  Atariteca: https://www.atariteca.net.pe/  VCF: https://vcfed.org  Old School Gamer Magazine: https://www.oldschoolgamermagazine.com/  Audacity Games (David Crane/Kitchen brothers, Circus Convoy): https://adgm.us/portal/index.html  Jamie Lendino's Books - 
https://www.amazon.com/s?k=jamie+lendino&crid=2BJGJAZOJ5A2P&sprefix=jamie+lendino%2Caps%2C98&ref=nb_sb_noss_1  Jason Moore's Book - https://www.amazon.com/Atari-Projects-Jason-Moore/dp/0578556421    

New Books Network
Ben Shneiderman, "Human-Centered AI" (Oxford UP, 2022)

New Books Network

Play Episode Listen Later Apr 4, 2023 22:19


The remarkable progress in algorithms for machine and deep learning has opened the doors to new opportunities, and some dark possibilities. However, a bright future awaits those who build on their working methods by including HCAI strategies of design and testing. As many technology companies and thought leaders have argued, the goal is not to replace people, but to empower them by making design choices that give humans control over technology. In Human-Centered AI (Oxford UP, 2022), Professor Ben Shneiderman offers an optimistic realist's guide to how artificial intelligence can be used to augment and enhance humans' lives. This project bridges the gap between ethical considerations and practical realities to offer a road map for successful, reliable systems. Digital cameras, communications services, and navigation apps are just the beginning. Shneiderman shows how future applications will support health and wellness, improve education, accelerate business, and connect people in reliable, safe, and trustworthy ways that respect human values, rights, justice, and dignity. Jake Chanenson is a computer science Ph.D. student at the University of Chicago. Broadly, Jake is interested in topics relating to HCI, privacy, and tech policy. Jake's work has been published in top venues such as ACM's CHI Conference on Human Factors in Computing Systems. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network

New Books in Science, Technology, and Society
Ben Shneiderman, "Human-Centered AI" (Oxford UP, 2022)

New Books in Science, Technology, and Society

Play Episode Listen Later Apr 4, 2023 22:19


Same episode and description as the New Books Network entry above. Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/science-technology-and-society

New Books in Technology
Ben Shneiderman, "Human-Centered AI" (Oxford UP, 2022)

New Books in Technology

Play Episode Listen Later Apr 4, 2023 22:19


Same episode and description as the New Books Network entry above. Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/technology

In Conversation: An OUP Podcast
Ben Shneiderman, "Human-Centered AI" (Oxford UP, 2022)

In Conversation: An OUP Podcast

Play Episode Listen Later Apr 4, 2023 22:19


Same episode and description as the New Books Network entry above.

Future of Humanity
Human-Centered Artificial Intelligence: An Idea Whose Time Has Come

Future of Humanity

Play Episode Listen Later Dec 6, 2022 38:53


As artificial intelligence-based applications become pervasive in every sphere of life, there is a movement to make them more "human-centered". We talk with Ben Shneiderman, whose pioneering work has led to the Human-Centered AI movement. He is passionate about innovation and the good that AI can do in making our lives much better. He talks to us about how human control and AI automation can work together to make humanity better off, rather than mindlessly pursuing AI-based solutions. Ben has interesting views on where the romance of AI falls down and what the future holds when using AI intelligently.

Creative Tech Podcast
The Best of Series 1

Creative Tech Podcast

Play Episode Listen Later Sep 28, 2022 15:38


Who takes their inspiration from John Cleese? Who would bin education? And you would never guess who wants to ban robots... We kick off series two with some of the highlights from last season. Every week, we invite some of the world's smartest thinkers in creativity and technology to have a chat with Professor Neil Maiden, the director of the Centre for Creativity enabled by AI. At the end of every conversation we ask our guest the same three questions: What do they need to be creative? What new app would they create? And what piece of tech would they bin if they had the chance? From the godfather of user-centred computer design, Ben Shneiderman, to TED talker and business author Margaret Heffernan, behavioural psychologist Richard Chataway, renowned digital artist and pioneer Ernest Edmonds, creative problem-solving guru Scott Isaksen, and innovation expert Dr Sara Jones, we have gathered up their responses to our regular three-question feature. Produced by Diana Squires. Executive Producer and Presenter: Sam Steele. Theme music generated by AI @ www.dsoundraw.io. Connect with the National Centre for Creativity enabled by AI: www.linktr.ee/CebAI Hosted on Acast. See acast.com/privacy for more information.

Artificiality
Ben Shneiderman: Human-Centered AI

Artificiality

Play Episode Listen Later Jul 3, 2022 63:14


Many of our listeners will be familiar with human-centered design and human-computer interaction. These fields of research and practice have driven technology product design and development for decades. Today, however, these fields are changing to adapt to the increasing use of artificial intelligence, leading to an emerging field called human-centered AI. Prior to the widespread use of AI, technology products were powerful, yet predictable—they operated based on the rules created by their designers. With AI, however, machines respond to data, providing predictions that may not be anticipated when the product is designed or programmed. This is incredibly powerful but can also create unintended consequences. This challenge leads to the questions: How can we design AI-based products that provide benefits to humans? How can we create AI systems that learn and change with new data but still provide consequences intended by the system's designers? These questions led us to interview Ben Shneiderman, an Emeritus Distinguished University Professor in the Department of Computer Science at the University of Maryland. Ben recently published a wonderfully approachable book, Human-Centered AI, which provides a guide to how AI can be used to augment and enhance humans' lives. As the founding director of the Human-Computer Interaction Laboratory, Ben has a 40-year history of researching how humans and computers interact, making him an ideal source to talk with about how humans and AI interact. If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world's great minds. Subscribe to get Artificiality delivered to your email. Learn more about Sonder Studio. Thanks to Jonathan Coulton for our music. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificiality.substack.com

Artificial Intelligence and You
092 - Guest: Ben Shneiderman, Human-Centered AI Expert, part 2

Artificial Intelligence and You

Play Episode Listen Later Mar 21, 2022 31:14


This and all episodes at: https://aiandyou.net/ .   We continue talking about  human-centered AI design with the man who wrote the book on user interface design: Ben Shneiderman, Emeritus Distinguished University Professor in the Department of Computer Science, Founding Director of the Human-Computer Interaction Laboratory and a member of the Institute for Advanced Computer Studies, all at the University of Maryland. His new book, Human-Centered AI, was just published, and in this conclusion we talk about what it's like to get into this field, and the role of standards and governance in human-centered AI. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.        

Artificial Intelligence and You
091 - Guest: Ben Shneiderman, Human-Centered AI Expert, part 1

Artificial Intelligence and You

Play Episode Listen Later Mar 14, 2022 31:49


This and all episodes at: https://aiandyou.net/ .   Who better to answer the call for expertise in human-centered AI design than the man who wrote the book on user interface design? Ben Shneiderman, Emeritus Distinguished University Professor in the Department of Computer Science, Founding Director of the Human-Computer Interaction Laboratory and a member of the Institute for Advanced Computer Studies, all at the University of Maryland, received six honorary doctorates in human-computer interface design. His new book, Human-Centered AI, was just published, and in this interview we talk about rationalism and empiricism in human-computer interaction, and metaphors in HCI, including his four metaphors for AI that empowers people. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.        

Creative Tech Podcast
Ben Shneiderman

Creative Tech Podcast

Play Episode Listen Later Sep 30, 2021 36:45


Ben Shneiderman is the father of Human-Computer Interaction (HCI). He created much that we take for granted in our digital world - clickable highlighted web links and touchscreen keyboards on mobile devices, for starters. In this podcast he talks to Professor Neil Maiden of the UK's National Centre for Creativity enabled by AI (CebAI) about creativity and technology, and why our apps need to evolve to give users greater control of the creative activities they use digital tools for. He explains why metaphors like "social robots" and "intelligent computers" are unhelpful, and describes how AI can be a fundamental part of the process of developing creative self-belief. Links: Ben Shneiderman: groups.google.com/g/human-centered-ai twitter.com/HumanCentredAI CebAI: twitter.com/CebAICentre www.linkedin.com/company/national…ty-enabled-by-ai Hosted on Acast. See acast.com/privacy for more information.

Experiencing Data with Brian O'Neill
062 - Why Ben Shneiderman is Writing a Book on the Importance of Designing Human-Centered AI

Experiencing Data with Brian O'Neill

Play Episode Listen Later Apr 6, 2021 38:28


Ben Shneiderman is a leading figure in the field of human-computer interaction (HCI).  Having founded one of the oldest HCI research centers in the country at the University of Maryland in 1983, Shneiderman has been intently studying the design of computer technology and its use by humans. Currently, Ben is a Distinguished University Professor in the Department of Computer Science at the University of Maryland and is working on a new book on human-centered artificial intelligence.   I’m so excited to welcome this expert from the field of UX and design to today’s episode of Experiencing Data! Ben and I talked a lot about the complex intersection of human-centered design and AI systems.   In our chat, we covered: Ben's career studying human-computer interaction and computer science. (0:30) 'Building a culture of safety': Creating and designing ‘safe, reliable and trustworthy’ AI systems. (3:55) 'Like zoning boards': Why Ben thinks we need independent oversight of privately created AI. (12:56) 'There’s no such thing as an autonomous device': Designing human control into AI systems. (18:16) A/B testing, usability testing and controlled experiments: The power of research in designing good user experiences. (21:08) Designing ‘comprehensible, predictable, and controllable’ user interfaces for explainable AI systems and why [explainable] XAI matters. (30:34) Ben's upcoming book on human-centered AI. 
(35:55) Resources and Links: People-Centered Internet: https://peoplecentered.net/ Designing the User Interface (one of Ben’s earlier books): https://www.amazon.com/Designing-User-Interface-Human-Computer-Interaction/dp/013438038X Bridging the Gap Between Ethics and Practice: https://doi.org/10.1145/3419764 Partnership on AI: https://www.partnershiponai.org/ AI incident database: https://www.partnershiponai.org/aiincidentdatabase/ University of Maryland Human-Computer Interaction Lab: https://hcil.umd.edu/ ACM Conference on Intelligent User Interfaces: https://iui.acm.org/2021/hcai_tutorial.html Human-Computer Interaction Lab, University of Maryland, Annual Symposium: https://hcil.umd.edu/tutorial-human-centered-ai/ Ben on Twitter: https://twitter.com/benbendc   Quotes from Today’s Episode The world of AI has certainly grown and blossomed — it’s the hot topic everywhere you go. It’s the hot topic among businesses around the world — governments are launching agencies to monitor AI and are also making regulatory moves and rules. … People want explainable AI; they want responsible AI; they want safe, reliable, and trustworthy AI. They want a lot of things, but they’re not always sure how to get them. The world of human-computer interaction has a long history of giving people what they want, and what they need. That blending seems like a natural way for AI to grow and to accommodate the needs of real people who have real problems. And not only the methods for studying the users, but the rules, the principles, the guidelines for making it happen. So, that’s where the action is. Of course, what we really want from AI is to make our world a better place, and that’s a tall order, but we start by talking about the things that matter — the human values: human rights, access to justice, and the dignity of every person. 
We want to support individual goals, a person’s sense of self-efficacy — they can do what they need to in the world, their creativity, their responsibility, and their social connections; they want to reach out to people. So, those are the sort of high aspirational goals that become the hard work of figuring out how to build it. And that’s where we want to go. - Ben (2:05)   The software engineering teams creating AI systems have got real work to do. They need the right kind of workflows, engineering patterns, and Agile development methods that will work for AI. The AI world is different because it’s not just programming, but it also involves the use of data that’s used for training. The key distinction is that the data that drives the AI has to be the appropriate data, it has to be unbiased, it has to be fair, it has to be appropriate to the task at hand. And many people and many companies are coming to grips with how to manage that. This has become controversial, let’s say, in issues like granting parole, or mortgages, or hiring people. There was a controversy that Amazon ran into when its hiring algorithm favored men rather than women. There’s been bias in facial recognition algorithms, which were less accurate with people of color. That’s led to some real problems in the real world. And that’s where we have to make sure we do a much better job and the tools of human-computer interaction are very effective in building these better systems in testing and evaluating. - Ben (6:10)   Every company will tell you, “We do a really good job in checking out our AI systems.” That’s great. We want every company to do a really good job. But we also want independent oversight of somebody who’s outside the company — someone who knows the field, who’s looked at systems at other companies, and who can bring ideas and bring understanding of the dangers as well. These systems operate in an adversarial environment — there are malicious actors out there who are causing trouble. 
You need to understand what the dangers and threats are to the use of your system. You need to understand where the biases come from, what dangers are there, and where the software has failed in other places. You may know what happens in your company, but you can benefit by learning what happens outside your company, and that’s where independent oversight from accounting companies, from governmental regulators, and from other independent groups is so valuable. - Ben (15:04)

There’s no such thing as an autonomous device. Someone owns it; somebody’s responsible for it; someone starts it; someone stops it; someone fixes it; someone notices when it’s performing poorly. … Responsibility is a pretty key factor here. So, if there’s something going on, if a manager is deciding to use some AI system, what they need is a control panel to let them know: what’s happening? What’s it doing? What’s going wrong and what’s going right? That kind of supervisory autonomy is what I talk about, not full machine autonomy that’s hidden away and you never see it, because that’s just head-in-the-sand thinking. What you want to do is expose the operation of a system, and where possible, give the stakeholders who are responsible for performance the right kind of control panel and the right kind of data. … Feedback is the breakfast of champions. And companies know that. They want to be able to measure the success stories, and they want to know their failures, so they can reduce them. The continuous improvement mantra is alive and well. We do want to keep tracking what’s going on and make sure it gets better. Every quarter. - Ben (19:41)

Google has had some issues regarding hiring in the AI research area, and so has Facebook with elections and the way that algorithms tend to become echo chambers. These companies — and this is not through heavy research — probably have the heaviest investment of user experience professionals within data science organizations.
They have UX, ML-UX people, UX for AI people; they’re at the cutting edge. I see a lot more generalist designers in most other companies. Most of them are rather unfamiliar with any of this or what the ramifications are on the design work that they’re doing. But even these largest companies — which probably have the biggest penetration into the greatest number of people out there — are getting some of this really important stuff wrong. - Brian (26:36)

Explainability is a competitive advantage for an AI system. People will gravitate towards systems that they understand, that they feel in control of, that are predictable. So, the big discussion about explainable AI focuses on what’s usually called post-hoc explanations, and Shapley, LIME, and other methods are usually tied to the post-hoc approach. That is, you use an AI model, you get a result, and you say, “What happened?” Why was I denied a parole, or a mortgage, or a job? At that point, you want to get an explanation. Now, that idea is appealing, but I’m afraid I haven’t seen too many success stories of that working. … I’ve been diving through this for years now, and I’ve been looking for examples of good user interfaces of post-hoc explanations. It took me a long time till I found one. The culture of AI model-building would be much bolstered by an infusion of thinking about what the user interface will be for these explanations. And even DARPA’s XAI (Explainable AI) project — which has 11 projects within it — has not really grappled in a good way with designing what it’s going to look like. Show it to me. … There is another way. And the strategy is basically prevention. Let’s prevent the user from getting confused so they don’t have to request an explanation. We walk them along, let the user walk through the steps — this is like the Amazon checkout process, a seven-step process — and you know what’s happened in each step, you can go back, you can explore, you can change things in each part of it.
It’s also what TurboTax does so well, in really complicated situations, and walks you through it. … You want to have a comprehensible, predictable, and controllable user interface that makes sense as you walk through each step. - Ben (31:13)

The PFF Podcast
Ben Shneiderman, distinguished professor, author and pioneer in human-computer interaction.

Jul 31, 2020 · 39:56


Ben Shneiderman is a professor in the Department of Computer Science at the University of Maryland, where he is also the founding director of the Human-Computer Interaction Laboratory and a member of the Institute for Advanced Computer Studies, and he is the author or co-author of numerous influential books. On this episode of the PFF Podcast, Ben talks with Jeffrey about human-computer interaction, the balance between human and machine control, and building machines that empower people — machines that enhance, augment, and amplify human abilities rather than replace or mimic them.

Eavesdrop on Experts
At the human-computer interface

Feb 28, 2018 · 34:38


In 1980 Ben Shneiderman published one of the first texts in the field that would come to be known as human-computer interaction, and has since pioneered innovations we take for granted today, like touchscreens and hyperlinks. He has now turned his attention to maximising the real-world impact of university research by combining applied and basic research; a topic he addresses in his new book 'The New ABCs of Research'. He chats with our reporter Steve Grimwade.

Episode recorded: December 14, 2017
Interviewer: Steve Grimwade
Producers: Dr Andi Horvath, Chris Hatzis and Silvi Van-Wall
Audio engineer: Gavin Nebauer
Editor: Chris Hatzis
Banner image: Getty Images

Mixed Methods
The Future of HCI (@CHI) - Ben Shneiderman, U of Maryland

May 18, 2017 · 31:13


Ben Shneiderman is one of the founding fathers of the field of human-computer interaction. His publications, such as Designing the User Interface, are canonical at this point, and he founded one of the first HCI labs in the world at the University of Maryland.

Data Stories
073  |  Kim Albrecht on Untangling Tennis and the Cosmic Web

May 4, 2016 · 30:10


Kim is a visualization researcher and information designer. He currently works at the Center for Complex Network Research, the lab led by famous network physicist László Barabási. Kim works in a team of scientists to create effective and beautiful visualizations that explain complex scientific phenomena. In the show we focus on Untangling Tennis, a data visualization project aimed at explaining the relationship between popularity and athletic performance. We also talk about his more recent project, the Cosmic Web, which visualizes 24,000 galaxies and their network of gravitational relationships. Enjoy the show!

This episode of Data Stories is sponsored by Qlik, which allows you to explore the hidden relationships within your data that lead to meaningful insights. Make sure to check out the blog post listing Visualization Advocate Patrik Lundblad’s favorite data visualization pioneers. You can try out Qlik Sense for free at qlik.de/datastories.

LINKS
Kim Albrecht: http://kimalbrecht.com/
Untangling Tennis: http://untangling-tennis.net/
The Cosmic Web: http://cosmicweb.barabasilab.com/
D3.js: https://d3js.org/
three.js, a JavaScript library for 3D vis: http://threejs.org/
Ben Shneiderman’s The New ABCs of Research: http://www.cs.umd.edu/hcil/newabcs/
Peter Galison’s Image and Logic: http://www.amazon.com/Image-Logic-Material-Culture-Microphysics/dp/0226279170
Peter Galison’s “Images Scatter Into Data, Data Gathers Into Images”: http://www.ann-sophielehmann.nl/content/docs/grgalison.pdf

The Social Media Clarity Podcast
What does your hashtag look like? Lee Rainie from Pew Internet Research

May 20, 2014 · 15:11


What does your hashtag look like? - Lee Rainie from Pew Internet Research - Episode 17

Scott and Marc speak with Lee Rainie from Pew Internet Research about the new report Mapping Twitter Topic Networks: From Polarized Crowds to Community Clusters and how its findings can be used to better understand and grow online communities.

Lee Rainie - Director, Pew Research Center's Internet & American Life Project
The Six Types of Twitter Conversations, by Lee Rainie
Mapping Twitter Topic Networks: From Polarized Crowds to Community Clusters, by Marc A. Smith, Lee Rainie, Ben Shneiderman and Itai Himelboim
Conversational Archetypes: Six Conversation and Group Network Structures in Twitter, by Marc A. Smith, Lee Rainie, Ben Shneiderman and Itai Himelboim
NodeXL
Tools for Transparency: A How-to Guide for Social Network Analysis with NodeXL

Transcript available at SocialMediaClarity.net

The Social Media Clarity Podcast
Five Stages of Grief - Facebook Likes are not Community

May 7, 2014 · 17:58


Your hosts - Scott, Randy, and Marc - discuss recent very public changes to Facebook reach as an indicator that companies may be looking in all the wrong places to connect with their community. Or is it audience, and what's the difference anyway? See SocialMediaClarity.net for a full transcript of this episode.

Crystal Coleman (@thatgirlcrystal), from Ning.com, started the episode off, and was the editor and graphic layout artist for the white paper mentioned in today's episode. The Five Questions for Selecting an Online Community Platform is available from the Cultivating Community Blog.

Sponsor your Page posts (April 23, 2012)
What if Everything You Know About Social Media Marketing is Wrong?
Ducking Responsibility: Marketers and Agencies Playing a Shameful Facebook Blame Game
A Brand and Person Offer the Same Post with Very Different Results
White Paper: 5 Questions for Selecting an Online Community Platform, by co-host Randy Farmer
Mapping Twitter Topic Networks: From Polarized Crowds to Community Clusters, by co-host Marc A. Smith along with Lee Rainie, Ben Shneiderman, and Itai Himelboim
Part 2: Conversational Archetypes: Six Conversation and Group Network Structures in Twitter, ibid.

Data Stories
029  |  Treemaps w/ Ben Shneiderman

Nov 15, 2013 · 64:05


We have a super guest this time on the show! Ben Shneiderman joins us to talk about his new treemap art project (beautiful treemap prints you can hang on the wall), treemaps and their history, and information visualization in general. Needless to say, we had a wonderful time chatting with him: lots of history and very inspiring thoughts (tip: we should look at vis 50-100 years from now!)

World Usability Day New England
The New Science of Universal Usability

Dec 19, 2007 · 44:21


There is a growing awareness that new kinds of science are needed to cope with many contemporary problems. The idea of Science 2.0 shifts attention from the natural to the made world, where richly interdisciplinary problems are resistant to reductionist solutions. Science 2.0 includes topics such as environmental preservation, energy sustainability, conflict resolution, community building, and universal usability. The problems of universal usability have technical foundations, but their intensely human dimensions mean that innovative solutions are needed to promote broad usage of the web, mobile technologies, and new media. The goals are to enable broad access to learning, democratic processes, health information, community services, etc. Challenging research problems emerge from addressing the needs of diverse users (novice/expert, young/old, abled/disabled, multiple languages, cross-cultural) who use a wide range of technologies (small/large displays, slow/fast networks, voice/text/video). In some cases, traditional controlled studies are successful (Science 1.0), but often novel case study ethnographic methods are more effective (Science 2.0).

Ben Shneiderman is a Professor in the Department of Computer Science, Founding Director (1983-2000) of the Human-Computer Interaction Laboratory, and Member of the Institute for Advanced Computer Studies at the University of Maryland at College Park. He was elected as a Fellow of the Association for Computing Machinery (ACM) in 1997 and a Fellow of the American Association for the Advancement of Science (AAAS) in 2001. He received the ACM SIGCHI Lifetime Achievement Award in 2001. Ben is the author of Software Psychology: Human Factors in Computer and Information Systems (1980) and Designing the User Interface: Strategies for Effective Human-Computer Interaction (4th ed. 2004). His recent books include Leonardo's Laptop: Human Needs and the New Computing Technologies (MIT Press), which won the IEEE book award in 2004.

World Usability Day New England
Creativity Support Tools

Dec 19, 2007 · 76:50


Creativity Support Tools is a research topic with high risk but potentially very high payoff. The goal is to develop improved software and user interfaces that empower diverse users in the sciences and arts to be more productive and more innovative. Potential users include a combination of software and other engineers, diverse scientists, product and graphic designers, and architects, as well as writers, poets, musicians, new media artists, and many others.

Ben Shneiderman is a Professor in the Department of Computer Science, Founding Director (1983-2000) of the Human-Computer Interaction Laboratory, and Member of the Institute for Advanced Computer Studies at the University of Maryland at College Park. He was elected as a Fellow of the Association for Computing Machinery (ACM) in 1997 and a Fellow of the American Association for the Advancement of Science (AAAS) in 2001. He received the ACM SIGCHI Lifetime Achievement Award in 2001. Ben is the author of Software Psychology: Human Factors in Computer and Information Systems (1980) and Designing the User Interface: Strategies for Effective Human-Computer Interaction (4th ed. 2004). His recent books include Leonardo's Laptop: Human Needs and the New Computing Technologies (MIT Press), which won the IEEE book award in 2004.