The topic of this week's episode is the long history of biases inherent in the book reviews used for library collection decisions. Elizabeth is joined by academic librarian Pamela Hayes Bohanan to discuss 60 years of research and reflection on the limited exposure librarians get to the large corpus of potential acquisitions. While perspectives differ, there is little disagreement that publishers, editors, and book review publications shape what appears in your local public and academic libraries.

Podcast notes:
April Liberalism in Practice Panel Discussion: McCarthyism in the Stacks. https://youtu.be/_xwjUR7tNGM?si=LlwxLTrZqEyvy7MC
Pokornowski, E., & Schonfeld, R. C. (2024, March 28). Censorship and Academic Freedom in the Public University Library.
Best, P. (2024). How to Combat the Biased School Library Book Selection Process.
Gordon, R. M. (1961). Why you can't find conservative books in public libraries. Human Events, 18, 591-594.
Macleod, B. (1981). Library Journal and Choice: A Review of Reviews. Journal of Academic Librarianship, 7(1), 23-28.
Kister, K. (2002). The conscience of a reference reviewer. Journal of Information Ethics, 11(1).
Sean shares his transformative journey in mental health and how it led to the creation of Mind Data, which aims to revolutionize therapy through technology. ✨ Get into the world of Cold Water Therapy and enjoy 15% OFF all Lumi Products with code INSIDEAMINDPOD! Shop now: https://lumitherapy.co.uk/?dt_id=1119525 Discover his impactful relationship with his therapist, Betty, and how it shaped his mission to enhance the mental well-being of millions. From opening up about personal struggles to integrating ethical practices in his company, Sean's story offers a unique blend of emotional vulnerability and entrepreneurial spirit. We delve deeper into Sean's journey, examining the trials and triumphs he faced while opening up to friends and family about his mental health, and how these experiences informed his role as a founder and CEO. We explore emotional maturity, vulnerability, and the pivotal role of therapy in maintaining mental health. The conversation then goes further: Sean shares invaluable insights from his time in the business world, offering a peek into the importance of mentors, company culture, and ethical compliance in his work at Mind Data.
In an age of communicative media, free speech appears to be ubiquitous. Anyone can say anything on any of a hundred social media apps, and it can go viral worldwide in an instant. And this opens the thoughts of anyone to the commentary of anyone else. But are there limits on free speech in a global world? How is our freedom of speech bound by legal, ethical, and content restrictions? In what way is our freedom really free? Presenters from the United Kingdom, South Africa, and Australia join with a Canadian moderator to walk us through some considerations on what it means to speak freely in the modern world.

Lawrence Hopperton (ret.) was the founding director of distributed learning at Tyndale University in Toronto, Canada. He has published extensively on the theory and practice of online learning and won an international award for instructional design and disability compliance. He has also published four textbooks on writing skills, two chapbooks of poetry, and a full collection, Table for Three, through En Route Books and Media, which also published three of his chapters concerning disability compliance in a book entitled Teaching and Learning in the Age of Covid-19. His next book of poetry, Such Common Stories, will be released by En Route later this year.

Rachel Fischer is a researcher and information ethicist who collaborates with academic institutions, civil society organisations and intergovernmental entities. She is the Co-Chair of the International Centre for Information Ethics, Deputy Editor of the International Review of Information Ethics, and a member of UNESCO IFAP's Working Group on Information Accessibility. Rachel was an editor and one of the authors of the Nelson Mandela Reader on Information Ethics, which was released in 2021. Learn more about this reader at https://www.i-c-i-e.org/publications.

Francis Etheredge is a Catholic married layman with eleven children, three of whom he hopes are in heaven and the rest of whom are alive and well and stepping through life's stages of school, university, and career. In the last seven years, he has returned to being a self-employed writer, adding twelve books to one already published. His books on bioethics from En Route Books and Media include The Human Person: A Bioethical Word; Conception: An Icon of the Beginning; Mary and Bioethics: An Exploration; The ABCQ of Conceiving Conception; and Reaching for the Resurrection: A Pastoral Bioethics. Forthcoming later in 2022 is his new book, Human Nature: Moral Norm.

Peter Breen is a defamation and media lawyer and former member of state parliament in Australia. He is the author of several books, including his latest, Prodigal Pilgrim: Letters to Pope Francis from Lourdes, Fatima, Garabandal and Medjugorje. Coming in July 2022, his book Dear Mr. Putin will be published by En Route Books and Media. In this book, Peter expresses outrage at the unprovoked attack on the Ukrainian people, at the same time placing the war in a religious context, with the prospect that its outcome may be the historical events prophesied at Fatima in 1917.

https://faithscience.org/free-speech/
E-Health Pioneers | The business podcast for the digital healthcare market
In more than 40 care facilities in Germany, harp seals are being used as a therapeutic aid, although not real harp seals but the robotic seal Paro. Is this the answer to the nursing crisis? After all, Germany is projected to be short more than half a million nursing professionals by 2035, at least according to a recent study by the Institut der Deutschen Wirtschaft in Cologne. In this episode of E-Health Pioneers, host Andrea Buzzi speaks with robotics researcher Prof. Dr. Oliver Bendel about what so-called social robots can already do today, what they cannot, and what we humans actually expect and want from them. Oliver Bendel is a professor of information systems, information ethics, machine ethics, and robophilosophy at the Fachhochschule Nordwestschweiz and has edited several books on the subject, including "Pflegeroboter" ("Care Robots") and "Maschinenliebe" ("Machine Love"). He deals not only with the technical aspects but also with the ethical foundations for the use of robots. Among other things, he argues that patients prefer machines for certain tasks, such as intimate care, but tend to reject them for social activities. Prof. Bendel should know: as a researcher he has also studied sex robots and is the initiator of the Huggie project, a robot that can hug people and even has its own scent. It all sounds very much like science fiction. Prof. Bendel says: "Robots are now also being built that foster relationships with them in a very manipulative way." Have we already arrived in a science fiction world? And are robots now the new nursing professionals? This opening episode of the new series on e-health and care provides answers.
In this episode, Dr Simon McKenzie talks with Dr Samuli Haataja about countermeasures in cyberspace. The right to countermeasures is a mechanism in international law that allows States to take action when they have suffered an international wrong. Some features of cyberspace challenge this well-established body of rules, and it may need to change to ensure States have an effective remedy to deter foreign cyber attacks. Samuli researches the public international law aspects of cybersecurity, and his book Cyber Attacks and International Law on the Use of Force: The Turn to Information Ethics was published by Routledge in 2019. He is also a member of the IEEE Society on Social Implications of Technology.

Further reading:
Samuli Haataja, 'Cyber Operations and Collective Countermeasures under International Law' (2020) 25(1) Journal of Conflict and Security Law 33-51
Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations
Draft Articles on Responsibility of States for Internationally Wrongful Acts
Australia's position on how international law applies to state conduct in cyberspace
New Zealand - The Application of International Law to State Activity in Cyberspace
Braving change can be your greatest challenge and help you discover your strongest allies! Megan and Hollie spoke with Sherry Duffy about self-worth, the journey to change situations, passion projects, being an empowered consumer of information, taking back personal power during a crisis, and gathering around the proverbial campfire to listen, learn, and grow. Sherry Duffy is the Director of Strategic Initiatives for the School of Public Health and Information Sciences (SPHIS). Sherry has primary responsibility for the operations of the SPHIS research office and the Commonwealth Institute of Kentucky (CIK), overseeing development and maintenance of the infrastructure related to administration, financial management, communication, community engagement, and content expertise. This entails cultivating relationships with stakeholders, government agencies, funding organizations, faculty, and community partners, as well as organizational strategy and marketing of CIK and SPHIS. Building upon these relationships, she creates opportunities for growth, connecting faculty and staff to appropriate community partners and government agencies. Teaching part-time in the UofL Organizational Leadership and Learning program allows her to fulfill her passion for helping adult learners achieve their academic goals. She teaches HR Fundamentals, Diversity in the Workplace, Needs Assessment, Project Management, Workplace & Information Ethics, and Coaching and Talent Management.
Today’s episode starts off with a 30-minute, ad hoc discussion surrounding the recent murder of George Floyd and the ensuing national campaign against police racism. Please refer to Pan-Optic’s website for additional resources on how to support reputable activist groups in the fight against racism (https://www.panopticpod.com/post/pitching-in-to-fight-racism-and-police-violence). During Pan-Optic’s two-part series “Philosophers in Firms,” Jason and Juan Pablo investigate the mystery of why Google hired a philosopher and what this individual does. Along the way, we address the following more fundamental questions: Should firms hire philosophers? Does it make good business sense? How does the business case compare to the moral case? Do they conflict? Today’s episode (part two) explores: philosopher Luciano Floridi’s theory of information; how Google applied Floridi’s theory to navigate complex international legal challenges pertaining to data privacy; change management professional Paul Gibbons’ critique of the change consulting industry; how change managers might leverage the humanities to “philosophically ground” strategic recommendations and improve client outcomes; and opportunities for professionals with strong humanities backgrounds to innovate and make a difference in the consulting world. The views expressed on this podcast are our own. If you enjoy what you're hearing, please follow/support us through any of the below media: Twitter: twitter.com/Panopticpod Patreon: www.patreon.com/panopticpod Website: www.panopticpod.com/ Apple: podcasts.apple.com/us/podcast/pan-…st/id1475726450 Spotify: open.spotify.com/show/0edBN0huV1GkMFxSXErZIx
We are thankful and happy to be able to speak with Dr. Diana L. Ascher from UCLA. Her work focuses on information, technology, and decision making in organizations. Dr. Ascher's interdisciplinary research interests draw from her work in a variety of fields, including financial communication, journalism, and public policy. Dr. Ascher is the director of the Information Studies Research Lab (IS Lab) at UCLA, where she develops resources and programming to support the curriculum of the Department of Information Studies. Dr. Ascher is also the founder of the Information Ethics & Equity Institute (IEEI), which addresses data management priorities for information workers; the ethics, economics, and politics of long-term data management; and information ethics with respect to identity, privacy, power, and freedom. Dr. Ascher earned a Ph.D. from the Department of Information Studies in the Graduate School of Education & Information Studies at the University of California, Los Angeles; an M.B.A. from the Peter F. Drucker Graduate School of Management at Claremont Graduate University; and a B.A. in Public Policy with concentrations in journalism and international policy from Duke University.
On this special edition of Generation Justice, we'll dive into a lecture given by Dr. Safiya Noble, an assistant professor at the Annenberg School of Communication at the University of Southern California. Dr. Noble spoke at the University of New Mexico, where she discussed her research on the algorithmic biases of racism and sexism commonly found in commercial search engines like Google. Dr. Noble has taught in the School of Education at UCLA and is a co-founder of the Information Ethics & Equity Institute.
In part two of my interview with Delft University of Technology's assistant professor of cyber risk, Dr. Wolter Pieters, we continue our discussion on transparency versus secrecy in security. We also cover ways organizations can present themselves as trustworthy. How? Be very clear about managing expectations. Declare your principles so that end users can trust that you'll be acting on the principles you advocate. Lastly, have a plan for what to do when something goes wrong. And of course there's a caveat: Wolter reminds us that there's also a very important place in this world for ethical hackers. Why? Not all security issues can be solved during the design stage. A short code sketch of the Kerckhoffs' principle that comes up below follows the transcript.

Transparency versus Secrecy

Wolter Pieters: My name is Wolter Pieters. I have a background in both computer science and philosophy of technology. I'm very much interested in studying cyber security from an angle that either goes a bit more towards the social sciences, so, why do people behave in certain ways in the cyber security space, but also more towards philosophy and ethics, so, what would be reasons for doing things differently in order to support certain values. Privacy, but then again, I think privacy is a bit overrated. This is really about power balance. It's because everything we do in security will give some people access and exclude other people, and that's a very fundamental thing. It's basically a power balance that we embed into technology through security. And that is what fundamentally interests me in relation to security and ethics.

Cindy Ng: How do we live in a world now where you just don't know whether or not organizations or governments are behaving in a way that's trustworthy?

Wolter Pieters: You know, transparency versus secrecy is a very important debate within the security space. This already starts out very fundamentally from the question, "Should methods for protecting information be publicly known, or should they be kept secret because otherwise we may be giving too much information away to hackers, etc.?" So, this is a very fundamental thing, and in terms of encryption there's already the principle that encryption algorithms should be publicly known, because otherwise we can't even tell how well our information is being protected by means of that encryption, and only the keys used in encryption should be kept secret. This is a principle called Kerckhoffs' principle. It is very old in information security, and a lot of the current encryption algorithms actually adhere to that principle. We've also seen encryption algorithms not adhering to that principle, algorithms that were secret, trade secrets, etc., being broken the very moment the algorithm became known. So, in that sense I think most researchers would agree this is good practice. On the other hand, it seems that there's also a certain limit to what we want to be transparent about. Both in terms of security controls, we're not giving away every single thing governments do in terms of security online. So, there is some level of security by obscurity there, and more generally, to what extent is transparency a good thing? This again ties in with who is a threat. I mean, we have the whole WikiLeaks endeavor, and some people will say, "Well, this is great. The government shouldn't be keeping all that stuff secret, so it's great for trust that this is now all out in the open." On the other hand, you could argue that all of this is actually a threat to trust in the government.
So, this form of transparency would be very bad for trust. So, there's clearly a tension there. Some level of transparency may help people trust in the protections embedded in the technology and in the actors that use those technologies online. But on the other hand, if there's too much transparency, all the nitty-gritty details may actually decrease trust. You see this all over the place. We've seen it with electronic voting as well. If you provide some level of explanation of how certain technologies are being secured, that may help. If you provide too much detail, people won't understand it and it will only increase distrust. There is a kind of golden middle there in terms of how much explanation you should give to make people trust in certain forms of security, encryption, etc. And again, in the end people will have to rely on experts, because with physical forms of security, physical ballot boxes, it's possible to explain how these work and how they are being secured; with digital systems that becomes much more complicated, and most people will have to trust the judgment of experts that these forms of security are actually good if the experts believe so.

What Trustworthy Organizations Do Differently

Cindy Ng: What's something an organization can do in order to establish themselves as a trustworthy, morally sound, ethical organization?

Wolter Pieters: I think the most important thing that companies can do is be very clear in terms of managing expectations. A couple of examples there: if as a company you decide to provide end-to-end encryption for communications, the people that use your chat app to exchange messages get the assurance that the messages are encrypted between their device and the device of the one that they're communicating with. And this is a clear statement: "Hey, we're doing it this way." And that also means that you shouldn't have any backdoors or means to give this communication away to the intelligence agencies anyway, because if this is your standpoint, people need to be able to trust in that. Similarly, if you are running a social network site and you want people to trust in your policies, then you need to be crystal clear. Not only that it's possible to change your privacy settings, to regulate the access that other users of the social networking service have to your data, but at the same time you need to be crystal clear about how you as a social network operator are using that kind of data. Because sometimes I get the impression that the big internet companies are offering all kinds of privacy settings which give people the impression that they can do a lot in terms of their privacy, but, yes, this is true for inter-user data access while the provider still sees everything. This seems to be a way of framing privacy in terms of inter-user data access. Whereas I think it's much more fundamental what these companies can do with all the data they gather from all their users, and what that means in terms of their power and the position that they get in this whole arena of cyberspace. So, managing expectations. I mean, there are all kinds of different standpoints, also based on different ethical theories, based on different political points of view, that you could take in this space. If you want to behave ethically, then make sure you list your principles, you list what you do in terms of security and privacy to adhere to those principles, and make sure that people can actually trust that this is also what you do in practice.
And also make sure that you know exactly what you're going to do in case something goes wrong anyway. We've seen too many breaches where the responses by the companies were not quite up to standard, for instance in delaying the announcement of the breach. So it's crucial to not only do some prevention in terms of security and privacy but also to know what you're going to do in case something goes wrong.

Doomsday Scenarios

Cindy Ng: Yeah, you say, if an IoT device gets created and they get to market their product first and then they'll fix security and privacy later, that's too late. Is it sort of like, "We're doomed already and we're just sort of managing the best way we know how?"

Wolter Pieters: In a way, it's a good thing when we are nervous about where our society is going, because in history, at moments where people weren't nervous enough about where society was going, we've seen things go terribly wrong. So, in a sense we need to get rid of the illusion that we can easily be in control or something like that, because we can't. The same goes for elections: there is no neutral space from which people can cast their vote without being influenced, and we've seen in recent elections that technology is actually playing more and more of a role in how people perceive political parties and how they make decisions in terms of voting. So, it's inevitable that technology companies have a role in those elections, and that's also what they need to acknowledge. And then of course, and I think this is a big question that needs to be asked, "Can we prevent the situation in which certain online stakeholders, whether those are companies or nation states or whatever, can we prevent a situation in which they get so much power that they are able to influence our governments, either through elections or through other means?" That's a situation that we really don't want to be in, and I'm not pretending that I have a crystal clear answer there, but this is something that at least we should consider as a possible scenario. And then there are all these doomsday scenarios with a Cyber Pearl Harbor, and I'm not sure whether these doomsday scenarios are the best way to think about this, but we should also not be naive and think that all of this will blow over, because maybe indeed we have already given away too much power in a sense. So, what we should do is fundamentally rethink the way we think about security and privacy, away from, "Oh, damn, my photos are, I don't know, in the hands of whoever." That's not the point. It's about the scale at which certain actors either get their hands on the data of lots of individuals or are able to influence lots of individuals. So, again, scale comes in there. It's not about our individual privacy, it's about the power that these stakeholders get by having access to the data or by being able to influence lots and lots of people, and that's what the debate needs to be about.

Cindy Ng: Whoever has the data has power, is what you're getting at.

Wolter Pieters: Whoever has the data, and in a sense that data can then, again, be used to influence people in a targeted way. If you know that somebody's interested in something, you can try to influence their behavior by referring to the thing that they're interested in.

Cindy Ng: That's only if you have data integrity.

Wolter Pieters: Yes. Yes, of course. But on the other hand, a little bit of noise in the data doesn't matter too much, because if you have data that's more or less correct, you can still achieve a lot.
Ethical Hackers Have An Important Role

Cindy Ng: Anything that I didn't touch upon that you think is important for our listeners to know?

Wolter Pieters: The one thing that I think is critically important is the role that ethical hackers can have in keeping people alert, in a way maybe even changing the rules of the game, because in the end I also don't think that all security issues can be solved in the design of technology, and it's critically important that when technologies are being deployed, people keep an eye on issues that may have been overlooked in the design stage of those technologies. We need some people that are paying attention and will alert us to issues that may emerge.

Cindy Ng: It's a scary role to be in, though, if you're an ethical hacker, because what if the government comes around and accuses you of being an unethical hacker?

Wolter Pieters: Yeah, I think that's an issue, but if that's going to be happening, if people are afraid to play this role because legislation doesn't protect them enough, then maybe we need to do something about that. If we don't have people that point us to essential weaknesses in security, then what will happen is that those issues will be kept secret and that they will be misused in ways that we don't know about, and I think that's a much worse situation to be in.
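The Kerckhoffs' principle mentioned in the transcript can be illustrated with a minimal sketch. This is not code from the episode; it assumes Python and the third-party `cryptography` package, and it simply shows the point Wolter makes: the algorithm (here AES-GCM) is completely public, and only the key needs to stay secret.

```python
# Minimal illustration of Kerckhoffs' principle (assumes the `cryptography` package is installed).
# The AES-GCM algorithm is publicly documented; security rests only on the secret key.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # the only secret in the whole scheme
aesgcm = AESGCM(key)                        # the algorithm itself is public knowledge

nonce = os.urandom(12)                      # public value, but must never repeat for the same key
ciphertext = aesgcm.encrypt(nonce, b"meet at noon", None)

# Anyone may know AES-GCM, the nonce, and the ciphertext; only the key holder can decrypt.
assert AESGCM(key).decrypt(nonce, ciphertext, None) == b"meet at noon"
```

Publishing the algorithm lets anyone audit how well the data is protected, which is the transparency argument made above; the key, like the contents of a ballot box, remains the one thing to guard.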
In part one of my interview with Delft University of Technology's assistant professor of cyber risk, Dr. Wolter Pieters, we learn about the fundamentals of ethics as they relate to new technology, starting with the trolley problem. A thought experiment on ethics, it's an important lesson in the world of self-driving cars and the course of action the computer on wheels would have to take when faced with potentially life-threatening consequences. Wolter also takes us through a line of thought on the potential for power imbalances when some stakeholders have far more access to information than others. That led us to ask: is technology morally neutral? Where and when does one's duty to prevent misuse begin and end? A toy code sketch contrasting the two styles of ethical reasoning he describes appears after the transcript.

Transcript

Wolter Pieters: My name is Wolter Pieters. I have a background in both computer science and philosophy of technology. I'm very much interested in studying cyber security from an angle that either goes a bit more towards the social sciences, so, why do people behave in certain ways in the cyber security space, but also more towards philosophy and ethics, so, what would be reasons for doing things differently in order to support certain values. Privacy, but then again, I think privacy is a bit overrated. This is really about power balance. It's because everything we do in security will give some people access and exclude other people, and that's a very fundamental thing. It's basically a power balance that we embed into technology through security. And that is what fundamentally interests me in relation to security and ethics.

Cindy Ng: Let's go back first and start with philosophical, ethical, and moral terminology. The trolley problem: it's where you're presented with a dilemma, where you're the conductor and you see the trolley going down a track where it has the potential to kill five people. But then, if you pull a lever, you can make the trolley go onto the other track, where it would kill one person. And that really is about: what is the most ethical choice, and what does ethics mean?

Wolter Pieters: Right. So, ethics generally deals with protecting values. And values, basically, refer to things that we believe are worthy of protection. So, those can be anything from health, privacy, biodiversity. And then it's said that some values can be fundamental, others can be instrumental in the sense that they only help to support other values but are not intrinsically worth something in and of themselves. Ethics aims to come up with rules, guidelines, principles that help us support those values in what we do. You can do this in different ways. You can try to look only at the consequences of your actions. And in this case, clearly, in relation to the trolley problem, it's better to kill one person than to kill five. If you simply do the calculation, you could say, "Well, I pull the switch and thereby reduce the total consequences." But you could also argue that certain rules state that you shall not kill someone, which would be violated in case you pull the switch. I mean, if you don't do something, then five people would be killed, but you don't do anything explicitly; whereas if you pull the switch, you would explicitly kill someone. And from that angle, you could argue that you should not pull the switch. So, this is very briefly an outline of different ways in which you could reason about what actions would be appropriate in order to support certain values, in this case, life and death.
Now, this trolley problem is these days often cited in relation to self-driving cars, which would also have to make decisions about courses of action, trying to minimize certain consequences, etc. So, that's why this has become very prominent in the ethics space.

Cindy Ng: So, you've talked about a power imbalance. Can you elaborate on and provide an example of what that means?

Wolter Pieters: What we see in cyberspace is that there are all kinds of actors, stakeholders, that gather lots of information. There are governments interested in doing types of surveillance in order to catch the terrorist amongst the innocent data traffic. There are content providers that give us all kinds of nice services, but at the same time we pay with our data, and they make profiles out of it and offer targeted advertisements, etc. And at some point, some companies may be able to make better predictions than even our governments can. So, what does that mean? In the Netherlands, today actually, there's a referendum regarding new powers for the intelligence agencies to do types of surveillance online, so there's a lot of discussion about that. So, on the one hand, we all agree that we should try to prevent terrorism, etc. On the other hand, this is also a relatively easy argument for claiming access to data: "Hey, we can't allow these terrorist attacks, so we need all your data." It's very political. And this also makes it possible to leverage security as an argument to claim access to all kinds of things.

Cindy Ng: I've been drawn to ethics and the dilemmas of our technology, and because I work at a data security company, you learn about privacy regulations, GDPR, HIPAA, SOX compliance. And at their core, they are about ethics and a moral standard of behavior. Can you address the tension between ethics and technology? The best thing I read lately was a Bloomberg subhead that said that ethics don't scale. Ethics is such a core value, but at the same time, technology is what drives economies, and then add the element of a government overseeing it all.

Wolter Pieters: There are a couple of issues here. One that's often cited is that ethics and law seem to be lagging behind compared to our technological achievements. We always have to wait for new technology to kind of get out of hand before we start thinking about ethics and regulation. So, in a way, you could argue that's the case for internet of things type developments, where manufacturers of products have been making their products smart for quite a while now. And we suddenly realized that all of these things have security vulnerabilities, and they can become part of botnets of cameras that can then be used to do distributed denial of service attacks on our websites, etc. And only now are we starting to think about what is needed to make sure that these and other devices are securable at some level. Can they be updated? Can they be patched? In a way, it already seems to be too late. So, there is the argument that ethics is lagging behind. On the other hand, there's also the point that ethics and norms are always, in a way, embedded in technologies. And again, in the security space, whatever way you design technology, it will always enable certain kinds of access, and it will disable other kinds of access. So, there's always this inclusion and exclusion going on with new digital technologies. So, in that sense, increasingly, ethics is always already present in a technology.
And I'm not sure whether it should be said that ethics doesn't scale. Maybe the problem is rather that it scales too well, in the sense that, when we design a piece of technology, we can't really imagine how things are going to work out if the technology is being used by millions of people. So, this holds for a lot of these developments. The internet, when it was designed, was never conceived as a tool that would be used by billions; it was kind of a network for research purposes to exchange data and everything. Same for Facebook: it was never designed as a platform for an audience like this. Which means that, in a sense, the norms that are initially embedded into those technologies do scale. And if, for example, for the internet, you don't embed security in it from the beginning and then you scale it up, then it becomes much more difficult to change it later on. So, ethics does scale, but maybe not in the way that we want it to scale.

Cindy Ng: So, you mentioned Facebook. And Facebook is not the only tech company that designs systems to allow data to flow through to so many third parties, and when people use that data in a nefarious way, the tech company can respond to say, you know, "It's not a data breach. It's how things were designed to work, and people misused it." Why does that response feel so unsettling? I also like what you said in the paper you wrote, that we're tempted to consider technology as morally neutral.

Wolter Pieters: There's always this idea of technology being kind of a hammer, right? I need a hammer to drive in the nail, and so it's just a tool. Now, for technology it has been discussed for a while that there will always be some kind of side effects. And we've learned that technologies pollute the environment, technologies cause safety hazards, nuclear incidents, etc., etc. And in all of these cases, when something goes wrong, there are people who designed the technology or operate the technology who could potentially be blamed for these things going wrong. Now, in the security space, we're dealing with intentional behavior of third parties. So, they can be hackers, they can be people who misuse the technology. And then suddenly it becomes very easy for those designing or operating the technology to point to those third parties as the ones to blame. You know, like, "Yeah, we just provide the platform. They misused it. It's not our fault." But the point is, if you follow that line of reasoning, you wouldn't need to do any kind of security. You could just say, "Well, I made a technology that has some useful functions, and, yes, then there are these bad guys that misuse my functionality." On the one hand, it seems natural to blame the bad guys or the misusers or whatever. On the other hand, if you only follow that line of reasoning, then nobody would need to do any kind of security. So, this means that you can't really get away with that argument in general. Then, of course, with specific cases it becomes more of a gray area: where does your duty to prevent misuse stop? And then you get into the question, okay, what is an acceptable level of protection and security? But also, of course, the business models of these companies involve giving access to some parties, which the end users may not be fully aware of. And this has to do with security always being about: who are the bad guys? Who are the threats? And some people have different ideas about who the threats are than others.
So, if a company gets a request from the intelligence services, like, "Hey, we need your data because we would like to investigate this suspect," is that acceptable? Or maybe some people see that as a threat as well. So, the labeling of who the threats are: are the terrorists the threats? Are the intelligence agencies the threats? Are the advertising companies the threats? This all matters in terms of what you would consider acceptable or not from a security point of view. Within that space, it is often not very transparent to people what could or could not be done with the data. And the European legislation is trying, in particular, to require consent of people in order to process their data in certain kinds of ways. Now that, in principle, seems like a good idea. In practice, consent is often given without paying too much attention to the exact privacy policies, etc., because people can't be bothered to read all of that. And in a sense, maybe that's the rational decision, because it would take too much time. So, that also means that, if we try to solve these problems by letting individuals give consent to certain ways of processing their data, this may lead us to a situation where individually, everybody would just click away the messages, because for them it's rational: "Hey, I want this service and I don't have time to be bothered with all this legal stuff." But on a societal level, we are creating a situation where indeed certain stakeholders on the internet get a lot of power because they have a lot of data. This is the space in which decisions are being made.

Cindy Ng: We rely on technology. A lot of people use Facebook. We can't just say goodbye to IoT devices. We can't say goodbye to Facebook. We can't say goodbye to any piece of technology, because, as you've said in one of your papers, technology will profoundly change people's lives and our society. Instead of saying goodbye to this wonderful thing, or things, that we've created, how do we go about living our lives and conducting ourselves with integrity, with good ethics and morals?

Wolter Pieters: Yeah, that's a good question. So, what currently seems to be happening is that, indeed, a lot of this responsibility is being allocated to the end users. You decide whether you want to join social media platforms or not. You decide what to share there. You decide whether to communicate with end-to-end encryption or not, etc., etc. So, this means that a lot of pressure is being put on individuals to make those kinds of choices. And the fundamental question is whether that approach makes sense, whether that approach scales, because the more technologies people are using, the more decisions they will have to make about how to use these kinds of technologies. Now, of course, there are certain basic principles that you can try to adhere to when doing your stuff online. On the security side, watch out for phishing emails, use strong passwords, etc., etc. On the privacy side, don't share stuff from other people that they haven't agreed to, etc., etc. But all of that requires quite a bit of effort on the side of the individual. And at the same time, there seems to be pressure to share more and more and more stuff, even, for example, pictures of children who aren't even able to consent to whether they want their pictures posted or not. So, in a sense, there's a high moral demand on users, maybe too high. And that's a great question.
In terms of acting responsibly online, if at some point you would decide that we're putting too high a demand on those users, then the question is, "Okay, are there possible ways to make it easier for people to act responsibly?" And then you would end up with certain types of regulation that don't only delegate responsibility back to individuals, for example by asking consent, but that put really very strict rules on what, in principle, is allowed or not. Now, that's a very difficult debate, because you usually also end up with accusations of paternalism: "Hey, you're putting all kinds of restrictions on what can or cannot be done online, but why shouldn't people be able to decide for themselves?" On the other hand, people are being overloaded with decisions to the extent that it becomes impossible for them to make those decisions responsibly. This tension, on the one hand leaving all kinds of decisions to the individual versus making some decisions on a collective level, is going to be a very fundamental issue in the future.
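As a purely illustrative aside, not taken from the episode, the two styles of reasoning Wolter contrasts in part one, looking only at consequences versus following a rule such as "do not actively kill," can be sketched as two toy decision functions. The action names and harm figures below are hypothetical.

```python
# Toy sketch of the two styles of ethical reasoning discussed above.
# Hypothetical example only; not a model endorsed in the episode.

# Expected harm (people killed) for each available action in the trolley case.
outcomes = {"do nothing": 5, "pull the switch": 1}

def consequentialist_choice(outcomes: dict) -> str:
    """Pick the action that minimizes total harm, regardless of how the harm is caused."""
    return min(outcomes, key=outcomes.get)

def rule_based_choice(outcomes: dict, default: str = "do nothing") -> str:
    """Refuse to actively intervene in a way that kills; keep the default action."""
    return default

print(consequentialist_choice(outcomes))  # -> "pull the switch" (1 death instead of 5)
print(rule_based_choice(outcomes))        # -> "do nothing" (no explicit act of killing)
```

The point of the transcript is precisely that a self-driving car's software has to encode one of these choices, or some mixture of them, so the ethics ends up embedded in the technology rather than standing apart from it.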
Luciano Floridi discusses his new book, 'The Ethics of Information', and outlines the nature and scope of Information Ethics. With the help of three metaphors, Professor Floridi describes this new philosophical area of research, which investigates the ethical impact of Information and Communication Technologies (ICTs) on human life and society. In the course of the presentation, he introduces some of the topics he has analysed in 'The Ethics of Information' (OUP 2013), a book in which he has sought to provide the conceptual foundations of Information Ethics.