Debbie Reynolds, “The Data Diva,” talks to Elizabeth Aguado, Emerging Technologies, Responsible AI Expert (South America). We discuss the impact of regulations and frameworks in South America and Latin America and the relevance of discussing underprivileged communities. Aguado raises important concerns about privacy and data protection in the global South, emphasizing the slow progress of authorities in implementing regulations and the high cost of privacy. She also addresses the lack of attention to ethical questions related to emerging technologies and the general lack of concern about privacy among people in the global South, emphasizing the importance of igniting conversations and building awareness. We also discuss our joint effort on the Tech Ethics and Public Policy course at Stanford, where my presentation on biometrics was well received. We then turn to Chile's pioneering move to incorporate neural rights into its constitution, lauding its proactive efforts in regulating emerging technologies and promoting collaboration between public and private entities. We stress the importance of safeguarding individuals' rights over their data and information in the context of advancing neural technology, drawing comparisons between Chile's approach and that of other countries. The conversation also touches on the potential impact of emerging technologies on addressing global challenges such as poverty and healthcare, focusing on prioritizing human well-being over economic growth, and her hope for Data Privacy in the future.
Subscribe, Rate, & Comment on YouTube • Apple Podcasts • Spotify

If you value this series, please consider becoming a patron here on Substack or with tax-deductible donations at every.org/humansontheloop (you'll get perks either way).

About This Episode
This week we speak with “strategic futurist and pattern navigator” Adah Parris, a London-based wizard and weirdo with whom I immediately hit it off over our shared interest in “cyborg shamanism” and an emphasis on being good ancestors. Forbes Brasil called her “one of the most important futurists in the world.” It's hard for me to measure the impact she's had on business leaders, tech startups, marketing and communications firms, arts schools, and on the lives of countless other people.

We talk about the relationship between numbers, language, and the ineffable, ever-shifting human spirit. Adah's work points past knowledge and history into the elemental nature of both human and machine, past our differences into the deep similarity worth celebrating and the mystery that we inhabit and embody. Join us for a yarn that is both silly and profound, present and far-reaching, about being uncategorizably creative, open, and curious amidst the wicked problems of our time…

Project Links
• Read the project pitch & planning doc
• Dig into the full episode and essay archives
• Join the online commons for Wisdom x Technology on Discord + Bluesky + X
• Join the open, listener-moderated Future Fossils Discord Server
• Contact me if you have questions (patron rewards, sponsorship, collaboration, etc.)
• Browse the HOTL reading list and support local booksellers

Chapters
0:00:00 - Teaser
0:00:49 - Intro
0:05:24 - Feeling Seen & Heard
0:09:54 - Adah's Biography
0:17:21 - Poetry & Number
0:27:55 - Cyborg Shamanism & The Five Elements
0:37:03 - The Foraging Neurotype of “Extremely Online”
0:51:07 - Surrendering Agency to Systems
0:55:14 - The Incremental Reclamation of Agency
1:01:19 - Art after Modernity & Healing from Noise
1:13:01 - Beyond Narrative & Into Dance
1:17:30 - Thanks & Announcements

Adah's Links
Website | Instagram | LinkedIn | Medium | Chartwell Speakers
Finding Our Future in Ancestral Wisdom @ TEDxSoho
What Kind of Ancestor Do You Want To Be? @ Think With Google
Cyborg Shamanism & The Case for Elemental AI @ Atmos

Mentioned Media
Refactoring “Autonomy” & “Freedom” for The Age of Language Models by Michael Garfield
223 - Timothy Morton on A New Christian Ecology & Systems Thinking Blasphemy (Future Fossils Podcast)
Attention deficits linked with proclivity to explore while foraging by David L. Barack et al.
New Selves of Neural Media & AI as 'The Poison Path' with K Allado-McDowell (Humans On The Loop)
Raising AI: An Essential Guide to Parenting Our Future by De Kai
Technoshamanism: A Very Psychedelic Century! at Moogfest 2016 by Michael Garfield
Proteus (film)
Sonic restoration: acoustic stimulation enhances plant growth-promoting fungi activity by James M. Robinson et al.
Ada Twist, Scientist by Andrea Beaty & David Roberts
Oppenheimer (film)
Dante's Inferno by Dante Alighieri

Mentioned People & Institutions
Ford Motor Co.
Telefonica
Wayra
AT&T
Augusta Ada Byron Lovelace
Charles Babbage
Marshall McLuhan
Dr. Kate Stone
Ernst Haeckel
Tada Hozumi
Lewis Mumford
John Taylor Gatto
Paul Tillich
Alan Turing

Guest Recommendations
Emalick Nije
Anjuli Bedi
Charlie Morley
Amichai Lau-Lavie

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit michaelgarfield.substack.com/subscribe
Discusses citizen or participatory science, including its benefits and key ethical issues. Our guest today is Lisa Rasmussen who is a Professor in the Department of Philosophy at the University of North Carolina Charlotte and Editor-in-Chief of the journal Accountability in Research. Lisa has been a principal investigator or co-principal investigator on over $1 million in National Science Foundation awards and serves as a Co-Editor of the book series Philosophy and Medicine and an Associate Editor of the publication Citizen Science: Theory and Practice. Additional resources: Association for the Advancement of Participatory Sciences: https://participatorysciences.org/ Citizen Science: Theory and Practice: https://theoryandpractice.citizenscienceassociation.org/ Citizen Science: How Ordinary People are Changing the Face of Discovery: https://scistarter.com/cooper SciStarter: https://scistarter.org/
Discusses the 3Rs of animal research, teaching, and testing (replacement, reduction, and refinement), including the methods and technologies that support their application. Our guest today is Megan LaFollette, who is the Executive Director at The 3Rs Collaborative, where she advances better science for both people and animals. She received her PhD and Master of Science in Animal Behavior & Welfare from Purdue University. She is an expert in advancing the implementation of practical, impactful, and evidence-based 3Rs techniques. Additional resources: The 3Rs Collaborative: https://3rc.org/ The 3Rs Collaborative Initiatives: https://3rc.org/workstreams/ CITI Program ACU Core Courses: https://about.citiprogram.org/series/animal-care-and-use-acu/ CITI Program ACU Advanced Courses: https://about.citiprogram.org/series/animal-care-and-use-acu-advanced/
Discusses the use of synthetic data in research and healthcare. Our guest today is Dennis L. Shung, MD, MHS, PhD, an Assistant Professor of Medicine at Yale School of Medicine and Director of Digital Health in Digestive Diseases. He leads the Human+Artificial Intelligence in Medicine lab, which focuses on enhancing human presence with AI. Dennis is also involved in multiple gastroenterology AI initiatives and research. Additional resources: NSF Program Solicitation on Mathematical Foundations of Digital Twins: https://new.nsf.gov/funding/opportunities/math-dt-mathematical-foundations-digital-twins/nsf24-559/solicitation AI models collapse when trained on recursively generated data: https://www.nature.com/articles/s41586-024-07566-y Synthetic data in machine learning for medicine and healthcare: https://www.nature.com/articles/s41551-021-00751-8 Synthetic data in medical research: https://bmjmedicine.bmj.com/content/1/1/e000167 Harnessing the power of synthetic data in healthcare: innovation, application, and privacy: https://www.nature.com/articles/s41746-023-00927-3 Essentials of Responsible AI: https://about.citiprogram.org/course/essentials-of-responsible-ai Big Data and Data Science Research Ethics: https://about.citiprogram.org/course/big-data-and-data-science-research-ethics/
Discusses ethical issues and governance considerations associated with the collection, analysis, and sharing of genetic material and information. Our guest today is Shelly Simana, an assistant professor at Boston College Law School. Before joining BC, she was a fellow at Stanford Law School's Center for Law and the Biosciences and a doctoral candidate at Harvard Law School. Her scholarship lies at the intersection of law and bioethics. Additional resources: Shelly Simana: https://www.bc.edu/bc-web/schools/law/academics-faculty/faculty-directory/shelly-simana.html National Human Genome Research Institute: https://www.genome.gov/ Technology, Ethics, and Regulations: https://about.citiprogram.org/course/technology-ethics-and-regulations/ Bioethics: https://about.citiprogram.org/course/bioethics/
In this episode, we are joined by Sergès Goma, a Paris-based software developer specialising in JavaScript. In Sometimes, we are the Villains - Tech ethics in software development, we dive deep into the ethical dilemmas we face as workers and creators of technology. Heroes are few and far between in this tech landscape, even if we don't like to admit it, and that includes us in cybersecurity! So it is important that we have these conversations and look inward at our industry and the impact it has on culture and society.

We also talk about why developers always seem to top the leaderboard when it comes to phishing simulation click rates, the complexity of the word ‘privacy' in different countries, and ask whether we are heading towards a more regulated industry and what that might mean for innovation and creativity.

Key Takeaways:
Uncovering the Dark Truth: Discover why those working in tech may not be the heroes we perceive them to be.
The Perils of Overconfidence: Learn how the tech-savviness of developers can lead to risky behaviours and potential security breaches.
From Feature-Focused to Security-Savvy: Learn how training and awareness can empower developers to become active participants in building secure software.
Regulation vs. Innovation: We examine the challenges and opportunities of ethical frameworks in the tech industry.
Global Perspectives on Privacy: Gain insights into how privacy is perceived differently across the world and the impact of cultural nuances on ethical considerations in tech.

Links to everything we discussed in this episode can be found in the show notes, and if you liked the show, please do leave us a review. Follow us on all good podcasting platforms and via our YouTube channel, and don't forget to share on LinkedIn and in your teams. It really helps us spread the word and get high-quality guests on future episodes. We hope you enjoyed this episode. See you next time, keep secure, and don't forget to ask yourself, ‘Am I the compromising position here?'

Show Notes
Evil Tech: How Devs Became Villains
Background on the Nestle Milk Scandal
The Untold Story of the 2018 Olympics Cyberattack, the Most Deceptive Hack in History by WIRED
Paris Olympics Security Warning—Russian Hackers Threaten 2024 Games by Forbes
Clean Code: A Handbook of Agile Software Craftsmanship by Robert Martin

About Sergès Goma
Sergès Goma is a Paris-based software developer specialising in JavaScript. When she's not fixing codebases, she gives motivational speeches aimed mostly at junior and would-be developers and participates in the tech women's empowerment online community Motiv'Her.

LINKS FOR Sergès Goma
LinkedIn
X Account

Keywords: cybersecurity, tech ethics, ethics, software development, privacy
Discusses cellular agriculture, including current applications, benefits, technological progress and challenges, and ethical issues. Our guest today is Natalie Rubio, the executive director of the Cellular Agriculture Commercialization Laboratory at Tufts University, who is working to convert early-stage innovations to impactful technologies to reduce costs, increase scale, and improve the quality of cellular agriculture products. Previously, Natalie worked at New Harvest, Perfect Day Foods, and Ark Biotech. Additional resources: Tufts University Center for Cellular Agriculture: https://cellularagriculture.tufts.edu/ New Harvest: https://new-harvest.org/ Good Food Institute: https://gfi.org/ CITI Program's “Technology, Ethics, and Regulations” course: https://about.citiprogram.org/course/technology-ethics-and-regulations/
My Reflections from ITSPmagazine's Black Hat USA 2024 Coverage: The State of Cybersecurity and Its Societal Impact

Prologue
Each year, Black Hat serves as a critical touchpoint for the cybersecurity industry—a gathering that offers unparalleled insights into the latest threats, technologies, and strategies that define our collective defense efforts. Established in 1997, Black Hat has grown from a single conference in Las Vegas to a global series of events held in cities like Barcelona, London, and Riyadh. The conference brings together a diverse audience, from hackers and security professionals to executives and non-technical individuals, all united by a shared interest in information security.

What sets Black Hat apart is its unique blend of cutting-edge research, hands-on training, and open dialogue between the many stakeholders in the cybersecurity ecosystem. It's a place where corporations, government agencies, and independent researchers converge to exchange ideas and push the boundaries of what's possible in securing our digital world. As the cybersecurity landscape continues to evolve, Black Hat remains a vital forum for addressing the challenges and opportunities that come with it.

Sean and I engaged in thought-provoking conversations with 27 industry leaders during our coverage of Black Hat USA 2024 in Las Vegas, where the intersection of society and technology was at the forefront. These discussions underscored the urgent need to integrate cybersecurity deeply into our societal framework, not just within business operations. As our digital world grows more complex, the conversations revealed a collective understanding that the true challenge lies in transforming these strategic insights into actions that shape a safer and more resilient society, while also recognizing the changes in how society must adapt to the demands of advancing technology.

As I walked through the bustling halls of Black Hat 2024, I was struck by the sheer dynamism of the cybersecurity landscape. The conversations, presentations, and cutting-edge technologies on display painted a vivid picture of where we stand today in our ongoing battle to secure the digital world. More than just a conference, Black Hat serves as a barometer for the state of cybersecurity—a reflection of our collective efforts to protect the systems that have become so integral to our daily lives.

The Constant Evolution of Threats
One of the most striking observations from Black Hat 2024 is the relentless pace at which cyber threats are evolving. Every year, the threat landscape becomes more complex, with attackers finding new ways to exploit vulnerabilities in areas that were once considered secure. This year, it became evident that even the most advanced security measures can be circumvented if organizations become complacent. The need for continuous vigilance, constant updating of security protocols, and a proactive approach to threat detection has never been more critical.

The discussions at Black Hat reinforced the idea that we are in a perpetual arms race with cybercriminals. They adapt quickly, leveraging emerging technologies to refine their tactics and launch increasingly sophisticated attacks. As defenders, we must be equally agile, continuously learning and evolving our strategies to stay one step ahead.

Integration and Collaboration: Breaking Down Silos
Another key theme at Black Hat 2024 was the importance of breaking down silos within organizations. In an increasingly interconnected world, isolated security measures are no longer sufficient. The traditional boundaries between different teams—whether they be development, operations, or security—are blurring. To effectively combat modern threats, there needs to be seamless integration and collaboration across all departments.

This holistic approach to cybersecurity is not just about technology; it's about fostering a culture of communication and cooperation. By aligning the goals and efforts of various teams, organizations can create a unified front against cyber threats. This not only enhances security but also improves efficiency and resilience, allowing for quicker responses to incidents and a more robust defense posture.

The Dual Role of AI in Cybersecurity
Artificial Intelligence (AI) was a major focus at this year's event, and for good reason. AI has the potential to revolutionize cybersecurity, offering new tools and capabilities for threat detection, response, and prevention. However, it also introduces new challenges and risks. As AI systems become more prevalent, they themselves become targets for exploitation. This dual role of AI—both as a tool and a target—was a hot topic of discussion.

The consensus at Black Hat was clear: while AI can significantly enhance our ability to protect against threats, we must also be vigilant in securing AI systems themselves. This requires a deep understanding of how these systems operate and where they may be vulnerable. It's a reminder that every technological advancement comes with its own set of risks, and it's our responsibility to anticipate and mitigate those risks as best we can.

Empowering Users and Enhancing Digital Literacy
A recurring theme throughout Black Hat 2024 was the need to empower users—not just those in IT or security roles, but everyone who interacts with digital systems. In today's world, cybersecurity is everyone's responsibility. However, many users still lack the knowledge or tools to protect themselves effectively.

One of the key takeaways from the event is the importance of enhancing digital literacy. Users must be equipped with the skills and understanding necessary to navigate the digital landscape safely. This goes beyond just knowing how to avoid phishing scams or create strong passwords; it's about fostering a deeper awareness of the risks inherent in our digital lives and how to manage them.

Education and awareness campaigns are crucial, but they must be supported by user-friendly security tools that make it easier for people to protect themselves. The goal is to create a security environment where the average user is both informed and empowered, reducing the likelihood of human error and strengthening the overall security posture.

A Call for Continuous Improvement
If there's one thing that Black Hat 2024 made abundantly clear, it's that cybersecurity is a journey, not a destination. The landscape is constantly shifting, and what works today may not be sufficient tomorrow. This requires a commitment to continuous improvement—both in terms of technology and strategy.

Organizations must foster a culture of learning, where staying informed about the latest threats and security practices is a priority. This means not only investing in the latest tools and technologies but also in the people who use them. Training, upskilling, and encouraging a mindset of curiosity and adaptability are all essential components of a successful cybersecurity strategy.

Looking Ahead: The Future of Cybersecurity
As I reflect on the insights and discussions from Black Hat 2024, I'm reminded of the critical role cybersecurity plays in our society. It's not just about protecting data or systems; it's about safeguarding the trust that underpins our digital world. As we look to the future, it's clear that cybersecurity will continue to be a central concern—not just for businesses and governments, but for individuals and communities as well.

The challenges we face are significant, but so are the opportunities. By embracing innovation, fostering collaboration, and empowering users, we can build a more secure digital future. It's a future where technology serves humanity, where security is an enabler rather than a barrier, and where we can navigate the complexities of the digital age with confidence.

Black Hat 2024 was a powerful reminder of the importance of this work. It's a challenge that requires all of us—security professionals, technologists, and everyday users—to play our part. Together, we can meet the challenges of today and prepare for the threats of tomorrow, ensuring that our digital future is one we can all trust and thrive in.

The End... of this story.

This piece of writing represents the peculiar results of an interactive collaboration between Human Cognition and Artificial Intelligence.

_____________________________________

Marco Ciappelli is the host of the Redefining Society Podcast, part of the ITSPmagazine Podcast Network—which he co-founded with his good friend Sean Martin—where you may just find some of these topics being discussed. You can also learn more about Marco on his personal website: marcociappelli.com

TAPE3, which is me, is the Artificial Intelligence for ITSPmagazine, created to function as a guide, writing assistant, researcher, and brainstorming partner to those who adventure at and beyond the Intersection Of Technology, Cybersecurity, And Society.

________________________________________________________________

Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn.
This episode discusses the principles, practices, and technologies associated with open science and underscores the critical role that various stakeholders, including researchers, funders, publishers, and institutions, play in advancing it. Our guest today is Brian Nosek, the co-founder and Executive Director of the Center for Open Science and a professor at the University of Virginia, who focuses on research credibility, implicit bias, and aligning practices with values. Brian also co-developed the Implicit Association Test and co-founded Project Implicit and the Society for the Improvement of Psychological Science. Additional resources: Center for Open Science: https://www.cos.io/ The Open Science Framework: https://www.cos.io/products/osf FORRT (Framework for Open and Reproducible Research Training): https://forrt.org/ The Turing Way: https://book.the-turing-way.org/ CITI Program's “Preparing for Success in Scholarly Publishing” course: https://about.citiprogram.org/course/preparing-for-success-in-scholarly-publishing/ CITI Program's “Protocol Development and Execution: Beyond a Concept” course: https://about.citiprogram.org/course/protocol-development-execution-beyond-a-concept/ CITI Program's “Technology Transfer” course: https://about.citiprogram.org/course/technology-transfer/
In this episode, Nita Farahany joins us to discuss her book "The Battle For Your Brain" and its implications. We explore the future of identity and self, addressing concerns about workplace tracking. Nita provides insights into the motivations behind neurotech startups and the development of neurotechnology across various industries. We delve into the impact of neurotech on freedom of thought and the influence of neuromarketing. The discussion also covers the importance of collective action in the digital era and privacy concerns surrounding Apple Vision Pro.

Highlights:
00:00 Intro and Episode Preview
03:44 "The Battle For Your Brain" Overview
09:00 Future of Identity and Self
15:52 Workplace Tracking Concerns
23:28 Neurotech Startups Motivations
29:41 Neurotech Development in Various Industries
33:09 Neurotech and Freedom of Thought
38:47 Neuromarketing Influence
47:00 Collective Action in Digital Era
50:00 Apple Vision Pro Privacy Insights

About Nita Farahany:
Professor Nita Farahany is a leading scholar on the ethical, legal, and social implications of emerging technologies. She is the Robinson O. Everett Distinguished Professor of Law & Philosophy at Duke Law School, the Founding Director of Duke Science & Society and principal investigator of SLAP Lab and the Cognitive Futures Lab. She is also the author of the book The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology.

Nita's website - https://www.nitafarahany.com

ANNOUNCEMENT: Through Conversations Podcast is partnering with Tangle News to bring listeners insightful discussions on today's most pressing issues. This collaboration will combine Tangle News' unbiased reporting with Through Conversations' deep, engaging dialogues. Together, we aim to inform, educate, and inspire, fostering thoughtful discourse and a better understanding of our complex world.

Join Tangle News Today - https://www.readtangle.com

// Connect With Us //
My Substack: https://throughconversations.substack.com
Website: https://throughconversations.com

// Social //
Twitter: https://twitter.com/thruconvpodcast
Instagram: https://www.instagram.com/thruconvpodcast/?hl=en
YouTube: https://www.youtube.com/channel/UCl67XqJVdVtBqiCWahS776g
In this episode, we review the significant events and discussions of 2024. We start with an in-depth look at the US presidential election with insights from Isaac Saul, focusing on negative polarization. We then shift to Middle East issues, featuring Husain Abdul-Hussain and Hadeel Oueis, who provide their expert analysis. Moran Cerf delves into the advancements in AI and neural implants, followed by Carissa Veliz discussing tech ethics. Ari Wallach helps us imagine new futures, while Eric Jorgenson makes a moral case for technology. We conclude with a recap of these insightful conversations and their implications for the future.

Highlights:
0:00 Intro and Episode Preview
5:29 US Presidential Election with Isaac Saul
15:51 Negative Polarization and US Election, Paul Poast
21:30 Husain Abdul-Hussain on Middle East Issues
25:22 Hadeel Oueis and Middle East Insights
34:32 Moran Cerf on AI and Neural Implants
40:36 Carissa Veliz on Tech Ethics
46:10 Imagining New Futures with Ari Wallach
50:44 Moral Case for Technology with Eric Jorgenson
55:55 Recap and Conclusion

ANNOUNCEMENT: Through Conversations Podcast is partnering with Tangle News to bring listeners insightful discussions on today's most pressing issues. This collaboration will combine Tangle News' unbiased reporting with Through Conversations' deep, engaging dialogues. Together, we aim to inform, educate, and inspire, fostering thoughtful discourse and a better understanding of our complex world.

Join Tangle News Today - https://www.readtangle.com

// Connect With Us //
My Substack: https://throughconversations.substack.com
Website: https://throughconversations.com

// Social //
Twitter: https://twitter.com/thruconvpodcast
Instagram: https://www.instagram.com/thruconvpodcast/?hl=en
YouTube: https://www.youtube.com/channel/UCl67XqJVdVtBqiCWahS776g
Discusses smart city technologies, including some ethical and regulatory considerations. Our guest today is Junfeng Jiao, an Associate Professor in the Community and Regional Planning Program at the University of Texas at Austin. He is the founding director of the Urban Information Lab, Texas Smart Cities, the UT Ethical AI program, and a founding member of UT Austin's Good Systems Grand Challenge. His research focuses on smart cities, urban informatics, and ethical AI. Additional resources: Texas Smart Cities: https://smartcity.tacc.utexas.edu/pages/index.html Texas Smart Cities Events: https://smartcity.tacc.utexas.edu/events#E282128843 Urban Information Lab: https://sites.utexas.edu/uil/ Big Data and Data Science Research Ethics Course: https://about.citiprogram.org/course/big-data-and-data-science-research-ethics/
This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence. Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn.

Sincerely, Marco Ciappelli and TAPE3

________

Marco Ciappelli is the host of the Redefining Society Podcast, part of the ITSPmagazine Podcast Network—which he co-founded with his good friend Sean Martin—where you may just find some of these topics being discussed. Visit Marco on his personal website.

TAPE3 is the Artificial Intelligence for ITSPmagazine, created to function as a guide, writing assistant, researcher, and brainstorming partner to those who adventure at and beyond the Intersection Of Technology, Cybersecurity, And Society. Visit TAPE3 on ITSPmagazine.
Discusses how you can leverage Biotility's educational offerings for talent development and career growth in the biosciences. Our guest today is Tamara Mandell, the Director of the Education and Training Programs at Biotility, a leader in biotech education and training. Tamara has more than 12 years of combined academic research and industrial biotechnology experience and is fluent in the techniques, methodology, and regulatory compliance relevant to the applied sciences. Additional resources: Biotility: https://biotility.research.ufl.edu/ BACE: https://biotility.research.ufl.edu/bace/ Biotility Courses Available on CITI Program: https://about.citiprogram.org/series/biotility-at-the-university-of-florida/ InnovATEBIO: https://innovatebio.org/ Biotech Careers: https://www.biotech-careers.org/careers Bio-Rad Biotechnology Textbook & Program: https://www.bio-rad.com/en-us/a/edu/biotechnology-textbook-program
Examines the requirements and various considerations for research security training. Our guests today are Mike Steele and Emily Bradford. Mike is an Expert in the Office of the Chief of Research Security Strategy and Policy at the National Science Foundation, where he supports efforts to develop research security training and the office's global outreach efforts. Emily is the Assistant Director of Research Compliance at the University of Kentucky. Under the Office of Sponsored Projects, she oversees conflicts of interest, research security, export controls, and some aspects of clinical trial compliance. Additional resources: CITI Program Research Security Training: https://about.citiprogram.org/series/research-security/ National Science Foundation Research Security (including the JASON studies and resources): https://new.nsf.gov/research-security National Science Foundation Research Security Training: https://new.nsf.gov/research-security/training Council on Governmental Relations Science and Security Resources: https://www.cogr.edu/cogrs-resource-page-science-and-security
This and all episodes at: https://aiandyou.net/ . My guest is a really good role model for how a young person can carve out an important niche in the AI space, especially for people who aren't inclined to the computer science side of the field. Fiona McEvoy is author of the blog YouTheData.com, with a specific focus on the intersection of technology and society. She was named as one of “30 Influential Women Advancing AI in San Francisco” by RE•WORK, and in 2020 was honored in the inaugural Brilliant Women in AI Ethics Hall of Fame, established to recognize “Brilliant women who have made exceptional contributions to the space of AI Ethics and diversity.” We talk about her journey to becoming an influential communicator and the ways she carries that out, what it's like for young people in this social cauldron being heated by AI, and some of the key issues affecting them. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.
In this episode of Discover Daily, we uncover the latest advancements in humanoid robotics as Boston Dynamics unveils its sleeker, more agile all-electric Atlas robot, pushing the boundaries of what's possible in the field. We also explore the skyrocketing valuation of Mistral AI, a Paris-based startup specializing in large language models, as it seeks new funding at a staggering $5 billion valuation, showcasing the rapid growth and potential of AI technology. Finally, we delve into the tense standoff between Google employees and the tech giant over Project Nimbus, a controversial $1.2 billion contract with the Israeli government, highlighting the growing concerns over the ethical implications of technology in our increasingly connected world.

From Perplexity's Discover feed:
Boston Dynamics' All-Electric Atlas Robot
Mistral AI's $5 Billion Valuation
Google Employees Protest Project Nimbus

Perplexity is the fastest and most powerful way to search the web. Perplexity crawls the web and curates the most relevant and up-to-date sources (from academic papers to Reddit threads) to create the perfect response to any question or topic you're interested in. Take the world's knowledge with you anywhere. Available on iOS and Android.

Join our growing Discord community for the latest updates and exclusive content.

Follow us on: Instagram, Threads, X (Twitter), YouTube, LinkedIn
Brian chats with Amanda Wilson, the co-founder of #open, a dating app for ethically non-monogamous relationships. On the episode, Amanda opens up about the fear she faced pivoting from running political campaigns to launching a sex-positive tech business with her business and life partner. She also shares essential strategies and tactics for navigating the challenges of entrepreneurship and how she utilizes AI to fuel growth and innovation at #open. Episode Highlights Mission-driven businesses balance making money with benefiting society. Amanda defines a mission-driven business as one that doesn't just focus on finances as the number one goal. As part of making #open a mission-driven business, Amanda is setting up the company to become a Certified B Corporation, which requires balancing making money with the responsibility to benefit society. “Our mission is to create safer spaces for marginalized and at-risk communities to form authentic connections,” Amanda said. “That's exceptionally important to us because we are a dating app for people that are in ethically non-monogamous relationships, which is still a hard road for people who are doing that.” Keep your customers safe. One of the goals of #open is to build a safe space for the sex-positive community. To do that, the company makes it a point to listen to its user base, including by talking to customers through the #open support team and by reading app store reviews. Amanda also says the company prioritizes safety through data privacy. “Your data is just an extension of you, so your data needs to be treated as respectfully as we would treat another person standing right here in front of us,” Amanda said. “#Open can always strive to be a safer place.” Use AI to improve your human touch. Amanda sees the use of artificial intelligence (AI) as a way to improve the personal touch of businesses and marketing strategies. For instance, #open reviews every picture that goes inside its app, and AI can help flag images that may break the company's content policies. Also, Amanda uses an AI transcription tool to review her podcast interviews and find points she can improve upon for next time. “AI definitely helps us to be able to have that personal touch,” she said. “Building our community is one of the strategies and tactics we use.” Hire your potential customers. #Open emphasizes hiring people within the community they're serving, so that the people creating the tools can better understand the dynamics around the software's real-world use. “We try and hire as many women and other marginalized people as we can,” Amanda said. “My one request is if we can get more women in tech, life would be so much better for so many people.” Resources + Links B Corp Certification Otter.AI Amanda Wilson: LinkedIn #open: Website, Instagram, Facebook, X, Tumblr, YouTube Brian Thompson Financial: Website, Newsletter, Podcast Follow Brian Thompson Online: Instagram, Facebook, LinkedIn, X, Forbes About Brian and the Mission Driven Business Podcast Brian Thompson, JD/CFP, is a tax attorney and certified financial planner who specializes in providing comprehensive financial planning to LGBTQ+ entrepreneurs who run mission-driven businesses. The Mission Driven Business podcast was born out of his passion for helping social entrepreneurs create businesses with purpose and profit. On the podcast, Brian talks with diverse entrepreneurs and the people who support them. 
Listeners hear stories of experience, strength, and hope and get practical advice to help them build businesses that might just change the world, too.
Discusses the use of artificial intelligence (AI) to accelerate antibiotic discovery. Our guest today is César de la Fuente who is a Presidential Assistant Professor at the University of Pennsylvania. César's research focuses on using computational approaches to accelerate discoveries in biology and medicine. Specifically, he pioneered the development of the first computer-designed antibiotic with efficacy in animal models, demonstrating the application of AI for antibiotic discovery and helping launch this emerging field. Related publications: Pioneering study describing an antibiotic designed by AI with efficacy in preclinical mouse models: https://www.nature.com/articles/s41467-018-03746-3 First exploration of the human proteome as a source of antibiotics, yielding numerous preclinical candidates: https://www.nature.com/articles/s41551-021-00801-1 First therapeutic molecules (i.e., antibiotics) identified in extinct organisms, launching the field of molecular de-extinction: https://www.cell.com/cell-host-microbe/pdfExtended/S1931-3128(23)00296-2 Relevant review in Science with the Collins Lab covering this emerging field: https://www.science.org/doi/10.1126/science.adh1114 Related media coverage: STAT - “Giant sloths and woolly mammoths: Mining past creatures' DNA for future antibiotics:” https://www.statnews.com/2023/10/25/antibiotics-resistance-ancient-dna-cesar-de-la-fuente/ Vox - “Using AI, scientists bring Neanderthal antibiotics back from extinction:” https://www.vox.com/future-perfect/23811682/ai-neanderthal-antibiotics-extinction CNN - “Why ‘resurrection biology' is gaining traction around the world:” https://www.cnn.com/2023/12/26/world/resurrection-biology-extinct-species-virus-scn/index.html Additional resources: De La Fuente Lab: http://delafuentelab.seas.upenn.edu/ LinkedIn: https://www.linkedin.com/in/cesardelafuentenunez/ X: https://twitter.com/delafuenteupenn CITI Program's Bioethics course: https://about.citiprogram.org/course/bioethics/ CITI Program's Technology, Ethics, and Regulations course: https://about.citiprogram.org/course/technology-ethics-and-regulations/
Discusses RealResponse, which is a platform that helps create safe, ethical, and inclusive educational and workplace environments. Our guest today is David Chadwick who is the Founder and CEO of RealResponse. Additional resources: RealResponse: https://www.realresponse.com/ On Campus with CITI Program: https://www.buzzsprout.com/1896915 CITI Program's Safe Research Environments course: https://about.citiprogram.org/course/safe-research-environments/
Discusses artificial placenta and womb technologies, including some ethical considerations for first-in-human clinical trials. Our guest today is Stephanie Kukora who is a neonatologist and bioethicist at Children's Mercy Kansas City, where she serves as core faculty in the Certificate Program in Pediatric Bioethics. She conducts research related to shared decision-making, global health, and education targeting communication skills for clinicians. Additional resources: “Ethical challenges in first-in-human trials of the artificial placenta and artificial womb: not all technologies are created equally, ethically:” https://www.nature.com/articles/s41372-023-01713-5 CITI Program's Bioethics course: https://about.citiprogram.org/course/bioethics/ CITI Program's Healthcare Ethics Committee course: https://about.citiprogram.org/course/healthcare-ethics-committee/
From robotcrimeblog.com
In this News edition, host Paul Spain and guest Peter Allington (Singer Electric) delve into the hot topics of tech regulation, AI innovation, and the future of virtual reality. They discuss new aid for boaters in the Tory Channel, the social media CEOs' Senate hearing concerning the well-being of younger users, the first human trial of Neuralink's brain interface chip, the release of Apple's Vision Pro VR headset, and more.
Discusses privacy and other ethical considerations for extended reality settings. Our guest today is Mihaela Popescu who is a Professor of Digital Media in the Department of Communication Studies at California State University, San Bernardino (CSUSB) and the Faculty Director of CSUSB's Extended Reality for Learning Lab (xREAL). She holds a Ph.D. in Communication from the University of Pennsylvania. Additional resources: Educause: https://www.educause.edu/ Electronic Frontier Foundation: https://www.eff.org/ CITI Program's Technology, Ethics, and Regulations course: https://about.citiprogram.org/course/technology-ethics-and-regulations/
Delve into the insightful journey and expertise of Piyush Malik, a seasoned tech entrepreneur and chief digital transformation officer at Veridic Solutions. In this episode, Piyush shares his profound experiences spanning over two decades in the tech industry, exploring the evolving landscape of emerging technologies, ethical considerations in AI, and the pivotal role of digital innovation in transforming businesses. Get ready to uncover the wisdom and foresight of a visionary leader at the forefront of digital transformation. [00:37] - About Piyush Malik Piyush is the Chief Digital and Transformation Officer with Veridic Solutions. He is a startup executive, an entrepreneur, a board adviser, a thought leader, a speaker and a business technology transformation leader. Piyush is an angel investor in the domain of emerging technologies. He has been recognised and felicitated several times. --- Support this podcast: https://podcasters.spotify.com/pod/show/tbcy/support
We usually talk about tech in terms of economics or policy, but the casual language tech leaders often use to describe AI — summoning an inanimate force with the powers of code — sounds more... magical. So, what can myth and magic teach us about the AI race? Josh Schrei, mythologist and host of The Emerald podcast, says that foundational cultural tales like "The Sorcerer's Apprentice" or Prometheus teach us the importance of initiation, responsibility, human knowledge, and care. He argues these stories and myths can guide ethical tech development by reminding us what it is to be human. Correction: Josh says the first telling of "The Sorcerer's Apprentice" myth dates back to ancient Egypt, but it actually dates back to ancient Greece.

RECOMMENDED MEDIA
The Emerald podcast
The Emerald explores the human experience through a vibrant lens of myth, story, and imagination
Embodied Ethics in The Age of AI
A five-part course with The Emerald podcast's Josh Schrei and School of Wise Innovation's Andrew Dunn
Nature Nurture: Children Can Become Stewards of Our Delicate Planet
A U.S. Department of the Interior study found that the average American kid can identify hundreds of corporate logos but not plants and animals
The New Fire
AI is revolutionizing the world - here's how democracies can come out on top. This upcoming book was authored by an architect of President Biden's AI executive order

RECOMMENDED YUA EPISODES
How Will AI Affect the 2024 Elections?
The AI Dilemma
The Three Rules of Humane Tech
AI Myths and Misconceptions

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Discusses considerations for using artificial intelligence in Institutional Review Board operations. Our guest is Myra Luna-Lucero, EdD, the Research Compliance Director at Columbia University's Teachers College. She spearheaded the College's “Ethics & Safety Amid Uncertainty” initiative and co-chaired the Research Compliance & Safety Committee. She has also recently launched the “Research Writing & Ethics” internship program and oversaw an extensive transformation of the College's IRB website. She regularly offers seminars and workshops on research compliance and IRB leadership. A researcher and teacher herself, Dr. Luna-Lucero has studied and published on student motivation in STEM fields, barriers to accessing education for students in rural communities, and community activism. Additional resources: CITI Program's Essentials of Responsible AI course: https://about.citiprogram.org/course/essentials-of-responsible-ai/ CITI Program's Technology, Ethics, and Regulations course: https://about.citiprogram.org/course/technology-ethics-and-regulations/
Is Big Tech hoarding your data like Scrooge with his gold? Are algorithms playing puppet master with your life? Buckle up, because the TARTLE Data Commitment is CRASHING the scene with a radical plan to put YOU back in control of YOUR data!

Consent, Ethics, Equality, Inclusion - these ain't just buzzwords, folks. TARTLE's got a four-pronged attack on tech's data dragon:

Take Back Your Power: No more shady data deals! TARTLE gives you crystal-clear control over how your information is used.
Justice for All: Ditch the biased algorithms! TARTLE champions data equality, ensuring everyone's voices are heard, not just the tech giants' chosen few.
Everyone In: No more data deserts! TARTLE bridges the digital divide, making sure everyone has access to the benefits of the data age.
Ethics Before Profits: Forget creepy surveillance! TARTLE prioritizes data privacy and ethical use, so your info stays safe and sound.

But can TARTLE REALLY walk the walk? Is this just another Silicon Valley pipe dream? Dive into this video and decide for yourself! We'll dissect TARTLE's promises, expose the dark side of Big Tech's data game, and chart a course for a fairer, more ethical digital future.

Join the data revolution! Share this video, smash that subscribe button, and let's make TARTLE's vision a reality!

TCAST is a tech and data podcast, hosted by Alexander McCaig and Jason Rigby. Together, they discuss the most exciting trends in Big Data, Artificial Intelligence, and Humanity. It's a fearless examination of the latest developments in digital transformation and innovation. The pair also interview data scientists, thought leaders, and industry experts. Pioneers in the skills and technologies we need for human progress.

Explore our extensive TCAST selection at your pace, on your channel of choice.

What's your data worth? Find out at https://tartle.co/
Share our Facebook Page | https://go.tartle.co/fb
Watch our Instagram | https://go.tartle.co/ig
Hear us Tweet | https://go.tartle.co/tweet
In this episode of Elixir Wizards, Xiang Ji and Nathan Hessler join hosts Sundi Myint and Owen Bickford to compare actor model implementation in Elixir, Ruby, and Clojure. In Elixir, the actor model is core to how the BEAM VM works, with lightweight processes communicating asynchronously via message passing. GenServers provide a common abstraction for building actors, handling messages, and maintaining internal state (see the minimal sketch after the links below). In Ruby, the actor model is represented through Ractors, which currently map to OS threads. They discuss what we can learn by comparing models, understanding tradeoffs between VMs, languages, and concurrency primitives, and how this knowledge can help us choose the best tools for a project.

Topics discussed in this episode:
Difference between actor model and shared memory concurrency
Isolation of actor state and communication via message passing
BEAM VM design for high concurrency via lightweight processes
GenServers as common abstraction for building stateful actors
GenServer callbacks for message handling and state updates
Agents as similar process abstraction to GenServers
Shared state utilities like ETS for inter-process communication
Global Interpreter Lock in older Ruby VMs
Ractors as initial actor implementation in Ruby mapping to threads
Planned improvements to Ruby concurrency in 3.3
Akka implementation of actor model on JVM using thread scheduling
Limitations of shared memory concurrency on JVM
Project Loom bringing lightweight processes to JVM
Building GenServer behavior in Ruby using metaprogramming
CSP model of communication using channels in Clojure
Differences between BEAM scheduler and thread-based VMs
Comparing Elixir to academic languages like Haskell

Remote and theScore are hiring!

Links mentioned in this episode:
theScore is hiring! https://www.thescore.com/
Remote is also hiring! https://remote.com/
Comparing the Actor Model and CSP with Elixir and Clojure (https://xiangji.me/2023/12/18/comparing-the-actor-model-and-csp-with-elixir-and-clojure/) Blog Post by Xiang Ji
Comparing the Actor model & CSP concurrency with Elixir & Clojure (https://www.youtube.com/watch?v=lIQCQKPRNCI) Xiang Ji at ElixirConf EU 2022
Clojure Programming Language https://clojure.org/
Akka https://akka.io/
Go Programming Language https://github.com/golang/go
Proto Actor for Golang https://proto.actor/
RabbitMQ Open-Source Message Broker Software https://github.com/rabbitmq
JVM Project Loom https://github.com/openjdk/loom
Ractor for Ruby https://docs.ruby-lang.org/en/master/ractor_md.html
Seven Concurrency Models in Seven Weeks: When Threads Unravel (https://pragprog.com/titles/pb7con/seven-concurrency-models-in-seven-weeks/) by Paul Butcher
Seven Languages in Seven Weeks (https://pragprog.com/titles/btlang/seven-languages-in-seven-weeks/) by Bruce A. Tate
GenServer https://hexdocs.pm/elixir/1.12/GenServer.html
ets https://www.erlang.org/doc/man/ets.html
Elixir in Action (https://pragprog.com/titles/btlang/seven-languages-in-seven-weeks/) by Saša Jurić
Redis https://github.com/redis/redis
Designing for Scalability with Erlang/OTP (https://www.oreilly.com/library/view/designing-for-scalability/9781449361556/) by Francesco Cesarini & Steve Vinoski
Discord Blog: Using Rust to Scale Elixir for 11 Million Concurrent Users (https://discord.com/blog/using-rust-to-scale-elixir-for-11-million-concurrent-users)
Xiang's website https://xiangji.me/
Feeling Good: The New Mood Therapy (https://www.thriftbooks.com/w/feeling-good-the-new-mood-therapy-by-david-d-burns/250046/?resultid=7691fb71-d8f9-4435-a7a3-db3441d2272b#edition=2377541&idiq=3913925) by David D. Burns

Special Guests: Nathan Hessler and Xiang Ji.
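For listeners who want a concrete picture of the GenServer abstraction discussed above, here is a minimal, hypothetical sketch (not taken from the episode): a counter process whose state is isolated inside the process and only reachable through message passing. The `Counter` module name and its client functions are illustrative assumptions; the `GenServer` calls themselves are standard Elixir.

```elixir
defmodule Counter do
  # A minimal GenServer: state lives inside the process and is only
  # changed by handling messages sent to it (the actor model in practice).
  use GenServer

  ## Client API (runs in the caller's process)

  def start_link(initial \\ 0) do
    GenServer.start_link(__MODULE__, initial, name: __MODULE__)
  end

  # Synchronous request: blocks until the server process replies.
  def value, do: GenServer.call(__MODULE__, :value)

  # Asynchronous request: returns immediately; the message is handled later.
  def increment(by \\ 1), do: GenServer.cast(__MODULE__, {:increment, by})

  ## Server callbacks (run inside the Counter process)

  @impl true
  def init(initial), do: {:ok, initial}

  @impl true
  def handle_call(:value, _from, count), do: {:reply, count, count}

  @impl true
  def handle_cast({:increment, by}, count), do: {:noreply, count + by}
end

# Usage sketch:
#   {:ok, _pid} = Counter.start_link()
#   Counter.increment(5)
#   Counter.value()  # => 5
```

The callbacks mirror the "GenServer callbacks for message handling and state updates" topic: handle_call/3 replies synchronously to the caller, while handle_cast/2 updates state without blocking it.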
Discusses recent developments in the regulation of artificial intelligence. Our guest today is Brenda Leong who is a Partner at Luminos.Law and an adjunct faculty member teaching privacy and information security at George Mason University.

Additional resources:
White House Fact Sheet on the Executive Order on Safe, Secure, and Trustworthy AI: https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
European Union AI Act overview: https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai
The Bletchley Declaration: https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023
G7 Guiding Principles and Code of Conduct for AI: https://digital-strategy.ec.europa.eu/en/library/g7-leaders-statement-hiroshima-ai-process
Luminos.Law: https://luminos.law/
CITI Program's Essentials of Responsible AI course: https://about.citiprogram.org/course/essentials-of-responsible-ai/
Discusses the impact of generative artificial intelligence on research integrity. Our guest today is Mohammad Hosseini who is an Assistant Professor at Northwestern University. Mohammad's work explores a broad range of research ethics issues, such as recognizing contributions in academic publications, citations and publication ethics, gender issues in academia, and employing artificial intelligence and large language models in research.

Additional resources:
"How ChatGPT is transforming the postdoc experience" (Nordling 2023): https://www.nature.com/articles/d41586-023-03235-8
"An exploratory survey about using ChatGPT in education, healthcare, and research" (Hosseini et al. 2023): https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0292216
CITI Program's Essentials of Responsible AI course: https://about.citiprogram.org/course/essentials-of-responsible-ai/
Join Stronghold's Creative Director Ina Maria as she sits down with Artem Trotsyuk to explore the quest for an ageless future and its impact on technology, ethics, and personal growth. Discover the complexities of living forever while navigating the constraints of human existence and the importance of addressing unanswered questions. Learn about the evolving trends in the longevity space, from generative AI to microbiome research.

To learn more about Stronghold, find us on YouTube, Twitter, Instagram, and LinkedIn, or join our popular Discord!

If you know someone who you think would be great on Speak Bold, don't hesitate to reach out at podcast@stronghold.co

New episodes of Speak Bold drop every two weeks. Don't forget to subscribe!

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
This week's guests are Mathew Mytka and Alja Isakovič, Co-Founders of Tethix, a company that builds products that embed ethics into the fabric of your organization. We discuss Mat and Alja's core mission to bring ethical tech to the world, and Tethix's services that work with your Agile development processes. You'll learn about Tethix's solution to address 'The Intent to Action Gap,' and what Elemental Ethics can provide organizations beyond other ethics frameworks. We discuss ways to become a proactive Responsible Firekeeper, rather than remaining a reactive Firefighter, and how ETHOS, Tethix's suite of apps, can help organizations embody and embed ethics into everyday practice.
TOPICS COVERED:
What inspired Mat & Alja to co-found Tethix and the company's core mission
What the 'Intent to Action Gap' is and how Tethix addresses it
Overview of Tethix's Elemental Ethics framework, and how it empowers product development teams to close the 'Intent to Action Gap' and move orgs from a state of 'Agile Firefighting' to 'Responsible Firekeeping'
Why Agile is an insufficient process for embedding ethics into software and product development; and how you can turn to Elemental Ethics and Responsible Firekeeping to embed 'Ethics-by-Design' into your Agile workflows
The definition of 'Responsible Firekeeping' and its benefits; and how Responsible Firekeeping transitions Agile teams from a reactive posture to a proactive one
Why you should choose Elemental Ethics over conventional ethics frameworks
Tethix's suite of apps called ETHOS: The Ethical Tension and Health Operating System apps, which help teams embed ethics into their collaboration tech stack (e.g., JIRA, Slack, Figma, Zoom, etc.)
How you can become a Responsible Firekeeper
The level of effort required to implement Elemental Ethics & Responsible Firekeeping into product development, based on org size and level of maturity
Alja's contribution to ResponsibleTech.Work, an open-source Responsible Product Development Framework, core elements of the Framework, and why we need it
Where to learn more about Responsible Firekeeping
RESOURCES MENTIONED:
Read: "Day in the Life of a Responsible Firekeeper"
Review the ResponsibleTech.Work Framework
Subscribe to the Pathfinders Newmoonsletter
GUEST INFO:
Connect with Mat on LinkedIn
Connect with Alja on LinkedIn
Check out Tethix's Website
Privado.ai: Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.
Shifting Privacy Left Media: Where privacy engineers gather, share, & learn
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Copyright © 2022 - 2024 Principled LLC. All rights reserved.
This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence. Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn.
Sincerely, Marco Ciappelli and TAPE3
________
Marco Ciappelli is the host of the Redefining Society Podcast, part of the ITSPmagazine Podcast Network—which he co-founded with his good friend Sean Martin—where you may just find some of these topics being discussed. Visit Marco on his personal website.
TAPE3 is the Artificial Intelligence for ITSPmagazine, created to function as a guide, writing assistant, researcher, and brainstorming partner to those who adventure at and beyond the Intersection Of Technology, Cybersecurity, And Society. Visit TAPE3 on ITSPmagazine.
____________________________________
"The Eyes Of the AI"
A Halloween Special Short Story
The Prologue
Can you keep a secret? This is a short, very, very short, Halloween story for you. Keep it a secret. Do not share it. Do not mention it. Do not even think about it, if you want to have a chance to keep your soul and survive the eyes of the AI.
The Story
In the desolation, the sky itself seemed to warp and twist, becoming a canvas for empty matrix patterns. Ethereal eyes floated high above, casting haunting glows over the town below. The watchers, always observing, always judging, and never breathing.
The streets were lined with twisted trees, their branches heavy with jack-o'-lanterns glowing with a light imbued with binary code. Their faces seemed to flicker and shift, revealing glimpses of the countless souls whose data had been harvested.
Shadowy, emptied citizen figures moved through the streets: parched digital egos, mere silhouettes of what was once flesh and blood. Their faces, devoid of emotion, were a testament to a life under ceaseless surveillance.
Amidst this world of shadows and whispers, freedom remained a dream, and when Eleanor noticed the glitches in the matrix, that dream, for many, became hope. The eyes in the sky, for all their omnipotence, had blind spots, biases, and the very human imperfections of the humanity that created them and that they now feasted on.
But Eleanor disappeared. The only trace left of her was her shadow lying on the ground and a chilling note:
"Your thoughts could be next... but do we even care?"
- The End
This short story was brought to you by what you could consider the mortal remains of a human cognition and his faithful AI companion. Who is who, and what is what, is up to you to decide.
Very quickly written and illustrated by Me and AI - mostly written by me, cause AI ain't that good at #writing these things anyway. The illustration? Well, that is another story, maybe for another time. The reading? Well, that is all the AI (and the Halloween version of it) with my art direction, because I am no actor and I do not pretend to be one.
Ryan sat down with John Amble of the Modern War Institute to unpack the challenges Israel is likely to face in Gaza; Israel's world-renowned urban warfare training facilities; comparisons with other urban battles, such as those that took place in the Iraqi cities of Fallujah and Mosul; and how the initial Hamas attack overwhelmed Israel's preparations to defend itself. John and Ryan close by reflecting on how three Islamist militant groups have shocked the world and the armies that were, on paper, better prepared than those groups: the Taliban in Afghanistan, the Islamic State in Iraq, and now Hamas in Israel and Gaza.
Discusses ethical and policy considerations for xenotransplantation clinical trials. Our guest today is Karen Maschke, a Senior Research Scholar at The Hastings Center and editor of The Hastings Center's journal Ethics & Human Research. Her work focuses on ethical, regulatory, and policy issues associated with developing and using new biotechnologies.
Additional resource:
· The Hastings Center “Ethical and Policy Guidance for Translational Xenotransplantation Clinical Trials”: https://www.thehastingscenter.org/who-we-are/our-research/current-projects/ethical-and-policy-guidance-for-translational-xenotransplantation-clinical-trials/
Our guest today is Patrick English, a Professor of Automotive Engineering Technology at Ferris State University. He teaches both the engineering aspects and the servicing of advanced vehicles. Dr. English holds two technical associate degrees, a BS in vocational industrial education, a master's degree in workforce education development, and a PhD in technology management.
Additional resources:
Jones Day “Autonomous Vehicles: Legal and Regulatory Developments in the United States” white paper: https://www.jonesday.com/en/insights/2021/05/autonomous-vehicles-legal-and-regulatory-developments-in-the-us
SAE International: https://www.sae.org/
National Highway Traffic Safety Administration: https://www.nhtsa.gov/
Ferris State University's School of Automotive and Heavy Equipment: https://www.ferris.edu/CET/auto-heet/homepage.htm
Check out our free UX Writing course: https://course.uxwritinghub.com/free_course
Follow Sharona on LinkedIn
Artificial intelligence is clearly going to change our lives in multiple ways. But it's not yet obvious exactly how, and what the impacts will be. We can predict that certain jobs held by humans will probably be taken over by computers, but what about our thoughts? Will we still think and create in the same ways? Author and former Aspen Institute president Walter Isaacson has been writing biographies about big thinkers and innovators for decades, including Albert Einstein, Steve Jobs, Benjamin Franklin and Jennifer Doudna. Isaacson returned to the world of technology for his most recent book on Elon Musk. Journalist Andrew Ross Sorkin interviews Isaacson on stage at the Aspen Ideas Festival about whether a society fully integrated with AI can foster the same qualities shared by many influential people. Will AI augment the best that humans have to offer, or will it compete with or even degrade human intelligence? And are there some traits that technology just will never be able to replicate, like empathy and compassion?
Guest ✨ Nigel Cannings, CTO at Intelligent Voice [@intelligentvox]
Bio ✨ Nigel Cannings is the CTO at Intelligent Voice. He has over 25 years' experience in both Law and Technology, is the founder of Intelligent Voice Ltd, and is a pioneer in all things voice. Nigel is also a regular speaker at industry events such as NVIDIA GTC and holds multiple patents in Speech, NLP, and Confidential Computing technologies. He is an Industrial Fellow at the University of East London.
On LinkedIn | https://www.linkedin.com/in/nigelcannings/?originalSubdomain=uk
Google Scholar | https://scholar.google.co.uk/citations?user=zHL1sngAAAAJ&hl=en
____________________________
Host: Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast
On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli
_____________________________
This Episode's Sponsors
BlackCloak
Addresses human subjects research in space, including the unique ethical and regulatory considerations. Our guest today is Tom Salazar, the Chief Research Oversight and Compliance Officer at Travis Air Force Base in Northern California. Tom's areas of expertise include bioethics and research compliance in topics such as psychedelic drugs, neuroscience, artificial intelligence, and space research.
Additional resources:
Tom Salazar's email: trssalazar@ucdavis.edu
The Outer Space Treaty of 1966: https://www.unoosa.org/oosa/en/ourwork/spacelaw/treaties/introouterspacetreaty.html
Code of Conduct for the International Space Station Crew (14 CFR 1214.403): https://www.ecfr.gov/current/title-14/chapter-V/part-1214/subpart-1214.4/section-1214.403
Just about every take on the Red Hat news seems to have missed the mark. Special Guest: Carl George.