Deepfake: an artificial intelligence-based human image synthesis technique
A nightmare scenario happened to politician Cara Hunter: just weeks before her election, she became the victim of a deepfake scam that threatened to upend her life and career. In a fearless talk, she explores AI's potential to undermine truth and democracy — and offers a path forward to harness this powerful technology as a force for good. (This talk contains mature language.)
Note: We're posting Perry's new show, "The FAIK Files", to this feed through the end of the year. This will give you a chance to get a feel for the new show and subscribe to the new feed if you want to keep following in 2025. Welcome back to The FAIK Files — the show about artificial intelligence and natural nonsense! In this week's episode: Mason has an update on last week's shocking story of Google Gemini's alleged threats against a user. Is this a glitch, a hack, or something more sinister? We sit down with cybersecurity expert Justin "Hutch" Hutchins (author of The Language of Deception) to dive into his experience weaponizing large language models and what it means for the future of AI. And finally, we lighten things up with some fun AI music experiments, showing that Perry & Mason may have too much time on their hands. Want to leave us a voicemail? Here's the magic link to do just that: https://sayhi.chat/FAIK You can also join our Discord server here: https://discord.gg/cU7wepaz *** NOTES AND REFERENCES *** Hutch's stuff: Justin's Book: The Language of Deception: Weaponizing Next Generation AI AI podcast: The Frontier Today (Apple Podcasts) (Spotify) Justin and Len Noe's podcast: Cyber Cognition (Apple Podcasts) (Spotify) RSA Conference talk: Autonomous Hacking Systems - Future Risk or FUD Houston Security Conference talk: This is how we lose control - Focused on the catastrophic/societal risks - P(Doom) Mason's update on theories about Google Gemini threatening people: AI Didn't Threaten Humanity—The Media Did: Breaking Down the Gemini Incident LLM Context Windows Lost in the Middle: How Language Models Use Long Contexts Attention Sorting Combats Recency Bias in Long Context Large Language Models Cognitive Overload Attack: Prompt Injection for Long Context Suno and Generative AI Music Generation: Suno Udio Whispers in the Static (full song generated on Suno) Experimental outro from today's episode (Suno) Wired: US Record Labels Sue AI Music Generators Suno 
and Udio for Copyright Infringement Soundiiz Blog: The music industry is at war against AI: Suno and Udio's case *** THE BOILERPLATE *** About The FAIK Files: The FAIK Files is an offshoot project from Perry Carpenter's most recent book, FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI-Generated Deceptions. Get the Book: FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI-Generated Deceptions (Amazon Associates link) Check out the website for more info: https://thisbookisfaik.com Check out Perry & Mason's other show, the Digital Folklore Podcast: Apple Podcasts: https://podcasts.apple.com/us/podcast/digital-folklore/id1657374458 Spotify: https://open.spotify.com/show/2v1BelkrbSRSkHEP4cYffj?si=u4XTTY4pR4qEqh5zMNSVQA Other: https://digitalfolklore.fm Want to connect with us? Here's how: Connect with Perry: Perry on LinkedIn: https://www.linkedin.com/in/perrycarpenter Perry on X: https://x.com/perrycarpenter Perry on BlueSky: https://bsky.app/profile/perrycarpenter.bsky.social Connect with Mason: Mason on LinkedIn: https://www.linkedin.com/in/mason-amadeus-a853a7242/ Mason on BlueSky: https://bsky.app/profile/pregnantsonic.com
Two-time guest Nate Persily is a professor of law and policy who studies the intersection of artificial intelligence and democracy. AI is creeping into democracy, he says, and 2024 saw its share of deepfakes and synthetic media, but with surprisingly little impact. His bigger concern is the opposite: politicians claiming the truth to be fake. It breeds distrust and, for democracy, that's more pernicious, Persily tells host Russ Altman on this episode of Stanford Engineering's The Future of Everything podcast. Have a question for Russ? Send it our way in writing or via voice memo, and it might be featured on an upcoming episode. Please introduce yourself, let us know where you're listening from, and share your question. You can send questions to thefutureofeverything@stanford.edu.
Episode Reference Links: Stanford Profile: Nathaniel Persily
Connect With Us: Episode Transcripts >>> The Future of Everything Website | Connect with Russ >>> Threads / Bluesky / Mastodon | Connect with School of Engineering >>> Twitter/X / Instagram / LinkedIn / Facebook
Chapters:
(00:00:00) Introduction: Russ Altman introduces guest Nate Persily, a professor of law at Stanford University.
(00:02:49) How AI Impacts Democracy: AI's role in influencing elections and its implications for democratic systems.
(00:03:38) Deepfakes and the Election: The limited impact of deepfakes and emerging disinformation trends.
(00:04:37) Is Skepticism Beneficial?: The role of skepticism in navigating AI-driven disinformation.
(00:06:06) AI and Social Media: How AI powers social media and its implications for democratic dialogue.
(00:08:40) AI Monopoly and Democracy: Risks of concentrated AI power among tech giants.
(00:12:03) Risks of Open AI Models: Balancing innovation and misuse risks of open AI models.
(00:17:20) Regulating AI: Global efforts to regulate AI and lessons from the European AI Act.
(00:19:54) The Global AI Race: The international race to balance progress and safeguards with AI.
(00:23:35) Global South and AI: The digital divide and the role of open models in under-resourced nations.
(00:25:05) AI in Legal Practice: Potential and challenges of AI in law, from drafting to adjudication.
(00:28:55) Conclusion
Social media strategist Scott Kleinberg joins Bob Sirott to explain what A.I. deepfakes are, the risks associated with them, and how you can protect yourself from them. He also shares details about how you can detect A.I. deepfake audio and answers this week’s genius bar question.
I reviewed "FAIK: A Practical Guide To Living In A World Of Deepfakes, Disinformation, and AI-Generated Deceptions", by Perry Carpenter. This publication came across as highly engaging, innovatively written, and a worthwhile read for all in our digital age. FAIK: Navigating AI, Deepfakes and Deception Safely, reviewed. Many related publications focus on the mechanics of AI, security, and the process for handling identified issues. This easy-to-read publication was more holistic, wider in scope, yet more people-focused. The author's coverage starts with an exploration of technology as a tool for good or evil. He then moves seamlessly into AI, its inflection point toward good or evil outcomes, and how our general understanding of it can mitigate its risk. A fascinating angle to this book is the human intelligence versus artificial intelligence comparison. The author explores the human art of deception that has found an accomplice in AI, and how this changes the threat landscape for us all. The book has a balanced feel to it, with each chapter containing a story acting as a chapter anchor. It then explores the topic behind it and offers practical takeaways to deepen the reader's knowledge of deception using AI. This theme is present throughout the book, with quality advice on how to deal with deception, such as the SIFT method. The author is not shy about his writing goal. He tells us it is OK to be AI-curious once we understand how to deal with its deceptive dark side. I found this practical outlook to be illustrative, which made concepts easy to grasp. The foreword and introduction are worth a read. They familiarize the reader with the author's writing style and lay out the book's content in an easy-to-digest manner. This publication, comprising 10 chapters, has three main parts. Chapters 1 to 3 are an overview of AI and synthetic media (AI-produced content), which includes deepfakes. 
Chapters 4 to 7 are an exploration of the emerging threat landscape, which includes AI-based deception, along with tactics to deal with it. Chapters 8 to 10 are about self-protection, and protection for your community, regarding AI. This section continues the theme of self-contained chapters, which are also linked, by topic, into a whole-of-book read. Chapter 10, "A New Hope", has the now-familiar dramatization story as an italic opener, followed by an exploration and a powerful call to action. The author tells us, "criminals have means, motive, and opportunity. But so do we". This community collaboration against AI abusers, such as scammers, disinformation groups, and more, makes sense. While quality takeaways are present in every chapter, the takeaways in this final chapter go wider, yet still strike deep. They inform us of how we can protect ourselves and each other in a world filled with AI. I found the exploration of AI basics, the deception game, and the quality of advice to be of enormous benefit to any reader. It humanizes the issues as it defines technological intelligence as separate from human intelligence. Understanding AI's adversarial dark side and its interaction with humans is key. It is how we can collaborate with AI using a wise degree of caution. That the genie is out of the bottle regarding AI is of note, as is the inflection point we are now at. Taking the initiative back from bad actors, and looking to understand AI's dark side as much as its potential, is our best path forward. Its potential can make or break us as a species. This book allows us to take the brave first step into AI and shows why human understanding can still prevail. To do that, we need to understand the human art of deception as delivered through AI, and how AI's potential relies on human input to drive it forward. How that happens next is up to us. 
By John Mulhall @soldersee | john@authormulhall.com. John has been a writer with Irish Tech News for over 8 years and is also a Cloud Engineer, Writer, and Novelist. You can learn more about John, and his debut novel "From Terror to ...
A new kind of deepfake revenge porn is sweeping the internet. Using artificial intelligence, bad actors can do things like superimpose your face on a nude body, creating convincing and harmful images. Tech companies and lawmakers are trying to play catch up, but the truth is these tools are still easy to access. So how can you and your loved ones stay safe from this dangerous technology? Carrie Goldberg, a lawyer specializing in digital harassment and sex crimes, has some answers. Listen to ‘Terms of Service' here. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Note: We're posting Perry's new show, "The FAIK Files", to this feed through the end of the year. This will give you a chance to get a feel for the new show and subscribe to the new feed if you want to keep following in 2025. Happy FAIKs-giving everyone! Welcome to the newly renovated and relaunched FAIK Files podcast. On this week's episode, Perry & Mason cover Anthropic's recent hiring of an employee focused on AI well-being, an AI grandmother from hell (for scammers), and Google's Gemini chatbot allegedly telling a user what it really thinks of them. Welcome back to the show that keeps you informed on all things artificial intelligence and natural nonsense. Want to leave us a voicemail? Here's the magic link to do just that: https://sayhi.chat/FAIK You can also join our Discord server here: https://discord.gg/cU7wepaz *** NOTES AND REFERENCES *** AI Wellbeing: Anthropic has hired an 'AI welfare' researcher: https://www.transformernews.ai/p/anthropic-ai-welfare-researcher It's time to take AI welfare seriously: https://www.transformernews.ai/p/ai-welfare-paper Taking AI Welfare Seriously: https://arxiv.org/pdf/2411.00986 The problem of sycophancy in AI: Suckup software: How sycophancy threatens the future of AI: https://www.freethink.com/robots-ai/ai-sycophancy Towards Understanding Sycophancy in Language Models: https://arxiv.org/pdf/2310.13548 AI Interpretability: Mapping the Mind of a Large Language Model: https://www.anthropic.com/news/mapping-mind-language-model Lex Fridman podcast interview with Dario Amodei, Amanda Askell, & Chris Olah: https://youtu.be/ugvHCXCOmm4 Deceptive and self-serving tendencies in AI systems: Sycophancy to subterfuge: Investigating reward tampering in language models: https://www.anthropic.com/research/reward-tampering OpenAI o1 System Card: https://openai.com/index/openai-o1-system-card/ Announcing our updated Responsible Scaling Policy: https://www.anthropic.com/news/announcing-our-updated-responsible-scaling-policy AI Grandmother 
from Hell (for scammers): Phone network employs AI "grandmother" to waste scammers' time with meandering conversations: https://www.techspot.com/news/105571-phone-network-employs-ai-grandmother-waste-scammers-time.html YouTube video of Daisy: https://www.youtube.com/watch?v=RV_SdCfZ-0s AI Dumpster Fire of the Week (Gemini tells an end user what it really thinks about him): Article: https://people.com/ai-chatbot-alarms-user-with-unsettling-message-human-please-die-8746112 Gemini interaction: https://gemini.google.com/share/6d141b742a13 *** THE BOILERPLATE *** About The FAIK Files: The FAIK Files is an offshoot project from Perry Carpenter's most recent book, FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI-Generated Deceptions. Get the Book: FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI-Generated Deceptions (Amazon Associates link) Check out the website for more info: https://thisbookisfaik.com Check out Perry & Mason's other show, the Digital Folklore Podcast: Apple Podcasts: https://podcasts.apple.com/us/podcast/digital-folklore/id1657374458 Spotify: https://open.spotify.com/show/2v1BelkrbSRSkHEP4cYffj?si=u4XTTY4pR4qEqh5zMNSVQA Other: https://digitalfolklore.fm Want to connect with us? Here's how: Connect with Perry: Perry on LinkedIn: https://www.linkedin.com/in/perrycarpenter Perry on X: https://x.com/perrycarpenter Perry on BlueSky: https://bsky.app/profile/perrycarpenter.bsky.social Connect with Mason: Mason on LinkedIn: https://www.linkedin.com/in/mason-amadeus-a853a7242/ Mason on BlueSky: https://bsky.app/profile/pregnantsonic.com
Top Tagesschau posts with the most interactions: garden gnome made of MDMA found (01:03); Zayn Malik mourns Liam Payne (01:42); depression in Germany, support resources below (02:24); students hospitalized after eating brownies (04:25); International Day for the Elimination of Violence Against Women (05:08) - Ceasefire between Israel and Hezbollah: is there hope for Gaza too? (07:36) Assessment from Israel correspondent Christian Limpert at the ARD studio in Tel Aviv - SPD-BSW coalition in Brandenburg: what have they agreed on? (11:41) - Plane crash in Lithuania: what is known so far about the causes (15:06) - Deepfake of Merz: what role will AI play in the election campaign? (18:02) Conversation with Yves Bellinghausen from the funk podcast Godcode: KI. Macht. Drama., which you can listen to here: https://open.spotify.com/show/7mTzIeNdkcGWzh9xr0GMWU?si=39e3c0932adf4c9c The AI episode of Absolute Mehrheit is here: https://open.spotify.com/episode/5zBKVxaLH6wvMzx0yMedQO?si=s6mcJj3zRBemAb1_hfuzgA - What is behind #womeninmalefields? (26:26) - Black Friday: is Amazon getting worse and worse? (30:44) Assessment from Alexander Prinz, aka Der dunkle Parabelritter; his video is here: https://www.youtube.com/watch?v=2ShAmarQMOk - Quick news (35:46): update on the trial of Gisèle Pélicot; the ESC can take place in Basel in full; TikTok plans to ban face filters for minors - Support resources for depression: TelefonSeelsorge 0800 111 0 111; Info-Telefon Depression 0800 33 44 5 33; more information and contacts at www.deutsche-depressionshilfe.de - You can always send us feedback and questions via DM on Instagram: https://www.instagram.com/funk/ - You can find our sources here: https://docs.google.com/document/d/1fRvhbus3A-8IEZr-R4l4cc09MrL1TzSs1T4I5Qn77mo/edit?tab=t.0 - Hosts: Julika Kott, Berit Ström. Editorial team: Berit Ström, Julika Kott, Nora Scharmberg. Editorial team at funk: Helen Schulte, Magdalena Stefely. Sound: Stanley Baldauf. Production: Skip Intro on behalf of funk - Editorial deadline: Friday, 29 November 2024, 10:00 a.m.
00:00 Introduction 00:20 Deepfake use quadruples, fueled by social media. Companies and users grapple with identity theft, which grew 137% in 2024. 01:07 If you are a teacher, Microsoft will train you in AI and cybersecurity. The company's plans include training more than five million Mexicans over the next three years. 01:56 The Copa América and the elections topped YouTube preferences in 2024. Sports, native entertainment, and the presidential elections were the most-searched content in Mexico this year. Learn more about your ad choices. Visit megaphone.fm/adchoices
Seventh and final episode of the series on the history of epistemology. We explore the contemporary upheavals in our relationship to knowledge. For the first time, three simultaneous revolutions are radically transforming how we know: a technological revolution (informational overabundance), a social revolution (crisis of traditional authorities), and a cognitive revolution (information overload). How do we navigate this new epistemic landscape? Big ideas covered: complexity as a new paradigm of knowledge; the collapse of traditional mediators of knowledge; the emergence of a new digital infrastructure of knowledge; the transformation of our relationship to truth (post-truth); the challenge of AI in the production of knowledge; the need for a new "ecology of knowledge". Key thinkers cited: Edgar Morin: complex thought; Bruno Latour: the New Climatic Regime; Pierre Rosanvallon: the transformation of authority; Harry Collins & Robert Evans: the nature of expertise; Naomi Oreskes: the science of doubt; Shoshana Zuboff: surveillance capitalism; Bernard Stiegler: the proletarianization of knowledge; Harry Frankfurt: the theory of bullshit; Michel Foucault: regimes of truth; Byung-Chul Han: the digital regime of truth; Danièle Hervieu-Léger: epistemic individualism; Yves Citton: the economy of attention; Eli Pariser: filter bubbles; Dominique Cardon: social networks; Pierre Lévy: collective intelligence. Episode recorded on 26 November 2024. #philosophie #connaissance #science #méthodescientifique --- Find all episodes, transcripts, and summaries at www.sismique.fr. Sismique is an independent podcast created and hosted by Julien Devaureix.
Guest: Jacqueline Jayne, The Independent Cybersecurity Expert
On LinkedIn | https://www.linkedin.com/in/jacquelinejayne/
At AU Cyber Con | https://melbourne2024.cyberconference.com.au/speakers/jacqueline-jayne-smict
Hosts: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]
On ITSPmagazine | https://www.itspmagazine.com/sean-martin
Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast & Audio Signals Podcast
On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli
Episode Notes
During the On Location series at AISA Cyber Con 2024 in Melbourne, the conversation about cybersecurity turns engaging as Jacqueline Jayne, Security Awareness Advocate, shares her experience on human risk management and cybersecurity education. Her insights bring forward crucial points on bridging the gap between human behavior and technological security measures. One pivotal topic discussed is the persistent challenge of human error in cybersecurity. Jacqueline highlights that human error now accounts for over 90% of security breaches. The approach to mitigating these risks isn't merely technological but educational. She emphasizes the need for comprehensive security awareness training and shifting organizational culture towards proactive risk management. Jacqueline shares, "Organizations should redefine IT departments from the 'Department of No' to the 'Department of K-N-O-W.'" She believes that instead of restricting users, organizations should focus on empowering them with knowledge, emphasizing the importance of comprehensive training that connects with employees on a personal level. Throughout the conversation, the importance of contextual and relatable education stands out. Jacqueline advocates for simulated phishing campaigns to provide real-world scenarios for employees. By understanding and experiencing what a phishing attempt looks like in a controlled environment, employees can better recognize and react to actual threats. Another compelling point is teaching digital citizenship from a young age. Jacqueline compares cybersecurity education to road safety education. Just as children learn road safety progressively, digital safety should be ingrained from an early age. Appropriate and guided exposure to technology can ensure they grow up as responsible digital citizens. The discussion also touches on parental and organizational roles. Jacqueline discusses the proposal of banning social media for children under 16, acknowledging its complexity. She suggests that though banning might seem straightforward, it's more about educating and guiding children and teenagers on safe digital practices. Organizations and parents alike should collaborate to create a safer and more informed digital environment for the younger generation. Towards the end, the dialogue shifts to the potential role of AI in enhancing cybersecurity awareness. There's a consensus on using AI not as a replacement but as an augmentative tool to alert and educate users about potential threats in real time, potentially mitigating the risk of human error. In conclusion, the conversation highlights the indispensable role of education in cybersecurity. JJ's perspective fosters a comprehensive approach that includes organizational culture change, continuous engagement, and early digital citizenship education. It's not just about implementing technology but evolving our collective behavior and mindset to ensure a secure digital future.
This Episode's Sponsors
Threatlocker: https://itspm.ag/threatlocker-r974
Resources
The top 10 skills your security awareness and culture person must have with no IT or cyber skills in sight (Session): https://melbourne2024.cyberconference.com.au/sessions/session-OZ4j4mTr1O
Keeping our kids safe online: The essential information for parents and caregivers (Session): https://melbourne2024.cyberconference.com.au/sessions/session-oBf7Gjn2xG
Security awareness 2.0: The paradigm shift from training and simulations to engagement and culture: https://melbourne2024.cyberconference.com.au/sessions/session-drDWsOKBsL
Learn more and catch more stories from Australian Cyber Conference 2024 coverage: https://www.itspmagazine.com/australian-cyber-conference-melbourne-2024-cybersecurity-event-coverage-in-australia
Be sure to share and subscribe!
Catch all of our event coverage: https://www.itspmagazine.com/technology-cybersecurity-society-humanity-conference-and-event-coverage
To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit: https://www.itspmagazine.com/redefining-cybersecurity-podcast
To see and hear more Redefining Society stories on ITSPmagazine, visit: https://www.itspmagazine.com/redefining-society-podcast
Want to tell your Brand Story Briefing as part of our event coverage? Learn More
Alex's smear campaign MORE Words of the Year Mindblown: Old messages in bottles and bioluminescence LINKS If you've got something to add to the show, slide into our DMs @matt.and.alex at https://bit.ly/mattandalex-ig Get Alex Dyson for Wannon merch at https://bit.ly/adyson-merch CREDITS Hosts: Matt Okine and Alex Dyson. Executive Producer: Elise Cooper. Audio Imager: Linc Kelly. See omnystudio.com/listener for privacy information.
Creating habits of healthy skepticism when receiving texts or emails can prevent you from clicking on phishing links. Everybody is vulnerable online, especially when distracted or in a hurry. But cultivating critical thinking and self-awareness can enhance protection against manipulation. Today's guest is Perry Carpenter. Perry is an award-winning author, podcaster, and speaker with over two decades in cybersecurity, focusing on how cyber criminals exploit human behavior. As the Chief Human Risk Management Strategist at KnowBe4, Perry helps build robust, human-centric defenses against social engineering-based threats. His latest book, FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI-Generated Deceptions, tackles AI's role in deception. Show Notes: [1:02] - Perry shares his background and what his career has entailed. [4:01] - Regardless of how much people say, spend, or do on security-related issues, the people side of things is hard to control. [5:25] - Perry has always been interested in deception and misdirection. [6:59] - Even as a security professional, Perry has experienced enough distraction to click a phishing email. [9:43] - It is easier to be distracted and not follow usual healthy security habits on a mobile device than on a computer. [12:24] - We fall into habits easily, especially when the behavior is simple and easy. [16:00] - Technology-based deception is more available to anybody than at any other time in history. [18:10] - Security professionals are often pushed into the role of giving advice. [19:40] - Reflection questions like "Why is this in front of me?" might prevent someone from falling victim to a scam. [26:58] - Everybody is vulnerable. Even though cybersecurity professionals know more on the topic than some others, they can still fall victim as well. [30:40] - Pig butchering and crypto scammers sometimes actually do send money back as a tactic to earn trust and increase hope. 
[34:42] - We have to have a healthy skepticism of the information environment that we live in. [36:39] - There are very few situations in life where you won't benefit from slowing down and thinking things through. [38:41] - Perry suggests a family activity that will help boost understanding of pressure tactics. [40:17] - The narratives or tells that work for someone might raise a red flag to others. [43:25] - As a society, we've gotten to a point where we don't like to introspect. [45:59] - Perry discusses the content of his most recent book and how it is information without the “easy way out”. Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review. Links and Resources: Podcast Web Page Facebook Page whatismyipaddress.com Easy Prey on Instagram Easy Prey on Twitter Easy Prey on LinkedIn Easy Prey on YouTube Easy Prey on Pinterest FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI-Generated Deceptions by Perry Carpenter Perry Carpenter on LinkedIn
On this episode of The Insightful Leader, why your trade secrets may not be safe, and other considerations.
Duration: 00:02:20 - Le vrai ou faux - In these deepfakes, widely shared on social media, the Prime Minister is seen and heard making a whole series of announcements aimed at motorists. The videos are fake; they were generated by artificial intelligence.
In this episode of Embedded Insiders, we dive into the challenges and solutions shaping AI and identity verification. Pavel Goldman-Kalaydin, Head of AI and Machine Learning at Sumsub, shares how his team is tackling deepfake threats while addressing privacy and ethics concerns with cutting-edge, AI-driven tech. Later, Rich and Vin chat with Maher Matta, President of the Americas at Infineon Technologies. From his early days as a bench engineer to leading a thriving tech powerhouse, Maher shares the highs, lows, and lessons from his remarkable journey. But first, Rich and Ken break down the latest AI news featuring two industry giants. For more information, visit embeddedcomputing.com
Guest: Chris Glanden, CEO and Founder, BarCode Security [@BarCodeSecurity]
On LinkedIn | https://www.linkedin.com/in/chrisglanden/
On Twitter | https://x.com/barcodesecurity
Hosts: Hutch
On ITSPmagazine
In this episode of the Identity at the Center podcast, Jim McDonald is joined by guest David Mahdi, CIO at Transmit Security and former Gartner analyst. The discussion delves into the challenges of identity security and the necessity of transitioning from IAM to identity security to mitigate privacy, fraud, and scam risks. They explore the convergence of cybersecurity and identity, the rise of non-human identities, and the importance of cyber resilience. David shares insights from his global experiences and highlights the importance of behavior-based monitoring for both human and non-human identities. The episode wraps up with a lighter note as they discuss the perfect trail mix ingredients. 00:00 Welcome to the Identity at the Center Podcast 01:12 Guest Introduction: David Mahdi 02:35 The Evolution of Identity Security 04:13 The Importance of Identity Security 05:26 David Mahdi's Journey in Identity Security 07:20 Challenges in Identity and Access Management 10:10 The Role of Machine Learning in Identity Security 12:19 The Future of Identity Security 22:52 The Need for Convergence in Identity and Security Teams 33:01 Non-Human Identities and Their Management 41:00 Defining Roles and Responsibilities 41:46 Ignoring Non-Human Accounts 42:38 Emerging Threats and Risk Management 44:33 The Importance of Identity Security 46:46 Cyber Resilience in Modern Organizations 49:24 Cloud Services and Backup Plans 51:26 Ransomware and Data Access Governance 54:27 The Future of Identity Security 01:07:53 Trail Mix and Lighter Notes 01:14:32 Closing Remarks and Upcoming Events Connect with David: https://www.linkedin.com/in/dmahdi/ Learn more about Transmit Security: https://transmitsecurity.com/ Gartner IAM Summit - Save $375 on registration using our exclusive code IDAC375: https://www.gartner.com/en/conferences/na/identity-access-management-us Connect with us on LinkedIn: Jim McDonald: https://www.linkedin.com/in/jimmcdonaldpmp/ Jeff Steadman: 
https://www.linkedin.com/in/jeffsteadman/ Visit the show on the web at http://idacpodcast.com Keywords: Identity Security, IAM, Cyber Resilience, Machine Identity, Non-Human Identity, Digital Transformation, Cloud Security, Zero Trust, Data Breach, Risk Management, ITDR, MFA Fatigue, Deepfakes, Gen AI, Copilot, Claude AI, Decentralized Identity, Quantum Computing, Ransomware, Data Access Governance, Transmit Security, Gartner IAM Summit, FIDO Authenticate Conference.
In Episode 356, Tom and Kevin discuss the increasing role of deepfake technology in bypassing biometric checks, accounting for 24 percent of fraud attempts. The show covers identity fraud issues and explores the controversial practices of data brokers selling location data, including tracking US military personnel. The conversation shifts to social media platforms Twitter, Blue […] The post Deepfake Fraud, Data Brokers Tracking Military Personnel appeared first on Shared Security Podcast.
Tuesday, November 26, 2024: SMART TECH welcomes Nathalie Birocheau (Managing Director, Ircam Amplify).
-----------------------------------------------------------------------
SMART TECH - The daily magazine of innovation
In SMART TECH, digital and innovation news comes into full focus. Every day, specialists decode the news, the trends, and the issues raised by the adoption of new technologies.
What is the probability of our existence? Neil deGrasse Tyson and cohosts Chuck Nice and Gary O'Reilly learn about the probabilities all around us, the idea of risk, and how they factor into our own security in the digital age with cybersecurity expert Alex Cosoi.NOTE: StarTalk+ Patrons can listen to this entire episode commercial-free here: https://startalkmedia.com/show/the-power-of-probability-with-alex-cosoi/Thanks to our Patrons Ben Jackson, James Hall, XYZ, Rick Reyes, Brian Gilstrap, Jeffrey Silva, Simon Schwartz, Lori Thomas, Sally Sapp, Alberto.p, Kenneth W Miller, Richard Hart, Patience, Brent Fraliex, 4 Light Years Away, Michele Raiola, Tess Gleason, Connie Schreiber, Metthew Tucker, Hickory Ogle, and Aldeeep for supporting us this week. Subscribe to SiriusXM Podcasts+ on Apple Podcasts to listen to new episodes ad-free and a whole week early.
The Department of Justice has a list of proposed actions Google must take to end anticompetitive practices, and Google is not happy about it. Apple probably isn't thrilled either. But the matter still has to head to court next year, so nothing is decided just yet. Plus, Microsoft makes some questionable choices and an AI expert leans a bit too hard on AI. See omnystudio.com/listener for privacy information.
A male student used AI to create explicit fake images of 50 girls at his school. Now, the question is: can the law keep up with this tech-driven horror?
An artificial intelligence that takes difficult thinking tasks off our hands is, in principle, welcome. But much like the singer and the record player, human and machine are competitors here too. The boundaries are blurring: where is AI a useful servant, and where does it become dangerous? A conference on AI safety is currently taking place in San Francisco, where leading scientists are considering how to minimize the dangers posed by deepfakes and misuse by criminal actors. What standards do we need, and to what extent can artificial intelligence be put to military use? That is what we discuss today with Dr. Niklas Schörnig, Senior Researcher in the International Security program area at the Leibniz Institute for Peace and Conflict Research; Prof. Florian Kunze, who researches human resource management and workplace safety at the University of Konstanz; and Prof. Martin Steinebach, head of the Media Security and IT Forensics department at the Fraunhofer Institute for Secure Information Technology. Podcast tip: Der KI-Podcast. Artificial intelligence is no longer a topic for the future - AI shapes our present. Artificially generated texts, images, and voices are everywhere ... and get better every week. What effects does AI have on the world of work, education, and society? How can you use artificial intelligence yourself in everyday life? And what happens when any piece of content on the internet could also be an AI fake? Gregor Schmalzried, Marie Kilg, and Fritz Espenlaub tackle the big and small questions of the AI revolution every Tuesday - and separate the facts from the hype. An ARD podcast from BR24 and SWR. https://www.ardaudiothek.de/sendung/der-ki-podcast/94632864/
Donald Trump will be the next US president - and his cabinet could be decisive for the future of the AI world. AI models already consume enormous amounts of energy for training and operation. Could the Trump administration accelerate this even further? And will AI thus become a burden on the environment? About the hosts: Gregor Schmalzried is a freelance tech journalist, speaker, and consultant, including for Bayerischer Rundfunk. Fritz Espenlaub is a freelance journalist and economist. He works for Bayerischer Rundfunk and the tech magazine 1E9, among others. In this episode: 0:00 Intro: Trump's new environment chief 2:50 How much energy does AI consume? 9:30 What does the Trump administration have planned? 15:00 Can you use AI with a clear conscience? 20:30 What have we done with AI? Merz deepfake and Coca-Cola ad. Links: Trump's environment chief on AI https://www.theverge.com/2024/11/12/24294483/donald-trump-ai-data-center-epa-lee-zeldin Mark Zuckerberg on energy consumption https://www.youtube.com/watch?v=i-o5YbNfmh0 Past energy consumption of data centers https://www.science.org/doi/10.1126/science.aba3758 Nuclear power: back to the future thanks to Big Tech? https://www.br.de/nachrichten/netzwelt/microsoft-google-und-amazon-setzen-auf-atomkraft-die-plaene,URRUJFl The IEA on emissions from data centers and other sectors https://www.iea.org/energy-system/ The Merz deepfake - what does it mean for the Bundestag election campaign? https://www.br.de/nachrichten/netzwelt/deepfake-von-merz-was-bedeutet-er-fuer-den-bundestagswahlkampf,UUCtkLv AI-generated Coca-Cola ad https://the-decoder.de/coca-cola-hat-seinen-ikonischen-weihnachts-spot-mit-generativer-ki-umgesetzt/ "KI verstehen" podcast episode: AI and the climate crisis - will artificial intelligence help us save the climate?
https://www.ardaudiothek.de/episode/ki-verstehen/ki-und-klimakrise-rettet-kuenstliche-intelligenz-mit-uns-das-klima/deutschlandfunk/13564793/ "Mein Leben als Spam" audio drama series https://1.ard.de/ki_podcast-mein_leben_als_spam Editorial team and contributors: David Beck, Cristina Cletiu, Chris Eckardt, Fritz Espenlaub, Marie Kilg, Mark Kleber, Gudrun Riedl, Christian Schiffer, Gregor Schmalzried Contact: We welcome questions and comments at podcast@br.de. Support us: If you like this podcast, we'd appreciate a rating on your favorite podcast platform. Subscribe to Der KI-Podcast in the ARD Audiothek or wherever you get your podcasts so you never miss an episode. And feel free to recommend us to others!
With AI running rampant across the internet, what will this do to it as a tool and cultural space? In this essay, I explore the concept of the 'Dead Internet Theory,' which posits that a significant portion of online content is generated by bots rather than humans. Is this valid? If so, what are the implications for online interactions, trust in content, and the rise of AI-generated material, including deepfakes? Because if this is the case, we will have increasing difficulty discerning real from fake information, and we face a future where trusted sources become increasingly valuable in navigating the digital landscape. Chapters (00:00) - Exploring Dead Internet Theory (03:28) - The Impact of Bots on Online Interaction (06:51) - Trust Issues in Online Reviews and Content (09:14) - The Rise of AI and Deepfakes (10:42) - The Future of Trusted Information Sources
In this engaging episode of Shantanu's podcast, we're joined by @VarunMayya, a visionary entrepreneur, tech innovator, and author known for his cutting-edge insights on technology and content creation. Together, Varun and Shantanu discuss a variety of compelling topics, from the potential of AI avatars and deepfake technology to the evolving relationship between AI and content creation. Varun offers his unique perspective on how to build long-term intellectual property and develop personal branding strategies. Discover how AI, technology, and innovative thinking are revolutionizing content creation and personal branding. Don't miss this thought-provoking conversation packed with actionable advice and forward-looking perspectives on tech, AI, and content creation!
Want to Join the Action IRL? Participate in Razorprenuer 2.0! Showcase your creativity with the Sensi Smart 3 Razor and you could stand a chance to win a contract worth ≈ ₹10 Lakh, influencer deals, and a feature with Shantanu on The Barbershop! Here's how to jump in:
1. Sign up through the link in bio.
2. Create a reel, ad, or post featuring the Sensi Smart 3 Razor.
3. Tag @Bombayshavingcompany with #IndiaKaSmartestCreatorKaun
This is your chance to shine! Show us what you have got and register here: https://bit.ly/3Y6LZU9
In this episode of The Cognitive Revolution, Nathan welcomes back Zvi Mowshowitz for an in-depth discussion on the latest developments in AI over the past six months. They explore Ilya's new superintelligence-focused startup, analyze OpenAI's O1 model, and debate the impact of Claude's computer use capabilities. The conversation covers emerging partnerships in big tech, regulatory changes, and the recent OpenAI profit-sharing drama. Zvi offers unique insights on AI safety, politics, and strategic analysis that you won't find elsewhere. Join us for this thought-provoking episode that challenges our understanding of the rapidly evolving AI landscape. Check out "Don't Worry About the Vase" Blog: https://thezvi.substack.com Be notified early when Turpentine drops new publications: https://www.turpentine.co/exclusiveaccess SPONSORS: Shopify: Shopify is the world's leading e-commerce platform, offering a market-leading checkout system and exclusive AI apps like Quikly. Nobody does selling better than Shopify. Get a $1 per month trial at https://shopify.com/cognitive Notion: Notion offers powerful workflow and automation templates, perfect for streamlining processes and laying the groundwork for AI-driven automation. With Notion AI, you can search across thousands of documents from various platforms, generating highly relevant analysis and content tailored just for you - try it for free at https://notion.com/cognitiverevolution Oracle Cloud Infrastructure (OCI): Oracle's next-generation cloud platform delivers blazing-fast AI and ML performance at 50% less for compute and 80% less for outbound networking compared to other cloud providers. OCI powers industry leaders with secure infrastructure and application development capabilities. New U.S. customers can get their cloud bill cut in half by switching to OCI before December 31, 2024 at https://oracle.com/cognitive SelectQuote: Finding the right life insurance shouldn't be another task you put off.
SelectQuote compares top-rated policies to get you the best coverage at the right price. Even in our AI-driven world, protecting your family's future remains essential. Get your personalized quote at https://selectquote.com/cognitive RECOMMENDED PODCAST: Unpack Pricing - Dive into the dark arts of SaaS pricing with Metronome CEO Scott Woody and tech leaders. Learn how strategic pricing drives explosive revenue growth in today's biggest companies like Snowflake, Cockroach Labs, Dropbox and more. Apple: https://podcasts.apple.com/us/podcast/id1765716600 Spotify: https://open.spotify.com/show/38DK3W1Fq1xxQalhDSueFg CHAPTERS: (00:00:00) Teaser (00:01:03) About the Episode (00:02:57) Catching Up (00:04:00) Ilya's New Company (00:06:10) GPT-4 and Scaling (00:11:49) User Report: GPT-4 (Part 1) (00:18:11) Sponsors: Shopify | Notion (00:21:06) User Report: GPT-4 (Part 2) (00:24:25) Magic: The Gathering (Part 1) (00:32:34) Sponsors: Oracle Cloud Infrastructure (OCI) | SelectQuote (00:34:58) Magic: The Gathering (Part 2) (00:35:59) Humanity's Last Exam (00:41:29) Computer Use (00:47:42) Industry Landscape (00:55:42) Why is Gemini Third? (01:04:32) Voice Mode (01:09:41) Alliances and Coupling (01:16:31) Regulation (01:24:58) Machines of Loving Grace (01:33:23) Taiwan and Chips (01:41:13) SB 1047 Veto (02:00:07) Arc AGI Prize (02:02:23) Deepfakes and UBI (02:09:06) Trump and AI (02:26:31) AI Manhattan Project (02:32:05) Virtue Ethics (02:38:40) Closing Thoughts (02:40:37) Outro SOCIAL LINKS: Website: https://www.cognitiverevolution.ai Twitter (Podcast): https://x.com/cogrev_podcast Twitter (Nathan): https://x.com/labenz LinkedIn: https://www.linkedin.com/in/nathanlabenz/ Youtube: https://www.youtube.com/@CognitiveRevolutionPodcast Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431 Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk
In this episode of The Patti Brennan Show, guest host Michael Brennan, CFP and son of Patti Brennan, joins Patti to chat about recent conferences she has attended. Michael actually runs the show today! From New York to Las Vegas, Patti has been traveling, speaking with, and interviewing some bright minds. This podcast focuses on today's most relevant issues, with Patti sharing insights gained from her experiences with remarkable people in the industry. Don't miss out on this episode of The Patti Brennan Show!
El Mundo De Cabeza - Christian Zurita, Deepfakes in the Age of AI, by FM Mundo 98.1
Another News & Research episode this week, and lots of interesting links and research discussed. Here's the weblinks for everything! News Deception and Deepfakes - How genAI is Changing Cyber Safety Course: https://spsc.thinkific.com/courses/take/deceptionanddeepfakes/ Leon Furze's online course: a free course about deepfakes, hosted by a deepfake, with deepfake content about deepfakes https://leonfurze.teachable.com/p/this-course-is-ai-generated McKinsey surveyed 1363 working professionals across countries, industries, and demographics about the use of generative AI at their workplaces. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai The Carbon Emissions of Writing and Illustrating Are Lower for AI than for Humans https://arxiv.org/ftp/arxiv/papers/2303/2303.06219.pdf Satya Nadella shared 200 examples globally of companies using AI https://blogs.microsoft.com/blog/2024/11/12/how-real-world-businesses-are-transforming-with-ai/ Rose Luckin's Beyond the Hype report https://www.educateventures.com/beyond-the-hype Brad Smith shares a Digital Twin of St Peter's Basilica in Rome https://youtu.be/5dQI_TXYJbU Research Spotting Research Fraud - a website that's fascinating https://www.academ-ai.info/ AI or Human? Evaluating Student Feedback Perceptions in Higher Education https://osf.io/preprints/osf/6zm83 Teacher Learning Network Journal - AI in Education special https://tln.org.au/Web/Web/TLN-Journals/TLN%20Journal%20Public.aspx Adoption and Impact of ChatGPT in Computer Science Education: A Case Study on a Database Administration Course https://arxiv.org/abs/2407.12145 Follow-Up Questions Improve Documents Generated by Large Language Models https://arxiv.org/abs/2407.12017 Educational Personalized Learning Path Planning with Large Language Models https://arxiv.org/abs/2407.11773
Many feared that the 2024 election would be affected, and perhaps decided, by AI-generated disinformation. While there was some to be found, it was far less than anticipated. But don't let that fool you: the disinfo threat is real — you're just not the target. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Perry Carpenter is the author of the new book FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI-Generated Deceptions. For this episode, we are discussing what it means to live in a world where artificial intelligence is altering our reality more and more, how we can navigate a deeply uncertain digital future, and what it all has to do with human nature and the nature of folklore. Check out Perry's book FAIK here Find The FAIK Files podcast on Apple and Spotify Find the Digital Folklore Podcast here Find Perry Carpenter on LinkedIn, Twitter, and 8th Layer Media Check out Perry's new op-ed with The Hill about our post-truth era Get tickets to our live show called A Massive Seance with You're Wrong About Become a Patron to support our show and get early ad-free episodes and bonus content Or subscribe to American Hysteria on Apple Podcasts Find us on Instagram @americanhysteriapodcast Leave us a message on our Urban Legends Hotline or get your mitts on some merch at americanhysteria.com Producer and Editor: Miranda Zickler Associate Producer: Riley Swedelius-Smith Hosted by Chelsey Weber-Smith Learn more about your ad choices. Visit megaphone.fm/adchoices
Deepfake porn is easier than ever to create, and students are suffering the consequences
This week, we are replaying the latest installment in our virtual roundtable series, titled “Understanding the Legal and Commercial Challenges of Disinformation and Deepfakes.” For those who may not be familiar, a deepfake is a type of synthetic media where images, videos, or audio may seem real, but have been manipulated or generated with artificial intelligence. While some synthetic or manipulated media have legitimate applications, the ability of deepfakes to exploit and spread disinformation poses various commercial and legal threats, which organizations must be prepared to address. Our panel included Catherine Porter, Chief Business Officer of Prove; Hon. Paul W. Grimm (Ret.), professor of the practice of law and director of the Bolch Judicial Institute at Duke Law School; and Puya Partow-Navid, partner at Seyfarth. We discussed a variety of topics, including the biggest risks of deepfakes that organizations must tackle for consumers, the impact of deepfakes on courtroom and evidentiary rules, the legal frameworks in place to protect against deepfakes and disinformation, and technological solutions and best practices for businesses.
Nintendo is suing a streamer of pirated software for millions of dollars, ChatGPT rejected 250,000 election deepfake requests, and Black people across the country have been getting racist texts about slave plantations. It's Monday, November 11th and this is Engadget News. Learn more about your ad choices. Visit podcastchoices.com/adchoices
In this episode of The Business of Tech, host Dave Sobel engages in a thought-provoking discussion with Aaron Painter, CEO of NameTag, about the evolving landscape of identity verification in the face of emerging threats like deepfakes. The conversation begins with an exploration of traditional identity verification methods, particularly multi-factor authentication (MFA), and their limitations. Painter emphasizes that while MFA is a critical first step in securing accounts, it is not sufficient on its own, especially when recovery processes can be exploited by malicious actors impersonating legitimate users. As the dialogue progresses, Painter introduces the concept of behavioral biometrics and AI-powered verification as potential advancements in security. He explains how these technologies can monitor user behavior to determine if the person accessing an account is indeed the same individual who logged in. However, he also highlights the challenges associated with these methods, particularly in scenarios where identity verification is crucial, such as during account recovery or hiring processes. The discussion underscores the need for more robust identity verification practices that go beyond traditional methods. The episode also delves into the growing adoption of passkeys as a solution for identity verification. Painter notes that while passkeys are gaining traction, particularly among larger tech companies, their implementation in small to mid-sized businesses remains limited. He points out that passkeys are not a complete replacement for passwords and still rely on traditional recovery methods, which can create vulnerabilities. This leads to a broader conversation about the importance of understanding the limitations of current technologies and the need for a comprehensive approach to identity security. Finally, the conversation shifts to the implications of deepfakes on identity verification.
Painter warns that the technology enabling deepfakes has become increasingly accessible, posing significant risks for organizations relying on visual verification methods. He advocates for a more advanced approach to identity verification that combines AI, cryptography, and biometrics to create a stronger defense against impersonation. The episode concludes with a call to action for organizations to rethink their identity verification processes and adapt to the evolving threat landscape, emphasizing the importance of knowing who users are in a digital world.
All our Sponsors: https://businessof.tech/sponsors/
Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/
Looking for a link from the stories? The entire script of the show, with links to articles, is posted in each story on https://www.businessof.tech/
Support the show on Patreon: https://patreon.com/mspradio/
Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech
Want our stuff? Cool Merch? Wear "Why Do We Care?" - Visit https://mspradio.myspreadshop.com
Follow us on:
LinkedIn: https://www.linkedin.com/company/28908079/
YouTube: https://youtube.com/mspradio/
Facebook: https://www.facebook.com/mspradionews/
Instagram: https://www.instagram.com/mspradio/
TikTok: https://www.tiktok.com/@businessoftech
Bluesky: https://bsky.app/profile/businessoftech.bsky.social
Join Film Theory Host MatPat as he explains one of the big problems with Deepfakes. Learn more about your ad choices. Visit megaphone.fm/adchoices
The mainstream media is in serious trouble. Trust was low before the election, and now even fewer people believe them
#536 Elephant Man Love Token - RHLSTP is back for its 30th series! Can this thing go on forever? Or does it just feel like it? Rich has had a sitcom experience with his bins and his son is disappointed with God. His returning guest is the great Peter Serafinowicz. The pair discuss Peter's embracing of live work and technology, the return of Brian Butterfield and why Peter stayed in character for two hours after each show, using deepfakes on stage to bring the dead back to life, whether Abba turned up every night to do the curtain call for their hologram show, filming Sassy Trump with the guys from South Park and performing with Brian Eno in Venice. Plus, which celebrity did Peter live round the corner from until recently, and is he able to get them on the podcast?
Buy Peter's Brian Butterfield book here - https://brianbutterfield.limitedrun.com/products/807187-theres-no-business-like-business
See a live recording of RHLSTP - https://richardherring.com/rhlstp
SUPPORT THE SHOW!
Watch our TWITCH CHANNEL
See extra content at our WEBSITE
Become a member at https://plus.acast.com/s/rhlstp. Hosted on Acast. See acast.com/privacy for more information.
Tariffs, Pygmy Goat, Schneider, SQLite and Dixie Flatline, Deepfakes, Military AI, Josh Marpet, and more on the Security Weekly News. Visit https://www.securityweekly.com/swn for all the latest episodes! Show Notes: https://securityweekly.com/swn-428
In the run up to the 2024 presidential election, the concern with outside interference in our elections and the political debate surrounding them has never been higher. In this environment, we thought it would be a good time to re-release a podcast episode from season one on deepfakes and disinformation, which was originally released in March of 2021. Please listen to former podcast co-host and current candidate for Denver District Attorney John Walsh, Partner Jason Chipman, former counsel Matthew Ferraro, and Nina Schick, who discuss the phenomenon of deepfakes and the implications they have on society as a whole.
Episode 644: Shaan Puri ( https://x.com/ShaanVP ) talks to author and political campaign expert Sasha Issenberg ( https://x.com/sissenberg ) about the marketing machine behind presidential elections. — Show Notes: (0:00) The malarky factory and The Harm Index (18:55) Trump's 2016 playbook (28:18) Kamala's viral clip factory (32:35) Who's running a better 2024 campaign? (35:04) What is Elon's role in Trump's campaign (38:31) The lopsided downside of doorknocks (42:25) Deepfakes and new tools in campaigning (45:58) "The podcast election" (50:00) The new playbook for winning in the age of meme wars (54:15) Most mis-priced opportunity in elections — Links: • Sasha Issenberg - https://www.sashaissenberg.com/ • The Victory Lab - https://www.amazon.com/Victory-Lab-Science-Winning-Campaigns/dp/0307954803 • The Lie Detectives - https://www.amazon.com/Lie-Detectives-Playbook-Elections-Disinformation/dp/B0CFN6WM3Y — Check Out Shaan's Stuff: Need to hire? You should use the same service Shaan uses to hire developers, designers, & Virtual Assistants → it's called Shepherd (tell ‘em Shaan sent you): https://bit.ly/SupportShepherd — Check Out Sam's Stuff: • Hampton - https://www.joinhampton.com/ • Ideation Bootcamp - https://www.ideationbootcamp.co/ • Copy That - https://copythat.com • Hampton Wealth Survey - https://joinhampton.com/wealth • Sam's List - http://samslist.co/ My First Million is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Arie Desormeaux // Editing by Ezra Bakker Trupiano
We knew that the 2024 election season in the US would include some tough stories involving tech and potential election interference. In this episode, we look at a few headlines of how tech may have already played a role in misinforming voters either on purpose or by accident. Plus many more headlines in this jumbo news episode!See omnystudio.com/listener for privacy information.
Deepfake videos are spreading rapidly, fueled by sophisticated open-source AI models. MIT researchers reveal that most of these videos are non-consensual porn, targeting celebrities like Taylor Swift. But now even high school and middle school students, predominantly females, are being targeted. UCLA professor John Villasenor joins The Excerpt to parse through the legislative and technological efforts to curb this surge of illicit content. We discuss the challenges of regulating AI-generated images, the importance of international cooperation, and offer practical advice for parents to protect their children from cyber sexual violence.See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
We know from various studies that young people are, unsurprisingly, using generative AI tools like chatbots and image generators, sometimes for homework, sometimes for fun and sometimes for malicious purposes. A recent survey from the Center for Democracy and Technology found that artificial intelligence is being used among high school students to create nonconsensual, illicit imagery — in other words, sexually explicit deepfakes. Marketplace’s Meghan McCarty Carino spoke with Elizabeth Laird, director of equity in civic technology at CDT, to learn more.