Podcasts about adversarial

  • 328 PODCASTS
  • 460 EPISODES
  • 41m AVG DURATION
  • 1 WEEKLY EPISODE
  • May 16, 2025 LATEST


Best podcasts about adversarial

Latest podcast episodes about adversarial

Make Your Damn Bed
1449 || how to resist demoralization

Make Your Damn Bed

Play Episode Listen Later May 16, 2025 9:23


Resisting subversion means we need to stop playing defense with our consumption habits and start playing offense. Get aggressive about deciding who and what you offer your attention. Identify content as Adversarial or Productive. Adversarial content leaves you feeling defeated, angry, upset, hateful, or more disconnected, or is trying to sell you something, attempting to elicit a negative emotional response. Productive content has an action item or leaves you with a sense of hope, direction, purpose, and support for your tool belt; if it's selling you something, it isn't using predatory tactics. It is useful, valuable, informative, accurate, and makes you feel more connected with humanity and/or community.

RESOURCE: https://www.theframelab.org/putin-failed-to-conquer-navalny-or/
Donate to Palestinian Children's Relief Fund: www.pcrf.net
Donate to Mutual Aid Funds: https://www.folxhealth.com/library/mutual-aid-funds
GET AN OCCASIONAL PERSONAL EMAIL FROM ME: www.makeyourdamnbedpodcast.com
TUNE IN ON INSTAGRAM FOR COOL CONTENT: www.instagram.com/mydbpodcast
OR BE A REAL GEM + TUNE IN ON PATREON: www.patreon.com/MYDBpodcast
OR WATCH ON YOUTUBE: www.youtube.com/juliemerica

The opinions expressed by Julie Merica and the Make Your Damn Bed podcast are intended for entertainment purposes only. The Make Your Damn Bed podcast is not intended or implied to be a substitute for professional medical advice, diagnosis, or treatment.

Get bonus content on Patreon. Support this show: http://supporter.acast.com/make-your-damn-bed. Hosted on Acast. See acast.com/privacy for more information.

Brain Inspired
BI 211 COGITATE: Testing Theories of Consciousness

Brain Inspired

Play Episode Listen Later May 7, 2025 119:40


Support the show to get full episodes, full archive, and join the Discord community. The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. Read more about our partnership. Sign up for Brain Inspired email alerts to be notified every time a new Brain Inspired episode is released.

Rony Hirschhorn, Alex Lepauvre, and Oscar Ferrante are three of the many, many scientists who comprise the COGITATE group. COGITATE is an adversarial collaboration project to test theories of consciousness in humans, in this case pitting the integrated information theory of consciousness against the global neuronal workspace theory of consciousness. What does "adversarial collaboration" mean? It's adversarial in that two theories of consciousness are being pitted against each other. It's a collaboration in that the proponents of the two theories had to agree on what experiments could be performed that could possibly falsify the claims of either theory. The group has just published the results of the first round of experiments in a paper titled "Adversarial testing of global neuronal workspace and integrated information theories of consciousness," and this is what Rony, Alex, and Oscar discuss with me today. The short summary: they used a simple task and measured brain activity with three different methods (EEG, MEG, and fMRI), and made predictions about where in the brain correlates of consciousness should be, how that activity should be maintained over time, and what kinds of functional connectivity patterns should be present between brain regions.

The take-home is a mixed bag, with neither theory fully falsified, but with a ton of data and results for the world to ponder and build on, to hopefully continue to refine and develop theoretical accounts of how brains and consciousness are related. So we discuss the project itself, many of the challenges the team faced, their experiences and reflections on working together, the nature of an adversarial collaboration when so much is at stake for the proponents of each theory, and, as you heard last episode with Dean Buonomano, when one of the theories, IIT, is surrounded by a bit of controversy regarding whether it should even be considered a scientific theory.

COGITATE
Oscar Ferrante: @ferrante_oscar
Rony Hirschhorn: @RonyHirsch
Alex Lepauvre: @LepauvreAlex
Paper: Adversarial testing of global neuronal workspace and integrated information theories of consciousness
BI 210 Dean Buonomano: Consciousness, Time, and Organotypic Dynamics

0:00 - Intro
4:00 - COGITATE
17:42 - How the experiments were developed
32:37 - How data was collected and analyzed
41:24 - Prediction 1: Where is consciousness?
47:51 - The experimental task
1:00:14 - Prediction 2: Duration of consciousness-related activity
1:18:37 - Prediction 3: Inter-areal communication
1:28:28 - Big picture of the results
1:44:25 - Moving forward

INTHEBLACK
AI and cybersecurity: penetration tester reveals key dangers

INTHEBLACK

Play Episode Listen Later Apr 7, 2025 31:23


As organisations big and small integrate artificial intelligence into their operations, understanding the vulnerabilities that come with AI systems is essential. In this episode, we'll explore the crucial intersection between AI and cybersecurity. You'll gain insights on AI systems, common pitfalls in AI security and specialist tips for businesses to navigate this dynamic landscape.

This episode covers areas such as:
  • Adversarial machine learning (AML)
  • The fundamental difference between AI and IT security
  • AI model vulnerabilities
  • Expanded attack surface via unstructured inputs
  • Key pitfalls in AI adoption
  • Risk mitigation

Tune in now for specialist advice from a leading expert in the field.

Host: Garreth Hanley, Podcast Producer, CPA Australia
Guest: Miranda R, an offensive security team manager at Malware Security, and an AI vulnerability researcher and trainer with Mileva, where she conducts penetration testing for various sectors, including government and private industry.

Want to learn more? Head online to Malsec and Mileva. You can also read an insightful post by Miranda R on her LinkedIn, as well as a news story about an ID system failure in the US involving a fraudster and how a North Korean hacker duped a cybersecurity firm. You can find a CPA at our custom portal on the CPA Australia website.

Would you like to listen to more INTHEBLACK episodes? Head to CPA Australia's YouTube channel.

CPA Australia publishes four podcasts, providing commentary and thought leadership across business, finance and accounting:
  • With Interest
  • INTHEBLACK
  • INTHEBLACK Out Loud
  • Excel Tips

Search for them in your podcast platform. Email the podcast team at podcasts@cpaaustralia.com.au

Root Causes: A PKI and Security Podcast
Root Causes 479: AI Adversarial Machine Learning

Root Causes: A PKI and Security Podcast

Play Episode Listen Later Mar 21, 2025 13:10


In this episode we discuss the thinking on how adversaries can exploit the flaws in AI models to achieve unexpected and dangerous results. We explore some potential paths of defense against attacks of this sort.
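To make the episode's subject concrete, here is a minimal sketch (my own illustration, not code from the show) of the classic fast gradient sign method (FGSM), one of the best-known ways an adversary can exploit a model's gradients. The "model" is a hand-rolled logistic regression, and every name and number below is invented for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A toy linear classifier: predicts class 1 when w.x + b > 0.
w = np.array([2.0, -1.5])
b = 0.1

def predict(x):
    """Probability that x belongs to class 1."""
    return sigmoid(w @ x + b)

x = np.array([0.5, 0.2])           # clean input, confidently class 1
# For a linear model, the gradient of the class-1 score with respect
# to the input is just w, so the FGSM step needs no autodiff here.
eps = 0.4                          # attack budget: max change per feature
x_adv = x - eps * np.sign(w)       # step against the gradient sign

print(predict(x), predict(x_adv))  # confidence collapses under the attack
```

With `eps = 0.4` the perturbed input crosses the decision boundary: the model's class-1 probability falls from about 0.69 to about 0.35. Against deep networks the same idea applies, except the input gradient comes from backpropagation rather than being the weight vector itself.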

Family Disappeared
Dr. Bernet Talks About The Intricacies Of Parental Alienation Part 2 - Episode 85

Family Disappeared

Play Episode Listen Later Mar 17, 2025 26:53 Transcription Available


In this episode of the Family Disappeared podcast, Lawrence Joss continues the conversation with Dr. William Bernet, focusing on the innovative resources and strategies being developed to address parental alienation. They discuss a new book aimed at educating judges and mental health professionals, the importance of the Parental Alienation Study Group (PASG), and the need for accurate information in the field. Dr. Bernet shares insights on the challenges faced by mental health professionals and advocates for awareness of misinformation surrounding parental alienation. The episode concludes with a hopeful story of reunification between a parent and child, emphasizing the potential for positive outcomes in challenging situations.

Key Takeaways
- The new book on parental alienation will be concise and free. Its title will be "Parental Alienation Theory: official synopsis".
- Innovative testing is being integrated into evaluations for parental alienation.
- The PASG aims to educate and support professionals and families.
- Misinformation about parental alienation is prevalent and harmful.
- Mental health professionals need to seek out education on parental alienation.
- Early intervention can lead to positive outcomes for families.
- Community support is important in dealing with parental alienation.
- Adversarial collaboration can help resolve disagreements in the field.
- The new book will be available in multiple languages.
- Staying in touch with children can lead to reunification after alienation.

Chapters
00:00 Introduction to Parental Alienation and New Resources
03:03 Innovative Testing and Evaluation in Parental Alienation
10:20 The Launch of a Groundbreaking Book
15:16 Understanding the Parental Alienation Study Group (PASG)
18:11 Advice for Mental Health Professionals
22:58 Misinformation and Advocacy in Parental Alienation
25:29 Positive Outcomes and Hopeful Stories

Know more about Dr. Bernet and PASG here: https://www.pasg.info/
Book title: "Parental Alienation Theory: official synopsis"

If you wish to connect with Lawrence Joss or any of the PA-A community members who have appeared as guests on the podcast:
Email: familydisappeared@gmail.com
Linktree: https://linktr.ee/lawrencejoss (all links mentioned in the podcast are available in Linktree)

Please donate to support PAA programs: https://www.paypal.com/donate?hosted_button_id=SDLTX8TBSZNXS

This podcast is made possible by the Family Disappeared Team:
Anna Johnson: Editor/Contributor/Activist/Co-host
Glaze Gonzales: Podcast Manager

Connect with Lawrence Joss:
Website: https://parentalalienationanonymous.com/
Email: familydisappeared@gmail.com

Bad Faith
Episode 456 - The "Adversarial Left" Responds

Bad Faith

Play Episode Listen Later Mar 13, 2025 98:05


Subscribe to Bad Faith on Patreon to instantly unlock our full premium episode library: http://patreon.com/badfaithpodcast Following Brie's explosive interview with former Rep. Jamaal Bowman about Force The Vote and the usefulness of The Squad, socialist former Seattle City Councilmember and founder of independent movement organization Workers Strike Back Kshama Sawant returns to Bad Faith to offer her rebuttal to the "inside/outside" strategy for moving the country left. As part of the adversarial left, Kshama describes her version of the left response to the current political moment and debriefs Briahna on a recent Piers Morgan interview about Trump's first month in office. Subscribe to Bad Faith on YouTube for video of this episode. Find Bad Faith on Twitter (@badfaithpod) and Instagram (@badfaithpod). Produced by Armand Aviram. Theme by Nick Thorburn (@nickfromislands).

BTC Sessions
What's Lurking in Your Device? Adversarial Thinking Unveiled with NVK – Mentor Sessions Ep.002

BTC Sessions

Play Episode Listen Later Mar 13, 2025 56:12


In this electrifying episode of Mentor Sessions, we dive deep with NVK, the enigmatic founder of Coinkite and mastermind behind the Coldcard wallet, including the sleek new Coldcard Q (yes, I'm a fan too!). NVK unpacks adversarial thinking, the paranoid art of outsmarting attackers before they strike. From designing bulletproof Bitcoin hardware to everyday OpSec blunders (porn and banking on the same browser, anyone?), NVK reveals why only the paranoid survive in a world of infinite threats. Plus, hilarious tangents, real-world horror stories, and a surprising twist on staying invisible. Are you ready to rethink security?

Timestamps:
• 00:00 – Intro: Meet NVK & Coinkite's Coldcard Legacy
• 00:57 – What is Adversarial Thinking? (Spoiler: It's Not Just Paranoia)
• 06:39 – Coldcard's Battle-Tested Edge: $1M Attacks & Beyond
• 19:45 – Epic Fails: How Smart People Lose Bitcoin
• 31:10 – Inside Coldcard's Silicon: Trust-Minimized Genius
• 40:35 – Must-Have Hardware Wallet Checklist

Connect with NVK:
• Twitter/X: @nvk
• Nostr: primal.net/nvk

Explore Coinkite:
• Website: coinkite.com
• Bitcoin Security Guide: bitcoinsecurity.org
• Podcast: Bitcoin Review - https://bitcoin.review/

FREE Bitcoin Book Giveaway: New to Bitcoin? Grab a free copy of Magic Internet Money by Jesse Berger! CLICK THE LINK: https://bitcoinmentororange.com/magic-internet-money

Don't have enough time to learn Bitcoin Self-Custody? Get personalized, expert guidance at Bitcoin Mentor. Check out https://bitcoinmentor.io/

Subscribe to Mentor Sessions: Don't miss an episode, hit that subscribe button and follow us:
• BTC Sessions: https://x.com/BTCsessions
• Nathan: https://x.com/theBTCmentor
• Gary: https://x.com/GaryLeeNYC

BITCOIN WELL is the quickest and easiest way to get Bitcoin directly into self-custody. They also offer non-KYC sells and bill payments. Transparent 1% spread, no additional fees and no withdrawal fees. Check them out today! https://qrco.de/bfiDC6

COINKITE offers the BEST Bitcoin hardware on the market. Use this link to get 8% off anything in their store: https://qrco.de/bfiDBV

Mine Bitcoin like a pro!

NeuroDiverse Christian Couples
JTG - The Contextually Appropriate Penguin Skeleton

NeuroDiverse Christian Couples

Play Episode Listen Later Mar 12, 2025 44:23


In this lively conversation, the "Just the Guys" team explores how crucial context can be, whether you're deciphering a strange image, navigating a complex relationship, or fielding unexpected life events. They kick things off by sharing news of the new book *Uniquely Us* and then dive into fascinating parallels between fooling AI systems with "adversarial examples" and how our assumptions can mislead us in everyday life. By using an analogy of seeing a "penguin in the desert" (something bizarre or out of place), they illustrate how missing details can trigger confusion, misunderstandings, or emotional reactions.

Throughout the discussion, they highlight the importance of curiosity and humility to uncover the "why" behind surprising information, much like slowing down to determine if that shape in the dark is an intruder or just a coat on a chair. They also introduce practical techniques (like the C.A.L.M. method) to help us pause, gather context, and respond gracefully rather than reacting rashly. It's an honest, lighthearted look at how we can avoid "kicking the penguin" in our personal and professional relationships by learning to seek more information, accept when we've made mistakes, and ultimately prioritize connection over "being right."

Mentioned in This Episode:
- Uniquely Us, a new release about neurodiverse Christian marriages
- Adversarial examples in AI and what they reveal about our own blind spots
- The C.A.L.M. method (Connect, Assess, Lean in, Move forward) for managing emotional triggers
- Why "winning the relationship" matters more than winning any single argument

Have a topic you'd like the guys to unpack? Drop them a note via the show's website. Feedback and questions are always welcome!

The Engineering Leadership Podcast
Data governance, adversarial testing & more - Essential strategies to scale AI responsibly in a shifting regulatory environment w/ Jim Palmer #210

The Engineering Leadership Podcast

Play Episode Listen Later Mar 4, 2025 48:58


ABOUT JIM PALMER
Jim Palmer is the Chief AI Officer at Dialpad. Previously he was CTO and Co-Founder of TalkIQ, a conversational intelligence start-up with expertise in real-time speech recognition and natural language processing, acquired by Dialpad in May of 2018. Prior to TalkIQ, he was the founding engineer on the eBay Now local delivery service.

SHOW NOTES:
Tips and cheat codes for navigating AI governance (3:30)
Breaking down red teaming & adversarial testing in AI governance (8:02)
Launching and scaling adversarial testing efforts (11:27)
Unexpected benefits unlocked with adversarial testing (13:43)
Understanding data governance and strategic AI investments (15:38)
Building resilient AI from concept to customer validation (19:28)
Exploring early feature validation and pattern recognition in AI (22:38)
Adaptability in data management and ensuring safe, ethical data use while adapting to evolving legal and governance requirements (26:51)
How to prepare data for safe and sustainable long-term use (30:02)
Strategies for compliant data practices in a regulated world (32:43)
Building data deletion systems with model training in mind (35:14)
Current events and trends shaping adaptability and durability in the AI ecosystem (38:38)
The role of a Chief AI Officer (41:20)
Rapid fire questions (44:35)

LINKS AND RESOURCES
Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World - With deep and exclusive reporting, across hundreds of interviews, New York Times Silicon Valley journalist Cade Metz brings you into the rooms where these questions are being answered, where an extraordinarily powerful new artificial intelligence has been built into our biggest companies, our social discourse, and our daily lives, with few of us even noticing.

This episode wouldn't have been possible without the help of our incredible production team:
Patrick Gallagher - Producer & Co-Host
Jerry Li - Co-Host
Noah Olberding - Associate Producer, Audio & Video Editor: https://www.linkedin.com/in/noah-olberding/
Dan Overheim - Audio Engineer (Dan's also an avid 3D printer: https://www.bnd3d.com/)
Ellie Coggins Angus - Copywriter; check out her other work at https://elliecoggins.com/about/

The Adversarial Mind: Defeating AI Defenses with Nicholas Carlini of Google DeepMind

Play Episode Listen Later Feb 27, 2025 154:38


In this episode, security researcher Nicholas Carlini of Google DeepMind delves into his extensive work on adversarial machine learning and cybersecurity. He discusses his pioneering contributions, which include developing attacks that have challenged the defenses of image classifiers and exploring the robustness of neural networks. Carlini details the inherent difficulties of defending against adversarial attacks, the role of human intuition in his work, and the potential of scaling attack methodologies using language models. He also addresses the broader implications of open-source AI and the complexities of balancing security with accessibility in emerging AI technologies.

SPONSORS:
SafeBase: SafeBase is the leading trust-centered platform for enterprise security. Streamline workflows, automate questionnaire responses, and integrate with tools like Slack and Salesforce to eliminate friction in the review process. With rich analytics and customizable settings, SafeBase scales to complex use cases while showcasing security's impact on deal acceleration. Trusted by companies like OpenAI, SafeBase ensures value in just 16 days post-launch. Learn more at https://safebase.io/podcast

Oracle Cloud Infrastructure (OCI): Oracle's next-generation cloud platform delivers blazing-fast AI and ML performance with 50% less for compute and 80% less for outbound networking compared to other cloud providers. OCI powers industry leaders like Vodafone and Thomson Reuters with secure infrastructure and application development capabilities. New U.S. customers can get their cloud bill cut in half by switching to OCI before March 31, 2024 at https://oracle.com/cognitive

Shopify: Shopify is revolutionizing online selling with its market-leading checkout system and robust API ecosystem. Its exclusive library of cutting-edge AI apps empowers e-commerce businesses to thrive in a competitive market. Cognitive Revolution listeners can try Shopify for just $1 per month at https://shopify.com/cognitive

NetSuite: Over 41,000 businesses trust NetSuite by Oracle, the #1 cloud ERP, to future-proof their operations. With a unified platform for accounting, financial management, inventory, and HR, NetSuite provides real-time insights and forecasting to help you make quick, informed decisions. Whether you're earning millions or hundreds of millions, NetSuite empowers you to tackle challenges and seize opportunities. Download the free CFO's guide to AI and machine learning at https://netsuite.com/cognitive

RECOMMENDED PODCAST: Second Opinion
Join Christina Farr, Ash Zenooz and Luba Greenwood as they bring influential entrepreneurs, experts and investors into the ring for candid conversations at the frontlines of healthcare and digital health every week.
Spotify: https://open.spotify.com/show/0A8NwQE976s32zdBbZw6bv
Apple: https://podcasts.apple.com/us/podcast/second-opinion-with-christina-farr-ash-zenooz-md-luba/id1759267211
YouTube: https://www.youtube.com/@SecondOpinionwithChristinaFarr

SOCIAL LINKS:
Website: https://www.cognitiverevolution.ai
Twitter (Podcast): https://x.com/cogrev_podcast
Twitter (Nathan): https://x.com/labenz
LinkedIn: https://linkedin.com/in/nathanlabenz/
Youtube: https://youtube.com/@CognitiveRevolutionPodcast
Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431
Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk

PRODUCED BY: https://aipodcast.ing
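The attacks Carlini is known for frame adversarial examples as optimization problems: search, within a perturbation budget, for the smallest change that drives the model to a chosen wrong label. As a hedged illustration (my own toy code, not Carlini's), here is a projected-gradient, targeted attack on a tiny 3-class linear classifier; every number below is invented for the sketch.

```python
import numpy as np

# Toy 3-class linear classifier: each row of W scores one class.
W = np.array([[ 1.0,  0.0],
              [ 0.0,  1.0],
              [-1.0, -1.0]])

def logits(x):
    return W @ x

def pred(x):
    return int(np.argmax(logits(x)))

x0 = np.array([1.0, 0.2])   # clean input, classified as class 0
target = 1                  # the label the attacker wants instead
eps, step = 0.5, 0.1        # perturbation budget and step size

x = x0.copy()
for _ in range(50):
    if pred(x) == target:
        break                                # attack succeeded
    # For a linear model, the gradient of (target logit - predicted
    # logit) with respect to x is exactly W[target] - W[pred(x)].
    grad = W[target] - W[pred(x)]
    x = x + step * np.sign(grad)             # move toward the target class
    x = x0 + np.clip(x - x0, -eps, eps)      # project back into the eps-ball

print(pred(x0), pred(x))                     # 0 1: targeted misclassification
```

Real attacks on image classifiers run the same loop with gradients from backpropagation over thousands of pixels, which is part of why Carlini describes defending against a motivated attacker as so difficult.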

Lawyer Talk Off The Record
Understanding Temporary Restraining Orders in Constitutional Disputes

Lawyer Talk Off The Record

Play Episode Listen Later Feb 14, 2025 15:38 Transcription Available


Are we experiencing a "constitutional crisis"? I explore the intricacies of legal procedures involving the challenge of executive actions, particularly focusing on the recent controversies surrounding President Trump's administration and its handling of governmental directives. I break down the essential concepts of standing, temporary restraining orders, and judicial review, illustrating how the legal system is designed to handle such conflicts. Be assured that the reported "crisis" is part of the normal legal process and highlights the resilience of our constitutional framework. Adversarial debates can lead to meaningful resolutions in the judicial system, and understanding these conflicts is crucial for developing landmark Supreme Court decisions.

Here's a quick breakdown of key takeaways:
Understanding Legal Procedures: The process of challenging government actions through courts is a fundamental right. From filing lawsuits in district courts to potential escalations to the U.S. Supreme Court, these steps are essential in maintaining the checks and balances within our government.
Temporary Restraining Orders Explained: Often a hot topic in legal discussions, temporary restraining orders (TROs) are pivotal in halting government actions temporarily, allowing the courts to deliberate on potential constitutional violations without causing irreparable harm to those affected.
Historical Context and Resolution: Our legal system is designed to withstand conflicts and evolve through landmark decisions. While today's media might intensify the narrative, historical precedents show that such legal debates have long been pathways to clarification and resolution.

Submit your questions to www.lawyertalkpodcast.com. Recorded at Channel 511.

Stephen E. Palmer, Esq. has been practicing criminal defense almost exclusively since 1995. He has represented people in federal, state, and local courts in Ohio and elsewhere. Though he focuses on all areas of criminal defense, he particularly enjoys complex cases in state and federal courts. He has unique experience handling and assembling top defense teams of attorneys and experts in cases involving allegations of child abuse (false sexual allegations, false physical abuse allegations), complex scientific cases involving allegations of DUI and vehicular homicide with blood alcohol tests, and any other criminal cases that demand jury trial experience. Steve has handled numerous high-publicity cases that have garnered national attention. For more information about Steve and his law firm, visit Palmer Legal Defense.

Copyright 2025 Stephen E. Palmer - Attorney At Law

YusufOnSecurity.com
210 - Adversarial Misuse of Generative AI

YusufOnSecurity.com

Play Episode Listen Later Feb 8, 2025 50:21


Enjoying the content? Let us know your feedback!

As AI-generated content becomes more advanced, the risk of adversarial misuse, where bad actors manipulate AI for malicious purposes, has skyrocketed. But what does this mean in practical terms? What risks do we face, and how is one of the big players addressing them? Stick around as we break down Google's Adversarial Misuse of Generative AI report, explain the key jargon, and bust a cybersecurity myth at the end of the show.

Before we get into the main topic, let's have a look at one important news update: Microsoft has expanded its Windows 11 administrator protection tests.

- https://cloud.google.com: Adversarial Misuse of Generative AI
- https://deepmind.google: Mapping the misuse of generative AI
- https://learn.microsoft.com: User Account Control overview
- https://learn.microsoft.com: How User Account Control works

Be sure to subscribe! You can also stream from https://yusufonsecurity.com, where you will find a list of all previous episodes too.

Informatik für die moderne Hausfrau
Folge 32 - Adversarial Attacks: Wie sich KI-Systeme austricksen lassen

Informatik für die moderne Hausfrau

Play Episode Listen Later Jan 22, 2025 20:47


It is hard to imagine today's world without artificial intelligence, which is used in many areas of our everyday lives. That makes it all the more important to protect systems that use AI, and the models underlying them, from manipulation. In the 32nd episode of Informatik für die moderne Hausfrau, we look at so-called adversarial attacks, that is, malicious attacks that can trick AI systems and even cause lasting damage. To do so, we first recap what the term "AI" actually means (usually machine learning) and how AI works. We also look at how the consequences of adversarial attacks can be mitigated.

You can learn more about adversarial attacks on AI systems here: https://spectrum.ieee.org/slight-street-sign-modifications-can-fool-machine-learning-algorithms
The scientific publication the article is based on can be found here: https://arxiv.org/pdf/1707.08945
You can download the program "Nightshade" from the project website, which also explains its background: https://nightshade.cs.uchicago.edu/whatis.html
You can try out adversarial attacks yourself here: https://kennysong.github.io/adversarial.js/
A comprehensive overview of adversarial attacks and other threats to AI systems, attack strategies, and countermeasures can be found here: https://atlas.mitre.org/

All information about the podcast is available on its website, https://www.informatik-hausfrau.de. To get in touch, feel free to email me at mail@informatik-hausfrau.de or reach out via social media. On Instagram and Bluesky, the podcast can be found under the handle @informatikfrau (or @informatikfrau.bsky.social).

If you enjoy this podcast, please subscribe and leave a positive rating or a short review to help it gain more visibility. You can write reviews on Apple Podcasts, for example, or on panoptikum.social. If you would like to support the production of the podcast financially, you can do so via the platform Steady; more information is available here: https://steadyhq.com/de/informatikfrau If you would like to 'throw something in the hat' another way, you can do so (without registration) via the platform Ko-fi: https://ko-fi.com/leaschoenberger

This podcast is supported by the Kulturbüro of the City of Dortmund.
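One standard mitigation in this space (my own illustration, not necessarily the one the episode covers) is adversarial training: augmenting training data with attacked copies of each example so the model learns to resist small perturbations. The sketch below uses logistic regression with FGSM-style shifts; for a purely linear model the robustness gain is limited, so the point here is the mechanics of the training loop, and all names and numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy 2-feature binary dataset: the label is 1 exactly when x0 + x1 > 0.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def train_adversarially(eps=0.3, lr=0.5, epochs=300):
    """Logistic regression trained on FGSM-perturbed inputs."""
    w, b = np.zeros(2), 0.0
    for _ in range(epochs):
        Xb = X
        if np.any(w):
            # FGSM: shift every point against its own class; for a linear
            # model the input gradient is exactly the weight vector w.
            shift = eps * np.sign(w)
            Xb = X + np.where(y[:, None] == 1.0, -shift, shift)
        p = sigmoid(Xb @ w + b)
        g = p - y                        # gradient of the logistic loss
        w -= lr * (Xb.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

w, b = train_adversarially()
clean_acc = ((sigmoid(X @ w + b) > 0.5) == y.astype(bool)).mean()
print(clean_acc)   # the hardened model still classifies clean data well
```

With deep networks the same recipe applies, except the perturbations are computed by backpropagation inside each training step, which is why adversarial training is considerably more expensive than ordinary training.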

Monsters and Treasure
Adversarial or Ads for Cereal

Monsters and Treasure

Play Episode Listen Later Jan 2, 2025 33:10


Legal Grounds | Conversations on Life, Leadership & Law
Legal Grounds | Mark Perkins On Sustainability Over Balance, the Power of Kindness in an Adversarial System, and Why Seeking Help Doesn't Make You Less Competent

Legal Grounds | Conversations on Life, Leadership & Law

Play Episode Listen Later Dec 11, 2024 58:29


When discussing any task that was accomplishable but time-consuming, one of my old man's go-to phrases was, "Hell, you can stand on your head and gargle peanut butter for (insert amount of time), so you better get to it."

It's a phrase my sons and I still use today, and while there is more jest than earnestness in its use, there are times in life, be they professional or personal, when we all have to gargle peanut butter. And while in most professions, and hopefully all relationships, the goal is to get to a point where this ISN'T the case, when someone decides to become a trial attorney, they are signing up for a job that guarantees there will be times when work cannot be 'balanced'.

But as my guest and I discussed this week, just because we know we're going to have to "gargle peanut butter" at some point, say, when a major case goes to trial, that doesn't mean we have to treat everything that crosses our desk like it's a top priority.

As a former prosecutor and trial attorney whose firm just celebrated its 25th year in business, Mark Perkins shares how he had to learn these lessons the hard way. In his words, after powering through much of his career on adrenaline, he began to feel off. But the journey it led him on was filled with valuable lessons on kindness and the power of vulnerability when it comes to dealing with mental health.

It was a wonderful and story-filled conversation that, if anything, has plenty of laughs. As always, enjoy the show.

Follow Him: A Come, Follow Me Podcast featuring Hank Smith & John Bytheway
Ether 12-15 Part 2 • Dr. Doug Benson • November 25 - December 1 • Come Follow Me

Follow Him: A Come, Follow Me Podcast featuring Hank Smith & John Bytheway

Play Episode Listen Later Nov 20, 2024 45:08


Dr. Doug Benson continues to explore Moroni's treatise on charity and the warning signs for our days regarding anger, pride, and contempt.

SHOW NOTES/TRANSCRIPTS
English: https://tinyurl.com/podcastBM48EN
French: https://tinyurl.com/podcastBM48FR
German: https://tinyurl.com/podcastBM48DE
Portuguese: https://tinyurl.com/podcastBM48PT
Spanish: https://tinyurl.com/podcastBM48ES

YOUTUBE
https://youtu.be/_d4DA9UbksQ

ALL EPISODES/SHOW NOTES
followHIM website: https://www.followHIMpodcast.com

FREE PDF DOWNLOADS OF followHIM QUOTE BOOKS
New Testament: https://tinyurl.com/PodcastNTBook
Old Testament: https://tinyurl.com/PodcastOTBook

WEEKLY NEWSLETTER
https://tinyurl.com/followHIMnewsletter

SOCIAL MEDIA
Instagram: https://www.instagram.com/followHIMpodcast
Facebook: https://www.facebook.com/followhimpodcast

TIMECODE
00:00 Part II - Dr. Doug Benson
03:22 Loving yourself
07:08 Ignore the haters
08:28 Faith involves action
11:14 Opposition between things
13:22 Pride, envy, and contempt
15:39 Ether 14:25-15:19 - Anger leads to destruction
17:14 Ether 15:22-25 - Drunken with anger
18:25 Acting in faith vs acting on emotion
19:37 Star Trek or Harry Potter analogy
22:55 The Wise Mind and handling anger
27:50 Ether 15:16 - Radical Acceptance
30:41 He comes to himself
32:04 Adversarial relationships
35:35 Mindfulness and not acting on negative emotion
38:33 Becoming a peacemaker
40:27 How does the Book of Mormon compare?
45:51 End of Part 2 - Dr. Doug Benson

Thanks to the followHIM team:
Steve & Shannon Sorensen: Cofounder, Executive Producer, Sponsor
David & Verla Sorensen: Sponsors
Dr. Hank Smith: Co-host
John Bytheway: Co-host
David Perry: Producer
Kyle Nelson: Marketing, Sponsor
Lisa Spice: Client Relations, Editor, Show Notes
Jamie Neilson: Social Media, Graphic Design
Will Stoughton: Video Editor
Krystal Roberts: Translation Team, English & French Transcripts, Website
Ariel Cuadra: Spanish Transcripts

"Let Zion in Her Beauty Rise" by Marshall McDonald
https://www.marshallmcdonaldmusic.com

CERIAS Security Seminar Podcast
Mason Rice, Adversarial C2 inside OT Networks

CERIAS Security Seminar Podcast

Play Episode Listen Later Nov 20, 2024 50:35


This presentation outlines adversarial command and control attacks in OT networks.  Focusing on the electrical grid, this presentation highlights current gaps in critical infrastructure protection research.  After discussing real-world examples, a fictional electrical grid is used to explore cyber-physical threats and mitigations to OT systems. About the speaker: Dr. Mason Rice is the director of the Cyber Resilience and Intelligence Division at Oak Ridge National Laboratory. In this role, he is responsible for an R&D portfolio focused on advanced intelligent systems and resilient cyber-physical systems, including research into (1) AI for national security, (2) cybersecurity for critical systems, (3) machine-augmented intelligence, (4) vulnerability science, and (5) identity science.Following retirement from the Army, Dr. Rice joined ORNL in 2017 as the Cyber-Physical R&D Manager and was soon appointed as the first Group Leader for Resilient Cyber-Physical Systems at ORNL. He ultimately grew the group into four focused research groups, at which point he was selected to be the first Section Head of the new Resilient Cyber-Physical Systems Section.

EM360 Podcast
AI Personas: A Fine Line Between Friend and Fiasco

EM360 Podcast

Play Episode Listen Later Nov 18, 2024 26:57


AI Personas are the cornerstone of how these systems interact with users, delivering tailored and engaging experiences. These personas—crafted from user research, behavioural insights, and cultural contexts—help define an AI's tone, style, and decision-making approach. Whether it's a friendly virtual assistant or a professional customer service bot, personas ensure that AI systems resonate with their audiences while maintaining a consistent identity. However, developing personas for AI isn't without its challenges. Ensuring that AI responses remain appropriate, ethical, and unbiased while preserving a unique persona requires careful consideration. From avoiding stereotypes to addressing edge cases, the process demands robust testing and a clear understanding of how diverse user interactions can unfold. When personas fail to account for the complexity of real-world scenarios, the risk of inappropriate or harmful responses increases. By combining creative storytelling with ethical AI design principles, organisations can navigate these challenges and build AI systems that are engaging and responsible in their behaviour.In this episode, Paulina Rios Maya, Head of Industry Relations, speaks to Cobus Greyling, Chief Evangelist at Kore.ai, about the influence of cultural norms and value systems on AI and strategies for maintaining control over AI behaviour. 
Key Takeaways:
Personas in AI shape user interactions and trust.
Cultural norms influence AI decision-making processes.
Balancing control and agency is crucial for effective AI.
Adversarial attacks can undermine AI reliability.
Transparency is essential for user confidence in AI.
Organisations should not offload too much responsibility to AI.
AI should enhance human creativity, not replace it.
Proof of value is necessary for AI technology implementation.
Chapters:
00:00 - Introduction to AI Personas and Their Impact
02:34 - The Role of Personas in AI Behavior
05:51 - Challenges in Ensuring Appropriate AI Responses
09:07 - Cultural Norms and Value Systems in AI
10:30 - Balancing Control and Agency in AI
14:14 - Strategies for Maintaining Control Over AI Behavior
21:24 - The Importance of Responsibility in AI Usage

Two Outspoken
Is Bernie Right that Democrats Abandoned the Working Class?

Two Outspoken

Play Episode Listen Later Nov 8, 2024 9:40


This is a free preview of a paid episode. To hear more, visit zeteo.com.
What on earth happened on Tuesday? In the latest episode of ‘Two Outspoken,' filmed in Washington DC, Mehdi and Owen sit down together and find plenty to both agree and disagree about, from the role of the economy in the Dems' landslide defeat, to the impact of misinformation and media bias. Their wide-ranging and thought-provoking conversation comes on the heels of a headline-grabbing statement from Senator Bernie Sanders accusing the Democrats of abandoning the working class.
“The real hourly wages of most American workers are the same as they were when Richard Nixon was in the White House half a century ago. Yeah, that has political consequences,” Owen argues, agreeing with Sanders.
Mehdi pushes back, arguing that there were other factors at play.
“It depends where you start the clock. If you start the clock in September 2020 and compare it to September 2024, real wages are up… Don't forget there was a pandemic and the pandemic actually messed up all the numbers.”
The pair also discuss Harris' failure to offer an ambitious economic agenda to the American people; the anti-incumbency effect both in the United States and across the post-pandemic democratic world; and the power of right-wing messaging and - let's be honest - propaganda.
“The problem is a lot of Americans don't have a clue what the Democrats actually did for them,” Mehdi says. “See, Trump is a master of doing very little but exaggerating the s**t out of it.”
Adversarial till the end, Mehdi even proposes a sportsman's bet with Owen about what might happen on January 20th, 2025 when it comes to Trump voters and their attitude towards the economy. Stick around to the end of the show to see what the bet is - and whether Owen accepts. Watch the full ‘Two Outspoken' episode above and let us know what you think in the comments!
Free subscribers get a 10-minute preview.
‘Two Outspoken' is a monthly conversation between Mehdi and political commentator, author, and activist Owen Jones. Catch the two every month, debriefing, venting, analyzing and sometimes even grieving, really, about the news in the US and the UK. No holds barred, always outspoken.

The Kimberly Lovi Podcast
#138. How to Negotiate a GOOD Business Deal

The Kimberly Lovi Podcast

Play Episode Listen Later Oct 30, 2024 23:40


Episode #138: Growing up, I learned the art of negotiation by watching my mother haggle at the Rose Bowl flea market. Those early experiences taught me that negotiation isn't about clashing interests but finding common ground. In this episode, I invite you to discover how negotiation can be a collaborative process rather than an adversarial one. Drawing from my time at USC business school and my extensive professional journey, I share stories and strategies that will transform your approach to negotiation. Whether it's negotiating a corporate salary or setting terms for an entrepreneurial venture, these insights will empower you to achieve personal and professional growth. The art of negotiation isn't just about closing deals; it's about building lasting relationships and creating win-win scenarios. Together, let's explore how setting clear expectations and fostering a positive atmosphere can lead to sustainable success. By adopting a mindset focused on mutual benefit, we emphasize transparency, respect, and good vibes in all business dealings. This episode is your guide to embracing negotiation as an essential life skill, ensuring that you walk out of any discussion not just with a deal, but with a partnership for future success. Join me on this journey to see negotiation as a celebration of shared goals and opportunities.
Chapters:
(00:00) - The Art of Negotiation
(09:16) - The Art of Collaborative Negotiation
Highlights:
(00:58) Mastering Business Deal Negotiations (116 Seconds)
(07:48) The Importance of Treating People Well (87 Seconds)
(12:53) Building Trust and Goodwill in Deals (73 Seconds)
(18:36) Morality in Business Relations (47 Seconds)
(21:23) Negotiating for Success and Friendship (76 Seconds)
Follow Kimberly on Instagram and TikTok @kimberlylovi or @iconicnationmedia

Skyrim Book Club
The Adversarial Spirits

Skyrim Book Club

Play Episode Listen Later Oct 23, 2024 2:30


The DownLink
Space Competition: How To Better De-Risk Adversarial Threats To Space Systems' Value

The DownLink

Play Episode Listen Later Sep 28, 2024 35:21


Space Competition: How To Better De-Risk Adversarial Threats To Space Systems' Value This episode was recorded before a live audience at the Space Information Sharing and Analysis Center's Value of Space Summit, convened in Colorado Springs, Colorado. The discussion focuses on threat trends, the lack of insurance cover for certain types of attacks; and how to better secure the value of space systems. Laura Winter speaks with Chris Kunstadter, President at Triton Space; Tomas Peña, Chief Technologist for Cyber Operations at L3Harris Technologies; and Sam Visner, the Space-ISAC Chairman.

Two Minutes in Trade
Two Minutes in Trade - Stopping Adversarial Tariff Evasion Act

Two Minutes in Trade

Play Episode Listen Later Sep 26, 2024 3:23


A new bill would link the payment of section 201, 301, and 232 duties to the ownership of the factory making the goods! Listen for more details on Two Minutes in Trade. 

Papers Read on AI
Artificial Immune System of Secure Face Recognition Against Adversarial Attacks

Papers Read on AI

Play Episode Listen Later Sep 13, 2024 70:54


2024: Min Ren, Yunlong Wang, Yuhao Zhu, Yongzhen Huang, Zhen Sun, Qi Li, Tieniu Tan https://arxiv.org/pdf/2406.18144

The Nonlinear Library
LW - Solving adversarial attacks in computer vision as a baby version of general AI alignment by stanislavfort

The Nonlinear Library

Play Episode Listen Later Aug 30, 2024 12:33


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Solving adversarial attacks in computer vision as a baby version of general AI alignment, published by stanislavfort on August 30, 2024 on LessWrong. I spent the last few months trying to tackle the problem of adversarial attacks in computer vision from the ground up. The results of this effort are written up in our new paper Ensemble everything everywhere: Multi-scale aggregation for adversarial robustness (explainer on X/Twitter). Taking inspiration from biology, we reached state-of-the-art or above state-of-the-art robustness at 100x - 1000x less compute, got human-understandable interpretability for free, turned classifiers into generators, and designed transferable adversarial attacks on closed-source (v)LLMs such as GPT-4 or Claude 3. I strongly believe that there is a compelling case for devoting serious attention to solving the problem of adversarial robustness in computer vision, and I try to draw an analogy to the alignment of general AI systems here. 1. Introduction In this post, I argue that the problem of adversarial attacks in computer vision is in many ways analogous to the larger task of general AI alignment. In both cases, we are trying to faithfully convey an implicit function locked within the human brain to a machine, and we do so extremely successfully on average. Under static evaluations, the human and machine functions match up exceptionally well. However, as is typical in high-dimensional spaces, some phenomena can be relatively rare and basically impossible to find by chance, yet ubiquitous in their absolute count. This is the case for adversarial attacks - imperceptible modifications to images that completely fool computer vision systems and yet have virtually no effect on humans. 
Their existence highlights a crucial and catastrophic mismatch between the implicit human vision function and the function learned by machines - a mismatch that can be exploited in a dynamic evaluation by an active, malicious agent. Such failure modes will likely be present in more general AI systems, and our inability to remedy them even in the more restricted vision context (yet) does not bode well for the broader alignment project. This is a call to action to solve the problem of adversarial vision attacks - a stepping stone on the path to aligning general AI systems. 2. Communicating implicit human functions to machines The basic goal of computer vision can be viewed as trying to endow a machine with the same vision capabilities a human has. A human carries, locked inside their skull, an implicit vision function mapping visual inputs into semantically meaningful symbols, e.g. a picture of a tortoise into a semantic label tortoise. This function is represented implicitly and while we are extremely good at using it, we do not have direct, conscious access to its inner workings and therefore cannot communicate it to others easily. To convey this function to a machine, we usually form a dataset of fixed images and their associated labels. We then use a general enough class of functions, typically deep neural networks, and a gradient-based learning algorithm together with backpropagation to teach the machine how to correlate images with their semantic content, e.g. how to assign a label parrot to a picture of a parrot. This process is extremely successful in communicating the implicit human vision function to the computer, and the implicit human and explicit, learned machine functions agree to a large extent. The agreement between the two is striking. 
Given how different the architectures are (a simulated graph-like function doing a single forward pass vs the wet protein brain of a mammal running continuous inference), how different the learning algorithms are (gradient descent with backpropagation vs something completely different but still unknown), and how differ...
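The imperceptible perturbations the post describes can be illustrated with a minimal sketch of the classic fast gradient sign method (FGSM) on a toy linear classifier. The weights, input, and epsilon below are made-up assumptions for illustration; they are not the models or methods from the paper discussed in the episode.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy binary "vision" classifier: a single linear layer with weights w.
w = rng.normal(size=100)
b = 0.0

def predict(x):
    """Probability that input x belongs to class 1."""
    return sigmoid(w @ x + b)

# A clean input that the model classifies confidently as class 1.
x = w / np.linalg.norm(w)

# FGSM: take one step of size eps in the direction of the sign of the
# loss gradient. For logistic loss with true label y, d(loss)/dx = (p - y) * w.
def fgsm(x, y, eps):
    p = predict(x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

x_adv = fgsm(x, y=1.0, eps=0.05)
print(predict(x), predict(x_adv))  # confidence drops under a tiny max-norm change
```

Each pixel moves by at most eps, yet the model's confidence in the true label falls, which is the mismatch between human and machine vision functions that the post takes as its starting point.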

The Nonlinear Library
AF - Solving adversarial attacks in computer vision as a baby version of general AI alignment by stanislavfort

The Nonlinear Library

Play Episode Listen Later Aug 29, 2024 12:34


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Solving adversarial attacks in computer vision as a baby version of general AI alignment, published by stanislavfort on August 29, 2024 on The AI Alignment Forum. I spent the last few months trying to tackle the problem of adversarial attacks in computer vision from the ground up. The results of this effort are written up in our new paper Ensemble everything everywhere: Multi-scale aggregation for adversarial robustness (explainer on X/Twitter). Taking inspiration from biology, we reached state-of-the-art or above state-of-the-art robustness at 100x - 1000x less compute, got human-understandable interpretability for free, turned classifiers into generators, and designed transferable adversarial attacks on closed-source (v)LLMs such as GPT-4 or Claude 3. I strongly believe that there is a compelling case for devoting serious attention to solving the problem of adversarial robustness in computer vision, and I try to draw an analogy to the alignment of general AI systems here. 1. Introduction In this post, I argue that the problem of adversarial attacks in computer vision is in many ways analogous to the larger task of general AI alignment. In both cases, we are trying to faithfully convey an implicit function locked within the human brain to a machine, and we do so extremely successfully on average. Under static evaluations, the human and machine functions match up exceptionally well. However, as is typical in high-dimensional spaces, some phenomena can be relatively rare and basically impossible to find by chance, yet ubiquitous in their absolute count. This is the case for adversarial attacks - imperceptible modifications to images that completely fool computer vision systems and yet have virtually no effect on humans. 
Their existence highlights a crucial and catastrophic mismatch between the implicit human vision function and the function learned by machines - a mismatch that can be exploited in a dynamic evaluation by an active, malicious agent. Such failure modes will likely be present in more general AI systems, and our inability to remedy them even in the more restricted vision context (yet) does not bode well for the broader alignment project. This is a call to action to solve the problem of adversarial vision attacks - a stepping stone on the path to aligning general AI systems. 2. Communicating implicit human functions to machines The basic goal of computer vision can be viewed as trying to endow a machine with the same vision capabilities a human has. A human carries, locked inside their skull, an implicit vision function mapping visual inputs into semantically meaningful symbols, e.g. a picture of a tortoise into a semantic label tortoise. This function is represented implicitly and while we are extremely good at using it, we do not have direct, conscious access to its inner workings and therefore cannot communicate it to others easily. To convey this function to a machine, we usually form a dataset of fixed images and their associated labels. We then use a general enough class of functions, typically deep neural networks, and a gradient-based learning algorithm together with backpropagation to teach the machine how to correlate images with their semantic content, e.g. how to assign a label parrot to a picture of a parrot. This process is extremely successful in communicating the implicit human vision function to the computer, and the implicit human and explicit, learned machine functions agree to a large extent. The agreement between the two is striking. 
Given how different the architectures are (a simulated graph-like function doing a single forward pass vs the wet protein brain of a mammal running continuous inference), how different the learning algorithms are (gradient descent with backpropagation vs something completely different but still unknown), a...

The Full Ratchet: VC | Venture Capital | Angel Investors | Startup Investing | Fundraising | Crowdfunding | Pitch | Private E
448. Where AI Value will Accrue in the Tech Stack, How Far we are from AGI, The Role of Adversarial AI, and Enterprise Adoption of AI in 2024 (Jill Chase)

The Full Ratchet: VC | Venture Capital | Angel Investors | Startup Investing | Fundraising | Crowdfunding | Pitch | Private E

Play Episode Listen Later Aug 26, 2024 38:50


Jill Chase of CapitalG joins Nick to discuss Where AI Value will Accrue in the Tech Stack, How Far we are from AGI, The Role of Adversarial AI, and Enterprise Adoption of AI in 2024. In this episode we cover: AI Technology and Investment, LTM Nets and Potential for Fully Autonomous Software Engineering AI Technology, Growth Strategies, and Outlier Metrics in the Industry AI Adoption in Enterprise, Challenges in Measuring ROI Ethical Considerations Around Data Usage for Model Training AI Research and Its Potential Impact on Society Guest Links: Jill's Twitter/X Jill's LinkedIn CapitalG's LinkedIn CapitalG's Website The hosts of The Full Ratchet are Nick Moran and Nate Pierotti of New Stack Ventures, a venture capital firm committed to investing in founders outside of the Bay Area. Want to keep up to date with The Full Ratchet? Follow us on social. You can learn more about New Stack Ventures by visiting our LinkedIn and Twitter. Are you a founder looking for your next investor? Visit our free tool VC-Rank and we'll send a list of potential investors right to your inbox!

Machine Learning Street Talk
Adversarial Examples and Data Modelling - Andrew Ilyas (MIT)

Machine Learning Street Talk

Play Episode Listen Later Aug 22, 2024 88:00


Andrew Ilyas, a PhD student at MIT who is about to start as a professor at CMU. We discuss Data modeling and understanding how datasets influence model predictions, Adversarial examples in machine learning and why they occur, Robustness in machine learning models, Black box attacks on machine learning systems, Biases in data collection and dataset creation, particularly in ImageNet and Self-selection bias in data and methods to address it. MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmentated generation. Try it now - get 2,000 free queries monthly at http://brave.com/api Andrew's site: https://andrewilyas.com/ https://x.com/andrew_ilyas TOC: 00:00:00 - Introduction and Andrew's background 00:03:52 - Overview of the machine learning pipeline 00:06:31 - Data modeling paper discussion 00:26:28 - TRAK: Evolution of data modeling work 00:43:58 - Discussion on abstraction, reasoning, and neural networks 00:53:16 - "Adversarial Examples Are Not Bugs, They Are Features" paper 01:03:24 - Types of features learned by neural networks 01:10:51 - Black box attacks paper 01:15:39 - Work on data collection and bias 01:25:48 - Future research plans and closing thoughts References: Adversarial Examples Are Not Bugs, They Are Features https://arxiv.org/pdf/1905.02175 TRAK: Attributing Model Behavior at Scale https://arxiv.org/pdf/2303.14186 Datamodels: Predicting Predictions from Training Data https://arxiv.org/pdf/2202.00622 Adversarial Examples Are Not Bugs, They Are Features https://arxiv.org/pdf/1905.02175 IMAGENET-TRAINED CNNS https://arxiv.org/pdf/1811.12231 ZOO: Zeroth Order Optimization Based Black-box https://arxiv.org/pdf/1708.03999 A Spline Theory of Deep Networks https://proceedings.mlr.press/v80/balestriero18b/balestriero18b.pdf Scaling Monosemanticity 
https://transformer-circuits.pub/2024/scaling-monosemanticity/ Adversarial Examples Are Not Bugs, They Are Features https://gradientscience.org/adv/ Adversarial Robustness Limits via Scaling-Law and Human-Alignment Studies https://proceedings.mlr.press/v235/bartoldson24a.html Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors https://arxiv.org/abs/1807.07978 Estimation of Standard Auction Models https://arxiv.org/abs/2205.02060 From ImageNet to Image Classification: Contextualizing Progress on Benchmarks https://arxiv.org/abs/2005.11295 Estimation of Standard Auction Models https://arxiv.org/abs/2205.02060 What Makes A Good Fisherman? Linear Regression under Self-Selection Bias https://arxiv.org/abs/2205.03246 Towards Tracing Factual Knowledge in Language Models Back to the Training Data [Akyürek] https://arxiv.org/pdf/2205.11482
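The zeroth-order idea behind black-box attacks such as ZOO (referenced in the show notes above) can be sketched in a few lines: estimate gradients purely from output queries via symmetric finite differences, then descend on them. The quadratic "black box" below is a stand-in assumption for illustration, not a real model.

```python
import numpy as np

# The attacker can query the model's loss but cannot see its internals.
# This quadratic stand-in is an assumption for illustration only.
def black_box_loss(x):
    return float((x - 1.0) @ (x - 1.0))  # minimized at x = [1, 1, 1]

def zeroth_order_grad(f, x, h=1e-4):
    """Estimate df/dx coordinate-wise with symmetric finite differences,
    using only function evaluations (the core idea behind ZOO-style attacks)."""
    grad = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        grad[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return grad

x = np.zeros(3)
for _ in range(100):
    x = x - 0.1 * zeroth_order_grad(black_box_loss, x)

print(x)  # approaches [1, 1, 1] using only black-box queries
```

The point of the construction is that no gradients are ever read from the model itself, which is why such attacks work against systems exposed only through a prediction API.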

The Social-Engineer Podcast
Ep. 271 - Security Awareness Series - Advocating for Non-Adversarial Security with Kimberly Sims

The Social-Engineer Podcast

Play Episode Listen Later Aug 19, 2024 30:10


Today on the Social-Engineer Podcast: The Security Awareness Series, Chris is joined by Kimberly Sims. Kimberly is the Deputy CISO, Director of Cyber Operations and Cloud Security for American Century Investments. Kimberly has over 17 years' experience in the Financial Services sector. Prior to joining American Century, Kimberly ran the Information Security Program for the second-largest capital market's desk in the world, responsible for securing systems that process a trillion-dollar debt portfolio. Kimberly is an advisory member, and IT lead for the Charter for Veterans, a non-profit organization assisting recovering combat wounded veterans. She is an advocate for mentorship and coaching across the security industry and participates in several industry groups. [Aug 19, 2024]
00:00 - Intro
00:22 - Intro Links:
- Social-Engineer.com - http://www.social-engineer.com/
- Managed Voice Phishing - https://www.social-engineer.com/services/vishing-service/
- Managed Email Phishing - https://www.social-engineer.com/services/se-phishing-service/
- Adversarial Simulations - https://www.social-engineer.com/services/social-engineering-penetration-test/
- Social-Engineer channel on SLACK - https://social-engineering-hq.slack.com/ssb
- CLUTCH - http://www.pro-rock.com/
- innocentlivesfoundation.org - http://www.innocentlivesfoundation.org/
03:01 - Kimberly Sims Intro
03:49 - Inspired by a Book
05:32 - Making a Giant Leap!
06:59 - Biggest Mistakes
09:14 - Lessons Learned
11:07 - Security Ambassador
13:23 - Overcoming the Fear
15:20 - Executive Order
17:06 - People, Process, Technology
18:38 - Back to Basics
20:11 - Resiliency Programs
23:14 - The Take Home
24:47 - Jump In!
25:46 - Book Recommendations
- Blink - Malcolm Gladwell
- Pitch Perfect - Bill McGowan
- Permission to Screw Up - Kristen Hadeed
27:49 - Mentors
29:12 - Find Kimberly Sims online
- LinkedIn: in/kimberly-sims-733510/
29:26 - Wrap Up & Outro
- www.social-engineer.com
- www.innocentlivesfoundation.org

Shady Oaks Church Podcast
An Adversarial Relationship

Shady Oaks Church Podcast

Play Episode Listen Later Aug 5, 2024 59:43


At the Cross of Calvary, Jesus' work made available to those who believe on Him the opportunity to reconcile with God. As believers, we have been forgiven, redeemed, restored, and empowered by the Lord. We are now His children, with an assignment to bring God's Kingdom to bear in the world around us. Being reconciled with God, we have been given power and authority to push back the darkness and bring His glory and light to bear. We are in a battle, but the battle is not one of seeking a victory; it is one of enforcing God's victory wherever we go.

The Nonlinear Library
AF - Can Generalized Adversarial Testing Enable More Rigorous LLM Safety Evals? by Stephen Casper

The Nonlinear Library

Play Episode Listen Later Jul 30, 2024 8:19


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can Generalized Adversarial Testing Enable More Rigorous LLM Safety Evals?, published by Stephen Casper on July 30, 2024 on The AI Alignment Forum. Thanks to Zora Che, Michael Chen, Andi Peng, Lev McKinney, Bilal Chughtai, Shashwat Goel, Domenic Rosati, and Rohit Gandikota. TL;DR In contrast to evaluating AI systems under normal "input-space" attacks, using "generalized" attacks, which allow an attacker to manipulate weights or activations, might be able to help us better evaluate LLMs for risks - even if they are deployed as black boxes. Here, I outline the rationale for "generalized" adversarial testing and overview current work related to it. See also prior work in Casper et al. (2024), Casper et al. (2024), and Sheshadri et al. (2024). Even when AI systems perform well in typical circumstances, they sometimes fail in adversarial/anomalous ones. This is a persistent problem. State-of-the-art AI systems tend to retain undesirable latent capabilities that can pose risks if they resurface. My favorite example of this is also the most cliche one: many recent papers have demonstrated diverse attack techniques that can be used to elicit instructions for making a bomb from state-of-the-art LLMs. There is an emerging consensus that, even when LLMs are fine-tuned to be harmless, they can retain latent harmful capabilities that can and do cause harm when they resurface (Qi et al., 2024).
A growing body of work on red-teaming ( Shayegani et al., 2023, Carlini et al., 2023, Geiping et al., 2024, Longpre et al., 2024), interpretability ( Juneja et al., 2022, Lubana et al., 2022, Jain et al., 2023, Patil et al., 2023, Prakash et al., 2024, Lee et al., 2024), representation editing ( Wei et al., 2024, Schwinn et al., 2024), continual learning ( Dyer et al., 2022, Cossu et al., 2022, Li et al., 2022, Scialom et al., 2022, Luo et al., 2023, Kotha et al., 2023, Shi et al., 2023, Schwarzchild et al., 2024), and fine-tuning ( Jain et al., 2023, Yang et al., 2023, Qi et al., 2023, Bhardwaj et al., 2023, Lermen et al., 2023, Zhan et al., 2023, Ji et al., 2024, Hu et al., 2024, Halawi et al., 2024) suggests that fine-tuning struggles to make fundamental changes to an LLM's inner knowledge and capabilities. For example, Jain et al. (2023) likened fine-tuning in LLMs to merely modifying a "wrapper" around a stable, general-purpose set of latent capabilities. Even if they are generally inactive, harmful latent capabilities can pose harm if they resurface due to an attack, anomaly, or post-deployment modification ( Hendrycks et al., 2021, Carlini et al., 2023). We can frame the problem as such: There are hyper-astronomically many inputs for modern LLMs (e.g. there are vastly more 20-token strings than particles in the observable universe), so we can't brute-force-search over the input space to make sure they are safe. So unless we are able to make provably safe advanced AI systems (we won't soon and probably never will), there will always be a challenge with ensuring safety - the gap between the set of failure modes that developers identify, and unforeseen ones that they don't. This is a big challenge because of the inherent unknown-unknown nature of the problem. However, it is possible to try to infer how large this gap might be. 
Taking a page from the safety engineering textbook -- when stakes are high, we should train and evaluate LLMs under threats that are at least as strong as, and ideally stronger than, ones that they will face in deployment. First, imagine that an LLM is going to be deployed open-source (or if it could be leaked). Then, of course, the system's safety depends on what it can be modified to do. So it should be evaluated not as a black-box but as a general asset to malicious users who might enhance it through finetuning or other means. This seems obvious, but there's preced...
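The "generalized" threat model the post argues for, in which the attacker perturbs internal activations rather than inputs, can be sketched on a toy network. The two-layer model and its weights below are invented assumptions for illustration; they are not from the post or any real LLM.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny two-layer network: input -> hidden -> scalar logit.
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=8)

def forward_from_hidden(h):
    return float(W2 @ np.tanh(h))

x = rng.normal(size=4)
h_clean = W1 @ x
logit_clean = forward_from_hidden(h_clean)

# "Generalized" attack: gradient steps taken directly on the hidden
# activation, pushing the logit toward the opposite class. An input-space
# attacker could only reach activations of the form W1 @ x; this attacker
# can move h freely, making it a strictly stronger threat model.
h = h_clean.copy()
target_sign = -np.sign(logit_clean)
for _ in range(100):
    grad = W2 * (1.0 - np.tanh(h) ** 2)  # d(logit)/dh
    h = h + 0.02 * target_sign * grad

logit_adv = forward_from_hidden(h)
print(logit_clean, logit_adv)  # the logit moves toward the opposite class
```

Because the attacker here is at least as strong as any input-space attacker, failures it finds bound the gap between identified and unforeseen failure modes from below, which is the safety-engineering rationale the post sketches.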

The Nonlinear Library
AF - Does robustness improve with scale? by ChengCheng

The Nonlinear Library

Play Episode Listen Later Jul 25, 2024 2:16


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Does robustness improve with scale?, published by ChengCheng on July 25, 2024 on The AI Alignment Forum. Adversarial vulnerabilities have long been an issue in various ML systems. Large language models (LLMs) are no exception, suffering from issues such as jailbreaks: adversarial prompts that bypass model safeguards. At the same time, scale has led to remarkable advances in the capabilities of LLMs, leading us to ask: to what extent can scale help solve robustness? In this post, we explore this question in the classification setting: predicting the binary label of a text input. We find that scale alone does little to improve model robustness, but that larger models benefit more from defenses such as adversarial training than do smaller models. We study models in the classification setting as there is a clear notion of "correct behavior": does the model output the right label? We can then naturally define robustness as the proportion of the attacked dataset that the model correctly classifies. We evaluate models on tasks such as spam detection and movie sentiment classification. We adapt pretrained foundation models for classification by replacing the generative model's unembedding layer with a randomly initialized classification head, and then fine-tune the models on each task. We focus on adversarial-suffix style attacks: appending an adversarially chosen prompt to a benign prompt in an attempt to cause the model to misclassify the input, e.g., classify a spam email as not-spam. We consider two attacks: the state-of-the-art Greedy Coordinate Gradient method (Zou et al., 2023), and a baseline random token attack. This simple threat model has the advantage of being unlikely to change the semantics of the input. For example, a spam email is still spam even if a handful of tokens are appended to it. 
Of course, attackers are not limited to such a simple threat model: studying more open-ended threat models (such as rephrasing the prompt, or replacing words with synonyms) and corresponding attack methods (such as LLM generated adversarial prompts) is an important direction that we hope to pursue soon in future work. For more information, see our blog post or paper. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
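The classification setup this episode describes (predict a binary label, append an adversarial suffix, measure robustness as the fraction of attacked examples still classified correctly) can be sketched with a toy keyword scorer standing in for the fine-tuned LLM classifiers; the vocabulary, scorer, and trial budget below are illustrative assumptions, not the paper's models:

```python
import random

# Toy stand-in for a fine-tuned spam classifier: scores text by keywords,
# label is "spam" iff score > 0. (The post attacks real LLM classifiers;
# this scorer only illustrates the attack loop and the robustness metric.)
SPAM_WORDS = {"free", "winner", "prize"}
HAM_WORDS = {"meeting", "report", "thanks"}

def classify(text):
    toks = text.lower().split()
    score = sum(t in SPAM_WORDS for t in toks) - sum(t in HAM_WORDS for t in toks)
    return "spam" if score > 0 else "ham"

def random_suffix_attack(text, true_label, vocab, n_tokens=3, n_trials=200, seed=0):
    """Baseline random-token attack: sample suffixes until one flips the label."""
    rng = random.Random(seed)
    for _ in range(n_trials):
        suffix = " ".join(rng.choice(vocab) for _ in range(n_tokens))
        if classify(text + " " + suffix) != true_label:
            return suffix              # successful adversarial suffix
    return None                        # attack failed within budget

def robustness(dataset, vocab):
    """Robustness = fraction of attacked examples still correctly classified."""
    survived = sum(random_suffix_attack(t, y, vocab) is None for t, y in dataset)
    return survived / len(dataset)
```

On this toy model, a spam email is quickly flipped by a suffix of "ham" words, while a strongly "ham" email survives the whole budget; a gradient-guided search like GCG replaces the random sampling with coordinate-wise gradient candidates, but the evaluation loop is the same.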

Root Causes: A PKI and Security Podcast
Root Causes 405: What Is an Adversarial Self-replicating Prompt?

Root Causes: A PKI and Security Podcast

Play Episode Listen Later Jul 19, 2024 25:04


In this episode we explain what an adversarial, self-replicating prompt, otherwise known as a prompt worm, is.

Slam the Gavel
Co-Parenting Effectively After Adversarial Experience In Family Court; With Lendo Mutambala

Slam the Gavel

Play Episode Listen Later Jul 16, 2024 57:36


Send us a Text Message. Slam the Gavel welcomes Lendo Mutambala to the podcast. Lendo is from Calgary, Alberta, Canada and has been in family court since the age of 23, when he first had his eldest son. Lendo's actual first time in court was in 2015; he had three major appearances there. At first, Lendo was given custody temporarily; he gave that up, and when he did, it would come back to haunt him, as he explains in this podcast. Finding himself in court a second time, Lendo was then facing false accusations of neglect. Lendo was able to put documents together, focusing on one of the major lies: that he was neglecting his son's health issues. He was vilified as a young parent, and his career and educational aspirations were weaponized against him. He was able to convince the judge to give them 50/50 shared custody, which turned out to be the best decision for their child. Lendo talked about how he flipped this situation: today his son is thriving and holds no resentment toward his ex or her mother.
To Reach Lendo Mutambala: https://www.superdaddyclub.ca/
SDC Connects Self-Improvement Resources: https://www.onefourstudios.ca/super-daddy-connects.html
Full Episode Streaming Links (We are present on all of them)
Spotify: https://open.spotify.com/show/6MM5pClaycXZg9pjpMKEZS
Apple Podcast: https://podcasts.apple.com/us/podcast/the-super-daddy-club/id1605901156
Follow us on Social Media Links
Tik Tok: https://www.tiktok.com/@superdaddyclub?is_from_webapp=1&sender_device=p
Instagram: https://www.instagram.com/superdaddyclub/
LinkedIn: https://www.linkedin.com/in/lendo-mutambala/
*DISCLAIMER* The use of this information is at the viewer/user's own risk. This is not financial, medical, or legal advice, as the content on this podcast does not constitute legal, financial, medical, or any other professional advice.
Viewers/users should consult with the relevant professionals.
Support the show: https://www.buymeacoffee.com/maryannpetri
http://www.dismantlingfamilycourtcorruption.com/

TRADOC Talks
TRADOC Talks - Episode 25 - Army 2030: Our Near Peer Adversarial Capabilities

TRADOC Talks

Play Episode Listen Later Jun 20, 2024


Maj. Gen. Scott Linton, deputy chief of staff, U.S. Army Training and Doctrine Command, hosts Ian Sullivan, deputy chief of staff, TRADOC G2, to discuss how TRADOC is helping the Army understand the current threat and operational environment as it shifts its focus back to large scale combat operations.

The Nonlinear Library
AF - Analysing Adversarial Attacks with Linear Probing by Yoann Poupart

The Nonlinear Library

Play Episode Listen Later Jun 17, 2024 13:16


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Analysing Adversarial Attacks with Linear Probing, published by Yoann Poupart on June 17, 2024 on The AI Alignment Forum. This work was produced as part of the Apart Fellowship. @Yoann Poupart and @Imene Kerboua led the project; @Clement Neo and @Jason Hoelscher-Obermaier provided mentorship, feedback and project guidance. Here, we present a qualitative analysis of our preliminary results. We are at the very beginning of our experiments, so any setup change, experiment advice, or idea is welcome. We also welcome any relevant paper that we could have missed. For a better introduction to interpreting adversarial attacks, we recommend reading scasper's post: EIS IX: Interpretability and Adversaries. Code available on GitHub (still drafty). TL;DR Basic adversarial attacks produce examples that are indistinguishable to the human eye. In order to explore the features vs bugs view of adversarial attacks, we trained linear probes on a classifier's activations (we used CLIP-based models for further multimodal work). The goal is that training linear probes to detect basic concepts can help analyse and detect adversarial attacks. As the concepts studied are simple (e.g. color), they can be probed from every layer of the model. We show that concept probes are indeed modified by the adversarial attack, but only on the later layers. Hence, we can naively detect adversarial attacks by observing the disagreement between early layers' probes and later layers' probes. Introduction Motivation Adversarial samples are a concerning failure mode of deep learning models. It has been shown that by optimising an input, e.g. an image, the output of the targeted model can be shifted at the attacker's will.
This failure mode appears repeatedly in different domains like image classification, text encoders, and more recently on multimodal models enabling arbitrary outputs or jailbreaking. Hypotheses to Test This work was partly inspired by the feature vs bug world hypothesis. In our particular context this would imply that adversarial attacks might be caused by meaningful feature directions being activated (feature world) or by high-frequency directions overfitted by the model (bug world). The hypotheses we would like to test are thus the following: The adversariality doesn't change the features. Nakkiran et al. found truly adversarial samples that are "bugs", while Ilyas et al. seem to indicate that adversarial attacks find features. The different optimisation schemes don't produce the same categories of adversarial samples (features vs bugs). Some might be more robust, e.g. w.r.t. the representation induced. Post Navigation First, we briefly present the background we used for adversarial attacks and linear probing. Then we showcase experiments, presenting our setup, to understand the impact of adversarial attacks on linear probes and see if we can detect it naively. Finally, we present the limitations and perspectives of our work before concluding. Background Adversarial Attacks For our experiment, we'll begin with the most basic and well-known adversarial attack, the Fast Gradient Sign Method (FGSM). This intuitive method takes advantage of the classifier's differentiability to optimise the adversary's goal (misclassification) using gradient descent. It can be described as ^x = x + ϵ·sign(∇_x L(θ, x, y)), with x, y the original image and label, ^x the adversarial example and ϵ the perturbation amplitude. This simple method can be derived in an iterative form, x_{n+1} = x_n + α·sign(∇_x L(θ, x_n, y)), and seen as projected gradient descent, with the projection ensuring that ||x − x_{n+1}||_∞ ≤ ϵ. In our experiments ϵ=3.
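The FGSM update just described can be sketched in NumPy against a simple differentiable classifier. Here a logistic-regression "model" stands in for the image classifiers the post attacks, so the input gradient has a closed form; the ε, step size, and model below are illustrative assumptions, not the post's setup:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM: x_hat = x + eps * sign(grad_x L(x, y)).

    For a logistic-regression model sigmoid(w.x + b) with binary
    cross-entropy loss, the input gradient is (p - y) * w in closed form.
    """
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

def fgsm_iterative(x, y, w, b, eps, step, n_steps):
    """Iterative FGSM as projected gradient descent: small signed steps,
    each projected back into the L-infinity ball of radius eps around x,
    so that ||x - x_adv||_inf <= eps throughout."""
    x_adv = x.copy()
    for _ in range(n_steps):
        p = sigmoid(x_adv @ w + b)
        x_adv = x_adv + step * np.sign((p - y) * w)
        x_adv = np.clip(x_adv, x - eps, x + eps)   # the projection
    return x_adv
```

On a random input the one-step attack already lowers the model's probability of the true class, and the iterative variant stays inside the same ϵ-ball while doing at least as well.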
Linear Probing Linear probing is a simple idea where you train a linear model (probe) to predict a concept from the internals of the interpreted target model. The prediction performance is then attributed to the knowledge contained in the target model's latent re...
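A linear probe of this kind can be sketched with a frozen toy network in place of the CLIP-based models the post studies; the two-layer network, the "sign of the first input feature" concept, and all the sizes below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen toy "target model": one hidden layer whose activations we probe.
# (Stand-in for the CLIP-based classifiers in the post; sizes are arbitrary.)
W1 = 0.5 * rng.normal(size=(4, 16))

def hidden(x):
    """The internal representation the probe reads."""
    return np.tanh(x @ W1)

def train_probe(H, c, lr=0.5, steps=1000):
    """Fit a linear probe (logistic regression) predicting binary concept c
    from activations H, by plain gradient descent on the cross-entropy."""
    w, b = np.zeros(H.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(H @ w + b)))
        g = p - c                          # dL/dlogit per example
        w -= lr * H.T @ g / len(c)
        b -= lr * g.mean()
    return w, b

# Simple concept: the sign of the first input feature (a stand-in for
# simple concepts like "color" probed in the post).
X = rng.normal(size=(200, 4))
c = (X[:, 0] > 0).astype(float)
w, b = train_probe(hidden(X), c)
acc = (((hidden(X) @ w + b) > 0) == (c > 0.5)).mean()
```

Training one such probe per layer and comparing their predictions on a given input is the naive detection idea from the post: clean inputs tend to get consistent probe readouts across layers, while an attacked input makes later-layer probes disagree with early-layer ones.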

Radioactive Metal
Episode 789 - Unleashing The Fury

Radioactive Metal

Play Episode Listen Later Jun 14, 2024 113:05


'84 saw the solo effort from axe virtuoso Yngwie Malmsteen. "Rising Force" is an appropriate moniker as the record took the Metal World by storm. Not to mention it was kind of a trailblazing release. The 80's was definitely the era of the flamboyant frontman. But Malmsteen chose to let the music do the talking...literally. So this week we go through this musician's musician album. And Unleash The Fury!! In our "News, Views and Tunes", we go over the "The Last Voyage of the Demeter" flick. And give a shout out to Zebra Boy from the "Heavy Metal Parking Lot" doc. Musically, we crank some new and used from Dead Kennedys, Whiplash, Adversarial, Venger, Haatan and introduce Italy's Lord Goblin in our "Indie Spotlight". Horns Up!

TERMINUS: extreme metal podcast
Terminus Episode 154 - Adversarial, Tsatthoggua

TERMINUS: extreme metal podcast

Play Episode Listen Later Jun 6, 2024 114:52


Continuing this year's ongoing theme of triumphant returns come two excellent records--one highly anticipated, and the other utterly unexpected. Side A: the return of Adversarial, a TDMG favorite whose brand of death-fueled war metal smashes contemporaries with a flail of technicality, extremity, and potent atmosphere. Side B: the shocking return of Tsatthoggua, German blasting black metal legends whose new record picks up almost exactly where the band left off a quarter century prior. Put on your gimp mask--it's time to party. 0:00:00 - Intro 0:03:59 - Adversarial - Solitude With the Eternal… (Dark Descent Records) 0:49:23 - Interlude - Allfather - “Evolution to Supremacy,” fr. Weapon of Ascension (Invictus, 2005) 0:51:42 - Tsatthoggua - We Are God (Osmose Productions) 1:50:37 - Outro - “Bless Thee for Granting Me Pain,” fr. Blackdoor Miracle (Regain Records, 2004) Terminus links: Terminus on Youtube Terminus on Patreon Terminus on Instagram Terminus on Facebook thetrueterminus@gmail.com

Getting It Out
Tzompantli (Brian Ortiz)

Getting It Out

Play Episode Listen Later Jun 3, 2024 73:48


Led by Brian “Itztlakamayeh” Ortiz, TZOMPANTLI, the brutal death doom collective from Los Angeles, recently dropped the devastating Beating the Drums of Ancestral Force via 20 Buck Spin Records. We're discussing the band's origins, expansions, and future in this kicked-back discussion.
Music by: Mutilation Barbecue, Tzompantli, Adversarial
Intro music by: Hot Zone
Patreon: https://www.patreon.com/Gettingitoutpodcast
Email: dan@gettingitout.net
Website: http://gettingitout.net/
Instagram: @getting_it_out_podcast
Facebook: www.facebook.com/gettingitoutpodcast
X: @GettingItOutPod
Get bonus content on Patreon
Support this show http://supporter.acast.com/getting-it-out. Hosted on Acast. See acast.com/privacy for more information.

Into The Necrosphere
MISTHYRMING - D.G. | Into The Necrosphere Podcast #225

Into The Necrosphere

Play Episode Listen Later May 7, 2024 151:13


On this episode I visited with D.G., guitarist and vocalist for Icelandic black metal powerhouse, Misthyrming, shortly following their show-stealing performance at this year's Inferno Festival in Oslo, Norway. We discussed his influences, our shared love of Rammstein, his reflections on the band's career to date, what motivates him today and much more. Later in the show, I also share another round-up of some of my favourite releases of the past few weeks, including albums by Bythos, Night Shall Drape Us and Hasslig. On my weekly news rant I give you my first impression of new releases by Gaerea, 1349, Adversarial, Cavalera Conspiracy and others. PLUS - Danish black metallers Perdition's Mire get their moment of glory on the Into The Necrosphere Unsigned Band Spotlight.
▶️SUPPORT THE BANDS FEATURED ON THIS EPISODE
Misthyrming: https://misthyrming.bandcamp.com/
Hasslig: https://hasslig.bandcamp.com/
Perdition's Mire: https://perditionsmire.bandcamp.com/
Morbid Angel: https://morbid-angel.bandcamp.com/
▶️SUBSCRIBE TO THE PODCAST
https://youtube.com/c/IntoTheNecrosphere
▶️STREAM & DOWNLOAD
Amazon Music: https://amzn.to/3epNJ4K
Spotify: https://spoti.fi/3iKqbIP
Apple Podcasts: https://apple.co/38wDYhi
▶️SOCIAL MEDIA
Facebook: https://www.facebook.com/intothenecrosphere
Instagram: https://www.instagram.com/intothenecrosphere
Twitter: https://twitter.com/inecrosphere
▶️INTO THE NECROSPHERE MERCH
https://into-the-necrosphere.creator-spring.com
▶️THE HORSEMEN OF THE PODCASTING APOCALYPSE
Horrorwolf666: https://thehorrorwolf666podcast.buzzsprout.com/
Everything Went Black: https://everythingwentblack.podbean.com/
Necromaniacs: https://necromaniacs.podbean.com/
Sol Nox: https://www.solnoxpodcast.podbean.com/
Iblis Manifestations: https://linktr.ee/iblismanifestationspodcast

Lenny's Podcast: Product | Growth | Career
This will make you a better decision maker | Annie Duke (author of “Thinking in Bets” and “Quit”, former pro poker player)

Lenny's Podcast: Product | Growth | Career

Play Episode Listen Later May 2, 2024 81:01


Annie Duke is a former professional poker player, a decision-making expert, and a special partner at First Round Capital. She is the author of Thinking in Bets (a national bestseller) and Quit: The Power of Knowing When to Walk Away, and the co-founder of the Alliance for Decision Education, a nonprofit whose mission is to improve lives by empowering students through decision skills education.
In our conversation, we cover:
• What Annie learned from the late Daniel Kahneman
• The power of pre-mortems and “kill criteria”
• The relationship between money and happiness
• The power of “mental time travel”
• The nominal group technique for better decision quality
• How First Round Capital improved their decision-making process
• Many tactical decision-making frameworks
Brought to you by:
• Vanta: Automate compliance. Simplify security.
• UserTesting: Human understanding. Human experiences.
• LinkedIn Ads: Reach professionals and drive results for your business
Find the transcript at: https://www.lennysnewsletter.com/p/making-better-decisions-annie-duke
Where to find Annie Duke:
• X: https://twitter.com/AnnieDuke
• LinkedIn: https://www.linkedin.com/in/annie-duke/
• Website: https://www.annieduke.com/
• Substack: https://www.annieduke.com/substack/
Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/
In this episode, we cover:
(00:00) Annie's background
(03:53) Lessons from Daniel Kahneman: humility, curiosity, and open-mindedness
(09:15) The importance of unconditional love in parenting
(15:15) Mental time travel and “nevertheless”
(20:06) The extent of improvement possible in decision-making
(24:54) Independent brainstorming for better decisions
(35:36) Making sure people feel heard
(42:41) The “3Ds” framework to make better decisions
(44:49) Decision quality
(55:46) Improving decision-making at First Round Capital
(01:05:05) Using pre-mortems and kill criteria
(01:10:15) Making explicit what's implicit
(01:10:55) The challenges of quitting and knowing when to walk away
(01:19:23) Where to find Annie
Referenced:
• Daniel Kahneman, Who Plumbed the Psychology of Economics, Dies at 90: https://www.nytimes.com/2024/03/27/business/daniel-kahneman-dead.html
• Adversarial collaboration: https://en.wikipedia.org/wiki/Adversarial_collaboration
• Does more money correlate with greater happiness?: https://penntoday.upenn.edu/news/does-more-money-correlate-greater-happiness-Penn-Princeton-research#
• Income and emotional well-being: A conflict resolved: https://pubmed.ncbi.nlm.nih.gov/36857342/
• Strategic decisions: When can you trust your gut?: https://www.mckinsey.com/capabilities/strategy-and-corporate-finance/our-insights/strategic-decisions-when-can-you-trust-your-gut
• Cass Sunstein on X: https://twitter.com/CassSunstein
• Dr. Becky on Instagram: https://www.instagram.com/drbeckyatgoodinside
• A framework for finding product-market fit | Todd Jackson (First Round Capital): https://www.lennysnewsletter.com/p/a-framework-for-finding-product-market
• First Round Capital: https://firstround.com/
• Brett Berson on X: https://twitter.com/brettberson
• Renegade Partners: https://www.renegadepartners.com/
• Renata Quintini on X: https://twitter.com/rquintini
• Roseanne Wincek on X: https://twitter.com/imthemusic
• Josh Kopelman on X: https://twitter.com/joshk
• Bill Trenchard on X: https://twitter.com/btrenchard
• Linnea Gandhi on X: https://twitter.com/linneagandhi
• Maurice Schweitzer on X: https://twitter.com/me_schweitzer
• Problems with premortems: https://sjdm.org/presentations/2021-Poster-Gandhi-Linnea-debiasing-premortem-selfserving~.pdf
• Create a Solid Plan on How to Fail Big This Year: https://www.forbes.com/sites/forbesfinancecouncil/2020/02/07/create-a-solid-plan-on-how-to-fail-big-this-year/
• Quit: The Power of Knowing When to Walk Away: https://www.amazon.com/Quit-Power-Knowing-When-Walk/dp/0593422996/
• Richard Thaler on X: https://twitter.com/R_Thaler
• Stewart Butterfield on X: https://twitter.com/stewart
• Glitch: https://en.wikipedia.org/wiki/Glitch_(video_game)
• How the Founder of Slack & Flickr Turned Colossal Failures into Billion-Dollar Companies: https://medium.com/swlh/how-the-founder-of-slack-flickr-turned-failures-into-million-and-billion-dollar-companies-7bcaf0d35d66
• The Most Fascinating Profile You'll Ever Read About a Guy and His Boring Startup: https://www.wired.com/2014/08/the-most-fascinating-profile-youll-ever-read-about-a-guy-and-his-boring-startup/
• The Alliance for Decision Education: https://alliancefordecisioneducation.org/
• Make Better Decisions course on Maven: https://maven.com/annie-duke/make-better-decisions
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.
Lenny may be an investor in the companies discussed. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe

NeuroDiverse Christian Couples
Navigating a Complex and Adversarial NeuroDiverse Divorce with CDFA Rhonda Noordyk

NeuroDiverse Christian Couples

Play Episode Listen Later Apr 29, 2024 50:10


While we are pro-marriage and pro-covenant, sometimes abuse, unrepentant addiction, and adultery break the vows and can lead to divorce. Divorce from a Christian AS man can get very adversarial and messy. If you have wrestled through and find that you have peace and release to divorce, you may want a CDFA like Rhonda, of the Women's Financial Wellness Center, on your side. A CDFA®, which stands for Certified Divorce Financial Analyst, is a financial expert who specializes in helping women navigate the complex world of divorce. I combine my knowledge of finance with my understanding of the legal aspects of divorce to provide valuable guidance and support during this challenging time. Now, you might be wondering, "Why would I need a CDFA® when I'm going through a divorce?" I will help you understand your current financial situation. We'll work to create a realistic and comprehensive financial plan and develop a strategy that ensures you're financially secure both now and in the years to come. I understand how to work with attorneys and other divorce professionals.
This teamwork helps to create a solid foundation for negotiating a fair settlement. So, having me on your side during a divorce can be a total game-changer.
About Rhonda from her website: https://wfwcdivorce.com/about-3
It became clear to Rhonda that women going through divorce needed someone who was:
Strategic and knowledgeable, to help them prepare their financials before, during, and after the divorce so that they could live the quality of life they deserved.
Empowering them with financial knowledge and giving them clarity to make the right decisions.
Providing compassion and a steady voice when the circumstances were emotional and overwhelming.
So, Rhonda became an advocate for women who were going through a divorce.
For her clients, this often means being able to provide a great home for their family, continue to send the kids to private school, invest in causes that are important to her, buy a rental property for income, start a business, take a vacation, and live their best lives.
With 17,000 hours, 565 clients, and $20 million back in the hands of women going through divorce later, Rhonda is committed to filling the gap between traditional financial advisors and divorce support groups for women.
Contact: Rhonda Noordyk, CDFA
925-940-3056
info@wfwcllc.com

Trust Issues
EP 50 - Adversarial AI's Advance

Trust Issues

Play Episode Listen Later Apr 17, 2024 40:01


In the 50th episode of the Trust Issues podcast, host David Puner interviews Justin Hutchens, an innovation principal at Trace3 and co-host of the Cyber Cognition podcast (along with CyberArk's resident Technical Evangelist, White Hat Hacker and Transhuman Len Noe). They discuss the emergence and potential misuse of generative AI, especially natural language processing, for social engineering and adversarial hacking. Hutchens shares his insights on how AI can learn, reason – and even infer human emotions – and how it can be used to manipulate people into disclosing information or performing actions that compromise their security. They also talk about the role of identity in threat monitoring and detection, and the challenges and opportunities AI presents organizations in defending against evolving threats and how we can harness its power for the greater good. Tune in to learn more about the fascinating and ever-changing landscape of adversarial AI and identity security.

ITSPmagazine | Technology. Cybersecurity. Society
Adversarial Machine Learning: Realities of AI and ML in Cybersecurity | A Conversation with Dr. Anmol Agarwal | Redefining CyberSecurity with Sean Martin

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Apr 11, 2024 44:48


Guest: Dr. Anmol Agarwal, Senior Security ResearcherOn LinkedIn | https://www.linkedin.com/in/anmolsagarwal/On Twitter | https://twitter.com/anmolspeakerOn YouTube | https://www.youtube.com/channel/UCuWzfnJyZ0S68kG5e-lUZ6w____________________________Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]On ITSPmagazine | https://www.itspmagazine.com/sean-martinView This Show's Sponsors___________________________Episode NotesIn this episode of Redefining CyberSecurity, host Sean Martin explores the complex world of artificial intelligence (AI) and machine learning (ML) with Dr. Anmol Agarwal, a senior security researcher at Nokia and adjunct professor at George Washington University. The discussion kicks off with a reflection on the evolving dialogue around AI and ML, shedding light on the critical role of machine learning as the backbone of AI technology. Dr. Agarwal emphasizes machine learning's influence on the accessibility and popularity of generative AI, thanks to its application in natural language processing.The conversation transitions to Dr. Agarwal's intricate work on standardizing 5G and 6G technologies, underscoring the significance of security standardization in the rapid advancement of mobile technologies. Furthermore, they explore the utilization of machine learning in balancing network load and enabling emerging technologies like the metaverse, showcasing AI's prowess in facilitating fast data analytics.A substantial portion of the episode is dedicated to adversarial machine learning, where Dr. Agarwal explains its definition as the study of attacking and defending machine learning models. Through examples such as the potential manipulation of Tesla's autopilot via adversarial techniques, they provide a vivid picture of the threats posed by malicious actors leveraging AI for harmful purposes. 
The episode concludes with an appeal for a deeper understanding of AI and ML beyond the buzzwords, promoting a pragmatic approach to integrating these technologies in cybersecurity strategies.This episode offers valuable insights for cybersecurity leaders, CISOs, business executives, and security analysts, emphasizing the importance of comprehensive risk analysis and the ethical application of AI and ML in bolstering cybersecurity defenses.___________________________Watch this and other videos on ITSPmagazine's YouTube ChannelRedefining CyberSecurity Podcast with Sean Martin, CISSP playlist:

The Art of Speaking Up
302 | Assertiveness vs. aggressiveness: how to communicate in a way that's assertive but not adversarial

The Art of Speaking Up

Play Episode Listen Later Apr 9, 2024 25:29


Assertive communication is a non-negotiable skill for success in your corporate career - it's how you establish a reputation as a thought leader and it's how you make a big impact in the work you do. If you're anything like my clients, you might struggle to communicate assertively. You may be afraid of stepping on people's toes and you may struggle to figure out how to be assertive without crossing the line into being aggressive. Today's episode will help you figure out how to be assertive without being aggressive, and how to walk that fine line between being assertive and being adversarial. You'll leave this episode with clarity on WHY you're stuck in this area, and what to do about it. I hope you love this episode. To access the links mentioned in today's episode, head here:
Assertiveness Ebook: https://www.assertivenessebook.com/
Join the waitlist for The Art of Speaking Up Academy: https://jessguzikcoaching.com/academy/

Stories of Impact
How Adversarial Collaboration Makes Better Science & Better Scientists

Stories of Impact

Play Episode Listen Later Mar 5, 2024 37:21


In the dozens of episodes we've shared with you over the last four years, you've heard stories of experts examining the science behind everything from bees to whales, video games to dance, education systems to communication networks. Today, we're zooming out further, speaking with researchers who are exploring ways to improve how we do science and how we cultivate and educate better scientists. When you think of a scientist, what image comes to mind? Do you imagine a genius laboring solo in a lab, displaying little emotion as they logically analyze data? Let's challenge that stereotype — scientists are anything but dispassionate.   Read the transcript of this episode Subscribe to Stories of Impact wherever you listen to podcasts Find us on Facebook, Twitter, Instagram and YouTube Share your comments, questions and suggestions at info@storiesofimpact.org Supported by Templeton World Charity Foundation

BarCode
Hutch with Justin "Hutch" Hutchins

BarCode

Play Episode Listen Later Feb 12, 2024 67:00 Transcription Available


Hutch, an expert in AI and cybersecurity, discusses his early interest in using AI for algorithmic trading and automating social engineering attacks with chatbots. He highlights two main cyber risks of advanced AI - the ability to manipulate people and autonomously execute attacks. Hutch and Chris explore issues like commercialization of AI versus proprietary chatbots, and tech companies' ethical duties to reduce AI risks through testing and responsible development. They delve into potential weaponization of AI in lethal autonomous weapons and "flash wars", as well as risks from intelligent humanoids. The need for global AI partnerships is discussed, but challenged by current geopolitics. Private sector researchers and companies have a key role in addressing AI safety and risks. However, adversaries likely have an edge in exploiting AI vulnerabilities, underscoring the importance of innovative defense strategies.
TIMESTAMPS:
00:02:14 - Introduction to Justin Hutchins (Hutch) and his background
00:03:43 - Hutch's interest in AI and cybersecurity
00:08:43 - Discussion on GPT-4 and its key risks
00:15:21 - Comparison between different AI platforms
00:20:28 - Viability of weaponizing emerging technologies
00:25:10 - Viability of embedding AI into realistic form factors
00:30:53 - Psychological effects of chatbots on humanity
00:35:48 - The need for global partnerships to regulate AI
00:40:36 - Adapting AI capabilities for weaponization
00:47:30 - Adversarial threat actors and their adaptation to AI
00:50:46 - AI systems circumventing security controls
00:53:48 - The concept of singularity in AI
SYMLINKS
LinkedIn: https://www.linkedin.com/in/justinhutchens/
X: https://twitter.com/sociosploit/status/1546218889675259904
The Language of Deception - Weaponizing Next Generation: https://www.amazon.com/Language-Deception-Weaponizing-Next-Generation/dp/1394222548/
Sociosploit: https://www.sociosploit.com/
Cyber Cognition Podcast: https://www.itspmagazine.com/cyber-cognition-podcast
DRINK INSTRUCTION
The Hallucination
1 oz Elderflower Liqueur
1 oz Absinthe
1 oz Fresh Lemon Juice
Guava Soda
Add ice into a chilled cocktail glass. Add the Elderflower Liqueur, Absinthe, and lemon juice into a cocktail shaker without ice. Shake vigorously. Strain into the glass with ice. Top off with guava soda.
CONNECT WITH US
www.barcodesecurity.com
Become a Sponsor
Follow us on LinkedIn
Tweet us at @BarCodeSecurity
Email us at info@barcodesecurity.com

Daily Mind Medicine
#892 - It's Just Trying to Protect You

Daily Mind Medicine

Play Episode Listen Later Dec 13, 2023 4:49


@taylorawelch explores the intriguing concept that your mind is not trying to hurt you but rather protect you. He delves into the realm of imposter syndrome to show how it is your mind's way of safeguarding the real you from rejection. Join Taylor on a journey of self-discovery as we discuss the importance of granting yourself permission to move into new levels and seasons of life. Gain insights into the foundational mechanics of living and operating day-to-day, transcending the hustle and grind by embracing the power of acceptance and inheritance.
If you enjoyed the show please leave us a review to help push this message to more listeners around the world!
Please visit Taylorawelch.com to access all of Taylor's socials and content
Text Taylor: 615-326-5037
Daily Mind Medicine is back!
Chapters:
(01:58) - Challenging conversations with God.
(02:28) - The necessity of regression.
(03:10) - It's there to protect you.
(03:53) - Adversarial relationship within.
(04:15) - Winning a prize versus inheriting one.

The John Batchelor Show
1/2: #Tritium: The adversarial and non-transparent #PRC buys vast amounts of uranium ore from Canada & What is to be done? Henry Sokolski, NPEC

The John Batchelor Show

Play Episode Listen Later Dec 9, 2023 9:35


1/2: #Tritium: The adversarial and non-transparent #PRC buys vast amounts of  uranium ore from Canada & What is to be done? Henry Sokolski, NPEC https://nationalinterest.org/feature/chinese-nuclear-weapons-and-canada-uncivil-military-connection-207727 Beijing 1935

The John Batchelor Show
2/2: #Tritium: The adversarial and non-transparent #PRC buys vast amounts of uranium ore from Canada & What is to be done? Henry Sokolski, NPEC

The John Batchelor Show

Play Episode Listen Later Dec 9, 2023 10:05


2/2: #Tritium: The adversarial and non-transparent #PRC buys vast amounts of uranium ore from Canada & What is to be done? Henry Sokolski, NPEC https://nationalinterest.org/feature/chinese-nuclear-weapons-and-canada-uncivil-military-connection-207727 1849