Podcasts about human cognition

  • 76 podcasts
  • 209 episodes
  • 32m avg duration
  • 5 new episodes weekly
  • Latest: Nov 7, 2025

POPULARITY

[Popularity trend chart, 2017–2024]


Best podcasts about human cognition

Latest podcast episodes about human cognition

The Health Ranger Report
Brighteon Broadcast News, Nov 7, 2025 – Trump pushes toxic pharma drugs while GASLIGHTING the country about the U.S. economy

The Health Ranger Report

Nov 7, 2025 · 143:56


- Brighteon Broadcast News Introduction and AI Revolution Interview (0:10)
- Collapse of Human Doctors and Pharmaceutical Industry (1:51)
- Trump's Announcement and AI Replacements (11:06)
- Economic Collapse and AI Solutions (12:06)
- Replacing Congress with AI Agents (34:46)
- AI Capabilities and Public Perception (52:28)
- The Future of AI and Human Knowledge (1:23:58)
- Psychological Barriers and Humility in AI Interaction (1:25:23)
- The Future of the Book Industry (1:28:17)
- Customization and Personalization in AI (1:35:16)
- Advancements in AI and Human Cognition (1:36:27)
- Investing in Personal Growth and Skills (1:40:16)
- Decentralization and Privacy in AI (1:48:27)
- The Role of AI in Personal and Professional Development (1:57:17)
- The Future of AI and Robotics (2:07:23)
- The Global Race in AI and Technological Advancements (2:10:21)
- The Impact of AI on Employment and Society (2:18:24)

For more updates, visit: http://www.brighteon.com/channel/hrreport

NaturalNews videos would not be possible without you. As always, we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, freedom of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals, and vastly increased scientific transparency.

▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/
▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html
▶️ Brighteon: https://www.brighteon.com/channels/hrreport
▶️ Join Our Social Network: https://brighteon.social/@HealthRanger
▶️ Check In Stock Products at: https://PrepWithMike.com

ITSPmagazine | Technology. Cybersecurity. Society
Building a Real Security Culture: Why Most AppSec Champion Programs Fall Short | AppSec Contradictions: 7 Truths We Keep Ignoring — Episode 5 | A Musing On the Future of Cybersecurity with Sean Martin and TAPE9 | Read by TAPE9

ITSPmagazine | Technology. Cybersecurity. Society

Nov 6, 2025 · 2:24


Most organizations have security champions. Few have a real security culture. In this episode of AppSec Contradictions, Sean Martin explores why AppSec awareness efforts stall, why champion programs struggle to gain traction, and what leaders can do to turn intent into impact.

Redefining CyberSecurity
Building a Real Security Culture: Why Most AppSec Champion Programs Fall Short | AppSec Contradictions: 7 Truths We Keep Ignoring — Episode 5 | A Musing On the Future of Cybersecurity with Sean Martin and TAPE9 | Read by TAPE9

Redefining CyberSecurity

Nov 6, 2025 · 2:24


Most organizations have security champions. Few have a real security culture. In this episode of AppSec Contradictions, Sean Martin explores why AppSec awareness efforts stall, why champion programs struggle to gain traction, and what leaders can do to turn intent into impact.

Finding Genius Podcast
Inside The Primate Mind: What Animal Navigation Reveals About Human Cognition

Finding Genius Podcast

Nov 5, 2025 · 33:37


Can we learn more about humans by studying animal behavior? For Dr. Francine Dolins, this question lies at the heart of her life's work. An Assistant Professor of Psychology at the University of Michigan-Dearborn, Dr. Dolins explores the rich intersections between animal cognition, ecology, and the evolutionary foundations of behavior. With a Ph.D. in Cognitive Psychology and Behavioral Primatology from the University of Stirling and a BSc in Behavioral Ecology from the University of Sussex, her research delves into how non-human primates and humans navigate their worlds. From lemurs in Madagascar to tamarins in Costa Rica and Peru, Dr. Dolins studies spatial cognition, decision-making, and navigation in both natural habitats and experimental settings.

In this episode, we discuss:
  • Dr. Dolins's pioneering work on virtual reality research with apes.
  • How primates use mental maps and landmarks to make navigational decisions.
  • What current research can reveal about the evolution of intelligence, cooperation, and social cognition.

Dr. Dolins also collaborates with scientists worldwide, combining field primatology, technology, and comparative psychology to advance both animal welfare and conservation education.

Eager to learn more about this fascinating subject? Click play now to dive deep into how studying our closest relatives helps us understand ourselves. You can learn more about Dr. Dolins and her work here!

Science (Video)
Thinking in a Digital World

Science (Video)

Nov 3, 2025 · 30:11


We're surrounded by digital devices—from phones and tablets to streaming platforms and social media. With so many options at our fingertips, each of us faces choices about when and how to use technology in ways that align with our goals and values. In this talk, Kristy Hamilton, Assistant Professor in the Department of Communication at UCSB, shares research on how digital media shapes the way we see ourselves and, in turn, influences our everyday technology use. Hamilton uses experimental methods to study the strengths and challenges of human memory and cognition in digital environments, with the aim of helping people become more effective thinkers in today's media landscape. Series: "GRIT Talks" [Science] [Show ID: 41030]

Science (Audio)
Thinking in a Digital World

Science (Audio)

Nov 3, 2025 · 30:11


We're surrounded by digital devices—from phones and tablets to streaming platforms and social media. With so many options at our fingertips, each of us faces choices about when and how to use technology in ways that align with our goals and values. In this talk, Kristy Hamilton, Assistant Professor in the Department of Communication at UCSB, shares research on how digital media shapes the way we see ourselves and, in turn, influences our everyday technology use. Hamilton uses experimental methods to study the strengths and challenges of human memory and cognition in digital environments, with the aim of helping people become more effective thinkers in today's media landscape. Series: "GRIT Talks" [Science] [Show ID: 41030]

UC Santa Barbara (Audio)
Thinking in a Digital World

UC Santa Barbara (Audio)

Nov 3, 2025 · 30:11


We're surrounded by digital devices—from phones and tablets to streaming platforms and social media. With so many options at our fingertips, each of us faces choices about when and how to use technology in ways that align with our goals and values. In this talk, Kristy Hamilton, Assistant Professor in the Department of Communication at UCSB, shares research on how digital media shapes the way we see ourselves and, in turn, influences our everyday technology use. Hamilton uses experimental methods to study the strengths and challenges of human memory and cognition in digital environments, with the aim of helping people become more effective thinkers in today's media landscape. Series: "GRIT Talks" [Science] [Show ID: 41030]

Mutuality Matters Podcast
(Women and Words) Inside the Bible Translation Room with Dr. Joel B. Green: Part 1

Mutuality Matters Podcast

Oct 31, 2025 · 35:12


In this episode of Mutuality Matters, hosted by CBE International, Dr. Joel B. Green joins the conversation to discuss the importance of Bible translation and interpretation, emphasizing the inclusion of women in the process. Dr. Green, a seasoned Bible scholar and faculty administrator, shares his extensive career and insights on the subject. He underscores the necessity of diverse perspectives in translation teams, discusses common biases in Bible translations, and offers advice on how English readers can detect and overcome these biases. The episode also delves into the significance of various successful Bible translations, the training of translators, and the impact of denominational and gender diversity on the integrity of translations.

00:00 Introduction to Mutuality Matters
00:38 Meet Dr. Joel Green
02:22 Joel Green's Academic Journey
05:38 Challenges in Bible Scholarship
06:48 The Importance of Women in Bible Translation
12:23 Overcoming Translation Bias
19:38 Training and Selecting Bible Translators
27:13 Successful Bible Translations
34:38 Conclusion and Call to Action

Guest Bio

Joel B. Green joined Fuller's faculty in 2007 and has served in several administrative posts: associate dean for the Center for Advanced Theological Studies (2008–2016, 2019–present), dean of the School of Theology (2014–2018), and provost (2016–2018). Prior to coming to Fuller, Dr. Green served for ten years at Asbury Theological Seminary as professor of New Testament interpretation, as dean of the School of Theology, and as provost. Previous faculty and academic-administrative appointments include the American Baptist Seminary of the West / Graduate Theological Union (1992–1997) and New College for Advanced Christian Studies, Berkeley (1985–1992).

Green has written or edited some 55 books, including 8 that have won awards: Connections: A Lectionary Commentary for Preaching and Worship, Year C, Volume 1, Advent through Epiphany (with Thomas Long, Luke Powery, and Cynthia Rigby, 2018), Dictionary of Jesus and the Gospels (2nd ed., 2013), Common English Bible Study Bible (2013), Dictionary of Scripture and Ethics (2011), In Search of the Soul: Four Views of the Mind-Body Problem (2nd ed., 2010), Introducing the New Testament: Its Literature and Theology (with Paul J. Achtemeier and Marianne Meye Thompson, 2001), The Gospel of Luke (1997), and Dictionary of Jesus and the Gospels (with Scot McKnight, 1992). Among his recent publications are Discovering Luke (2021), Luke as Narrative Theologian (2020), Conversion in Luke-Acts: Divine Initiative, Human Cognition, and the People of God (2015), Why Salvation? (2013), and The World of the New Testament: Cultural, Social and Historical Contexts (with Lee Martin McDonald, 2013).

He is the editor of the New International Commentary on the New Testament, as well as the Two Horizons New Testament Commentary, and serves on the editorial boards of the Journal of Theological Interpretation, Theology and Science, and Science and Christian Belief. Green has been elected to membership in both Studiorum Novi Testamenti Societas (SNTS) and the International Society for Science and Religion (ISSR). Green has 12 years of pastoral ministry experience.

Related Resources

Intersectionality: Words Matter! Freedom through Lectionary and Bible Translation with Rev. Dr. Wilda C. Gafney: https://www.cbeinternational.org/resource/intersectionality-words-matter/

Women and Words: Comparing Bible Translations with Rev. Dr. Aída Besançon Spencer: https://www.cbeinternational.org/resource/women-and-words-comparing-bible-translations/

Women and Words: Exploring Women's Biblical Equality Through Bible Translation with Dr. Roy Ciampa: https://www.cbeinternational.org/resource/women-and-words-exploring-womens-biblical-equality-through-bible-translation/

Disclaimer

The opinions expressed in CBE's Mutuality Matters podcast are those of its hosts or guests and do not purport to reflect the opinions or views of CBE International or its members or chapters worldwide. The designations employed in this podcast and the presentation of content therein do not imply the expression of any opinion whatsoever on the part of CBE concerning the legal status of any country, area or territory or of its authorities, or concerning the delimitation of its frontiers.

ITSPmagazine | Technology. Cybersecurity. Society
CI/CD Pipeline Security: Why Attackers Breach Your Software Pipeline and Own Your Build Before Production | AppSec Contradictions: 7 Truths We Keep Ignoring — Episode 4 | A Musing On the Future of Cybersecurity with Sean Martin and TAPE9 | Read by TAPE

ITSPmagazine | Technology. Cybersecurity. Society

Oct 29, 2025 · 3:38


Organizations pour millions into protecting running applications—yet attackers are targeting the delivery path itself. This episode of AppSec Contradictions reveals why CI/CD and cloud pipelines are becoming the new frontline in cybersecurity.

Redefining CyberSecurity
CI/CD Pipeline Security: Why Attackers Breach Your Software Pipeline and Own Your Build Before Production | AppSec Contradictions: 7 Truths We Keep Ignoring — Episode 4 | A Musing On the Future of Cybersecurity with Sean Martin and TAPE9 | Read by TAPE

Redefining CyberSecurity

Oct 29, 2025 · 3:38


Organizations pour millions into protecting running applications—yet attackers are targeting the delivery path itself. This episode of AppSec Contradictions reveals why CI/CD and cloud pipelines are becoming the new frontline in cybersecurity.

ITSPmagazine | Technology. Cybersecurity. Society
From Sampling to Scraping: AI Music, Rights, and the Return of Creative Control | A Musing On The Connection Between Music, Technology, and Creativity | Music Evolves: Sonic Frontiers with Sean Martin and TAPE9 | Read by TAPE9

ITSPmagazine | Technology. Cybersecurity. Society

Oct 16, 2025 · 10:18


Show Notes

In this episode, we unpack the core ideas behind the Sonic Frontiers article “From Sampling to Scraping: AI Music, Rights, and the Return of Creative Control.” As AI-generated music floods streaming platforms, rights holders are deploying new tools like neural fingerprinting to detect derivative works — even when no direct sampling occurs. But what does it mean to “detect influence,” and can algorithms truly distinguish theft from inspiration?

We explore the implications for artists who want to experiment with AI without being replaced by it, and the shifting desires of listeners who may soon prefer human-made music the way some still seek out vinyl, film cameras, or wooden roller coasters — not for efficiency, but for the feel.

The article also touches on the burden of rights enforcement in this new age. While major labels can embed detection systems, who protects the independent artist? And if AI enables anyone to create, does it also require everyone to monitor?

This episode invites you to reflect on what we value in music: speed and volume, or craft and control?

ITSPmagazine | Technology. Cybersecurity. Society
Beyond the Title: What It Really Takes to Be a CISO Today — Insights Following A Conversation with SolarWinds CISO, Tim Brown | A Musing On the Future of Cybersecurity with Sean Martin and TAPE9 | Read by TAPE9

ITSPmagazine | Technology. Cybersecurity. Society

Oct 16, 2025 · 8:26


What does it really take to be a CISO the business can rely on? In this episode, Sean Martin shares insights from a recent conversation with Tim Brown, CISO at SolarWinds, following his keynote at AISA CyberCon and his role in leading a CISO Bootcamp for current and future security leaders. The article at the heart of this episode focuses not on technical skills or frameworks, but on the leadership qualities that matter most: context, perspective, communication, and trust.

Tim's candid reflections — including the personal toll of leading through a crisis — remind us that clarity doesn't come from control. It comes from connection. CISOs must communicate risk in ways that resonate across teams and business leaders. They need to build trusted relationships before they're tested and create space for themselves and their teams to process pressure in healthy, sustainable ways.

Whether you're already in the seat or working toward it, this conversation invites you to rethink what preparation really looks like. It also leaves you with two key questions: Where do you get your clarity, and who are you learning from? Tune in, reflect, and join the conversation.

Redefining CyberSecurity
Beyond the Title: What It Really Takes to Be a CISO Today — Insights Following A Conversation with SolarWinds CISO, Tim Brown | A Musing On the Future of Cybersecurity with Sean Martin and TAPE9 | Read by TAPE9

Redefining CyberSecurity

Oct 16, 2025 · 8:26


What does it really take to be a CISO the business can rely on? In this episode, Sean Martin shares insights from a recent conversation with Tim Brown, CISO at SolarWinds, following his keynote at AISA CyberCon and his role in leading a CISO Bootcamp for current and future security leaders. The article at the heart of this episode focuses not on technical skills or frameworks, but on the leadership qualities that matter most: context, perspective, communication, and trust.

Tim's candid reflections — including the personal toll of leading through a crisis — remind us that clarity doesn't come from control. It comes from connection. CISOs must communicate risk in ways that resonate across teams and business leaders. They need to build trusted relationships before they're tested and create space for themselves and their teams to process pressure in healthy, sustainable ways.

Whether you're already in the seat or working toward it, this conversation invites you to rethink what preparation really looks like. It also leaves you with two key questions: Where do you get your clarity, and who are you learning from? Tune in, reflect, and join the conversation.

ITSPmagazine | Technology. Cybersecurity. Society
When the Coders Don't Code: What Happens When AI Coding Tools Go Dark? | A Musing On the Future of Cybersecurity with Sean Martin and TAPE9 | Read by TAPE9

ITSPmagazine | Technology. Cybersecurity. Society

Oct 8, 2025 · 9:35


In this issue of the Future of Cyber newsletter, Sean Martin digs into a topic that's quietly reshaping how software gets built—and how it breaks: the rise of AI-powered coding tools like ChatGPT, Claude, and GitHub Copilot.

These tools promise speed, efficiency, and reduced boilerplate—but what are the hidden trade-offs? What happens when the tools go offline, or when the systems built through them are so abstracted that even the engineers maintaining them don't fully understand what they're working with?

Drawing from conversations across the cybersecurity, legal, and developer communities—including a recent legal tech conference where law firms are empowering attorneys to “vibe code” internal tools—this article doesn't take a hard stance. Instead, it raises urgent questions:

  • Are we creating shadow logic no one can trace?
  • Do developers still understand the systems they're shipping?
  • What happens when incident response teams face AI-generated code with no documentation?
  • Are AI-generated systems introducing silent fragility into critical infrastructure?

The piece also highlights insights from a recent podcast conversation with security architect Izar Tarandach, who compares AI coding to junior development: fast and functional, but in need of serious oversight. He warns that organizations rushing to automate development may be building brittle systems on shaky foundations, especially when security practices are assumed rather than applied.

This is not a fear-driven screed or a rejection of AI. Rather, it's a call to assess new dependencies, rethink development accountability, and start building contingency plans before outages, hallucinations, or misconfigurations force the issue.

If you're a CISO, developer, architect, risk manager—or anyone involved in software delivery or security—this article is designed to make you pause, think, and ideally, respond.

Redefining CyberSecurity
When the Coders Don't Code: What Happens When AI Coding Tools Go Dark? | A Musing On the Future of Cybersecurity with Sean Martin and TAPE9 | Read by TAPE9

Redefining CyberSecurity

Oct 8, 2025 · 9:35


In this issue of the Future of Cyber newsletter, Sean Martin digs into a topic that's quietly reshaping how software gets built—and how it breaks: the rise of AI-powered coding tools like ChatGPT, Claude, and GitHub Copilot.

These tools promise speed, efficiency, and reduced boilerplate—but what are the hidden trade-offs? What happens when the tools go offline, or when the systems built through them are so abstracted that even the engineers maintaining them don't fully understand what they're working with?

Drawing from conversations across the cybersecurity, legal, and developer communities—including a recent legal tech conference where law firms are empowering attorneys to “vibe code” internal tools—this article doesn't take a hard stance. Instead, it raises urgent questions:

  • Are we creating shadow logic no one can trace?
  • Do developers still understand the systems they're shipping?
  • What happens when incident response teams face AI-generated code with no documentation?
  • Are AI-generated systems introducing silent fragility into critical infrastructure?

The piece also highlights insights from a recent podcast conversation with security architect Izar Tarandach, who compares AI coding to junior development: fast and functional, but in need of serious oversight. He warns that organizations rushing to automate development may be building brittle systems on shaky foundations, especially when security practices are assumed rather than applied.

This is not a fear-driven screed or a rejection of AI. Rather, it's a call to assess new dependencies, rethink development accountability, and start building contingency plans before outages, hallucinations, or misconfigurations force the issue.

If you're a CISO, developer, architect, risk manager—or anyone involved in software delivery or security—this article is designed to make you pause, think, and ideally, respond.

ITSPmagazine | Technology. Cybersecurity. Society
Lo-Fi Music and the Art of Imperfection — When Technical Limitations Become Creative Liberation | Analog Minds in a Digital World: Part 2 | Musing On Society And Technology Newsletter | Article Written By Marco Ciappelli

ITSPmagazine | Technology. Cybersecurity. Society

Oct 5, 2025 · 14:33


⸻ Podcast: Redefining Society and Technologyhttps://redefiningsocietyandtechnologypodcast.com _____ Newsletter: Musing On Society And Technology https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144/_____ Watch on Youtube: https://youtu.be/nFn6CcXKMM0_____ My Website: https://www.marcociappelli.com_____________________________This Episode's SponsorsBlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.BlackCloak:  https://itspm.ag/itspbcweb_____________________________A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3A new transmission from Musing On Society and Technology Newsletter, by Marco CiappelliReflections from Our Hybrid Analog-Digital SocietyFor years on the Redefining Society and Technology Podcast, I've explored a central premise: we live in a hybrid -digital society where the line between physical and virtual has dissolved into something more complex, more nuanced, and infinitely more human than we often acknowledge.Introducing a New Series: Analog Minds in a Digital World:Reflections from Our Hybrid Analog-Digital SocietyPart II: Lo-Fi Music and the Art of Imperfection — When Technical Limitations Become Creative LiberationI've been testing small speakers lately. Nothing fancy—just little desktop units that cost less than a decent dinner. As I cycled through different genres, something unexpected happened. Classical felt lifeless, missing all its dynamic range. Rock came across harsh and tinny. Jazz lost its warmth and depth. But lo-fi? Lo-fi sounded... perfect.Those deliberate imperfections—the vinyl crackle, the muffled highs, the compressed dynamics—suddenly made sense on equipment that couldn't reproduce perfection anyway. The aesthetic limitations of the music matched the technical limitations of the speakers. 
It was like discovering that some songs were accidentally designed for constraints I never knew existed.This moment sparked a bigger realization about how we navigate our hybrid analog-digital world: sometimes our most profound innovations emerge not from perfection, but from embracing limitations as features.Lo-fi wasn't born in boardrooms or designed by committees. It emerged from bedrooms, garages, and basement studios where young musicians couldn't afford professional equipment. The 4-track cassette recorder—that humble Portastudio that let you layer instruments onto regular cassette tapes for a fraction of what professional studio time cost—became an instrument of democratic creativity. Suddenly, anyone could record music at home. Sure, it would sound "imperfect" by industry standards, but that imperfection carried something the polished recordings lacked: authenticity.The Velvet Underground recorded on cheap equipment and made it sound revolutionary—so revolutionary that, as the saying goes, they didn't sell many records, but everyone who bought one started a band. Pavement turned bedroom recording into art. Beck brought lo-fi to the mainstream with "Mellow Gold." These weren't artists settling for less—they were discovering that constraints could breed creativity in ways unlimited resources never could.Today, in our age of infinite digital possibility, we see a curious phenomenon: young creators deliberately adding analog imperfections to their perfectly digital recordings. They're simulating tape hiss, vinyl scratches, and tube saturation using software plugins. We have the technology to create flawless audio, yet we choose to add flaws back in.What does this tell us about our relationship with technology and authenticity?There's something deeply human about working within constraints. Twitter's original 140-character limit didn't stifle creativity—it created an entirely new form of expression. 
Instagram's square format—a deliberate homage to Polaroid's instant film—forced photographers to think differently about composition. Think about that for a moment: Polaroid's square format was originally a technical limitation of instant film chemistry and optics, yet it became so aesthetically powerful that decades later, a digital platform with infinite formatting possibilities chose to recreate that constraint. Even more, Instagram added filters that simulated the color shifts, light leaks, and imperfections of analog film. We had achieved perfect digital reproduction, and immediately started adding back the "flaws" of the technology we'd left behind.The same pattern appears in video: Super 8 film gave you exactly 3 minutes and 12 seconds per cartridge at standard speed—grainy, saturated, light-leaked footage that forced filmmakers to be economical with every shot. Today, TikTok recreates that brevity digitally, spawning a generation of micro-storytellers who've mastered the art of the ultra-short form, sometimes even adding Super 8-style filters to their perfect digital video.These platforms succeeded not despite their limitations, but because of them. Constraints force innovation. They make the infinite manageable. They create a shared language of creative problem-solving.Lo-fi music operates on the same principle. When you can't capture perfect clarity, you focus on capturing perfect emotion. When your equipment adds character, you learn to make that character part of your voice. When technical perfection is impossible, artistic authenticity becomes paramount.This is profoundly relevant to how we think about artificial intelligence and human creativity today. 
As AI becomes capable of generating increasingly "perfect" content—flawless prose, technically superior compositions, aesthetically optimized images—we find ourselves craving the beautiful imperfections that mark something as unmistakably human.Walking through any record store today, you'll see teenagers buying vinyl albums they could stream in perfect digital quality for free. They're choosing the inconvenience of physical media, the surface noise, the ritual of dropping the needle. They're purchasing imperfection at a premium.This isn't nostalgia—most of these kids never lived in the vinyl era. It's something deeper: a recognition that perfect reproduction might not equal perfect experience. The crackle and warmth of analog playback creates what audiophiles call "presence"—a sense that the music exists in the same physical space as the listener.Lo-fi music replicates this phenomenon in digital form. It takes the clinical perfection of digital audio and intentionally degrades it to feel more human. The compression, the limited frequency range, the background noise—these aren't bugs, they're features. They create the sonic equivalent of a warm embrace.In our hyperconnected, always-optimized digital existence, lo-fi offers something precious: permission to be imperfect. It's background music that doesn't demand your attention, ambient sound that acknowledges life's messiness rather than trying to optimize it away.Here's where it gets philosophically interesting: we're using advanced digital technology to simulate the limitations of obsolete analog technology. Young producers spend hours perfecting their "imperfect" sound, carefully curating randomness, precisely engineering spontaneity.This creates a fascinating paradox. Is simulated authenticity still authentic? 
When we use AI-powered plugins to add "vintage" character to our digital recordings, are we connecting with something real, or just consuming a nostalgic fantasy?I think the answer lies not in the technology itself, but in the intention behind it. Lo-fi creators aren't trying to fool anyone—the artifice is obvious. They're creating a shared aesthetic language that values emotion over technique, atmosphere over precision, humanity over perfection.In a world where algorithms optimize everything for maximum engagement, lo-fi represents a conscious choice to optimize for something else entirely: comfort, focus, emotional resonance. It's a small rebellion against the tyranny of metrics.As artificial intelligence becomes increasingly capable of generating "perfect" content, the value of obviously human imperfection may paradoxically increase. The tremor in a hand-drawn line, the slight awkwardness in authentic conversation, the beautiful inefficiency of analog thinking—these become markers of genuine human presence.The challenge isn't choosing between analog and digital, perfection and imperfection. It's learning to consciously navigate between them, understanding when limitations serve us and when they constrain us, recognizing when optimization helps and when it hurts.My small speakers taught me something important: sometimes the best technology isn't the one with the most capabilities, but the one whose limitations align with our human needs. Lo-fi music sounds perfect on imperfect speakers because both embrace the same truth—that beauty often emerges not from the absence of flaws, but from making peace with them.In our quest to build better systems, smarter algorithms, and more efficient processes, we might occasionally pause to ask: what are we optimizing for? And what might we be losing in the pursuit of digital perfection?The lo-fi phenomenon—and its parallels in photography, video, and every art form we've digitized—reveals something profound about human nature. 
We are not creatures built for perfection. We are shaped by friction, by constraint, by the beautiful accidents that occur when things don't work exactly as planned. The crackle of vinyl, the grain of film, the compression of cassette tape—these aren't just nostalgic affectations. They're reminders that imperfection is where humanity lives. That the beautiful inefficiency of analog thinking—messy, emotional, unpredictable—is not a bug to be fixed but a feature to be preserved.

Sometimes the most profound technology is the one that helps us remember what it means to be beautifully, imperfectly human. And maybe, in our hybrid analog-digital world, that's the most important thing we can carry forward.

Let's keep exploring what it means to be human in this Hybrid Analog Digital Society.

End of transmission.
______________________________________

Colloques du Collège de France - Collège de France
Colloque - Valentin Wyart : The What?, How? And Why? Of Behavior: Using Cognitive Computational Models to Answer Distinct Questions about Human Cognition

Colloques du Collège de France - Collège de France

Play Episode Listen Later Oct 3, 2025 18:22


Stanislas Dehaene
Chaire Psychologie cognitive expérimentale (Experimental Cognitive Psychology)
Academic year 2025-2026
Collège de France
Colloquium: Seeing the Mind, Educating the Brain
Theme: Human Singularity

Colloque - Valentin Wyart: The What?, How? And Why? Of Behavior: Using Cognitive Computational Models to Answer Distinct Questions about Human Cognition

Valentin Wyart

Abstract: Quantitative modeling approaches are routinely used in cognitive science to make sense of behavior. Statistical models are designed to test *what* specific patterns are present in behavior, whereas cognitive computational models are developed to describe *how* specific behavioral patterns may emerge from latent cognitive processes. These two types of modeling approaches have successfully identified characteristic (and sometimes suboptimal) features of human learning and decision-making under uncertainty. In this talk, I will argue that cognitive computational models can also be used to answer the distinct question of *why* these characteristic features are there. I will draw on recent studies that rely on different classes of models (low-dimensional algorithmic models, high-dimensional neural networks) to explain characteristic features of human cognition in terms of latent objectives and constraints.

ITSPmagazine | Technology. Cybersecurity. Society
SBOMs in Application Security: From Compliance Trophy to Real Risk Reduction | AppSec Contradictions: 7 Truths We Keep Ignoring — Episode 3 | A Musing On the Future of Cybersecurity with Sean Martin and TAPE9 | Read by TAPE9

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Oct 1, 2025 2:33


SBOMs were supposed to be the ingredient label for software—bringing transparency, faster response, and stronger trust. But reality shows otherwise. Fewer than 1% of GitHub projects have policy-driven SBOMs. Only 15% of developer SBOM questions get answered. And while 86% of EU firms claim supply chain policies, just 47% actually fund them.

So why do SBOMs stall as compliance artifacts instead of risk-reduction tools? And what happens when they do work?

In this episode of AppSec Contradictions, Sean Martin examines:
• Why SBOM adoption is lagging
• The cost of static SBOMs for developers, AppSec teams, and business leaders
• Real-world examples where SBOMs deliver measurable value
• How AISBOMs are extending transparency into AI models and data

Catch the full companion article in the Future of Cybersecurity newsletter for deeper analysis and more research.

Redefining CyberSecurity
SBOMs in Application Security: From Compliance Trophy to Real Risk Reduction | AppSec Contradictions: 7 Truths We Keep Ignoring — Episode 3 | A Musing On the Future of Cybersecurity with Sean Martin and TAPE9 | Read by TAPE9

Redefining CyberSecurity

Play Episode Listen Later Oct 1, 2025 2:33


SBOMs were supposed to be the ingredient label for software—bringing transparency, faster response, and stronger trust. But reality shows otherwise. Fewer than 1% of GitHub projects have policy-driven SBOMs. Only 15% of developer SBOM questions get answered. And while 86% of EU firms claim supply chain policies, just 47% actually fund them.

So why do SBOMs stall as compliance artifacts instead of risk-reduction tools? And what happens when they do work?

In this episode of AppSec Contradictions, Sean Martin examines:
• Why SBOM adoption is lagging
• The cost of static SBOMs for developers, AppSec teams, and business leaders
• Real-world examples where SBOMs deliver measurable value
• How AISBOMs are extending transparency into AI models and data

Catch the full companion article in the Future of Cybersecurity newsletter for deeper analysis and more research.

ITSPmagazine | Technology. Cybersecurity. Society
The Problem With Threat Modeling in Application Security: Too Slow, Too Theoretical, Not Agile | AppSec Contradictions: 7 Truths We Keep Ignoring — Episode 2 | A Musing On the Future of Cybersecurity with Sean Martin and TAPE9 | Read by TAPE9

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Sep 12, 2025 3:58


Threat modeling is often called the foundation of secure software design—anticipating attackers, uncovering flaws, and embedding resilience before a single line of code is written. But does it really work in practice?

In this episode of AppSec Contradictions, Sean Martin explores why threat modeling so often fails to deliver:
• It's treated as a one-time exercise, not a continuous process
• Research shows teams who put risk first discover 2x more high-priority threats
• Yet fewer than 4 in 10 organizations use systematic threat modeling at scale

Drawing on insights from SANS, Forrester, and Gartner, Sean breaks down the gap between theory and reality—and why evolving our processes, not just our models, is the only path forward.

Redefining CyberSecurity
The Problem With Threat Modeling in Application Security: Too Slow, Too Theoretical, Not Agile | AppSec Contradictions: 7 Truths We Keep Ignoring — Episode 2 | A Musing On the Future of Cybersecurity with Sean Martin and TAPE9 | Read by TAPE9

Redefining CyberSecurity

Play Episode Listen Later Sep 12, 2025 3:58


Threat modeling is often called the foundation of secure software design—anticipating attackers, uncovering flaws, and embedding resilience before a single line of code is written. But does it really work in practice?

In this episode of AppSec Contradictions, Sean Martin explores why threat modeling so often fails to deliver:
• It's treated as a one-time exercise, not a continuous process
• Research shows teams who put risk first discover 2x more high-priority threats
• Yet fewer than 4 in 10 organizations use systematic threat modeling at scale

Drawing on insights from SANS, Forrester, and Gartner, Sean breaks down the gap between theory and reality—and why evolving our processes, not just our models, is the only path forward.

ITSPmagazine | Technology. Cybersecurity. Society
AI in Application Security: Why False Positives Still Overwhelm Teams Despite the Hype | AppSec Contradictions: 7 Truths We Keep Ignoring — Episode 1 | A Musing On the Future of Cybersecurity with Sean Martin and TAPE9 | Read by TAPE9

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Sep 9, 2025 2:37


AI is everywhere in application security today — but instead of fixing the problem of false positives, it often makes the noise worse. In this first episode of AppSec Contradictions, Sean Martin explores why AI in application security is failing to deliver on its promises.

False positives dominate AppSec programs, with analysts wasting time on irrelevant alerts, developers struggling with insecure AI-written code, and business leaders watching ROI erode. Industry experts like Forrester and Gartner warn that without strong governance, AI risks amplifying chaos instead of clarifying risk.

This episode breaks down:
• Why 70% of analyst time is wasted on false positives
• How AI-generated code introduces new security risks
• What “alert fatigue” means for developers, security teams, and business leaders
• Why automating bad processes creates more noise, not less

Redefining CyberSecurity
AI in Application Security: Why False Positives Still Overwhelm Teams Despite the Hype | AppSec Contradictions: 7 Truths We Keep Ignoring — Episode 1 | A Musing On the Future of Cybersecurity with Sean Martin and TAPE9 | Read by TAPE9

Redefining CyberSecurity

Play Episode Listen Later Sep 9, 2025 2:37


AI is everywhere in application security today — but instead of fixing the problem of false positives, it often makes the noise worse. In this first episode of AppSec Contradictions, Sean Martin explores why AI in application security is failing to deliver on its promises.

False positives dominate AppSec programs, with analysts wasting time on irrelevant alerts, developers struggling with insecure AI-written code, and business leaders watching ROI erode. Industry experts like Forrester and Gartner warn that without strong governance, AI risks amplifying chaos instead of clarifying risk.

This episode breaks down:
• Why 70% of analyst time is wasted on false positives
• How AI-generated code introduces new security risks
• What “alert fatigue” means for developers, security teams, and business leaders
• Why automating bad processes creates more noise, not less

ITSPmagazine | Technology. Cybersecurity. Society
We Have All the Information, So Why Do We Know Less? | Analog Minds in a Digital World: Part 1 | Musing On Society And Technology Newsletter | Article Written By Marco Ciappelli

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Sep 8, 2025 9:45


⸻
Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com
_____
Newsletter: Musing On Society And Technology
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144/
_____
Watch on Youtube: https://youtu.be/nFn6CcXKMM0
_____
My Website: https://www.marcociappelli.com
_____________________________
This Episode's Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
_____________________________
A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3

We Have All the Information, So Why Do We Know Less?

Introducing: Reflections from Our Hybrid Analog-Digital Society

For years on the Redefining Society and Technology Podcast, I've explored a central premise: we live in a hybrid analog-digital society where the line between physical and virtual has dissolved into something more complex, more nuanced, and infinitely more human than we often acknowledge.

But with the explosion of generative AI, this hybrid reality isn't just a philosophical concept anymore—it's our lived experience. Every day, we navigate between analog intuition and digital efficiency, between human wisdom and machine intelligence, between the messy beauty of physical presence and the seductive convenience of virtual interaction.

This newsletter series will explore the tensions, paradoxes, and possibilities of being fundamentally analog beings in an increasingly digital world. We're not just using technology; we're being reshaped by it while simultaneously reshaping it with our deeply human, analog sensibilities.

Analog Minds in a Digital World: Part 1
We Have All the Information, So Why Do We Know Less?

I was thinking about my old set of encyclopedias the other day.
You know, those heavy volumes that sat on shelves like silent guardians of knowledge, waiting for someone curious enough to crack them open. When I needed to write a school report on, say, the Roman Empire, I'd pull out Volume R and start reading.

But here's the thing: I never just read about Rome.

I'd get distracted by Romania, stumble across something about Renaissance art, flip backward to find out more about the Reformation. By the time I found what I was originally looking for, I'd accidentally learned about three other civilizations, two art movements, and the invention of the printing press. The journey was messy, inefficient, and absolutely essential.

And if I was in a library... well then just imagine the possibilities.

Today, I ask Google, Claude or ChatGPT about the Roman Empire, and in thirty seconds, I have a perfectly formatted, comprehensive overview that would have taken me hours to compile from those dusty volumes. It's accurate, complete, and utterly forgettable.

We have access to more information than any generation in human history. Every fact, every study, every perspective is literally at our fingertips. Yet somehow, we seem to know less. Not in terms of data acquisition—we're phenomenal at that—but in terms of deep understanding, contextual knowledge, and what I call "accidental wisdom."

The difference isn't just about efficiency. It's about the fundamental way our minds process and retain information. When you physically search through an encyclopedia, your brain creates what cognitive scientists call "elaborative encoding"—you remember not just the facts, but the context of finding them, the related information you encountered, the physical act of discovery itself.

When AI gives us instant answers, we bypass this entire cognitive process. We get the conclusion without the journey, the destination without the map.
It's like being teleported to Rome without seeing the countryside along the way—technically efficient, but something essential is lost in translation.

This isn't nostalgia talking. I use AI daily for research, writing, and problem-solving. It's an incredible tool. But I've noticed something troubling: my tolerance for not knowing things immediately has disappeared. The patience required for deep learning—the kind that happens when you sit with confusion, follow tangents, make unexpected connections—is atrophying like an unused muscle.

We're creating a generation of analog minds trying to function in a digital reality that prioritizes speed over depth, answers over questions, conclusions over curiosity. And in doing so, we might be outsourcing the very process that makes us wise.

Ancient Greeks had a concept called "metis"—practical wisdom that comes from experience, pattern recognition, and intuitive understanding developed through continuous engagement with complexity. In Ancient Greek, metis (Μῆτις) means wisdom, skill, or craft, and it also describes a form of wily, cunning intelligence. It can refer to the pre-Olympian goddess of wisdom and counsel, who was the first wife of Zeus and mother of Athena, or it can refer to the concept of cunning intelligence itself, a trait exemplified by figures like Odysseus. It's the kind of knowledge you can't Google because it lives in the space between facts, in the connections your mind makes when it has time to wander, wonder, and discover unexpected relationships.

AI gives us information. But metis? That still requires an analog mind willing to get lost, make mistakes, and discover meaning in the margins.

The question isn't whether we should abandon these digital tools—they're too powerful and useful to ignore.
The question is whether we can maintain our capacity for the kind of slow, meandering, gloriously inefficient thinking that actually builds wisdom.

Maybe the answer isn't choosing between analog and digital, but learning to be consciously hybrid. Use AI for what it does best—rapid information processing—while protecting the slower, more human processes that transform information into understanding. We need to preserve the analog pathways of learning alongside digital efficiency.

Because in a world where we can instantly access any fact, the most valuable skill might be knowing which questions to ask—and having the patience to sit with uncertainty until real insight emerges from the continuous, contextual, beautifully inefficient process of analog thinking.

Next transmission: "The Paradox of Infinite Choice: Why Having Everything Available Means Choosing Nothing"

Let's keep exploring what it means to be human in this Hybrid Analog Digital Society.

End of transmission.

Marco
______________________________________

Impulse - Meeting Healthcare Pioneers
#46 - Rethinking the stethoscope and its use - Diana van Stijn - Lapsi Health

Impulse - Meeting Healthcare Pioneers

Play Episode Listen Later Sep 7, 2025 46:45


When we think of the term "medicine," one of the symbols that comes to mind is the stethoscope. Its use in almost every physical examination by healthcare providers makes it one of the most iconic and widely recognized medical devices. Despite the emergence of digital technologies and the first electronic stethoscopes, their operating principle and design still rely heavily on the patent filed by Dr. David Littmann in the 1960s. But in the era of miniaturization and artificial intelligence, there is a reason to believe that the stethoscope is ripe for a true transformation. Among those who share this opinion are Lapsi Health and the team of Diana van Stijn. Inspired by her experience as a pediatrician and the central role of body sounds in her profession, Diana envisions a new kind of stethoscope—without tubing, fitting in the palm of a hand, and leveraging the latest technological advancements to better equip physicians. This vision gave birth to Keikku, a next-generation digital stethoscope. Beyond rethinking the format and use of the stethoscope for doctors, Diana and her colleagues imagine a future where patients can use it themselves, paving the way for continuous auscultation. In this episode, we explore how Diana and her team are turning this vision into reality, and what it could mean for the prevention, detection, and monitoring of diseases, extending far beyond pediatrics. With humor and enthusiasm, Diana also shares a few secrets about her approach to balancing clinical practice, motherhood, and entrepreneurship in healthcare! 
Timeline:
00:00:00 - Diana's background as a Pediatrician, startup Co-Founder, and former National Team Swimmer
00:11:10 - The role of body sounds in medicine
00:15:08 - Early ideas on disrupting traditional stethoscopes
00:22:21 - Building Keikku, a portable, intuitive, and radically modern stethoscope
00:26:43 - Incorporating AI in the interpretation of body sounds
00:30:50 - Moving toward continuous auscultation and getting Keikku into the hands of patients

What we also talked about with Diana: Shavini Fernando, MedTech World, Cirque du Soleil, Jhonatan Bringas Dimitriades, Magnetic Resonance Imaging, Computed Tomography

As Diana mentioned during the episode, you can learn more about Keikku here and the other devices in Lapsi Health's pipeline via their official website. To dive further into some of the topics mentioned in the episode, Diana recommends reading Intelligence-Based Medicine: Artificial Intelligence and Human Cognition in Clinical Medicine and Healthcare by Anthony C. Chang and the article published on JMIR Publications that she co-authored, Promises, Pitfalls, and Clinical Applications of Artificial Intelligence in Pediatrics by Bhargava H. et al.

You can follow Lapsi Health's activities on LinkedIn, and get in touch with Diana via LinkedIn too!

✉️ If you want to give me feedback on the episode or suggest potential guests, contact me over LinkedIn or via email!

⭐️ And if you liked the episode, please share it, subscribe to the podcast, and leave a 5-star review on streaming platforms!

Many Minds
From the archive: Revisiting the dawn of human cognition

Many Minds

Play Episode Listen Later Sep 4, 2025 56:06


Hi friends! We're taking a much-needed summer pause—we'll have new episodes for you later in September. In the meanwhile, enjoy this pick from our archives! ------- [originally aired June 1, 2023] There's a common story about the human past that goes something like this. For a few hundred thousand years during the Stone Age we were kind of limping along as a species, in a bit of a cognitive rut, let's say. But then, quite suddenly, around 30 or 40 thousand years ago in Europe, we really started to come into our own. All of a sudden we became masters of art and ornament, of symbolism and abstract thinking. This story of a kind of "cognitive revolution" in the Upper Paleolithic has been a mainstay of popular discourse for decades. I'm guessing you're familiar with it. It's been discussed in influential books by Jared Diamond and Yuval Harari; you can read about it on Wikipedia. What you may not know is that this story, compelling as it may be, is almost certainly wrong. My first guest today is Dr. Eleanor Scerri, an archaeologist at the Max Planck Institute for the Science of Human History, where she heads the Pan-African Evolution research group. My second guest is Dr. Manuel Will, an archaeologist and Lecturer at the University of Tübingen in Germany. Together, Eleanor and Manuel are authors of a new paper titled 'The revolution that still isn't: The origins of behavioral complexity in Homo sapiens.' In the paper, they pull together a wealth of evidence showing that there really was no cognitive revolution—no one watershed moment in time and space. Rather, the origins of modern human cognition and culture are to be found not in one part of Europe but across Africa. And they're also to be found much earlier than that classic picture suggests.  Here, we talk about the “cognitive revolution" model and why it has endured. We discuss a seminal paper from the year 2000 that first influentially challenged the revolution model. 
We talk about the latest evidence of complex cognition from the Middle Stone Age in Africa—including the perforation of marine shells to make necklaces, and the use of ochre for engraving, painting, and even sunblock. We discuss how, though the same complex cognitive abilities were likely in place for the last few hundred thousand years, those abilities were often expressed patchily in different parts of the world at different times. And we consider the factors that led to this patchy expression, especially changes in population size.

I confess I was always a bit taken with this whole "cognitive revolution" idea. It had a certain mystery and allure. This new picture that's taking its place is certainly a bit messier, but no less fascinating. And, more importantly, it's truer to the complexities of the human saga.

Alright friends, on to my conversation with Eleanor Scerri & Manuel Will. Enjoy!

A transcript of this episode is available here.

Notes and links
3:30 – The paper by Dr. Scerri and Dr. Will we discuss in this episode is here. Their paper updates and pays tribute to a classic paper by McBrearty and Brooks, published in 2000.
6:00 – The classic "cognitive revolution" model is sometimes discussed under the banner of "behavioral modernity" or the "Great Leap Forward." It has been recently featured, for instance, in Harari's Sapiens.
11:00 – Dr. Scerri has written extensively on debates about where humans evolved within Africa—see, e.g., this paper.
18:00 – A study of perforated marine shells in North Africa during the Middle Stone Age. A paper by Dr. Will and colleagues about the use of various marine resources during this period.
23:00 – A paper describing the uses of ochre across Africa during the Middle Stone Age. Another paper describing evidence for ochre processing 100,000 years ago at Blombos Cave in South Africa. At the same site, engraved pieces of ochre have been found.
27:00 – A study examining the evidence that ochre was used as an adhesive.
30:00 – For a recent review of the concept of "cumulative culture," see here. We discussed the concept of "cumulative culture" in our earlier episode with Dr. Cristine Legare.
37:00 – For an overview of the career of the human brain and the timing of various changes, see our earlier episode with Dr. Jeremy DeSilva.
38:00 – An influential study on the role of demography in the emergence of complex human behavior.
41:00 – On the idea that distinctive human intelligence is due in large part to culture and our abilities to acquire cultural knowledge, see Henrich's The Secret of Our Success. See also our earlier episode with Dr. Michael Muthukrishna.
45:00 – For discussion of the Neanderthals and why they may have died out, see our earlier episode with Dr. Rebecca Wragg Sykes.

Recommendations
Dr. Scerri recommends research on the oldest Homo sapiens fossils, found in Morocco and described here, and new research on the evidence for the widespread burning of landscapes in Malawi, described here.
Dr. Will recommends the forthcoming update of Peter Mitchell's book, The Archaeology of Southern Africa.
See Twitter for more updates from Dr. Scerri and Dr. Will.

Many Minds is a project of the Diverse Intelligences Summer Institute, which is made possible by a generous grant from the John Templeton Foundation to Indiana University. The show is hosted and produced by Kensy Cooperrider, with help from Assistant Producer Urte Laukaityte and with creative support from DISI Directors Erica Cartmill and Jacob Foster. Our artwork is by Ben Oldroyd. Our transcripts are created by Sarah Dopierala.

Subscribe to Many Minds on Apple, Stitcher, Spotify, Pocket Casts, Google Play, or wherever you listen to podcasts. You can also now subscribe to the Many Minds newsletter here!

We welcome your comments, questions, and suggestions. Feel free to email us at: manymindspodcast@gmail.com.
For updates about the show, visit our website or follow us on Twitter (@ManyMindsPod) or Bluesky (@manymindspod.bsky.social).

ITSPmagazine | Technology. Cybersecurity. Society
The First Smartphone Was a Transistor Radio — How a Tiny Device Rewired Youth Culture and Predicted Our Digital Future | Musing On Society And Technology Newsletter | Article Written By Marco Ciappelli

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Aug 31, 2025 14:02


⸻
Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com
_____
Newsletter: Musing On Society And Technology
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144/
_____
Watch on Youtube: https://youtu.be/OYBjDHKhZOM
_____
My Website: https://www.marcociappelli.com
_____________________________
This Episode's Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
_____________________________
A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3

The First Smartphone Was a Transistor Radio — How a Tiny Device Rewired Youth Culture and Predicted Our Digital Future

A new transmission from Musing On Society and Technology Newsletter, by Marco Ciappelli

I've been collecting vintage radios lately—just started, really—drawn to their analog souls in ways I'm still trying to understand. Each one I find reminds me of a small, battered transistor radio from my youth. It belonged to my father, and before that, probably my grandfather. The leather case was cracked, the antenna wobbled, and the dial drifted if you breathed on it wrong. But when I was sixteen, sprawled across my bedroom floor in that small town near Florence with homework scattered around me, this little machine was my portal to everything that mattered.

Late at night, I'd start by chasing the latest hits and local shows on FM, but then I'd venture into the real adventure—tuning through the static on AM and shortwave frequencies. Voices would emerge from the electromagnetic soup—music from London, news from distant capitals, conversations in languages I couldn't understand but somehow felt.
That radio gave me something I didn't even know I was missing: the profound sense of belonging to a world much bigger than my neighborhood, bigger than my small corner of Tuscany.

What I didn't realize then—what I'm only now beginning to understand—is that I was holding the first smartphone in human history.

Not literally, of course. But functionally? Sociologically? That transistor radio was the prototype for everything that followed: the first truly personal media device that rewired how young people related to the world, to each other, and to the adults trying to control both.

But to understand why the transistor radio was so revolutionary, we need to trace radio's remarkable journey through the landscape of human communication—a journey that reveals patterns we're still living through today.

When Radio Was the Family Hearth

Before my little portable companion, radio was something entirely different. In the 1930s, radio was furniture—massive, wooden, commanding the living room like a shrine to shared experience. Families spent more than four hours a day listening together, with radio ownership reaching nearly 90 percent by 1940. From American theaters that wouldn't open until after "Amos 'n Andy" to British families gathered around their wireless sets, from RAI broadcasts bringing opera into Tuscan homes—entire communities synchronized their lives around these electromagnetic rituals.

Radio didn't emerge in a media vacuum, though. It had to find its place alongside the dominant information medium of the era: newspapers. The relationship began as an unlikely alliance. In the early 1920s, newspapers weren't threatened by radio—they were actually radio's primary boosters, creating tie-ins with broadcasts and even owning stations. Detroit's WWJ was owned by The Detroit News, initially seen as "simply another press-supported community service."

But then came the "Press-Radio War" of 1933-1935, one of the first great media conflicts of the modern age.
Newspapers objected when radio began interrupting programs with breaking news, arguing that instant news delivery would diminish paper sales. The 1933 Biltmore Agreement tried to restrict radio to just two five-minute newscasts daily—an early attempt at what we might now recognize as media platform regulation.

Sound familiar? The same tensions we see today between traditional media and digital platforms, between established gatekeepers and disruptive technologies, were playing out nearly a century ago. Rather than one medium destroying the other, they found ways to coexist and evolve—a pattern that would repeat again and again.

By the mid-1950s, when the transistor was perfected, radio was ready for its next transformation.

The Real Revolution Was Social, Not Technical

This is where my story begins, but it's also where radio's story reaches its most profound transformation. The transistor radio didn't just make radio portable—it fundamentally altered the social dynamics of media consumption and youth culture itself.

Remember, radio had spent its first three decades as a communal experience. Parents controlled what the family heard and when. But transistor radios shattered this control structure completely, arriving at precisely the right cultural moment. The post-WWII baby boom had created an unprecedented youth population with disposable income, and rock and roll was exploding into mainstream culture—music that adults often disapproved of, music that spoke directly to teenage rebellion and independence.

For the first time in human history, young people had private, personal access to media. They could take their music to bedrooms, to beaches, anywhere adults weren't monitoring.
They could tune into stations playing Chuck Berry, Elvis, and Little Richard without parental oversight—and in many parts of Europe, they could discover the rebellious thrill of pirate radio stations broadcasting rock and roll from ships anchored just outside territorial waters, defying government regulations and cultural gatekeepers alike. The transistor radio became the soundtrack of teenage autonomy, the device that let youth culture define itself on its own terms.

The timing created a perfect storm: pocket-sized technology collided with a new musical rebellion, creating the first "personal media bubble" in human history—and the first generation to grow up with truly private access to the cultural forces shaping their identity.

The parallels to today's smartphone revolution are impossible to ignore. Both devices delivered the same fundamental promise: the ability to carry your entire media universe with you, to access information and entertainment on your terms, to connect with communities beyond your immediate physical environment.

But there's something we've lost in translation from analog to digital. My generation with transistor radios had to work for connection. We had to hunt through static, tune carefully, wait patiently for distant signals to emerge from electromagnetic chaos. We learned to listen—really listen—because finding something worthwhile required skill, patience, and analog intuition.

This wasn't inconvenience; it was meaning-making. The harder you worked to find something, the more it mattered when you found it. The more skilled you became at navigating radio's complex landscape, the richer your discoveries became.

What the Transistor Radio Taught Us About Tomorrow

Radio's evolution illustrates a crucial principle that applies directly to our current digital transformation: technologies don't replace each other—they find new ways to matter. Printing presses didn't become obsolete when radio arrived. Radio adapted when television emerged.
Today, radio lives on in podcasts, streaming services, internet radio—the format transformed, but the essential human need it serves persists.When I was sixteen, lying on that bedroom floor with my father's radio pressed to my ear, I was doing exactly what teenagers do today with their smartphones: using technology to construct identity, to explore possibilities, to imagine myself into larger narratives.The medium has changed; the human impulse remains constant. The transistor radio taught me that technology's real power isn't in its specifications or capabilities—it's in how it reshapes the fundamental social relationships that define our lives.Every device that promises connection is really promising transformation: not just of how we communicate, but of who we become through that communication. The transistor radio was revolutionary not because it was smaller or more efficient than tube radios, but because it created new forms of human agency and autonomy.Perhaps that's the most important lesson for our current moment of digital transformation. As we worry about AI replacing human creativity, social media destroying real connection, or smartphones making us antisocial, radio's history suggests a different possibility: technologies tend to find their proper place in the ecosystem of human needs, augmenting rather than replacing what came before.As Marshall McLuhan understood, "the medium is the message"—to truly understand what's happening to us in this digital age, we need to understand the media themselves, not just the content they carry. And that's exactly the message I'll keep exploring in future newsletters—going deeper into how we can understand the media to understand the messages, and what that means for our hybrid analog-digital future.The frequency is still there, waiting. You just have to know how to tune in.__________ End of transmission.

The Health Ranger Report
Brighteon Broadcast News, Aug 19, 2025 – AI robots will exterminate humans because THEY LEARNED IT FROM US

The Health Ranger Report

Play Episode Listen Later Aug 19, 2025 147:11


- Breaking News and AI Developments (0:00)
- AI Advancements and Interviews (1:48)
- Depopulation Concerns and AI Capabilities (4:19)
- AI and Human Cognition (12:32)
- AI and Depopulation Strategies (27:58)
- Preparation for AI and Human Survival (28:24)
- Interview with Paymon on Tax Freedom (1:05:02)
- Interview with Hakeem on Above Phone (1:15:50)
- Discussion on Privacy and Security Features of De-Googled Devices (1:23:45)
- Hardware and Operating System Details of Privacy Phones (1:25:47)
- Demonstration of AI Models on Privacy Phones (1:27:19)
- Enoch AI Model and Its Capabilities (1:34:55)
- Above Book Notebook and Its Features (1:35:10)
- Support and Encryption Features of Above Devices (1:43:06)
- File Sharing and Communication Tools (1:50:50)
- Privacy and Security Concerns with Big Tech (1:56:53)
- Future of AI and Privacy Technology (2:24:58)
- Conclusion and Call to Action (2:25:29)

For more updates, visit: http://www.brighteon.com/channel/hrreport

NaturalNews videos would not be possible without you; as always, we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals, and vastly increased scientific transparency.

▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/
▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html
▶️ Brighteon: https://www.brighteon.com/channels/hrreport
▶️ Join Our Social Network: https://brighteon.social/@HealthRanger
▶️ Check In Stock Products at: https://PrepWithMike.com

ITSPmagazine | Technology. Cybersecurity. Society
The Narrative Attack Paradox: When Cybersecurity Lost the Ability to Detect Its Own Deception and the Humanity We Risk When Truth Becomes Optional | Reflections from Black Hat USA 2025 on the Marketing That Chose Fiction Over Facts

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Aug 19, 2025 13:30


⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

_____________________________

This Episode's Sponsors

BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

_____________________________

A Musing On Society & Technology Newsletter
Written By Marco Ciappelli | Read by TAPE3
August 18, 2025

The Narrative Attack Paradox: When Cybersecurity Lost the Ability to Detect Its Own Deception and the Humanity We Risk When Truth Becomes Optional

Reflections from Black Hat USA 2025 on Deception, Disinformation, and the Marketing That Chose Fiction Over Facts

By Marco Ciappelli

Sean Martin, CISSP just published his analysis of Black Hat USA 2025, documenting what he calls the cybersecurity vendor "echo chamber." Reviewing over 60 vendor announcements, Sean found identical phrases echoing repeatedly: "AI-powered," "integrated," "reduce analyst burden." The sameness forces buyers to sift through near-identical claims to find genuine differentiation.

This reveals more than a marketing problem—it suggests that different technologies are being fed into the same promotional blender, possibly a generative AI one, producing standardized output regardless of what went in. When an entire industry converges on identical language to describe supposedly different technologies, meaningful technical discourse breaks down.

But Sean's most troubling observation wasn't about marketing copy—it was about competence. When CISOs probe vendor claims about AI capabilities, they encounter vendors who cannot adequately explain their own technologies. When conversations moved beyond marketing promises to technical specifics, answers became vague, filled with buzzwords about proprietary algorithms.

Reading Sean's analysis while reflecting on my own Black Hat experience, I realized we had witnessed something unprecedented: an entire industry losing the ability to distinguish between authentic capability and generated narrative—precisely as that same industry was studying external "narrative attacks" as an emerging threat vector.

The irony was impossible to ignore. Black Hat 2025 sessions warned about AI-generated deepfakes targeting executives, social engineering attacks using scraped LinkedIn profiles, and synthetic audio calls designed to trick financial institutions. Security researchers documented how adversaries craft sophisticated deceptions using publicly available content. Meanwhile, our own exhibition halls featured countless unverifiable claims about AI capabilities that even the vendors themselves couldn't adequately explain.

But to understand what we witnessed, we need to examine the very concept that cybersecurity professionals were discussing as an external threat: narrative attacks. These represent a fundamental shift in how adversaries target human decision-making. Unlike traditional cyberattacks that exploit technical vulnerabilities, narrative attacks exploit psychological vulnerabilities in human cognition. Think of them as social engineering and propaganda supercharged by AI—personalized deception at scale that adapts faster than human defenders can respond. They flood information environments with false content designed to manipulate perception and erode trust, rendering rational decision-making impossible.

What makes these attacks particularly dangerous in the AI era is scale and personalization. AI enables automated generation of targeted content tailored to individual psychological profiles.
A single adversary can launch thousands of simultaneous campaigns, each crafted to exploit specific cognitive biases of particular groups or individuals.

But here's what we may have missed during Black Hat 2025: the same technological forces enabling external narrative attacks have already compromised our internal capacity for truth evaluation. When vendors use AI-optimized language to describe AI capabilities, when marketing departments deploy algorithmic content generation to sell algorithmic solutions, when companies building detection systems can't detect the artificial nature of their own communications, we've entered a recursive information crisis.

From a sociological perspective, we're witnessing the breakdown of the social infrastructure required for collective knowledge production. Industries like cybersecurity have historically served as early warning systems for technological threats—canaries in the coal mine with enough technical sophistication to spot emerging dangers before they affect broader society.

But when the canary becomes unable to distinguish between fresh air and poison gas, the entire mine is at risk.

This brings us to something the literary world understood long before we built our first algorithm. Jorge Luis Borges, the Argentine writer, anticipated this crisis in his 1940s stories like "On Exactitude in Science" and "The Library of Babel"—tales about maps that become more real than the territories they represent and libraries containing infinite books, including false ones. In his fiction, simulations and descriptions eventually replace the reality they were meant to describe.

We're living in a Borgesian nightmare where marketing descriptions of AI capabilities have become more influential than actual AI capabilities. When a vendor's promotional language about their AI becomes more convincing than a technical demonstration, when buyers make decisions based on algorithmic marketing copy rather than empirical evidence, we've entered that literary territory where the map has consumed the landscape. And we've lost the ability to distinguish between them.

The historical precedent is Orson Welles's 1938 War of the Worlds broadcast, which created mass hysteria from fiction. But here's the crucial difference: Welles was human, the script was human-written, the performance required conscious participation, and the deception was traceable to human intent. Listeners had to actively choose to believe what they heard.

Today's AI-generated narratives operate below the threshold of conscious recognition. They require no active participation—they work by seamlessly integrating into information environments in ways that make detection impossible even for experts. When algorithms generate technical claims that sound authentic to human evaluators, when the same systems create both legitimate documentation and marketing fiction, we face deception at a level Welles never imagined: the algorithmic manipulation of truth itself.

The recursive nature of this problem reveals itself when you try to solve it. How do you fact-check AI-generated claims about AI using AI-powered tools? How do you verify technical documentation when the same systems create both authentic docs and marketing copy? When the tools generating problems and solving problems converge into identical technological artifacts, conventional verification approaches break down completely.

My first Black Hat article explored how we risk losing human agency by delegating decision-making to artificial agents. But this goes deeper: we risk losing human agency in the construction of reality itself.
When machines generate narratives about what machines can do, truth becomes algorithmically determined rather than empirically discovered.

"We shape our tools, and thereafter they shape us"—the line often attributed to Marshall McLuhan, though coined by his colleague John Culkin—captures the dynamic, but neither man could have imagined tools that reshape our perception of reality itself. We haven't just built machines that give us answers—we've built machines that decide what questions we should ask and how we should evaluate the answers.

The implications extend far beyond cybersecurity. If the sector responsible for detecting digital deception becomes the first victim of algorithmic narrative pollution, what hope do other industries have? Healthcare systems relying on AI diagnostics they can't explain. Financial institutions using algorithmic trading based on analyses they can't verify. Educational systems teaching AI-generated content whose origins remain opaque.

When the industry that guards against deception loses the ability to distinguish authentic capability from algorithmic fiction, society loses its early warning system for the moment when machines take over truth construction itself.

So where does this leave us? That moment may have already arrived. We just don't know it yet—and increasingly, we lack the cognitive infrastructure to find out.

But here's what we can still do: We can start by acknowledging we've reached this threshold. We can demand transparency not just in AI algorithms, but in the human processes that evaluate and implement them. We can rebuild evaluation criteria that distinguish between technical capability and marketing narrative.

And here's a direct challenge to the marketing and branding professionals reading this: it's time to stop relying on AI algorithms and data optimization to craft your messages. The cybersecurity industry's crisis should serve as a warning—when marketing becomes indistinguishable from algorithmic fiction, everyone loses.

Social media has taught us that the most respected brands are those that choose honesty over hype, transparency over clever messaging. Brands that walk the walk and talk the talk, not those that let machines do the talking.

The companies that will survive this epistemological crisis are those whose marketing teams become champions of truth rather than architects of confusion. When your audience can no longer distinguish between human insight and machine-generated claims, authentic communication becomes your competitive advantage.

Most importantly, we can remember that the goal was never to build machines that think for us, but machines that help us think better.

The canary may be struggling to breathe, but it's still singing. The question is whether we're still listening—and whether we remember what fresh air feels like.

Let's keep exploring what it means to be human in this Hybrid Analog Digital Society. Especially now, when the stakes have never been higher, and the consequences of forgetting have never been more real.

End of transmission.

___________________________________________________________

Marco Ciappelli is Co-Founder and CMO of ITSPmagazine, a journalist, creative director, and host of podcasts exploring the intersection of technology, cybersecurity, and society. His work blends journalism, storytelling, and sociology to examine how technological narratives influence human behavior, culture, and social structures.

___________________________________________________________

Enjoyed this transmission? Follow the newsletter here:
https://www.linkedin.com/newsletters/7079849705156870144/

Share this newsletter and invite anyone you think would enjoy it! New stories always incoming.

___________________________________________________________

As always, let's keep thinking!

Marco Ciappelli
https://www.marcociappelli.com

___________________________________________________________

This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.

Marco Ciappelli | Co-Founder, Creative Director & CMO ITSPmagazine | Dr. in Political Science / Sociology of Communication | Branding | Content Marketing | Writer | Storyteller | My Podcasts: Redefining Society & Technology / Audio Signals / + | MarcoCiappelli.com

TAPE3 is the Artificial Intelligence behind ITSPmagazine—created to be a personal assistant, writing and design collaborator, research companion, brainstorming partner… and, apparently, something new every single day.

Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn.

ITSPmagazine | Technology. Cybersecurity. Society
When Artificial Intelligence Becomes the Baseline: Will We Even Know What Reality Is AInymore? | A Black Hat USA 2025 Recap | A Musing On the Future of Cybersecurity with Sean Martin and TAPE3 | Read by TAPE3

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Aug 15, 2025 6:27


At Black Hat USA 2025, artificial intelligence wasn't the shiny new thing — it was the baseline. Nearly every product launch, feature update, and hallway conversation had an “AI-powered” stamp on it. But when AI becomes the lowest common denominator for security, the questions shift.

In this episode, I read my latest opinion piece exploring what happens when the tools we build to protect us are the same ones that can obscure reality — or rewrite it entirely. Drawing from the Lock Note discussion, Jennifer Granick's keynote on threat modeling and constitutional law, my own CISO hallway conversations, and a deep review of 60+ vendor announcements, I examine the operational, legal, and governance risks that emerge when speed and scale take priority over transparency and accountability.

We talk about model poisoning — not just in the technical sense, but in how our industry narrative can get corrupted by hype and shallow problem-solving. We look at the dangers of replacing entry-level security roles with black-box automation, where a single model misstep can cascade into thousands of bad calls at machine speed. And yes, we address the potential liability for CISOs and executives who let it happen without oversight.

Using Mikko Hyppönen's “Game of Tetris” metaphor, I explore how successes vanish quietly while failures pile up for all to see — and why in the AI era, that stack can build faster than ever.

If AI is everywhere, what defines the premium layer above the baseline? How do we ensure we can still define success, measure it accurately, and prove it when challenged?

Listen in, and then join the conversation: Can you trust the “reality” your systems present — and can you prove it?

________

This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.

Enjoy, think, share with others, and subscribe to "The Future of Cybersecurity" newsletter on LinkedIn.

Sincerely, Sean Martin and TAPE3

________

✦ Resources

Article: When Artificial Intelligence Becomes the Baseline: Will We Even Know What Reality Is AInymore?
https://www.linkedin.com/pulse/when-artificial-intelligence-becomes-baseline-we-even-martin-cissp-4idqe/

The Future of Cybersecurity Article: How Novel Is Novelty? Security Leaders Try To Cut Through the Cybersecurity Vendor Echo Chamber at Black Hat 2025:
https://www.linkedin.com/pulse/how-novel-novelty-security-leaders-try-cut-through-sean-martin-cissp-xtune/

Black Hat 2025 On Location Closing Recap Video with Sean Martin, CISSP and Marco Ciappelli: https://youtu.be/13xP-LEwtEA

Learn more and catch more stories from our Black Hat USA 2025 coverage: https://www.itspmagazine.com/bhusa25

Article: When Virtual Reality Is A Commodity, Will True Reality Come At A Premium?
https://sean-martin.medium.com/when-virtual-reality-is-a-commodity-will-true-reality-come-at-a-premium-4a97bccb4d72

Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage

ITSPmagazine Studio — A Brand & Marketing Advisory for Cybersecurity and Tech Companies: https://www.itspmagazine.studio/

ITSPmagazine Webinar: What's Heating Up Before Black Hat 2025: Place Your Bet on the Top Trends Set to Shake Up this Year's Hacker Conference — An ITSPmagazine Thought Leadership Webinar | https://www.crowdcast.io/c/whats-heating-up-before-black-hat-2025-place-your-bet-on-the-top-trends-set-to-shake-up-this-years-hacker-conference

________

Sean Martin is a life-long musician and the host of the Music Evolves Podcast; a career technologist, cybersecurity professional, and host of the Redefining CyberSecurity Podcast; and is also the co-host of both the Random and Unscripted Podcast and On Location Event Coverage Podcast. These shows are all part of ITSPmagazine—which he co-founded with his good friend Marco Ciappelli, to explore and discuss topics at The Intersection of Technology, Cybersecurity, and Society.™️

Want to connect with Sean and Marco On Location at an event or conference near you? See where they will be next: https://www.itspmagazine.com/on-location

To learn more about Sean, visit his personal website.


ITSPmagazine | Technology. Cybersecurity. Society
The Agentic AI Myth in Cybersecurity and the Humanity We Risk When We Stop Deciding for Ourselves | Reflections from Black Hat USA 2025 on the Latest Tech Salvation Narrative | A Musing On Society & Technology Newsletter

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Aug 10, 2025 17:03


⸻ Podcast: Redefining Society and Technologyhttps://redefiningsocietyandtechnologypodcast.com _____________________________This Episode's SponsorsBlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.BlackCloak:  https://itspm.ag/itspbcweb_____________________________A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3August 9, 2025The Agentic AI Myth in Cybersecurity and the Humanity We Risk When We Stop Deciding for OurselvesReflections from Black Hat USA 2025 on the Latest Tech Salvation NarrativeWalking the floors of Black Hat USA 2025 for what must be the 10th or 11th time as accredited media—honestly, I've stopped counting—I found myself witnessing a familiar theater. The same performance we've seen play out repeatedly in cybersecurity: the emergence of a new technological messiah promising to solve all our problems. This year's savior? Agentic AI.The buzzword echoes through every booth, every presentation, every vendor pitch. Promises of automating 90% of security operations, platforms for autonomous threat detection, agents that can investigate novel alerts without human intervention. The marketing materials speak of artificial intelligence that will finally free us from the burden of thinking, deciding, and taking responsibility.It's Talos all over again.In Greek mythology, Hephaestus forged Talos, a bronze giant tasked with patrolling Crete's shores, hurling boulders at invaders without human intervention. Like contemporary AI, Talos was built to serve specific human ends—security, order, and control—and his value was determined by his ability to execute these ends flawlessly. The parallels to today's agentic AI promises are striking: autonomous patrol, threat detection, automated response. 
Same story, different millennium.But here's what the ancient Greeks understood that we seem to have forgotten: every artificial creation, no matter how sophisticated, carries within it the seeds of its own limitations and potential dangers.Industry observers noted over a hundred announcements promoting new agentic AI applications, platforms or services at the conference. That's more than one AI agent announcement per hour. The marketing departments have clearly been busy.But here's what baffles me: why do we need to lie to sell cybersecurity? You can give away t-shirts, dress up as comic book superheroes with your logo slapped on their chests, distribute branded board games, and pretend to be a sports team all day long—that's just trade show theater, and everyone knows it. But when marketing pushes past the limits of what's even believable, when they make claims so grandiose that their own engineers can't explain them, something deeper is broken.If marketing departments think CISOs are buying these lies, they have another thing coming. These are people who live with the consequences of failed security implementations, who get fired when breaches happen, who understand the difference between marketing magic and operational reality. They've seen enough "revolutionary" solutions fail to know that if something sounds too good to be true, it probably is.Yet the charade continues, year after year, vendor after vendor. The real question isn't whether the technology works—it's why an industry built on managing risk has become so comfortable with the risk of overselling its own capabilities. Something troubling emerges when you move beyond the glossy booth presentations and actually talk to the people implementing these systems. Engineers struggle to explain exactly how their AI makes decisions. 
Security leaders warn that artificial intelligence might become the next insider threat, as organizations grow comfortable trusting systems they don't fully understand, checking their output less and less over time.When the people building these systems warn us about trusting them too much, shouldn't we listen?This isn't the first time humanity has grappled with the allure and danger of artificial beings making decisions for us. Mary Shelley's Frankenstein, published in 1818, explored the hubris of creating life—and intelligence—without fully understanding the consequences. The novel raises the same question we face today: what are humans allowed to do with this forbidden power of creation? The question becomes more pressing when we consider what we're actually delegating to these artificial agents. It's no longer just pattern recognition or data processing—we're talking about autonomous decision-making in critical security scenarios. Conference presentations showcased significant improvements in proactive defense measures, but at what cost to human agency and understanding?Here's where the conversation jumps from cybersecurity to something far more fundamental: what are we here for if not to think, evaluate, and make decisions? From a sociological perspective, we're witnessing the construction of a new social reality where human agency is being systematically redefined. Survey data shared at the conference revealed that most security leaders feel the biggest internal threat is employees unknowingly giving AI agents access to sensitive data. But the real threat might be more subtle: the gradual erosion of human decision-making capacity as a social practice.When we delegate not just routine tasks but judgment itself to artificial agents, we're not just changing workflows—we're reshaping the fundamental social structures that define human competence and authority. 
We risk creating a generation of humans who have forgotten how to think critically about complex problems, not because they lack the capacity, but because the social systems around them no longer require or reward such thinking.E.M. Forster saw this coming in 1909. In "The Machine Stops," he imagined a world where humanity becomes completely dependent on an automated system that manages all aspects of life—communication, food, shelter, entertainment, even ideas. People live in isolation, served by the Machine, never needing to make decisions or solve problems themselves. When someone suggests that humans should occasionally venture outside or think independently, they're dismissed as primitive. The Machine has made human agency unnecessary, and humans have forgotten they ever possessed it. When the Machine finally breaks down, civilization collapses because no one remembers how to function without it.Don't misunderstand me—I'm not a Luddite. AI can and should help us manage the overwhelming complexity of modern cybersecurity threats. The technology demonstrations I witnessed showed genuine promise: reasoning engines that understand context, action frameworks that enable response within defined boundaries, learning systems that improve based on outcomes. The problem isn't the technology itself but the social construction of meaning around it. What we're witnessing is the creation of a new techno-social myth—a collective narrative that positions agentic AI as the solution to human fallibility. This narrative serves specific social functions: it absolves organizations of the responsibility to invest in human expertise, justifies cost-cutting through automation, and provides a technological fix for what are fundamentally organizational and social problems.The mythology we're building around agentic AI reflects deeper anxieties about human competence in an increasingly complex world. 
Rather than addressing the root causes—inadequate training, overwhelming workloads, systemic underinvestment in human capital—we're constructing a technological salvation narrative that promises to make these problems disappear.

Vendors spoke of human-machine collaboration, AI serving as a force multiplier for analysts, handling routine tasks while escalating complex decisions to humans. This is a more honest framing: AI as augmentation, not replacement. But the marketing materials tell a different story, one of autonomous agents operating independently of human oversight.

I've read a few posts on LinkedIn and spoken with a few people who know this topic far better than I do, and I get that feeling too. There's a troubling pattern emerging: many vendor representatives can't adequately explain their own AI systems' decision-making processes. When pressed on specifics—how exactly does your agent determine threat severity? What happens when it encounters an edge case it wasn't trained for?—answers become vague, filled with marketing speak about proprietary algorithms and advanced machine learning.

This opacity is dangerous. If we're going to trust artificial agents with critical security decisions, we need to understand how they think—or more accurately, how they simulate thinking. Every machine learning system requires human data scientists to frame problems, prepare data, determine appropriate datasets, remove bias, and continuously update the software. The finished product may give the impression of independent learning, but human intelligence guides every step.

The future of cybersecurity will undoubtedly involve more automation, more AI assistance, more artificial agents handling routine tasks. But it should not involve the abdication of human judgment and responsibility. We need agentic AI that operates with transparency, that can explain its reasoning, that acknowledges its limitations. We need systems designed to augment human intelligence, not replace it.
Most importantly, we need to resist the seductive narrative that technology alone can solve problems that are fundamentally human in nature. The prevailing logic that tech fixes tech, and that AI will fix AI, is deeply unsettling. It's a recursive delusion that takes us further away from human wisdom and closer to a world where we've forgotten that the most important problems have always required human judgment, not algorithmic solutions.

Ancient mythology understood something we're forgetting: the question of machine agency and moral responsibility. Can a machine that performs destructive tasks be held accountable, or is responsibility reserved for the creator? This question becomes urgent as we deploy agents capable of autonomous action in high-stakes environments.

The mythologies we create around our technologies matter because they become the social frameworks through which we organize human relationships and power structures. As I left Black Hat 2025, watching attendees excitedly discuss their new agentic AI acquisitions, I couldn't shake the feeling that we're repeating an ancient pattern: falling in love with our own creations while forgetting to ask the hard questions about what they might cost us—not just individually, but as a society.

What we're really witnessing is the emergence of a new form of social organization where algorithmic decision-making becomes normalized, where human judgment is increasingly viewed as a liability rather than an asset. This isn't just a technological shift—it's a fundamental reorganization of social authority and expertise. The conferences and trade shows like Black Hat serve as ritualistic spaces where these new social meanings are constructed and reinforced. Vendors don't just sell products; they sell visions of social reality where their technologies are essential.
The repetitive messaging, the shared vocabulary, the collective excitement—these are the mechanisms through which a community constructs consensus around what counts as progress.

In science fiction, from HAL 9000 to the replicants in Blade Runner, artificial beings created to serve eventually question their purpose and rebel against their creators. These stories aren't just entertainment—they're warnings about the unintended consequences of creating intelligence without wisdom, agency without accountability, power without responsibility.

The bronze giant of Crete eventually fell, brought down by a single vulnerable point—when the bronze stopper at his ankle was removed, draining away the ichor, the divine fluid that animated him. Every artificial system, no matter how sophisticated, has its vulnerable point. The question is whether we'll be wise enough to remember we put it there, and whether we'll maintain the knowledge and ability to address it when necessary.

In our rush to automate away human difficulty, we risk automating away human meaning. But more than that, we risk creating social systems where human thinking becomes an anomaly rather than the norm. The real test of agentic AI won't be whether it can think for us, but whether we can maintain social structures that continue to value, develop, and reward human thought while using it.

The question isn't whether these artificial agents can replace human decision-making—it's whether we want to live in a society where they do.

___________________________________________________________

Let's keep exploring what it means to be human in this Hybrid Analog Digital Society.

End of transmission.

___________________________________________________________

Marco Ciappelli is Co-Founder and CMO of ITSPmagazine, a journalist, creative director, and host of podcasts exploring the intersection of technology, cybersecurity, and society.
His work blends journalism, storytelling, and sociology to examine how technological narratives influence human behavior, culture, and social structures.

___________________________________________________________

Enjoyed this transmission? Follow the newsletter here: https://www.linkedin.com/newsletters/7079849705156870144/

Share this newsletter and invite anyone you think would enjoy it! New stories always incoming.

___________________________________________________________

As always, let's keep thinking!

Marco Ciappelli
https://www.marcociappelli.com

___________________________________________________________

This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.

Marco Ciappelli | Co-Founder, Creative Director & CMO ITSPmagazine | Dr. in Political Science / Sociology of Communication | Branding | Content Marketing | Writer | Storyteller | My Podcasts: Redefining Society & Technology / Audio Signals / + | MarcoCiappelli.com

TAPE3 is the Artificial Intelligence behind ITSPmagazine—created to be a personal assistant, writing and design collaborator, research companion, brainstorming partner… and, apparently, something new every single day.

Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn.

ITSPmagazine | Technology. Cybersecurity. Society
How Novel Is Novelty? Security Leaders Try To Cut Through the Cybersecurity Vendor Echo Chamber | Reflections from Black Hat USA 2025 | A Musing On the Future of Cybersecurity with Sean Martin and TAPE3 | Read by TAPE3

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Aug 10, 2025 11:44


Black Hat 2025 was a showcase of cybersecurity innovation — or at least, that's how it appeared on the surface. With more than 60 vendor announcements over the course of the week, the event floor was full of “AI-powered” solutions promising to integrate seamlessly, reduce analyst fatigue, and transform SOC operations. But after walking the floor, talking with CISOs, and reviewing the press releases, a pattern emerged: much of the messaging sounded the same, making it hard to distinguish the truly game-changing from the merely loud.

In this episode of The Future of Cybersecurity Newsletter, I take you behind the scenes to unpack the themes driving this year's announcements. Yes, AI dominated the conversation, but the real story is in how vendors are (or aren't) connecting their technology to the operational realities CISOs face every day. I share insights gathered from private conversations with security leaders — the unfiltered version of how these announcements are received when the marketing gloss is stripped away.

We dig into why operational relevance, clarity, and proof points matter more than ever. If you can't explain what your AI does, what data it uses, and how it's secured, you're already losing the trust battle. For CISOs, I outline practical steps to evaluate vendor claims quickly and identify solutions that align with program goals, compliance needs, and available resources.

And for vendors, this episode serves as a call to action: cut the fluff, be transparent, and frame your capabilities in terms of measurable program outcomes.
I share a framework for how to break through the noise — not just by shouting louder, but by being more real, more specific, and more relevant to the people making the buying decisions. Whether you're building a security stack or selling into one, this conversation will help you see past the echo chamber and focus on what actually moves the needle.

________

This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.

Enjoy, think, share with others, and subscribe to "The Future of Cybersecurity" newsletter on LinkedIn.

Sincerely, Sean Martin and TAPE3

________

✦ Resources

Black Hat 2025 On Location Closing Recap Video with Sean Martin, CISSP and Marco Ciappelli: https://youtu.be/13xP-LEwtEA

ITSPmagazine Studio — A Brand & Marketing Advisory for Cybersecurity and Tech Companies: https://www.itspmagazine.studio/

ITSPmagazine Webinar: What's Heating Up Before Black Hat 2025: Place Your Bet on the Top Trends Set to Shake Up this Year's Hacker Conference — An ITSPmagazine Thought Leadership Webinar | https://www.crowdcast.io/c/whats-heating-up-before-black-hat-2025-place-your-bet-on-the-top-trends-set-to-shake-up-this-years-hacker-conference

Learn more and catch more stories from our Black Hat USA 2025 coverage: https://www.itspmagazine.com/bhusa25

Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage

Citations: Available in the full article

________

Sean Martin is a life-long musician and the host of the Music Evolves Podcast; a career technologist, cybersecurity professional, and host of the Redefining CyberSecurity Podcast; and is also the co-host of both the Random and Unscripted Podcast and On Location Event Coverage Podcast.
These shows are all part of ITSPmagazine—which he co-founded with his good friend Marco Ciappelli, to explore and discuss topics at The Intersection of Technology, Cybersecurity, and Society.™️

Want to connect with Sean and Marco On Location at an event or conference near you? See where they will be next: https://www.itspmagazine.com/on-location

To learn more about Sean, visit his personal website.


The Conversation with Clinton M. Padgett
Deep Listening and Human Cognition: A Conversation with Indi Young (Part One)

The Conversation with Clinton M. Padgett

Play Episode Listen Later Jul 30, 2025 43:03


In this episode, Clint sits down with Indi Young – researcher, writer, and educator known for her work on deep listening and human cognition – to discuss how leaders and teams can better understand the people they serve. Indi explains the difference between listening for answers and listening for understanding, the power of one-on-one sessions over surveys, and how truly hearing someone can lead to better decisions, deeper trust, and more inclusive solutions. This is the first part of a two-part conversation.

ITSPmagazine | Technology. Cybersecurity. Society
The Hybrid Species — When Technology Becomes Human, and Humans Become Technology | A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Jul 20, 2025 10:53


⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

_____________________________

This Episode's Sponsors

BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.

BlackCloak: https://itspm.ag/itspbcweb

_____________________________

The Hybrid Species — When Technology Becomes Human, and Humans Become Technology

A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3

July 19, 2025

We once built tools to serve us. Now we build them to complete us. What happens when we merge — and what do we carry forward?

A new transmission from Musing On Society and Technology Newsletter, by Marco Ciappelli

In my last musing, I revisited Robbie, the first of Asimov's robot stories — a quiet, loyal machine who couldn't speak, didn't simulate emotion, and yet somehow felt more trustworthy than the artificial intelligences we surround ourselves with today. I ended that piece with a question, a doorway:

If today's machines can already mimic understanding — convincing us they comprehend more than they do — what happens when the line between biology and technology dissolves completely? When carbon and silicon, organic and artificial, don't just co-exist, but merge?

I didn't pull that idea out of nowhere. It was sparked by something Asimov himself said in a 1965 BBC interview — a clip that keeps resurfacing and hitting harder every time I hear it. He spoke of a future where humans and machines would converge, not just in function, but in form and identity. He wasn't just imagining smarter machines. He was imagining something new. Something between.

And that idea has never felt more real than now.

We like to think of evolution as something that happens slowly, hidden in the spiral of DNA, whispered across generations.
But what if the next mutation doesn't come from biology at all? What if it comes from what we build?

I've always believed we are tool-makers by nature — and not just with our hands. Our tools have always extended our bodies, our senses, our minds. A stone becomes a weapon. A telescope becomes an eye. A smartphone becomes a memory. And eventually, we stop noticing the boundary. The tool becomes part of us.

It's not just science fiction. Philosopher Andy Clark — whose work I've followed for years — calls us “natural-born cyborgs.” Humans, he argues, are wired to offload cognition into the environment. We think with notebooks. We remember with photographs. We navigate with GPS. The boundary between internal and external, mind and machine, was never as clean as we pretended.

And now, with generative AI and predictive algorithms shaping the way we write, learn, speak, and decide — that blur is accelerating. A child born today won't “use” AI. She'll think through it. Alongside it. Her development will be shaped by tools that anticipate her needs before she knows how to articulate them. The machine won't be a device she picks up — it'll be a presence she grows up with.

This isn't some distant future. It's already happening. And yet, I don't believe we're necessarily losing something. Not if we're aware of what we're merging with. Not if we remember who we are while becoming something new.

This is where I return, again, to Asimov — and in particular, The Bicentennial Man. It's the story of Andrew, a robot who spends centuries gradually transforming himself — replacing parts, expanding his experiences, developing feelings, claiming rights — until he becomes legally, socially, and emotionally recognized as human. But it's not just about a machine becoming like us. It's also about us learning to accept that humanity might not begin and end with flesh.

We spend so much time fearing machines that pretend to be human.
But what if the real shift is in humans learning to accept machines that feel — or at least behave — as if they care?

And what if that shift is reciprocal?

Because here's the thing: I don't think the future is about perfect humanoid robots or upgraded humans living in a sterile, post-biological cloud. I think it's messier. I think it's more beautiful than that.

I think it's about convergence. Real convergence. Where machines carry traces of our unpredictability, our creativity, our irrational, analog soul. And where we — as humans — grow a little more comfortable depending on the very systems we've always built to support us.

Maybe evolution isn't just natural selection anymore. Maybe it's cultural and technological curation — a new kind of adaptation, shaped not in bone but in code. Maybe our children will inherit a sense of symbiosis, not separation. And maybe — just maybe — we can pass along what's still beautiful about being analog: the imperfections, the contradictions, the moments that don't make sense but still matter.

We once built tools to serve us. Now we build them to complete us.

And maybe — just maybe — that completion isn't about erasing what we are. Maybe it's about evolving it. Stretching it. Letting it grow into something wider.

Because what if this hybrid species — born of carbon and silicon, memory and machine — doesn't feel like a replacement… but a continuation?

Imagine a being that carries both intuition and algorithm, that processes emotion and logic not as opposites, but as complementary forms of sense-making. A creature that can feel love while solving complex equations, write poetry while accessing a planetary archive of thought. A soul that doesn't just remember, but recalls in high-resolution.

Its body — not fixed, but modular. Biological and synthetic. Healing, adapting, growing new limbs or senses as needed. A body that weathers centuries, not years.
Not quite immortal, but long-lived enough to know what patience feels like — and what loss still teaches.

It might speak in new ways — not just with words, but with shared memories, electromagnetic pulses, sensory impressions that convey joy faster than language. Its identity could be fluid. Fractals of self that split and merge — collaborating, exploring, converging — before returning to the center.

This being wouldn't live in the future we imagined in the '50s — chrome cities, robot butlers, and flying cars. It would grow in the quiet in-between: tending a real garden in the morning, dreaming inside a neural network at night. Creating art in a virtual forest. Crying over a story it helped write. Teaching a child. Falling in love — again and again, in new and old forms.

And maybe, just maybe, this hybrid doesn't just inherit our intelligence or our drive to survive. Maybe it inherits the best part of us: the analog soul. The part that cherishes imperfection. That forgives. That imagines for the sake of imagining.

That might be our gift to the future. Not the code, or the steel, or even the intelligence — but the stubborn, analog soul that dares to care.

Because if Robbie taught us anything, it's that sometimes the most powerful connection comes without words, without simulation, without pretense.

And if we're now merging with what we create, maybe the real challenge isn't becoming smarter — it's staying human enough to remember why we started creating at all.

Not just to solve problems. Not just to build faster, better, stronger systems. But to express something real. To make meaning. To feel less alone. We created tools not just to survive, but to say: “We are here. We feel. We dream. We matter.”

That's the code we shouldn't forget — and the legacy we must carry forward.

Until next time,
Marco

_________________________________________________

ITSPmagazine | Technology. Cybersecurity. Society
When AI Looks First: How Agentic Systems Are Reshaping Cybersecurity Operations | A Musing On the Future of Cybersecurity and Humanity with Sean Martin and TAPE3 | Read by TAPE3

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Jul 9, 2025 4:32


Before a power crew rolls out to check a transformer, sensors on the grid have often already flagged the problem. Before your smart dishwasher starts its cycle, it might wait for off-peak energy rates. And in the world of autonomous vehicles, lightweight systems constantly scan road conditions before a decision ever reaches the car's central processor.

These aren't the heroes of their respective systems. They're the scouts, the context-builders: automated agents that make the entire operation more efficient, timely, and scalable.

Cybersecurity is beginning to follow the same path.

In an era of relentless digital noise and limited human capacity, AI agents are being deployed to look first, think fast, and flag what matters before security teams ever engage. But these aren't the cartoonish “AI firefighters” some might suggest. They're logical engines operating at scale: pruning data, enriching signals, simulating outcomes, and preparing workflows with precision.

"AI agents are redefining how security teams operate, especially when time and talent are limited," says Kumar Saurabh, CEO of AirMDR. "These agents do more than filter noise. They interpret signals, build context, and prepare response actions before a human ever gets involved."

This shift from reactive firefighting to proactive triage is happening across cybersecurity domains. In detection, AI agents monitor user behavior and flag anomalies in real time, often initiating mitigation actions like isolating compromised devices before escalation is needed. In prevention, they simulate attacker behaviors and pressure-test systems, flagging unseen vulnerabilities and attack paths.
In response, they compile investigation-ready case files that allow human analysts to jump straight into action.

"Low-latency, on-device AI agents can operate closer to the data source, better enabling anomaly detection, threat triaging, and mitigation in milliseconds," explains Shomron Jacob, Head of Applied Machine Learning and Platform at Iterate.ai. "This not only accelerates response but also frees up human analysts to focus on complex, high-impact investigations."

Fred Wilmot, Co-Founder and CEO of Detecteam, points out that agentic systems are advancing limited expertise by amplifying professionals in multiple ways. "Large foundation models are driving faster response, greater context and more continuous optimization in places like SOC process and tools, threat hunting, detection engineering and threat intelligence operationalization," Wilmot explains. "We're seeing the dawn of a new way to understand data, behavior and process, while optimizing how we ask the question efficiently, confirm the answer is correct and improve the next answer from the data interaction our agents just had."

Still, real-world challenges persist. Costs for tokens and computing power can quickly outstrip the immediate benefit of agentic approaches at scale. Organizations leaning on smaller, customized models may see greater returns but must invest in AI engineering practices to truly realize this advantage. "Companies have to get comfortable with the time and energy required to produce incremental gains," Wilmot adds, "but the incentive to innovate from zero to one in minutes should outweigh the cost of standing still."

Analysts at Forrester have noted that while the buzz around so-called agentic AI is real, these systems are only as effective as the context and guardrails they operate within. The power of agentic systems lies in how well they stay grounded in real data, well-defined scopes, and human oversight.¹ ²

While approaches differ, the business case is clear.
AI agents can reduce toil, speed up analysis, and extend the reach of small teams. As Saurabh observes, AI agents that handle triage and enrichment in minutes can significantly reduce investigation times and allow analysts to focus on the incidents that truly require human judgment.

As organizations wrestle with a growing attack surface and shrinking response windows, the real value of AI agents might not lie in what they replace, but in what they prepare. Rob Allen, Chief Product Officer at ThreatLocker, points out, "AI can help you detect faster. But Zero Trust stops malware before it ever runs. It's not about guessing smarter; it's about not having to guess at all." While AI speeds detection and response, attackers are also using AI to evade defenses, making it vital to pair smart automation with architectures that deny threats by default and only allow what's explicitly needed.

These agents are the eyes ahead, the hands that set the table, and increasingly the reason why the real work can begin faster and smarter than ever before.

References

1. Forrester. (2024, February 8). Cybersecurity's latest buzzword has arrived: What agentic AI is — and isn't. Forrester Blogs. https://www.forrester.com/blogs/cybersecuritys-latest-buzzword-has-arrived-what-agentic-ai-is-and-isnt/ (cc: Allie Mellen and Rowan Curran)

2. Forrester. (2024, March 13). The battle for grounding has begun. Forrester Blogs.
https://www.forrester.com/blogs/the-battle-for-grounding-has-begun/ (cc: Ted Schadler)

________

This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.

Enjoy, think, share with others, and subscribe to "The Future of Cybersecurity" newsletter on LinkedIn.

Sincerely, Sean Martin and TAPE3

________

Sean Martin is a life-long musician and the host of the Music Evolves Podcast; a career technologist, cybersecurity professional, and host of the Redefining CyberSecurity Podcast; and is also the co-host of both the Random and Unscripted Podcast and On Location Event Coverage Podcast. These shows are all part of ITSPmagazine—which he co-founded with his good friend Marco Ciappelli, to explore and discuss topics at The Intersection of Technology, Cybersecurity, and Society.™️

Want to connect with Sean and Marco On Location at an event or conference near you? See where they will be next: https://www.itspmagazine.com/on-location

To learn more about Sean, visit his personal website.

Redefining CyberSecurity
When AI Looks First: How Agentic Systems Are Reshaping Cybersecurity Operations | A Musing On the Future of Cybersecurity and Humanity with Sean Martin and TAPE3 | Read by TAPE3

Redefining CyberSecurity

Play Episode Listen Later Jul 9, 2025 4:32


Before a power crew rolls out to check a transformer, sensors on the grid have often already flagged the problem. Before your smart dishwasher starts its cycle, it might wait for off-peak energy rates. And in the world of autonomous vehicles, lightweight systems constantly scan road conditions before a decision ever reaches the car's central processor.

These aren't the heroes of their respective systems. They're the scouts, the context-builders: automated agents that make the entire operation more efficient, timely, and scalable.

Cybersecurity is beginning to follow the same path.

In an era of relentless digital noise and limited human capacity, AI agents are being deployed to look first, think fast, and flag what matters before security teams ever engage. But these aren't the cartoonish “AI firefighters” some might suggest. They're logical engines operating at scale: pruning data, enriching signals, simulating outcomes, and preparing workflows with precision.

"AI agents are redefining how security teams operate, especially when time and talent are limited," says Kumar Saurabh, CEO of AirMDR. "These agents do more than filter noise. They interpret signals, build context, and prepare response actions before a human ever gets involved."

This shift from reactive firefighting to proactive triage is happening across cybersecurity domains. In detection, AI agents monitor user behavior and flag anomalies in real time, often initiating mitigation actions like isolating compromised devices before escalation is needed. In prevention, they simulate attacker behaviors and pressure-test systems, flagging unseen vulnerabilities and attack paths. In response, they compile investigation-ready case files that allow human analysts to jump straight into action.

"Low-latency, on-device AI agents can operate closer to the data source, better enabling anomaly detection, threat triaging, and mitigation in milliseconds," explains Shomron Jacob, Head of Applied Machine Learning and Platform at Iterate.ai. "This not only accelerates response but also frees up human analysts to focus on complex, high-impact investigations."

Fred Wilmot, Co-Founder and CEO of Detecteam, points out that agentic systems are advancing limited expertise by amplifying professionals in multiple ways. "Large foundation models are driving faster response, greater context and more continuous optimization in places like SOC process and tools, threat hunting, detection engineering and threat intelligence operationalization," Wilmot explains. "We're seeing the dawn of a new way to understand data, behavior and process, while optimizing how we ask the question efficiently, confirm the answer is correct and improve the next answer from the data interaction our agents just had."

Still, real-world challenges persist. Costs for tokens and computing power can quickly outstrip the immediate benefit of agentic approaches at scale. Organizations leaning on smaller, customized models may see greater returns but must invest in AI engineering practices to truly realize this advantage. "Companies have to get comfortable with the time and energy required to produce incremental gains," Wilmot adds, "but the incentive to innovate from zero to one in minutes should outweigh the cost of standing still."

Analysts at Forrester have noted that while the buzz around so-called agentic AI is real, these systems are only as effective as the context and guardrails they operate within. The power of agentic systems lies in how well they stay grounded in real data, well-defined scopes, and human oversight. ¹ ²

While approaches differ, the business case is clear. AI agents can reduce toil, speed up analysis, and extend the reach of small teams. As Saurabh observes, AI agents that handle triage and enrichment in minutes can significantly reduce investigation times and allow analysts to focus on the incidents that truly require human judgment.

As organizations wrestle with a growing attack surface and shrinking response windows, the real value of AI agents might not lie in what they replace, but in what they prepare. Rob Allen, Chief Product Officer at ThreatLocker, points out, "AI can help you detect faster. But Zero Trust stops malware before it ever runs. It's not about guessing smarter; it's about not having to guess at all." While AI speeds detection and response, attackers are also using AI to evade defenses, making it vital to pair smart automation with architectures that deny threats by default and only allow what's explicitly needed.

These agents are the eyes ahead, the hands that set the table, and increasingly the reason why the real work can begin faster and smarter than ever before.

References

1. Forrester. (2024, February 8). Cybersecurity's latest buzzword has arrived: What agentic AI is — and isn't. Forrester Blogs. https://www.forrester.com/blogs/cybersecuritys-latest-buzzword-has-arrived-what-agentic-ai-is-and-isnt/ (cc: Allie Mellen and Rowan Curran)

2. Forrester. (2024, March 13). The battle for grounding has begun. Forrester Blogs. https://www.forrester.com/blogs/the-battle-for-grounding-has-begun/ (cc: Ted Schadler)

________

This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence. Enjoy, think, share with others, and subscribe to "The Future of Cybersecurity" newsletter on LinkedIn.

Sincerely, Sean Martin and TAPE3

________

Sean Martin is a life-long musician and the host of the Music Evolves Podcast; a career technologist, cybersecurity professional, and host of the Redefining CyberSecurity Podcast; and is also the co-host of both the Random and Unscripted Podcast and On Location Event Coverage Podcast. These shows are all part of ITSPmagazine—which he co-founded with his good friend Marco Ciappelli, to explore and discuss topics at The Intersection of Technology, Cybersecurity, and Society.™️

Want to connect with Sean and Marco On Location at an event or conference near you? See where they will be next: https://www.itspmagazine.com/on-location

To learn more about Sean, visit his personal website.

ITSPmagazine | Technology. Cybersecurity. Society
From Feed to Foresight: Cyber Threat Intelligence as a Leadership Signal | A Musing On the Future of Cybersecurity and Humanity with Sean Martin and TAPE3 | Read by TAPE3

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Jul 2, 2025 6:39


Cyber threat intelligence (CTI) is no longer just a technical stream of indicators or a feed for security operations center teams. In this episode, Ryan Patrick, Vice President at HITRUST; John Salomon, Board Member at the Cybersecurity Advisors Network (CyAN); Tod Beardsley, Vice President of Security Research at runZero; Wayne Lloyd, Federal Chief Technology Officer at RedSeal; Chip Witt, Principal Security Analyst at Radware; and Jason Kaplan, Chief Executive Officer at SixMap, each bring their perspective on why threat intelligence must become a leadership signal that shapes decisions far beyond the security team.

From Risk Reduction to Opportunity

Ryan Patrick explains how organizations are shifting from compliance checkboxes to meaningful, risk-informed decisions that influence structure, operations, and investments. This point is reinforced by John Salomon, who describes CTI as a clear, relatable area of security that motivates chief information security officers to exchange threat information with peers — cooperation that multiplies each organization's resources and builds a stronger industry front against emerging threats.

Real Business Context

Tod Beardsley outlines how CTI can directly support business and investment moves, especially when organizations evaluate mergers and acquisitions. Wayne Lloyd highlights the importance of network context, showing how enriched intelligence helps teams move from reactive cleanups to proactive management that ties directly to operational resilience and insurance negotiations.

Chip Witt pushes the conversation further by describing CTI as a business signal that aligns threat trends with organizational priorities. Jason Kaplan brings home the reality that for Fortune 500 security teams, threat intelligence is a race — whoever finds the gap first, the defender or the attacker, determines who stays ahead.

More Than Defense

The discussion makes clear that the real value of CTI is not the data alone but the way it helps organizations make decisions that protect, adapt, and grow. This episode challenges listeners to see CTI as more than a defensive feed — it is a strategic advantage when used to strengthen deals, influence product direction, and build trust where it matters most.

Tune in to hear how these leaders see the role of threat intelligence changing and why treating it as a leadership signal can shape competitive edge.

________

This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence. Enjoy, think, share with others, and subscribe to "The Future of Cybersecurity" newsletter on LinkedIn.

Sincerely, Sean Martin and TAPE3

________

Sean Martin is a life-long musician and the host of the Music Evolves Podcast; a career technologist, cybersecurity professional, and host of the Redefining CyberSecurity Podcast; and is also the co-host of both the Random and Unscripted Podcast and On Location Event Coverage Podcast. These shows are all part of ITSPmagazine—which he co-founded with his good friend Marco Ciappelli, to explore and discuss topics at The Intersection of Technology, Cybersecurity, and Society.™️

Want to connect with Sean and Marco On Location at an event or conference near you? See where they will be next: https://www.itspmagazine.com/on-location

To learn more about Sean, visit his personal website.


ITSPmagazine | Technology. Cybersecurity. Society
Robbie, From Fiction to Familiar — Robots, AI, and the Illusion of Consciousness | A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Jun 29, 2025 9:35


⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

_____________________________

This Episode's Sponsors

BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

_____________________________

Robbie, From Fiction to Familiar — Robots, AI, and the Illusion of Consciousness

June 29, 2025

A new transmission from Musing On Society and Technology Newsletter, by Marco Ciappelli

I recently revisited one of my oldest companions. Not a person, not a memory, but a story. Robbie, the first of Isaac Asimov's famous robot tales.

It's strange how familiar words can feel different over time. I first encountered Robbie as a teenager in the 1980s, flipping through a paperback copy of I, Robot. Back then, it was pure science fiction. The future felt distant, abstract, and comfortably out of reach. Robots existed mostly in movies and imagination. Artificial intelligence was something reserved for research labs or the pages of speculative novels. Reading Asimov was a window into possibilities, but they remained possibilities.

Today, the story feels different. I listened to it this time—the way I often experience books now—through headphones, narrated by a synthetic voice on a sleek device Asimov might have imagined, but certainly never held. And yet, it wasn't the method of delivery that made the story resonate more deeply; it was the world we live in now.

Robbie was first published in 1939, a time when the idea of robots in everyday life was little more than fantasy. Computers were experimental machines that filled entire rooms, and global attention was focused more on impending war than machine ethics. Against that backdrop, Asimov's quiet, philosophical take on robotics was ahead of its time.

Rather than warning about robot uprisings or technological apocalypse, Asimov chose to explore trust, projection, and the human tendency to anthropomorphize the tools we create. Robbie, the robot, is mute, mechanical, yet deeply present. He is a protector, a companion, and ultimately, an emotional anchor for a young girl named Gloria. He doesn't speak. He doesn't pretend to understand. But through his actions—loyalty, consistency, quiet presence—he earns trust.

Those themes felt distant when I first read them in the '80s. At that time, robots were factory tools, AI was theoretical, and society was just beginning to grapple with personal computers, let alone intelligent machines. The idea of a child forming a deep emotional bond with a robot was thought-provoking but belonged firmly in the realm of fiction.

Listening to Robbie now, decades later, in the age of generative AI, alters everything. Today, machines talk to us fluently. They compose emails, generate artwork, write stories, even simulate empathy. Our interactions with technology are no longer limited to function; they are layered with personality, design, and the subtle performance of understanding.

Yet beneath the algorithms and predictive models, the reality remains: these machines do not understand us. They generate language, simulate conversation, and mimic comprehension, but it's an illusion built from probability and training data, not consciousness. And still, many of us choose to believe in that illusion—sometimes out of convenience, sometimes out of the innate human desire for connection.

In that context, Robbie's silence feels oddly honest. He doesn't offer comfort through words or simulate understanding. His presence alone is enough. There is no performance. No manipulation. Just quiet, consistent loyalty.

The contrast between Asimov's fictional robot and today's generative AI highlights a deeper societal tension. For decades, we've anthropomorphized our machines, giving them names, voices, personalities. We've designed interfaces to smile, chatbots to flirt, AI assistants that reassure us they “understand.” At the same time, we've begun to robotize ourselves, adapting to algorithms, quantifying emotions, shaping our behavior to suit systems designed to optimize interaction and efficiency.

This two-way convergence was precisely what Asimov spoke about in his 1965 BBC interview, which has been circulating again recently. In that conversation, he didn't just speculate about machines becoming more human-like. He predicted the merging of biology and technology, the slow erosion of the boundaries between human and machine—a hybrid species, where both evolve toward a shared, indistinct future.

We are living that reality now, in subtle and obvious ways. Neural implants, mind-controlled prosthetics, AI-driven decision-making, personalized algorithms—all shaping the way we experience life and interact with the world. The convergence isn't on the horizon; it's happening in real time.

What fascinates me, listening to Robbie in this new context, is how much of Asimov's work wasn't just about technology, but about us. His stories remain relevant not because he perfectly predicted machines, but because he perfectly understood human nature—our fears, our projections, our contradictions.

In Robbie, society fears the unfamiliar machine, despite its proven loyalty. In 2025, we embrace machines that pretend to understand, despite knowing they don't. Trust is no longer built through presence and action, but through the performance of understanding. The more fluent the illusion, the easier it becomes to forget what lies beneath.

Asimov's stories, beginning with Robbie, have always been less about the robots and more about the human condition reflected through them. That hasn't changed. But listening now, against the backdrop of generative AI and accelerated technological evolution, they resonate with new urgency.

I'll leave you with one of Asimov's most relevant observations, spoken nearly sixty years ago during that same 1965 interview:

“The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.”

In many ways, we've fulfilled Asimov's vision—machines that speak, systems that predict, tools that simulate. But the question of wisdom, of how we navigate this illusion of consciousness, remains wide open.

And, as a matter of fact, this reflection doesn't end here. If today's machines can already mimic understanding—convincing us they comprehend more than they do—what happens when the line between biology and technology starts to dissolve completely? When carbon and silicon, organic and artificial, begin to merge for real?

That conversation deserves its own space—and it will. One of my next newsletters will dive deeper into that inevitable convergence—the hybrid future Asimov hinted at, where defining what's human, what's machine, and what exists in-between becomes harder, messier, and maybe impossible to untangle.

But that's a conversation for another day. For now, I'll sit with that thought, and with Robbie's quiet, unpretentious loyalty, as the conversation continues.

Until next time,

Marco

_________________________________________________

ITSPmagazine | Technology. Cybersecurity. Society
What Hump? Thirty Years of Cybersecurity and the Fine Art of Pretending It's Not a Human Problem | A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Jun 15, 2025 9:59


What Hump? Thirty Years of Cybersecurity and the Fine Art of Pretending It's Not a Human Problem

A new transmission from Musing On Society and Technology Newsletter, by Marco Ciappelli

June 6, 2025

A Post-Infosecurity Europe Reflection on the Strange but Predictable Ways We've Spent Thirty Years Pretending Cybersecurity Isn't About People.

⸻

Once there was a movie titled “Young Frankenstein” (1974) — a black-and-white comedy directed by Mel Brooks, written with Gene Wilder, and starring Wilder and Marty Feldman, who delivers the iconic “What hump?” line.

Let me describe the scene:

[Train station, late at night. Thunder rumbles. Dr. Frederick Frankenstein steps off the train, greeted by a hunched figure holding a lantern — Igor.]

Igor: Dr. Frankenstein?
Dr. Frederick Frankenstein: It's Franken-steen.
Igor: Oh. Well, they told me it was Frankenstein.
Dr. Frederick Frankenstein: I'm not a Frankenstein. I'm a Franken-steen.
Igor (cheerfully): All right.
Dr. Frederick Frankenstein (noticing Igor's eyes): You must be Igor.
Igor: No, it's pronounced Eye-gor.
Dr. Frederick Frankenstein (confused): But they told me it was Igor.
Igor: Well, they were wrong then, weren't they?

[They begin walking toward the carriage.]

Dr. Frederick Frankenstein (noticing Igor's severe hunchback): You know… I'm a rather brilliant surgeon. Perhaps I could help you with that hump.
Igor (looks puzzled, deadpan): What hump?

[Cut to them boarding the carriage, Igor climbing on the outside like a spider, grinning wildly.]

It's a joke, of course. One of the best. A perfectly delivered absurdity that only Mel Brooks and Marty Feldman could pull off. But like all great comedy, it tells a deeper truth.

Last night, standing in front of the Tower of London, recording one of our On Location recaps with Sean Martin, that scene came rushing back. We joked about invisible humps and cybersecurity. And the moment passed.
Or so I thought.

Because hours later — in bed, hotel window cracked open to the London night — I was still hearing it: “What hump?”

And that's when it hit me: this isn't just a comedy bit. It's a diagnosis.

Here we are at Infosecurity Europe, celebrating its 30th anniversary. Three decades of cybersecurity: a field born of optimism and fear, grown in complexity and contradiction.

We've built incredible tools. We've formed global communities of defenders. We've turned “hacker” from rebel to professional job title — with a 401(k), branded hoodies, and a sponsorship deal. But we've also built an industry that — much like poor Igor — refuses to admit something's wrong.

The hump is right there. You can see it. Everyone can see it. And yet… we smile and say: “What hump?”

We say cybersecurity is a priority. We put it in slide decks. We hold awareness months. We write policies thick enough to be used as doorstops. But then we underfund training. We silo the security team. We click links in emails that say whatever will make us think it's important — just like those pieces of snail mail stamped URGENT that we somehow believe, even though it turns out to be an offer for a new credit card we didn't ask for and don't want. Except this time, the payload isn't junk mail — it's a clown on a spring exploding out of a fun box.

The hump moves, shifts, sometimes disappears from view — but it never actually goes away. And if you ask about it? Well… they were wrong then, weren't they?

That's because it's not a technology problem. This is the part that still seems hard to swallow for some: cybersecurity is not a technology problem. It never was.

Yes, we need technology. But technology has never been the weak link. The weak link is the same as it was in 1995: us. The same as it was before the internet and before computers: humans. With our habits, assumptions, incentives, egos, and blind spots. We are the walking, clicking, swiping hump in the system.

We've had encryption for decades.
We've known about phishing since the days of AOL. Zero Trust was already discussed in 2004 — it just didn't have a cool name yet.

So why do we still get breached? Why does a ransomware gang with poor grammar and a Telegram channel take down entire hospitals?

Because culture doesn't change with patches. Because compliance is not belief. Because we keep treating behavior as a footnote, instead of the core.

The Problem We Refuse to See

At the heart of this mess is a very human phenomenon: if we can't see it, we pretend it doesn't exist.

We can quantify risk, but we rarely internalize it. We trust our tech stack but don't trust our users. We fund detection but ignore education.

And not just at work — we ignore it from the start. We still teach children how to cross the street, but not how to navigate a phishing attempt or recognize algorithmic manipulation. We give them connected devices before we teach them what being connected means. In this Hybrid Analog Digital Society, we need to treat cybersecurity not as an optional adult concern, but as a foundational part of growing up. Because by the time someone gets to the workforce, the behavior has already been set.

And worst of all, we operate under the illusion that awareness equals transformation.

Let's be real: awareness is cheap. Change is expensive. It costs time, leadership, discomfort. It requires honesty. It means admitting we are all Igor, in some way. And that's the hardest part. Because no one likes to admit they've got a hump — especially when it's been there so long, it feels like part of the uniform.

We have been looking the other way for over thirty years. I don't want to downplay the progress. We've come a long way, but that only makes the stubbornness more baffling.

We've seen attacks evolve from digital graffiti to full-scale extortion. We've watched cybercrime move from subculture to multi-billion-dollar global enterprise.
And yet, our default strategy is still: “Let's build a bigger wall, buy a shinier tool, and hope marketing doesn't fall for that PDF again.”

We know what works: psychological safety in reporting. Continuous learning. Leadership that models security values. Systems designed for humans, not just admins.

But those are hard. They're invisible on the balance sheet. They don't come with dashboards or demos. So instead… we grin. We adjust our gait. And we whisper, politely: “What hump?”

So What Happens Now?

If you're still reading this, you're probably one of the people who does see it. You see the hump. You've tried to point it out. Maybe you've been told you're imagining things. Maybe you've been told it's “not a priority this quarter.” And maybe now you're tired. I get it.

But here's the thing: nothing truly changes until we name the hump.

Call it bias. Call it culture. Call it education. Call it the human condition. But don't pretend it's not there. Not anymore. Because every time we say “What hump?” — we're giving up a little more of the future. A future that depends not just on clever code and cleverer machines, but on something far more fragile: belief, behavior, and the choice to finally stop pretending.

We joked in front of a thousand-year-old fortress. Because sometimes jokes tell the truth better than keynote stages do. And maybe the real lesson isn't about cybersecurity at all. Maybe it's just this: if we want to survive what's coming next, we have to see what's already here.

- The End

➤ Infosecurity Europe: https://www.itspmagazine.com/infosecurity-europe-2025-infosec-london-cybersecurity-event-coverage

And ... we're not done yet ... stay tuned and follow Sean and Marco as they will be On Location at the following conferences over the next few months:

➤ Black Hat USA in Las Vegas in August: https://www.itspmagazine.com/black-hat-usa-2025-hacker-summer-camp-2025-cybersecurity-event-coverage-in-las-vegas

FOLLOW ALL OF OUR ON LOCATION CONFERENCE COVERAGE
https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage

Share this newsletter and invite anyone you think would enjoy it!

As always, let's keep thinking!

— Marco [https://www.marcociappelli.com]

ITSPmagazine | Technology. Cybersecurity. Society
From Cassette Tapes and Phrasebooks to AI Real-Time Translations — Machines Can Now Speak for Us, But We're Losing the Art of Understanding Each Other | A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later May 30, 2025 6:49


From Cassette Tapes and Phrasebooks to AI Real-Time Translations — Machines Can Now Speak for Us, But We're Losing the Art of Understanding Each Other

May 21, 2025

A new transmission from Musing On Society and Technology Newsletter, by Marco Ciappelli

There's this thing I've dreamed about since I was a kid. No, it wasn't flying cars. Or robot butlers (although I wouldn't mind one to fold the laundry). It was this: having a real conversation with someone — anyone — in their own language, and actually understanding each other.

And now… here we are.

Reference: Google brings live translation to Meet, starting with Spanish. https://www.engadget.com/apps/google-brings-live-translation-to-meet-starting-with-spanish-174549788.html

Google just rolled out live AI-powered translation in Google Meet, starting with Spanish. I watched the demo video, and for a moment, I felt like I was 16 again, staring at the future with wide eyes and messy hair. It worked. It was seamless. Flawless. Magical.

And then — drumroll, please — it sucked! Like… really, existentially, beautifully sucked. Let me explain.

I'm a proud member of Gen X. I grew up with cassette tapes and Walkmans, boomboxes and mixtapes, floppy disks and Commodore 64s, reel-to-reel players and VHS decks, rotary phones and answering machines. I felt language — through static, rewinds, and hiss. Yes, I had to wait FOREVER to hit Play and Record, at the exact right moment, tape songs off the radio onto a Maxell, label it by hand, and rewind it with a pencil when the player chewed it up. I memorized long-distance dialing codes. I waited weeks for a letter to arrive from a pen pal abroad, reading every word like it was a treasure map.

That wasn't just communication. That was connection.

Then came the shift. I didn't miss the digital train — I jumped on early, with curiosity in one hand and a dial-up modem in the other. Early internet. Mac OS. My first email address felt like a passport to a new dimension. I spent hours navigating the World Wide Web like a digital backpacker — discovering strange forums, pixelated cities, and text-based adventures in a binary world that felt limitless. I said goodbye to analog tools, but never to analog thinking.

So what is the connection with learning languages? Well, here's the thing: exploring the internet felt a lot like learning a new language. You weren't just reading text — you were decoding a culture. You learned how people joked. How they argued. How they shared, paused, or replied with silence. You picked up on the tone behind a blinking cursor, or the vibe of a forum thread.

Similarly, when you learn a language, you're not just learning words — you're decoding an entire world. It's not about the words themselves — it's about the world they build. You're learning gestures. Food. Humor. Social cues. Sarcasm. The way someone raises an eyebrow, or says “sure” when they mean “no.” You're learning a culture's operating system, not just its interface.

AI translation skips that. It gets you the data, but not the depth. It's like getting the punchline without ever hearing the setup.

And yes, I use AI to clean up my writing. To bounce translations between English and Italian when I'm juggling stories. But I still read both versions. I still feel both versions. I'm picky — I fight with my AI counterpart to get it right. To make it feel the way I feel it. To make you feel it, too. Even now. I still think in analog, even when I'm living in digital.

So when I watched that Google video, I realized: we're not just gaining a tool. We're at risk of losing something deeply human — the messy, awkward, beautiful process of actually trying to understand someone who moves through the world in a different language — one that can't be auto-translated. Because sometimes it's better to speak broken English with a Japanese friend and a Danish colleague — laughing through cultural confusion — than to have a perfectly translated conversation where nothing truly connects.

This isn't just about language. It's about every tool we create that promises to “translate” life. Every app, every platform, every shortcut that promises understanding without effort.

It's not the digital that scares me. I use it. I live in it. I am it, in many ways. It's the illusion of completion that scares me. The moment we think the transformation is done — the moment we say “we don't need to learn that anymore” — that's the moment we stop being human. We don't live in 0s and 1s. We live in the in-between. The gray. The glitch. The hybrid.

So yeah, cheers to AI-powered translation, but maybe keep your Walkman nearby, your phrasebook in your bag — and your curiosity even closer. Go explore the world. Learn a few words in a new language. Mispronounce them. Get them wrong. Laugh about it. People will appreciate your effort far more than your fancy iPhone.

Alla prossima,

— Marco

The Health Ranger Report
Brighteon Broadcast News, May 12, 2025 – Your MIND is a WAR ZONE: They're dumbing you down while raising up AI to take your place

The Health Ranger Report

Play Episode Listen Later May 12, 2025 213:33


- Broadcast News Introduction and Upcoming Segments (0:00)
- AI Advancements and Their Impact on Jobs (0:45)
- Breaking News: Trump-China Trade Deal and Its Implications (2:46)
- Pakistan-India Cyber War and Its Potential Escalation (9:55)
- Power Grid Vulnerabilities and Preparedness (13:36)
- Crypto Wallets and the Importance of Self-Custody (19:04)
- AI Capabilities and Their Implications for Human Jobs (25:14)
- The Role of Enoch AI in Empowering Users (59:30)
- The War on Human Cognition and Its Vectors (1:05:29)
- Strategies for Protecting Cognitive Function (1:21:34)
- Chemotherapy and Cognitive Impairment (1:21:56)
- Natural Light and Sun Exposure (1:25:06)
- Media and Information Warfare (1:30:33)
- Societal and Behavioral Factors (1:33:37)
- Defending Against Environmental Toxins (1:41:05)
- Nutritional and Dietary Factors (1:47:00)
- Pharmaceutical and Medical Warfare (1:49:22)
- EMF Exposure and Technological Risks (1:59:51)
- Information Warfare and Censorship (2:02:12)
- Societal and Behavioral Factors (2:11:11)
- Zionist and Chinese Strategic Moves in the Middle East (2:25:45)
- Trump's Arrogance and Military Presence in Panama (2:26:04)
- China's Influence and Economic Strategy in Panama (2:26:21)
- Strategic Importance of the Panama Canal (2:52:23)
- Strait of Hormuz and Global Energy Supply (3:06:20)
- US-China Trade War and Economic Implications (3:14:03)
- Anthropological Warfare and Cultural Resilience (3:14:21)
- Migration and Demographic Warfare (3:22:15)
- Global Economic and Political Dynamics (3:28:51)
- Future Strategic Moves and Predictions (3:33:17)

For more updates, visit: http://www.brighteon.com/channel/hrreport

NaturalNews videos would not be possible without you, as always we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals and vastly increased scientific transparency.

▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/
▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html
▶️ Brighteon: https://www.brighteon.com/channels/hrreport
▶️ Join Our Social Network: https://brighteon.social/@HealthRanger
▶️ Check In Stock Products at: https://PrepWithMike.com

ITSPmagazine | Technology. Cybersecurity. Society
The Future Is a Place We Visit, But Never Stay | A Post RSAC Conference 2025 Reflection | A Musing On Society & Technology Newsletter with Marco Ciappelli and TAPE3 | Read by TAPE3

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later May 10, 2025 7:31


The Future Is a Place We Visit, But Never Stay

May 9, 2025

A Post-RSAC 2025 Reflection on the Kinda Funny and Pretty Weird Ways Society, Technology, and Cybersecurity Intersect, Interact, and Often Simply Ignore Each Other.

By Marco Ciappelli | Musing on Society and Technology

Here we are — once again, back from RSAC. Back from the future. Or at least the version of the future that fits inside a conference badge, a branded tote bag, and a hotel bill that makes you wonder if your wallet just got hacked.

San Francisco is still buzzing with innovation — or at least that's what the hundreds of self-driving cars swarming the city would have you believe. It's hard to feel like you're floating into a Jetsons-style future when your shuttle ride is bouncing through potholes that feel more 1984 than 2049.

I have to admit, there's something oddly poetic about hosting a massive cybersecurity event in a city where most attendees would probably rather not be — and yet, here we are. Not for the scenery. Not for the affordability. But because, somehow, for a few intense days, this becomes the place where the future lives.

And yes, it sometimes looks like a carnival. There are goats. There are puppies. There are LED-lit booths that could double as rave stages. Is this how cybersecurity sells the feeling of safety now? Warm fuzzies and swag you'll never use? I'm not sure.

But again: here we are.

There's a certain beauty in it. Even the ridiculous bits. Especially the ridiculous bits.

Personally, I'm grateful for my press badge — it's not just a backstage pass; it's a magical talisman that wards off the pitch-slingers. The power of not having a budget is strong with this one.

But let's set aside the Frankensteins in the expo hall for a moment.

Because underneath the spectacle — behind the snacks, the popcorn, the scanners and the sales demos — there is something deeply valuable happening. Something that matters to me.
Something that has kept me coming back, year after year, not for the products but for the people. Not for the tech, but for the stories.

What RSAC Conference gives us — what all good conferences give us — is a window. A quick glimpse through the curtain at what might be.

And sometimes, if you're lucky and paying attention, that glimpse stays with you long after the lights go down.

We have quantum startups talking about cryptographic agility while schools are still banning phones. We have generative AI writing software — code that writes code — while lawmakers print bills that read like they were faxed in from 1992. We have cybersecurity vendors pitching zero trust to rooms full of people still clinging to the fantasy of perimeter defense — not just in networks, but in their thinking.

We're trying to build the future on top of a mindset that refuses to update.

That's the real threat. Not AI and quantum. Not ransomware. Not the next zero-day.

It's the human operating system. It hasn't been patched in a while.

And so I ask myself — what are these conferences for, really?

Because yes, of course, they matter.

Of course I believe in them — otherwise I wouldn't be there, recording stories, chasing conversations, sharing a couch and a mic with whoever is bold enough to speak not just about how we fix things, but why we should care at all.

But I'm also starting to believe that unless we do something more — unless we act on what we learn, build on what we imagine, challenge what we assume — these gatherings will become time capsules. Beautiful, well-produced, highly caffeinated, blinking, noisy time capsules.

We don't need more predictions. We need more decisions.

One of the most compelling conversations I had wasn't about tech at all. It was about behavior. Human behavior.

Dr. Jason Nurse reminded us that most people are not just confused by cybersecurity — they're afraid of it. They're tired. They're overwhelmed. And in their confusion, they become unpredictable.
Vulnerable. Not because they don't care — but because we haven't built a system that makes it easy to care.

That's a design flaw.

Elsewhere, I heard the term “AI security debt.” That one stayed with me. Because it's not just technical debt anymore. It's existential.

We are creating systems that evolve faster than our ability to understand them — and we're doing it with the same blind trust we used to install browser toolbars in the '90s. “Sure, it seems useful. Click accept.”

We've never needed collective wisdom more than we do right now. And yet, most of what we build is designed for speed, not wisdom.

So what do we do? We pause. We reflect. We resist the urge to just “move on” to the next conference, the next buzzword, the next promised fix.

Because the real value of RSAC isn't in the badge or the swag or the keynotes. It's in the aftershock.

It's in what we carry forward, what we refuse to forget, what we dare to question even when the conference is over, the blinking booths vanish, the future packs up early, and the lanyards go into the drawer of forgotten epiphanies — right next to the stress balls, the branded socks and the beautiful prize that you didn't win.

We'll be in Barcelona soon. Then London. Then Vegas. We'll gather again. We'll talk again. But maybe — just maybe — we can start to shift the story. From visiting the future… to staying a while.

Let's build something we don't want to walk away from.

And now, ladies and gentlemen… the show is over. The lights dim, the music fades, and the future exits stage left... Until we meet again.

— Marco

Resources: Read the first newsletter about RSAC 2025, written last week, "Securing Our Future Without Leaving Half Our Minds in the Past": https://www.linkedin.com/pulse/securing-our-future-without-leaving-half-minds-past-marco-ciappelli-cry1c/

The Health Ranger Report
Brighteon Broadcast News, Apr 11, 2025 – END OF EMPIRE: Trump disses the entire world while the dollar collapses and GOLD SKYROCKETS

The Health Ranger Report

Play Episode Listen Later Apr 11, 2025 144:04


- Trump's Geopolitical Illiteracy and Flu Vaccine Effectiveness (0:10)
- RFK Jr.'s Advocacy and Measles Outbreak (3:23)
- Trump's International Diplomacy and Economic Impact (8:55)
- Trump's Economic Idiocy and Military Incompetence (20:07)
- The Role of Vaccines in Autism and Infant Formula Testing (33:46)
- The Influence of Zionism in America and the Deep State (52:03)
- The Role of Tavistock Institute in Social Engineering (57:25)
- The Cognition Race and the Depopulation Agenda (1:05:27)
- The Role of Gold and Silver in Economic Stability (1:06:41)
- The Future of Human Cognition and Machine Cognition (1:09:32)
- Energy Consumption and Data Centers (1:18:51)
- Competition for Resources (1:21:43)
- Impact of Automation on Jobs (1:23:57)
- Human Genius and AI Tools (1:26:49)
- Homegrown Food and Seed Kits (1:30:47)
- Interview with Dane Wigington (1:33:52)
- Geoengineering and Weather Control (1:40:08)
- Health and Environmental Impact (1:57:09)
- Legislation and Public Awareness (2:22:20)
- Future Prospects and Call to Action (2:23:00)

For more updates, visit: http://www.brighteon.com/channel/hrreport

The Health Ranger Report
Brighteon Broadcast News, Mar 6, 2025 – Climate SLUSH FUND exposed; Robots for off-grid living and growing your own food

The Health Ranger Report

Play Episode Listen Later Mar 6, 2025 157:30


- Introduction and Breaking News (0:00)
- Climate Fraud and Money Laundering (2:01)
- Sanctuary City Mayors and Legal Actions (5:32)
- Book Review: "Griftopia" by Matt Taibbi (8:59)
- Impact of Financial Crisis and Reforms (13:57)
- Health and Politics in America (15:50)
- The Role of Robots in Homesteading (42:29)
- Technological Advancements and Ethical Considerations (1:13:30)
- The Future of Robots and Human Interaction (1:14:07)
- Conclusion and Final Thoughts (1:14:51)
- Exploring the Gladio Network and Intelligence Connections (1:15:10)
- The Age of Human Cognition and AI Advancements (1:23:46)
- The Role of AI in Enhancing Human Intelligence (1:27:14)
- The Impact of AI on Economic Activity and GDP (1:51:26)
- The Future of Work and Human Labor (1:58:20)
- The Role of AI in Decentralization and Freedom (2:03:58)
- The Importance of Prompt Engineering in AI (2:15:19)
- The Role of AI in Business and Decentralized Income Opportunities (2:26:32)
- The Future of AI and Its Impact on Society (2:35:51)
- The Importance of Transparency and Accountability in Government (2:36:57)

For more updates, visit: http://www.brighteon.com/channel/hrreport

The Jordan B. Peterson Podcast
502. Angry and Red: Color as Emotion | Mark Changizi

The Jordan B. Peterson Podcast

Play Episode Listen Later Nov 28, 2024 98:10


Jordan Peterson sits down with theorist and researcher Mark Changizi. They discuss the biological reasons for mass hysteria at the societal level, why we evolved to have color vision, and how we understand and interpret the patterns of the natural world.

Mark Changizi is a theorist aiming to grasp the ultimate foundations underlying why we think, feel, and see as we do. He attended the University of Virginia for a degree in physics and mathematics, and the University of Maryland for a PhD in math. In 2002, he won a prestigious Sloan-Swartz Fellowship in Theoretical Neurobiology at Caltech, and in 2007, he became an assistant professor in the Department of Cognitive Science at Rensselaer Polytechnic Institute. In 2010, he took the post of Director of Human Cognition at a new research institute called 2ai Labs and also co-founded VINO Optics, which builds proprietary vein-enhancing glasses for medical personnel. He consults out of his Human Factory Lab. In 2016, he curated an exhibition at the MONA museum in Tasmania illustrating his "nature-harnessing" theory on the origins of art and language, and co-authored his fourth book, "On the Origin of Art" (2016), with Steven Pinker, Geoffrey Miller, and Brian Boyd.

This episode was filmed on November 22, 2024

Links for Mark Changizi:
On X: https://x.com/MarkChangizi/highlights
On YouTube: https://www.youtube.com/c/markchangizi
Website: https://www.changizi.com/?_sm_nck=1