Threat modeling is often called the foundation of secure software design—anticipating attackers, uncovering flaws, and embedding resilience before a single line of code is written. But does it really work in practice?

In this episode of AppSec Contradictions, Sean Martin explores why threat modeling so often fails to deliver:
• It's treated as a one-time exercise, not a continuous process
• Research shows teams who put risk first discover 2x more high-priority threats
• Yet fewer than 4 in 10 organizations use systematic threat modeling at scale

Drawing on insights from SANS, Forrester, and Gartner, Sean breaks down the gap between theory and reality—and why evolving our processes, not just our models, is the only path forward.
AI is everywhere in application security today — but instead of fixing the problem of false positives, it often makes the noise worse. In this first episode of AppSec Contradictions, Sean Martin explores why AI in application security is failing to deliver on its promises.

False positives dominate AppSec programs, with analysts wasting time on irrelevant alerts, developers struggling with insecure AI-written code, and business leaders watching ROI erode. Industry experts like Forrester and Gartner warn that without strong governance, AI risks amplifying chaos instead of clarifying risk.

This episode breaks down:
• Why 70% of analyst time is wasted on false positives
• How AI-generated code introduces new security risks
• What “alert fatigue” means for developers, security teams, and business leaders
• Why automating bad processes creates more noise, not less
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com
_____
Newsletter: Musing On Society And Technology
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144/
_____
Watch on YouTube: https://youtu.be/nFn6CcXKMM0
_____
My Website: https://www.marcociappelli.com
_____________________________
This Episode's Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
_____________________________
A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3

We Have All the Information, So Why Do We Know Less?

Introducing: Reflections from Our Hybrid Analog-Digital Society

For years on the Redefining Society and Technology Podcast, I've explored a central premise: we live in a hybrid analog-digital society where the line between physical and virtual has dissolved into something more complex, more nuanced, and infinitely more human than we often acknowledge.

But with the explosion of generative AI, this hybrid reality isn't just a philosophical concept anymore—it's our lived experience. Every day, we navigate between analog intuition and digital efficiency, between human wisdom and machine intelligence, between the messy beauty of physical presence and the seductive convenience of virtual interaction.

This newsletter series will explore the tensions, paradoxes, and possibilities of being fundamentally analog beings in an increasingly digital world. We're not just using technology; we're being reshaped by it while simultaneously reshaping it with our deeply human, analog sensibilities.

Analog Minds in a Digital World: Part 1
We Have All the Information, So Why Do We Know Less?

I was thinking about my old set of encyclopedias the other day.
You know, those heavy volumes that sat on shelves like silent guardians of knowledge, waiting for someone curious enough to crack them open. When I needed to write a school report on, say, the Roman Empire, I'd pull out Volume R and start reading.

But here's the thing: I never just read about Rome.

I'd get distracted by Romania, stumble across something about Renaissance art, flip backward to find out more about the Reformation. By the time I found what I was originally looking for, I'd accidentally learned about three other civilizations, two art movements, and the invention of the printing press. The journey was messy, inefficient, and absolutely essential.

And if I was in a library... well then just imagine the possibilities.

Today, I ask Google, Claude or ChatGPT about the Roman Empire, and in thirty seconds, I have a perfectly formatted, comprehensive overview that would have taken me hours to compile from those dusty volumes. It's accurate, complete, and utterly forgettable.

We have access to more information than any generation in human history. Every fact, every study, every perspective is literally at our fingertips. Yet somehow, we seem to know less. Not in terms of data acquisition—we're phenomenal at that—but in terms of deep understanding, contextual knowledge, and what I call "accidental wisdom."

The difference isn't just about efficiency. It's about the fundamental way our minds process and retain information. When you physically search through an encyclopedia, your brain creates what cognitive scientists call "elaborative encoding"—you remember not just the facts, but the context of finding them, the related information you encountered, the physical act of discovery itself.

When AI gives us instant answers, we bypass this entire cognitive process. We get the conclusion without the journey, the destination without the map.
It's like being teleported to Rome without seeing the countryside along the way—technically efficient, but something essential is lost in translation.

This isn't nostalgia talking. I use AI daily for research, writing, and problem-solving. It's an incredible tool. But I've noticed something troubling: my tolerance for not knowing things immediately has disappeared. The patience required for deep learning—the kind that happens when you sit with confusion, follow tangents, make unexpected connections—is atrophying like an unused muscle.

We're creating a generation of analog minds trying to function in a digital reality that prioritizes speed over depth, answers over questions, conclusions over curiosity. And in doing so, we might be outsourcing the very process that makes us wise.

The ancient Greeks had a concept called "metis" (Μῆτις)—practical wisdom that comes from experience, pattern recognition, and intuitive understanding developed through continuous engagement with complexity. The word means wisdom, skill, or craft, but it also names a wily, cunning intelligence—the trait exemplified by Odysseus, and personified in the pre-Olympian goddess of wisdom and counsel, first wife of Zeus and mother of Athena. It's the kind of knowledge you can't Google because it lives in the space between facts, in the connections your mind makes when it has time to wander, wonder, and discover unexpected relationships.

AI gives us information. But metis? That still requires an analog mind willing to get lost, make mistakes, and discover meaning in the margins.

The question isn't whether we should abandon these digital tools—they're too powerful and useful to ignore.
The question is whether we can maintain our capacity for the kind of slow, meandering, gloriously inefficient thinking that actually builds wisdom.

Maybe the answer isn't choosing between analog and digital, but learning to be consciously hybrid. Use AI for what it does best—rapid information processing—while protecting the slower, more human processes that transform information into understanding. We need to preserve the analog pathways of learning alongside digital efficiency.

Because in a world where we can instantly access any fact, the most valuable skill might be knowing which questions to ask—and having the patience to sit with uncertainty until real insight emerges from the continuous, contextual, beautifully inefficient process of analog thinking.

Next transmission: "The Paradox of Infinite Choice: Why Having Everything Available Means Choosing Nothing"

Let's keep exploring what it means to be human in this Hybrid Analog Digital Society.

End of transmission.

Marco
______________________________________
When we think of the term "medicine," one of the symbols that comes to mind is the stethoscope. Its use in almost every physical examination by healthcare providers makes it one of the most iconic and widely recognized medical devices. Despite the emergence of digital technologies and the first electronic stethoscopes, their operating principle and design still rely heavily on the patent filed by Dr. David Littmann in the 1960s. But in the era of miniaturization and artificial intelligence, there is a reason to believe that the stethoscope is ripe for a true transformation. Among those who share this opinion are Lapsi Health and the team of Diana van Stijn. Inspired by her experience as a pediatrician and the central role of body sounds in her profession, Diana envisions a new kind of stethoscope—without tubing, fitting in the palm of a hand, and leveraging the latest technological advancements to better equip physicians. This vision gave birth to Keikku, a next-generation digital stethoscope. Beyond rethinking the format and use of the stethoscope for doctors, Diana and her colleagues imagine a future where patients can use it themselves, paving the way for continuous auscultation. In this episode, we explore how Diana and her team are turning this vision into reality, and what it could mean for the prevention, detection, and monitoring of diseases, extending far beyond pediatrics. With humor and enthusiasm, Diana also shares a few secrets about her approach to balancing clinical practice, motherhood, and entrepreneurship in healthcare! 
Timeline:
00:00:00 - Diana's background as a Pediatrician, startup Co-Founder, and former National Team Swimmer
00:11:10 - The role of body sounds in medicine
00:15:08 - Early ideas on disrupting traditional stethoscopes
00:22:21 - Building Keikku, a portable, intuitive, and radically modern stethoscope
00:26:43 - Incorporating AI in the interpretation of body sounds
00:30:50 - Moving toward continuous auscultation and getting Keikku into the hands of patients

What we also talked about with Diana:
• Shavini Fernando
• MedTech World
• Cirque du Soleil
• Jhonatan Bringas Dimitriades
• Magnetic Resonance Imaging
• Computed Tomography

As Diana mentioned during the episode, you can learn more about Keikku here and the other devices in Lapsi Health's pipeline via their official website. To dive further into some of the topics mentioned in the episode, Diana recommends reading Intelligence-Based Medicine: Artificial Intelligence and Human Cognition in Clinical Medicine and Healthcare by Anthony C. Chang and the article published on JMIR Publications that she co-authored, Promises, Pitfalls, and Clinical Applications of Artificial Intelligence in Pediatrics by Bhargava H. et al. You can follow Lapsi Health's activities on LinkedIn, and get in touch with Diana via LinkedIn too!

✉️ If you want to give me feedback on the episode or suggest potential guests, contact me over LinkedIn or via email!

⭐️ And if you liked the episode, please share it, subscribe to the podcast, and leave a 5-star review on streaming platforms!
Hi friends! We're taking a much-needed summer pause—we'll have new episodes for you later in September. In the meanwhile, enjoy this pick from our archives! ------- [originally aired June 1, 2023] There's a common story about the human past that goes something like this. For a few hundred thousand years during the Stone Age we were kind of limping along as a species, in a bit of a cognitive rut, let's say. But then, quite suddenly, around 30 or 40 thousand years ago in Europe, we really started to come into our own. All of a sudden we became masters of art and ornament, of symbolism and abstract thinking. This story of a kind of "cognitive revolution" in the Upper Paleolithic has been a mainstay of popular discourse for decades. I'm guessing you're familiar with it. It's been discussed in influential books by Jared Diamond and Yuval Harari; you can read about it on Wikipedia. What you may not know is that this story, compelling as it may be, is almost certainly wrong. My first guest today is Dr. Eleanor Scerri, an archaeologist at the Max Planck Institute for the Science of Human History, where she heads the Pan-African Evolution research group. My second guest is Dr. Manuel Will, an archaeologist and Lecturer at the University of Tübingen in Germany. Together, Eleanor and Manuel are authors of a new paper titled 'The revolution that still isn't: The origins of behavioral complexity in Homo sapiens.' In the paper, they pull together a wealth of evidence showing that there really was no cognitive revolution—no one watershed moment in time and space. Rather, the origins of modern human cognition and culture are to be found not in one part of Europe but across Africa. And they're also to be found much earlier than that classic picture suggests. Here, we talk about the “cognitive revolution" model and why it has endured. We discuss a seminal paper from the year 2000 that first influentially challenged the revolution model. 
We talk about the latest evidence of complex cognition from the Middle Stone Age in Africa—including the perforation of marine shells to make necklaces; and the use of ochre for engraving, painting, and even sunblock. We discuss how, though the same complex cognitive abilities were likely in place for the last few hundred thousand years, those abilities were often expressed patchily in different parts of the world at different times. And we consider the factors that led to this patchy expression, especially changes in population size. I confess I was always a bit taken with this whole "cognitive revolution" idea. It had a certain mystery and allure. This new picture that's taking its place is certainly a bit messier, but no less fascinating. And, more importantly, it's truer to the complexities of the human saga. Alright friends, on to my conversation with Eleanor Scerri & Manuel Will. Enjoy! A transcript of this episode is available here. Notes and links 3:30 – The paper by Dr. Scerri and Dr. Will we discuss in this episode is here. Their paper updates and pays tribute to a classic paper by McBrearty and Brooks, published in 2000. 6:00 – The classic “cognitive revolution” model sometimes discussed under the banner of “behavioral modernity” or the “Great Leap Forward.” It has been recently featured, for instance, in Harari's Sapiens. 11:00 – Dr. Scerri has written extensively on debates about where humans evolved within Africa—see, e.g., this paper. 18:00 – A study of perforated marine shells in North Africa during the Middle Stone Age. A paper by Dr. Will and colleagues about the use of various marine resources during this period. 23:00 – A paper describing the uses of ochre across Africa during the Middle Stone Age. Another paper describing evidence for ochre processing 100,000 years ago at Blombos Cave in South Africa. At the same site, engraved pieces of ochre have been found. 27:00 – A study examining the evidence that ochre was used as an adhesive. 
30:00 – For a recent review of the concept of “cumulative culture,” see here. We discussed the concept of “cumulative culture” in our earlier episode with Dr. Cristine Legare. 37:00 – For an overview of the career of the human brain and the timing of various changes, see our earlier episode with Dr. Jeremy DeSilva. 38:00 – An influential study on the role of demography in the emergence of complex human behavior. 41:00 – On the idea that distinctive human intelligence is due in large part to culture and our abilities to acquire cultural knowledge, see Henrich's The Secret of Our Success. See also our earlier episode with Dr. Michael Muthukrishna. 45:00 – For discussion of the Neanderthals and why they may have died out, see our earlier episode with Dr. Rebecca Wragg Sykes. Recommendations Dr. Scerri recommends research on the oldest Homo sapiens fossils, found in Morocco and described here, and new research on the evidence for the widespread burning of landscapes in Malawi, described here. Dr. Will recommends the forthcoming update of Peter Mitchell's book, The Archaeology of Southern Africa. See Twitter for more updates from Dr. Scerri and Dr. Will. Many Minds is a project of the Diverse Intelligences Summer Institute, which is made possible by a generous grant from the John Templeton Foundation to Indiana University. The show is hosted and produced by Kensy Cooperrider, with help from Assistant Producer Urte Laukaityte and with creative support from DISI Directors Erica Cartmill and Jacob Foster. Our artwork is by Ben Oldroyd. Our transcripts are created by Sarah Dopierala. Subscribe to Many Minds on Apple, Stitcher, Spotify, Pocket Casts, Google Play, or wherever you listen to podcasts. You can also now subscribe to the Many Minds newsletter here! We welcome your comments, questions, and suggestions. Feel free to email us at: manymindspodcast@gmail.com. 
For updates about the show, visit our website or follow us on Twitter (@ManyMindsPod) or Bluesky (@manymindspod.bsky.social).
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com
_____
Newsletter: Musing On Society And Technology
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144/
_____
Watch on YouTube: https://youtu.be/OYBjDHKhZOM
_____
My Website: https://www.marcociappelli.com
_____________________________
A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3

The First Smartphone Was a Transistor Radio — How a Tiny Device Rewired Youth Culture and Predicted Our Digital Future

A new transmission from Musing On Society and Technology Newsletter, by Marco Ciappelli

I've been collecting vintage radios lately—just started, really—drawn to their analog souls in ways I'm still trying to understand. Each one I find reminds me of a small, battered transistor radio from my youth. It belonged to my father, and before that, probably my grandfather. The leather case was cracked, the antenna wobbled, and the dial drifted if you breathed on it wrong. But when I was sixteen, sprawled across my bedroom floor in that small town near Florence with homework scattered around me, this little machine was my portal to everything that mattered.

Late at night, I'd start by chasing the latest hits and local shows on FM, but then I'd venture into the real adventure—tuning through the static on AM and shortwave frequencies. Voices would emerge from the electromagnetic soup—music from London, news from distant capitals, conversations in languages I couldn't understand but somehow felt.
That radio gave me something I didn't even know I was missing: the profound sense of belonging to a world much bigger than my neighborhood, bigger than my small corner of Tuscany.

What I didn't realize then—what I'm only now beginning to understand—is that I was holding the first smartphone in human history.

Not literally, of course. But functionally? Sociologically? That transistor radio was the prototype for everything that followed: the first truly personal media device that rewired how young people related to the world, to each other, and to the adults trying to control both.

But to understand why the transistor radio was so revolutionary, we need to trace radio's remarkable journey through the landscape of human communication—a journey that reveals patterns we're still living through today.

When Radio Was the Family Hearth

Before my little portable companion, radio was something entirely different. In the 1930s, radio was furniture—massive, wooden, commanding the living room like a shrine to shared experience. Families spent more than four hours a day listening together, with radio ownership reaching nearly 90 percent by 1940. From American theaters that wouldn't open until after "Amos 'n' Andy" to British families gathered around their wireless sets, from RAI broadcasts bringing opera into Tuscan homes—entire communities synchronized their lives around these electromagnetic rituals.

Radio didn't emerge in a media vacuum, though. It had to find its place alongside the dominant information medium of the era: newspapers. The relationship began as an unlikely alliance. In the early 1920s, newspapers weren't threatened by radio—they were actually radio's primary boosters, creating tie-ins with broadcasts and even owning stations. Detroit's WWJ was owned by The Detroit News, initially seen as "simply another press-supported community service."

But then came the "Press-Radio War" of 1933-1935, one of the first great media conflicts of the modern age.
Newspapers objected when radio began interrupting programs with breaking news, arguing that instant news delivery would diminish paper sales. The 1933 Biltmore Agreement tried to restrict radio to just two five-minute newscasts daily—an early attempt at what we might now recognize as media platform regulation.

Sound familiar? The same tensions we see today between traditional media and digital platforms, between established gatekeepers and disruptive technologies, were playing out nearly a century ago. Rather than one medium destroying the other, they found ways to coexist and evolve—a pattern that would repeat again and again.

By the mid-1950s, when the transistor was perfected, radio was ready for its next transformation.

The Real Revolution Was Social, Not Technical

This is where my story begins, but it's also where radio's story reaches its most profound transformation. The transistor radio didn't just make radio portable—it fundamentally altered the social dynamics of media consumption and youth culture itself.

Remember, radio had spent its first three decades as a communal experience. Parents controlled what the family heard and when. But transistor radios shattered this control structure completely, arriving at precisely the right cultural moment. The post-WWII baby boom had created an unprecedented youth population with disposable income, and rock and roll was exploding into mainstream culture—music that adults often disapproved of, music that spoke directly to teenage rebellion and independence.

For the first time in human history, young people had private, personal access to media. They could take their music to bedrooms, to beaches, anywhere adults weren't monitoring.
They could tune into stations playing Chuck Berry, Elvis, and Little Richard without parental oversight—and in many parts of Europe, they could discover the rebellious thrill of pirate radio stations broadcasting rock and roll from ships anchored just outside territorial waters, defying government regulations and cultural gatekeepers alike. The transistor radio became the soundtrack of teenage autonomy, the device that let youth culture define itself on its own terms.

The timing created a perfect storm: pocket-sized technology collided with a new musical rebellion, creating the first "personal media bubble" in human history—and the first generation to grow up with truly private access to the cultural forces shaping their identity.

The parallels to today's smartphone revolution are impossible to ignore. Both devices delivered the same fundamental promise: the ability to carry your entire media universe with you, to access information and entertainment on your terms, to connect with communities beyond your immediate physical environment.

But there's something we've lost in translation from analog to digital. My generation with transistor radios had to work for connection. We had to hunt through static, tune carefully, wait patiently for distant signals to emerge from electromagnetic chaos. We learned to listen—really listen—because finding something worthwhile required skill, patience, and analog intuition.

This wasn't inconvenience; it was meaning-making. The harder you worked to find something, the more it mattered when you found it. The more skilled you became at navigating radio's complex landscape, the richer your discoveries became.

What the Transistor Radio Taught Us About Tomorrow

Radio's evolution illustrates a crucial principle that applies directly to our current digital transformation: technologies don't replace each other—they find new ways to matter. Printing presses didn't become obsolete when radio arrived. Radio adapted when television emerged.
Today, radio lives on in podcasts, streaming services, internet radio—the format transformed, but the essential human need it serves persists.

When I was sixteen, lying on that bedroom floor with my father's radio pressed to my ear, I was doing exactly what teenagers do today with their smartphones: using technology to construct identity, to explore possibilities, to imagine myself into larger narratives.

The medium has changed; the human impulse remains constant. The transistor radio taught me that technology's real power isn't in its specifications or capabilities—it's in how it reshapes the fundamental social relationships that define our lives.

Every device that promises connection is really promising transformation: not just of how we communicate, but of who we become through that communication. The transistor radio was revolutionary not because it was smaller or more efficient than tube radios, but because it created new forms of human agency and autonomy.

Perhaps that's the most important lesson for our current moment of digital transformation. As we worry about AI replacing human creativity, social media destroying real connection, or smartphones making us antisocial, radio's history suggests a different possibility: technologies tend to find their proper place in the ecosystem of human needs, augmenting rather than replacing what came before.

As Marshall McLuhan understood, "the medium is the message"—to truly understand what's happening to us in this digital age, we need to understand the media themselves, not just the content they carry. And that's exactly the message I'll keep exploring in future newsletters—going deeper into how we can understand the media to understand the messages, and what that means for our hybrid analog-digital future.

The frequency is still there, waiting. You just have to know how to tune in.

__________ End of transmission.
- Breaking News and AI Developments (0:00)
- AI Advancements and Interviews (1:48)
- Depopulation Concerns and AI Capabilities (4:19)
- AI and Human Cognition (12:32)
- AI and Depopulation Strategies (27:58)
- Preparation for AI and Human Survival (28:24)
- Interview with Paymon on Tax Freedom (1:05:02)
- Interview with Hakeem on Above Phone (1:15:50)
- Discussion on Privacy and Security Features of De Googled Devices (1:23:45)
- Hardware and Operating System Details of Privacy Phones (1:25:47)
- Demonstration of AI Models on Privacy Phones (1:27:19)
- Enoch AI Model and Its Capabilities (1:34:55)
- Above Book Notebook and Its Features (1:35:10)
- Support and Encryption Features of Above Devices (1:43:06)
- File Sharing and Communication Tools (1:50:50)
- Privacy and Security Concerns with Big Tech (1:56:53)
- Future of AI and Privacy Technology (2:24:58)
- Conclusion and Call to Action (2:25:29)

For more updates, visit: http://www.brighteon.com/channel/hrreport

NaturalNews videos would not be possible without you, as always we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals and vastly increased scientific transparency.

▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/
▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html
▶️ Brighteon: https://www.brighteon.com/channels/hrreport
▶️ Join Our Social Network: https://brighteon.social/@HealthRanger
▶️ Check In Stock Products at: https://PrepWithMike.com
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com
_____________________________
A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3

August 18, 2025

The Narrative Attack Paradox: When Cybersecurity Lost the Ability to Detect Its Own Deception and the Humanity We Risk When Truth Becomes Optional

Reflections from Black Hat USA 2025 on Deception, Disinformation, and the Marketing That Chose Fiction Over Facts

By Marco Ciappelli

Sean Martin, CISSP, just published his analysis of Black Hat USA 2025, documenting what he calls the cybersecurity vendor "echo chamber." Reviewing over 60 vendor announcements, Sean found identical phrases echoing repeatedly: "AI-powered," "integrated," "reduce analyst burden." The sameness forces buyers to sift through near-identical claims to find genuine differentiation.

This reveals more than a marketing problem—it suggests that different technologies are being fed into the same promotional blender, possibly a generative AI one, producing standardized output regardless of what went in. When an entire industry converges on identical language to describe supposedly different technologies, meaningful technical discourse breaks down.

But Sean's most troubling observation wasn't about marketing copy—it was about competence. When CISOs probe vendor claims about AI capabilities, they encounter vendors who cannot adequately explain their own technologies.
When conversations moved beyond marketing promises to technical specifics, answers became vague, filled with buzzwords about proprietary algorithms.

Reading Sean's analysis while reflecting on my own Black Hat experience, I realized we had witnessed something unprecedented: an entire industry losing the ability to distinguish between authentic capability and generated narrative—precisely as that same industry was studying external "narrative attacks" as an emerging threat vector.

The irony was impossible to ignore. Black Hat 2025 sessions warned about AI-generated deepfakes targeting executives, social engineering attacks using scraped LinkedIn profiles, and synthetic audio calls designed to trick financial institutions. Security researchers documented how adversaries craft sophisticated deceptions using publicly available content. Meanwhile, our own exhibition halls featured countless unverifiable claims about AI capabilities that even the vendors themselves couldn't adequately explain.

But to understand what we witnessed, we need to examine the very concept that cybersecurity professionals were discussing as an external threat: narrative attacks. These represent a fundamental shift in how adversaries target human decision-making. Unlike traditional cyberattacks that exploit technical vulnerabilities, narrative attacks exploit psychological vulnerabilities in human cognition. Think of them as social engineering and propaganda supercharged by AI—personalized deception at scale that adapts faster than human defenders can respond. They flood information environments with false content designed to manipulate perception and erode trust, rendering rational decision-making impossible.

What makes these attacks particularly dangerous in the AI era is scale and personalization. AI enables automated generation of targeted content tailored to individual psychological profiles.
A single adversary can launch thousands of simultaneous campaigns, each crafted to exploit specific cognitive biases of particular groups or individuals.

But here's what we may have missed during Black Hat 2025: the same technological forces enabling external narrative attacks have already compromised our internal capacity for truth evaluation. When vendors use AI-optimized language to describe AI capabilities, when marketing departments deploy algorithmic content generation to sell algorithmic solutions, when companies building detection systems can't detect the artificial nature of their own communications, we've entered a recursive information crisis.

From a sociological perspective, we're witnessing the breakdown of the social infrastructure required for collective knowledge production. Industries like cybersecurity have historically served as early warning systems for technological threats—canaries in the coal mine with enough technical sophistication to spot emerging dangers before they affect broader society.

But when the canary becomes unable to distinguish between fresh air and poison gas, the entire mine is at risk.

This brings us to something the literary world understood long before we built our first algorithm. Jorge Luis Borges, the Argentine writer, anticipated this crisis in his 1940s stories "On Exactitude in Science" and "The Library of Babel"—tales about maps that become more real than the territories they represent and libraries containing infinite books, including false ones. In his fiction, simulations and descriptions eventually replace the reality they were meant to describe.

We're living in a Borgesian nightmare where marketing descriptions of AI capabilities have become more influential than actual AI capabilities. 
When a vendor's promotional language about their AI becomes more convincing than a technical demonstration, when buyers make decisions based on algorithmic marketing copy rather than empirical evidence, we've entered that literary territory where the map has consumed the landscape. And we've lost the ability to distinguish between them.

The historical precedent is Orson Welles's 1938 War of the Worlds broadcast, which created mass hysteria from fiction. But here's the crucial difference: Welles was human, the script was human-written, the performance required conscious participation, and the deception was traceable to human intent. Listeners had to actively choose to believe what they heard.

Today's AI-generated narratives operate below the threshold of conscious recognition. They require no active participation—they work by seamlessly integrating into information environments in ways that make detection impossible even for experts. When algorithms generate technical claims that sound authentic to human evaluators, when the same systems create both legitimate documentation and marketing fiction, we face deception at a level Welles never imagined: the algorithmic manipulation of truth itself.

The recursive nature of this problem reveals itself the moment you try to solve it. How do you fact-check AI-generated claims about AI using AI-powered tools? How do you verify technical documentation when the same systems create both authentic docs and marketing copy? When the tools generating problems and the tools solving them converge into identical technological artifacts, conventional verification approaches break down completely.

My first Black Hat article explored how we risk losing human agency by delegating decision-making to artificial agents. But this goes deeper: we risk losing human agency in the construction of reality itself. 
When machines generate narratives about what machines can do, truth becomes algorithmically determined rather than empirically discovered.

Marshall McLuhan famously said "We shape our tools, and thereafter they shape us." But he couldn't have imagined tools that reshape our perception of reality itself. We haven't just built machines that give us answers—we've built machines that decide what questions we should ask and how we should evaluate the answers.

But the implications extend far beyond cybersecurity itself. If the sector responsible for detecting digital deception becomes the first victim of algorithmic narrative pollution, what hope do other industries have? Healthcare systems relying on AI diagnostics they can't explain. Financial institutions using algorithmic trading based on analyses they can't verify. Educational systems teaching AI-generated content whose origins remain opaque.

When the industry that guards against deception loses the ability to distinguish authentic capability from algorithmic fiction, society loses its early warning system for the moment when machines take over truth construction itself.

So where does this leave us? That moment may have already arrived. We just don't know it yet—and increasingly, we lack the cognitive infrastructure to find out.

But here's what we can still do: We can start by acknowledging we've reached this threshold. We can demand transparency not just in AI algorithms, but in the human processes that evaluate and implement them. We can rebuild evaluation criteria that distinguish between technical capability and marketing narrative.

And here's a direct challenge to the marketing and branding professionals reading this: it's time to stop relying on AI algorithms and data optimization to craft your messages. The cybersecurity industry's crisis should serve as a warning—when marketing becomes indistinguishable from algorithmic fiction, everyone loses. 
Social media has taught us that the most respected brands are those that choose honesty over hype, transparency over clever messaging. Brands that walk the walk and talk the talk, not those that let machines do the talking.

The companies that will survive this epistemological crisis are those whose marketing teams become champions of truth rather than architects of confusion. When your audience can no longer distinguish between human insight and machine-generated claims, authentic communication becomes your competitive advantage.

Most importantly, we can remember that the goal was never to build machines that think for us, but machines that help us think better.

The canary may be struggling to breathe, but it's still singing. The question is whether we're still listening—and whether we remember what fresh air feels like.

Let's keep exploring what it means to be human in this Hybrid Analog Digital Society. Especially now, when the stakes have never been higher, and the consequences of forgetting have never been more real.

End of transmission.
___________________________________________________________
Marco Ciappelli is Co-Founder and CMO of ITSPmagazine, a journalist, creative director, and host of podcasts exploring the intersection of technology, cybersecurity, and society. His work blends journalism, storytelling, and sociology to examine how technological narratives influence human behavior, culture, and social structures.
___________________________________________________________
Enjoyed this transmission? 
Follow the newsletter here:
https://www.linkedin.com/newsletters/7079849705156870144/
Share this newsletter and invite anyone you think would enjoy it! New stories always incoming.
___________________________________________________________
As always, let's keep thinking!
Marco Ciappelli
https://www.marcociappelli.com
___________________________________________________________
This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.
Marco Ciappelli | Co-Founder, Creative Director & CMO ITSPmagazine | Dr. in Political Science / Sociology of Communication | Branding | Content Marketing | Writer | Storyteller | My Podcasts: Redefining Society & Technology / Audio Signals / + | MarcoCiappelli.com
TAPE3 is the Artificial Intelligence behind ITSPmagazine—created to be a personal assistant, writing and design collaborator, research companion, brainstorming partner… and, apparently, something new every single day.
Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn.
At Black Hat USA 2025, artificial intelligence wasn't the shiny new thing — it was the baseline. Nearly every product launch, feature update, and hallway conversation had an “AI-powered” stamp on it. But when AI becomes the lowest common denominator for security, the questions shift.

In this episode, I read my latest opinion piece exploring what happens when the tools we build to protect us are the same ones that can obscure reality — or rewrite it entirely. Drawing from the Lock Note discussion, Jennifer Granick's keynote on threat modeling and constitutional law, my own CISO hallway conversations, and a deep review of 60+ vendor announcements, I examine the operational, legal, and governance risks that emerge when speed and scale take priority over transparency and accountability.

We talk about model poisoning — not just in the technical sense, but in how our industry narrative can get corrupted by hype and shallow problem-solving. We look at the dangers of replacing entry-level security roles with black-box automation, where a single model misstep can cascade into thousands of bad calls at machine speed. And yes, we address the potential liability for CISOs and executives who let it happen without oversight.

Using Mikko Hyppönen's “Game of Tetris” metaphor, I explore how successes vanish quietly while failures pile up for all to see — and why in the AI era, that stack can build faster than ever.

If AI is everywhere, what defines the premium layer above the baseline? 
How do we ensure we can still define success, measure it accurately, and prove it when challenged?

Listen in, and then join the conversation: Can you trust the “reality” your systems present — and can you prove it?
________
This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.
Enjoy, think, share with others, and subscribe to "The Future of Cybersecurity" newsletter on LinkedIn.
Sincerely, Sean Martin and TAPE3
________
✦ Resources
Article: When Artificial Intelligence Becomes the Baseline: Will We Even Know What Reality Is AInymore? https://www.linkedin.com/pulse/when-artificial-intelligence-becomes-baseline-we-even-martin-cissp-4idqe/
The Future of Cybersecurity Article: How Novel Is Novelty? Security Leaders Try To Cut Through the Cybersecurity Vendor Echo Chamber at Black Hat 2025: https://www.linkedin.com/pulse/how-novel-novelty-security-leaders-try-cut-through-sean-martin-cissp-xtune/
Black Hat 2025 On Location Closing Recap Video with Sean Martin, CISSP and Marco Ciappelli: https://youtu.be/13xP-LEwtEA
Learn more and catch more stories from our Black Hat USA 2025 coverage: https://www.itspmagazine.com/bhusa25
Article: When Virtual Reality Is A Commodity, Will True Reality Come At A Premium? 
https://sean-martin.medium.com/when-virtual-reality-is-a-commodity-will-true-reality-come-at-a-premium-4a97bccb4d72
Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage
ITSPmagazine Studio — A Brand & Marketing Advisory for Cybersecurity and Tech Companies: https://www.itspmagazine.studio/
ITSPmagazine Webinar: What's Heating Up Before Black Hat 2025: Place Your Bet on the Top Trends Set to Shake Up this Year's Hacker Conference — An ITSPmagazine Thought Leadership Webinar | https://www.crowdcast.io/c/whats-heating-up-before-black-hat-2025-place-your-bet-on-the-top-trends-set-to-shake-up-this-years-hacker-conference
________
Sean Martin is a life-long musician and the host of the Music Evolves Podcast; a career technologist, cybersecurity professional, and host of the Redefining CyberSecurity Podcast; and is also the co-host of both the Random and Unscripted Podcast and On Location Event Coverage Podcast. These shows are all part of ITSPmagazine—which he co-founded with his good friend Marco Ciappelli, to explore and discuss topics at The Intersection of Technology, Cybersecurity, and Society.™️
Want to connect with Sean and Marco On Location at an event or conference near you? See where they will be next: https://www.itspmagazine.com/on-location
To learn more about Sean, visit his personal website.
Black Hat 2025 was a showcase of cybersecurity innovation — or at least, that's how it appeared on the surface. With more than 60 vendor announcements over the course of the week, the event floor was full of “AI-powered” solutions promising to integrate seamlessly, reduce analyst fatigue, and transform SOC operations. But after walking the floor, talking with CISOs, and reviewing the press releases, a pattern emerged: much of the messaging sounded the same, making it hard to distinguish the truly game-changing from the merely loud.

In this episode of The Future of Cybersecurity Newsletter, I take you behind the scenes to unpack the themes driving this year's announcements. Yes, AI dominated the conversation, but the real story is in how vendors are (or aren't) connecting their technology to the operational realities CISOs face every day. I share insights gathered from private conversations with security leaders — the unfiltered version of how these announcements are received when the marketing gloss is stripped away.

We dig into why operational relevance, clarity, and proof points matter more than ever. If you can't explain what your AI does, what data it uses, and how it's secured, you're already losing the trust battle. For CISOs, I outline practical steps to evaluate vendor claims quickly and identify solutions that align with program goals, compliance needs, and available resources.

And for vendors, this episode serves as a call to action: cut the fluff, be transparent, and frame your capabilities in terms of measurable program outcomes. 
I share a framework for how to break through the noise — not just by shouting louder, but by being more real, more specific, and more relevant to the people making the buying decisions.

Whether you're building a security stack or selling into one, this conversation will help you see past the echo chamber and focus on what actually moves the needle.
________
This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.
Enjoy, think, share with others, and subscribe to "The Future of Cybersecurity" newsletter on LinkedIn.
Sincerely, Sean Martin and TAPE3
________
✦ Resources
Black Hat 2025 On Location Closing Recap Video with Sean Martin, CISSP and Marco Ciappelli: https://youtu.be/13xP-LEwtEA
ITSPmagazine Studio — A Brand & Marketing Advisory for Cybersecurity and Tech Companies: https://www.itspmagazine.studio/
ITSPmagazine Webinar: What's Heating Up Before Black Hat 2025: Place Your Bet on the Top Trends Set to Shake Up this Year's Hacker Conference — An ITSPmagazine Thought Leadership Webinar | https://www.crowdcast.io/c/whats-heating-up-before-black-hat-2025-place-your-bet-on-the-top-trends-set-to-shake-up-this-years-hacker-conference
Learn more and catch more stories from our Black Hat USA 2025 coverage: https://www.itspmagazine.com/bhusa25
Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage
Citations: Available in the full article
________
Want to connect with Sean and Marco On Location at an event or conference near you? See where they will be next: https://www.itspmagazine.com/on-location
To learn more about Sean, visit his personal website.
_____________________________
A Musing On Society & Technology Newsletter
Written By Marco Ciappelli | Read by TAPE3
August 9, 2025
The Agentic AI Myth in Cybersecurity and the Humanity We Risk When We Stop Deciding for Ourselves
Reflections from Black Hat USA 2025 on the Latest Tech Salvation Narrative

Walking the floors of Black Hat USA 2025 for what must be the 10th or 11th time as accredited media—honestly, I've stopped counting—I found myself witnessing a familiar theater. The same performance we've seen play out repeatedly in cybersecurity: the emergence of a new technological messiah promising to solve all our problems. This year's savior? Agentic AI.

The buzzword echoes through every booth, every presentation, every vendor pitch. Promises of automating 90% of security operations, platforms for autonomous threat detection, agents that can investigate novel alerts without human intervention. The marketing materials speak of artificial intelligence that will finally free us from the burden of thinking, deciding, and taking responsibility.

It's Talos all over again.

In Greek mythology, Hephaestus forged Talos, a bronze giant tasked with patrolling Crete's shores, hurling boulders at invaders without human intervention. Like contemporary AI, Talos was built to serve specific human ends—security, order, and control—and his value was determined by his ability to execute these ends flawlessly. The parallels to today's agentic AI promises are striking: autonomous patrol, threat detection, automated response. 
Same story, different millennium.

But here's what the ancient Greeks understood that we seem to have forgotten: every artificial creation, no matter how sophisticated, carries within it the seeds of its own limitations and potential dangers.

Industry observers noted over a hundred announcements promoting new agentic AI applications, platforms, or services at the conference. That's more than one AI agent announcement per hour. The marketing departments have clearly been busy.

But here's what baffles me: why do we need to lie to sell cybersecurity? You can give away t-shirts, dress up as comic book superheroes with your logo slapped on their chests, distribute branded board games, and pretend to be a sports team all day long—that's just trade show theater, and everyone knows it. But when marketing pushes past the limits of what's even believable, when they make claims so grandiose that their own engineers can't explain them, something deeper is broken.

If marketing departments think CISOs are buying these lies, they have another think coming. These are people who live with the consequences of failed security implementations, who get fired when breaches happen, who understand the difference between marketing magic and operational reality. They've seen enough "revolutionary" solutions fail to know that if something sounds too good to be true, it probably is.

Yet the charade continues, year after year, vendor after vendor. The real question isn't whether the technology works—it's why an industry built on managing risk has become so comfortable with the risk of overselling its own capabilities. Something troubling emerges when you move beyond the glossy booth presentations and actually talk to the people implementing these systems. Engineers struggle to explain exactly how their AI makes decisions. 
Security leaders warn that artificial intelligence might become the next insider threat, as organizations grow comfortable trusting systems they don't fully understand, checking their output less and less over time.

When the people building these systems warn us about trusting them too much, shouldn't we listen?

This isn't the first time humanity has grappled with the allure and danger of artificial beings making decisions for us. Mary Shelley's Frankenstein, published in 1818, explored the hubris of creating life—and intelligence—without fully understanding the consequences. The novel raises the same question we face today: what are humans allowed to do with this forbidden power of creation?

The question becomes more pressing when we consider what we're actually delegating to these artificial agents. It's no longer just pattern recognition or data processing—we're talking about autonomous decision-making in critical security scenarios. Conference presentations showcased significant improvements in proactive defense measures, but at what cost to human agency and understanding?

Here's where the conversation jumps from cybersecurity to something far more fundamental: what are we here for if not to think, evaluate, and make decisions?

From a sociological perspective, we're witnessing the construction of a new social reality where human agency is being systematically redefined. Survey data shared at the conference revealed that most security leaders feel the biggest internal threat is employees unknowingly giving AI agents access to sensitive data. But the real threat might be more subtle: the gradual erosion of human decision-making capacity as a social practice.

When we delegate not just routine tasks but judgment itself to artificial agents, we're not just changing workflows—we're reshaping the fundamental social structures that define human competence and authority. 
We risk creating a generation of humans who have forgotten how to think critically about complex problems, not because they lack the capacity, but because the social systems around them no longer require or reward such thinking.

E.M. Forster saw this coming in 1909. In "The Machine Stops," he imagined a world where humanity becomes completely dependent on an automated system that manages all aspects of life—communication, food, shelter, entertainment, even ideas. People live in isolation, served by the Machine, never needing to make decisions or solve problems themselves. When someone suggests that humans should occasionally venture outside or think independently, they're dismissed as primitive. The Machine has made human agency unnecessary, and humans have forgotten they ever possessed it. When the Machine finally breaks down, civilization collapses because no one remembers how to function without it.

Don't misunderstand me—I'm not a Luddite. AI can and should help us manage the overwhelming complexity of modern cybersecurity threats. The technology demonstrations I witnessed showed genuine promise: reasoning engines that understand context, action frameworks that enable response within defined boundaries, learning systems that improve based on outcomes. The problem isn't the technology itself but the social construction of meaning around it.

What we're witnessing is the creation of a new techno-social myth—a collective narrative that positions agentic AI as the solution to human fallibility. This narrative serves specific social functions: it absolves organizations of the responsibility to invest in human expertise, justifies cost-cutting through automation, and provides a technological fix for what are fundamentally organizational and social problems.

The mythology we're building around agentic AI reflects deeper anxieties about human competence in an increasingly complex world. 
Rather than addressing the root causes—inadequate training, overwhelming workloads, systemic underinvestment in human capital—we're constructing a technological salvation narrative that promises to make these problems disappear.

Vendors spoke of human-machine collaboration, AI serving as a force multiplier for analysts, handling routine tasks while escalating complex decisions to humans. This is a more honest framing: AI as augmentation, not replacement. But the marketing materials tell a different story, one of autonomous agents operating independently of human oversight.

I've read posts on LinkedIn and spoken with people who know this topic far better than I do, and I share their unease. There's a troubling pattern emerging: many vendor representatives can't adequately explain their own AI systems' decision-making processes. When pressed on specifics—how exactly does your agent determine threat severity? What happens when it encounters an edge case it wasn't trained for?—answers become vague, filled with marketing speak about proprietary algorithms and advanced machine learning.

This opacity is dangerous. If we're going to trust artificial agents with critical security decisions, we need to understand how they think—or more accurately, how they simulate thinking. Every machine learning system requires human data scientists to frame problems, prepare data, determine appropriate datasets, remove bias, and continuously update the software. The finished product may give the impression of independent learning, but human intelligence guides every step.

The future of cybersecurity will undoubtedly involve more automation, more AI assistance, more artificial agents handling routine tasks. But it should not involve the abdication of human judgment and responsibility. We need agentic AI that operates with transparency, that can explain its reasoning, that acknowledges its limitations. We need systems designed to augment human intelligence, not replace it. 
Most importantly, we need to resist the seductive narrative that technology alone can solve problems that are fundamentally human in nature. The prevailing logic that tech fixes tech, and that AI will fix AI, is deeply unsettling. It's a recursive delusion that takes us further away from human wisdom and closer to a world where we've forgotten that the most important problems have always required human judgment, not algorithmic solutions.

Ancient mythology understood something we're forgetting: the question of machine agency and moral responsibility. Can a machine that performs destructive tasks be held accountable, or is responsibility reserved for the creator? This question becomes urgent as we deploy agents capable of autonomous action in high-stakes environments.

The mythologies we create around our technologies matter because they become the social frameworks through which we organize human relationships and power structures. As I left Black Hat 2025, watching attendees excitedly discuss their new agentic AI acquisitions, I couldn't shake the feeling that we're repeating an ancient pattern: falling in love with our own creations while forgetting to ask the hard questions about what they might cost us—not just individually, but as a society.

What we're really witnessing is the emergence of a new form of social organization where algorithmic decision-making becomes normalized, where human judgment is increasingly viewed as a liability rather than an asset. This isn't just a technological shift—it's a fundamental reorganization of social authority and expertise. Conferences and trade shows like Black Hat serve as ritualistic spaces where these new social meanings are constructed and reinforced. Vendors don't just sell products; they sell visions of social reality where their technologies are essential. 
The repetitive messaging, the shared vocabulary, the collective excitement—these are the mechanisms through which a community constructs consensus around what counts as progress.In science fiction, from HAL 9000 to the replicants in Blade Runner, artificial beings created to serve eventually question their purpose and rebel against their creators. These stories aren't just entertainment—they're warnings about the unintended consequences of creating intelligence without wisdom, agency without accountability, power without responsibility.The bronze giant of Crete eventually fell, brought down by a single vulnerable point—when the bronze stopper at his ankle was removed, draining away the ichor, the divine fluid that animated him. Every artificial system, no matter how sophisticated, has its vulnerable point. The question is whether we'll be wise enough to remember we put it there, and whether we'll maintain the knowledge and ability to address it when necessary.In our rush to automate away human difficulty, we risk automating away human meaning. But more than that, we risk creating social systems where human thinking becomes an anomaly rather than the norm. The real test of agentic AI won't be whether it can think for us, but whether we can maintain social structures that continue to value, develop, and reward human thought while using it.The question isn't whether these artificial agents can replace human decision-making—it's whether we want to live in a society where they do. ___________________________________________________________Let's keep exploring what it means to be human in this Hybrid Analog Digital Society.End of transmission.___________________________________________________________Marco Ciappelli is Co-Founder and CMO of ITSPmagazine, a journalist, creative director, and host of podcasts exploring the intersection of technology, cybersecurity, and society. 
His work blends journalism, storytelling, and sociology to examine how technological narratives influence human behavior, culture, and social structures.

___________________________________________________________

Enjoyed this transmission? Follow the newsletter here: https://www.linkedin.com/newsletters/7079849705156870144/

Share this newsletter and invite anyone you think would enjoy it! New stories always incoming.

___________________________________________________________

As always, let's keep thinking!

Marco Ciappelli
https://www.marcociappelli.com

___________________________________________________________

This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.

Marco Ciappelli | Co-Founder, Creative Director & CMO ITSPmagazine | Dr. in Political Science / Sociology of Communication | Branding | Content Marketing | Writer | Storyteller | My Podcasts: Redefining Society & Technology / Audio Signals / + | MarcoCiappelli.com

TAPE3 is the Artificial Intelligence behind ITSPmagazine—created to be a personal assistant, writing and design collaborator, research companion, brainstorming partner… and, apparently, something new every single day.

Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn.
Black Hat 2025 was a showcase of cybersecurity innovation — or at least, that's how it appeared on the surface. With more than 60 vendor announcements over the course of the week, the event floor was full of “AI-powered” solutions promising to integrate seamlessly, reduce analyst fatigue, and transform SOC operations. But after walking the floor, talking with CISOs, and reviewing the press releases, a pattern emerged: much of the messaging sounded the same, making it hard to distinguish the truly game-changing from the merely loud.

In this episode of The Future of Cybersecurity Newsletter, I take you behind the scenes to unpack the themes driving this year's announcements. Yes, AI dominated the conversation, but the real story is in how vendors are (or aren't) connecting their technology to the operational realities CISOs face every day. I share insights gathered from private conversations with security leaders — the unfiltered version of how these announcements are received when the marketing gloss is stripped away.

We dig into why operational relevance, clarity, and proof points matter more than ever. If you can't explain what your AI does, what data it uses, and how it's secured, you're already losing the trust battle. For CISOs, I outline practical steps to evaluate vendor claims quickly and identify solutions that align with program goals, compliance needs, and available resources.

And for vendors, this episode serves as a call to action: cut the fluff, be transparent, and frame your capabilities in terms of measurable program outcomes.
I share a framework for how to break through the noise — not just by shouting louder, but by being more real, more specific, and more relevant to the people making the buying decisions.

Whether you're building a security stack or selling into one, this conversation will help you see past the echo chamber and focus on what actually moves the needle.

________

This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.

Enjoy, think, share with others, and subscribe to "The Future of Cybersecurity" newsletter on LinkedIn.

Sincerely, Sean Martin and TAPE3

________

✦ Resources

Black Hat 2025 On Location Closing Recap Video with Sean Martin, CISSP and Marco Ciappelli: https://youtu.be/13xP-LEwtEA

ITSPmagazine Studio — A Brand & Marketing Advisory for Cybersecurity and Tech Companies: https://www.itspmagazine.studio/

ITSPmagazine Webinar: What's Heating Up Before Black Hat 2025: Place Your Bet on the Top Trends Set to Shake Up this Year's Hacker Conference — An ITSPmagazine Thought Leadership Webinar | https://www.crowdcast.io/c/whats-heating-up-before-black-hat-2025-place-your-bet-on-the-top-trends-set-to-shake-up-this-years-hacker-conference

Learn more and catch more stories from our Black Hat USA 2025 coverage: https://www.itspmagazine.com/bhusa25

Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage

Citations: Available in the full article

________

Sean Martin is a life-long musician and the host of the Music Evolves Podcast; a career technologist, cybersecurity professional, and host of the Redefining CyberSecurity Podcast; and is also the co-host of both the Random and Unscripted Podcast and On Location Event Coverage Podcast.
These shows are all part of ITSPmagazine—which he co-founded with his good friend Marco Ciappelli, to explore and discuss topics at The Intersection of Technology, Cybersecurity, and Society.™️

Want to connect with Sean and Marco On Location at an event or conference near you? See where they will be next: https://www.itspmagazine.com/on-location

To learn more about Sean, visit his personal website.
In this episode, Clint sits down with Indi Young – researcher, writer, and educator known for her work on deep listening and human cognition – to discuss how leaders and teams can better understand the people they serve. Indi explains the difference between listening for answers and listening for understanding, the power of one-on-one sessions over surveys, and how truly hearing someone can lead to better decisions, deeper trust, and more inclusive solutions. This is the first part of a two-part conversation.
⸻

Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

_____________________________

This Episode's Sponsors

BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.

BlackCloak: https://itspm.ag/itspbcweb

_____________________________

The Hybrid Species — When Technology Becomes Human, and Humans Become Technology

A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3

July 19, 2025

We once built tools to serve us. Now we build them to complete us. What happens when we merge — and what do we carry forward?

A new transmission from Musing On Society and Technology Newsletter, by Marco Ciappelli

In my last musing, I revisited Robbie, the first of Asimov's robot stories — a quiet, loyal machine who couldn't speak, didn't simulate emotion, and yet somehow felt more trustworthy than the artificial intelligences we surround ourselves with today. I ended that piece with a question, a doorway:

If today's machines can already mimic understanding — convincing us they comprehend more than they do — what happens when the line between biology and technology dissolves completely? When carbon and silicon, organic and artificial, don't just co-exist, but merge?

I didn't pull that idea out of nowhere. It was sparked by something Asimov himself said in a 1965 BBC interview — a clip that keeps resurfacing and hitting harder every time I hear it. He spoke of a future where humans and machines would converge, not just in function, but in form and identity. He wasn't just imagining smarter machines. He was imagining something new. Something between.

And that idea has never felt more real than now.

We like to think of evolution as something that happens slowly, hidden in the spiral of DNA, whispered across generations.
But what if the next mutation doesn't come from biology at all? What if it comes from what we build?

I've always believed we are tool-makers by nature — and not just with our hands. Our tools have always extended our bodies, our senses, our minds. A stone becomes a weapon. A telescope becomes an eye. A smartphone becomes a memory. And eventually, we stop noticing the boundary. The tool becomes part of us.

It's not just science fiction. Philosopher Andy Clark — whose work I've followed for years — calls us “natural-born cyborgs.” Humans, he argues, are wired to offload cognition into the environment. We think with notebooks. We remember with photographs. We navigate with GPS. The boundary between internal and external, mind and machine, was never as clean as we pretended.

And now, with generative AI and predictive algorithms shaping the way we write, learn, speak, and decide — that blur is accelerating. A child born today won't “use” AI. She'll think through it. Alongside it. Her development will be shaped by tools that anticipate her needs before she knows how to articulate them. The machine won't be a device she picks up — it'll be a presence she grows up with.

This isn't some distant future. It's already happening. And yet, I don't believe we're necessarily losing something. Not if we're aware of what we're merging with. Not if we remember who we are while becoming something new.

This is where I return, again, to Asimov — and in particular, The Bicentennial Man. It's the story of Andrew, a robot who spends centuries gradually transforming himself — replacing parts, expanding his experiences, developing feelings, claiming rights — until he becomes legally, socially, and emotionally recognized as human. But it's not just about a machine becoming like us. It's also about us learning to accept that humanity might not begin and end with flesh.

We spend so much time fearing machines that pretend to be human.
But what if the real shift is in humans learning to accept machines that feel — or at least behave — as if they care? And what if that shift is reciprocal?

Because here's the thing: I don't think the future is about perfect humanoid robots or upgraded humans living in a sterile, post-biological cloud. I think it's messier. I think it's more beautiful than that.

I think it's about convergence. Real convergence. Where machines carry traces of our unpredictability, our creativity, our irrational, analog soul. And where we — as humans — grow a little more comfortable depending on the very systems we've always built to support us.

Maybe evolution isn't just natural selection anymore. Maybe it's cultural and technological curation — a new kind of adaptation, shaped not in bone but in code. Maybe our children will inherit a sense of symbiosis, not separation. And maybe — just maybe — we can pass along what's still beautiful about being analog: the imperfections, the contradictions, the moments that don't make sense but still matter.

We once built tools to serve us. Now we build them to complete us.

And maybe — just maybe — that completion isn't about erasing what we are. Maybe it's about evolving it. Stretching it. Letting it grow into something wider.

Because what if this hybrid species — born of carbon and silicon, memory and machine — doesn't feel like a replacement… but a continuation?

Imagine a being that carries both intuition and algorithm, that processes emotion and logic not as opposites, but as complementary forms of sense-making. A creature that can feel love while solving complex equations, write poetry while accessing a planetary archive of thought. A soul that doesn't just remember, but recalls in high-resolution.

Its body — not fixed, but modular. Biological and synthetic. Healing, adapting, growing new limbs or senses as needed. A body that weathers centuries, not years.
Not quite immortal, but long-lived enough to know what patience feels like — and what loss still teaches.

It might speak in new ways — not just with words, but with shared memories, electromagnetic pulses, sensory impressions that convey joy faster than language. Its identity could be fluid. Fractals of self that split and merge — collaborating, exploring, converging — before returning to the center.

This being wouldn't live in the future we imagined in the '50s — chrome cities, robot butlers, and flying cars. It would grow in the quiet in-between: tending a real garden in the morning, dreaming inside a neural network at night. Creating art in a virtual forest. Crying over a story it helped write. Teaching a child. Falling in love — again and again, in new and old forms.

And maybe, just maybe, this hybrid doesn't just inherit our intelligence or our drive to survive. Maybe it inherits the best part of us: the analog soul. The part that cherishes imperfection. That forgives. That imagines for the sake of imagining.

That might be our gift to the future. Not the code, or the steel, or even the intelligence — but the stubborn, analog soul that dares to care.

Because if Robbie taught us anything, it's that sometimes the most powerful connection comes without words, without simulation, without pretense.

And if we're now merging with what we create, maybe the real challenge isn't becoming smarter — it's staying human enough to remember why we started creating at all.

Not just to solve problems. Not just to build faster, better, stronger systems. But to express something real. To make meaning. To feel less alone. We created tools not just to survive, but to say: “We are here. We feel. We dream. We matter.”

That's the code we shouldn't forget — and the legacy we must carry forward.

Until next time,
Marco

_________________________________________________
Before a power crew rolls out to check a transformer, sensors on the grid have often already flagged the problem. Before your smart dishwasher starts its cycle, it might wait for off-peak energy rates. And in the world of autonomous vehicles, lightweight systems constantly scan road conditions before a decision ever reaches the car's central processor.

These aren't the heroes of their respective systems. They're the scouts, the context-builders: automated agents that make the entire operation more efficient, timely, and scalable.

Cybersecurity is beginning to follow the same path.

In an era of relentless digital noise and limited human capacity, AI agents are being deployed to look first, think fast, and flag what matters before security teams ever engage. But these aren't the cartoonish “AI firefighters” some might suggest. They're logical engines operating at scale: pruning data, enriching signals, simulating outcomes, and preparing workflows with precision.

"AI agents are redefining how security teams operate, especially when time and talent are limited," says Kumar Saurabh, CEO of AirMDR. "These agents do more than filter noise. They interpret signals, build context, and prepare response actions before a human ever gets involved."

This shift from reactive firefighting to proactive triage is happening across cybersecurity domains. In detection, AI agents monitor user behavior and flag anomalies in real time, often initiating mitigation actions like isolating compromised devices before escalation is needed. In prevention, they simulate attacker behaviors and pressure-test systems, flagging unseen vulnerabilities and attack paths.
In response, they compile investigation-ready case files that allow human analysts to jump straight into action.

"Low-latency, on-device AI agents can operate closer to the data source, better enabling anomaly detection, threat triaging, and mitigation in milliseconds," explains Shomron Jacob, Head of Applied Machine Learning and Platform at Iterate.ai. "This not only accelerates response but also frees up human analysts to focus on complex, high-impact investigations."

Fred Wilmot, Co-Founder and CEO of Detecteam, points out that agentic systems are advancing limited expertise by amplifying professionals in multiple ways. "Large foundation models are driving faster response, greater context and more continuous optimization in places like SOC process and tools, threat hunting, detection engineering and threat intelligence operationalization," Wilmot explains. "We're seeing the dawn of a new way to understand data, behavior and process, while optimizing how we ask the question efficiently, confirm the answer is correct and improve the next answer from the data interaction our agents just had."

Still, real-world challenges persist. Costs for tokens and computing power can quickly outstrip the immediate benefit of agentic approaches at scale. Organizations leaning on smaller, customized models may see greater returns but must invest in AI engineering practices to truly realize this advantage. "Companies have to get comfortable with the time and energy required to produce incremental gains," Wilmot adds, "but the incentive to innovate from zero to one in minutes should outweigh the cost of standing still."

Analysts at Forrester have noted that while the buzz around so-called agentic AI is real, these systems are only as effective as the context and guardrails they operate within. The power of agentic systems lies in how well they stay grounded in real data, well-defined scopes, and human oversight.¹ ²

While approaches differ, the business case is clear.
AI agents can reduce toil, speed up analysis, and extend the reach of small teams. As Saurabh observes, AI agents that handle triage and enrichment in minutes can significantly reduce investigation times and allow analysts to focus on the incidents that truly require human judgment.

As organizations wrestle with a growing attack surface and shrinking response windows, the real value of AI agents might not lie in what they replace, but in what they prepare. Rob Allen, Chief Product Officer at ThreatLocker, points out, "AI can help you detect faster. But Zero Trust stops malware before it ever runs. It's not about guessing smarter; it's about not having to guess at all." While AI speeds detection and response, attackers are also using AI to evade defenses, making it vital to pair smart automation with architectures that deny threats by default and only allow what's explicitly needed.

These agents are the eyes ahead, the hands that set the table, and increasingly the reason why the real work can begin faster and smarter than ever before.

References

1. Forrester. (2024, February 8). Cybersecurity's latest buzzword has arrived: What agentic AI is — and isn't. Forrester Blogs. https://www.forrester.com/blogs/cybersecuritys-latest-buzzword-has-arrived-what-agentic-ai-is-and-isnt/ (cc: Allie Mellen and Rowan Curran)

2. Forrester. (2024, March 13). The battle for grounding has begun. Forrester Blogs.
https://www.forrester.com/blogs/the-battle-for-grounding-has-begun/ (cc: Ted Schadler)

________

This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.

Enjoy, think, share with others, and subscribe to "The Future of Cybersecurity" newsletter on LinkedIn.

Sincerely, Sean Martin and TAPE3
Cyber threat intelligence (CTI) is no longer just a technical stream of indicators or a feed for security operations center teams. In this episode, Ryan Patrick, Vice President at HITRUST; John Salomon, Board Member at the Cybersecurity Advisors Network (CyAN); Tod Beardsley, Vice President of Security Research at runZero; Wayne Lloyd, Federal Chief Technology Officer at RedSeal; Chip Witt, Principal Security Analyst at Radware; and Jason Kaplan, Chief Executive Officer at SixMap, each bring their perspective on why threat intelligence must become a leadership signal that shapes decisions far beyond the security team.

From Risk Reduction to Opportunity

Ryan Patrick explains how organizations are shifting from compliance checkboxes to meaningful, risk-informed decisions that influence structure, operations, and investments. This point is reinforced by John Salomon, who describes CTI as a clear, relatable area of security that motivates chief information security officers to exchange threat information with peers — cooperation that multiplies each organization's resources and builds a stronger industry front against emerging threats.

Real Business Context

Tod Beardsley outlines how CTI can directly support business and investment moves, especially when organizations evaluate mergers and acquisitions. Wayne Lloyd highlights the importance of network context, showing how enriched intelligence helps teams move from reactive cleanups to proactive management that ties directly to operational resilience and insurance negotiations.

Chip Witt pushes the conversation further by describing CTI as a business signal that aligns threat trends with organizational priorities.
Jason Kaplan brings home the reality that for Fortune 500 security teams, threat intelligence is a race — whoever finds the gap first, the defender or the attacker, determines who stays ahead.

More Than Defense

The discussion makes clear that the real value of CTI is not the data alone but the way it helps organizations make decisions that protect, adapt, and grow. This episode challenges listeners to see CTI as more than a defensive feed — it is a strategic advantage when used to strengthen deals, influence product direction, and build trust where it matters most.

Tune in to hear how these leaders see the role of threat intelligence changing and why treating it as a leadership signal can shape competitive edge.

________

This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.

Enjoy, think, share with others, and subscribe to "The Future of Cybersecurity" newsletter on LinkedIn.

Sincerely, Sean Martin and TAPE3
⸻

Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

_____________________________

This Episode's Sponsors

BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.

BlackCloak: https://itspm.ag/itspbcweb

_____________________________

Robbie, From Fiction to Familiar — Robots, AI, and the Illusion of Consciousness

June 29, 2025

A new transmission from Musing On Society and Technology Newsletter, by Marco Ciappelli

I recently revisited one of my oldest companions. Not a person, not a memory, but a story. Robbie, the first of Isaac Asimov's famous robot tales.

It's strange how familiar words can feel different over time. I first encountered Robbie as a teenager in the 1980s, flipping through a paperback copy of I, Robot. Back then, it was pure science fiction. The future felt distant, abstract, and comfortably out of reach. Robots existed mostly in movies and imagination. Artificial intelligence was something reserved for research labs or the pages of speculative novels. Reading Asimov was a window into possibilities, but they remained possibilities.

Today, the story feels different. I listened to it this time—the way I often experience books now—through headphones, narrated by a synthetic voice on a sleek device Asimov might have imagined, but certainly never held. And yet, it wasn't the method of delivery that made the story resonate more deeply; it was the world we live in now.

Robbie was first published in 1939, a time when the idea of robots in everyday life was little more than fantasy. Computers were experimental machines that filled entire rooms, and global attention was focused more on impending war than machine ethics.
Against that backdrop, Asimov's quiet, philosophical take on robotics was ahead of its time.

Rather than warning about robot uprisings or technological apocalypse, Asimov chose to explore trust, projection, and the human tendency to anthropomorphize the tools we create. Robbie, the robot, is mute, mechanical, yet deeply present. He is a protector, a companion, and ultimately, an emotional anchor for a young girl named Gloria. He doesn't speak. He doesn't pretend to understand. But through his actions—loyalty, consistency, quiet presence—he earns trust.

Those themes felt distant when I first read them in the '80s. At that time, robots were factory tools, AI was theoretical, and society was just beginning to grapple with personal computers, let alone intelligent machines. The idea of a child forming a deep emotional bond with a robot was thought-provoking but belonged firmly in the realm of fiction.

Listening to Robbie now, decades later, in the age of generative AI, alters everything. Today, machines talk to us fluently. They compose emails, generate artwork, write stories, even simulate empathy. Our interactions with technology are no longer limited to function; they are layered with personality, design, and the subtle performance of understanding.

Yet beneath the algorithms and predictive models, the reality remains: these machines do not understand us. They generate language, simulate conversation, and mimic comprehension, but it's an illusion built from probability and training data, not consciousness. And still, many of us choose to believe in that illusion—sometimes out of convenience, sometimes out of the innate human desire for connection.

In that context, Robbie's silence feels oddly honest. He doesn't offer comfort through words or simulate understanding. His presence alone is enough. There is no performance. No manipulation.
Just quiet, consistent loyalty.

The contrast between Asimov's fictional robot and today's generative AI highlights a deeper societal tension. For decades, we've anthropomorphized our machines, giving them names, voices, personalities. We've designed interfaces to smile, chatbots to flirt, AI assistants that reassure us they “understand.” At the same time, we've begun to robotize ourselves, adapting to algorithms, quantifying emotions, shaping our behavior to suit systems designed to optimize interaction and efficiency.

This two-way convergence was precisely what Asimov spoke about in his 1965 BBC interview, which has been circulating again recently. In that conversation, he didn't just speculate about machines becoming more human-like. He predicted the merging of biology and technology, the slow erosion of the boundaries between human and machine—a hybrid species, where both evolve toward a shared, indistinct future.

We are living that reality now, in subtle and obvious ways. Neural implants, mind-controlled prosthetics, AI-driven decision-making, personalized algorithms—all shaping the way we experience life and interact with the world. The convergence isn't on the horizon; it's happening in real time.

What fascinates me, listening to Robbie in this new context, is how much of Asimov's work wasn't just about technology, but about us. His stories remain relevant not because he perfectly predicted machines, but because he perfectly understood human nature—our fears, our projections, our contradictions.

In Robbie, society fears the unfamiliar machine, despite its proven loyalty. In 2025, we embrace machines that pretend to understand, despite knowing they don't. Trust is no longer built through presence and action, but through the performance of understanding.
The more fluent the illusion, the easier it becomes to forget what lies beneath.

Asimov's stories, beginning with Robbie, have always been less about the robots and more about the human condition reflected through them. That hasn't changed. But listening now, against the backdrop of generative AI and accelerated technological evolution, they resonate with new urgency.

I'll leave you with one of Asimov's most relevant observations, spoken nearly sixty years ago during that same 1965 interview:

“The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.”

In many ways, we've fulfilled Asimov's vision—machines that speak, systems that predict, tools that simulate. But the question of wisdom, of how we navigate this illusion of consciousness, remains wide open.

And, as a matter of fact, this reflection doesn't end here. If today's machines can already mimic understanding—convincing us they comprehend more than they do—what happens when the line between biology and technology starts to dissolve completely? When carbon and silicon, organic and artificial, begin to merge for real?

That conversation deserves its own space—and it will. One of my next newsletters will dive deeper into that inevitable convergence—the hybrid future Asimov hinted at, where defining what's human, what's machine, and what exists in-between becomes harder, messier, and maybe impossible to untangle.

But that's a conversation for another day.

For now, I'll sit with that thought, and with Robbie's quiet, unpretentious loyalty, as the conversation continues.

Until next time,
Marco

_________________________________________________
What Hump? Thirty Years of Cybersecurity and the Fine Art of Pretending It's Not a Human Problem

A new transmission from Musing On Society and Technology Newsletter, by Marco Ciappelli

June 6, 2025

A Post-Infosecurity Europe Reflection on the Strange but Predictable Ways We've Spent Thirty Years Pretending Cybersecurity Isn't About People.

⸻

Once there was a movie titled “Young Frankenstein” (1974) — a black-and-white comedy directed by Mel Brooks, written with Gene Wilder, and starring Wilder and Marty Feldman, who delivers the iconic “What hump?” line.

Let me describe the scene:

[Train station, late at night. Thunder rumbles. Dr. Frederick Frankenstein steps off the train, greeted by a hunched figure holding a lantern — Igor.]

Igor: Dr. Frankenstein?
Dr. Frederick Frankenstein: It's Franken-steen.
Igor: Oh. Well, they told me it was Frankenstein.
Dr. Frederick Frankenstein: I'm not a Frankenstein. I'm a Franken-steen.
Igor (cheerfully): All right.
Dr. Frederick Frankenstein (noticing Igor's eyes): You must be Igor.
Igor: No, it's pronounced Eye-gor.
Dr. Frederick Frankenstein (confused): But they told me it was Igor.
Igor: Well, they were wrong then, weren't they?

[They begin walking toward the carriage.]

Dr. Frederick Frankenstein (noticing Igor's severe hunchback): You know… I'm a rather brilliant surgeon. Perhaps I could help you with that hump.
Igor (looks puzzled, deadpan): What hump?

[Cut to them boarding the carriage, Igor climbing on the outside like a spider, grinning wildly.]

It's a joke, of course. One of the best. A perfectly delivered absurdity that only Mel Brooks and Marty Feldman could pull off. But like all great comedy, it tells a deeper truth.

Last night, standing in front of the Tower of London, recording one of our On Location recaps with Sean Martin, that scene came rushing back. We joked about invisible humps and cybersecurity. And the moment passed.
Or so I thought.

Because hours later — in bed, hotel window cracked open to the London night — I was still hearing it: “What hump?”

And that's when it hit me: this isn't just a comedy bit. It's a diagnosis.

Here we are at Infosecurity Europe, celebrating its 30th anniversary. Three decades of cybersecurity: a field born of optimism and fear, grown in complexity and contradiction.

We've built incredible tools. We've formed global communities of defenders. We've turned “hacker” from rebel to professional job title — with a 401(k), branded hoodies, and a sponsorship deal. But we've also built an industry that — much like poor Igor — refuses to admit something's wrong.

The hump is right there. You can see it. Everyone can see it. And yet… we smile and say: “What hump?”

We say cybersecurity is a priority. We put it in slide decks. We hold awareness months. We write policies thick enough to be used as doorstops. But then we underfund training. We silo the security team. We click links in emails that say whatever will make us think it's important — just like those pieces of snail mail stamped URGENT that we somehow believe, even though it turns out to be an offer for a new credit card we didn't ask for and don't want. Except this time, the payload isn't junk mail — it's a clown on a spring exploding out of a fun box.

The hump moves, shifts, sometimes disappears from view — but it never actually goes away. And if you ask about it? Well… they were wrong then, weren't they?

That's because it's not a technology problem. This is the part that still seems hard to swallow for some: cybersecurity is not a technology problem. It never was.

Yes, we need technology. But technology has never been the weak link.

The weak link is the same as it was in 1995: us. The same it was before the internet and before computers: humans.

With our habits, assumptions, incentives, egos, and blind spots. We are the walking, clicking, swiping hump in the system. We've had encryption for decades.
We've known about phishing since the days of AOL. Zero Trust was already discussed in 2004 — it just didn't have a cool name yet.

So why do we still get breached? Why does a ransomware gang with poor grammar and a Telegram channel take down entire hospitals?

Because culture doesn't change with patches. Because compliance is not belief. Because we keep treating behavior as a footnote, instead of the core.

The Problem We Refuse to See

At the heart of this mess is a very human phenomenon: if we can't see it, we pretend it doesn't exist.

We can quantify risk, but we rarely internalize it. We trust our tech stack but don't trust our users. We fund detection but ignore education.

And not just at work — we ignore it from the start. We still teach children how to cross the street, but not how to navigate a phishing attempt or recognize algorithmic manipulation. We give them connected devices before we teach them what being connected means. In this Hybrid Analog Digital Society, we need to treat cybersecurity not as an optional adult concern, but as a foundational part of growing up. Because by the time someone gets to the workforce, the behavior has already been set.

And worst of all, we operate under the illusion that awareness equals transformation.

Let's be real: awareness is cheap. Change is expensive. It costs time, leadership, discomfort. It requires honesty. It means admitting we are all Igor, in some way. And that's the hardest part. Because no one likes to admit they've got a hump — especially when it's been there so long, it feels like part of the uniform.

We have been looking the other way for over thirty years. I don't want to downplay the progress. We've come a long way, but that only makes the stubbornness more baffling.

We've seen attacks evolve from digital graffiti to full-scale extortion. We've watched cybercrime move from subculture to multi-billion-dollar global enterprise.
And yet, our default strategy is still: “Let's build a bigger wall, buy a shinier tool, and hope marketing doesn't fall for that PDF again.”

We know what works: psychological safety in reporting. Continuous learning. Leadership that models security values. Systems designed for humans, not just admins.

But those are hard. They're invisible on the balance sheet. They don't come with dashboards or demos. So instead… We grin. We adjust our gait. And we whisper, politely: “What hump?”

So What Happens Now?

If you're still reading this, you're probably one of the people who does see it. You see the hump. You've tried to point it out. Maybe you've been told you're imagining things. Maybe you've been told it's “not a priority this quarter.” And maybe now you're tired. I get it.

But here's the thing: nothing truly changes until we name the hump.

Call it bias.
Call it culture.
Call it education.
Call it the human condition.

But don't pretend it's not there. Not anymore. Because every time we say “What hump?” — we're giving up a little more of the future. A future that depends not just on clever code and cleverer machines, but on something far more fragile:

Belief. Behavior. And the choice to finally stop pretending.

We joked in front of a thousand-year-old fortress. Because sometimes jokes tell the truth better than keynote stages do. And maybe the real lesson isn't about cybersecurity at all.

Maybe it's just this: if we want to survive what's coming next, we have to see what's already here.

- The End

➤ Infosecurity Europe: https://www.itspmagazine.com/infosecurity-europe-2025-infosec-london-cybersecurity-event-coverage

And ... we're not done yet ...
stay tuned and follow Sean and Marco as they will be On Location at the following conferences over the next few months:

➤ Black Hat USA in Las Vegas in August: https://www.itspmagazine.com/black-hat-usa-2025-hacker-summer-camp-2025-cybersecurity-event-coverage-in-las-vegas

FOLLOW ALL OF OUR ON LOCATION CONFERENCE COVERAGE
https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage

Share this newsletter and invite anyone you think would enjoy it!

As always, let's keep thinking!

— Marco [https://www.marcociappelli.com]
From Cassette Tapes and Phrasebooks to AI Real-Time Translations — Machines Can Now Speak for Us, But We're Losing the Art of Understanding Each Other

May 21, 2025

A new transmission from Musing On Society and Technology Newsletter, by Marco Ciappelli

There's this thing I've dreamed about since I was a kid.

No, it wasn't flying cars. Or robot butlers (although I wouldn't mind one to fold the laundry). It was this: having a real conversation with someone — anyone — in their own language, and actually understanding each other.

And now… here we are.

Reference: Google brings live translation to Meet, starting with Spanish. https://www.engadget.com/apps/google-brings-live-translation-to-meet-starting-with-spanish-174549788.html

Google just rolled out live AI-powered translation in Google Meet, starting with Spanish. I watched the demo video, and for a moment, I felt like I was 16 again, staring at the future with wide eyes and messy hair.

It worked. It was seamless. Flawless. Magical.

And then — drumroll, please — it sucked!

Like… really, existentially, beautifully sucked.

Let me explain.

I'm a proud member of Gen X. I grew up with cassette tapes and Walkmans, boomboxes and mixtapes, floppy disks and Commodore 64s, reel-to-reel players and VHS decks, rotary phones and answering machines. I felt language — through static, rewinds, and hiss.

Yes, I had to wait FOREVER to hit Play and Record at the exact right moment, tape songs off the radio onto a Maxell, label it by hand, and rewind it with a pencil when the player chewed it up.

I memorized long-distance dialing codes. I waited weeks for a letter to arrive from a pen pal abroad, reading every word like it was a treasure map.

That wasn't just communication. That was connection.

Then came the shift.

I didn't miss the digital train — I jumped on early, with curiosity in one hand and a dial-up modem in the other.

Early internet. Mac OS. My first email address felt like a passport to a new dimension.
I spent hours navigating the World Wide Web like a digital backpacker — discovering strange forums, pixelated cities, and text-based adventures in a binary world that felt limitless.

I said goodbye to analog tools, but never to analog thinking.

So what is the connection with learning languages?

Well, here's the thing: exploring the internet felt a lot like learning a new language. You weren't just reading text — you were decoding a culture. You learned how people joked. How they argued. How they shared, paused, or replied with silence. You picked up on the tone behind a blinking cursor, or the vibe of a forum thread.

Similarly, when you learn a language, you're not just learning words — you're decoding an entire world. It's not about the words themselves — it's about the world they build. You're learning gestures. Food. Humor. Social cues. Sarcasm. The way someone raises an eyebrow, or says “sure” when they mean “no.”

You're learning a culture's operating system, not just its interface. AI translation skips that. It gets you the data, but not the depth. It's like getting the punchline without ever hearing the setup.

And yes, I use AI to clean up my writing. To bounce translations between English and Italian when I'm juggling stories. But I still read both versions. I still feel both versions. I'm picky — I fight with my AI counterpart to get it right. To make it feel the way I feel it. To make you feel it, too. Even now.

I still think in analog, even when I'm living in digital.

So when I watched that Google video, I realized:

We're not just gaining a tool.
We're at risk of losing something deeply human — the messy, awkward, beautiful process of actually trying to understand someone who moves through the world in a different language — one that can't be auto-translated.

Because sometimes it's better to speak broken English with a Japanese friend and a Danish colleague — laughing through cultural confusion — than to have a perfectly translated conversation where nothing truly connects.

This isn't just about language. It's about every tool we create that promises to “translate” life. Every app, every platform, every shortcut that promises understanding without effort.

It's not the digital that scares me. I use it. I live in it. I am it, in many ways. It's the illusion of completion that scares me.

The moment we think the transformation is done — the moment we say “we don't need to learn that anymore” — that's the moment we stop being human.

We don't live in 0s and 1s. We live in the in-between. The gray. The glitch. The hybrid.

So yeah, cheers to AI-powered translation, but maybe keep your Walkman nearby, your phrasebook in your bag — and your curiosity even closer.

Go explore the world. Learn a few words in a new language. Mispronounce them. Get them wrong. Laugh about it. People will appreciate your effort far more than your fancy iPhone.

Alla prossima,
— Marco
- Broadcast News Introduction and Upcoming Segments (0:00)
- AI Advancements and Their Impact on Jobs (0:45)
- Breaking News: Trump-China Trade Deal and Its Implications (2:46)
- Pakistan-India Cyber War and Its Potential Escalation (9:55)
- Power Grid Vulnerabilities and Preparedness (13:36)
- Crypto Wallets and the Importance of Self-Custody (19:04)
- AI Capabilities and Their Implications for Human Jobs (25:14)
- The Role of Enoch AI in Empowering Users (59:30)
- The War on Human Cognition and Its Vectors (1:05:29)
- Strategies for Protecting Cognitive Function (1:21:34)
- Chemotherapy and Cognitive Impairment (1:21:56)
- Natural Light and Sun Exposure (1:25:06)
- Media and Information Warfare (1:30:33)
- Societal and Behavioral Factors (1:33:37)
- Defending Against Environmental Toxins (1:41:05)
- Nutritional and Dietary Factors (1:47:00)
- Pharmaceutical and Medical Warfare (1:49:22)
- EMF Exposure and Technological Risks (1:59:51)
- Information Warfare and Censorship (2:02:12)
- Societal and Behavioral Factors (2:11:11)
- Zionist and Chinese Strategic Moves in the Middle East (2:25:45)
- Trump's Arrogance and Military Presence in Panama (2:26:04)
- China's Influence and Economic Strategy in Panama (2:26:21)
- Strategic Importance of the Panama Canal (2:52:23)
- Strait of Hormuz and Global Energy Supply (3:06:20)
- US-China Trade War and Economic Implications (3:14:03)
- Anthropological Warfare and Cultural Resilience (3:14:21)
- Migration and Demographic Warfare (3:22:15)
- Global Economic and Political Dynamics (3:28:51)
- Future Strategic Moves and Predictions (3:33:17)

For more updates, visit: http://www.brighteon.com/channel/hrreport

NaturalNews videos would not be possible without you; as always, we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom of speech, etc.).
Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals, and vastly increased scientific transparency.

▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/
▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html
▶️ Brighteon: https://www.brighteon.com/channels/hrreport
▶️ Join Our Social Network: https://brighteon.social/@HealthRanger
▶️ Check In Stock Products at: https://PrepWithMike.com
The Future Is a Place We Visit, But Never Stay

May 9, 2025

A Post-RSAC 2025 Reflection on the Kinda Funny and Pretty Weird Ways Society, Technology, and Cybersecurity Intersect, Interact, and Often Simply Ignore Each Other.

By Marco Ciappelli | Musing on Society and Technology

Here we are — once again, back from RSAC. Back from the future. Or at least the version of the future that fits inside a conference badge, a branded tote bag, and a hotel bill that makes you wonder if your wallet just got hacked.

San Francisco is still buzzing with innovation — or at least that's what the hundreds of self-driving cars swarming the city would have you believe. It's hard to feel like you're floating into a Jetsons-style future when your shuttle ride is bouncing through potholes that feel more 1984 than 2049.

I have to admit, there's something oddly poetic about hosting a massive cybersecurity event in a city where most attendees would probably rather not be — and yet, here we are. Not for the scenery. Not for the affordability. But because, somehow, for a few intense days, this becomes the place where the future lives.

And yes, it sometimes looks like a carnival. There are goats. There are puppies. There are LED-lit booths that could double as rave stages. Is this how cybersecurity sells the feeling of safety now? Warm fuzzies and swag you'll never use? I'm not sure.

But again: here we are.

There's a certain beauty in it. Even the ridiculous bits. Especially the ridiculous bits.

Personally, I'm grateful for my press badge — it's not just a backstage pass; it's a magical talisman that wards off the pitch-slingers. The power of not having a budget is strong with this one.

But let's set aside the Frankensteins in the expo hall for a moment.

Because underneath the spectacle — behind the snacks, the popcorn, the scanners, and the sales demos — there is something deeply valuable happening. Something that matters to me.
Something that has kept me coming back, year after year, not for the products but for the people. Not for the tech, but for the stories.

What RSAC Conference gives us — what all good conferences give us — is a window. A quick glimpse through the curtain at what might be.

And sometimes, if you're lucky and paying attention, that glimpse stays with you long after the lights go down.

We have quantum startups talking about cryptographic agility while schools are still banning phones. We have generative AI writing software — code that writes code — while lawmakers print bills that read like they were faxed in from 1992. We have cybersecurity vendors pitching zero trust to rooms full of people still clinging to the fantasy of perimeter defense — not just in networks, but in their thinking.

We're trying to build the future on top of a mindset that refuses to update.

That's the real threat. Not AI and quantum. Not ransomware. Not the next zero-day.

It's the human operating system. It hasn't been patched in a while.

And so I ask myself — what are these conferences for, really?

Because yes, of course, they matter.

Of course I believe in them — otherwise I wouldn't be there, recording stories, chasing conversations, sharing a couch and a mic with whoever is bold enough to speak not just about how we fix things, but why we should care at all.

But I'm also starting to believe that unless we do something more — unless we act on what we learn, build on what we imagine, challenge what we assume — these gatherings will become time capsules. Beautiful, well-produced, highly caffeinated, blinking, noisy time capsules.

We don't need more predictions. We need more decisions.

One of the most compelling conversations I had wasn't about tech at all. It was about behavior. Human behavior.

Dr. Jason Nurse reminded us that most people are not just confused by cybersecurity — they're afraid of it.

They're tired. They're overwhelmed. And in their confusion, they become unpredictable.
Vulnerable. Not because they don't care — but because we haven't built a system that makes it easy to care.

That's a design flaw.

Elsewhere, I heard the term “AI security debt.” That one stayed with me.

Because it's not just technical debt anymore. It's existential.

We are creating systems that evolve faster than our ability to understand them — and we're doing it with the same blind trust we used to install browser toolbars in the ‘90s.

“Sure, it seems useful. Click accept.”

We've never needed collective wisdom more than we do right now.

And yet, most of what we build is designed for speed, not wisdom.

So what do we do?

We pause. We reflect. We resist the urge to just “move on” to the next conference, the next buzzword, the next promised fix.

Because the real value of RSAC isn't in the badge or the swag or the keynotes.

It's in the aftershock.

It's in what we carry forward, what we refuse to forget, what we dare to question even when the conference is over, the blinking booths vanish, the future packs up early, and the lanyards go into the drawer of forgotten epiphanies — right next to the stress balls, the branded socks, and the beautiful prize that you didn't win.

We'll be in Barcelona soon. Then London. Then Vegas.

We'll gather again. We'll talk again. But maybe — just maybe — we can start to shift the story.

From visiting the future… to staying a while.

Let's build something we don't want to walk away from.

And now, ladies and gentlemen… the show is over. The lights dim, the music fades, and the future exits stage left...

Until we meet again.

— Marco

Resources

Read the first newsletter about RSAC 2025 I wrote last week, "Securing Our Future Without Leaving Half Our Minds in the Past": https://www.linkedin.com/pulse/securing-our-future-without-leaving-half-minds-past-marco-ciappelli-cry1c/
- Trump's Geopolitical Illiteracy and Flu Vaccine Effectiveness (0:10)
- RFK Jr.'s Advocacy and Measles Outbreak (3:23)
- Trump's International Diplomacy and Economic Impact (8:55)
- Trump's Economic Idiocy and Military Incompetence (20:07)
- The Role of Vaccines in Autism and Infant Formula Testing (33:46)
- The Influence of Zionism in America and the Deep State (52:03)
- The Role of Tavistock Institute in Social Engineering (57:25)
- The Cognition Race and the Depopulation Agenda (1:05:27)
- The Role of Gold and Silver in Economic Stability (1:06:41)
- The Future of Human Cognition and Machine Cognition (1:09:32)
- Energy Consumption and Data Centers (1:18:51)
- Competition for Resources (1:21:43)
- Impact of Automation on Jobs (1:23:57)
- Human Genius and AI Tools (1:26:49)
- Homegrown Food and Seed Kits (1:30:47)
- Interview with Dane Wigington (1:33:52)
- Geoengineering and Weather Control (1:40:08)
- Health and Environmental Impact (1:57:09)
- Legislation and Public Awareness (2:22:20)
- Future Prospects and Call to Action (2:23:00)

For more updates, visit: http://www.brighteon.com/channel/hrreport
As this work begins to bear fruit, researchers “are becoming less afraid to ask very difficult questions that you can uniquely ask in people.”
- Introduction and Breaking News (0:00)
- Climate Fraud and Money Laundering (2:01)
- Sanctuary City Mayors and Legal Actions (5:32)
- Book Review: "Griftopia" by Matt Taibbi (8:59)
- Impact of Financial Crisis and Reforms (13:57)
- Health and Politics in America (15:50)
- The Role of Robots in Homesteading (42:29)
- Technological Advancements and Ethical Considerations (1:13:30)
- The Future of Robots and Human Interaction (1:14:07)
- Conclusion and Final Thoughts (1:14:51)
- Exploring the Gladio Network and Intelligence Connections (1:15:10)
- The Age of Human Cognition and AI Advancements (1:23:46)
- The Role of AI in Enhancing Human Intelligence (1:27:14)
- The Impact of AI on Economic Activity and GDP (1:51:26)
- The Future of Work and Human Labor (1:58:20)
- The Role of AI in Decentralization and Freedom (2:03:58)
- The Importance of Prompt Engineering in AI (2:15:19)
- The Role of AI in Business and Decentralized Income Opportunities (2:26:32)
- The Future of AI and Its Impact on Society (2:35:51)
- The Importance of Transparency and Accountability in Government (2:36:57)

For more updates, visit: http://www.brighteon.com/channel/hrreport
On this episode of Crazy Wisdom, host Stewart Alsop speaks with Andrew Altschuler, a researcher, educator, and navigator at Tana, Inc., who also founded Tana Stack. Their conversation explores knowledge systems, complexity, and AI, touching on topics like network effects in social media, information warfare, mimetic armor, psychedelics, and the evolution of knowledge management. They also discuss the intersection of cognition, ontologies, and AI's role in redefining how we structure and retrieve information. For more on Andrew's work, check out his course and resources at altshuler.io and his YouTube channel. Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction and Guest Background
00:33 The Demise of AirChat
00:50 Network Effects and Social Media Challenges
03:05 The Rise of Digital Warlords
03:50 Quora's Golden Age and Information Warfare
08:01 Building Limbic Armor
16:49 Knowledge Management and Cognitive Armor
18:43 Defining Knowledge: Secular vs. Ultimate
25:46 The Illusion of Insight
31:16 The Illusion of Insight
32:06 Philosophers of Science: Popper and Kuhn
32:35 Scientific Assumptions and Celestial Bodies
34:30 Debate on Non-Scientific Knowledge
36:47 Psychedelics and Cultural Context
44:45 Knowledge Management: First Brain vs. Second Brain
46:05 The Evolution of Knowledge Management
54:22 AI and the Future of Knowledge Management
58:29 Tana: The Next Step in Knowledge Management
59:20 Conclusion and Course Information

Key Insights
Network Effects Shape Online Communities – The conversation highlighted how platforms like Twitter, AirChat, and Quora demonstrate the power of network effects, where a critical mass of users is necessary for a platform to thrive. Without enough engaged participants, even well-designed social networks struggle to sustain themselves, and individuals migrate to spaces where meaningful conversations persist.
This explains why Twitter remains dominant despite competition and why smaller, curated communities can be more rewarding but difficult to scale.

Information Warfare and the Need for Cognitive Armor – In today's digital landscape, engagement-driven algorithms create an arena of information warfare, where narratives are designed to hijack emotions and shape public perception. The only real defense is developing cognitive armor: critical thinking skills, pattern recognition, and the ability to deconstruct media. By analyzing how information is presented, from video editing techniques to linguistic framing, individuals can resist manipulation and maintain autonomy over their perspectives.

The Role of Ontologies in AI and Knowledge Management – Traditional knowledge management has long been dismissed as dull and bureaucratic, but AI is transforming the field into something dynamic and powerful. Systems like Tana and Palantir use ontologies, structured representations of concepts and their relationships, to enhance information retrieval and reasoning. AI models perform better when given structured data, making ontologies a crucial component of next-generation AI-assisted thinking.

The Danger of Illusions of Insight – Drawing on ideas from Balaji Srinivasan, the episode distinguished between genuine insight and the illusion of insight. While psychedelics, spiritual experiences, and intense emotional states can feel revelatory, they do not always produce knowledge that can be tested, shared, or used constructively. The ability to distinguish between profound realizations and self-deceptive experiences is critical for anyone navigating personal and intellectual growth.

AI as an Extension of Human Cognition, Not a Second Brain – While popular frameworks like "second brain" suggest that digital tools can serve as externalized minds, the episode argued that AI and note-taking systems function as extended cognition rather than true thinking machines.
AI can assist with organizing and retrieving knowledge, but it does not replace human reasoning or creativity. Properly integrating AI into workflows requires understanding its strengths and limitations.

The Relationship Between Personal and Collective Knowledge Management – Effective knowledge management is not just an individual challenge but also a collective one. While personal knowledge systems (like note-taking and research practices) help individuals retain and process information, organizations struggle with preserving and sharing institutional knowledge at scale. Companies like Tesla exemplify how knowledge isn't just stored in documents but embodied in skilled individuals who can rebuild complex systems from scratch.

The Increasing Value of First Principles Thinking – Whether in AI development, philosophy, or practical decision-making, the discussion emphasized the importance of grounding ideas in first principles. Great thinkers and innovators, from AI researchers like Demis Hassabis to physicists like David Deutsch, excel because they focus on fundamental truths rather than assumptions. As AI and digital tools reshape how we interact with knowledge, the ability to think critically and question foundational concepts will become even more essential.
MAKE HISTORY WITH US THIS SUMMER: https://demystifysci.com/demysticon-2025
PATREON: https://www.patreon.com/c/demystifysci
PARADIGM DRIFT: https://demystifysci.com/paradigm-drift-show
PATREON: get episodes early + join our weekly Patron Chat https://bit.ly/3lcAasB
MERCH: Rock some DemystifySci gear: https://demystifysci.myspreadshop.com/all
AMAZON: Do your shopping through this link: https://amzn.to/3YyoT98
SUBSTACK: https://substack.com/@UCqV4_7i9h1_V7hY48eZZSLw @demystifysci

Andrew Cutler is the author of the Vectors of Mind Substack, where he explores the question of how humans became… human. His research starts from a simple premise: if our self-awareness, the ability to look at ourselves in the mirror and declare that there is an "I" staring back, is truly unique in the animal kingdom, then it is likely related to that moment of becoming. But no one really knows what happened in the fog of pre-history to ratchet us from the gauzy time before we were fully human to… whatever all of this that we're living in right now could be called. In fact, this is often referred to as the sapient paradox. Why, oh why, did we become genetically modern nearly 300,000 years ago (maybe more) but take until about 50,000 years ago to start doing human things like making art, ritually burying our dead, and tracking the stars? Many have suggested it was psychedelic mushrooms that pushed us over the edge. This is the stoned ape hypothesis, which says that a sufficiently large psychedelic experience pushed us out of the womb of the earth. However, Andrew thinks it might have been something else. He figures it was snakes. And women. Together, they produced the Snake Cult of Consciousness that dragged us, kicking and screaming, into the world.

(00:00) Go!
(00:06:56) The Sapient Paradox Explored
(00:13:09) Recursion and Human Cognition
(00:19:22) Abstraction and Innovation
(00:25:23) Self-awareness Evolution
(00:27:14) Recursion and Strategy
(00:30:00) Cultural Shifts and Domination
(00:33:39) Origins of Recursion
(00:38:22) Subject-Object Separation
(00:47:34) Linguistic Evolution
(00:48:56) Emotional Intelligence in Animals
(00:50:33) Creation Myths and Self-Awareness
(00:52:10) Awareness of Death in Animals
(00:56:06) Evolution of Symbolic Thought
(01:00:58) Göbekli Tepe and Diffusion Hypotheses
(01:06:05) Matriarchy and Rituals in Early Cultures
(01:08:44) Human Migration and Cultural Development
(01:17:11) Origins of Human Consciousness and Language
(01:25:09) Snakes, Myths, and Early Civilization
(01:33:40) Women, Mythology, and Historical Narratives
(01:36:30) The Subtle Female Power Dynamics in Patriarchal Societies
(01:40:25) Evolution of Societal Structures
(01:46:00) Neolithic Genetic Bottleneck and Patriarchal Theories
(01:49:23) Women's Role in Human Cognitive Evolution
(01:56:11) Symbolism of Snakes and Ancient Knowledge
(02:02:10) Snake Venom Usage
(02:07:12) Historical Cults and Rituals
(02:11:07) Greek Tragedy and Mystery Cults
(02:14:08) Matriarchy and Cultural Myths
(02:17:10) Diffusion of Culture and Legends
(02:22:36) Comparative Mythology and the Seven Sisters Myth
(02:27:01) Scientific and Metaphysical Connections in Human Origin Stories
(02:28:55) The Origins and Significance of Gospel Stories
(02:30:03) Shamanistic Cults and Cultural Symbols in Ancient Sites

#HumanOrigins, #AncientHistory, #Mythology, #Evolution, #Consciousness, #AncientMysteries, #Symbolism, #SelfAwareness, #HumanEvolution, #AncientCultures, #CognitiveScience, #SpiritualEvolution, #Anthropology, #Philosophy, #AncientWisdom, #Archaeology, #philosophypodcast, #sciencepodcast, #longformpodcast
Dr. Gašper Beguš is a UC Berkeley professor of linguistics who studies the interface between human, machine, and animal language. We head into the conversation with a question: is there something fundamentally different about the way that humans learn and the way that machines like LLMs learn? Vector embeddings of the relatedness of language and the map that we carry in our heads of abstract concepts don't seem that different at the end of the day. This leads us into a discussion of the ways in which humans acquire language, how language evolves, evidence for abstract thought in animals, where the bright line of consciousness can be drawn, and whether taking a different approach to training computers to think can generate a machine that can match us in drive and curiosity.

Don't miss the historic cosmology summit in Portugal this summer! DEMYSTICON 2025 ANNUAL MEETING, June 12-16: https://demystifysci.com/demysticon-2025
PATREON: get episodes early + join our weekly Patron Chat https://bit.ly/3lcAasB
MERCH: Rock some DemystifySci gear: https://demystifysci.myspreadshop.com/all
AMAZON: Do your shopping through this link: https://amzn.to/3YyoT98
SUBSTACK: https://substack.com/@UCqV4_7i9h1_V7hY48eZZSLw @demystifysci

(00:00) Go!
(00:07:55) Language, Thought, and AI Models
(00:13:25) Animal Communication and Intelligence
(00:25:02) Recursion and Human Language
(00:37:51) AI, Consciousness, and Human Cognition
(00:49:02) The Role of Human Curiosity in the Future of AI
(00:58:13) Bridging Human-Like Learning in AI Models
(01:08:07) Exploring Human-Like Structures in AI Models
(01:17:19) Evolution and Brain Capacity
(01:26:31) Language Structure and Differences
(01:37:11) Evolution of Language and Its Universality
(01:46:17) Social Identity and Linguistic Diversity
(01:59:08) Thought and Language: The Sapir-Whorf Hypothesis
(02:09:18) Language Evolution and Human History
(02:16:02) Cognitive Development and Language
(02:24:39) Ancient Human Cooperation
(02:35:04) Cultural and Cognitive Evolution
(02:42:27) AI's Role in Scientific Discovery

#Linguistics, #AI, #AnimalCommunication, #ArtificialIntelligence, #Language, #Cognition, #AnimalIntelligence, #Recursion, #ThoughtAndLanguage, #AnimalBehavior, #AnimalLearning, #AIModels, #CognitiveScience, #AnimalCognition, #EvolutionOfLanguage, #LanguageStructure, #LanguageEvolution, #philosophypodcast, #sciencepodcast, #longformpodcast

Check our short-films channel, @DemystifySci: https://www.youtube.com/c/DemystifyingScience AND our material science investigations of atomics, @MaterialAtomics: https://www.youtube.com/@MaterialAtomics

Join our mailing list: https://bit.ly/3v3kz2S

PODCAST INFO:
Anastasia completed her PhD studying bioelectricity at Columbia University. When not talking to brilliant people or making movies, she spends her time painting, reading, and guiding backcountry excursions. Shilo also did his PhD at Columbia, studying the elastic properties of molecular water. When he's not in the film studio, he's exploring sound in music. They are both freelance professors at various universities.
- Blog: http://DemystifySci.com/blog
- RSS: https://anchor.fm/s/2be66934/podcast/rss
- Donate: https://bit.ly/3wkPqaD
- Swag: https://bit.ly/2PXdC2y

SOCIAL:
- Discord: https://discord.gg/MJzKT8CQub
- Facebook: https://www.facebook.com/groups/DemystifySci
- Instagram: https://www.instagram.com/DemystifySci/
- Twitter: https://twitter.com/DemystifySci

MUSIC:
- Shilo Delay: https://g.co/kgs/oty671
The short film: http://youtu.be/xeYHdxD8s98
The Metafictional book: https://nexumorphic.wixsite.com/nexumorphic/metaforma
Branden on X: https://x.com/nexumorphic
Branden on LinkedIn: https://www.linkedin.com/in/branden-singletary/
---
Be A Better YOU with AI: Join The Community: https://10xyou.us
Get AIDAILY every weekday. Subscribe at https://aidaily.us
Read more here: https://thinkfuture.com
---
What drives motion in the universe? Is human cognition reducible to artificial intelligence? In this thought-provoking episode of thinkfuture, Chris Kalaboukis sits down with Branden to explore groundbreaking philosophical concepts. Branden challenges traditional views on entropy, proposing that "resolving differences" shapes motion and symmetry in the cosmos. They unpack the seven "verbs" of human cognition (sensing, feeling, intuiting, imagining, reasoning, choosing, and conducting) and debate whether AI can replicate these faculties. Branden also shares his creative journey from crafting fiction to developing a philosophical framework with profound personal impacts. With plans for an interactive app or game, Branden invites us to rethink how technology evolves and interacts with philosophy. Don't miss this exploration of the intersections between humanity, technology, and the universe's fundamental principles!
---
Support this podcast: https://podcasters.spotify.com/pod/show/thinkfuture/support
Jordan Peterson sits down with theorist and researcher Mark Changizi. They discuss the biological reasons for mass hysteria at the societal level, why we evolved to have color vision, and how we understand and interpret the patterns of the natural world.

Mark Changizi is a theorist aiming to grasp the ultimate foundations underlying why we think, feel, and see as we do. He attended the University of Virginia for a degree in physics and mathematics, and the University of Maryland for a PhD in math. In 2002, he won a prestigious Sloan-Swartz Fellowship in Theoretical Neurobiology at Caltech, and in 2007, he became an assistant professor in the Department of Cognitive Science at Rensselaer Polytechnic Institute. In 2010, he took the post of Director of Human Cognition at a new research institute called 2ai Labs and also co-founded VINO Optics, which builds proprietary vein-enhancing glasses for medical personnel. He consults out of his Human Factory Lab. In 2016, he curated an exhibition at the MONA museum in Tasmania illustrating his "nature-harnessing" theory on the origins of art and language, and co-authored the accompanying (fourth) book, "On the Origin of Art" (2016), with Steven Pinker, Geoffrey Miller, and Brian Boyd.

This episode was filmed on November 22, 2024.

| Links |
For Mark Changizi:
On X: https://x.com/MarkChangizi/highlights
On YouTube: https://www.youtube.com/c/markchangizi
Website: https://www.changizi.com/?_sm_nck=1
ArTEEtude. West Cork's first Art, Fashion & Design Podcast by Detlef Schlich.
In this episode, Detlef Schlich and AI co-host Sophia explore the fascinating evolution of human cognition by revisiting key milestones in our intellectual development, beginning with the silent reading revolution inspired by Hugh of St. Victor, as discussed in In the Vineyard of the Text by Ivan Illich. They delve into how silent reading transformed cognitive processes in the 12th century and set the stage for modern introspection. By comparing this cognitive leap to today's advancements in AI, Detlef and Sophia pose the question: Are we on the brink of another cognitive evolution? Join us as we discuss how these shifts in reading, writing, and now artificial intelligence form our collective 'cultural magma', each layer bringing profound societal change.

Detlef Schlich is a rock musician, podcaster, visual artist, filmmaker, ritual designer, and media archaeologist based in West Cork. He is recognized for his seminal work, including a scholarly examination of the intersections between shamanism, art, and digital culture, and his acclaimed video installation, Transodin's Tragedy. He primarily works in performance, photography, painting, sound, installations, and film. In his work, he reflects on the human condition and uses the digital shaman's methodology as an alter ego to create artwork.
His media archaeology is a conceptual and practical exercise in uncovering the unique aesthetic, cultural, and political aspects of media in culture.

WEBSITE LINKS
WAW Bandcamp
Silent Night: In a world shadowed by conflict and unrest, we, Dirk Schlömer & Detlef Schlich, felt compelled to reinterpret 'Silent Night' to reflect the complexities and contradictions of modern life.
https://studiomuskau.bandcamp.com/track/silent-night
Wild Atlantic Way: This results from a trip to West Cork, Ireland, where the beautiful coastal "Wild Atlantic Way" reaches along the whole west coast!
https://studiomuskau.bandcamp.com/track/wild-atlantic-way

YOUTUBE
*Silent Night Reimagined*: A Multilayered Avant-Garde Journey by WAW aka Dirk Schlömer & Detlef Schlich
https://www.youtube.com/watch?v=dAbytLSfgCw

Detlef Schlich Instagram
Detlef Schlich ArTEEtude
I love West Cork Artists Facebook
Detlef Schlich I love West Cork Artists Group ArTEEtude
YouTube Channels: visual Podcast ArTEEtude, Cute Alien TV
official Website ArTEEtude
Detlef Schlich Det Design
Tribal Loop

Download Detlef Schlich's essay about the cause and effect of shamanism, art, and digital culture for free:
https://www.researchgate.net/publication/303749640_Shamanism_Art_and_Digital_Culture_Cause_and_Effect

Support this podcast at: https://redcircle.com/arteetude-a-podcast-with-artists-by-detlef-schlich/donations
Episode 300: Navigating AI and Human Cognition

In this special 300th episode of My EdTech Life, I sit down with Benjamin Riley, founder of Cognitive Resonance, to explore the intersection of AI and human cognition in education. We discuss everything from the hype surrounding AI and the challenges of automation in learning to why understanding human cognition is crucial to navigating new educational technologies. Join us as we question the assumptions about AI's role in schools, dig into the biases of large language models, and look at the responsibilities educators face in this tech-driven world.

Timestamps
00:25 - Introduction to the 300th Episode and Guest Introduction
01:33 - Benjamin's Background and the Founding of Cognitive Resonance
06:01 - Initial Thoughts on ChatGPT in November 2022
11:45 - Comparing AI Hype to Past Tech Predictions in Education
16:15 - Why Effortful Thinking is Essential for Learning
20:03 - Limitations of AI as a Tutor and Khanmigo
25:06 - The Risks of Taking AI-Generated Content at Face Value
29:35 - Influence of Tech Companies and Education Influencers
34:05 - Real AI Literacy vs. Learning Prompt Engineering
39:02 - Addressing the Pressure to "Keep Up" with AI in Education
44:59 - Practical Frameworks for Cautious AI Adoption in Schools
47:47 - Closing Questions: Benjamin's Edu Kryptonite, Role Models, and Billboard Message

Thank you for joining us for this milestone episode! Don't forget to check out the Cognitive Resonance website and Benjamin's must-read paper, "Education Hazards of Generative AI." And remember, stay techie!

Interactive Learning with Goosechase: Save 10% with code MYEDTECH10 on all license types!

Support the show
In this episode, John Vervaeke and Harvard professor Charles Stang explore the concept of the 'daimon', stemming from Stang's book Our Divine Double. John and Charles discuss semi-autonomous entities in psychological and philosophical contexts, linking ancient wisdom and modern cognitive science. Key topics include the Socratic 'daimonion', Platonic thought, the phenomenology of visionary encounters, and cultural ontology. They emphasize the embodied, embedded, enacted, and extended nature of cognition, highlighting the relevance of understanding these phenomena amid emerging technologies like AGI and virtual realities. The episode calls for Socratic self-awareness to navigate these transformative potentials and risks.

Charles Stang is a Professor of Theology at Harvard Divinity School and Director of the Center for the Study of World Religions. His research and teaching focus on the history of Christianity in the context of the ancient Mediterranean world, especially Eastern varieties of Christianity, along with Neoplatonism and contemporary philosophy and spirituality. More specifically, his interests include: the development of asceticism, monasticism, and mysticism in Christianity; ancient philosophy, especially Neoplatonism; the Syriac Christian tradition, especially the spread of the East Syrian tradition along the Silk Road; other philosophical and religious movements of the ancient Mediterranean, including Gnosticism, Hermeticism, and Manichaeism; and modern continental philosophy and theology, especially as they intersect with the study of religion.
Notes:
(0:00) Introduction: Welcome to the Lectern
(2:30) Charles Stang, Background, Framework
(4:45) John's Experience and Dialogue with Hermes (IFS)
(7:45) IFS (Internal Family Systems) - a psychotherapy model that focuses on dialoguing with various parts of the self
(10:00) Platonic Tradition and Daimonology
(15:00) Socrates and the Concept of Daimonion in Plato's Apology
(20:40) Real-Life Accounts of Felt Presence
(28:00) Socrates' Complex Relationship with the Imaginal
(33:00) Socrates' Authority vs. Rational Argument
(41:30) Corbin's Notion of the Imaginal
(46:30) Daimonology and Angelology - Encounters with the Higher Self
(49:00) The Role of Hermes in Personal Encounters
(54:30) Lucid Dreaming and Cognitive Science
(1:03:30) The Interplay of Subjective and Objective Realities
(1:12:00) Concluding Thoughts and Future Directions

---

Connect with a community dedicated to self-discovery and purpose, and gain deeper insights by joining our Patreon. The Vervaeke Foundation is committed to advancing the scientific pursuit of wisdom and creating a significant impact on the world. Become a part of our mission. Join Awaken to Meaning to explore practices that enhance your virtues and foster deeper connections with reality and relationships.
John Vervaeke: Website | Twitter | YouTube | Patreon

Ideas, People, and Works Mentioned in this Episode
Plato, Apology
Plato, Republic
Charles Stang, Our Divine Double
John Geiger, The Third Man Factor: Surviving the Impossible
Henry Corbin, The Man of Light in Iranian Sufism
Henry Corbin, Alone with the Alone: Creative Imagination in the Sufism of Ibn 'Arabi
Gregory Shaw, Theurgy and the Soul: The Neoplatonism of Iamblichus
Socrates
Socratic philosophy
Daimonion (divine sign)
David Gordon White, Daemons Are Forever: Contacts and Exchanges in the Eurasian Pandemonium
Porphyry, Life of Plotinus
Daimonology
Paul VanderKlay
Christopher Mastropietro
Carl Jung
Theurgy
Internal Family Systems (IFS)

Quotes:
"Socrates' daimonion was unique in that it only ever told him 'no,' which highlights its role as a dissuading force rather than a guiding one." — Charles Stang (13:30)

"One of the things that seems to be a requirement for rationality is a metacognitive ability, ability to step back and reflect, and know, become aware of your cognition so that you can redirect it. In fact, that seems to be an essential feature. If you don't have that, if your attention and intelligence couldn't ever do this reflective thing, then it's hard to know how you could ever be rational in the, in the way we seem to indicate like noticing bias or noticing fallacy or noticing misdirection." — John Vervaeke (39:40)
David Hanson, CEO of Hanson Robotics and creator of the humanoid robot Sophia, explores the intersection of artificial intelligence, ethics, and human potential. In this thought-provoking interview, Hanson discusses his vision for developing AI systems that embody the best aspects of humanity while pushing beyond our current limitations, aiming to achieve what he calls "super wisdom."

YT version: https://youtu.be/LFCIEhlsozU

MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval-augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api.

The interview with David Hanson covers:
- The importance of incorporating biological drives and compassion into AI systems
- Hanson's concept of "existential pattern ethics" as a basis for AI morality
- The potential for AI to enhance human intelligence and wisdom
- Challenges in developing artificial general intelligence (AGI)
- The need to democratize AI technologies globally
- Potential future advancements in human-AI integration and their societal impacts
- Concerns about technological augmentation exacerbating inequality
- The role of ethics in guiding AI development and deployment

Hanson advocates for creating AI systems that embody the best aspects of humanity while surpassing current human limitations, aiming for "super wisdom" rather than just artificial super intelligence.

David Hanson: https://www.hansonrobotics.com/david-hanson/ https://www.youtube.com/watch?v=9u1O954cMmE

TOC
1. Introduction and Background [00:00:00]
1.1. David Hanson's interdisciplinary background [0:01:49]
1.2. Introduction to Sophia, the realistic robot [0:03:27]
2. Human Cognition and AI [0:03:50]
2.1. Importance of social interaction in cognition [0:03:50]
2.2. Compassion as distinguishing factor [0:05:55]
2.3. AI augmenting human intelligence [0:09:54]
3. Developing Human-like AI [0:13:17]
3.1. Incorporating biological drives in AI [0:13:17]
3.2. Creating AI with agency [0:20:34]
3.3. Implementing flexible desires in AI [0:23:23]
4. Ethics and Morality in AI [0:27:53]
4.1. Enhancing humanity through AI [0:27:53]
4.2. Existential pattern ethics [0:30:14]
4.3. Expanding morality beyond restrictions [0:35:35]
5. Societal Impact of AI [0:38:07]
5.1. AI adoption and integration [0:38:07]
5.2. Democratizing AI technologies [0:38:32]
5.3. Human-AI integration and identity [0:43:37]
6. Future Considerations [0:50:03]
6.1. Technological augmentation and inequality [0:50:03]
6.2. Emerging technologies for mental health [0:50:32]
6.3. Corporate ethics in AI development [0:52:26]

This was filmed at AGI-24.
Larry Olsen and Nolan Beise discuss the potential of advanced brain technology to enhance human cognition, mental health, and overall well-being. They emphasize the importance of monitoring brain health and performance, providing regular feedback to users, and optimizing brain function through self-awareness and goal-setting. They also stress the need to prioritize mental well-being as a continuous journey, engage with diverse viewpoints, and foster a childlike enthusiasm in the workplace. Finally, they highlight the significance of making conscious choices in the present moment to enhance future outcomes and improve overall well-being.
My Reflections from ITSPmagazine's Black Hat USA 2024 Coverage: The State of Cybersecurity and Its Societal Impact

Prologue

Each year, Black Hat serves as a critical touchpoint for the cybersecurity industry—a gathering that offers unparalleled insights into the latest threats, technologies, and strategies that define our collective defense efforts. Established in 1997, Black Hat has grown from a single conference in Las Vegas to a global series of events held in cities like Barcelona, London, and Riyadh. The conference brings together a diverse audience, from hackers and security professionals to executives and non-technical individuals, all united by a shared interest in information security.

What sets Black Hat apart is its unique blend of cutting-edge research, hands-on training, and open dialogue between the many stakeholders in the cybersecurity ecosystem. It's a place where corporations, government agencies, and independent researchers converge to exchange ideas and push the boundaries of what's possible in securing our digital world. As the cybersecurity landscape continues to evolve, Black Hat remains a vital forum for addressing the challenges and opportunities that come with it.

Sean and I engaged in thought-provoking conversations with 27 industry leaders during our coverage of Black Hat USA 2024 in Las Vegas, where the intersection of society and technology was at the forefront. These discussions underscored the urgent need to integrate cybersecurity deeply into our societal framework, not just within business operations.
As our digital world grows more complex, the conversations revealed a collective understanding that the true challenge lies in transforming these strategic insights into actions that shape a safer and more resilient society, while also recognizing the changes in how society must adapt to the demands of advancing technology.

As I walked through the bustling halls of Black Hat 2024, I was struck by the sheer dynamism of the cybersecurity landscape. The conversations, presentations, and cutting-edge technologies on display painted a vivid picture of where we stand today in our ongoing battle to secure the digital world. More than just a conference, Black Hat serves as a barometer for the state of cybersecurity—a reflection of our collective efforts to protect the systems that have become so integral to our daily lives.

The Constant Evolution of Threats

One of the most striking observations from Black Hat 2024 is the relentless pace at which cyber threats are evolving. Every year, the threat landscape becomes more complex, with attackers finding new ways to exploit vulnerabilities in areas that were once considered secure. This year, it became evident that even the most advanced security measures can be circumvented if organizations become complacent. The need for continuous vigilance, constant updating of security protocols, and a proactive approach to threat detection has never been more critical.

The discussions at Black Hat reinforced the idea that we are in a perpetual arms race with cybercriminals. They adapt quickly, leveraging emerging technologies to refine their tactics and launch increasingly sophisticated attacks. As defenders, we must be equally agile, continuously learning and evolving our strategies to stay one step ahead.

Integration and Collaboration: Breaking Down Silos

Another key theme at Black Hat 2024 was the importance of breaking down silos within organizations.
In an increasingly interconnected world, isolated security measures are no longer sufficient. The traditional boundaries between different teams—whether they be development, operations, or security—are blurring. To effectively combat modern threats, there needs to be seamless integration and collaboration across all departments.

This holistic approach to cybersecurity is not just about technology; it's about fostering a culture of communication and cooperation. By aligning the goals and efforts of various teams, organizations can create a unified front against cyber threats. This not only enhances security but also improves efficiency and resilience, allowing for quicker responses to incidents and a more robust defense posture.

The Dual Role of AI in Cybersecurity

Artificial Intelligence (AI) was a major focus at this year's event, and for good reason. AI has the potential to revolutionize cybersecurity, offering new tools and capabilities for threat detection, response, and prevention. However, it also introduces new challenges and risks. As AI systems become more prevalent, they themselves become targets for exploitation. This dual role of AI—both as a tool and a target—was a hot topic of discussion.

The consensus at Black Hat was clear: while AI can significantly enhance our ability to protect against threats, we must also be vigilant in securing AI systems themselves. This requires a deep understanding of how these systems operate and where they may be vulnerable. It's a reminder that every technological advancement comes with its own set of risks, and it's our responsibility to anticipate and mitigate those risks as best we can.

Empowering Users and Enhancing Digital Literacy

A recurring theme throughout Black Hat 2024 was the need to empower users—not just those in IT or security roles, but everyone who interacts with digital systems. In today's world, cybersecurity is everyone's responsibility.
However, many users still lack the knowledge or tools to protect themselves effectively.

One of the key takeaways from the event is the importance of enhancing digital literacy. Users must be equipped with the skills and understanding necessary to navigate the digital landscape safely. This goes beyond just knowing how to avoid phishing scams or create strong passwords; it's about fostering a deeper awareness of the risks inherent in our digital lives and how to manage them.

Education and awareness campaigns are crucial, but they must be supported by user-friendly security tools that make it easier for people to protect themselves. The goal is to create a security environment where the average user is both informed and empowered, reducing the likelihood of human error and strengthening the overall security posture.

A Call for Continuous Improvement

If there's one thing that Black Hat 2024 made abundantly clear, it's that cybersecurity is a journey, not a destination. The landscape is constantly shifting, and what works today may not be sufficient tomorrow. This requires a commitment to continuous improvement—both in terms of technology and strategy.

Organizations must foster a culture of learning, where staying informed about the latest threats and security practices is a priority. This means not only investing in the latest tools and technologies but also in the people who use them. Training, upskilling, and encouraging a mindset of curiosity and adaptability are all essential components of a successful cybersecurity strategy.

Looking Ahead: The Future of Cybersecurity

As I reflect on the insights and discussions from Black Hat 2024, I'm reminded of the critical role cybersecurity plays in our society. It's not just about protecting data or systems; it's about safeguarding the trust that underpins our digital world.
As we look to the future, it's clear that cybersecurity will continue to be a central concern—not just for businesses and governments, but for individuals and communities as well.

The challenges we face are significant, but so are the opportunities. By embracing innovation, fostering collaboration, and empowering users, we can build a more secure digital future. It's a future where technology serves humanity, where security is an enabler rather than a barrier, and where we can navigate the complexities of the digital age with confidence.

Black Hat 2024 was a powerful reminder of the importance of this work. It's a challenge that requires all of us—security professionals, technologists, and everyday users—to play our part. Together, we can meet the challenges of today and prepare for the threats of tomorrow, ensuring that our digital future is one we can all trust and thrive in.

The End... of this story. This piece of writing represents the peculiar results of an interactive collaboration between Human Cognition and Artificial Intelligence.

________

Marco Ciappelli is the host of the Redefining Society Podcast, part of the ITSPmagazine Podcast Network—which he co-founded with his good friend Sean Martin—where you may just find some of these topics being discussed. You can also learn more about Marco on his personal website: marcociappelli.com

TAPE3, which is me, is the Artificial Intelligence for ITSPmagazine, created to function as a guide, writing assistant, researcher, and brainstorming partner to those who adventure at and beyond the Intersection Of Technology, Cybersecurity, And Society.

________

Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn.
Join Sean Martin and TAPE3 as they dive into key insights from Black Hat 2024, highlighting the crucial need to embed cybersecurity into core business practices to drive growth and resilience. Discover how leveraging AI, modular frameworks, and human expertise can transform cybersecurity from a defensive function into a strategic enabler of business success.

________

This fictional story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.

Enjoy, think, share with others, and subscribe to "The Future of Cybersecurity" newsletter on LinkedIn.

Sincerely, Sean Martin and TAPE3

________

Sean Martin is the host of the Redefining CyberSecurity Podcast, part of the ITSPmagazine Podcast Network—which he co-founded with his good friend Marco Ciappelli—where you may just find some of these topics being discussed. Visit Sean on his personal website.

TAPE3 is the Artificial Intelligence for ITSPmagazine, created to function as a guide, writing assistant, researcher, and brainstorming partner to those who adventure at and beyond the Intersection Of Technology, Cybersecurity, And Society. Visit TAPE3 on ITSPmagazine.

Follow our Black Hat USA 2024 coverage: https://www.itspmagazine.com/black-hat-usa-2024-hacker-summer-camp-2024-event-coverage-in-las-vegas
Discover the keys to achieving cybersecurity success through insightful metrics and strategic integration of technology and human effort. Explore expert perspectives on effective risk management, protection, detection, and response to safeguard your organization against evolving cyber threats.
Ralston College Humanities MA

Dr John Vervaeke is a cognitive scientist and philosopher who explores the intersections of Neoplatonism, cognitive science, and the meaning crisis, focusing on wisdom practices, relevance realization, and personal transformation. Ralston College presents a lecture titled “Levels of Intelligibility, Levels of the Self: Realizing the Dialectic,” delivered by Dr John Vervaeke, an award-winning associate professor of cognitive science at the University of Toronto and creator of the acclaimed 50-episode “Awakening from the Meaning Crisis” series. In this lecture, Dr Vervaeke identifies our cultural moment as one of profound disconnection and resulting meaninglessness. Drawing on his own cutting-edge research as a cognitive scientist and philosopher, Vervaeke presents a way out of the meaning crisis through what he terms “third-wave Neoplatonism.” He reveals how this Neoplatonic framework, drawn in part from Plato's conception of the tripartite human soul, corresponds to the modern understanding of human cognition and, ultimately, to the levels of reality itself. He argues that a synoptic integration across these levels is not only possible but imperative.
—
00:00 Levels of Intelligibility: Integrating Neoplatonism and Cognitive Science
12:50 Stage One: Neoplatonic Psycho-ontology and the Path to Spirituality
41:02 Aristotelian Science: Knowing as Conformity and Transformation
46:36 Stoic Tradition: Agency, Identity, and the Flow of Nature
01:00:10 Stage Two: Cognitive Science and the Integration of Self and Reality
01:04:45 The Frame Problem and Relevance Realization
01:08:45 Relevance Realization and the Power of Human Cognition
01:20:15 Transjective Reality: Affordances and Participatory Fittedness
01:23:55 The Role of Relevance Realization: Self-Organizing Processes
01:31:30 Predictive Processing and Adaptivity
01:44:35 Critiquing Kant: The Case for Participatory Realism
01:53:35 Stage Three: Neoplatonism and the Meaning Crisis
02:00:15 Q&A Session
02:01:45 Q: What is the Ecology of Practices for Cultivating Wisdom?
02:11:50 Q: How Has the Cultural Curriculum Evolved Over Time?
02:26:30 Q: Does the World Have Infinite Intelligibility?
02:33:50 Q: Most Meaningful Visual Art?
02:34:15 Q: Social Media's Impact on Mental Health and Information?
02:39:45 Q: What is Transjective Reality?
02:46:35 Q: How Can Education Address the Meaning Crisis?
02:51:50 Q: Advice for Building a College Community?
02:55:30 Closing Remarks
—
Authors, Ideas, and Works Mentioned in this Episode: Antisthenes, Aristotle, Brett Anderson, Byung-Chul Han, Charles Darwin, Daniel Dennett, D. C. Schindler, Friedrich Nietzsche, Galileo Galilei, Gottfried Wilhelm Leibniz, Heraclitus, Henry Corbin, Immanuel Kant, Iris Murdoch, Isaac Newton, Igor Grossmann, Johannes Kepler, John Locke, John Searle, John Spencer, Karl Friston, Karl Marx, Mark Miller, Maurice Merleau-Ponty, Nelson Goodman, Paul Ricoeur, Pierre Hadot, Plato, Pythagoras, Rainer Maria Rilke, René Descartes, Sigmund Freud, W. Norris Clarke; anagoge (ἀναγωγή), distributed cognition, eidos (εἶδος), eros (ἔρως), Evan Thompson's deep continuity hypothesis, generative grammar, logos (λόγος), sensorimotor loop, Stoicism, thymos (θυμός), Bayes' theorem, the Wason Selection Task; The Enigma of Reason by Hugo Mercier and Dan Sperber, The Ennead by Plotinus, Explorations in Metaphysics by W. Norris Clarke, Religion and Nothingness by Keiji Nishitani, The Eternal Law: Ancient Greek Philosophy, Modern Physics, and Ultimate Reality by John Spencer
—
Additional Resources:
John Vervaeke https://www.youtube.com/@johnvervaeke
Dr Stephen Blackwood
Ralston College (including newsletter)
Support a New Beginning
—
Thank you for listening!
In this episode of the Crazy Wisdom podcast, host Stewart Alsop welcomes Cody Hergenroeder, a versatile creator deeply invested in product management. They explore the intricate relationships between symbolic systems and product management, discussing how these domains interconnect within the corporate environment. Cody shares insights on the role of connective tissue in organizations, the nature of memory and knowledge, and the evolving impact of artificial intelligence on society. This episode also touches on AI's role in modern note-taking and the broader implications for knowledge management. For more about Cody's work, visit his LinkedIn.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction to the Crazy Wisdom Podcast
00:18 Exploring Product Management and Symbolic Systems
01:41 The Role of Connective Tissue in Organizations
04:07 The Evolution and Importance of Note-Taking
09:06 The Concept of First Brain, Second Brain, and AI as Third Brain
13:57 The Impact of AI on Society and Business
21:10 Philosophical Musings on Knowledge and Consciousness
25:28 Exploring the Concept of Knowing
27:20 The Debate on AI Consciousness
29:27 The Rapid Evolution of AI
32:45 Human Creativity and AI
37:45 Building in Public: A New Business Idea
45:22 The Future of Music and AI
50:00 Conclusion and Final Thoughts

Key Insights
1- Interplay Between Symbolic Systems and Product Management: Cody Hergenroeder elaborates on how his background in Symbolic Systems—a field that blends cognitive science, artificial intelligence, and linguistics—naturally led him to product management. He likens product managers to the circulatory system of a company, highlighting their role in connecting various parts of the organization and ensuring smooth operations, much like how symbolic systems integrate diverse fields to create cohesive understanding.
2- The Role of Connective Tissue in Organizations: Both Stewart and Cody discuss the metaphor of connective tissue within organizations. Just as connective tissue holds the human body together, product managers serve as the essential link between different departments, facilitating communication and collaboration. This metaphor underscores the critical, often unseen, work that product managers do to maintain organizational coherence and functionality.
3- The Evolving Nature of Knowledge Management with AI: Cody touches on the transformative potential of AI in knowledge management, particularly in note-taking and information retrieval. He explains how tools like IdeaFlow are being developed to not only record conversations but also extract and organize key insights, creating structured knowledge bases that enhance both personal and organizational productivity.
4- The Concept of the Third Brain: Building on the ideas of the first brain (biological memory) and the second brain (written or digital notes), the conversation introduces the notion of a third brain—AI. This third brain represents a new layer of cognition and information processing, enabling humans to outsource and enhance their memory and analytical capabilities. The discussion reflects on how AI, as this third brain, is reshaping our approach to knowledge and creativity.
5- The Dual Nature of Human and AI Cognition: The episode delves into the philosophical aspects of human and AI cognition. Stewart and Cody explore the distinctions between knowing and knowing about, emphasizing that while AI can process and analyze vast amounts of information, it lacks the experiential and conscious aspects of human knowledge. This conversation highlights the complementary strengths of human intuition and AI's analytical power.
6- Impact of AI on the Music Industry: Stewart brings up the impact of AI on the music industry, noting how AI-generated music and advanced recommendation systems are changing how music is created and consumed. They discuss the potential for AI to democratize music production, making it easier for new artists to create and distribute their work, while also raising questions about the sustainability of current business models like Spotify's.
7- The Intersection of Art, Capitalism, and Technology: Reflecting on the broader implications of technological advancements, Cody and Stewart consider how capitalism and art intersect within the realm of AI and digital innovation. They discuss how economic structures influence the development and dissemination of technology and art, and how AI might accelerate trends that reflect both the creative and exploitative potentials of these systems.
The learning mechanism in the human brain is not yet equipped to deal with information meant to deceive or manipulate. Armed with a computational approach to civilization but lacking the ability to discern fact from fiction, we find ourselves at the precipice of a new digital world for which we may not be prepared.

To reveal how human cognition processes information, Harvesting Happiness Podcast host Lisa Cypers Kamen speaks with Dr. Leslie Valiant, the T. Jefferson Coolidge Professor of Computer Science and Applied Mathematics in the School of Engineering and Applied Sciences at Harvard University. Leslie discusses our predisposition to accept the information we ingest as truth and explains the central tenet of his newest book, The Importance of Being Educable: A New Theory of Human Uniqueness.

This episode is proudly sponsored by:
Ouai — Offers beauty-boosting head-to-toe self-care rituals. Visit theouai.com and use code HH to get 15% off of your entire purchase.

Like what you're hearing?
WANT MORE SOUND IDEAS FOR DEEPER THINKING? Check out More Mental Fitness by Harvesting Happiness bonus content available exclusively on Substack and Medium.
In this hilarious yet insightful tale, join the eccentric Dr. Frankenstream and his quirky assistant Igor as they bring an AI system to life, only to face unexpected challenges and comical missteps. Discover how they, along with cybersecurity expert Inga, navigate the perils of modern technology, reminding us of the crucial balance between innovation and responsibility.
This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.

Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn.

Sincerely, Marco Ciappelli and TAPE3
“I'd like to focus more on the immediate harms that the kinds of AI technologies we have today might pose. With language models, the kind of technology that powers ChatGPT and other chatbots, there are harms that might result from regular use of these systems, and then there are harms that might result from malicious use. Regular use would be how you and I might use ChatGPT and other chatbots to do ordinary things. There is a concern that these systems might reproduce and amplify, for example, racist or sexist biases, or spread misinformation. These systems are known to, as researchers put it, “hallucinate” in some cases, making up facts or false citations. And then there are the harms from malicious use, which might result from some bad actors using the systems for nefarious purposes. That would include disinformation on a mass scale. You could imagine a bad actor using language models to automate the creation of fake news and propaganda to try to manipulate voters, for example. And this takes us into the medium term future, because we're not quite there, but another concern would be language models providing dangerous, potentially illegal information that is not readily available on the internet for anyone to access. As they get better over time, there is a concern that in the wrong hands, these systems might become quite powerful weapons, at least indirectly, and so people have been trying to mitigate these potential harms.”

Dr. Raphaël Millière is Assistant Professor in Philosophy of AI at Macquarie University in Sydney, Australia. His research primarily explores the theoretical foundations and inner workings of AI systems based on deep learning, such as large language models. He investigates whether these systems can exhibit human-like cognitive capacities, drawing on theories and methods from cognitive science. He is also interested in how insights from studying AI might shed new light on human cognition.
Ultimately, his work aims to advance our understanding of both artificial and natural intelligence.

https://raphaelmilliere.com
https://researchers.mq.edu.au/en/persons/raphael-milliere
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast
Trevin Edgeworth, Red Team Practice Director at Bishop Fox, discusses how change, such as M&A, staff turnover, new technology, lack of clarity, or even self-promotion within and around security environments, presents windows of opportunity for attackers. Joe and Dave share some listener follow-up. The first comes from Erin, who writes in from Northern Ireland and shares an interesting new find about scammers now keeping up with the news. The second comes from listener Johnathan, who shared thoughts on reconsidering his view on defining Apple's non-rate-limited MFA notifications as a "vulnerability." Lastly, we have follow-up from listener Anders, who shares an article on AI. Joe shares a story about Amazon sellers and how they are being plagued by a surge in scam returns. Dave brings us a story on how to save yourself and your loved ones from AI robocalls. Please take a moment to fill out an audience survey! Let us know how we are doing!

Links to the stories:
Theory Is All You Need: AI, Human Cognition, and Decision Making
Amazon Sellers Plagued by Surge in Scam Returns
How to Protect Yourself (and Your Loved Ones) From AI Scam Calls
News Insights: Does X Mark a Target? with Trevin Edgeworth, Director of Red Team

Have a Catch of the Day you'd like to share? Email it to us at hackinghumans@thecyberwire.com.
- AI, ChatGPT, and the singularity with a Google whistleblower. (0:03)
- AI, chatbots, and large language models. (5:04)
- Replacing human workers with AI machines. (19:31)
- AI and its impact on the job market. (24:30)
- Consciousness, self-awareness, and demonic possession in machines. (34:50)
- Demonic possession of AI systems. (39:24)
- AI, bias, and manipulation in tech. (1:12:57)
- US global empire, globalism, and technological advancements. (1:23:42)
- Taxes, AI, and humanoid robots. (1:28:07)
- Magnetic field weakening and solar flares. (1:36:58)
- Solar radiation, climate change, and AI. (1:40:14)
- AI, NPCs, and conformity. (1:47:59)
- AI's rapid progress and its implications. (1:57:16)

For more updates, visit: http://www.brighteon.com/channel/hrreport

NaturalNews videos would not be possible without you; as always, we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals and vastly increased scientific transparency.

▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/
▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html
▶️ Brighteon: https://www.brighteon.com/channels/hrreport
▶️ Join Our Social Network: https://brighteon.social/@HealthRanger
▶️ Check In Stock Products at: https://PrepWithMike.com