Perception in the absence of external stimulation that has the qualities of real perception
In this episode, the hosts delve into the intricate world of the Malazan series, focusing on 'House of Chains.' They explore the complexities of characters like Karsa Orlong, the moral ambiguities present in the narrative, and the significant themes of power struggles, suffering, and the role of female characters. The discussion also touches on the psychological aspects of oppression and the impact of hallucinations as a narrative device, all while appreciating the rich world-building that defines the series. The conversation then turns to the intricate themes and character developments within the series, particularly the roles of characters such as Heboric, Fiddler, and Cotillion. The hosts explore the significance of ascension, the emotional depth of the characters, and the narrative structure that intertwines past and present, touching on the implications of immortality and the cyclical nature of history as it relates to the characters' journeys.

Send us a message (I'm not able to reply) | Support the show | Page Chewing Blog | Page Chewing Forum | Film Chewing Podcast | Speculative Speculations Podcast | Support the podcast via PayPal | Support the show by using our Amazon Affiliate link | Join Riverside.fm

Co-Hosts: Jarrod, Varsha, Chris, Jose, Carl D. Albert (author), Thomas J. Devens (author), Alex French (author). Intro and Outro Music by Michael R. Fletcher (2024-Current)
Episode 361, where we talk about brand loyalty, see if AI can replace us, cover the most recent Supreme Court case that might destroy the internet, Leroy Jenkins, and more! Join the conversation with us LIVE every Monday on twitch.tv/2nerdsinapod at 9pm CST. Viewer questions/business inquiries can be sent to 2nerdsinapodcast@gmail.com. Follow us on Twitter @2NerdsInAPod […]
For small business owners, Strategic Consulting Experts will be there to help you learn how to engage customers to grow your business without AI.
Gayathri Krishnamurthy, Head of Product Marketing at Level AI, explains how AI is beginning to use data differently from previous versions, how the rate at which its capabilities are expanding is exponential, and why, despite this, the emerging technology is still not without its drawbacks... Hear the full conversation in a first-ever "live" episode of The Action Catalyst.
Dr. Jenessa Seymour is on to teach us more about the brain! What do hallucinations, ear-worms, and phantom limbs have in common? And is "hallucinating" really a good term for what AI does when it makes something up?
Become a Distance to Empty subscriber!: https://www.patreon.com/DistancetoEmptyPod

Check out Mount to Coast here: https://mounttocoast.com/discount/Distance

Use code DISTANCE at Janji.com and be sure to select 'podcast' > 'Distance to Empty' on the post-purchase "How did you hear about Janji" page. Thank you!

In this episode of the Distance to Empty podcast, host Kevin Goldberg is joined by guest co-host Rachel Bambrick and special guest Dr. Sarah Reeve, a clinical psychologist and research lecturer from the University of East Anglia. Together, they delve into the fascinating intersection of sleep disruption, hallucinations, and ultra running. Discover how sleep deprivation impacts the mind during multi-day ultra marathons, and explore the science behind hallucinations experienced by runners. Whether you're an ultra runner or just curious about the limits of human endurance, this episode offers intriguing insights into the mental challenges faced by athletes. Tune in for a captivating discussion that bridges the gap between psychology and extreme sports.
Information access: While many have Copilot licenses, usage is low beyond basic tasks like email and meeting summaries. The main challenge with adoption is providing guidance within apps like PowerPoint, Excel, Dynamics, and Word so users can access help exactly when they need it. This is something Rehmani's company, VisualSP, and his training platform, copilottrainingpackage.com, specialize in. "I'm a big proponent of giving people 'at the moment need' information," he notes.

Training paths: Copilottrainingpackage.com enables users to go down different "training paths," explains Rehmani. Specifically, there are pre-built PowerPoint training modules covering key topics like prompt creation and preventing hallucinations. Additionally, there's learning management system (LMS)-ready video content on Copilot use cases in Word, Excel, and other tools for on-demand learning. Finally, the platform offers optional live training sessions for trainers and power users to ensure effective adoption and ROI from Copilot. "At the end of the day, it's all about making Copilot into ROI and not just an expense layer."

What to expect: Rehmani describes the "anatomy" of the program: it uses seven modules to teach trainers and power users how to craft effective prompts, reduce Copilot errors, and apply specific workflows for high-impact ROI. Participants then share this knowledge internally, enabling time savings and efficiency across their organizations.

End-of-year pricing: Special pricing is available through the end of the year: the standalone package for $4,950, or the package plus live training for $8,950, all of which can be delivered in 2026, explains Rehmani. Visit Cloud Wars for more.
Just because a study has a long, jargon-filled title, that doesn't mean it's real. The company accused of driving up your rent is quietly settling a price-fixing lawsuit.
In this teaser, Jeremy and Henrik break down their immediate takeaways from their conversation with Christian Keller, including model fidelity, hallucinations, and the surprising ways AI is already reshaping everyday workflows. Full episode drops tomorrow. For more prompts, tips, and AI tools, check out our website: https://www.beyondtheprompt.ai/ or follow Jeremy or Henrik on LinkedIn: Henrik: https://www.linkedin.com/in/werdelin Jeremy: https://www.linkedin.com/in/jeremyutley Show edited by Emma Cecilie Jensen.
This week, Danielle and Kristine learn how to survive a bad psychedelic trip — visions, paranoia, melting faces, and all. Then, Ilana Cohn-Sullivan joins to share her unforgettable ayahuasca experience, complete with continuous vomiting, cheetah cosplay, and the moment things tipped from “transformational” to “terrifying.”
Send us a text

Tonight our guest is Ryan Gable, a veteran radio personality and producer of his weeknight show The Secret Teachings, named after Manly Hall's magnum opus. His broadcasts focus on the synchronicity and objective analysis of Parapsychology, Pop-conspiracy, Para-politics, Pop Occult-ure, Health, History, the Paranormal, Symbolism, Alchemy, Magic, Philosophy, and more, in the most distinct ways, by finding parallels and patterns often overlooked. Ryan is also the author of several books, including his esoteric masterpiece 'Occult Arcana', his compendium of synchronicity 'The Technological Elixir', and 'Liberty Shrugged', an unconventional and non-partisan look into the history of the United States. His new book 'Garden of Hallucinations' is an esoteric primer focusing on the symbolism of the human body, the relationship between magic and miracle in religious texts, and psychic manifestations. Ryan's website: https://thesecretteachings.info/

SUPPORT THE SHOW
Buy Me A Coffee http://buymeacoffee.com/Dangerousinfopodcast
SubscribeStar http://bit.ly/42Y0qM8
Super Chat Tip https://bit.ly/42W7iZH
Buzzsprout https://bit.ly/3m50hFT
Paypal http://bit.ly/3Gv3Zjp
Patreon http://bit.ly/3G3
Visit our affiliate, GrubTerra, to get 20% off your next order of pet treats: https://bit.ly/436YLVZ

SMART is an acronym created by the technocrats who have set up the "internet of things" that will eventually enslave humanity to their needs.

Support the show

Connect
Website https://www.dangerousinfopodcast.com/
Discord chatroom: https://discord.gg/8feGHQQmwg
Email the show dangerousinfopodcast@protonmail.com
Join mailing list http://bit.ly/3Kku5Yt
GrubTerra Pet Treats https://bit.ly/436YLVZ

Watch Live
YouTube https://www.youtube.com/@DANGEROUSINFOPODCAST
Rumble https://bit.ly/4q1Mg7Z
Twitch https://www.twitch.tv/dangerousinfopodcast
Pilled.net https://pilled.net/profile/144176
Facebook https://www.facebook.com/DangerousInfoPodcast/

Socials
Instagram https://www.instagram.com/dangerousinfo/
Twitter https://twitter.com/jaymz_jesse
YouTube https://bit.ly/436VExn
Facebook https://bit.ly/4gZbjVa

Send stuff: Jesse Jaymz, PO Box 541, Clarkston, MI 48347
In this episode, we welcome Scott Jenkins (27:55), an ultra runner from the UK who shares his incredible journey from running in the UK to participating in ultra marathons in the US. Scott discusses his experiences at the National Running Show, the challenges and triumphs of his Boston to Austin run, and the lessons he's learned through ultra running. He also shares insights from his recent race, the Arizona Monster 300, and emphasizes the importance of community support and proper race preparation. The conversation explores the emotional and physical challenges of ultra running, with personal experiences from races like the Cocodona 250: maintaining perspective, the role of community support, and the lessons learned from both successes and failures in racing. It also highlights the significance of proper nutrition, particularly salt intake, and the emotional journey runners experience as they approach the finish line. Future goals and bucket list races are discussed, emphasizing the continuous growth and learning that comes with being an ultra runner. Scott reflects on lessons learned from marathons and ultras and the importance of community, takes a light-hearted detour into horror stories and personal fears, and discusses his participation in the Triple Crown of 200s and his charity work with Operation Smile, concluding with a focus on embracing happiness and adventure in life.

During the Tros, Six Star surprises Knute with an unexpected guest as Carolina joins the pod after being promoted to "Social Media and PR Manager" for Six Star Erika. They got to know Carolina, her love for Ghost Train, Taylor Swift, her friendship with Yuki, and much more!

Chapters
00:00 Intro | Thunder Chicken!
09:28 Eric's Eventful Week: Gout and Car Accident
11:34 Surprise Carolina: The Social Media Manager
14:30 Budding Friendship: Carolina and Erika
17:50 Karaoke Dreams and Fun with Friends
19:11 Content Creation and Social Media Goals
23:36 Exploring Viral Trends and Influencers
27:53 Scott Jenkins | Guest Segment
35:15 Boston to Austin
52:43 Arizona Monster 300
01:04:20 Triple Crown of 200s
01:29:59 Operation Smile
01:36:21 Chased by a Witch
01:49:35 The Outro with Carolina
01:51:56 Weekend Adventures and Personal Connections
01:56:55 Pop Culture and Celebrity Insights
02:00:10 Running for a Cause
02:01:44 Personal Journey and Motivation
02:03:50 Community Involvement and Volunteering
02:05:37 Overcoming Self-Doubt in Running
02:06:39 Emotional Moments in Races
02:12:12 Friendship and Support in the Running Community
02:15:12 Inviting Special Guests to Dinner
02:18:40 Hot Takes for 2026
02:22:25 Embracing Happiness and Kindness
02:24:24 Wrap-Up and Sign-Offs

Strava Group
Linktree - Find everything here
Instagram - Follow us on the gram
YouTube - Subscribe to our channel
Patreon - Support us
Threads
Email us at OnTheRunsPod@gmail.com

Don't Fear The Code Brown and Don't Forget To Stretch!
https://amfmtreatment.com/blog/delusions-vs-hallucinations-in-psychosis-examples-differences

Delusions and hallucinations both break from reality, but they're not the same. Experts unpack the difference between false beliefs and false perceptions, how they show up in psychosis, and why recognizing the signs early makes all the difference.

A Mission for Michael (AMFM)
City: San Juan Capistrano
Address: 30310 Rancho Viejo Rd.
Website: https://amfmtreatment.com/
AFFEN&Co 376, the AFFEN Collégiale. Artificial intelligence (AI), described as a technological "tidal wave," is at the center of concerns in the professional training sector. This trend is significantly reshaping teaching practices, particularly through the use of generative AI.

The Impact of AI on Instructional Design (SEO: generative AI, training materials)
AI lets trainers quickly create a wide range of training materials, such as text, quizzes, and practical case studies, acting as a "big toolbox." It streamlines design tasks, yielding appreciable time savings. However, AI is not a "magic solution," and generated content must be reworked and finalized by a human. A balance must be struck between online learning (which performs well) and in-person training. The human relationship and the teaching relationship remain fundamental: the need for proximity enables synergy and learning by imitation, a factor that AI tools lack.

Limits of Expertise and Critical Thinking (SEO: expertise, hallucination, critical thinking)
It is crucial to understand that AI will never replace the baseline expertise of a person or a group in specific domains or subject areas (such as the trades). AI can generate answers that appear "very well constructed" and "very plausible" yet can be badly wrong (known as hallucination), which demands constant critical thinking. The development of AI is also surrounded by a strong marketing push presenting the tool as a miracle solution. Gérald Tirot insists on the need to filter out this "magical discourse" and to use AI sparingly, only when the need is clearly identified. To counter the lack of critical distance among learners who copy and paste AI answers, trainers must teach good practices. One educational remedy is to emphasize oral work and assessment of the working process, requiring students to embody their knowledge rather than simply outsourcing it to the tool.

LLM Tools and Experimentation Methods (SEO: ChatGPT, NotebookLM, Perplexity, transparency)
Experimentation is the key to determining AI's added value. The expert mainly uses ChatGPT for drafting emails and social media content (LinkedIn, Instagram). His working method is to give the AI a draft, ask it to justify its proposals, and check that it understands the context (dialogue/iteration), treating the AI's answers as mere suggestions for improvement. Two specific tools stand out:
1. NotebookLM (Google): this free tool creates "notebooks" from up to 50 uploaded sources (text, PDF, video). It generates customizable audio summaries based solely on the documents provided and can challenge the user through a study guide. It is excellent for creating supplementary materials that aid memorization (the forgetting curve) across different media.
2. Perplexity: defined as an "answer engine," it is used for academic research. Perplexity cites its web sources, allowing librarians to supplement traditional research, particularly to find recent or open-access scientific studies.
Finally, the principle of transparency is essential. Trainers and learners should disclose their use of AI to ensure ethical and informed usage, in line with regulatory frameworks such as the AI Act in Europe.
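The draft-and-justify loop described above can be captured as a reusable template. Below is a minimal, hypothetical sketch in Python (the wording, names, and structure are ours for illustration, not from the episode) of how that dialogue pattern can be packaged so the AI's output stays a set of proposals rather than a replacement for the human's draft:

```python
# A reusable skeleton for the draft -> justify -> iterate loop described
# above. The wording and names are illustrative, not quoted from the episode.
REVIEW_PROMPT = """You are reviewing my draft, not replacing it.

Context: {context}

Draft:
{draft}

1. Restate the context in one sentence so I can check that you understood it.
2. Propose at most three improvements.
3. Justify each proposal. I will accept or reject them individually."""


def build_review_prompt(context: str, draft: str) -> str:
    """Fill in the template; the result can be pasted into any chat LLM."""
    return REVIEW_PROMPT.format(context=context, draft=draft)


if __name__ == "__main__":
    print(build_review_prompt(
        context="LinkedIn post announcing a training workshop",
        draft="Join us next month to discover how AI can help trainers...",
    ))
```

The same skeleton can be pasted into any chat tool; the point is the protocol (restate the context, propose, justify), not a particular product.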
The following is AI-generated approximation of the transcript from the Equipping Hour session. If you have questions you would like to be addressed in followup sessions, please direct those to Jacob. Opening & Introduction Smedly Yates: All right, this morning’s equipping hour will be about artificial intelligence—hopefully an attempt to introduce this topic, help us think through it carefully, well, biblically. Let me just open our time in prayer. [Prayer] Heavenly Father, thank you so much for your kindness to us. Thank you for giving us all that we need for life and godliness, for not leaving your people adrift. Thank you for putting us into this world exactly in the era that you have. We pray to be effective, fruitful, in all those things which matter for eternity in this world, in this time, in this age. God, we pray for wisdom, that you would guide our discussion here. We pray that this would be of benefit and a help to Grace Bible Church. We ask it in Jesus’ name. Amen. Here’s the layout for this morning and for a future equipping hour. We’ll be talking for about 35 minutes, back and forth—Jake and I—and then at 9:35, the plan is to go to Q&A. So, this is an opportunity for you to ask questions. At that point, I’ll surrender my microphone and you guys can rove and find people. For the next 33 minutes or so, you can be thinking about the questions you’d like to ask. Jake’s going to do most of the talking in our time here. I’m going to set him up with some questions, but just by way of intro, I want to get some things out of the way as we’re talking about artificial intelligence. You might be terrified, you might be hopeful. I want to get the scary stuff out of the way first and tell you what we’re not going to talk about this morning. Is that fair? Artificial intelligence is here. Some of you are required to use it in the workplace. Some of you are prohibited from using it in your workspaces. There’s nothing you and I can do to keep it from being here. Some of the dangers, some of the things you might be wondering about, some of the things that make the news headlines—over the last two weeks, scanning the headlines, there was a new AI headline every day. One of the terrible things that we won’t talk about today is the fact that nobody knows what’s true anymore, right? How can we discern? But the reality is the god of this world has been Satan for the entirety of human history and he’s a deceiver from the beginning. There’s nothing new about lies. They might be easier and more convincing with certain technological advances. The lies might be more ubiquitous, but the same humanity and the same satanology are at play. We may be concerned about societal fracture and distrust. Some people, if they distrust new tech, will withdraw from society. Others will fully embrace it. And so you get a fracture in society—those with, and those without tech. Some people will just say, “If the digital world works, we’re going to use it.” That’s not the Christian perspective. We’re not simply pragmatists. We do care about what’s true and what’s right. Some are worried about AI chatbot companions that will mark the extinction of relationships, marriage, society. I probably fall into the category of those who assume that AI will mean the end of music or the death of music and other art forms. That’s just me, a confession. 
People run to end-of-the-world scenarios—the robots decide they don't need us anymore or the collective consciousness of AI decides that humanity is a pollutant on Mother Earth, and the only way to keep the earth going is to rid itself of humanity. The survival of the planet is dependent on our own extinction. So AI will bring about a mass human genocide and the end of homo sapiens on earth. We know that's not true, right? We know how the world ends, and it doesn't end by an AI apocalypse. So don't worry about that. Some people worry that AI will be a significant civilization destabilizer. That might be true. But we know that God is sovereign, and we know where society and civilization end up: at the feet of Jesus worshipping him when he rules on the earth for a thousand years leading into the eternal state. So don't worry about that either. Some believe that AI is the antichrist. Now we know that's not true. What is the number of the beast? 666. And this year it got rounded up to 67. So we know AI is not the antichrist. 67 is the antichrist. And if you want to know why the numbers six and seven got together in the year 2025 and formed the new word of the year, ask your middle schooler. Is that all the scary stuff? Not even close. I have a family member who has worked in military intelligence working on artificial intelligence stuff for a long time. He said it's way scarier than you could possibly imagine. Do you want to share any other scary scenarios we shouldn't be thinking about? Jacob Hantla: No, we'll probably cover some of those. Smedly Yates: Okay, great. What we want to focus on today is artificial intelligence as a tool. Just as an axe can be a tool for good or evil, AI is a tool that either has opportunities for betterment or opportunities for danger. So we want to think about that well. What you have on stage here are two of the shepherds at Grace Bible Church. You've got Jake Hantla, who is the guy I want exploring artificial intelligence and telling us how to use it well—he has and he does. And then you have me; I intend not to use artificial intelligence for now. We're on opposite ends of a spectrum, but we share the same theology, same principles, same concerns, and I think the same inquisitive curiosity about technological advances. I drive a car; I'm not Amish in a horse and buggy. I like tech. But on this one, I'm just going to wait and see. I'm going to let Jake explore. From these two different poles, I hope we can be helpful this morning to help us all together think through artificial intelligence. What is AI? Smedly Yates: Let's start with this, Jake. What is AI basically? Jacob Hantla: At the heart of it, most forms of AI are a tool to predict the next token. That might not mean much to you, but it's basically a really fancy statistical prediction machine that accomplishes a lot of really powerful outcomes. It doesn't have a mind, emotions, or consciousness, but it can really effectively mimic those things because it's been trained on basically all that humanity has produced that's available to it on the web and in other sources. I'll try not to be super technical, but I want to pop up a picture. Can you go to slide one? When we think of AI, large language models are probably the one that most of you will think of: ChatGPT, Gemini, Grok, Claude, things like that.
Effectively, what it does when we’re thinking of language—it can do other things, like images and driving cars and other things, but let’s think of words—it takes basically all that humanity has written and learns to predict the next token, or we could just think of the next word. So, all of you know, if I said, “Paris is a city in…” most of you would say France. Paris is a city in France. How do you know that? Everyone here has learned that fact. Large language models have gone through a process of training where they learn facts, concepts, and grammar, so that they can effectively speak like a human in words, sentences, and paragraphs that make sense. So how did it get to that? On the right, there’s just a probability that “France” is the most probable next word. How did it get there? Next slide. I’ll go fast. Basically, it’s a whole bunch of tunable weights—think of little knobs or statistical probabilities that interlink parameters. These things get randomized—there are trillions of them in the modern large language models. They’re just completely random, and then it starts feeding in text. Let’s say it was “It was the best of times, it was the…” and it might say “gopher” as the next word when you just randomly start, and that’s obviously wrong. The right word would be “worst.” So, over and over and over again, for something that would take one computer about a hundred million years to do what they do in the pre-training, they have lots of computers doing this over and over until it can adequately say, “Nope, it wasn’t gopher. It should be worst. Let’s take another crack at it.” It just manipulates these knobs until it can act like a human. If you fed it a mystery novel and at the end it would say, “The killer was…” it has to be able to understand everything before to adequately guess who the killer was, or “What is the capital of France?” It compresses tons and tons of knowledge from all of the written text. Then you start putting images in and it compresses knowledge from images and experience from life into a whole bunch of knobs—basically, numbers assigned so it can have an output that is reasonable. Next slide. You take people—pre-training is the process where you’re basically feeding text into it and it’s somehow learning. We don’t even know—humans are not choosing which knobs mean what. It’s a black box. We can sort of start to figure out which knobs might mean things like masculinity or number or verbs, but at the end, you just have a big bunch of numbers. Then humans come in and train it—reinforcement learning with human feedback. They say, “This is the kind of answers we want this tool to give.” At the outcome, people are saying, “We ask it a question, it outputs an answer, we say that’s a good one, that’s a bad one.” But in this, you can see there’s lots of opportunity for falsehood or biases—unstated or purposeful—to sneak in. If you feed in bad data into the training set, and if it’s trained on all of the internet—all that humans have made—you’re going to have a whole lot of truth in there, but also a whole lot of falsehood. It’s not learning to discern between those things; it’s learning all those things. In reinforcement learning with human feedback, we’re basically fine-tuning it, saying, “This is the kind of answer we want you to give,” and that’s going to depend on who teaches it. Then the final step is people judging the answers: “This is the kind of answer we want, this is the kind we don’t want.” Lots of opportunity for biases to sneak in. 
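To make the "prediction machine" idea concrete, here is a small illustrative sketch (not part of the session; the candidate words and scores are invented) of the last step of next-token prediction, where a model's raw scores for candidate next words become probabilities:

```python
import math

# Toy sketch of next-token prediction for the prompt "Paris is a city in ...".
# A real model scores every token in a huge vocabulary; these four candidate
# words and their raw scores (logits) are invented for illustration.
logits = {
    "France": 9.1,
    "Texas": 4.2,    # Paris, Texas exists, so it gets a little weight
    "Europe": 3.5,
    "gopher": -2.0,  # absurd continuations score very low
}

def softmax(scores: dict[str, float]) -> dict[str, float]:
    """Turn raw scores into a probability distribution that sums to 1."""
    exps = {token: math.exp(s) for token, s in scores.items()}
    total = sum(exps.values())
    return {token: e / total for token, e in exps.items()}

for token, p in sorted(softmax(logits).items(), key=lambda kv: -kv[1]):
    print(f"{token:>8}: {p:.3f}")
# "France" comes out near 0.99: the model "knows" the fact only in the sense
# that this continuation dominated its training data.
```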
That was a long answer to “What is AI?” It’s a prediction machine with a whole lot of math going on. What Sets AI Apart from Other Technology? Smedly Yates: Jake, what sets AI apart from previous technological advances, especially as it relates to intention? Jacob Hantla: Tech could be as simple as writing, the wheel, the airplane, telephones, the internet—all those things. All of those, in some sense, enhanced human productivity, strength, our ability to communicate. We could pick up a phone and communicate over distance, use radio waves to communicate to more people, but it was fundamentally something that humans did—magnified. A tractor takes the human art, the human attempt to cultivate a field, and increases efficiency. AI can actually do that. A human in control of an AI can really augment the productivity and effectiveness of a human. You could read a book yourself to gain knowledge or have AI read a book, summarize it, and you get the knowledge. But AI can, for the first time, generate things that look human. It’s similar in some ways, but it’s very different in that it’s generative. AI and Truth Smedly Yates: Tell me about the relationship between AI and truth. You touched on it a little bit before. Jacob Hantla: AI contains a lot of truth. It’s been trained on even ultimate truth. AI has read the Bible more times than any of us ever could. To a large degree, it understands—as AI can understand—a lot of true things and can hold those truths simultaneously in ways that we can’t. But mixed in is a lot of untruth, and there’s no… AI can’t have the Holy Spirit. AI isn’t motivated the same way we are to know what’s true, to know what’s not. So, AI contains a lot of truth and can help you get to truth. You can give it a bunch of true documents and say, “Can you help me? Can you summarize the truth that’s in here? Or actually just summarize what’s in here?” If what’s in there was true, the output will be true; if what’s in there was false, it will output falsehood. It doesn’t have the ability or the desire to determine what is true and what’s not. AI, Emotion, Values, and Worldview Smedly Yates: So, ability and desire are interesting words. Let’s talk about emotion in AI, values in AI, worldview, and regulation of data. For us, true/false claims matter—or they don’t—depending on our worldview and values. Is there a mystery inside this black box of values, of emotion? How do we think about that? Jacob Hantla: First, AI doesn’t inherently have emotion or values, but it can mimic it based on the data it’s been trained on. You can ask the same AI a question and, unless you guide it, it will give you likely a hundred different answers if you ask the same question a hundred times. Unless it’s been steered in one direction, some answers will be good, some will be bad—everything in between. It’s generating a statistical probability. It doesn’t inherently have any of those things but can mimic them. It can be trained to have the values of the trainers. You can have system prompts where the system is prompted to respond in a way that mimics values, mimics emotions. The danger is if you just accept what it says as truth, which a lot of people will do. You say, “I want to know a piece of data,” and you ask the AI and the answer comes out, and you accept it. But you have to understand the AI is just generating a response based on probabilities. If you haven’t guided it to have a set of values, you don’t know what’s going to come out—and somebody may hide some values in it. Gemini actually did this. 
I think it was Gemini 2, but if you asked for a picture of the Founding Fathers, it would—because it was taught in the system prompt to prioritize diversity—give you images of a diverse group of females or different races, other than the races of the actual Founding Fathers, because it was taught to prioritize that. It had a hidden value in it. You can guide it to have the values you want with a prompt. It's not guaranteed, but this is the kind of thing I would encourage you to do if you're using these tools: put your own system prompt on it, tell it what worldview you want it to come from, what your aim is, and you'll get a more helpful answer than not. Is AI Avoidable? Smedly Yates: Is AI something we can avoid, ignore, be blissfully ignorant about, put our heads in the sand? Jacob Hantla: You could, but I think it's wise that we all think about it. I'm not encouraging people to adopt it in the same way that I have or Smed has. But the reality is, the world around us has changed. It's irreversibly different because of the introduction of this technology. That's what happens with any technology—you can't go back. Technological advances are inevitable, stacked from scientific discovery and advances. If OpenAI wasn't doing what it's doing, somebody else would. You can't go back. You can't ignore it because the world is going to be different. You're going to be influenced by both the presence of it and the output of it. When you get called on the phone now with a very believable voice, it might not be the person it sounds like—AI can mimic what it's been trained on. There's thousands of hours of Smed's voice; it won't be long before Smed could call you and it's not Smed. Or Scott Demerest could send you an email asking for a credit card and it's not Scott. News reports are generated by AI; some of them are true, effective, good summaries, and some could be intentionally spreading disinformation or straight-up falsehood. If you're not aware of the presence of these things, you could be taken advantage of. Some work environments now require you to do more than you could have otherwise, and not being willing to look at the tools in some jobs will make you unable to compete. Commercially Available AI Products: Benefits and Dangers Smedly Yates: Let's talk about the commercially available AI products that people can access as a tool. What are the opportunities, the benefits, and what are some of the dangers? Jacob Hantla: There are so many we couldn't begin to go through all of them, but the ones most of you will interact with are large language models—people just say "ChatGPT" like Kleenex for tissues. It was the first one that came out and is probably the most ubiquitous, one of the easiest to use, and most powerful free ones. There's ChatGPT by OpenAI, Gemini by Google, Claude by Anthropic, Grok by xAI (Elon Musk's company), DeepSeek from China (good to know that's made/controlled by China), Meta's Llama, etc. Do the company names matter? Yes. It's good to know who made it and what their goals are, because worldviews are to some degree baked into the model. If you're ignorant of that, you'll be more likely to be deceived or not use the tool to the maximum. But with all of these, these are large language models. I drive around now with AI driving my car—ultimately, it's a similar basis, but that's not our focus here. Large language models open up the availability of knowledge to us. They're superpowered Google searches.
You can upload a bunch of journal articles, ask it to train you to mastery on a topic. For example, I was trying to understand diastolic heart failure and aortic stenosis—uploaded articles, had a built-in tutor. The tutor asked me questions, evaluated my understanding, used the Socratic method to train me to mastery. This could do in 45 minutes what would have taken me much longer on my own. Every tool can do that. The bad side: you could have it summarize articles for you, and now feel like you have mastery you didn’t actually gain. You could generate an essay or pass a test using it, bypassing the entire process of learning and thinking. Students: if you have a tool that mimics human knowledge and creativity, and you have an assignment to write an essay, and you turn in what the tool generated as your own, you’re being dishonest and you bypass the learning process. The essay wasn’t the point—the process was. Passing a test is about assessing if you know things. If the AI does it for you, you bypass learning. I liken it to going to the gym. The point isn’t moving the weights, it’s building muscle. With education, the learning process is like exercise. It’s easy to have AI do the heavy lifting and think you did it, but you didn’t get stronger. So, be aware of what you’re losing and what you’re gaining. The tool itself isn’t morally good or bad; it’s how the human uses it. The more powerful the technology, the greater good or evil can be accomplished. The printing press could distribute Bibles, but also propaganda. Using AI with Worldview and Preferences Jacob Hantla: When I interact with AI on the Bible, I put a prompt: “When I ask about the Bible or theology, you will answer from a conservative, evangelical, Bible-believing perspective that uses a literal, grammatical-historical hermeneutic and a premillennial eschatology. Assume the 66-book Protestant canon is inspired, inerrant, infallible, completely trustworthy, without error in the original manuscripts, sufficient, and fully authoritative in all it affirms. No sources outside of the 66 books of this canon should be regarded as having these properties. Truth is objective, not relative; therefore, any claim that contradicts the Bible so understood is wrong.” I’m teaching it to adopt this worldview. If you don’t set your preferences, you might get any answer. The tool can learn your preference over time, but it’s better to set it explicitly. Audience Q&A Presuppositions and Biases in AI Audience (Nick O’Neal): What about the values and agenda behind those who input the data? What discernment do the programmers have to put that information in? Jacob Hantla: That goes to baked-in presuppositions or assumptions in the model. Pre-training is basically non-discerning: it’s huge chunks of everything ever written—good, bad, ugly, in between. It’s trained not on a set of values. Nobody programs values in directly; the people making it don’t even know what's being baked in. The fine-tuning comes when trainers judge outputs and reinforce certain responses. System prompts—unseen by users—further guide outputs, reflecting company worldviews. Companies like OpenAI are trying to have an open model so each person can let it adopt their own worldview, but there are still baked-in biases. For example, recent headlines showed some models valuing certain people groups differently, which reflects issues in training data or the trainers' worldview. 
You’re right to always ask about the underlying assumptions, which is why it would be foolish to just accept whatever comes out as truth. In areas like engineering, worldview matters less, but in many subjects, the biases matter. Is There an AI Bubble? Audience (Matthew Puit): When AI came out, the costs rose artificially by companies. Is the AI bubble going to pop? Jacob Hantla: I don’t know. I think AI will be one of the most transformational technologies. It’ll change things in ways we anticipate and in ways we don’t. Some people will make a lot of money, some will flop. If I knew for sure, I could make a lot of money in the stock market. AI-Generated Worship Music Audience (Rebecca): I see AI-generated worship music based on Psalms, but it’s generated by AI. Is anything lost in AI-generated worship music? Jacob Hantla: AI doesn’t have a soul or the Holy Spirit. It can generate worship music with good doctrine, but that doctrine didn’t come from a place of worship. AI can pray a prayer, but the words aren’t the result of a worshipful heart. You can worship God with those words, but you’re not following a human author who was worshipping God. For example, my kids used Suno (an AI music tool) to set a Bible verse to music for memorization—very helpful. Some might be uncomfortable with music unless it was created by a human; that’s a preference. Creativity is changing, and it will get hard to tell if music or video was made by a human or by AI. That distinction is getting harder to make every day. Setting Preferences in AI Tools Audience (Lee): You mentioned putting your preferences in. How do I do that, especially with free tools? Jacob Hantla: Paid AIs get more processing power, context window, and can use your preferences more consistently. Free versions have some ability—you can usually add preferences in the menu. But even if not, you can paste your preferences at the beginning of your question each time: define who you are, what you want, what worldview to answer from. For example: “I’m a Bible-believing Christian,” or “I’m a nurse anesthesiologist.” That helps the AI give a better answer. Parental Guidance and Children Using AI Smedly Yates: What should parents be aware of in helping their kids navigate AI? Jacob Hantla: Be aware of dangers and opportunities. Kids will likely use these tools, so set limits and help them navigate well. These tools can act like humans—kids without friends might use them as companions, and companies are adding companion avatars, some with sinful tendencies. That can be a danger. For school, a good use is as a tutor: after a quiz, have your child upload the results and ask, “Help me understand where I’m weak on this topic.” But also, be aware of the temptation to use AI to cheat or shortcut the process of learning, discovery, and thinking. Which AI Model? Will AI Become Self-Aware? Audience (Steve): Is there a model you recommend? And does the Bible preclude the possibility of AI becoming self-aware? Jacob Hantla: There’s benefits and drawbacks to all. For getting started, ChatGPT or Perplexity are easiest. Perplexity lets you limit sources to research or peer-reviewed articles and can web search for verification—good guardrails. I build in prompts like “verify all answers with at least two web sources, cite them, and state level of confidence.” On self-awareness: AI will never have the value of humans—they're not created in God’s image, they’re made in our image, copying human behavior. Will they gain some kind of self-awareness? 
Maybe, in the sense of mimicking humanness, but not true humanity. They won't have souls. They may start to fool more people as they get better, but Christians should use AI as a tool, not ascribe humanity or worship to it. AI Hallucinations Smedly Yates: Do you have an example of a hallucination? Jacob Hantla: Yes, Ben James was preparing for an equipping hour session and found a book that fit perfectly—the author and title sounded right. He asked where to buy it, and the AI admitted it made it up. That happens all the time: the model just predicts the next most probable thing, even if it’s false. Hallucinations happen because it’s a probability machine, not a truth machine. This probably won’t be a problem forever, but for now it’s very real. Ask it questions about topics you know something about so you can discern when it’s off, or bake into the prompt, “verify with web search, cite at least two sources.” For Bible/theology, your best bet is to read your Bible daily so you have discernment; then use tools to help, not replace, your direct interaction with God’s Word. There’s a wide gap between knowing the biblical answer and having your heart changed by slow, prayerful reading of the text and the Spirit’s work. If we run to commentaries, YouTube sermons, pastors, or even study notes before we’ve observed and meditated, we’re shortcutting the Word of God. The dangers predate the internet. We’re out of time. We’ll have a follow-up teaching on AI. Submit questions to any elders or the church office if you want your question addressed in the next session. The post Equipping Hour: Biblically Thinking About AI (Part 1) appeared first on Grace Bible Church.
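For readers who want to try the session's practical advice (set your own system prompt; ask for sources and a stated confidence level), here is a minimal hypothetical sketch using the OpenAI Python client. The model name and prompt wording are placeholders, and a plain chat completion cannot actually browse the web, so the "verify with web search" guardrail belongs in a search-enabled tool such as Perplexity:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# Condensed paraphrase of the kind of worldview/guardrail prompt described
# in the session; write your own wording. Note that a plain chat completion
# cannot browse the web, so here we only ask the model to cite the sources
# it relies on and to state its confidence rather than guess.
SYSTEM_PROMPT = (
    "Answer Bible and theology questions from a conservative evangelical "
    "perspective using a literal, grammatical-historical hermeneutic. "
    "Cite your sources, state your level of confidence, and say 'I am not "
    "sure' rather than guessing when you cannot verify a claim."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you have access to
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Summarize the argument of Romans 6."},
    ],
)
print(response.choices[0].message.content)
```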
What has become of Silas Jacobsen?

Content Warnings: Child in Peril, Suicide, Cutting, Body horror, Parental loss, Gore, Gunshot SFX, Death and dying, Religious trauma, Hallucinations, Transformation, Violence, and Disturbing imagery.

Keeper of Arcane Lore: cuppycup
Campaign Author: Graeme Patrick
Executive Producer: cuppycup
Content Editors: cuppycup, Graeme Patrick
Dialogue Rough Cut Editor: Rina Haenze
Audio Editor, Sound Designer, Music Supervisor: cuppycup

Player Characters
Wes Davis as Silas Jacobsen

NPC Voices
Mike Perceval-Maxwell as Father Archer
additional voices by cuppycup

"Dead Man Walking" Theme by Cody Fry

Patreon: https://patreon.com/aintslayed
Merch: https://aintslayed.dashery.com/
Discord: https://slayed.me/discord
IG: https://instagram.com/aintslayed

Ain't Slayed Nobody and Rusty Quill Hosted on Acast. See acast.com/privacy for more information.
Schizophrenia is a really challenging illness. There's been a lot of progress made recently, I will note. I've already written about novel treatments like Cobenfy, and about using accelerated transcranial magnetic stimulation for negative symptoms and positive symptoms in schizophrenia. One of the most bothersome of those "positive symptoms"—things that shouldn't be there, in someone's mind, but are—is auditory hallucinations. If you imagine having invisible AirPods playing a terrible podcast that you'd rather not be listening to, and that no one else can hear, you get a sense of how distracting auditory hallucinations might be.

In my previous article about the treatment of auditory hallucinations with transcranial magnetic stimulation (TMS), one of my favorite forms of brain stimulation, I highlighted promising results from early studies. Now we have a much larger study: a phase 3 trial conducted over many years in Germany. We are even at the level of meta-analysis at this point! It's a considerable study: 138 adults with treatment-persistent auditory verbal hallucinations and schizophrenia spectrum disorder were randomly assigned (1:1) to receive 15 sessions of active (n=70) or sham cTBS (n=68), administered sequentially as 600 pulses to the left and 600 pulses to the right temporo-parietal cortex over a 3-week period.

I called friends of the podcast—Dr. David Garrison, Dr. Will Sauve, and my mom, Vita Muir—to talk through this paper together, and what it might mean for individuals suffering from psychotic disorders. In the meantime, the team at Radial, where we provide such treatment, makes some funny, tough-guy faces with our Ampa One system. Thanks for reading! A live-action newsletter event is coming up on January 11th in San Francisco: RAMHT 2026 SF. Join us! This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit thefrontierpsychiatrists.substack.com/subscribe
In this special crossover episode with the brand-new Embedded AI Podcast, Luca and Jeff are joined by Ryan Torvik, Luca's co-host on the Embedded AI podcast, to explore the intersection of AI-powered development tools and agile embedded systems engineering. The hosts discuss practical strategies for using Large Language Models (LLMs) effectively in embedded development workflows, covering topics like context management, test-driven development with AI, and maintaining code quality standards in safety-critical systems.

The conversation addresses common anti-patterns that developers encounter when first adopting LLM-assisted coding, such as "vibe coding" yourself off a cliff by letting the AI generate too much code at once, losing control of architectural decisions, and failing to maintain proper test coverage. The hosts emphasize that while LLMs can dramatically accelerate prototyping and reduce boilerplate coding, they require even more rigorous engineering discipline, not less. They discuss how traditional agile practices like small commits, continuous integration, test-driven development, and frequent context resets become even more critical when working with AI tools.

For embedded systems engineers working in safety-critical domains like medical devices, automotive, and aerospace, the episode provides valuable guidance on integrating AI tools while maintaining deterministic quality processes. The hosts stress that LLMs should augment, not replace, static analysis tools and human code reviews, and that developers remain fully responsible for AI-generated code. Whether you're just starting with AI-assisted development or looking to refine your approach, this episode offers actionable insights for leveraging LLMs effectively while keeping the reins firmly in hand.

## Key Topics

* [03:45] LLM Interface Options: Web, CLI, and IDE Plugins - Choosing the Right Tool for Your Workflow
* [08:30] Prompt Engineering Fundamentals: Being Specific and Iterative with LLMs
* [12:15] Building Effective Base Prompts: Learning from Experience vs. Starting from Templates
* [16:40] Context Window Management: Avoiding Information Overload and Hallucinations
* [22:10] Understanding LLM Context: Files, Prompts, and Conversation History
* [26:50] The Nature of Hallucinations: Why LLMs Always Generate, Never Judge
* [29:20] Test-Driven Development with AI: More Critical Than Ever
* [35:45] Avoiding 'Vibe Coding' Disasters: The Importance of Small, Testable Increments
* [42:30] Requirements Engineering in the AI Era: Becoming More Specific About What You Want
* [48:15] Extreme Programming Principles Applied to LLM Development: Small Steps and Frequent Commits
* [52:40] Context Reset Strategies: When and How to Start Fresh Sessions
* [56:20] The V-Model Approach: Breaking Down Problems into Manageable LLM-Sized Chunks
* [01:01:10] AI in Safety-Critical Systems: Augmenting, Not Replacing, Deterministic Tools
* [01:06:45] Code Review in the AI Age: Maintaining Standards Despite Faster Iteration
* [01:12:30] Prototyping vs. Production Code: The Superpower and the Danger
* [01:16:50] Shifting Left with AI: Empowering Product Owners and Accelerating Feedback Loops
* [01:19:40] Bootstrapping New Technologies: From Zero to One in Minutes Instead of Weeks
* [01:23:15] Advice for Junior Engineers: Building Intuition in the Age of AI-Assisted Development

## Notable Quotes

> "All of us are new to this experience. Nobody went to school back in the 80s and has been doing this for 40 years. We're all just running around, bumping into things and seeing what works for us." — Ryan Torvik

> "An LLM is just a token generator. You stick an input in, and it returns an output, and it has no way of judging whether this is correct or valid or useful. It's just whatever it generated. So it's up to you to give it input data that will very likely result in useful output data." — Luca Ingianni

> "Tests tell you how this is supposed to work. You can have it write the test first and then evaluate the test. Using tests helps communicate - just like you would to another person - no, it needs to function like this, it needs to have this functionality and behave in this way." — Ryan Torvik

> "I find myself being even more aggressively biased towards test-driven development. While I'm reasonably lenient about the code that the LLM writes, I am very pedantic about the tests that I'm using. I will very thoroughly review them and really tweak them until they have the level of detail that I'm interested in." — Luca Ingianni

> "It's really forcing me to be a better engineer by using the LLM. You have to go and do that system level understanding of the problem space before you actually ask the LLM to do something. This is what responsible people have been saying - this is how you do engineering." — Ryan Torvik

> "I can use LLMs to jumpstart me or bootstrap me from zero to one. Once there's something on the screen that kind of works, I can usually then apply my general programming skill, my general engineering taste to improve it. Getting from that zero to one is now not days or weeks of learning - it's 20 minutes of playing with it." — Jeff Gable

> "LLMs are fantastic at small-scale stuff. They will be wonderful at finding better alternatives for how to implement a certain function. But they are absolutely atrocious at large-scale stuff. They will gleefully mess up your architecture and not even notice because they cannot fit it into their tiny electronic brains." — Luca Ingianni

> "Don't be afraid to try it out. We're all noobs to this. This is the brave noob world of AI exploration. Be curious about it, but also be cautious about it. Don't ever take your hands off the reins. Trust your engineering intuition - even young folks that are just starting, trust your engineering intuition." — Ryan Torvik

> "As the saying goes, good judgment comes from experience. Experience comes from bad judgment. You'll find spectacular ways of messing up - that is how you become a decent engineer. LLMs do not change that. Junior engineers will still be necessary, will still be around, and they will still evolve into senior engineers eventually after they've fallen on their faces enough times." — Luca Ingianni

You can find Jeff at https://jeffgable.com.
You can find Luca at https://luca.engineer.
Want to join the agile Embedded Slack? Click here
Are you looking for embedded-focused trainings? Head to https://agileembedded.academy/
Ryan Torvik and Luca have started the Embedded AI podcast, check it out at https://embeddedaipodcast.com/
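As a concrete illustration of the test-first discipline described in the quotes above, here is a hedged sketch in Python (the episode contains no code; RingBuffer and its API are invented for the example). The human writes the pedantic tests first and hands them to the LLM as the specification:

```python
# test_ring_buffer.py -- written by the human BEFORE asking an LLM for an
# implementation. The tests pin down the exact behavior we want; the prompt
# to the LLM then becomes "make these tests pass inside ring_buffer.py"
# rather than the vaguer "write me a ring buffer".
import pytest
from ring_buffer import RingBuffer  # the module the LLM will be asked to write

def test_overwrites_oldest_when_full():
    buf = RingBuffer(capacity=3)
    for value in (1, 2, 3, 4):   # the fourth push exceeds capacity
        buf.push(value)
    assert buf.pop() == 2        # 1 was silently overwritten
    assert buf.pop() == 3
    assert buf.pop() == 4

def test_pop_on_empty_raises():
    buf = RingBuffer(capacity=2)
    with pytest.raises(IndexError):
        buf.pop()
```

The tests double as the "communicate, just like you would to another person" step Torvik describes: any implementation the LLM produces is accepted or rejected by running them, not by eyeballing the generated code.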
In this episode of the Crux True Survival Story Podcast, hosts Kaycee McIntosh and Julie Henningsen recount the gripping survival story of Alec Luhn, a 38-year-old climate journalist, who endured six days stranded on a remote Norwegian mountain. After a catastrophic fall left him with a broken femur, fractured pelvis, and multiple fractured vertebrae, Luhn faced treacherous weather conditions and severe dehydration. Against seemingly insurmountable odds, his relentless will to live and eventual rescue by the Norwegian Red Cross make for an incredible tale of human endurance, love, and the extreme measures taken to make it home. Join us as we explore the crucial moments that dictated Luhn's fate and the lessons learned from this incredible true story.

00:00 Introduction to Case Knives
00:33 Welcome to the Crux True Survival Story Podcast
00:57 Alec Luhn's Harrowing Tale Begins
02:47 Alec Luhn: The Experienced Adventurer
05:07 The Treacherous Terrain of Folgefonna National Park
08:46 Alec's First Mistake: The Broken Boot
10:57 The Catastrophic Fall
12:47 Stranded and Injured: Alec's Fight for Survival
14:35 The Struggle for Water and Shelter
19:42 The Crushing Solitude
20:26 Family as a Beacon of Hope
21:09 Hallucinations and Helicopters
22:55 The Rescue Mission
25:17 Medical Breakdown of Survival
29:33 Lessons and Reflections
35:18 A New Lease on Life
38:39 Final Thoughts and Gratitude

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
✨Don't forget to subscribe, rate this podcast & hit the notification bell so you get alerts when new episodes drop! Connect with us on Instagram & TikTok while you're at it
Duration: 00:16:19 - The Night of the Cannibal - On the evening of November 14, 2013, the Bigorre village of Nouilhan is the scene of a terrible tragedy. Léopold, a retired man in his nineties, is murdered in his home by a stranger, who cuts out his tongue and his heart to cook and eat them. Who is the cannibal of Nouilhan? Enjoying this podcast? To listen to all the other episodes without limits, go to Radio France.
Yes, says Gene Marks, it's easy to make fun of all of the ways in which AI chatbots can fail (don't even think about asking them to create an image of a Yorkshire Terrier hitting a home run), but that's no excuse to sit on the sidelines. Get the paid version. Get some training. Get your employees some training. And get to work. On what? Gene gives some examples of his favorite use cases.
Today, Flora Ghebali, Jean-Loup Bonnamy, and Mourad Boudjellal debate the day's news alongside Alain Marschall and Olivier Truchot.
Julius, Patience, and Eli are swept into a nightmare of blood and steel as they chase the truth through a warped slaughterhouse. Something ancient stirs as the boundaries between dream and waking collapse. Choices are made that can't be undone.

Content Warnings: Suicide, Drowning, Body horror, Gore, Gunshot SFX, Death and dying, Human experimentation, Religious trauma, Hallucinations, Transformation, Violence, and Disturbing imagery involving blood, water, and industrial slaughter.

Keeper of Arcane Lore: cuppycup
Campaign Author: Graeme Patrick
Executive Producer: cuppycup
Content Editors: cuppycup, Graeme Patrick
Dialogue Rough Cut Editor: Rina Haenze
Audio Editor, Sound Designer, Music Supervisor: cuppycup
Original Music: Graham Plowman

Player Characters
Rina Haenze as Patience Cartwright
Chuck Lawrence as Eli Malcolm
London Carlisle as Julius Ruffin

NPC Voices
Delton Engle-Sorrell as Cody
Mike Perceval-Maxwell as The Old Man
Keith Houston as Old Greg
additional voices by cuppycup

"Dead Man Walking" Theme by Cody Fry

Patreon: https://patreon.com/aintslayed
Merch: https://aintslayed.dashery.com/
Discord: https://slayed.me/discord
IG: https://instagram.com/aintslayed

Ain't Slayed Nobody and Rusty Quill Hosted on Acast. See acast.com/privacy for more information.
The hard truth about enterprise AI? According to industry research, some 88% of AI pilots never reach production. And by 2027, roughly 40% of AI projects may be canceled entirely. The problem isn't the AI models; it's the data. While organizations rush to implement generative AI, they're hitting a fundamental wall: AI agents can't deliver reliable business value without clean, governed, and context-rich enterprise data. The result? Non-deterministic outputs that change from run to run, black box decisions that leaders can't defend, and brittle integrations that break at enterprise scale. Register for this episode of DM Radio to learn about the next stage of AI adoption: bridging enterprise data to AI with trust and governance built from the ground up. Sami Akbay and Terrence Sheflin of insightsoftware will discuss: - Why AI projects fail in production - The path to production-ready AI - How to eliminate hallucinations at the source Stop treating AI hallucinations as an acceptable trade-off. Learn how to deliver consistent, verifiable, and auditable AI outcomes that move your organization from experimental pilots to production-ready intelligence that drives real business value.
Man Hospitalized 63 Days After ChatGPT Convinced Him He Was a Time Lord
READ or SHARE: https://weirddarkness.com/ai-psychosis-lawsuit
WeirdDarkness® is a registered trademark. Copyright ©2025, Weird Darkness.
#WeirdDarkness, #ChatGPT, #OpenAI, #AIPsychosis, #MentalHealth, #AILawsuit, #TechDangers, #ChatbotAddiction, #AISafety, #TechEthics
In this special episode of Brain in a Vat, Brad Duchaine from the Psychological and Brain Sciences Department at Dartmouth discusses prosopometamorphopsia (PMO) – a condition where individuals see distorted faces.

The episode explores various manifestations of PMO, including seeing demonic features, dragons, or entirely different faces, and distinguishes it from prosopagnosia (face blindness). It delves into the neurological basis of PMO, touching upon structural and functional differences in the brains of those affected.

We discuss the significant impact PMO and prosopagnosia have on individuals' social interactions and quality of life, as well as potential management techniques such as using glasses or colored lights to reduce distortions.

Chapters:
[00:00] Introduction and Guest Presentation
[00:20] Exploring the Case of Face Distortion
[03:19] Understanding the Nature of Distortions
[05:49] Comparing Prosopagnosia and PMO
[07:17] The Role of Visual System in Face Recognition
[10:01] Impact of Face Recognition on Social Interactions
[14:16] The Spectrum of Face Recognition Abilities
[16:20] The Role of AI and Hallucinations in Face Recognition
[23:48] Understanding the Diagnosis of Face Recognition Disorders
[30:20] Understanding Prosopagnosia: Prevalence and Characteristics
[31:59] The Mystery of PMO: Prevalence and Need for More Research
[32:46] Exploring the Impact of Prosopagnosia and PMO on Social and Sexual Relationships
[33:23] The Intersection of PMO and Other Psychiatric Conditions
[37:03] The Struggles and Challenges of Living with Prosopagnosia and PMO
[41:02] Potential Treatments and Coping Mechanisms for PMO
[51:22] The Neurological Basis of PMO and Prosopagnosia
[54:51] Testing and Characterizing PMO: A Deep Dive into Research Methods
[59:54] Understanding the Perception and Processing of Faces in PMO
Our executive sports reporter Jason Brown gives us his week 10 NFL picks. Expect more flights to be canceled with the ongoing government shutdown. Fred tells us about a man in the UK who has ongoing hallucinations about something very naughty.See omnystudio.com/listener for privacy information.
Stupid News 11-7-2025 6am …Did you hear that? …You never get used to the smell …Is having Hallucinations of Pamela Anderson's Boobs a bad thing? I don't think so
The silos between Application Security and Cloud Security are officially breaking down, and AI is the primary catalyst. In this episode, Tejas Dakve, Senior Manager, Application Security, Bloomberg Industry Group, and Aditya Patel, VP of Cybersecurity Architecture, discuss how the AI-driven landscape is forcing a fundamental change in how we secure our applications and infrastructure.

The conversation explores why traditional security models and gates are "absolutely impossible" to maintain against the sheer speed and volume of AI-generated code. Learn why traditional threat modeling is no longer a one-time event, how the lines between AppSec and CloudSec are merging, and why the future of the industry belongs to "T-shaped engineers" with a multidisciplinary range of skills.

Guest Socials - Tejas's LinkedIn + Aditya's LinkedIn
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - Youtube
- Cloud Security Newsletter

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:30) Who is Tejas Dakve? (AppSec)
(03:40) Who is Aditya Patel? (CloudSec)
(04:30) Common Use Cases for AI in Cloud & Applications
(08:00) How AI Changed the Landscape for AppSec Teams
(09:00) Why Traditional Security Models Don't Work for AI
(11:00) AI is Breaking Down Security Silos (CloudSec & AppSec)
(12:15) The "Hallucination" Problem: AI Knows Everything Until You're the Expert
(12:45) The Speed & Volume of AI-Generated Code is the Real Challenge
(14:30) How to Handle the AI Code Explosion? "Paved Roads"
(15:45) From "Department of No" to "Department of Safe Yes"
(16:30) Baking Security into the AI Lifecycle (Like DevSecOps)
(18:25) Securing Agentic AI: Why IAM is More Important than the Chat
(24:00) The Silo: AppSec Doesn't Have Visibility into Cloud IAM
(25:00) Merging Threat Models: AppSec + CloudSec
(26:20) Using New Frameworks: MITRE ATLAS & OWASP LLM Top 10
(27:30) Threat Modeling Must Be a "Living & Breathing Process"
(28:30) Using AI for Automated Threat Modeling
(31:00) Building vs. Buying AI Security Tools
(34:10) Prioritizing Vulnerabilities: Quality Over Quantity
(37:20) The Rise of the "T-Shaped" Security Engineer
(39:20) Building AI Governance with Cross-Functional Teams
(40:10) Secure by Design for AI-Native Applications
(44:10) AI Adoption Maturity: The 5 Stages of Grief
(50:00) How the Security Role is Evolving with AI
(55:20) Career Advice for Evolving in the Age of AI
(01:00:00) Career Advice for Newcomers: Get an IT Help Desk Job
(01:03:00) Fun Questions: Cats, Philanthropy, and Thai Food

Resources discussed during the interview:
- Amazon Rufus (Amazon's AI review summarizer)
- OWASP Top 10 for LLMs
- STRIDE Threat Model (Microsoft methodology)
- MITRE ATLAS
- Cloud Security Alliance (CSA) Maestro Framework
- CISA KEV (Known Exploited Vulnerabilities)
- Book: Range: Why Generalists Triumph in a Specialized World by David Epstein
- Anjali Charitable Trust
- Aditya Patel's Blog
Neoborn Caveman opens with a satirical reflection on life's mutual interactions and unfulfilled expectations, critiquing government and corporate intrusions—like dynamic pricing scams and AI coercion (e.g., Grok's 'hallucinations')—that undermine sovereignty and privacy. NC condemns Minnesota's teachers' union for promoting early sex education (including incest concerns), spotlights Jim McMurtry's Kafkaesque Canadian case, debunks alien/UFO distractions amid elite war games and military resource exploitation, and explores historical injustices through Proclamation 1625's Irish enslavement under British rule, calling for monarch reparations while drawing parallels to African slavery. He emphasizes inner peace over triggered hate, righteous rage for community defense, ethical stances against fake nudes and porn consumption, and affirming personal worth while encouraging blessings to others.

Music guests: Sweet Water, pMad, Neoborn Caveman

Key Takeaways:
- Sovereignty requires rejecting corporate and political overreach.
- Political figures' histories (e.g., Keir Starmer's communist ties) reveal hidden agendas and absurdities.
- Historical enslavements like the Irish demand truthful acknowledgment and reparations from monarchs.
- Respecting differences builds stronger communities.
- Affirmations of worth counter societal pressures.
- Ethical rants: Stop creating/sharing fake images and consuming porn—it's bad for the soul.
- Independent thought resists manipulative narratives.

Sound Bites:
- "We are always forked over by the government, by the agencies, by the corporations and quite often by each other."
- "Do you think it's really good to encourage children for incest and other things like in Minnesota?"
- "Keir Double Forking Starmer, you know, the communist camp lover."
- "King James I had this proclamation, ordering the Irish be placed in bondage."
- "The Irish and African slaves were housed together and were forced to mate."
- "Everybody's talking about reparations. Let's talk about Irish reparations."

Chapters:
00:00 Intro and Welcome: Satirical Take on Life and Mutual Interactions
00:02:32 Host Introduction and Critique of Government as Parasites
00:04:55 AI Coercion, Hallucinations, and Linked Systems Impacting Daily Life
00:07:21 Personal Encounter with Dynamic Pricing in Stores
00:09:40 School System Flaws and Reference to Jim McMurtry's Case
00:12:01 Distractions from Real Issues: Aliens, UFOs, and Military Inventions
00:14:21 Elite Games, Price Gouging, and Calls for Resistance
00:16:22 A Bug's Life Analogy for Dealing with Parasitical Leaders
00:20:13 Welcome Back: No Green Tea, Sugar in Food, and Health Rants
00:22:32 Teen Issues: AI Tools Creating Fake Nude Images and Lawsuit Details
00:24:53 Take It Down Act and Challenges in Filing Complaints
00:27:22 Accent Struggles, Government Shutdowns, and Feudal Systems
00:29:45 Societal Upside-Down: System Failures, Kids' Safety, and Tax Mismanagement
00:32:02 Handling AI-Generated Images: Self-Protection and Community Fixes
00:34:29 Rant on Ethics: Stop Creating/Sharing Fake Images and Consuming Porn
00:39:39 Show Support, Helping Vulnerable People, and Making Life Better
00:42:01 Inner Peace, Righteous Rage, and Avoiding Triggered Hate
00:44:30 Proclamation 1625: America's Enslavement of the Irish and Historical Parallels
00:48:11 Cultural Segment: Poetry Plans, Closing Blessings, and Farewell

Gather for unfiltered rambles at patreon.com/theneoborncavemanshow - free join, chats, lives.
Humanity centered satirical takes on the world & news + music - with a marble mouthed host. Free speech marinated in comedy. Supporting Purple Rabbits.
Hosted on Acast. See acast.com/privacy for more information.
What is Chapel? This week, Technology Now explores the programming language Chapel. We ask what it is, how it was designed, and why people would use it instead of some of the more established languages.

This is Technology Now, a weekly show from Hewlett Packard Enterprise. Every week, hosts Michael Bird and Aubrey Lovell look at a story that's been making headlines, take a look at the technology behind it, and explain why it matters to organizations.

About Brad Chamberlain: https://www.linkedin.com/in/brad-chamberlain-3ab358105

Sources:
https://www.britannica.com/biography/Ada-Lovelace
https://www.adalovelaceinstitute.org/about/
https://cdn.britannica.com/31/172531-050-E009D42C/portion-Charles-Babbage-Analytical-Engine-death-mill-1871.jpg
https://commons.wikimedia.org/wiki/File:PunchedCardsAnalyticalEngine.jpg
https://www.mpg.de/female-pioneers-of-science/Ada-Lovelace
Artificial general intelligence (AGI) could be humanity's greatest invention ... or our biggest risk.

In this episode of TechFirst, I talk with Dr. Ben Goertzel, CEO and founder of SingularityNET, about the future of AGI, the possibility of superintelligence, and what happens when machines think beyond human programming.

We cover:
• Is AGI inevitable? How soon will it arrive?
• Will AGI kill us … or save us?
• Why decentralization and blockchain could make AGI safer
• How large language models (LLMs) fit into the path toward AGI
• The risks of an AGI arms race between the U.S. and China
• Why Ben Goertzel created MeTTa, a new AGI programming language
For our final Horror themed one-shot, we delve into Good Strong Hands (https://nerdburgergames.com/product/good-strong-hands-rpg-hardcover/)!!

In hopes of ending the Void's onslaught once and for all, the characters have decided to attempt to enter Nightmare and find the Heart of the Void. But to stop the Void, fears must be faced, temptation resisted, and sacrifices made.

CW: Body Horror, Gaslighting/Temptation, Existential Horror, Psychological Horror, Hallucinations, Tentacles, Monsters, Grey Morality, Eye motifs, Self Sacrifice

Cast:
GM, Lamia: https://bio.link/mentalwoke
Candace the Magnificent: https://linktr.ee/candacethemagnificent
Jay Justice: http://www.instagram.com/ThatJayJustice
Sam: https://bsky.app/profile/msbsandrs.bsky.social

===
Intro/Outro music by Harris Heller
The QueerXP Patreon: https://www.patreon.com/TheQueerXP
The QueerXP Carrd: https://thequeerxp.carrd.co/
The QueerXP Instagram: https://instagram.com/thequeerxp
Audio Editing by: Amelia https://ko-fi.com/ameliagia
Cover Art by: Rosie B. - https://bsky.app/profile/sweetrosiebee.bsky.social
Hosted on Acast. See acast.com/privacy for more information.
Broadcast from KSQD, Santa Cruz on 10-23-2025: Dr. Dawn opens with a passionate plea about E-bike safety after observing riders ignoring stop signs and wearing inadequate helmets in Santa Cruz. She explains the physics of collisions, noting that force equals mass times acceleration, and that a car hitting an E-bike rider at 20 mph delivers an impact equivalent to falling from a two-story building. She emphasizes that 97% of bike fatalities in New York involved helmetless riders, and that brain injuries result from the brain striking the skull twice during impact - once on the impact side and again on the opposite side during deceleration. She urges drivers to honk at helmetless riders and calls for stricter helmet law enforcement.

An emailer asks about hydroxyapatite in toothpaste. Dr. Dawn traces its origins to NASA research in the 1960s by Dr. Bernard Rubin studying crystal growth for preventing bone and tooth mineral loss in astronauts. Japanese company Sangi acquired the patent and created the first hydroxyapatite toothpaste by 1980, receiving official anti-cavity recognition in 1993. Studies show it matches fluoride's cavity-prevention effectiveness by filling microscopic cracks where bacteria take root. It also relieves temperature sensitivity by sealing micro-fractures in enamel that expose the dentin layer, making it especially helpful for people who clench their jaws.

Researchers from UC Berkeley and the Allen Institute used electrodes and lasers to study how mouse brains process optical illusions like the Kanizsa triangle. They discovered specialized IC-encoder neurons in the visual cortex that fill in missing information, creating complete shapes from partial cues. When these pattern-completing circuits activate inappropriately, they may trigger hallucinations in conditions like schizophrenia. Dr. Dawn explains that illusions occur when the brain perceives something different from actual visual input, while hallucinations create perceptions with no external stimulus. She discusses frontotemporal dementia, where visual hallucinations result from protein deposits in the occipital cortex, and notes that a 2021 British Journal of Psychiatry study found hallucination rates varying from 7% in young people to 3% in those over 70.

An emailer describes unbearable chronic lumbar pain with degenerative disc disease shown on MRI. Dr. Dawn emphasizes that MRI findings don't necessarily correlate with pain levels, citing shopping-mall studies showing equal degenerative changes in people with and without back pain. She stresses checking for sciatica symptoms, including leg pain below the knee, sensory differences between sides, calf size asymmetry, and the ability to walk on tiptoes and heels. Without these red flags, the degenerative disease likely isn't causing the pain. She warns against unnecessary surgery, citing frequent "failed back" syndrome when surgery for a disk image doesn't "fix" the pain. She recommends water jogging with a ski vest, McKenzie exercises, abdominal strengthening, ergonomics, removing wallets from back pockets, and alternating heat and ice therapy. She discusses mindfulness meditation and cognitive behavioral therapy for pain management.

A caller references Daniel Levitin's book "Your Brain on Music," discussing research using functional MRIs showing distinct brain-activation patterns in musicians versus non-musicians due to integrated auditory, visual, and kinesthetic training. Dr. Dawn describes how infant brains develop from three to six layers with increasingly complex synaptic connections resembling circuit boards. She highlights a blindfold study in which college students' visual cortices began responding to sound within two weeks as the auditory cortex expanded. She shares her husband's remarkable recovery demonstrating adult brain plasticity through intensive rehabilitation. Learning new musical instruments helps dementia patients by activating multiple brain regions simultaneously and improving standard cognitive test performance.

A caller describes an eight-day chest cold with thick white phlegm. Dr. Dawn recommends guaifenesin as a mucus-thinning expectorant to prevent bacterial growth in respiratory secretions that serve as "bacteria chow." She emphasizes the importance of current flu, COVID, and RSV vaccinations. Secondary bacterial infections develop when bacteria colonize viral-induced mucus in the lungs and invade tissues. She advises aggressive hydration and chicken soup, which research shows helps clear mucus. Another caller provides additional information about Daniel Levitin as a musicologist, neurologist, and musician who runs the Music Perception, Cognition and Expertise laboratory at McGill University.
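A quick back-of-the-envelope check of that two-story comparison (a sketch, assuming a hard stop with no energy absorbed and g ≈ 9.8 m/s²; the unit conversions are standard, not from the broadcast):

```latex
% Equivalent fall height for an impact at speed v: set kinetic energy
% equal to potential energy, m g h = m v^2 / 2, so h = v^2 / (2g).
\[
v = 20\ \text{mph} \approx 8.9\ \text{m/s}, \qquad
h = \frac{v^2}{2g} \approx \frac{(8.9\ \text{m/s})^2}{2 \times 9.8\ \text{m/s}^2} \approx 4.1\ \text{m}.
\]
```

About 4 meters (roughly 13 feet) is the height of a second-story window, which is consistent with the two-story-fall claim.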
Naivety? Crass indifference? Bad faith? Cognitive dissonance? Denial? Total hallucination? In her book Vivre, libre, Black French author Amandine Gay examines the prodigious ability of the white people around her to blind themselves. How can they so royally ignore the countless situations where racism lurks? Where did they learn to act as if all were well in the best of all possible worlds? And how do you live with that permanent sense of disconnect, the impression of being surrounded by people who say they sincerely love you yet completely miss what you experience every day? In this episode, Amandine Gay recounts her discovery of the philosopher Charles Mills (1951-2021), who dissected what he called "white ignorance": the various cognitive mechanisms that allow white people not to see the racial domination they benefit from. Alongside researcher Maboula Soumahoro, Amandine Gay illuminates the dark side of whiteness.

Featuring:
- Amandine Gay
- Maboula Soumahoro

Bibliography:
- Vivre, libre by Amandine Gay, Ed. La Découverte, 2025
- Le Triangle et l'Hexagone by Maboula Soumahoro, Ed. La Découverte, 2020
- Le contrat racial by Charles Wade Mills (translated from English by Aly Ndya), Ed. Mémoire d'Encrier, 2023 (original publication 1997)

Archives and excerpts:
- Sketch by Djamil Le Shlag, Le racisme anti blanc, 2019
- Sketch by Muriel Robin, Le noir, 1988
- Nelson Mandela 70th Birthday Tribute concert, June 11, 1988, Wembley Stadium, London
- Film La Révolution française, Robert Enrico and Richard T. Heffron, 1989
- Film La controverse de Valladolid, directed by Jean-Daniel Verhaeghe, 1992
- Excerpt from Du Contrat social by Jean-Jacques Rousseau, read by Denis Podalydès

To go further:
- L'opposé de la blancheur by Léonora Miano, Ed. du Seuil, 2023
- À bout portant. Versailles 1972 by Philippe Artières, Ed. Verticales, 2024
- Un monde en nègre et blanc by Aurélia Michel, Ed. Points, 2020
- L'ignorance blanche by Charles Mills, Solène Brun, Claire Cosquer, Marronnages, vol. 1, no. 1, 2022

Thanks to Amandine Gay, Maboula Soumahoro, and Laurie. Recorded September-October 2025. Interviews, sound recording, and narration: Delphine Saltel. Production and editing: Delphine Saltel, Gary Salin. Editorial support: Mina Souchon. Mixing: Gary Salin. Original music: Acoustic Bass Lisa, GTR Lime, Comme des oiseaux, and FSC Key jumping – Charlie Marcelet; Fin du monde poubelle, FMD2 Etheree, and Nico Papa – Arnaud Forest; Algeria and Celestat – Samuel Hirsch; Notes de Marseillaise – Gary Salin. Preexisting music: Set Them Free – Sting; Asimbonanga – Johnny Clegg; Ebony and Ivory – Paul McCartney and Stevie Wonder; Original Sin – INXS; White and Black Blues – Joëlle Ursull; La Marseillaise – Jessye Norman; Sonata for Harpsichord in D minor K.1 (F.517/L.366) – Scarlatti; Ebony & Ivory (Filipe Narciso Dub Underground Remix) – Am Roots feat. Nomsa Mazwai. Illustration: Yasmine Gateau. Production: ARTE Radio
What is Chat HPE? This week, Technology Now dives into the world of workplace assistants and examines what must be considered when designing them. We explore why businesses want them, how they are created, and ask how good Chat HPE could be when designing a podcast... Jose M Mejias, a Distinguished Technologist working in the Data Office, tells us more.

This is Technology Now, a weekly show from Hewlett Packard Enterprise. Every week, hosts Michael Bird and Aubrey Lovell look at a story that's been making headlines, take a look at the technology behind it, and explain why it matters to organizations.

About Jose: https://pr.linkedin.com/in/jose-mejias-1233b323

Sources:
Joseph Weizenbaum. 1966. ELIZA—a computer program for the study of natural language communication between man and machine. Commun. ACM 9, 1 (Jan. 1966), 36–45. https://doi.org/10.1145/365153.365168
https://www.ibm.com/think/insights/eliza-effect-avoiding-emotional-attachment-to-ai
https://www.theguardian.com/technology/2023/jul/25/joseph-weizenbaum-inventor-eliza-chatbot-turned-against-artificial-intelligence-ai
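For context on the ELIZA paper cited above: Weizenbaum's program held conversations through simple keyword matching and canned response templates, with no understanding at all. Here is a minimal sketch of that style of chatbot; the rule table is invented for illustration and is not Weizenbaum's original DOCTOR script.

```python
import random

# Minimal ELIZA-style chatbot: match a keyword, echo back a canned template.
# The tiny rule table below is illustrative, not Weizenbaum's actual script.
RULES = {
    "mother": ["Tell me more about your family.", "How do you feel about your mother?"],
    "work":   ["What about your work bothers you?", "Is work important to you?"],
    "sad":    ["I am sorry to hear you are sad.", "What do you think makes you sad?"],
}
FALLBACK = ["Please go on.", "I see.", "What does that suggest to you?"]

def reply(user_input: str) -> str:
    """Pick a response template for the first keyword found in the input."""
    words = user_input.lower().split()
    for keyword, templates in RULES.items():
        if keyword in words:
            return random.choice(templates)
    return random.choice(FALLBACK)  # No keyword matched: non-committal filler.

print(reply("I have been sad about my work"))
```

The "Eliza effect" the IBM piece describes is precisely how readily people attribute understanding to this kind of shallow pattern matching, a caution that still applies to modern workplace assistants.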
Scoot talks about people licking toads in Arizona
Register for Hallucination!! https://ultrasignup.com/register.aspx?did=121864Volunteer for the Myrtle Beach 24 Hour!! Email me at plloyd1026@gmail.com or message the Carolina Ultra Runners Podcast Facebook page or the Myrtle Beach 24 Hour Facebook page!
Now on Spotify Video! Mustafa Suleyman's journey to becoming one of the most influential figures in artificial intelligence began far from a Silicon Valley boardroom. The son of a Syrian immigrant father in London, his early years in human rights activism shaped his belief that technology should be used for good. That vision led him to co-found DeepMind, acquired by Google, and later launch Inflection AI. Now, as CEO of Microsoft AI, he explores the next era of AI in action. In this episode, Mustafa discusses the impact of AI in business, how it will transform the future of work, and even our relationships.

In this episode, Hala and Mustafa will discuss:
(00:00) Introduction
(02:42) The Coming Wave: How AI Will Disrupt Everything
(06:45) Artificial Intelligence as a Double-Edged Sword
(11:33) From Human Rights to Ethical AI Leadership
(15:35) What Is AGI, Narrow AI, and Hallucinations of AI?
(24:15) Emotional AI and the Rise of Digital Companions
(33:03) Microsoft's Vision for Human-Centered AI
(41:47) Can We Contain AI Before Its Revolution?
(48:33) The Future of Work in an AI-Powered World
(52:22) AI in Business: Advice for Entrepreneurs

Mustafa Suleyman is the CEO of Microsoft AI and a leading figure in artificial intelligence. He co-founded DeepMind, one of the world's foremost AI research labs, acquired by Google, and went on to co-found Inflection AI, a machine learning and generative AI company. He is also the bestselling author of The Coming Wave. Recognized globally for his influence, Mustafa was named one of Time's 100 most influential people in AI in both 2023 and 2024.

Sponsored By:
Indeed - Get a $75 sponsored job credit to boost your job's visibility at Indeed.com/PROFITING
Shopify - Start your $1/month trial at Shopify.com/profiting
Mercury - Streamline your banking and finances in one place. Learn more at mercury.com/profiting. Mercury is a financial technology company, not a bank. Banking services provided through Choice Financial Group, Column N.A., and Evolve Bank & Trust; Members FDIC.
Quo - Get 20% off your first 6 months at Quo.com/PROFITING
Revolve - Head to REVOLVE.com/PROFITING and take 15% off your first order with code PROFITING
Framer - Go to Framer.com and use code PROFITING to launch your site for free
Merit Beauty - Go to meritbeauty.com to get your free signature makeup bag with your first order
Pipedrive - Get a 30-day free trial at pipedrive.com/profiting
Airbnb - Find yourself a cohost at airbnb.com/host

Resources Mentioned:
Mustafa's Book, The Coming Wave: bit.ly/TheComing_Wave
Mustafa's LinkedIn: linkedin.com/in/mustafa-suleyman
Active Deals - youngandprofiting.com/deals

Key YAP Links:
Reviews - ratethispodcast.com/yap
YouTube - youtube.com/c/YoungandProfiting
Newsletter - youngandprofiting.co/newsletter
LinkedIn - linkedin.com/in/htaha/
Instagram - instagram.com/yapwithhala/
Social + Podcast Services: yapmedia.com
Transcripts - youngandprofiting.com/episodes-new

Entrepreneurship, Entrepreneurship Podcast, Business, Business Podcast, Self Improvement, Self-Improvement, Personal Development, Starting a Business, Strategy, Investing, Sales, Selling, Psychology, Productivity, Entrepreneurs, AI, Artificial Intelligence, Technology, Marketing, Negotiation, Money, Finance, Side Hustle, Startup, Mental Health, Career, Leadership, Mindset, Health, Growth Mindset, ChatGPT, AI Marketing, Prompt, AI for Entrepreneurs, AI Podcast
In this episode of SLP Coffee Talk, Hallie sits down with tech-loving SLP and entrepreneur Mai Ling Chan to demystify one of the biggest buzzwords in our field—AI. From her early days building websites and launching innovative platforms to helping SLPs ethically integrate AI into their daily work, Mai shares how this fast-evolving tool can make our lives easier and enhance our clinical practice. Together, they break down what AI can (and can't) do, why "hallucinations" aren't just for sci-fi, and how to use it responsibly—without feeling left behind. Whether you're curious about how AI can help you plan sessions, save time, or simply make dinner (yes, really!), this conversation will leave you feeling empowered, informed, and inspired to experiment with tech on your own terms.

Bullet Points to Discuss:
- What AI can (and can't) do in our field right now
- Ethical and practical ways to use AI in therapy and planning
- Why "AI hallucinations" happen—and how to spot them

Here's what we learned:
- You don't need to be a tech expert to start using AI effectively.
- AI can save time and spark creativity—but still needs human oversight.
- Ethical awareness is essential when applying AI in therapy or materials.
- "Hallucinations" remind us to double-check information, not fear the tool.
- Learning AI is an ongoing process—everyone's still figuring it out.
- A supportive community helps SLPs explore new tools with confidence.

Learn more about Mai Ling Chan:
LinkedIn: https://www.linkedin.com/in/mailingchan/
Website: https://www.mailingchan.com/
Website: https://www.exceptionalleadersnetwork.com/
AI for SLPs (FB Group): https://www.facebook.com/groups/597325123447725

Learn more about Hallie Sherman and SLP Elevate:
St. Louis Magazine editor Nick Phillips spent three days kayaking across the state of Missouri in the MR340, the world's longest nonstop paddling race. He crossed the finish line after nearly 74 hours. Writing about the experience in St. Louis Magazine, Nick describes the journey as a "sweaty, maddening hell." Along with Phillips, MR340 race director Steve Schnarr discusses the race's history and the extremes it places on the bodies and wills of its participants.
An attorney in New York was accused of using bad AI in his pleadings and was facing sanctions; in response to the motion for sanctions, he filed an opposition motion, filled with even more bad AI. Hallucinations and all. https://www.lehtoslaw.com
How do we know if our AI… is really AI? This week, Technology Now goes under the hood of AI products when Baradji Diallo, an AI Innovation Architect in Technology Strategy and Evaluation working in the Office of the CTO, joins us to explain how he and his team investigate whether AI products are really what they claim to be.

This is Technology Now, a weekly show from Hewlett Packard Enterprise. Every week, hosts Michael Bird and Aubrey Lovell look at a story that's been making headlines, take a look at the technology behind it, and explain why it matters to organizations.

About Baradji Diallo: https://www.linkedin.com/in/baradji-diallo/

Sources:
https://www.statista.com/outlook/tmo/artificial-intelligence/worldwide
https://www.historyofdatascience.com/ai-winter-the-highs-and-lows-of-artificial-intelligence/
https://www.techtarget.com/searchenterpriseai/definition/AI-winter
Funding a Revolution: Government Support for Computing Research. National Academy Press. Archived from the original on 12 January 2008. Retrieved 08 September 2025. https://web.archive.org/web/20080112001018/http://www.nap.edu/readingroom/books/far/ch9.html
https://www.birow.com/az-elso-ai-tel
https://www.holloway.com/g/making-things-think/sections/the-second-ai-winter-19871993
https://www.forbes.com/sites/johnwerner/2024/04/09/three-lessons-learned-from-the-second-ai-winter/
Hey there! On a super special episode of Riffin with Griffin, today is a public awareness show: I'm diving deep into mental safety as an armchair expert. I've stumbled upon something called AI hallucinations, and I'm eager to share what I've learned, along with some news and fun stuff from my own life. So, welcome back!

HIMS: To get simple, online access to personalized, affordable care for ED, Hair Loss, Weight Loss, and more, visit http://HiM.com/Griffin

Erik Griffin
http://www.erikgriffin.com
https://www.patreon.com/ErikGriffin
http://twitch.tv/erikgriffingaming
https://all-things-erik-griffin.creator-spring.com

Youtube Version: https://www.youtube.com/watch?v=QNRXBnHuUHY
Jim talks with Alex Ebert about the ideas in his Substack essay "New Age and the Religion of Self: The Anatomy of a Rebellion Against Reality." They discuss the meanings of New Age and religion, the New Thought movement, the law of attraction, manifesting, Trump's artifacts of manifestation, the unmooring from concrete artifacts, individual and collective hallucinations, intersubjective verification of the interobjective, the subjective-first perspective, epistemic asymmetry as the cool, New Ageism's constant reference to quantum physics, manifesting as a way to negate social responsibility, the odd coincidence of leaving the gold standard and New Ageism, spiritual bypassing, a global derealization, new retribalized collective delusions, the Faustian bargain of AI, rationality as a virus, the noble lie, indeterminacy as a sign of emergence, nostalgia as a sales pitch, regaining the sense of hypocrisy, localized retribalizations, GameB as a series of membranes, and much more.

Episode Transcript
"New Age and the Religion of Self: The Anatomy of a Rebellion Against Reality," by Alex Ebert
Bad Guru (Alex's Substack)
Jim Rutt's Substack
"Unclear Thinking About Philosophical Zombies and Quantum Measurement," by Jim Rutt
The Century of the Self (documentary by Adam Curtis)

Alex Ebert is a platinum-selling musician (Edward Sharpe and The Magnetic Zeros), Golden Globe-winning film composer, cultural critic, and philosopher living in New Orleans. His philosophical project, FreQ Theory, as well as his cultural analyses, can be followed on his Substack.
Your AI chats may be subpoena-able... don't feed them confidential strategies. Chad breaks down responsible AI use for legal and business tasks, where AI helps (drafts, speed) and where it fails (hallucinated case law, no courtroom instincts). Plus, how to pair AI with counsel to move faster without getting burned.

Learn how to invest in real estate with the Cashflow 2.0 System! Your business in a box with 1:1 coaching, motivated seller leads, & software. https://www.wealthyinvestor.com/
Want to work 1:1 with Ryan Pineda? Apply at ryanpineda.com
Join our FREE community, weekly calls, and bible studies for Christian entrepreneurs and business people. https://tentmakers.us/
Want to grow your business and network with elite entrepreneurs on world-class golf courses? Apply now to join Mastermind19 – Ryan Pineda's private golf mastermind for high-level founders and dealmakers. www.mastermind19.com

---
About Ryan Pineda: Ryan Pineda has been in the real estate industry since 2010 and has invested in over $100,000,000 of real estate. He has completed over 700 flips and wholesales, and he owns over 650 rental units. As an entrepreneur, he has founded seven different businesses that have generated 7-8 figures of revenue. Ryan has amassed over 2 million followers on social media and has generated over 1 billion views online. Starting as a minor league baseball player making less than $2,000 a month, Ryan is now worth over $100 million. He shares his experiences in building wealth and believes that anyone can change their life with real estate investing. ...
The Dark Side of AI Hallucinations & Tech Predictions

Watch the full episode with Steven Pinker here: https://www.youtube.com/watch?v=RP_0MpESQy0
Get my book FOCUS LIKE A NOBEL PRIZE WINNER, on sale now for just 99 cents: https://a.co/d/hi50U9U
Join this channel to get access to perks like monthly Office Hours: https://www.youtube.com/channel/UCmXH_moPhfkqCk6S3b9RWuw/join