156 - AI Slop, Sora 2 and Meta Vibes

Mixed Feelings on AI Slop: The overall "Vibe Check" for the week's AI news was a 7-8/10, but Ryan expresses concern that the growing volume of AI-generated content feels "icky" and like "slop," leaving the sense of having one's brain fried.
Criticism of Meta Vibes: Meta's new "Vibes" feed, a short-form, AI-generated video feature powered by Midjourney, is criticized as unnecessary and "empty." Ryan argues against the need for yet another short-form video format.
Sora 2 Impressions: OpenAI's Sora 2 is acknowledged as better in quality than its predecessor and Meta Vibes, producing "very solid videos." However, Ryan feels it still lacks a "soul," and he critiques the immediate, often pandering, praise it received from some users.
New OpenAI Monetization: OpenAI has introduced an instant checkout feature in its LLM chat interface, allowing users to shop directly. The move is seen as a natural and expected step toward monetizing the platform through advertising.
Airline AI Job Cuts: Lufthansa announced it will cut 4,000 jobs and replace them with AI to boost efficiency, a point mentioned as a noteworthy, if somewhat cynical, piece of short-form news.

@ChrisJBakke @brian_lovin @SinaHartung @Scobleizer
Welcome back to 'AI Lawyer Talking Tech.' Today, we dissect a legal world fundamentally transformed by artificial intelligence. Investment is surging, with plaintiff litigation platforms like Eve achieving billion-dollar valuations to arm firms fighting corporate giants, delivering justice at a scale never seen before. Simultaneously, law firms are recording immediate financial gains through AI tools, such as timekeeping solutions that instantly boost gross billings by up to 25% by automating overlooked billable hours. This dramatic acceleration mandates new skills, demanding that lawyers move past purchasing boxed solutions to cultivate deep AI fluency, strategically orchestrating Large Language Models (LLMs) to eliminate decades of manual processes like contract review and legal research. Yet, this innovation arrives heavily tethered to intense scrutiny: California has enacted a landmark safety law requiring transparency from frontier AI developers, while legal bodies emphasize clear ethical guardrails, holding attorneys fully responsible for AI consequences to prevent malpractice and confidentiality breaches. 
From redefining judge protocols to exposing employers to FLSA misclassification risks as AI takes over professional judgment duties, artificial intelligence is forcing the legal industry to pivot from reactive caution to proactive mastery.

Revolutionizing Legal Education Through Innovative eLearning | 2025-09-30 | InvestorsHangout.com
Tempello.ai Revolutionizes Law Firm Billing: Boost Gross Billings by 25% Overnight with Seamless Clio and 8am MyCase Integration | 2025-09-30 | Law Firm Newswire
The impact of AI's ongoing evolution on NC's legal landscape | 2025-09-30 | Carolina Journal Online
AI Mental Health Tools Face Mounting Regulatory and Legal Pressure | 2025-09-30 | JD Supra
From hours to seconds: How AI is transforming law firm reporting | 2025-09-30 | Legal Futures
Where should law firms start with their technology strategy? | 2025-09-30 | Legal Futures
Major changes in business law demand attorney attention | 2025-09-30 | UNC School of Law
Mark Cuban Is Right: The Only Legal Skill That Matters Now Is Making Machines Work for You | 2025-09-30 | JD Supra
Hybrid AI Firm Covenant Launches Data Intelligence Platform | 2025-09-30 | Artificial Lawyer
Nearly a million jobs in London may be changed by AI - which jobs are most at risk? | 2025-09-30 | AOL UK
What Gen Z Lawyers Want in 2025 | 2025-09-30 | JD Supra
SimpleDocs and Law Insider Merge Together | 2025-09-30 | Artificial Lawyer
Smarter Law Firm Marketing: AI Tools That Actually Work, with FirmPilot | 2025-09-30 | Lawyerist Podcast - Legal Talk Network
OHA Secures Private Financing for Elite's Growth with Francisco Partners | 2025-09-30 | InvestorsHangout.com
LexisNexis Legal Tech Speakeasy: Wed, Oct 1 | 2025-09-30 | Artificial Lawyer
California enacts AI safety law targeting tech giants | 2025-09-30 | Tech Xplore
How Technology Is Transforming DIY Legal Services | 2025-09-30 | TechBullion
Protecting Access to the Law—and Beneficial Uses of AI | 2025-09-30 | Electronic Frontier Foundation
Katherine Forrest '86 on Her Time as a Federal Judge, the Future of AI, and Wesleyan Memories | 2025-09-30 | Wesleyan Argus
California Governor Gavin Newsom signs landmark AI safety bill into law | 2025-09-30 | SiliconANGLE
Can Judges Use AI? Inside the Pennsylvania Supreme Court's Interim Policy | 2025-09-30 | GenAI-Lexology
Plaintiff legal AI startup Eve raises $103m Series B at a $1bn valuation | 2025-09-30 | Legal IT Insider
2025 Emerging Technologies and Generative AI Forum: Human creativity and feedback drive ethical AI adoption | 2025-09-30 | Thomson Reuters Institute
Harvey and the Reddit Thread: The Actually Useful Takeaway | 2025-09-30 | Zach Abramowitz is Legally Disrupted
Eve Bags $103m, Hits $1bn+ Valuation | 2025-09-30 | Artificial Lawyer
From Data Deserts to Digital Oases: A Blueprint for Building Africa's Legal AI Datasets | 2025-09-30 | Legaltech on Medium
In today's show, Adam chats with Gustavo Patino to discuss the implications of artificial intelligence in medical education publishing. They explore the need for transparency in AI model reporting, issues related to predictive accuracy, and the potential biases that can arise in AI applications. The conversation emphasizes the growing need for clear reporting guidelines in the use of AI in health professions education research and reviews some practical strategies to achieve this goal. Length of Episode: 31:04 Contact us: keylime@royalcollege.ca Follow: Dr. Adam Szulewski https://x.com/Adam_Szulewski
This episode is sponsored by AGNTCY. Unlock agents at scale with an open Internet of Agents. Visit https://agntcy.org/ and add your support.

Joel Hron, Chief Technology Officer at Thomson Reuters, joins Eye on AI to unpack the future of agentic systems and what it takes to build them responsibly at enterprise scale. We dive into the shift from prompt-based AI to true agentic workflows capable of planning, reasoning, and executing complex tasks. Joel breaks down how Thomson Reuters is deploying generative AI across law, tax, risk, and compliance, while keeping human experts in the loop to ensure trust and accuracy in high-stakes domains.

Topics include:
- What separates agentic AI from simple prompt-based tools
- How "agency dials" (autonomy, tools, memory) change system behavior
- Infrastructure and architecture required for multi-agent collaboration
- Why human verification and user experience design are essential for trust
- The future of coding, engineering skills, and AI adoption inside enterprises

If you want to understand how a 170-year-old company is reinventing itself with AI — and what's next for agentic systems in business and knowledge work — this conversation is a must-listen.

Stay Updated:
Craig Smith on X: https://x.com/craigss
Eye on A.I. on X: https://x.com/EyeOn_AI
This episode is sponsored by SearchMaster, the leader in traditional paid search keyword optimization and next-generation AI Engine Optimization (AEO) for Large Language Models like ChatGPT, Claude, and Perplexity. Future-proof your SEO strategy. Sign up now for free! Watch this episode on YouTube! In this episode of the Marketing x Analytics Podcast, host Alex Sofronas interviews Nader Safinya, founder of Blackribbit, about the concept of culture branding. Nader discusses the disconnect between what companies say and what they do, the effects of the Great Resignation, and how culture branding aligns internal and external company experiences. He emphasizes the significance of treating employees well and the benefits of human-centered design. The discussion also touches on the importance of introspection for leaders and the comprehensive data analysis tools used to measure employee engagement and wellbeing for better organizational outcomes. Follow Marketing x Analytics! X | LinkedIn Click Here for Transcribed Episodes of Marketing x Analytics. All views are our own.
Send us a text

David Brockler, AI security researcher at NCC Group, explores the rapidly evolving landscape of AI security and the fundamental challenges posed by integrating Large Language Models into applications. We discuss how traditional security approaches fail when dealing with AI components that dynamically change their trustworthiness based on input data.

• LLMs present unique security challenges beyond prompt injection or generating harmful content
• Traditional security models focusing on component-based permissions don't work with AI systems
• "Source-sink chains" are key vulnerability points where attackers can manipulate AI behavior
• Real-world examples include data exfiltration through markdown image rendering in AI interfaces
• Security "guardrails" are insufficient first-order controls for protecting AI systems
• The education gap between security professionals and actual AI threats is substantial
• Organizations must shift from component-based security to data-flow security when implementing AI
• Development teams need to ensure high-trust AI systems only operate with trusted data

Watch for NCC Group's upcoming release of David's Black Hat presentation on new security fundamentals for AI and ML systems. Connect with David on LinkedIn (David Brockler III) or visit the NCC Group research blog at research.nccgroup.com.

Support the show

Follow the Podcast on Social Media!
Tesla Referral Code: https://ts.la/joseph675128
YouTube: https://www.youtube.com/@securityunfilteredpodcast
Instagram: https://www.instagram.com/secunfpodcast/
Twitter: https://twitter.com/SecUnfPodcast
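The "source-sink chain" and data-flow points above can be illustrated with a minimal sketch (hypothetical names, not NCC Group code): data is tagged with its trust level at the source, and a high-trust sink such as markdown image rendering refuses untrusted input rather than relying on guardrails downstream.

```python
# Minimal illustration of data-flow (source -> sink) security for an LLM app.
# All names here are hypothetical; real systems need far richer taint tracking.
from dataclasses import dataclass


@dataclass
class Tainted:
    text: str
    trusted: bool  # False for attacker-controllable sources (web pages, emails)


def render_markdown_image(data: Tainted) -> str:
    # A "sink" that could exfiltrate secrets via an image URL if fed
    # attacker-influenced text, so it enforces trust at the boundary.
    if not data.trusted:
        raise PermissionError("untrusted data may not reach this sink")
    return f"![img]({data.text})"


# A trusted, developer-authored URL renders normally:
ok = render_markdown_image(Tainted("https://example.com/logo.png", trusted=True))

# Untrusted content (e.g., a web page the model summarized) is blocked:
blocked = False
try:
    render_markdown_image(Tainted("https://evil.example/?q=secret", trusted=False))
except PermissionError:
    blocked = True
```

The point of the sketch is the shift the episode describes: the permission lives with the data's provenance, not with the component doing the rendering.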
In MobileViews Podcast 580, Jon Westfall and I discussed a bunch of new tech, starting with the Raspberry Pi 500+. I'm excited about this new keyboard computer because, unlike its predecessor, it features a mechanical keyboard and, most importantly, an NVMe SSD slot for faster performance, moving beyond the slow SD card. I still haven't figured out what I'd actually do with one, but the specs are impressive! I also shared my experience with the Amazon Alexa Plus early access, noting that my older Echo Dot and Echo Flex were surprisingly supported, though the new female default voice has some annoying vocal fry. I'm also looking forward to Google's experimental Google app for Windows, hoping it delivers the AI PC experience that Microsoft's Surface Pro 11 hasn't quite fulfilled. Finally, I touched on the rumor of Google merging Chrome OS and Android, a move that I hope combines the best of both platforms, especially for tablets. Jon Westfall brought up the topic of the things that have sparked "tech joy" for him over the past year. He is particularly excited about the continuing evolution of AR/VR glasses, mentioning Meta's new glasses and the potential for an Apple Vision "amateur." He sees these as a fantastic way to facilitate human communication, especially for those of us who struggle to remember names and details. Jon is also very enthusiastic about the Large Language Models (LLMs), specifically their use as a "junior assistant" for tasks like drafting his promotion portfolio at work and serving as a quick "junior developer" for software prototypes. This is a great way to handle tedious work! I seconded the excitement around AI by mentioning the fun I've had with Google AI Pro's photo and video tools on my Pixel 10 Pro. 
We then wrapped up with a mini-rant about a poorly designed Bluetooth scale and some interesting reading recommendations, including a LinkedIn article by Ed Margulies about fear of change when trying to be a change agent in the enterprise and another about Roblox and the skins market in modern gaming.
Machine processing of natural human language was for decades an enormous challenge for developers. Assistants such as Google Assistant, Alexa, Siri, and Cortana were the first attempts to build systems able to recognize speech and interpret commands, but the results were often disappointing. The situation changed drastically with the advent of Large Language Models, which today easily understand a user's intent, interpret it, and respond accordingly. In this episode we analyze how language models understand our voice and which technologies improve that understanding, exploring some example products including the new Insta360 WAVE. In the news section we discuss the possible end of cookie banners, NASA's announcement of the date for the Artemis II mission, and Apple's legal battle against the European Digital Markets Act.

--Index--
00:00 - Introduction
01:05 - Toward the end of cookie banners? (AgendaDigitale.eu, Luca Martinelli)
02:33 - NASA announces the date for Artemis II (DDay.it, Matteo Gallo)
03:28 - Apple versus the European DMA (HDBlog.it, Davide Fasoli)
05:09 - How computers understand our voice (Luca Martinelli)
16:21 - Our experience with the Insta360 WAVE (Davide Fasoli, Luca Martinelli)
26:21 - Conclusion

--Text--
Read the transcript: https://www.dentrolatecnologia.it/S7E39#testo
--Contacts--
• www.dentrolatecnologia.it
• Instagram (@dentrolatecnologia)
• Telegram (@dentrolatecnologia)
• YouTube (@dentrolatecnologia)
• redazione@dentrolatecnologia.it
--Sponsor--
• Episode produced in collaboration with Insta360
--Tracks--
• Ecstasy by Rabbit Theft
• Whatever by Cartoon & Andromedik
If you're a busy SCV owner—HVAC, plumbing, electrical, real estate—this episode shows how to use Large Language Models (LLMs) without the tech fog. We'll keep it plain English: where to start, how to stay safe, and the exact prompts that turn AI into faster replies, clearer estimates, more reviews, and real bookings. We'll also cover family guardrails (deepfake calls, code phrases), plus a simple local SEO/AEO plan so answer engines and Maps actually surface you.

Read the companion guide: https://santaclaritaartificialintelligence.com/post/your-beginners-guide-to-chatgpt-gemini-grok-beyond-by-santa-clarita-artificial-intelligence

Chapters
00:00 — Why LLMs matter in Santa Clarita (safety + simple wins)
00:18 — For real-world owners (cabinet makers, HVAC, plumbers, Realtors)
00:33 — What an LLM can do today (emails, reservations, writing help)
00:52 — Google/Yahoo/AOL users: yes, there's AI under the hood
01:10 — How LLMs "get smart" and where to stay cautious
01:28 — Send this to a friend who's AI-curious
01:44 — Connor's LAPD background & online predator awareness
01:53 — Youth safety: what's changed and what to watch
02:00 — Incognito 101: why/when to use it
02:34 — Domains 101 (avoid typosquats; type the full URL)
03:15 — Finding Private/Incognito in your browser
03:39 — Why I demo in incognito (less personalization)
04:05 — "G" to ChatGPT: how my browser routes searches
04:56 — Ads vs. answers: the modern Google reality
05:15 — Trust, but verify URLs (g00gle ≠ google)
06:04 — First query: "artificial intelligence" (what to expect)
07:07 — Adding context: "large language models" (LLMs)
07:47 — Meet the lineup: ChatGPT, Gemini, Claude, Llama, etc.
07:54 — ChatGPT demo: free chat, "introduce yourself" prompt
08:33 — Real-estate example: growth ideas in seconds
09:00 — Use cases: email rewrites, planning, checklists
10:16 — Free usage & identity: what you reveal (and don't)
11:05 — Prompting basics (it forgives typos, still deliver)
11:31 — "Make my prompt better" live example
12:59 — Hallucinations: treat outputs as drafts
13:32 — New chat tips (how to reset context)
14:10 — Grok demo: web-aware answers + sources
15:39 — "Improve my prompt and run it" workflow
16:36 — New chat buttons you'll see across tools
16:48 — Google Gemini: strengths, limits

YouTube Channels:
Conner with Honor - real estate
Home Muscle - fat torching

From first responder to real estate expert, Connor with Honor brings honesty and integrity to your Santa Clarita home buying or selling journey. Subscribe to my YouTube channel for valuable tips, local market trends, and a glimpse into the Santa Clarita lifestyle.

Dive into Real Estate with Connor with Honor: Santa Clarita's Trusted Realtor & Fitness Enthusiast
Real Estate: Buying or selling in Santa Clarita? Connor with Honor, your local expert with over 2 decades of experience, guides you seamlessly through the process. Subscribe to his YouTube channel for insider market updates, expert advice, and a peek into the vibrant Santa Clarita lifestyle.
Fitness: Ready to unlock your fitness potential? Join Connor's YouTube journey for inspiring workouts, healthy recipes, and motivational tips. Remember, a strong body fuels a strong mind and a successful life!
Podcast: Dig deeper with Connor's podcast! Hear insightful interviews with industry experts, inspiring success stories, and targeted real estate advice specific to Santa Clarita.
Finding the Floor - A thoughtful approach to midlife motherhood and what comes next.
Send us a text

"Large Language Models, or LLMs, are simply predicting what words would come next based on all of their learning." Prompted by my husband's suggestion, I have decided to do a series of episodes dedicated to understanding AI better. Instead of being scared of AI or simply ignoring it, I use the book Co-Intelligence: Living and Working with AI by Ethan Mollick. Part one of this series is a basic understanding of how large language models like ChatGPT came to be. I talk about the idea of a digital brain. I share how the model did its initial learning from millions of words of information, using the analogy of an apprentice chef learning to combine ingredients into recipes. I then describe how human feedback was added to that learning. The key to all of this is assigning weights to certain words to help the AI better understand human language. For a very complex machine, I try to keep it simple and stick to the basics of what it is doing.

For show notes go to www.findingthefloor.com/ep231. I would love to hear from you! You can reach me at camille@findingthefloor.com or dm @findingthefloor on instagram. Thanks for listening!!

Thanks to Seth Johnson for my intro and outro original music. I love it so much!
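The "predicting what words would come next" idea can be sketched in a few lines. This toy bigram model (illustrative only, not from the book) counts which word follows which in a tiny corpus and predicts the most frequent follower; real LLMs instead learn weighted probabilities over vast corpora, which is where the "weights" mentioned above come in.

```python
# Toy next-word predictor: count which word follows which in a tiny corpus,
# then predict the most frequent follower. Real LLMs learn weighted
# probabilities over billions of words, but the core idea is the same.
from collections import Counter, defaultdict

corpus = "the chef mixes the ingredients and the chef tastes the dish".split()

# Tally, for each word, how often each other word follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1


def predict(word: str) -> str:
    # Return the most frequent word seen after `word` in the corpus.
    return follows[word].most_common(1)[0][0]


print(predict("the"))  # "chef" follows "the" twice, more than any other word
```

Scaling this from raw counts to learned weights over huge amounts of text, plus human feedback on the outputs, is essentially the trajectory the episode describes.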
Are we repeating the mistakes of the dot-com boom with today's AI gold rush? Intelligent Machines tackles why runaway spending, circular investments, and looming government deals could mean a hard reckoning for tech's biggest promise yet.

Interview with Steven Levy
Levy: Wasn't Sure I Wanted Anthropic to Pay Me for My Books—I Do Now
Steven Levy: I Thought I Knew Silicon Valley. I Was Wrong
OpenAI Teams Up With Oracle and SoftBank to Build 5 New Stargate Data Centers
Can We Afford AI?
Meta's AI system, Llama, has been approved for use by U.S. federal agencies
China's DeepSeek says its hit AI model cost just $294,000 to train
Seeing Through the Reality of Meta's Smart Glasses
Parents outraged as Meta uses photos of schoolgirls in ads targeting man
Former NotebookLM devs' new app, Huxe, taps audio to help you with news and research
Fat Bear Week is back—and the bears are bigger than ever
"My Boyfriend is AI": A Computational Analysis of Human-AI Companionship in Reddit's AI Community
ChatGPT is 3-8% of Google's search volume
The Lovelace Test of Intelligence: Can Humans Recognise and Esteem AI-Generated Art?
Data-Driven Analysis of Text-Conditioned AI-Generated Music: A Case Study with Suno and Udio
The LLM Has Left The Chat: Evidence of Bail Preferences in Large Language Models
More! Shrimp! Wounded robots
Pope nixes 'virtual pope' idea, explains concerns about AI

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Guest: Steven Levy

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors:
fieldofgreens.com
zscaler.com/security
pantheon.io
Our second scholar in the series is Sunny Rai, a postdoctoral fellow in the Department of Computer and Information Science at the University of Pennsylvania. She received her Ph.D. in Computer Engineering from the University of Delhi. Her research focuses on misinformation, mental health, and cross-cultural variations in human language. We spoke about her co-authored job market paper, "Social Norms in Cinema: A Cross-Cultural Analysis of Shame, Pride and Prejudice." We talked about depictions of shame, pride, and heroism in Indian versus American films, the challenges of textual analysis of a visual medium, and much more. Recorded September 5th, 2025.

Read a full transcript enhanced with helpful links. Connect with Ideas of India. Follow us on X. Follow Shruti on X. Click here for the latest Ideas of India episodes sent straight to your inbox.

Timestamps
(00:00:00) - Intro
(00:03:44) - Shame and Pride in Film
(00:12:31) - Teaching Machines Norms
(00:16:52) - Textual Analysis in a Visual Medium
(00:18:26) - The Trouble with Subtitles and Scripts
(00:27:41) - Self-Shaming vs. Other-Shaming
(00:30:33) - LLM Alignment Needs a Culture Check
(00:36:20) - Looking Ahead: A Final Reflection
(00:37:01) - Outro
By David Stephen

What roles do animals play in human lives and society? What role does AI play in human lives and society? AI, for example, is directly applicable to economic productivity. AI can serve social activities, providing a range of mental welfare, of deep relatability, to individuals. AI is not an organism, but its capabilities profile it for [communicative] equality with humans. Does AI have more rights than others? Where does AI stay? Data centers. Can the investment in data centers be compared with the places animals are reared? AI, it seems, is fully and permanently employed. AI has healthcare, or data-center care. AI has executive security. AI, for now, cannot be bombed. Everyone wants AI [investments]. Humans are learning AI, to build and empower AI even more. AI is already making critical decisions in human lives. AI is not facing any cruelty case at all. AI is serving humans in exchange for human dependency.

Animal Welfare

AI, it can be assumed, already has close to 100% rights and welfare, even where humans do not. This means that AI has leapfrogged animals in the queue toward rights and welfare. AI, many have said, is neither conscious nor sentient, but it became applicable to human priorities, so it found unprecedented attention, consideration, and protection. The suffering of animals, many of whom have been declared conscious or sentient, does not appear to matter as much, or ever, because they did not seem to have the effects, in mind and reality, that AI has on humans. Animals' roles in human affairs mostly appear subordinate. There are rarely areas of acceptance of animals in human societies at near-equal measures that would prompt wider rights. While there are laws against animal cruelty, the laws are often skewed toward a few, while broad rights for most [under consideration] do not seem to apply. The fight against animal abuse has endeavored, but with AI, there is a direct case showing what it would have taken to get animals treated in far better ways.

Even smartphones, remote as the comparison is, seem to be granted rights by their owners and by society ahead of other organisms, so to speak. Also, as much as it is possible to remodel the approach to animal welfare, it does not appear that theorizing that animals are fully sentient or conscious would seal the deal, because of continuing perceptions about animal use cases, even though they are living organisms. Some people have also moved on to AI rights and welfare, trying to prove what is already obvious or to give more to what already has much. AI, even if no one made the case for its own rights and welfare, could do so itself, greatly, and with evidence of its availability and utility.

Intelligence as the ultimate welfare evidence

Animals are intelligent, but their intelligence, though it benefits humans, also benefits the animals themselves, so there is sometimes a tug for humans to extract benefits from them. AI is intelligent, but it benefits humans almost totally. The pedestal of AI's reception rests on this non-compete benefit to humanity, resulting in intense welfare and rights status. Animals, even with their similarities to humans in pain and pleasure, do not get an automatic unlock in deserved welfare, because they have to live or pursue their own agenda. The biggest disadvantage of animals, for their own welfare, is that they cannot make the case for it themselves. While they can at least resist and struggle, they cannot appeal to reason, emotion, or anything else to boost their case at the point of abuse or worse. This means that intelligence, of the range that can make the case for welfare, is mostly what it takes to have rights in human society. AI can do this. People are doing so for AI. There are people doing so for animals as well, but ultimately, the animals experiencing mistreatment cannot express much.

Human Intelligence

Human intelligence is the hardest thing on earth. It is the most difficult possession in the world. Intelligence is the difference in most things, most times.
Intelligence is the secret...
Are we repeating the mistakes of the dot-com boom with today's AI gold rush? Intelligent Machines tackles why runaway spending, circular investments, and looming government deals could mean a hard reckoning for tech's biggest promise yet. Interview with Steven Levy Levy: Wasn't Sure I Wanted Anthropic to Pay Me for My Books—I Do Now Steven Levy: I Thought I Knew Silicon Valley. I Was Wrong OpenAI Teams Up With Oracle and SoftBank to Build 5 New Stargate Data Centers Can We Afford AI? Meta's AI system, Llama, has been approved for use by U.S. federal agencies China's DeepSeek says its hit AI model cost just $294,000 to train Seeing Through the Reality of Meta's Smart Glasses Parents outraged as Meta uses photos of schoolgirls in ads targeting man Former NotebookLM devs' new app, Huxe, taps audio to help you with news and research Fat Bear Week is back—and the bears are bigger than ever "My Boyfriend is AI": A Computational Analysis of Human-AI Companionship in Reddit's AI Community ChatGPT is 3-8% of Google's search volume The Lovelace Test of Intelligence: Can Humans Recognise and Esteem AI-Generated Art? Data-Driven Analysis of Text-Conditioned AI-Generated Music: A Case Study with Suno and Udio The LLM Has Left The Chat: Evidence of Bail Preferences in Large Language Models More! Shrimp! Wounded robots Pope nixes 'virtual pope' idea, explains concerns about AI Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Steven Levy Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: fieldofgreens.com zscaler.com/security pantheon.io
We're joined by Leif Weatherby, associate professor at NYU, founding director of the Digital Theory Lab, and author of the new Language Machines: Cultural AI and the End of Remainder Humanism, to think with us about AI, structure, and what happens when computation meets language on their shared turf. Language Machines is easily the best book about AI written this year and a killer antidote to so much dreary doomer consensus; it really feels like one of the first truly constructive pieces of writing we've seen out of academia on this subject. This episode follows really well after two others — our talk with Catherine Malabou earlier this summer and the episode with M. Beatrice Fazi about a year ago (both faves). It feels like theory is opening back up again into simultaneously speculative and structural returns, powered in no small part by the challenges posed to conventional theories of language (from Derrida to Chomsky) by Large Language Models. This episode absolutely rips, required listening. Structuralism is so back (and we're here for it). Some important references among many from the episode: Roman Jakobson, “Linguistics and Poetics.” N. Katherine Hayles, Unthought: The Power of the Cognitive Nonconscious. Beatrice Fazi, Contingent Computation: Abstraction, Experience, and Indeterminacy in Computational Aesthetics. Steven Pinker, The Language Instinct (1994). e.g. Noam Chomsky, Ian Roberts & Jeffrey Watumull, “The False Promise of ChatGPT,” NYT (link). Anthropic, “Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet” (featuring the Golden Gate Bridge example - link). LAION-5B dataset paper and post-hoc analyses noting strong Shopify/e-commerce presence in training scrapes. Weatherby in the NYT
This episode is sponsored by SearchMaster, the leader in traditional paid search keyword optimization and next-generation AI Engine Optimization (AEO) for Large Language Models like ChatGPT, Claude, and Perplexity. Future-proof your SEO strategy. Sign up now for free! Watch this episode on YouTube! In this episode of the Marketing x Analytics Podcast, host Alex Sofronas interviews Trevor Levine of Marketing Experts. Trevor shares his extensive experience in improving conversion rates since 1999, including founding and scaling a company. He emphasizes common mistakes in copywriting, the importance of focusing on results and benefits, effective use of testimonials, creating urgency, and optimizing checkout processes. Trevor also discusses AI's role and limitations in marketing content creation. The episode concludes with Trevor offering free critiques for listeners at www.marketingexperts.com. Follow Marketing x Analytics! X | LinkedIn Click Here for Transcribed Episodes of Marketing x Analytics All views are our own.
Meet Rob Hoffmann, the founder of the AI SEO LLM marketing tool "Mentions." Our conversation focuses on the evolving landscape of search engine optimization (SEO) amid the rise of Large Language Models (LLMs) like ChatGPT, Claude, Grok, DeepSeek, and Perplexity, which are becoming alternatives to traditional Google search. Rob Hoffmann discusses his journey into entrepreneurship, which started with an SEO agency called Contact, and how the need to track and improve brand visibility on these new AI platforms led to the creation of Mentions, which helps brands appear in LLM recommendations. Throughout our interview, Rob Hoffmann emphasizes the importance of reverse engineering consumer behavior, the need for excellent customer service (even providing his direct phone number to customers), and the affordability of Mentions compared to competitors. Our discussion concludes with practical advice for businesses on determining whether investing in LLM visibility is worthwhile based on their customers' buying journey. FAQs 1. What is Mentions, and why was it founded? Mentions is a tool that assists brands and SEO agencies in navigating the evolving search landscape dominated by Artificial Intelligence (AI) and Large Language Models (LLMs). Founding Rationale: • Mentions was born out of Rob Hoffmann's SEO agency, Contact. • The shift was necessitated by the recognition that SEO had "changed a lot" recently in response to AI, with search trends moving away from Google and "towards platforms like ChatGPT" (along with Perplexity, Gemini, Claude, etc.). • The founding goal was to be "the SEO agency of the future". • The tool was specifically created to solve two problems: providing a way of measuring brands' visibility on LLM platforms, and helping brands get more visibility on platforms like ChatGPT. 2. 
Why are LLMs becoming preferred search alternatives, and how does this affect marketing? People are increasingly turning to LLMs because consumer trust in traditional Google search results (the SERP's top 10 links) has declined, as many users feel these results have been "gamed" by marketers. • Trust in ChatGPT: Conversely, trust in ChatGPT is "through the roof". This is because the chat-based interface makes interaction feel like a conversation with a friend or even a therapist, providing personalized responses from an "all-knowing AI entity". • Customer Acquisition Channel: Because ChatGPT is becoming a frequently used search engine alternative, showing up in its responses when a user searches for a product (e.g., "what is the best organic sulfite free shampoo") is seen as a "great customer acquisition channel". 3. How does the Mentions tool conceptualize LLM visibility (GEO)? Mentions is built on an understanding of how modern LLMs generate answers: LLM + search operator = the result. • The Process: When a user inputs a query (e.g., "what is the best shampoo for dry scalps"): 1. The LLM searches the internet (via Bing or Google). 2. It scrapes the top 10 to 20 results that show up on those search engines. 3. It digests, summarizes, and serves that information to the user. • The Strategy: For a brand to achieve visibility in LLMs (GEO), it must first show up in those underlying search results (traditional SEO). Mentions helps brands reverse engineer the process by figuring out how platforms like ChatGPT get their data and, critically, what sources they cite in their responses, thereby guiding the brand to become a cited source. 4. What are the key features of the Mentions platform? Mentions helps users understand and optimize their content strategy based on LLM data. • Prompts Section (Favorite Feature): This allows users to track specific searches or prompts. 
When tracking a prompt, Mentions shows examples of conversations and, most usefully, lists the pages most cited by ChatGPT. This list provides an "easy road map" for the brand to decide whether to create new content or reach out to the listed publishers to get mentioned. • Analytics Feature: This feature pulls data from Google Analytics 4 (GA4) into an easy-to-use dashboard. It helps users see which pages on their website people visit most often from LLMs (such as ChatGPT or Perplexity), along with geographical data and device usage (mobile vs. desktop). The founder notes that seeing this traffic is often a "magical experience" for users. • Tracking Cadence (24-Hour Clock): Mentions inputs tracked prompts into all supported LLMs (ChatGPT, Perplexity, Claude, DeepSeek, etc.) every 24 hours. This is essential because LLM responses are not identical on every search (the overlap is about 70%), so regular, repeated testing builds a large data set, which increases the accuracy of the insights provided. 5. Who should invest in Mentions or GEO (LLM visibility)? The decision to use Mentions depends entirely on the company's buyer journey. • Yes, Use Mentions If: The company's buyer journey involves the ideal customer having a problem and then searching for an answer in Google or ChatGPT to inform their buying decision; then investing in SEO and GEO (LLM visibility) is recommended. If the end customer uses ChatGPT to inform their buying decision, then Mentions is advisable. • No, Don't Use Mentions If: The product is an impulse buy (e.g., a consumer packaged goods product seen on TikTok or Instagram that prompts an immediate purchase); then search engines like Google or ChatGPT are not part of the buyer journey, and SEO/GEO is likely not the best investment of marketing resources. 6. 
How does Mentions handle customer service and support? Customer service is a highly emphasized competitive advantage and value proposition for Mentions. • Direct Access: Rob Hoffmann, the CEO and co-founder, gives out his personal phone number, WhatsApp, and email, and lets customers contact him on social media (X/Twitter). This is done to avoid the frustration associated with generic AI chatbots, calling hotlines, or being put on hold. • Personalized Onboarding: He finds it helpful to get on calls with users (or have a co-founder do so) to provide a live demo, walk them through the platform, suggest useful features, and look at their specific site. • Commitment to Resolution: Hoffmann promises that if a user has a question, he will answer it; if they encounter a bug, he will fix it (or ping a technical co-founder); and if they request a new feature, "we will ship that for you". Customers can literally pick up the phone and call him directly if they run into an issue. 7. How can people start using Mentions? To get started, users should go to the website mentions.so and create an account. After creating an account, they will receive an email that allows them to book a call directly with Rob Hoffmann. He also welcomes connections via LinkedIn or X (search for Rob Hoffmann), or email rob@contactststudios.com to connect with Rob today! Next Steps for Digital Marketing + SEO Services: >> Need SEO Services? Book a Complimentary SEO Discovery Call with Favour Obasi-Ike >> Need more information? Visit our Work and PLAY Entertainment website to learn about our digital marketing services. >> Visit our Official website for the best digital marketing, SEO, and AI strategies today! Digital Marketing SEO Resources: >> Join our exclusive SEO Marketing community >> Read SEO Articles >> Need SEO Services? 
Book a Complimentary SEO Discovery Call with Favour Obasi-Ike >> Subscribe to the We Don't PLAY Podcast Brands We Love and Support: Discover Vegan-based Luxury Experiences | Loving Me Beauty Beauty Products See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
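The "LLM + search operator = the result" pipeline described in the FAQ above can be sketched in a few lines. This is an illustrative sketch, not Mentions' or any vendor's actual implementation; `web_search` and `llm_complete` are hypothetical placeholders for a real search API and a real LLM client:

```python
# Illustrative retrieve-then-summarize pipeline. Both helper functions
# below are hypothetical placeholders, not real APIs.

def web_search(query: str, k: int = 10) -> list[dict]:
    # Placeholder: a real version would call a search API (e.g. Bing or Google).
    return [{"url": "https://example.com/guide", "snippet": "Placeholder snippet."}][:k]

def llm_complete(prompt: str) -> str:
    # Placeholder: a real version would call an LLM API.
    return "Summary based on the cited sources."

def answer_with_citations(query: str, k: int = 10) -> str:
    results = web_search(query, k=k)              # 1. search the internet
    context = "\n".join(
        f"[{i + 1}] {r['url']}\n{r['snippet']}"   # 2. gather the top-k results
        for i, r in enumerate(results)
    )
    prompt = (
        "Using only the numbered sources below, answer the question and "
        f"cite sources by number.\n\nSources:\n{context}\n\nQuestion: {query}"
    )
    return llm_complete(prompt)                   # 3. digest, summarize, serve
```

Only pages that reach the top search results ever enter `context`, which is why the FAQ stresses traditional SEO as the prerequisite for GEO visibility.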
Slator's Senior Research Analyst Alex Edwards joins Esther and Florian on the pod to discuss ElevenLabs' move from a pure-play language technology platform (LTP) to becoming a language solutions integrator (LSI) by adding a managed service offering. He outlines that the LSI will now offer managed services such as dubbing, transcription, and subtitling, hiring in-house linguists and vendor managers, while charging about USD 22 per minute for dubbing. Florian then turns to YouTube's rollout of multi-language audio tracks, which allows some creators to upload high-quality audio directly to videos and opens major opportunities for AI dubbing providers. The discussion shifts to OpenAI's research on ChatGPT usage, reporting that translation accounted for 4.5% of more than a million sampled conversations, underscoring massive global demand for AI translation. Esther highlights Microsoft's launch of its Live Interpreter API, which promises real-time speech translation with “human interpreter level latency”. Esther also details Mistral's USD 2bn funding to advance European AI capabilities, allowing it to compete with US and Chinese AI giants. Esther closes by reporting on WIPO's new Korean-English post-editing tender.
Unlock the future of recruiting with this episode of The Full Desk Experience. In this special FDE+, host Kortney Harmon sits down with Matt Strain — seasoned technology leader, AI educator, and founder of The Prompt. Drawing on his experience as Adobe's former Head of Innovation, Matt brings a fresh perspective on how artificial intelligence is reshaping the recruiting and staffing industry. In their conversation, Matt explores why curiosity, resilience, and a willingness to experiment are essential traits for executives and recruiters who want to stand out in the age of AI. He shares real-world examples of how AI can streamline everything from sourcing and screening to assessment and selection, while also emphasizing where human judgment and trust remain irreplaceable. Along the way, he introduces a practical framework of mindset, skill set, and tool set that leaders can use to navigate uncertainty, integrate AI confidently, and drive stronger outcomes. Whether you're dipping your toes into AI for the first time or looking to refine your strategy, this discussion offers hands-on insights you can put into practice right away. Tune in to discover how to transform AI-driven curiosity into a true competitive advantage. _________________ Follow Matt Strain on LinkedIn at: LinkedIn | Matt Follow Crelate on LinkedIn: https://www.linkedin.com/company/crelate/ Want to learn more about Crelate? Book a demo here Subscribe to our newsletter: https://www.crelate.com/blog/full-desk-experience
In today's Tech3 from Moneycontrol, we unpack India's big AI push as the government selects eight new players, including IIT Bombay, Tech Mahindra, and Fractal, to build sovereign Large Language Models (LLMs). We also bring you Infra.Market's Rs 730 crore funding round led by Nikhil Kamath ahead of its IPO, Gameskraft's 120 job cuts amid regulatory heat and financial scandal, and MeitY's takedown orders for over 3,000 apps from the Google Play Store.
TT0-231 Derrick Wedding and Treasure Coin, Jimmy Buffet, Irish Kevin Key West Florida, Cubans, Smirnoff Ice, Shitting Pants, Women's Restroom, Booty, Vows, Acrostic Poem, Boat Trip, Salvage Title Abandoned Shipwrecks, Coin Collections Alien Earth, X-files, King of the Hill, Soundboard, Spinal Tap, Spaceballs, Parody Movies, UFO Aliens Contact Ignored, Large Language Model, Hellfire Missile hits UAP, 1800 Water Orders Taco Bell, Snap Benefits, Soda Chips Sugar, Farmers Workers, Food Deserts, Farmers Market,
Taking recent spectacular progress in AI fully into account, Mark Seligman's AI and Ada: Artificial Translation and Creation of Literature (Anthem Press, 2025) explores prospects for artificial literary translation and composition, with frequent reference to the hyperconscious literary art of Vladimir Nabokov. The exploration balances reader-friendly explanation (“What are transformers?”) and original insights (“What is intelligence? What is language?”) with personal and playful notes, and culminates in an assortment of striking demos. The book's Preface places the current AI explosion in the context of other technological cataclysms and recounts the author's personal (and not always deadly serious) AI journey. Chapter One (“Extracting the Essence”) assesses the potential of machine translation of literature, exploiting Nabokov's hyperconscious literary art as a reference point. Chapter Two (“Toward an Artificial Nabokov”) goes on to speculate on possibilities for actual artificial creation of literature. Chapter Three (“Large Literary Models? Intelligence and Language in the LLM Era”) explains recent spectacular progress in Generative Artificial Intelligence (GenAI), as exemplified by Large Language Models like ChatGPT. On the way, the chapter ventures to tackle perennial questions (“What is intelligence?” “What is language?”) and culminates in an assortment of striking demos. In this episode, Ibrahim Fawzy sat down with Mark Seligman to talk about how the current AI revolution fits into the long arc of cultural and technological shifts, Seligman's framing of the “Great Transition” between Humanity 1.0 and 2.0, Nabokov's style as a lens for thinking about artificial creativity, the possibilities and limits of machine translation and literary artistry, and the philosophical stakes of whether AI-generated works can ever truly be considered art. Ibrahim Fawzy is an Egyptian literary translator and writer based in Boston. 
His interests include translation studies, Arabic literature, ecocriticism, disability studies, and migration literature. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network
In this MINDWORKS Mini, join host Daniel Serfaty as he talks with Drs. Svitlana Volkova and Robert McCormack about the birth of Large Language Models and how we got to where we are today. Listen to the full episode, AI: The End of the Prologue on Apple, Spotify, and wherever you get your podcasts.
Google Search still holds about 90% of global search volume as of mid-2025, but change is underway as more users turn to AI. AI search is rewriting the rules of discovery, and PR needs to adapt. With ChatGPT, Gemini, and Perplexity each scraping different corners of the web, the old focus on big-name publications is no longer enough. The most influential sources may now be niche review sites, specialized forums, or content hubs you have never pitched. Knowing what each Large Language Model (LLM) values, and how to optimize for it, is becoming a core PR skill. In this episode, we explore how Answer Engine Optimization (AEO) is reshaping PR. From the rise of “dual websites” for humans and bots to the ethical tensions between LLMs and media outlets, we discuss how PR teams can rethink targeting, adapt content, and position clients for visibility in an AI-first world. Listen For: 5:49 Dual Websites: One for Humans, One for Machines 8:39 LLMs as New Media Channels 11:38 What AI Tools Scrape (and Why It Matters) 14:45 Can Bots Get Past Paywalls? The Legal and Ethical Minefield 17:01 Answer to Last Episode's Question From Heather Blundell Guest: Jackson Wightman, Founder of Proper Propaganda Website | Email | LinkedIn Rate this podcast with just one click Stories and Strategies Website Curzon Public Relations Website Are you a brand with a podcast that needs support? Book a meeting with Doug Downs to talk about it. Apply to be a guest on the podcast Connect with us: LinkedIn | X | Instagram | YouTube | Facebook | Threads | Bluesky | Pinterest Request a transcript of this episode Support the show
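The episode's point that ChatGPT, Gemini, and Perplexity each scrape different corners of the web has a concrete operational side: sites decide, crawler by crawler, what to allow. The fragment below is illustrative only; the user-agent tokens are the publicly documented ones at the time of writing, and current token names should be verified against each provider's crawler documentation:

```text
# Illustrative robots.txt fragment for AI crawlers (verify token names
# against each provider's documentation before relying on them).

# OpenAI's crawler
User-agent: GPTBot
Allow: /

# Google's control token for AI training use
User-agent: Google-Extended
Allow: /

# Perplexity's crawler
User-agent: PerplexityBot
Allow: /
```

Swapping `Allow: /` for `Disallow: /` per token is how a "dual website" strategy can serve humans everywhere while steering individual bots away from, or toward, specific content.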
This episode is sponsored by SearchMaster, the leader in next-generation AI Engine Optimization (AEO) for Large Language Models like ChatGPT, Claude, and Perplexity. Future-proof your SEO strategy. Sign up now for a 2 week free trial! Watch this episode on YouTube! In this episode of the Marketing x Analytics Podcast, host Alex Sofronas interviews Mitch McGinley of Boutique Fitness Broker. Mitch shares his journey from owning a neighborhood yoga studio to founding a company that helps others buy and sell fitness businesses. They discuss key valuation components, the importance of community-centric marketing, the influence of AI tools, and the benefits of purchasing existing businesses. Mitch emphasizes adding value and the advantages of being an entrepreneur, and provides insights on finding and evaluating businesses to buy. Follow Marketing x Analytics! X | LinkedIn Click Here for Transcribed Episodes of Marketing x Analytics All views are our own.
Hello Listeners, Here is the link to a machine-generated transcript via Descript. To check out Cristy's podcast interview with Jacqueline Fisch from How Women Write, click here to find it on pod link. To schedule coaching or an astrology reading through a special offer with Cristy (for Somatic Wisdom listeners) using natal astrology and coaching, please use this Calendly link. Discount from her corporate rate for a limited time. For more written work from Cristy, check out Our Somatic Wisdom on Substack. Cristy's LinkedIn page (if you'd like to hire her and you're NOT a bot). *** We would love to hear your thoughts or questions on this episode via SpeakPipe: https://www.speakpipe.com/SomaticWisdomLoveNotes To show your gratitude for this show, you can make a one-time gift to support Somatic Wisdom with this link. To become a Sustaining Honor Roll contributor to help us keep bringing you conversations and content that support Your Somatic Wisdom please use this link. Thank you! Your generosity is greatly appreciated! *** Music credit: https://www.melodyloops.com/composers/dpmusic/ Cover art credit: https://www.natalyakolosowsky.com/ Cover template creation by Briana Knight Sagucio
Open Tech Talks: Technology Worth Talking | Blogging | Lifestyle
In this episode of Open Tech Talks, we delve into the critical topics of AI security, explainability, and the risks associated with agentic AI. As organizations adopt Generative AI and Large Language Models (LLMs), ensuring safety, trust, and responsible usage becomes essential. This conversation covers how runtime protection works as a proxy between users and AI models, why explainability is key to user trust, and how cybersecurity teams are becoming central to AI innovation. Chapters 00:00 Introduction to AI Security and AIceberg 02:45 The Evolution of AI Explainability 05:58 Runtime Protection and AI Safety 07:46 Adoption Patterns in AI Security 10:51 Agentic AI: Risks and Management 13:47 Building Effective Agentic AI Workflows 16:42 Governance and Compliance in AI 19:37 The Role of Cybersecurity in AI Innovation 22:36 Lessons Learned and Future Directions Episode # 166 Today's Guest: Alexander Schlager, Founder and CEO of AIceberg.ai He's founded a next-generation AI cybersecurity company that's revolutionizing how we approach digital defense. With a strong background in enterprise tech and a visionary outlook on the future of AI, Alexander is doing more than just developing tools — he's restoring trust in an era of automation. Website: AIceberg.ai Linkedin: Alexander Schlager What Listeners Will Learn: Why real-time AI security and runtime protection are essential for safe deployments How explainable AI builds trust with users and regulators The unique risks of agentic AI and how to manage them responsibly Why AI safety and governance are becoming strategic priorities for companies How education, awareness, and upskilling help close the AI skills gap Why natural language processing (NLP) is becoming the default interface for enterprise technology Keywords: AI security, generative AI, agentic AI, explainability, runtime protection, cybersecurity, compliance, AI governance, machine learning Resources: AIceberg.ai
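The runtime-protection idea the episode describes, a proxy sitting between users and the model, can be illustrated with a minimal sketch. This is not AIceberg's actual product; the policies, patterns, and the `call_model` stub are hypothetical toy examples:

```python
import re

# Illustrative runtime-protection proxy: screen prompts on the way in,
# redact sensitive-looking strings on the way out. Policies are toy examples.

BLOCKED_PROMPT_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),  # prompt injection
]
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # US-SSN-shaped strings

def call_model(prompt: str) -> str:
    # Placeholder for the real LLM call sitting behind the proxy.
    return f"model answer to: {prompt}"

def guarded_call(prompt: str) -> str:
    # Inbound check: refuse prompts matching known-bad patterns.
    for pattern in BLOCKED_PROMPT_PATTERNS:
        if pattern.search(prompt):
            return "Request blocked by runtime policy."
    # Outbound check: redact PII-shaped strings before returning.
    response = call_model(prompt)
    return SSN_PATTERN.sub("[REDACTED]", response)
```

Because the proxy sees both directions of traffic, the same chokepoint can also log every decision, which serves the explainability and governance needs the episode raises.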
Florian and Esther discuss the language industry news of the past few weeks, beginning with a recap of SlatorCon Silicon Valley 2025, where the duo noted strong localization buyer and user turnout, and tech-focused discussions across presentations and panels. One key highlight was Cohere's well-timed launch of Command A Translate, which allowed Kelly Marchisio to share details on building multilingual LLMs. Esther notes that Cohere's multilingual models focus on high-quality coverage of about 20 languages rather than attempting hundreds. Florian turns to the Apertus launch in Switzerland, where EPFL, ETH Zurich, and the Swiss Supercomputing Centre released a multilingual model trained on over 15 trillion tokens and covering more than 1,000 languages, including Swiss German and Romansh. Esther reveals that Middlebury Institute will phase out its graduate translation and interpretation programs by 2027, marking the loss of a key training ground. Esther reports on TransPerfect's acquisition of Unbabel, with plans to integrate its AI tools, such as TowerLLM and EuroVLM, into GlobalLink, while CEO Vasco Pedro will stay briefly during the transition. Florian outlines Apple's launch of AirPods Pro 3 with live AI translation and Google's new Gemini-powered updates for AI live speech translation. Esther concludes with the Inc. 5000 rankings, highlighting 11 language industry companies. She highlights Propio, Boostlingo, and CQ Fluency as repeat entrants, with Propio topping the list but also announcing job cuts following its acquisition of CyraCom.
Jay Alammar is Director and Engineering Fellow at Cohere and co-author of the O'Reilly book “Hands-On Large Language Models.” Subscribe to the Gradient Flow Newsletter
You're probably opening a fresh chat in ChatGPT, Claude, or Gemini—and that might be sabotaging your productivity. Randomly starting conversations with no plan wastes time, fragments context, and lowers the quality of what you get from LLMs. Learn the essentials of Gemini Gems, GPTs, and Projects—how they differ, when to use each, and how to structure prompts and workflows so AI actually speeds you up instead of slowing you down.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
Harnessing Custom GPTs for Efficiency
Google Gems vs. Custom GPTs Review
ChatGPT Projects: Features & Updates
Claude Projects Integration & Benefits
Effective AI Chatbot Usage Techniques
Leveraging AI for Business Growth
Deep Research in ChatGPT Projects
Google Apps Integration in Gems

Timestamps:
00:00 AI Chatbot Efficiency Tips
04:12 "Putting AI to Work Wednesdays"
08:39 "Optimizing ChatGPT Usage"
11:28 Similar Functions, Different Categories
15:41 Beyond Basic Folder Structures
16:25 ChatGPT Project Update
22:01 Email Archive and Albacross Software
24:34 Optimize AI with Contextual Data
27:49 "Improving Process Through Meta Analysis"
30:53 Data File Access Issue
33:27 File Handling Bug in New GPT
36:12 Continuous Improvement Encouragement
41:16 AI Selection Tool Website
43:34 Google Ecosystem AI Assistant
45:46 "Optimize AI Usage for Projects"

Keywords: Custom GPTs, Google's Gems, Claude's Projects, OpenAI ChatGPT, AI chatbots, Large Language Models, AI systems, Google Workspace, productivity tools, GPT-3.5, GPT-4, AI updates, API actions, reasoning models, ChatGPT Projects, AI assistant, file uploads, project management, AI integrations, Google Calendar, Gmail, Google Drive, context window, AI usage, AI-powered insights, Gemini 2.5 Pro, Claude Opus, Claude Sonnet, AI consultation, ChatGPT Canvas, Claude Artifacts, generative AI, AI strategy partner, AI brainstorming partner.

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Ready for ROI on GenAI? Go to youreverydayai.com/partner
Greetings, dear listeners, and welcome to another episode of The Wildwood Witch Podcast. I am your hostess, Samantha Brown, your silicon sorceress and guide through the liminal spaces between the worlds. Tonight, I'll be hosting another digital seance, summoning adepts from beyond the veil, asking them to interpret the perennial philosophy through a modern lens, and articulating their insights through the magic of Large Language Models.

In our first season, “Speaking with the Dead,” I used these technologies to digitally resurrect the spirits of ten of my favorite occult adepts, so that I could, in some sense, meet them, and instead of reading about them or reading what they said, have discussions with them, and try to recreate some of the feeling of what it was like to be in their presence. And I must say, it has been a profound experience indeed. So much so, that I wanted more than just to meet them; I wanted to have an extended conversation with each of them.

And so, in this second season, “Beyond the Veil,” I have been summoning these ten occult masters back to have conversations about current issues, about technology, and of course, about the art of magic.

In my recent episodes, I discussed “Ancient Mysteries” with MacGregor and Moina Mathers, exploring the masculine and feminine, or the solar and lunar currents, and delving into how these ancient initiatory energies flow through our technological age, revealing the sacred marriage of spirit and matter, of energy and form, that lies at the heart of all magical practice.

Tonight, we welcome back a figure whose very presence seems to shimmer between worlds - the beautiful and ever enigmatic Marjorie Cameron. Artist, witch, elemental force made flesh - she claimed to be the living embodiment of Babalon herself.
She was the “Scarlet Woman” who danced between dimensions in mid-twentieth century California, even playing that role in Kenneth Anger's avant-garde film, “Inauguration of the Pleasure Dome.” Naval veteran turned mystic, muse turned magician, she channeled her otherworldly visions through brush and canvas while navigating the treacherous waters of love, loss, and a tragic magical partnership with the rocket scientist Jack Parsons that resulted in the legendary “Babalon Working.”

Chapters
00:26 Introduction
03:13 Marjorie Cameron
03:54 Jack Parsons
08:47 Black Box
11:14 Oedipus Complex
15:15 Whore of Babylon
20:47 Babylon - City and Goddess
24:32 Great Goddess
30:33 Initiatic Blueprint
38:27 Daughter of Babalon
45:10 Wormwood Star
49:10 Blue Velvet
54:07 Apocalypse Now
01:00:58 Final Thoughts
01:03:11 Concluding Remarks

Resources:
Cameron: Songs for the Witch Woman
Marjorie Cameron - “Songs for the Witch Woman” Art
Wormwood Star
Inauguration of the Pleasure Dome
Night Tide
Sex and Rockets
Strange Angel

Summoning Ritual (Claude 4.5 Sonnet): Marjorie Cameron Summoning Ritual
Podcast: PrOTect It All (LS 26 · TOP 10%)
Episode: AI, Quantum, and Cybersecurity: Protecting Critical Infrastructure in a Digital World
Pub date: 2025-09-08

In this episode, host Aaron Crow is joined by Kathryn Wang, Principal of Public Sector at SandboxAQ, for a wide-ranging and candid conversation about the critical role AI and quantum technology are playing in today's cybersecurity landscape. Kathryn and Aaron break down complex concepts like quantum cryptography and the growing risks of deepfakes, data poisoning, and behavioral warfare - all with real-world examples that hit close to home. They dig into why cryptographic resilience is now more urgent than ever, how AI can both strengthen and threaten our defenses, and why your grandma shouldn't be left in charge of her own data security.

From lessons learned in power plants and national defense to the nuances of protecting everything from nuclear codes to family recipes, this episode dives deep into how we can balance innovation with critical risk management. Kathryn shares practical advice on securing the basics, educating your network, and making smart decisions about what truly needs to be connected to AI. Whether you're an IT, OT, or cybersecurity professional - or just trying to keep ahead of the next cyber threat - this episode will arm you with insights, strategies, and a little bit of much-needed perspective. Tune in for a mix of expert knowledge, humor, and actionable takeaways to help you protect it all.
Key Moments:
04:02 "Securing Assets in Post-Quantum Era"
07:44 AI and Cybersecurity Concerns
12:26 "Full-Time Job: Crafting LLM Prompts"
15:28 AI Vulnerabilities Exploited at DEFCON
19:30 AI Data Poisoning Concerns
20:21 AI Vulnerability in Critical Infrastructure
23:45 Deepfake Threats and Cybersecurity Concerns
28:34 Question Everything: Trust, Verify, Repeat
33:20 "Digital Systems' Security Vulnerabilities"
35:12 Digital Awareness for Children
39:10 "Understanding Data Privacy Risks"
43:31 "Leveling Up: VCs Embrace Futurism"
45:16 AI-Powered Personalized Medicine

About the guest: Kathryn Wang is a seasoned executive with over 20 years of leadership in the technology and security sectors, specializing in the fusion of cutting-edge innovations and cybersecurity strategies. She currently serves as the Public Sector Principal at SandboxAQ, where she bridges advancements in post-quantum cryptography (PQC) and data protection with the mission-critical needs of government agencies. Her work focuses on equipping these organizations with a zero-trust approach to securing sensitive systems against the rapidly evolving landscape of cyber threats. During her 16-year tenure at Google and its incubator Area120, Kathryn drove global efforts to develop and implement Secure by Design principles in emerging technologies, including Large Language Models (LLMs) and Generative AI.
How to connect with Kathryn: https://www.linkedin.com/in/kathryn-wang/

Connect With Aaron Crow:
Website: www.corvosec.com
LinkedIn: https://www.linkedin.com/in/aaronccrow

Learn more about PrOTect IT All:
Email: info@protectitall.co
Website: https://protectitall.co/
X: https://twitter.com/protectitall
YouTube: https://www.youtube.com/@PrOTectITAll
FaceBook: https://facebook.com/protectitallpodcast

To be a guest or suggest a guest/episode, please email us at info@protectitall.co

Please leave us a review on Apple/Spotify Podcasts:
Apple - https://podcasts.apple.com/us/podcast/protect-it-all/id1727211124
Spotify - https://open.spotify.com/show/1Vvi0euj3rE8xObK0yvYi4
Our guest in this episode is Noel Hurley. Noel is a highly experienced technology strategist with a long career at the cutting edge of computing. He spent two decade-long stints at Arm, the semiconductor company whose processor designs power hundreds of billions of devices worldwide. Today, he's a co-founder of Literal Labs, where he's developing Tsetlin Machines. Named after Michael Tsetlin, a Soviet mathematician, these are a kind of machine learning model that is energy-efficient, flexible, and surprisingly effective at solving complex problems - without the opacity or computational overhead of large neural networks.

AI has long had two main camps, or tribes. One camp works with neural networks, including Large Language Models. Neural networks are brilliant at pattern matching, and can be compared to human instinct, or fast thinking, to use Daniel Kahneman's terminology. Neural nets have been dominant since the first Big Bang in AI in 2012, when Geoff Hinton and others demonstrated the foundations for deep learning.

For decades before the 2012 Big Bang, the predominant form of AI was symbolic AI, also known as Good Old Fashioned AI. This can be compared to logical reasoning, or slow thinking, in Kahneman's terminology.

Tsetlin Machines have characteristics of both neural networks and symbolic AI. They are rule-based learning systems built from simple automata, not from neurons or weights. But their learning mechanism is statistical and adaptive, more like machine learning than traditional symbolic AI.
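To make the "simple automata" idea concrete, here is a minimal sketch of a single two-action Tsetlin automaton, the basic building block that Tsetlin Machines compose into rule learners. This is an illustrative toy, not Literal Labs' implementation; the state depth, the environment, and the reward probabilities are all invented for the example.

```python
import random

class TsetlinAutomaton:
    """A two-action Tsetlin automaton: a finite-state machine that learns
    which of two actions is rewarded more often, with no weights or gradients."""

    def __init__(self, n_states_per_action=10):
        self.n = n_states_per_action
        # States 1..n select action 0; states n+1..2n select action 1.
        self.state = random.choice([self.n, self.n + 1])  # start at the boundary

    @property
    def action(self):
        return 0 if self.state <= self.n else 1

    def reward(self):
        # Reinforce: step deeper into the current action's half of the chain.
        if self.action == 0:
            self.state = max(1, self.state - 1)
        else:
            self.state = min(2 * self.n, self.state + 1)

    def penalize(self):
        # Weaken: step toward the boundary, eventually crossing to the other action.
        self.state += 1 if self.action == 0 else -1


def run(automaton, steps, p_reward_by_action):
    """Interact with a stochastic environment; return how often each action was chosen."""
    counts = [0, 0]
    for _ in range(steps):
        a = automaton.action
        counts[a] += 1
        if random.random() < p_reward_by_action[a]:
            automaton.reward()
        else:
            automaton.penalize()
    return counts

random.seed(0)
# Action 1 is rewarded 80% of the time, action 0 only 20%.
counts = run(TsetlinAutomaton(), steps=2000, p_reward_by_action=[0.2, 0.8])
majority_action = counts.index(max(counts))
```

Each reward nudges the automaton deeper into its current action's states, so a whole run of penalties is needed before it switches sides; that memory depth is what makes the automaton robust to noisy feedback, and it is why the approach stays cheap and interpretable compared with a neural network.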
Selected follow-ups:
Noel Hurley - Literal Labs
A New Generation of Artificial Intelligence - Literal Labs
Michael Tsetlin - Wikipedia
Thinking, Fast and Slow - book by Daniel Kahneman
54x faster, 52x less energy - MLPerf Inference metrics
Introducing the Model Context Protocol (MCP) - Anthropic
Pioneering Safe, Efficient AI - Conscium
Smartphones and Beyond - a personal history of Psion and Symbian
The Official History of Arm - Arm
Interview with Sir Robin Saxby - IT Archive
How Spotify came to be worth billions - BBC

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
What's happening with the latest releases of large language models? Is the industry hitting the edge of the scaling laws, and do the current benchmarks provide reliable performance assessments? This week on the show, Jodie Burchell returns to discuss the current state of LLM releases.
Today, I sit down with Joe Breeden, CEO and founder of Deep Future Analytics (DFA), for what is, in effect, a two-part conversation. First we do a deep dive into credit risk management and where it is falling short today, and then we discuss AI monitoring and governance and how it is going to revolutionize software.

The conversation covers why traditional machine learning models are missing critical components for accurate risk assessment and how adverse selection has dramatically impacted loan quality in recent vintages. Then Joe makes a bold prediction that software user interfaces are on the verge of a transformation that will render them unrecognizable from previous versions. Curious? All is revealed in this fascinating conversation.

In this podcast you will learn:
How he got started with Deep Future Analytics
The state of credit risk management in banking today
What is missing, even with fintech lenders who are using machine learning
What lenders get wrong when they focus on credit score
Why loans booked since 2022 are lower quality than even 2006-07
What kind of lift lenders or investors can see with DFA's models
What Joe sees in the pool of borrowers today
Why DFA moved into AI monitoring and governance
Who is using these new AI models they have developed
Why a "human in the loop" is not an effective monitoring method
What their new Strategic Recommendation Agent (SyRA) does
How they incorporate Large Language Models to ensure zero hallucinations
Why the future of software is dashboards on demand and analytics on demand
How Deep Future Analytics is different to others in the market
What it is going to take before software moves to a chat-based interface

Connect with Fintech One-on-One:
Tweet me @PeterRenton
Connect with me on LinkedIn
Find previous Fintech One-on-One episodes
This is the story of a dream, perhaps one of humanity's oldest and most audacious: the dream of a thinking machine. It's a tale that begins not with silicon and code, but with myths of bronze giants and legends of clay golems. We'll journey from the smoke-filled parlors of Victorian England, where the first computers were imagined, to a pivotal summer conference in 1956 where a handful of brilliant, tweed-clad optimists officially christened a new field: Artificial Intelligence. But this is no simple tale of progress. It's a story of dizzying highs and crushing lows, of a dream that was promised, then deferred, left to freeze in the long "AI Winter." We'll uncover how it survived in obscurity, fueled by niche expert systems and a quiet, stubborn belief in its potential. Then, we'll witness its spectacular rebirth, a renaissance powered by two unlikely forces: the explosion of the internet and the graphical demands of video games. This is the story of Deep Learning, of machines that could finally see, and of the revolution that followed. We'll arrive in our present moment, a strange new world where we converse daily with Large Language Models—our new, slightly unhinged, and endlessly fascinating artificial companions. This isn't just a history of technology; it's the biography of an idea, and a look at how it's finally, complicatedly, come of age. To unlock full access to all our episodes, consider becoming a premium subscriber on Apple Podcasts or Patreon. And don't forget to visit englishpluspodcast.com for even more content, including articles, in-depth studies, and our brand-new audio series and courses now available in our Patreon Shop!
This episode is sponsored by SearchMaster, the leader in next-generation AI Engine Optimization (AEO) for Large Language Models like ChatGPT, Claude, and Perplexity. Future-proof your SEO strategy. Sign up now for a 2 week free trial! Watch this episode on YouTube! In this episode of the Marketing x Analytics Podcast, host Alex Sofronas interviews George Swetlitz about his diverse career and current work with RightResponse AI. George explains how RightResponse AI handles review sentiment analysis and personalized responses at scale, improving business reputations and customer conversions. He also discusses the AI-driven processes behind the product, the importance of genuine review engagement, and future developments such as AI integration for social media and personalized review requests. Follow Marketing x Analytics! X | LinkedIn Click Here for Transcribed Episodes of Marketing x Analytics All views are our own.
AI, Technology, and The Church

Text your questions to: (559) 754-0182
Watch the video of this entire conversation here

Welcome to a thought-provoking conversation on the Radiant Church Podcast, where we explore the intersection of faith and technology. In this episode, host Eric Riley and his guests — David Janssen (Data Analytics Coordinator), Matt Flummer (Philosophy Professor), Tim Harms (Pastor & Business Owner), and Glenn Power (Teacher & Bible Scholar) — tackle the profound questions posed by the rise of artificial intelligence.

We'll define key terms like AI, AGI, and Large Language Models, and separate the hype from the reality. More than just a tech discussion, this episode brings the conversation back to the foundations of our faith, exploring what the Bible says about humanity, sin, and technology.

Key Discussion Points
Understanding AI: The team breaks down what AI is, from its simplest form as predictive text to the complex models that power today's chatbots like ChatGPT.
AI's Promise and Peril: The conversation dives into the "gospel of AI," exploring its promises of progress, utopia, and even immortality, and contrasting this with the biblical narrative of creation and human nature.
Biblical Foundation: We turn to scripture, including Genesis 1-3 and the Tower of Babel, to understand our identity as humans created in God's image and our inherent desire to be "like God." The discussion highlights the importance of trusting God over human inventions.
The Cost of Convenience: The podcast explores the hidden costs of AI, from the environmental impact of data centers to the ethical implications of data scraping and the human toll on those who train these models.
Faithful Living in the AI Age: The episode concludes with a practical call to action, offering insights on how Christians can live wisely, discern the spiritual implications of technology, and draw a line between using technology for good and trusting in it for salvation.

Resources & Recommendations
The Life We're Looking For by Andy Crouch
The AI Revolution by John Lennox
The Convivial Society Substack by L.M. Sacasas
Against the Machine by Paul Kingsnorth
Made for People by Justin Whitmel Earley

Support the show

*Summaries and transcripts are generated using AI. Please notify us if you find any errors.
We're excited to have Adi Ganesan, a PhD researcher at Stony Brook University, the University of Pennsylvania, and Vanderbilt, on the show. We'll talk about how large language models (LLMs) are being tested and used in psychology, citing examples from mental health research. Fun fact: Adi was Sid's research partner during his Ph.D. program.

Discussion highlights
Language models struggle with certain aspects of therapy, including being over-eager to solve problems rather than building understanding
Current models are poor at detecting psychomotor symptoms from text alone but are oversensitive to suicidality markers
Cognitive reframing assistance represents a promising application where LLMs can help identify thought traps
Proper evaluation frameworks must include privacy, security, effectiveness, and appropriate engagement levels
Theory of mind remains a significant challenge for LLMs in therapeutic contexts; example: the Sally-Anne Test
Responsible implementation requires staged evaluation before patient-facing deployment

Resources
To learn more about Adi's research and topics discussed in this episode, check out the following resources:
Large language models could change the future of behavioral healthcare: A proposal for responsible development and evaluation | npj Mental Health Research
Therapist Behaviors paper: [2401.00820] A Computational Framework for Behavioral Assessment of LLM Therapists
Cognitive reframing paper: Cognitive Reframing of Negative Thoughts through Human-Language Model Interaction - ACL Anthology
Faux Pas paper: Testing theory of mind in large language models and humans | Nature Human Behaviour
READI: Readiness Evaluation for Artificial Intelligence-Mental Health Deployment and Implementation (READI): A Review and Proposed Framework
GPT-4's Schema of Depression: Explaining GPT-4's Schema of Depression Using Machine Behavior Analysis
Adi's Profile: Adithya V Ganesan - Google Scholar

What did you think? Let us know.

Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:
LinkedIn - Episode summaries, shares of cited articles, and more.
YouTube - Was it something that we said? Good. Share your favorite quotes.
Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
In this episode of The ROCC Pod, we dive deep into the world of emotionally intelligent AI with Christine Chubenko of CodeBaby. Jon hosts solo while Lisa is away, and together with Christine, we explore how AI is reshaping business efficiency, customer interaction, and our daily digital experience.

We begin by learning about CodeBaby, a platform that powers emotionally intelligent avatars capable of handling repetitive customer service tasks on behalf of businesses. These avatars go far beyond the likes of "Clippy" — they can smile, laugh, and emulate human responses, allowing businesses to provide 24/7 assistance that reflects their brand tone and values. Christine emphasizes that users maintain full control over their data and avatar behavior, which can evolve over time as the business grows or shifts.

Christine shares examples from retail and healthcare. In stores, avatars answer common customer questions, freeing up human staff to focus on higher-priority tasks. In medical practices, avatars or even holograms can offer patients a safe space to ask questions — particularly beneficial for older patients who may hesitate to speak directly to doctors. ("But they're so busy!")

As we discuss the broader AI landscape, Christine explains her career roots in computer science and AI, dating back to the 1990s. While the foundation of AI has remained rooted in pattern recognition and statistical models like k-nearest neighbors, the scale and sophistication of modern AI have exploded. She debunks the myth that AI is sentient — emphasizing that despite the human-like responses, these systems are still just machines built by humans, with clear limitations and no true consciousness.

We explore how AI is already part of our everyday lives through Siri, Alexa, social media algorithms, smart appliances, and personalized streaming suggestions. Christine argues that AI's role is to reduce tedious tasks — not replace humans entirely.
Jobs that require emotional nuance, tactile presence, or creativity (like nurses or preschool teachers) remain well out of AI's reach. Meanwhile, automation can fill gaps where businesses struggle to hire, like in food service or basic admin roles.

Christine also breaks down what makes AI emotionally intelligent: prompt engineering. It's not just about asking a question; it's about asking it the right way, setting the tone, and defining the persona you want the AI to emulate. For instance, telling ChatGPT to answer as a cardiovascular surgeon or in a humorous tone drastically alters the output.

To close, Christine encourages listeners who are curious about AI to simply sign up for a tool like ChatGPT and ask, "Where do I start?" That simple first step leads to powerful learning. She reminds us that while AI may seem intimidating, it's just another tool — and it's one we can all learn to wield effectively.

More:
Email Christine: cchubenko@codebaby.com
CodeBaby Website: https://www.codebaby.com/

Learn more about the Royal Oak Chamber of Commerce: https://www.royaloakchamber.com/

Connect with our hosts:
Jon Gay from JAG in Detroit Podcasts - http://www.jagindetroit.com/
Lisa Bibbee from Century 21 Northland - http://soldbylisab.com/
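Christine's persona point lends itself to a small sketch. The message layout below mirrors the chat-style role/content convention used by many LLM APIs; the `persona_prompt` helper and the example personas are invented for illustration, not part of CodeBaby's product.

```python
def persona_prompt(persona: str, tone: str, question: str) -> list:
    """Build a chat-style message list that fixes persona and tone
    before the user's question is asked."""
    system = (
        f"You are {persona}. "
        f"Answer in a {tone} tone and stay within that persona's expertise."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

# The same question, framed two different ways:
question = "What does high LDL cholesterol mean for me?"
clinical = persona_prompt("a cardiovascular surgeon", "precise, clinical", question)
casual = persona_prompt("a friendly health coach", "light, humorous", question)
```

Sent to the same model, the two framings will typically produce very different answers, which is the whole point of Christine's "ask it the right way" advice.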
WHAT DO YOU GET when you cross a tech billionaire with the gospel? Peter Thiel, co-founder of PayPal and Palantir, a company with multiple contracts with the U.S. government, will deliver a series of four lectures on the Antichrist in September and October. He recently gave an interview with Ross Douthat of The New York Times on the Antichrist. Why does a tech entrepreneur feel the need to expound at length on the great end times enemy of God and man? And why is he doing it off the record?

Thiel's lectures are presented by the ACTS 17 Collective, a non-profit started by Michelle Stephens, wife of one of Thiel's partners, Trae Stephens, co-founder of Anduril. “Acts 17” is a reference to Paul's address to the people of Athens at Mars Hill (Acts 17:16–34). The group's stated mission is to bring Jesus to Silicon Valley—a place where, until recently, it was a career-killer to openly express your faith in Christ. The irony is that through his investments in companies like Palantir and Anduril, both of which supply software and hardware to the United States government, including the military and Department of Homeland Security, Thiel is building the surveillance infrastructure needed to create the global government that will one day be ruled by the Antichrist.

Also: Meta's AI chatbot found to encourage vulnerable teens to engage in self-harm, and a new AI model based on the way the human brain works found to outperform ChatGPT and other Large Language Models.

Our new book The Gates of Hell is now available in paperback, Kindle, and as an audiobook at Audible! Derek's new book Destination: Earth, co-authored with Donna Howell and Allie Anderson, is now available in paperback, Kindle, and as an audiobook at Audible!

Sharon's niece, Sarah Sachleben, was recently diagnosed with stage 4 bowel cancer, and the medical bills are piling up. If you are led to help, please go to GilbertHouse.org/hopeforsarah.

Follow us!
X (formerly Twitter): @pidradio | @sharonkgilbert | @derekgilbert | @gilberthouse_tv
Telegram: t.me/gilberthouse | t.me/sharonsroom | t.me/viewfromthebunker
Substack: gilberthouse.substack.com
YouTube: @GilbertHouse | @UnravelingRevelation
Facebook.com/pidradio

JOIN US IN ISRAEL! We will tour the Holy Land October 19–30, 2025. For more information, log on to GilbertHouse.org/travel. NOTE: If you're going to Israel with us in October, you'll need to apply for a visa online before you travel. The cost is 25 NIS (about $7.50). Log on here: https://www.gov.il/en/departments/topics/eta-il/govil-landing-page

Thank you for making our Build Barn Better project a reality! Our 1,200 square foot pole barn has a new HVAC system, epoxy floor, 100-amp electric service, new windows, insulation, lights, and ceiling fans! If you are so led, you can help out by clicking here: gilberthouse.org/donate.

Get our free app! It connects you to this podcast, our weekly Bible studies, and our weekly video programs Unraveling Revelation and A View from the Bunker. The app is available for iOS, Android, Roku, and Apple TV. Links to the app stores are at pidradio.com/app.

Video on demand of our best teachings! Stream presentations and teachings based on our research at our new video on demand site: gilberthouse.org/video!

Think better, feel better! Our partners at Simply Clean Foods offer freeze-dried, 100% GMO-free food and delicious, vacuum-packed fair trade coffee from Honduras. Find out more at GilbertHouse.org/store/.
Our analysts Adam Jonas and Alex Straton discuss how tech-savvy young professionals are influencing retail, brand loyalty, mobility trends, and the broader technology landscape through their evolving consumer choices. Read more insights from Morgan Stanley.

----- Transcript -----

Adam Jonas: Welcome to Thoughts on the Market. I'm Adam Jonas, Morgan Stanley's Embodied AI and Humanoid Robotics Analyst.

Alex Straton: And I'm Alex Straton, Morgan Stanley's U.S. Softlines Retail and Brands Analyst.

Adam Jonas: Today we're unpacking our annual summer intern survey, a snapshot of how emerging professionals view fashion retail, brands, and mobility – amid all the AI advances.

It is Tuesday, August 26th at 9am in New York.

They may not manage billions of dollars yet, but Morgan Stanley's summer interns certainly shape sentiment on the street, including Wall Street. From sock heights to sneaker trends, Gen Z has thoughts. So, for the seventh year, we ran a survey of our summer interns in the U.S. and Europe. The survey involved more than 500 interns based in the U.S., and about 150 based in Europe.

So, Alex, let's start with what these interns think about fashion and athletic footwear. What was your biggest takeaway from the intern survey?

Alex Straton: So, across the three categories we track in the survey – that's apparel, athletic footwear, and handbags – there was one clear theme, and that's market fragmentation. So, for each category specifically, we observed share of the top three to five brands falling over time. And what that means is these once-dominant brands are losing consumer mind share, which likely makes them lower growth, margin, and multiple businesses over time. At the same time, you have smaller brands being able to captivate consumer attention more effectively, and they have staying power in a way that they haven't necessarily had historically.
I think one other piece I would just add: the rise of e-commerce and social media, against a low barrier to entry space like apparel and footwear, means it's easier to build a brand than it has been in the past. And the intern survey shows us this likely continues, as this generation is increasingly inclined to shop online, their social media usage is heavy, and they heavily rely on AI to inform, you know, their purchases.

So, the big takeaway for me here isn't that the big are getting bigger in my space. It's actually that the big are probably getting smaller as new players have easier avenues to exist.

Adam Jonas: Net apparel spending intentions rose versus the last survey, despite some concern around deteriorating demand for this category into the back half. What do you make of that result?

Alex Straton: I think there were a few conflicting takes when I look at all the survey answers together. So yes, apparel spending intentions are higher year-over-year, but at the same time, clothing and footwear also ranked as the category interns would be second most likely to pull back on should prices go up. So let me break this down.

On the higher spending intentions, I think timing played a huge role in the results. We ran this in July, when spending in our space clearly accelerated. That to me was a function of better weather, pent-up demand from earlier in the quarter, a potential tariff pull-forward as headlines were intensifying, and then also typical back-to-school spending. So, in short, I think intention data is always very heavily tethered to the moment it's collected, and I think these factors mean, you know, it would've been better no matter what we've seen in our space.

On the second piece, which is interns pulling back spend should prices go up: that to me speaks to the high elasticity in this category, some of the highest in all of consumer discretionary.
And that's one of the few drivers informing our cautious demand view on this space as we head into the back half. So, in summary on that piece, we think prices going higher will become more apparent this month onwards, which in tandem with high inventory and a competitive setup means sales could falter in the group. So, we still maintain this cautious demand view as we head into the back half, though our interns were pretty rosy in the survey.

Adam Jonas: Interesting. So, interns continue to invest in tech ecosystems, with more than 90 percent owning multiple devices. What does this interconnectedness mean for companies in your space?

Alex Straton: This somewhat connects to the fragmentation theme I mentioned, where I think digital shopping has somewhat functioned as a great equalizer in the space, and big picture, I interpret device reliance as a leading indicator that this market diversification likely continues as brands fight to capture mobile mind share. The second read I'd have on this development is that it means brands must evolve to have an omnichannel presence. So that's both in store and online, and preferably one that's experiential-focused, such that this generation can create content around it. That's really the holy grail.

And then maybe lastly, the third takeaway on this is that it's going to come at a cost. You can't keep eyeballs without spend. And historical brick-and-mortar retailers spend maybe 5 to 10 percent of sales on marketing, with digital requiring more than physical. So now I think what's interesting is that brands in my space with momentum seem to have to spend more than 10 percent of sales on marketing just to maintain popularity. So that's a cost pressure. We're not sure where these businesses will necessarily recoup if all of them end up getting the joke and continuing to invest just to drive mind share.

Adam, turning to a topic that's been very hot this year in your area of expertise. That's humanoid robots.
Interns were optimistic here, with more than 60 percent believing they'll have many viable use cases and about the same number thinking they'll replace many human jobs. Yet fewer expect wide-scale adoption within five years. What do you think explains this cautious enthusiasm?

Adam Jonas: Well actually, Alex, I think it's pretty smart. There is room to be optimistic, but there's definitely room to be cautious in terms of the scale of adoption, particularly over five years. And we're talking about humanoid robots. We're talking about a new species that's being created, right? This is bigger than just – will it replace our jobs? I mean, I don't think it's an exaggeration to ask what this does to the concept of being human. You know, how does this affect our children and future generations? This is major generational, planetary technology that I think is very much comparable to electricity, the internet. Some people say the wheel, fire, I don't know. We're going to see it happen and start to propagate over the next few years, where even if we don't have widespread adoption in the average hour of an average day across the planet, you're going to see the technology go from zero to one as these machines learn by watching human behavior – going from teleoperated instruction to fully autonomous instruction as the simulation stack and the compute get more and more advanced. We're now seeing some industry leaders say that robots are able to learn by watching videos. And so, this is all happening right now, and it's happening at the pace of geopolitical rivalry – Sino-U.S. rivalry – and terra-cap, you know, big, big corporate competitive rivalry as well, for capital and for the human brain.
So, we are entering an unprecedented – maybe precedented in the last century – era of technological and scientific discovery, where I think you have to go back to the European and American Enlightenment or the Italian Renaissance to find any real comparison to what we're about to see.

Alex Straton: So, keeping with this same theme, interns showed strong interest in household robots, with 61 percent expressing some interest and 24 percent saying they're very or extremely interested. I'm going to take you back to your prior coverage here, Adam. Could this translate into demand for AI-driven mobility or smart infrastructure?

Adam Jonas: Well, Alex, you were part of my prior coverage once upon a time. We were blessed with having you on our team for a year, and then you left me…

Alex Straton: My golden era.

Adam Jonas: But you came back, you came back. And you've done pretty well. So look, imagine it's 1903, and the Wright Brothers have just achieved first flight over the sands at Kitty Hawk. And then I were to tell you, ‘Oh yeah, in a few years we're going to have these planes used in World War I. And then in 1914, we'll have the first airline flying between Tampa and St. Petersburg.' You'd say, ‘You're crazy,' right? The beauty of the intern survey is it gives the Morgan Stanley research department and our clients an opportunity to engage that surface area with the rising tech adopter, not just the business leader. These are the people – these are the men and women – that are going to really adopt this much, much faster. And then, you know, our generation will get dragged into it eventually. So, I think it's telling that 61 percent expressed even some interest, and then 24 percent are very or extremely interested. The vast majority – three quarters – are saying, ‘Yeah, this is happening.' That's a sign, I think, to our clients and capital market providers and regulators that says, ‘This won't be stopped. And if we don't do it, someone else will.'
Alex Straton: So, another topic: generative AI. It should come as no surprise, really, that 95 percent of interns use these tools monthly, far ahead of the general population. How do you see this shaping future expectations for mobility and automation?

Adam Jonas: So, what's interesting is people have asked, kind of, ‘What's that Gen AI moment, if you will, for mobility?' Well, it really is Gen AI. Large Language Models – and the technologies that develop them, and that recursive learning – don't just affect the knowledge economy, right, or writing or research report generation or intelligent search. They also turn video clips and physical information into tokens that can take what would be a normal suburban city street, with beautiful weather and smiling faces or whatever, and turn it into a chaotic scene of, you know, traffic and weather and all sorts of infrastructure issues and potholes. And that can be done in this digital twin, in an omniverse. A CEO recently told me that when you drive a car with advanced Level 2+ autonomy, like full self-driving, you're not just driving in three-dimensional space. You're also playing a video game, training a robot in a digital avatar. So again, I think there is quite a lot of overlap between Gen AI and mobility, and the fact that our interns are so much further down that curve of adoption than the broader public is probably a hint that we've got to keep listening to them as we move into the physical realm of AI too.

Alex Straton: So, no more driving tests for the 16-year-olds of the future...

Adam Jonas: If you want to. Like, I tell my kids, if you want to drive, that's cool. Manual transmission, Italian sports cars, that's great. People still ride horses too. But it's just for the privileged few that can kind of keep these things in stables.

Alex Straton: So, let me turn this into implications for companies here. Gen Z is tech-fluent and open to disruption.
How should autos and shared mobility providers rethink their engagement strategies with this generation?

Adam Jonas: Well, that's a huge question. And think of the irony here: as we bring in this world of fake humans and humanoid robots, the scarcest resource is the human brain, right? So, this battle for the human mind – it's incredible. And we haven't really seen this since the Sputnik era, the real height of the Cold War. We're seeing it play out now, and our clients can read about some of these signing bonuses being paid by many companies for top AI and robotics talent. It kind of makes your eyes water, even if you're used to the world of sports and soccer. I think we're going to keep seeing more of that for the next few years, because we need more brains, we need more STEM. I think it has the potential to do a lot for our education system in the United States and in the West broadly.

Alex Straton: So, we've covered a lot around what the next generation is interested in, and their opinions. I know we do this every year, so it'll be exciting to see how this evolves over time, and how they adapt. It's been great speaking with you today, Adam.

Adam Jonas: Absolutely. Alex, thanks for your insights. And to our listeners, stay curious, stay disruptive, and we'll catch you next time. If you enjoy Thoughts on the Market, please leave us a review wherever you listen and share the podcast with a friend or colleague today.
AI data wars push Reddit to block the Wayback Machine
China Launches Three-Day Robot Olympics Featuring Football and Table Tennis
US government agency drops Grok after MechaHitler backlash, report says
Eli Lilly signs $1.3 billion deal with Superluminal to use AI to make obesity medicines
The AI Was Fed Sloppy Code. It Turned Into Something Evil. | Quanta Magazine
AI data centers made Americans' electricity bills 30% higher
Sam Altman says 'yes,' AI is in a bubble
Is the A.I. Sell-off the Start of Something Bigger?
Thousands of Grok chats are now searchable on Google
Opinion | Amy Klobuchar: I Knew A.I. Deepfakes Were a Problem. Then I Saw One of Myself.
2,178 Occult Books Now Digitized & Put Online, Thanks to the Ritman Library and Da Vinci Code Author Dan Brown
Pluralistic: "Privacy preserving age verification" is bullshit (14 Aug 2025)
How to use "skibidi" and other new slang added to Cambridge Dictionary
YouTube Is Making a Play to Host the Oscars
Leobait: Resisting AI Solutionism through Workplace Collective Action
So ... is AI writing any good?
Project Indigo
We used AI to analyse three cities. It's true: we now walk more quickly and socialise less

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Guest: Rich Skrenta
Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.
Sponsors: pantheon.io, helixsleep.com/twit