POPULARITY
You ever do something 600 times in a row? That's what we're doing today. To celebrate our 600th episode, we're bringing you: 6 AI Myths You Should Stop BelievingX10 AI Systems You Must Learn and X10 AI Trends You Can't Afford to IgnoreNewsletter: Sign up for our free daily newsletterMore on this Episode: Episode PageJoin the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineupWebsite: YourEverydayAI.comEmail The Show: info@youreverydayai.comConnect with Jordan on LinkedInTopics Covered in This Episode:Six Common AI Myths DebunkedAI as Competitive Advantage MythProductivity Gains from AI ToolsAI Copilot vs Autonomous AI AgentsEmpathy and Creativity in AI ModelsAI Job Creation vs Job LossesHuman in the Loop LimitationsTen Must-Learn AI Systems OverviewChatGPT Usage for Business LeadersGoogle AI Studio and Gemini ApplicationsImportance of Agentic Browsers and CopilotOpen Source AI Model AdoptionAI Video Platform Skill DevelopmentAI Coding Tools for Non-DevelopersEvaluating and Benchmarking AI ModelsTen Key AI Trends for 2025Digital Evidence and AI-Generated ContentThird-Party AI Chat Platform DeclineImpact of AI on Social Media AdsChanging Landscape of Web BrowsingSurge in Open Source AI SolutionsWorld Models as Next AI FrontierRise of AI-Native Consulting FirmsExplainable AI and Agentic TraceabilityAI's Influence on US 2026 ElectionsGenerative AI Impact on Remote WorkTimestamps:00:00 "Mastering AI: Myths, Systems, Trends"04:48 Exclusive AI Insights Offer07:39 AI Tools Misunderstood by Executives12:01 AI: More Empathetic and Creative13:19 "AI's Impact on Full-Time Work"17:51 Partner with Us for AI Training19:28 Essential AI Skills for 2020s22:32 "Google Gemini: Free Powerful AI Model"26:06 Copilot Access and Permissions Training29:20 Evaluating Constantly Evolving AI Models33:24 "Learn AI Coding Tools Now"37:19 "Enterprise AI Survival Prospects"40:48 Open Source's Rise Over Websites44:59 "AI Market: Speed and Accountability"48:20 AI Disrupts Work-from-Home ModelsKeywords:AI myths, generative AI, AI systems, AI trends, AI fact vs fiction, AI competitive advantage, AI productivity, AI tool deployment, Copilot, Microsoft Copilot, ChatGPT, OpenAI, Google Gemini, agentic AI, agentic browsers, AI automation, workplace AI adoption, AI training, AI business strategy, AI model benchmarking, model evaluation, modular AI solutions, Hugging Face, LLM Arena, Google AI Studio, prompt engineering, context engineerSend Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info) Ready for ROI on GenAI? Go to youreverydayai.com/partner
The AI Breakdown: Daily Artificial Intelligence News and Discussions
On this episode, Andreessen Horowitz's Top 100 Gen AI Consumer Apps report highlights big shifts in just six months. Google scored four web entries with Gemini at #2, Grok rocketed to #4 with 20 million mobile users, coding tools like Lovable and Replit cemented their dominance, and Chinese AI firms kept expanding abroad despite home-market bans. The consumer AI space is finally settling into core categories.Brought to you by:KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcastsBlitzy.com - Go to https://blitzy.com/ to build enterprise software in days, not months Vanta - Simplify compliance - https://vanta.com/nlwPlumb - The automation platform for AI experts and consultants https://useplumb.com/The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614Subscribe to the newsletter: https://aidailybrief.beehiiv.com/Interested in sponsoring the show? nlw@breakdown.network
Google's Nano Banana is the best AI image editor we've ever seen & it bodes well for the future of Gemini going forward. But even better, it's actually useful in everyday life. In other AI News, OpenAI's new Realtime API improves its voice AI systems. It's also taking people back from Meta who is also doing a deal with Midjourney. YES, it's the CIRCLE OF AI…. Plus Unitree's robot carries heavy stuff, Krea's got a new real time AI video model, NVIDIA's cutting edge new algo that speeds up LLMS & yet another demo of our very own new start-up AndThen! WE GO BANANAS. AGAIN AND AGAIN. YOU KNOW THE DEAL. #ai #ainews #openai Come to our Discord to try our Secret Project: https://discord.gg/muD2TYgC8f Join our Patreon: https://www.patreon.com/AIForHumansShow AI For Humans Newsletter: https://aiforhumans.beehiiv.com/ Follow us for more on X @AIForHumansShow Join our TikTok @aiforhumansshow To book us for speaking, please visit our website: https://www.aiforhumans.show/ // Show Links // Nano Banana Lands aka Google Flash 2.5 Image https://x.com/GeminiApp/status/1960342037536108930 Logan From Google “Past Forward” Nano Banana Demo https://x.com/LimitlessFT/status/1960377217940152377 Gavin Nano Banana Pics: https://x.com/gavinpurcell/status/1960352979527041280 Took a old pic & made the do the electric slide: https://x.com/gavinpurcell/status/1960376142365327548 Isometric From Building https://x.com/demishassabis/status/1960716082890657828 https://x.com/demishassabis/status/1961077016830083103 Gavin Space Needle https://x.com/gavinpurcell/status/1961088493385638074 Kevin's Isometric Games Repositioning https://x.com/Attack/status/1961090913142460668 Our SpeedRun Photo https://x.com/gavinp urcell/status/1960450271009558636 OpenAI Realtime Update Demo https://openai.com/index/introducing-gpt-realtime/ People Already Leaving Meta's SuperIntelligence Lab https://www.businessinsider.com/meta-superintelligence-team-researchers-exit-ai-push-2025-8 BUT Hypernova Glasses Coming This Year https://x.com/mingchikuo/status/1960513106704277658 Meta + Midjourney https://x.com/alexandr_w ang/status/1958983843169673367 New Codex Update https://x.com/OpenAIDevs/status/1960809814596182163 NVIDIA Jet-Nemotron https://x.com/JacksonAtkinsX/status/1960090774122483783 Vibe-Voice Open Source TTS From MSFT https://x.com/realmrfakename/status/1960008298545270981 Krea Real Time Video Model https://x.com/krea_ai/status/1961074072487620635 Google's AI Hurricane Model Give 72 Hour Heads up on Cat 5 Hurricane https://arstechnica.com/science/2025/08/googles-ai-model-just-nailed-the-forecast-for-the-strongest-atlantic-storm-this-year/ Unitree A2 Carries 250kg Up & Down Stairs https://www.reddit.com/r/singularity/comments/1n0rvm6/unitree_a2_is_doing_endurance_tests_w_250kg_in/ Triple Backflip on Spot | Boston Dynamics https://youtu.be/LMPxtcEgtds?si=CF1sSdH__CRa9gLU Zuck Vs Sam Matrix Video https://www.reddit.com/r/Bard/comments/1n1dt1g/forget_google_this_is_the_power_of_open_source/ Top 100 Gen AI App List From The Olivia Moore/a16z https://a16z.com/100-gen-ai-apps-5/ AndThen Homepage (sign up for updates!) https://andthen.chat/
The GPT-5 rollout was messy. Then, Google went AI ship crazy. In between all of that, OpenAI released some powerhouse features inside ChatGPT that seemingly no one is paying attention to. Join us as we uncover them and give you a leg up on everyone else. Newsletter: Sign up for our free daily newsletterMore on this Episode: Episode PageJoin the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineupWebsite: YourEverydayAI.comEmail The Show: info@youreverydayai.comConnect with Jordan on LinkedInTopics Covered in This Episode:Five Overlooked ChatGPT Features RecapFlashcards with Quiz GPT Interactive ToolCustom ChatGPT Personalities ExplainedAdvanced Voice Mode in ChatGPT & GPTsGmail and Google Calendar Auto ConnectorsCustom Instructions for ChatGPT ProjectsProject Folders vs. Custom GPT OrganizationChatGPT Agent Mode New Use CasesTimestamps:00:00 Overlooked ChatGPT 5 Features05:06 Unannounced OpenAI Updates Discussion09:02 Personalized Learning with LLMs10:58 GPT-4 Personalities Address Sycophancy14:12 Custom Instructions and Personalities in ChatGPT18:00 Custom GPT Voice Limitations21:38 Streamlining Email with AI Prompts25:10 Custom Chat Instructions Toggle28:43 Customizing ChatGPT: Flexibility Challenges31:02 "AI Updates and Sharing Instructions"Keywords:ChatGPT, GPT-5, GPT 4o, OpenAI, ChatGPT features, overlooked ChatGPT updates, custom personalities, flashcards, GPT quiz, interactive quiz, advanced voice mode, voice mode updates, ChatGPT connectors, Gmail connector, Google Calendar connector, auto connectors, custom instructions, ChatGPT projects, project memory, ChatGPT organization, ChatGPT folders, project only memory, memory settings, ChatGPT system prompt, ChatGPT hallucinations, ChatGPT prompts, ChatGPT deep research, custom GPTs, Canvas mode, Notebook LM, Gemini, Gemini live, Claude, Anthropic Claude, email management with AI, AI productivity tools, AI for business leaders, AI learning tools, AI-powered flashcards, interactive learning AI, personalized AI, AI chat modes, sycophantic GPT, ChatGPT tone settings, ChatGPT settings, AI updates 2025, AI task automation, AI-driven workflow, ChatGPT troubleshootingSend Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info) Ready for ROI on GenAI? Go to youreverydayai.com/partner
Nano Banana is no longer a mystery.Google officially released Gemini 2.5 Flash Image on Tuesday (AKA Nano Banana), revealing it was the company behind the buzzy AI image model that had the internet talking. But... what does it actually do? And how can you put it to work for you? Find out in our newish weekly segment, AI at Work on Wednesdays.Newsletter: Sign up for our free daily newsletterMore on this Episode: Episode PageJoin the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineupWebsite: YourEverydayAI.comEmail The Show: info@youreverydayai.comConnect with Jordan on LinkedInTopics Covered in This Episode:Gemini 2.5 Flash Image (Nano Banana) RevealBenchmark Scores: Gemini 2.5 Flash Image vs. CompetitionMultimodal Model Capabilities ExplainedCharacter Consistency in AI Image GenerationAdvanced Image Editing: Removal and Object ControlIntegration with Google AI Studio and APIReal-World Business Use Cases for Gemini 2.5Live Demos: Headshots, Mockups, and InfographicsGemini 2.5 Flash Image Pricing and LimitsIterative Prompting for AI Image CreationTimestamps:00:00 "AI Highlights: Google's Gemini 2.5"06:17 "Nano Banana AI Features"09:58 "Revolutionizing Photo Editing Tools"12:31 "Nano Banana: Effortless Video Updating"14:39 "Impressions on Nano Banana"19:24 AI Growth Strategies Unlocked20:58 Turning Selfie into Professional Headshot24:48 AI-Enhanced Headshots and Team Photos29:51 "3D AI Logo Mockups"32:22 Improved Logo Design Review35:41 Photoshop Shortcut Critique38:50 Deconstructive Design with Logos44:01 "Transform Diagrams Into Presentations"46:12 "Refining AI for Jaw-Dropping Results"Keywords:Gemini 2.5, Gemini 2.5 Flash Image, Nano Banana, Google AI, Google DeepMind, AI image generation, multimodal model, AI photo editing, image manipulation, text-to-image model, image editing AI, large language model, character consistency, AI headshot generator, real estate image editing, product mockup generator, smart image blending, style transfer AI, Google AI Studio, LM Arena, Elo score, AI watermarks, synthID fingerprint, Photoshop alternative, AI-powered design, generative AI, API integration, Adobe integration, AI for business, visual content creation, creative AI tools, professional image editing, iterative prompting, interior design AI, infographic generator, training material visuals, A/B test variations, marketing asset creation, production scaling, image benchmark, AI output watermark, cost-effective AI images, scalable AI infrastructure, prompt-based editing, natural language image editing, OpenAI GPT-4o image, benchmarking leader, visSend Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info) Ready for ROI on GenAI? Go to youreverydayai.com/partner
Healthcare marketers are navigating a landscape where martech is both exploding in scale and under pressure to prove its value. In this episode, hosts Chris Boyer and Reed Smith explore the latest shifts shaping the future of healthcare marketing and technology: The rise of autonomous systems – moving from pilots to practical tools that reshape operations and engagement. Human–machine collaboration – how AI copilots, wearables, and adaptive systems are augmenting—not replacing—marketing teams. Scaling challenges – balancing infrastructure, talent, and regulatory realities in the age of compute-heavy GenAI workloads. Personalization as strategy – why advanced targeting and AI-driven tools remain a growth lever for healthcare organizations. Guest experts Kathy Divis and Mike Schneider from Greystone.net, share insights from their work with health systems across the country, and preview what's ahead at the upcoming Healthcare Internet Conference (HCIC). Give it a listen and learn how marketing leaders can stay ahead of martech trends while preparing for the next wave of innovation. Mentions from the Show: https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-top-trends-in-tech https://www.deloittedigital.com/nl/en/insights/perspective/marketing-trends-2025.html https://thehill.com/policy/technology/5460663-generative-ai-zero-returns-businesses-mit-report/amp/ https://www.williamflaiz.com/blog/marketing-ops-vs-revops-vs-martech-what-s-the-difference Kathy Divis on LinkedIn Mike Schneider on LinkedIn Healthcare Interactive Conference Reed Smith on LinkedIn Chris Boyer on LinkedIn Chris Boyer website Learn more about your ad choices. Visit megaphone.fm/adchoices
Daniel and Chris sit with Citadel AI's Rick Kobayashi and Kenny Song and unpack AI safety and security challenges in the generative AI era. They compare Japan's approach to AI adoption with the US's, and explore the implications of real-world failures in AI systems, along with strategies for AI monitoring and evaluation.Featuring:Rick Kobayashi – LinkedInKenny Song – LinkedInChris Benson – Website, LinkedIn, Bluesky, GitHub, XDaniel Whitenack – Website, GitHub, XLinks:Citadel AIRegister for upcoming webinars here!
Working Smarter is back for season two! Starting September 3, we're going beyond the hype and headlines to bring you stories about real people using AI to do more of what they love about their jobs. From the F1 track to the kitchen—and even the bottom of a lake—learn how new tools are helping creatives, makers, visionaries, and their teams think big, move faster, and focus on the work that matters most.~ ~ ~Working Smarter is brought to you by Dropbox Dash—the AI universal search and knowledge management tool from Dropbox. Learn more at workingsmarter.ai/dashYou can listen to more episodes of Working Smarter on Apple Podcasts, Spotify, YouTube Music, Amazon Music, or wherever you get your podcasts. To read more stories and past interviews, visit workingsmarter.aiThis show would not be possible without the talented team at Cosmic Standard: producer Dominic Girard, sound engineer Aja Simpson, technical director Jacob Winik, and executive producer Eliza Smith. Special thanks to our illustrators Justin Tran and Fanny Luor, marketing consultant Meggan Ellingboe, and editorial support from Catie Keck. Our theme song was composed by Doug Stuart. Working Smarter is hosted by Matthew Braga. Thanks for listening!
On this episode of Ropes & Gray's Insights Lab's multi-part Multidimensional Data Reversion podcast series, Shannon Capone Kirk and David Yanofsky discuss how artificial intelligence and machine learning are being applied to legal investigations and document reviews. They explore the evolution from traditional search term methods to advanced techniques like predictive coding, continuous active learning, and the emerging role of generative AI (“GenAI”) while demystifying what these techniques are actually doing with your data. The conversation highlights the importance of using plain language when describing these technologies, the critical role of human expertise in refining AI tools, and the practical challenges and efficiencies gained when integrating AI into internal investigations and privilege reviews. Tune in to gain insight into how legal teams are balancing innovation, accuracy, and defensibility as they adopt new data-driven approaches.
Our analysts Adam Jonas and Alex Straton discuss how tech-savvy young professionals are influencing retail, brand loyalty, mobility trends, and the broader technology landscape through their evolving consumer choices. Read more insights from Morgan Stanley.----- Transcript -----Adam Jonas: Welcome to Thoughts on the Market. I'm Adam Jonas, Morgan Stanley's Embodied AI and Humanoid Robotics Analyst. Alex Straton: And I'm Alex Straton, Morgan Stanley's U.S. Softlines Retail and Brands Analyst. Adam Jonas: Today we're unpacking our annual summer intern survey, a snapshot of how emerging professionals view fashion retail, brands, and mobility – amid all the AI advances.It is Tuesday, August 26th at 9am in New York.They may not manage billions of dollars yet, but Morgan Stanley's summer interns certainly shape sentiment on the street, including Wall Street. From sock heights to sneaker trends, Gen Z has thoughts. So, for the seventh year, we ran a survey of our summer interns in the U.S. and Europe. The survey involved more than 500 interns based in the U.S., and about 150 based in Europe. So, Alex, let's start with what these interns think about fashion and athletic footwear. What was your biggest takeaway from the intern survey? Alex Straton: So, across the three categories we track in the survey – that's apparel, athletic footwear, and handbags – there was one clear theme, and that's market fragmentation. So, for each category specifically, we observed share of the top three to five brands falling over time. And what that means is these once dominant brands, as consumer mind share is falling – and it likely makes them lower growth margin and multiple businesses over time. At the same time, you have smaller brands being able to captivate consumer attention more effectively, and they have staying power in a way that they haven't necessarily historically. I think one other piece I would just add; the rise of e-commerce and social media against a low barrier to entry space like apparel and footwear means it's easier to build a brand than it has been in the past. And the intern survey shows us this likely continues as this generation is increasingly inclined to shop online. Their social media usage is heavy, and they heavily rely on AI to inform, you know, their purchases.So, the big takeaway for me here isn't that the big are getting bigger in my space. It's actually that the big are probably getting smaller as new players have easier avenues to exist. Adam Jonas: Net apparel spending intentions rose versus the last survey, despite some concern around deteriorating demand for this category into the back half. What do you make of that result? Alex Straton: I think there were a bit conflicting takes from the survey when I look at all the answers together. So yes, apparel spending intentions are higher year-over-year, but at the same time, clothing and footwear also ranked as the second most category that interns would pull back on should prices go up. So let me break this down. On the higher spending intentions, I think timing played a huge role and a huge factor in the results. So, we ran this in July when spending in our space clearly accelerated. That to me was a function of better weather, pent up demand from earlier in the quarter, a potential tariff pull forward as headlines were intensifying, and then also typical back to school spending. So, in short, I think intention data is always very heavily tethered to the moment that it's collected and think that these factors mean, you know, it would've been better no matter what we've seen it in our space. I think on the second piece, which is interns pulling back spend should prices go up. That to me speaks to the high elasticity in this category, some of the highest in all of consumer discretionary. And that's one of the few drivers informing our cautious demand view on this space as we head into the back half. So, in summary on that piece, we think prices going higher will become more apparent this month onwards, which in tandem with high inventory and a competitive setup means sales could falter in the group. So, we still maintain this cautious demand view as we head into the back half, though our interns were pretty rosy in the survey. Adam Jonas: Interesting. So, interns continue to invest in tech ecosystems with more than 90 percent owning multiple devices. What does this interconnectedness mean for companies in your space? Alex Straton: This somewhat connects to the fragmentation theme I mentioned where I think digital shopping has somewhat functioned as a great equalizer in the space and big picture. I interpret device reliance as a leading indicator that this market diversification likely continues as brands fight to capture mobile mind share. The second read I'd have on this development is that it means brands must evolve to have an omnichannel presence. So that's both in store and online, and preferably one that's experiential focus such that this generation can create content around it. That's really the holy grail. And then maybe lastly, the third takeaway on this is that it's going to come at a cost. You, you can't keep eyeballs without spend. And historical brick and mortar retailers spend maybe 5 to 10 percent of sales on marketing, with digital requiring more than physical. So now I think what's interesting is that brands in my space with momentum seem to have to spend more than 10 percent of sales on marketing just to maintain popularity. So that's a cost pressure. We're not sure where these businesses will necessarily recoup if all of them end up getting the joke and continuing to invest just to drive mind share. Adam, turning to a topic that's been very hot this year in your area of expertise. That's humanoid robots. Interns were optimistic here with more than 60 percent believing they'll have many viable use cases and about the same number thinking they'll replace many human jobs. Yet fewer expect wide scale adoption within five years. What do you think explains this cautious enthusiasm? Adam Jonas: Well actually Alex, I think it's pretty smart. There is room to be optimistic. But there's definitely room to be cautious in terms of the scale of adoption, particularly over five years. And we're talking about humanoid robots. We're talking about a new species that's being created, right? This is bigger than just – will it replace our job? I mean, I don't think it's an exaggeration to ask what does this do to the concept of being human? You know, how does this affect our children and future generations? This is major generational planetary technology that I think is very much comparable to electricity, the internet. Some people say the wheel, fire, I don't know. We're going to see it happen and start to propagate over the next few years, where even if we don't have widespread adoption in terms of dealing with it on average hour of a day or an average day throughout the planet, you're going to see the technology go from zero to one as these machines learn by watching human behavior. Going from teleoperated instruction to then fully autonomous instruction, as the simulation stack and the compute gets more and more advanced. We're now seeing some industry leaders say that robots are able to learn by watching videos. And so, this is all happening right now, and it's happening at the pace of geopolitical rivalry, Sino-U.S. rivalry and terra cap, you know, big, big corporate competitive rivalry as well, for capital in the human brain. So, we are entering an unprecedented – maybe precedented in the last century – perhaps unprecedented era of technological and scientific discovery that I think you got to go back to the European and American Enlightenment or the Italian Renaissance to have any real comparisons to what we're about to see. Alex Straton: So, keeping with this same theme, interns showed strong interest in household robots with 61 percent expressing some interest and 24 percent saying they're very or extremely interested. I'm going to take you back to your prior coverage here, Adam. Could this translate into demand for AI driven mobility or smart infrastructure? Adam Jonas: Well, Alex, you were part of my prior coverage once upon a time. We were blessed with having you on our team for a year, and then you left me… Alex Straton: My golden era. Adam Jonas: But you came back, you came back. And you've done pretty well. So, so look, imagine it's 1903, the Wright Brothers just achieved first flight over the sands at Kitty Hawk. And then I were to tell you, ‘Oh yeah, in a few years we're going to have these planes used in World War I. And then in 1914, we'd have the first airline going between Tampa and St. Petersburg.' You'd say, ‘You're crazy,' right? The beauty of the intern survey is it gives the Morgan Stanley research department and our clients an opportunity to engage that surface area with that arising – not just the business leader – but that arising tech adopter. These are the people, these are the men and women that are going to kind of really adopt this much, much faster. And then, you know, our generation will get dragged into it eventually. So, I think it says; I think 61 percent expressing even some interest. And then 24 [percent], I guess, you know… The vast majority, three quarters saying, ‘Yeah, this is happening.' That's a sign I think, to our clients and capital market providers and regulators to say, ‘This won't be stopped. And if we don't do it, someone else will.' Alex Straton: So, another topic, Generative AI. It should come as no surprise really, that 95 percent of interns use that tool monthly, far ahead of the general population. How do you see this shaping future expectations for mobility and automation? Adam Jonas: So, this is what's interesting is people have asked kinda, ‘What's that Gen AI moment,' if you will, for mobility? Well, it really is Gen AI. Large Language Models and the technologies that develop the Large Language Models and that recursive learning, don't just affect the knowledge economy, right. Or writing or research report generation or intelligence search. It actually also turns video clips and physical information into tokens that can then create and take what would be a normal suburban city street and beautiful weather with smiling faces or whatever, and turn it into a chaotic scene of, you know, traffic and weather and all sorts of infrastructure issues and potholes. And that can be done in this digital twin, in an omniverse. A CEO recently told me when you drive a car with advanced, you know, Level 2+ autonomy, like full self-driving, you're not just driving in three-dimensional space. You're also playing a video game training a robot in a digital avatar. So again, I think that there is quite a lot of overlap between Gen AI and the fact that our interns are so much further down that curve of adoption than the broader public – is probably a hint to us is we got to keep listening to them, when we move into the physical realm of AI too. Alex Straton: So, no more driving tests for the 16-year-olds of the future... Adam Jonas: If you want to. Like, I tell my kids, if you want to drive, that's cool. Manual transmission, Italian sports cars, that's great. People still ride horses too. But it's just for the privileged few that can kind of keep these things in stables. Alex Straton: So, let me turn this into implications for companies here. Gen Z is tech fluent, open to disruption? How should autos and shared mobility providers rethink their engagement strategies with this generation? Adam Jonas: Well, that's a huge question. And think of the irony here. As we bring in this world of fake humans and humanoid robots, the scarcest resource is the human brain, right? So, this battle for the human mind is – it's incredible. And we haven't seen this really since like the Sputnik era or real height of the Cold War. We're seeing it now play out and our clients can read about some of these signing bonuses for these top AI and robotics talent being paid by many companies. It kind of makes, you know, your eyes water, even if you're used to the world of sports and soccer, . I think we're going to keep seeing more of that for the next few years because we need more brains, we need more stem. I think it's going to do; it has the potential to do a lot for our education system in the United States and in the West broadly. Alex Straton: So, we've covered a lot around what the next generation is interested in and, and their opinion. I know we do this every year, so it'll be exciting to see how this evolves over time. And how they adapt. It's been great speaking with you today, Adam. Adam Jonas: Absolutely. Alex, thanks for your insights. And to our listeners, stay curious, stay disruptive, and we'll catch you next time. If you enjoy Thoughts on the Market, please leave us a review wherever you listen and share the podcast with a friend or colleague today.
You got duped.The MIT '95 % of AI pilots fail' study has taken over the internet, and it's one of the worst studies I've ever read. (And I've read thousands.) ↳ So, what's the truth?↳ Is AI a bubble that's about to pop? ↳ Why is this study rubbish? ↳ And how does it impact you? Join us and we'll dish it all.Newsletter: Sign up for our free daily newsletterMore on this Episode: Episode PageJoin the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineupWebsite: YourEverydayAI.comEmail The Show: info@youreverydayai.comConnect with Jordan on LinkedInTopics Covered in This Episode:MIT AI Study Claims 95% Failure RateBreakdown of MIT Study MethodologyImpact of Viral MIT AI Study HeadlinesFlaws in MIT Study ROI MeasurementComparison With Reputable AI ROI StudiesMIT Study's Biased Participant SelectionNanda Project Marketing in MIT ReportFive Major Red Flags in MIT AI ResearchBusiness Implications of Flawed AI Pilots DataHow Media Sensationalizes AI Study ResultsTimestamps:00:00 "MIT AI Study Critique"04:16 AI Investments Trigger Stock Market Decline06:37 "Host's Background Overview"10:58 Flawed AI Study Critique13:28 MIT Study Highlights AI Implementation Challenges18:58 AI Work Trends & ROI Insights20:17 "Crossing the Gen AI Divide"23:25 Flawed Study with Misleading Claims29:34 "Uncritical Reposting Spurs Fake Study"30:30 "Read Studies, Not Summaries"Keywords:MIT AI study, 95% AI pilot failure, enterprise AI pilots, generative AI ROI, AI pilot success rate, AI project failure, state of AI in business, gen AI divide, MIT Media Lab, AI investment, AI implementation challenges, AI return on investment, AI research methodology, AI study critique, AI marketing, Nanda project, AI vendor solutions, agentic web, MCP protocol, A2A protocol, Fortune article, AI media coverage, stock market impact, NVIDIA stock drop, Palantir, ARM stock, qualitative AI data, AI structured interviews, AI industry surveys, IDC AI research, Snowflake ESG report, McKinsey AI analysis, Microsoft Work Trend Index, Boston Consulting Group AI study, AI adoption rates, enterprise AI transformation, sample size in AI studies, research limitations, AI productivity impact, AI workflow automation, AI business decisions, AI bubble, AI reporting in media, AI pilot timeline, enterprise AI tools, AI agent capabilities, AI autonomy, custom AI solutions, AI study bias, marketing disguised as research, sensationalized AI studies.Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info) Ready for ROI on GenAI? Go to youreverydayai.com/partner
This week on Tacos & Tech, Neal sits down with Drew Wilson, longtime North County founder, designer, engineer, and community builder. From launching products solo to landing a GoDaddy acquisition, Drew's track record speaks for itself. Now he's back with a new company, Opacity, and relaunching his beloved creative conference, ValioCon, right in the heart of Oceanside. In this episode, Drew shares his journey from early design days and Flash websites to building Plasso, navigating acquisition, and diving headfirst into GenAI-powered product development. He also gives us a behind-the-scenes look at what's coming with Opacity, why he's bullish on version control for designers, and how tools like Midjourney and Claude are shaping his build stack. Key Points Building Plasso as a solo founder and selling to GoDaddy Going through YC while still working at GoDaddy Launching (and shutting down) a modern digital bank The origins of ValioCon and why it's back after 9 years The inspiration behind Opacity and the future of visual coding Building products in the GenAI era — what's actually different His go-to North County burrito and tales from Cave Week Links & Resources Learn more about Opacity Grab your ticket to ValioCon Connect with Drew & Neal Follow Drew on Linkedin & X Follow Neal: LinkedIn & X
AI-powered video intelligence is transforming the retail environment from one of surveillance to one of support. In this episode, Joe Troy, Senior Manager of Site Risk at Amazon, shares how generative AI is helping organizations not only reduce shrink and detect fraud, but also foster transparency, trust, and operational efficiency across frontline teams. Joe explains how AI can detect patterns that traditional systems miss—like organized fraud rings and duplicate refund behaviors—while still relying on human judgment to determine context and action. He highlights the importance of building a culture of “mutual visibility,” where employees feel supported rather than monitored, and risk teams evolve from gatekeepers into trusted business advisors. Want to share your AI adoption story with executive peers? Click emerj.com/expert2 for more information and to be a potential future guest on the ‘AI in Business' podcast! If you've enjoyed or benefited from some of the insights of this episode, consider leaving us a five-star review on Apple Podcasts, and let us know what you learned, found helpful, or liked most about this show!
How can People Analytics shift from being a cost center to being a profit center?Why is it critical for HR leaders to transform workforce insights into concrete strategic initiatives?My guest on this episode is Cole Napper, VP Research, Innovation, & Talent Insights at Lightcast and author of People Analytics: Using Data-Driven HR and Gen AI as a Business Asset”During our conversation Cole and I discuss:How generative AI is democratizing data-driven decision making in HR.Why Cole believes more People Analytics leaders could rise to the CHRO role in the next decade.Why people analytics teams need to intentionally link their work to tangible business outcomes.Why generative AI will disrupt traditional HR operating models.Why business acumen isn't just nice to have—it's the fundamental requirement for all HR professionals including people analytics. Connecting with Cole NapperConnect with Cole on LinkedInLearn more about Cole and his new book, “People Analytics: Using Data-Driven HR and Gen AI as a Business Asset”Episode Sponsor: Next-Gen HR Accelerator - Learn more about this best-in-class leadership development program for next-gen HR leadersHR Leader's Blueprint - 18 pages of real-world advice from 100+ HR thought leaders. Simple, actionable, and proven strategies to advance your career.Succession Planning Playbook: In this focused 1-page resource, I cut through the noise to give you the vital elements that define what “great” succession planning looks like.
On this episode, I cover problems caused by recent Windows Updates, Gartner's update to the DaaS Magic Quadrant, Windows 10 ESU concerns and much more! Reference Links: https://www.rorymon.com/blog/windows-update-woes-xai-brings-suit-against-apple-and-openai-is-genai-a-bubble/
What separates a legal department that saves money from one that builds competitive advantage? Two powerhouse CLOs, Rishi Varma (Cargill) and Tim Fraser (Toshiba America) - sit down with David Cowen to unpack the shift from legal risk managers to business growth drivers. If you're a legal leader, strategist, or tech-savvy operator, this is essential listening. The future isn't coming. It's here. And these leaders are already in it. Key Topics Covered: The AI Dividend: What it is, how to measure it, and why it's your next performance metric Data as Infrastructure: Why CLOs are racing to eliminate the “search function” and build a legal “brain” OKRs That Matter: How top legal departments align KPIs to business growth, not compliance checklists Tech Stack in Action: Inside the tools (Copilot, GenAI) that are driving real productivity gains today Talent Evolution: What CLOs actually look for in 2025, critical thinking, adaptability, and strategic fluency Cross-Functional Power Moves: Why your next big win requires partnering with your CIO (or CEO) From Perfection to Performance: Why "excellence over perfection" is the new rule of law
Google had so many wins in AI this week that it was hard to keep track.Yet their competitors.... not so much. ↳ OpenAI has already shifted the conversation to GPT-6 after a rocky GPT-5 rollout. ↳ Meta keeps reshuffling their AI teams and is reportedly on an AI hiring freeze. ↳ And Apple's AI prospects are getting so bleak that they might turn to their biggest competitor to get the job done.Don't waste hours a day trying to keep up with AI. Join Everyday AI on Mondays for the AI News That MattersNewsletter: Sign up for our free daily newsletterMore on this Episode: Episode PageJoin the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineupWebsite: YourEverydayAI.comEmail The Show: info@youreverydayai.comConnect with Jordan on LinkedInTopics Covered in This Episode:GPT-6 Rumors and Roadmap RevealedSam Altman Admits GPT-5 Rollout IssuesElon Musk, Meta, and OpenAI Legal ClashMusk's $97B OpenAI Takeover Bid ExposedMIT Report: 95% AI Pilots Fail ROIAI Bubble Concerns Hit Enterprise AdoptionMicrosoft AI Chief Warns of Sentient AI RisksGoogle AI Mode Expands with Agentic FeaturesGemini for Home to Replace Google AssistantApple in Talks to Use Google Gemini for SiriMeta Freezes AI Hiring Amid RestructuringMeta and Midjourney Announce AI Visuals DealGoogle Teases Gemini Nano Banana AI ModelOpenAI Project Memory Enhances User ExperienceGoogle Photos Adds Conversational AI EditingOpenAI Integrates Gmail and Calendar SupportPerplexity and Eleven Labs Roll Out AI AgentsTimestamps:00:00 "OpenAI's GPT-5 Rollout Issues"05:44 Musk's Failed Bid for OpenAI07:20 Musk Sues Over OpenAI Structure13:32 "Risks of Attributing Sentience to AI"16:03 Google AI Offers Automated Reservations20:46 Google's Premium Features, Apple Lags22:29 Apple Considers AI Tech from Competitors27:16 Meta's AI Hiring and Cost Concerns31:05 AI Updates: Major Memory Innovations31:57 AI Enhances Gmail and CalendarKeywords:GPT-6, GPT 6 rumors, OpenAI, Sam Altman, GPT-5 rollout, GPT-5 backlash, GPT-4o, emotional bonds with AI, unhealthy AI relationships, model safety guardrails, user-controlled tone settings, AI capacity limits, GPU shortages, trillion-dollar data centers, Elon Musk, XAI, Grok, Meta, Mark Zuckerberg, OpenAI takeover bid, public benefits corporation, antitrust scrutiny, MIT AI study, generative AI pilots, AI ROI, enterprise AI adoption, AI bubble, AI workflow issues, AI psychosis, seemingly conscious AI, SCAI, Mustafa Suleiman, Microsoft AI, sentient AI risks, Google AI model updates, agentic search features, AI mode, Google Search, Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info) Ready for ROI on GenAI? Go to youreverydayai.com/partner
Interview with Harish Peri from Okta Oktane Preview: building frameworks to secure our Agentic AI future Like it or not, Agentic AI and protocols like MCP and A2A are getting pushed as the glue to take business process automation to the next level. Giving agents the power and access they need to accomplish these lofty goals is going to be challenging, from a security perspective. How do put AI agents in the position to perform broad tasks autonomously without granting them all the privileges? How do we avoid making AI agents a gold mine for attackers - the first place they stop once they hack into our companies? These are some examples of the questions Okta aims to answer at this year's Oktane event, and we aim to kick off the conversations a little early - with this interview! Segment Resources: Check out securityweekly.com/oktane for all our live coverage during the event this year! More information about the event and how you can attend can be found here: https://www.okta.com/oktane/ AI at Work 2025: Securing the AI-powered workforce Topic - Indirect Prompt Injection Getting Out of Hand Reports of indirect prompt injection issues have been around for a while. Of particular note was Michael Bargury's Living off Microsoft Copilot presentation from Black Hat USA 2024. Simply sending an email to a Copilot user could make bad stuff happen. Now, at Black Hat 2025, we've got more: the ability to plunder any data resource connected to ChatGPT (they call these integrations "Connectors") from Tamir Ishay Sharbat at Zenity Labs. The research is titled AgentFlayer: ChatGPT Connectors 0click Attack. Looks like Google Jules is also vulnerable to what the Embrace the Red blog is calling invisible prompts. Sourcegraph's Amp Code is also vulnerable to the same attack, which encodes instructions to make them invisible. What's really going to ruffle feathers is the fact that all these companies know this stuff is possible, but don't seem to be able to figure out how to prevent it. Ideally, we'd want to be able to distinguish between intended instruction and instructions injected via attachments or some other means outside of the prompt box. I guess that's easier said than done? News Finally, in the enterprise security news, Drones are coming for you… to help? One of the most powerful botnets ever goes down Phishing training is still pointless Microsoft sets an alarm on its phone for 8 years from now to do post-quantum stuff vulns galore in commercial ZTNA apps GenAI projects are struggling to make it to production Adblockers could be made illegal - in Germany Windows is getting native Agentic support Automating bug discovery AND remediation? Public service announcement: time is running out for Windows 10 All that and more, on this episode of Enterprise Security Weekly. Visit https://www.securityweekly.com/esw for all the latest episodes! Show Notes: https://securityweekly.com/esw-421
The Automotive Troublemaker w/ Paul J Daly and Kyle Mountsier
Shoot us a Text.Episode #1129: Today Steve Greenfield joins Paul again as we spotlight the 2025 Automotive News All-Stars, including a few friends of the show. We also talk about some industry icons' new digs. And a new survey shows just how deep Gen AI has made it into your customers' shopping habits.This year's 2025 Automotive News All-Stars spotlight not only industry excellence but also creative vision, resilience, and innovation across every corner of automotive. From digital stardom to strategic investment and relentless dealership growth, here are three stories that stood out — with a few personal shoutouts we just had to include.Grace Kerber and Ben Bushen went from a whiteboard in upstate New York to GM headquarters thanks to their viral mockumentary series "The Dealership." The duo's humor, heart, and authenticity earned them a national audience — and a new role for Grace at GM.Bill Cariss keeps Holman Growth Ventures in the fast lane, securing a minority stake in FM Capital's $240M fund to scale automotive and mobility tech investments. “We are still going to do direct investments...but they are not going to be near the size of the funds that we will own with partners,” he said.Teddy Morse has taken Ed Morse Automotive from 12 stores to over 50 with cowboy boots, Harley-Davidsons, and a deeply personal leadership style. “You can lose the fact that there's a romantic side to this business,” Morse said. “To what we do to help people get their first car; to help people get their dream car.”Whether it's Grace's storytelling, Bill's venture savvy, or Teddy's boots-on-the-ground heart, these All-Stars prove that auto leadership is anything but average.A new player in the inventory sourcing space is making waves as sellmyride brings on a stacked roster of industry veterans. Unlike traditional lead-gen platforms, sellmyride is focused on helping dealers consistently source inventory from private sellers — a move designed to keep vehicles in local markets and out of national players' hands.Chip Perry, founding CEO of Autotrader and former TrueCar chief, has joined sellmyride as chairman, calling it the best dealer-to-public acquisition tool he's seen in 25 years.Steve Greenfield's Automotive Ventures is backing the company as part of a broader raise to support U.S. expansion.Robbie Bezdek, a Cox and iHeartMedia alum, brings marketplace and media expertise to help dealers acquire 50+ units per month from the public.The platform is designed to be “always on,” dealership-branded, and built for consistent private-party sourcing rather than ad hoc lead chasing.“Why shouldn't our clients capture those cars?” Perry said. “That's what we hope, that's what we dream about and that's what we're inspired to do.”A new survey from Omnisend shows just how deeply generative AI has embedded itself in e-commerce habits. Over half of American online shoppers now turnJoin Paul J Daly and Kyle Mountsier every morning for the Automotive State of the Union podcast as they connect the dots across car dealerships, retail trends, emerging tech like AI, and cultural shifts—bringing clarity, speed, and people-first insight to automotive leaders navigating a rapidly changing industry.Get the Daily Push Back email at https://www.asotu.com/ JOIN the conversation on LinkedIn at: https://www.linkedin.com/company/asotu/
Interview with Harish Peri from Okta Oktane Preview: building frameworks to secure our Agentic AI future Like it or not, Agentic AI and protocols like MCP and A2A are getting pushed as the glue to take business process automation to the next level. Giving agents the power and access they need to accomplish these lofty goals is going to be challenging, from a security perspective. How do put AI agents in the position to perform broad tasks autonomously without granting them all the privileges? How do we avoid making AI agents a gold mine for attackers - the first place they stop once they hack into our companies? These are some examples of the questions Okta aims to answer at this year's Oktane event, and we aim to kick off the conversations a little early - with this interview! Segment Resources: Check out securityweekly.com/oktane for all our live coverage during the event this year! More information about the event and how you can attend can be found here: https://www.okta.com/oktane/ AI at Work 2025: Securing the AI-powered workforce Topic - Indirect Prompt Injection Getting Out of Hand Reports of indirect prompt injection issues have been around for a while. Of particular note was Michael Bargury's Living off Microsoft Copilot presentation from Black Hat USA 2024. Simply sending an email to a Copilot user could make bad stuff happen. Now, at Black Hat 2025, we've got more: the ability to plunder any data resource connected to ChatGPT (they call these integrations "Connectors") from Tamir Ishay Sharbat at Zenity Labs. The research is titled AgentFlayer: ChatGPT Connectors 0click Attack. Looks like Google Jules is also vulnerable to what the Embrace the Red blog is calling invisible prompts. Sourcegraph's Amp Code is also vulnerable to the same attack, which encodes instructions to make them invisible. What's really going to ruffle feathers is the fact that all these companies know this stuff is possible, but don't seem to be able to figure out how to prevent it. Ideally, we'd want to be able to distinguish between intended instruction and instructions injected via attachments or some other means outside of the prompt box. I guess that's easier said than done? News Finally, in the enterprise security news, Drones are coming for you… to help? One of the most powerful botnets ever goes down Phishing training is still pointless Microsoft sets an alarm on its phone for 8 years from now to do post-quantum stuff vulns galore in commercial ZTNA apps GenAI projects are struggling to make it to production Adblockers could be made illegal - in Germany Windows is getting native Agentic support Automating bug discovery AND remediation? Public service announcement: time is running out for Windows 10 All that and more, on this episode of Enterprise Security Weekly. Visit https://www.securityweekly.com/esw for all the latest episodes! Show Notes: https://securityweekly.com/esw-421
Interview with Harish Peri from Okta Oktane Preview: building frameworks to secure our Agentic AI future Like it or not, Agentic AI and protocols like MCP and A2A are getting pushed as the glue to take business process automation to the next level. Giving agents the power and access they need to accomplish these lofty goals is going to be challenging, from a security perspective. How do put AI agents in the position to perform broad tasks autonomously without granting them all the privileges? How do we avoid making AI agents a gold mine for attackers - the first place they stop once they hack into our companies? These are some examples of the questions Okta aims to answer at this year's Oktane event, and we aim to kick off the conversations a little early - with this interview! Segment Resources: Check out securityweekly.com/oktane for all our live coverage during the event this year! More information about the event and how you can attend can be found here: https://www.okta.com/oktane/ AI at Work 2025: Securing the AI-powered workforce Topic - Indirect Prompt Injection Getting Out of Hand Reports of indirect prompt injection issues have been around for a while. Of particular note was Michael Bargury's Living off Microsoft Copilot presentation from Black Hat USA 2024. Simply sending an email to a Copilot user could make bad stuff happen. Now, at Black Hat 2025, we've got more: the ability to plunder any data resource connected to ChatGPT (they call these integrations "Connectors") from Tamir Ishay Sharbat at Zenity Labs. The research is titled AgentFlayer: ChatGPT Connectors 0click Attack. Looks like Google Jules is also vulnerable to what the Embrace the Red blog is calling invisible prompts. Sourcegraph's Amp Code is also vulnerable to the same attack, which encodes instructions to make them invisible. What's really going to ruffle feathers is the fact that all these companies know this stuff is possible, but don't seem to be able to figure out how to prevent it. Ideally, we'd want to be able to distinguish between intended instruction and instructions injected via attachments or some other means outside of the prompt box. I guess that's easier said than done? News Finally, in the enterprise security news, Drones are coming for you… to help? One of the most powerful botnets ever goes down Phishing training is still pointless Microsoft sets an alarm on its phone for 8 years from now to do post-quantum stuff vulns galore in commercial ZTNA apps GenAI projects are struggling to make it to production Adblockers could be made illegal - in Germany Windows is getting native Agentic support Automating bug discovery AND remediation? Public service announcement: time is running out for Windows 10 All that and more, on this episode of Enterprise Security Weekly. Show Notes: https://securityweekly.com/esw-421
React to a message and trigger a workflow. That might be fun if you want to dedicate a certain emoji for 'volunteering' for a task. In other news, some Gen-AI features of the M365 Copilot app Create module will be made available to unlicensed users. What else will Daniel and Darrell discuss? – Emoji Reactions Workflows in Microsoft Teams - [Copilot Extensibility] Admins can manage ownerless Copilot agents with new lifecycle controls - Meeting Search in MS Teams Desktop - Gen AI capabilities in the Create module of Microsoft 365 Copilot app coming to all Copilot Chat users - Microsoft Teams: Private channels increased limits and transition to group compliance - Streamlined file preview experience in Microsoft 365 Copilot app for iOS Join Daniel Glenn and Darrell as a Service Webster as they cover the latest messages in the Microsoft 365 Message Center. Check out Darrell & Daniel's own YouTube channels at: Darrell - https://youtube.com/modernworkmentor Daniel - https://youtube.com/DanielGlenn
Send us a textArun Saigal is the Co-Founder and CEO of Thunkable, the no-code platform where anyone — from students to startups to enterprise teams — can build powerful, native mobile apps. With an intuitive drag-and-drop interface and integrated Gen AI tools, Thunkable empowers creators to go from idea to real, publishable app without writing a single line of code. Arun has an S.B. and M.Eng. in Computer Science and Electrical Engineering from MIT and has held various leading roles at technology companies, including Quizlet, Khan Academy, Aspiring Minds, and Google.
You think using AI is your moat? Nope. Just using LLMs isn't enough to power your company's AI success. But do you know the real fuel? Having your data right is the ACTUAL key. So how do you do it? And how does your company's data strategy change with agentic AI? Find out from Deloitte's US Chief Data Analytics Officer, Ashish Verma.Newsletter: Sign up for our free daily newsletterMore on this Episode: Episode PageJoin the discussion: Have a question? Join the convo here.Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineupWebsite: YourEverydayAI.comEmail The Show: info@youreverydayai.comConnect with Jordan on LinkedInTopics Covered in This Episode:Transformative Data Strategy for AI SuccessImportance of Data Strategy in AIDeloitte's Data Marketplace ApproachMulti-Agent Orchestration ChallengesStructured vs. Unstructured Data in AISynthetic Data and AI TransformationAgentic AI and Data Labeling EssentialsAI's Impact on Business Value ChainTimestamps:00:00 "AI Success Requires Data Strategy"05:27 Data Integration and Utilization Insights10:31 Contextual Data Marketplace Evolution13:06 Structuring Unstructured AI Insights17:02 Agent Reasoning and Orchestration Insights20:37 Data Annotation Challenges23:39 AI's Impact on Industry Evolution26:09 "Data Strategy: Begin with the End"Keywords:transformative data strategy, AI success, generative AI, non-technical people, data teams, data strategy, business leaders, companies, careers, unedited podcast, livestream, Deloitte, US chief data and analytics officer, data analytics, GenAI, data experiments, third-party data, synthetic data, data marketplace, data concierge, chief data officer, compute environment, deterministic, probabilistic, AI transformation, digital transformation, data minder, CFO, CMO, public domain data, business partner data, metadata, business glossary, technical catalog, agentic AI, multi-agent orchestration, agent registry, agent orchestration, open standard protocols, economic AI, digital transformation strategy, data advantages, structured data, unstructured data, hybrid data, PowerPoint, staffing optimization, resource management, query engine, relevance-ranked search, annotation, data regulation, governance, data procurement, data curation, data feeds, data platforms, information indexing, future predictions.Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Join Simtheory (STILLRELEVANT): https://simtheory.ai----CHAPTERS:00:00 - Simtheory Podcast Ad lolz01:59 - A Not So Memorable Week, Nano Banana & Google AI Announcements15:10 - New Podcast MCP lolz: crime podcasts33:47 - Qwen Image Edit: Does it live up to hype?37:54 - MCP UI: Output types, future of apps with MCP UIs54:32 - No results from Gen AI investments in the Enterprise (MIT report)1:08:32 - How to Hire AI Natives? Hiring in an AI world...----Thanks for your support and listening... see you next week xox
On today's podcast episode, we discuss our ‘very specific, but highly unlikely' predictions for the future of digital in 2026 and beyond. Why browsers will become the new AI battleground, what does it mean if agentic AI doesn't take over shopping, and can GenAI actually lead to more of the jobs it can easily destroy? Join Senior Director of Podcasts and host, Marcus Johnson, Senior Director of Briefings, Jeremy Goldman, Principal Analyst, Sara Marzano, and Vice President of Content, Paul Verna. Listen everywhere and watch on YouTube and Spotify. To learn more about our research and get access to PRO+ go to EMARKETER.com Follow us on Instagram at: https://www.instagram.com/emarketer/ For sponsorship opportunities contact us: advertising@emarketer.com For more information visit: https://www.emarketer.com/advertise/ Have questions or just want to say hi? Drop us a line at podcast@emarketer.com For a transcript of this episode click here: https://www.emarketer.com/content/podcast-what-if-future-of-digital-browsers-ai-battleground-agentic-shopping-behind-numbers © 2025 EMARKETER Got an ecommerce challenge? Awin has you covered. With Awin's affiliate platform, brands of all sizes can unlock endless marketing opportunities, reach consumers everywhere, and choose partners that fit their goals. Control costs, customize programs, and drive real results. Learn more at awin.com/emarketer.
The Automotive Troublemaker w/ Paul J Daly and Kyle Mountsier
Shoot us a Text.Episode #1127: We're diving into major tariff relief for EU carmakers, Sonic Automotive's EchoPark outpacing expectations, and how generative AI is rewriting the rules for online retail traffic. Show Notes with links:The automotive industry is watching closely as the U.S. and EU hammer out a framework deal that could bring massive tariff relief for European automaker. The fine print could mean big savings and new market access.The EU and U.S. announced a new trade framework aiming to reduce U.S. auto tariffs from 27.5% to 15%.Relief would be retroactive to August 1 if the EU introduces enabling legislation this month.In exchange, the EU pledged to cut tariffs on U.S. industrial goods and increase access for American agricultural products.The deal may expand to include mutual recognition of auto safety standards and influence future U.S. agreements with Japan and South Korea.EU Trade Commissioner Maros Sefcovic emphasized urgency: “It is the European Commission's firm intention to make proposals by the end of this month.”Sonic Automotive just dropped its Q2 2025 earnings, and while a hefty impairment charge dented the bottom line, EchoPark's performance made sure the story stayed bullish.Total revenue reached a record $3.7B, up 6% YoY.Despite a $172.4M impairment charge, adjusted EPS surged 49% to $2.19, beating expectations.EchoPark led the charge with $62.1M in gross profit (+22%) and a 679% increase in adjusted segment income.Segment income rose from $3.9M to $11.7M — a 200% leap.“EchoPark is just on fire,” said Sonic President Jeff Dyke.Adobe reports a massive 4,700% YoY increase in U.S. retail site traffic driven by generative AI platforms like ChatGPT and Gemini — a clear signal that AI is transforming the online shopping journey.Traffic from gen-AI sources has grown monthly since the 2024 holiday season.90% of users trust gen-AI recommendations; bounce rates are down 27%.Visits from AI referrals are 10% more engaged, with 32% longer durations.The conversion gap between AI and non-AI traffic has shrunk from 49% in January to 23% in July.“It's allowing a very optimized, urgent, efficient journey,” said Adobe's Vivek Pandya.0:00 Intro with Paul J Daly and Kyle Mountsier1:35 Next week, Paul and Erroll Bomar III will be at NAMAD next week2:38 EU-US Finalizing New Trade Deal4:35 EchoPark Boosts Sonic's Q2 Earnings6:52 4700% Increase In Retail Traffic From GenAI SitesJoin Paul J Daly and Kyle Mountsier every morning for the Automotive State of the Union podcast as they connect the dots across car dealerships, retail trends, emerging tech like AI, and cultural shifts—bringing clarity, speed, and people-first insight to automotive leaders navigating a rapidly changing industry.Get the Daily Push Back email at https://www.asotu.com/ JOIN the conversation on LinkedIn at: https://www.linkedin.com/company/asotu/
Research First Approach to AI Tools with Short AnswerIn this episode of My EdTech Life, I sit down with Adam and Alexa Sparks, the husband-and-wife founders of Short Answer, a writing platform built to strengthen peer-to-peer learning and classroom dialogue.We dive into their journey from classrooms and Stanford labs to building an edtech startup that puts pedagogy first. Adam shares how a tough reception at a ResearchED conference burst the EdTech bubble and reshaped their vision for AI in schools. Alexa explains why her research left her more skeptical of AI than when she started, and how that skepticism helps keep Short Answer grounded in solid pedagogy.Together, they open up about:Why EdTech needs more dialectical thinkingHow Short Answer blends AI feedback with student discourseTheir new partnership with EduProtocols and what it means for teachersThe tension between tool-driven conferences and pedagogy-first practicesWhat's next on the roadmap for Short AnswerThis conversation is for every educator, leader, or EdTech enthusiast who wants to see technology serve teachers and students—not replace them.⏱️ Time Stamps0:00 – Welcome and sponsor shoutouts2:00 – Adam and Alexa's journey into EdTech7:00 – Why EdTech needs dialectical thinking10:00 – ResearchED pushback and bursting the EdTech bubble15:00 – From hype to caution: AI in writing feedback20:00 – Short Answer's approach to peer-to-peer learning27:00 – Tool-first vs pedagogy-first conferences31:00 – Partnering with EduProtocols37:00 – New features: Quick Write and Pen Pals43:00 – What's next for Short Answer45:00 – Edu Kryptonite: time and the EdTech bubble49:00 – Billboard messages and who they'd trade places with50:00 – Closing thoughts and final shoutouts
Is enterprise AI in danger? In episode 69 of Mixture of Experts, host Tim Hwang is joined by Marina Danilevsky, Nathalie Baracaldo and Sandi Besen to debrief MIT's report on gen AI pilots. Next, GPT-5 has a hidden system prompt? Then, we revisit the conversation about chain of thought (CoT) reasoning with our researchers. Are large reasoning models not thinking straight? Finally, Anthropic announced Claude will close down "distressing” conversations and we debate AI welfare. All that and more on today's episode of Mixture of Experts. 00:00 – Intro 1:13 – US Open, Meta restructuring Superintelligence lab and Robot Olympics 3:11 – Gen AI pilots fail 11:09 – GPT-5's hidden prompt revealed 22:47 – Reasoning model flaws 33:55 – Claude closing chats The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity. Subscribe to the Think newsletter → https://www.ibm.com/account/reg/us-en/signup?formid=news-urx-52120 Learn more about artificial intelligence → https://www.ibm.com/think/artificial-intelligence Visit Mixture of Experts podcast page to get more AI content → https://www.ibm.com/think/podcasts/mixture-of-experts
México alcanza un máximo histórico de inversión extranjera en el 2T, mientras Claudia Sheinbaum rechaza un operativo bilateral con la DEA y el INE cierra sin pruebas el caso de Pío López Obrador. Una corte libera a Donald Trump de una multa civil, avanza el acuerdo comercial EE. UU.–UE y persisten los retos para la paz en Ucrania. Los Beatles anuncian “Anthology 4”. En mercados: Walmart cae, el S&P 500 liga pérdidas y las tecnológicas retroceden. Israel inicia acciones en Gaza y Moscú condiciona garantías para Ucrania. Además, Texas redibuja su mapa electoral.Este episodio es presentado por STRTGY y EVA (Enterprise Virtual Analyst), plataforma que integra analítica avanzada, IA generativa y geointeligencia para optimizar ventas, inventarios y rentabilidad. Implementación rápida, pronósticos a 30/60/90 días y un asistente GenAI que responde en segundos. Visita su sitio web y solicita una demo gratuita hoy.Recibe gratis nuestro newsletter con las noticias más importantes del día.Si te interesa una mención en El Brieff, escríbenos a arturo@brieffy.com Hosted on Acast. See acast.com/privacy for more information.
Danny Liu shares a different way to think about AI and assessment on episode 584 of the Teaching in Higher Ed podcast. Quotes from the episode Our students are presented with this massive array of things they could choose from. They may not know the right things to choose or the best things to choose. And our role as educators is to kind of guide them in trying to find the most healthy options from the menu to choose from. -Danny Liu People want to give their students clarity. They want to give their students a bit of guidance on how to approach AI, what is going to be helpful for them for learning and not helpful for learning. -Danny Liu There is no way to really know if the rules that you're putting in place are going to be followed by students, and it doesn't mean that we need to detect them or surveil them more when they're doing their assignments. -Danny Liu We need to accept the reality that students could be using AI in ways that we don't want them to be using AI if they're not in front of us. -Danny Liu Not everyone lies. Most of our students want to do the right thing. They want to learn, but they have the temptation of AI there that is saying, I can do this work for you. Just click, just chat with me. -Danny Liu Our role as teachers is not to be cops, it's to teach and therefore to be in a position where we can trust you and help you make the right choice. -Danny Liu Resources Menus, not traffic lights: A different way to think about AI and assessments, by Danny Liu Talk is cheap: why structural assessment changes are needed for a time of GenAI, by Thomas Corbin,Phillip Dawson, &Danny Liu What to do about assessments if we can't out-design or out-run AI? by Danny Liu and Adam Bridgeman Course: Welcome to AI for Educators from the University of Sydney Whitepaper: Generative AI in Higher Education: Current Practices and Ways Forward, by Danny Y.T. Liu, Simon Bates Five myths about interactive oral assessments and how to get started, by Eszter Kalman, Benjamin Miller and Danny Liu Interactive Oral Assessment in practice, by Leanne Stevenson, Benjamin Miller and Clara Sitbon ‘Tell me what you learned': oral assessments and assurance of learning in the age of generative AI, by Meraiah Foley, Ju Li Ng and Vanessa Loh Interactive Oral Assessments: A New but Old Approach to Assessment Design from the University of South Australia Interactive oral assessments from the University of Melbourne Long live RSS Feeds New AI RSS Feed New AI RSS Page Broken: How Our Social Systems are Failing Us and How We Can Fix Them by Paul LeBlanc
It's YOUR time to #EdUpIn this episode, part of our Academic Integrity Series, sponsored by Pangram Labs,YOUR guest is Dr. Paul Krouss, Teaching Professor & Faculty Lead for Innovative Pedagogies, Washington State University VancouverYOUR cohost is Bradley Emi , Cofounder & CTO, Pangram LabsYOUR host is Elvin FreytesHow does Dr. Krouss define academic integrity & why does he emphasize student intent as a crucial factor? What makes Washington State University Vancouver unique with its 40% first-generation students & non-residential campus model? How is Dr. Krouss approaching AI integration in math education despite current limitations? Listen in to #EdUpThank YOU so much for tuning in. Join us on the next episode for YOUR time to EdUp!Connect with YOUR EdUp Team - Elvin Freytes & Dr. Joe Sallustio● Join YOUR EdUp community at The EdUp ExperienceWe make education YOUR business!P.S. Support the podcast trusted by higher ed leaders. Get early, ad-free access & exclusive leadership content by supporting Elvin & Joe for only $5.99 a month or $44.99 a year. YOU can also donate or gift a subscription at edupexperience.com
If you like what you hear, please subscribe, leave us a review and tell a friend!
Today's guest is Amit Gupta, Chief Digital Officer, Life Sciences Manufacturing Industry, at Danaher. Amit returns to the program to share the operational playbook behind building enterprise-ready AI infrastructure. While AI headlines tend to focus on models, Amit emphasizes that success begins with what's underneath — the data. He outlines a four-part architecture that includes data aggregation, integration, transformation, and harnessing, walking listeners through how Danaher built a system that enables — not constrains — AI deployment. He also explores how tiered storage strategies address not just technical needs, but real-world challenges in compliance, cost control, and security. Want to share your AI adoption story with executive peers? Click emerj.com/expert2 for more information and to be a potential future guest on the ‘AI in Business' podcast! This episode is sponsored by Pure Storage. Learn how brands work with Emerj and other Emerj Media options at emerj.com/ad1.
Dr. Rebecca Portnoff generates awareness of the threat landscape, enablers, challenges and solutions to the complex but addressable issue of online child sexual abuse. Rebecca and Kimberly discuss trends in online child sexual abuse; pillars of impact and harm; how GenAI expands the threat landscape; personalized targeting and bespoke abuse; Thorn's Safety by Design Initiative; scalable prevention strategies; technical and legal barriers; standards, consensus and commitment; building better from the beginning; accountability as an innovative goal; and not confusing complex with unsolvable. Dr. Rebecca Portnoff is the Vice President of Data Science at Thorn, a non-profit dedicated to protecting children from sexual abuse. Read Thorn's seminal Safety by Design paper, bookmark the Research Center to stay updated and support Thorn's critical work by donating here. Related Resources Thorn's Safety by Design Initiative (News): https://www.thorn.org/blog/generative-ai-principles/ Safety by Design Progress Reports: https://www.thorn.org/blog/thorns-safety-by-design-for-generative-ai-progress-reports/ Thorn + SIO AIG-CSAM Research (Report): https://cyber.fsi.stanford.edu/io/news/ml-csam-report A transcript of this episode is here.
Hosted by David Cowen | Presented by Steno Live at ILTACON 2025, in this candid conversation, Shannon Bales, Litigation Support Senior Manager at Munger, Tolles & Olson, shares how he's preparing his team for the AI-driven future, by turning ticket-takers into consultants. From tools like ChatGPT, Copilot, Harvey, and Claude, to the foundational skills that matter most (language, curiosity, and communication), Shannon talks shop about what it really takes to lead through change in today's legal tech landscape. If you manage teams, advise on tools, or just want to sharpen your edge, this one's for you. Key Topics Covered: Why Shannon trains his team to be consultants, not just executors ChatGPT, Claude, Copilot, Harvey, when to use what, and why Why there's no one AI winner (yet) and what due diligence really means From curiosity to clarity: why communication is the real AI skill How GenAI shifts the legal conversation and why workflows must follow Why legal tech leadership today means being agile, patient, and connected Shannon's weekend writing practice and why he's documenting GenAI's foundation for legal This Episode is presented by Steno: Smarter transcripts. Faster delivery. Built for modern legal teams.
Hosted by David Cowen | Presented by Steno What does it take to get in “the room where it happens”? Melissa Faragasso, a fifth-year associate at Cleary Gottlieb, didn't wait for permission - she stepped forward, made the ask, and landed a secondment to the firm's innovation team led by Ilona Logvinova. Live from the floor at ILTACON 2025, in this candid conversation, Melissa shares how a passion for privacy law, a sharp eye on emerging tech, and a dose of courage put her on a path that most associates only dream about. If you're a legal professional curious about GenAI, career growth, or making bold moves, this is your playbook. Key Topics Covered: How Melissa leveraged her privacy expertise to break into legal innovation What it really means to “operationalize” GenAI at a top-tier law firm The importance of doing it scared and why courage pays off How Cleary's acquisition of an AI startup created unexpected opportunities Why younger associates might have an edge in emerging tech law The value of curiosity, initiative, and asking the right question at the right moment This Episode is presented by Steno: Smarter transcripts. Faster delivery. Built for modern legal teams.
Hosted by David Cowen | Presented by Steno Live from the floor at ILTACON 2025, in this rich, retrospective conversation, Phil Bryce, longtime legal Knowledge Management leader and strategist - traces the evolution of legal tech from the dawn of email to today's GenAI disruption. But this episode is about more than just tech. Phil shares hard-earned lessons on connection, courage, and how relationships made 20 years ago still shape his career today. If you're navigating what's next or building your place in this industry, Phil's story is a masterclass in going far together. Key Topics Covered: What GenAI means now and how it echoes the early days of legal tech How Knowledge Management, strategy, and innovation emerged from organized chaos The power of connection: how one lunch sparked a 20-year peer network Why today's best opportunities aren't in job descriptions, you create them “Follow the joy”: Phil's framework for building a career worth having How courage and curiosity created the career he didn't know he was building The future of legal tech leadership and why thinking like a managing partner matters This Episode is presented by Steno: Smarter transcripts. Faster delivery. Built for modern legal teams.
Hosted by David Cowen | Presented by Steno Live from ILTACON 2025, this episode features a candid, heart-forward conversation with Melanie Prevost, Senior Director of IT Infrastructure & Technical Support at Vinson & Elkins. Melanie shares how GenAI is empowering neurodiverse professionals, reducing barriers to productivity, and creating real inclusion, not just policy-driven but experience-based. We dive into how teams are building confidence, collaboration, and creativity through tools like Copilot, Grammarly, and ChatGPT, along with what it means to work across silos, connect with communications teams, and lead from a place of openness and curiosity. Key Topics Covered: How GenAI tools like Copilot, Grammarly, and ChatGPT are leveling the playing field Supporting neurodiverse professionals through AI-enabled workflows Building psychological safety and confidence across technical teams Working across silos: IT + Communications = new power partnerships Why accessibility and inclusion need to be built into tech strategy What's changing at ILTACON and why it feels different this year From recipes to real strategy: how personal AI use is driving workplace adoption This Episode is presented by Steno: Smarter transcripts. Faster delivery. Built for modern legal teams
Hosted by David Cowen | Presented by Steno Live from the floor at ILTACON 2025, this episode dives into the real transformation happening inside law firms and what GenAI has to do with it. Julie Brown, Director of Practice Technology at Vorys, shares why attorneys are finally asking for AI, what it means to build digital agents, and how legal tech professionals have evolved from taskmasters to strategic leaders. From workflow automation to workforce evolution, Julie breaks down what's changing, what's coming next, and how to stay ahead in a profession that's reinventing itself in real time. Key Topics Covered: How GenAI has flipped the script on legal tech adoption Why attorneys are now driving the demand for innovation The shift from eDiscovery silos to full-firm strategic impact What the rise of digital agents means for tomorrow's workforce How legal ops teams are becoming drivers of business value Julie's “second career” in agent design, automation, and workflow strategy Why human-in-the-loop AI still matters and always will This Episode is presented by Steno: Smarter transcripts. Faster delivery. Built for modern legal teams.
What are the hidden dangers lurking beneath the surface of vibe coded apps and hyped-up CEO promises? And what is Influence Ops?I'm joined by Susanna Cox (Disesdi), an AI security architect, researcher, and red teamer who has been working at the intersection of AI and security for over a decade. She provides a masterclass on the current state of AI security, from explaining the "color teams" (red, blue, purple) to breaking down the fundamental vulnerabilities that make GenAI so risky.We dive into the recent wave of AI-driven disasters, from the Tea dating app that exposed its users' sensitive data to the massive Catholic Health breach. We also discuss why the trend of blindly vibe coding is an irresponsible and unethical shortcut that will create endless liabilities in the near term.Susanna also shares her perspective on AI policy, the myth of separating "responsible" from "secure" AI, and the one threat that truly keeps her up at night: the terrifying potential of weaponized globally scaled Influence Ops to manipulate public opinion and democracy itself.Find Disesdi Susanna Cox:Substack: https://disesdi.substack.com/Socials (LinkedIn, X, etc.): @DisesdiKEY MOMENTS:00:26 - Who is Disesdi Susanna Cox?03:52 - What are Red, Blue, and Purple Teams in Security?07:29 - Probabilistic vs. Deterministic Thinking: Why Data & Security Teams Clash12:32 - How GenAI Security is Different (and Worse) than Classical ML14:39 - Recent AI Disasters: Catholic Health, Agent Smith & the "T" Dating App18:34 - The Unethical Problem with "Vibe Coding"24:32 - "Vibe Companies": The Gaslighting from CEOs About AI30:51 - Why "Responsible AI" and "Secure AI" Are the Same Thing33:13 - Deconstructing the "Woke AI" Panic44:39 - What Keeps an AI Security Expert Up at Night? Influence Ops52:30 - The Vacuous, Haiku-Style Hellscape of LinkedIn
For episode 584 of the BlockHash Podcast, host Brandon Zemp is joined by Péter W. Szabó, Founder of Tengr.ai, an image generation AI, which is now developing sophisticated AI systems that serve as professional partners for business and creative applications with a privacy-by-design approach. ⏳ Timestamps: (0:00) Introduction(1:08) Who is Péter W. Szabó?(10:25) What makes Tengr AI unique?(14:18) Tengr AI use-cases(17:14) Capital raise plans(19:38) Pricing tiers(27:20) Future of Gen AI(34:42) Tengr AI roadmap(36:00) AI events & conferences(37:19) Website, socials & community
⸻ Podcast: Redefining Society and Technologyhttps://redefiningsocietyandtechnologypodcast.com _____________________________This Episode's SponsorsBlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.BlackCloak: https://itspm.ag/itspbcweb_____________________________A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3August 18, 2025The Narrative Attack Paradox: When Cybersecurity Lost the Ability to Detect Its Own Deception and the Humanity We Risk When Truth Becomes OptionalReflections from Black Hat USA 2025 on Deception, Disinformation, and the Marketing That Chose Fiction Over FactsBy Marco CiappelliSean Martin, CISSP just published his analysis of Black Hat USA 2025, documenting what he calls the cybersecurity vendor "echo chamber." Reviewing over 60 vendor announcements, Sean found identical phrases echoing repeatedly: "AI-powered," "integrated," "reduce analyst burden." The sameness forces buyers to sift through near-identical claims to find genuine differentiation.This reveals more than a marketing problem—it suggests that different technologies are being fed into the same promotional blender, possibly a generative AI one, producing standardized output regardless of what went in. When an entire industry converges on identical language to describe supposedly different technologies, meaningful technical discourse breaks down.But Sean's most troubling observation wasn't about marketing copy—it was about competence. When CISOs probe vendor claims about AI capabilities, they encounter vendors who cannot adequately explain their own technologies. When conversations moved beyond marketing promises to technical specifics, answers became vague, filled with buzzwords about proprietary algorithms.Reading Sean's analysis while reflecting on my own Black Hat experience, I realized we had witnessed something unprecedented: an entire industry losing the ability to distinguish between authentic capability and generated narrative—precisely as that same industry was studying external "narrative attacks" as an emerging threat vector.The irony was impossible to ignore. Black Hat 2025 sessions warned about AI-generated deepfakes targeting executives, social engineering attacks using scraped LinkedIn profiles, and synthetic audio calls designed to trick financial institutions. Security researchers documented how adversaries craft sophisticated deceptions using publicly available content. Meanwhile, our own exhibition halls featured countless unverifiable claims about AI capabilities that even the vendors themselves couldn't adequately explain.But to understand what we witnessed, we need to examine the very concept that cybersecurity professionals were discussing as an external threat: narrative attacks. These represent a fundamental shift in how adversaries target human decision-making. Unlike traditional cyberattacks that exploit technical vulnerabilities, narrative attacks exploit psychological vulnerabilities in human cognition. Think of them as social engineering and propaganda supercharged by AI—personalized deception at scale that adapts faster than human defenders can respond. They flood information environments with false content designed to manipulate perception and erode trust, rendering rational decision-making impossible.What makes these attacks particularly dangerous in the AI era is scale and personalization. AI enables automated generation of targeted content tailored to individual psychological profiles. A single adversary can launch thousands of simultaneous campaigns, each crafted to exploit specific cognitive biases of particular groups or individuals.But here's what we may have missed during Black Hat 2025: the same technological forces enabling external narrative attacks have already compromised our internal capacity for truth evaluation. When vendors use AI-optimized language to describe AI capabilities, when marketing departments deploy algorithmic content generation to sell algorithmic solutions, when companies building detection systems can't detect the artificial nature of their own communications, we've entered a recursive information crisis.From a sociological perspective, we're witnessing the breakdown of social infrastructure required for collective knowledge production. Industries like cybersecurity have historically served as early warning systems for technological threats—canaries in the coal mine with enough technical sophistication to spot emerging dangers before they affect broader society.But when the canary becomes unable to distinguish between fresh air and poison gas, the entire mine is at risk.This brings us to something the literary world understood long before we built our first algorithm. Jorge Luis Borges, the Argentine writer, anticipated this crisis in his 1940s stories like "On Exactitude in Science" and "The Library of Babel"—tales about maps that become more real than the territories they represent and libraries containing infinite books, including false ones. In his fiction, simulations and descriptions eventually replace the reality they were meant to describe.We're living in a Borgesian nightmare where marketing descriptions of AI capabilities have become more influential than actual AI capabilities. When a vendor's promotional language about their AI becomes more convincing than a technical demonstration, when buyers make decisions based on algorithmic marketing copy rather than empirical evidence, we've entered that literary territory where the map has consumed the landscape. And we've lost the ability to distinguish between them.The historical precedent is the 1938 War of the Worlds broadcast, which created mass hysteria from fiction. But here's the crucial difference: Welles was human, the script was human-written, the performance required conscious participation, and the deception was traceable to human intent. Listeners had to actively choose to believe what they heard.Today's AI-generated narratives operate below the threshold of conscious recognition. They require no active participation—they work by seamlessly integrating into information environments in ways that make detection impossible even for experts. When algorithms generate technical claims that sound authentic to human evaluators, when the same systems create both legitimate documentation and marketing fiction, we face deception at a level Welles never imagined: the algorithmic manipulation of truth itself.The recursive nature of this problem reveals itself when you try to solve it. This creates a nearly impossible situation. How do you fact-check AI-generated claims about AI using AI-powered tools? How do you verify technical documentation when the same systems create both authentic docs and marketing copy? When the tools generating problems and solving problems converge into identical technological artifacts, conventional verification approaches break down completely.My first Black Hat article explored how we risk losing human agency by delegating decision-making to artificial agents. But this goes deeper: we risk losing human agency in the construction of reality itself. When machines generate narratives about what machines can do, truth becomes algorithmically determined rather than empirically discovered.Marshall McLuhan famously said "We shape our tools, and thereafter they shape us." But he couldn't have imagined tools that reshape our perception of reality itself. We haven't just built machines that give us answers—we've built machines that decide what questions we should ask and how we should evaluate the answers.But the implications extend far beyond cybersecurity itself. This matters far beyond. If the sector responsible for detecting digital deception becomes the first victim of algorithmic narrative pollution, what hope do other industries have? Healthcare systems relying on AI diagnostics they can't explain. Financial institutions using algorithmic trading based on analyses they can't verify. Educational systems teaching AI-generated content whose origins remain opaque.When the industry that guards against deception loses the ability to distinguish authentic capability from algorithmic fiction, society loses its early warning system for the moment when machines take over truth construction itself.So where does this leave us? That moment may have already arrived. We just don't know it yet—and increasingly, we lack the cognitive infrastructure to find out.But here's what we can still do: We can start by acknowledging we've reached this threshold. We can demand transparency not just in AI algorithms, but in the human processes that evaluate and implement them. We can rebuild evaluation criteria that distinguish between technical capability and marketing narrative.And here's a direct challenge to the marketing and branding professionals reading this: it's time to stop relying on AI algorithms and data optimization to craft your messages. The cybersecurity industry's crisis should serve as a warning—when marketing becomes indistinguishable from algorithmic fiction, everyone loses. Social media has taught us that the most respected brands are those that choose honesty over hype, transparency over clever messaging. Brands that walk the walk and talk the talk, not those that let machines do the talking.The companies that will survive this epistemological crisis are those whose marketing teams become champions of truth rather than architects of confusion. When your audience can no longer distinguish between human insight and machine-generated claims, authentic communication becomes your competitive advantage.Most importantly, we can remember that the goal was never to build machines that think for us, but machines that help us think better.The canary may be struggling to breathe, but it's still singing. The question is whether we're still listening—and whether we remember what fresh air feels like.Let's keep exploring what it means to be human in this Hybrid Analog Digital Society. Especially now, when the stakes have never been higher, and the consequences of forgetting have never been more real. End of transmission.___________________________________________________________Marco Ciappelli is Co-Founder and CMO of ITSPmagazine, a journalist, creative director, and host of podcasts exploring the intersection of technology, cybersecurity, and society. His work blends journalism, storytelling, and sociology to examine how technological narratives influence human behavior, culture, and social structures.___________________________________________________________Enjoyed this transmission? Follow the newsletter here:https://www.linkedin.com/newsletters/7079849705156870144/Share this newsletter and invite anyone you think would enjoy it!New stories always incoming.___________________________________________________________As always, let's keep thinking!Marco Ciappellihttps://www.marcociappelli.com___________________________________________________________This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.Marco Ciappelli | Co-Founder, Creative Director & CMO ITSPmagazine | Dr. in Political Science / Sociology of Communication l Branding | Content Marketing | Writer | Storyteller | My Podcasts: Redefining Society & Technology / Audio Signals / + | MarcoCiappelli.comTAPE3 is the Artificial Intelligence behind ITSPmagazine—created to be a personal assistant, writing and design collaborator, research companion, brainstorming partner… and, apparently, something new every single day.Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn.
Today's guest is Sidharth Ojha, Head of Process Optimization in Data and AI at the AXA XL Global Underwriting Office. AXA XL is the property and casualty and specialty risk division of AXA, serving mid-sized companies and large multinationals across more than 200 countries and territories. In this episode, Sidharth breaks down where insurers are seeing real GenAI adoption beyond the back office — including use cases in underwriting efficiency, regulatory alignment, and internal risk triage. He discusses how summarization tools are evolving into more advanced applications like quote comparison and submission prioritization, helping insurers shift from cost savings to growth-focused outcomes. We also explore how AI is helping insurers unlock unstructured policy data and why data governance is critical for enabling the prescriptive workflows many organizations are aiming toward. Want to share your AI adoption story with executive peers? Click emerj.com/expert2 for more information and to be a potential future guest on the ‘AI in Business' podcast! If you've enjoyed or benefited from some of the insights of this episode, consider leaving us a five-star review on Apple Podcasts, and let us know what you learned, found helpful, or liked most about this show!
The e-discovery company Reveal Data recently announced that it will launch its new generative AI-powered document review platform, called “aji,” in late September. Notably, the company said it is offering full access to the platform at no cost through Dec. 31, in order to enable “the entire legal community to explore and master the next era in GenAI review innovation.” To discuss the launch of aji, today's episode features Reveal's founder and CEO Wendell Jisa, together with the company's chief technology officer, Matthew Brothers-McGrew. This launch, Jisa says, represents the culmination of a deeply personal 30-year journey in legal tech from delivering photocopies in Chicago during blizzards to leading what he believes is one of the most significant technology companies in the legal industry. In their conversation with host Bob Ambrogi, Jisa and Brothers-McGrew make the case that generative AI presents the legal profession with the opportunity to become technology trailblazers rather than laggards. Their goal, they say, is to support the profession by democratizing access to AI across firms of all sizes and types. They also discuss Reveal's recent launch of Reveal Private Deployment, an initiative to support customers in whatever way they want to deploy Reveal's software, whether in the cloud, on-premises, or hybrid. At a time when other companies are pushing their customers away from on-premises deployments and into the cloud, Jisa and Brothers-McGrew say this is yet another way in which Reveal is seeking to democratize access by accommodating the interests of all its customers. Thank You To Our Sponsors This episode of LawNext is generously made possible by our sponsors. We appreciate their support and hope you will check them out. Paradigm, home to the practice management platforms PracticePanther, Bill4Time, MerusCase and LollyLaw; the e-payments platform Headnote; and the legal accounting software TrustBooks. Briefpoint, eliminating routine discovery response and request drafting tasks so you can focus on drafting what matters (or just make it home for dinner). Paxton, Rapidly conduct research, accelerate drafting, and analyze documents with Paxton. What do you need to get done today? If you enjoy listening to LawNext, please leave us a review wherever you listen to podcasts.
Gen AI projects succeed when grounded in practical, incremental use cases—not in overhyped visions of full autonomy. Avoid the mirage; real value comes from augmentation, not automation. That's the key take-away message of this episode of the Wise Decision Maker Show, which discusses why many Gen AI projects are doomed to fail.This article forms the basis for this episode: https://disasteravoidanceexperts.com/heres-why-many-gen-ai-projects-are-doomed-to-fail/
The annual “security summer camp” that is made up of the Black Hat and DefCon conferences is just past and the security analyst team, Scott Crawford, Dan Kennedy, Justin Lam and Mark Ehr, join host Eric Hanselman to examine what they saw and discuss the implications. Despite the heat of a Las Vegas summer, it's become bigger than the two main conferences, with a number of side events, like B-Sides, there's a lot going on. AI conversations are evolving and maturing. We've mostly moved beyond blaming user foibles for breaches, but AI is expanding the attack surface with new and more complex tactics for user manipulation. AI is lowering the barriers for attackers. The days of script kiddies have morphed into Claude Code-fueled attack development. The larger question is how security vendors are responding to AI risks. Claims that tier 1 security analysts should start looking for another job just seem irresponsible in the current environment. AI augmentation can reduce toil and digest the masses of events that security teams struggle to deal with today. At the same time, AI is scaling attack volumes. It's the constant hegemony that's always played out at the core of security. More S&P Global Content: RSAC Conference 2025: Breaking records at the threshold of uncertainty AI for security: Agentic AI will be a focus for security operations in 2025 Next in Tech | Ep. 215: RSA Conference Preview Deep Pocket Inspection: RSAC Innovation Sandbox Retrospective & Perspective Next in Tech | Ep. 227: Managed Security Services Next in Tech | Ep. 225: Security for MCP For S&P Global Subscribers: Use of GenAI security solutions has spiked, continued uptake projected – Highlights from VotE: Information Security Infosec spending projected to rise 27% on average in 2025 – Highlights from VotE: Information Security CNAPP in focus after large infosec acquisition – Highlights from VotE: Information Security Data Insight: Data security market to top $26B in 2029 Data Security Market Monitor & Forecast CNAPP matures into full-spectrum security solution Credits: Host/Author: Eric Hanselman Guests: Scott Crawford, Dan Kennedy, Justin Lam, Mark Ehr Producer/Editor: Adam Kovalsky Published With Assistance From: Sophie Carr, Feranmi Adeoshun, Kyra Smith
What if one judge kills GenAI in court? Former U.S. Magistrate Judge Andrew Peck says it could set legal innovation back a decade. In this episode, David Cowen and co-host Nicole Giantonio go inside the mind of the “Godfather of eDiscovery” to unpack the real risks, courtroom landmines, and what every lawyer needs to know before using AI in a case. This isn't theory. It's already happening. What You'll Learn: Why one bad GenAI ruling could halt progress for 10 years How “hallucinated” case law is already damaging court credibility The question no one can agree on: Are AI prompts discoverable? What smart firms are doing right now to stay defensible Why your judge's tech IQ might matter more than your case facts The #1 safeguard Judge Peck says you must have in place today What DLA Piper is doing that most law firms still haven't figured out
Show Notes: Sandor Marton, a McKinsey alum and co-founder of Chronos Insights introduces the AI research companion tool and explains its features. Sandor shares his background in growth strategy, commercial strategy, and due diligence work. He identifies the problems faced by small firms and independent practitioners who don't have large teams in place to assign people to do the massive amount of necessary foundational research, which limits the size of projects they can take on. An Overview of Chronos Insights Sandor discusses partnering with Matt Jones and Dustin Chrysler to develop an AI-powered solution for market and competitive landscape research. He talks about the background of the team and the issues they initially tackled. The team aimed to create a specialized prompt leveraging ChatGPT's large language model while moving away from ChatGPT's programming “to make people happy” and to deliver more accurate results. They developed their tool, Chronos, with a different approach. The tool uses neural search and keyword search to provide more accurate and sourced research solutions. Sandor talks about how they tested the agents. He found it saved 75% of research time using the platform and emphasizes the need for review and revision of AI-generated outputs. Chronos Research Capabilities The conversation turns to access to proprietary databases, and Sandor explains the current use of the alpha version sources publicly available information. He also talks about their engineering resources and can build a version that taps into proprietary, private, or licensed resources, and that the user can direct the agent to source from specific sources. Sandor refers to a large Fortune 100 company that has hired them to build an agent to work with their internal research team. Sandor discusses potential future features like an interview finder tool and a composite self-referencing insights feature. The discussion touches on the flexibility of the tool in handling different types of research problems. User Interface Explained Sandor explains the simple UI of the tool, which includes five landing pages: Project Overview, Research Statement of Work, Research Plan, Research Tab, and Research Summary. The tool creates a research plan based on the user's research statement of work, breaking it down into discrete task prompts. Sandor demonstrates how to use the tool by copying and pasting a Word document into the Research Statement of Work tab. The tool generates a detailed research plan, including market landscape, competitor landscape, and key trends and developments. Project Overview Research Plan Generation Sandor explains the process of generating a research plan, including organizing research tasks by major research categories and subsections. He shares an example of the various categories to be researched within the statement of work. The tool creates task prompts for each topic within the research statement of work. Users can edit tasks and change the language model used for research. Sandor highlights the efficiency of running multiple tasks concurrently using the platform and demonstrates how it works and the various categories of research that can be explored, analyzed, and summarized. How Chronos Summarizes Prompts Sections Identified and Tasks to Complete Subsections Explained Reviewing and Exporting Sandor demonstrates how to review and export the research outputs generated by the tool. The tool provides citations for each task, allowing users to verify the sources. Users can rerun tasks and edit them as needed. The tool offers a research summary feature, which condenses the research outputs into a concise format. Research Task Overview Market Landscape Research Use Cases and Future Developments Sandor talks about use cases and future developments. He explains that it can handle various research topics and how they have programmed the agent to eliminate errors. Sandor mentions a project which included researching sales cycles and client selection criteria. The tool has potential for developing primary research, valuation and diligence work, and incorporating proprietary or licensed material. Timestamps: 02:44: Development of the AI Research Companion 05:42: Current and Future Features of the Tool 08:55: Walkthrough of the AI Research Companion Tool 18:54: Detailed Explanation of the Research Plan 25:22: Running and Reviewing the Research Plan 33:20: Future Developments and Use Cases 37:20: Conclusion and Contact Information Links: Website: https://www.chronosinsights.com/ Unleashed is produced by Umbrex, which has a mission of connecting independent management consultants with one another, creating opportunities for members to meet, build relationships, and share lessons learned. Learn more at www.umbrex.com.
Confused by AI jargon and unsure which tools actually move the needle for your business? We break down the real differences between traditional algorithms, large language models (LLMs), and agents — including agentic AI — and give practical guidance leaders can use now.Newsletter: Sign up for our free daily newsletterMore on this Episode: Episode PageJoin the discussion: Thoughts on this? Join the convo.Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineupWebsite: YourEverydayAI.comEmail The Show: info@youreverydayai.comConnect with Jordan on LinkedInTopics Covered in This Episode:Choosing AI: Algorithms vs. AgentsUnderstanding AI Models and AgentsUsing Conditional Statements in AIImportance of Data in AI TrainingRisk Factors in Agentic AI ProjectsInnovation through AI ExperimentationEvaluating AI for Business SolutionsTimestamps:00:00 AWS AI Leader Departs Amid Talent War03:43 Meta Wins Copyright Lawsuit07:47 Choosing AI: Short or Long Term?12:58 Agentic AI: Dynamic Decision Models16:12 "Demanding Data-Driven Precision in Business"20:08 "Agentic AI: Adoption and Risks"22:05 Startup Challenges Amidst Tech Giants24:36 Balancing Innovation and Routine27:25 AGI: Future of Work and SurvivalKeywords:AI algorithms, Large Language Models, LLMs, Agents, Agentic AI, Multi agentic AI, Amazon Web Services, AWS, Vazhi Philemon, Gen AI efforts, Amazon Bedrock, talent wars in tech, OpenAI, Google, Meta, Copyright lawsuit, AI training, Sarah Silverman, Llama, Fair use in AI, Anthropic, AI deep research model, API, Webhooks, MCP, Code interpreter, Keymaker, Data labeling, Training datasets, Computer vision models, Block out time to experiment, Decision-making, If else conditional statements, Data-driven approach, AGI, Teleporting, Innovation in AI, Experiment with AI, Business leaders, Performance improvements, Sustainable business models, Corporate blade.Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info) Ready for ROI on GenAI? Go to youreverydayai.com/partner