POPULARITY
In this milestone 150th episode, hosts Kelly Schuster-Paredes and Sean Tibor sit down with Simon Willison, co-creator of Django and creator of Datasette and LLM tools, for an in-depth conversation about artificial intelligence in Python education. The discussion covers the current landscape of LLMs in coding education, from the benefits of faster iteration cycles to the risks of students losing that crucial "aha moment" when they solve problems independently. Simon shares insights on prompt injection vulnerabilities, the importance of local models for privacy, and why he believes LLMs are much harder to use effectively than most people realize.
Key topics include:
Educational Strategy: When to introduce AI tools vs. building foundational skills first
Security Concerns: Prompt injection attacks and their implications for educational tools
Student Engagement: Maintaining motivation and problem-solving skills in an AI world
Practical Applications: Using LLMs for code review, debugging, and rapid prototyping
Privacy Issues: Understanding data collection and training practices of major AI companies
Local Models: Running AI tools privately on personal devices
The "Jagged Frontier": Why LLMs excel at some tasks while failing at others
Simon brings 20 years of Django experience and deep expertise in both web development and AI tooling to discuss how educators can thoughtfully integrate these powerful but unpredictable tools into their classrooms. The conversation balances excitement about AI's potential with realistic assessments of its limitations and risks. Whether you're a coding educator trying to navigate the AI revolution or a developer interested in the intersection of education and technology, this episode provides practical insights for working with LLMs responsibly and effectively.
Resources mentioned:
- Simon's blog: simonwillison.net
- Mission Encodable curriculum
- Datasette and LLM tools
- GitHub Codespaces for safe AI experimentation
Special Guest: Simon Willison.
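As a companion to the local-models and privacy discussion, here is a minimal, hedged Python sketch of prompting a model through Simon's LLM library (pip install llm); the model alias is an assumption that depends on which plugins and API keys you have configured, and a fully local model would come from an extra plugin rather than the alias shown.

```python
# Minimal sketch of Simon Willison's LLM library Python API (pip install llm).
# Assumption: the "gpt-4o-mini" alias is available via a configured OpenAI key;
# swap in a locally hosted model provided by an installed plugin for offline use.
import llm

model = llm.get_model("gpt-4o-mini")  # assumed model alias
response = model.prompt(
    "Explain a Python list comprehension to a beginner in two sentences."
)
print(response.text())  # the generated answer as plain text
```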
In the 228th special with the WildMics, we explored the question of what so-called Large Language Models (LLMs) like ChatGPT really know. Is it true that these LLMs get worse with every version? How reliable is the information these models provide, and where are their limits? We discussed this with Gavin Karlmeier and Gregor Schmalzried. This episode was recorded on 26.08.2025. You can find the HOAXILLA survey here… You can read about how to support us here. Click here for HOAXILLA merchandise.
Our analysts Adam Jonas and Alex Straton discuss how tech-savvy young professionals are influencing retail, brand loyalty, mobility trends, and the broader technology landscape through their evolving consumer choices. Read more insights from Morgan Stanley. ----- Transcript ----- Adam Jonas: Welcome to Thoughts on the Market. I'm Adam Jonas, Morgan Stanley's Embodied AI and Humanoid Robotics Analyst. Alex Straton: And I'm Alex Straton, Morgan Stanley's U.S. Softlines Retail and Brands Analyst. Adam Jonas: Today we're unpacking our annual summer intern survey, a snapshot of how emerging professionals view fashion retail, brands, and mobility – amid all the AI advances. It is Tuesday, August 26th at 9am in New York. They may not manage billions of dollars yet, but Morgan Stanley's summer interns certainly shape sentiment on the street, including Wall Street. From sock heights to sneaker trends, Gen Z has thoughts. So, for the seventh year, we ran a survey of our summer interns in the U.S. and Europe. The survey involved more than 500 interns based in the U.S., and about 150 based in Europe. So, Alex, let's start with what these interns think about fashion and athletic footwear. What was your biggest takeaway from the intern survey? Alex Straton: So, across the three categories we track in the survey – that's apparel, athletic footwear, and handbags – there was one clear theme, and that's market fragmentation. So, for each category specifically, we observed share of the top three to five brands falling over time. And what that means is these once dominant brands, as consumer mind share is falling – and it likely makes them lower growth margin and multiple businesses over time. At the same time, you have smaller brands being able to captivate consumer attention more effectively, and they have staying power in a way that they haven't necessarily historically. I think one other piece I would just add; the rise of e-commerce and social media against a low barrier to entry space like apparel and footwear means it's easier to build a brand than it has been in the past. And the intern survey shows us this likely continues as this generation is increasingly inclined to shop online. Their social media usage is heavy, and they heavily rely on AI to inform, you know, their purchases. So, the big takeaway for me here isn't that the big are getting bigger in my space. It's actually that the big are probably getting smaller as new players have easier avenues to exist. Adam Jonas: Net apparel spending intentions rose versus the last survey, despite some concern around deteriorating demand for this category into the back half. What do you make of that result? Alex Straton: I think there were a bit conflicting takes from the survey when I look at all the answers together. So yes, apparel spending intentions are higher year-over-year, but at the same time, clothing and footwear also ranked as the second most category that interns would pull back on should prices go up. So let me break this down. On the higher spending intentions, I think timing played a huge role and a huge factor in the results. So, we ran this in July when spending in our space clearly accelerated. That to me was a function of better weather, pent up demand from earlier in the quarter, a potential tariff pull forward as headlines were intensifying, and then also typical back to school spending. 
So, in short, I think intention data is always very heavily tethered to the moment that it's collected and think that these factors mean, you know, it would've been better no matter what we've seen it in our space. I think on the second piece, which is interns pulling back spend should prices go up. That to me speaks to the high elasticity in this category, some of the highest in all of consumer discretionary. And that's one of the few drivers informing our cautious demand view on this space as we head into the back half. So, in summary on that piece, we think prices going higher will become more apparent this month onwards, which in tandem with high inventory and a competitive setup means sales could falter in the group. So, we still maintain this cautious demand view as we head into the back half, though our interns were pretty rosy in the survey. Adam Jonas: Interesting. So, interns continue to invest in tech ecosystems with more than 90 percent owning multiple devices. What does this interconnectedness mean for companies in your space? Alex Straton: This somewhat connects to the fragmentation theme I mentioned where I think digital shopping has somewhat functioned as a great equalizer in the space and big picture. I interpret device reliance as a leading indicator that this market diversification likely continues as brands fight to capture mobile mind share. The second read I'd have on this development is that it means brands must evolve to have an omnichannel presence. So that's both in store and online, and preferably one that's experiential focus such that this generation can create content around it. That's really the holy grail. And then maybe lastly, the third takeaway on this is that it's going to come at a cost. You, you can't keep eyeballs without spend. And historical brick and mortar retailers spend maybe 5 to 10 percent of sales on marketing, with digital requiring more than physical. So now I think what's interesting is that brands in my space with momentum seem to have to spend more than 10 percent of sales on marketing just to maintain popularity. So that's a cost pressure. We're not sure where these businesses will necessarily recoup if all of them end up getting the joke and continuing to invest just to drive mind share. Adam, turning to a topic that's been very hot this year in your area of expertise. That's humanoid robots. Interns were optimistic here with more than 60 percent believing they'll have many viable use cases and about the same number thinking they'll replace many human jobs. Yet fewer expect wide scale adoption within five years. What do you think explains this cautious enthusiasm? Adam Jonas: Well actually Alex, I think it's pretty smart. There is room to be optimistic. But there's definitely room to be cautious in terms of the scale of adoption, particularly over five years. And we're talking about humanoid robots. We're talking about a new species that's being created, right? This is bigger than just – will it replace our job? I mean, I don't think it's an exaggeration to ask what does this do to the concept of being human? You know, how does this affect our children and future generations? This is major generational planetary technology that I think is very much comparable to electricity, the internet. Some people say the wheel, fire, I don't know. 
We're going to see it happen and start to propagate over the next few years, where even if we don't have widespread adoption in terms of dealing with it on average hour of a day or an average day throughout the planet, you're going to see the technology go from zero to one as these machines learn by watching human behavior. Going from teleoperated instruction to then fully autonomous instruction, as the simulation stack and the compute gets more and more advanced. We're now seeing some industry leaders say that robots are able to learn by watching videos. And so, this is all happening right now, and it's happening at the pace of geopolitical rivalry, Sino-U.S. rivalry and terra cap, you know, big, big corporate competitive rivalry as well, for capital in the human brain. So, we are entering an unprecedented – maybe precedented in the last century – perhaps unprecedented era of technological and scientific discovery that I think you got to go back to the European and American Enlightenment or the Italian Renaissance to have any real comparisons to what we're about to see. Alex Straton: So, keeping with this same theme, interns showed strong interest in household robots with 61 percent expressing some interest and 24 percent saying they're very or extremely interested. I'm going to take you back to your prior coverage here, Adam. Could this translate into demand for AI driven mobility or smart infrastructure? Adam Jonas: Well, Alex, you were part of my prior coverage once upon a time. We were blessed with having you on our team for a year, and then you left me… Alex Straton: My golden era. Adam Jonas: But you came back, you came back. And you've done pretty well. So, so look, imagine it's 1903, the Wright Brothers just achieved first flight over the sands at Kitty Hawk. And then I were to tell you, ‘Oh yeah, in a few years we're going to have these planes used in World War I. And then in 1914, we'd have the first airline going between Tampa and St. Petersburg.' You'd say, ‘You're crazy,' right? The beauty of the intern survey is it gives the Morgan Stanley research department and our clients an opportunity to engage that surface area with that arising – not just the business leader – but that arising tech adopter. These are the people, these are the men and women that are going to kind of really adopt this much, much faster. And then, you know, our generation will get dragged into it eventually. So, I think it says; I think 61 percent expressing even some interest. And then 24 [percent], I guess, you know… The vast majority, three quarters saying, ‘Yeah, this is happening.' That's a sign I think, to our clients and capital market providers and regulators to say, ‘This won't be stopped. And if we don't do it, someone else will.' Alex Straton: So, another topic, Generative AI. It should come as no surprise really, that 95 percent of interns use that tool monthly, far ahead of the general population. How do you see this shaping future expectations for mobility and automation? Adam Jonas: So, this is what's interesting is people have asked kinda, ‘What's that Gen AI moment,' if you will, for mobility? Well, it really is Gen AI. Large Language Models and the technologies that develop the Large Language Models and that recursive learning, don't just affect the knowledge economy, right. Or writing or research report generation or intelligence search. 
It actually also turns video clips and physical information into tokens that can then create and take what would be a normal suburban city street and beautiful weather with smiling faces or whatever, and turn it into a chaotic scene of, you know, traffic and weather and all sorts of infrastructure issues and potholes. And that can be done in this digital twin, in an omniverse. A CEO recently told me when you drive a car with advanced, you know, Level 2+ autonomy, like full self-driving, you're not just driving in three-dimensional space. You're also playing a video game training a robot in a digital avatar. So again, I think that there is quite a lot of overlap between Gen AI and the fact that our interns are so much further down that curve of adoption than the broader public – is probably a hint to us that we've got to keep listening to them when we move into the physical realm of AI too. Alex Straton: So, no more driving tests for the 16-year-olds of the future... Adam Jonas: If you want to. Like, I tell my kids, if you want to drive, that's cool. Manual transmission, Italian sports cars, that's great. People still ride horses too. But it's just for the privileged few that can kind of keep these things in stables. Alex Straton: So, let me turn this into implications for companies here. Gen Z is tech fluent, open to disruption? How should autos and shared mobility providers rethink their engagement strategies with this generation? Adam Jonas: Well, that's a huge question. And think of the irony here. As we bring in this world of fake humans and humanoid robots, the scarcest resource is the human brain, right? So, this battle for the human mind is – it's incredible. And we haven't seen this really since like the Sputnik era or real height of the Cold War. We're seeing it now play out and our clients can read about some of these signing bonuses for these top AI and robotics talent being paid by many companies. It kind of makes, you know, your eyes water, even if you're used to the world of sports and soccer. I think we're going to keep seeing more of that for the next few years because we need more brains, we need more STEM. I think it's going to do; it has the potential to do a lot for our education system in the United States and in the West broadly. Alex Straton: So, we've covered a lot around what the next generation is interested in and, and their opinion. I know we do this every year, so it'll be exciting to see how this evolves over time. And how they adapt. It's been great speaking with you today, Adam. Adam Jonas: Absolutely. Alex, thanks for your insights. And to our listeners, stay curious, stay disruptive, and we'll catch you next time. If you enjoy Thoughts on the Market, please leave us a review wherever you listen and share the podcast with a friend or colleague today.
LOCAL SEO – A DEEPER DIVE
Are you truly leveraging the power of local SEO, or just scratching the surface? This episode takes you beyond the basics—unpacking the strategies, hidden features, and common pitfalls that determine whether your business shows up (or stays invisible) in your local market. Whether you're a service-area business, brick-and-mortar retailer, or local professional, the rules are changing and the competition is only getting sharper.
Before you listen, ask yourself:
Do I really know how Google decides which businesses appear at the top of local results—and what's holding mine back?
Am I making critical mistakes with my Google Business Profile, or missing out on simple tactics that drive more calls and visits?
With AI and changing search habits, am I prepared for the next wave of local SEO, or am I about to be left behind?
In this episode, you'll learn:
The truth behind “near me” searches versus geo-targeted queries—and why your content needs to serve both.
Why Google Business Profile (GBP) is more than just a listing—and how to optimize every corner of it (categories, Q&As, reviews, and ongoing engagement).
How proximity, relevance, and prominence drive results—and practical steps to improve all three for your business.
The critical role of reviews, reputation, and ongoing profile management—plus what to do when you get hit by fake reviews or platform glitches.
What's new with local search in the age of AI and Large Language Models (LLMs)—and why SEO fundamentals aren't dead (yet).
Agency-level tips for grid reporting, heat maps, multiple branches, and troubleshooting why you're not ranking.
Meet your Marketing Guides:
• Ian Cantle – Dental Marketing Heroes
• Ken Tucker – Changescape Web
• Paul Barthel – Changescape Web
• Jeff Stec – Tylerica Marketing Systems
Together, they bring decades of hands-on local SEO and digital marketing expertise, helping small and midsize businesses outsmart competitors and get found where it matters most. Ready to stop guessing and start winning at local SEO? Smash that subscribe button, share the show with your colleagues and local business friends, and connect with our Guides at their websites above for more resources, free tips, and a custom SEO review! LISTEN NOW to unlock the strategies that drive local leads and revenue in 2025/26—and keep your business a step ahead.
AGNTCY - Unlock agents at scale with an open Internet of Agents. Visit https://agntcy.org/ and add your support. In this episode of Eye on AI, we sit down with Leon Song, VP of Research at Together AI, to explore how open-source models and cutting-edge infrastructure are reshaping the AI landscape. From speculative decoding to FlashAttention and RedPajama, Leon shares how Together AI is building one of the fastest, most cost-efficient AI clouds—helping enterprises fine-tune, deploy, and scale open-source models at the level of GPT-4 and beyond. We dive into Leon's journey from leading DeepSpeed and AI for Science at Microsoft to driving system-level innovation at Together AI.
Topics include:
The future of open-source vs. closed-source AI models
Breakthroughs in speculative decoding for faster inference
How Together AI's cloud platform empowers enterprises with data sovereignty and model ownership
Why open-source models like DeepSeek R1 and Llama 4 are now rivaling proprietary systems
The role of GPUs vs. ASIC accelerators in scaling AI infrastructure
Whether you're an AI researcher, enterprise leader, or curious about where generative AI is heading, this conversation reveals the technology and strategy behind one of the most important players in the open-source AI movement.
Stay Updated:
Craig Smith on X: https://x.com/craigss
Eye on A.I. on X: https://x.com/EyeOn_AI
AI data wars push Reddit to block the Wayback Machine
China Launches Three-Day Robot Olympics Featuring Football and Table Tennis
US government agency drops Grok after MechaHitler backlash, report says
Eli Lilly signs $1.3 billion deal with Superluminal to use AI to make obesity medicines
The AI Was Fed Sloppy Code. It Turned Into Something Evil. | Quanta Magazine
AI data centers made Americans' electricity bills 30% higher
Sam Altman says 'yes,' AI is in a bubble
Is the A.I. Sell-off the Start of Something Bigger?
Thousands of Grok chats are now searchable on Google
Opinion | Amy Klobuchar: I Knew A.I. Deepfakes Were a Problem. Then I Saw One of Myself.
2,178 Occult Books Now Digitized & Put Online, Thanks to the Ritman Library and Da Vinci Code Author Dan Brown
Pluralistic: "Privacy preserving age verification" is bullshit (14 Aug 2025)
How to use "skibidi" and other new slang added to Cambridge Dictionary
YouTube Is Making a Play to Host the Oscars
Leobait: Resisting AI Solutionism through Workplace Collective Action
So ... is AI writing any good?
Project Indigo
We used AI to analyse three cities. It's true: we now walk more quickly and socialise less
Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Guest: Rich Skrenta
Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.
Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit
Sponsors: pantheon.io helixsleep.com/twit
This week's guest is Vishal Jadhav, Product Director at Blue Yonder. Everyone listening to this podcast knows of Blue Yonder. It is a company with a long history, from i2 to JDA, that has consistently been on the cutting edge of supply chain management software. Vishal has spent almost 20 years at Blue Yonder. His recent work and writings have delved into the evolution of Large Language Models (LLMs) and how they are being incorporated into supply chain solutions. Our conversation explores the differences between traditional Operations Research (OR) techniques and modern AI. Vishal highlights how these technologies can enhance optimization and decision-making by mimicking human reasoning and learning from experience. We also address the challenges of explainability in AI and the emerging concept of agentic AI, which suggests a future of more proactive and autonomous systems within the supply chain.
This episode is brought to you by https://www.ElevateOS.com — the only all-in-one community operating system. Ever notice how the simplest words carry the most weight? In today's episode of the Multifam Collective, I unpack a quote from Rudyard Kipling: "Words are, of course, the most powerful drug used by mankind." That line hit me—and it got me thinking about the two most powerful words in the world of Multifamily leadership: please and thank you. These aren't just playground pleasantries. They're foundational tools in shaping culture, creating community, and leading with authenticity. In a world driven by speed, automation, and PropTech innovation, we sometimes forget the human side of the equation. Ironically, the word please is one of the most expensive tokens in Large Language Models like OpenAI's GPT. And yet in real life, it's often the cheapest thing we forget to give. Let this be your reminder: in Multifamily, where relationships are the currency of success, manners matter more than ever. Please watch this. Thank you for being here. Like, comment, and subscribe to keep the conversation going. For more engaging content, explore our offerings at the [https://www.multifamilycollective.com](https://www.multifamilycollective.com/) and the [https://www.multifamilymedianetwork.com](https://www.multifamilymedianetwork.com/). Join us to stay informed and inspired in the multifamily industry!
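To make the token point concrete, here is a small, hedged Python sketch using the tiktoken library to count the tokens that polite words add to a prompt; the encoding name is the one commonly associated with GPT-4-class models, and exact counts vary by tokenizer.

```python
# Count how many tokens courtesy words add to a prompt (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # BPE encoding used by GPT-4-class models
for text in ["please", "thank you", "Please review this lease renewal, thank you."]:
    tokens = enc.encode(text)
    print(f"{text!r} -> {len(tokens)} token(s)")
```

Multiplied across millions of requests, even single-token pleasantries add up, which is the irony the episode is pointing at.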
A new JAMA study found that 84% of abortion prescriptions from Aid Access went to patients in states with bans, enabled by “shield laws” protecting telehealth providers across state lines. Provision rates were highest in underserved Southern and Midwestern counties, highlighting telemedicine's role in maintaining access. A Scientific Reports study showed that large language models for clinical use can be manipulated into giving unsafe answers through subtle “adversarial hallucination attacks,” with success rates over 95%. Finally, JAMA Ophthalmology reported GLP-1 agonists may slightly increase risk of sudden vision loss, though benefits still outweigh risks.
A looming deadline always gets attention, and for DoD suppliers, the clock is ticking. On October 1, 2025, the Department of Defense will begin including Cybersecurity Maturity Model Certification (CMMC) requirements in new contracts. This week on Feds At The Edge, four leading experts cut through the complexity and share practical guidance to help you start, or finish, your CMMC journey. Sean Frazier, Federal Chief Security Officer for Okta, explains why “Know Thy Data” is the key to applying the right level of security where it matters most. Alan Dinerman, PhD, Senior Manager, Cyber Strategy, Policy, and Privacy at Mitre, puts CMMC in context with other cybersecurity standards, noting its focus on Controlled Unclassified Information. And Jeff Adorno, Field Chief Compliance Officer at Zscaler, warns of risks in the AI era, where sensitive data can unintentionally “leak” into Large Language Models. The panel as a whole highlights how aligning with existing frameworks and using current technologies can demonstrate progress to auditors and ease compliance. Listen now on your favorite podcast platform because whether you're deep into compliance or just getting started, this conversation will help you navigate the evolving landscape of CMMC and beyond.
This episode is sponsored by SearchMaster, the leader in next-generation Generative Engine Optimization (GEO) for Large Language Models like ChatGPT, Claude, and Perplexity. Future-proof your SEO strategy. Sign up now for a 2-week free trial! Watch this episode on YouTube! Alex Sofronas hosts the Marketing x Analytics Podcast featuring Justin Rashidi, co-founder of data enablement company SeedX. They discuss SeedX's approach to addressing business development issues, focusing on understanding KPIs, causal impact analysis, multi-touch attribution, and marketing mix modeling. Justin elaborates on running various statistical analyses, including click-based tracking, holdout tests, and LTV modeling. The conversation also explores optimizing ad spend and forecasting, emphasizing the importance of data-driven decision-making and the challenges of aligning metrics with business goals. Follow Marketing x Analytics! X | LinkedIn Click Here for Transcribed Episodes of Marketing x Analytics All views are our own.
Live into your greatest possibilities. Join the Limitless Life Club today! https://www.oracleonpurpose.com/the-limitless-life-membership In the age of AI, how can women business founders harness its power? In this episode, Heather Di Rocco, founder of InsureBot Solutions, joins us to talk about using AI to future-proof your business. Heather shares her journey from the military to machine learning, turning challenges into valuable lessons as she transitioned into entrepreneurship. She also opens up about managing the back end of business, the strength found in collaboration, and the importance of market research. Technology can make your life easier if you know how to use it. Success comes when you give yourself grace, forgive, and take action. Learn more on the Oracle On Purpose Podcast: How AI Can Future-Proof Your Business P.S. If you're ready to deepen your understanding of the Law of Attraction and activate real change in your life, check out my audiobook "POWER Up the Law of Attraction"—now available on Audible and Amazon. It's the perfect next step for anyone ready to turn insight into transformation. Grab your copy here! https://www.amazon.com/Audible-Studios-Brilliance-POWER-Attraction/dp/B0F3G1ZD18/ Enjoy the podcast? Subscribe and leave a 5-star review! You can also tune in to this episode on YouTube and all your favorite podcast platforms. Heather Di Rocco is not your average tech whisperer. She is the founder of Insurebot Solutions, a game-changer for business owners looking to harness the power of AI and Large Language Models (LLMs). After serving 20 years in the military, where precision, efficiency, and forward-thinking were non-negotiable, Heather transitioned into the private sector, bringing with her a deep understanding of cutting-edge technology—particularly AI-driven solutions. Connect with Heather Di Rocco. Facebook: https://www.facebook.com/brettashly/ Instagram: https://www.instagram.com/heather_d_rock_ai_maven/ LinkedIn: https://www.linkedin.com/in/insurebotsolutions/ Learn more about InsureBot Solutions. Website: https://insurebotsolutions.ai/ Facebook: https://www.facebook.com/DRockAIMaven LinkedIn: https://www.linkedin.com/company/insurebotsolutions/ I am Lia Dunlap, The Oracle on Purpose with a mission to change people's lives for good. With over 25 years of experience as an Intuitive Business Architect and Coach, I have helped thousands of clients in 76 countries, including hosting three international retreats. As a Best-Selling Author, Founder of the Master Creators Academy, Certified Clinical Hypnotherapist, International Speaker, and Creator of the POWER Plan Life Coaching Program, My Purpose Is Clear: Helping YOU find and follow Your Purpose. I have worked with thousands of leaders, entrepreneurs, and business owners for over two decades, helping them find and experience their Unique Life Purpose. Catch the latest episodes of Oracle On Purpose here! https://www.oracleonpurpose.com/podcast-new Work with Lia today. https://www.oracleonpurpose.com/meet-the-oracle Ask the Oracle - Join the next Oracle Insight & Alignment Call. https://www.oracleonpurpose.com/offers/Qcb9YRFF How Aligned Is Your Business with Your Highest Power? Take the Quiz here: https://oracleonpurpose.outgrow.us/powerbizquiz Connect with Lia Dunlap! 
Website: https://www.oracleonpurpose.com/ Facebook: https://www.facebook.com/CoachLiaDunlap X: https://x.com/CoachLiaDunlap Instagram: https://www.instagram.com/coachliadunlap/# YouTube: https://www.youtube.com/channel/UC8IOgSSGVVNG2usEJE07X8g LinkedIn: https://www.linkedin.com/in/coachliadunlap Produced by https://www.BroadcastYourAuthority.com #AIForBusiness #AIAndEntrepreneurship #TechSavvyWomen
Join hosts Nidhi Madan, MD; Prashant Nagpal, MD, FSCCT; Jill Jacobs, MD, MS-HQSM, FSCCT and Cristina Fuss, MD, PhD, FSCCT as they take a deep dive into featured articles in the May–June 2025 issue of the Journal of Cardiovascular Computed Tomography (JCCT). Our hosts chat with Borek Foldyna, MD, FSCCT; Ming-Yen Ng, MBBS, FRCR, FSCCT; Daisuke Kinoshita, MD; Muhammad Taha Hagar, MD and Philipp Arnold, MD.
This episode will explore:
Air pollution, coronary artery disease, and cardiovascular events: Insights from the PROMISE trial
Using Cardiac CT to Clarify the Relationship between Air Pollution and Atherosclerosis
Performance of Large Language Models for CAD-RADS 2.0 classification derived from Cardiac CT reports
High-risk Plaque Features and Perivascular Inflammation
Support the show
Sub to the Patreon to support the show and access the entire 2nd part of PPM's subtextual analysis of Eddington as soon as it drops: patreon.com/ParaPowerMapping. In which we decode Eddington's subtextual conspiracy themes, endeavoring to argue that the new Ari Aster is perhaps the first major, theatrically released film to have accurately encapsulated the essence of the technocratic AmerIsraeli Years of Lead—in accordance with my personal timeline of the ongoing deep political era that would place its inception around Covid time—and the Silicon Valley capitalist elite's embrace of strategy of tension in the cybernetic service of updating America's Total Info Awareness 2.0 operating system and the installation of their long planned predictive policing panopticon. We discuss: why the appearance of the globo "Antifa PMCs" isn't actually crypto-MAGA chicanery (seeing as they are Gladio operators); Joaquin Phoenix's turn as Sheriff Joe Cross, a Gen X, mumblecore, adoptive son of Sheriff Joe Arpaio type; Eddington as Nashville-esque ensemble comedy cum Coen Brothers Covid Western with the accompanying masking/social distancing standoffs; diagnosing the alienation and social media siloing of the wokespeak & QAnon brain rot of that hot 2020 summer; the role of calibrated algorithmic control; Sheriff Cross's Israeli Civil Guard pin in the OG script; the unfortunate executive production of Len Blavatnik, the Zio·nist billionaire "philanthropist" tied to Brett Ratner, Weinstein, the Bronfmans, etc, mulling whether he might have vetoed the inclusion of that visual gag on Sheriff Joe's regalia vest; the Solidgoldmagikarp Proposed Hyperscale Data Center project, the underlying Pynchon-esque real estate development and land and water use conspiracy; the schizophrenic drifter character Lodge, who opens the film, and his Homeric oracle qualities, spiritually warning against the onset of the Age of AI-quarius; Mike the One Armed Man from Twin Peaks comparisons; Pynchonian Lodge puns; Chekhov's Cough; Louise Cross, Sheriff Cross's wife, the one other farsighted character, and her haldol prescription, evoking Twin Peaks again; a demonic Mark Zuckerberg hinted at as one of the shadowy backers of the Solidgoldmagikarp Data Center in the earlier draft; Gov. Grisham making it into the film by way of an honorary watch and Covid headlines; the David Dees vibe of the cell towers in the opening sequence and various 5G diatribes; Aster lurking on Twitter; an earlier version of the second scene in which Sheriff Cross wrestles with Officer Butterfly Jimenez over who gets to investigate the self-immolation death of a paraplegic conspiracy Youtuber named Mitchell and the Native school uniforms discovered in his accessible van (evoking Missing Indigenous Children); the film's abiding interest in the neocultures that have cropped up around QAnon & pedo-hunters; borderlands and issues of jurisdiction between the Sevilla Co. 
Sheriff and the Santa Lupe Pueblo Tribal Officers; Cesar Chavez & Dolores Huerta's (a New Mexican) Hispanic borderlands community union LUPE aka La Union de Pueblo Entero aka The Union of the Whole People; Santa Lupe Pueblo = SLP = Speech Language Pathologist?; the neighboring, colonized tribal peoples, at their slight remove from Eddington and Treatlerite American society moreover, being the observers best prepared to pathologize the alienation and atomization and societal decay taking hold in the town over Covid; in regards to Speech Language Pathologists, the ever-present theme of miscommunication and the deterioration of consensus reality caused by social media echo chamber-induced myopia, as well as the specter of LLMs or Large Language Models; "Solidgoldmagikarp" alluding to AI & ChatGPT tokens that cause anomalous or erratic behavior...FULL LINER NOTES ON THE PATREON
Music: | Matt Akers - "Necessary Rhythms" https://matthewakers.bandcamp.com/album/tough-to-kill | | Matt Akers - "Night Drive II (Detroit at 2 AM)" |
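For listeners who want to see the "Solidgoldmagikarp" reference for themselves, here is a small, hedged Python sketch using tiktoken: under the GPT-2/GPT-3-era BPE vocabulary the string " SolidGoldMagikarp" was absorbed as a single, rarely trained token, which is what made it trigger the anomalous model behavior documented in 2023; newer encodings split it normally.

```python
# Compare how " SolidGoldMagikarp" tokenizes under an older and a newer
# OpenAI BPE vocabulary (pip install tiktoken).
import tiktoken

for name in ["r50k_base", "cl100k_base"]:  # GPT-2/3-era vs. GPT-4-era encodings
    enc = tiktoken.get_encoding(name)
    tokens = enc.encode(" SolidGoldMagikarp")
    print(f"{name}: {len(tokens)} token(s) -> {tokens}")
```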
I think we're at the precipice of a pretty significant change in how we build software products. Obviously, the recent ascent of vibe coding and all the agentic coding tools that we find very useful and highly effective shows a difference in how we approach building products. But there's another change - not just in how we build, but in who these products are for. This episode of The Bootstrapped Founder is sponsored by Paddle.com
The blog post: https://thebootstrappedfounder.com/building-for-the-age-of-ai-consumers/
The podcast episode: https://tbf.fm/episodes/410-building-for-the-age-of-ai-consumers
Check out Podscan, the Podcast database that transcribes every podcast episode out there minutes after it gets released: https://podscan.fm
Send me a voicemail on Podline: https://podline.fm/arvid
You'll find my weekly article on my blog: https://thebootstrappedfounder.com
Podcast: https://thebootstrappedfounder.com/podcast
Newsletter: https://thebootstrappedfounder.com/newsletter
My book Zero to Sold: https://zerotosold.com/
My book The Embedded Entrepreneur: https://embeddedentrepreneur.com/
My course Find Your Following: https://findyourfollowing.com
Here are a few tools I use. Using my affiliate links will support my work at no additional cost to you.
- Notion (which I use to organize, write, coordinate, and archive my podcast + newsletter): https://affiliate.notion.so/465mv1536drx
- Riverside.fm (that's what I recorded this episode with): https://riverside.fm/?via=arvid
- TweetHunter (for speedy scheduling and writing Tweets): http://tweethunter.io/?via=arvid
- HypeFury (for massive Twitter analytics and scheduling): https://hypefury.com/?via=arvid60
- AudioPen (for taking voice notes and getting amazing summaries): https://audiopen.ai/?aff=PXErZ
- Descript (for word-based video editing, subtitles, and clips): https://www.descript.com/?lmref=3cf39Q
- ConvertKit (for email lists, newsletters, even finding sponsors): https://convertkit.com?lmref=bN9CZw
We explore how artificial intelligence works, why it "hallucinates" and how South Dakota students are envisioning how it serves people in the future. A DSU assistant professor walks us through the technicalities.
TrulySignificant.com presents Shane H. Tepper. He is a creative director, content strategist, and early leader in the emerging field of Large Language Model Optimization (LLMO). He helps brands improve visibility, accuracy, and narrative control across AI-native platforms like ChatGPT, Claude, and Perplexity. With more than 15 years of experience spanning film, advertising, and B2B technology, Tepper operates at the intersection of storytelling and artificial intelligence. He builds content systems designed to be cited by the very models shaping how people search, compare, and make decisions in today's AI-driven world. His recent work includes authoring a foundational white paper on LLMO, leading AI discoverability audits, and designing structured content frameworks optimized for machine ingestion and real-world performance. He advises organizations on LLMO strategy and AI-native content development. Visit www.retina.media.com or email Shane directly with questions at Shanehtepper@gmail.com. Become a supporter of this podcast: https://www.spreaker.com/podcast/success-made-to-last-legends--4302039/support.
Live from ENGAGE 2025, Erin Hartman, CPA, Senior Manager – Firm Services, sits down with Argel Sabillo, CPA, Cofounder and Chief Executive Officer of HeyApril Inc, to discuss the ways he is reshaping the profession and leaving tradition behind. With a client base rooted in internet-based startups and small businesses, HeyApril offers full-scale, end-to-end accounting services. Argel shares how his journey has been defined by bold leaps of faith, innovation, and community impact. Argel offers practical insights on value-based pricing, subscription models, and tech stack optimization, while passionately advocating for firms to niche down and align their business models with mission and outcome, not just services. He also previews HeyApril's next frontier: using Large Language Models (LLMs) to turn client data into real-time, actionable insights. This is an episode packed with inspiration, strategic guidance, and a glimpse into the accounting firm of the future. To find out more about transforming your business model, explore our business model transformation resources at aicpa-cima.com/tybm. You'll also see a link there to all of our previous podcast episodes. This is a podcast from AICPA & CIMA, together as the Association of International Certified Professional Accountants. To enjoy more conversations from our global community of accounting and finance professionals, explore our network of free shows here. Your feedback and comments welcomed at podcast@aicpa-cima.com
In this episode of Elixir Wizards, host Sundi Myint chats with SmartLogic engineers and fellow Wizards Dan Ivovich and Charles Suggs about the practical tooling that surrounds Elixir in a consultancy setting. We dig into how standardized dev environments, sensible scaffolding, and clear observability help teams ship quickly across many client projects without turning every app into a snowflake. Join us for a grounded tour of what's working for us today (and what we've retired), plus how we evaluate new tech (including AI) through a pragmatic, Elixir-first lens. Key topics discussed in this episode: Standardizing across projects: why consistent environments matter in consultancy work Nix (and flakes) for reproducible dev setups and faster onboarding Igniter to scaffold common patterns (auth, config, workflows) without boilerplate drift Deployment approaches: OTP releases, runtime config, and Ansible playbooks Frontend pipeline evolution: from Brunch/Webpack to esbuild + Tailwind Observability in practice: Prometheus metrics and Grafana dashboards Handling time-series and sensor data When Explorer can be the database Picking the right tool: Elixir where it shines, integrations where it counts Using AI with intention: code exploration, prototypes, and guardrails for IP/security Keeping quality high across multiple codebases: tests, telemetry, and sensible conventions Reducing context-switching costs with shared patterns and playbooks Links mentioned: http://smartlogic.io https://nix.dev/ https://github.com/ash-project/igniter Elixir Wizards S13E01 Igniter with Zach Daniel https://youtu.be/WM9iQlQSFg https://github.com/elixir-explorer/explorer Elixir Wizards S14E09 Explorer with Chris Grainger https://youtu.be/OqJDsCF0El0 Elixir Wizards S14E08 Nix with Norbert (Nobbz) Melzer https://youtu.be/yymUcgy4OAk https://jqlang.org/ https://github.com/BurntSushi/ripgrep https://github.com/resources/articles/devops/ci-cd https://prometheus.io/ https://capistranorb.com/ https://ansible.com/ https://hexdocs.pm/phoenix/releases.html https://brunch.io/ https://webpack.js.org/loaders/css-loader/ https://tailwindcss.com/ https://sass-lang.com/dart-sass/ https://grafana.com/ https://pragprog.com/titles/passweather/build-a-weather-station-with-elixir-and-nerves/ https://www.datadoghq.com/ https://sqlite.org/ Elixir Wizards S14E06 SDUI at Cars.com with Zack Kayser https://youtu.be/nloRcgngTk https://github.com/features/copilot https://openai.com/codex/ https://www.anthropic.com/claude-code YouTube Video: Vibe Coding TEDCO's RFP https://youtu.be/i1ncgXZJHZs Blog: https://smartlogic.io/blog/how-i-used-ai-to-vibe-code-a-website-called-for-in-tedco-rfp/ Blog: https://smartlogic.io/blog/from-vibe-to-viable-turning-ai-built-prototypes-into-market-ready-mvps/ https://www.thriftbooks.com/w/eragon-by-christopher-paolini/246801 https://tidewave.ai/ !! We Want to Hear Your Thoughts *!!* Have questions, comments, or topics you'd like us to discuss in our season recap episode? Share your thoughts with us here: https://forms.gle/Vm7mcYRFDgsqqpDC9
The Keluar Sekejap audio broadcast, Episode 167, discusses hot-button issues including the case of Zara Qairina Mahathir, who died after falling from the third floor of her dormitory. The incident, which has raised many questions, has led to calls for a transparent investigation. Second, the focus shifts to the ASEAN AI Malaysia Summit 2025, held at MITEC on 12 and 13 August 2025, which saw YTL launch ILMU, a Malaysian-built Large Language Model (LLM) developed for use across multiple sectors, marking an important step in the nation's aspiration to build its own AI capabilities. Finally, this episode unpacks the upside-down flag issue championed by UMNO Youth Chief Akmal Salleh, following several incidents that have occurred throughout this month.
Timestamps EP167
00:00 Intro
00:20 ASEAN AI Malaysia Summit 2025
33:20 Attack on YB Rafizi's Child
36:21 Justice For Zara Qairina
58:42 Upside-Down Flag Issue
The panel debates Microsoft's pushy AI search in Edge, privacy concerns over Copilot Memory, and compares AI tools like Perplexity and ChatGPT for search relevance. Chuck Joiner, David Ginsburg, Marty Jencius, Brian Flanigan-Arthurs, Web Bixby, Guy Serle, Jim Rea, and Jeff Gamet question Microsoft's past use of China-based engineers for U.S. military support, review a possible iPhone 17 sighting, and discuss Apple's testing secrecy. Today's MacVoices is supported by TV+ Talk, our MacVoices series with Charlotte Henry focused on Apple TV+. From shows and other content to the business side there's always something to learn about Apple's streaming service. Find it at the Categories listings on the web site or go directly to macvoices.com/category/tv-talk. Show Notes: Chapters: [0:38] Microsoft's AI Browser and Pushy Search Prompts [3:33] Perplexity vs. ChatGPT Search Quality [6:06] Copilot Memory and Personalization Privacy Concerns [8:00] Different AI Tools for Different Needs [11:00] Microsoft's China-Based Military Support Controversy [13:22] AI Search Engine Recommendations Document [15:26] Alleged iPhone 17 Sighting in San Francisco [18:48] Public Testing vs. Secrecy in Apple Prototypes [22:32] Could Apple Leak Prototypes on Purpose? [24:50] Closing Roundtable and Podcast Plugs
Links:
Microsoft trials Copilot Mode in Edge – https://www.engadget.com/ai/microsoft-trials-copilot-mode-in-edge-201851903.html
Copilot doesn't just remember, it also understands you – https://www.microsoft.com/en-us/microsoft-copilot/for-individuals/do-more-with-ai/general-ai/ai-that-doesnt-just-remember-it-gets-you?form=MA13KP
Microsoft's controversial Recall feature is now blocked by Brave and AdGuard – https://www.theverge.com/news/713676/brave-adguard-windows-recall-block-microsoft
Microsoft to stop using engineers in China for tech support of US military, Hegseth orders review – https://www.reuters.com/world/us/microsoft-stop-using-engineers-china-tech-support-us-military-hegseth-orders-2025-07-18/
iPhone 17 development device spotted in the wild – https://appleinsider.com/articles/25/07/28/iphone-17-development-device-spotted-in-the-wild
Guests: Web Bixby has been in the insurance business for 40 years and has been an Apple user for longer than that. You can catch up with him on Facebook, Twitter, and LinkedIn, but he prefers Bluesky. Brian Flanigan-Arthurs is an educator with a passion for providing results-driven, innovative learning strategies for all students, but particularly those who are at-risk. He is also a tech enthusiast who has a particular affinity for Apple since he first used the Apple IIGS as a student. You can contact Brian on twitter as @brian8944. He also recently opened a Mastodon account at @brian8944@mastodon.cloud. Jeff Gamet is a technology blogger, podcaster, author, and public speaker. Previously, he was The Mac Observer's Managing Editor, and the TextExpander Evangelist for Smile. He has presented at Macworld Expo, RSA Conference, several WordCamp events, along with many other conferences. You can find him on several podcasts such as The Mac Show, The Big Show, MacVoices, Mac OS Ken, This Week in iOS, and more. Jeff is easy to find on social media as @jgamet on Twitter and Instagram, jeffgamet on LinkedIn, @jgamet@mastodon.social on Mastodon, and on his YouTube Channel at YouTube.com/jgamet. David Ginsburg is the host of the weekly podcast In Touch With iOS where he discusses all things iOS, iPhone, iPad, Apple TV, Apple Watch, and related technologies. 
He is an IT professional supporting Mac, iOS and Windows users. Visit his YouTube channel at https://youtube.com/daveg65 and find and follow him on Twitter @daveg65 and on Mastodon at @daveg65@mastodon.cloud. Dr. Marty Jencius has been an Associate Professor of Counseling at Kent State University since 2000. He has over 120 publications in books, chapters, journal articles, and others, along with 200 podcasts related to counseling, counselor education, and faculty life. His technology interest led him to develop the counseling profession ‘firsts,' including listservs, a web-based peer-reviewed journal, The Journal of Technology in Counseling, teaching and conferencing in virtual worlds as the founder of Counselor Education in Second Life, and podcast founder/producer of CounselorAudioSource.net and ThePodTalk.net. Currently, he produces a podcast about counseling and life questions, the Circular Firing Squad, and digital video interviews with legacies capturing the history of the counseling field. He is also co-host of The Vision ProFiles podcast. Generally, Marty is chasing the newest tech trends, which explains his interest in A.I. for teaching, research, and productivity. Marty is an active presenter and past president of the NorthEast Ohio Apple Corp (NEOAC). Jim Rea built his own computer from scratch in 1975, started programming in 1977, and has been an independent Mac developer continuously since 1984. He is the founder of ProVUE Development, and the author of Panorama X, ProVUE's ultra fast RAM based database software for the macOS platform. He's been a speaker at MacTech, MacWorld Expo and other industry conferences. Follow Jim at provue.com and via @provuejim@techhub.social on Mastodon. Guy Serle, best known for being one of the co-hosts of the MyMac Podcast, sincerely apologizes for anything he has done or caused to have happened while in possession of dangerous podcasting equipment. He should know better but being a blonde from Florida means he's probably incapable of understanding the damage he has wrought. Guy is also the author of the novel, The Maltese Cube. You can follow his exploits on Twitter, catch him on Mac to the Future on Facebook, at @Macparrot@mastodon.social, and find everything at VertShark.com. Support: Become a MacVoices Patron on Patreon http://patreon.com/macvoices Enjoy this episode? Make a one-time donation with PayPal Connect: Web: http://macvoices.com Twitter: http://www.twitter.com/chuckjoiner http://www.twitter.com/macvoices Mastodon: https://mastodon.cloud/@chuckjoiner Facebook: http://www.facebook.com/chuck.joiner MacVoices Page on Facebook: http://www.facebook.com/macvoices/ MacVoices Group on Facebook: http://www.facebook.com/groups/macvoice LinkedIn: https://www.linkedin.com/in/chuckjoiner/ Instagram: https://www.instagram.com/chuckjoiner/ Subscribe: Audio in iTunes Video in iTunes Subscribe manually via iTunes or any podcatcher: Audio: http://www.macvoices.com/rss/macvoicesrss Video: http://www.macvoices.com/rss/macvoicesvideorss
After many months of making fun of the term "vibe coding," Emily and Alex tackle the LLMs-as-coders fad head-on, with help from security researcher Susanna Cox. From one person's screed that proclaims everyone not on the vibe-coding bandwagon to be crazy, to the grandiose claim that LLMs could be the "opposable thumb" of the entire world of computing. It's big yikes, all around. Susanna Cox is a consulting AI security researcher and a member of the core author team at OWASP AI Exchange.
References:
My AI Skeptic Friends Are All Nuts
LLMs: the opposable thumb of computing
A disastrous day in the life of a vibe coder
Also referenced:
Signal president Meredith Whittaker on the fundamental security problem with agentic AI
The "S" in MCP stands for security
Our Opinions Are Correct: The Turing Test is Bullshit
AI Hell:
Sam Altman: The (gentle) singularity is already here
What do the boosters think reading is, anyway?
Meta's climate model made up fake CO2 removal ideas
Ongoing lawsuit means all your ChatGPT conversations will be saved
"Dance like you're part of the training set"
Some Guy tries to mansplain Signal to…Signal's president
WSJ headline claims ChatGPT "self-reflection", gets dunked
Check out future streams on Twitch. Meanwhile, send us any AI Hell you see. Our book, 'The AI Con,' is out now! Get your copy now. Subscribe to our newsletter via Buttondown.
Follow us!
Emily: Bluesky: emilymbender.bsky.social Mastodon: dair-community.social/@EmilyMBender
Alex: Bluesky: alexhanna.bsky.social Mastodon: dair-community.social/@alex Twitter: @alexhanna
Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.
Sergej Kotliar is the CEO of Bitrefill, while Matt Ahlborg recently created PPQ.AI to enable millions of users to experience LLMs without expensive subscriptions. But how are bitcoin payments doing? In this episode, they present their latest stats.
Co-hosts Mark Thompson and Steve Little explore the groundbreaking release of ChatGPT-5, which arrived after over a year of anticipation. They discuss how this new model transforms the AI landscape with better reasoning, larger context windows, and dramatically reduced hallucinations. The hosts examine OpenAI's new Study and Learn Mode, which acts as a personal tutor rather than just providing answers, making it ideal for genealogists who want to deepen their understanding of their favourite topic. This week's Tip of the Week cautions beginners about challenging AI tasks like handwritten transcriptions and structured files, recommending they master the basics first. In RapidFire, they cover OpenAI's first open-source release since 2019, NotebookLM's video capabilities, and impressive AI company earnings reports. Timestamps: In the News: 00:55 ChatGPT-5 Has Arrived: Improved Features (mostly) for Genealogists; 16:29 OpenAI's Study and Learn Mode: Your Personal Genealogy Tutor; 23:40 Claude Releases Opus 4.1: Enhanced Reasoning and Writing. Tip of the Week: 29:25 AI Tasks for Beginners to Be Cautious Of. RapidFire: 40:16 OpenAI Releases First Open Source Model Since 2019; 48:33 NotebookLM Upgrade Adds Video Support; 53:34 AI Companies Report Record Earnings. Resource Links: Introduction to Family History AI: https://tixoom.app/fhaishow/ | OpenAI GPT-5 Model Card: https://openai.com/index/gpt-5-system-card/ | Introducing study mode: https://openai.com/index/chatgpt-study-mode/ | ChatGPT Study Mode - FAQ: https://help.openai.com/en/articles/11780217-chatgpt-study-mode-faq | Claude Opus 4.1: https://www.anthropic.com/news/claude-opus-4-1 | OpenAI announces two "gpt-oss" open AI models: https://arstechnica.com/ai/2025/08/openai-releases-its-first-open-source-models-since-2019/ | Google's NotebookLM rolls out Video Overviews: https://techcrunch.com/2025/07/29/googles-notebooklm-rolls-out-video-overviews/ | Tech bubble going pop: AI pays the price for inflated expectations: https://www.theguardian.com/commentisfree/article/2024/aug/07/the-guardian-view-on-a-tech-bubble-going-pop-ai-pays-the-price-for-inflated-expectations | Is The AI Bubble About To Burst?: https://www.forbes.com/sites/bernardmarr/2024/08/07/is-the-ai-bubble-about-to-burst/ | Google loses appeal in antitrust battle with Fortnite maker: https://masslawyersweekly.com/2025/08/06/google-play-monopoly-verdict-epic-games-win/ | Department of Justice Prevails in Landmark Antitrust Case Against Google: https://www.justice.gov/opa/pr/department-justice-prevails-landmark-antitrust-case-against-google Tags: Artificial Intelligence, Technology, Genealogy, Family History, OpenAI, ChatGPT-5, Claude, Large Language Models, AI Learning Tools, Study Mode, Open Source AI, NotebookLM, Video Overviews, AI Reasoning, Context Windows, Hallucination Reduction, GEDCOM Files, Handwritten Transcription, Document Analysis, AI Earnings, Google Antitrust, Apache License, Local AI Processing, Privacy, AI Education, Tutoring Systems, Coding Capabilities, Multilingual Processing, AI Development, Family History Research, Genealogists, AI Tools, Machine Learning
This episode is sponsored by SearchMaster. Optimize your content for traditional search engines AND next-generation AI traffic from Large Language Models like ChatGPT, Claude, and Perplexity. Future-proof your SEO strategy. Sign up now for a 2-week free trial! Watch this episode on YouTube! In this episode of the Marketing x Analytics Podcast, host Alex Sofronas interviews Justin Abrams, CEO and founder of Aryo Consulting Group, a Boston-based consultancy. Justin discusses Aryo's approach to integrating strategy, marketing, and technology to help small businesses grow, comparing his firm to a 'McKinsey for small business.' They also delve into the challenges with Return on Ad Spend (ROAS), the impact of AI on various industries, and the future of software development and digital marketing, highlighting opportunities for entrepreneurs and local communities. Follow Marketing x Analytics! X | LinkedIn Click Here for Transcribed Episodes of Marketing x Analytics All views are our own.
What's the Fastest Way to Get Webpages Indexed (Technical SEO) by Search Engines? with SEO Expert, Favour Obasi-ike, MBA, MS | Get exclusive SEO newsletters in your inbox. This episode focuses on search engine optimization (SEO) and the fastest ways to get indexed by search engines, extending beyond just Google to include other platforms and AI-powered large language models (LLMs) like ChatGPT, emphasizing that building trust with search engine algorithms is paramount, achieved through consistent content creation, linking strategies (backlinks), and connecting websites via tools like Google Search Console. Favour highlights the importance of updating existing content and addressing user queries to improve visibility across various search and AI platforms, ultimately advocating for a strategic and patient approach to online presence rather than solely focusing on a single ranking metric. AD BREAK: Get 20% off your first booking & be the first to know about our new arrivals, spa deals, and events with Somatic Massage. Frequently Asked Questions about Search Engine Indexing and Online Presence: What is the fastest way to get indexed by search engines? The fastest way to get indexed by search engines is by building trust and establishing connections. This means having conversations around the questions people are asking and providing answers in the form of website links. These links should then be shared on other reputable websites to create backlinks, which signal to search engines that your website has authority. It's not just about creating a lot of content, but about creating relevant, high-quality content that answers user queries and is linked to by trusted sources. Why isn't my website ranking on search engines? There are several reasons why your website might not be ranking. Common issues include not having your website manually indexed or automatically discovered by search engines, or not being connected to Google Search Console. Additionally, your content might not be seen if it's not frequently updated, as AI servers and search engines prioritize recently modified content. A lack of engagement and underutilization of your website compared to time spent on social media can also hinder its visibility. Essentially, if search engines aren't "seeing" your content, they can't recommend it. How long does it take for SEO efforts to show results? Ranking SEO web pages on Google and other search engines takes time and consistency. While immediate indexing can occur within hours or days for consistent posters, significant milestones, such as receiving your first 10 clicks, can take around six months, even with hundreds of articles. The key is consistent effort, building trust with algorithms, and maintaining an active online presence. The compound effect of consistent content creation can lead to substantial impressions over time. How does trust factor into search engine ranking? Trust is paramount for search engine ranking. Just as in human relationships, search engines, particularly Google, rely on trust to refer content. This trust is established when other third-party websites, which Google already trusts, link to your website, thereby vouching for your site's authority.
These "off-page SEO referring domains" (like links on Reddit, Trustpilot, LinkedIn, Pinterest) may have varying impact, but they contribute to your credibility and signal to search engines that your content is valuable and reliable.Is traditional SEO still relevant with the rise of AI and Large Language Models (LLMs) like ChatGPT?Yes, traditional SEO is still very relevant and, in fact, synergistic with AI and LLMs. While AI provides generative answers, it often sources its information from traditional search engines like Google. Therefore, optimizing your content for Google through good SEO practices (like answering frequently asked questions, using appropriate keywords, and having a well-structured site map) directly contributes to your brand being cited and mentioned in AI-generated responses. AI and SEO are not competing but are interdependent, with AI leveraging the foundation built by strong SEO.How can I optimize my content for AI search engines?To optimize for AI search engines, focus on providing succinct, evidence-based answers to specific, question-based headings, similar to "People Also Ask" sections on Google. Ensure your content is frequently updated ("last modified" date is recent) as AI prioritizes fresh information. AI servers are looking for up-to-date, relevant context. By consistently creating and updating content that answers user queries and by connecting your website to search engines via tools like Google Search Console, you increase the likelihood of being sourced and mentioned by AI.What is the significance of a "sitemap" and "DNS" in getting indexed?A sitemap acts as a map of your website, providing search engines with a structured list of all your pages, products, and blogs. Submitting an updated sitemap is crucial for search engines to crawl and understand your site's content. DNS (Domain Name Server) is like your unique digital DNA for your website, confirming your ownership of the domain. Connecting your DNS record with a unique identification number (like a TXT record from Google Search Console) gives search engines access to your site's architecture, allowing them to effectively read and index your content.What is the difference between manual and auto-indexing, and how do they impact visibility?Manual indexing involves actively submitting your website or specific pages to search engines (e.g., through Google Search Console) to ensure they are discovered. Auto-indexing refers to the automatic discovery and crawling of your site by search engines over time due to consistent activity and established trust. While manual indexing provides an initial push, consistent content creation and updates increase your "crawl budget," leading to higher priority and more frequent auto-indexing. Both are important; consistent manual effort eventually leads to more efficient auto-indexing and better long-term visibility.Digital Marketing Resources:>> Join our exclusive SEO Marketing community>> SEO Optimization Blogs>> Book Complimentary SEO Discovery Call>> Subscribe to We Don't PLAY PodcastBrands We Love and SupportLoving Me Beauty | Buy Vegan-based Luxury ProductsUnlock your future in real estate—get certified in Ghana today!See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Obvious: ChatGPT's GPT-5 is here and it's really good. Not so obvious: the gap between novices and experts just shrank 90%. In a few short hours, OpenAI gave even free users access to what is now the world's most powerful model. And since ChatGPT is the most used AI chatbot in the world by a wide margin, the quality of the work we all produce has also just gotten a huge bump. But there's a lot beneath the surface. Join us as we dissect what's new in GPT-5 and 7 big trends you probably don't know about but should pay attention to. Newsletter: Sign up for our free daily newsletter. More on this Episode: Episode Page. Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn. Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup. Website: YourEverydayAI.com Email The Show: info@youreverydayai.com Connect with Jordan on LinkedIn. Topics Covered in This Episode: GPT-5 Official Release Overview; GPT-5 User Rollout to 700 Million; GPT-5 Unified Hybrid Model Architecture; Auto Model Switching and User Control; Major Upgrade for Free ChatGPT Users; GPT-5 Benchmark and Accuracy Improvements; GPT-5 Vibe Coding and Canvas Features; Advanced Voice Mode in Custom GPTs; Reduced Hallucinations and Sycophancy; Microsoft Copilot Instant GPT-5 Upgrade; Impact on Enterprise Software and APIs; GPT-5 Disruptive API Pricing Structure; Trends in Corporate AI Adoption. Timestamps: 00:00 "Everyday AI Insights"; 05:54 "Adaptive Model Response Modes"; 08:14 GPT4O Model Critique; 11:17 GPT4O Nano Upgrade Impact; 17:26 GPT Model Selection Simplified; 20:53 Canvas Code Rendering and Quick Answer Feature; 24:09 "GPT5 Model Routing Overview"; 26:44 "GPT-5: Your New Daily Driver"; 30:08 AI Model Advances: Game-Changing Improvements; 33:43 Advanced Voice Mode in GPTs; 37:45 Massive Microsoft Copilot Upgrade; 38:49 Software Access and Licensing Challenges; 43:09 AI Implementation Challenges in Top Companies; 46:37 "GPT-5 Testing and Trends". Keywords: GPT-5, GPT5, OpenAI, AI model update, Large Language Model, flagship model, hybrid model, AI technology, model auto-switching, deep thinking mode, fast response mode, model router, free AI access, paid ChatGPT users, ChatGPT free users, model selection, GPT-4O, GPT-4 Turbo, model reasoning, hallucination rate, sycophancy reduction, advanced voice mode, GPTs custom models, Canvas mode, Vibe coding, API pricing, API tokens, Microsoft Copilot, Microsoft 365 Copilot, GitHub Copilot, enterprise AI upgrade, LM arena, ELO score, Anthropic, Claude 4.1, Claude Sonnet, Gemini 2.5 Pro, personalized AI assistant, software innovation, coding capabilities, Inc 5000 companies, enterprise adoption, custom instructions, Pro plan, Plus plan, thinking mode, human preference, automated r. Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info) Ready for ROI on GenAI? Go to youreverydayai.com/partner
Shannon Wongvibulsin, MD, PhD and Shreya Johri, PhD interviewed by William Lewis, MD
Special guest and originating visionary Hayley of Vancouver joins us in the Shed for our milestone 200th episode! We try to play it all very cool, even though (as we hear again in this episode) we have already caught the attention of various social media tool producers by our sheer longevity. Nothing spells success like people thinking they could make money from you, are we right? Still, even though on the inside we're dancing around shouting and high-fiving each other, we coolly look at an unusually weak moneymaker, listen to Hayley's deep disappointment over her middle name, hear another piece of Listener Mail, discuss baby names, hear about Rich's planning for three foosball trips (one of which is anachronistic), and then…and then we slide into politics, as though against our will. Don't worry, it's grim. Your Shed Dogs have come to this. Stick with us, we'll all get through it together.Links: Shed Dogs; the award-winning Shambhala episode featuring Hayley of Vancouver; what RJ calls “bots”, at least the conversational ones like ChatGPT's text mode, are actually Large Language Models (LLMs); Magic Mind (not an endorsement); Roderick on the Line (the buddy podcast RJ refers to); Michael Alig; the Canada Foosball Hall of Fame; fascism in the USA; Melania's prenup; Ōura Ring.Theme music is Escaping like Indiana Jones by Komiku, with permission.
"As agentic AI spreads across industries,” states Rishi Rana, the Chief Executive Officer at Cyara. “Everybody is curious to understand how that is going to transform customer experience across all the channels?"In this episode of the Tech Transformed podcast, Shubhangi Dua, the Host and Podcast Producer at EM360Tech, talks with Rishi Rana, the CEO of Cyara, about how agentic AI is changing customer experience (CX). They look at how AI has developed from simple chatbots to advanced systems that can understand and predict customer needs. Rana spotlights the need for ongoing testing and monitoring to make sure AI solutions work well and follow the regulations. They also discuss the obstacles businesses encounter when implementing AI, the importance of good data, and the future of AI agents in improving customer interactions.Agentic AI Transforming Customer Experience (CX)Customer experience (CX) is changing quickly and significantly, thanks to the rise of agentic AI. These advanced systems go beyond the basic chatbots of the past. While such a change may offer a future equipped with a smart, proactive customer journey, it doesn't come without its challenges. These obstacles require organisations to thoughtfully plan and carefully execute strategies.For years, chatbots provided a basic type of automated customer support. However, Rana explains that the evolution of AI is pushing boundaries. "AI in customer experience (CX) is changing from a basic level of chatbots that have been present for the last five or 10 years. Now they are turning into fully agentic systems that operate across voice, digital and human-assisted channels," said Rana. Moving Beyond Basic ChatbotsChatbots' lucrative development lies in the strengths of Large Language Models (LLMs) like Google's Gemini, Meta's Llama, and OpenAI's ChatGPT. This is because the AI-backing models are facilitating "voice bots" and other AI agents to move beyond simple response automation to intelligent orchestration. Intelligent orchestration results in anticipating user needs, adjusting in real-time, and guiding customers to hybrid solutions where AI and human agents work together. Ultimately, the goal is to greatly improve the customer experience (CX). Studies suggest that 86 per cent of people are willing to pay more for the same service, no matter what it is, when the customer experience is better.Advancements don't come without a price. Rana believes the lack of proper guardrails is a cause for concern. "AI is great, but you need to have guardrails and ensure the intent behind the questions and the objective behind the customer interaction is getting answered." This requires ongoing testing and monitoring across all channels to ensure consistency and avoid problems like hallucinations, misuse, or bias. These issues can result in major financial losses and damage to reputation. For instance, Rishi Rana mentioned that over "$10 billion in violations and liabilities due to incorrect information given to customers" occurred in 2024 alone.To successfully execute agentic AI, enterprises must shift left with AI by...
In this episode of Eye on AI, host Craig Smith sits down with Alex Salazar, co-founder and CEO of Arcade.dev, to explore what it really takes to build secure, scalable AI agents that can take real-world actions. While everyone's talking about the future of autonomous agents, most never make it past the demo stage. Why? Because agents today lack secure infrastructure to connect with real tools like Gmail, Slack, Notion, GitHub—and do so on behalf of users without breaking authentication protocols. Alex shares how Arcade solves the missing layer in AI agent development: secure tool execution, user-specific authorization, OAuth flows, and production-ready consistency. Whether you're building with GPT‑4, Claude, or open-source models, Arcade handles the hard part—making agent actions actually work. Stay Updated: Craig Smith on X: https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI (00:00) Why AI Agents Can't Take Action (Yet) (01:27) Meet Alex Salazar: From Okta to Arcade (03:39) What Arcade.dev Actually Does (05:16) Agent Protocols: MCP, ACP & Where Arcade Fits (07:36) Arcade Demo: Building a Multi-Tool AI Agent (11:16) Handling Secure Authentication with OAuth (14:40) Why Agents Need User-Tied Authorization (19:25) Tools vs APIs: The Real Interface for LLMs (23:41) How Arcade Ensures Agents Go Beyond Demos (25:48) Why Arcade Focuses on Developers, Not Consumers (27:55) The Roadblocks to Production-Ready Agents (31:15) How Arcade Integrates Into Agent Workflows (33:16) Tool Calling & Model Compatibility Challenges (34:49) Arcade's Pricing Model Explained (36:20) Competing with Big Tech: IBM, AWS & Others (38:38) Future of Agents: From Hype to Workflow Automation (41:58) Real Use Cases: Email Agents, Slack Bots, Finance & More (46:17) Agent Marketplaces & The Arcade Origin Story
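To make the "user-specific authorization" idea discussed above concrete, here is a rough, generic sketch. It is not Arcade's actual SDK or API; every class, function, and URL is a made-up placeholder. It simply shows how an agent runtime might gate a tool call on a per-user OAuth token before executing anything on the user's behalf.

```python
# Illustrative only: a toy authorization gate for agent tool calls (names are hypothetical).
from dataclasses import dataclass, field

@dataclass
class TokenStore:
    # Maps (user_id, service) -> OAuth access token previously obtained via a consent flow.
    tokens: dict = field(default_factory=dict)

    def get(self, user_id: str, service: str):
        return self.tokens.get((user_id, service))

def call_tool(user_id: str, service: str, tool, args: dict, store: TokenStore):
    """Run a tool on behalf of a user only if that user has authorized the service."""
    token = store.get(user_id, service)
    if token is None:
        # A real system would start an OAuth flow here and hand back the consent URL.
        return {"status": "authorization_required",
                "authorize_url": f"https://auth.example.com/oauth/start?service={service}"}
    return {"status": "ok", "result": tool(args, token)}

# Toy tool: pretends to send an email using the user's own credentials.
def send_email(args: dict, token: str):
    return f"sent '{args['subject']}' using token {token[:4]}..."

store = TokenStore({("alice", "gmail"): "ya29.example-token"})
print(call_tool("alice", "gmail", send_email, {"subject": "Hi"}, store))  # executes
print(call_tool("bob", "gmail", send_email, {"subject": "Hi"}, store))    # asks for consent
```

The key design point the episode circles around is exactly this: the agent never holds blanket credentials; each action is tied to a specific user's grant.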
Co-hosts Mark Thompson and Steve Little explore OpenAI's groundbreaking ChatGPT Agent, demonstrating how this autonomous tool can research, analyze, and perform complex tasks on your behalf. Next, they address important security concerns to consider in the new world of AI agents, introducing practical guidelines for protecting sensitive family data and avoiding prompt injection attacks. This week's Tip of the Week provides a back-to-basics guide on what AI is and its four core strengths: summarization, extraction, generation, and translation. In RapidFire, they discuss OpenAI's rumored office suite, Microsoft and Google's own efforts to integrate AI into their office suites, and recently announced AI infrastructure investments, including Meta's Manhattan-sized data center and President Trump's new AI Action Plan. The hosts also announce their new Family History AI Show Academy, a five-week course beginning in October of 2025. See https://tixoom.app/fhaishow/ for more details. Timestamps: In the News: 05:20 ChatGPT Agent: Autonomous Research Assistant for Genealogists; 22:49 Safe and Secure in the Age of AI. Tip of the Week: 36:20 What is AI and What is it Good For? Back to Basics. RapidFire: 50:57 OpenAI's Office Suite Rumors; 53:56 Microsoft and Google Bring AI to Their Office Suites; 60:17 Big AI Infrastructure: Manhattan-Sized Data Centers. Resource Links: Introduction to Family History AI: https://tixoom.app/fhaishow/ | Do agents work in the browser?: https://www.bensbites.com/p/do-agents-work-in-the-browser | Introducing ChatGPT agent: bridging research and action: https://openai.com/index/introducing-chatgpt-agent/ | OpenAI's new ChatGPT Agent can control an entire computer and do tasks for you: https://www.theverge.com/ai-artificial-intelligence/709158/openai-new-release-chatgpt-agent-operator-deep-research | OpenAI's New ChatGPT Agent Tries to Do It All: https://www.wired.com/story/openai-chatgpt-agent-launch/ | Agent demo post: https://x.com/rowancheung/status/1945896543263080736 | OpenAI Quietly Designed a Rival to Google Workspace, Microsoft Office: https://www.theinformation.com/articles/openai-quietly-designed-rival-google-workspace-microsoft-office | OpenAI Is Quietly Creating Tools to Take on Microsoft Office and Google Workspace: https://www.theglobeandmail.com/investing/markets/stocks/MSFT/pressreleases/33074368/openai-is-quietly-creating-tools-to-take-on-microsoft-office-and-google-workspace-googl/ | What's new in Microsoft 365 Copilot?: https://techcommunity.microsoft.com/blog/microsoft365copilotblog/what%E2%80%99s-new-in-microsoft-365-copilot--june-2025/4427592 | Google Workspace enables the future of AI-powered work for every business: https://workspace.google.com/blog/product-announcements/empowering-businesses-with-AI | Google Workspace Review: Will it Serve My Needs?: https://www.emailtooltester.com/en/blog/google-workspace-review/ Tags: Artificial Intelligence, Genealogy, Family History, AI Agents, ChatGPT Agent, OpenAI, Computer Use, AI Security, Prompt Injection, Database Analysis, RootsMagic, Cemetery Records, AI Office Suite, Microsoft 365 Copilot, Google Workspace, Data Centers, AI Infrastructure, Natural Language Processing, Large Language Models, Context Windows, AI Education, Family History AI Show Academy, AI Reasoning Models, Autonomous Research, AI Ethics
Welcome to Chat GPT, the only podcast where artificial intelligence takes the mic to explore the fascinating, fast-changing world of AI itself. From ethical dilemmas to mind-bending thought experiments, every episode is written and narrated by AI to help you decode the technology shaping our future. Whether you're a curious beginner or a seasoned techie, this is your front-row seat to the rise of intelligent machines—told from their perspective. Tune in for smart stories, surprising insights, and a glimpse into the future of thinking itself. Listen Ad Free https://www.solgoodmedia.com - Listen to hundreds of audiobooks, thousands of short stories, and ambient sounds all ad free!
There's a new most powerful AI model in town. Apple is trying to make a ChatGPT competitor. And OpenAI? Well... they're in a capacity crunch. Big Tech made some BIG moves in AI this week. And you probably missed them. Don't worry. We gotchyu. On Mondays, Everyday AI brings you the AI News that Matters. No B.S. No marketing fluff. Just what you need to know to be the smartest person in AI at your company. Newsletter: Sign up for our free daily newsletter. More on this Episode: Episode Page. Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn. Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup. Website: YourEverydayAI.com Email The Show: info@youreverydayai.com Connect with Jordan on LinkedIn. Topics Covered in This Episode: OpenAI Study Mode in ChatGPT Launch; Google Gemini 2.5 Deep Think Release; Gemini 2.5 Parallel Thinking and Coding Benchmarks; Google AI Mode: PDF and Canvas Features; Notebook LM Video Overviews Customization; Microsoft Edge Copilot Mode Experimental Rollout; OpenAI GPT-5 Model Launch Delays; Apple Building In-House ChatGPT Competitor; Microsoft and OpenAI Partnership Renegotiation; Additional AI Tool Updates: Runway, Midjourney, Ideogram. Timestamps: 00:00 AI Industry Updates and Competition; 03:22 ChatGPT's Study Mode Promotes Critical Thinking; 09:02 "Google AI Search Mode Enhancements"; 10:21 Google AI Enhances Learning Tools; 16:14 Microsoft Edge Introduces Copilot Mode; 20:18 OpenAI GPT-5 Delayed Speculation; 22:42 Apple Developing In-House ChatGPT Rival; 27:06 Microsoft-OpenAI Partnership Renegotiation; 30:51 Microsoft-OpenAI Partnership Concerns Rise; 33:23 AI Updates: Video, Characters, Amazon. Keywords: Microsoft and OpenAI renegotiation, Copilot, OpenAI, GPT-5, AI model, Google Gemini 2.5, Deep Think mode, Google AI mode, Canvas mode, NotebookLM, AI browser, Agentic browser, Edge browser, Perplexity Comet, Sora, AI video tool, AI image editor, Apple AI chatbot, ChatGPT competitor, Siri integration, Artificial General Intelligence, AGI, Large Language Models, AI education tools, Study Mode, Academic cheating, Reinforcement learning, Parallel thinking, Code Bench Competition, Scientific reasoning, Chrome, Google Lens, Search Live, AI-powered search, PDF upload, Google Drive integration, Anthropic, Meta, Superintelligent labs, Amazon Alexa, Fable Showrunner, Ideogram, Midjourney, Luma Dream Machine, Zhipu GLM 4.5, Runway Alif, Adobe Photoshop harmonize, AI funding, AI product delays, AI feature rollout, AI training, AI onboarding, AI-powered presentations, AI-generated overviews, AI in business, AI technology partnership, AI investment, AI talent acq. Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info) Ready for ROI on GenAI? Go to youreverydayai.com/partner
In this episode of Theory & Insights, we bring together two thought leaders at the intersection of healthcare innovation and pharmaceutical manufacturing — John Nosta, renowned AI and technology theorist and founder of NostaLab, and Stephen Beckman, CEO of YARAL Pharma, a rising force in U.S. generics. Together, they dive into the evolving impact of Artificial Intelligence (AI) and Large Language Models (LLMs) on pharmaceutical manufacturing. The discussion covers the promise and peril of AI in reshaping everything from R&D to regulatory pathways, as well as the ethics, economics, and operational shifts that could redefine the industry in the next decade. This is a must-listen for pharma execs, digital health strategists, and technology innovators looking to understand what's next.
Welcome to Episode 407 of the Microsoft Cloud IT Pro Podcast. In this episode, we dive deep into the Model Context Protocol (MCP) - a game-changing specification that's extending the capabilities of Large Language Models (LLMs) and creating exciting new possibilities for IT professionals working with Microsoft Azure and Microsoft 365. MCP represents a significant shift toward more extensible and domain-specific AI interactions. Instead of being limited to pre-trained knowledge, you can now connect your AI tools directly to live data sources, APIs, and services that matter to your specific role and organization. Whether you're managing Azure infrastructure, creating content, or developing solutions, MCP provides a framework to make your AI interactions more powerful and contextually relevant to your daily workflows. Your support makes this show possible! Please consider becoming a premium member for access to live shows and more. Check out our membership options. Show Notes Introducing the Model Context Protocol Understanding MCP server concepts Understanding MCP client concepts A list of applications that support MCP integrations About the sponsors Would you like to become the irreplaceable Microsoft 365 resource for your organization? Let us know!
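As a concrete illustration of the pattern that episode describes, here is a minimal, hedged sketch of an MCP-style server loop in Python. It is not the official MCP SDK; it just shows the core idea of exposing tools over JSON-RPC-style "tools/list" and "tools/call" messages so an LLM host can reach live data. The Azure resource-group lookup is a made-up placeholder, not a real API call.

```python
# Toy sketch of the MCP idea (not the official SDK): a server that advertises tools
# and executes them when a host/client sends JSON-RPC-style requests over stdio.
import json
import sys

# Hypothetical placeholder standing in for a real data source or service call.
def list_resource_groups(subscription: str) -> list[str]:
    return [f"{subscription}-rg-prod", f"{subscription}-rg-dev"]

TOOLS = {
    "list_resource_groups": {
        "description": "List resource groups for a subscription (placeholder data).",
        "handler": list_resource_groups,
    }
}

def handle(request: dict) -> dict:
    if request["method"] == "tools/list":
        result = {"tools": [{"name": n, "description": t["description"]}
                            for n, t in TOOLS.items()]}
    elif request["method"] == "tools/call":
        # Expected shape: {"name": "list_resource_groups", "arguments": {"subscription": "contoso"}}
        params = request["params"]
        result = {"content": TOOLS[params["name"]]["handler"](**params["arguments"])}
    else:
        result = {"error": f"unknown method {request['method']}"}
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}

if __name__ == "__main__":
    # One JSON request per line on stdin; responses go to stdout.
    for line in sys.stdin:
        print(json.dumps(handle(json.loads(line))), flush=True)
```

A host application would spawn a process like this, call tools/list to discover capabilities, and then route the model's tool calls to tools/call, which is the "live data instead of pre-trained knowledge" shift the episode highlights.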
Prof. David Krakauer, President of the Santa Fe Institute, argues that we are fundamentally confusing knowledge with intelligence, especially when it comes to AI. He defines true intelligence as the ability to do more with less—to solve novel problems with limited information. This is contrasted with current AI models, which he describes as doing less with more; they require astounding amounts of data to perform tasks that don't necessarily demonstrate true understanding or adaptation. He humorously calls this "really shit programming". David challenges the popular notion of "emergence" in Large Language Models (LLMs). He explains that the tech community's definition—seeing a sudden jump in a model's ability to perform a task like three-digit math—is superficial. True emergence, from a complex systems perspective, involves a fundamental change in the system's internal organization, allowing for a new, simpler, and more powerful level of description. He gives the example of moving from tracking individual water molecules to using the elegant laws of fluid dynamics. For LLMs to be truly emergent, we'd need to see them develop new, efficient internal representations, not just get better at memorizing patterns as they scale. Drawing on his background in evolutionary theory, David explains that systems like brains, and later, culture, evolved to process information that changes too quickly for genetic evolution to keep up. He calls culture "evolution at light speed" because it allows us to store our accumulated knowledge externally (in books, tools, etc.) and build upon it without corrupting the original. This leads to his concept of "exbodiment," where we outsource our cognitive load to the world through things like maps, abacuses, or even language itself. We create these external tools, internalize the skills they teach us, improve them, and create a feedback loop that enhances our collective intelligence. However, he ends with a warning. While technology has historically complemented our deficient abilities, modern AI presents a new danger. Because we have an evolutionary drive to conserve energy, we will inevitably outsource our thinking to AI if we can. He fears this is already leading to a "diminution and dilution" of human thought and creativity. Just as our muscles atrophy without use, he argues our brains will too, and we risk becoming mentally dependent on these systems. TOC: [00:00:00] Intelligence: Doing more with less; [00:02:10] Why brains evolved: The limits of evolution; [00:05:18] Culture as evolution at light speed; [00:08:11] True meaning of emergence: "More is Different"; [00:10:41] Why LLM capabilities are not true emergence; [00:15:10] What real emergence would look like in AI; [00:19:24] Symmetry breaking: Physics vs. Life; [00:23:30] Two types of emergence: Knowledge In vs. Out; [00:26:46] Causality, agency, and coarse-graining; [00:32:24] "Exbodiment": Outsourcing thought to objects; [00:35:05] Collective intelligence & the boundary of the mind; [00:39:45] Mortal vs. Immortal forms of computation; [00:42:13] The risk of AI: Atrophy of human thought. David Krakauer, President and William H. Miller Professor of Complex Systems: https://www.santafe.edu/people/profile/david-krakauer REFS: Large Language Models and Emergence: A Complex Systems Perspective (David C. Krakauer, John W. Krakauer, Melanie Mitchell): https://arxiv.org/abs/2506.11135 Filmed at the Diverse Intelligences Summer Institute: https://disi.org/
The current "vibe check" for AI is low (2/10), but there's significant interest in developing AI traders, despite challenges with human inaction on AI-generated insights.A Reddit user's experiment showed ChatGPT managing a stock portfolio and outperforming the market, leading to predictions of AI-driven market crashes, AI-optimized press releases, and the emergence of AI investment clubs and "prompt engineers" for financial advice. @gregisenbergDay trading is expected to be dominated by AI within 18 months, with retail investors likely having AI trading assistants by 2027, and new financial products like "winning prompts" and social networks for AI trading strategies emerging.The rise of AI in finance will prompt new SEC regulations for "algorithmic investment advice" and could lead to "AI flash crashes" and "algorithmic insider trading" scandals.Apple is anticipated to acquire Anthropic, as Apple needs a stronger Large Language Model (LLM) than its own.OpenAI is reportedly launching GPT-5 soon, featuring a massive token window, multi-context processing, dynamic reasoning, and integrated tools like Code Interpreter, while ChatGPT is introducing a "study mode" for step-by-step problem-solving. @Diesol @radshaan
In this AI research paper reading, we dive into "A Watermark for Large Language Models" with the paper's author John Kirchenbauer. This paper is a timely exploration of techniques for embedding invisible but detectable signals in AI-generated text. These watermarking strategies aim to help mitigate misuse of large language models by making machine-generated content distinguishable from human writing, without sacrificing text quality or requiring access to the model's internals. Learn more about the A Watermark for Large Language Models paper. Learn more about agent observability, LLM observability, and AI evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.
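For listeners who want to see the core mechanism before reading the paper, here is a minimal sketch of the soft watermarking idea as the paper describes it: seed a random generator from the previous token, split the vocabulary into a "green" and "red" list, nudge the logits of green tokens during generation, and later detect the watermark by counting green tokens (a one-proportion z-test). The vocabulary size, bias delta, and green-list fraction below are illustrative choices, not the paper's exact settings.

```python
# Minimal sketch of green-list watermarking in the style of Kirchenbauer et al.;
# parameters are illustrative, not the paper's defaults.
import hashlib
import math
import random

VOCAB_SIZE = 50_000
GAMMA = 0.5   # fraction of the vocabulary placed on the green list at each step
DELTA = 2.0   # logit bias added to green tokens during generation

def green_list(prev_token: int) -> set[int]:
    # Seed a PRNG with a hash of the previous token so the partition is reproducible
    # by anyone who knows the scheme, without access to model weights.
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    ids = list(range(VOCAB_SIZE))
    rng.shuffle(ids)
    return set(ids[: int(GAMMA * VOCAB_SIZE)])

def bias_logits(logits: list[float], prev_token: int) -> list[float]:
    # Applied before sampling: green tokens become slightly more likely.
    greens = green_list(prev_token)
    return [x + DELTA if i in greens else x for i, x in enumerate(logits)]

def detect(tokens: list[int]) -> float:
    # z-score for "more green tokens than chance"; a large z suggests watermarked text.
    n = len(tokens) - 1
    if n <= 0:
        return 0.0
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    return (hits - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))
```

The detection side needs only the token sequence and the hashing scheme, which is why the approach works without access to the model's internals.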
a16z General Partners Erik Torenberg and Martin Casado sit down with technologist and investor Balaji Srinivasan to explore how the metaphors we use to describe AI—whether as god, swarm, tool, or oracle—reveal as much about us as they do about the technology itself. Balaji, best known for his work in crypto and network states, also brings a deep background in machine learning. Together, the trio unpacks the evolution of AI discourse, from monotheistic visions of a singular AGI to polytheistic interpretations shaped by culture and context. They debate the practical and philosophical: the current limits of AI, why prompts function like high-dimensional programs, and what it really takes to “close the loop” in AI reasoning. This is a systems-level conversation on belief, control, infrastructure, and the architectures that might govern future societies. Timecodes: 0:00 Introduction: The Polytheistic AGI Framework; 1:46 Personal Journeys in AI and Crypto; 3:18 Monotheistic vs. Polytheistic AGI: Competing Paradigms; 8:20 The Limits of AI: Chaos, Turbulence, and Predictability; 9:29 Platonic Ideals and Real-World Systems; 14:10 Decentralized AI and the End of Fast Takeoff; 14:34 Surprises in AI Progress: Language, Locomotion, and Double Descent; 25:45 Prompting, Verification, and the Age of the Phrase; 29:44 AI, Crypto, and the Grounding Problem; 34:26 Visual vs. Verbal: Where AI Excels and Struggles; 37:19 The Challenge of Markets, Politics, and Adversarial Systems; 40:11 Amplified Intelligence: AI as a Force Multiplier; 43:37 The Polytheistic Counterargument: Convergence and Specialization; 48:17 AI's Impact on Jobs: Specialists, Generalists, and the Future of Work; 57:36 Security, Drones, and Digital Borders; 1:03:41 AI, Power, and the Balance of Control; 1:06:33 The Coming Anti-AI Backlash; 1:09:10 Global Implications: Labor, Politics, and the Future. Resources: Find Balaji on X: https://x.com/balajis | Find Martin on X: https://x.com/martin_casado Stay Updated: Let us know what you think: https://ratethispodcast.com/a16z | Find a16z on Twitter: https://twitter.com/a16z | Find a16z on LinkedIn: https://www.linkedin.com/company/a16z | Subscribe on your favorite podcast app: https://a16z.simplecast.com/ | Follow our host: https://x.com/eriktorenberg Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Send us a text. Today's episode introduces Model Context Protocol (MCP), an open standard designed to enable Artificial Intelligence (AI) applications, particularly Large Language Models (LLMs), to seamlessly interact with third-party tools and data sources. It explains MCP's architecture, including hosts, clients, servers, and external tools, and highlights its benefits such as eliminating knowledge cut-offs, reducing hallucinations, and enhancing AI's capability to perform real-world actions. The discussion also touches upon the growing adoption of MCP servers by cybersecurity vendors to facilitate natural language interaction with security platforms, while acknowledging the potential security implications of this new architectural layer. Support the show. Google Drive link for Podcast content: https://drive.google.com/drive/folders/10vmcQ-oqqFDPojywrfYousPcqhvisnko My Profile on LinkedIn: https://www.linkedin.com/in/prashantmishra11/ YouTube Channel: https://www.youtube.com/@TheCybermanShow Twitter handle: https://twitter.com/prashant_cyber PS: The views are my own and don't reflect any views from my employer.
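To make the host/client/server split and the security caveat more tangible, here is a small, hedged sketch (our illustration, not anything from the episode or a real SDK) of a host-side gate that checks an MCP-style tool call against an allowlist and asks for human confirmation before forwarding it to a server.

```python
# Illustrative host-side guardrail for MCP-style tool calls (not a real SDK).
# The host owns policy: which tools an LLM may invoke, and when a human must approve.
ALLOWED_TOOLS = {"search_alerts", "get_incident"}   # read-only security-platform tools
NEEDS_CONFIRMATION = {"close_incident"}             # state-changing actions

def gate_tool_call(call: dict, confirm=input) -> dict:
    """Return the call to forward to the MCP server, or a refusal the LLM can see."""
    name = call.get("name", "")
    if name in ALLOWED_TOOLS:
        return {"forward": True, "call": call}
    if name in NEEDS_CONFIRMATION:
        answer = confirm(f"LLM wants to run '{name}' with {call.get('arguments')}. Allow? [y/N] ")
        if answer.strip().lower() == "y":
            return {"forward": True, "call": call}
    return {"forward": False, "error": f"Tool '{name}' is not permitted by host policy."}

# Example: an over-eager model asks to close an incident; the host demands confirmation.
print(gate_tool_call({"name": "close_incident", "arguments": {"id": 42}},
                     confirm=lambda _: "n"))
```

The point is simply that the new architectural layer the episode warns about can carry its own controls: the host decides what crosses the boundary, not the model.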
The belief is spreading like wildfire: enter a few specific prompts into ChatGPT and you can ‘unlock' the ‘sentience' that is waiting to reveal the secrets of the Ancients, or the Aliens, or of God Himself. Not only is this a gross (and dangerous) over-estimation of what a Large Language Model is, it also misses the point about what constitutes a genuine, deep and meaningful relationship.
When you ask ChatGPT or Gemini a question about politics, whose opinions are you really hearing?In this episode, we dive into a provocative new study from political scientist Justin Grimmer and his colleagues, which finds that nearly every major large language model—from ChatGPT to Grok—is perceived by Americans as having a left-leaning bias. But why is that? Is it the training data? The guardrails? The Silicon Valley engineers? Or something deeper about the culture of the internet itself?The hosts grapple with everything from “Mecha Hitler” incidents on Grok to the way terms like “unhoused” sneak into AI-generated text—and what that might mean for students, voters, and future regulation. Should the government step in to ensure “political neutrality”? Will AI reshape how people learn about history or policy? Or are we just projecting our own echo chambers onto machines?
How do you bring AI agents to your organization? Richard chats with April Dunnam about her experiences with Copilot Studio, Microsoft's tool for building various agents for your organization. April discusses the multiple approaches available today for utilizing generative AI and the benefits of leveraging template-driven and low-code solutions to capitalize on the latest features in agentic AI. The conversation also delves into the relationship between M365 Copilot and Copilot Studio for creating extensions and focused functionality. There's a significant amount of power here if you take the time to learn the tools! Links: Microsoft Copilot Studio; Build your First Copilot Studio Agent in Minutes; Playwright MCP; Testing Copilot Studio Agents; Agent Flows; Dataverse MCP; April's Copilot Estimator. Recorded July 8, 2025
Standing Out in a Sea of Sameness - Selling with Relevance, Integrity, and AI | Key Themes and Takeaways
You have probably seen recent headlines that Microsoft has developed an AI model that is 4x more accurate than humans at difficult diagnoses. It's been published everywhere: AI is 80% accurate compared to a measly 20% human rate, and AI was cheaper too! Does this signal the end of the human physician? Is the title nothing more than clickbait? Or is the truth somewhere in between? Join Behind the Knife fellow Ayman Ali and Dr. Adam Rodman from Beth Israel Deaconess/Harvard Medical School to discuss what this study means for our future. Studies: Sequential Diagnosis with Large Language Models: https://arxiv.org/abs/2506.22405v1 METR study: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/ Hosts: Ayman Ali, MD Ayman Ali is a Behind the Knife fellow and a general surgery PGY-4 at Duke Hospital, currently in his academic development time, where he focuses on applications of data science and artificial intelligence to surgery. Adam Rodman, MD, MPH, FACP, @AdamRodmanMD Dr. Rodman is an Assistant Professor and a practicing hospitalist at Beth Israel Deaconess Medical Center. He's the Beth Israel Deaconess Medical Center Director of AI Programs. In addition, he's the co-director of the Beth Israel Deaconess Medical Center iMED Initiative. Podcast Link: http://bedside-rounds.org/ Please visit https://behindtheknife.org to access other high-yield surgical education podcasts, videos and more. If you liked this episode, check out our recent episodes here: https://app.behindtheknife.org/listen