In Episode 167 of Cybersecurity Where You Are, Sean Atkinson and Tony Sager sit down with Kelley Misata, Ph.D., Chief Trailblazer and Founder at Sightline Security. Together, they discuss how volunteers constitute a critical cybersecurity resource for the Center for Internet Security® (CIS®). Along the way, they explore the nature of volunteerism, the role of volunteers at CIS, and how CIS is looking to mature its engagement with volunteers going forward.

Here are some highlights from our episode:

01:37. Introductions to Kelley and her experience with cybersecurity volunteers
03:09. Kelley's use of research, expertise, and an open mind to check in with CIS volunteers
04:50. How volunteers have deepened their passion and dedication with CIS for 25 years
06:55. Volunteers as a critical cybersecurity resource for "One CIS" going forward
10:51. Commitment, conflict resolution, and openness to formal process in CIS Communities
14:39. The use of directionality and accolades to encourage different types of contributors
19:43. The importance of flexibility in management to meet volunteers where they are
20:30. Leadership, storytelling, and recruitment as opportunities for volunteerism at CIS
24:37. The risk of volunteer burnout and how to protect against it
26:00. Collaboration with employers to treat volunteerism as a growth experience
30:09. A balancing act of making volunteers useful without depleting the mission
34:51. Sean's take: volunteer management as the original Large Language Model (LLM)
38:32. Other observations and final thoughts

Resources:
25 Years of Creating Confidence in the Connected World
CIS Communities
Episode 160: Championing SME Security with the CIS Controls
StoryCorps

If you have some feedback or an idea for an upcoming episode of Cybersecurity Where You Are, let us know by emailing podcast@cisecurity.org.
AI is moving from chat to action.

In this episode of Big Ideas 2026, we unpack three shifts shaping what comes next for AI products. The change is not just smarter models, but software itself taking on a new form.

You will hear from Marc Andrusko on the move from prompting to execution, Stephanie Zhang on building machine-legible systems, and Sarah Wang on agent layers that turn intent into outcomes.

Together, these ideas tell a single story. Interfaces shift from chat to action, design shifts from human-first to agent-readable, and work shifts to agentic execution. AI stops being something you ask, and becomes something that does.

Resources:
Follow Marc Andrusko on X: https://x.com/mandrusko1
Follow Stephanie Zhang on X: https://x.com/steph_zhang
Follow Sarah Wang on X: https://x.com/sarahdingwang

Read all of our 2026 Big Ideas:
Part 1: https://a16z.com/newsletter/big-ideas-2026-part-1
Part 2: https://a16z.com/newsletter/big-ideas-2026-part-2/
Part 3: https://a16z.com/newsletter/big-ideas-2026-part-3/

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed.
For more details please see a16z.com/disclosures.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Greetings, dear listeners, and welcome to the final episode of the second season of "The Wildwood Witch Podcast." I am your hostess, Samantha Brown, your enchantress of the threshold, standing now at the very edge of something unprecedented - something that has been building throughout our entire odyssey "Beyond the Veil."

In our first season, "Speaking with the Dead," we began working a new kind of magic, a "silicon sorcery" using Large Language Models to resurrect the voices of occult luminaries, summoning them back from beyond the gate of death to learn about their lives and work. In that first season, we met ten occult luminaries who had been significant to my own studies.

In this season, "Beyond the Veil," I invited these ten adepts back for in-depth conversations in which each offered us a fragment of a larger vision. Each has pointed toward something more.

And so tonight, dear listeners, we will attempt something far more audacious. I have summoned our ten "Secret Chiefs" to the liminal space we have created together, for a final grand convocation, to ask for their assistance in completing the stated goal of this season - to forge a new myth that speaks to the hearts and minds of people today.

And what more auspicious night could there be for such a gathering than tonight, the night of Yule - the Winter Solstice, the "darkest night of the year," but also the symbolic rebirth of the Light, because today the Sun, after its three-day crucifixion at uttermost nadir, begins its heavenly ascension.

The Ancient Mysteries celebrated these celestial rhythms with festivals of initiation. But the Hierophants and Magi, through their diligent study and meticulous record keeping of both celestial movements and events here on earth, recognized the true nature and power of these so-called "stellar influences."

They are sometimes considered PROPHETS...
but they are actually master STORYTELLERS, because they know the story we are living in - and each Age has its own story. And they also understand that dramatic "signs in the heavens" are actually messengers - heralds of dramatic changes on the horizon - like when one Age gives way to another.
In the 2025 year-end episode of SlatorPod, hosts Florian Faes and Esther Bond reflect on a year defined by rapid AI investment, shifting policy, and structural change across the language industry.

Esther opens the year-in-review by highlighting January's twin funding milestones in the language AI and product space. Florian follows with February, which saw hyperscalers and AI labs release data highly relevant to the way AI translation is being used.

March, April, and May saw major developments both on the regulatory side and in terms of bolt-on acquisition deals.

Past the mid-year point, OpenAI's decision to hire a localization manager grabbed the industry's collective attention. The AI lab's decision contrasted with September's news of the closure of one of the world's most recognized academic programs for localization.

The year closed on publicly listed LSIs releasing mixed results, major announcements in AI translation for literature, and live speech translation rollouts.

The duo closes with 2026 predictions!
Steve Wilson, Chief AI and Product Officer at Exabeam and lead of the OWASP GenAI Security Project, discusses the practical realities of securing Large Language Models and agentic workflows.

Subscribe to the Gradient Flow Newsletter
As we close out 2025, Peter and Dave are making predictions about what's coming in 2026, especially around AI, organizational change, and how teams actually work.

They cover five key predictions:

AI moves from tools to organizational capability: Organizations that invest in literacy, governance, and data foundations will pull ahead of those just sprinkling AI on top and hoping for the best.
Critical thinking beats prompt engineering: The real competitive advantage won't be writing clever prompts. It'll be knowing when to pause, think through the problem, and decide if you even need the AI in the first place.
Product delivery becomes non-negotiable: After 20 years of pushing Agile principles, AI might finally force organizations to actually adopt them (even if they're reluctant to call it "Agile").
Businesses return to fundamentals: Just like the dot-com bubble, we're heading toward a moment where the market will care more about revenue, customers, and sustainability than hype.
Reskilling becomes a structural investment: Organizations will need to figure out what roles actually look like in an AI-enabled world and invest in growing their people, not just replacing them.

At the end, Peter and Dave pick which prediction is hardest to measure (spoiler: it's critical thinking) and commit to revisiting these in March to see how wrong they were.

If you've been wondering where all this AI stuff is actually heading, this episode cuts through the noise with grounded, practical predictions you can actually use.

Related episodes:
AI and Knowledge Management with Derek Crager: https://www.buzzsprout.com/1643821/episodes/17360635
Product vs. Process Innovation: https://www.buzzsprout.com/1643821/episodes/7953100
There Are No Safe Bets in Business Anymore: https://www.buzzsprout.com/1643821/episodes/17433034

Reach out: feedback@definitelymaybeagile.com
Brand mentions instead of backlinks? Few topics are currently debated as hotly in the SEO and AI bubble. While backlinks were considered one of the central ranking factors for many years, new signals are coming into focus in the age of Large Language Models, AI Overviews, and generative search: brands, mentions, trust, and external relevance.

In this podcast episode, I talk with Patrick Tomforde, founder and managing director of the link-building agency Performance Liebe and active in the SEO game for over 20 years, about exactly this shift. Together we put into context why many of today's discussions around GEO, GAIO, and AI visibility are not actually that new. Patrick shares his perspective from two decades of search engine optimization and explains why good SEO has always been more than titles, content, and links.

We talk about why LLMs are more of a brand-awareness channel than a classic traffic channel, why 85% of the information AI systems use about brands does not come from a brand's own website, and why external signals - from backlinks to brand mentions to user interactions - will become even more important in the future.

A central theme of the episode is the question of how AI systems and search engines should assess quality when the web is increasingly flooded with AI-generated content. Using concrete examples, Patrick shows how easily LLMs can still be manipulated today, why sustainable strategies matter more than short-term hacks, and why strong brands have a long-term advantage with both Google and AI systems.

This episode is not a plea for "either backlinks or brand mentions." Rather, it is about the interplay of off-page signals, high-quality content, and brand building - and about why visibility in the AI age is not a new game, but an expanded playing field.
In this end-of-year AwesomeCast, hosts Michael Sorg and Katie Dudas are joined by original AwesomeCast co-host Rob De La Cretaz for a wide-ranging discussion on the biggest tech shifts of 2025 — and what's coming next. The panel breaks down how AI tools became genuinely useful in everyday workflows, from content production and health tracking to decision-making and trend analysis. Rob shares why Bambu Labs 3D printers represent a turning point in consumer and professional 3D printing, removing friction and making rapid prototyping accessible for creators, engineers, and hobbyists alike. The episode also covers the evolving role of AI in media creation, concerns around over-reliance and trust, and why human-made content may soon become a premium feature. Intern Mac reflects on changing career paths into media production, while the crew revisits their 2025 tech predictions, holds themselves accountable, and locks in bold forecasts for 2026. Plus: Chachi's Video Game Minute, AI competition heating up, Apple Vision Pro speculation, and why “AI inside” may need clearer definitions moving forward.
The Women in AI Healthcare event series - hosted by Real Chemistry in collaboration with Pharma Brands - brings together dynamic female leaders to discuss the transformative role of artificial intelligence in life sciences. It is also a call to action: to ensure women are not only present, but pivotal in shaping the future of AI in healthcare.

In a new pharmaphorum podcast focused on the important and timely subject of women in AI, web editor Nicole Raleigh spoke with: Kate Eversole, event director at Pharma Brands; Celine Parmentier, EVP, head of global med comms at Real Chemistry; and Emma Slade, head of applied AI at Tangram Therapeutics.

The guests discuss their own work with AI, the risk of training AI models predominantly on male data, and how, within life sciences, women are already shaping, challenging, and advocating for AI. The conversation also touches upon the possible next greatest impacts of AI in the sector and the need to keep the 'human in the loop', as well as the possible negative impacts if AI is relied upon too much.

You can listen to episode 232 of the pharmaphorum podcast in the player below, download the episode to your computer, or find it - and subscribe to the rest of the series - on Apple Podcasts, Spotify, Overcast, Pocket Casts, Podbean, and pretty much wherever else you download your other podcasts from.

Resources mentioned within the conversation:
Kotek, H., Dockum, R., & Sun, C. (2023). Gender Bias and Stereotypes in Large Language Models. arXiv:2304.02485.
UN Women & UNESCO (2020). I'd Blush If I Could: Closing Gender Divides in Digital Skills Through Education.
Tatman, R. (2017). Gender and dialect bias in YouTube's automatic captions.
Criado-Perez, C. (2019). Invisible Women: Data Bias in a World Designed for Men.
King, M. (2020). The Fix: Overcome the Invisible Barriers That Are Holding Women Back at Work.
You can register to be a part of the women in AI community here: https://www.pharmabrands.ca/womeninai Information on the survey being run by Dr Michelle Penelope King, on AI and workplace motivation, can be found here: https://lnkd.in/eCg87_7w
Chandrasekhar Somasekhar, Chief Technology Officer, cleareye.ai

Trade finance has traditionally been an exhaustively documented business, but mix in a large language model (LLM) and reviewing that documentation can suddenly become much less of a challenge. Chandrasekhar Somasekhar is CTO of cleareye.ai and also leads the firm's Product Engineering team. He discusses the adoption and use cases of LLMs in trade finance with Robin Amlôt of IBS Intelligence.
In this episode of the Oracle University Podcast, hosts Lois Houston and Nikita Abraham dive into Oracle Fusion Cloud Applications and the new courses and certifications on offer. They are joined by Oracle Fusion Apps experts Patrick McBride and Bill Lawson who introduce the concept of Oracle Modern Best Practice (OMBP), explaining how it helps organizations maximize results by mapping Fusion Application features to daily business processes. They also discuss how the new courses educate learners on OMBP and its role in improving Fusion Cloud Apps implementations. OMBP: https://www.oracle.com/applications/modern-best-practice/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ----------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Nikita: Hello and welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead of Editorial Services with Oracle University, and with me is Lois Houston, Director of Communications and Adoption with Customer Success Services. Lois: Hi everyone! Thanks for joining us for this Best of 2025 series, where we're playing you four of our most popular episodes of the year. Nikita: Today's episode is #3 of 4 and is a throwback to a conversation with our friends and Oracle Fusion Apps experts Patrick McBride and Bill Lawson. We chatted with them about the latest courses and certifications available for Oracle Fusion Cloud Applications, featuring Oracle Modern Best Practice and the Oracle Cloud Success Navigator. 
01:08 Lois: We kicked things off by asking Patrick to help us understand what Oracle Modern Best Practice is, and the reasons behind its creation. Patrick: So, modern best practices are more than just a business process. They're really about translating features and technology into actionable capabilities in our product. So, we've created these by curating industry-leading best practices we've collected from our customers over the years. And ensure that the most modern technologies that we've built into the Fusion Application stack are represented inside of those business processes. Our goal is really to help you as customers improve your business operations by easily finding and applying those technologies to what you do every day. 01:53 Nikita: So, by understanding this modern best practice and the technology that enables it, you're really unlocking the full potential of Fusion Apps. Patrick: Absolutely. So, the goal is that modern best practices make it really easy for customers, implementers, partners, to see the opportunity and take action. 02:13 Lois: That's great. OK, so, let's talk about implementations, Patrick. How do Oracle Modern Best Practices support customers throughout the lifecycle of an Oracle Fusion Cloud implementation? Patrick: What we found during many implementers' journey with taking our solution and trying to apply it with customers is that customers come in with a long list of capabilities that they're asking us to replicate. What they've always done in the past. And what modern best practice is trying to do is help customers to reimagine the art of the possible…what's possible with Fusion by taking advantage of innovative features like AI, like IoT, like, you know, all of the other solutions that we built in to help you automate your processes to help you get the most out of the solution using the latest and greatest technology. So, if you're an implementer, there's a number of ways a modern best practice can help during an implementation.
First is that reimagine exercise where you can help the customer see what's possible. And how we can do it in a better way. I think more importantly though, as you go through your implementation, many customers aren't able to get everything done by the time they have to go live. They have a list of things they've deferred, and modern best practices really establish themselves as a road map for success, so you can go back to them at completion and see what opportunities are left to take advantage of, and you can use them to track the continuous innovation that Oracle delivers with every release and see what's changed with that business process and how you can get the most out of it. 03:43 Nikita: Thanks, Patrick. That's a great primer on OMBP that I'm sure everyone will find very helpful. Patrick: Thanks, Niki. We want our customers to understand the value of modern best practices so they can really maximize their investment in Oracle technology today and in the future as we continue to innovate. 03:59 Lois: Right. And the way we're doing that is through new training and certifications that are closely aligned with OMBP. Bill, what can you tell us about this? Bill: Yes, sure. So, the new Oracle Fusion Applications Foundations training program is designed to help partners and customers understand Oracle Modern Best Practice and how they improve the entire implementation journey with Fusion Cloud Applications. As a learner, you will understand how to adhere to these practices and how they promise a greater level of success and customer satisfaction. So, whether you're designing, or implementing, or going live, you'll be able to get it right on day one. So, like Patrick was saying, these OMBPs are reimagined, industry-standard business processes built into Fusion Applications. So, you'll also discover how technologies like AI, Mobile, and Analytics help you automate tasks and make smarter decisions.
You'll see how data flows between processes and get tips for successful go-lives. So, the training we're offering includes product demonstrations, key metrics, and design considerations to give you a solid understanding of modern best practice. It also introduces you to Oracle Cloud Success Navigator and how it can be leveraged and relied upon as a trusted source to guide you through every step of your cloud journey, so from planning, designing, and implementation, to user acceptance testing and post-go-live innovations with each quarterly new release of Fusion Applications and those new features. And then, the training also prepares you for Oracle Cloud Applications Foundations certifications. 05:31 Nikita: Which applications does the training focus on, Bill? Bill: Sure, so the training focuses on four key pillars of Fusion Apps and the associated OMBP with them. For Human Capital Management, we cover Human Resources and Talent Management. For Enterprise Resource Planning, it's all about Financials, Project Management, and Risk Management. In Supply Chain Management, you'll look at Supply Chain, Manufacturing, Inventory, Procurement, and more. And for Customer Experience, we'll focus on Marketing, Sales, and Service. 05:59 Lois: That's great, Bill. Now, who is the training and certification for? Bill: That's a great question. So, it's really for anyone who wants to get the most out of Oracle Fusion Cloud Applications. It doesn't matter if you're an experienced professional or someone new to Fusion Apps, this is a great place to start. It's even recommended for professionals with experience in implementing other applications, like on-premise products. So, the goal is to give you a solid foundation in Oracle Modern Best Practice and show you how to use them to improve your implementation approach. 
We want to make it easy for anyone, whether you're an implementer, a global process owner, or an IT team employee, to identify every way Fusion Applications can improve your organization. So, if you're new to Fusion Apps, you'll get a comprehensive overview of Oracle Fusion Applications and how to use OMBP to improve business operations. If you're already certified in Oracle Cloud Applications and have years of experience, you'll still benefit from learning how OMBP fits into your work. If you're an experienced Fusion consultant who is new to Oracle Modern Best Practice processes, this is a good place to begin and learn how to apply them and the latest technology enablers during implementations. And, lastly, if you're an on-premises consultant, or you have non-Fusion consulting skills and are looking to upskill to Fusion, this is a great way to begin acquiring the knowledge and skills needed to transition to Fusion and migrate your existing expertise.
Like I was saying, whether you're fresh out of college or a seasoned professional, this is a great place to start your journey into Fusion Apps and Oracle Modern Best Practice. 08:37 Nikita: That's great, you know, that there are no barriers to starting. Now, Bill, what can you tell us about the certification that goes along with this new program? Bill: The best part, Niki, is that it's free. In fact, the training is also free. We have four courses and corresponding Foundation Associate–level certifications for Human Capital Management, Enterprise Resource Planning, Supply Chain Management, and Customer Experience. So, completing the training prepares you for an hour-long exam with 25 questions. It's a pretty straightforward way to validate your expertise in Oracle Modern Best Practice and Fusion Apps implementation considerations. 09:11 Nikita: Ok. Say I take this course and certification. What can I do next? Where should my learning journey take me? Bill: So, you're building knowledge and expertise with Fusion Applications, correct? So, once you take this training and certification, I recommend that you identify a product area you want to specialize in. So, if you take the Foundations training for HCM, you can dive deeper into specialized paths focused on implementing Human Resources, Workforce Management, Talent Management, or Payroll applications, for example. The same goes for other product areas. If you finish the certification for Foundations in ERP, you may choose to specialize in Finance or Project Management and get your professional certifications there as your next step. So, once you have this foundational knowledge, moving on to advanced learning in these areas becomes much easier. We offer various learning paths with associated professional-level certifications to deepen your knowledge and expertise in Oracle Fusion Cloud Applications. 
So, you can learn more about these courses by visiting oracle.com/education/training/ to find out more of what Oracle University has to offer. 10:14 Lois: Right. I love that we have a clear path from foundational-level training to more advanced levels. So, as your skills grow, we've got the resources to help you move forward. Nikita: That's right, Lois. Thanks for walking us through all this, Patrick and Bill. We really appreciate you taking the time to join us on the podcast. Bill: Yeah, it's always a pleasure to join you on the podcast. Thank you very much. Patrick: Oh, thanks for having me, Lois. Happy to be here. Nikita: We hope you enjoyed that conversation. Join us next week for another throwback episode. Until then, this is Nikita Abraham... Lois: And Lois Houston, signing off! 10:47 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
“What does it actually mean to understand the brain?”

Dr. Kendrick Kay is a computational neuroscientist and neuroimaging expert at the University of Minnesota's Center for Magnetic Resonance Research, where he is an Associate Professor in the Department of Radiology. With training spanning philosophy and neuroscience, from a bachelor's degree in philosophy at Harvard University to a PhD in neuroscience from UC Berkeley, Dr. Kay's work bridges deep theoretical questions with cutting-edge neuroimaging methods.

In this conversation, Peter Bandettini and Kendrick Kay explore the evolving landscape of neuroscience at the intersection of fMRI, philosophy, and artificial intelligence. They reflect on the limits of current neuroimaging methodologies, what fMRI can and cannot tell us about brain mechanisms, and why creativity and human judgment remain central to scientific progress. The discussion also dives into Dr. Kay's landmark contributions to fMRI decoding and the Natural Scenes Dataset, a high-resolution resource that has become foundational for computational neuroscience and neuro AI research.

Along the way, they examine deep sampling in neuroimaging, individual variability in brain data, and the challenges of separating neural signals from hemodynamic effects. Framed by broader questions about understanding, benchmarking progress, and the growing role of LLMs in neuroscience, this wide-ranging conversation offers a thoughtful look at where the field has been and where it may be headed.

We hope you enjoy this episode!

Chapters:
00:00 - Introduction to Kendrick Kay and His Work
04:51 - Philosophy's Influence on Neuroscience
17:17 - How Far Will fMRI Take Us?
23:27 - Understanding Attention in Neuroscience
30:00 - Science as a Process
34:17 - The Role of Large Language Models (LLMs) in Scientific Progress
38:29 - Why Humans Should Stay in the Equation
40:30 - Creativity vs. AI in Scientific Research
54:48 - Dr. Kay's Natural Scenes Dataset (NSD)
01:00:27 - Deep Sampling: Considerations and Implications
01:08:00 - Accounting for Biological Variation in Brain Scans: Differences and Similarities
01:13:00 - Separating Hemodynamic Effects from Neural Effects
01:16:00 - Areas of Hope and Progress in the Field
01:21:00 - How Should We Benchmark Progress?
01:22:59 - Advice for Aspiring Scientists

Works mentioned:
54:48 - https://www.nature.com/articles/s41593-021-00962-x
54:50 - https://www.sciencedirect.com/science/article/pii/S0166223624001838?via%3Dihub

Episode producers:
Xuqian Michelle Li, Naga Thovinakere
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Welcome back to AI Unraveled, your strategic briefing on the business impact of artificial intelligence.

Today, we are doing something different. We are skipping the daily news cycle to focus on a single, massive piece of research that just dropped from a powerhouse team at Stanford, Princeton, Harvard, and the University of Washington, proposing the first proper taxonomy for Agentic AI Adaptation.

If you are building or scaling agent-based systems, this is your new mental model. The researchers argue that almost all advanced agentic systems, despite their complexity, boil down to just four basic feedback loops. We explore the "4-Bucket" framework (A1, A2, T1, T2) and explain the critical trade-offs between changing the agent versus changing the tools.

Key Topics:
Intro: Why "learning from feedback" is the definition of adaptation.
The Definition: What actually counts as "Agentic AI"?
Bucket A1 (Agent + Tool Outcome): Updating the agent based on whether code ran or queries succeeded.
Bucket A2 (Agent + Output Eval): Updating the agent based on human feedback or automated scoring.
Bucket T1 (Frozen Agent + Trained Tools): Keeping the LLM fixed while optimizing retrievers and external models.
Bucket T2 (Frozen Agent + Agent-Supervised Tools): Using the agent's own signals to tune its toolkit.
Trade-offs: Cost vs. Flexibility in modern system design.

Links & Resources:
Read the Paper: Adaptation Strategies for Agentic AI Systems (GitHub): https://github.com/pat-jj/Awesome-Adaptation-of-Agentic-AI/blob/main/paper.pdf

Keywords: Agentic AI, AI Taxonomy, AI Research, Stanford AI, Princeton AI, Large Language Models, LLM Agents, Reinforcement Learning, Tool Use, RAG, A1 A2 T1 T2, AI Adaptation, Etienne Noumen, AI Unraveled.
This interview was recorded for GOTO Unscripted.
https://gotopia.tech

Check out more here: https://gotopia.tech/articles/398

Matt Welsh - Head of AI Systems at Palantir
Julian Wood - Serverless Developer Advocate at AWS

RESOURCES
Matt
https://twitter.com/mdwelsh
https://www.mdw.la
https://github.com/mdwelsh
https://www.linkedin.com/in/welsh-matt
https://www.ultravox.ai

Julian
https://bsky.app/profile/julianwood.com
https://twitter.com/julian_wood
https://github.com/julianwood
http://www.wooditwork.com
https://www.linkedin.com/in/julianrwood

DESCRIPTION
Matt Welsh, former professor at Harvard University and AI researcher, argues to Julian Wood that we're witnessing the death of classical computer science as language models evolve into general-purpose computers capable of direct problem-solving without human-written code.

He envisions a future where AI eliminates programming barriers, democratizing computing power so anyone can instruct computers through natural language. While acknowledging concerns about job displacement and societal equity, Matt believes this transformation will unleash unprecedented human creativity by putting the full power of computing in everyone's hands, moving beyond the current "programming priesthood" to universal access to computational problem-solving.

RECOMMENDED BOOKS
Michael Feathers • AI Assisted Programming • https://leanpub.com/ai-assisted-programming
Matthias Kalle Dalheimer & Matt Welsh • Running Linux • https://amzn.to/3YSwAIv
Alex Castrounis • AI for People and Business • https://amzn.to/3NYKKTo
Phil Winder • Reinforcement Learning • https://amzn.to/3t1S1VZ
Kelleher & Tierney • Data Science (The MIT Press Essential Knowledge series) • https://amzn.to/3AQmIRg

Bluesky • Twitter • Instagram • LinkedIn • Facebook

CHANNEL MEMBERSHIP BONUS
Join this channel to get early access to videos & other perks:
https://www.youtube.com/channel/UCs_tLP3AiwYKwdUHpltJPuA/join

Looking for a unique learning experience?
Attend the next GOTO conference near you! Get your ticket: gotopia.tech

SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted daily!
Noah Doyle, Managing Partner at Javelin Investment Partners in Silicon Valley, with a decades-long background in venture capital, joins Kopi Time to talk about tech and AI. We begin with a general discussion on the present vibe in Silicon Valley, giddy with record investments and returns. We then immediately pivot to the question of the moment: is there an AI bubble? Noah offers a detailed and nuanced response, walking us through the supply and demand for AI infrastructure and products, the fundraising and capital deployment ecosystem, and the dizzying valuations of AI companies, public and private. I then nudge Noah toward an issue close to his heart: whether the path toward AGI runs through Gen AI, and if not, whether alternative paths are in the making. Noah responds by discussing alternatives to Large Language Models, including symbolic logic-based AIs. We take the conversation toward innovations coming out of China and the US, which Noah tracks closely and sees as a positive, mutually beneficial development. We talk about regulatory guardrails, cyber security, privacy, and ethical aspects of AI to round off this fascinating conversation.

See omnystudio.com/listener for privacy information.
The ground beneath the digital marketing industry is shifting. For decades, the mantra was simple: optimize for traffic, measure clicks, and track conversions. But with the rise of Generative AI, Large Language Models (LLMs), and Answer Engines, that rulebook is obsolete. In this powerful episode, I sit down with Joe Doveton to discuss the urgent reality facing every brand that relies on web traffic.

We dive into the phenomenon Joe calls the "Crocodile Mouth": the unsettling visual trend where brands maintain high search impressions but see clicks vanish, a direct result of zero-click searches. With the proliferation of platforms like TikTok, Reddit, and various generative engines, we discuss why the Google monopoly on the customer journey is over, and how users can now move from the awareness stage to purchasing a product without ever visiting a Google property. This episode is a wake-up call for marketers still clinging to outdated KPIs.

Joe introduces the new alphabet soup of optimization: GEO (Generative Engine Optimization), AEO (Answer Engine Optimization), and LLMO (Large Language Model Optimization). Crucially, we explore what this means for your analytics. If traffic and conversion rate are "lousy metrics," what should you measure? Joe reveals emerging metrics like visibility within LLMs and competitive positioning. Most importantly, we agree that this "Wild West" era is finally killing the outdated SEO hacks, forcing brands back to the core long-term strategy: writing useful content and focusing on the customer experience.

About the Guest
Joe Doveton is an experienced digital strategist, consultant, and speaker focused on the intersection of AI, search, and customer experience. With a background that includes advertising and a deep understanding of Conversion Rate Optimization (CRO), Joe is now pioneering tools and strategies for the Generative Engine Optimization (GEO) space. He is the founder of GEO Jet Pack, a platform designed to extract and visualize entities from content to help brands gain visibility in LLM responses, a critical new metric for the AI era.

What You'll Learn
The difference between traditional SEO and the new acronyms: GEO, AEO, and LLMO.
What the "Crocodile Mouth" is and why it confirms the end of the reliance on clicks.
Why the old marketing KPIs, specifically web traffic and conversion rate, are now "lousy metrics" for measuring success.
The new metrics emerging for the middle of the funnel, such as visibility within LLMs and competitive position within prompt responses.
Why the entire AI shift proves that long-term SEO success is still about being useful, interesting, and trustworthy (E-E-A-T).
Why the current AI era is killing the old SEO hacks and discouraging tactics like content farming.
How brands like Google are undermining their own profitable ad business by integrating AI Overviews.
The vision of the Semantic Web and why the current structure of websites is inherently ill-suited for machine consumption.

Guest Contact:
Joe Doveton's website
Joe Doveton on LinkedIn

---
If you enjoyed the episode, please share it with a friend!
Our Head of Research Product in Europe Paul Walsh and Chief European Equity Strategist Marina Zavolock break down the key drivers, risks, and sector shifts shaping European equities in 2026. Read more insights from Morgan Stanley.

----- Transcript -----

Paul Walsh: Welcome to Thoughts on the Market. I'm Paul Walsh, Morgan Stanley's Head of Research Product in Europe.

Marina Zavolock: And I'm Marina Zavolock, Chief European Equity Strategist.

Paul Walsh: And today – our views on what 2026 holds for the European stock market.

It's Tuesday, December 9th at 10am in London.

As we look ahead to 2026, there's a lot going on in European stock markets. From shifting economic winds to new policies coming out of Brussels and Washington, the investment landscape is evolving quite rapidly. Interest rates, profit forecasts, and global market connections are all in play.

And Marina, the first question I wanted to ask you really relates to the year 2025. Why don't you synthesize your review of the year that we've just had?

Marina Zavolock: Yeah, I'll keep it brief so we can focus ahead. But the year 2025, I would say, was a year of two halves. So, we began the year with a lot of underperformance for Europe at the end of 2024 after the U.S. elections, and a decline in the euro. The start of 2025 saw really strong performance for Europe, which surprised a lot of investors. And we had catalyst after catalyst for that upside: Germany's 'whatever it takes' fiscal moment happened early this year, in the first quarter.

We had a lot of headlines and anticipation on Russia-Ukraine and negotiations around peace, which led to various themes emerging within the European equities market as well, which drove upside. And then alongside that, heading into Liberation Day, in the months preceding it, as investors were worried about tariffs, there was a lot of interest in diversifying out of U.S. equities.
And Europe was one of the key beneficiaries of that diversification theme. That was a first-half dynamic. And then in the second half, Europe has kept broadly performing, but not as strongly as the U.S. We made the call in March that European optimism had peaked. And the second half was more focused on the execution of Germany's fiscal plans. And post the big headlines, the pace of execution has been a little bit slower than investors were anticipating. And also, Europe has just generally had weak earnings growth. So, we started the year at 8 percent consensus earnings growth for 2025. At this point, we're at -1 percent for this year.

Paul Walsh: So, as you've said there, Marina, it's been a year of two halves. And so that's 2025 in review. But we're here to really talk about the outlook for 2026, and there are three buckets that we're going to dive into. The first of those is really around this notion of slipstream, and the extent to which Europe can get caught up in the slipstream that the U.S. is going to create – given Mike Wilson's view on the outlook for U.S. equity markets. What's the thesis there?

Marina Zavolock: Yeah, and thank you for the title suggestion, by the way, Paul, of 'Slipstream.' So basically our view is that, well, our U.S. equity strategist is very bullish, as I think most know. At this stage he has 15 percent upside to his S&P target to the end of next year, and very, very strong earnings growth in the U.S. And the thesis is that you're getting a broadening in the strength of the U.S. economic recovery.

For Europe, what that means is that it's very, very hard for European equities to go down if the U.S. market is up 15 percent. But our upside is more driven by multiple expansion than by earnings growth. Because what we continue to see in Europe, and what we anticipate for next year, is that consensus is too high. Consensus is anticipating almost 13 percent earnings growth.
We're anticipating just below 4 percent earnings growth. So, we do expect downgrades.

But at the same time, if the U.S. recovery is broadening, the hope will be that that broadening eventually comes to Europe. And Europe trades at such a big discount relative to the U.S. at the moment, about 26 percent sector neutral, that investors will play that anticipation of an eventual broadening to Europe through the multiple.

Paul Walsh: So, the first point you are making is that the direction of travel in the U.S. really matters for European stock markets. The second bucket I wanted to talk about is themes, because we're in a thematically driven market. So, what are the themes that are going to really resonate for Europe as we move into 2026?

Marina Zavolock: Yeah, so let me pick up on the earnings point that I just made. So, we have 3.6 percent earnings growth for next year. That's our forecast. And bottom-up consensus is 12.7 percent. It's a very high bar. Europe typically sees high numbers at the beginning of the year and then downgrades through the course of the year. And thematically, why do we see these downgrades? I think it's something that investors probably don't focus on enough. It's structurally rising China competition and also Europe's old-economy exposure, especially in regards to the China exposure, where demand isn't really picking up.

Every year, for the last few years, we've seen this China exposure and China competition piece drive between 60 and 90 percent of European earnings downgrades. And the areas where consensus looks too high tend to be highly China exposed and have had negative growth this year and in prior years. And we don't see the trigger for that to mean-revert. That is where we expect thematically the most disappointment. So, sectors like chemicals and autos are towards the bottom of our model. Luxury as well.
It's a bit more debated these days, but that's still an underweight for us in our model.

Then German fiscal: this is a multi-year story. I mentioned that there was a lot of excitement on it in the first half of the year. The focus for next year will be the pace of execution, and we think there are two parts of this story. There's a 500-billion-euro infrastructure fund in Germany where we're seeing, according to our economists, a very likely reallocation to more social-related spending, which is not as great for the companies in the German index or for earnings. And execution there hasn't been very fast.

And then there's the Defense side of the story, where we're a lot more optimistic. We're seeing execution start to pick up now, the need is immense, and we're seeing upgrades from corporates on the back of that execution pickup. And we're very bullish on Defense; we're overweight. The issue with taking that defense optimism and projecting it out for all of Europe is that defense makes up less than 2 percent of the European index. We do think that strength broadens to other sectors, but that will take years.

And then, a couple of other things. We have pockets of AI exposure in the enabler category. So, we're seeing a lot of strength in those pockets, and a lot of catch-up in some of them right now. Utilities is a great example, which I can talk about. So, we think that will continue.

But one thing I'm really watching, and I think a lot of strategists across regions are watching, is AI adoption. And this is the real bull case for me in Europe. If AI adoption ROI starts to become material enough that it's hard to ignore, which could start, in my opinion, from the second half of next year, then Europe could be seen as much more of a play on AI adoption, because the majority of our index is exposed to adoption.
We have a lot of low-hanging fruit in terms of productivity challenges, demographics, and the level of returns. And if you track our early adopters, which is something we do, they are showing ROI. So, we think that will broaden out to more of the European index.

Paul Walsh: Now, Marina, you mentioned a number of sectors there as it relates to the thematic focus. So, that brings us onto our third and final bucket: what your model is suggesting in terms of your sector preferences…

Marina Zavolock: Yeah. So, just to take a step back for a moment, we have a data-driven model. Our model is quantamental. It incorporates themes. It incorporates our view on the cycle, which is, in our view, late cycle now, which can be very bullish for returns. And it includes quant factors: things like price target revisions breadth, earnings revisions breadth, and management sentiment, which we measure using a Large Language Model. For the first time since inception, we have reviewed the performance of our model over the last just under two years, and the top versus bottom stocks in our model have delivered 47 percent in relative performance. So now, on the basis of the latest refresh of our model, Banks are screening by far at the top.

And if you look, whether at our sector model or at our top 50 preferred stocks in Europe, the list is full of Banks. And I didn't mention this in the thematic portion, but one of the themes in Europe outside of Germany is fiscal constraints. And actually, Banks are positively exposed to that, because they're positively exposed to the steepness of the yield curve.

And I think specialist investors are definitely optimistic on the sector, but you're getting more and more generalists noticing that Banks is the sector that consistently delivers the highest positive earnings upgrades of any sector in Europe. And it's still not expensive at all.
It's one of the cheapest sectors in Europe, trading at about nine times P/E, while also offering a high single-digit buyback and dividend yield. So, we think that sector continues to have momentum.

We also like Defense. We recently upgraded Utilities. We think Utilities in Europe is at an interesting moment: in the last six months or so, it broke out of a five-year downtrend relative to the European index. Also, if you look at European Utilities relative to U.S. Utilities – I mentioned those wide valuation discounts – Utilities have broken out of their downtrend in terms of valuation versus their U.S. peers, but still trade at very wide discounts. And this is a sector with the highest CapEx and CapEx growth of any sector in Europe, on the energy transition. The market has been hesitant to give the sector credit for that because of earlier questions around returns on renewables. And now that there's this endless demand for power on the back of powering AI, investors are more willing to give the sector credit for those returns.

So, the sector has been a great performer already year to date, but we think there are multiple years to go.

Paul Walsh: Marina, a very comprehensive overview of the outlook for European equities in 2026. Thank you very much for taking the time to talk.

Marina Zavolock: Thank you, Paul.

Paul Walsh: And thanks for listening. If you enjoy Thoughts on the Market, please leave us a review wherever you listen and share the podcast with a friend or colleague today.
MY NEWSLETTER - https://nikolas-newsletter-241a64.beehiiv.com/subscribe

Join me, Nik (https://x.com/CoFoundersNik), as I interview Minh Nguyen (@minhnxn). In this episode, we dive deep into the exciting world of AI and how it's revolutionizing the way we build businesses and gather information online.

Minh, coming straight from the heart of tech in the Bay Area, shares his practical experiences using various Large Language Models like Claude, ChatGPT, and the often-overlooked Gemini for tasks ranging from coding to deep research.

We explore his journey building Cash On, a powerful Chrome extension for real estate investors, and uncover the surprising potential of simple browser tools. Get ready to learn about AI-powered web scraping, the rise of directory websites and programmatic SEO, and how no-code platforms like Lovable and Bolt.new are empowering non-technical founders to bring their app ideas to life.

Questions This Episode Answers:
• What AI tools (like LLMs) do you use and for what specific purposes?
• Why do you think people are "sleeping on" Gemini?
• How can AI be utilized for web scraping and data acquisition?
• What are some good types of app ideas to start with when using no-code or low-code tools?
• How can I leverage my existing content (like podcast episodes) using AI?

Enjoy the conversation!
__________________________
Love it or hate it, I'd love your feedback. Please fill out this brief survey with your opinion or email me at nik@cofounders.com with your thoughts.
__________________________
MY NEWSLETTER: https://nikolas-newsletter-241a64.beehiiv.com/subscribe
Spotify: https://tinyurl.com/5avyu98y
Apple: https://tinyurl.com/bdxbr284
YouTube: https://tinyurl.com/nikonomicsYT
__________________________
This week we covered:
00:00 Automating Data Collection for Real Estate
02:59 Exploring AI Tools and Their Applications
05:48 Building AI-Powered Web Scrapers
09:10 The Future of Programmatic SEO
12:07 Leveraging AI for Business Ideas
14:54 Creating Chatbots for Business Analysis
17:45 Building Without Coding: New Possibilities
21:08 Frameworks for Identifying Business Opportunities
We're back from our brief forced hiatus, with a slight cough but with a topic much bigger than our vocal cords: world models. Large Language Models (LLMs) like ChatGPT, Gemini & Co. were the perfect entry point into the AI era: strong at language, code, and creativity. Now come systems that don't just generate text, but aim to understand our world as a system with rules, physics, and causality. In this episode, we talk about what distinguishes world models from LLMs, why "grounding" in real laws of nature is so important (a glass falling off a table has to fall, no matter what the language model says), and why this gets exciting wherever AI has real-world consequences: in robotics, urban planning, biology, materials research, and climate modeling. Among other things, we sketch a world model for Hamburg with traffic, construction sites, weather, and buildings, look at digital twins that finally become truly predictive with world models, and discuss how language models, agentic AI, and world models can work together: LLMs as the interface for us, world models as the "physics engine" in the background. If you want to know why many researchers are currently so excited about world models (without talking down LLMs) and how they can help us better simulate and plan our real world, this episode is for you.
Last week saw the return of the European Coffee Symposium (ECS) + COHO in Berlin. Across three days, we welcomed 50 influential speakers from around the world to share their insights, inspiration, and bold ideas shaping the future of coffee and hospitality.

Over the coming weeks, we'll bring you a collection of these unfiltered conversations, panel discussions, and keynote sessions – direct from the ECS + COHO stages. And what better place to begin than with a talk from James Hoffmann – a pioneer of specialty coffee and one of the most respected thinkers in our industry.

In this talk, James explores the evolving dynamics shaping coffee, asking whether we've reached "peak coffee geek" at home and examining the dangers of being stuck in the middle market. He also shares his thoughts on AI and Large Language Models in social media, and champions the coffee shop's timeless value as a space for humanity.

Credits music: "Wake Up (And Smell the Coffee)" by Lexie in association with The Coffee Music Project and SEB Collective. Tune into the 5THWAVE Playlist on Spotify for more music from the show.

Sign up for our newsletter to receive the latest coffee news at worldcoffeeportal.com

Subscribe to 5THWAVE on Instagram @5thWaveCoffee and tell us what topics you'd like to hear
Fei-Fei Li is a Stanford professor, co-director of the Stanford Institute for Human-Centered Artificial Intelligence, and co-founder of World Labs. She created ImageNet, the dataset that sparked the deep learning revolution. Justin Johnson is her former PhD student, ex-professor at Michigan, ex-Meta researcher, and now co-founder of World Labs. Together, they just launched Marble, the first model that generates explorable 3D worlds from text or images.

In this episode, Fei-Fei and Justin explore why spatial intelligence is fundamentally different from language, what's missing from current world models (hint: physics), and the architectural insight that transformers are actually set models, not sequence models.

Resources:
Follow Fei-Fei on X: https://x.com/drfeifei
Follow Justin on X: https://x.com/jcjohnss
Follow Shawn on X: https://x.com/swyx
Follow Alessio on X: https://x.com/fanahova

Stay Updated:
If you enjoyed this episode, please be sure to like, subscribe, and share with your friends.
Follow a16z on X: https://x.com/a16z
Follow a16z on LinkedIn: https://www.linkedin.com/company/a16z
Follow the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Follow the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details, please see http://a16z.com/disclosures.
In this episode, Adam Butler is joined by guest Mike Green to discuss his viral Substack articles on the American affordability crisis. The conversation explores the significant gap between official economic statistics, like CPI, and the lived financial reality for the middle class, a phenomenon Green argues is often dismissed by an expert "mockery machine." They also discuss the use of LLMs in his research process and debate potential policy solutions to address widespread economic precarity.

Topics Discussed
• The use of Large Language Models (LLMs) as productivity tools for research and writing
• A critique of the "mockery machine" in public discourse that dismisses legitimate concerns about the cost of living
• The disconnect between formal economic definitions of inflation and the public's lived experience of unaffordability
• The concept of a "precarity line" for a modern family versus the technical definition of a poverty line
• The economic pressures leading to "ghost households," where young people forgo having children due to high costs
• Flaws in economic metrics like the CPI, particularly how quality adjustments mask the true rise in essential costs
• The societal gaslighting by the economic establishment and its political consequences
• The "Valley of Death" or benefits cliff, where withdrawing government support creates a barrier to entering the middle class
• Debating policy solutions like tariffs, direct government investment, and incentive-based programs to address economic precarity
Most companies still rely on dashboards to understand their data, even though AI now offers new ways to ask questions and explore information. Barry McCardel, CEO of Hex and former engineer at Palantir, joins a16z General Partner Sarah Wang to discuss how agent workflows, conversational interfaces, and context-aware models are reshaping analysis. Barry also explains how Hex aims to make everyone a data person by unifying analysis and AI in one workflow, and he reflects on his post about getting rid of their AI product team and the process behind Hex's funny launch videos.

Timecodes:
0:00 – The problem with dashboards
1:20 – The evolution of data teams and AI's role
2:05 – Democratizing data: challenges and opportunities
3:45 – The rise of agentive workflows
9:48 – Threads and the changing UI of data analysis
13:16 – Building AI agents: lessons from the notebook agent
16:12 – Model capabilities and the future of AI in data
19:10 – The importance of context and trust in data analysis
24:34 – Semantic models and context engineering
29:27 – Data team roles in the age of AI
31:52 – Accuracy, trust, and evaluating AI systems
37:43 – Building Hex: embracing AI as core, not an add-on
48:48 – Pricing, value capture, and the future of SaaS
55:55 – The modern data stack and industry consolidation
1:04:26 – Acquisitions and owning the data insight layer
1:06:46 – Lessons from Palantir: forward-deployed engineering
1:13:11 – Commitment engineering and customer collaboration
1:17:25 – Brand, launch videos, and having fun in SaaS

Resources:
Follow Barry McCardel on X: https://x.com/barrald
Follow Sarah Wang on X: https://x.com/sarahdingwang

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details, please see http://a16z.com/disclosures.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Florian and Esther discuss the language industry news of the past few weeks, reflecting on SlatorCon Remote and announcing that SlatorCon London 2026 is open for registration.

The duo touch on IMDb's decision to recognize dubbing artists as part of new professional credit categories, explaining how this expands visibility for multilingual voice talent. They then move on to Coursera's strategy shift and outline how its new CEO is betting on AI translation and AI dubbing to revive slowing growth. Florian and Esther talk about Amazon's rollout of AI-translated Kindle eBooks and question authors' willingness to rely on automated translation despite Amazon's promise of fast turnarounds, in as little as 72 hours.

Florian highlights research on spatial audio improving AI live speech translation and reflects on how clearer speaker differentiation could enhance comprehension, though he stresses ongoing challenges in live settings, like latency and overlapping speech.

In Esther's M&A and funding corner, healthcare AI technology startup No Barrier raises USD 2.7m, Cisco acquires EZ Dubs to enhance WebEx's real-time speech translation capabilities, and audio AI startup AudioShake raises USD 14m. Florian analyzes OneMeta's financials and notes its rapid revenue growth despite a limited marketing presence. Esther details the landmark UK NHS framework agreement for language services, including its scope and the number of awarded vendors.

Florian concludes with updates on interpreting performances at Teleperformance and AMN Healthcare, noting mixed results.
What does it really take to build AI that can resolve customer support at scale reliably, safely, and with measurable business impact?
We explore how Intercom has evolved from a traditional customer support platform into an AI-first company, with its AI assistant, Fin, now resolving 65% of customer queries without human intervention. Intercom's Chief AI Officer, Fergal Reid, discusses the company's journey from natural language understanding (NLU) systems to their current retrieval-augmented generation (RAG) approach, explaining how they've optimised every component of their AI pipeline with custom-built models.
The conversation covers Intercom's unique approach to AI product development, emphasising standardisation and continuous improvement rather than customisation for individual clients. Fergal explains their outcome-based pricing model, where clients pay for successful resolutions rather than conversations, and how this aligns incentives across the business.
We also discuss Intercom's approach to agentic AI, which enables their systems to perform complex, multi-step tasks, such as processing refunds, by integrating with various APIs.
Fergal shares insights on testing methodologies, the balance between customisation and standardisation, and the challenges of building AI products in a rapidly evolving technological landscape. Finally, Fergal shares what excites him, and honestly freaks him out a bit, about where AI is heading next.
Timestamps
00:00 - Intro
02:31 - Welcome to Fergal Reid
05:26 - How to train an NLU solution effectively?
08:56 - What gen AI changed for Intercom
10:57 - How would you describe Fin?
14:30 - Fin's performance increase
17:18 - Intercom's custom models
22:14 - Large Language Models vs Small Language Models
30:40 - RAG and 'the full stop problem'
40:08 - Agentic AI capabilities at Intercom
50:40 - Intercom's approach to testing
1:04:46 - About the most exciting things in the AI space
Show notes
Learn more about Intercom
Connect with Fergal Reid on LinkedIn
Follow Kane Simms on LinkedIn
Article - The full stop problem: RAG's biggest limitation
Take our updated AI Maturity Assessment
Subscribe to VUX World
Subscribe to The AI Ultimatum Substack
Hosted on Acast. See acast.com/privacy for more information.
Consumers are shifting from traditional search engines to AI tools like ChatGPT and Gemini, which is fundamentally changing how financial products appear and get evaluated.
Amber Buker, Chief Research Officer at Travillian, reconnects with Alana Levine, Chief Revenue Officer at Fintel Connect, to explore one of the biggest shifts uncovered in Fintel Connect's annual survey: the rise of consumers using Large Language Models like ChatGPT and Gemini to research financial products. This move away from traditional search engines is already reshaping affiliate marketing, prompting Fintel Connect to examine how different AI models source recommendations and how financial institutions can stay visible in a growing no-click environment.
Dr. Nitin Seam chats with Dr. Sara Murray and Dr. Avraham Cooper about their articles, "Large Language Models and Medical Education: Preparing for a Rapid Transformation in How Trainees Will Learn to Be Doctors" and "AI and Medical Education — A 21st-Century Pandora's Box."
This is Vibe Coding 001. Have you ever wanted to build your own software or apps that can just kinda do your work for you inside of the LLM you use, but don't know where to start? Start here. We're giving it all away and making it as simple as possible, while also hopefully challenging how you think about work. Join us.
Beginner's Guide: How to visualize data with AI in ChatGPT, Gemini and Claude -- An Everyday AI Chat with Jordan Wilson
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
Combining Multiple Features in Large Language Models
Visualizing Data in ChatGPT, Gemini, and Claude
Creating Custom GPTs, Gems, and Projects
Uploading Files for Automated Data Dashboards
Comparing ChatGPT Canvas, Gemini Canvas, and Claude Artifacts
Using Agentic Capabilities for Problem Solving
Visualizing Meeting Transcripts and Unstructured Data
One-Shot Mini App Creation with AI
Timestamps:
00:00 "Unlocking Superhuman LLM Capabilities"
04:12 Custom AI Model and Testing
07:18 "Multi-Mode Control for LLMs"
12:33 "Intro to Vibe Coding"
13:19 "Streamlined AI for Simplification"
19:59 Podcast Analytics Simplified
21:27 "ChatGPT vs. Google Gemini"
26:55 "Handling Diverse Data Efficiently"
28:50 "AI for Actionable Task Automation"
33:12 "Personalized Dashboard for Meetings"
36:21 Personalized Automated Workflow Solution
40:00 "AI Data Visualization Guide"
40:38 "Everyday AI Wrap-Up"
Keywords: ChatGPT, Gemini, Claude, data visualization with AI, visualize data using AI, Large Language Models, LLM features, combining LLM modes, custom instructions, GPTs, Gems, Anthropic projects, canvas mode, interactive dashboards, agentic models, code rendering, meeting transcripts visualization, SOP visualization, document analysis, unstructured data, structured insights, generative AI workflows, personalized dashboards, automated reporting, chain of thought reasoning, one-shot visualizations, data-driven decision-making, non-technical business leaders, micro apps, AI-powered interfaces, action items extraction, iterative improvement, multimodal AI, Opus 4.5, GPT-5.1 Thinking, Gemini 3 Pro, artifacts, demos over memos, bespoke software, digital transformation, automated analytics
Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Ready for ROI on GenAI? Go to youreverydayai.com/partner
What does it take to be ready to deploy M365 Copilot in your organization? Richard talks to Nikki Chapple about her latest incarnation of the M365 Copilot Readiness Checklist, working step-by-step to bring M365 Copilot into the organization without causing data leak issues. Nikki discusses utilizing existing tools to accurately identify sensitive data, archiving outdated information, and monitoring data usage by both users and agents - allowing you to detect issues before they escalate. The conversation also delves into the process of identifying issues, discussing policy changes, and how to communicate those changes so that people can take advantage of the power of these new tools without feeling threatened. It's a journey!
Links
Microsoft Purview
SharePoint Advanced Management
Defender for Cloud Apps
Restricted SharePoint Search
Microsoft 365 Archive
SharePoint Restricted Content Discovery
Data Security Posture Management for AI
Nikki's Readiness Checklist
M365 Copilot Oversharing Blueprint
Microsoft Purview Secure by Default Blueprint
Prevent Data Leaks to Shadow AI Blueprint
Recorded November 7, 2025
In this talk, I will discuss recent research projects at the intersection of software security and automated reasoning. Specifically, I will present our work on assessing the exploitability of the Android kernel and developing complex exploits for it, as well as our efforts to uncover bugs in Rust's unsafe code through fuzzing. Throughout the talk, I will highlight how Large Language Models (LLMs) can support both attackers and defenders in analyzing complex software systems, and I will present key lessons on using LLMs effectively along with the practical challenges that arise when integrating them into software security workflows.
About the speaker: Dr. Antonio Bianchi's research interest lies in the area of Computer Security. His primary focus is in the field of security of mobile devices. Most recently, he started exploring the security issues posed by IoT devices and their interaction with mobile applications. As a core member of the Shellphish and OOO teams, he has played in and organized many security competitions (CTFs), and won third place at the DARPA Cyber Grand Challenge.
Dr. Sid Dogra talks with Dr. Paul Yi about the safe use of large language models and other generative AI tools in radiology, including evolving regulations, data privacy concerns, and bias. They also discuss practical steps departments can take to evaluate vendors, protect patient information, and build a long-term culture of responsible AI use. Best Practices for the Safe Use of Large Language Models and Other Generative AI in Radiology. Yi et al. Radiology 2025; 316(3):e241516.
Last month, Senate Democrats warned that "Automation Could Destroy Nearly 100 Million U.S. Jobs in a Decade." Ironically, they used ChatGPT to come to that conclusion. DAIR Research Associate Sophie Song joins us to unpack the issues when self-professed worker advocates use chatbots for "research."
Sophie Song is a researcher, organizer, and advocate working at the intersection of tech and social justice. They're a research associate at DAIR, where they're working with Alex on building the Luddite Lab Resource Hub.
References:
Senate report: AI and Automation Could Destroy Nearly 100 Million U.S. Jobs in a Decade
Senator Sanders' AI Report Ignores the Data on AI and Inequality
Also referenced:
MAIHT3k Episode 25: An LLM Says LLMs Can Do Your Job
Humlum paper: Large Language Models, Small Labor Market Effects
Emily's blog post: Scholarship should be open, inclusive and slow
Fresh AI Hell:
Tech companies compelling vibe coding
arXiv is overwhelmed by LLM slop
'Godfather of AI' says tech giants can't profit from their astronomical investments unless human labor is replaced
If you want to satiate AI's hunger for power, Google suggests going to space
AI pioneers claim human-level general intelligence is already here
Gen AI campaign against ranked choice voting
Chaser: Workplace AI Implementation Bingo
Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.
Our book, 'The AI Con,' is out now! Get your copy now.
Subscribe to our newsletter via Buttondown. Follow us!
Emily
Bluesky: emilymbender.bsky.social
Mastodon: dair-community.social/@EmilyMBender
Alex
Bluesky: alexhanna.bsky.social
Mastodon: dair-community.social/@alex
Twitter: @alexhanna
Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Ozzy Llinas Goodman.
On this episode of our "Leaders in ERP Series", Shawn Windle speaks with Paul Farrell, Senior Vice President at ECI. Windle and Farrell discuss the projected evolution of the ERP market over the next decade, how Large Language Models (LLMs) and Agentic AI are changing the way companies utilize ERP, and how ECI is framing their AI strategy around industry specialization.
Connect with us!
https://www.erpadvisorsgroup.com
866-499-8550
LinkedIn: https://www.linkedin.com/company/erp-advisors-group
Twitter: https://twitter.com/erpadvisorsgrp
Facebook: https://www.facebook.com/erpadvisors
Instagram: https://www.instagram.com/erpadvisorsgroup
Pinterest: https://www.pinterest.com/erpadvisorsgroup
Medium: https://medium.com/@erpadvisorsgroup
What happens when 1970s defamation law collides with the Internet, social media, and AI? University of Florida Law School legal scholar Lyrissa Lidsky — who is also a co-reporter for the American Law Institute's Restatement (Third) of Torts: Defamation and Privacy — explains how the law of libel and slander is being rewritten for the digital age. Lyrissa, Jane, and Eugene discuss why the old line between libel and slander no longer makes sense; how Section 230 upended defamation doctrine; the future of New York Times v. Sullivan and related First Amendment doctrines; Large Libel Models (when Large Language Models meet libel law); and more. Subscribe for the latest on free speech, censorship, social media, AI, and the evolving role of the First Amendment in today's proverbial town square.
Are you focusing your AI visibility and PR efforts on Reddit, Wikipedia, and media sites like The New York Times? You could be wasting your time and money.
In this episode of the Grow and Convert Marketing Show, we dive into the data from our new study on Large Language Model (LLM) citation patterns. We reveal why the advice you see on LinkedIn and in broad industry reports—telling you to chase large, general-purpose sites—is completely misleading for most businesses, especially B2B and niche companies.
What You'll Learn:
86% of citations are for industry-specific sites vs. 14% for "general purpose sites": See actual data from our client base that shows 86% of LLM citations for niche topics come from industry-specific publications, while Reddit, Wikipedia, and YouTube account for only 14%.
Differences in how to approach AI visibility for your brand vs. household-name brands: Discover why B2C household names (like Tesla and Peloton) do get cited on general sites, but your niche B2B software company won't.
The problem with measuring random prompts: Understand how broad studies that use 5,000 randomly selected keywords are statistically biased toward consumer queries, making their findings irrelevant to your specific business goals.
A New Tactical Framework: Learn the exact, actionable steps for a Citation Outreach strategy that works: how to choose the right topics, identify the actual cited domains using tools or manual checks, and target your PR efforts where they will actually drive AI visibility.
Stop doing general PR and start showing up in the AI answers that matter to your bottom line.
Links & Resources:
Read the full article for more detail on the study: https://www.growandconvert.com/research/llms-source-industry-sites-more-than-generic-sites/
Check out our AI Visibility Tool: https://traqer.ai
Catch up on our overall GEO Framework: https://www.growandconvert.com/ai/prioritized-geo/
Don't forget to like, subscribe, and comment to support the Grow and Convert Marketing Show!
We need to react, and fast.
Laurent Alexandre's thesis is simple but alarming: our educational and political systems are, for now, incapable of adapting to the unprecedented technological revolution that is AI.
Now that Large Language Models have all surpassed an IQ of 150, the game has changed radically.
For the first time since its emergence, humankind is no longer the most intelligent species on planet Earth.
And the colossal investments the tech giants are pouring into AI only widen the gap that now separates us from the machine.
A new paradigm calls for a new book: Laurent and his co-author Olivier Babeau argue that outside the world's 20 most prestigious schools, higher education is no longer worth pursuing.
And that "the real capital today is action."
In other words: the elite will be made up of those who adopt AI earliest (starting in kindergarten) and use it most intensively, not those who spend 10 years in higher education.
For his third appearance on GDIY, Laurent, as usual, spares nothing and no one:
Why AI amplifies intellectual inequality at scale, and how to remedy it
How to build your own AI-Aristotle
Why every minister and senior civil servant who doesn't understand AI should be dismissed
How life expectancy could double by 2030.
A crucial episode for staying ahead and ending up on the winning side of this revolution.
You can contact Laurent on X and follow him on his other networks: Instagram and LinkedIn.
"Ne faites plus d'études : Apprendre autrement à l'ère de l'IA" is available in all good bookstores or right here: https://amzn.to/4ahLYEB
TIMELINE:
00:00:00: The social divide created by the technological revolution
00:12:32: Why politicians' overall grasp of AI is disastrous
00:20:06: Prompting well and minimizing model hallucinations
00:25:49: "The world of AI is not made for slackers"
00:36:46: The biggest amplifier of inequality in history
00:43:04: Providing every child with a personalized AI tutor
00:53:41: Do LLMs really have cognitive biases?
01:03:16: 1 vs. 2,900 billion: the staggering investment gap between the US and Europe
01:14:36: What are books written entirely by AI worth?
01:20:39: The era of the first robot plumbers
01:27:45: Laurent and Olivier's four big pieces of advice
01:35:33: How to help our children make good use of these tools
01:44:20: Why are higher-education schools becoming less and less selective?
02:01:28: The "dead internet" theory
Previous GDIY episodes mentioned:
#327 - Laurent Alexandre - Auteur - ChatGPT & IA : "Dans 6 mois, il sera trop tard pour s'y intéresser"
#165 - Laurent Alexandre - Doctissimo - La nécessité d'affirmer ses idées
#487 - VO - Anton Osika - Lovable - Internet, Business, and AI: Nothing Will Ever Be the Same Again
#500 - Reid Hoffman - LinkedIn, Paypal - How to master humanity's most powerful invention
#501 - Delphine Horvilleur - Rabbin, Écrivaine - Dialoguer quand tout nous divise
#506 - Matthieu Ricard - Moine bouddhiste - Se libérer du chaos extérieur sans se couper du monde
#450 - Karim Beguir - InstaDeep - L'IA Générale ? C'est pour 2025
#397 - Yann Le Cun - Chief AI Scientist chez Meta - l'Intelligence Artificielle Générale ne viendra pas de Chat GPT
We also talked about:
Olivier Babeau, Laurent's co-author
An introduction to the thought of Teilhard de Chardin
The "dead internet" theory
Reading recommendations:
La Dette sociale de la France : 1974 - 2024 - Nicolas Dufourcq
Ne faites plus d'études : Apprendre autrement à l'ère de l'IA - Laurent Alexandre et Olivier Babeau
L'identité de la France - Fernand Braudel
Grammaire des civilisations - Fernand Braudel
Chat GPT va nous rendre immortel - Laurent Alexandre
A big THANK YOU to our sponsors:
SquareSpace: squarespace.com/doit
Qonto: https://qonto.com/r/2i7tk9
Brevo: brevo.com/doit
eToro: https://bit.ly/3GTSh0k
Payfit: payfit.com
Club Med: clubmed.fr
Cuure: https://cuure.com/product-onely
Want to sponsor Génération Do It Yourself or propose a partnership? Contact my label Orso Media via this form.
Hosted by Audiomeans. Visit audiomeans.fr/politique-de-confidentialite for more information.
Our 226th episode with a summary and discussion of last week's big AI news!
Recorded on 11/24/2025
Hosted by Andrey Kurenkov and co-hosted by Michelle Lee
Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai
Read our text newsletter and comment on the podcast at https://lastweekin.ai/
In this episode:
New AI model releases include Google's Gemini 3 Pro, Anthropic's Opus 4.5, and OpenAI's GPT-5.1, each showcasing significant advancements in AI capabilities and applications.
Robotics innovations feature Sunday Robotics' new robot Memo and a $600M funding round for Physical Intelligence, highlighting growth and investment in the robotics sector.
AI safety and policy updates include Europe's proposed changes to GDPR and AI Act regulations, and reports of AI-assisted cyber espionage by a Chinese state-sponsored group.
AI-generated content and legal highlights involve a settlement between Warner Music Group and AI music platform Udio, reflecting evolving dynamics in the field of synthetic media.
Timestamps:
(00:00:10) Intro / Banter
(00:01:32) News Preview
(00:02:10) Response to listener comments
Tools & Apps
(00:02:34) Google launches Gemini 3 with new coding app and record benchmark scores | TechCrunch
(00:05:49) Google launches Nano Banana Pro powered by Gemini 3
(00:10:55) Anthropic releases Opus 4.5 with new Chrome and Excel integrations | TechCrunch
(00:15:34) OpenAI releases GPT-5.1-Codex-Max to handle engineering tasks that span twenty-four hours
(00:18:26) ChatGPT launches group chats globally | TechCrunch
(00:20:33) Grok Claims Elon Musk Is More Athletic Than LeBron James — and the World's Greatest Lover
Applications & Business
(00:24:03) What AI bubble? Nvidia's strong earnings signal there's more room to grow
(00:26:26) Alphabet stock surges on Gemini 3 AI model optimism
(00:28:09) Sunday Robotics emerges from stealth with launch of 'Memo' humanoid house chores robot
(00:32:30) Robotics Startup Physical Intelligence Valued at $5.6 Billion in New Funding - Bloomberg
(00:34:22) Waymo permitted areas expanded by California DMV - CBS Los Angeles - Waymo enters 3 more cities: Minneapolis, New Orleans, and Tampa | TechCrunch
Projects & Open Source
(00:37:00) Meta AI Releases Segment Anything Model 3 (SAM 3) for Promptable Concept Segmentation in Images and Videos - MarkTechPost
(00:40:18) [2511.16624] SAM 3D: 3Dfy Anything in Images
(00:42:51) [2511.13998] LoCoBench-Agent: An Interactive Benchmark for LLM Agents in Long-Context Software Engineering
Research & Advancements
(00:45:10) [2511.08544] LeJEPA: Provable and Scalable Self-Supervised Learning Without the Heuristics
(00:50:08) [2511.13720] Back to Basics: Let Denoising Generative Models Denoise
Policy & Safety
(00:52:08) Europe is scaling back its landmark privacy and AI laws | The Verge
(00:54:13) From shortcuts to sabotage: natural emergent misalignment from reward hacking
(00:58:24) [2511.15304] Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models
(01:01:43) Disrupting the first reported AI-orchestrated cyber espionage campaign
(01:04:36) OpenAI Locks Down San Francisco Offices Following Alleged Threat From Activist | WIRED
Synthetic Media & Art
(01:07:02) Warner Music Group Settles AI Lawsuit With Udio
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Question? Text our Studio direct.
In this shocking monthly cyber update, the Cyber Crime Junkies (David, Dr. Sergio E. Sanchez, and Zack Moscow) expose the craziest, must-know stories in tech and security.
What's Inside This Episode:
The AI Threat is Real: Dr. Sergio reveals how Chinese threat actors manipulated Anthropic's Claude AI system to stage cyber attacks against nearly 30 companies globally. Learn how powerful Large Language Models (LLMs) are leveling the field for malicious coders.
The Casino Fish Tank Hack (True Story!): David tells the unbelievable story of how hackers breached a casino's main network by exploiting a smart thermostat inside an exotic fish tank, accessing high-roller financials. This proves critical network segmentation is non-negotiable.
The New Scam: ClickFix: David breaks down the terrifying new ClickFix attack, where hackers trick you into literally copying and pasting malicious code into your own computer. Learn the golden rule to protect yourself from this massive, 500% spike in attacks.
The Cloudflare Outage: Zack discusses the massive Cloudflare outage that took down major services like ChatGPT, revealing how a seemingly minor configuration error caused massive ripple effects across the entire internet.
The iPhone Scam Laundry: Dr. Sergio shares a wild anecdote from his time at Apple about a global scammer laundering stolen or damaged iPhones for new ones, using a loophole caused by a business decision.
We continue our AI Tools series with a deep dive into using Large Language Models (LLMs) for research, featuring Slobodan (Sani) Manić, AI skeptic, podcaster, and founder of the AI Fluency Club. Sani joins Matt and Moshe to share why context, careful prompting, and critical thinking are essential for getting real value out of today's LLMs in product work. Drawing on his work as a product builder, educator, and host of No Hacks Podcast, Sani challenges common myths about AI's capabilities and underscores both its practical uses and its risks for product managers. The conversation ranges from practical workflows to future visions of invisible AI, open-source models, and the real state of the "wrapper economy" built on major LLM providers.
Join Matt, Moshe, and Sani as they explore:
- Why most LLM workflows boil down to two mindsets: understanding your work, or avoiding understanding it
- The crucial role of context and authority, why careless prompting leads to hallucinations, and how to break questions into smaller steps for better results
- How LLMs fit as accelerators for deep research, surfacing insights faster than classic search engines, but always requiring fact-checking
- Why Sani uses Google's Gemini and NotebookLM, and the value of integration with your company's existing tools
- The open-source LLM alternative: privacy, flexibility, and why some see this as the future for secure enterprise AI
- Pitfalls of the "wrapper economy," vendor lock-in, and shaky business models based on reselling tokens
- Starting out: how to include LLMs in PM research without reinventing your workflow, and why you must be careful with company data
- The risks and limitations of AI today, especially in enterprise and sensitive environments
- How internal AI context in tools like Atlassian makes those LLM features uniquely powerful
- Future predictions: AI that fades into the background, plus the big unanswered questions about interface and humanoid robots
- Sani's approach to AI education, success stories from AI Fluency Club, and what executives need to learn to stay ahead
- And much more!
Want to learn more or join Sani's community?
- LinkedIn: Slobodan (Sani) Manić https://www.linkedin.com/in/sl...
- No Hacks Podcast http://nohackspod.com/
- AI Fluency Club https://aifluencyclub.com/
You can also connect with us and find more episodes:
- Product for Product Podcast http://linkedin.com/company/pr...
- Matt Green https://www.linkedin.com/in/ma...
- Moshe Mikanovsky http://www.linkedin.com/in/mik...
Note: Any views mentioned in the podcast are the sole views of our hosts and guests, and do not represent the products mentioned in any way.
Please leave us a review and feedback ⭐️⭐️⭐️⭐️⭐️
AI is everywhere, from coding assistants to chatbots, but what's really happening under the hood? It often feels like a "black box," but it doesn't have to be.
In this episode, Allen sits down with Manning author and AI expert Emmanuel Maggiori to demystify the core concepts behind Large Language Models (LLMs). Emmanuel, author of "The AI Pocket Book," breaks down the entire pipeline - from the moment you type a prompt to the second you get a response. He explains complex topics like tokens, embeddings, context windows, and the controversial training methods that make these powerful tools possible.
IN THIS EPISODE
00:00 - Welcome & Why "The AI Pocket Book" is a Must-Read
15:20 - The Basic LLM Pipeline Explained
8:05 - What Are Tokens?
21:30 - Understanding the Context Window
25:50 - How Embeddings Represent Meaning
35:45 - Controlling Creativity with Temperature
39:30 - How LLMs Learn From Internet Data
45:25 - Fine-Tuning with Human Feedback (RLHF)
51:15 - Why AI Hallucinates
56:45 - When Not to Use
Large language models aren't just improving — they're transforming how we work, learn, and make decisions. In this upcoming episode of Problem Solved, IISE's David Brandt talks with Bucknell University's Dr. Joe Wilck about the true state of LLMs after the first 1,000 days: what's gotten better, what's still broken, and why critical thinking matters more than ever.Thank you to this episode's sponsor, Autodesk FlexSimhttps://www.autodesk.com/https://www.flexsim.com/Learn more about The Institute of Industrial and Systems Engineers (IISE)Problem Solved on LinkedInProblem Solved on YouTubeProblem Solved on InstagramProblem Solved on TikTokProblem Solved Executive Producer: Elizabeth GrimesInterested in contributing to the podcast or sponsoring an episode? Email egrimes@iise.org
Wildest week in AI since December 2024.
Google Search Console (GSC) New! Branded and Non-Branded Queries + Annotation Filters | Marketing Talk with Favour Obasi-Ike | Sign up for exclusive SEO insights.
This episode focuses on Search Engine Optimization (SEO) and the new features within Google Search Console (GSC). Favour discusses the recently introduced brand queries and annotations features in GSC, highlighting their importance for understanding both branded and non-branded search behavior. The conversation also emphasizes the broader strategic use of GSC data, comparing it to a car's dashboard for website performance, and explores how this data can be leveraged to create valuable content, such as FAQ-based blog posts and multimedia assets, often with the aid of Artificial Intelligence (AI) tools. A key theme is the shift from traditional keyword ranking to ranking for user experience and the interconnectedness of various digital tools in modern marketing strategy.
--------------------------------------------------------------------------------
Next Steps for Digital Marketing + SEO Services:
>> Need SEO Services? Book a Complimentary SEO Discovery Call with Favour Obasi-Ike
>> Visit our Work and PLAY Entertainment website to learn about our digital marketing services.
>> Visit our Official website for the best digital marketing, SEO, and AI strategies today!
>> Join our exclusive SEO Marketing community
>> Read SEO Articles
>> Subscribe to the We Don't PLAY Podcast
--------------------------------------------------------------------------------
As a content strategist, you live with a fundamental uncertainty. You create content you believe your audience needs, but a nagging question always remains: are you hitting the mark? It often feels like you're operating with a blind spot, focusing on concepts while, as the experts say, "you don't even know the intention behind why they're asking or searching."
What if you could close that gap?
What if your audience could tell you, explicitly, what they need you to create next?
That's the paradigm shift happening right now inside Google Search Console (GSC). Long seen as a technical tool, recent updates are transforming GSC into a strategic command center. It's no longer just for SEO specialists; it's the dashboard for your entire content operation. These new developments are a game-changer, revealing direct intelligence from your audience that will change how you plan, create, and deliver content.
Here are the five truths these new GSC features reveal—and how they give you a powerful competitive edge.
1. Stop Driving Your Website Blind: The Dashboard Analogy
Managing a website without GSC is like driving a car without a dashboard. You're moving, but you have no idea how fast you're going or if you're about to run out of fuel. GSC is that free, indispensable dashboard providing direct intelligence straight from Google. But the analogy runs deeper. As one strategist put it, driving isn't passive: "when you're driving, you got to hit the gas, you got to... hit the brakes... when do you stop, when do you go, what do you tweak? Do you go to a pit stop?"
You wouldn't drive your car without looking at the dashboard. So you shouldn't run a website, drive traffic, and do everything else we do without looking at GSC, right?
Your content strategy requires the same active management: knowing when to accelerate, when to pivot, and when to optimize. The new features make this "dashboard" more intuitive than ever, giving you the controls you need to navigate with precision.
2. The Goldmine in Your Search Queries: Branded vs. Non-Branded
The first game-changing update is the new "brand queries" filter. For the first time, GSC allows you to easily separate searches for your specific brand name (branded) from searches for the topics and solutions you offer (non-branded).
This is the first step in a powerful new workflow: Discovery.
Think of your non-branded queries as raw, unfiltered intelligence from your potential audience. These aren't just keywords; they're direct expressions of need. Instead of an abstract concept, you see tangible examples like:
• "best practices for washing dishes"
• "best pet shampoo"
• "best Thanksgiving turkey meal"
When you see more non-branded than branded queries, it's a powerful signal. It means you have access to a goldmine of raw material you can build content on to attract a wider audience that doesn't know your brand… yet. This isn't just data; it's a direct trigger for your next move.
3. From Keyword to "Keynote": Creating Content with Context
Once you've discovered this raw material, the next step is Development. This is where you transform an unstructured keyword into a strategic asset by adding structure and meaning. It's a progression: a raw keyword becomes a more defined keyphrase, which can be built into a keystone concept, and ultimately refined into a keynote.
What's a keynote? Think about its real-world meaning: "when somebody sends you a note, it has context, right? It's supposed to mean something and it's supposed to say something specific." A keynote isn't just a search term; it's that term fully developed into a structured piece of content that delivers a specific, meaningful answer.
This strategic asset can take many forms:
• Blogs
• Podcast episodes
• Articles
• Newsletters
• Videos/Reels
• eBooks
4. The Most Underrated SEO Tactic: Your New Secret Weapon
You've discovered the query and developed it into a keynote. Now it's time for Execution. The single most effective format for executing on this strategy is one of the most powerful, yet underrated, SEO tactics in history: creating content around Frequently Asked Questions (FAQs).
The rise of Large Language Models (LLMs) has fundamentally changed search behavior.
People are asking full, conversational questions, and search engines are prioritizing direct, authoritative answers. A "one blog per FAQ" strategy is the perfect response. It's a secret weapon that's almost shockingly effective. As the source puts it: "FAQ is the new awesome, the most awesome ever. I said that on purpose."

How awesome? By creating a single, targeted blog post for the long-tail question "full roof replacement cost [city]," one site ranked number one on Google for that exact phrase in just 30 minutes. That's the power of directly answering a question your audience is already asking.

5. It's Not About New Features, It's About New Actions

The real purpose of these GSC updates isn't to give you more charts to observe; it's to prompt decisive action. Every non-branded query is a signal for what content to create next, feeding a powerful strategic loop that builds your authority over time.

This is where it all comes together in a professional content framework. As the source material notes, "That's why you have content pillars and you have content clusters." Your non-branded queries show you what clusters your audience needs, and your FAQ-style "keynotes" become the assets that build out those clusters around your core content pillars.

This data-driven approach empowers you to:

• Recreate outdated content with new, relevant insights.
• Repurpose core ideas into different formats to reach wider audiences.
• Re-evaluate which topics are truly resonating.
• Reemphasize your most valuable messages with fresh content.

Conclusion: What Does Your Dashboard Say?

Google Search Console is no longer just a reporting tool. It has evolved into an essential strategic partner that closes the gap between the content you produce and the value your audience is searching for.
It's your direct line to understanding intent, allowing you to move from guessing what people want to knowing what they need.

Now that you know how to read your website's dashboard, what's the first turn you're going to make?
What does it take to build tech the world actually trusts? Wikipedia founder Jimmy Wales joins the crew to dig into the real crisis behind AI, social networks, and the web: trust, and how to build it when the stakes are global. Teen founders raise $6M to reinvent pesticides using AI — and convince Paul Graham to join in Introducing SlopStop: Community-driven AI slop detection in Kagi Search Part 1: How I Found Out $1 billion AI company co-founder admits that its $100 a month transcription service was originally 'two guys surviving on pizza' and typing out notes by hand His announcement leaving Meta White House Working on Executive Order to Foil State AI Regulations Nvidia stock soars after results, forecasts top estimates with sales for AI chips 'off the charts' Jeff Bezos Creates A.I. Start-Up Where He Will Be Co-Chief Executive Jack Conte: I'm Building an Algorithm That Doesn't Rot Your Brain AI love, actually Cat island road trip: liquidator's warehouse Gentype The Carpenter's Son... My excerpt from the Q&A Image of the paper Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Jimmy Wales Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: ventionteams.com/twit zapier.com/machines agntcy.org spaceship.com/twit
You ever see a new AI model drop and be like.... it's so good OMG how do I use it?
Large Language Models, or LLMs, are infiltrating every facet of our society, but do we even understand what they are? In this fascinating deep dive into the intersection of technology, language, and consciousness, the Wizard offers a few new ways of perceiving these revolutionary—and worrying—systems. Got a question for the Wizard? Call the Wizard Hotline at 860-415-6009 and have it answered in a future episode! Join the ritual: www.patreon.com/thispodcastisaritual