Steve Wilson, Chief AI and Product Officer at Exabeam and lead of the OWASP GenAI Security Project, discusses the practical realities of securing Large Language Models and agentic workflows. Subscribe to the Gradient Flow Newsletter
In this end-of-year AwesomeCast, hosts Michael Sorg and Katie Dudas are joined by original AwesomeCast co-host Rob De La Cretaz for a wide-ranging discussion on the biggest tech shifts of 2025 — and what's coming next. The panel breaks down how AI tools became genuinely useful in everyday workflows, from content production and health tracking to decision-making and trend analysis. Rob shares why Bambu Labs 3D printers represent a turning point in consumer and professional 3D printing, removing friction and making rapid prototyping accessible for creators, engineers, and hobbyists alike. The episode also covers the evolving role of AI in media creation, concerns around over-reliance and trust, and why human-made content may soon become a premium feature. Intern Mac reflects on changing career paths into media production, while the crew revisits their 2025 tech predictions, holds themselves accountable, and locks in bold forecasts for 2026. Plus: Chachi's Video Game Minute, AI competition heating up, Apple Vision Pro speculation, and why “AI inside” may need clearer definitions moving forward.
“What does it actually mean to understand the brain?”

Dr. Kendrick Kay is a computational neuroscientist and neuroimaging expert at the University of Minnesota's Center for Magnetic Resonance Research, where he is an Associate Professor in the Department of Radiology. With training spanning philosophy and neuroscience, from a bachelor's degree in philosophy at Harvard University to a PhD in neuroscience from UC Berkeley, Dr. Kay's work bridges deep theoretical questions with cutting-edge neuroimaging methods.

In this conversation, Peter Bandettini and Kendrick Kay explore the evolving landscape of neuroscience at the intersection of fMRI, philosophy, and artificial intelligence. They reflect on the limits of current neuroimaging methodologies, what fMRI can and cannot tell us about brain mechanisms, and why creativity and human judgment remain central to scientific progress. The discussion also dives into Dr. Kay's landmark contributions to fMRI decoding and the Natural Scenes Dataset, a high-resolution resource that has become foundational for computational neuroscience and neuro-AI research.

Along the way, they examine deep sampling in neuroimaging, individual variability in brain data, and the challenges of separating neural signals from hemodynamic effects. Framed by broader questions about understanding, benchmarking progress, and the growing role of LLMs in neuroscience, this wide-ranging conversation offers a thoughtful look at where the field has been and where it may be headed.

We hope you enjoy this episode!

Chapters:
00:00 - Introduction to Kendrick Kay and His Work
04:51 - Philosophy's Influence on Neuroscience
17:17 - How Far Will fMRI Take Us?
23:27 - Understanding Attention in Neuroscience
30:00 - Science as a Process
34:17 - The Role of Large Language Models (LLMs) in Scientific Progress
38:29 - Why Humans Should Stay in the Equation
40:30 - Creativity vs. AI in Scientific Research
54:48 - Dr. Kay's Natural Scenes Dataset (NSD)
01:00:27 - Deep Sampling: Considerations and Implications
01:08:00 - Accounting for Biological Variation in Brain Scans: Differences and Similarities
01:13:00 - Separating Hemodynamic Effects from Neural Effects
01:16:00 - Areas of Hope and Progress in the Field
01:21:00 - How Should We Benchmark Progress?
01:22:59 - Advice for Aspiring Scientists

Works mentioned:
54:48 - https://www.nature.com/articles/s41593-021-00962-x
54:50 - https://www.sciencedirect.com/science/article/pii/S0166223624001838?via%3Dihub

Episode producers: Xuqian Michelle Li, Naga Thovinakere
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Welcome back to AI Unraveled, your strategic briefing on the business impact of artificial intelligence.

Today, we are doing something different. We are skipping the daily news cycle to focus on a single, massive piece of research that just dropped from a powerhouse team at Stanford, Princeton, Harvard, and the University of Washington, proposing the first proper taxonomy for Agentic AI Adaptation.

If you are building or scaling agent-based systems, this is your new mental model. The researchers argue that almost all advanced agentic systems, despite their complexity, boil down to just four basic feedback loops. We explore the "4-Bucket" framework (A1, A2, T1, T2) and explain the critical trade-offs between changing the agent versus changing the tools.

Key Topics:
Intro: Why "learning from feedback" is the definition of adaptation.
The Definition: What actually counts as "Agentic AI"?
Bucket A1 (Agent + Tool Outcome): Updating the agent based on whether code ran or queries succeeded.
Bucket A2 (Agent + Output Eval): Updating the agent based on human feedback or automated scoring.
Bucket T1 (Frozen Agent + Trained Tools): Keeping the LLM fixed while optimizing retrievers and external models.
Bucket T2 (Frozen Agent + Agent-Supervised Tools): Using the agent's own signals to tune its toolkit.
Trade-offs: Cost vs. Flexibility in modern system design.

Links & Resources:
Read the Paper: Adaptation Strategies for Agentic AI Systems (GitHub): https://github.com/pat-jj/Awesome-Adaptation-of-Agentic-AI/blob/main/paper.pdf

Keywords: Agentic AI, AI Taxonomy, AI Research, Stanford AI, Princeton AI, Large Language Models, LLM Agents, Reinforcement Learning, Tool Use, RAG, A1 A2 T1 T2, AI Adaptation, Etienne Noumen, AI Unraveled.
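The four buckets reduce to two questions: does feedback update the agent or its tools, and where does the feedback signal come from? A minimal sketch of that decision logic, assuming the bucket definitions as summarized in these notes (the dataclass, field names, and `classify` function are illustrative inventions, not code from the paper):

```python
# Illustrative sketch of the "4-Bucket" agentic-adaptation taxonomy described
# above. Bucket labels (A1, A2, T1, T2) follow the episode notes; the data
# model and function names are hypothetical.
from dataclasses import dataclass

@dataclass
class AdaptationStrategy:
    updates_agent: bool  # True: feedback changes the agent; False: agent frozen, tools trained
    signal: str          # "tool_outcome", "output_eval", or "agent_supervision"

def classify(s: AdaptationStrategy) -> str:
    """Map an adaptation strategy onto one of the four buckets."""
    if s.updates_agent:
        # Agent is updated: distinguish by the feedback source.
        # A1 = tool/environment outcomes (did the code run? did the query succeed?),
        # A2 = evaluation of the agent's outputs (human feedback, automated scoring).
        return "A1" if s.signal == "tool_outcome" else "A2"
    # Agent is frozen: the tools are optimized instead.
    # T2 = tools tuned using the agent's own supervision signals, T1 = otherwise.
    return "T2" if s.signal == "agent_supervision" else "T1"

# Example: an agent fine-tuned on whether its generated code executed (A1),
# versus a frozen agent whose retriever is tuned from the agent's own signals (T2).
print(classify(AdaptationStrategy(updates_agent=True, signal="tool_outcome")))        # A1
print(classify(AdaptationStrategy(updates_agent=False, signal="agent_supervision")))  # T2
```

The point of the sketch is the trade-off the episode highlights: the A-buckets buy flexibility at the cost of retraining the agent itself, while the T-buckets keep the expensive LLM frozen and push adaptation into cheaper, swappable components.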
This interview was recorded for GOTO Unscripted.
https://gotopia.tech
Check out more here: https://gotopia.tech/articles/398

Matt Welsh - Head of AI Systems at Palantir
Julian Wood - Serverless Developer Advocate at AWS

RESOURCES
Matt
https://twitter.com/mdwelsh
https://www.mdw.la
https://github.com/mdwelsh
https://www.linkedin.com/in/welsh-matt
https://www.ultravox.ai
Julian
https://bsky.app/profile/julianwood.com
https://twitter.com/julian_wood
https://github.com/julianwood
http://www.wooditwork.com
https://www.linkedin.com/in/julianrwood

DESCRIPTION
Matt Welsh, former professor at Harvard University and AI researcher, argues to Julian Wood that we're witnessing the death of classical computer science as language models evolve into general-purpose computers capable of direct problem-solving without human-written code.
He envisions a future where AI eliminates programming barriers, democratizing computing power so anyone can instruct computers through natural language. While acknowledging concerns about job displacement and societal equity, Matt believes this transformation will unleash unprecedented human creativity by putting the full power of computing in everyone's hands, moving beyond the current "programming priesthood" to universal access to computational problem-solving.

RECOMMENDED BOOKS
Michael Feathers • AI Assisted Programming • https://leanpub.com/ai-assisted-programming
Matthias Kalle Dalheimer & Matt Welsh • Running Linux • https://amzn.to/3YSwAIv
Alex Castrounis • AI for People and Business • https://amzn.to/3NYKKTo
Phil Winder • Reinforcement Learning • https://amzn.to/3t1S1VZ
Kelleher & Tierney • Data Science (The MIT Press Essential Knowledge series) • https://amzn.to/3AQmIRg

CHANNEL MEMBERSHIP BONUS
Join this channel to get early access to videos & other perks:
https://www.youtube.com/channel/UCs_tLP3AiwYKwdUHpltJPuA/join

Looking for a unique learning experience?
Attend the next GOTO conference near you!
Get your ticket: gotopia.tech
SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted daily!
Noah Doyle, Managing Partner at Javelin Investment Partners in Silicon Valley, with a decades-long background in venture capital, joins Kopi Time to talk about tech and AI. We begin with a general discussion of the present vibe in Silicon Valley, giddy with record investments and returns. We then pivot to the question of the moment: is there an AI bubble? Noah offers a detailed and nuanced response, walking us through the supply and demand for AI infrastructure and products, the fundraising and capital deployment ecosystem, and the dizzying valuations of AI companies, public and private. I then nudge Noah toward an issue close to his heart: whether the path toward AGI runs through Gen AI, and if not, whether there are alternative paths in the making. Noah responds by discussing alternatives to Large Language Models, including symbolic logic-based AIs. We take the conversation toward innovations coming out of China and the US, which Noah tracks closely; he sees the trend as a positive, mutually beneficial development. We talk about regulatory guardrails, cybersecurity, privacy, and the ethical aspects of AI to round off this fascinating conversation.

See omnystudio.com/listener for privacy information.
The ground beneath the digital marketing industry is shifting. For decades, the mantra was simple: optimize for traffic, measure clicks, and track conversions. But with the rise of Generative AI, Large Language Models (LLMs), and Answer Engines, that rulebook is obsolete. In this powerful episode, I sit down with Joe Doveton to discuss the urgent reality facing every brand that relies on web traffic.

We dive into the phenomenon Joe calls the "Crocodile Mouth": the unsettling visual trend where brands maintain high search impressions but see clicks vanish, a direct result of zero-click searches. With the proliferation of platforms like TikTok, Reddit, and various generative engines, we discuss why Google's monopoly on the customer journey is over, and how users can now move from the awareness stage to purchasing a product without ever visiting a Google property. This episode is a wake-up call for marketers still clinging to outdated KPIs.

Joe introduces the new alphabet soup of optimization: GEO (Generative Engine Optimization), AEO (Answer Engine Optimization), and LLMO (Large Language Model Optimization). Crucially, we explore what this means for your analytics. If traffic and conversion rate are "lousy metrics", what should you measure? Joe reveals emerging metrics like visibility within LLMs and competitive positioning. Most importantly, we agree that this "Wild West" era is finally killing off the outdated SEO hacks, forcing brands back to the core long-term strategy: writing useful content and focusing on the customer experience.

About the Guest
Joe Doveton is an experienced digital strategist, consultant, and speaker focused on the intersection of AI, search, and customer experience. With a background that includes working in advertising and a deep understanding of Conversion Rate Optimization (CRO), Joe is now pioneering tools and strategies for the Generative Engine Optimization (GEO) space.
He is the founder of GEO Jet Pack, a platform designed to extract and visualize entities from content to help brands gain visibility in LLM responses - a critical new metric for the AI era.

What You'll Learn
The difference between traditional SEO and the new acronyms: GEO, AEO, and LLMO.
What the "Crocodile Mouth" is and why it confirms the end of the reliance on clicks.
Why the old marketing KPIs, specifically web traffic and conversion rate, are now "lousy metrics" for measuring success.
The new metrics emerging for the middle of the funnel, such as visibility within LLMs and competitive position within prompt responses.
Why the entire AI shift proves that long-term SEO success is still about being useful, interesting, and trustworthy (EEAT).
Why the current AI era is killing the old SEO hacks and discouraging tactics like content farming.
How brands like Google are undermining their own profitable ad business by integrating AI Overviews.
The vision of the Semantic Web and why the current structure of websites is inherently ill-suited for machine consumption.

Guest Contact:
Joe Doveton's website
Joe Doveton on LinkedIn
---
If you enjoyed the episode, please share it with a friend!
Our Head of Research Product in Europe Paul Walsh and Chief European Equity Strategist Marina Zavolock break down the key drivers, risks, and sector shifts shaping European equities in 2026. Read more insights from Morgan Stanley.

----- Transcript -----

Paul Walsh: Welcome to Thoughts on the Market. I'm Paul Walsh, Morgan Stanley's Head of Research Product in Europe.

Marina Zavolock: And I'm Marina Zavolock, Chief European Equity Strategist.

Paul Walsh: And today, our views on what 2026 holds for the European stock market. It's Tuesday, December 9th at 10am in London.

As we look ahead to 2026, there's a lot going on in European stock markets. From shifting economic winds to new policies coming out of Brussels and Washington, the investment landscape is evolving quite rapidly. Interest rates, profit forecasts, and global market connections are all in play.

And Marina, the first question I wanted to ask you really relates to the year 2025. Why don't you synthesize your review of the year that we've just had?

Marina Zavolock: Yeah, I'll keep it brief so we can focus ahead. But the year 2025, I would say, was a year of two halves. We began with a lot of underperformance for Europe at the end of 2024, after the U.S. elections, and a decline in the euro. The start of 2025 then saw really strong performance for Europe, which surprised a lot of investors. And we had catalyst after catalyst for that upside: Germany's 'whatever it takes' fiscal moment happened early this year, in the first quarter.

We had a lot of headlines and anticipation around Russia-Ukraine peace negotiations, which led to various themes emerging within the European equities market as well, and drove upside. And then alongside that, heading into Liberation Day, in the months preceding it, as investors were worried about tariffs, there was a lot of interest in diversifying out of U.S. equities.
And Europe was one of the key beneficiaries of that diversification theme. That was a first-half dynamic. In the second half, Europe has kept broadly performing, but not as strongly as the U.S. We made the call in March that European optimism had peaked. And the second half was more focused on the execution of Germany's fiscal plans; post the big headlines, the pace of execution has been a little slower than investors were anticipating. And also, Europe just generally has had weak earnings growth. So, we started the year at 8 percent consensus earnings growth for 2025. At this point, we're at -1 percent for this year.

Paul Walsh: So, as you've said there, Marina, it's been a year of two halves. And so that's 2025 in review. But we're here to really talk about the outlook for 2026, and there are three buckets that we're going to dive into. The first of those is really around this notion of slipstream, and the extent to which Europe can get caught up in the slipstream that the U.S. is going to create, given Mike Wilson's view on the outlook for U.S. equity markets. What's the thesis there?

Marina Zavolock: Yeah, and thank you for the title suggestion of 'Slipstream,' by the way, Paul. So basically our view is that, well, our U.S. equity strategist is very bullish, as I think most know. At this stage he has 15 percent upside to his S&P target to the end of next year, and very, very strong earnings growth in the U.S. And the thesis is that you're getting a broadening in the strength of the U.S. economic recovery.

For Europe, what that means is that it's very, very hard for European equities to go down if the U.S. market is up 15 percent. But our upside is more driven by multiple expansion than by earnings growth. Because what we continue to see in Europe, and what we anticipate for next year, is that consensus is too high. Consensus is anticipating almost 13 percent earnings growth.
We're anticipating just below 4 percent earnings growth. So, we do expect downgrades.

But at the same time, if the U.S. recovery is broadening, the hope will be that that broadening eventually comes to Europe. And Europe trades at such a big discount, about 26 percent relative to the U.S. at the moment, sector neutral, that investors will play that anticipation of broadening through the multiple.

Paul Walsh: So, the first point you are making is that the direction of travel in the U.S. really matters for European stock markets. The second bucket I wanted to talk about: we're in a thematically driven market. So, what are the themes that are going to be really resonating for Europe as we move into 2026?

Marina Zavolock: Yeah, so let me pick up on the earnings point that I just made. We have 3.6 percent earnings growth for next year. That's our forecast. And bottom-up consensus is 12.7 percent. It's a very high bar. Europe typically comes in and sees high numbers at the beginning of the year and then downgrades through the course of the year. And thematically, why do we see these downgrades? I think it's something that investors probably don't focus on enough. It's structurally rising China competition, and also Europe's old-economy exposure, especially in regard to the China exposure, where demand isn't really picking up.

Every year, for the last few years, we've seen this China exposure and China competition piece drive between 60 and 90 percent of European earnings downgrades. And looking especially at the areas where consensus is too high, which tend to be highly China-exposed and have had negative growth this year and in prior years, we don't see the trigger for that to mean-revert. That is where we expect the most thematic disappointment. So, sectors like chemicals and autos are some of the sectors towards the bottom of our model. Luxury as well.
It's a bit more debated these days, but that's still an underweight for us in our model.

Then German fiscal: this is a multi-year story. I mentioned that there was a lot of excitement around it in the first half of the year. The focus for next year will be the pace of execution, and we think there are two parts to this story. There's a 500-billion-euro infrastructure fund in Germany where, according to our economists, we're seeing a very likely reallocation to more social-related spend, which is not as great for the companies in the German index or for earnings. And execution there hasn't been very fast.

And then there's the Defense side of the story, where we're a lot more optimistic, where we're seeing execution start to pick up now, and where the need is immense. We're also seeing upgrades from corporates on the back of that execution pickup. We're very bullish on Defense; we're overweight. The issue with taking that defense optimism and projecting it out for all of Europe is that defense makes up less than 2 percent of the European index. We do think it broadens to other sectors, but that will take years to start to impact them.

And then, a couple of other things. We have pockets of AI exposure in the enabler category. We're seeing a lot of strength in those pockets, and a lot of catch-up in some of them right now. Utilities is a great example, which I can talk about. So, we think that will continue.

But one thing I'm really watching, and I think a lot of strategists across regions are watching, is AI adoption. And this is the real bull case for me in Europe. If AI adoption ROI starts to become material enough that it's hard to ignore, which could start, in my opinion, from the second half of next year, then Europe could be seen as much more of a play on AI adoption, because the majority of our index is exposed to adoption.
We have a lot of low-hanging fruit in terms of productivity challenges, demographics, and the level of returns. And if you track our early adopters, which is something we do, they are showing ROI. So, we think that will broaden out to more of the European index.

Paul Walsh: Now, Marina, you mentioned a number of sectors there as it relates to the thematic focus. That brings us onto our third and final bucket: what your model is suggesting in terms of your sector preferences…

Marina Zavolock: Yeah. So, just to take a step back for a moment, we have a data-driven, quantamental model. It incorporates themes. It incorporates our view on the cycle, which is, in our view, that we're late cycle now, which can be very bullish for returns. And it includes quant factors: things like price target revisions breadth, earnings revisions breadth, and management sentiment, which we use a Large Language Model to measure. For the first time since inception, we have reviewed the performance of our model over the last just under two years, and the top versus bottom stocks in our model have delivered 47 percent in returns, top versus bottom. Now, on the basis of the latest refresh of our model, Banks are screening by far at the top.

And if you look, whether at our sector model or at our top 50 preferred stocks in Europe, the list is full of Banks. I didn't mention this in the thematic portion, but one of the themes in Europe outside of Germany is fiscal constraints. And actually, Banks are positively exposed to that, because they're positively exposed to the steepness of the yield curve.

And I think specialists are definitely optimistic on the sector, but you're getting more and more generalists noticing that Banks is the sector that consistently delivers the highest positive earnings upgrades of any sector in Europe. And it's still not expensive at all.
It's one of the cheapest sectors in Europe, trading at about nine times PE, while also giving a high single-digit buyback and dividend yield. So that sector, we think, continues to have momentum.

We also like Defense. And we recently upgraded Utilities. We think Utilities in Europe is at an interesting moment: in the last six months or so, it broke out of a five-year downtrend relative to the European index. And if you look at European Utilities relative to U.S. Utilities (I mentioned those wide valuation discounts), Utilities have broken out of their downtrend in terms of valuation versus their U.S. peers, but still trade at very wide discounts. And this is a sector with the highest CapEx of any sector in Europe, the highest CapEx growth, on the energy transition. The market has been hesitant to reward the sector for that because of earlier questions around returns on renewables. Now that there's this endless demand for power on the back of powering AI, investors are more willing to reward the sector for those returns.

So, the sector's been a great performer already year to date, but we think there are multiple years to go.

Paul Walsh: Marina, a very comprehensive overview of the outlook for European equities for 2026. Thank you very much for taking the time to talk.

Marina Zavolock: Thank you, Paul.

Paul Walsh: And thanks for listening. If you enjoy Thoughts on the Market, please leave us a review wherever you listen and share the podcast with a friend or colleague today.
MY NEWSLETTER - https://nikolas-newsletter-241a64.beehiiv.com/subscribe

Join me, Nik (https://x.com/CoFoundersNik), as I interview Minh Nguyen (@minhnxn). In this episode, we dive deep into the exciting world of AI and how it's revolutionizing the way we build businesses and gather information online.

Minh, coming straight from the heart of tech in the Bay Area, shares his practical experiences using various Large Language Models like Claude, ChatGPT, and the often-overlooked Gemini for tasks ranging from coding to deep research.

We explore his journey building Cash On, a powerful Chrome extension for real estate investors, and uncover the surprising potential of simple browser tools. Get ready to learn about AI-powered web scraping, the rise of directory websites and programmatic SEO, and how no-code platforms like Lovable and Bolt.new are empowering non-technical founders to bring their app ideas to life.

Questions This Episode Answers:
• What AI tools (like LLMs) do you use, and for what specific purposes?
• Why do you think people are "sleeping on" Gemini?
• How can AI be utilized for web scraping and data acquisition?
• What are some good types of app ideas to start with when using no-code or low-code tools?
• How can I leverage my existing content (like podcast episodes) using AI?

Enjoy the conversation!
__________________________
Love it or hate it, I'd love your feedback. Please fill out this brief survey with your opinion or email me at nik@cofounders.com with your thoughts.
__________________________
MY NEWSLETTER: https://nikolas-newsletter-241a64.beehiiv.com/subscribe
Spotify: https://tinyurl.com/5avyu98y
Apple: https://tinyurl.com/bdxbr284
YouTube: https://tinyurl.com/nikonomicsYT
__________________________
This week we covered:
00:00 Automating Data Collection for Real Estate
02:59 Exploring AI Tools and Their Applications
05:48 Building AI-Powered Web Scrapers
09:10 The Future of Programmatic SEO
12:07 Leveraging AI for Business Ideas
14:54 Creating Chatbots for Business Analysis
17:45 Building Without Coding: New Possibilities
21:08 Frameworks for Identifying Business Opportunities
We're back from our brief forced hiatus, with a slight cough, but with a topic much bigger than our vocal cords: world models. Large Language Models (LLMs) like ChatGPT, Gemini & Co. were the perfect entry point into the AI era: strong at language, code, and creativity. Now come systems that don't just generate text but aim to understand our world as a system with rules, physics, and causality. In this episode, we talk about what distinguishes world models from LLMs, why "grounding" in real laws of nature is so important (a glass falling off a table has to fall, no matter what the language model thinks about it), and why this becomes exciting wherever AI has real-world consequences: in robotics, urban planning, biology, materials research, and climate modeling. Among other things, we sketch a world model for Hamburg with traffic, construction sites, weather, and buildings; look at digital twins, which with world models finally become truly predictive; and discuss how language models, agentic AI, and world models can work together: LLMs as the interface for us, world models as the "physics engine" in the background. If you want to know why many researchers are currently so excited about world models (without talking down LLMs), and how they can help us better simulate and plan our real world, this episode is for you.
Last week saw the return of the European Coffee Symposium (ECS) + COHO in Berlin. Across three days, we welcomed 50 influential speakers from around the world to share their insights, inspiration, and bold ideas shaping the future of coffee and hospitality.

Over the coming weeks, we'll bring you a collection of these unfiltered conversations, panel discussions and keynote sessions – direct from the ECS + COHO stages. And what better place to begin than with a talk from James Hoffmann – a pioneer of specialty coffee and one of the most respected thinkers in our industry.

In this talk, James explores the evolving dynamics shaping coffee - asking whether we've reached "peak coffee geek" at home, and the dangers of being stuck in the middle market. He also shares his thoughts on AI and Large Language Models in social media, and champions the coffee shop's timeless value as a space for humanity.

Credits music: "Wake Up (And Smell the Coffee)" by Lexie in association with The Coffee Music Project and SEB Collective. Tune into the 5THWAVE Playlist on Spotify for more music from the show.

Sign up for our newsletter to receive the latest coffee news at worldcoffeeportal.com

Subscribe to 5THWAVE on Instagram @5thWaveCoffee and tell us what topics you'd like to hear
Fei-Fei Li is a Stanford professor, co-director of the Stanford Institute for Human-Centered Artificial Intelligence, and co-founder of World Labs. She created ImageNet, the dataset that sparked the deep learning revolution. Justin Johnson is her former PhD student, ex-professor at Michigan, ex-Meta researcher, and now co-founder of World Labs. Together, they just launched Marble, the first model that generates explorable 3D worlds from text or images.

In this episode, Fei-Fei and Justin explore why spatial intelligence is fundamentally different from language, what's missing from current world models (hint: physics), and the architectural insight that transformers are actually set models, not sequence models.

Resources:
Follow Fei-Fei on X: https://x.com/drfeifei
Follow Justin on X: https://x.com/jcjohnss
Follow Shawn on X: https://x.com/swyx
Follow Alessio on X: https://x.com/fanahova

Stay Updated:
If you enjoyed this episode, please be sure to like, subscribe, and share with your friends.
Follow a16z on X: https://x.com/a16z
Follow a16z on LinkedIn: https://www.linkedin.com/company/a16z
Follow the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Follow the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details, please see http://a16z.com/disclosures.
In this episode, Adam Butler is joined by guest Mike Green to discuss his viral Substack articles on the American affordability crisis. The conversation explores the significant gap between official economic statistics, like CPI, and the lived financial reality for the middle class, a phenomenon Green argues is often dismissed by an expert "mockery machine." They also discuss the use of LLMs in his research process and debate potential policy solutions to address widespread economic precarity.

Topics Discussed
• The use of Large Language Models (LLMs) as productivity tools for research and writing
• A critique of the "mockery machine" in public discourse that dismisses legitimate concerns about the cost of living
• The disconnect between formal economic definitions of inflation and the public's lived experience of unaffordability
• The concept of a "precarity line" for a modern family versus the technical definition of a poverty line
• The economic pressures leading to "ghost households," where young people forgo having children due to high costs
• Flaws in economic metrics like the CPI, particularly how quality adjustments mask the true rise in essential costs
• The societal gaslighting by the economic establishment and its political consequences
• The "Valley of Death" or benefits cliff, where withdrawing government support creates a barrier to entering the middle class
• Debating policy solutions like tariffs, direct government investment, and incentive-based programs to address economic precarity
Most companies still rely on dashboards to understand their data, even though AI now offers new ways to ask questions and explore information. Barry McCardel, CEO of Hex and former engineer at Palantir, joins a16z General Partner Sarah Wang to discuss how agent workflows, conversational interfaces, and context-aware models are reshaping analysis. Barry also explains how Hex aims to make everyone a data person by unifying analysis and AI in one workflow, and he reflects on his post about getting rid of their AI product team and the process behind Hex's funny launch videos.Timecodes: 0:00 – The problem with dashboards1:20 – The evolution of data teams and AI's role2:05 – Democratizing data: challenges and opportunities3:45 – The rise of agentive workflows9:48 – Threads and the changing UI of data analysis13:16 – Building AI agents: lessons from the notebook agent16:12 – Model capabilities and the future of AI in data19:10 – The importance of context and trust in data analysis24:34 – Semantic models and context engineering29:27 – Data team roles in the age of AI31:52 – Accuracy, trust, and evaluating AI systems37:43 – Building Hex: embracing AI as core, not an add-on48:48 – Pricing, value capture, and the future of SaaS55:55 – The modern data stack and industry consolidation1:04:26 – Acquisitions and owning the data insight layer1:06:46 – Lessons from Palantir: forward-deployed engineering1:13:11 – Commitment engineering and customer collaboration1:17:25 – Brand, launch videos, and having fun in SaaSResources:Follow Barry McCardel on X: https://x.com/barraldFollow Sarah Wang on X: https://x.com/sarahdingwang Stay Updated:If you enjoyed this episode, be sure to like, subscribe, and share with your friends!Find a16z on X: https://x.com/a16zFind a16z on LinkedIn:https://www.linkedin.com/company/a16zListen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYXListen to the a16z Podcast on Apple Podcasts: 
https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711Follow our host: https://x.com/eriktorenbergPlease note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details, please see http://a16z.com/disclosures. Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Florian and Esther discuss the language industry news of the past few weeks, reflecting on SlatorCon Remote and announcing that SlatorCon London 2026 is open for registration. The duo touch on IMDb's decision to recognize dubbing artists as part of new professional credit categories, explaining how this expands visibility for multilingual voice talent. They then move on to Coursera's strategy shift and outline how its new CEO is betting on AI translation and AI dubbing to revive slowing growth. Florian and Esther talk about Amazon's rollout of AI-translated Kindle eBooks, and question authors' willingness to rely on automated translation despite Amazon's promise of fast turnarounds, in as little as 72 hours. Florian highlights research on spatial audio improving AI live speech translation, and reflects on how clearer speaker differentiation could enhance comprehension, though he stresses ongoing challenges in live settings, like latency and overlapping speech. In Esther's M&A and funding corner, healthcare AI technology startup No Barrier raises USD 2.7m, Cisco acquires EZ Dubs to enhance WebEx's real-time speech translation capabilities, and audio AI startup AudioShake raises USD 14m. Florian analyzes OneMeta's financials and notes its rapid revenue growth despite significant ongoing losses and a limited marketing presence. Esther details the landmark UK NHS framework agreement for language services, including scope and the number of awarded vendors. Florian concludes with updates on interpreting performances at Teleperformance and AMN Healthcare, noting mixed results.
What does it really take to build AI that can resolve customer support at scale reliably, safely, and with measurable business impact?We explore how Intercom has evolved from a traditional customer support platform into an AI-first company, with its AI assistant, Fin, now resolving 65% of customer queries without human intervention. Intercom's Chief AI Officer, Fergal Reid, discusses the company's journey from natural language understanding (NLU) systems to their current retrieval augmented generation (RAG) approach, explaining how they've optimised every component of their AI pipeline with custom-built models.The conversation covers Intercom's unique approach to AI product development, emphasising standardisation and continuous improvement rather than customisation for individual clients. Fergal explains their outcome-based pricing model, where clients pay for successful resolutions rather than conversations, and how this aligns incentives across the business.We also discuss Intercom's approach to agentic AI, which enables their systems to perform complex, multi-step tasks, such as processing refunds, by integrating with various APIs. 
Fergal shares insights on testing methodologies, the balance between customisation and standardisation, and the challenges of building AI products in a rapidly evolving technological landscape.Finally, Fergal shares what excites and honestly freaks him out a bit about where AI is heading next.Timestamps00:00 - Intro02:31 - Welcome to Fergal Reid05:26 - How to train an NLU solution effectively?08:56 - What gen AI changed for Intercom10:57 - How would you describe Fin?14:30 - Fin's performance increase17:18 - Intercom's custom models22:14 - Large Language Models vs Small Language Models30:40 - RAG and 'the full stop problem'40:08 - Agentic AI capabilities at Intercom50:40 - Intercom's approach to testing1:04:46 - About the most exciting things in the AI spaceShow notesLearn more about IntercomConnect with Fergal Reid on LinkedInFollow Kane Simms on LinkedInArticle - The full stop problem: RAG's biggest limitationTake our updated AI Maturity AssessmentSubscribe to VUX WorldSubscribe to The AI Ultimatum Substack Hosted on Acast. See acast.com/privacy for more information.
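As a rough illustration of the retrieval-augmented generation (RAG) pattern discussed above, here is a minimal sketch. All names here (`retrieve`, `build_prompt`, the toy help-center snippets) are invented for illustration; Intercom's actual Fin pipeline uses custom-built models and is certainly more sophisticated, and real systems score documents with embeddings rather than word overlap.

```python
# Minimal RAG sketch: retrieve relevant passages, then ground the prompt in them.
# Helper names and documents are hypothetical, not Intercom's actual implementation.

DOCS = {
    "refunds": "Refunds are issued within 5 business days of approval.",
    "shipping": "Standard shipping takes 3-7 business days.",
}

def retrieve(query: str, docs: dict[str, str], k: int = 1) -> list[str]:
    """Rank docs by naive word overlap with the query (stand-in for embedding search)."""
    q = set(query.lower().split())
    ranked = sorted(docs.values(),
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: dict[str, str]) -> str:
    """Ground the model's answer in retrieved passages instead of parametric memory."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("how long do refunds take", DOCS))
```

The design point is the same one Fergal describes: the generator only sees vetted context, which is what makes resolution quality measurable and the pipeline optimisable component by component.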
Consumers are shifting from traditional search engines to AI tools like ChatGPT and Gemini, which is fundamentally changing how financial products appear and get evaluated.Amber Buker, Chief Research Officer at Travillian, reconnects with Alana Levine, Chief Revenue Officer at Fintel Connect, to explore one of the biggest shifts uncovered in Fintel Connect's annual survey: the rise of consumers using Large Language Models like ChatGPT and Gemini to research financial products. This move away from traditional search engines is already reshaping affiliate marketing, prompting Fintel Connect to examine how different AI models source recommendations and how financial institutions can stay visible in a growing no-click environment.
Dr. Nitin Seam chats with Dr. Sara Murray and Dr. Avraham Cooper about their articles, "Large Language Models and Medical Education: Preparing for a Rapid Transformation in How Trainees Will Learn to Be Doctors" and "AI and Medical Education — A 21st-Century Pandora's Box."
This is Vibe Coding 001. Have you ever wanted to build your own software or apps that can just kinda do your work for you inside of the LLM you use but don't know where to start? Start here. We're giving it all away and making it as simple as possible, while also hopefully challenging how you think about work. Join us. Beginner's Guide: How to visualize data with AI in ChatGPT, Gemini and Claude -- An Everyday AI Chat with Jordan WilsonNewsletter: Sign up for our free daily newsletterMore on this Episode: Episode PageJoin the discussion:Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineupWebsite: YourEverydayAI.comEmail The Show: info@youreverydayai.comConnect with Jordan on LinkedInTopics Covered in This Episode:Combining Multiple Features in Large Language ModelsVisualizing Data in ChatGPT, Gemini, and ClaudeCreating Custom GPTs, Gems, and ProjectsUploading Files for Automated Data DashboardsComparing ChatGPT Canvas, Gemini Canvas, and Claude ArtifactsUsing Agentic Capabilities for Problem SolvingVisualizing Meeting Transcripts and Unstructured DataOne-Shot Mini App Creation with AITimestamps:00:00 "Unlocking Superhuman LLM Capabilities"04:12 Custom AI Model and Testing07:18 "Multi-Mode Control for LLMs"12:33 "Intro to Vibe Coding"13:19 "Streamlined AI for Simplification"19:59 Podcast Analytics Simplified21:27 "ChatGPT vs. 
Google Gemini"26:55 "Handling Diverse Data Efficiently"28:50 "AI for Actionable Task Automation"33:12 "Personalized Dashboard for Meetings"36:21 Personalized Automated Workflow Solution40:00 "AI Data Visualization Guide"40:38 "Everyday AI Wrap-Up"Keywords:ChatGPT, Gemini, Claude, data visualization with AI, visualize data using AI, Large Language Models, LLM features, combining LLM modes, custom instructions, GPTs, Gems, Anthropic projects, canvas mode, interactive dashboards, agentic models, code rendering, meeting transcripts visualization, SOP visualization, document analysis, unstructured data, structured insights, generative AI workflows, personalized dashboards, automated reporting, chain of thought reasoning, one-shot visualizations, data-driven decision-making, non-technical business leaders, micro apps, AI-powered interfaces, action items extraction, iterative improvement, multimodal AI, Opus 4.5, GPT-5.1 Thinking, Gemini 3 Pro, artifacts, demos over memos, bespoke software, digital transformation, automated analyticsSend Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info) Ready for ROI on GenAI? Go to youreverydayai.com/partner
What does it take to be ready to deploy M365 Copilot in your organization? Richard talks to Nikki Chapple about her latest incarnation of the M365 Copilot Readiness Checklist, working step-by-step to bring M365 Copilot into the organization without causing data leak issues. Nikki discusses utilizing existing tools to accurately identify sensitive data, archiving outdated information, and monitoring data usage by both users and agents - allowing you to detect issues before they escalate. The conversation also delves into the process of identifying issues, discussing policy changes, and how to communicate those changes so that people can take advantage of the power of these new tools without feeling threatened. It's a journey!LinksMicrosoft PurviewSharePoint Advanced ManagementDefender for Cloud AppsRestricted SharePoint SearchMicrosoft 365 ArchiveSharePoint Restricted Content DiscoveryData Security Posture Management for AINikki's Readiness ChecklistM365 Copilot Oversharing BlueprintMicrosoft Purview Secure by Default BlueprintPrevent Data Leaks to Shadow AI BlueprintRecorded November 7, 2025
In this talk, I will discuss recent research projects at the intersection of software security and automated reasoning. Specifically, I will present our work on assessing the exploitability of the Android kernel and developing complex exploits for it, as well as our efforts to uncover bugs in Rust's unsafe code through fuzzing.Throughout the talk, I will highlight how Large Language Models (LLMs) can support both attackers and defenders in analyzing complex software systems, and I will present key lessons on using LLMs effectively along with the practical challenges that arise when integrating them into software security workflows. About the speaker: Dr. Antonio Bianchi's research interests lie in the area of computer security. His primary focus is the security of mobile devices. Most recently, he started exploring the security issues posed by IoT devices and their interaction with mobile applications. As a core member of the Shellphish and OOO teams, he has played in and organized many security competitions (CTFs), and won third place at the DARPA Cyber Grand Challenge.
Dr. Sid Dogra talks with Dr. Paul Yi about the safe use of large language models and other generative AI tools in radiology, including evolving regulations, data privacy concerns, and bias. They also discuss practical steps departments can take to evaluate vendors, protect patient information, and build a long term culture of responsible AI use. Best Practices for the Safe Use of Large Language Models and Other Generative AI in Radiology. Yi et al. Radiology 2025; 316(3):e241516.
Last month, Senate Democrats warned that "Automation Could Destroy Nearly 100 Million U.S. Jobs in a Decade." Ironically, they used ChatGPT to come to that conclusion. DAIR Research Associate Sophie Song joins us to unpack the issues when self-professed worker advocates use chatbots for "research."Sophie Song is a researcher, organizer, and advocate working at the intersection of tech and social justice. They're a research associate at DAIR, where they're working with Alex on building the Luddite Lab Resource Hub.References:Senate report: AI and Automation Could Destroy Nearly 100 Million U.S. Jobs in a DecadeSenator Sanders' AI Report Ignores the Data on AI and InequalityAlso referenced:MAIHT3k Episode 25: An LLM Says LLMs Can Do Your JobHumlum paper: Large Language Models, Small Labor Market EffectsEmily's blog post: Scholarship should be open, inclusive and slowFresh AI Hell:Tech companies compelling vibe codingarXiv is overwhelmed by LLM slop'Godfather of AI' says tech giants can't profit from their astronomical investments unless human labor is replacedIf you want to satiate AI's hunger for power, Google suggests going to spaceAI pioneers claim human-level general intelligence is already hereGen AI campaign against ranked choice votingChaser: Workplace AI Implementation BingoCheck out future streams on Twitch. Meanwhile, send us any AI Hell you see.Our book, 'The AI Con,' is out now! Get your copy now.Subscribe to our newsletter via Buttondown. Follow us!Emily Bluesky: emilymbender.bsky.social Mastodon: dair-community.social/@EmilyMBender Alex Bluesky: alexhanna.bsky.social Mastodon: dair-community.social/@alex Twitter: @alexhanna Music by Toby Menon.Artwork by Naomi Pleasure-Park. Production by Ozzy Llinas Goodman.
On this episode of our "Leaders in ERP Series", Shawn Windle speaks with Paul Farrell, Senior Vice President at ECI. Windle and Farrell discuss the projected evolution of the ERP market over the next decade, how Large Language Models (LLMs) and Agentic AI are changing the way companies utilize ERP, and how ECI is framing their AI strategy around industry specialization.Connect with us!https://www.erpadvisorsgroup.com866-499-8550LinkedIn:https://www.linkedin.com/company/erp-advisors-groupTwitter:https://twitter.com/erpadvisorsgrpFacebook:https://www.facebook.com/erpadvisorsInstagram:https://www.instagram.com/erpadvisorsgroupPinterest:https://www.pinterest.com/erpadvisorsgroupMedium:https://medium.com/@erpadvisorsgroup
What happens when 1970s defamation law collides with the Internet, social media, and AI? University of Florida Law School legal scholar Lyrissa Lidsky — who is also a co-reporter for the American Law Institute's Restatement (Third) of Torts: Defamation and Privacy — explains how the law of libel and slander is being rewritten for the digital age. Lyrissa, Jane, and Eugene discuss why the old line between libel and slander no longer makes sense; how Section 230 upended defamation doctrine; the future of New York Times v. Sullivan and related First Amendment doctrines; Large Libel Models (when Large Language Models meet libel law); and more. Subscribe for the latest on free speech, censorship, social media, AI, and the evolving role of the First Amendment in today's proverbial town square.
Are you focusing your AI visibility and PR efforts on Reddit, Wikipedia, and media sites like The New York Times? You could be wasting your time and money.In this episode of the Grow and Convert Marketing Show, we dive into the data from our new study on Large Language Model (LLM) citation patterns. We reveal why the advice you see on LinkedIn and in broad industry reports—telling you to chase large, general-purpose sites—is completely misleading for most businesses, especially B2B and niche companies.What You'll Learn:86% of citations are for industry-specific sites vs. 14% for "general-purpose sites": See actual data from our client base that shows 86% of LLM citations for niche topics come from industry-specific publications, while Reddit, Wikipedia, and YouTube account for only 14%.Differences in how to approach AI visibility for your brand vs. household-name brands: Discover why B2C household names (like Tesla and Peloton) do get cited on general sites, but your niche B2B software company won't. 
The problem with measuring random prompts: Understand how broad studies that use 5,000 randomly selected keywords are statistically biased toward consumer queries, making their findings irrelevant to your specific business goals.A New Tactical Framework: Learn the exact, actionable steps for a Citation Outreach strategy that works: how to choose the right topics, identify the actual cited domains using tools or manual checks, and target your PR efforts where they will actually drive AI visibility.Stop doing general PR and start showing up in the AI answers that matter to your bottom line.Links & Resources:Read the full article for more detail on the study: https://www.growandconvert.com/research/llms-source-industry-sites-more-than-generic-sites/Check out our AI Visibility Tool: https://traqer.ai Catch up on our overall GEO Framework: https://www.growandconvert.com/ai/prioritized-geo/ Don't forget to like, subscribe, and comment to support the Grow and Convert Marketing Show!
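The domain-tally idea behind the 86%/14% split can be sketched in a few lines. Everything below is illustrative (a toy list of general-purpose domains and made-up URLs), not the study's actual data or classification method:

```python
# Illustrative sketch: classify cited URLs as industry-specific vs general-purpose.
# The domain list and sample URLs are hypothetical, not the study's dataset.

from urllib.parse import urlparse

GENERAL_PURPOSE = {"reddit.com", "wikipedia.org", "youtube.com"}

def is_general(url: str) -> bool:
    """True if the URL's host is (or is a subdomain of) a general-purpose site."""
    host = urlparse(url).netloc.lower()
    return any(host == g or host.endswith("." + g) for g in GENERAL_PURPOSE)

def citation_split(cited_urls: list[str]) -> tuple[float, float]:
    """Return (industry_share, general_share) across a list of cited URLs."""
    general = sum(1 for u in cited_urls if is_general(u))
    total = len(cited_urls)
    return (total - general) / total, general / total

urls = [
    "https://www.industrytrade.com/report",    # hypothetical niche publication
    "https://nichesaasreview.com/best-tools",  # hypothetical niche publication
    "https://en.wikipedia.org/wiki/CRM",
]
industry, general = citation_split(urls)
```

Running this tally over the answers LLMs return for your own topics, rather than random keywords, is the manual-check version of the Citation Outreach step described above.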
We need to react, and fast. Laurent Alexandre's thesis is simple but worrying: our educational and political systems are, for now, incapable of adapting to the unprecedented technological revolution that AI represents. Since Large Language Models all passed an IQ of 150, the game has changed radically. For the first time since it appeared, humankind is no longer the most intelligent species on planet Earth. And the colossal investments the tech giants are pouring into AI only deepen the gap that now separates us from the machine. A new paradigm calls for a new book: Laurent and his co-author Olivier Babeau argue that, outside the world's 20 most prestigious schools, pursuing a degree is no longer worthwhile, and that "the real capital today is action." In other words: the elite will be made up of those who adopt AI earliest (from kindergarten onward) and use it most intensively, not those who spend 10 years in higher education. For his third appearance on GDIY, Laurent, true to form, spares nothing and no one:
Why AI amplifies intellectual inequalities at scale, and how to remedy it
How to build your own AI Aristotle
Why every minister and senior civil servant who does not understand AI should be sacked
How life expectancy could double by 2030
A crucial episode for staying ahead and learning how to end up on the winning side of this revolution. You can contact Laurent on X and follow him on his other networks: Instagram and Linkedin. "Ne faites plus d'études : Apprendre autrement à l'ère de l'IA" is available in all good bookshops or right here: https://amzn.to/4ahLYEB
TIMELINE:
00:00:00: The social divide created by the technological revolution
00:12:32: Why politicians' overall understanding of AI is disastrous
00:20:06: Prompting well and minimizing model hallucinations
00:25:49: "The world of AI is not made for slackers"
00:36:46: The greatest amplifier of inequality in history
00:43:04: Giving every child a personalized AI tutor
00:53:41: Do LLMs really have cognitive biases?
01:03:16: 1 vs 2,900 billion: the staggering investment gap between the US and Europe
01:14:36: What are books written entirely by AI worth?
01:20:39: The era of the first robot plumbers
01:27:45: Laurent and Olivier's four key pieces of advice
01:35:33: How to help our children make good use of these tools
01:44:20: Why are higher-education schools less and less selective?
02:01:28: The "dead internet" theory
Previous GDIY episodes mentioned:
#327 - Laurent Alexandre - Author - ChatGPT & AI: "In 6 months, it will be too late to start paying attention"
#165 - Laurent Alexandre - Doctissimo - The need to stand by your ideas
#487 - VO - Anton Osika - Lovable - Internet, Business, and AI: Nothing Will Ever Be the Same Again
#500 - Reid Hoffman - LinkedIn, Paypal - How to master humanity's most powerful invention
#501 - Delphine Horvilleur - Rabbi, Writer - Keeping the conversation going when everything divides us
#506 - Matthieu Ricard - Buddhist monk - Freeing yourself from outer chaos without cutting yourself off from the world
#450 - Karim Beguir - InstaDeep - General AI? Coming in 2025
#397 - Yann Le Cun - Chief AI Scientist at Meta - Artificial General Intelligence will not come from ChatGPT
We talked about:
Olivier Babeau, Laurent's co-author
An introduction to the thought of Teilhard de Chardin
The dead internet theory
Reading recommendations:
La Dette sociale de la France : 1974 - 2024 - Nicolas Dufourcq
Ne faites plus d'études : Apprendre autrement à l'ère de l'IA - Laurent Alexandre and Olivier Babeau
L'identité de la France - Fernand Braudel
Grammaire des civilisations - Fernand Braudel
Chat GPT va nous rendre immortel - Laurent Alexandre
A big THANK YOU to our sponsors: SquareSpace: squarespace.com/doit Qonto: https://qonto.com/r/2i7tk9 Brevo: brevo.com/doit eToro: https://bit.ly/3GTSh0k Payfit: payfit.com Club Med: clubmed.fr Cuure: https://cuure.com/product-onely
Want to sponsor Génération Do It Yourself or propose a partnership? Contact my label Orso Media via this form.
Hosted by Audiomeans. Visit audiomeans.fr/politique-de-confidentialite for more information.
Our 226th episode with a summary and discussion of last week's big AI news!Recorded on 11/24/2025Hosted by Andrey Kurenkov and co-hosted by Michelle LeeFeel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.aiRead our text newsletter and comment on the podcast at https://lastweekin.ai/In this episode: New AI model releases include Google's Gemini 3 Pro, Anthropic's Opus 4.5, and OpenAI's GPT-5.1, each showcasing significant advancements in AI capabilities and applications.Robotics innovations feature Sunday Robotics' new robot Memo and a $600M funding round for Physical Intelligence, highlighting growth and investment in the robotics sector.AI safety and policy updates include Europe's proposed changes to GDPR and AI Act regulations, and reports of AI-assisted cyber espionage by a Chinese state-sponsored group.AI-generated content and legal highlights involve settlements between Warner Music Group and AI music platform Udio, reflecting evolving dynamics in the field of synthetic media.Timestamps:(00:00:10) Intro / Banter(00:01:32) News Preview(00:02:10) Response to listener commentsTools & Apps(00:02:34) Google launches Gemini 3 with new coding app and record benchmark scores | TechCrunch(00:05:49) Google launches Nano Banana Pro powered by Gemini 3(00:10:55) Anthropic releases Opus 4.5 with new Chrome and Excel integrations | TechCrunch(00:15:34) OpenAI releases GPT-5.1-Codex-Max to handle engineering tasks that span twenty-four hours(00:18:26) ChatGPT launches group chats globally | TechCrunch(00:20:33) Grok Claims Elon Musk Is More Athletic Than LeBron James — and the World's Greatest LoverApplications & Business(00:24:03) What AI bubble? 
Nvidia's strong earnings signal there's more room to grow(00:26:26) Alphabet stock surges on Gemini 3 AI model optimism(00:28:09) Sunday Robotics emerges from stealth with launch of ‘Memo' humanoid house chores robot(00:32:30) Robotics Startup Physical Intelligence Valued at $5.6 Billion in New Funding - Bloomberg(00:34:22) Waymo permitted areas expanded by California DMV - CBS Los Angeles - Waymo enters 3 more cities: Minneapolis, New Orleans, and Tampa | TechCrunchProjects & Open Source(00:37:00) Meta AI Releases Segment Anything Model 3 (SAM 3) for Promptable Concept Segmentation in Images and Videos - MarkTechPost(00:40:18) [2511.16624] SAM 3D: 3Dfy Anything in Images(00:42:51) [2511.13998] LoCoBench-Agent: An Interactive Benchmark for LLM Agents in Long-Context Software EngineeringResearch & Advancements(00:45:10) [2511.08544] LeJEPA: Provable and Scalable Self-Supervised Learning Without the Heuristics(00:50:08) [2511.13720] Back to Basics: Let Denoising Generative Models DenoisePolicy & Safety(00:52:08) Europe is scaling back its landmark privacy and AI laws | The Verge(00:54:13) From shortcuts to sabotage: natural emergent misalignment from reward hacking(00:58:24) [2511.15304] Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models(01:01:43) Disrupting the first reported AI-orchestrated cyber espionage campaign(01:04:36) OpenAI Locks Down San Francisco Offices Following Alleged Threat From Activist | WIREDSynthetic Media & Art(01:07:02) Warner Music Group Settles AI Lawsuit With UdioSee Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Question? Text our Studio direct.In this shocking monthly cyber update, the Cyber Crime Junkies (David, Dr. Sergio E. Sanchez, and Zack Moscow) expose the craziest, must-know stories in tech and security.What's Inside This Episode:The AI Threat is Real: Dr. Sergio reveals how Chinese threat actors manipulated Anthropic's Claude AI system to stage cyber attacks against nearly 30 companies globally. Learn how powerful Large Language Models (LLMs) are leveling the field for malicious coders.The Casino Fish Tank Hack (True Story!): David tells the unbelievable story of how hackers breached a casino's main network by exploiting a smart thermostat inside an exotic fish tank, accessing high-roller financials. This proves critical network segmentation is non-negotiable.The New Scam: ClickFix: David breaks down the terrifying new ClickFix attack, where hackers trick you into literally copying and pasting malicious code into your own computer. Learn the golden rule to protect yourself from this massive, 500% spike in attacks.The Cloudflare Outage: Zack discusses the massive Cloudflare outage that took down major services like ChatGPT, revealing how a seemingly minor configuration error caused massive ripple effects across the entire internet.The iPhone Scam Laundry: Dr. Sergio shares a wild anecdote from his time at Apple about a global scammer laundering stolen or damaged iPhones for new ones, using a loophole caused by a business decision.
Amir Haramaty, Co-Founder and President of aiOla, joins SlatorPod to talk about how spoken, multilingual data can transform enterprise workflows and unlock real ROI.The Co-Founder introduces himself not as a serial entrepreneur but as a serial problem solver, focused on one core challenge: most enterprise data remains uncaptured, unstructured, and unused.Amir emphasizes that traditional speech tech fails in real-world conditions, where accents, noise, and hyper-specific jargon dominate. He illustrates how he tackles this challenge by building workflow-specific language models that extract only the data relevant to a process.Amir says aiOla converts speech not into text but into structured, schema-ready data, allowing organizations to automate workflows, improve compliance, and identify trends long before humans can. He explains that the company focuses on narrow processes rather than general conversation, enabling precision in niche environments.Amir shares how aiOla routinely cuts multi-hour procedures down to minutes, drives efficiency across frontline roles, and creates previously unavailable datasets that feed enterprise intelligence. He highlights ROI examples from supermarkets, airlines, manufacturing, and automotive industries.Amir explains that after proving aiOla's value, he realized the fastest way to scale was through firms already embedded in enterprise digital transformation. He notes that aiOla now partners with UST, Accenture, Salesforce, and Nvidia, creating a distribution engine capable of replicating wins across thousands of clients. He calls this channel strategy a force multiplier that shortens sales cycles and embeds aiOla inside broader modernization initiatives. Amir adds that these partners not only bring scale but also domain expertise aiOla deliberately chose not to build in-house. Amir outlines future priorities, including product-led growth, speech-based coding, and speech-prompted AI agents. 
He predicts that agentic systems will rely heavily on high-quality spoken data, making aiOla's role even more central.
We continue our AI Tools series with a deep dive into using Large Language Models (LLMs) for research, featuring Slobodan (Sani) Manić, AI skeptic, podcaster, and founder of the AI Fluency Club. Sani joins Matt and Moshe to share why context, careful prompting, and critical thinking are essential for getting real value out of today's LLMs in product work. Drawing on his work as a product builder, educator, and host of No Hacks Podcast, Sani challenges common myths about AI's capabilities and underscores both its practical uses and its risks for product managers. The conversation ranges from practical workflows to future visions of invisible AI, open-source models, and the real state of the “wrapper economy” built on major LLM providers. Join Matt, Moshe, and Sani as they explore: - Why most LLM workflows boil down to two mindsets: understanding your work, or avoiding understanding it - The crucial role of context and authority, why careless prompting leads to hallucinations, and how to break questions into smaller steps for better results - How LLMs fit as accelerators for deep research, surfacing insights faster than classic search engines, but always requiring fact-checking - Why Sani uses Google's Gemini and NotebookLM, and the value of integration with your company's existing tools - The open-source LLM alternative: privacy, flexibility, and why some see this as the future for secure enterprise AI - Pitfalls of the “wrapper economy,” vendor lock-in, and shaky business models based on reselling tokens - Starting out: how to include LLMs in PM research without reinventing your workflow, and why you must be careful with company data - The risks and limitations of AI today, especially in enterprise and sensitive environments - How internal AI context in tools like Atlassian makes those LLM features uniquely powerful - Future predictions: AI that fades into the background, plus the big unanswered questions about interface and humanoid robots - Sani's approach to AI 
education, success stories from AI Fluency Club, and what executives need to learn to stay ahead - And much more! Want to learn more or join Sani's community? - LinkedIn: Slobodan (Sani) Manić https://www.linkedin.com/in/sl... - No Hacks Podcast http://nohackspod.com/ - AI Fluency Club https://aifluencyclub.com/ You can also connect with us and find more episodes: - Product for Product Podcast http://linkedin.com/company/pr... - Matt Green https://www.linkedin.com/in/ma... - Moshe Mikanovsky http://www.linkedin.com/in/mik... Note: Any views mentioned in the podcast are the sole views of our hosts and guests, and do not represent the products mentioned in any way. Please leave us a review and feedback ⭐️⭐️⭐️⭐️⭐️
AI is everywhere, from coding assistants to chatbots, but what's really happening under the hood? It often feels like a "black box," but it doesn't have to be. In this episode, Allen sits down with Manning author and AI expert Emmanuel Maggiori to demystify the core concepts behind Large Language Models (LLMs). Emmanuel, author of "The AI Pocket Book," breaks down the entire pipeline, from the moment you type a prompt to the second you get a response. He explains complex topics like tokens, embeddings, context windows, and the controversial training methods that make these powerful tools possible.

IN THIS EPISODE
00:00 - Welcome & Why "The AI Pocket Book" is a Must-Read
8:05 - What Are Tokens?
15:20 - The Basic LLM Pipeline Explained
21:30 - Understanding the Context Window
25:50 - How Embeddings Represent Meaning
35:45 - Controlling Creativity with Temperature
39:30 - How LLMs Learn From Internet Data
45:25 - Fine-Tuning with Human Feedback (RLHF)
51:15 - Why AI Hallucinates
56:45 - When Not to Use
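For readers who want a concrete feel for the "Controlling Creativity with Temperature" segment, here is a minimal, illustrative Python sketch of temperature-scaled softmax sampling. This is a toy stand-in for an LLM's decoding step, not code from the book or the episode; the vocabulary and logit values are invented for the example.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw scores into probabilities, scaled by temperature.

    Low temperature sharpens the distribution (more deterministic);
    high temperature flattens it (more 'creative')."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(vocab, logits, temperature=1.0, rng=random):
    """Pick the next token at random, weighted by softmax probabilities."""
    probs = softmax(logits, temperature)
    return rng.choices(vocab, weights=probs, k=1)[0]

# Toy next-token distribution over a three-word vocabulary.
vocab = ["cat", "dog", "fish"]
logits = [2.0, 1.0, 0.1]

cold = softmax(logits, temperature=0.5)  # sharper: top token dominates
hot = softmax(logits, temperature=2.0)   # flatter: choices more even
print(cold[0], hot[0])
```

At temperature 0.5 the most likely token takes a much larger share of the probability mass than at temperature 2.0, which is why low-temperature output feels repetitive and high-temperature output feels more varied.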
Large language models aren't just improving — they're transforming how we work, learn, and make decisions. In this upcoming episode of Problem Solved, IISE's David Brandt talks with Bucknell University's Dr. Joe Wilck about the true state of LLMs after the first 1,000 days: what's gotten better, what's still broken, and why critical thinking matters more than ever.
Thank you to this episode's sponsor, Autodesk FlexSim
https://www.autodesk.com/
https://www.flexsim.com/
Learn more about The Institute of Industrial and Systems Engineers (IISE)
Problem Solved on LinkedIn
Problem Solved on YouTube
Problem Solved on Instagram
Problem Solved on TikTok
Problem Solved Executive Producer: Elizabeth Grimes
Interested in contributing to the podcast or sponsoring an episode? Email egrimes@iise.org
Sourcegraph's CTO just revealed why 90% of his code now comes from agents—and why the Chinese models powering America's AI future should terrify Washington. While Silicon Valley obsesses over AGI apocalypse scenarios, Beyang Liu's team discovered something darker: every competitive open-source coding model they tested traces back to Chinese labs, and US companies have gone silent after releasing Llama 3. The regulatory fear that killed American open-source development isn't hypothetical anymore—it's already handed the infrastructure layer of the AI revolution to Beijing, one fine-tuned model at a time.
Resources:
Follow Beyang Liu on X: https://x.com/beyang
Follow Martin Casado on X: https://x.com/martin_casado
Follow Guido Appenzeller on X: https://x.com/appenz
Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures.
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Wildest week in AI since December 2024.
In this episode of the Flowgrade Show, I talk with Dr. Reiner Kraft, one of Europe's leading technology experts, a biohacker, founder of the EverHealth platform, and someone who has worked for decades at the intersection of artificial intelligence, mindfulness, and health. Reiner spent 20 years in Silicon Valley, co-developed over 120 US patents, and was named one of MIT Technology Review's Top Innovators under 30. Today he combines his background in high technology with functional medicine and develops tools for the next stage of preventive health. We talk about how AI can influence our health, what large language models (like ChatGPT) can really do, and where their limits lie. Reiner explains why data-driven prevention is the key to healthy longevity and gives deep insights into his new platform EverHealth, with which he aims to make "functional longevity" accessible to as many people as possible. If you want to know how technology can help you live a healthier life (without making you dependent on it), this episode is for you. Enjoy listening! Go for Flow!
In this episode, host Larry D. Woodard interviews John Pasmore, founder and CEO of Latimer.ai, an inclusive large language model designed to address bias in AI by being trained on a diverse dataset that includes the experiences, cultures, and histories of Black and Brown communities. Thanks for listening. Don't forget to subscribe.
Google Search Console (GSC) New! Branded and Non-Branded Queries + Annotation Filters | Marketing Talk with Favour Obasi-Ike | Sign up for exclusive SEO insights.

This episode focuses on Search Engine Optimization (SEO) and the new features within Google Search Console (GSC). Favour discusses the recently introduced brand queries and annotations features in GSC, highlighting their importance for understanding both branded and non-branded search behavior. The conversation also emphasizes the broader strategic use of GSC data, comparing it to a car's dashboard for website performance, and explores how this data can be leveraged to create valuable content, such as FAQ-based blog posts and multimedia assets, often with the aid of Artificial Intelligence (AI) tools. A key theme is the shift from traditional keyword ranking to ranking for user experience and the interconnectedness of various digital tools in modern marketing strategy.
--------------------------------------------------------------------------------
Next Steps for Digital Marketing + SEO Services:
>> Need SEO Services? Book a Complimentary SEO Discovery Call with Favour Obasi-Ike
>> Visit our Work and PLAY Entertainment website to learn about our digital marketing services.
>> Visit our Official website for the best digital marketing, SEO, and AI strategies today!
>> Join our exclusive SEO Marketing community
>> Read SEO Articles
>> Subscribe to the We Don't PLAY Podcast
--------------------------------------------------------------------------------
As a content strategist, you live with a fundamental uncertainty. You create content you believe your audience needs, but a nagging question always remains: are you hitting the mark? It often feels like you're operating with a blind spot, focusing on concepts while, as the experts say, "you don't even know the intention behind why they're asking or searching." What if you could close that gap?
What if your audience could tell you, explicitly, what they need you to create next? That's the paradigm shift happening right now inside Google Search Console (GSC). Long seen as a technical tool, recent updates are transforming GSC into a strategic command center. It's no longer just for SEO specialists; it's the dashboard for your entire content operation. These new developments reveal direct intelligence from your audience that will change how you plan, create, and deliver content. Here are the five truths these new GSC features reveal, and how they give you a competitive edge.

1. Stop Driving Your Website Blind: The Dashboard Analogy
Managing a website without GSC is like driving a car without a dashboard. You're moving, but you have no idea how fast you're going or if you're about to run out of fuel. GSC is that free, indispensable dashboard providing direct intelligence straight from Google. But the analogy runs deeper. As one strategist put it, driving isn't passive: "when you're driving, you got to hit the gas, you got to... hit the brakes... when do you stop, when do you go, what do you tweak? Do you go to a pit stop?" You wouldn't drive your car without looking at the dashboard, so you shouldn't run a website, drive traffic, and do everything else we do without looking at GSC. Your content strategy requires the same active management: knowing when to accelerate, when to pivot, and when to optimize. The new features make this "dashboard" more intuitive than ever, giving you the controls you need to navigate with precision.

2. The Goldmine in Your Search Queries: Branded vs. Non-Branded
The first game-changing update is the new "brand queries" filter. For the first time, GSC allows you to easily separate searches for your specific brand name (branded) from searches for the topics and solutions you offer (non-branded). This is the first step in a powerful new workflow: Discovery. Think of your non-branded queries as raw, unfiltered intelligence from your potential audience. These aren't just keywords; they're direct expressions of need. Instead of an abstract concept, you see tangible examples like:
• "best practices for washing dishes"
• "best pet shampoo"
• "best Thanksgiving turkey meal"
When you see more non-branded than branded queries, it's a powerful signal. It means you have access to a goldmine of raw material you can build content on to attract a wider audience that doesn't know your brand... yet. This isn't just data; it's a direct trigger for your next move.

3. From Keyword to "Keynote": Creating Content with Context
Once you've discovered this raw material, the next step is Development. This is where you transform an unstructured keyword into a strategic asset by adding structure and meaning. It's a progression: a raw keyword becomes a more defined keyphrase, which can be built into a keystone concept, and ultimately refined into a keynote. What's a keynote? Think about its real-world meaning: "when somebody sends you a note, it has context, right? It's supposed to mean something and it's supposed to say something specific." A keynote isn't just a search term; it's that term fully developed into a structured piece of content that delivers a specific, meaningful answer. This strategic asset can take many forms:
• Blogs
• Podcast episodes
• Articles
• Newsletters
• Videos/Reels
• eBooks

4. The Most Underrated SEO Tactic: Your New Secret Weapon
You've discovered the query and developed it into a keynote. Now it's time for Execution. The single most effective format for executing on this strategy is one of the most powerful, yet underrated, SEO tactics in history: creating content around Frequently Asked Questions (FAQs). The rise of Large Language Models (LLMs) has fundamentally changed search behavior. People are asking full, conversational questions, and search engines are prioritizing direct, authoritative answers. A "one blog per FAQ" strategy is the perfect response. It's a secret weapon that's almost shockingly effective. As the host put it: "FAQ is the new awesome, the most awesome ever. I said that on purpose." How awesome? By creating a single, targeted blog post for the long-tail question, "full roof replacement cost [city]," one site ranked number one on Google for that exact phrase in just 30 minutes. That's the power of directly answering a question your audience is already asking.

5. It's Not About New Features, It's About New Actions
The real purpose of these GSC updates isn't to give you more charts to observe; it's to prompt decisive action. Every non-branded query is a signal for what content to create next, feeding a strategic loop that builds your authority over time. This is where it all comes together in a professional content framework. As the source material notes, "That's why you have content pillars and you have content clusters." Your non-branded queries show you what clusters your audience needs, and your FAQ-style "keynotes" become the assets that build out those clusters around your core content pillars. This data-driven approach empowers you to:
• Recreate outdated content with new, relevant insights.
• Repurpose core ideas into different formats to reach wider audiences.
• Re-evaluate which topics are truly resonating.
• Reemphasize your most valuable messages with fresh content.

Conclusion: What Does Your Dashboard Say?
Google Search Console is no longer just a reporting tool. It has evolved into an essential strategic partner that closes the gap between the content you produce and the value your audience is searching for. It's your direct line to understanding intent, allowing you to move from guessing what people want to knowing what they need. Now that you know how to read your website's dashboard, what's the first turn you're going to make?

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
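The branded vs. non-branded split at the heart of the new GSC filter can also be approximated on any exported query list. The sketch below is a hedged Python illustration: the brand terms, query strings, and function names are invented for the example, and GSC's own filter should be treated as the source of truth.

```python
import re

# Hypothetical brand terms for an imaginary "Acme" site; in practice you
# would list your own brand names, including common misspellings.
BRAND_TERMS = ["acme", "acme widgets"]

def is_branded(query, brand_terms=BRAND_TERMS):
    """Whole-word match of any brand term inside the query."""
    q = query.lower()
    return any(re.search(r"\b" + re.escape(t) + r"\b", q) for t in brand_terms)

def split_queries(queries):
    """Partition an exported query list into branded and non-branded."""
    branded, non_branded = [], []
    for q in queries:
        (branded if is_branded(q) else non_branded).append(q)
    return branded, non_branded

# Example queries echoing the ones discussed in the episode.
queries = [
    "acme pet shampoo",
    "best pet shampoo",
    "best practices for washing dishes",
    "acme reviews",
]
branded, non_branded = split_queries(queries)
print(branded)       # brand-aware searches
print(non_branded)   # topic searches: the content goldmine
```

When non-branded queries outnumber branded ones, as the episode notes, that surplus is the raw material for new FAQ-style content.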
What does it take to build tech the world actually trusts? Wikipedia founder Jimmy Wales joins the crew to dig into the real crisis behind AI, social networks, and the web: trust, and how to build it when the stakes are global. Teen founders raise $6M to reinvent pesticides using AI — and convince Paul Graham to join in Introducing SlopStop: Community-driven AI slop detection in Kagi Search Part 1: How I Found Out $1 billion AI company co-founder admits that its $100 a month transcription service was originally 'two guys surviving on pizza' and typing out notes by hand His announcement leaving Meta White House Working on Executive Order to Foil State AI Regulations Nvidia stock soars after results, forecasts top estimates with sales for AI chips 'off the charts' Jeff Bezos Creates A.I. Start-Up Where He Will Be Co-Chief Executive Jack Conte: I'm Building an Algorithm That Doesn't Rot Your Brain AI love, actually Cat island road trip: liquidator's warehouse Gentype The Carpenter's Son... My excerpt from the Q&A Image of the paper Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Jimmy Wales Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: ventionteams.com/twit zapier.com/machines agntcy.org spaceship.com/twit
This episode covers:
Cardiology This Week: A concise summary of recent studies
'ChatGPT, MD?' - Large Language Models at the Bedside
Management decisions in myocarditis
Statistics Made Easy: Mendelian randomisation
Host: Emer Joyce
Guests: Carlos Aguiar, Folkert Asselbergs, Massimo Imazio
Want to watch that episode? Go to: https://esc365.escardio.org/event/2179
Want to watch that extended interview on 'ChatGPT, MD?': Large Language Models at the Bedside? Go to: https://esc365.escardio.org/event/2179?resource=interview
Disclaimer: ESC TV Today is supported by Bristol Myers Squibb and Novartis through independent funding. The programme has not been influenced in any way by its funding partners. This programme is intended for health care professionals only and is to be used for educational purposes. The European Society of Cardiology (ESC) does not aim to promote medicinal products nor devices. Any views or opinions expressed are the presenters' own and do not reflect the views of the ESC. The ESC is not liable for any translated content of this video. The English language always prevails.
Declarations of interests: Stephan Achenbach, Folkert Asselbergs, Yasmina Bououdina, Massimo Imazio, Emer Joyce, and Nicolle Kraenkel have declared to have no potential conflicts of interest to report. Carlos Aguiar has declared to have potential conflicts of interest to report: personal fees for consultancy and/or speaker fees from Abbott, AbbVie, Alnylam, Amgen, AstraZeneca, Bayer, BiAL, Boehringer-Ingelheim, Daiichi-Sankyo, Ferrer, Gilead, GSK, Lilly, Novartis, Pfizer, Sanofi, Servier, Takeda, Tecnimede. John-Paul Carpenter has declared to have potential conflicts of interest to report: stockholder MyCardium AI. Davide Capodanno has declared to have potential conflicts of interest to report: Bristol Myers Squibb, Daiichi Sankyo, Sanofi Aventis, Novo Nordisk, Terumo.
Konstantinos Koskinas has declared to have potential conflicts of interest to report: honoraria from MSD, Daiichi Sankyo, Sanofi. Steffen Petersen has declared to have potential conflicts of interest to report: consultancy for Circle Cardiovascular Imaging Inc. Calgary, Alberta, Canada. Emma Svennberg has declared to have potential conflicts of interest to report: Abbott, AstraZeneca, Bayer, Bristol-Myers Squibb-Pfizer, Johnson & Johnson.
Host: Emer Joyce
Guest: Folkert Asselbergs
Want to watch that episode? Go to: https://esc365.escardio.org/event/2179
Want to watch that extended interview on 'ChatGPT, MD?': Large Language Models at the Bedside? Go to: https://esc365.escardio.org/event/2179?resource=interview
You ever see a new AI model drop and be like.... it's so good OMG how do I use it?
Large Language Models, or LLMs, are infiltrating every facet of our society, but do we even understand what they are? In this fascinating deep dive into the intersection of technology, language, and consciousness, the Wizard offers a few new ways of perceiving these revolutionary—and worrying—systems. Got a question for the Wizard? Call the Wizard Hotline at 860-415-6009 and have it answered in a future episode! Join the ritual: www.patreon.com/thispodcastisaritual
Large language models aren't just improving — they're transforming how we work, learn, and make decisions. In this upcoming episode of Problem Solved, IISE's David Brandt talks with Bucknell University's Dr. Joe Wilck about the true state of LLMs after the first 1,000 days: what's gotten better, what's still broken, and why critical thinking matters more than ever.
Sponsored by Autodesk FlexSim
Learn more about The Institute of Industrial and Systems Engineers (IISE)
Problem Solved on LinkedIn
Problem Solved on YouTube
Problem Solved on Instagram
Problem Solved on TikTok
Problem Solved Executive Producer: Elizabeth Grimes
Interested in contributing to the podcast or sponsoring an episode? Email egrimes@iise.org
Adam D'Angelo (Quora/Poe) thinks we're 5 years from automating remote work. Amjad Masad (Replit) thinks we're brute-forcing intelligence without understanding it. In this conversation, two technical founders who are building the AI future disagree on almost everything: whether LLMs are hitting limits, if we're anywhere close to AGI, and what happens when entry-level jobs disappear but experts remain irreplaceable. They dig into the uncomfortable reality that AI might create a "missing middle" in the job market, why everyone in SF is suddenly too focused on getting rich to do weird experiments, and whether consciousness research has been abandoned for prompt engineering. Plus: Why coding agents can now run for 20+ hours straight, the return of the "sovereign individual" thesis, and the surprising sophistication of everyday users juggling multiple AIs.
Resources:
Follow Amjad on X: https://x.com/amasad
Follow Adam on X: https://x.com/adamdangelo
Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Stay Updated:
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Podcast on Spotify
Listen to the a16z Podcast on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.