In this episode of Theory & Insights, we bring together two thought leaders at the intersection of healthcare innovation and pharmaceutical manufacturing — John Nosta, renowned AI and technology theorist and founder of NostaLab, and Stephen Beckman, CEO of YARAL Pharma, a rising force in U.S. generics. Together, they dive into the evolving impact of Artificial Intelligence (AI) and Large Language Models (LLMs) on pharmaceutical manufacturing. The discussion covers the promise and peril of AI in reshaping everything from R&D to regulatory pathways, as well as the ethics, economics, and operational shifts that could redefine the industry in the next decade. This is a must-listen for pharma execs, digital health strategists, and technology innovators looking to understand what's next.
In this AI research paper reading, we dive into "A Watermark for Large Language Models" with the paper's author John Kirchenbauer. This paper is a timely exploration of techniques for embedding invisible but detectable signals in AI-generated text. These watermarking strategies aim to help mitigate misuse of large language models by making machine-generated content distinguishable from human writing, without sacrificing text quality or requiring access to the model's internals. Learn more about the A Watermark for Large Language Models paper. Learn more about agent observability, LLM observability, and AI evaluation; join the Arize AI Slack community or get the latest on LinkedIn and X.
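As background for the paper discussion, here is a minimal sketch of the "soft watermark" idea described in Kirchenbauer et al., assuming plain NumPy logits rather than any particular model API; the hash scheme and constants are illustrative defaults, not the paper's exact implementation:

```python
import hashlib
import numpy as np

def greenlist_mask(prev_token_id: int, vocab_size: int, gamma: float = 0.5) -> np.ndarray:
    # Seed a PRNG from the previous token, so a detector can recompute
    # the same "green list" later without access to the model.
    seed = int(hashlib.sha256(str(prev_token_id).encode()).hexdigest(), 16) % 2**32
    rng = np.random.default_rng(seed)
    green = rng.choice(vocab_size, size=int(gamma * vocab_size), replace=False)
    mask = np.zeros(vocab_size, dtype=bool)
    mask[green] = True
    return mask

def watermarked_logits(logits: np.ndarray, prev_token_id: int, delta: float = 2.0) -> np.ndarray:
    # Softly boost green-list tokens; sampling from these logits leaves
    # a statistical trace (an excess of green tokens) that a detector
    # can later test for, without a visible change in text quality.
    return logits + delta * greenlist_mask(prev_token_id, logits.shape[-1])

# Toy usage: bias one sampling step over a 50k-token vocabulary.
logits = np.random.randn(50_000)
biased = watermarked_logits(logits, prev_token_id=42)
```

Detection then amounts to recomputing the green list for each position and running a one-proportion z-test on how often the observed tokens fall in it.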
a16z General Partners Erik Torenberg and Martin Casado sit down with technologist and investor Balaji Srinivasan to explore how the metaphors we use to describe AI—whether as god, swarm, tool, or oracle—reveal as much about us as they do about the technology itself. Balaji, best known for his work in crypto and network states, also brings a deep background in machine learning. Together, the trio unpacks the evolution of AI discourse, from monotheistic visions of a singular AGI to polytheistic interpretations shaped by culture and context. They debate the practical and philosophical: the current limits of AI, why prompts function like high-dimensional programs, and what it really takes to “close the loop” in AI reasoning. This is a systems-level conversation on belief, control, infrastructure, and the architectures that might govern future societies.

Timecodes:
0:00 Introduction: The Polytheistic AGI Framework
1:46 Personal Journeys in AI and Crypto
3:18 Monotheistic vs. Polytheistic AGI: Competing Paradigms
8:20 The Limits of AI: Chaos, Turbulence, and Predictability
9:29 Platonic Ideals and Real-World Systems
14:10 Decentralized AI and the End of Fast Takeoff
14:34 Surprises in AI Progress: Language, Locomotion, and Double Descent
25:45 Prompting, Verification, and the Age of the Phrase
29:44 AI, Crypto, and the Grounding Problem
34:26 Visual vs. Verbal: Where AI Excels and Struggles
37:19 The Challenge of Markets, Politics, and Adversarial Systems
40:11 Amplified Intelligence: AI as a Force Multiplier
43:37 The Polytheistic Counterargument: Convergence and Specialization
48:17 AI's Impact on Jobs: Specialists, Generalists, and the Future of Work
57:36 Security, Drones, and Digital Borders
1:03:41 AI, Power, and the Balance of Control
1:06:33 The Coming Anti-AI Backlash
1:09:10 Global Implications: Labor, Politics, and the Future

Resources:
Find Balaji on X: https://x.com/balajis
Find Martin on X: https://x.com/martin_casado

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Send us a text

Today's episode introduces Model Context Protocol (MCP), an open standard designed to enable Artificial Intelligence (AI) applications, particularly Large Language Models (LLMs), to seamlessly interact with third-party tools and data sources. It explains MCP's architecture, including hosts, clients, servers, and external tools, and highlights its benefits, such as eliminating knowledge cut-offs, reducing hallucinations, and enhancing AI's capability to perform real-world actions. The discussion also touches upon the growing adoption of MCP servers by cybersecurity vendors to facilitate natural language interaction with security platforms, while acknowledging the potential security implications of this new architectural layer.

Support the show

Google Drive link for Podcast content: https://drive.google.com/drive/folders/10vmcQ-oqqFDPojywrfYousPcqhvisnko
My Profile on LinkedIn: https://www.linkedin.com/in/prashantmishra11/
Youtube Channel: https://www.youtube.com/@TheCybermanShow
Twitter handle: https://twitter.com/prashant_cyber

PS: The views are my own and don't reflect any views from my employer.
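To make the host-client-server relationship from the episode above concrete, here is a minimal client-side sketch, assuming the official MCP Python SDK (the `mcp` package) and a hypothetical local server script named server.py; the host application spawns the server as a subprocess over stdio and asks what tools it exposes:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical local MCP server launched over stdio; any MCP-capable
# host (a chat app, an IDE, a security console) plays this client role.
server_params = StdioServerParameters(command="python", args=["server.py"])

async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()          # protocol handshake
            tools = await session.list_tools()  # discover server tools
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```

The discovery step is what lets an LLM learn, at runtime, which real-world actions it can request, rather than relying on knowledge baked in at training time.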
In this week's episode of Search with Candour, Jack Chambers-Ward and Mark Williams-Cook discuss the evolving landscape of search in the context of Large Language Models (LLMs) and the challenges they bring, including A LOT of spam and how LLMs are being manipulated in search. They talk about the potential future of AI search and the implications for brands and consumers, as well as the responsibilities of monitoring and mitigating misinformation, the need for in-depth product data, and the feasibility of AI taking over transactional tasks.

Sponsored by fatjoe:
Are you ready to get started? Sign up for your free fatjoe account: https://fatjoe.com/

References:
Use The Brand Control Quadrant To Reclaim Your Brand Narrative: https://www.youtube.com/watch?v=mMx3u6fgg5w
Why OpenAI & Perplexity want clickstream data: https://www.linkedin.com/posts/myriamjessier_ai-search-marketing-activity-7348972981231988738-jDHI
Hacked sites and expired domains are being cited by ChatGPT: https://digitaloft.co.uk/hacked-sites-and-expired-domains-are-being-used-as-chatgpt-sources/

00:00 Introduction and banter
01:28 Discussing LLM Spam and Manipulation
02:16 Sponsor Message: Fatjoe
03:59 The Uses of LLMs in Search
06:03 Challenges and Future of AI Search
16:38 Phishing and Security Concerns with LLMs
19:54 Responsibility and Brand Protection
24:47 The Future of AI and Search
31:10 Damage Control in the Age of Generative AI
31:41 LLMs are Leaky Buckets
32:48 Firefighting Tools for AI Errors
34:22 The Importance of Brand Reputation
35:15 High-Value Leads and Conversion Rates
36:46 Misleading AI Conversations
37:27 SEO Strategies for E-commerce
40:14 The Future of AI in E-commerce
44:33 The Impact of AI on Consumer Behaviour
47:23 Concluding Thoughts and Upcoming Events
If you've been to any tech conferences lately, especially around generative AI and cloud, you've likely heard this buzzword. Today, we're breaking down what MCP servers are, why they're crucial for advanced AI applications, and how you can confidently deploy and even build them, particularly within an enterprise AWS ecosystem. This episode assumes you have a foundational understanding of LLMs and cloud architecture, and while our examples lean into AWS and .NET, the concepts are broadly applicable. So, let's jump right in!

Introduction to MCP Servers: We'll start by setting the stage, explaining what MCP servers are and why they've become so relevant in the world of AI, especially as of mid-2025.
The Core Idea of the MCP Protocol: We'll break down the fundamental concept behind the MCP protocol, which standardizes how Large Language Models (LLMs) interact with external tools and data, freeing LLMs to focus on intelligence.
Anatomy of an MCP Application: We'll look at the higher-level components of an application using an MCP server, including the application itself, the MCP client, and the server's key elements: resources, actions, and prompts.
Tackling Enterprise Integration Challenges: A critical discussion on how authentication has evolved for MCP servers, from basic keys to robust OAuth 2.1 and Resource Servers, enabling secure enterprise integration with identity providers like Amazon Cognito.
AWS's Blueprint for Enterprise MCP Deployments: We'll walk through AWS's recommended architecture for securely deploying MCP servers, covering everything from CloudFront and AWS WAF to ALBs, authentication services, and serverless compute options like Fargate and Lambda.
The Next Evolution: MCP with Amazon Bedrock AgentCore: Discover how AWS Bedrock AgentCore further streamlines agent development and MCP server management, offering specialized runtimes, gateways, and built-in identity and observability.
Real-World MCP Server Examples: We'll highlight the growing popularity of MCP servers by examining a practical use case: the AWS Documentation MCP Server, and how it empowers AI assistants with real-time, accurate knowledge.
Building Your Own MCP Server in .NET (High-Level): For our developer listeners, we'll provide a concise, step-by-step guide on how to get started building your own MCP server using .NET, from setting up your web app to defining your AI's capabilities (a minimal server sketch in Python follows below).
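The episode's build walkthrough is .NET-based; as a language-neutral companion, here is a minimal sketch of the same server-side ideas using the MCP Python SDK's FastMCP helper. The documentation tool and resource below are illustrative stand-ins, not the AWS Documentation MCP Server's actual API:

```python
from mcp.server.fastmcp import FastMCP

# Name the server; MCP clients see this during the initialize handshake.
mcp = FastMCP("docs-demo")

@mcp.tool()
def search_docs(query: str) -> str:
    """Return a (stubbed) documentation snippet for a query."""
    # A real server would call a search index or documentation API here.
    return f"Top result for '{query}': ..."

@mcp.resource("docs://{page}")
def get_page(page: str) -> str:
    """Expose read-only documentation pages as MCP resources."""
    return f"Contents of {page}"

if __name__ == "__main__":
    # Serve over stdio so an MCP host can spawn this as a subprocess;
    # enterprise deployments front this with auth and load balancing,
    # as the episode's AWS architecture discussion covers.
    mcp.run()
```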
The belief is spreading like wildfire: enter a few specific prompts into ChatGPT and you can ‘unlock' the ‘sentience' that is waiting to reveal the secrets of the Ancients, or the Aliens, or of God Himself. Not only is this a gross (and dangerous) over-estimation of what a Large Language Model is, it also misses the point about what constitutes a genuine, deep and meaningful relationship.
When you ask ChatGPT or Gemini a question about politics, whose opinions are you really hearing?

In this episode, we dive into a provocative new study from political scientist Justin Grimmer and his colleagues, which finds that nearly every major large language model—from ChatGPT to Grok—is perceived by Americans as having a left-leaning bias. But why is that? Is it the training data? The guardrails? The Silicon Valley engineers? Or something deeper about the culture of the internet itself?

The hosts grapple with everything from “Mecha Hitler” incidents on Grok to the way terms like “unhoused” sneak into AI-generated text—and what that might mean for students, voters, and future regulation. Should the government step in to ensure “political neutrality”? Will AI reshape how people learn about history or policy? Or are we just projecting our own echo chambers onto machines?
In this episode of B2B Marketing Excellence, Donna Peterson breaks down the intimidating term “prompt engineering” and shows how it's simply a smarter, more consistent way to work—no tech degree required. Drawing from her experience with generative AI and recent insights from the Vanderbilt Prompt Engineering course, Donna shares practical ways to use prompts for repetitive marketing tasks like campaign planning and list recommendations. You'll hear how creating simple, reusable prompts not only saves time but also ensures your whole team is aligned—producing clear, professional results.

You'll also learn:
Why prompting is more about conversation than coding.
How a well-written prompt becomes a shortcut you can use again and again.
The difference between prompts and templates—and how to use both for better outcomes.

At World Innovators, we focus on providing tools and strategies that make your work easier, your messaging clearer, and your outcomes more consistent. This episode offers practical examples to help you build confidence using AI in a way that's simple and effective. For a step-by-step walkthrough, refer to the "Prompt Engineering Examples for Business Teams: 3 ChatGPT Prompt Templates to Boost Productivity" video on YouTube: https://youtu.be/FAlcjTx_xUo?si=uQv6-naLnQGIkn4S.

Episode Timestamps:
00:00 – Welcome & why the term “prompt engineering” can feel overwhelming
00:38 – What prompting really is (and what it's not)
01:33 – The early struggles: over-explaining and second-guessing
03:02 – Aha moment from the Vanderbilt course
04:47 – Using prompts to simplify and speed up repetitive tasks
06:14 – Real-world example: Scheduling campaigns with one simple prompt
10:22 – Understanding the difference between prompts and templates
12:59 – Encouragement to just start talking to your AI assistant

If you found this episode helpful, subscribe to the World Innovators YouTube Channel for more practical ideas on B2B marketing and using AI tools effectively. Leave a review to help us spread the word about quality marketing that puts people first. If you need help building your prompt library or training your team, reach out directly to Donna at dpeterson@worldinnovators.com.
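To ground the prompt-vs-template distinction from the episode above in something runnable, here is a minimal sketch; the field names (channel, product, audience, weeks) are hypothetical, not from the episode. The template is the reusable asset; each filled-in copy is a prompt.

```python
from string import Template

# A reusable prompt template: write it once, fill it in for every campaign.
CAMPAIGN_PROMPT = Template(
    "You are a B2B marketing assistant. Draft a $channel campaign "
    "schedule for $product aimed at $audience over $weeks weeks. "
    "Return a table with send dates, subject lines, and goals."
)

# Each substitution turns the template into a concrete prompt.
prompt = CAMPAIGN_PROMPT.substitute(
    channel="email",
    product="industrial sensors",          # hypothetical product line
    audience="plant operations managers",  # hypothetical audience
    weeks=6,
)
print(prompt)
```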
How do you bring AI agents to your organization? Richard chats with April Dunnam about her experiences with Copilot Studio, Microsoft's tool for building various agents for your organization. April discusses the multiple approaches available today for utilizing generative AI and the benefits of leveraging template-driven and low-code solutions to capitalize on the latest features in agentic AI. The conversation also delves into the relationship between M365 Copilot and Copilot Studio for creating extensions and focused functionality. There's a significant amount of power here if you take the time to learn the tools!

Links:
Microsoft Copilot Studio
Build your First Copilot Studio Agent in Minutes
Playwright MCP
Testing Copilot Studio Agents
Agent Flows
Dataverse MCP
April's Copilot Estimator

Recorded July 8, 2025
Dinis Guarda citiesabc openbusinesscouncil Thought Leadership Interviews
Erik Schwartz is a technologist and serial entrepreneur with over 20 years of experience in the tech sector, specialising in AI, information search and retrieval, and knowledge discovery. Erik is the Co-Founder and Chief AI Officer at Elysia.ai Labs, integrating Large Language Models (LLMs) and Generative AI into search technologies and offering automation, intelligent AI assistants, and multimodal AI experiences that help businesses unlock AI's full potential and achieve real operational outcomes.

Read more about Erik Schwartz: https://businessabc.net/wiki/erik-schwartz

Erik Schwartz Interview Questions
00:00 - 1:57 Key highlights
1:58 - 05:59 Introduction
05:00 - 12:51 Erik's Background
11:00 - 15:41 Erik's Career Evolution
15:42 - 21:03 Family Influence on Career Journey
21:04 - 29:09 Video Intelligence at Comcast
29:10 - 34:05 Building Elysia.ai Labs
34:06 - 39:59 Generative AI and Search
40:00 - 49:26 Challenges of Solo Entrepreneurship
49:27 - 53:03 Client Interaction and Preparedness
53:04 - 57:35 Bridging the Digital Divide
57:36 - 1:04:34 Building Collaborative Communities
1:04:35 - 1:08:01 AI Agents in Business
1:08:02 - 1:13:56 AI's Impact on Workplaces
1:13:57 - 1:18:38 Ethical and Responsible Use of AI
1:18:38 - 1:20:46 Closure

Useful Links and Resources
https://uk.linkedin.com/in/eschwaa
https://aibusiness.com/author/erik-schwartz

About citiesabc.com: https://www.citiesabc.com/
About businessabc.net: https://www.businessabc.net/
About fashionabc.org: https://www.fashionabc.org/
About Dinis Guarda: https://www.dinisguarda.com/ and https://businessabc.net/wiki/dinis-guarda

Business Inquiries: info@ztudium.com

Support the show
This episode is sponsored by SearchMaster. Optimize your content for traditional search engines AND next-generation AI traffic from Large Language Models like ChatGPT, Claude, and Perplexity. Future-proof your SEO strategy. Be among the first 50 users to sign up and get 6 months of Enterprise tier for free! Watch this episode on YouTube! In this episode of the Marketing x Analytics podcast, host Alex Sofronas interviews Matthew Plese, president of catechismclass.com, about their efforts in optimizing Google Ads campaigns for his B2C business. Matthew shares insights on keyword strategies, the importance of analyzing organic versus paid searches, and the adaptability needed in digital marketing. They discuss specific strategies to improve ROI, including conversion value implementation, keyword analysis, ad creative enhancements, and A/B testing. The conversation highlights the necessity of continuous optimization and data-driven decision-making in successful online advertising. Follow Marketing x Analytics! X | LinkedIn Click Here for Transcribed Episodes of Marketing x Analytics All views are our own.
Sarah Schmidt, Head of Corporate Strategy, Transformation & Portfolio at SAP, shares insights into the strategy of one of the world's largest software companies and explains why a focus on applications and customer value matters more than building its own ChatGPT. In this episode you'll learn:
- How SAP has grown from a startup into a global tech company over the past 50 years.
- Why SAP deliberately does not develop its own Large Language Model and instead focuses on practical AI applications.
- What role data, AI agents, and a broad business suite play in the future of enterprise software.
- How SAP has managed the global transformation toward cloud and AI, and what that means for 100,000 employees.
- What Sarah thinks about digital sovereignty, and why she believes Europe needs to invest more wisely in AI.
Christoph on LinkedIn: https://www.linkedin.com/in/christophburseg
Contact us via Instagram: https://www.instagram.com/vodafonebusinessde/
You have probably seen recent headlines that Microsoft has developed an AI model that is 4x more accurate than humans at difficult diagnoses. It's been published everywhere: AI was 80% accurate compared to a measly 20% human rate, and AI was cheaper too! Does this signal the end of the human physician? Is the title nothing more than clickbait? Or is the truth somewhere in between? Join Behind the Knife fellow Ayman Ali and Dr. Adam Rodman from Beth Israel Deaconess/Harvard Medical School to discuss what this study means for our future.

Studies:
Sequential Diagnosis with Large Language Models: https://arxiv.org/abs/2506.22405v1
METR study: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

Hosts:
Ayman Ali, MD
Ayman Ali is a Behind the Knife fellow and general surgery PGY-4 at Duke Hospital in his academic development time, where he focuses on applications of data science and artificial intelligence to surgery.
Adam Rodman, MD, MPH, FACP, @AdamRodmanMD
Dr. Rodman is an Assistant Professor and a practicing hospitalist at Beth Israel Deaconess Medical Center. He's the Beth Israel Deaconess Medical Center Director of AI Programs. In addition, he's the co-director of the Beth Israel Deaconess Medical Center iMED Initiative. Podcast Link: http://bedside-rounds.org/

Please visit https://behindtheknife.org to access other high-yield surgical education podcasts, videos and more. If you liked this episode, check out our recent episodes here: https://app.behindtheknife.org/listen
Tech Productivity to AI to Cybersecurity to Sports Cars - Best of Tech 1st Half 2025 - AZ TRT S06 EP12 (274) 7-6-2025

What We Learned This Week
ChatGPT is an AI chatbot, developed by OpenAI, that can engage in human-like conversations
Obvious Future is building Machine Learning (AI) programs to be used onsite for a business
Oilstainlab creates high-end retro-futuristic designer sports cars - in EV models
ACTRA - Cyber threats affect everyone from government to business to private individuals, and are growing

Clips from podcasts focused on tech in the 1st half of 2025

Notes:

Segment 1: Tech Productivity - AZ TRT S06 EP06 (267) 3-23-2025
What We Learned This Week
ChatGPT is an AI chatbot, developed by OpenAI, that can engage in human-like conversations
ChatGPT can read docs, edit docs, answer Qs, and transcribe
Elevenreader – app that turns any document into audio
Google Drive – office suite of tools for spreadsheets, docs, powerpoints, & more
Todoist – task management program
Pocket – web research tool that saves & organizes links

Guest: Denver Nowicz, President - Wealth For Life
http://wealthforlife.net/
Denver is an advisor with nearly 20 years experience working with clients in investments and insurance, designing retirement plans with a combo of both. He takes us through different strategies for clients to get the best allocations for their money over the long term. It is the Combo Strategy of both Offense and Defense, the synergy of the mix, not ‘All or Nothing'.
Full Show: HERE

Segment 2: Cybersecurity Response Plan w/ Frank Grimmelmann of ACTRA - AZ TRT S06 EP03 (264) 2-9-2025
What We Learned This Week
ACTRA - Arizona Cyber Threat Response Alliance
Cyber threats affect everyone from government to business to private individuals, and are growing
Companies need to respond with speed to be effective + share information about attacks
ACTRA has members from both government and the private sector
ACTRA helped create a state cybersecurity response model that other states can use

Guest: Frank Grimmelmann
https://www.actraaz.org/actra/leadership
President & CEO/Intelligence Liaison Officer
Mr. Grimmelmann also serves as Co-Chair (together with Arizona's Chief Information Security Officer) for the Arizona Cybersecurity Team (‘ACT'), created through the Governor's Executive Order signed in March 2018. He also serves as a Founding Member of the National Leadership Group for the Information Sharing & Analysis Organization Standards Organization (‘ISAO SO') at the University of Texas San Antonio (UTSA), created under the President's Executive Order 13691 in February 2015. As ACTRA's leader, Mr. Grimmelmann was invited as the first private sector representative in the Arizona Counter Terrorism Information Center (ACTIC) and served as its first private sector Executive Board representative from 2014-2019. He presently acts as ACTRA's designated private sector liaison to ACTRA's Key Agency and other non-Member Stakeholders.
Full Show: HERE

Segment 3: Futuristic EV Designer Sports Car w/ Nikita Bridan of Oilstainlab - AZ TRT S06 EP02 (263) 1-26-2025
What We Learned This Week
Oilstainlab creates high-end retro-futuristic designer sports cars - in EV models
EV car designers for gearheads who hate EVs
All the capabilities of a sports car, on a lightweight carbon fiber frame, + sound & an electric motor
Inspired by the race cars of Italy & classic 1960s sports cars

Guest: Nikita Bridan, Co-Founder, CEO
Nikita Bridan is co-founder & chief executive officer of Oilstainlab.
A car design strategist with 15 years of OEM and startup experience, Nikita has worked with world-renowned brands including Lyft, Cruise, GM, Toyota, Genesis, ONE, and more on electrification, platforms, and strategy. In 2019, Nikita co-founded Oilstainlab with his twin brother, Iliya, as an automotive design consultancy service and playground, and developed it into a boundary-pushing, custom vehicle manufacturer. Nikita lives his life as fast as the cars he builds, once being pulled over at 140mph in Arizona and getting off with a warning. Nikita earned bachelor's degrees in Transportation Design from the Istituto Europeo di Design in Italy and the ArtCenter College of Design in Pasadena, California, where he now serves as an instructor to the next generation of designers.

Leading a New Generation of Automotive with Oilstainlab Co-Founder Nikita Bridan
The future of automotive design is in the hands of twin brothers, Nikita and Iliya Bridan. The founders of Oilstainlab have turned heads worldwide with their automotive creations, most notably the Half-11, a half-Porsche, half-Formula 1 race car that pays homage to the golden age of motor racing.
Full Show: HERE

Segment 4: Machine Learning (AI) Onsite w/ Eddi Weinwurm of Obvious Future - AZ TRT S06 EP01 (262) 1-5-2025
What We Learned This Week
Obvious Future is building Machine Learning (AI) programs to be used onsite for a business
Corporate data is too sensitive to be in the cloud / internet
Businesses cannot use cloud AI programs like ChatGPT, Google Cloud, etc. because of IP and privacy concerns
Large Language Models are not always necessary; they carry more data than most businesses need, and smaller AI programs can be tailored for the business

Guest: Eddi Weinwurm
AI is top of mind for most enterprises… but most don't know the risks, especially in the cloud.
https://obviousfuture.com/#
Eddi Weinwurm is a co-founder and CEO of Obvious Future, an AI company with a new approach to keeping AI local and secure. Eddi Weinwurm has many years of experience in both the development of media management software and AI. As a visionary, he formed the company to address critical enterprises in the growing AI market.

ObviousFuture Resident AI: Faster, Safer, and Transforming Enterprise AI
Eddi Weinwurm, co-founder and CEO of ObviousFuture, is on a mission to make AI safer and faster for enterprises. ObviousFuture, a trailblazer in secure and private AI solutions, will be unveiling a disruptive AI solution for the enterprise on December 18—Resident AI. This solution empowers enterprises to harness the full potential of AI while safeguarding their data locally, marking a critical evolution in the AI landscape. ObviousFuture's Resident AI operates entirely on-premise, solving a $500 billion market problem by addressing vulnerabilities like data privacy risks, compliance challenges, and vendor lock-ins. The company is focused on key sectors such as government, defense, surveillance, medical, and media. Early adopters have achieved ROI within just two months of deploying the Resident AI platform.
Full Show: HERE

Biotech Shows: https://brt-show.libsyn.com/category/Biotech-Life+Sciences-Science
AZ Tech Council Shows: https://brt-show.libsyn.com/size/5/?search=az+tech+council
*Includes Best of AZ Tech Council show from 2/12/2023
Tech Topic: https://brt-show.libsyn.com/category/Tech-Startup-VC-Cybersecurity-Energy-Science
Best of Tech: https://brt-show.libsyn.com/size/5/?search=best+of+tech
‘Best Of' Topic: https://brt-show.libsyn.com/category/Best+of+BRT

Thanks for Listening.
Please Subscribe to the AZ TRT Podcast.

AZ Tech Roundtable 2.0 with Matt Battaglia
The show where Entrepreneurs, Top Executives, Founders, and Investors come to share insights about the future of business. AZ TRT 2.0 looks at the new trends in business, & how classic industries are evolving.

Common Topics Discussed: Startups, Founders, Funds & Venture Capital, Business, Entrepreneurship, Biotech, Blockchain / Crypto, Executive Comp, Investing, Stocks, Real Estate + Alternative Investments, and more…

AZ TRT Podcast Home Page: http://aztrtshow.com/
‘Best Of' AZ TRT Podcast: Click Here
Podcast on Google: Click Here
Podcast on Spotify: Click Here
More Info: https://www.economicknight.com/azpodcast/
KFNX Info: https://1100kfnx.com/weekend-featured-shows/

Disclaimer: The views and opinions expressed in this program are those of the Hosts, Guests and Speakers, and do not necessarily reflect the views or positions of any entities they represent (or affiliates, members, managers, employees or partners), or any Station, Podcast Platform, Website or Social Media that this show may air on. All information provided is for educational and entertainment purposes. Nothing said on this program should be considered advice or recommendations in: business, legal, real estate, crypto, tax accounting, investment, etc. Always seek the advice of a professional in all business ventures, including but not limited to: investments, tax, loans, legal, accounting, real estate, crypto, contracts, sales, marketing, other business arrangements, etc.
The meeting covered introductions and updates from various participants. Jim Farris discussed his real estate focus, while Clark Hoover highlighted his work on private credit. Stephen Burke presented a bullish outlook on the US economy, citing strong GDP growth, consumer net worth, and corporate profits. He also addressed concerns about tariffs, trade uncertainty, and geopolitical risks. The discussion transitioned to AI's role in business intelligence and decision intelligence, with examples from Radek Biszkont and Jukka Heikka. AI applications in finance, including portfolio optimization and data analysis, were highlighted, emphasizing AI's potential to streamline processes and improve decision-making.

The meeting discussed various applications and implications of AI. Hana Hussein highlighted AI's role in identifying profitable sales channels and increasing company valuations. Jukka Heikka emphasized AI's scalability benefits. Clement Utuk noted AI's potential in customer service and manufacturing. Lucia Ordonez-Gamero raised cybersecurity concerns. Belinda Kǒkóèkà Ephraim stressed the importance of proprietary data in AI's effectiveness. Lubna Dajani warned about AI's potential misinformation. J.P. Keating discussed data quality and security. Christine Nady proposed using AI to enhance human-to-human connections and decision-making. Anita Vadavatha highlighted the shift towards synthetic data. The session concluded with a focus on AI's practical uses and ethical considerations.

You can subscribe to various 361 events and content at https://361firm.com/subs.

For reference:
- Web: www.361firm.com/home
- Onboard as Investor: https://361.pub/shortdiag
- Onboard Deals 361: www.361firm.com/onb
- Onboard as Banker: www.361firm.com/bankers
- Events: www.361firm.com/events
- Content: www.youtube.com/361firm
- Weekly Digests: www.361firm.com/digest
In this episode of Crazy Wisdom, I, Stewart Alsop, speak with Thamir Ali Al-Rahedi, host of the From First Principles podcast on YouTube, about the nature of questions and answers, their role in business and truth-seeking, and the trade-offs inherent in technologies like AI. We explore the tension between generalists and specialists, the influence of scientism on culture, and how figures like Steve Jobs embodied the power of questions to shape markets and innovations. Thamir also shares insights from his Arabic book summary platform and his cautious approach to using large language models. You can find Thamir's work on YouTube at From 1st Principles with Thamir and on X at @Thamir's View.

Check out this GPT we trained on the conversation

Timestamps
00:00 Stewart Alsop introduces Thamir Ali Al-Rahedi and they discuss Stewart's book on the nature of questions, curiosity, and shifting his focus to questions in business.
05:00 They explore how questions generate value and answers capture it, contrasting dynamic questioning with static certainty in business and philosophy.
10:00 The market is described as a subconscious feedback loop, and they examine the role of truth-seeking in entrepreneurship, using Steve Jobs as an example.
15:00 Discussion turns to Steve Jobs' spiritual practices, LSD, and how unseen factors and focus shaped Apple's success.
20:00 Thamir and Stewart debate starting with spiritual or business perspectives in writing, touching on the generalist curse and discernment in creative work.
25:00 They reflect on writing habits, moving from short-form to long-form, and using AI as a thinking partner or tool.
30:00 Thamir shares his cautious approach to large language models, viewing them as trade-offs, and discusses building an Arabic book summary platform to inspire reading and curiosity.

Key Insights
The dynamic interplay of questions and answers – Thamir Ali Al-Rahedi explains that questions generate value by opening possibilities, while answers capture and stabilize that value. He sees the best answers as those that spark even more questions, creating a feedback loop of insight rather than static certainty.
Business and philosophy demand different relationships to truth – In business, answers often serve as the foundation for action and revenue generation, requiring a “false sense of certainty.” By contrast, philosophy thrives in uncertainty, allowing questions to remain open-ended and exploratory without the pressure to resolve them.
The market as a subconscious mirror – Both Thamir and Stewart Alsop describe the market as a form of truth that reflects not only conscious desires but also subconscious patterns and impulses. This understanding reframes economic behavior as a dialogue between collective psychology and external systems.
Steve Jobs as a case study of truth-seeking in entrepreneurship – The conversation highlights Steve Jobs's blend of spiritual exploration and technological vision, including his exposure to Eastern philosophy and LSD, as an example of how deep questioning and unconventional insight can manifest in world-changing innovations.
AI as a double-edged tool for generalists – Thamir views large language models with caution, seeing them as highly specific tools that risk outsourcing critical thinking if used too early in the learning process.
He frames technologies as trade-offs rather than pure solutions, emphasizing the importance of retaining one's cognitive autonomy.
The generalist's curse and the art of discernment – Both guests wrestle with how to focus and finish creative projects without sacrificing breadth. Thamir suggests writing medium-length pieces as a way to engage deeply without the paralysis of long-form commitments, while Stewart reflects on how AI accelerates his exploration of open threads.
A call for cultural renewal through reading and reflection – Thamir shares his initiative to build an Arabic book summary platform aimed at reviving reading habits, especially among younger audiences. He sees curated human-written content as a gateway to generalist thinking and a counterbalance to instant, algorithm-driven consumption.
Our 216th episode with a summary and discussion of last week's big AI news! Recorded on 07/11/2025. Hosted by Andrey Kurenkov and Jeremie Harris. Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai. Read our text newsletter and comment on the podcast at https://lastweekin.ai/.

In this episode:
xAI launches Grok 4 with breakthrough performance across benchmarks, becoming the first true frontier model outside established labs, alongside a $300/month subscription tier
Grok's alignment challenges emerge with antisemitic responses, highlighting the difficulty of steering models toward "truth-seeking" without harmful biases
Perplexity and OpenAI launch AI-powered browsers to compete with Google Chrome, signaling a major shift in how users interact with AI systems
METR study reveals AI tools actually slow down experienced developers by 20% on complex tasks, contradicting expectations and anecdotal reports of productivity gains

Timestamps + Links:
(00:00:10) Intro / Banter
(00:01:02) News Preview
Tools & Apps
(00:01:59) Elon Musk's xAI launches Grok 4 alongside a $300 monthly subscription | TechCrunch
(00:15:28) Elon Musk's AI chatbot is suddenly posting antisemitic tropes
(00:29:52) Perplexity launches Comet, an AI-powered web browser | TechCrunch
(00:32:54) OpenAI is reportedly releasing an AI browser in the coming weeks | TechCrunch
(00:33:27) Replit Launches New Feature for its Agent, CEO Calls it ‘Deep Research for Coding'
(00:34:40) Cursor launches a web app to manage AI coding agents
(00:36:07) Cursor apologizes for unclear pricing changes that upset users | TechCrunch
Applications & Business
(00:39:10) Lovable on track to raise $150M at $2B valuation
(00:41:11) Amazon built a massive AI supercluster for Anthropic called Project Rainier – here's what we know so far
(00:46:35) Elon Musk confirms xAI is buying an overseas power plant and shipping the whole thing to the U.S. to power its new data center — 1 million AI GPUs and up to 2 Gigawatts of power under one roof, equivalent to powering 1.9 million homes
(00:48:16) Microsoft's own AI chip delayed six months in major setback — in-house chip now reportedly expected in 2026, but won't hold a candle to Nvidia Blackwell
(00:49:54) Ilya Sutskever becomes CEO of Safe Superintelligence after Meta poached Daniel Gross
(00:52:46) OpenAI's Stock Compensation Reflects Steep Costs of Talent Wars
Projects & Open Source
(00:58:04) Hugging Face Releases SmolLM3: A 3B Long-Context, Multilingual Reasoning Model - MarkTechPost
(00:58:33) Kimi K2: Open Agentic Intelligence
(00:58:59) Kyutai Releases 2B Parameter Streaming Text-to-Speech TTS with 220ms Latency and 2.5M Hours of Training
Research & Advancements
(01:02:14) Does Math Reasoning Improve General LLM Capabilities? Understanding Transferability of LLM Reasoning
(01:07:58) Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
(01:13:03) Mitigating Goal Misgeneralization with Minimax Regret
(01:17:01) Correlated Errors in Large Language Models
(01:20:31) What skills does SWE-bench Verified evaluate?
Policy & Safety
(01:22:53) Evaluating Frontier Models for Stealth and Situational Awareness
(01:25:49) When Chain of Thought is Necessary, Language Models Struggle to Evade Monitors
(01:30:09) Why Do Some Language Models Fake Alignment While Others Don't?
(01:34:35) ‘Positive review only': Researchers hide AI prompts in papers
(01:35:40) Google faces EU antitrust complaint over AI Overviews
(01:36:41) ‘The transfer of user data by DeepSeek to China is unlawful': Germany calls for Google and Apple to remove the AI app from their stores
(01:37:30) Virology Capabilities Test (VCT): A Multimodal Virology Q&A Benchmark
Enterprise data management is undergoing a fundamental transformation. The traditional data stack built on rigid pipelines, static workflows, and human-led interventions is reaching its breaking point. As data volume, velocity, and variety continue to explode, a new approach is taking shape: agentic data management.

In this episode of Tech Transformed, EM360Tech's Trisha Pillay sits down with Jay Mishra, Chief Product and Technology Officer at Astera, to explore why agentic systems powered by autonomous AI agents, Large Language Models (LLMs), and semantic search are rapidly being recognised as the next generation of enterprise data architecture. The conversation explores the drivers behind this shift, real-world applications, the impact on data professionals, challenges faced by agentic platforms, and the future of data stacks. Jay emphasises the importance of starting small and measuring ROI to successfully implement agentic solutions.

What is Agentic Data Management?
At its core, agentic data management is the application of intelligent, autonomous agents that can perceive, decide, and act across complex data environments (see the short code sketch after this entry). Unlike traditional automation, which follows predefined scripts, agentic AI is adaptive and self-directed. These agents are capable of learning from user behaviour, integrating with different systems, and adjusting to changes in context, all without human prompts. As Jay explains, "An agentic system is one that has the agency to make decisions, solve problems, and orchestrate actions based on real-time data and context, not just on training data."

Takeaways
Agentic data management is the next evolutionary step in data architecture.
Agents are autonomous and can make decisions on the fly.
The demand for agentic solutions is increasing due to data volume and AI strategy needs.
The maturity of foundation models enables near-human reasoning capabilities.
Real-world applications of agentic AI include insurance claim processing.
Data engineers will focus on policy and guardrail creation rather than coding.
Governance, debt, and hallucinations are significant challenges in agentic platforms.
The future of data stacks will include declarative control planes and enhanced memory layers.
Analysts will play a crucial role in defining policies for agentic systems.
Starting small and demonstrating ROI is key to successful agentic implementation.

Chapters
00:00 Introduction to Agentic Data Management
02:58 Understanding Agentic Data Management
06:58 Drivers of Change in Data Management
10:03 Real-World Applications of Agentic AI
14:15 Impact on Data Engineers and Analysts
16:43 Challenges and Limitations of Agentic Data Platforms
20:03 Future of Data Stacks
23:31 Final Thoughts on Agentic Data Management

About Jay Mishra
Jay Mishra is the Chief Product and Technology Officer at Astera Software, with over two decades of experience in data architecture and data-centric software innovation. He has led the design and development of transformative solutions for major enterprises, including Wells Fargo, Raymond James, and Farmers Mutual. Known for his strategic insight, technical leadership, and passion for empowering organisations, Jay has consistently delivered intelligent, scalable solutions that drive...
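To make the perceive-decide-act pattern from the Astera episode concrete, here is a deliberately tiny sketch; the null-rate rule and quarantine action are illustrative inventions, not Astera's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class DataQualityAgent:
    null_threshold: float = 0.2
    log: list = field(default_factory=list)

    def perceive(self, batch: list[dict]) -> float:
        # Observe the environment: fraction of records with missing values.
        missing = sum(1 for record in batch if None in record.values())
        return missing / max(len(batch), 1)

    def act(self, batch: list[dict]) -> list[dict]:
        null_rate = self.perceive(batch)
        # Decide and act without a human prompt: quarantine bad batches,
        # pass clean ones downstream unchanged.
        if null_rate > self.null_threshold:
            self.log.append(f"quarantined batch (null rate {null_rate:.0%})")
            return []
        return batch

agent = DataQualityAgent()
passed = agent.act([{"id": 1, "amount": 10.0}, {"id": 2, "amount": None}])
print(agent.log or passed)  # -> ['quarantined batch (null rate 50%)']
```

A production agentic platform would replace the hard-coded rule with a policy defined by analysts and reasoning supplied by an LLM, which is exactly the shift in roles the episode describes.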
Law professor Daniel Ho says that the law is ripe for AI innovation, but a lot is at stake. Naive application of AI can lead to rampant hallucinations in over 80 percent of legal queries, so much research remains to be done in the field. Ho tells how California counties recently used AI to find and redact racist property covenants from their laws—a task predicted to take years, reduced to days. AI can be quite good at removing “regulatory sludge,” Ho tells host Russ Altman in teasing the expanding promise of AI in the law in this episode of Stanford Engineering's The Future of Everything podcast.

Have a question for Russ? Send it our way in writing or via voice memo, and it might be featured on an upcoming episode. Please introduce yourself, let us know where you're listening from, and share your question. You can send questions to thefutureofeverything@stanford.edu.

Episode Reference Links:
Stanford Profile: Daniel Ho

Connect With Us:
Episode Transcripts >>> The Future of Everything Website
Connect with Russ >>> Threads / Bluesky / Mastodon
Connect with School of Engineering >>> Twitter/X / Instagram / LinkedIn / Facebook

Chapters:
(00:00:00) Introduction
Russ Altman introduces Dan Ho, a professor of law and computer science at Stanford University.
(00:03:36) Journey into Law and AI
Dan shares his early interest in institutions and social reform.
(00:04:52) Misconceptions About Law
Common misunderstandings about the focus of legal work.
(00:06:44) Using LLMs for Legal Advice
The current capabilities and limits of LLMs in legal settings.
(00:09:09) Identifying Legislation with AI
Building a model to identify and redact racial covenants in deeds.
(00:13:09) OCR and Multimodal Models
Improving outdated OCR systems using multimodal AI.
(00:14:08) STARA: AI for Statute Search
A tool to scan laws for outdated or excessive requirements.
(00:16:18) AI and Redundant Reports
Using STARA to find obsolete legislatively mandated reports.
(00:20:10) Verifying AI Accuracy
Comparing STARA results with federal data to ensure reliability.
(00:22:10) Outdated or Wasteful Regulations
Examples of bureaucratic redundancies that hinder legal process.
(00:23:38) Consolidating Reports with AI
How different bureaucrats deal with outdated legislative reports.
(00:26:14) Open vs. Closed AI Models
The risks, benefits, and transparency in legal AI tools.
(00:32:14) Replacing Lawyers with Legal Chatbots
Why general-purpose legal chatbots aren't ready to replace lawyers.
(00:34:58) Conclusion
Florian and Esther discuss the language industry news of the week, including the newly released Slator 2025 Language AI 50 Under 50, showcasing fifty of the most innovative and fast-growing language AI startups founded within the past fifty months. The duo explain how Slator sifted through hundreds of companies, assessing innovation, practical solutions to real buyer problems, and strong market positioning. The final fifty span five categories: multilingual video and audio, live speech translation, transcription and captions, translation and text generation, and accessibility.

The conversation then moves on to language AI and services in the public sector. Esther talks about a new language AI tool, DiploIA, developed and deployed by the French Government for diplomatic agents in sensitive missions. Turning to the US, Esther reports that SOSi secured a significant USD 260m language services contract with the US Drug Enforcement Administration. Meanwhile, the US Defense Health Agency is looking for providers to deliver large volumes of translation and interpreting services.

Esther also revisits the major acquisition of CyraCom by Propio, calling it one of 2025's biggest language industry deals. Propio now joins forces with CyraCom's established presence in healthcare and legal interpreting, creating a combined entity with revenues exceeding half a billion dollars and positioning them strongly in the US interpreting market.

Florian questions AI voice startup ElevenLabs' plans for an IPO within five years. He then wraps up the pod by exploring large reasoning models (LRMs) and their mixed performance in AI translation. While LRMs outperform traditional LLMs in complex, open-domain translation tasks, research indicates they remain prone to significant weaknesses.
On this episode, Matt, Lisa and serial entrepreneur Rufus Evison delve deep into the challenges and potential dangers of current Generative AI, particularly Large Language Models (LLMs). Rufus argues that LLMs inherently lack three crucial tenets: they are not correctable (corrigible), transparent, or reliable. He asserts that LLMs are “none of the three” and have […]
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today, we're joined by Fatih Porikli, senior director of technology at Qualcomm AI Research, for an in-depth look at several of Qualcomm's accepted papers and demos featured at this year's CVPR conference. We start with “DiMA: Distilling Multi-modal Large Language Models for Autonomous Driving,” an end-to-end autonomous driving system that distills multi-modal large language models for structured scene understanding and safe motion planning in critical "long-tail" scenarios. We explore how DiMA utilizes LLMs' world knowledge and efficient transformer-based models to significantly reduce collision rates and trajectory errors. We then discuss “SharpDepth: Sharpening Metric Depth Predictions Using Diffusion Distillation,” a diffusion-distilled approach that combines generative models with metric depth estimation to produce sharp, accurate monocular depth maps. Fatih also shares a look at Qualcomm's on-device demos, including text-to-3D mesh generation, real-time image-to-video and video-to-video generation, and a multi-modal visual question-answering assistant. The complete show notes for this episode can be found at https://twimlai.com/go/738.
In this episode of the Oracle University Podcast, Lois Houston and Nikita Abraham are joined by Mitchell Flinn, VP of Program Management for the CSS Platform, to explore Oracle Cloud Success Navigator. This interactive platform is designed to help customers optimize their cloud journey, offering best practices, AI tools, and personalized guidance from implementation to innovation. Don't miss this insider look at maximizing your Oracle Cloud investment!

Oracle Cloud Success Navigator Essentials: https://mylearn.oracle.com/ou/course/oracle-cloud-success-navigator-essentials/147489/242186
Oracle University Learning Community: https://education.oracle.com/ou-community
LinkedIn: https://www.linkedin.com/showcase/oracle-university/
X: https://x.com/Oracle_Edu

Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.

----------------------------------------------------------------

Episode Transcript:

00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!

00:25 Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead of Editorial Services with Oracle University, and joining me is my co-host Lois Houston, Director of Innovation Programs. Lois: Hi everyone! Today is the first of a two-part special on Oracle Cloud Success Navigator. This is a tool that provides you with a clear path to cloud transformation and helps you get the most out of your cloud investment.

00:52 Nikita: And to tell us more about this, we have Mitchell Flinn joining us today. Mitchell is VP of Program Management for Oracle Cloud Success Navigator. In this episode, we'll ask Mitchell about the ins and outs of this powerful platform, its benefits, key features, and the role it plays in streamlining cloud journeys. Lois: Yeah. Hi Mitchell! What is Oracle's approach to cloud technology and customer success, and how does the Cloud Success Navigator support this philosophy?

01:22 Mitchell: Oracle has an amazing amount of industry-leading enterprise cloud technologies across our entire portfolio. All of this is at your disposal. That, coupled with the sole focus of your success, forms the crux of the company's transformational journey. In other words, we put your success at the heart of everything we do. For each organization, the path to achieve maximum value from our technology is unique. Success Navigator reflects our emphasis on being there with you throughout the entire journey to steer you to success.

01:53 Nikita: Ok, what about from a business's viewpoint? Why would they need the Navigator? Mitchell: Businesses across every industry are moving mission-critical applications to the cloud. However, business leaders understand that there's no one-size-fits-all model for cloud development and deployment. Some fundamentals for success: you need to ensure new technologies are seamlessly integrated into day-to-day operations and continually optimized to align with evolving business requirements. You must ensure stakeholder visibility through the journey with updates at every stage. Building system efficiencies into other key tasks has to be done at the forefront when considering your cloud transformation. You also need to quickly identify risks and address them during the implementation process and beyond.
Beyond the technical execution, cloud deployments also require significant process and organizational changes to ensure that adoption is aligned with business goals and delivers tangible benefits. Moreover, the training process for new features after cloud adoption can be an organization-wide initiative that needs special attention. These requirements and more can be addressed through Oracle Cloud Success Navigator, which is a new interactive digital platform to guide you through all stages of your cloud journey.

03:09 Lois: Mitchell, how does the Cloud Success Navigator platform enhance the user experience? How does it support customers at different stages of their cloud journey? Mitchell: The platform is included for free for all cloud application customers. And core to Success Navigator is the goal of increasing transparency among customers, partners, and the Oracle team, from project kickoff through quarterly releases. Included in the platform are implementation best practices, Oracle Modern Best Practices focused on solutions provided by our applications, and guidance on living within the cloud. Success Navigator supports you at every stage of your Oracle journey. You can first get your bearings and understand what's possible with your cloud solution using preconfigured starter environments to support your design decisions. It helps you chart a proven course by providing access to Oracle expertise and Oracle Modern Best Practices, so you can use cloud quality standards to guide your implementation approach. You can find value from quarterly releases using AI assistants and preview environments to experience and adopt the latest features that matter to you. And you can blaze new trails by building your own cloud roadmap based on your organization's goals, keeping you focused on the capabilities you need for day-to-day and the road ahead.

04:24 Nikita: How does the Navigator cater to the needs of all the different customers? Mitchell: For customers just getting started with Oracle implementations, Navigator provides a framework with success criteria for each stakeholder involved in the implementation, and provides recommended milestones and checklists to keep everyone on track. For existing customers and experienced cloud customers thriving in the cloud, it provides contextually relevant insights based on your cloud landscape. It prepares you for quarterly releases and preview environments, and enables the use of AI and optimization within your cloud investment. For our partners, it allows Oracle to work in a collaborative way to really team up for our customers. Navigator gives transparency to all stakeholders and helps determine what success criteria we should be thinking about at each milestone phase of the journey. And it also helps customers and partners get more out of their Oracle investment through a seamless process.

05:20 Lois: Right. Mitchell, can you elaborate on the use cases of the platform? How does it address challenges and requirements during cloud implementations? Mitchell: We can create transparency and alignment between you, your partner, and the Oracle team using a shared view of progress measured through standard criteria. We can incorporate recommended key milestones and activities to help you visualize and measure your progress. And we can use built-in assessments to remove risk and ask the right questions at the right time to make the right implementation decisions for your organization.
Additionally, we can use Starter Configuration, which allows you to experience the latest capabilities and leading practices to enrich design decisions for your organization. You can activate Starter Configuration early in your journey to understand what delivered capability can do for you. It allows you to evaluate modern best practices to determine how your design process can work in the future. And it empowers and educates your organization by interacting with real capability, using real processes to make the right decisions for your cloud implementation. You're able to access new features in Fusion updates, learning what you need to know about new features in a one-stop shop and connecting your company in a compelling capacity. You can find, familiarize yourself with, and prioritize new and existing features in one place. And you can experience new features in hands-on preview environments available to you with each quarterly release. You can explore new theme-based approaches using adoption centers for AI and Redwood to understand where to start and how to get there. And you can understand innovation opportunities based on business processes, data insights, and industry benchmarks.

07:01 Nikita: Now that we've covered the basics, can you walk us through some of the key features of the platform? Let's start with the Home page. Mitchell: This is the starting point of the customer journey and the central location for everything Navigator has to offer, including best practice content. You'll find content focused on the implementation phase, the innovation phase, and administrative elements like the team structure, program and projects, and other relevant tips and tricks. Cloud Quality Standards provides learning content and checklists for successful business transformation. This helps support the effective adoption of and adherence to Cloud Quality Standards and enables individuals to leverage AI and predictive insights. The Sunburst feature presents features in an interactive graphic, illustrating new features by pillar and other attributes, so customers can review a curated view to identify and adopt the new features that meet their needs. It helps you understand recommended features across your application base, based off of a production profile, covering mandatory adoption requirements, efficiency gains, and innovation capabilities like AI and Redwood to drive business change. Next is the Adoption Center, which addresses the needs of our existing and implementing customers. It covers the concept of how Redwood is an imperative for our application customers, what it means, and how and when to translate some of the requirements for a business user or an IT user. Roadmap is an opportunity for the team to evaluate which features are most interesting at any given moment, the items that they would like to adopt next, and save features or items that they might require later.

08:36 Lois: That's great. Mitchell, I know we have two new features rolling out in 2025. Can you tell us a little bit about them? Mitchell: The first is Preview Environment, which allows users to explore new features in a quarterly release through a shared environment by logging into Navigator, eliminating potential regression needs, data adjustments and loads, and other common pre-work. You can explore the feature directly with the support of Oracle Guided Learning. The second additional feature for 2025 is AI Assist.
We've taken everything within Navigator and trained an LLM to give customers the ability to inquire about best practices, solutions, and features within the applications, and ultimately make them smarter as they prepare for areas like design workshops, quarterly release readiness, and engaging across the overall Oracle team. Customers can use, share, and apply Oracle content knowledge in their day-to-day responsibilities. 09:32 Have you mastered the basics of AI? Are you ready to take your skills to the next level? Unlock the potential of advanced AI with our OCI Generative AI Professional course and certification, which covers topics like Large Language Models, the OCI Generative AI Service, and building Q&A chatbots for real-world applications. Head over to mylearn.oracle.com and find out more. 10:01 Nikita: Welcome back! Mitchell, how can I get started with the Navigator? Mitchell: To request customer access to the Success Navigator environment, you need to submit a request form via the Success Navigator page on oracle.com. You'll need to provide your customer support identifier (CSI number); there's a help icon next to the CSI field if you need guidance. If you don't have your CSI number, you can still submit your request, and a member of the Success Navigator team will reach out and coordinate with you to gather the required information. Once access is granted, you'll receive a welcome email to Oracle Cloud Success Navigator. 10:35 Lois: Alright, and after that's done? Mitchell: Before implementing Oracle Cloud Applications in your organization, you need to think of a structure that helps you organize and manage that implementation. To implement a solution footprint for a relatively small organization operating in a single country, with an implementation time of only a few months, defining a single project might be enough to manage the implementation. For example, if you're implementing Core Human Capital Management applications, you could define a single project for that. But if you're implementing a broad solution footprint for a large business across multiple organizational entities in multiple countries, with an implementation time of several years, a single project isn't enough. In such cases, organizations typically define a structure of a program with multiple underlying projects that reflect the scope, approach, and timelines of the business transformation. For example, if you're implementing both HCM and ERP applications in a two-phased approach, you might decide to define a program with two underlying projects, one for HCM and one for ERP. For large, international business transformations, you can define multiple programs, each with multiple underlying projects. For example, you might first need to define a global design for a common way of working with HCM and then start rolling out that global design to individual countries. In such a scenario, you might define a program called HCM Transformation with a first project for global design, followed by multiple projects associated with individual countries. You could do the same for ERP.
These are defined in terms of stages, activities, topics, and milestones. They focus on doing the right things, at the right time, the first time. For customers and implementation partners, the cloud journey allows us to facilitate collaboration and transparency, accelerate the journey, and visualize and measure our progress. 12:59 Lois: We'll stop here for today. Join us next week as we continue our discussion on Oracle Cloud Success Navigator. Nikita: And if you want to look at some demos of everything we touched upon today, head over to mylearn.oracle.com and take a look at the Oracle Cloud Success Navigator Essentials course. Until next time, this is Nikita Abraham… Lois: And Lois Houston, signing off! 13:22 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Our 214th episode with a summary and discussion of last week's big AI news! Recorded on 06/27/2025. Hosted by Andrey Kurenkov and Jeremie Harris. Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai. Read our text newsletter and comment on the podcast at https://lastweekin.ai/.
In this episode:
- Meta's hiring of key engineers from OpenAI, and Thinking Machines Lab securing a $2 billion seed round at a $10 billion valuation.
- DeepMind introduces AlphaGenome, significantly advancing genomic research with a model comparable to AlphaFold but focused on gene functions.
- Taiwan imposes technology export controls on Huawei and SMIC, while Getty drops key copyright claims against Stability AI in a groundbreaking legal case.
- A new research paper examines cognitive debt in AI tasks, using EEG to assess cognitive load and recall in essay writing with LLMs.
Timestamps + Links:
(00:00:10) Intro / Banter
(00:01:22) News Preview
(00:02:15) Response to listener comments
Tools & Apps
(00:06:18) Google is bringing Gemini CLI to developers' terminals
(00:12:09) Anthropic now lets you make apps right from its Claude AI chatbot
Applications & Business
(00:15:54) Sam Altman takes his 'io' trademark battle public
(00:21:35) Huawei Matebook Contains Kirin X90, using SMIC 7nm (N+2) Technology
(00:26:05) AMD deploys its first Ultra Ethernet ready network card — Pensando Pollara provides up to 400 Gbps performance
(00:31:21) Amazon joins the big nuclear party, buying 1.92 GW for AWS
(00:33:20) Nvidia goes nuclear — company joins Bill Gates in backing TerraPower, a company building nuclear reactors for powering data centers
(00:36:18) Mira Murati's Thinking Machines Lab closes on $2B at $10B valuation
(00:41:02) Meta hires key OpenAI researcher to work on AI reasoning models
Research & Advancements
(00:49:46) Google's new AI will help researchers understand how our genes work
(00:55:13) Direct Reasoning Optimization: LLMs Can Reward And Refine Their Own Reasoning for Open-Ended Tasks
(01:01:54) Farseer: A Refined Scaling Law in Large Language Models
(01:06:28) LLM-First Search: Self-Guided Exploration of the Solution Space
Policy & Safety
(01:11:20) Unsupervised Elicitation of Language Models
(01:16:04) Taiwan Imposes Technology Export Controls on Huawei, SMIC
(01:18:22) Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task
Synthetic Media & Art
(01:23:41) Judge Rejects Authors' Claim That Meta AI Training Violated Copyrights
(01:29:46) Getty drops key copyright claims against Stability AI, but UK lawsuit continues
****** Support the channel ******
Patreon: https://www.patreon.com/thedissenter
PayPal: paypal.me/thedissenter
PayPal Subscription 1 Dollar: https://tinyurl.com/yb3acuuy
PayPal Subscription 3 Dollars: https://tinyurl.com/ybn6bg9l
PayPal Subscription 5 Dollars: https://tinyurl.com/ycmr9gpz
PayPal Subscription 10 Dollars: https://tinyurl.com/y9r3fc9m
PayPal Subscription 20 Dollars: https://tinyurl.com/y95uvkao
****** Follow me on ******
Website: https://www.thedissenter.net/
The Dissenter Goodreads list: https://shorturl.at/7BMoB
Facebook: https://www.facebook.com/thedissenteryt/
Twitter: https://x.com/TheDissenterYT
This show is sponsored by Enlites, Learning & Development done differently. Check the website here: http://enlites.com/
Dr. Anna Ivanova is Assistant Professor in the School of Psychology at Georgia Tech. She is interested in studying the relationship between language and other aspects of human cognition. In her work, she uses tools from cognitive neuroscience (such as fMRI) and artificial intelligence (such as large language models). In this episode, we talk about language from the perspective of cognitive neuroscience. We discuss how language relates to all the rest of human cognition, the brain decoding paradigm, and whether the brain represents words. We talk about large language models (LLMs), and we discuss whether they can understand language. We talk about how we can use AI to study human language, and whether there are parallels between programming language and natural languages. Finally, we discuss mapping models in cognitive neuroscience.
--
A HUGE THANK YOU TO MY PATRONS/SUPPORTERS: PER HELGE LARSEN, JERRY MULLER, BERNARDO SEIXAS, ADAM KESSEL, MATTHEW WHITINGBIRD, ARNAUD WOLFF, TIM HOLLOSY, HENRIK AHLENIUS, FILIP FORS CONNOLLY, ROBERT WINDHAGER, RUI INACIO, ZOOP, MARCO NEVES, COLIN HOLBROOK, PHIL KAVANAGH, SAMUEL ANDREEFF, FRANCIS FORDE, TIAGO NUNES, FERGAL CUSSEN, HAL HERZOG, NUNO MACHADO, JONATHAN LEIBRANT, JOÃO LINHARES, STANTON T, SAMUEL CORREA, ERIK HAINES, MARK SMITH, JOÃO EIRA, TOM HUMMEL, SARDUS FRANCE, DAVID SLOAN WILSON, YACILA DEZA-ARAUJO, ROMAIN ROCH, DIEGO LONDOÑO CORREA, YANICK PUNTER, CHARLOTTE BLEASE, NICOLE BARBARO, ADAM HUNT, PAWEL OSTASZEWSKI, NELLEKE BAK, GUY MADISON, GARY G HELLMANN, SAIMA AFZAL, ADRIAN JAEGGI, PAULO TOLENTINO, JOÃO BARBOSA, JULIAN PRICE, HEDIN BRØNNER, DOUGLAS FRY, FRANCA BORTOLOTTI, GABRIEL PONS CORTÈS, URSULA LITZCKE, SCOTT, ZACHARY FISH, TIM DUFFY, SUNNY SMITH, JON WISMAN, WILLIAM BUCKNER, PAUL-GEORGE ARNAUD, LUKE GLOWACKI, GEORGIOS THEOPHANOUS, CHRIS WILLIAMSON, PETER WOLOSZYN, DAVID WILLIAMS, DIOGO COSTA, ALEX CHAU, AMAURI MARTÍNEZ, CORALIE CHEVALLIER, BANGALORE ATHEISTS, LARRY D. LEE JR., OLD HERRINGBONE, MICHAEL BAILEY, DAN SPERBER, ROBERT GRESSIS, JEFF MCMAHAN, JAKE ZUEHL, BARNABAS RADICS, MARK CAMPBELL, TOMAS DAUBNER, LUKE NISSEN, KIMBERLY JOHNSON, JESSICA NOWICKI, LINDA BRANDIN, GEORGE CHORIATIS, VALENTIN STEINMANN, ALEXANDER HUBBARD, BR, JONAS HERTNER, URSULA GOODENOUGH, DAVID PINSOF, SEAN NELSON, MIKE LAVIGNE, JOS KNECHT, LUCY, MANVIR SINGH, PETRA WEIMANN, CAROLA FEEST, MAURO JÚNIOR, 航 豊川, TONY BARRETT, NIKOLAI VISHNEVSKY, STEVEN GANGESTAD, TED FARRIS, ROBINROSWELL, KEITH RICHARDSON, AND HUGO B.!
A SPECIAL THANKS TO MY PRODUCERS, YZAR WEHBE, JIM FRANK, ŁUKASZ STAFINIAK, TOM VANEGDOM, BERNARD HUGUENEY, CURTIS DIXON, BENEDIKT MUELLER, THOMAS TRUMBLE, KATHRINE AND PATRICK TOBIN, JONCARLO MONTENEGRO, NICK GOLDEN, CHRISTINE GLASS, IGOR NIKIFOROVSKI, AND PER KRAULIS!
AND TO MY EXECUTIVE PRODUCERS, MATTHEW LAVENDER, SERGIU CODREANU, ROSEY, AND GREGORY HASTINGS!
Shreya Shankar is a PhD student at UC Berkeley in the EECS department. This episode explores how Large Language Models (LLMs) are revolutionizing the processing of unstructured enterprise data like text documents and PDFs. It introduces DocETL, a framework using a MapReduce approach with LLMs for semantic extraction, thematic analysis, and summarization at scale. Subscribe to the Gradient Flow Newsletter.
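To make the MapReduce-with-LLMs pattern concrete, here is a minimal sketch in Python. It is an illustration of the approach the episode describes, not DocETL's actual API; call_llm is a placeholder for whatever model client you use.

```python
# Map-reduce over documents with an LLM, sketched with a placeholder model call.
# This illustrates the pattern discussed above; it is not DocETL's real API.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model client here")

def map_step(chunks: list[str], instruction: str) -> list[str]:
    # Apply the same semantic operation (extract themes, entities, etc.) to each chunk.
    return [call_llm(f"{instruction}\n\nDocument chunk:\n{c}") for c in chunks]

def reduce_step(partials: list[str], instruction: str) -> str:
    # Fold the per-chunk outputs into one consolidated result.
    joined = "\n---\n".join(partials)
    return call_llm(f"{instruction}\n\nPartial results:\n{joined}")

# Example usage (hypothetical task):
# themes = map_step(chunks, "List the complaint themes in this chunk.")
# report = reduce_step(themes, "Merge these into a deduplicated themed summary.")
```

The map step parallelizes cheaply across chunks; the reduce step is where semantic deduplication and summarization happen, which is why frameworks in this space put most of their optimization effort there.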
The July 2025 recall features four episodes on systems and innovation in delivering neurologic care. The episode begins with Dr. Scott Friedenberg discussing challenges faced by neurologists in balancing financial productivity with optimal patient care. The episode leads into a conversation with Dr. Marisa Patryce McGinley discussing the utilization of telemedicine in neurology, particularly focusing on disparities in access among different demographic groups. The conversation transitions to Dr. Lidia Moura talking about the implications of large language models for neurologic care. The episode concludes with Dr. Ashish D. Patel discussing headache referrals and the implementation of a design thinking approach to improve access to headache care.
Podcast links:
- Empowering Health Care Providers
- Disparities in Utilization of Outpatient Telemedicine for Neurologic Care
- Large Language Models for Quality and Efficiency of Neurologic Care
- Using Design Thinking to Understand the Reason for Headache Referrals
Article links:
- Empowering Health Care Providers: A Collaborative Approach to Enhance Financial Performance and Productivity in Clinical Practice
- Disparities in Utilization of Outpatient Telemedicine for Neurologic Care
- Implications of Large Language Models for Quality and Efficiency of Neurologic Care: Emerging Issues in Neurology
- Using Design Thinking to Understand the Reason for Headache Referrals and Reduce Referral Rates
Disclosures can be found at Neurology.org.
Today we're talking about a game whose development began more than 20 years ago: how a podcast led to the technical implementation finally getting underway, what an important role erotic furry games played along the way, why there's a physical version of it, and how hard it was to teach a Large Language Model to deviate from the perfection it strives for and finally follow the raw, dark style of the hand-painted originals.
New to AI? Feeling overwhelmed by all the buzzwords? You're not alone, and this episode is here to help. Today on the Creative Edition Podcast, we're breaking down six essential AI terms every content creator should know to confidently navigate the AI-powered world of content creation. Whether you're already using AI to brainstorm captions, outline podcast episodes, or streamline video edits, or you're just starting to explore AI tools, this episode is packed with practical insights to help you become an AI-native creator.
You'll learn:
- What Prompt Engineering is and how to craft better prompts that lead to higher-quality results
- How to spot and avoid AI hallucinations (and why fact-checking still matters)
- The power behind Large Language Models (LLMs) like ChatGPT and Claude
- What Fine-Tuning is and how to train AI tools to match your unique voice
- And what it means to be an AI-Native Creator, plus how early adoption can give your brand a serious edge
Whether you're scaling your content or simply want to stay relevant, this episode will give you the vocabulary and confidence to integrate AI into your creative workflow, no tech background required.
Follow us on Instagram: @creativeeditionpodcast
Follow Emma on Instagram: @emmasedition | Pinterest: @emmasedition
And sign up for our email newsletter.
CISA warns organizations of potential cyber threats from Iranian state-sponsored actors. Scattered Spider targets aviation and transportation. Workforce cuts at the State Department raise concerns about weakened cyber diplomacy. Canada bans Chinese security camera vendor Hikvision over national security concerns. Cisco Talos reports a rise in cybercriminals abusing Large Language Models. MacOS malware Poseidon Stealer rebrands. Researchers discover multiple vulnerabilities in Bluetooth chips used in headphones and earbuds. The FDA issues new guidance on medical device cybersecurity. Our guest is Debbie Gordon, Co-Founder of Cloud Range, looking "Beyond the Stack - Why Cyber Readiness Starts with People." An IT worker's revenge plan backfires. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.
CyberWire Guest: On today's Industry Voices segment, Debbie Gordon, Co-Founder of Cloud Range, shares insights on looking "Beyond the Stack - Why Cyber Readiness Starts with People." Learn more about what Debbie discusses in Cloud Range's blog: Bolstering Your Human Security Posture. You can hear Debbie's full conversation here.
Selected Reading:
- CISA and Partners Urge Critical Infrastructure to Stay Vigilant in the Current Geopolitical Environment (CISA)
- Joint Statement from CISA, FBI, DC3 and NSA on Potential Targeted Cyber Activity Against U.S. Critical Infrastructure by Iran (CISA, FBI, DOD Cyber Crime Center, NSA)
- Prolific cybercriminal group now targeting aviation, transportation companies (Axios)
- U.S. Cyber Diplomacy at Risk Amid State Department Shakeup (GovInfo Security)
- Canada Bans Chinese CCTV Vendor Hikvision Over National Security Concerns (Infosecurity Magazine)
- Malicious AI Models Are Behind a New Wave of Cybercrime, Cisco Talos (Hackread)
- MacOS malware Poseidon Stealer rebranded as Odyssey Stealer (SC Media)
- Airoha Chip Vulnerabilities Expose Headphones to Takeover (SecurityWeek)
- FDA Expands Premarket Medical Device Cyber Guidance (GovInfo Security)
- 'Disgruntled' British IT worker jailed for hacking employer after being suspended (The Record)
Audience Survey: Complete our annual audience survey before August 31.
Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at cyberwire@n2k.com to request more info.
The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode, Alina Utrata interviews Amira Moeding, a PhD candidate in History at the University of Cambridge, where they have held fellowships with Cambridge Digital Humanities and the Cluster of Excellence "Matters of Activity" at Humboldt Universität zu Berlin. They talked all about Amira's research on the intellectual history of Large Language Models and other types of AI. They began by asking: why is it so shocking to begin with a history and philosophy of linguistics when talking about LLMs? Why did IBM want these natural language processors to be so energy intensive (hint: to make money)? What is machine empiricism, how does it relate to the invention of Big Data, and why does it limit the way we see and understand the world around us? Amira has worked on critical theory, philosophy of science, feminist philosophy, post-colonial theory, and the history of law in settler colonial contexts before turning to data and Big Data, and their paper "Machine Empiricism," together with Professor Tobias Matzner, is forthcoming. Until June they were employed as a Research Assistant at the Computer Science Department (Computerlab) at the University of Cambridge on this project. For a complete reading list from the episode, check out the Anti-Dystopians substack at bit.ly/3kuGM5X.
You can follow Alina Utrata on Bluesky at @alinau27.bsky.social
All episodes of the Anti-Dystopians are hosted and produced by Alina Utrata and are freely available to all listeners. To support the production of the show, subscribe to the newsletter at bit.ly/3kuGM5X.
Nowhere Land by Kevin MacLeod
Link: https://incompetech.filmmusic.io/song/4148-nowhere-land
License: http://creativecommons.org/licenses/by/4.0/
Hosted on Acast. See acast.com/privacy for more information.
Everyone wants the latest and greatest AI buzzword. But at what cost? And what the heck is the difference between algos, LLMs, and agents anyway? Tune in to find out.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
- Choosing AI: Algorithms vs. Agents
- Understanding AI Models and Agents
- Using Conditional Statements in AI (a toy sketch follows these show notes)
- Importance of Data in AI Training
- Risk Factors in Agentic AI Projects
- Innovation through AI Experimentation
- Evaluating AI for Business Solutions
Timestamps:
00:00 AWS AI Leader Departs Amid Talent War
03:43 Meta Wins Copyright Lawsuit
07:47 Choosing AI: Short or Long Term?
12:58 Agentic AI: Dynamic Decision Models
16:12 "Demanding Data-Driven Precision in Business"
20:08 "Agentic AI: Adoption and Risks"
22:05 Startup Challenges Amidst Tech Giants
24:36 Balancing Innovation and Routine
27:25 AGI: Future of Work and Survival
Keywords: AI algorithms, Large Language Models, LLMs, Agents, Agentic AI, Multi agentic AI, Amazon Web Services, AWS, Vasi Philomin, Gen AI efforts, Amazon Bedrock, talent wars in tech, OpenAI, Google, Meta, Copyright lawsuit, AI training, Sarah Silverman, Llama, Fair use in AI, Anthropic, AI deep research model, API, Webhooks, MCP, Code interpreter, Keymaker, Data labeling, Training datasets, Computer vision models, Block out time to experiment, Decision-making, If else conditional statements, Data-driven approach, AGI, Teleporting, Innovation in AI, Experiment with AI, Business leaders, Performance improvements, Sustainable business models, Corporate blade.
Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Try Gemini 2.5 Flash! Sign up at AIStudio.google.com to get started.
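For readers who want the algorithm-versus-agent distinction in code, here is a toy sketch in Python. The refund scenario and the llm and tools placeholders are illustrative assumptions, not from the episode; the point is that an algorithm's branching is fixed by hand-written conditionals, while an agent lets a model choose the next step at runtime.

```python
# A toy contrast between a fixed algorithm and an agentic loop.

def refund_algorithm(amount: float, days_since_purchase: int) -> str:
    # Classic algorithm: behavior is fully specified by if/else rules.
    if days_since_purchase <= 30 and amount < 100:
        return "auto-approve"
    elif days_since_purchase <= 30:
        return "manager review"
    else:
        return "deny"

def refund_agent(case: str, llm, tools: dict) -> str:
    # Agentic pattern: the model picks the next action until it decides to stop.
    # `llm` and `tools` are placeholders for a real model client and tool registry.
    for _ in range(5):  # cap the number of steps to avoid runaway loops
        action = llm(f"Case so far:\n{case}\nPick one tool of {list(tools)} or FINISH.")
        if action == "FINISH":
            break
        case += f"\n{action} -> {tools[action](case)}"
    return case
```

The trade-off the episode circles around falls out of the sketch: the if/else version is cheap, auditable, and predictable; the agent version is flexible but adds cost, latency, and new failure modes.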
AI will fundamentally transform science. It will supercharge the research process, making it faster and more efficient and broader in scope. It will make scientists themselves vastly more productive, more objective, maybe more creative. It will make many human participants—and probably some human scientists—obsolete… Or at least these are some of the claims we are hearing these days. There is no question that various AI tools could radically reshape how science is done, and how much science is done. What we stand to gain in all this is pretty clear. What we stand to lose is less obvious, but no less important. My guest today is Dr. Molly Crockett. Molly is a Professor in the Department of Psychology and the University Center for Human Values at Princeton University. In a recent widely discussed article, Molly and the anthropologist Dr. Lisa Messeri presented a framework for thinking about the different roles that are being imagined for AI in science. And they argue that, when we adopt AI in these ways, we become vulnerable to certain illusions. Here, Molly and I talk about four visions of AI in science that are currently circulating: AI as an Oracle, as a Surrogate, as a Quant, and as an Arbiter. We talk about the very real problems in the scientific process that AI promises to help us solve. We consider the ethics and challenges of using Large Language Models as experimental subjects. We talk about three illusions of understanding that crop up when we uncritically adopt AI into the research pipeline: an illusion that we understand more than we actually do; an illusion that we're covering a larger swath of a research space than we actually are; and an illusion that AI makes our work more objective. We also talk about how ideas from Science and Technology Studies (or STS) can help us make sense of this AI-driven transformation that, like it or not, is already upon us. Along the way, Molly and I touch on: AI therapists and AI tutors, anthropomorphism, the culture and ideology of Silicon Valley, Amazon's Mechanical Turk, fMRI, objectivity, quantification, Molly's mid-career crisis, monocultures, and the squishy parts of human experience. Without further ado, on to my conversation with Dr. Molly Crockett. Enjoy! A transcript of this episode will be posted soon.
Notes and links:
5:00 – For more on LLMs—and the question of whether we understand how they work—see our earlier episode with Murray Shanahan.
9:00 – For the paper by Dr. Crockett and colleagues about the social/behavioral sciences and the COVID-19 pandemic, see here.
11:30 – For Dr. Crockett and colleagues' work on outrage on social media, see this recent paper.
18:00 – For a recent exchange on the prospects of using LLMs in scientific peer review, see here.
20:30 – Donna Haraway's essay, "Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective," is here. See also Dr. Haraway's book, Primate Visions.
22:00 – For the recent essay by Henry Farrell and others on AI as a cultural technology, see here.
23:00 – For a recent report on chatbots driving people to mental health crises, see here.
25:30 – For the already-classic "stochastic parrots" article, see here.
33:00 – For the study by Ryan Carlson and Dr. Crockett on using crowd-workers to study altruism, see here.
34:00 – For more on the "illusion of explanatory depth," see our episode with Tania Lombrozo.
53:00 – For more about Ohio State's plans to incorporate AI in the classroom, see here. For a recent essay by Dr. Crockett on the idea of "techno-optimism," see here.
Recommendations:
More Everything Forever, by Adam Becker
Transformative Experience, by L. A. Paul
Epistemic Injustice, by Miranda Fricker
Many Minds is a project of the Diverse Intelligences Summer Institute, which is made possible by a generous grant from the John Templeton Foundation to Indiana University. The show is hosted and produced by Kensy Cooperrider, with help from Assistant Producer Urte Laukaityte and with creative support from DISI Directors Erica Cartmill and Jacob Foster. Our artwork is by Ben Oldroyd. Our transcripts are created by Sarah Dopierala. Subscribe to Many Minds on Apple, Stitcher, Spotify, Pocket Casts, Google Play, or wherever you listen to podcasts. You can also now subscribe to the Many Minds newsletter here! We welcome your comments, questions, and suggestions. Feel free to email us at: manymindspodcast@gmail.com. For updates about the show, visit our website or follow us on Twitter (@ManyMindsPod) or Bluesky (@manymindspod.bsky.social).
Send us a text
How brains compute and learn, blending neuroscience with AI insights.
Episode Summary: Dr. Marius Pachitariu discusses how the brain computes information across scales, from single neurons to complex networks, using mice to study visual learning. He explains the differences between supervised and unsupervised learning, the brain's high-dimensional processing, and how it compares to artificial neural networks like large language models. The conversation also covers experimental techniques, such as calcium imaging, and the role of reward prediction errors in learning.
About the guest: Marius Pachitariu, PhD is a group leader at the Janelia Research Campus, leading a lab focused on neuroscience with a blend of experimental and computational approaches.
Discussion Points:
- The brain operates at multiple scales, with single neurons acting as computational units and networks creating complex, high-dimensional computations.
- Pachitariu's lab uses advanced tools like calcium imaging to record from tens of thousands of neurons simultaneously in mice.
- Unsupervised learning allows mice to form visual memories of environments without rewards, speeding up task learning later.
- Brain activity during sleep or anesthesia is highly correlated, unlike the high-dimensional, less predictable patterns during wakefulness.
- The brain expands sensory input dimensionality (e.g., from retina to visual cortex) to simplify complex computations, a principle also seen in artificial neural networks (see the sketch below).
- Reward prediction errors, driven by dopamine, signal when expectations are violated, aiding learning by updating internal models.
- Large language models rely on self-supervised learning, predicting next words, but lack the forward-modeling reasoning humans excel at.
Related episode: M&M 44: Consciousness, Perception, Hallucinations, Selfhood, Neuroscience, Psychedelics & "Being You" | Anil Seth
*Not medical advice.
Support the show
All episodes, show notes, transcripts, and more at the M&M Substack
Affiliates:
- KetoCitra—Ketone body BHB + potassium, calcium & magnesium, formulated with kidney health in mind. Use code MIND20 for 20% off any subscription (cancel anytime)
- Lumen device to optimize your metabolism for weight loss or athletic performance. Code MIND for 10% off
- Readwise: Organize and share what you read. 60 days FREE through link
- SiPhox Health—Affordable at-home blood testing. Key health markers, visualized & explained. Code TRIKOMES for a 20% discount.
- MASA Chips—delicious tortilla chips made from organic corn & grass-fed beef tallow. No seed oils or artificial ingredients. Code MIND for 20% off
For all the ways you can support my efforts
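A small illustration of the dimensionality-expansion point from the discussion list above (my sketch, not from the episode): XOR cannot be separated by a linear readout in its raw 2D form, but a random nonlinear expansion to a higher-dimensional space makes it linearly solvable, which is the same trick attributed here to the retina-to-cortex pathway.

```python
# Dimensionality expansion makes XOR linearly separable.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])  # XOR labels: no 2D line separates these

W = rng.normal(size=(2, 64))                        # random projection to 64 dims
H = np.tanh(X @ W)                                  # nonlinear "expansion layer"
w, *_ = np.linalg.lstsq(H, y * 2 - 1, rcond=None)   # simple linear readout
print((H @ w > 0).astype(int))                      # recovers [0 1 1 0]
```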
June 26, 2025: Michael Han, MD, Enterprise CMIO and VP of MultiCare Health System, discusses his transition from the operating room to the boardroom. He argues that the path forward isn't through automating clinical decisions, but through revolutionizing call centers, scheduling, prior authorizations, and referrals. Michael reveals how ambient clinical documentation must evolve beyond simple note-taking into a treasure trove of unstructured data that can drive actions across the entire care continuum—from pre-visit chart preparation to post-visit care coordination. The conversation explores how leaders establish credibility as they transition from the operating room to the boardroom.
Key Points:
02:43 Impact of Ambient Clinical Documentation
05:51 AI and Large Language Models in Healthcare
10:27 The Role of CMIO in Digital Transformation
15:05 Leadership and Credibility in Healthcare
24:51 Speed Round: Personal Insights
X: This Week Health
LinkedIn: This Week Health
Donate: Alex's Lemonade Stand: Foundation for Childhood Cancer
This video is adapted from the original article on Spiderum, a knowledge-sharing social media platform.
This episode explores the fundamental mindset of building your vocabulary, extending beyond literal words to conceptual understanding and mental models, and how Large Language Models (LLMs) can be a powerful tool for expanding and refining this crucial skill for career growth, clarity, and navigating disruptions.
- Uncover why building your vocabulary is a fundamental skill that can help you navigate career transitions, disruptions (such as those caused by AI), and changes in roles.
- Understand that "vocabulary" goes beyond literal words to include mental models, understanding your own self, specific diagrams (like causal loop diagrams or C4 diagrams), and programming paradigms or design patterns. This conceptual vocabulary provides access to nuanced and powerful ways of thinking.
- Learn how LLMs can be incredibly useful for refining and expanding your conceptual vocabulary, allowing you to explore new subjects, understand systems, and identify leverage points. They can help you understand the connotations, origins, and applications of concepts, as well as how they piece together with adjacent ideas.
- Discover why starting with fundamental primitives like inputs, outputs, flows, and system types can help you develop vocabulary, and how LLMs can suggest widely used tools or visualisations based on these primitives (e.g., a scatter plot for XY data); a sketch of this follows the list.
- Explore why focusing on understanding the "why" and "when" of using a concept or tool is a much higher leverage skill than merely knowing "how" to use it, enabling you to piece together different vocabulary pieces for deeper insights.
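As a concrete illustration of the primitives-first idea (my sketch; the prompt wording is an assumption, not from the episode), you can describe a system in terms of its inputs, outputs, and flow and ask an LLM to suggest a suitable tool or visualization, emphasizing the why and when rather than just the how:

```python
# Build a "primitives first" prompt for an LLM; pass the result to any chat model.
def primitives_prompt(inputs: str, outputs: str, flow: str) -> str:
    return (
        "My system's primitives:\n"
        f"- Inputs: {inputs}\n"
        f"- Outputs: {outputs}\n"
        f"- Flow: {flow}\n"
        "Suggest a widely used diagram or visualization for reasoning about this, "
        "and explain when and why (not just how) to use it."
    )

print(primitives_prompt("XY numeric pairs", "correlation estimate", "batch"))
# For XY data like this, a model will typically suggest a scatter plot,
# matching the episode's example.
```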
Recently, concerns about the risks of Artificial Intelligence and the need for 'alignment' have been flooding our cultural discourse, with Artificial Super Intelligence acting as both the most promising goal and the most pressing threat. But amid the moral debate, there's been surprisingly little attention paid to a basic question: do we even have the technical capability to guide where any of this is headed? And if not, should we slow the pace of innovation until we better understand how these complex systems actually work? In this episode, Nate is joined by Artificial Intelligence developer and researcher Connor Leahy to discuss the rapid advancements in AI, the potential risks associated with its development, and the challenges of controlling these technologies as they evolve. Connor also explains the phenomenon of what he calls 'algorithmic cancer': AI-generated content that crowds out true human creations, propelled by algorithms that can't tell the difference. Together, they unpack the implications of AI acceleration, from widespread job disruption and energy-intensive computing to the concentration of wealth and power in tech companies. What kinds of policy and regulatory approaches could help slow down AI's acceleration in order to create safer development pathways? Is there a world where AI becomes a tool to aid human work and creativity, rather than replacing it? And how do these AI risks connect to the deeper cultural conversation about technology's impacts on mental health, meaning, and societal well-being? (Conversation recorded on May 21st, 2025)
About Connor Leahy: Connor Leahy is the founder and CEO of Conjecture, which works on aligning artificial intelligence systems by building infrastructure that allows for the creation of scalable, auditable, and controllable AI. Previously, he co-founded EleutherAI, which was one of the earliest and most successful open-source Large Language Model communities, as well as a home for early discussions on the risks of those same advanced AI systems. Prior to that, Connor worked as an AI researcher and engineer for Aleph Alpha GmbH.
Show Notes and More:
Watch this video episode on YouTube
Want to learn the broad overview of The Great Simplification in 30 minutes? Watch our Animated Movie.
---
Support The Institute for the Study of Energy and Our Future
Join our Substack newsletter
Join our Discord channel and connect with other listeners
How is AI actually being used in classrooms today? Are teachers adopting it, or resisting it? And could software eventually replace traditional instruction entirely? In this episode of This Week in Consumer AI, a16z partners Justine Moore, Olivia Moore, and Zach Cohen explore one of the most rapidly evolving — and widely debated — frontiers in consumer technology: education. They unpack how generative AI is already reshaping educational workflows, enabling teachers to scale feedback, personalize curriculum, and reclaim time from administrative tasks. They also examine emerging consumer behavior — from students using AI for homework to parents exploring AI-led learning paths for their children.
Resources:
Find Olivia on X: https://x.com/omooretweets
Find Justine on X: https://x.com/venturetwins
Find Zach on X: https://x.com/zachcohen25
Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
In episode 1883, Jack and Miles are joined by writer, comedian, and co-host of Yo, Is This Racist?, Andrew Ti, to discuss… America's Cold War Strategy Is Coming Home To Roost Huh? Our Information Environment Is So F**ked, Couple Wild Stories About People Not Knowing How To Act Around AI, and more!
Tucker Vs. Ted Smackdown
They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling.
Father of man killed in Port St. Lucie officer-involved shooting: 'My son deserved better'
LISTEN: Husk by Men I Trust
See omnystudio.com/listener for privacy information.
Before you hit that new chat button in ChatGPT, Claude, or Gemini... you're already doing it wrong. I've run 200+ live GenAI training sessions and taught more than 11,000 business pros, and this is one of the biggest mistakes. Blindly hitting that new chat button can end up killing any perceived productivity you think you're getting while using LLMs. Instead, you need to know the 101s of Gemini Gems, GPTs, and Projects. This is one AI at Work Wednesdays you can't miss.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Have a question? Join the convo here.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
- Harnessing Custom GPTs for Efficiency
- Google Gems vs. Custom GPTs Review
- ChatGPT Projects: Features & Updates
- Claude Projects Integration & Benefits
- Effective AI Chatbot Usage Techniques
- Leveraging AI for Business Growth
- Deep Research in ChatGPT Projects
- Google Apps Integration in Gems
Timestamps:
00:00 AI Chatbot Efficiency Tips
04:12 "Putting AI to Work Wednesdays"
08:39 "Optimizing ChatGPT Usage"
11:28 Similar Functions, Different Categories
15:41 Beyond Basic Folder Structures
16:25 ChatGPT Project Update
22:01 Email Archive and Albacross Software
24:34 Optimize AI with Contextual Data
27:49 "Improving Process Through Meta Analysis"
30:53 Data File Access Issue
33:27 File Handling Bug in New GPT
36:12 Continuous Improvement Encouragement
41:16 AI Selection Tool Website
43:34 Google Ecosystem AI Assistant
45:46 "Optimize AI Usage for Projects"
Keywords: Custom GPTs, Google's Gems, Claude's Projects, OpenAI ChatGPT, AI chatbots, Large Language Models, AI systems, Google Workspace, productivity tools, GPT-3.5, GPT-4, AI updates, API actions, reasoning models, ChatGPT Projects, AI assistant, file uploads, project management, AI integrations, Google Calendar, Gmail, Google Drive, context window, AI usage, AI-powered insights, Gemini 2.5 Pro, Claude Opus, Claude Sonnet, AI consultation, ChatGPT Canvas, Claude artifacts, generative AI, AI strategy partner, AI brainstorming partner.
Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Try Google Veo 3 today! Sign up at gemini.google to get started.
Software Engineering Radio - The Podcast for Professional Software Developers
In this episode of Software Engineering Radio, Abhinav Kimothi sits down with host Priyanka Raghavan to explore retrieval-augmented generation (RAG), drawing insights from Abhinav's book, A Simple Guide to Retrieval-Augmented Generation. The conversation begins with an introduction to key concepts, including large language models (LLMs), context windows, RAG, hallucinations, and real-world use cases. They then delve into the essential components and design considerations for building a RAG-enabled system, covering topics such as retrievers, prompt augmentation, indexing pipelines, retrieval strategies, and the generation process. The discussion also touches on critical aspects like data chunking and the distinctions between open-source and pre-trained models. The episode concludes with a forward-looking perspective on the future of RAG and its evolving role in the industry. Brought to you by IEEE Computer Society and IEEE Software magazine.
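As a companion to the concepts the episode walks through, here is a minimal, self-contained sketch of a RAG pipeline in Python. It is illustrative only, not code from the book: embed() is a toy bag-of-words stand-in for a real embedding model, and the final LLM call is left as a prompt you would pass to your generator of choice.

```python
# Minimal RAG sketch: chunk and index documents, retrieve top-k chunks by
# cosine similarity, then augment the prompt before generation.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; swap in a real embedding model in practice.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def chunk(doc: str, size: int = 40) -> list[str]:
    # Fixed-size word chunking; chunking strategy is a key design choice in RAG.
    words = doc.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

# Indexing pipeline: chunk every document and embed each chunk once.
corpus = ["RAG grounds LLM answers in retrieved documents to reduce hallucinations."]
index = [(c, embed(c)) for doc in corpus for c in chunk(doc)]

def retrieve(query: str, k: int = 3) -> list[str]:
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [c for c, _ in ranked[:k]]

def augmented_prompt(query: str) -> str:
    # Prompt augmentation: the retrieved context is prepended to the question.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(augmented_prompt("How does RAG reduce hallucinations?"))
```

The design levers the episode discusses map directly onto this skeleton: the chunker, the retriever's scoring function, and the prompt template are each places where production systems diverge from this toy version.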
Our Head of Asia Technology Research Shawn Kim discusses China's distinctly different approach to AI development and its investment implications. Read more insights from Morgan Stanley.
----- Transcript -----
Welcome to Thoughts on the Market. I'm Shawn Kim, Head of Morgan Stanley's Asia Technology Team. Today: a behind-the-scenes look at how China is reshaping the global AI landscape. It's Tuesday, June 10 at 2pm in Hong Kong. China has been quietly and methodically executing on its top-down strategy to establish its domestic AI capabilities ever since 2017. And while U.S. semiconductor restrictions have presented a near-term challenge, they have also forced China to achieve significant advancements in AI with less hardware. So rather than building the most powerful AI capabilities, China's primary focus has been on bringing AI to market with maximum efficiency. And you can see this with the recent launch of DeepSeek R1, and there are literally hundreds of AI start-ups using open-source Large Language Models to carve out niches and moats in this AI landscape. The key question is: What is the path forward? Can China sustain this momentum and translate its research prowess into global AI leadership? The answer hinges on four things: energy, data, talent, and computing. China's centralized government, with more than a billion mobile internet users, possesses enormous amounts of data. China also has access to abundant energy: it built ten nuclear power plants just last year, and ten more are coming this year. U.S. chips are far better for the moment, but China is also advancing quickly and getting a lot done without the best chips. Finally, China has plenty of talent; according to the World Economic Forum, 47 percent of the world's top AI researchers are now in China. Plus, there is already a comprehensive AI governance framework in place, with more than 250 regulatory standards ensuring that AI development remains secure, ethical, and strategically controlled. So, all in all, China is well on its way to realizing its ambitious goal of becoming a world leader in AI by 2030. And by that point, AI will be deeply embedded across all sectors of China's economy, supported by a regulatory environment. We believe the AI revolution will boost China's long-term potential GDP growth by addressing key structural headwinds to the economy, such as aging demographics and slowing productivity growth. We estimate that GenAI can create almost 7 trillion RMB in labor and productivity value. This equals almost 5 percent of China's GDP last year. And the investment implications of China's approach to AI cannot be overstated. It's clear that China has already established a solid AI foundation. And now meaningful opportunities are emerging not just for the big players, but also for smaller, mass-market businesses as well. And with value shifting from AI hardware to the AI application layer, we see China continuing its success in bringing AI applications to market and transforming industries in very practical terms. As history shows, whoever adopts and diffuses a new technology the fastest wins, and is difficult to displace. Thanks for listening. If you enjoy the show, please leave us a review wherever you listen and share Thoughts on the Market with a friend or colleague today.
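A quick back-of-the-envelope check of that sizing claim, sketched in Python. The roughly 135 trillion RMB nominal GDP figure for 2024 is an outside approximation used for this check, not a number given in the episode.

```python
# Rough check: is 7 trillion RMB about 5 percent of China's annual GDP?
genai_value = 7e12    # RMB, per the episode's estimate
gdp_2024 = 135e12     # RMB, approximate 2024 nominal GDP (my assumption)
print(f"{genai_value / gdp_2024:.1%}")  # -> 5.2%, consistent with "almost 5 percent"
```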
LARGE LANGUAGE MODEL OF THE EIGHTEENTH CENTURY: 1/4: Ingenious: A Biography of Benjamin Franklin, Scientist by Richard Munson (Author) https://www.amazon.com/Ingenious-Biography-Benjamin-Franklin-Scientist/dp/0393882233 Benjamin Franklin was one of the preeminent scientists of his time. Driven by curiosity, he conducted cutting-edge research on electricity, heat, ocean currents, weather patterns, chemical bonds, and plants. But today, Franklin is remembered more for his political prowess and diplomatic achievements than his scientific creativity. In this incisive and rich account of Benjamin Franklin's life and career, Richard Munson recovers this vital part of Franklin's story, reveals his modern relevance, and offers a compelling portrait of a shrewd experimenter, clever innovator, and visionary physicist whose fame opened doors to negotiate French support and funding for American independence. Munson's riveting narrative explores how science underpins Franklin's entire story―from tradesman to inventor to nation-founder―and argues that Franklin's political life cannot be understood without giving proper credit to his scientific accomplishments. 1752
LARGE LANGUAGE MODEL OF THE EIGHTEENTH CENTURY: 2/4: Ingenious: A Biography of Benjamin Franklin, Scientist by Richard Munson (Author) https://www.amazon.com/Ingenious-Biography-Benjamin-Franklin-Scientist/dp/0393882233 Benjamin Franklin was one of the preeminent scientists of his time. Driven by curiosity, he conducted cutting-edge research on electricity, heat, ocean currents, weather patterns, chemical bonds, and plants. But today, Franklin is remembered more for his political prowess and diplomatic achievements than his scientific creativity. In this incisive and rich account of Benjamin Franklin's life and career, Richard Munson recovers this vital part of Franklin's story, reveals his modern relevance, and offers a compelling portrait of a shrewd experimenter, clever innovator, and visionary physicist whose fame opened doors to negotiate French support and funding for American independence. Munson's riveting narrative explores how science underpins Franklin's entire story―from tradesman to inventor to nation-founder―and argues that Franklin's political life cannot be understood without giving proper credit to his scientific accomplishments. 1850
LARGE LANGUAGE MODEL OF THE EIGHTEENTH CENTURY: 3/4: Ingenious: A Biography of Benjamin Franklin, Scientist by Richard Munson (Author) https://www.amazon.com/Ingenious-Biography-Benjamin-Franklin-Scientist/dp/0393882233 Benjamin Franklin was one of the preeminent scientists of his time. Driven by curiosity, he conducted cutting-edge research on electricity, heat, ocean currents, weather patterns, chemical bonds, and plants. But today, Franklin is remembered more for his political prowess and diplomatic achievements than his scientific creativity. In this incisive and rich account of Benjamin Franklin's life and career, Richard Munson recovers this vital part of Franklin's story, reveals his modern relevance, and offers a compelling portrait of a shrewd experimenter, clever innovator, and visionary physicist whose fame opened doors to negotiate French support and funding for American independence. Munson's riveting narrative explores how science underpins Franklin's entire story―from tradesman to inventor to nation-founder―and argues that Franklin's political life cannot be understood without giving proper credit to his scientific accomplishments. 1906
LARGE LANGUAGE MODEL OF THE EIGHTEENTH CENTURY: 4/4: Ingenious: A Biography of Benjamin Franklin, Scientist by Richard Munson (Author) https://www.amazon.com/Ingenious-Biography-Benjamin-Franklin-Scientist/dp/0393882233 Benjamin Franklin was one of the preeminent scientists of his time. Driven by curiosity, he conducted cutting-edge research on electricity, heat, ocean currents, weather patterns, chemical bonds, and plants. But today, Franklin is remembered more for his political prowess and diplomatic achievements than his scientific creativity. In this incisive and rich account of Benjamin Franklin's life and career, Richard Munson recovers this vital part of Franklin's story, reveals his modern relevance, and offers a compelling portrait of a shrewd experimenter, clever innovator, and visionary physicist whose fame opened doors to negotiate French support and funding for American independence. Munson's riveting narrative explores how science underpins Franklin's entire story―from tradesman to inventor to nation-founder―and argues that Franklin's political life cannot be understood without giving proper credit to his scientific accomplishments. 1867 IN PARIS
There's a good chance that before November of 2022, you hadn't heard of tech nonprofit OpenAI or cofounder Sam Altman. But over the last few years, they've become household names with the explosive growth of the generative AI tool called ChatGPT. What's been going on behind the scenes at one of the most influential companies in history and what effect has this had on so many facets of our lives? Karen Hao is an award-winning journalist and the author of “Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI” and has covered the impacts of artificial intelligence on society. She joins WITHpod to discuss the trajectory AI has been on, economic effects, whether or not she thinks the AI bubble will pop and more.
Josh Batson, a research scientist at Anthropic, joins Kevin Frazier, AI Innovation and Law Fellow at Texas Law and Senior Editor at Lawfare, to break down two research papers—"Mapping the Mind of a Large Language Model" and "Tracing the thoughts of a large language model"—that uncovered some important insights about how advanced generative AI models work. The two discuss those findings as well as the broader significance of interpretability and explainability research. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show: http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.