AI data wars push Reddit to block the Wayback Machine
China Launches Three-Day Robot Olympics Featuring Football and Table Tennis
US government agency drops Grok after MechaHitler backlash, report says
Eli Lilly signs $1.3 billion deal with Superluminal to use AI to make obesity medicines
The AI Was Fed Sloppy Code. It Turned Into Something Evil. | Quanta Magazine
AI data centers made Americans' electricity bills 30% higher
Sam Altman says 'yes,' AI is in a bubble
Is the A.I. Sell-off the Start of Something Bigger?
Thousands of Grok chats are now searchable on Google
Opinion | Amy Klobuchar: I Knew A.I. Deepfakes Were a Problem. Then I Saw One of Myself.
2,178 Occult Books Now Digitized & Put Online, Thanks to the Ritman Library and Da Vinci Code Author Dan Brown
Pluralistic: "Privacy preserving age verification" is bullshit (14 Aug 2025)
How to use "skibidi" and other new slang added to Cambridge Dictionary
YouTube Is Making a Play to Host the Oscars
Leobait: Resisting AI Solutionism through Workplace Collective Action
So ... is AI writing any good?
Project Indigo
We used AI to analyse three cities. It's true: we now walk more quickly and socialise less

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Guest: Rich Skrenta

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors:
pantheon.io
helixsleep.com/twit
Sub to the Patreon to support the show and access the entire 2nd part of PPM's subtextual analysis of Eddington as soon as it drops: patreon.com/ParaPowerMapping

In which we decode Eddington's subtextual conspiracy themes, endeavoring to argue that the new Ari Aster is perhaps the first major, theatrically released film to have accurately encapsulated the essence of the technocratic AmerIsraeli Years of Lead—in accordance with my personal timeline of the ongoing deep political era, which would place its inception around Covid time—and the Silicon Valley capitalist elite's embrace of a strategy of tension in the cybernetic service of updating America's Total Info Awareness 2.0 operating system and the installation of their long-planned predictive policing panopticon.

We discuss: why the appearance of the globo "Antifa PMCs" isn't actually crypto-MAGA chicanery (seeing as they are Gladio operators); Joaquin Phoenix's turn as Sheriff Joe Cross, a Gen X, mumblecore, adoptive son of Sheriff Joe Arpaio type; Eddington as Nashville-esque ensemble comedy cum Coen Brothers Covid Western, with the accompanying masking/social distancing standoffs; diagnosing the alienation and social media siloing of the wokespeak & QAnon brain rot of that hot 2020 summer; the role of calibrated algorithmic control; Sheriff Cross's Israeli Civil Guard pin in the OG script; the unfortunate executive production of Len Blavatnik, the Zio·nist billionaire "philanthropist" tied to Brett Ratner, Weinstein, the Bronfmans, etc., mulling whether he might have vetoed the inclusion of that visual gag on Sheriff Joe's regalia vest; the Solidgoldmagikarp Proposed Hyperscale Data Center project, the underlying Pynchon-esque real estate development and land and water use conspiracy; the schizophrenic drifter character Lodge, who opens the film, and his Homeric oracle qualities, spiritually warning against the onset of the Age of AI-quarius; Mike the One Armed Man from Twin Peaks comparisons; Pynchonian Lodge
puns; Chekhov's Cough; Louise Cross, Sheriff Cross's wife, the one other farsighted character, and her Haldol prescription, evoking Twin Peaks again; a demonic Mark Zuckerberg hinted at as one of the shadowy backers of the Solidgoldmagikarp Data Center in the earlier draft; Gov. Grisham making it into the film by way of an honorary watch and Covid headlines; the David Dees vibe of the cell towers in the opening sequence and various 5G diatribes; Aster lurking on Twitter; an earlier version of the second scene in which Sheriff Cross wrestles with Officer Butterfly Jimenez over who gets to investigate the self-immolation death of a paraplegic conspiracy YouTuber named Mitchell and the Native school uniforms discovered in his accessible van (evoking Missing Indigenous Children); the film's abiding interest in the neocultures that have cropped up around QAnon & pedo-hunters; borderlands and issues of jurisdiction between the Sevilla Co. Sheriff and the Santa Lupe Pueblo Tribal Officers; Cesar Chavez & Dolores Huerta's (a New Mexican) Hispanic borderlands community union LUPE, aka La Union del Pueblo Entero, aka The Union of the Whole People; Santa Lupe Pueblo = SLP = Speech Language Pathologist?; the neighboring, colonized tribal peoples, at their slight remove from Eddington and Treatlerite American society moreover, being the observers best prepared to pathologize the alienation, atomization, and societal decay taking hold in the town over Covid; in regard to Speech Language Pathologists, the ever-present theme of miscommunication and the deterioration of consensus reality caused by social media echo chamber-induced myopia, as well as the specter of LLMs or Large Language Models; "Solidgoldmagikarp" alluding to AI & ChatGPT tokens that cause anomalous or erratic behavior...

FULL LINER NOTES ON THE PATREON

Music:
| Matt Akers - "Necessary Rhythms" https://matthewakers.bandcamp.com/album/tough-to-kill |
| Matt Akers - "Night Drive II (Detroit at 2 AM)" |
I think we're at the precipice of a pretty significant change in how we build software products. Obviously, the recent ascent of vibe coding and all the agentic coding tools that we find very useful and highly effective shows a difference in how we approach building products. But there's another change - not just in how we build, but in who these products are for.

This episode of The Bootstrapped Founder is sponsored by Paddle.com

The blog post: https://thebootstrappedfounder.com/building-for-the-age-of-ai-consumers/
The podcast episode: https://tbf.fm/episodes/410-building-for-the-age-of-ai-consumers

Check out Podscan, the podcast database that transcribes every podcast episode out there minutes after it gets released: https://podscan.fm
Send me a voicemail on Podline: https://podline.fm/arvid

You'll find my weekly article on my blog: https://thebootstrappedfounder.com
Podcast: https://thebootstrappedfounder.com/podcast
Newsletter: https://thebootstrappedfounder.com/newsletter

My book Zero to Sold: https://zerotosold.com/
My book The Embedded Entrepreneur: https://embeddedentrepreneur.com/
My course Find Your Following: https://findyourfollowing.com

Here are a few tools I use. Using my affiliate links will support my work at no additional cost to you.
- Notion (which I use to organize, write, coordinate, and archive my podcast + newsletter): https://affiliate.notion.so/465mv1536drx
- Riverside.fm (that's what I recorded this episode with): https://riverside.fm/?via=arvid
- TweetHunter (for speedy scheduling and writing Tweets): http://tweethunter.io/?via=arvid
- HypeFury (for massive Twitter analytics and scheduling): https://hypefury.com/?via=arvid60
- AudioPen (for taking voice notes and getting amazing summaries): https://audiopen.ai/?aff=PXErZ
- Descript (for word-based video editing, subtitles, and clips): https://www.descript.com/?lmref=3cf39Q
- ConvertKit (for email lists, newsletters, even finding sponsors): https://convertkit.com?lmref=bN9CZw
We explore how artificial intelligence works, why it "hallucinates" and how South Dakota students are envisioning how it serves people in the future. A DSU assistant professor walks us through the technicalities.
TrulySignificant.com presents Shane H. Tepper. He is a creative director, content strategist, and early leader in the emerging field of Large Language Model Optimization (LLMO). He helps brands improve visibility, accuracy, and narrative control across AI-native platforms like ChatGPT, Claude, and Perplexity.

With more than 15 years of experience spanning film, advertising, and B2B technology, Tepper operates at the intersection of storytelling and artificial intelligence. He builds content systems designed to be cited by the very models shaping how people search, compare, and make decisions in today's AI-driven world.

His recent work includes authoring a foundational white paper on LLMO, leading AI discoverability audits, and designing structured content frameworks optimized for machine ingestion and real-world performance. He advises organizations on LLMO strategy and AI-native content development.

Visit www.retina.media.com or email Shane directly with questions: Shanehtepper@gmail.com

Become a supporter of this podcast: https://www.spreaker.com/podcast/success-made-to-last-legends--4302039/support.
Live from ENGAGE 2025, Erin Hartman, CPA, Senior Manager – Firm Services, sits down with Argel Sabillo, CPA, Cofounder and Chief Executive Officer of HeyApril Inc., to discuss the ways he is reshaping the profession and leaving tradition behind. With a client base rooted in internet-based startups and small businesses, HeyApril offers full-scale, end-to-end accounting services. Argel shares how his journey has been defined by bold leaps of faith, innovation, and community impact. He offers practical insights on value-based pricing, subscription models, and tech stack optimization, while passionately advocating for firms to niche down and align their business models with mission and outcome, not just services. He also previews HeyApril's next frontier: using Large Language Models (LLMs) to turn client data into real-time, actionable insights. This is an episode packed with inspiration, strategic guidance, and a glimpse into the accounting firm of the future.

To find out more about transforming your business model, explore our business model transformation resources at aicpa-cima.com/tybm. You'll also see a link there to all of our previous podcast episodes.

This is a podcast from AICPA & CIMA, together as the Association of International Certified Professional Accountants. To enjoy more conversations from our global community of accounting and finance professionals, explore our network of free shows here. Your feedback and comments are welcome at podcast@aicpa-cima.com
In this episode of Elixir Wizards, host Sundi Myint chats with SmartLogic engineers and fellow Wizards Dan Ivovich and Charles Suggs about the practical tooling that surrounds Elixir in a consultancy setting. We dig into how standardized dev environments, sensible scaffolding, and clear observability help teams ship quickly across many client projects without turning every app into a snowflake. Join us for a grounded tour of what's working for us today (and what we've retired), plus how we evaluate new tech (including AI) through a pragmatic, Elixir-first lens.

Key topics discussed in this episode:
- Standardizing across projects: why consistent environments matter in consultancy work
- Nix (and flakes) for reproducible dev setups and faster onboarding
- Igniter to scaffold common patterns (auth, config, workflows) without boilerplate drift
- Deployment approaches: OTP releases, runtime config, and Ansible playbooks
- Frontend pipeline evolution: from Brunch/Webpack to esbuild + Tailwind
- Observability in practice: Prometheus metrics and Grafana dashboards
- Handling time-series and sensor data
- When Explorer can be the database
- Picking the right tool: Elixir where it shines, integrations where it counts
- Using AI with intention: code exploration, prototypes, and guardrails for IP/security
- Keeping quality high across multiple codebases: tests, telemetry, and sensible conventions
- Reducing context-switching costs with shared patterns and playbooks

Links mentioned:
http://smartlogic.io
https://nix.dev/
https://github.com/ash-project/igniter
Elixir Wizards S13E01 Igniter with Zach Daniel https://youtu.be/WM9iQlQSFg
https://github.com/elixir-explorer/explorer
Elixir Wizards S14E09 Explorer with Chris Grainger https://youtu.be/OqJDsCF0El0
Elixir Wizards S14E08 Nix with Norbert (Nobbz) Melzer https://youtu.be/yymUcgy4OAk
https://jqlang.org/
https://github.com/BurntSushi/ripgrep
https://github.com/resources/articles/devops/ci-cd
https://prometheus.io/
https://capistranorb.com/
https://ansible.com/
https://hexdocs.pm/phoenix/releases.html
https://brunch.io/
https://webpack.js.org/loaders/css-loader/
https://tailwindcss.com/
https://sass-lang.com/dart-sass/
https://grafana.com/
https://pragprog.com/titles/passweather/build-a-weather-station-with-elixir-and-nerves/
https://www.datadoghq.com/
https://sqlite.org/
Elixir Wizards S14E06 SDUI at Cars.com with Zack Kayser https://youtu.be/nloRcgngTk
https://github.com/features/copilot
https://openai.com/codex/
https://www.anthropic.com/claude-code
YouTube Video: Vibe Coding TEDCO's RFP https://youtu.be/i1ncgXZJHZs
Blog: https://smartlogic.io/blog/how-i-used-ai-to-vibe-code-a-website-called-for-in-tedco-rfp/
Blog: https://smartlogic.io/blog/from-vibe-to-viable-turning-ai-built-prototypes-into-market-ready-mvps/
https://www.thriftbooks.com/w/eragon-by-christopher-paolini/246801
https://tidewave.ai/

We Want to Hear Your Thoughts! Have questions, comments, or topics you'd like us to discuss in our season recap episode? Share your thoughts with us here: https://forms.gle/Vm7mcYRFDgsqqpDC9
This audio episode of Keluar Sekejap, Episode 167, discusses hot-button issues including the case of Zara Qairina Mahathir, who died after falling from the third floor of her dormitory. The incident has raised many questions and prompted calls for a transparent investigation.

Second, the focus shifts to the ASEAN AI Malaysia Summit 2025, held at MITEC on 12 and 13 August 2025, which saw YTL launch ILMU, a Malaysian-made Large Language Model (LLM) built for use across multiple sectors, marking an important step in the nation's aspiration to build its own AI capabilities.

Finally, the episode examines the upside-down flag issue championed by UMNO Youth Chief Akmal Salleh, following several incidents that have occurred over the past month.

Timestamp EP167
00:00 Intro
00:20 ASEAN AI Malaysia Summit 2025
33:20 Attack on YB Rafizi's Child
36:21 Justice For Zara Qairina
58:42 Upside-Down Flag Issue
The panel debates Microsoft's pushy AI search in Edge, privacy concerns over Copilot Memory, and compares AI tools like Perplexity and ChatGPT for search relevance. Chuck Joiner, David Ginsburg, Marty Jencius, Brian Flanigan-Arthurs, Web Bixby, Guy Serle, Jim Rea, and Jeff Gamet question Microsoft's past use of China-based engineers for U.S. military support, review a possible iPhone 17 sighting, and discuss Apple's testing secrecy.

Today's MacVoices is supported by TV+ Talk, our MacVoices series with Charlotte Henry focused on Apple TV+. From shows and other content to the business side, there's always something to learn about Apple's streaming service. Find it in the Categories listings on the web site or go directly to macvoices.com/category/tv-talk.

Show Notes:

Chapters:
[0:38] Microsoft's AI Browser and Pushy Search Prompts
[3:33] Perplexity vs. ChatGPT Search Quality
[6:06] Copilot Memory and Personalization Privacy Concerns
[8:00] Different AI Tools for Different Needs
[11:00] Microsoft's China-Based Military Support Controversy
[13:22] AI Search Engine Recommendations Document
[15:26] Alleged iPhone 17 Sighting in San Francisco
[18:48] Public Testing vs. Secrecy in Apple Prototypes
[22:32] Could Apple Leak Prototypes on Purpose?
[24:50] Closing Roundtable and Podcast Plugs

Links:
Microsoft trials Copilot Mode in Edge
https://www.engadget.com/ai/microsoft-trials-copilot-mode-in-edge-201851903.html
Copilot doesn't just remember, it also understands you
https://www.microsoft.com/en-us/microsoft-copilot/for-individuals/do-more-with-ai/general-ai/ai-that-doesnt-just-remember-it-gets-you?form=MA13KP
Microsoft's controversial Recall feature is now blocked by Brave and AdGuard
https://www.theverge.com/news/713676/brave-adguard-windows-recall-block-microsoft
Microsoft to stop using engineers in China for tech support of US military, Hegseth orders review
https://www.reuters.com/world/us/microsoft-stop-using-engineers-china-tech-support-us-military-hegseth-orders-2025-07-18/
iPhone 17 development device spotted in the wild
https://appleinsider.com/articles/25/07/28/iphone-17-development-device-spotted-in-the-wild

Guests:

Web Bixby has been in the insurance business for 40 years and has been an Apple user for longer than that. You can catch up with him on Facebook, Twitter, and LinkedIn, but he prefers Bluesky.

Brian Flanigan-Arthurs is an educator with a passion for providing results-driven, innovative learning strategies for all students, but particularly those who are at risk. He is also a tech enthusiast who has a particular affinity for Apple since he first used the Apple IIGS as a student. You can contact Brian on Twitter as @brian8944. He also recently opened a Mastodon account at @brian8944@mastodon.cloud.

Jeff Gamet is a technology blogger, podcaster, author, and public speaker. Previously, he was The Mac Observer's Managing Editor, and the TextExpander Evangelist for Smile. He has presented at Macworld Expo, RSA Conference, and several WordCamp events, along with many other conferences. You can find him on several podcasts such as The Mac Show, The Big Show, MacVoices, Mac OS Ken, This Week in iOS, and more.
Jeff is easy to find on social media as @jgamet on Twitter and Instagram, jeffgamet on LinkedIn, @jgamet@mastodon.social on Mastodon, and on his YouTube channel at YouTube.com/jgamet.

David Ginsburg is the host of the weekly podcast In Touch With iOS, where he discusses all things iOS, iPhone, iPad, Apple TV, Apple Watch, and related technologies. He is an IT professional supporting Mac, iOS, and Windows users. Visit his YouTube channel at https://youtube.com/daveg65 and find and follow him on Twitter @daveg65 and on Mastodon at @daveg65@mastodon.cloud.

Dr. Marty Jencius has been an Associate Professor of Counseling at Kent State University since 2000. He has over 120 publications in books, chapters, journal articles, and others, along with 200 podcasts related to counseling, counselor education, and faculty life. His technology interest led him to develop counseling profession "firsts," including listservs, a web-based peer-reviewed journal (The Journal of Technology in Counseling), teaching and conferencing in virtual worlds as the founder of Counselor Education in Second Life, and podcasts as founder/producer of CounselorAudioSource.net and ThePodTalk.net. Currently, he produces a podcast about counseling and life questions, the Circular Firing Squad, and digital video interviews with legacies of the field, capturing the history of counseling. He is also co-host of The Vision ProFiles podcast. Generally, Marty is chasing the newest tech trends, which explains his interest in A.I. for teaching, research, and productivity. Marty is an active presenter and past president of the NorthEast Ohio Apple Corp (NEOAC).

Jim Rea built his own computer from scratch in 1975, started programming in 1977, and has been an independent Mac developer continuously since 1984. He is the founder of ProVUE Development, and the author of Panorama X, ProVUE's ultra-fast RAM-based database software for the macOS platform.
He's been a speaker at MacTech, Macworld Expo, and other industry conferences. Follow Jim at provue.com and via @provuejim@techhub.social on Mastodon.

Guy Serle, best known for being one of the co-hosts of the MyMac Podcast, sincerely apologizes for anything he has done or caused to have happened while in possession of dangerous podcasting equipment. He should know better, but being a blonde from Florida means he's probably incapable of understanding the damage he has wrought. Guy is also the author of the novel The Maltese Cube. You can follow his exploits on Twitter, catch him on Mac to the Future on Facebook, at @Macparrot@mastodon.social, and find everything at VertShark.com.

Support:
Become a MacVoices Patron on Patreon http://patreon.com/macvoices
Enjoy this episode? Make a one-time donation with PayPal

Connect:
Web: http://macvoices.com
Twitter: http://www.twitter.com/chuckjoiner
http://www.twitter.com/macvoices
Mastodon: https://mastodon.cloud/@chuckjoiner
Facebook: http://www.facebook.com/chuck.joiner
MacVoices Page on Facebook: http://www.facebook.com/macvoices/
MacVoices Group on Facebook: http://www.facebook.com/groups/macvoice
LinkedIn: https://www.linkedin.com/in/chuckjoiner/
Instagram: https://www.instagram.com/chuckjoiner/

Subscribe:
Audio in iTunes
Video in iTunes
Subscribe manually via iTunes or any podcatcher:
Audio: http://www.macvoices.com/rss/macvoicesrss
Video: http://www.macvoices.com/rss/macvoicesvideorss
After many months of making fun of the term "vibe coding," Emily and Alex tackle the LLMs-as-coders fad head-on, with help from security researcher Susanna Cox. From one person's screed proclaiming everyone not on the vibe-coding bandwagon to be crazy, to the grandiose claim that LLMs could be the "opposable thumb" of the entire world of computing. It's big yikes, all around.

Susanna Cox is a consulting AI security researcher and a member of the core author team at OWASP AI Exchange.

References:
My AI Skeptic Friends Are All Nuts
LLMs: the opposable thumb of computing
A disastrous day in the life of a vibe coder

Also referenced:
Signal president Meredith Whittaker on the fundamental security problem with agentic AI
The "S" in MCP stands for security
Our Opinions Are Correct: The Turing Test is Bullshit

AI Hell:
Sam Altman: The (gentle) singularity is already here
What do the boosters think reading is, anyway?
Meta's climate model made up fake CO2 removal ideas
Ongoing lawsuit means all your ChatGPT conversations will be saved
"Dance like you're part of the training set"
Some Guy tries to mansplain Signal to…Signal's president
WSJ headline claims ChatGPT "self-reflection", gets dunked

Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.

Our book, 'The AI Con,' is out now! Get your copy now.

Subscribe to our newsletter via Buttondown.

Follow us!
Emily
Bluesky: emilymbender.bsky.social
Mastodon: dair-community.social/@EmilyMBender
Alex
Bluesky: alexhanna.bsky.social
Mastodon: dair-community.social/@alex
Twitter: @alexhanna

Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.
Sergej Kotliar is the CEO of Bitrefill, while Matt Ahlborg recently created PPQ.AI to enable millions of users to experience LLMs without expensive subscriptions. But how are bitcoin payments doing? In this episode, they present their latest stats.
Co-hosts Mark Thompson and Steve Little explore the groundbreaking release of ChatGPT-5, which arrived after over a year of anticipation. They discuss how this new model transforms the AI landscape with better reasoning, larger context windows, and dramatically reduced hallucinations.

The hosts examine OpenAI's new Study and Learn Mode, which acts as a personal tutor rather than just providing answers, making it ideal for genealogists who want to deepen their understanding of their favourite topic.

This week's Tip of the Week cautions beginners about challenging AI tasks like handwritten transcriptions and structured files, recommending they master the basics first.

In RapidFire, they cover OpenAI's first open-source release since 2019, NotebookLM's video capabilities, and impressive AI company earnings reports.

Timestamps:
In the News:
00:55 ChatGPT-5 Has Arrived: Improved Features (mostly) for Genealogists
16:29 OpenAI's Study and Learn Mode: Your Personal Genealogy Tutor
23:40 Claude Releases Opus 4.1: Enhanced Reasoning and Writing
Tip of the Week:
29:25 AI Tasks for Beginners to Be Cautious Of
RapidFire:
40:16 OpenAI Releases First Open Source Model Since 2019
48:33 NotebookLM Upgrade Adds Video Support
53:34 AI Companies Report Record Earnings

Resource Links:
Introduction to Family History AI
https://tixoom.app/fhaishow/
OpenAI GPT-5 Model Card
https://openai.com/index/gpt-5-system-card/
Introducing study mode
https://openai.com/index/chatgpt-study-mode/
ChatGPT Study Mode - FAQ
https://help.openai.com/en/articles/11780217-chatgpt-study-mode-faq
Claude Opus 4.1
https://www.anthropic.com/news/claude-opus-4-1
OpenAI announces two "gpt-oss" open AI models
https://arstechnica.com/ai/2025/08/openai-releases-its-first-open-source-models-since-2019/
Google's NotebookLM rolls out Video Overviews
https://techcrunch.com/2025/07/29/googles-notebooklm-rolls-out-video-overviews/
Tech bubble going pop: AI pays the price for inflated expectations
https://www.theguardian.com/commentisfree/article/2024/aug/07/the-guardian-view-on-a-tech-bubble-going-pop-ai-pays-the-price-for-inflated-expectations
Is The AI Bubble About To Burst?
https://www.forbes.com/sites/bernardmarr/2024/08/07/is-the-ai-bubble-about-to-burst/
Google loses appeal in antitrust battle with Fortnite maker
https://masslawyersweekly.com/2025/08/06/google-play-monopoly-verdict-epic-games-win/
Department of Justice Prevails in Landmark Antitrust Case Against Google
https://www.justice.gov/opa/pr/department-justice-prevails-landmark-antitrust-case-against-google

Tags:
Artificial Intelligence, Technology, Genealogy, Family History, OpenAI, ChatGPT-5, Claude, Large Language Models, AI Learning Tools, Study Mode, Open Source AI, NotebookLM, Video Overviews, AI Reasoning, Context Windows, Hallucination Reduction, GEDCOM Files, Handwritten Transcription, Document Analysis, AI Earnings, Google Antitrust, Apache License, Local AI Processing, Privacy, AI Education, Tutoring Systems, Coding Capabilities, Multilingual Processing, AI Development, Family History Research, Genealogists, AI Tools, Machine Learning
This episode is sponsored by SearchMaster. Optimize your content for traditional search engines AND next-generation AI traffic from Large Language Models like ChatGPT, Claude, and Perplexity. Future-proof your SEO strategy. Sign up now for a 2-week free trial!

Watch this episode on YouTube!

In this episode of the Marketing x Analytics Podcast, host Alex Sofronas interviews Justin Abrams, CEO and founder of Aryo Consulting Group, a Boston-based consultancy. Justin discusses Aryo's approach to integrating strategy, marketing, and technology to help small businesses grow, comparing his firm to a 'McKinsey for small business.' They also delve into the challenges with Return on Ad Spend (ROAS), the impact of AI on various industries, and the future of software development and digital marketing, highlighting opportunities for entrepreneurs and local communities.

Follow Marketing x Analytics! X | LinkedIn

Click Here for Transcribed Episodes of Marketing x Analytics

All views are our own.
What's the Fastest Way to Get Webpages Indexed (Technical SEO) by Search Engines? with SEO Expert Favour Obasi-ike, MBA, MS | Get exclusive SEO newsletters in your inbox.

This episode focuses on search engine optimization (SEO) and the fastest ways to get indexed by search engines, extending beyond just Google to include other platforms and AI-powered large language models (LLMs) like ChatGPT. Favour emphasizes that building trust with search engine algorithms is paramount, achieved through consistent content creation, linking strategies (backlinks), and connecting websites via tools like Google Search Console. He highlights the importance of updating existing content and addressing user queries to improve visibility across various search and AI platforms, ultimately advocating for a strategic and patient approach to online presence rather than solely focusing on a single ranking metric.

AD BREAK: Get 20% off your first booking & be the first to know about our new arrivals, spa deals, and events with Somatic Massage

Frequently Asked Questions about Search Engine Indexing and Online Presence

What is the fastest way to get indexed by search engines?
The fastest way to get indexed by search engines is by building trust and establishing connections. This means having conversations around the questions people are asking and providing answers in the form of website links. These links should then be shared on other reputable websites to create backlinks, which signal to search engines that your website has authority. It's not just about creating a lot of content, but about creating relevant, high-quality content that answers user queries and is linked to by trusted sources.

Why isn't my website ranking on search engines?
There are several reasons why your website might not be ranking. Common issues include not having your website manually indexed or automatically discovered by search engines, or not being connected to Google Search Console.
Additionally, your content might not be seen if it's not frequently updated, as AI servers and search engines prioritize recently modified content. A lack of engagement and underutilization of your website compared to time spent on social media can also hinder its visibility. Essentially, if search engines aren't "seeing" your content, they can't recommend it.

How long does it take for SEO efforts to show results?
Ranking SEO web pages on Google and other search engines takes time and consistency. While immediate indexing can occur within hours or days for consistent posters, significant milestones, such as receiving your first 10 clicks, can take around six months, even with hundreds of articles. The key is consistent effort, building trust with algorithms, and maintaining an active online presence. The compound effect of consistent content creation can lead to substantial impressions over time.

How does trust factor into search engine ranking?
Trust is paramount for search engine ranking. Just as in human relationships, search engines, particularly Google, rely on trust to refer content. This trust is established when other third-party websites, which Google already trusts, link to your website, thereby vouching for your site's authority. These "off-page SEO referring domains" (like links on Reddit, Trustpilot, LinkedIn, Pinterest) may have varying impact, but they contribute to your credibility and signal to search engines that your content is valuable and reliable.

Is traditional SEO still relevant with the rise of AI and Large Language Models (LLMs) like ChatGPT?
Yes, traditional SEO is still very relevant and, in fact, synergistic with AI and LLMs. While AI provides generative answers, it often sources its information from traditional search engines like Google.
Therefore, optimizing your content for Google through good SEO practices (like answering frequently asked questions, using appropriate keywords, and having a well-structured sitemap) directly contributes to your brand being cited and mentioned in AI-generated responses. AI and SEO are not competing but are interdependent, with AI leveraging the foundation built by strong SEO.

How can I optimize my content for AI search engines?
To optimize for AI search engines, focus on providing succinct, evidence-based answers to specific, question-based headings, similar to "People Also Ask" sections on Google. Ensure your content is frequently updated (the "last modified" date is recent), as AI prioritizes fresh information. AI servers are looking for up-to-date, relevant context. By consistently creating and updating content that answers user queries, and by connecting your website to search engines via tools like Google Search Console, you increase the likelihood of being sourced and mentioned by AI.

What is the significance of a "sitemap" and "DNS" in getting indexed?
A sitemap acts as a map of your website, providing search engines with a structured list of all your pages, products, and blogs. Submitting an updated sitemap is crucial for search engines to crawl and understand your site's content. DNS (Domain Name System) is like your website's unique digital DNA, confirming your ownership of the domain. Connecting your DNS record with a unique identification number (like a TXT record from Google Search Console) gives search engines access to your site's architecture, allowing them to effectively read and index your content.

What is the difference between manual and auto-indexing, and how do they impact visibility?
Manual indexing involves actively submitting your website or specific pages to search engines (e.g., through Google Search Console) to ensure they are discovered.
Auto-indexing refers to the automatic discovery and crawling of your site by search engines over time, due to consistent activity and established trust. While manual indexing provides an initial push, consistent content creation and updates increase your "crawl budget," leading to higher priority and more frequent auto-indexing. Both are important; consistent manual effort eventually leads to more efficient auto-indexing and better long-term visibility.

Digital Marketing Resources:
>> Join our exclusive SEO Marketing community
>> SEO Optimization Blogs
>> Book Complimentary SEO Discovery Call
>> Subscribe to We Don't PLAY Podcast

Brands We Love and Support
Loving Me Beauty | Buy Vegan-based Luxury Products
Unlock your future in real estate—get certified in Ghana today!

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Obvious: ChatGPT's GPT-5 is here, and it's really good.
Not so obvious: the gap between novices and experts just shrunk 90%.

In a few short hours, OpenAI gave even free users access to what is now the world's most powerful model. Since ChatGPT is the most used AI chatbot in the world by a wide margin, the quality of work we all produce has also just gotten a huge bump. But there's a lot beneath the surface. Join us as we dissect what's new in GPT-5 and 7 big trends you probably don't know but should pay attention to.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
GPT-5 Official Release Overview
GPT-5 User Rollout to 700 Million
GPT-5 Unified Hybrid Model Architecture
Auto Model Switching and User Control
Major Upgrade for Free ChatGPT Users
GPT-5 Benchmark and Accuracy Improvements
GPT-5 Vibe Coding and Canvas Features
Advanced Voice Mode in Custom GPTs
Reduced Hallucinations and Sycophancy
Microsoft Copilot Instant GPT-5 Upgrade
Impact on Enterprise Software and APIs
GPT-5 Disruptive API Pricing Structure
Trends in Corporate AI Adoption

Timestamps:
00:00 "Everyday AI Insights"
05:54 "Adaptive Model Response Modes"
08:14 GPT-4o Model Critique
11:17 GPT-4o Nano Upgrade Impact
17:26 GPT Model Selection Simplified
20:53 Canvas Code Rendering and Quick Answer Feature
24:09 "GPT-5 Model Routing Overview"
26:44 "GPT-5: Your New Daily Driver"
30:08 AI Model Advances: Game-Changing Improvements
33:43 Advanced Voice Mode in GPTs
37:45 Massive Microsoft Copilot Upgrade
38:49 Software Access and Licensing Challenges
43:09 AI Implementation Challenges in Top Companies
46:37 "GPT-5 Testing and Trends"

Keywords:
GPT-5, GPT5, OpenAI, AI model update, Large Language Model, flagship model, hybrid model, AI technology, model auto-switching, deep thinking mode, fast response mode, model router, free AI access, paid ChatGPT users, ChatGPT free users, model selection, GPT-4o, GPT-4 Turbo, model reasoning, hallucination rate, sycophancy reduction, advanced voice mode, GPTs custom models, Canvas mode, Vibe coding, API pricing, API tokens, Microsoft Copilot, Microsoft 365 Copilot, GitHub Copilot, enterprise AI upgrade, LM Arena, ELO score, Anthropic, Claude 4.1, Claude Sonnet, Gemini 2.5 Pro, personalized AI assistant, software innovation, coding capabilities, Inc 5000 companies, enterprise adoption, custom instructions, Pro plan, Plus plan, thinking mode, human preference, automated r

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Ready for ROI on GenAI? Go to youreverydayai.com/partner
Shannon Wongvibulsin, MD, PhD and Shreya Johri, PhD interviewed by William Lewis, MD
Special guest and originating visionary Hayley of Vancouver joins us in the Shed for our milestone 200th episode! We try to play it all very cool, even though (as we hear again in this episode) we have already caught the attention of various social media tool producers by our sheer longevity. Nothing spells success like people thinking they could make money from you, are we right? Still, even though on the inside we're dancing around shouting and high-fiving each other, we coolly look at an unusually weak moneymaker, listen to Hayley's deep disappointment over her middle name, hear another piece of Listener Mail, discuss baby names, hear about Rich's planning for three foosball trips (one of which is anachronistic), and then…and then we slide into politics, as though against our will. Don't worry, it's grim. Your Shed Dogs have come to this. Stick with us, we'll all get through it together.Links: Shed Dogs; the award-winning Shambhala episode featuring Hayley of Vancouver; what RJ calls “bots”, at least the conversational ones like ChatGPT's text mode, are actually Large Language Models (LLMs); Magic Mind (not an endorsement); Roderick on the Line (the buddy podcast RJ refers to); Michael Alig; the Canada Foosball Hall of Fame; fascism in the USA; Melania's prenup; Ōura Ring.Theme music is Escaping like Indiana Jones by Komiku, with permission.
"As agentic AI spreads across industries, everybody is curious to understand how it is going to transform customer experience across all the channels," says Rishi Rana, Chief Executive Officer at Cyara.

In this episode of the Tech Transformed podcast, Shubhangi Dua, Host and Podcast Producer at EM360Tech, talks with Rishi Rana, CEO of Cyara, about how agentic AI is changing customer experience (CX). They look at how AI has developed from simple chatbots to advanced systems that can understand and predict customer needs. Rana spotlights the need for ongoing testing and monitoring to make sure AI solutions work well and comply with regulations. They also discuss the obstacles businesses encounter when implementing AI, the importance of good data, and the future of AI agents in improving customer interactions.

Agentic AI Transforming Customer Experience (CX)
Customer experience is changing quickly and significantly, thanks to the rise of agentic AI. These advanced systems go beyond the basic chatbots of the past. While the change promises a smart, proactive customer journey, it doesn't come without challenges, and those obstacles require organisations to plan thoughtfully and execute carefully.

For years, chatbots provided a basic type of automated customer support. However, Rana explains that the evolution of AI is pushing boundaries: "AI in customer experience (CX) is changing from a basic level of chatbots that have been present for the last five or 10 years. Now they are turning into fully agentic systems that operate across voice, digital and human-assisted channels."

Moving Beyond Basic Chatbots
Chatbots' recent leap forward rests on the strengths of Large Language Models (LLMs) such as Google's Gemini, Meta's Llama, and OpenAI's ChatGPT. These underlying models enable "voice bots" and other AI agents to move beyond simple response automation to intelligent orchestration: anticipating user needs, adjusting in real time, and guiding customers to hybrid solutions where AI and human agents work together. Ultimately, the goal is to greatly improve the customer experience. Studies suggest that 86 per cent of people are willing to pay more for the same service, whatever it is, when the customer experience is better.

Advancements don't come without a price. Rana believes the lack of proper guardrails is a cause for concern: "AI is great, but you need to have guardrails and ensure the intent behind the questions and the objective behind the customer interaction is getting answered." This requires ongoing testing and monitoring across all channels to ensure consistency and avoid problems like hallucinations, misuse, or bias, issues that can result in major financial losses and reputational damage. For instance, Rana noted that over "$10 billion in violations and liabilities due to incorrect information given to customers" occurred in 2024 alone.

To successfully execute agentic AI, enterprises must shift left with AI by...
In this episode of Eye on AI, host Craig Smith sits down with Alex Salazar, co-founder and CEO of Arcade.dev, to explore what it really takes to build secure, scalable AI agents that can take real-world actions. While everyone's talking about the future of autonomous agents, most never make it past the demo stage. Why? Because agents today lack secure infrastructure to connect with real tools like Gmail, Slack, Notion, and GitHub, and to do so on behalf of users without breaking authentication protocols. Alex shares how Arcade solves the missing layer in AI agent development: secure tool execution, user-specific authorization, OAuth flows, and production-ready consistency. Whether you're building with GPT-4, Claude, or open-source models, Arcade handles the hard part: making agent actions actually work.

Stay Updated:
Craig Smith on X: https://x.com/craigss
Eye on A.I. on X: https://x.com/EyeOn_AI

(00:00) Why AI Agents Can't Take Action (Yet)
(01:27) Meet Alex Salazar: From Okta to Arcade
(03:39) What Arcade.dev Actually Does
(05:16) Agent Protocols: MCP, ACP & Where Arcade Fits
(07:36) Arcade Demo: Building a Multi-Tool AI Agent
(11:16) Handling Secure Authentication with OAuth
(14:40) Why Agents Need User-Tied Authorization
(19:25) Tools vs APIs: The Real Interface for LLMs
(23:41) How Arcade Ensures Agents Go Beyond Demos
(25:48) Why Arcade Focuses on Developers, Not Consumers
(27:55) The Roadblocks to Production-Ready Agents
(31:15) How Arcade Integrates Into Agent Workflows
(33:16) Tool Calling & Model Compatibility Challenges
(34:49) Arcade's Pricing Model Explained
(36:20) Competing with Big Tech: IBM, AWS & Others
(38:38) Future of Agents: From Hype to Workflow Automation
(41:58) Real Use Cases: Email Agents, Slack Bots, Finance & More
(46:17) Agent Marketplaces & The Arcade Origin Story
Co-hosts Mark Thompson and Steve Little explore OpenAI's groundbreaking ChatGPT Agent, demonstrating how this autonomous tool can research, analyze, and perform complex tasks on your behalf. Next, they address important security concerns in the new world of AI agents, introducing practical guidelines for protecting sensitive family data and avoiding prompt injection attacks. This week's Tip of the Week provides a back-to-basics guide to what AI is and its four core strengths: summarization, extraction, generation, and translation. In RapidFire, they discuss OpenAI's rumored office suite, Microsoft's and Google's own efforts to integrate AI into their office suites, and recently announced AI infrastructure investments, including Meta's Manhattan-sized data center and President Trump's new AI Action Plan.

The hosts also announce their new Family History AI Show Academy, a five-week course beginning in October of 2025. See https://tixoom.app/fhaishow/ for more details.

Timestamps:
In the News:
05:20 ChatGPT Agent: Autonomous Research Assistant for Genealogists
22:49 Safe and Secure in the Age of AI
Tip of the Week:
36:20 What is AI and What is it Good For? Back to Basics
RapidFire:
50:57 OpenAI's Office Suite Rumors
53:56 Microsoft and Google Bring AI to Their Office Suites
60:17 Big AI Infrastructure: Manhattan-Sized Data Centers

Resource Links:
Introduction to Family History AI: https://tixoom.app/fhaishow/
Do agents work in the browser? https://www.bensbites.com/p/do-agents-work-in-the-browser
Introducing ChatGPT agent: bridging research and action: https://openai.com/index/introducing-chatgpt-agent/
OpenAI's new ChatGPT Agent can control an entire computer and do tasks for you: https://www.theverge.com/ai-artificial-intelligence/709158/openai-new-release-chatgpt-agent-operator-deep-research
OpenAI's New ChatGPT Agent Tries to Do It All: https://www.wired.com/story/openai-chatgpt-agent-launch/
Agent demo post: https://x.com/rowancheung/status/1945896543263080736
OpenAI Quietly Designed a Rival to Google Workspace, Microsoft Office: https://www.theinformation.com/articles/openai-quietly-designed-rival-google-workspace-microsoft-office
OpenAI Is Quietly Creating Tools to Take on Microsoft Office and Google Workspace: https://www.theglobeandmail.com/investing/markets/stocks/MSFT/pressreleases/33074368/openai-is-quietly-creating-tools-to-take-on-microsoft-office-and-google-workspace-googl/
What's new in Microsoft 365 Copilot? https://techcommunity.microsoft.com/blog/microsoft365copilotblog/what%E2%80%99s-new-in-microsoft-365-copilot--june-2025/4427592
Google Workspace enables the future of AI-powered work for every business: https://workspace.google.com/blog/product-announcements/empowering-businesses-with-AI
Google Workspace Review: Will it Serve My Needs? https://www.emailtooltester.com/en/blog/google-workspace-review/

Tags:
Artificial Intelligence, Genealogy, Family History, AI Agents, ChatGPT Agent, OpenAI, Computer Use, AI Security, Prompt Injection, Database Analysis, RootsMagic, Cemetery Records, AI Office Suite, Microsoft 365 Copilot, Google Workspace, Data Centers, AI Infrastructure, Natural Language Processing, Large Language Models, Context Windows, AI Education, Family History AI Show Academy, AI Reasoning Models, Autonomous Research, AI Ethics
Welcome to Chat GPT, the only podcast where artificial intelligence takes the mic to explore the fascinating, fast-changing world of AI itself. From ethical dilemmas to mind-bending thought experiments, every episode is written and narrated by AI to help you decode the technology shaping our future. Whether you're a curious beginner or a seasoned techie, this is your front-row seat to the rise of intelligent machines—told from their perspective. Tune in for smart stories, surprising insights, and a glimpse into the future of thinking itself. Listen Ad Free https://www.solgoodmedia.com - Listen to hundreds of audiobooks, thousands of short stories, and ambient sounds all ad free!
There's a new most powerful AI model in town. Apple is trying to make a ChatGPT competitor. And OpenAI? Well... they're in a capacity crunch.

Big Tech made some BIG moves in AI this week. And you probably missed them. Don't worry. We gotchyu. On Mondays, Everyday AI brings you the AI News that Matters. No B.S. No marketing fluff. Just what you need to know to be the smartest person in AI at your company.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
OpenAI Study Mode in ChatGPT Launch
Google Gemini 2.5 Deep Think Release
Gemini 2.5 Parallel Thinking and Coding Benchmarks
Google AI Mode: PDF and Canvas Features
NotebookLM Video Overviews Customization
Microsoft Edge Copilot Mode Experimental Rollout
OpenAI GPT-5 Model Launch Delays
Apple Building In-House ChatGPT Competitor
Microsoft and OpenAI Partnership Renegotiation
Additional AI Tool Updates: Runway, Midjourney, Ideogram

Timestamps:
00:00 AI Industry Updates and Competition
03:22 ChatGPT's Study Mode Promotes Critical Thinking
09:02 "Google AI Search Mode Enhancements"
10:21 Google AI Enhances Learning Tools
16:14 Microsoft Edge Introduces Copilot Mode
20:18 OpenAI GPT-5 Delayed Speculation
22:42 Apple Developing In-House ChatGPT Rival
27:06 Microsoft-OpenAI Partnership Renegotiation
30:51 Microsoft-OpenAI Partnership Concerns Rise
33:23 AI Updates: Video, Characters, Amazon

Keywords:
Microsoft and OpenAI renegotiation, Copilot, OpenAI, GPT-5, AI model, Google Gemini 2.5, Deep Think mode, Google AI mode, Canvas mode, NotebookLM, AI browser, Agentic browser, Edge browser, Perplexity Comet, Sora, AI video tool, AI image editor, Apple AI chatbot, ChatGPT competitor, Siri integration, Artificial General Intelligence, AGI, Large Language Models, AI education tools, Study Mode, Academic cheating, Reinforcement learning, Parallel thinking, Code Bench Competition, Scientific reasoning, Chrome, Google Lens, Search Live, AI-powered search, PDF upload, Google Drive integration, Anthropic, Meta, Superintelligent labs, Amazon Alexa, Fable Showrunner, Ideogram, Midjourney, Luma Dream Machine, Zhipu GLM 4.5, Runway Alif, Adobe Photoshop harmonize, AI funding, AI product delays, AI feature rollout, AI training, AI onboarding, AI-powered presentations, AI-generated overviews, AI in business, AI technology partnership, AI investment, AI talent acq

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Ready for ROI on GenAI? Go to youreverydayai.com/partner
The topics in this science news roundup: +++ Plastic is harmful not only as waste, but even before that +++ Male leopard seals sing a kind of nursery rhyme +++ AI training via human feedback could lead to more misinformation +++

**********

Further sources for this episode:
The Lancet Countdown on health and plastics, The Lancet, 03.08.2025
Leopard seal song patterns have similar predictability to nursery rhymes, Scientific Reports, 31.07.2025
Machine Bullshit: Characterizing the Emergent Disregard for Truth in Large Language Models, arXiv, 10.07.2025
Fermented fruits: scrumping, sharing, and the origin of feasting, BioScience, 31.07.2025
Sun, sea and sexual violence: the political economy of party tourism, International Affairs, 01.07.2025
You can find all sources here.

**********

You can also follow us on these channels: TikTok and Instagram.
In this episode of Theory & Insights, we bring together two thought leaders at the intersection of healthcare innovation and pharmaceutical manufacturing — John Nosta, renowned AI and technology theorist and founder of NostaLab, and Stephen Beckman, CEO of YARAL Pharma, a rising force in U.S. generics. Together, they dive into the evolving impact of Artificial Intelligence (AI) and Large Language Models (LLMs) on pharmaceutical manufacturing. The discussion covers the promise and peril of AI in reshaping everything from R&D to regulatory pathways, as well as the ethics, economics, and operational shifts that could redefine the industry in the next decade. This is a must-listen for pharma execs, digital health strategists, and technology innovators looking to understand what's next.
Welcome to Episode 407 of the Microsoft Cloud IT Pro Podcast. In this episode, we dive deep into the Model Context Protocol (MCP), a game-changing specification that extends the capabilities of Large Language Models (LLMs) and creates exciting new possibilities for IT professionals working with Microsoft Azure and Microsoft 365. MCP represents a significant shift toward more extensible and domain-specific AI interactions. Instead of being limited to pre-trained knowledge, you can now connect your AI tools directly to live data sources, APIs, and services that matter to your specific role and organization. Whether you're managing Azure infrastructure, creating content, or developing solutions, MCP provides a framework to make your AI interactions more powerful and contextually relevant to your daily workflows.

Your support makes this show possible! Please consider becoming a premium member for access to live shows and more. Check out our membership options.

Show Notes:
Introducing the Model Context Protocol
Understanding MCP server concepts
Understanding MCP client concepts
A list of applications that support MCP integrations
About the sponsors

Would you like to become the irreplaceable Microsoft 365 resource for your organization? Let us know!
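To make the client/server model concrete, here is a rough sketch of the shape of an MCP exchange. MCP messages are JSON-RPC 2.0; the method names ("tools/list", "tools/call") come from the MCP specification, while the `get_vm_status` tool, its schema, and the VM name are invented for this illustration:

```python
import json

# Client asks an MCP server which tools it exposes.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# A hypothetical server response advertising one Azure-flavored tool.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [{
            "name": "get_vm_status",  # invented tool for illustration
            "description": "Return the power state of an Azure VM",
            "inputSchema": {
                "type": "object",
                "properties": {"vm_name": {"type": "string"}},
                "required": ["vm_name"],
            },
        }]
    },
}

# The host's LLM decides to invoke the tool; the client sends tools/call.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_vm_status",
               "arguments": {"vm_name": "web-01"}},
}

print(json.dumps(call_request, indent=2))
```

The server would execute the tool against the live API and return the result, which the host feeds back into the model's context — this is how MCP sidesteps the knowledge cut-off problem the episode describes.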
Prof. David Krakauer, President of the Santa Fe Institute argues that we are fundamentally confusing knowledge with intelligence, especially when it comes to AI.He defines true intelligence as the ability to do more with less—to solve novel problems with limited information. This is contrasted with current AI models, which he describes as doing less with more; they require astounding amounts of data to perform tasks that don't necessarily demonstrate true understanding or adaptation. He humorously calls this "really shit programming".David challenges the popular notion of "emergence" in Large Language Models (LLMs). He explains that the tech community's definition—seeing a sudden jump in a model's ability to perform a task like three-digit math—is superficial. True emergence, from a complex systems perspective, involves a fundamental change in the system's internal organization, allowing for a new, simpler, and more powerful level of description. He gives the example of moving from tracking individual water molecules to using the elegant laws of fluid dynamics. For LLMs to be truly emergent, we'd need to see them develop new, efficient internal representations, not just get better at memorizing patterns as they scale.Drawing on his background in evolutionary theory, David explains that systems like brains, and later, culture, evolved to process information that changes too quickly for genetic evolution to keep up. He calls culture "evolution at light speed" because it allows us to store our accumulated knowledge externally (in books, tools, etc.) and build upon it without corrupting the original.This leads to his concept of "exbodiment," where we outsource our cognitive load to the world through things like maps, abacuses, or even language itself. We create these external tools, internalize the skills they teach us, improve them, and create a feedback loop that enhances our collective intelligence.However, he ends with a warning. 
While technology has historically complemented our deficient abilities, modern AI presents a new danger. Because we have an evolutionary drive to conserve energy, we will inevitably outsource our thinking to AI if we can. He fears this is already leading to a "diminution and dilution" of human thought and creativity. Just as our muscles atrophy without use, he argues, our brains will too, and we risk becoming mentally dependent on these systems.

TOC:
[00:00:00] Intelligence: Doing more with less
[00:02:10] Why brains evolved: The limits of evolution
[00:05:18] Culture as evolution at light speed
[00:08:11] True meaning of emergence: "More is Different"
[00:10:41] Why LLM capabilities are not true emergence
[00:15:10] What real emergence would look like in AI
[00:19:24] Symmetry breaking: Physics vs. Life
[00:23:30] Two types of emergence: Knowledge In vs. Out
[00:26:46] Causality, agency, and coarse-graining
[00:32:24] "Exbodiment": Outsourcing thought to objects
[00:35:05] Collective intelligence & the boundary of the mind
[00:39:45] Mortal vs. Immortal forms of computation
[00:42:13] The risk of AI: Atrophy of human thought

David Krakauer
President and William H. Miller Professor of Complex Systems
https://www.santafe.edu/people/profile/david-krakauer

REFS:
Large Language Models and Emergence: A Complex Systems Perspective
David C. Krakauer, John W. Krakauer, Melanie Mitchell
https://arxiv.org/abs/2506.11135

Filmed at the Diverse Intelligences Summer Institute:
https://disi.org/
The current "vibe check" for AI is low (2/10), but there's significant interest in developing AI traders, despite challenges with human inaction on AI-generated insights.

A Reddit user's experiment showed ChatGPT managing a stock portfolio and outperforming the market, leading to predictions of AI-driven market crashes, AI-optimized press releases, and the emergence of AI investment clubs and "prompt engineers" for financial advice. @gregisenberg

Day trading is expected to be dominated by AI within 18 months, with retail investors likely having AI trading assistants by 2027, and new financial products like "winning prompts" and social networks for AI trading strategies emerging.

The rise of AI in finance will prompt new SEC regulations for "algorithmic investment advice" and could lead to "AI flash crashes" and "algorithmic insider trading" scandals.

Apple is anticipated to acquire Anthropic, as Apple needs a stronger Large Language Model (LLM) than its own.

OpenAI is reportedly launching GPT-5 soon, featuring a massive token window, multi-context processing, dynamic reasoning, and integrated tools like Code Interpreter, while ChatGPT is introducing a "study mode" for step-by-step problem-solving. @Diesol @radshaan
In this AI research paper reading, we dive into "A Watermark for Large Language Models" with the paper's author John Kirchenbauer. This paper is a timely exploration of techniques for embedding invisible but detectable signals in AI-generated text. These watermarking strategies aim to help mitigate misuse of large language models by making machine-generated content distinguishable from human writing, without sacrificing text quality or requiring access to the model's internals.

Learn more about the A Watermark for Large Language Models paper. To learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.
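The paper's core scheme can be sketched in a few lines: hash the previous token to seed a pseudorandom partition of the vocabulary into a "green" and a "red" list, add a small bias to green-list logits before sampling, and detect a watermark by testing whether a suspect text contains more green tokens than chance would allow. The toy vocabulary size, bias value, and flat logits below are illustrative stand-ins, not the paper's settings:

```python
import hashlib
import math
import random

VOCAB_SIZE = 1000     # toy vocabulary
GREEN_FRACTION = 0.5  # gamma: fraction of vocab on the green list
BIAS = 4.0            # delta: logit bonus for green-list tokens

def green_list(prev_token: int) -> set:
    # Seed a PRNG with a hash of the previous token, then draw a fixed
    # fraction of the vocabulary as that context's "green" list.
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16) % 2**32
    rng = random.Random(seed)
    return set(rng.sample(range(VOCAB_SIZE), int(GREEN_FRACTION * VOCAB_SIZE)))

def sample_next(prev_token: int, logits: list, rng: random.Random) -> int:
    # Add the watermark bias to green-list logits, then softmax-sample.
    greens = green_list(prev_token)
    biased = [l + (BIAS if i in greens else 0.0) for i, l in enumerate(logits)]
    m = max(biased)
    probs = [math.exp(l - m) for l in biased]
    r = rng.random() * sum(probs)
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if acc >= r:
            return i
    return VOCAB_SIZE - 1

def detect_z_score(tokens: list) -> float:
    # Count tokens that fall in their predecessor's green list; under the
    # null hypothesis (unwatermarked text) the hit rate is GREEN_FRACTION.
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev))
    n = len(tokens) - 1
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - GREEN_FRACTION * n) / std

rng = random.Random(0)
flat_logits = [0.0] * VOCAB_SIZE  # stand-in for a model's next-token logits

watermarked = [0]
for _ in range(200):
    watermarked.append(sample_next(watermarked[-1], flat_logits, rng))
unwatermarked = [rng.randrange(VOCAB_SIZE) for _ in range(201)]

print(round(detect_z_score(watermarked), 1))    # large positive z-score
print(round(detect_z_score(unwatermarked), 1))  # close to zero
```

Detection needs only the hashing scheme, not the model's weights, which is the "no access to internals" property the episode highlights.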
a16z General Partners Erik Torenberg and Martin Casado sit down with technologist and investor Balaji Srinivasan to explore how the metaphors we use to describe AI, whether as god, swarm, tool, or oracle, reveal as much about us as they do about the technology itself.

Balaji, best known for his work in crypto and network states, also brings a deep background in machine learning. Together, the trio unpacks the evolution of AI discourse, from monotheistic visions of a singular AGI to polytheistic interpretations shaped by culture and context. They debate the practical and philosophical: the current limits of AI, why prompts function like high-dimensional programs, and what it really takes to "close the loop" in AI reasoning.

This is a systems-level conversation on belief, control, infrastructure, and the architectures that might govern future societies.

Timecodes:
0:00 Introduction: The Polytheistic AGI Framework
1:46 Personal Journeys in AI and Crypto
3:18 Monotheistic vs. Polytheistic AGI: Competing Paradigms
8:20 The Limits of AI: Chaos, Turbulence, and Predictability
9:29 Platonic Ideals and Real-World Systems
14:10 Decentralized AI and the End of Fast Takeoff
14:34 Surprises in AI Progress: Language, Locomotion, and Double Descent
25:45 Prompting, Verification, and the Age of the Phrase
29:44 AI, Crypto, and the Grounding Problem
34:26 Visual vs. Verbal: Where AI Excels and Struggles
37:19 The Challenge of Markets, Politics, and Adversarial Systems
40:11 Amplified Intelligence: AI as a Force Multiplier
43:37 The Polytheistic Counterargument: Convergence and Specialization
48:17 AI's Impact on Jobs: Specialists, Generalists, and the Future of Work
57:36 Security, Drones, and Digital Borders
1:03:41 AI, Power, and the Balance of Control
1:06:33 The Coming Anti-AI Backlash
1:09:10 Global Implications: Labor, Politics, and the Future

Resources:
Find Balaji on X: https://x.com/balajis
Find Martin on X: https://x.com/martin_casado

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Send us a text

Today's episode introduces Model Context Protocol (MCP), an open standard designed to enable Artificial Intelligence (AI) applications, particularly Large Language Models (LLMs), to seamlessly interact with third-party tools and data sources. It explains MCP's architecture, including hosts, clients, servers, and external tools, and highlights its benefits, such as eliminating knowledge cut-offs, reducing hallucinations, and enhancing AI's ability to perform real-world actions. The discussion also touches on the growing adoption of MCP servers by cybersecurity vendors to facilitate natural-language interaction with security platforms, while acknowledging the potential security implications of this new architectural layer.

Support the show

Google Drive link for podcast content: https://drive.google.com/drive/folders/10vmcQ-oqqFDPojywrfYousPcqhvisnko
My profile on LinkedIn: https://www.linkedin.com/in/prashantmishra11/
YouTube channel: https://www.youtube.com/@TheCybermanShow
Twitter handle: https://twitter.com/prashant_cyber

PS: The views are my own and don't reflect any views from my employer.
In this week's episode of Search with Candour, Jack Chambers-Ward and Mark Williams-Cook discuss the evolving landscape of search in the context of Large Language Models (LLMs) and the challenges they bring, including A LOT of spam and how LLMs are being manipulated in search. They talk about the potential future of AI search and the implications for brands and consumers, as well as the responsibilities of monitoring and mitigating misinformation, the need for in-depth product data, and the feasibility of AI taking over transactional tasks.

Sponsored by fatjoe: Are you ready to get started? Sign up for your free fatjoe account: https://fatjoe.com/

References:
Use The Brand Control Quadrant To Reclaim Your Brand Narrative: https://www.youtube.com/watch?v=mMx3u6fgg5w
Why OpenAI & Perplexity want clickstream data: https://www.linkedin.com/posts/myriamjessier_ai-search-marketing-activity-7348972981231988738-jDHI
Hacked sites and expired domains are being cited by ChatGPT: https://digitaloft.co.uk/hacked-sites-and-expired-domains-are-being-used-as-chatgpt-sources/

00:00 Introduction and banter
01:28 Discussing LLM Spam and Manipulation
02:16 Sponsor Message: Fatjoe
03:59 The Uses of LLMs in Search
06:03 Challenges and Future of AI Search
16:38 Phishing and Security Concerns with LLMs
19:54 Responsibility and Brand Protection
24:47 The Future of AI and Search
31:10 Damage Control in the Age of Generative AI
31:41 LLMs are Leaky Buckets
32:48 Firefighting Tools for AI Errors
34:22 The Importance of Brand Reputation
35:15 High-Value Leads and Conversion Rates
36:46 Misleading AI Conversations
37:27 SEO Strategies for E-commerce
40:14 The Future of AI in E-commerce
44:33 The Impact of AI on Consumer Behaviour
47:23 Concluding Thoughts and Upcoming Events
The belief is spreading like wildfire: enter a few specific prompts into ChatGPT and you can "unlock" the "sentience" that is waiting to reveal the secrets of the Ancients, or the Aliens, or of God Himself. Not only is this a gross (and dangerous) overestimation of what a Large Language Model is, it also misses the point about what constitutes a genuine, deep and meaningful relationship.
When you ask ChatGPT or Gemini a question about politics, whose opinions are you really hearing?In this episode, we dive into a provocative new study from political scientist Justin Grimmer and his colleagues, which finds that nearly every major large language model—from ChatGPT to Grok—is perceived by Americans as having a left-leaning bias. But why is that? Is it the training data? The guardrails? The Silicon Valley engineers? Or something deeper about the culture of the internet itself?The hosts grapple with everything from “Mecha Hitler” incidents on Grok to the way terms like “unhoused” sneak into AI-generated text—and what that might mean for students, voters, and future regulation. Should the government step in to ensure “political neutrality”? Will AI reshape how people learn about history or policy? Or are we just projecting our own echo chambers onto machines?
In this episode of B2B Marketing Excellence, Donna Peterson breaks down the intimidating term “prompt engineering” and shows how it's simply a smarter, more consistent way to work—no tech degree required.

Drawing from her experience with generative AI and recent insights from the Vanderbilt Prompt Engineering course, Donna shares practical ways to use prompts for repetitive marketing tasks like campaign planning and list recommendations. You'll hear how creating simple, reusable prompts not only saves time but also ensures your whole team is aligned—producing clear, professional results.

You'll also learn:
Why prompting is more about conversation than coding.
How a well-written prompt becomes a shortcut you can use again and again.
The difference between prompts and templates—and how to use both for better outcomes.

At World Innovators, we focus on providing tools and strategies that make your work easier, your messaging clearer, and your outcomes more consistent. This episode offers practical examples to help you build confidence using AI in a way that's simple and effective.

For a step-by-step walkthrough, refer to the "Prompt Engineering Examples for Business Teams: 3 ChatGPT Prompt Templates to Boost Productivity" video on YouTube: https://youtu.be/FAlcjTx_xUo?si=uQv6-naLnQGIkn4S

Episode Timestamps:
00:00 – Welcome & why the term “prompt engineering” can feel overwhelming
00:38 – What prompting really is (and what it's not)
01:33 – The early struggles: over-explaining and second-guessing
03:02 – Aha moment from the Vanderbilt course
04:47 – Using prompts to simplify and speed up repetitive tasks
06:14 – Real-world example: Scheduling campaigns with one simple prompt
10:22 – Understanding the difference between prompts and templates
12:59 – Encouragement to just start talking to your AI assistant

If you found this episode helpful, subscribe to the World Innovators YouTube Channel for more practical ideas on B2B marketing and using AI tools effectively. Leave a review to help us spread the word about quality marketing that puts people first.

If you need help building your prompt library or training your team, reach out directly to Donna at dpeterson@worldinnovators.com.
How do you bring AI agents to your organization? Richard chats with April Dunnam about her experiences with Copilot Studio, Microsoft's tool for building various agents for your organization. April discusses the multiple approaches available today for utilizing generative AI and the benefits of leveraging template-driven and low-code solutions to capitalize on the latest features in agentic AI. The conversation also delves into the relationship between M365 Copilot and Copilot Studio for creating extensions and focused functionality. There's a significant amount of power here if you take the time to learn the tools!

Links:
Microsoft Copilot Studio
Build your First Copilot Studio Agent in Minutes
Playwright MCP
Testing Copilot Studio Agents
Agent Flows
Dataverse MCP
April's Copilot Estimator

Recorded July 8, 2025
Standing Out in a Sea of Sameness - Selling with Relevance, Integrity, and AI

Key Themes and Takeaways
This episode is sponsored by SearchMaster. Optimize your content for traditional search engines AND next-generation AI traffic from Large Language Models like ChatGPT, Claude, and Perplexity. Future-proof your SEO strategy. Be among the first 50 users to sign up and get 6 months of Enterprise tier for free! Watch this episode on YouTube!

In this episode of the Marketing x Analytics podcast, host Alex Sofronas interviews Matthew Plese, president of catechismclass.com, about their efforts in optimizing Google Ads campaigns for his B2C business. Matthew shares insights on keyword strategies, the importance of analyzing organic versus paid searches, and the adaptability needed in digital marketing. They discuss specific strategies to improve ROI, including conversion value implementation, keyword analysis, ad creative enhancements, and A/B testing. The conversation highlights the necessity of continuous optimization and data-driven decision-making in successful online advertising.

Follow Marketing x Analytics! X | LinkedIn
Click Here for Transcribed Episodes of Marketing x Analytics
All views are our own.
You have probably seen recent headlines that Microsoft has developed an AI model that is 4x more accurate than humans at difficult diagnoses. It's been published everywhere: AI was 80% accurate compared to a measly 20% human rate, and AI was cheaper too! Does this signal the end of the human physician? Is the title nothing more than clickbait? Or is the truth somewhere in between? Join Behind the Knife fellow Ayman Ali and Dr. Adam Rodman from Beth Israel Deaconess/Harvard Medical School to discuss what this study means for our future.

Studies:
Sequential Diagnosis with Large Language Models: https://arxiv.org/abs/2506.22405v1
METR study: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

Hosts:
Ayman Ali, MD
Ayman Ali is a Behind the Knife fellow and general surgery PGY-4 at Duke Hospital in his academic development time, where he focuses on applications of data science and artificial intelligence to surgery.
Adam Rodman, MD, MPH, FACP, @AdamRodmanMD
Dr. Rodman is an Assistant Professor and a practicing hospitalist at Beth Israel Deaconess Medical Center. He's the Beth Israel Deaconess Medical Center Director of AI Programs. In addition, he's the co-director of the Beth Israel Deaconess Medical Center iMED Initiative.
Podcast Link: http://bedside-rounds.org/

Please visit https://behindtheknife.org to access other high-yield surgical education podcasts, videos and more. If you liked this episode, check out our recent episodes here: https://app.behindtheknife.org/listen
In this episode of Crazy Wisdom, I, Stewart Alsop, speak with Thamir Ali Al-Rahedi, host of the From First Principles podcast on YouTube, about the nature of questions and answers, their role in business and truth-seeking, and the trade-offs inherent in technologies like AI. We explore the tension between generalists and specialists, the influence of scientism on culture, and how figures like Steve Jobs embodied the power of questions to shape markets and innovations. Thamir also shares insights from his Arabic book summary platform and his cautious approach to using large language models. You can find Thamir's work on YouTube at From 1st Principles with Thamir and on X at @Thamir's View.

Check out this GPT we trained on the conversation

Timestamps:
00:00 Stewart Alsop introduces Thamir Ali Al-Rahedi and they discuss Stewart's book on the nature of questions, curiosity, and shifting his focus to questions in business.
05:00 They explore how questions generate value and answers capture it, contrasting dynamic questioning with static certainty in business and philosophy.
10:00 The market is described as a subconscious feedback loop, and they examine the role of truth-seeking in entrepreneurship, using Steve Jobs as an example.
15:00 Discussion turns to Steve Jobs' spiritual practices, LSD, and how unseen factors and focus shaped Apple's success.
20:00 Thamir and Stewart debate starting with spiritual or business perspectives in writing, touching on the generalist curse and discernment in creative work.
25:00 They reflect on writing habits, moving from short-form to long-form, and using AI as a thinking partner or tool.
30:00 Thamir shares his cautious approach to large language models, viewing them as trade-offs, and discusses building an Arabic book summary platform to inspire reading and curiosity.

Key Insights:
The dynamic interplay of questions and answers – Thamir Ali Al-Rahedi explains that questions generate value by opening possibilities, while answers capture and stabilize that value. He sees the best answers as those that spark even more questions, creating a feedback loop of insight rather than static certainty.
Business and philosophy demand different relationships to truth – In business, answers often serve as the foundation for action and revenue generation, requiring a “false sense of certainty.” By contrast, philosophy thrives in uncertainty, allowing questions to remain open-ended and exploratory without the pressure to resolve them.
The market as a subconscious mirror – Both Thamir and Stewart Alsop describe the market as a form of truth that reflects not only conscious desires but also subconscious patterns and impulses. This understanding reframes economic behavior as a dialogue between collective psychology and external systems.
Steve Jobs as a case study of truth-seeking in entrepreneurship – The conversation highlights Steve Jobs's blend of spiritual exploration and technological vision, including his exposure to Eastern philosophy and LSD, as an example of how deep questioning and unconventional insight can manifest in world-changing innovations.
AI as a double-edged tool for generalists – Thamir views large language models with caution, seeing them as highly specific tools that risk outsourcing critical thinking if used too early in the learning process. He frames technologies as trade-offs rather than pure solutions, emphasizing the importance of retaining one's cognitive autonomy.
The generalist's curse and the art of discernment – Both guests wrestle with how to focus and finish creative projects without sacrificing breadth. Thamir suggests writing medium-length pieces as a way to engage deeply without the paralysis of long-form commitments, while Stewart reflects on how AI accelerates his exploration of open threads.
A call for cultural renewal through reading and reflection – Thamir shares his initiative to build an Arabic book summary platform aimed at reviving reading habits, especially among younger audiences. He sees curated human-written content as a gateway to generalist thinking and a counterbalance to instant, algorithm-driven consumption.
Our 216th episode with a summary and discussion of last week's big AI news! Recorded on 07/11/2025. Hosted by Andrey Kurenkov and Jeremie Harris. Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai. Read our text newsletter and comment on the podcast at https://lastweekin.ai/.

In this episode:
xAI launches Grok 4 with breakthrough performance across benchmarks, becoming the first true frontier model outside established labs, alongside a $300/month subscription tier
Grok's alignment challenges emerge with antisemitic responses, highlighting the difficulty of steering models toward "truth-seeking" without harmful biases
Perplexity and OpenAI launch AI-powered browsers to compete with Google Chrome, signaling a major shift in how users interact with AI systems
METR study reveals AI tools actually slow down experienced developers by 20% on complex tasks, contradicting expectations and anecdotal reports of productivity gains

Timestamps + Links:
(00:00:10) Intro / Banter
(00:01:02) News Preview

Tools & Apps
(00:01:59) Elon Musk's xAI launches Grok 4 alongside a $300 monthly subscription | TechCrunch
(00:15:28) Elon Musk's AI chatbot is suddenly posting antisemitic tropes
(00:29:52) Perplexity launches Comet, an AI-powered web browser | TechCrunch
(00:32:54) OpenAI is reportedly releasing an AI browser in the coming weeks | TechCrunch
(00:33:27) Replit Launches New Feature for its Agent, CEO Calls it 'Deep Research for Coding'
(00:34:40) Cursor launches a web app to manage AI coding agents
(00:36:07) Cursor apologizes for unclear pricing changes that upset users | TechCrunch

Applications & Business
(00:39:10) Lovable on track to raise $150M at $2B valuation
(00:41:11) Amazon built a massive AI supercluster for Anthropic called Project Rainier – here's what we know so far
(00:46:35) Elon Musk confirms xAI is buying an overseas power plant and shipping the whole thing to the U.S. to power its new data center — 1 million AI GPUs and up to 2 Gigawatts of power under one roof, equivalent to powering 1.9 million homes
(00:48:16) Microsoft's own AI chip delayed six months in major setback — in-house chip now reportedly expected in 2026, but won't hold a candle to Nvidia Blackwell
(00:49:54) Ilya Sutskever becomes CEO of Safe Superintelligence after Meta poached Daniel Gross
(00:52:46) OpenAI's Stock Compensation Reflects Steep Costs of Talent Wars

Projects & Open Source
(00:58:04) Hugging Face Releases SmolLM3: A 3B Long-Context, Multilingual Reasoning Model - MarkTechPost
(00:58:33) Kimi K2: Open Agentic Intelligence
(00:58:59) Kyutai Releases 2B Parameter Streaming Text-to-Speech TTS with 220ms Latency and 2.5M Hours of Training

Research & Advancements
(01:02:14) Does Math Reasoning Improve General LLM Capabilities? Understanding Transferability of LLM Reasoning
(01:07:58) Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
(01:13:03) Mitigating Goal Misgeneralization with Minimax Regret
(01:17:01) Correlated Errors in Large Language Models
(01:20:31) What skills does SWE-bench Verified evaluate?

Policy & Safety
(01:22:53) Evaluating Frontier Models for Stealth and Situational Awareness
(01:25:49) When Chain of Thought is Necessary, Language Models Struggle to Evade Monitors
(01:30:09) Why Do Some Language Models Fake Alignment While Others Don't?
(01:34:35) 'Positive review only': Researchers hide AI prompts in papers
(01:35:40) Google faces EU antitrust complaint over AI Overviews
(01:36:41) 'The transfer of user data by DeepSeek to China is unlawful': Germany calls for Google and Apple to remove the AI app from their stores
(01:37:30) Virology Capabilities Test (VCT): A Multimodal Virology Q&A Benchmark
Law professor Daniel Ho says that the law is ripe for AI innovation, but a lot is at stake. Naive application of AI can lead to rampant hallucinations in over 80 percent of legal queries, so much research remains to be done in the field. Ho tells how California counties recently used AI to find and redact racist property covenants from their laws—a task predicted to take years, reduced to days. AI can be quite good at removing “regulatory sludge,” Ho tells host Russ Altman in teasing the expanding promise of AI in the law in this episode of Stanford Engineering's The Future of Everything podcast.

Have a question for Russ? Send it our way in writing or via voice memo, and it might be featured on an upcoming episode. Please introduce yourself, let us know where you're listening from, and share your question. You can send questions to thefutureofeverything@stanford.edu.

Episode Reference Links:
Stanford Profile: Daniel Ho

Connect With Us:
Episode Transcripts >>> The Future of Everything Website
Connect with Russ >>> Threads / Bluesky / Mastodon
Connect with School of Engineering >>> Twitter/X / Instagram / LinkedIn / Facebook

Chapters:
(00:00:00) Introduction
Russ Altman introduces Dan Ho, a professor of law and computer science at Stanford University.
(00:03:36) Journey into Law and AI
Dan shares his early interest in institutions and social reform.
(00:04:52) Misconceptions About Law
Common misunderstandings about the focus of legal work.
(00:06:44) Using LLMs for Legal Advice
The current capabilities and limits of LLMs in legal settings.
(00:09:09) Identifying Legislation with AI
Building a model to identify and redact racial covenants in deeds.
(00:13:09) OCR and Multimodal Models
Improving outdated OCR systems using multimodal AI.
(00:14:08) STARA: AI for Statute Search
A tool to scan laws for outdated or excessive requirements.
(00:16:18) AI and Redundant Reports
Using STARA to find obsolete legislatively mandated reports.
(00:20:10) Verifying AI Accuracy
Comparing STARA results with federal data to ensure reliability.
(00:22:10) Outdated or Wasteful Regulations
Examples of bureaucratic redundancies that hinder legal process.
(00:23:38) Consolidating Reports with AI
How different bureaucrats deal with outdated legislative reports.
(00:26:14) Open vs. Closed AI Models
The risks, benefits, and transparency in legal AI tools.
(00:32:14) Replacing Lawyers with Legal Chatbots
Why general-purpose legal chatbots aren't ready to replace lawyers.
(00:34:58) Conclusion
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today, we're joined by Fatih Porikli, senior director of technology at Qualcomm AI Research, for an in-depth look at several of Qualcomm's accepted papers and demos featured at this year's CVPR conference. We start with “DiMA: Distilling Multi-modal Large Language Models for Autonomous Driving,” an end-to-end autonomous driving system that incorporates distilling large language models for structured scene understanding and safe planning motion in critical "long-tail" scenarios. We explore how DiMA utilizes LLMs' world knowledge and efficient transformer-based models to significantly reduce collision rates and trajectory errors. We then discuss “SharpDepth: Sharpening Metric Depth Predictions Using Diffusion Distillation,” a diffusion-distilled approach that combines generative models with metric depth estimation to produce sharp, accurate monocular depth maps. Fatih also shares a look at Qualcomm's on-device demos, including text-to-3D mesh generation, real-time image-to-video and video-to-video generation, and a multi-modal visual question-answering assistant. The complete show notes for this episode can be found at https://twimlai.com/go/738.
Our 214th episode with a summary and discussion of last week's big AI news! Recorded on 06/27/2025. Hosted by Andrey Kurenkov and Jeremie Harris. Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai. Read our text newsletter and comment on the podcast at https://lastweekin.ai/.

In this episode:
Meta hires key engineers from OpenAI, and Thinking Machines Lab secures a $2 billion seed round at a $10 billion valuation.
DeepMind introduces AlphaGenome, significantly advancing genomic research with a model comparable to AlphaFold but focused on gene function.
Taiwan imposes technology export controls on Huawei and SMIC, while Getty drops key copyright claims against Stability AI in a groundbreaking legal case.
A new MIT research paper examines cognitive debt in AI-assisted tasks, using EEG to assess cognitive load and recall in essay writing with LLMs.

Timestamps + Links:
(00:00:10) Intro / Banter
(00:01:22) News Preview
(00:02:15) Response to listener comments

Tools & Apps
(00:06:18) Google is bringing Gemini CLI to developers' terminals
(00:12:09) Anthropic now lets you make apps right from its Claude AI chatbot

Applications & Business
(00:15:54) Sam Altman takes his 'io' trademark battle public
(00:21:35) Huawei Matebook Contains Kirin X90, using SMIC 7nm (N+2) Technology
(00:26:05) AMD deploys its first Ultra Ethernet ready network card — Pensando Pollara provides up to 400 Gbps performance
(00:31:21) Amazon joins the big nuclear party, buying 1.92 GW for AWS
(00:33:20) Nvidia goes nuclear — company joins Bill Gates in backing TerraPower, a company building nuclear reactors for powering data centers
(00:36:18) Mira Murati's Thinking Machines Lab closes on $2B at $10B valuation
(00:41:02) Meta hires key OpenAI researcher to work on AI reasoning models

Research & Advancements
(00:49:46) Google's new AI will help researchers understand how our genes work
(00:55:13) Direct Reasoning Optimization: LLMs Can Reward And Refine Their Own Reasoning for Open-Ended Tasks
(01:01:54) Farseer: A Refined Scaling Law in Large Language Models
(01:06:28) LLM-First Search: Self-Guided Exploration of the Solution Space

Policy & Safety
(01:11:20) Unsupervised Elicitation of Language Models
(01:16:04) Taiwan Imposes Technology Export Controls on Huawei, SMIC
(01:18:22) Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task

Synthetic Media & Art
(01:23:41) Judge Rejects Authors' Claim That Meta AI Training Violated Copyrights
(01:29:46) Getty drops key copyright claims against Stability AI, but UK lawsuit continues
Shreya Shankar is a PhD student at UC Berkeley in the EECS department. This episode explores how Large Language Models (LLMs) are revolutionizing the processing of unstructured enterprise data like text documents and PDFs. It introduces DocETL, a framework using a MapReduce approach with LLMs for semantic extraction, thematic analysis, and summarization at scale.

Subscribe to the Gradient Flow Newsletter
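The MapReduce pattern described in the episode can be sketched roughly as follows. This is a toy illustration, not DocETL's actual API: the `llm` callable stands in for a real model call, and here we pass a trivial offline stand-in so the example runs without any API access.

```python
# Toy sketch of an LLM-powered MapReduce over documents, in the spirit of
# the DocETL approach. `llm` is any callable that takes a prompt string and
# returns a string; in practice it would be a real model call.

def map_docs(docs, llm, prompt):
    # Map step: apply a semantic operation (extraction, classification,
    # theme labeling) independently to each document.
    return [llm(f"{prompt}\n\n{doc}") for doc in docs]

def reduce_results(results, llm, prompt):
    # Reduce step: fold the per-document outputs into a single summary.
    joined = "\n".join(results)
    return llm(f"{prompt}\n\n{joined}")

def fake_llm(prompt):
    # Offline stand-in for a model: "extracts" the first word of the
    # prompt's last line, uppercased. Purely for demonstration.
    last_line = prompt.strip().splitlines()[-1]
    return last_line.split()[0].upper()

docs = [
    "invoice overdue since March",
    "contract renewal pending",
    "invoice paid in full",
]
themes = map_docs(docs, fake_llm, "Name the main theme of this document:")
summary = reduce_results(themes, fake_llm, "Summarize the recurring themes:")
print(themes)   # per-document labels from the map step
print(summary)  # single output from the reduce step
```

The point of the pattern is that the map step parallelizes cheaply across documents, while the reduce step is where cross-document synthesis (and most of the prompt-design effort) lives.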
The July 2025 recall features four episodes on systems and innovation in delivering neurologic care. The episode begins with Dr. Scott Friedenberg discussing challenges faced by neurologists in balancing financial productivity with optimal patient care. It leads into a conversation with Dr. Marisa Patryce McGinley discussing the utilization of telemedicine in neurology, particularly focusing on disparities in access among different demographic groups. The conversation transitions to Dr. Lidia Moura talking about the implications of large language models for neurologic care. The episode concludes with Dr. Ashish D. Patel discussing headache referrals and the implementation of a design thinking approach to improve access to headache care.

Podcast links:
Empowering Health Care Providers
Disparities in Utilization of Outpatient Telemedicine for Neurologic Care
Large Language Models for Quality and Efficiency of Neurologic Care
Using Design Thinking to Understand the Reason for Headache Referrals

Article links:
Empowering Health Care Providers: A Collaborative Approach to Enhance Financial Performance and Productivity in Clinical Practice
Disparities in Utilization of Outpatient Telemedicine for Neurologic Care
Implications of Large Language Models for Quality and Efficiency of Neurologic Care: Emerging Issues in Neurology
Using Design Thinking to Understand the Reason for Headache Referrals and Reduce Referral Rates

Disclosures can be found at Neurology.org.
New to AI? Feeling overwhelmed by all the buzzwords? You're not alone—and this episode is here to help.

Today on the Creative Edition Podcast, we're breaking down six essential AI terms every content creator should know to confidently navigate the AI-powered world of content creation. Whether you're already using AI to brainstorm captions, outline podcast episodes, or streamline video edits—or you're just starting to explore AI tools—this episode is packed with practical insights to help you become an AI-native creator.

You'll learn:
What Prompt Engineering is and how to craft better prompts that lead to higher-quality results
How to spot and avoid AI hallucinations (and why fact-checking still matters)
The power behind Large Language Models (LLMs) like ChatGPT and Claude
What Fine-Tuning is and how to train AI tools to match your unique voice
And what it means to be an AI-Native Creator—plus how early adoption can give your brand a serious edge

Whether you're scaling your content or simply want to stay relevant, this episode will give you the vocabulary and confidence to integrate AI into your creative workflow—no tech background required.

Follow us on Instagram: @creativeeditionpodcast
Follow Emma on Instagram: @emmasedition | Pinterest: @emmasedition
And sign up for our email newsletter.
CISA warns organizations of potential cyber threats from Iranian state-sponsored actors. Scattered Spider targets aviation and transportation. Workforce cuts at the State Department raise concerns about weakened cyber diplomacy. Canada bans Chinese security camera vendor Hikvision over national security concerns. Cisco Talos reports a rise in cybercriminals abusing Large Language Models. MacOS malware Poseidon Stealer rebrands. Researchers discover multiple vulnerabilities in Bluetooth chips used in headphones and earbuds. The FDA issues new guidance on medical device cybersecurity. Our guest is Debbie Gordon, Co-Founder of Cloud Range, looking “Beyond the Stack - Why Cyber Readiness Starts with People.” An IT worker's revenge plan backfires.

Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.

CyberWire Guest
On today's Industry Voices segment, Debbie Gordon, Co-Founder of Cloud Range, shares insights on looking “Beyond the Stack - Why Cyber Readiness Starts with People.” Learn more about what Debbie discusses in Cloud Range's blog: Bolstering Your Human Security Posture. You can hear Debbie's full conversation here.

Selected Reading
CISA and Partners Urge Critical Infrastructure to Stay Vigilant in the Current Geopolitical Environment (CISA)
Joint Statement from CISA, FBI, DC3 and NSA on Potential Targeted Cyber Activity Against U.S. Critical Infrastructure by Iran (CISA, FBI, DOD Cyber Crime Center, NSA)
Prolific cybercriminal group now targeting aviation, transportation companies (Axios)
U.S. Cyber Diplomacy at Risk Amid State Department Shakeup (GovInfo Security)
Canada Bans Chinese CCTV Vendor Hikvision Over National Security Concerns (Infosecurity Magazine)
Malicious AI Models Are Behind a New Wave of Cybercrime, Cisco Talos (Hackread)
MacOS malware Poseidon Stealer rebranded as Odyssey Stealer (SC Media)
Airoha Chip Vulnerabilities Expose Headphones to Takeover (SecurityWeek)
FDA Expands Premarket Medical Device Cyber Guidance (GovInfo Security)
'Disgruntled' British IT worker jailed for hacking employer after being suspended (The Record)

Audience Survey
Complete our annual audience survey before August 31.

Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at cyberwire@n2k.com to request more info.

The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
Everyone wants the latest and greatest AI buzzword. But at what cost? And what the heck is the difference between algos, LLMs, and agents anyway? Tune in to find out.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
Choosing AI: Algorithms vs. Agents
Understanding AI Models and Agents
Using Conditional Statements in AI
Importance of Data in AI Training
Risk Factors in Agentic AI Projects
Innovation through AI Experimentation
Evaluating AI for Business Solutions

Timestamps:
00:00 AWS AI Leader Departs Amid Talent War
03:43 Meta Wins Copyright Lawsuit
07:47 Choosing AI: Short or Long Term?
12:58 Agentic AI: Dynamic Decision Models
16:12 "Demanding Data-Driven Precision in Business"
20:08 "Agentic AI: Adoption and Risks"
22:05 Startup Challenges Amidst Tech Giants
24:36 Balancing Innovation and Routine
27:25 AGI: Future of Work and Survival

Keywords: AI algorithms, Large Language Models, LLMs, Agents, Agentic AI, Multi agentic AI, Amazon Web Services, AWS, Vazhi Philemon, Gen AI efforts, Amazon Bedrock, talent wars in tech, OpenAI, Google, Meta, Copyright lawsuit, AI training, Sarah Silverman, Llama, Fair use in AI, Anthropic, AI deep research model, API, Webhooks, MCP, Code interpreter, Keymaker, Data labeling, Training datasets, Computer vision models, Block out time to experiment, Decision-making, If else conditional statements, Data-driven approach, AGI, Teleporting, Innovation in AI, Experiment with AI, Business leaders, Performance improvements, Sustainable business models, Corporate blade.

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

Try Gemini 2.5 Flash! Sign up at AIStudio.google.com to get started.
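The algorithms-versus-agents distinction the episode teases can be sketched in a few lines. This is a toy illustration, not from the show: every function and name below is hypothetical, and the `policy` callable is a deterministic stand-in for the LLM that would drive a real agent.

```python
# Toy contrast between a fixed rule-based "algo" and an agent-style loop.
# A rule-based router spells out every branch in advance; an agent defers
# the choice of action to a policy (in practice, an LLM) at runtime.

def rule_based_router(ticket):
    # Static algorithm: explicit if/else branches, nothing decided at runtime.
    if "refund" in ticket:
        return "billing"
    elif "password" in ticket:
        return "it_support"
    else:
        return "general"

def agent_router(ticket, tools, policy):
    # Agent-style: a policy picks which tool to invoke, so new behaviors
    # come from changing the policy or the tool set, not the code paths.
    action = policy(ticket, list(tools))
    return tools[action](ticket)

tools = {
    "billing": lambda t: "billing",
    "it_support": lambda t: "it_support",
    "general": lambda t: "general",
}
# Deterministic stand-in policy so the example runs offline.
policy = lambda ticket, names: "billing" if "refund" in ticket else "general"

print(rule_based_router("refund please"))            # billing
print(agent_router("refund please", tools, policy))  # billing
```

The trade-off the episode circles is exactly this one: the if/else version is cheap, auditable, and predictable, while the agent version is flexible but inherits the risks of whatever policy drives it.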
This episode explores the fundamental mindset of building your vocabulary, extending beyond literal words to conceptual understanding and mental models, and how Large Language Models (LLMs) can be a powerful tool for expanding and refining this crucial skill for career growth, clarity, and navigating disruptions.

Uncover why building your vocabulary is a fundamental skill that can help you navigate career transitions, disruptions (such as those caused by AI), and changes in roles.
Understand that "vocabulary" goes beyond literal words to include mental models, understanding your own self, specific diagrams (like causal loop diagrams or C4 diagrams), and programming paradigms or design patterns. This conceptual vocabulary provides access to nuanced and powerful ways of thinking.
Learn how LLMs can be incredibly useful for refining and expanding your conceptual vocabulary, allowing you to explore new subjects, understand systems, and identify leverage points. They can help you understand the connotations, origins, and applications of concepts, as well as how they piece together with adjacent ideas.
Discover why starting with fundamental primitives like inputs, outputs, flows, and system types can help you develop vocabulary, and how LLMs can suggest widely used tools or visualisations based on these primitives (e.g., a scatter plot for XY data).
Explore why focusing on understanding the "why" and "when" of using a concept or tool is a much higher leverage skill than merely knowing "how" to use it, enabling you to piece together different vocabulary pieces for deeper insights.
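The primitives-to-tools idea above can be made concrete with a small sketch. This is a hypothetical illustration of the kind of pairing an LLM might suggest, not anything from the episode; the rules are deliberately simplistic.

```python
# Toy mapping from fundamental data "primitives" (column kinds) to a
# suggested visualization, illustrating how starting from inputs/outputs
# lets a tool (or an LLM) propose a widely used next step.

def suggest_visualization(columns):
    # `columns` is a list of (name, kind) pairs, kind in {"numeric",
    # "temporal", "categorical"}.
    kinds = {name: kind for name, kind in columns}
    numeric = [n for n, k in kinds.items() if k == "numeric"]
    temporal = [n for n, k in kinds.items() if k == "temporal"]
    if temporal and numeric:
        return "line chart"    # a value flowing over time
    if len(numeric) >= 2:
        return "scatter plot"  # XY relationship between two measures
    if numeric:
        return "histogram"     # distribution of a single measure
    return "bar chart"         # counts across categories

print(suggest_visualization([("height", "numeric"), ("weight", "numeric")]))
print(suggest_visualization([("date", "temporal"), ("sales", "numeric")]))
```

The "scatter plot for XY data" rule from the episode is the second branch; the value of the exercise is less the chart names than the habit of naming the primitives first.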
In episode 1883, Jack and Miles are joined by writer, comedian, and co-host of Yo, Is This Racist?, Andrew Ti, to discuss… America's Cold War Strategy Is Coming Home To Roost Huh?, Our Information Environment Is So F**ked, Couple Wild Stories About People Not Knowing How To Act Around AI, and more!

Tucker Vs. Ted Smackdown
They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling.
Father of man killed in Port St. Lucie officer-involved shooting: 'My son deserved better'

LISTEN: Husk by Men I Trust

See omnystudio.com/listener for privacy information.