From November 29, 2023: Unless you've been living under a rock, you've probably heard a great deal over the last year about generative AI and how it's going to reshape various aspects of our society. That includes elections. With one year until the 2024 U.S. presidential election, we thought it would be a good time to step back and take a look at how generative AI might and might not make a difference when it comes to the political landscape. Luckily, Matt Perault and Scott Babwah Brennen of the UNC Center on Technology Policy have a new report out on just that subject, examining generative AI and political ads.

On this episode of Arbiters of Truth, our series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic and Lawfare's Fellow in Technology Policy and Law Eugenia Lostri sat down with Matt and Scott to talk through the potential risks and benefits of generative AI when it comes to political advertising. Which concerns are overstated, and which are worth closer attention as we move toward 2024? How should policymakers respond to new uses of this technology in the context of elections?

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show: http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
As brands explore AI-driven marketing, the Children's Advertising Review Unit (CARU) has released a new framework outlining the key risks companies face when using AI to engage children—from misleading content and data privacy to mental health and transparency concerns. In this episode, we cover CARU's eight risk categories and practical steps companies can take to align with responsible advertising standards and reduce enforcement exposure. Hosted by Simone Roach. Based on a blog post by Gonzalo Mon.
** AWS re:Invent 2025 Dec 1-5, Las Vegas - Register Here! **

Three enterprise AI leaders from Archer, Demandbase, and Highspot reveal how top companies are implementing AI responsibly while navigating data privacy, bias prevention, and regulatory compliance challenges.

Topics Include:
AWS Security GM Brian Shadpour hosts three AI leaders discussing responsible enterprise deployment
Demandbase's Umberto Milletti explains tenant-based models ensuring first-party customer data remains confidential
Highspot's Oliver Sharp uses behavior-specific feedback frameworks to eliminate bias in sales assessments
Real-time AI evaluation proves challenging when assessing dynamic sales conversations and customer interactions
Companies create "second-party data" networks where customers opt-in to share insights collectively
Open-source models gain traction but require significant expertise for enterprise-grade implementation
EU AI Act mandates human oversight, reshaping how companies design AI systems globally
Archer's Kayvan Alikhani extends identity management principles from web applications to AI agents
Unattended AI agents performing tasks autonomously create new security and accountability challenges
Human-in-the-loop oversight remains essential, especially for high-stakes decisions affecting customers
Future challenge: determining when AI accuracy justifies removing costly human oversight
Enterprise data hygiene becomes critical as AI systems need clean, reviewed internal data

Participants:
Kayvan Alikhani - Global Head of Engineering - Emerging Solutions, Archer Integrated Risk Management
Umberto Milletti - Chief R&D Officer, Demandbase
Oliver Sharp - Co-Founder & Chief AI Officer, Highspot
Brian Shadpour - General Manager, Security, Amazon Web Services

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/
00:00 Intro/Back from China
03:56 Journalism in China & The West
09:51 Harmony, rancor, journalism, editorial & social media
19:49 Saint Paul to The Church in Rome on quiet living
23:07 Vacation vs daily life in a place
27:29 Nominalism cannot be the path to social harmony
35:57 To love neighbor & enemy we must participate in God's holiness
40:12 Humble repentance is the path to a just society
43:11 Christianity isn't convenient, look at Syria
50:10 Models of church & state
53:20 No such thing as Orthodox government
54:54 The Church is perfect, stop trying to defend it from itself
58:55 There is no invisible church, there is only The Church
63:45 Two ways to try to be above the bishops
65:27 No, we can't all just get along/Closing

A huge part of the problem with "politics" in The United States is both "sides" have the same underlying assumption: we can't live the good life until everyone agrees with us & the laws reflect our agreement. Fr Symeon's recent trip to visit his family in China provides an opportunity to poke holes in this assumption by discussing the ways in which, simply as a practical example, we can see people living, as St Paul said "...aspire to live quietly, to mind your own affairs, and to work with your hands..." in a country most Americans would insist is about as far from what we want our society to be as is possible. Well, if people can do it in China, then Orthodox Christians can do it in the USA, whether the political winds are going their way or against them. And if we can walk in the way of the life of faithfulness no matter the political climate, then why are we spending so much time getting wound up about it? Maybe what it boils down to is that we're afraid of inconvenience.

Reference materials for this episode: St John Chrysostom's commentary on Romans
- 12:9-21 - https://www.newadvent.org/fathers/210221.htm & https://www.newadvent.org/fathers/210222.htm
- 13:1-7 - https://www.newadvent.org/fathers/210223.htm

Scripture citations for this episode:
Romans - 12:9-21 live in harmony - 13:1-7 be good citizens
1 Corinthians - 6:1-11 lawsuits
Ephesians - 6:5-9 no slave rebellions
Philemon - no slave rebellions

#china #syria #journalism #editorial #quietlife

The Christian Saints Podcast is a joint production of Generative Sounds & Paradosis Pavilion with oversight from Fr Symeon Kees
Paradosis Pavilion - https://youtube.com/@paradosispavilion9555
https://www.instagram.com/christiansaintspodcast
https://twitter.com/podcast_saints
https://www.facebook.com/christiansaintspodcast
https://www.threads.net/@christiansaintspodcast

Iconographic images used by kind permission of Nicholas Papas, who controls distribution rights of these images
Prints of all of Nick's work can be found at Saint Demetrius Press - http://www.saintdemetriuspress.com
All music in these episodes is a production of Generative Sounds
https://generativesoundsjjm.bandcamp.com
Distribution rights of this episode & all music contained in it are controlled by Generative Sounds
Copyright 2021 - 2023
Adobe Generative Credits were introduced as a way to offset the higher costs incurred by using AI in Photoshop and other Adobe products. Essentially, it's a way to charge the people who use the most expensive features the most, rather than passing those costs on to everyone regardless of which tools they use. Well, it's getting a bit out of hand, and Matt Blake takes this episode to talk about some of the problems the industry has with it.
Digital intelligence is reshaping how organizations work, and success depends on integrating multiple domains, using real-time analytics, and ensuring strong cyber protections as data grows and risks increase.

This week, Dave, Esmee, and Rob talk with Chris Carter, Director - Key Accounts and Australia at BAE Systems Digital Intelligence, to explore the fast-moving world of digital intelligence, data, and analytics and dive into the complexities of the work, how rapidly the landscape is evolving, and the major challenges organizations face today.

TLDR:
00:41 Introduction of Chris Carter
03:00 Rob is confused by the idea of renting out brain capacity for compute power
07:13 Chris discusses the fusion of data, AI, and human judgment in complex environments
34:30 Are we giving enough attention to human cognitive capacity?
42:34 Rugby tickets with the family

Guest
Chris Carter: https://www.linkedin.com/in/chriscarter3/

Hosts
Dave Chapman: https://www.linkedin.com/in/chapmandr/
Rob Kernahan: https://www.linkedin.com/in/rob-kernahan/
Esmee van de Giessen: https://www.linkedin.com/in/esmeevandegiessen/

Production
Marcel van der Burg: https://www.linkedin.com/in/marcel-vd-burg/
Dave Chapman: https://www.linkedin.com/in/chapmandr/

Sound
Ben Corbett: https://www.linkedin.com/in/ben-corbett-3b6a11135/
Louis Corbett: https://www.linkedin.com/in/louis-corbett-087250264/

'Cloud Realities' is an original podcast from Capgemini
According to an October report from the Association of Corporate Counsel, generative AI use among in-house lawyers has more than doubled over the last year. More than half of respondents are now actively using generative AI in their practice—compared to only 23% in 2024. The survey also said two-thirds of respondents are using it so that they can eventually rely less on outside counsel, and over 60% are likely to push for a change in how legal services are priced. Learn more about your ad choices. Visit megaphone.fm/adchoices
AI platforms like Perplexity are redefining how clients find attorneys. MACH10X, a content marketing agency, discusses the shift from keyword search to generative discovery and how law firms can stay visible. Visit https://mach10xmarketing.clientcabin.com/app/info

MACH10X
City: Southlake
Address: 2600 E Southlake Blvd #120, Southlake, TX 76092
Website: https://mach10xmarketing.com/
Aram Fischer, Co-Founder and CEO of Change Agent, joins Deepak Puri, CEO of The Democracy Labs, to discuss values-aligned generative AI and how it contrasts with mainstream AI platforms. Aram warns about built-in societal biases and misinformation in tools like ChatGPT that need to be identified to better align data with proper action.

Deepak and Aram talk about:
Shortcomings in existing gen AI platforms related to data privacy and implicit and explicit biases
Practical applications for AI for voter data analysis and fundraising
Using AI to support campaign communications and messaging
Creating a library of pre-built workflows to improve campaign efficiency and automate time-consuming tasks

#TheDemLabs #TheChangeAI #ChangeAgent #Nonprofits #NonProfitTech #AI #GenAIinPolitics #PoliticalAI #EthicalAI #Activists #SocialJustice #Fundraising

Change Agent
TheDemLabs.org
Welcome to this inspiring episode of TBCY featuring Maryam Berijanian, a multi-award-winning researcher in Generative AI and a top student in STEM across Europe and the US. Hosted by Stephen Ibaraki, this conversation dives deep into Maryam's personal journey from being a curious child in a STEM-centric family to winning prestigious scholarships and awards, conducting groundbreaking research in AI, robotics, computer vision, and large language models (LLMs).Learn how Maryam navigated competitive academic environments, adapted to new cultures in the Netherlands and Germany, and ultimately found her passion at Michigan State University. Maryam provides valuable insights into the evolution of AI, her perspective on the future of AGI/ASI, and practical career advice for aspiring researchers and students—especially women in STEM.Discover tips on staying relevant in the rapidly changing AI landscape, the importance of networking, and leveraging cutting-edge tools like Gemini and OpenAI for research. Don't miss her encouraging message to young girls dreaming of careers in science and technology!
Listen now to learn:
00:00 – Intro
02:35 – What Is Generative AI (Really)?
06:04 – Compliance Concerns & Data Privacy
14:20 – Can You Trust AI to Run Your Practice?
18:09 – Data Is the New Asset Class
22:19 – Trust, Credibility & the Human Factor
27:11 – The Future of Work in Financial Services
31:05 – Final Thoughts: Trust But Verify

Learn more about our companies and resources:
-Elite Consulting Partners | Financial Advisor Transitions: https://eliteconsultingpartners.com
-Elite Marketing Concepts | Marketing Services for Financial Advisors: https://elitemarketingconcepts.com
-Elite Advisor Successions | Advisor Mergers and Acquisitions: https://eliteadvisorsuccessions.com
-JEDI Database Solutions | Technology Solutions for Advisors: https://jedidatabasesolutions.com

Listen to more Advisor Talk episodes: https://eliteconsultingpartners.com/podcasts/
What does it take to make AI truly work for people — not just profits?

In this episode, Rafael Frankel, Director of Public Policy for Asia-Pacific at Meta, joins Danny Levy to share how one of the world's most influential technology companies is driving responsible, transparent, and scalable AI adoption across the region. A former journalist turned policy leader, Rafael brings a rare blend of storytelling, diplomacy, and innovation insight from over a decade shaping tech policy and trust frameworks across APAC.

You'll learn:
How Meta defines its role in the AI ecosystem — and how it's applying AI to create real-world impact for businesses and communities
The truth behind Generative, Predictive, and General AI — what's hype, what's here, and what's next
How partnerships with Deloitte SEA, AiSee, and Meta's Llama program are accelerating accessible AI innovation in Asia
The biggest barriers to AI adoption in APAC — and what forward-thinking leaders can do to overcome them
Rafael's personal lessons in leadership, resilience, and purpose from his journey across media, policy, and technology

If you want to cut through the noise around AI, understand where the next decade is heading, and learn how to lead with clarity in an uncertain world — this is an episode you won't want to miss.

Are you getting every episode of Digital Transformation & Leadership in your favourite podcast player? You can find us on Apple Podcasts and Spotify to subscribe.
SEO is evolving. Learn about the next big thing: Generative Engine Optimization (GEO). This episode breaks down Adobe's new LLM Optimizer, a tool designed to help brands measure visibility, analyze sentiment, and optimize content for discovery by large language models.
From lesson planning and assessment design to AI-assisted tutoring and district-wide policy development, Chris explores how artificial intelligence is reshaping teaching and learning, not as a replacement for educators, but as a partner in creativity and efficiency. You will gain insights into the AI tipping point, practical classroom applications, real risks (including equity, bias, privacy, and the loss of human connection), and how teachers can lead responsibly through this transformation. The episode also includes a timely #EdTech Recommendation highlighting Google Chrome's live caption and translation features, and an invitation to participate in the 2025 EdTech SmackDown, the annual listener-powered finale of the show.

Key Topics Discussed

#EdTech Thought: The AI tipping point
Why AI in education is no longer "next" but "now."
How educators are using generative AI for planning, differentiation, and feedback.
Examples of AI tools supporting English language learners and accessibility.
Major risks of AI in schools — equity, bias, privacy, and the human factor.
How to lead responsibly with AI in your classroom or district:
Build your AI literacy
Put pedagogy before technology
Model ethical transparency
Advocate for clear district policies
Share both wins and failures

#EdTech Recommendation: Google Chrome's Live Caption & Translate feature.

Relevant Links & Resources
House of #EdTech Episode 246 – The Harms of Generative AI featuring Tom Mullaney
Silicon Bites Ep267 | News Update - Day 1,354 - 2025-11-08 | AI fakes try to poison Ukraine's mobilization debate; Washington grants Budapest a carve-out on Russian energy; Kyiv's defence industry talks up its potential with $30–35 billion in long-range output for 2026; the EU tightens visa screws on Russian nationals; Pokrovsk hangs in the balance; mysterious drones snarl Belgian airspace; and Germany's AfD faces "sleeper cell" accusations on the Bundestag floor. Let's try to anchor ourselves amidst the apparent chaos.

"AI soldiers" and the mobilization disinfo surge. A new wave of AI-generated "soldier" videos — young men pleading that they're being "forced" to the front — has flooded TikTok, X, and Telegram in multiple languages this week. The goal: erode trust in Ukraine's leadership, spook families, and fracture Western support. (The Kyiv Independent) Despite the crudity and absurdity of many of the videos, some will be fooled, and it provides fodder for Kremlin assets and useful idiots to 'flood the zone'.

----------
SOURCES:
Kyiv Independent — "AI soldiers: How deepfakes are manipulating Ukraine's mobilization narrative" (Nov. 7, 2025)
Kyiv Independent — "Zelensky blames ex-energy chief…" (Nov. 7, 2025)
Kyiv Independent — "Ukraine estimates its long-range weapon production at over $30 billion in 2026" (Nov. 7, 2025)
Interfax-Ukraine (summary of the same briefing) (Nov. 7, 2025)
AP News — "Orbán says Trump will not punish Hungary for buying Russian energy…" (Nov. 7, 2025)
The Guardian — "Trump–Orbán meeting: US gives Hungary exemption from sanctions on Russian oil and gas" (Nov. 7, 2025)
European Commission (DG HOME) — Implementing decision on multiple-entry visas for Russian nationals (Nov. 7, 2025) (Migration and Home Affairs)
Reuters — "EU toughens visa rules for Russians" (Nov. 7, 2025)
Kyiv Independent — "Ukraine's defense of Pokrovsk on a knife-edge…" (Nov. 6, 2025)
AP News — "Drone sightings halt flights at major European cargo airport in Belgium" (Nov. 8, 2025)
Al Jazeera — "Belgium's Liège airport temporarily halts flights after new drone sighting" (Nov. 7, 2025)
The Guardian — "Belgium to hold security meeting after drone sightings… (source says Russia 'very probably' responsible)" (Nov. 6, 2025)
Euronews — "Lawmakers accuse AfD of spying… 'sleeper cell loyal to Russia'" (Nov. 6, 2025)
----------

This is super important. There are so many battalions in Ukraine fighting to defend our freedoms that lack basics such as vehicles. These are destroyed on a regular basis, and the lack of transport costs lives and Ukrainian territory.

Autumn Harvest: Silicon Curtain (Goal: €22,000)
https://car4ukraine.com/campaigns/autumn-harvest-silicon-curtain

----------
SILICON CURTAIN LIVE EVENTS - FUNDRAISER CAMPAIGN
Events in 2025 - Advocacy for a Ukrainian victory with Silicon Curtain
https://buymeacoffee.com/siliconcurtain/extras

Our events of the first half of the year in Lviv, Kyiv and Odesa were a huge success. Now we need to maintain this momentum, and change the tide towards a Ukrainian victory. The Silicon Curtain Roadshow is an ambitious campaign to run a minimum of 12 events in 2025, and potentially many more. Any support you can provide for the fundraising campaign would be gratefully appreciated. https://buymeacoffee.com/siliconcurtain/extras

We need to scale up our support for Ukraine, and these events are designed to have a major impact. Your support in making it happen is greatly appreciated. All events will be recorded professionally and published for free on the Silicon Curtain channel. Where possible, we will also live-stream events.
https://buymeacoffee.com/siliconcurtain/extras

----------
SUPPORT THE CHANNEL:
https://www.buymeacoffee.com/siliconcurtain
https://www.patreon.com/siliconcurtain
----------
Speaker: Professor Bhamati Viswanathan, Visitor, Cambridge Law Faculty and Fellow at the Kernochan Center for Law, Media and the Arts at Columbia Law School

Biography: Bhamati Viswanathan is a Senior Visitor at the University of Cambridge Faculty of Law and a Fellow (Non-Resident) at the Kernochan Center for Law, Media and the Arts at Columbia Law School (New York). Prior to joining the Cambridge Faculty of Law, she was Assistant Professor at New England Law | Boston, where she taught copyright law, artificial intelligence and the law, law and the visual arts, intellectual property law, and U.S. Constitutional law. She is the author of "Cultivating Copyright: How Creative Industries Can Harness Intellectual Property to Survive the Digital Age" (Routledge/Taylor & Francis Press). She currently holds an Edison Fellowship from the Intellectual Property Policy Institute at University of Akron Law School, under whose aegis she is writing a series of articles on the disparate impact of copyright law on women creators and women-centric work. She is also planning a book on the nexus of intellectual property and arts/culture in the age of artificial intelligence.

Bhamati serves as Chair of the American Bar Association Intellectual Property Section: Visual and Dramatics Works Committee. She is a Faculty Advisor on the Copyright Alliance Academic Advisory Board. She serves as Faculty Partner to the News/Media Alliance. She is Education Advisor to the Volunteer Lawyers for the Arts (VLA)/Massachusetts Arts and Business Council. She is also a Faculty Advisor to the Journal of the Copyright Society; and she was a Trustee of the Copyright Society, as well as Chair of its New England Chapter. She holds an S.J.D./LL.M. from University of Pennsylvania Law School; a J.D. from University of Michigan Law School; and a B.A. from Williams College. She is a competitive figure skater, violinist, and published poet/translator and lives in Boston.

Abstract: The training of generative AI models on ingested work is a hotly contested area of U.S. copyright law. In this Seminar, I will inquire whether such training may constitute "fair use" under the nonexclusive four-factor test of the U.S. Copyright Act. Currently, courts are wrestling with the fair use defense in several major cases, including Thomson Reuters v. ROSS Intelligence; Bartz v. Anthropic; Kadrey v. Meta; and the consolidated litigation of In re: OpenAI. Another open question is whether AI outputs infringe copyright in other works. Here, plaintiffs must establish that AI outputs infringe their works by passing the threshold of the "substantial similarity" test. I will discuss the test in the context of AI litigation, and will suggest that the relatively novel "market dilution" theory, focusing on harm caused by stylistically similar outputs, might be applied to weigh against a fair use defense for GenAI training. I will also address whether the theory of "vicarious liability" might be fruitfully brought to bear against certain genAI companies. Lastly, I will ask what action Congress can, or should, take, with a view to striking a fair balance between meeting the needs of innovative technologies and securing the rights of creative industries and creators. As an example, I will raise a recent proposal (in which I was involved) that Congress explicitly prohibit GenAI training on materials derived from digital repositories of unlicensed materials (so-called "shadow libraries").

For more information (and to download slides) see: https://www.cipil.law.cam.ac.uk/seminars-and-events/cipil-seminars
The OpenAI Sora 2 team (Bill Peebles, Thomas Dimson, Rohan Sahai) discuss how they compressed filmmaking from months to days, enabling anyone to create compelling video. Bill, who invented the diffusion transformer that powers Sora and most video generation models, explains how space-time tokens enable object permanence and physics understanding in AI-generated video, and why Sora 2 represents a leap for video. Thomas and Rohan share how they're intentionally designing the Sora product against mindless scrolling, optimizing for creative inspiration, and building the infrastructure for IP holders to participate in a new creator economy. The conversation goes beyond video generation into the team's vision for world simulators that could one day run scientific experiments, their perspective on co-evolving society alongside technology, and how digital simulations in alternate realities may become the future of knowledge work. Hosted by: Konstantine Buhler and Sonya Huang, Sequoia Capital
In this episode, Aydin chats with Allan Isfan, Senior Director of Global Video Platform at Warner Bros Discovery, about how AI is reshaping creativity, software development, and large-scale enterprise culture. Allan explains how he drives AI literacy for 1,500+ employees, the power of internal demos and sandboxes, and gives a hands-on walkthrough of generative video tools like Gemini V3, Flow, and Sora. He also dives into AI video analysis, the Wizard of Oz project at The Sphere, and the future of creative storytelling powered by AI.
In this second episode of the special AI mini-series, we explore the human side of transformation, where technology meets purpose and people remain at the center: from future jobs and critical thinking to working with C-level leaders, and how human intervention and high-quality data drive success in an AI-powered world.

This week, Dave, Esmee, and Rob talk to Indhira Mani, CDO at Intact Insurance UK, about the love for data, insights on leadership, resilience, and preparing the next generation for what's next.

TLDR:
01:30 Introduction of Indhira Mani and Scotch whisky
05:45 Explaining the State of AI mini-series with Craig
07:12 Conversation with Indi about her boyfriend called Data
38:33 Umbrella sharing in Japan and trust in AI
45:15 The British Insurance Award and Women in Tech finalist

Guest
Indhira Mani: https://www.linkedin.com/in/indhira-mani-data/

Hosts
Dave Chapman: https://www.linkedin.com/in/chapmandr/
Rob Kernahan: https://www.linkedin.com/in/rob-kernahan/
Esmee van de Giessen: https://www.linkedin.com/in/esmeevandegiessen/
with co-host Craig Suckling: https://www.linkedin.com/in/craigsuckling/

Production
Marcel van der Burg: https://www.linkedin.com/in/marcel-vd-burg/
Dave Chapman: https://www.linkedin.com/in/chapmandr/

Sound
Ben Corbett: https://www.linkedin.com/in/ben-corbett-3b6a11135/
Louis Corbett: https://www.linkedin.com/in/louis-corbett-087250264/

'Cloud Realities' is an original podcast from Capgemini
While the prior decade was defined by disruption in content distribution, the next decade will be defined by disruption in content creation, augmented by generative AI. This month's Eye on the Market looks at the rapidly shifting fortunes in legacy cable/broadcast shares vs streaming, the rise of social media as a platform for consuming all forms of content, rising acceptance of user-generated content and the increasing democratization of text-to-video tools used to create it, the value of the legacy content moat in film/tv libraries and the best movies of the 21st century (as ranked by me). View video here
Stefano Puntoni, Wharton marketing professor and co-director of Wharton Human AI Research, discusses findings from a multi-year study on how companies are implementing generative AI, examining its impact on entry-level jobs, return on investment, and the growing intersection between human expertise and emerging technology. Hosted on Acast. See acast.com/privacy for more information.
Mastering Modern PR in the Age of AI: Expert Insights from Amy Rosenberg of Veracity

In an era where AI and digital media are rewriting the rules of communication, modern PR demands more strategy, agility, and authenticity than ever. In this episode of The Thoughtful Entrepreneur, host Josh Elledge sits down with Amy Rosenberg, Founder and President of Veracity, to explore how small businesses and PR professionals can adapt to today's fast-changing media landscape. Amy shares actionable tactics for building credibility, leveraging AI, and crafting stories that truly resonate with audiences.

The Future of PR in a Digital-First World

Amy explains that public relations has shifted from focusing solely on press releases and media placements to building holistic brand visibility across multiple platforms. In today's fast-changing world, credibility and consistency are the new currencies of PR success. She introduces the emerging concept of Generative Engine Optimization (GEO)—the process of optimizing your brand's presence not only for Google but also for AI-driven search tools like ChatGPT. By creating high-value, trustworthy content, brands can ensure they show up in AI-generated responses that influence decision-making.

For small businesses, Amy offers practical ways to take control of their own PR efforts. She encourages using editorial calendars to time pitches with industry trends, focusing on earned media rather than paid placements, and telling authentic, human-centered stories. Even without a large budget, entrepreneurs can make an impact by consistently sharing relevant insights and staying connected to their audience's needs.

Amy emphasizes that while technology like AI is transforming PR, the human element remains irreplaceable. Storytelling, emotional intelligence, and relationship-building still drive meaningful visibility and trust. By blending smart digital strategies with authentic communication, brands can thrive in an AI-powered, content-saturated world.

About Amy Rosenberg
Amy Rosenberg is the Founder and President of Veracity, a Portland-based public relations and digital marketing agency. With over two decades of experience, Amy helps organizations and PR professionals navigate modern media through strategic communication, training, and her acclaimed book A Practical Guide to Public Relations for Businesses, Nonprofits, and PR Leaders.

About Veracity
Veracity is a PR and digital marketing agency specializing in strategic communications, media relations, and reputation management. The firm empowers businesses and marketing leaders to build credibility, enhance visibility, and connect meaningfully with their audiences. Learn more and access free resources at www.veracityagency.com/.

Links Mentioned in This Episode
Amy Rosenberg on LinkedIn
Veracity Website

Key Episode Highlights
PR success in 2024 depends on credibility, consistency, and digital visibility.
Generative Engine Optimization (GEO) is the next evolution of PR and SEO strategy.
Small businesses can leverage editorial calendars and strong storytelling to earn coverage.
Earned media delivers more long-term value than paid placements.
AI tools are powerful, but authentic...
Discover how educators are using generative AI not to automate, but to elevate critical thinking and collaboration in K-12 schools. In this episode of Shifting Schools, host Tricia Friedman shows how "disagreement by design" and intentional prompt-engineering transform student and leadership learning.

What you'll learn:
What disagreement by design looks like in real classrooms and leadership teams
How prompt engineering unlocks student curiosity and systems-thinking mindset in K-12
Why writing bespoke GPT bots might just be the 'new essay' of our times

Who this episode is for:
Any educator, school leader or district-innovator exploring how to responsibly integrate companion AI, AI avatars and prompt-driven dialogue into a learning ecosystem.
In this episode, Claus Lauter, host of the Ecommerce Coffee Break Podcast, breaks down how AI is reshaping Q4 e-commerce strategy. He explains the rise of Generative Engine Optimization (GEO) — the new way to stay visible in AI-driven search — and shares practical tips on managing high ad spend while keeping profits strong. Claus also talks about the evolution of the podcast's YouTube format and invites listener feedback.

Topics discussed in this episode:
How AI is changing Q4 shopping.
ChatGPT's new checkout and what it means for marketing.
What Generative Engine Optimization (GEO) is.
Why Q4 ad costs peak — and how to handle them.
When high ad spend is worth it.
Why email list growth can wait until January.
The top KPI mistake new sellers make.
YouTube's growing power in search and discovery.
Why viewer feedback matters for growth.

Learn why AI is rewriting the rules of search — and how GEO can future-proof your ecommerce brand.

Links & Resources
Website: https://ecommercecoffeebreak.com/
LinkedIn: https://www.linkedin.com/company/ecommerce-coffee-break-podcast/
X/Twitter: https://x.com/ecomcoffeebreak
Instagram: https://www.instagram.com/ecommercecoffeebreak/

Get access to more free resources by visiting the show notes at https://tinyurl.com/trr8wnvr

______________________________________________________

LOVE THE SHOW? HERE ARE THE NEXT STEPS!
Follow the podcast to get every bonus episode. Tap follow now and don't miss out!
Rate & Review: Help others discover the show by rating the show on Apple Podcasts at https://tinyurl.com/ecb-apple-podcasts
Join our Free Newsletter: https://newsletter.ecommercecoffeebreak.com/
Support The Show On Patreon: https://www.patreon.com/EcommerceCoffeeBreak
Partner with us: https://ecommercecoffeebreak.com/partner-with-us/
We've all been there—juggling new tech, new expectations, and wondering if our students are really learning what matters. In this chat, we talk about "pedagogical debt" (the gap between what we know works in teaching and what we actually do), how AI is shaking things up, and why the right kind of curiosity can help. At its heart, it's a conversation about making sure technology serves learning—not the other way around.

Dr. Punya Mishra (punyamishra.com) is the Associate Dean of Scholarship and Innovation at the Mary Lou Fulton Teachers College at Arizona State University. He has an undergraduate degree in Electrical Engineering, two Master's degrees in Visual Communication and Mass Communications, and a Ph.D. in Educational Psychology. He co-developed the TPACK framework, described as "the most significant advancement in technology integration in the past 25 years."

Dr. Caroline Fell Kurban is the advisor to the Rector at MEF University. She was the founding Director of the Center of Research and Best Practices for Learning and Teaching (CELT) at MEF University and teaches in the Faculty of Education. She holds a BSc in Geology, an MSc in TESOL, an MA in Technology and Learning Design, and a PhD in Applied Linguistics. Fell Kurban is currently the head of the Global Terminology Project and the creator of the GenAI-U technology integration framework.

Dr. Liz Kolb is a clinical professor at the University of Michigan and the author of several books, including Cell Phones in the Classroom and Help Your Child Learn with Cell Phones and Web 2.0. Kolb has been a featured and keynote speaker at conferences throughout the U.S. and Canada. She created the Triple E Framework for effective teaching with digital technologies and blogs at cellphonesinlearning.com.

Dr. Ruben Puentedura is the Founder and President of Hippasus, a consulting practice focusing on transformative applications of information technologies to education. He has implemented these approaches for over thirty years at various K-20 institutions and health and arts organizations. He is the creator of the SAMR model for selecting, using, and evaluating technology in education and has guided multiple projects worldwide.

Dr. Helen Crompton is the Executive Director of the Research Institute for Digital Innovation in Learning at ODUGlobal and Professor of Instructional Technology at Old Dominion University. Dr. Crompton earned her Ph.D. in educational technology and mathematics education from the University of North Carolina at Chapel Hill. She is recognized for her outstanding contributions and is on Stanford's esteemed list of the world's Top 2% of Scientists. She is the creator of the SETI framework. She frequently serves as a consultant for various governments and bilateral and multilateral organizations, such as the United Nations and the World Bank, on driving meaningful change in educational technology.
Invest in pre-IPO stocks with AG Dillon & Co. Contact aaron.dillon@agdillon.com to learn more. Financial advisors only.

00:00 - Intro
00:08 - Mercor $350M Series C Quintuples Valuation to $10B Amid AI Data Pivot
01:47 - OpenAI Restructuring to PBC Unlocks $40B Fundraising at $500B Valuation
03:14 - SoftBank Greenlights $22.5B Final Tranche to OpenAI Contingent on PBC Shift
04:06 - OpenAI Advances Generative Music Tool to Rival Suno in $200B Media Market
05:14 - Poolside $2B Raise at $12B Valuation Backed by Nvidia's $1B Commitment
05:49 - Bending Spoons $270M Raise at $11B Valuation Funds $1.4B AOL Buy
06:56 - Whatnot $225M Series F at $11.5B Valuation Drives Global GMV Doubling
08:08 - Figma $200M+ Weavy Acquisition Boosts AI Media Tools Post-IPO
09:07 - MiniMax M2 Tops Global Open Models in Sovereign AI Push
10:00 - 1X NEO Robot Preorders at $20K Target 2026 Deliveries
11:10 - SpaceX $2B Pentagon Deal Bolsters $11B Starlink Revenue
12:15 - Canva Debuts Foundational Design Model in Affinity Free Shift
13:28 - Grammarly Rebrands to Superhuman with 40M DAU AI Suite
As we continue our discussion of the difficulty of integrating the journey on the way of the life of faithfulness to the evangelion of Jesus, The Christ, the Orthodox Christian way of life, into contemporary western culture, Jim & Fr Symeon take the opportunity to discuss "the other side". Last conversation we defined & critiqued "woke" as the civil religion of the so-called Left. This time we will define & critique "Make America Great Again" as the civil religion of the so-called Right. The reason we wish to critique it is because it uses Christian language but is not Christian. And yet we can end the episode with the hymns to The Theotokos which call her a military leader, proclaiming our victory & the establishment of our way of life. Find out why this isn't about theocracy & isn't about violence by tuning in!

Scripture citations for this episode:
- Genesis 1 & 2 - Creation as a battle against chaos & death
- Genesis 11 - Tower of Babel
- Galatians 5:16-26 - Fruits of The Spirit

The Christian Saints Podcast is a joint production of Generative Sounds & Paradosis Pavilion with oversight from Fr Symeon Kees
Paradosis Pavilion - https://youtube.com/@paradosispavilion9555
https://www.instagram.com/christiansaintspodcast
https://twitter.com/podcast_saints
https://www.facebook.com/christiansaintspodcast
https://www.threads.net/@christiansaintspodcast

Iconographic images used by kind permission of Nicholas Papas, who controls distribution rights of these images
Prints of all of Nick's work can be found at Saint Demetrius Press - http://www.saintdemetriuspress.com
All music in these episodes is a production of Generative Sounds
https://generativesoundsjjm.bandcamp.com
Distribution rights of this episode & all music contained in it are controlled by Generative Sounds
Copyright 2021 - 2023
Generative artificial intelligence is emerging as a tool to study how people learn language. University of Arizona professor Gondy Leroy discusses research into how advanced machine learning can help families diagnose autism through the way their children acquire speaking skills. Gondy Leroy spoke with Leslie Tolbert, Ph.D., Regents' Professor in Neuroscience at the University of Arizona.
As agentic AI becomes a defining force in enterprise innovation, infrastructure has moved from a back-office concern to the beating heart of business transformation. On today's episode of the 'AI in Business' podcast, Ranjan Sinha, IBM Fellow, Vice President, and Chief Technology Officer for watsonx and IBM Research, joins Emerj Editorial Director Matthew DeMello to discuss the future of scalable AI infrastructure — from neuromorphic and quantum processing to open-source AI platforms built for trust and governance. Ranjan explains how enterprises are transitioning from isolated experiments to mission-critical AI applications, revealing why today's Fortune 500 leaders must reimagine compute, governance, and data pipelines to sustain automation and reliability at scale. He details IBM's breakthroughs in specialized processors, including the NorthPole neuromorphic chip and the company's roadmap for fault-tolerant quantum computing by 2029. Want to share your AI adoption story with executive peers? Click emerj.com/expert2 for more information and to be a potential future guest on the 'AI in Business' podcast! If you've enjoyed or benefited from some of the insights of this episode, consider leaving us a five-star review on Apple Podcasts, and let us know what you learned, found helpful, or liked most about this show! Watch Matthew and Ranjan's conversation on our new YouTube Channel: youtube.com/@EmerjAIResearch.
Sign up for Alex's first live cohort, about Hierarchical Model building!
Get 25% off "Building AI Applications for Data Scientists and Software Engineers"
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
Why GPs still matter: Gaussian Processes remain a go-to for function estimation, active learning, and experimental design – especially when calibrated uncertainty is non-negotiable.
Scaling GP inference: Variational methods with inducing points (as in GPflow) make GPs practical on larger datasets without throwing away principled Bayes.
MCMC in practice: Clever parameterizations and gradient-based samplers tighten mixing and efficiency; use MCMC when you need gold-standard posteriors.
Bayesian deep learning, pragmatically: Stochastic-gradient training and approximate posteriors bring Bayesian ideas to neural networks at scale.
Uncertainty that ships: Monte Carlo dropout and related tricks provide fast, usable uncertainty – even if they're approximations.
Model complexity ≠ model quality: Understanding capacity, priors, and inductive bias is key to getting trustworthy predictions.
Deep Gaussian Processes: Layered GPs offer flexibility for complex functions, with clear trade-offs in interpretability and compute.
Generative models through a Bayesian lens: GANs and friends benefit from explicit priors and uncertainty – useful for safety and downstream decisions.
Tooling that matters: Frameworks like GPflow lower the friction from idea to implementation, encouraging reproducible, well-tested modeling.
Where we're headed: The future of ML is uncertainty-aware by default – integrating UQ tightly into optimization, design, and deployment.

Chapters:
08:44 Function Estimation and Bayesian Deep Learning
10:41 Understanding Deep Gaussian Processes
25:17 Choosing Between Deep GPs and Neural Networks
32:01 Interpretability and Practical Tools for GPs
43:52 Variational Methods in Gaussian Processes
54:44 Deep Neural Networks and Bayesian Inference
01:06:13 The Future of Bayesian Deep Learning
01:12:28 Advice for Aspiring Researchers
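For listeners who want to see the inducing-point idea from the takeaways in code, here is a minimal sketch assuming GPflow's SVGP model on a synthetic 1-D regression task; the data, kernel choice, and optimizer settings are illustrative and not from the episode:

```python
import numpy as np
import tensorflow as tf
import gpflow

# Synthetic 1-D regression data (illustrative only, not from the episode)
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(500, 1))
Y = np.sin(12.0 * X) + 0.1 * rng.standard_normal((500, 1))

# Sparse variational GP: 20 inducing points summarize the 500 observations
Z = np.linspace(0.0, 1.0, 20).reshape(-1, 1)
model = gpflow.models.SVGP(
    kernel=gpflow.kernels.SquaredExponential(),
    likelihood=gpflow.likelihoods.Gaussian(),
    inducing_variable=Z,
    num_data=X.shape[0],
)

# Maximize the ELBO with minibatches (stochastic variational inference)
dataset = tf.data.Dataset.from_tensor_slices((X, Y)).shuffle(500).batch(64).repeat()
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)

for batch in dataset.take(500):
    with tf.GradientTape() as tape:
        loss = model.training_loss(batch)  # negative ELBO on this minibatch
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))

# Calibrated predictive mean and variance at new inputs
Xtest = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
mean, var = model.predict_f(Xtest)
print(mean.numpy().ravel(), var.numpy().ravel())
```

The design choice mirrors the takeaway: the inducing points keep per-step cost tied to the minibatch and the number of inducing points rather than the full dataset, so the same calibrated uncertainty scales to datasets where an exact GP would be impractical.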
This episode digs into the habits that actually hold up: learning from CTF wins and post-event reviews, exploring scholarships and Reno trainings that build technical muscle, and walking through expert-witness prep that turns courtroom stress into structured, confident testimony.

We'll unpack Brett Shavers' reminder that truth alone doesn't win cases—procedure, documentation, and bias-aware methods do. Clear writing matters too; vague language can undermine solid work.

On the tools side, RabbitHole v3 now recovers deleted SQLite records and rebuilds them into query-ready databases—speeding validation and reporting without losing traceability. We'll also demo the new Android Logical Extractor: pull device info, logs, and scoped chat data with hashes and ready-to-file PDFs. It's ideal when consent is limited or full file systems aren't on the table, and integrates cleanly with downstream workflows.

Throughout, we emphasize one idea: tools are abstractions. If you can't explain how a result was produced or reproduce it, you don't own the finding. That's especially true with AI. Generative models are nondeterministic—useful when documented, risky when their prompts or scope stay hidden. We'll cover prompt disclosure, reproducibility, and how to write about "deleted" data with precision: previously existing, marked deleted, not referenced—describe state, not intent.

If you're serious about improving testimony, validating results, and adopting new tools without losing forensic footing, join us. Then share your take on AI prompts and language precision—what will you change in your next report?

Notes:

IACIS Scholarships
https://www.iacis.com/awards-and-scholarships/will-docken-scholarship/
https://www.iacis.com/awards-and-scholarships/womens-scholarship/

Training Opportunities!
IACIS Reno
https://www.iacis.com/events/in-person/reno-nv/

Free DFIR Test Images + Industry Tools to Analyze Them
https://www.dfir.training/downloads/test-images

New Blogs from Brett Shavers!
https://www.linkedin.com/pulse/theres-lot-more-trial-than-you-may-know-even-have-100-brett-shavers-br4sc/
https://www.linkedin.com/pulse/case-almost-made-me-quit-dfir-shouldve-news-brett-shavers-pie1c/
https://www.linkedin.com/pulse/i-when-digital-forensics-lost-its-soul-brett-shavers-otkec/
https://www.linkedin.com/pulse/end-dfir-again-dfir-training-ab5jc/
https://www.linkedin.com/pulse/how-wreck-your-report-affidavit-testimony-one-word-brett-shavers-qkyvc/

Free Webinar
https://www.suspectbehindthekeyboard.com/fighting-city-hall-dfir-lessons-from-a-pro-se-plaintiff

Rabbithole Update
https://www.linkedin.com/posts/rabbithole-dataviewer-sqllite-ugcPost-7384144022065274880-0d0D
https://www.cclsolutionsgroup.com/forensic-products/rabbithole

ALEX Release
https://github.com/prosch88/ALEX
https://github.com/RealityNet/android_triage
Technology can scale almost everything—except human experience. In a world driven by efficiency, what does it mean to design for how people truly feel? It's about transforming user interactions into ongoing insight and innovation, rooted in empathy and understanding.

This week, Dave, Esmee and Rob talk to Kevin Magee, Chief Technology Officer at All human, about helping organizations transform customer experiences with a focus on design, engineering, and what is called "digital performance."

TLDR:
00:41 Introduction of Kevin Magee with Guinness or sparkling water?
03:23 Rob wonders, is Apple really opening up its ecosystem?
11:40 Deep dive with Kevin into design, engineering, and digital performance
36:30 How tools built for one purpose can transform entire systems
48:35 Weekend city breaks and pursuing a master's in psychology

Guest
Kevin Magee: https://www.linkedin.com/in/kevinmagee/

Hosts
Dave Chapman: https://www.linkedin.com/in/chapmandr/
Rob Kernahan: https://www.linkedin.com/in/rob-kernahan/
Esmee van de Giessen: https://www.linkedin.com/in/esmeevandegiessen/

Production
Marcel van der Burg: https://www.linkedin.com/in/marcel-vd-burg/
Dave Chapman: https://www.linkedin.com/in/chapmandr/

Sound
Ben Corbett: https://www.linkedin.com/in/ben-corbett-3b6a11135/
Louis Corbett: https://www.linkedin.com/in/louis-corbett-087250264/

'Cloud Realities' is an original podcast from Capgemini
In this special crossover edition of Working People and The Marc Steiner Show, hosts Maximillian Alvarez and Marc Steiner examine how the "artificial intelligence" (AI) boom is shaping the economy and the impact it is already having—and will continue to have—on working people's lives, livelihoods, and jobs. Alvarez and Steiner speak with two members of a new mutual aid and advocacy group called Stop Gen AI, which formed this year out of the critical need to provide material support for creatives, knowledge workers, and anyone else impacted by generative AI.

Guests:
Kim Crawley is a former cybersecurity professor and co-author of The Pentester Blueprint. She founded Stop Gen AI in May 2025 in response to the immense socioeconomic harm generative AI has done to her and her peers, and to the vast environmental, cultural, scientific, psychological, and economic harm it does to the world. Stop Gen AI is unique for its anticapitalist focus and commitment to raising survival funds for people who are struggling.
Emmi is an information security expert with experience across many niches of the industry, including application security across a number of verticals, and she is a specialist in insider threat and cyber threat intelligence. She joined the efforts of Stop Gen AI in 2025 due to the overwhelming number of friends she has seen lose their entire lives and careers due to the out-of-control AI bubble. She also has nearly two decades of experience with boots-on-the-ground union organizing, protesting, and activism.

Additional links/info:
Stop Gen AI website and Mastodon page
Stop Gen AI Twitch Fest information
Khiree Stewart, WBALTV 11, "'Just holding a Doritos bag': Student handcuffed after AI system mistook bag of chips for weapon"
Marc Steiner & Maximillian Alvarez, The Marc Steiner Show, "Trump and Silicon Valley's plan to rule the world with AI weapons"

Credits:
Featured Music: Jules Taylor, Working People Theme Song; Stephen Frank, Marc Steiner Show Theme Song
Studio Production: David Hebden
Audio Post-Production: Alina Nehlich

Become a supporter of this podcast: https://www.spreaker.com/podcast/the-real-news-podcast--2952221/support.

Help us continue producing radically independent news and in-depth analysis by following us and becoming a monthly sustainer.

Follow us on:
Bluesky: @therealnews.com
Facebook: The Real News Network
Twitter: @TheRealNews
YouTube: @therealnews
Instagram: @therealnewsnetwork

Become a member and join the Supporters Club for The Real News Podcast today!
We will talk about the five key personal finance use cases for Generative AI, including how to use it for creating personalized budgets, setting financial goals, and simulating debt repayment scenarios.

Today's Stocks & Topics: General Mills, Inc. (GIS), Market Wrap, The Hartford Insurance Group, Inc. (HIG), Generative AI in Finance: 5 Ways to Budget, Plan, and Save, Changing Tax Status, Leveraged ETFs, STAAR Surgical Company (STAA), Verizon Communications Inc. (VZ), Civitas Resources, Inc. (CIVI).

Our Sponsors:
* Check out Anthropic: https://claude.ai/INVEST
* Check out Gusto: https://gusto.com/investtalk
* Check out Progressive: https://www.progressive.com
* Check out TruDiagnostic and use my code INVEST for a great deal: https://www.trudiagnostic.com

Advertising Inquiries: https://redcircle.com/brands
Google Ads just turned 25, and it's entering a new era—one driven by AI, conversations, and context. In this episode, Dan Taylor, VP of Global Ads at Google, joins Mike Shields to unpack how the search giant is transforming its ads business for the age of AI Overviews, Performance Max, and long-form conversational queries. He explains why this shift feels bigger than mobile, how advertisers are adopting AI faster than ever, and why trust and accuracy remain Google's north stars.Dan also reveals how AI is expanding the search funnel, creating new commercial moments that brands never could have targeted before. From tools like AI Max to agent-powered shopping, the future of advertising is about reducing friction, improving relevance, and meeting consumers wherever their curiosity starts. It's a rare inside look at how Google plans to keep Search indispensable for the next 25 years.Key Highlights
Watch the full episode on our YouTube channel: youtube.com/@mreapodcast

We've all been told to create content — but few know how to make it *trustworthy* enough for people and AI to choose it. Marcus Sheridan does. Known for transforming his struggling pool company into a global content powerhouse, Marcus built his business on a simple idea: Answer every question your buyers are asking with radical honesty.

In this episode, Marcus shows us how to win attention in the age of generative search by becoming the most known and trusted voice in your market. He breaks down his Four Pillars of Trust: 1. Say what others won't, 2. Show what others won't, 3. Sell in ways others won't, and 4. Be more human than your competition. Marcus also breaks down the five topics every client is already Googling before they ever call you.

From car-recorded videos to transparent pricing pages and AI-proof content strategy, Marcus gives us the blueprint to stand out in a noisy world and make both people and machines believe, "You're the real estate agent we can trust."

Resources:
Read: They Ask, You Answer by Marcus Sheridan
Read: Endless Customers by Marcus Sheridan
Try: Marcus Sheridan's custom GPTs — Endless Real Estate Content Titles; Show What Others Won't; Endless Self-Service Tools by Marcus Sheridan (search "Marcus Sheridan" in GPTs)
Try: AITrustSignals.com website grader for AI visibility
Listen: YouTube Strategies Every Real Estate Agent Needs With Sean Cannell | The MREA Podcast (EP.76)
Listen: Difficult Conversations with Phil M Jones | The MREA Podcast (EP.32)
Order the Millionaire Real Estate Agent Playbook | Volume 3

Connect with Jason:
LinkedIn

Produced by NOVA

This podcast is for general informational purposes only. The views, thoughts, and opinions of the guest represent those of the guest and not Keller Williams Realty, LLC and its affiliates, and should not be construed as financial, economic, legal, tax, or other advice. This podcast is provided without any warranty, or guarantee of its accuracy, completeness, timeliness, or results from using the information.

WARNING! You must comply with the TCPA and any other federal, state or local laws, including for B2B calls and texts. Never call or text a number on any Do Not Call list, and do not use an autodialer or artificial voice or prerecorded messages without proper consent. Contact your attorney to ensure your compliance.

The use of generative AI is subject to limitations, including the availability and quality of the training data used to train the AI model used. Users should exercise caution and independently verify any information or output generated by the AI system utilized and should apply their own judgment and critical thinking when interpreting and utilizing the outputs of generative AI.
Episode Topic: The Truth of the Matter in the Age of Generative AI

Join Soc(AI)ety Seminars for a discussion with Tina Eliassi-Rad, the Inaugural Joseph E. Aoun Professor at Northeastern University, about the challenges of generative AI tools and how we should approach governing them as technology continues to change rapidly.

Featured Speakers: Tina Eliassi-Rad, Northeastern University

Read this episode's recap over on the University of Notre Dame's open online learning community platform, ThinkND: https://go.nd.edu/b204ac.

This podcast is a part of the ThinkND Series titled Soc(AI)ety Seminars.

Thanks for listening! The ThinkND Podcast is brought to you by ThinkND, the University of Notre Dame's online learning community. We connect you with videos, podcasts, articles, courses, and other resources to inspire minds and spark conversations on topics that matter to you — everything from faith and politics, to science, technology, and your career. Learn more about ThinkND and register for upcoming live events at think.nd.edu. Join our LinkedIn community for updates, episode clips, and more.
Learn how DataStax transformed customer feedback into a hybrid search solution that powers Fortune 500 companies through their partnership with AWS.

Topics Include:
AWS and DataStax discuss how quality data powers AI workloads and applications.
DataStax, built on Apache Cassandra, powers Starbucks, Netflix, and Uber at scale.
Their TIL app collects outside-in customer feedback to drive product development decisions.
Hybrid search and BM25 kept trending in customer requests for several months.
Customers wanted to go beyond pure vector search, not specifically BM25 itself.
Research showed hybrid search improves accuracy by up to 40% over single methods (a minimal illustration follows this summary).
ML-based re-rankers substantially outperform score-based ones despite added latency and cost.
DataStax repositioned their product as a knowledge layer above the data layer.
Developer-first design prioritizes simple interfaces and eliminates manual data-modeling headaches.
The hybrid search API uses simple dollar-sign parameters and integrates with Langflow automatically.
AWS PrivateLink ensures security while Graviton processors boost efficiency and tenant density.
Graviton reduced total platform operating costs by 20-30% with higher throughput.

Participants:
Alejandro Cantarero – Field CTO, AI, DataStax
Ruskin Dantra – Senior ISV Solution Architect, AWS, Amazon Web Services

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/
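For readers who want to see the hybrid-search idea from the topic list above in concrete form, here is a minimal sketch: a BM25-style lexical ranking and a vector-similarity ranking over a toy corpus are fused with reciprocal rank fusion. Everything in it (the corpus, the toy embeddings, the `rrf_fuse` helper, and the constants) is an illustrative assumption for this sketch, not the DataStax API or any AWS service discussed in the episode.

```python
# Hybrid search sketch: fuse a BM25-style lexical ranking with a vector-similarity
# ranking via reciprocal rank fusion (RRF). Corpus, embeddings, and constants are
# illustrative assumptions only, not the product API discussed in the episode.
import math
from collections import Counter

DOCS = {
    "d1": "hybrid search combines keyword and vector retrieval",
    "d2": "vector search finds semantically similar passages",
    "d3": "bm25 is a classic keyword ranking function",
}

# Toy embeddings standing in for a real embedding model (assumption for the sketch).
EMBEDDINGS = {"d1": [0.9, 0.1, 0.3], "d2": [0.8, 0.4, 0.1], "d3": [0.1, 0.9, 0.2]}


def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Tiny BM25 implementation over whitespace tokens."""
    tokenized = {doc_id: text.split() for doc_id, text in docs.items()}
    avgdl = sum(len(toks) for toks in tokenized.values()) / len(tokenized)
    n = len(docs)
    scores = {}
    for doc_id, tokens in tokenized.items():
        tf = Counter(tokens)
        score = 0.0
        for term in query.split():
            df = sum(1 for toks in tokenized.values() if term in toks)
            if df == 0:
                continue
            idf = math.log(1 + (n - df + 0.5) / (df + 0.5))
            freq = tf[term]
            score += idf * freq * (k1 + 1) / (freq + k1 * (1 - b + b * len(tokens) / avgdl))
        scores[doc_id] = score
    return scores


def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))


def vector_scores(query_vec, embeddings):
    """Semantic ranking: cosine similarity between the query vector and each document."""
    return {doc_id: cosine(query_vec, vec) for doc_id, vec in embeddings.items()}


def rrf_fuse(*rankings, k=60):
    """Reciprocal rank fusion: documents ranked highly by either method rise to the top."""
    fused = Counter()
    for scores in rankings:
        ordered = sorted(scores, key=scores.get, reverse=True)
        for rank, doc_id in enumerate(ordered, start=1):
            fused[doc_id] += 1.0 / (k + rank)
    return fused.most_common()


if __name__ == "__main__":
    lexical = bm25_scores("hybrid keyword search", DOCS)
    semantic = vector_scores([0.85, 0.2, 0.25], EMBEDDINGS)  # pretend query embedding
    print(rrf_fuse(lexical, semantic))
```

Consistent with the episode's point that ML-based re-rankers beat score-based fusion despite added latency and cost, a production system would typically replace the simple `rrf_fuse` step with a learned cross-encoder re-ranker over the merged candidate set.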
Ariel Weintraub is the Global CISO at Aon. In this episode, she joins Brendan Galla, Chief Product Officer at Exiger, and host Scott Schober to discuss the widespread impact of generative AI, its role in the software development lifecycle, and more. Exiger is revolutionizing the way corporations, government agencies, and banks navigate risk and compliance across their third parties, supply chains, and customers through its software and tech-enabled solutions. To learn more about our sponsor, visit https://exiger.com.
"Generative modeling is a game-changer. We can now capture high-dimensional statistics that we could never have captured in the past." Felix Herrmann explains how digital twins and generative AI are reshaping subsurface geophysics. He highlights the importance of open-source tools, multimodal data, and uncertainty-aware models for better decision-making in energy and storage projects. By combining physics with AI, his work shows how geophysics can move beyond silos and create more reliable and efficient solutions. KEY TAKEAWAYS > Digital twins informed by multimodal data can reduce uncertainty and improve reservoir management. > Open-source tools and agreed benchmarks are essential for accelerating innovation in geophysics. > Combining physics-based models with generative AI creates robust, practical solutions for complex subsurface challenges. Read Felix's article in The Leading Edge, "President's Page: Digital twins in the era of generative AI," at https://doi.org/10.1190/tle42110730.1. GUEST BIO Felix J. Herrmann earned his Ph.D. in engineering physics from Delft University of Technology in 1997, followed by research appointments at Stanford and MIT. He later joined the University of British Columbia faculty in 2002 and moved to the Georgia Institute of Technology in 2017, where he is the Georgia Research Alliance Scholar Chair in Energy with cross-appointments across multiple schools. Dr. Herrmann leads a cross-disciplinary research program in computational imaging, spanning seismic and medical applications, and is recognized for innovations in machine learning, optimization, and high-performance computing that have reduced costs in seismic data acquisition and imaging. A past SEG Distinguished Lecturer and recipient of the SEG Reginald Fessenden Award, he directs the Seismic Laboratory for Imaging and Modeling and co-founded the Center for Machine Learning for Seismic (ML4Seismic) to advance AI-assisted seismic imaging and reservoir monitoring through industry collaboration.
Dr. Aida Nematzadeh is a Senior Staff Research Scientist at Google DeepMind, where her research focuses on multimodal AI models. She works on developing evaluation methods and analyzing models' learning abilities to detect failure modes and guide improvements. Before joining DeepMind, she was a postdoctoral researcher at UC Berkeley and completed her PhD and Master's in Computer Science at the University of Toronto. During her graduate studies she studied how children learn semantic information through computational (cognitive) modeling.

Time stamps of the conversation:
00:00 Highlights
01:20 Introduction
02:08 Entry point in AI
03:04 Background in Cognitive Science & Computer Science
04:55 Research at Google DeepMind
05:47 Importance of language-vision in AI
10:36 Impact of architecture vs. data on performance
13:06 Transformer architecture
14:30 Evaluating AI models
19:02 Can LLMs understand numerical concepts?
24:40 Theory of mind in AI
27:58 Do LLMs learn theory of mind?
29:25 LLMs as judges
35:56 Publish-or-perish culture in AI research
40:00 Working at Google DeepMind
42:50 Doing a Ph.D. vs. not in AI (at least in 2025)
48:20 Looking back on a research career

More about Aida: http://www.aidanematzadeh.me/

About the Host:
Jay is a Machine Learning Engineer at PathAI working on improving AI for medical diagnosis and prognosis.
LinkedIn: shahjay22
Twitter: jaygshah22
Homepage: https://jaygshah.github.io/ for any queries.

Stay tuned for upcoming webinars!

**Disclaimer: The information in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any institution or its affiliates of such video content.**
The skills we teach today will decide the world we live in tomorrow. The digital skills gap is something we've been dealing with for decades, but it's growing faster than ever: it starts with kids and stretches all the way into late IT careers, and we're finally starting to take a more connected, lifelong approach to closing it. This week, Dave, Esmee, and Rob speak with Mike Nayler, Director, National Security, Defense & Public Safety at AWS, about the digital skills gap and explore how tech companies can help close it.

TLDR:
00:45 Introduction of Mike Nayler and the pros and cons of enterprise architects, based on a survey
03:30 Rob is confused about AI replacing prompt engineers
07:55 Conversation with Mike on the digital skills gap
25:15 The real gap is between institutions and the people they aim to serve
33:24 Mike heading back to school and writing essays again

Guest
Mike Nayler: https://www.linkedin.com/in/nayler/

Hosts
Dave Chapman: https://www.linkedin.com/in/chapmandr/
Rob Kernahan: https://www.linkedin.com/in/rob-kernahan/
Esmee van de Giessen: https://www.linkedin.com/in/esmeevandegiessen/

Production
Marcel van der Burg: https://www.linkedin.com/in/marcel-vd-burg/
Dave Chapman: https://www.linkedin.com/in/chapmandr/

Sound
Ben Corbett: https://www.linkedin.com/in/ben-corbett-3b6a11135/
Louis Corbett: https://www.linkedin.com/in/louis-corbett-087250264/

'Cloud Realities' is an original podcast from Capgemini
Gorkem Yurtseven is the co-founder and CEO of fal, the generative media platform powering the next wave of image, video, and audio applications. In less than two years, fal has scaled from $2M to over $100M in ARR, serving over 2 million developers and more than 300 enterprises, including Adobe, Canva, and Shopify. In this conversation, Gorkem shares the inside story of fal's pivot into explosive growth, the technical and cultural philosophies driving its success, and his predictions for the future of AI-generated media.

In today's episode, we discuss:
How fal pivoted from data infrastructure to generative inference
fal's explosive year and how they scaled
Why "generative media" is a greenfield market
fal's unique hiring philosophy and lean
Banking on Fraudology is part of the Fraudology Podcast Network. In this eye-opening episode of Banking on Fraudology, host Hailey Windham sits down with Ronan Burke, co-founder and CEO of Inscribe, to discuss the alarming rise of document fraud in financial services. As generative AI makes creating fake documents easier than ever, Burke reveals that AI-generated and template-based document fraud surged over 200% in just the first half of 2023. The conversation dives deep into how fraudsters are weaponizing tools like ChatGPT to produce convincing forgeries of bank statements, pay stubs, and tax forms that can fool even experienced analysts. Burke explains that it's no longer just about spotting obvious fakes: financial institutions now face the challenge of proving documents are real. The episode explores why smaller banks and credit unions are especially vulnerable and offers practical advice to help fraud teams keep up, including leveraging AI-powered document verification tools. Windham and Burke stress that while AI is enabling more sophisticated fraud, it's also the key to detecting and preventing it. This timely discussion is a must-listen for fraud fighters, compliance professionals, and financial leaders looking to understand and combat the next wave of AI-driven document fraud. Don't miss this chance to arm yourself with cutting-edge insights. Tune in now and join the fight against financial crime in the AI era.

Ronan Burke
https://www.inscribe.ai/blog/from-forgeries-to-deepfakes-document-fraud-in-the-age-of-generative-ai
https://www.linkedin.com/in/rnnbrk/

About Hailey Windham:
As a 2023 CU Rockstar Recipient, Hailey Windham, CFCS (Certified Financial Crimes Specialist), demonstrated unbounded passion for educating her community, organization, and credit union membership on scams in the market and best practices to avoid them. She has implemented several programs within her previous organizations that aim at holistically learning how to prevent and detect fraud targeted at membership and employees. Windham's initiatives to build strong relationships and partnerships throughout the credit union community and with industry experts have led to countless success stories. Her applied knowledge of payments system programs combined with her experience in fraud investigations offers practical concepts that are transferable, no matter the organization's size.

Connect with Hailey on LinkedIn: https://www.linkedin.com/in/hailey-windham/
https://www.fraudfightclub.com/
https://www.about-fraud.com/
From May 2, 2023: Generative AI products have been tearing up the headlines recently. Among the many issues these products raise is whether or not their outputs are protected by Section 230, the foundational statute that shields websites from liability for third-party content.On this episode of Arbiters of Truth, Lawfare's occasional series on the information ecosystem, Lawfare Senior Editor Quinta Jurecic and Matt Perault, Director of the Center on Technology and Policy at UNC-Chapel Hill, talked through this question with Senator Ron Wyden and Chris Cox, formerly a U.S. congressman and SEC chairman. Cox and Wyden drafted Section 230 together in 1996—and they're skeptical that its protections apply to generative AI. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.