Best podcasts about genai

Show all podcasts related to genai

Latest podcast episodes about genai

Let's Talk Supply Chain
532: Turning Purposeful AI into Business Outcomes, with Infios

Mar 18, 2026 · 40:05


Aadil Kazmi of Infios talks about purposeful innovation, intelligent execution, tech readiness, and turning AI into measurable business outcomes.
IN THIS EPISODE WE DISCUSS:
[02.14] An introduction to Aadil, and an overview of Infios. "Supply chains run best when operators can execute them on a single stack."
[03.20] How Aadil's experience at Amazon sparked an entrepreneurial journey that ultimately led him to Infios. "When retailers and shippers offered faster shipping to their end customers, cart values and repeat rates went through the roof. That led me deep into supply chain..."
[05.17] Aadil's focus in his role as Head of AI, what excites him, and what keeps him up at night. "We're using our own tools internally to produce what our customers will eventually use."
[08.32] What purposeful innovation means to Infios, and the big unlock that happened three years ago that changed the game for AI. "We partner with customers to develop only use case driven AI workflows." "Purposeful innovation is looking at what workflows within our business depend on unstructured data and reasoning, and focusing on those for AI automation... Not everything is a fit for Gen AI."
[12.08] What AI agents mean in the context of supply chain, how they're being used now, and how we can understand automation through a three-level phased framework.
[15.31] How AI agents compare to traditional automation, and how businesses can decide which is the right fit for each challenge or workflow. "When companies embark on their automation journey, they should start with the highest leverage ROI workflows that have the lowest risk factor."
[19.15] The challenge of organizational debt, and leaning into AI readiness by connecting people's tribal knowledge to contextualize AI decision-making.
[21.41] How Infios are meeting customers where they are to overcome technology debt with intelligent orchestration.
[24.55] What Aadil's Executive Roundtable at Manifest uncovered about intelligent supply chain, and how to get the most from AI adoption. "You can't just throw AI at a problem… The best way to adopt AI is to actually pull the workflow and, from a first principles perspective, re-engineer it from the ground up to be AI native."
[28.31] Why technology readiness is still a big constraint on connected execution, and why AI ambition is yet to meet execution reality.
[29.23] How businesses can move toward intelligent, connected supply chain execution to turn purposeful AI into business outcomes, and how to measure success.
[33.17] What teams should do now as they plan for the year ahead.
RESOURCES AND LINKS MENTIONED:
Head over to Infios' website now to find out more and discover how they could help you too. You can also connect with Infios and keep up to date with the latest over on LinkedIn or YouTube, or you can connect with Aadil on LinkedIn. If you enjoyed this episode and want to hear more from Infios, check out 520: Enter the New Era of Supply Chain Management, with Infios. Check out our other podcasts HERE.

New Books Network
Sam Illingworth and Rachel Forsyth, "GenAI in Higher Education: Redefining Teaching and Learning" (Bloomsbury, 2026)

Mar 17, 2026 · 35:47


GenAI in Higher Education: Redefining Teaching and Learning (Bloomsbury, 2026) provides practical guidance for higher education professionals looking to use Generative Artificial Intelligence (GenAI) technologies. Blending theoretical grounding with real-world examples and case studies, it gives step-by-step guidance on how to evaluate, select, and implement GenAI technologies in teaching, learning, assessment, and student support. It covers topics including automating administrative processes, adapting learning resources, and critiquing outputs. Each chapter includes reflective exercises and further reading lists and shows how AI can enhance accessibility, efficiency, and creativity in higher education. Alongside this, the many challenges and ethical considerations of using AI are introduced, including issues around plagiarism, quality control, and the need to establish governance protocols. Dr. Tiatemsu Longkumer, Senior Lecturer in Anthropology at Royal Thimphu College, Bhutan, researches indigenous religion and Christianity among the Nagas, Buddhism in Bhutan, and Generative AI in education. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network

This Week in Linux
340: Facebook behind Age Verify laws?, CachyOS, EndeavourOS, Lutris Gen AI, & more Linux news

Mar 15, 2026 · 32:51


Video: https://youtu.be/wgqR6bt58-k
This week in Linux, we've got some new distro releases from EndeavourOS, CachyOS, and AlmaLinux. Then we cover some really impressive investigative work related to the age verification law nonsense that is happening, and we've got a bit of drama to cover this episode with the Lutris game manager and what some are calling AI slop. All of this and more on This Week in Linux, the weekly news show that keeps you up to date with what's going on in the Linux and open source world. Now let's jump right into your source for Linux GNews!
Download as MP3 | Support the Show: Become a Patron = tuxdigital.com/membership | Store = tuxdigital.com/store
Chapters:
00:00 Intro
00:38 Help Support TWIL & Housekeeping
01:02 Lutris "AI Slop" Drama on Generative AI
05:39 Age Verification Research
12:40 AlmaLinux and NVIDIA Streamline Driver and CUDA Updates
14:31 CachyOS 260308 Released
19:03 EndeavourOS Titan Released
23:54 Ageless Linux Released
28:29 Firefox Redesign Coming
31:10 Outro
Links:
Help Support TWIL & Housekeeping: https://patreon.com/tuxdigital
React to LTT: https://youtu.be/3yR_bOMQt9s
Lutris "AI Slop" Drama on Generative AI: https://github.com/lutris/lutris/issues/6506 https://github.com/lutris/lutris/issues/6506#issuecomment-3976118573 https://github.com/lutris/lutris/discussions/6530#discussioncomment-16107836 https://www.gamingonlinux.com/2026/03/lutris-now-being-built-with-claude-ai-developer-decides-to-hide-it-after-backlash/
Age Verification Research: https://tboteproject.com/ https://www.reddit.com/r/linux/comments/1rshc1f/comment/oac26yg/?context=1 https://github.com/upper-up/meta-lobbying-and-other-findings
AlmaLinux and NVIDIA Streamline Driver and CUDA Updates: https://almalinux.org/blog/2026-03-09-almalinux-and-nvidia-streamline-driver-and-cuda-updates/
CachyOS 260308 Released: https://cachyos.org/blog/2603-march-release/ https://www.gamingonlinux.com/2026/03/cachyos-update-brings-improvements-for-pc-gaming-handhelds-the-installer-and-more/ https://www.phoronix.com/news/CachyOS-March-2026 https://boilingsteam.com/cachy-os-is-now-the-most-popular-distro-on-proton-db/
EndeavourOS Titan Released: https://endeavouros.com/news/whats-new-in-endeavouros-titan-release/ https://9to5linux.com/endeavouros-titan-released-with-linux-kernel-6-19-and-kde-plasma-6-6 https://linuxiac.com/endeavouros-titan-launches-with-linux-6-19/
Ageless Linux Released: https://agelesslinux.org/index.html
Firefox Redesign Coming: https://www.omgubuntu.co.uk/2026/03/firefox-nova-redesign https://itsfoss.com/news/firefox-nova-leak/ https://www.neowin.net/news/mozilla-is-working-on-a-big-firefox-redesign-here-is-what-it-looks-like/
Support the show: https://tuxdigital.com/membership https://store.tuxdigital.com/

More or Less with the Morins and the Lessins
Anthropic's Bet on Coding Is Working (OpenAI Shopping Pivot, A16Z's Top 50 List, $1B Tennis Channel)

Mar 13, 2026 · 58:42


It's an AI-heavy episode with real stakes: Jessica digs into OpenAI's evolving approach to shopping and why "closing the loop" on commerce could be the proving ground for consumer monetization. The group spars over charts: OpenAI vs. Anthropic annualized revenue, what "slope" investors actually care about, and whether Anthropic's developer-first strategy (code, tokens, and high ARPU) is a smarter path than consumer mindshare. Sam argues that "intelligence" is heading toward a global, frictionless commodity market (bad for margins, great for usage) and introduces the idea of "dark pools" (proprietary access, data, and relationships) as the only durable moat. Dave counters with the more optimistic take: AI is collapsing the line between "consumer" and "developer," turning everyone into a builder and launching a new creative medium (with examples spanning from software to film). Brit adds fuel with "nano-targeted" commerce and a tour through A16Z's Top 50 GenAI web products list, highlighting both mainstream shifts and the internet's more unexpected categories. Finally, a truly out-of-left-field deal pitch from Jess: should someone buy the Tennis Channel for ~$1B? Plus a rapid-fire pop culture close (Kelce's return, Oscars bets, and what everyone's watching) before Sam heads back to the sauna.
Chapters:
0:00 Intro & Sam's Sauna Hat
1:33 First-Ever MOL Podcast Ad
3:54 ChatGPT's Shopping Pivot
7:19 The Chart: OpenAI vs Anthropic Revenue
11:52 The Slope: Linear or Super Linear?
16:10 Commerce Is Bad. Attention Is Good.
19:04 AI Is Turning Everyone Into a Builder
20:15 "$1B Raised, $900M Spent on Inference"
23:17 AI Is Worse Than the Cable Business
35:58 Dark Pools: Death of the Open Marketplace
40:45 The P50 Problem: What Happens to Average People?
42:38 "Software Is Totally Commoditized"
45:43 Brit's Bot Corner: Anime Husband Chatbots
50:44 Should You Buy the Tennis Channel for $1B?
54:22 Pop Culture Corner
We're also on:
X: https://twitter.com/moreorlesspod
Instagram: https://instagram.com/moreorless
Spotify: https://podcasters.spotify.com/pod/show/moreorlesspod
On-demand reactions powered by AI: https://molchat.ai/
Connect with us here:
1) Sam Lessin: https://x.com/lessin
2) Dave Morin: https://x.com/davemorin
3) Jessica Lessin: https://x.com/Jessicalessin
4) Brit Morin: https://x.com/brit

Cloud Realities
RR004: The trust gap with Dr. Tim Currie, Author

Mar 12, 2026 · 57:44


Realities Remixed, formerly known as Cloud Realities, launches a new season exploring the intersection of people, culture, industry, and tech. After years of remote-first work built on swift trust, companies are asking a harder question: what does an organization really stand for when people rarely show up together? As AI accelerates change, leaders are rethinking presence, team design, and collaboration to fuel trust, innovation, and growth. This week, Dave, Esmee, and Rob are joined by Dr. Tim Currie, disruptor, author, innovator, and advisor, to examine transformation versus trust, the role of AI, and whether organisations can truly build culture without deeper human connection.
TLDR:
00:42 Introduction
01:10 Hang out: New film releases
07:17 Dig in: The trust gap in remote work
17:57 Conversation with Dr. Tim Currie
54:07 The Wizard of Oz at the Sphere in Las Vegas and staying connected
Guest: Dr. Tim Currie: https://www.linkedin.com/in/dr-tim-currie-37756a/ Book, Swift Trust: https://swifttrustbook.com/
Hosts: Dave Chapman: https://www.linkedin.com/in/chapmandr/ Esmee van de Giessen: https://www.linkedin.com/in/esmeevandegiessen/ Rob Kernahan: https://www.linkedin.com/in/rob-kernahan/
Production: Marcel van der Burg: https://www.linkedin.com/in/marcel-vd-burg/ Dave Chapman: https://www.linkedin.com/in/chapmandr/
Sound: Ben Corbett: https://www.linkedin.com/in/ben-corbett-3b6a11135/ Louis Corbett: https://www.linkedin.com/in/louis-corbett-087250264/
'Realities Remixed' is an original podcast from Capgemini.

The Watson Weekly - Your Essential eCommerce Digest
From 50% to 95% AI Adoption: How Mirakl's Chief Data & AI Officer Is Disrupting Enterprise Commerce

Mar 11, 2026 · 40:28


What does it actually take to go from AI experimentation to real business impact? In this episode of the Watson Weekly, Rick Watson sits down with Anne-Claire Baschet, Chief Data and AI Officer at Mirakl, the enterprise marketplace platform powering some of the world's largest retail brands. Anne-Claire shares how she went from a passion for sci-fi and mathematics to leading AI transformation at scale, spanning French national rail, a refurbished car marketplace IPO, and now a bold mandate at Mirakl: disrupt the business responsibly with AI.
In this conversation, you'll learn:
Why 95% of AI projects fail to deliver ROI, and why it's not the technology's fault
How Mirakl jumped from 50% to 95% Gen AI adoption by eliminating the "copy and paste game"
The self-driving car framework (L0 → L1 → L2) and how to use it to assess your own SaaS AI journey
How Mirakl's Catalog Transformer reduced categorization errors by 50% and cut onboarding time from 15 days to under 2 hours
Why solving the right problem matters more than solving a problem the right way
What agent commerce means for the future of retail, and how Mirakl Nexus is preparing for it
Whether you're a product leader, an AI executive, or building B2B SaaS, this episode is packed with hard-won lessons on strategy, adoption, and shipping AI that actually works.

Redefiners
Leadership Lounge: How GenAI Can Elevate—And Expose—Today's Leaders

Mar 11, 2026 · 19:01


Generative AI is reshaping how leaders think, decide, and communicate. Used thoughtfully, it can sharpen strategic thinking and accelerate decision-making. But when GenAI output replaces genuine insight, it can expose leaders who can't defend their thinking under pressure. In this episode of Leadership Lounge, Emma Combe sits down with Amy Scissons, Sean Dineen, and Fawad Bajwa to explore how senior leaders can harness GenAI's potential while avoiding its pitfalls. They discuss: How C-suite leaders are using GenAI in their work The growing concern about "workslop"—GenAI-generated output that lacks real insight The critical skills needed to pressure-test GenAI output and avoid shallow thinking Why transparency and accountability matter more as GenAI becomes embedded in workflows “Leaders carry a lot of wisdom. They need to feed that wisdom into AI and educate it. Context is king.” Sean Dineen, Leadership Advisor, Russell Reynolds Associates Four things you'll learn from this episode: 1. GenAI can elevate strategic thinking: senior leaders are using GenAI as a sparring partner to pressure-test decisions, create board personas, and refine their perspectives before high-stakes moments. 2. Polish doesn't equal depth: GenAI-generated output can look impressive, but it can also lack substance. Leaders who can't defend their thinking under scrutiny will be exposed. 3. Critical thinking is your competitive advantage: the ability to interrogate GenAI output, provide context, and apply wisdom separates leaders who use GenAI effectively from those who outsource their judgment. 4. Transparency and accountability are non-negotiable: leaders must own their outputs, regardless of how much GenAI contributed to the process. In this episode, we will cover: (02:00) Using GenAI as a sparring partner and creating board personas to pressure-test strategic thinking. 
(04:39) Why senior leaders should experiment with GenAI tools to build confidence before trusting them with critical decisions. (09:05) The personal risk for leaders who rely on seemingly polished GenAI output without developing their own point of view. (09:50) How GenAI can create false flattery and why leaders need to ask it to be critical. (10:50) Why credibility is a leader's currency, and how shallow GenAI-generated thinking can erode trust. (13:13) Why critical thinking, built through struggle and learning from mistakes, is the most essential leadership skill in a GenAI world. (16:20) The importance of transparency and accountability when using GenAI tools. (17:45) How GenAI output reflects your approach: clear and thoughtful inputs amplify clarity; rushed inputs amplify shallowness.
A closer look at the research from this episode:
Boston Consulting Group, AI at Work 2025: Momentum Builds, but Gaps Remain
Russell Reynolds Associates, Redefiners Season 5, Ep. 2: AI or Die: A Conversation with Coveo Chairman and CEO Louis Têtu
Harvard Business Review, AI-Generated "Workslop" Is Destroying Productivity
Russell Reynolds Associates, Global Leadership Monitor
Russell Reynolds Associates, Redefiners Season 3, Ep. 18: A Front Row Seat to the AI Revolution with Microsoft Vice Chair and President Brad Smith, Part 2

Tech Disruptors
Reddit COO on Gen AI Models, Ads Platform

Mar 11, 2026 · 40:28


Reddit's Chief Operating Officer Jen Wong discusses the impact of gen AI models on its platform and how the company is positioning itself in the era of chatbots and LLMs. Wong sits down with Bloomberg Intelligence's Global Head of Technology Research Mandeep Singh to discuss the company's ads business and how it plans to leverage LLM search to boost engagement. All the metrics referenced in the episode are as of December 2025.

AI Tool Report Live
How Coursera Is Reskilling 7,000 Companies on AI — From the VP Leading It (Anthony Salcito)

Mar 11, 2026 · 54:54


How Coursera's VP of Enterprise Is Reskilling 7,000+ Organizations with AI: Anthony Salcito on the 234% GenAI Enrollment Surge, Verified Skills Paths, and the Human Side of AI Transformation.
Anthony Salcito is the Vice President of Enterprise at Coursera, where he leads a $239 million enterprise business partnering with over 7,000 organizations globally. In this episode, Anthony breaks down why GenAI enrollments on Coursera have surged 234% year over year, why 84% of leaders plan to increase AI investment while only 38% say their teams are ready, and what it actually takes to build AI skills that stick inside an organization. From his 20+ years leading Microsoft's global education efforts to his work at Nerdy and Varsity Tutors, Anthony shares his framework for human-first AI transformation. He explains how Coursera is using AI-powered coaching, role play simulations, verified skills paths, and Course Builder to close the enterprise AI skills gap, and why critical thinking, not just prompt engineering, is the skill that matters most.
Key Topics Covered:
The 234% year-over-year surge in GenAI enrollments on Coursera and what is driving global demand
Why 84% of leaders plan to increase AI investment but only 38% say their teams are ready
Coursera's verified skills paths and how they provide stackable, demonstrable AI credentials
The role of AI-powered Coach in improving course completion: 94% report improved experience, 9.5% higher quiz pass rate
How Course Builder lets enterprises customize world-class AI content from Google, Anthropic, and Microsoft for their specific business context
Why critical thinking enrollments grew 185% alongside technical AI skills
The four phases of technology adoption: displacement fear, skills erosion, complacency, and true transformation
How gamification and role play simulations make enterprise AI learning stick
Coursera's integration with ChatGPT and the future of learning in the flow of work
Why the shift from "4 years for 40 years" to "40 for 4" demands lifelong micro-credentialing
Episode Timestamps:
00:00 - Introduction and Anthony Salcito's background
01:42 - Growing up in the Bronx and how technology became a catalyst
04:10 - Teaching Girl Scouts Visual Basic in 1995 and the education spark
06:18 - The through line from Microsoft to Nerdy to Coursera Enterprise
08:24 - Walking into Coursera's $239M enterprise business and what surprised him
11:22 - 234% GenAI enrollment growth and 15 enrollments per minute
13:57 - Verified skills paths and proving AI competency beyond course completions
16:19 - Why critical thinking grew 185% and how schools need to change
20:41 - Hard skills vs. soft skills and the competency-based education gap
23:58 - What makes AI learning stick: personalization, mixed modality, and Coach
27:40 - Coach results: 94% improved experience and the power of gamification
31:55 - Live role play: pitching AI reskilling to a 1,000-person construction company
36:24 - The four phases of technology adoption and why complacency is the biggest threat
40:25 - Human-first AI transformation and why people-centric companies win
43:39 - How Coursera keeps up with fast-moving AI content creators
46:20 - The 3-5 year vision: micro-credentials, learning in the flow of work, and ChatGPT integration
50:55 - Why Anthony does what he does
About Anthony Salcito:
Anthony Salcito is the Vice President of Enterprise at Coursera, where he leads the company's enterprise business serving over 7,000 organizations worldwide. Before joining Coursera, Anthony spent 20+ years at Microsoft leading global education efforts, visiting over 80 countries and nearly 3,000 classrooms. He also served in leadership roles at Nerdy and Varsity Tutors and chairs the nonprofit Network for Teaching Entrepreneurship.

a16z
The Top 100 Gen AI Consumer Apps

Mar 10, 2026 · 40:49


Anish Acharya speaks with Olivia Moore about the latest edition of the a16z Top 100 AI Apps report. They cover why ChatGPT is still 30 times bigger than Claude on web, how the three major platforms are specializing for different users, what global adoption data reveals about cultural attitudes toward AI, and why agents, memory, and voice are about to change everything.
Resources: Follow Anish Acharya on X: https://twitter.com/illscience Follow Olivia Moore on X: https://twitter.com/omooretweets
Stay Updated: Find a16z on YouTube, X, and LinkedIn; listen to the a16z Show on Spotify and Apple Podcasts; follow our host: https://twitter.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Fixing Healthcare Podcast
FHC #207: Three major healthcare threats GenAI can help solve

Mar 10, 2026 · 40:13


In this Diving Deep episode, Dr. Robert Pearl and Jeremy Corr return to a question listeners have been asking for months: What role will generative AI realistically play in American healthcare? Dr. Pearl opens the discussion around three urgent threats that, if ignored, may soon become too large and too expensive to solve: the affordability cliff, the chronic disease crisis, and the risk of training doctors for the wrong future. This examination offers a stark warning about healthcare's lack of flexibility. Unlike most industries, medicine cannot quickly reconfigure its workforce, adopt new care models, or cut costs without years of delay. That rigidity, Pearl argues, is what makes the current moment so dangerous. By the time healthcare leaders respond to major problems, those problems have often already deepened into crises. The episode's second half explores whether generative AI could help avert that future. Pearl argues that the technology is already capable of improving chronic disease management, reducing medical errors, and extending care into patients' homes. The larger barrier is no longer technical but cultural. To illustrate that divide, Pearl uses HBO's hit show The Pitt to examine how medicine still frames AI as either a helpful tool or an existential threat rather than what it could be: a valuable clinical partner. He credits the show for capturing physicians' skepticism and enthusiasm but argues that it misses the more important question: not whether AI is perfect, but whether it performs better than clinicians working alone in a system already riddled with error. Looking further ahead, Pearl argues that when it comes to GenAI taking on clinical tasks once exclusive to humans, the Rubicon has already been crossed. Major health systems are beginning to use generative AI for clinical intake and treatment planning. Large technology companies are building patient-facing health tools tied to personal medical data.
And states such as Utah are already testing whether AI can safely handle parts of chronic disease care without direct physician oversight. Taken together, these developments point toward a new future for medicine. Primary care physicians may spend less time on routine algorithmic tasks and more time on complex patients. Specialists may become more procedural as outpatient evaluation shifts. And health systems that want to benefit from these changes will need to move away from fee-for-service and toward value-based care. For more on these developments, tune into this month's episode and check out the links below. Helpful links Three Healthcare Threats That Will Soon Become Too Big To Solve (Forbes) What The Pitt Gets Right And Wrong About Generative AI In Medicine (Forbes) GenAI Will Replace Much Of What Clinicians Do — It's Already Happening (Forbes) Monthly Musings on American Healthcare (RobertPearlMD.com) * * * Dr. Robert Pearl is the author of “ChatGPT, MD: How AI-Empowered Patients & Doctors Can Take Back Control of American Medicine.” Fixing Healthcare is a co-production of Dr. Robert Pearl and Jeremy Corr. Subscribe to the show via Apple, Spotify or wherever you find podcasts. Join the conversation or suggest a guest by following the show on Twitter and LinkedIn. The post FHC #207: Three major healthcare threats GenAI can help solve appeared first on Fixing Healthcare.

Gartner ThinkCast
No APIs, No AI: Organizing Software Engineering for Today's AI Reality

Mar 10, 2026 · 17:55


How should software engineering evolve for the age of AI? As organizations rush to weave GenAI and agentic AI into every product and workflow, engineering teams face a new reality: you can't scale AI without redesigning how software gets built. In this episode, Gartner experts Manjunath Bhat, Akis Sklavounakis, and Shameen Pillai break down what that redesign actually looks like. They'll explore the team structures that help scale GenAI to repeatable delivery, the platform patterns that reduce cognitive load and drive value, and the API strategies that AI implementation can't function without.
You'll learn:
Why GenAI efforts fail without the right team structures
How platform engineering reduces complexity and accelerates delivery
Why APIs are the backbone of GenAI and agentic AI
The five-dimension API maturity model and where to focus first
The tools and next steps to scale AI safely and effectively
Dig deeper: Learn more with Gartner for Software Engineering Leaders. See why Gartner is the world authority on AI. Become a client to try AskGartner.

Wise Decision Maker Show
How to Build a Gen AI Portal Employees Will Actually Use

Mar 10, 2026 · 4:24


A well-designed internal AI portal empowers employees to confidently adopt Gen AI, easing anxiety, boosting skills, and turning technology into a collaborative tool that drives both personal growth and organizational innovation. That's the key takeaway of this episode of the Wise Decision Maker Show, which talks about how to build a Gen AI portal employees will actually use. This article forms the basis for this episode: https://disasteravoidanceexperts.com/how-to-build-a-gen-ai-portal-employees-will-actually-use/

That Tech Pod
The Shiny Object Problem: Why AI Isn't Fixing IT Problems with Rob Calvert

Mar 10, 2026 · 21:39


In this episode of That Tech Pod, Kevin and Laura sit down with IT entrepreneur Rob Calvert, founder of Second Son Consulting and a longtime leader in the Apple enterprise ecosystem. After being laid off in the early 2000s, Rob built his consulting firm from a home office into the largest member of the Apple Consultants Network in Los Angeles and one of the top firms in the country. Drawing on more than 25 years advising companies across dozens of industries, he shares a grounded look at what actually makes technology succeed or fail inside real organizations. The conversation even opens with an unexpected detour into "Punch the monkey," a viral zoo story that sparks a debate about how easily people question or misread what they see online in the age of AI (Laura swears this is a real monkey, while Kevin thinks it's GenAI to sell toys). From there, the conversation explores why most people only notice IT when something breaks and how that mindset leads to bad leadership decisions. Rob argues many "tech problems" are really culture and workflow problems, pointing to common mistakes like letting experimental tools quietly become production systems or constantly chasing new platforms without fully implementing the ones already in place. The result is wasted budgets, burned-out IT teams, and systems that drift away from how people actually work. They also get into the Mac vs. PC debate in the enterprise, the subtle ways companies waste millions in IT spending, and the gap between AI hype and real business impact. Rob says many small and mid-sized companies are spending a lot of time evaluating AI tools but seeing very little return so far, while larger organizations may eventually benefit through heavy customization. At the end of the episode, Rob finally agrees to go look up Punch the monkey.

The Cloudcast
Understanding NeoClouds with Crusoe

Mar 8, 2026 · 27:40


Erwan Menard, SVP Product Management @Crusoe, talks about neoclouds.
SHOW: 1008
SHOW TRANSCRIPT: The Reasoning Show #1008 Transcript
SHOW VIDEO:
SPONSORS: VENTION - Ready for expert developers who actually deliver? Visit ventionteams.com
SHOW NOTES:
Topic 1 - Welcome to the show. Tell us a bit about your background, and what you focus on now at Crusoe.
Topic 2 - There has obviously been a lot of coverage of AI data center buildouts all over the world for the last few years. Tell us about Crusoe, and your approach to providing "neocloud" services.
Topic 3 - What are the biggest challenges facing Crusoe today and in the immediate future: is it technology, energy, financing for expansions, etc.?
Topic 4 - Crusoe started as a bitcoin-focused company and has evolved to more of a GenAI focus. What types of architectural changes did you have to make for this new type of workload? And how do those impact the quality of the services your customers expect from Crusoe?
Topic 5 - Is your focus more on environments to enable model training and customization, or more on inference for customer-facing applications?
Topic 6 - A lot has changed in AI in the last couple of years. What has changed the most, and what are you expecting to change the most over the next couple of years?
Topic 7 - Sovereign AI and Private AI have become much bigger topics over the last 12-18 months, and we'd expect that to grow. What unique things is Crusoe doing to adapt to these changing requirements from customers?
FEEDBACK? Email: show @ reasoning dot show Bluesky: @reasoningshow.bsky.social Twitter/X: @ReasoningShow Instagram: @reasoningshow TikTok: @reasoningshow

The Tech Blog Writer Podcast
d-Matrix - Ultra-low Latency Batched Inference for Gen AI

The Tech Blog Writer Podcast

Play Episode Listen Later Mar 7, 2026 26:28


What happens when the real bottleneck in artificial intelligence is no longer training models, but actually running them at scale? In this episode of Tech Talks Daily, I sit down with Satyam Srivastava from d-Matrix to explore a shift that is quietly reshaping the entire AI infrastructure landscape. While much of the early AI race focused on training ever larger models, the next phase of AI adoption is increasingly defined by inference. That is the moment when trained models are deployed and used to generate real-world results millions of times a day. Satyam brings a unique perspective shaped by years of experience in signal processing, machine learning, and hardware architecture, including time spent at NVIDIA and Intel working on graphics, media technologies, and AI systems. Now at d-Matrix, he is helping design next-generation computing architectures focused on one of the biggest challenges facing the AI industry today: efficiently running large language models without overwhelming data centers with unsustainable power and infrastructure demands. During our conversation, we explored why the industry underestimated the infrastructure implications of inference at scale. While training large models grabs headlines, the real operational pressure often comes later when those models must serve millions of queries in real time. That shift places enormous strain on memory bandwidth, energy consumption, and data movement inside modern data centers. Satyam explains how d-Matrix identified this challenge years before generative AI exploded into the mainstream. Instead of focusing on training hardware like many AI startups at the time, the company concentrated on inference efficiency. That decision is becoming increasingly relevant as organizations begin to realize that simply adding more GPUs to data centers is not a sustainable long-term strategy. 
We also discuss the growing power constraints surrounding AI infrastructure, and why efficiency-driven design may be the only realistic path forward. With electricity supply, cooling capacity, and semiconductor availability all becoming limiting factors, the industry is being forced to rethink how AI systems are architected. Custom silicon, purpose-built accelerators, and heterogeneous computing environments are now emerging as key pieces of the puzzle. The conversation also touches on the geopolitical and economic importance of AI semiconductor leadership, and why the relationship between frontier AI labs, infrastructure providers, and chip designers is becoming increasingly strategic. As governments and companies compete to maintain technological leadership, the question of who controls the hardware powering AI may prove just as important as the models themselves. Looking ahead, Satyam shares his perspective on how the role of engineers will evolve as AI infrastructure becomes more specialized and energy-aware. Foundational engineering skills remain essential, but the next generation of engineers will also need to think in terms of entire systems, combining software, hardware, and AI tools to build more efficient computing environments. As AI continues to move from research labs into everyday products and services, are organizations prepared for the infrastructure shift that comes with an inference-driven future? And could efficiency, rather than raw computing power, become the defining metric of the next phase of the AI race?

Thoughts on the Market
AI's Tangible Wins and Disruption

Thoughts on the Market

Play Episode Listen Later Mar 6, 2026 12:46


Live from Morgan Stanley's TMT conference, our panel breaks down where AI is already delivering real returns, and where rapid advances are raising new risks. Read more insights from Morgan Stanley.

----- Transcript -----

Michelle Weaver: Welcome to Thoughts on the Market. I'm Michelle Weaver, U.S. Thematic and Equity Strategist here at Morgan Stanley. Today we've got a special episode on AI adoption, and this is the first in a two-part conversation live from our Technology, Media and Telecom conference. It's Thursday, March 5th at 11am in San Francisco. We're really excited to be here with all of you, taping live. And we've got on stage with me Stephen Byrd, our Global Head of Thematic and Sustainability Research; Josh Baer, Software Analyst; and Lindsay Tyler, TMT Credit Research Analyst.

So, Stephen, I want to start with you, pretty broad, pretty high level. We recently published our fifth AI Mapping Survey that identifies how different companies are exposed to the broad AI theme. Can you share with us some insights from that piece, and how stocks with this AI exposure are performing?

Stephen Byrd: Yeah, it's interesting. We've been doing this survey now, thanks to you, Michelle, and your excellent work, for quite a while, and every six months it is pretty telling to see the progression. I would say a few things got my attention from our most recent mapping. The number of companies quantifying the adoption benefits continues to go up quite a bit, and to me that feels like it's going to be table stakes very soon: in every industry you see two or three companies really laying out quite specifically what they expect to be able to do with AI, and laying out the math. I think that's going to pull all the other companies to follow suit. So, we're seeing that in a big way. We do see adopters with real, tangible benefits performing well.

But a new thing that we're seeing now in the market, of course, is concern that in some cases adoption can lead to dramatic deflation, disruption, et cetera. That's coming up as well, so we're seeing greater concerns around disruption too. But broadly, I'd say a proliferation of adoption, with that universe of companies continuing to grow, and increases in quantification of the benefits. So, that is good. What's really surprised me, though, is how quickly the narrative among investors has moved from those benefits, which we've talked about, into flipping the toggle all negative, which I know some of our analysts have to deal with every day. The mapping work suggests significant benefits, but the market is fast-forwarding to very powerful AI that is very disruptive and deflationary. And that's been a surprise to me.

Michelle Weaver: Mm-hmm. Josh, I want to bring software into this. Your team has been arguing that AI is actually good for software, and that you really need the application layer to enable other companies to adopt AI. Can you tell us a little bit about how much GenAI could add to the broader enterprise software market? And how are you thinking about monetization these days?

Josh Baer: Of course. I think the best starting place is a reminder that AI is software, and so we see software as a TAM expander. In many ways, even though this is extremely exciting innovation, it's following past innovation trends, where first you see value and market cap accrue to semiconductors, then hardware and devices, and then eventually software and services. And we do think that that absolutely will occur, just given the roughly $3 trillion in infrastructure investment into data centers and GPUs. There's got to be an application layer that brings all of these productivity and efficiency gains to enterprises, and advanced capabilities to consumers as well. And so we see AI more as an evolution for software than a revolution.
An evolution and expansion of capabilities. LLMs and diffusion engines absolutely unlocked all of these new features of what software can do, but incumbents will play a key role in this unlock. And our CIO surveys really support that. Quarterly, we ask chief information officers about their spending intentions, and these application vendors we cover in the public markets are increasingly selected as the vendors that companies will go to to help deploy and apply AI and LLM technologies.

So, to answer your question, we estimate GenAI could unlock $400 billion in incremental TAM for enterprise software by 2028. And this is based on looking at the type of work able to be automated, the labor costs associated with that work, the scope of automation, and then thinking about how much of that value is typically captured by software vendors.

Michelle Weaver: And you have a bit of a different lens on AI adoption. So, what are some of the ways you're hearing software customers use these AI tools, and did anything interesting pop up at the conference?

Josh Baer: To echo what Stephen laid out, all of our software companies are using AI internally, both to drive efficiencies and to move faster. Thinking about product innovation, the incumbents are able to use all of the same coding tools and products geared to developers to move faster and more efficiently on R&D. So, they're doing more. From a sales and marketing perspective, a G&A perspective, every area of OpEx, our software companies are in a great position to deploy AI tools internally.

I think more important, speaking to this TAM and expanded opportunity, is that our companies have SKUs that they're monetizing. It might be a separate suite that incorporates advanced AI functionality. It might be a standalone offering, or it might be embedded into the core platform, because the essence of software is AI, and it's leading to better retention rates and acceleration from here.

Michelle Weaver: Mm-hmm. And Stephen, going back to you on the state of play for AI: we had the AI labs here, and we heard a lot about the developments and what's to come. So, what's your view on the trajectory for LLM advancements, and what are some of the key signposts or catalysts you're watching here?

Stephen Byrd: Yeah, this is, for me, maybe the most important takeaway of the conference: the continued non-linear improvement of LLMs, which we've been writing about for quite some time. Just to give you an example, we think many of the labs have achieved a step change up in terms of the compute that they have, in some cases 10x the amount of compute to train their LLMs. And if the scaling laws hold, and we see every sign that they will, a 10x increase in compute used to train the models results in about a doubling of the model capabilities.

Now just let that sink in for a moment. A doubling from here, in a relatively short period of time, is difficult to fully grasp; it's obviously very significant, and I think several of the LLM execs at our event sounded extremely bullish on what that will mean. A lot of that, I think, will be evident in greater agentic capabilities, but also greater creativity. About three weeks ago, three of the best physics minds in the world worked with an LLM to achieve a true breakthrough in physics, solving a problem that had never been solved before. A couple of days ago, a math team did the same thing. So, what we're seeing is breakthrough capabilities in creativity. This morning, I thought Sam spoke to incredible increases in what these models can do, which also brings risk.
You know, I think it was interesting that he spoke to the risk of misalignment, the risk of what these models are doing. But for me, that's the single biggest thing I'm thinking about, and it's going to be evident in the next several months.

Michelle Weaver: Mm-hmm.

Stephen Byrd: So, on the positive side, it leads to greater benefits from AI adoption, and to Josh's point, more and more of the economy can be addressed by AI. But I do get concerned about the risk that that kind of step change will create greater concerns about disruption and deflation. That causes me to think a lot about that dynamic.

Interestingly, we think the Chinese labs will not be able to keep pace for one reason: compute. We think the Chinese labs have everything else they need. They have the talent, the infrastructure. They certainly have the energy, the power. But they don't have the chips. If what we laid out with the American models turns out to be true, I could see a chain reaction where the Chinese government pushes the Trump administration for full transfer of the best technology to China, and China could use their rare earth trade position to ensure that. So, that's the chain reaction I've been thinking about.

Michelle Weaver: Mm-hmm. So, let's think about bottlenecks in the U.S. Power is still one of the main bottlenecks, and we had several of the solutions providers here at the conference. So, what are you thinking in terms of the size of the power bottleneck in the U.S., and how are we going to fix it?

Stephen Byrd: Yeah, absolutely. I am bullish on the companies that can de-bottleneck power, not just in the U.S. but in a few other places. Let's go through the math in terms of the problem we face, and then the solution. We have this very cool – it is cool if you're a nerd – power model that starts at the chip level, from our semiconductor teams, and from that we build a global power demand model for data centers. We then apply that to the U.S.

Through 2028, we need about 74 gigawatts of data centers, both AI and non-AI, to be built in the United States. I don't think we'll be able to achieve that, for lots of reasons. But starting from that 74, we have about 10 gigawatts that have been recently built or are under construction, and 15 gigawatts of incremental grid access. After those two, we have to go to unconventional solutions, meaning typically off-grid solutions: over 40 gigawatts of them. That will be repurposing Bitcoin sites, which could be 10 to 15 gigawatts; that'll be big. Renewable energy and fuel cells will be part of the solution. Gas turbines will be a big part of the solution. Co-locating at a few nuclear plants, though I'm less bullish than I used to be on that. But when we net all that out, we think the U.S. is likely to be 10 to 20 percent short of the data center capacity that will be needed.

It's not just a power grid access issue, though that's a big one. Labor is now showing up as a huge issue. Many of the companies I speak to that are trying to develop data centers struggle with availability of labor, electricians being one very tangible example. In the U.S. we need hundreds of thousands of additional electricians. So, for any of your children, like mine, thinking about careers, you'd be surprised at the amount of money people are making in the infrastructure business. It does feel like a labor shift is going to have to happen, but it's going to take years.

In that context, we had a number of the Bitcoin companies at our event here, and the economics of turning a Bitcoin site into hosting a data center are extremely attractive. I mean, extremely attractive. To give you a sense of that: before this opportunity presented itself to these Bitcoin players, those stocks tended to trade at an enterprise value per watt of about $1 to $2 a watt. Then we started to see these deals in which the Bitcoin players build a data center and lease it to hyperscalers. Those deals – it depends a lot on the deal – have created between $10 and $18 a watt of value. Let me repeat that: 10 to 18, relative to where these stocks were at 1 to 2. Now many of these stocks have rerated, but not all of them, and there's still quite a bit of upside. And what we've noticed is that the economics the hyperscalers are paying are trending up and up, because of this power shortage we're dealing with. So, a lot of exciting opportunities remain in the power space.

Michelle Weaver: Great. Well, I think that's a good place to wrap this first part of our conversation around AI adoption and the state of play. We'll be back again tomorrow with Part Two, looking at financing and risks. To our panelists, thank you for talking with me. And to our audience, thanks for listening. If you enjoy Thoughts on the Market, please leave us a review wherever you listen, and share the podcast with a friend or colleague today.
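The back-of-envelope arithmetic from the conversation above can be sketched in a few lines (the gigawatt and dollar-per-watt figures are the round numbers quoted by the speaker, not a model of ours):

```python
# Back-of-envelope check of the U.S. data-center power gap and the
# Bitcoin-site conversion economics quoted in the episode above.
# All figures are the speaker's round numbers, not a forecast.

needed_gw = 74        # data-center capacity needed through 2028
built_gw = 10         # recently built or under construction
grid_access_gw = 15   # incremental grid access

# The remainder must come from "unconventional", typically off-grid, solutions.
unconventional_gw = needed_gw - built_gw - grid_access_gw
print(f"Unconventional capacity required: {unconventional_gw} GW")  # 49 GW, i.e. "over 40 gigawatts"

# Expected shortfall of 10 to 20 percent of the needed capacity:
print(f"Likely shortfall: {0.10 * needed_gw:.1f}-{0.20 * needed_gw:.1f} GW")  # 7.4-14.8 GW

# Bitcoin-site conversions: enterprise value per watt before vs. after
# hyperscaler lease deals, as quoted in the episode.
ev_before = (1, 2)    # $/watt, pre-conversion trading range
ev_after = (10, 18)   # $/watt, value created by data-center lease deals
uplift_low = ev_after[0] / ev_before[1]   # conservative case: $10 vs. $2
uplift_high = ev_after[1] / ev_before[0]  # best case: $18 vs. $1
print(f"Implied value uplift: {uplift_low:.0f}x-{uplift_high:.0f}x")  # 5x-18x
```

The 49 GW remainder and 5x-18x uplift are simple divisions of the quoted ranges; they line up with the "over 40 gigawatts" and "10 to 18 relative to 1 to 2" framing in the transcript.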

LinkedUp: Breaking Boundaries in Education
Beyond the AI Inflection Point

LinkedUp: Breaking Boundaries in Education

Play Episode Listen Later Mar 6, 2026 44:21


The rapid spread of GenAI has schools at an unprecedented inflection point. While districts may not have all the answers yet, the need to act is clear. Preparing students for a future shaped by AI requires more than reactive guardrails. It calls for thoughtful planning, clear policy, and classroom practices that put student needs, not technology, first.

That's why this week we're sitting down with Amanda Bickerstaff, CEO of AI For Education, and Sari Factor, Vice Chair and Chief Strategy Officer at Imagine Learning. They'll share insights from their joint release, Beyond the AI Inflection Point: a new report and practical toolkit built to help schools move from uncertainty to intentional, student-centered action in the age of AI.

---

ABOUT OUR GUESTS
Amanda Bickerstaff is the Co-Founder and CEO of AI for Education. A former high school biology teacher and EdTech CEO, she brings over 20 years of experience in the education sector. With a deep understanding of the challenges and opportunities that AI can offer, she is a sought-after keynote speaker and leading authority on AI and education.
Sari Factor, Vice Chair and Chief Strategy Officer of Imagine Learning, leads the company's vision to empower educators and students' potential through innovative digital-first learning solutions. A seasoned K-12 education solutions leader with over 40 years of experience, she is passionate about using technology to enhance and support teachers, and to improve student engagement and learning outcomes.

---

SUBSCRIBE TO THE SERIES: YouTube | Spotify | Apple Podcasts | YouTube Music | Overcast
FOLLOW US: Website | Facebook | Twitter | LinkedIn

POWERED BY CLASSLINK: ClassLink provides one-click single sign-on into web and Windows applications, and instant access to files at school and in the cloud. Accessible from any computer, tablet, or smartphone, ClassLink is ideal for 1:1 and Bring Your Own Device (BYOD) initiatives. Learn more at classlink.com.

Terminal Value
AI in Factories: Betting the House on Intelligence That Actually Works

Terminal Value

Play Episode Listen Later Mar 5, 2026 25:13


Renan Delonier joins me to unpack what happens when you stop talking about AI as a buzzword and start putting it into real factories with real consequences. Most conversations about artificial intelligence live in demos and slide decks. This one lives in production environments, broken trust, failed deployments, and hard-earned iteration.

Renan is the CEO and co-founder of OSS Ventures, a venture studio that co-builds software companies exclusively for factories. After visiting more than 800 industrial sites and launching 22 companies inside 3,000 factories, he's developed a structured approach to identifying operational pain and solving it with software. But this episode isn't about hype. It's about what went wrong when they "bet the house" on AI.

We walk through the painful reality of pushing generative AI into production too early: 70% error rates, broken client confidence, and discovering that prompting large models with raw data is not a strategy. It's a demo.

The breakthrough came from rethinking the problem entirely. Instead of asking AI to generate outcomes, they used it to generate code. Instead of trusting datasets, they extracted the undocumented rules living inside expert operators' heads. In one factory alone, they uncovered 550 decision rules that existed nowhere in any ERP system.

We explore why most automation fails in manufacturing, why Excel survives inside billion-dollar operations, and why the real design target in AI systems isn't the frontline user but the manager responsible for auditing and controlling the machine. This is a candid discussion about product-market fit in heavy industry, frictionless B2B offers, founder-led companies versus bureaucracies, and what it actually takes to deliver a 10x outcome in operational environments.

The lesson isn't that AI will replace humans. It's that intelligence, codified correctly, can finally scale the complexity humans have been carrying in their heads for decades.

TL;DR
AI demos are easy. Production is brutal.
Prompting models with raw data fails at scale.
GenAI's real leverage is collapsing the cost of code generation.
Most critical factory rules live only inside expert operators' heads.
ERP systems automate the simple; humans absorb the complexity.
Design AI systems for control and audit, not flash.
A 10x offer changes how conservative industries respond.

Memorable Lines
“AI is not one prompt with data.”
“Cost of code has collapsed 100x.”
“The rules weren't in the system—they were in his head.”
“Free until it works.”
“Design for the person managing the machine, not the machine itself.”

Guest
Renan Delonier — CEO & Co-Founder, OSS Ventures
Factory-focused venture builder launching AI-native software companies in industrial environments. OSS Ventures has co-founded 22 companies operating across thousands of factories worldwide.

Cloud Realities
RR003: Messaging the future with Kathleen Tandy, Meta

Cloud Realities

Play Episode Listen Later Mar 5, 2026 60:36


Realities Remixed, formerly known as Cloud Realities, launches a new season exploring the intersection of people, culture, industry and tech.

Business messaging is transforming customer engagement by enabling brands to move conversations into familiar, always-on messaging platforms. The result for customers is greater convenience, quicker resolutions, and more meaningful, personalized interactions. This week, Dave, Esmee, and Rob are joined by Kathleen Tandy, Global Director and Head of Business Messaging Marketing and WhatsApp for Business at Meta, to explore how companies are using messaging platforms to engage customers, what customers expect from these experiences, and the challenges of scaling messaging in tech.

TLDR
00:35 – Introduction
01:00 – Hang out: The new Remarkable
05:25 – Dig in: Using messaging to enhance customer experiences
20:49 – Conversation with Kathleen Tandy
55:26 – The passion for college football and championship weekend!

Guest
Kathleen Tandy: https://www.linkedin.com/in/kptandy/

Hosts
Dave Chapman: https://www.linkedin.com/in/chapmandr/
Esmee van de Giessen: https://www.linkedin.com/in/esmeevandegiessen/
Rob Kernahan: https://www.linkedin.com/in/rob-kernahan/

Production
Marcel van der Burg: https://www.linkedin.com/in/marcel-vd-burg/
Dave Chapman: https://www.linkedin.com/in/chapmandr/

Sound
Ben Corbett: https://www.linkedin.com/in/ben-corbett-3b6a11135/
Louis Corbett: https://www.linkedin.com/in/louis-corbett-087250264/

'Realities Remixed' is an original podcast from Capgemini

Trench Tech
Nouveaux maîtres de l'iA : pouvoir, paranoïa et prophéties - Guillaume Grallet

Trench Tech

Play Episode Listen Later Mar 5, 2026 63:06


Inside the minds of the masters of AI. What do the people shaping our technological future really want?

In this episode, an unfiltered conversation with Guillaume Grallet, author of the book "Pionniers - Voyage aux frontières de l'intelligence artificielle" (Grasset, 2025) and science and tech editor-in-chief at Le Point. He has met the most influential figures in global tech. He recounts what he saw, what he understood, and what worries him.

From Mark Zuckerberg to Sam Altman, by way of Dario Amodei, Yann LeCun, Jensen Huang and Kai-Fu Lee: messianic visions, sincere doubts, geopolitical strategy, the race for power. Privacy, climate, linguistic diversity, sovereignty: AI is not just a technological race, it's a cultural and political battle.

Strategic optimism or dangerous promise? Hubris or lucidity? An episode for understanding what is really at stake behind the official narratives.

Wise Decision Maker Show
Will Gen AI Drive Marketing Teams to One?

Wise Decision Maker Show

Play Episode Listen Later Mar 4, 2026 5:51


Automation and Gen AI are driving marketing teams to downsize while managing high-stakes challenges in the areas of brand ethics, cultural sensitivity, and the risk of automated missteps. That's the key takeaway of this episode of the Wise Decision Maker Show, which talks about how Gen AI can drive marketing teams to one. This article forms the basis for this episode: https://disasteravoidanceexperts.com/will-gen-ai-drive-marketing-teams-to-one/

Irish Tech News Audio Articles
Nearly Two-thirds (65%) of Employees Use Free External GenAI in Work or Pay for the Tool Themselves

Irish Tech News Audio Articles

Play Episode Listen Later Mar 4, 2026 5:13


65% of respondents use free external GenAI in work or pay for the tool themselves, Deloitte's third GenAI pulse has found. Only 35% work for an employer that pays for their external GenAI. Despite this, the number of companies encouraging GenAI use has nearly doubled compared to 2024: nearly half (46%) of respondents work for a company that encourages the use of GenAI, up from 2024, when less than one-quarter (24%) strongly agreed or agreed that their company encouraged its use. The findings come from Deloitte Ireland's third GenAI pulse survey, part of the Digital Consumer Trends report, for which 1,000 people between the ages of 18 and 75 were surveyed in Ireland.

The number of respondents who said their workplace has policies or guidance about the use of GenAI for work purposes has jumped. In 2025, just 19% of those surveyed said their company does not have a policy or guidance, whereas in 2024, 90% of employees reported a lack of guidance or policies. Talent remains the biggest challenge for embedding AI, with 84% citing skills gaps as the main barrier, according to additional Deloitte AI research published last week, which gathered the views of C-suite leaders and directors in Ireland.

Commenting on the report's findings, Lynn Guilbaud, Technology, Media & Telecommunications Leader in Deloitte Ireland, said: "Everyone has heard the expression people won't be replaced by AI, but will be replaced by people using it. This is why it's positive to see a growing number of organisations with policies and guidance around its use. This technology isn't just a tool; it's a game-changer that can revolutionise how we work, boosting efficiency, unlocking new levels of productivity and fundamentally transforming the competitiveness of organisations that embrace it.

"But this won't happen overnight. To harness AI's potential, organisations need to invest in ongoing training and support, guiding their teams every step of the way."
There is a clear gap between generations' GenAI use, despite awareness being high across age groups. The GenAI pulse survey shows more than 4 in 5 (83%) of Gen Zs and 76% of Millennials use GenAI, but this drops to just over half (57%) of Gen X and only one in three (33%) of those aged 60-75. The most common reason to use the technology is for personal purposes (75%), followed by work (42%) and education (36%). The reasons for using GenAI are consistent with those reported in 2024, although there was a 12-percentage-point increase in people using it to look up information (44% vs 56%). Searching for information, writing and editing emails, and generating ideas are the top three uses of GenAI.

Since 2023, a consistent share of GenAI users (more than one-third) believe AI always produces factually accurate responses: 35% in 2023, 34% in 2024, and 34% in 2025. This is similar to the share who believe the technology's responses are unbiased: 31% in 2023, 28% in 2024, and 32% in 2025. 64% actively use AI tools. In contrast, 4 in 5 passively engage with GenAI, including web search summaries or AI-generated content on social media: nearly two-thirds (62%) have noticed AI-generated web search summaries, 64% have noticed AI-generated content on social media, and 40% come across news articles written by AI.

Daily and weekly use of GenAI is nearly doubling year-on-year, while non-usage consistently drops:

Year   Daily   Weekly   Not used GenAI
2025   11%     21%      37%
2024   5%      13%      53%
2023   2%      7%       66%

Colm McDonnell, Head of Technology, Media and Telecommunications in Deloitte Ireland, added: "Our Deloitte survey reveals a fascinating trend of young professionals leading the charge in adopting AI, highlighting the need for tailored training that speaks to different generations and skill levels.

"While concerns around privacy and data security are valid, one way to manage these risks is by promoting the use of company-approved AI tools.
With nearly two-thirds of respondents already using free or personally paid for AI platforms, its...
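The "nearly doubling year-on-year" claim can be sanity-checked directly from the survey percentages quoted above (the growth ratios below are simple divisions of those figures, not Deloitte's own analysis):

```python
# Year-on-year change in daily and weekly GenAI use, from the survey
# percentages quoted in the article above (2023-2025).

usage = {  # year: {"daily": %, "weekly": %, "not_used": %}
    2023: {"daily": 2, "weekly": 7, "not_used": 66},
    2024: {"daily": 5, "weekly": 13, "not_used": 53},
    2025: {"daily": 11, "weekly": 21, "not_used": 37},
}

for year in (2024, 2025):
    prev, cur = usage[year - 1], usage[year]
    daily_x = cur["daily"] / prev["daily"]      # daily-use multiplier vs. prior year
    weekly_x = cur["weekly"] / prev["weekly"]   # weekly-use multiplier vs. prior year
    non_use_delta = cur["not_used"] - prev["not_used"]  # percentage-point change
    print(f"{year}: daily {daily_x:.1f}x, weekly {weekly_x:.1f}x, "
          f"non-usage {non_use_delta:+d} pts")
# 2024: daily 2.5x, weekly 1.9x, non-usage -13 pts
# 2025: daily 2.2x, weekly 1.6x, non-usage -16 pts
```

Daily use a bit more than doubles each year and weekly use grows 1.6x-1.9x, consistent with the article's framing.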

Data-Smart City Pod
How Cities Can Measure What Actually Matters

Data-Smart City Pod

Play Episode Listen Later Mar 4, 2026 16:43


What does a city government owe its residents? Host Stephen Goldsmith speaks with Eyal Feder-Levy, CEO of Zencity, to explore how GenAI is fundamentally transforming the way cities measure, understand, and respond to resident needs. For decades, performance management in government has relied on operational metrics like crime numbers, pothole repairs, and traffic flow. But what happens when the data looks good, yet residents feel less safe? When efficiency improves, but trust declines? In this episode, Feder-Levy argues that citizen satisfaction and perception should be the true North Star for city government. Using social sentiment analysis, AI-powered data agents, and real-world examples, he explores how GenAI is cutting response times, revealing hidden patterns, and closing the gap between statistics and lived experience.

Listener Survey: bit.ly/datasmartpod
Music credit: Summer-Man by Ketsa

About Data-Smart City Solutions
Data-Smart City Solutions, housed at the Bloomberg Center for Cities at Harvard University, is working to catalyze the adoption of data projects at the local government level by serving as a central resource for cities interested in this emerging field. We highlight best practices, top innovators, and promising case studies while also connecting leading industry, academic, and government officials. Our research focus is the intersection of government and data, ranging from open data and predictive analytics to civic engagement technology. We seek to promote the combination of integrated, cross-agency data with community data to better discover and preemptively address civic problems. To learn more, visit us online and follow us on LinkedIn.

Everyday AI Podcast – An AI and ChatGPT Podcast
Ep 725: Measuring AI ROI: Why you're doing it wrong and the 7 Steps to fix it (Start Here Series Vol 11)

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Mar 3, 2026 33:13


Why can't most companies show their ROI on GenAI? Because their implementation is backwards. If you're using the same digital transformation playbook you used for the social media and cloud eras, you're in trouble. On this 'Start Here Series' episode, we break down what your company is doing wrong and the 7-step process to properly calculate ROI on your AI efforts.

Measuring AI ROI: Why you're doing it wrong and the 7 Steps to fix it -- An Everyday AI Chat with Jordan Wilson

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion on LinkedIn: Thoughts on this? Join the convo on LinkedIn and connect with other AI leaders.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
Proving and Measuring AI ROI in Companies
OpenAI's GDP Benchmark: AI vs Experts
Debunking MIT's Viral Zero ROI Study
Quantitative AI ROI Data from Business Studies
Invisible Productivity: AI Savings Pocketed by Workers
Limitations of Pre-AI Job Roles and Metrics
ROI Calculation: Time Saved and Cost Reduction
Five Main Reasons AI ROI Isn't Measured
Seven-Step AI ROI Measurement Blueprint
Importance of Ongoing AI Model Retesting
AI ROI: Training, Education, and Implementation Gaps

Timestamps:
00:00 "AI ROI: Fixing the Debate"
04:50 "AI ROI Debate is Pointless"
08:03 AI Implementation Yields Positive ROI
12:13 AI Reshaping Work Structures
15:08 AI ROI Challenges
20:11 "Baseline Assessment Before AI Implementation"
23:49 "AI Operating System Discussion"
24:35 "Standardized Testing for AI Models"
28:26 "Rethink AI ROI Urgency"
31:37 "Everyday AI: Subscribe & Explore"

Keywords: measuring AI ROI, AI return on investment, generative AI ROI, GenAI ROI measurement, ROI on artificial intelligence, AI productivity, time saved with AI, cost reduction with AI, AI-driven revenue increase, AI risk avoidance, AI implementation, baseline assessment of AI, BASE framework, pre-AI baseline, digital transformation and AI, quality metrics in AI, cost per task AI, error rates AI, throughput AI, AI utilization rate, AI pilot evaluation, AI b

Start Here ▶️
Not sure where to start when it comes to AI? Start with our Start Here Series. You can listen to the first drop, Episode 691, or get free access to our Inner Circle community and all episodes: StartHereSeries.com. Also, here's a link to the entire series on a Spotify playlist.

Gartner ThinkCast
Mastering the Hype Cycle: How Cybersecurity Leaders Win With AI

Gartner ThinkCast

Play Episode Listen Later Mar 3, 2026 34:37


AI hype isn't slowing down, and for cybersecurity leaders, that's exactly the point. As organizations race to inject generative AI and AI agents into every corner of the business, CISOs face a new challenge: cutting through inflated expectations without getting left behind. In this preview of the Opening Keynote from Gartner Security & Risk Management Summit, Gartner experts Christine Lee and Leigh McMullen explain why now is the moment for cybersecurity to stop resisting the Hype Cycle, and start using it as a strategic advantage.

You'll learn:
Why AI hype is becoming a catalyst, not a distraction, for cybersecurity
How to use outcome-driven metrics to guide smarter investments
The biggest risks and realities of GenAI and AI agents
What early AI adopters in cybersecurity are doing differently
How embracing hype can strengthen resilience and unlock innovation

Dig deeper:
Explore more Gartner for CISOs insights
Attend a Gartner Cybersecurity Conference near you
See why Gartner is the world authority on AI
Become a client to try AskGartner

Create Tomorrow, The WGSN Podcast
157. AI x Human: Who's Doing the Thinking Anymore?

Create Tomorrow, The WGSN Podcast

Play Episode Listen Later Mar 3, 2026 26:25


In this Create Tomorrow episode, host Cassandra Napoli invites Adobe Firefly Enterprise's Director of Product Design Viggy Madhav to share how GenAI is transforming content creation, design processes and brand trust. Listen in to explore the evolving role of human creativity, ethical considerations and the future landscape as AI seamlessly integrates into our creative and marketing workflows.

Pearls On, Gloves Off
#89 - AT&T's New In House Law Firm

Pearls On, Gloves Off

Play Episode Listen Later Mar 3, 2026 48:05


In this episode of Pearls On, Gloves Off, Mary sits down with Bill Ryan, Senior Vice President and Chief Compliance Officer at AT&T, and Ava Guo, Assistant Vice President and Senior Legal Counsel, to talk about leading through the AI inflection point inside a global legal department. As tools evolve monthly and expectations rise, the real differentiator isn't access to technology, it's execution. Bill and Ava share how they're thinking about urgency, culture, ROI, and outside counsel alignment in a world where everyone has AI. If you're trying to separate hype from operational reality, this conversation delivers.

In this episode:
Execution is the differentiator: "Vision without execution is hallucination" - why moving from experimentation to delivery is what actually matters.
ROI has to be measurable: Beyond "time saved," how they track precision, reduced document review, hours, dollars - and build trust over time.
AI increases volume, not just efficiency: As courts, consumers, and counterparties use AI too, filings and complexity are rising.
Mindset beats mastery: Why they're hiring for curiosity and adaptability over deep tenure or tool-specific expertise.
Not everything is a GenAI problem: The continued importance of structured systems, legacy tools, and disciplined workflows.
Law firm relationships are evolving: What urgency, partnership, and value look like from the client side in an AI-accelerated market.

Join Mary's Substack Community
Follow Mary on LinkedIn
Rate and review on Apple Podcasts

The SaaS CFO
Eikona Raises $5M Seed to Use GenAI Lifecycle Marketing to Sell More to Existing Customers

The SaaS CFO

Play Episode Listen Later Mar 3, 2026 31:56


Welcome to The SaaS CFO Podcast! In this episode, Ben Murray sits down with Nir Weingarten, co-founder and CEO of Eikona, to dive into the intersection of AI, lifecycle marketing, and the SaaS startup journey. Nir Weingarten shares his fascinating background in academia, AI research, and software development, and how that technical expertise fueled the birth of Eikona—a company aimed at transforming post-acquisition marketing through generative AI. You'll hear about the early days of Eikona, from ideation with his co-founder to securing $5.1 million in seed funding and finding that crucial product-market fit. Nir Weingarten breaks down how reinforcement learning and GenAI can disrupt traditional, manual approaches to email and SMS marketing, creating real ROI for brands with large customer bases. Whether you're interested in go-to-market strategies, startup fundraising lessons, or the future of AI adoption in SaaS, this conversation is packed with insights. Tune in to discover how Eikona is helping marketers unlock new value beyond customer acquisition and what's top of mind for building the next wave of AI-powered growth. 
Show Notes:
00:00 Lifecycle Marketing Revolution Explained
05:48 Social Media AI Disrupts Marketing
07:21 Mid-Market B2C Targeting Strategy
10:33 "GenAI Startup Journey"
14:00 "Fuzzy Autonomy in Marketing"
20:05 "The Right Kind of Stubborn"
23:17 "Building a Scalable Sales Funnel"
25:28 "Client Happiness Drives Success"
27:38 "AI Founder Insights & Metrics"
31:24 "Eikona.io: AI Resources & Networking"

Links:
SaaS Fundraising Stories: https://www.thesaasnews.com/news/eikona-raises-5-million-seed-round
Nir Weingarten's LinkedIn: https://www.linkedin.com/in/nir-zvi-weingarten/
Eikona's LinkedIn: https://www.linkedin.com/company/eikonaio/
Eikona's Website: https://www.eikona.io/

To learn more about Ben check out the links below:
Subscribe to Ben's daily metrics newsletter: https://saasmetricsschool.beehiiv.com/subscribe
Subscribe to Ben's SaaS newsletter: https://mailchi.mp/df1db6bf8bca/the-saas-cfo-sign-up-landing-page
SaaS Metrics courses here: https://www.thesaasacademy.com/
Join Ben's SaaS community here: https://www.thesaasacademy.com/offers/ivNjwYDx/checkout
Follow Ben on LinkedIn: https://www.linkedin.com/in/benrmurray

Trench Tech
What the AI Prophets Aren't Telling You | On Refait La Tech - Gérald Holubowicz

Trench Tech

Play Episode Listen Later Mar 3, 2026 5:00


From Sam Altman to Peter Thiel, by way of Elon Musk and Dario Amodei, Silicon Valley's new messiahs are proclaiming the end of work, of institutions... and perhaps of democracy. What if the real issue wasn't AI, but the political project of those who design it... with our silent consent?

On Refait La Tech, Trench Tech's socio-historical column hosted by Gérald Holubowicz, in collaboration with Synth.

***** ABOUT TRENCH TECH *****
THE talkshow "Critical Minds for Ethical Tech"
Listen to us on all podcast platforms

Cloud Security Podcast by Google
EP265 Beyond Shadow IT: Unsanctioned AI Agents Don't Just Talk, They Act!

Cloud Security Podcast by Google

Play Episode Listen Later Mar 2, 2026 28:54


Guest: Alastair Paterson, CEO and co-founder @ Harmonic Security

Topics:
Harmonic Security focuses on securing generative AI in use. Can you walk us through a real, anonymized example of a data leak caused by employee AI usage that your platform has identified?
AI governance gets thrown around a lot. What does this mean in the context of Shadow AI? How should organizations be thinking about governing AI in light of upcoming AI regulations in the US and in the EU?
If we generally agree that employees are using AI tools before they are sanctioned, how can organizations control this? Network, API, endpoint?
Many organizations struggle with the "ban vs. embrace" debate for generative AI. Based on your experience, what's a compelling argument for moving from a blanket ban to a managed, secure adoption model? Can you share a success story where this approach demonstrably reduced risk?
The term "shadow AI" is often used interchangeably with "shadow IT" (but for AI-powered applications), but you've highlighted that AI is a different beast. What is the single biggest distinction between managing the risk of unsanctioned AI tools versus unsanctioned IT applications?
Looking forward, where do you see the biggest risks in the evolution of shadow AI? For instance, will the next threat be from highly specialized AI agents trained on proprietary data, or from the rapid proliferation of new, unmonitored open-source models?
Given the speed of change in this space, what's one piece of advice you'd give to a CISO today who is just beginning to get a handle on their organization's shadow AI problem?

Resources:
Video version
Harmonic Security research
Shadow AI Strikes Back: Enterprise AI Absent Oversight in the Age of Gen AI blog
Shadow Agents: A New Era of Shadow AI Risk in the Enterprise blog (RSA 2026 presentation coming!)
Spotlighting 'shadow AI': How to protect against risky AI practices blog
EP171 GenAI in the Wrong Hands: Unmasking the Threat of Malicious AI and Defending Against the Dark Side (aka "dirty bomb episode")
A Conversation with Alastair Paterson from Harmonic Security video

Inside the Network
Sanjay Beri: Playing the long game and building Netskope into a public cybersecurity powerhouse

Inside the Network

Play Episode Listen Later Mar 2, 2026 53:09 Transcription Available


In this episode of Inside the Network, we sit down with Sanjay Beri, Founder and CEO of Netskope, a company that sits at the intersection of AI, data security, and global edge networking. Over thirteen years, Sanjay scaled Netskope to 4,000+ customers (including 30% of the Fortune 100), 3,000+ employees across 30 countries, and ultimately took the company public in September 2025 in one of the year's standout cybersecurity IPOs.

Sanjay's journey is a masterclass in playing the long game. From growing up in Canada selling door-to-door with his mom, where he learned grit and resilience, to working at Microsoft during the early internet era, to becoming one of Juniper's youngest VPs running a large business, Sanjay built the rare blend of product, go-to-market, and leadership muscle it takes to build at scale.

We talk about the origins of Netskope, why it was never "just a CASB," how Sanjay recruited world-class early engineers and built a high-trust culture, and what product-market fit looked like in the first chapter of the company. He also breaks down some of Netskope's biggest bets, including building its own edge cloud instead of relying on public cloud networks, and launching AI Labs years before today's GenAI wave.

Whether you're building a cyber startup, scaling into the enterprise, or studying what it takes to go from zero to IPO, this episode is packed with hard-earned lessons on conviction, the importance of building a winning culture, and endurance.

How I Work
The AI critique system we use to improve our work

How I Work

Play Episode Listen Later Mar 1, 2026 10:30 Transcription Available


Download Inventium.ai's custom GPT instructions to create your own Personal AI Reviewer Buddy here: https://amantha-imber.kit.com/51dd2a9719

Producing high volumes of work isn't the hard part anymore. Producing high quality is. In this How I AI episode, Neo and I walk through how to use AI as a rigorous reviewer of your work – not to replace your judgment, but to sharpen it. We go beyond a basic "please review this" prompt and share a structured way to pressure test emails, documents, slide decks and analysis before they leave your desk. Neo shares the exact system he uses, which he's nicknamed Charles – a GPT designed to critique work properly, diagnose weaknesses and suggest stronger alternatives. And yes, we're giving you Charles (via the link above!).

Neo and I cover:
How to write a simple but powerful critique prompt that goes beyond surface-level polishing
What to ask AI to check for, including inaccuracies, weak support, bias, gaps, impracticality and verbosity
How to customise your review criteria for specific roles, policies or stakeholders
The quality gates Neo uses, including factual accuracy, logical soundness, completeness, relevance, clarity, structure, safety and practicality
How AI can improve its own output if you've used it to draft something in the first place
Why you should never treat a first AI response as gospel

Connect with Neo Aplin on LinkedIn and via inventium.ai, where he leads Inventium's AI training and upskilling work with organisations and teams. And if you're ready to move beyond basic prompts and start using AI as a genuine thinking partner, check out inventium.ai. We help individuals, teams and organisations turn GenAI into a real work superpower – saving 10+ hours a week and staying future ready.

My latest book The Health Habit is out now. You can order a copy here: https://www.amantha.com/the-health-habit/

Connect with me on the socials:
Linkedin (https://www.linkedin.com/in/amanthaimber)
Instagram (https://www.instagram.com/amanthai)

If you are looking for more tips to improve the way you work and live, I write a weekly newsletter where I share practical and simple to apply tips to improve your life. You can sign up for that at https://amantha-imber.ck.page/subscribe

Visit https://www.amantha.com/podcast for full show notes from all episodes. Get in touch at amantha@inventium.com.au

Credits:
Host: Amantha Imber
Sound Engineer: Martin Imber

See omnystudio.com/listener for privacy information.

Mentoring Matters
The Conversation about AI You Should Be Having with Your Graduate Students Right Now

Mentoring Matters

Play Episode Listen Later Feb 28, 2026 38:19


In this episode of Mentoring Matters, we tackle a question that's been on a lot of mentors' minds: how do we guide our graduate students in using AI without letting it replace the critical thinking skills we're trying to build? Mary gets honest about her hesitation to even bring up AI with her students, and Steph shares how she's been integrating tools like Claude and NotebookLM into her mentoring and teaching.

Together, we explore the idea of training students to be AI-assisted scientists rather than AI-dependent ones, and what it looks like to shift from content creator to creative director in your own work. We dig into the real risk of skill atrophy when students hand off tasks they haven't yet mastered, and we land on a practical gut check: if you wouldn't be comfortable telling your advisor exactly how you used AI, it's time to rethink your approach.

Whether you're already using AI in your mentoring or still figuring out where to start, this conversation will give you a framework for setting expectations, encouraging transparency, and helping your students build the AI literacy they'll need in their careers. Spoiler: it starts with just having the conversation.

For actionable tips and strategies for mentoring please check out The Graduate Mentor's Trail Map, available in paperback and ebook! If you are enjoying this podcast please leave a rating or review, which helps others find the conversation. Please share with others who would gain value from the show!

Wise Decision Maker Show
Gen AI Demands We Keep Learning or Become Obsolete

Wise Decision Maker Show

Play Episode Listen Later Feb 26, 2026 3:20


Gen AI success starts with a growth mindset and continuous upskilling. Organizations can meet AI demands by embedding learning into daily work and encouraging experimentation. That's the key take-away message of this episode of the Wise Decision Maker Show, which talks about why Gen AI demands we keep learning or become obsolete.

This article forms the basis for this episode: https://disasteravoidanceexperts.com/gen-ai-demands-we-keep-learning-or-become-obsolete/

Cloud Realities
RR002: The business - tech divide with John Koerwer, UGI Corporation

Cloud Realities

Play Episode Listen Later Feb 26, 2026 65:34


Realities Remixed, formerly known as Cloud Realities, launches a new season exploring the intersection of people, culture, industry, and tech. Energy transportation is a deeply local business, safely delivering gas and electricity, more and more from renewable sources, directly to the communities it serves. Technology and AI help make that possible by strengthening safety, bringing companies closer to customers, and enabling teams to build the future together. This week, Dave, Esmee, and Rob are joined by John Koerwer, CIO of UGI Corporation, to explore why "the business" and tech still struggle to speak the same language, and what helps close the gap.

TLDR
00:35 – Introduction
01:17 – Hang out: new toys and coffee
07:55 – Dig in: the business - tech divide
21:07 – Conversation with John Koerwer
59:40 – The amazing AI technology in The Sphere's version of The Wizard of Oz

Guest
John Koerwer: https://www.linkedin.com/in/john-koerwer-46102127/

Hosts
Dave Chapman: https://www.linkedin.com/in/chapmandr/
Esmee van de Giessen: https://www.linkedin.com/in/esmeevandegiessen/
Rob Kernahan: https://www.linkedin.com/in/rob-kernahan/

Production
Marcel van der Burg: https://www.linkedin.com/in/marcel-vd-burg/
Dave Chapman: https://www.linkedin.com/in/chapmandr/

Sound
Ben Corbett: https://www.linkedin.com/in/ben-corbett-3b6a11135/
Louis Corbett: https://www.linkedin.com/in/louis-corbett-087250264/

'Realities Remixed' is an original podcast from Capgemini

Mystery AI Hype Theater 3000
How the War Department Learned to Stop Worrying and Love AI (with Naomi Klein), 2026.02.09

Mystery AI Hype Theater 3000

Play Episode Listen Later Feb 26, 2026 52:02 Transcription Available


AI boosters and the US military are engaged in a lethal love affair. Award-winning journalist Naomi Klein joins Emily and Alex to discuss how glitchy technology supports global imperialism — and vice versa. Plus, we explore which Dr. Strangelove characters are currently running the US war machine.

Naomi Klein is a columnist for The Guardian and the international bestselling author of nine books published in over 35 languages. Her new book, End Times Fascism: And the Fight for the Living World, written with Astra Taylor, will be published in September 2026.

References:
War Department press release on GenAI.mil
Old-school racism from Anduril CEO Palmer Luckey
"How artificial intelligence is reshaping the future of war"

Previous episodes referenced:
Episode 61: Winning the Race to Hell (with Sarah Myers West and Kate Brennan)
Episode 50: Petro-Masculinity Versus the Planet (with Tamara Kneese)

Fresh AI Hell:
"Cops Forced to Explain Why AI Generated Police Report Claimed Officer Transformed Into Frog"
"Amazon outbids WA utility for one of nation's largest solar projects"
"AI data centers are forcing dirty 'peaker' power plants back into service"
"Mamdani Targets 'Unusable' AI Chatbot for Termination"

Check out future streams on Twitch. Meanwhile, send us any AI Hell you see. Our merch store is now live on the DAIR website! Find our book, The AI Con, here. Subscribe to our newsletter via Buttondown.

Follow us!
Emily Bluesky: emilymbender.bsky.social Mastodon: dair-community.social/@EmilyMBender
Alex Bluesky: alexhanna.bsky.social Mastodon: dair-community.social/@alex Twitter: @alexhanna

Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Ozzy Llinas Goodman.

Connected Social Media
Cost-Effective and Scalable Infrastructure for GenAI Workloads

Connected Social Media

Play Episode Listen Later Feb 26, 2026


Intel IT is expanding and evolving our “Citizen AI” framework—named iGPT, which enables Intel employees to use Generative AI (GenAI)...

Enterprise Podcast Network – EPN
Where Enterprise AI Is Breaking Down and the Strategic Bet CIOs Must Rethink

Enterprise Podcast Network – EPN

Play Episode Listen Later Feb 26, 2026 13:52


Shomron Jacob, an AI Strategy Expert and Technology Advisor based in Silicon Valley with over a decade of experience in enterprise AI, GenAI, and machine … Read more

Fat Guy, Jacked Guy
The "Enhanced Games" is the GenAI Bubble of Sports

Fat Guy, Jacked Guy

Play Episode Listen Later Feb 25, 2026 50:10


If you're more normal than us, you probably haven't heard of the "Enhanced Games," a new event featuring a spate of athletes who are openly using performance enhancing drugs set to kick off May 2026. This week, we're discussing the details and nuances of what the "Enhanced Games" will supposedly be and the billionaire scum who are backing its production. There's a little twist in this one that you'll probably never see coming.

You can find Fat Guy, Jacked Guy on Instagram!

AND you can support us on Patreon! There's extra content for Patreon supporters, as well as opportunities to interact with us in other ways besides listening to the podcast. We appreciate any & all help you can provide, & we hope to keep this going for a long, long time. Thank you in advance for your support and love! You are our brothers!

Yo Videogames
YoVG # 527 Goodbye Phil, Hello AI!

Yo Videogames

Play Episode Listen Later Feb 25, 2026 72:53


It's like news breaks in chunks these days. Is that planned? It feels planned. In this episode we cover: Saudi Arabia buys the rest of EVO and announces plans for multiple cities worldwide to host their own. Phil Spencer has retired and his replacement is definitely a signal of what is to come. And Aussies are fun to hang with wherever you meet them. (Okay, we don't really cover that but I do mention it.)

In this episode we reference these articles a few times, and I recommend you read them if you have time because they add a ton of context:

The Verge article about the Xbox leadership shake-up (Sarah Bond's abrupt exit): https://www.theverge.com/tech/883015/microsoft-xbox-new-ceo-shakeup-notepad
Seamus Blackley says Xbox is being sunsetted: https://gamesbeat.com/what-an-xbox-founder-thinks-of-the-new-xbox-ceo-seamus-blackley-interview/

Do I think Xbox is being sunsetted? No. If it gets back on track it is a big earner, and it fits with Satya Nadella's goals, namely consumer cloud compute and AI adoption. I expect the next iteration of Xbox to have an AI assistant that you can't opt out of, and of course AI "tools" will be used in all projects moving forward. I expect it will be used as a selling point on every game. "New AI makes bots behave naturally!" "AI learns and adapts to how you play!!!!" "GenAI generated levels mean every playthrough is unique!!!!!!!!" Stuff like that.

Do I think Xbox is being brought in line with Satya Nadella's AI push? Absolutely. Do I think Asha Sharma is going to bring about a new Xbox Golden Age? No. But I do think we need to wait a year to see what she actually does. Because this year will mostly be finishing up Phil's plans. We won't know her real priorities for 6 months minimum, and I am saying a year to be generous.

Tech Law Talks
AI-enabled e-discovery: Speed to knowledge - how GenAI is reshaping litigation strategy

Tech Law Talks

Play Episode Listen Later Feb 25, 2026 21:33


Reed Smith partner Anthony Diana sits down with Dera Nevin of FTI Consulting to explore how AI-enabled e-discovery is transforming litigation, and why the best time to adopt these tools is now. From accelerating early case assessment to revolutionizing privilege review workflows, Anthony and Dera break down where generative AI is already delivering real results, not just theoretical possibilities. They discuss how forward-thinking legal teams are training large language models to surface key documents, generate case timelines, and dramatically reduce time-to-knowledge on complex matters.

AWS for Software Companies Podcast
Ep195: Tax Compliance at Scale: How Avalara Is Rewriting the Rules with AI and AWS

AWS for Software Companies Podcast

Play Episode Listen Later Feb 24, 2026 19:20


Avalara's Chief Architect Tim Diekmann reveals how AI and agentic technology are transforming tax compliance and accuracy across 40,000 jurisdictions leveraging AWS.

Topics Include:
Avalara provides tax compliance software across North America, Europe, and beyond.
They operate between commerce and government, covering 40,000+ jurisdictions.
Services span registration, sales tax calculation, and certificate management.
Avalara was the only company keeping pace with rapid tariff changes.
AI is used to parse unstructured documents like tax notices and publications.
Intelligent mapping automates ERP integration across vastly different system configurations.
GenAI lets customers query billions of transactions using plain conversational language.
Avalara and AWS are now engaged in a promising co-selling motion.
Time-to-go-live and transaction accuracy are the key success metrics tracked.
Amazon Q was rolled out company-wide, achieving 95% developer adoption.
AI literacy is now prioritized across legal, HR, and engineering teams alike.
Agentic AI will embed Avalara directly inside customer ERP systems going forward.

Participants:
Tim Diekmann – SVP of Engineering, Chief Architect, Avalara

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/

Intellicast
Defender Gets Backup: Inside the ReDem Acquisition with Vignesh Krishnan of Rep Data

Intellicast

Play Episode Listen Later Feb 24, 2026 21:41


Welcome back to Intellicast! On today's episode, Brian Peterson is joined by a friend of the podcast and the newly named Executive Vice President of Products for Rep Data, Vignesh Krishnan. He joins the podcast to discuss Rep Data's recent acquisition of ReDem, as well as data quality and fraud. Kicking off the episode, Vignesh gives Brian the inside details on how ReDem will integrate into the Rep Data/Research Defender ecosystem and how it will expand their fraud detection capabilities beyond the pre-survey phase. As the episode progresses, Brian and Vignesh touch on a variety of topics, including:

The fraud detection ecosystem
The layered approach to fraud detection
How fraud and data quality are not the same thing
Insights from each side's research-on-research
Gen AI's increased usage by fraudsters and respondents
And much more!

If you want to know more about Rep Data's acquisition of ReDem, this is the episode for you. You can learn more about the acquisition here. You can connect with Vignesh on LinkedIn: https://www.linkedin.com/in/vignesh-krishnan-08846aa/

Thanks for tuning in! We've just launched our new whitepaper, The Fraud Detection Reality Check. Get your copy and find out why relying on a single fraud detection tool is a mistake. Download here: https://content.emi-rs.com/fraud-detection-report

Did you miss one of our webinars or want to get some of our whitepapers and reports? You can find it all on our Resources page on our website here: https://emi-rs.com/resources/

Learn more about your ad choices. Visit megaphone.fm/adchoices

ILTA
#0161: (CT) To AFA or Not to AFA, That is the Question

ILTA

Play Episode Listen Later Feb 23, 2026 26:01


For decades, people have been suggesting that the death of the billable hour was inevitable. Evan Chesler said, "[t]he billable hour makes no sense, not even for lawyers" in his 2008 Forbes article "Kill the Billable Hour." We haven't seen a lot of movement since that article, but with the explosion of GenAI in legal, there has been increased discussion and focus suggesting that AFAs will replace the billable hour. Will we be looking back on this time in 20 years and saying, "Remember when we thought the billable hour would go the way of the dodo bird?"

Moderator: @Michael Ertel - Director of Practice Innovation, Crowell & Moring

Speakers:
@Jared Applegate - Chief Legal Operations Officer, Barnes & Thornburg
@Purvi Sanghvi - Director of Strategic Pricing, Paul, Weiss, Rifkind, Wharton & Garrison LLP
@Cassie Vertovec - Chief Practice Management Officer, Loeb & Loeb LLP

Recorded on 02-23-26.

Project 38: The future of federal contracting
Generative AI's pitfalls and potential benefits in GovCon law

Project 38: The future of federal contracting

Play Episode Listen Later Feb 23, 2026 29:57


Humans in the loop are, in theory, supposed to be as much a part of all conversations surrounding the use of generative artificial intelligence tools as a way to safeguard against major mistakes. But as GovCon attorney David Timm has found out, errors showing misuse of the technology are starting to come up in bid protests and other legal rulings that show what can go wrong when relying on the tech too much. Timm, a partner at the law firm Burr & Forman, joins our Ross Wilkers for this episode to share his findings from those decisions and how they could help set some guardrails for the use of GenAI in GovCon law. Even with the problems he sees, Timm is an optimist for how the tech can remove what he calls "Entropy" from workflows and make some tasks easier.

Gen-AI Misuse in Procurement Litigation
Procurement is Not "Oready" for GenAI Misuse
Can a federal agency adopt the output of a Gen-AI bid evaluation tool?
Buying Blind: Corruption Risk and the Erosion of Oversight in Federal AI Procurement

The Daily Scoop Podcast
Federal CIO Greg Barbaccia tapped for new leadership roles at GSA

The Daily Scoop Podcast

Play Episode Listen Later Feb 20, 2026 5:01


Federal Chief Information Officer Greg Barbaccia will be adding two new titles — at least temporarily — to his work in government. The General Services Administration announced Thursday that Barbaccia will join the agency as the acting director of Technology Transformation Services. He'll be replacing Thomas Shedd, one of the few officials left at the agency who helped carry out the so-called Department of Government Efficiency's cost-cutting initiative last year. Shedd will remain at the agency as its senior advisor for fraud prevention, which the GSA said is "an area of increasing importance for the agency and the administration." The federal CIO was also tapped as senior advisor to the GSA administrator, the agency said. In this role, advising former private equity executive Edward Forst, Barbaccia will focus on "emerging technologies, best practices in digital delivery, and cross-government collaboration."

The Pentagon will adhere to existing laws and regulations associated with surveillance, security and democratic processes as it fast-tracks the military's frontier AI adoption, but it won't permit companies supplying the technology to determine its rules for operation, Undersecretary of Defense for Research and Engineering Emil Michael told DefenseScoop. His comments come as the Defense Department is locked in a high-stakes dispute with Anthropic about the U.S. military's use of the startup's Claude AI model in real-world operations. During a meeting with a small group of reporters on the sidelines of the annual Microelectronics Commons summit Thursday, Michael provided updates on the department's GenAI.mil rollout and pushed for the ethics-related rift between the Pentagon and Anthropic to be resolved. "I believe and hope that they will 'cross the Rubicon' and say, 'This is common sense. The military has certain use cases. There are laws and regulations that govern how those use cases can be done. We're willing to comply with them,'" he said.
The Daily Scoop Podcast is available every Monday-Friday afternoon. If you want to hear more of the latest from Washington, subscribe to The Daily Scoop Podcast on Apple Podcasts, SoundCloud, Spotify and YouTube.

The Tech Blog Writer Podcast
From AI Pilot Purgatory To Real ROI With Bill Briggs Of Deloitte

The Tech Blog Writer Podcast

Play Episode Listen Later Feb 16, 2026 38:23


In this episode, I'm joined by Bill Briggs, CTO at Deloitte, for a straight-talking conversation about why so many organizations get stuck in what he calls "pilot purgatory," and what it takes to move from impressive demos to measurable outcomes. Bill has spent nearly three decades helping leaders translate the "what" of new technology into the "so what" and the "now what," and he brings that lens to everything from GenAI to agentic systems, core modernization, and the messy reality of technical debt.

We start with a moment of real-world context: Bill calling in from San Francisco with Super Bowl week chaos nearby, and the funny way Waymo selfies quickly turn into "oh, another Waymo" once the novelty fades. That same pattern shows up in enterprise tech, where shiny tools can grab attention fast while the harder work (data foundations, APIs, governance, and process redesign) gets pushed to the side. Bill breaks down why layering AI on top of old workflows can backfire, including the idea that you can "weaponize inefficiency" and end up paying for it twice: once in complexity and again in compute costs.

From there, we get into his "innovation flywheel" view, where progress depends on getting AI into the hands of everyday teams, building trust beyond the C-suite, and embedding guardrails into engineering pipelines so safety and discipline do not rely on wishful thinking. We also dig into technical debt with a framing I suspect will stick with a lot of listeners. Bill explains three types (malfeasance, misfeasance, and nonfeasance) and why most debt comes from understandable trade-offs, not bad intent. That leads into a practical discussion on how to prioritize modernization without falling for simplistic "cloud good, mainframe bad" narratives.
We finish with a myth-busting riff on infrastructure choices, a quick look at what he sees coming next in physical AI and robotics, and a human ending that somehow lands on Beach Boys songs and pinball machines, because tech leadership is still leadership, and leaders are still people. After hearing Bill's take, where do you think your organization is right now: measurable outcomes, success theater, or somewhere in between? What would you change first? Please share your thoughts.

Useful Links:
Connect With Bill Briggs
Deloitte Tech Trends 2026 report
Deloitte The State of AI in the Enterprise report

Everyday AI Podcast – An AI and ChatGPT Podcast
Ep 710: Context Engineering: How to Get Expert-Level Outputs From AI Chatbots

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Feb 10, 2026 37:38


How did prompt engineering die so quickly? ☠️ And what the heck does context engineering even mean? One of the trickiest things about LLMs is that they're changing daily, yet they're the engines that drive business results. But if the engine is constantly changing, then you also have to change how you drive and the roads you take. That's why we're tackling context engineering in this installment of our Start Here Series, the essential beginner's guide to understanding AI basics and growing your skills.

Context Engineering: How to Get Expert-Level Outputs From AI Chatbots -- An Everyday AI Chat with Jordan Wilson

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion on LinkedIn: Thoughts on this? Join the convo on LinkedIn and connect with other AI leaders.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
Evolution from Prompt to Context Engineering
Why Prompt Engineering Is Now Obsolete
Defining Context Engineering in AI Chatbots
Six-Part Framework for Context Engineering
Four-Layer System for Structuring AI Context
Building Reusable Context Vaults and Skills
Connecting Business Data to AI Models
Techniques to Achieve Expert-Level AI Outputs
Importance of Context Windows in Large Language Models
Context Engineering Best Practices and Scalability

Timestamps:
00:00 "Access AI Community & Tools"
03:08 "Mastering Context in AI"
07:23 "Smart Models Require Less Precision"
12:01 "Context Engineering Beats Prompt Engineering"
15:49 "AI Context: Six Key Blocks"
16:47 "Building Context for Better Results"
19:53 "AI: Training, Not Easy Button"
25:17 "Chain of Thought Prompting Decline"
29:11 "Show, Don't Tell Techniques"
32:13 "Context, Reuse, and Scalable Systems"
33:19 "AI Chatbots: Memory and Skills"

Keywords: context engineering, AI chatbots, expert-level outputs, prompt engineering, large language models, business context, AI models, custom instructions, data access, context window, prime prompt polish, reusable context vaults, skills file, memory-enabled models, ChatGPT, Claude, Google Gemini, Microsoft Copilot, connectors, apps, searchable index, business data, personalized AI, context clues, reference material, examples, procedures, evaluation rubric, chain-of-thought prompting, generative AI, nondeterministic behavior, show-don't-tell technique, few-shot examples, rubric-first technique, grading criteria, output quality, scalable AI systems

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info.)
Ready for ROI on GenAI? Go to youreverydayai.com/partner
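To make the layered-context idea concrete, here is a minimal, hypothetical Python sketch of what a reusable context block might look like: a fixed set of layers (role, business background, few-shot examples, and a grading rubric) assembled in order ahead of the actual task. The layer names, function, and ordering below are illustrative assumptions, not the episode's exact framework.

```python
def build_context(role, background, examples, rubric, task):
    """Assemble layered context into a single prompt string.

    Layers appear in a fixed order so the same role, background,
    examples, and rubric can be reused across many tasks, with only
    the final task line changing ("reusable context vault" in spirit).
    """
    # Few-shot examples: show the model what good output looks like
    # instead of only telling it ("show, don't tell").
    example_text = "\n".join(
        f"Input: {inp}\nOutput: {out}" for inp, out in examples
    )
    return (
        f"# Role\n{role}\n\n"
        f"# Business context\n{background}\n\n"
        f"# Examples\n{example_text}\n\n"
        f"# Evaluation rubric\n{rubric}\n\n"
        f"# Task\n{task}"
    )

prompt = build_context(
    role="You are a senior marketing copywriter.",
    background="We sell B2B logistics software to mid-market shippers.",
    examples=[("Write a tagline", "Ship smarter, not harder.")],
    rubric="Score 1-5 on clarity, brand voice, and brevity.",
    task="Write three subject lines for our spring webinar invite.",
)
print(prompt)
```

The point of the sketch is the separation of concerns: the stable layers are written once and versioned, while the task varies per request, which is what makes the approach scale beyond one-off prompting.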