Eye On A.I.

Eye on A.I. is a biweekly podcast, hosted by longtime New York Times correspondent Craig S. Smith. In each episode, Craig will talk to people making a difference in artificial intelligence. The podcast aims to put incremental advances into a broader context and consider the global implications of th…

Craig S. Smith


    • LATEST EPISODE: May 11, 2025
    • NEW EPISODES: weekdays
    • AVG DURATION: 45m
    • EPISODES: 256


    Latest episodes from Eye On A.I.

    #253 Ivan Shkvarun: Inside the Fight Against AI-Driven Cybercrime

    May 11, 2025 · 47:05


    This episode is brought to you by Extreme Networks, the company radically improving customer experiences with AI-powered automation for networking. Extreme is driving the convergence of AI, networking, and security to transform the way businesses connect and protect their networks, delivering faster performance, stronger security, and a seamless user experience. Visit https://www.extremenetworks.com/ to learn more. In this episode of Eye on AI, we sit down with Ivan Shkvarun, CEO of Social Links and founder of the Dark Side AI Initiative, to uncover how cybercriminals are leveraging generative AI to orchestrate fraud, deepfakes, and large-scale digital attacks—often with just a few lines of code. Ivan shares how his team is building real-time OSINT (Open Source Intelligence) tools to help governments, enterprises, and law enforcement fight back. From dark web monitoring to ethical AI frameworks, we explore what it takes to protect the digital world from the next wave of AI-powered crime. Whether you're in tech, cybersecurity, or policy—this conversation is a wake-up call. AGNTCY - Unlock agents at scale with an open Internet of Agents. Visit https://agntcy.org/ and add your support. From cybersecurity to law enforcement — discover how Social Links brings the full potential of OSINT to your team at http://bit.ly/44sytzk Stay Updated: Craig Smith on X: https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI (00:00) Preview (02:11) Meet Ivan Shkvarun & Social Links (03:41) Launching the Dark Side AI Initiative (05:16) What OSINT Actually Means Today (08:39) How Law Enforcement Traces Digital Footprints (12:50) Connecting Surface Web to Darknet (16:12) OSINT Methodology in Action (20:23) Why Most Companies Waste Their Own Data (21:09) Cybersecurity Threats Beyond the IT Department (26:25) BrightSide AI vs. DarkSide AI (30:10) Should AI-Generated Content Be Labeled? (31:26) Why We Can't “Stop” AI (35:37) Why AI-Driven Fraud Is Exploding (41:39) The Reality of Criminal Syndicates

    #252 Jeffrey Hammond: How to Build Scalable GenAI Products (AWS Strategy)

    May 9, 2025 · 53:46


    This episode is sponsored by Netsuite by Oracle, the number one cloud financial system, streamlining accounting, financial management, inventory, HR, and more.NetSuite is offering a one-of-a-kind flexible financing program.    Head to  https://netsuite.com/EYEONAI to know more. AWS partnered with Forrester Research to understand how software providers (ISVs), in particular, plan to drive profitable growth with generative AI, how they are uniquely approaching generative AI development, and the key challenges they're facing.   In this conversation with Jeffrey Hammond, Global ISV Product Strategist at AWS, he dives into the findings of the research and discusses how — particularly with AWS's help — ISVs can drive profitable growth and succeed in the gen AI gold rush.   Jeffrey helps software product management leaders leverage AWS cloud services to accelerate product delivery, create new revenue streams, reduce technical debt, and optimize operational costs.   You'll learn: Why “toil reduction” is the fastest path to GenAI ROI How AWS's GenAI Innovation Center helps companies cut costs and ship faster What most ISVs get wrong about trust, security, and customer communication The secret to scalable AI product pricing—and what Canva got right Why agentic workflows and federated models are the next frontier in software   Whether you're building on AWS or just exploring GenAI adoption, this conversation is packed with frameworks, examples, and strategy. Stay Updated: Craig Smith on X:https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI (00:00) The Future of Work with Generative AI   (03:20) Inside AWS: How Jeffrey Supports AI Innovation   (06:00) What the Forrester Survey Reveals About AI Adoption   (09:15) From Hype to Value: Building Real GenAI Use Cases   (13:45) How ISVs Are Reducing Toil and Driving Efficiency   (17:10) Balancing Innovation with Trust and Security   (22:00) AWS Programs That Help ISVs Win with AI   (28:00) GenAI Product Strategy: Accuracy, Cost & Pricing Models   (34:30) Overcoming Infrastructure Challenges in GenAI   (39:45) The Rise of Agentic Workflows and Interoperability   (46:00) The Biggest Tech Disruption in Decades?

    #252 Sid Sheth: How d-Matrix is Disrupting AI Inference in 2025

    Apr 30, 2025 · 54:32


    This episode is sponsored by the DFINITY Foundation.  DFINITY Foundation's mission is to develop and contribute technology that enables the Internet Computer (ICP) blockchain and its ecosystem, aiming to shift cloud computing into a fully decentralized state. Find out more at https://internetcomputer.org/     In this episode of Eye on AI, we sit down with Sid Sheth, CEO and Co-Founder of d-Matrix, to explore how his company is revolutionizing AI inference hardware and taking on industry giants like NVIDIA.   Sid shares his journey from building multi-billion-dollar businesses in semiconductors to founding d-Matrix—a startup focused on generative AI inference, chiplet-based architecture, and ultra-low latency AI acceleration.   We break down: Why the future of AI lies in inference, not training How d-Matrix's Corsair PCIe accelerator outperforms NVIDIA's H200 The role of in-memory compute and high bandwidth memory in next-gen AI chips How d-Matrix integrates seamlessly into hyperscaler and enterprise cloud environments Why AI infrastructure is becoming heterogeneous and what that means for developers The global outlook on inference chips—from the US to APAC and beyond How Sid plans to build the next NVIDIA-level company from the ground up.   Whether you're building in AI infrastructure, investing in semiconductors, or just curious about the future of generative AI at scale, this episode is packed with value.     Stay Updated: Craig Smith on X:https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI   (00:00) Intro (02:46) Introducing Sid Sheth (05:27) Why He Started d-Matrix (07:28) Lessons from Building a $2.5B Chip Business (11:52) How d-Matrix Prototypes New Chips (15:06) Working with Hyperscalers Like Google & Amazon (17:27) What's Inside the Corsair AI Accelerator (21:12) How d-Matrix Beats NVIDIA on Chip Efficiency (24:10) The Memory Bandwidth Advantage Explained (26:27) Running Massive AI Models at High Speed (30:20) Why Inference Isn't One-Size-Fits-All (32:40) The Future of AI Hardware (36:28) Supporting Llama 3 and Other Open Models (40:16) Is the Inference Market Big Enough? (43:21) Why the US Is Still the Key Market (46:39) Can India Compete in the AI Chip Race? (49:09) Will China Catch Up on AI Hardware?  

    #250 Pedro Domingos on the Real Path to AGI

    Apr 24, 2025 · 68:12


    This episode is sponsored by Thuma. Thuma is a modern design company that specializes in timeless home essentials that are mindfully made with premium materials and intentional details. To get $100 towards your first bed purchase, go to http://thuma.co/eyeonai   Can AI Ever Reach AGI? Pedro Domingos Explains the Missing Link In this episode of Eye on AI, renowned computer scientist and author of The Master Algorithm, Pedro Domingos, breaks down what's still missing in our race toward Artificial General Intelligence (AGI) — and why the path forward requires a radical unification of AI's five foundational paradigms: Symbolists, Connectionists, Bayesians, Evolutionaries, and Analogizers.   Topics covered: Why deep learning alone won't achieve AGI How reasoning by analogy could unlock true machine creativity The role of evolutionary algorithms in building intelligent systems Why transformers like GPT-4 are impressive—but incomplete The danger of hype from tech leaders vs. the real science behind AGI What the Master Algorithm truly means — and why we haven't found it yet Pedro argues that creativity is easy, reliability is hard, and that reasoning by analogy — not just scaling LLMs — may be the key to Einstein-level breakthroughs in AI.   Whether you're an AI researcher, machine learning engineer, or just curious about the future of artificial intelligence, this is one of the most important conversations on how to actually reach AGI.    

    #249 Brice Challamel: How Moderna is Using AI to Disrupt Modern Healthcare

    Apr 20, 2025 · 50:04


    This episode is sponsored by Oracle. OCI is the next-generation cloud designed for every workload – where you can run any application, including any AI projects, faster and more securely for less.  On average, OCI costs 50% less for compute, 70% less for storage, and 80% less for networking. Join Modal, Skydance Animation, and today's innovative AI tech companies who upgraded to OCI…and saved.    Offer only for new US customers with a minimum financial commitment. See if you qualify for half off at http://oracle.com/eyeonai     In this episode of Eye on AI, Craig Smith sits down with Brice Challamel, Head of AI Products and Innovation at Moderna, to explore how one of the world's leading biotech companies is embedding artificial intelligence across every layer of its business—from drug discovery to regulatory approval.   Brice breaks down how Moderna treats AI not just as a tool, but as a utility—much like electricity or the internet—designed to empower every employee and drive innovation at scale. With over 1,800 GPTs in production and thousands of AI solutions running on internal platforms like Compute and MChat, Moderna is redefining what it means to be an AI-native company.   Key topics covered in this episode: How Moderna operationalizes AI at scale GenAI as the new interface for machine learning AI's role in speeding up drug approvals and clinical trials The future of personalized cancer treatment (INT) Moderna's platform mindset: AI + mRNA = next-gen medicine Collaborating with the FDA using AI-powered systems   Don't forget to like, comment, and subscribe for more interviews at the intersection of AI and innovation.     Stay Updated: Craig Smith on X:https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI     (00:00) Preview  (02:49) Brice Challamel's Background and Role at Moderna (05:51) Why AI Is Treated as a Utility at Moderna (09:01) Moderna's AI Infrastructure (11:53) GenAI vs Traditional ML (14:59) Combining mRNA and AI as Dual Platforms (18:15) AI's Impact on Regulatory & Clinical Acceleration (23:46) The Five Core Applications of AI at Moderna (26:33) How Teams Identify AI Use Cases Across the Business (29:01) Collaborating with the FDA Using AI Tools (33:55) How Moderna Is Personalizing Cancer Treatments (36:59) The Role of GenAI in Medical Care (40:10) Producing Personalized mRNA Medicines (42:33) Why Moderna Doesn't Sell AI Tools (45:30) The Future: AI and Democratized Biotech

    #248 Pedro Domingos: How Connectionism Is Reshaping the Future of Machine Learning

    Apr 17, 2025 · 59:56


    This episode is sponsored by Indeed.  Stop struggling to get your job post seen on other job sites. Indeed's Sponsored Jobs help you stand out and hire fast. With Sponsored Jobs your post jumps to the top of the page for your relevant candidates, so you can reach the people you want faster. Get a $75 Sponsored Job Credit to boost your job's visibility! Claim your offer now: https://www.indeed.com/EYEONAI     In this episode, renowned AI researcher Pedro Domingos, author of The Master Algorithm, takes us deep into the world of Connectionism—the AI tribe behind neural networks and the deep learning revolution.   From the birth of neural networks in the 1940s to the explosive rise of transformers and ChatGPT, Pedro unpacks the history, breakthroughs, and limitations of connectionist AI. Along the way, he explores how supervised learning continues to quietly power today's most impressive AI systems—and why reinforcement learning and unsupervised learning are still lagging behind.   We also dive into: The tribal war between Connectionists and Symbolists The surprising origins of Backpropagation How transformers redefined machine translation Why GANs and generative models exploded (and then faded) The myth of modern reinforcement learning (DeepSeek, RLHF, etc.) The danger of AI research narrowing too soon around one dominant approach Whether you're an AI enthusiast, a machine learning practitioner, or just curious about where intelligence is headed, this episode offers a rare deep dive into the ideological foundations of AI—and what's coming next. Don't forget to subscribe for more episodes on AI, data, and the future of tech.     Stay Updated: Craig Smith on X:https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI     (00:00) What Are Generative Models? (03:02) AI Progress and the Local Optimum Trap (06:30) The Five Tribes of AI and Why They Matter (09:07) The Rise of Connectionism (11:14) Rosenblatt's Perceptron and the First AI Hype Cycle (13:35) Backpropagation: The Algorithm That Changed Everything (19:39) How Backpropagation Actually Works (21:22) AlexNet and the Deep Learning Boom (23:22) Why the Vision Community Resisted Neural Nets (25:39) The Expansion of Deep Learning (28:48) NetTalk and the Baby Steps of Neural Speech (31:24) How Transformers (and Attention) Transformed AI (34:36) Why Attention Solved the Bottleneck in Translation (35:24) The Untold Story of Transformer Invention (38:35) LSTMs vs. Attention: Solving the Vanishing Gradient Problem (42:29) GANs: The Evolutionary Arms Race in AI (48:53) Reinforcement Learning Explained (52:46) Why RL Is Mostly Just Supervised Learning in Disguise (54:35) Where AI Research Should Go Next  

    #247 Barr Moses: Why Reliable Data is Key to Building Good AI Systems

    Apr 13, 2025 · 55:36


    This episode is sponsored by Netsuite by Oracle, the number one cloud financial system, streamlining accounting, financial management, inventory, HR, and more.   NetSuite is offering a one-of-a-kind flexible financing program. Head to  https://netsuite.com/EYEONAI to know more.   In this episode of Eye on AI, Craig Smith sits down with Barr Moses, Co-Founder & CEO of Monte Carlo, the pioneer of data and AI observability. Together, they explore the hidden force behind every great AI system: reliable, trustworthy data. With AI adoption soaring across industries, companies now face a critical question: Can we trust the data feeding our models? Barr unpacks why data quality is more important than ever, how observability helps detect and resolve data issues, and why clean data—not access to GPT or Claude—is the real competitive moat in AI today.   What You'll Learn in This Episode: Why access to AI models is no longer a competitive advantage How Monte Carlo helps teams monitor complex data estates in real-time The dangers of “data hallucinations” and how to prevent them Real-world examples of data failures and their impact on AI outputs The difference between data observability and explainability Why legacy methods of data review no longer work in an AI-first world Stay Updated: Craig Smith on X:https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI     (00:00) Intro (01:08) How Monte Carlo Fixed Broken Data   (03:08) What Is Data & AI Observability?   (05:00) Structured vs Unstructured Data Monitoring   (08:48) How Monte Carlo Integrates Across Data Stacks (13:35) Why Clean Data Is the New Competitive Advantage   (16:57) How Monte Carlo Uses AI Internally   (19:20) 4 Failure Points: Data, Systems, Code, Models   (23:08) Can Observability Detect Bias in Data?   (26:15) Why Data Quality Needs a Modern Definition   (29:22) Explosion of Data Tools & Monte Carlo's 50+ Integrations   (33:18) Data Observability vs Explainability   (36:18) Human Evaluation vs Automated Monitoring   (39:23) What Monte Carlo Looks Like for Users   (46:03) How Fast Can You Deploy Monte Carlo?   (51:56) Why Manual Data Checks No Longer Work   (53:26) The Future of AI Depends on Trustworthy Data 

    #246 Will Grannis: How Google Cloud is Powering the Future of Agentic AI

    Apr 9, 2025 · 57:44


    This episode is sponsored by Thuma. Thuma is a modern design company that specializes in timeless home essentials that are mindfully made with premium materials and intentional details.   To get $100 towards your first bed purchase, go to http://thuma.co/eyeonai     What happens when AI agents start negotiating, automating workflows, and rewriting how the enterprise world operates?   In this episode of the Eye on AI podcast, Will Grannis, CTO of Google Cloud, reveals how Google is leading the charge into the next frontier of artificial intelligence: agentic AI. From multi-agent systems that can file your expenses to futuristic R2-D2-style assistants in real-time race strategy, this episode dives deep into how AI is no longer just about models—it's about autonomous action. In this episode, we explore: How AgentSpace is transforming how enterprises build AI agents The evolution from rule-based workflows to intelligent orchestration Real-world use cases: expense automation, content creation, code generation Trust, sovereignty, and securing agentic systems at scale The future of multi-agent ecosystems and AI-driven scientific discovery How large enterprises can match startup agility using their data advantage   Whether you're a founder, engineer, or enterprise leader—this episode will shift how you think about deploying AI in the real world.   Subscribe for more deep dives with tech leaders and AI visionaries. Drop a comment with your thoughts on where agentic AI is headed!     (00:00) Preview and Intro (02:34) Will Grannis' Role at Google Cloud (05:14) Origins of Agentic Workflows at Google (09:10) How Generative AI Changed the Agent Game (12:29) Agents, Tool Access & Trust Infrastructure (14:01) What is Agent Space? (16:30) Creative & Marketing Agents in Action (23:29) Core Components of Building Agents (25:29) Introducing the Agent Garden (28:06) The “Cloud of Connected Agents” Concept (33:53) Solving Agent Quality & Self-Evaluation (37:19) The Future of Autonomous Finance Agents (40:55) How Enterprises Choose Cloud Partners for Agents (43:50) Google Cloud's Principles in Practice (46:27) Gemini's Context Power in Cybersecurity (49:50) Robotics and R2D2-Inspired AI Projects (52:39) How to Try Agent Space Yourself  

    #245 Rajat Taneja: Visa's President of Technology Reveals Their $3.3 Billion AI Strategy

    Apr 2, 2025 · 24:49


    This episode is sponsored by Thuma.   Thuma is a modern design company that specializes in timeless home essentials that are mindfully made with premium materials and intentional details.   To get $100 towards your first bed purchase, go to http://thuma.co/eyeonai Visa's President of Technology, Rajat Taneja, pulls back the curtain on the $3.3 billion AI transformation powering one of the world's most trusted financial networks.   In this episode, Taneja shares how Visa—a company processing over $16 trillion annually across 300 billion real-time transactions—is leveraging AI not just to stop fraud, but to redefine the future of commerce.   From deep neural networks trained on decades of transaction data to generative AI tools powering next-gen agentic systems, Visa has quietly been an AI-first company since the 1990s. Now, with 500+ petabytes of data and 2,900 open APIs, it's preparing for a future where agents, biometrics, and behavioral signals shape every interaction.   Taneja also reveals how Visa's models can mimic bank decisions in milliseconds, stop enumeration attacks, and even detect fraud based on how you type. This is AI at global scale—with zero room for error.   What You'll Learn in This Episode: How Visa's $3.3B data platform powers 24/7 AI-driven decisioning The fraud models behind stopping $40 billion in criminal transactions What “agentic commerce” means—and why Visa is betting big on it How Visa uses behavioral biometrics to detect account takeovers Why Visa rebuilt its infrastructure for the AI era—10 years ahead of the curve The role of generative AI, biometric identity, and APIs in the next wave of payments   The future of commerce isn't just cashless—it's intelligent, autonomous, and trust-driven.   If you're curious about how AI is redefining payments, security, and digital identity at massive scale, this episode is essential viewing.   Subscribe for more deep dives into the future of AI, commerce, and innovation. Stay Updated: Craig Smith on X:https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI (00:00) Introduction (02:57) Meet Rajat Taneja, Visa's President of Technology (04:02) Scaling AI for 300 Billion Transactions Annually (05:27) The Models Behind Visa's Fraud Detection (08:02) Visa's In-House AI Models vs Open-Source Tools (10:54) Inside Visa's $3.3B AI Data Platform (12:29) Visa's Role in E-Commerce Innovation (16:24) Biometrics, Identity & Tokenization at Visa (21:14) Visa's Vision for AI-Driven Commerce

    #244 Yoav Shoham on Jamba Models, Maestro and The Future of Enterprise AI

    Mar 27, 2025 · 52:16


    This episode is sponsored by the DFINITY Foundation.  DFINITY Foundation's mission is to develop and contribute technology that enables the Internet Computer (ICP) blockchain and its ecosystem, aiming to shift cloud computing into a fully decentralized state.   Find out more at https://internetcomputer.org/ In this episode of Eye on AI, Yoav Shoham, co-founder of AI21 Labs, shares his insights on the evolution of AI, touching on key advancements such as Jamba and Maestro. From the early days of his career to the latest developments in AI systems, Yoav offers a comprehensive look into the future of artificial intelligence.   Yoav opens up about his journey in AI, beginning with his academic roots in game theory and logic, followed by his entrepreneurial ventures that led to the creation of AI21 Labs. He explains the founding of AI21 Labs and the company's mission to combine traditional AI approaches with modern deep learning methods, leading to innovations like Jamba—a highly efficient hybrid AI model that's disrupting the traditional transformer architecture.   He also introduces Maestro, AI21's orchestrator that works with multiple large language models (LLMs) and AI tools to create more reliable, predictable, and efficient systems for enterprises. Yoav discusses how Maestro is tackling real-world challenges in enterprise AI, moving beyond flashy demos to practical, scalable solutions.   Throughout the conversation, Yoav emphasizes the limitations of current large language models (LLMs), even those with reasoning capabilities, and explains how AI systems, rather than just pure language models, are becoming the future of AI. He also delves into the philosophical side of AI, discussing whether models truly "understand" and what that means for the future of artificial intelligence.   Whether you're deeply invested in AI research or curious about its applications in business, this episode is filled with valuable insights into the current and future landscape of artificial intelligence.     Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI     (00:00) Introduction: The Future of AI Systems (02:33) Yoav's Journey: From Academia to AI21 Labs (05:57) The Evolution of AI: Symbolic AI and Deep Learning (07:38) Jurassic One: AI21 Labs' First Language Model (10:39) Jamba: Revolutionizing AI Model Architecture (16:11) Benchmarking AI Models: Challenges and Criticisms (22:18) Reinforcement Learning in AI Models (24:33) The Future of AI: Is Jamba the End of Larger Models? (27:31) Applications of Jamba: Real-World Use Cases in Enterprise (29:56) The Transition to Mass AI Deployment in Enterprises (33:47) Maestro: The Orchestrator of AI Tools and Language Models (36:03) GPT-4.5 and Reasoning Models: Are They the Future of AI? (38:09) Yoav's Pet Project: The Philosophical Side of AI Understanding (41:27) The Philosophy of AI Understanding (45:32) Explanations and Competence in AI (48:59) Where to Access Jamba and Maestro  

    #243 Greg Osuri: Why the Future of AI Depends on Decentralized Cloud Platforms

    Mar 18, 2025 · 59:19


    This episode is sponsored by Indeed.  Stop struggling to get your job post seen on other job sites. Indeed's Sponsored Jobs help you stand out and hire fast. With Sponsored Jobs your post jumps to the top of the page for your relevant candidates, so you can reach the people you want faster. Get a $75 Sponsored Job Credit to boost your job's visibility! Claim your offer now: https://www.indeed.com/EYEONAI Greg Osuri's Vision for Decentralized Cloud Computing | The Future of AI & Web3 Infrastructure The cloud is broken—can decentralization fix it? In this episode, Greg Osuri, founder of Akash Network, shares his groundbreaking approach to decentralized cloud computing and how it's disrupting hyperscalers like AWS, Google Cloud, and Microsoft Azure.  Discover how Akash Network's peer-to-peer marketplace is slashing cloud costs, unlocking unused compute power, and paving the way for AI-driven infrastructure without Big Tech's control. What You'll Learn in This Episode: - Why AI training is hitting an energy bottleneck and how decentralization solves it - How Akash Network creates a global marketplace for underutilized compute power - The role of blockchain in securing cloud resources and enforcing smart contracts - The privacy risks of hyperscalers—and why sovereign AI in the home is the future - How Akash Network is evolving from a resource marketplace to a full-fledged services economy - The future of AI, energy-efficient cloud solutions, and decentralized infrastructure The battle for the future of cloud computing is on—and decentralization is winning. If you're interested in AI, blockchain, Web3, or the economics of cloud infrastructure, this episode is a must-watch! Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI   (00:00) Introduction & The Biggest Challenges in AI Training   (02:36) Greg Osuri's Background (04:50) The Problem with AWS, Google Cloud & Traditional Cloud Providers   (06:40) How To Use Blockchain for a Decentralized Cloud   (10:17) Akash Network's Marketplace Matches Compute Buyers & Sellers   (14:42) Security & Privacy: Protecting Users from Data Risks   (18:25) The Energy Crisis: Why Hyperscalers Are Unsustainable   (21:51) The Future of AI: Decentralized Cloud & Home AI Computing   (26:42) How AI Workloads Are Routed & Optimized (30:24) Big Companies Using Akash Network: NVIDIA, Prime Intellect & More   (45:49) Building a Decentralized AI Services Marketplace   (55:09) Why the Future of AI Needs a Decentralized Cloud  

    #242 Dylan Arena: The AI Education Revolution: How AI is Changing the Way We Learn

    Mar 12, 2025 · 57:58


    This episode is brought to you by Extreme Networks, the company radically improving customer experiences with AI-powered automation for networking. Extreme is driving the convergence of AI, networking, and security to transform the way businesses connect and protect their networks, delivering faster performance, stronger security, and a seamless user experience. Visit extremenetworks.com to learn more. ———————————————————————————————————————— The Role of AI in Education | Dylan Arena on Learning, AI Tutoring & The Future of Teaching How can AI enhance education without replacing the human touch? In this episode, Dylan Arena, Chief Data Science and AI Officer at McGraw Hill, shares his insights on the intersection of AI and learning. Dylan's background in learning sciences and technology design has shaped his approach to AI-powered tools that help students and teachers—not replace them. He discusses how AI can augment human relationships in education, improve personalized learning, and assist teachers with real-time insights while avoiding the pitfalls of over-reliance on automation. With AI playing an increasingly central role in education, are we at risk of losing the essential human connections that define great learning experiences? What You'll Learn in This Episode: - Why AI should be used to enhance not replace teachers - The risks and rewards of AI-powered tutoring - How AI-driven assessments can improve personalized learning - Why AI chatbots in education need careful ethical considerations - The future of gamification and AI-driven engagement in classrooms - How McGraw Hill is integrating AI into its learning platforms If you care about the future of education, AI, and ethical tech development, this episode is a must-watch.  ———————————————————————————————————————— This episode is sponsored by Oracle. Oracle Cloud Infrastructure (OCI) is a blazing-fast and secure platform for your infrastructure, database, application development, plus all your AI and machine learning workloads. OCI costs 50% less for compute and 80% less for networking—so you're saving a pile of money. Thousands of businesses have already upgraded to OCI, including MGM Resorts, Specialized Bikes, and Fireworks AI. Cut your current cloud bill in HALF if you move to OCI now: https://oracle.com/eyeonai ———————————————————————————————————————— Chapters: (00:00) The Role of AI in Augmenting Human Learning (02:10) Dylan's Background in Learning Sciences & AI (08:23) The Risks of AI-Powered Education Tools (11:08) AI Tutoring: Can It Replace Human Teachers? (16:28) AI's Role in Personalized Learning & Adaptive Assessments (22:47) How AI Can Assist, Not Replace, Teachers (29:36) The Future of AI-Driven Gamification in Education (36:41) Ethical Concerns Around AI Chatbots & Student Relationships (45:02) The Impact of AI on Student Learning & Memory Retention (50:19) How McGraw Hill is Innovating with AI in Education (54:44) Final Thoughts: AI's Role in Shaping the Future of Learning

    #241 Patrick M. Pilarski: The Alberta Plan's Roadmap to AI and AGI

    Mar 7, 2025 · 61:44


    This episode is sponsored by Netsuite by Oracle, the number one cloud financial system, streamlining accounting, financial management, inventory, HR, and more. NetSuite is offering a one-of-a-kind flexible financing program. Head to  https://netsuite.com/EYEONAI to know more. Can AI learn like humans? In this episode, Patrick Pilarski, Canada CIFAR AI Chair and professor at the University of Alberta, breaks down The Alberta Plan—a bold roadmap for achieving Artificial General Intelligence (AGI) through reinforcement learning and real-time experience-based AI. Unlike large pre-trained models that rely on massive datasets, The Alberta Plan champions continual learning, where AI evolves from raw sensory experience, much like a child learning through trial and error. Could this be the key to unlocking true intelligence? Pilarski also shares insights from his groundbreaking work in bionic medicine, where AI-powered prosthetics are transforming human-machine interaction. From neuroprostheses to reinforcement learning-driven robotics, this conversation explores how AI can enhance—not just replace—human intelligence. What You'll Learn in This Episode: Why reinforcement learning is a better path to AGI than pre-trained models The four core principles of The Alberta Plan and why they matter How AI-driven bionic prosthetics are revolutionizing human-machine integration The battle between reinforcement learning and traditional control systems in robotics Why continual learning is critical for AI to avoid catastrophic forgetting How reinforcement learning is already powering real-world breakthroughs in plasma control, industrial automation, and beyond The future of AI isn't just about more data—it's about AI that thinks, adapts, and learns from experience. If you're curious about the next frontier of AI, the rise of reinforcement learning, and the quest for true intelligence, this episode is a must-watch. Subscribe for more AI deep dives! (00:00) The Alberta Plan: A Roadmap to AGI   (02:22) Introducing Patrick Pilarski (05:49) Breaking Down The Alberta Plan's Core Principles   (07:46) The Role of Experience-Based Learning in AI   (08:40) Reinforcement Learning vs. Pre-Trained Models   (12:45) The Relationship Between AI, the Environment, and Learning   (16:23) The Power of Reward in AI Decision-Making   (18:26) Continual Learning & Avoiding Catastrophic Forgetting   (21:57) AI in the Real World: Applications in Fusion, Data Centers & Robotics   (27:56) AI Learning Like Humans: The Role of Predictive Models   (31:24) Can AI Learn Without Massive Pre-Trained Models?   (35:19) Control Theory vs. Reinforcement Learning in Robotics   (40:16) The Future of Continual Learning in AI   (44:33) Reinforcement Learning in Prosthetics: AI & Human Interaction   (50:47) The End Goal of The Alberta Plan  

    #240 Manos Koukoumidis: Why The Future of AI is Open-Source

    Mar 4, 2025 · 66:03


    This episode is brought to you by Sonar, the creators of SonarQube Server, Cloud, IDE, and the open source Community Build. Sonar unlocks actionable code intelligence, helping to redefine the software development lifecycle by use of AI and AI agentic systems, to continuously improve quality and security while reducing developer toil. By analyzing all code, regardless of who writes it—your internal team or genAI—Sonar enables more secure, reliable, and maintainable software. Join the over 7 million developers from organizations like the DoD, Microsoft, NASA, MasterCard, Siemens, and T-Mobile, who use Sonar. Visit http://sonarsource.com/eyeonai to try SonarQube for free today. ———————————————————————————————————————— The Future of AI is Open-Source | Manos Koukoumidis on Oumi & The AI Revolution Is closed AI holding back innovation? In this episode, Manos Koukoumidis, CEO of Oumi, makes the case for why the future of AI must be open-source. Oumi (Open Universal Machine Intelligence) is redefining how AI is built—offering fully open models, open data, and open collaboration to make AI development more transparent, accessible, and community-driven. Big Tech has dominated AI, but Oumi is challenging the status quo by creating a platform where anyone can train, fine-tune, and deploy AI models with just a few commands. Could this be the Linux moment for AI? What You'll Learn in This Episode: Why open-source AI is the only sustainable path forward The difference between “open-source” AI and true open AI How Oumi enables researchers and enterprises to build better AI models Why Big Tech's closed AI systems are losing their competitive edge The impact of open AI on healthcare, science, and enterprise innovation The future of AI models—will proprietary AI survive? The AI revolution is happening—and it's open-source. If you care about the future of AI, innovation, and ethical tech development, this episode is a must-watch. ———————————————————————————————————————— This episode is sponsored by Thuma. Thuma is a modern design company that specializes in timeless home essentials that are mindfully made with premium materials and intentional details. To get $100 towards your first bed purchase, go to http://thuma.co/eyeonai ———————————————————————————————————————— (00:00) The True Meaning of Open-Source AI (02:15) The Open vs. Closed AI Debate (07:54) Why Open AI Models Are Safer (10:34) Defining Open Data (13:21) Beating GPT-4o with an Open AI Model (16:36) Open AI in Healthcare (19:31) Why Open Models Will Dominate (23:07) How Oumi Makes AI Training Fully Accessible & Reproducible (28:44) Oumi's Collaboration with Universities (32:29) The Shift Toward Open AI (36:41) Can We Build Truly Open AI Models from Scratch? (40:20) The Role of Open AI in Eliminating Bias (45:02) Will Open AI Replace Proprietary AI Models? (50:19) How Oumi Works (54:44) The Open AI Revolution Has Begun

    #241 Tuhin Srivatsa: How Baseten is Disrupting AI Deployment & Scaling in 2025

    Feb 26, 2025 · 46:17


    This episode is sponsored by Thuma.   Thuma is a modern design company that specializes in timeless home essentials that are mindfully made with premium materials and intentional details.   To get $100 towards your first bed purchase, go to http://thuma.co/eyeonai   ————————————————————————————————————————— AI deployment is broken—can it be fixed? In this episode, Tuhin Srivatsa, CEO & Co-Founder of Baseten, reveals how his company is DISRUPTING AI infrastructure, making it easier, faster, and more cost-effective to deploy and scale AI models in production. As enterprises increasingly turn to open-source AI models and grapple with the high costs and complexity of scaling, Baseten offers a game-changing solution that eliminates bottlenecks and simplifies the process. Discover how Baseten is taking on AWS SageMaker, OpenAI, and cloud-based AI deployment platforms to reshape the future of AI model deployment. What You'll Learn in This Episode: Why AI deployment & scaling is one of the biggest challenges in 2025 How Baseten enables enterprises to run AI models faster & more efficiently The shift from closed-source to open-source AI models—and why it matters The hidden costs of AI inference & how to optimize for performance Why most AI models fail in production and how to prevent it The future of AI infrastructure: What comes next for scalable AI Whether you're a machine learning engineer, AI researcher, startup founder, or enterprise leader, this episode is packed with actionable insights to help you scale AI models without the headaches. Don't miss this conversation on the next era of AI deployment! #AI #ArtificialIntelligence #MachineLearning #Baseten #AIDeployment #AIScaling #Inference #MLInfrastructure #TechPodcast Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI   —————————————————————————————————————————   (00:00) Tuhin Srivatsa's Journey in AI & Baseten   (01:50) What is AI Infrastructure & Why It Matters   (03:30) How Baseten Optimizes AI Model Deployment   (05:19) Why Most AI Deployments Fail (And How to Fix It)   (09:17) The Future of Open-Source AI Models in Enterprise   (11:01) How Baseten Automates AI Scaling & Inference   (14:12) Why AI Developers Struggle with Cloud-Based AI Tools   (18:47) The Real Cost of AI Inference (And How to Reduce It)   (20:44) Why AI Scaling is the Biggest Challenge in 2025   (26:55) Can AI Run on Non-NVIDIA Chips? (The Hardware Debate)   (31:23) The Future of AI Model Deployment & Inference   (37:05) How AI Agents & Reasoning Models Are Changing the Game   (40:39) The Truth About AI Hype vs. Reality   (45:04) How to Get Started with Baseten   (45:48) The Future of AI Infrastructure  

    #240 Dominic Williams Reveals His Vision for the Internet Computer (ICP)

    Feb 20, 2025 · 74:52


    This episode is sponsored by Indeed.  Stop struggling to get your job post seen on other job sites. Indeed's Sponsored Jobs help you stand out and hire fast. With Sponsored Jobs your post jumps to the top of the page for your relevant candidates, so you can reach the people you want faster.   Get a $75 Sponsored Job Credit to boost your job's visibility! Claim your offer now: https://www.indeed.com/EYEONAI     Dominic Williams' Bold Vision for The Internet Computer (ICP) | The Future of Decentralized Computing   The internet is broken—can blockchain fix it? In this episode, Dominic Williams, the visionary behind The Internet Computer (ICP) and founder of DFINITY, reveals his plan to build a decentralized alternative to cloud computing. Discover how ICP is challenging Big Tech, replacing traditional IT infrastructure, and creating a tamper-proof, autonomous internet powered by smart contracts.   What You'll Learn in This Episode: Why Dominic Williams believes the current internet is flawed How ICP aims to replace centralized cloud providers like AWS & Google Cloud The role of smart contracts in making the internet more secure and censorship-resistant The mission of DFINITY and how it started in 2016 The future of Web3, decentralized applications (dApps), and blockchain governance   Don't miss this deep dive into the future of the internet! If you're interested in blockchain, decentralization, and the next evolution of the web, this episode is for you. Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI   (00:00) The Origins of The Internet Computer   (02:57) Dominic Williams' Background in Tech   (04:28) Early Innovations in Distributed Computing   (07:08) The Birth of a 'World Computer' Concept   (11:22) Reimagining IT: A Decentralized Alternative   (13:45) The Creation of DFINITY and ICP   (16:29) How ICP Differs from Traditional Blockchains   (22:05) The Problem with Cloud-Based Blockchains   (25:35) How ICP Ensures True Decentralization   (29:25) AI & The Self-Writing Internet   (35:24) How ICP Hosts AI & Smart Contracts   (40:23) Understanding Reverse Gas and ICP's Economy   (45:03) The Vision: A Truly Decentralized Internet   (49:09) How To Use The Internet Computer   (52:01) The Role of Nodes & Incentives in ICP   (56:53) The Future of Web3 & Decentralized Applications   (01:05:49) The Misconception of ‘On-Chain' & Blockchain Hype   

    #239 Pedro Domingos Breaks Down The Symbolist Approach to AI

    Feb 17, 2025 · 48:12


    This episode is sponsored by Thuma. Thuma is a modern design company that specializes in timeless home essentials that are mindfully made with premium materials and intentional details. To get $100 towards your first bed purchase, go to http://thuma.co/eyeonai In this episode of the Eye on AI podcast, Pedro Domingos—renowned AI researcher and author of The Master Algorithm—joins Craig Smith to break down the Symbolist approach to artificial intelligence, one of the Five Tribes of Machine Learning. Pedro explains how Symbolic AI dominated the field for decades, from the 1950s to the early 2000s, and why it's still playing a crucial role in modern AI. He dives into the Physical Symbol System Hypothesis, the idea that intelligence can emerge purely from symbol manipulation, and how AI pioneers like Marvin Minsky and John McCarthy built the foundation for rule-based AI systems. The conversation unpacks inverse deduction—the Symbolists' "Master Algorithm"—and how it allows AI to infer general rules from specific examples. Pedro also explores how decision trees, random forests, and boosting methods remain some of the most powerful AI techniques today, often outperforming deep learning in real-world applications. We also discuss why expert systems failed, the knowledge acquisition bottleneck, and how machine learning helped solve Symbolic AI's biggest challenges. Pedro shares insights on the heated debate between Symbolists and Connectionists, the ongoing battle between logic-based reasoning and neural networks, and why the future of AI lies in combining these paradigms. From AlphaGo's hybrid approach to modern AI models integrating logic and reasoning, this episode is a deep dive into the past, present, and future of Symbolic AI—and why it might be making a comeback. Don't forget to like, subscribe, and hit the notification bell for more expert discussions on AI, technology, and the future of intelligence! Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI (00:00) Pedro Domingos on The Five Tribes of Machine Learning (02:23) What is Symbolic AI? (04:46) The Physical Symbol System Hypothesis Explained (07:05) Understanding Symbols in AI (11:51) What is Inverse Deduction? (15:10) Symbolic AI in Medical Diagnosis (17:35) The Knowledge Acquisition Bottleneck (19:05) Why Symbolic AI Struggled with Uncertainty (20:40) Machine Learning in Symbolic AI – More Than Just Connectionism (24:08) Decision Trees & Their Role in Symbolic Learning (26:55) The Myth of Feature Engineering in Deep Learning (30:18) How Symbolic AI Invents Its Own Rules (31:54) The Rise and Fall of Expert Systems – The CYCL Project (38:53) Symbolic AI vs. Connectionism (41:53) Is Symbolic AI Still Relevant Today? (43:29) How AlphaGo Combined Symbolic AI & Neural Networks (45:07) What Symbolic AI is Best At – System 2 Thinking (47:18) Is GPT-4o Using Symbolic AI?

    #238 Dr. Mark Bailey: How AI Will Shape the Future of War

    Feb 12, 2025 · 30:44


    This episode is sponsored by Oracle.   Oracle Cloud Infrastructure, or OCI is a blazing fast and secure platform for your infrastructure, database, application development, plus all your AI and machine learning workloads. OCI costs 50% less for compute and 80% less for networking. So you're saving a pile of money. Thousands of businesses have already upgraded to OCI, including MGM Resorts, Specialized Bikes, and Fireworks AI.   Cut your current cloud bill in HALF if you move to OCI now:  https://oracle.com/eyeonai In this episode of Eye on AI, Mark Bailey, Associate Professor at the National Intelligence University, joins Craig Smith to explore the rapidly evolving role of AI in modern warfare—its promises, risks, and the ethical dilemmas it presents. Mark shares his expertise on AI autonomy in military strategy, breaking down the differences between automation and true autonomy. We discuss how AI-driven systems could revolutionize combat by reducing human casualties, improving precision, and enhancing battlefield decision-making. But with these advancements come serious concerns—how do we prevent automation bias? Can we trust AI to make life-or-death decisions? And will AI-driven warfare lower the threshold for conflict, making war more frequent? We also examine the global AI arms race, the impact of AI on defense policies, and the ethical implications of fully autonomous weapons. Mark unpacks key challenges like the black box problem, AI alignment issues, and the long-term consequences of integrating AI into military operations. He also shares insights from his latest book, where he calls for international AI regulations to prevent an uncontrolled escalation of AI warfare. With AI-driven drone swarms, autonomous targeting systems, and defense innovations shaping the future of global security, this conversation is a must-watch for anyone interested in AI, defense technology, and the moral questions of war in the digital age. Don't forget to like, subscribe, and hit the notification bell for more discussions on AI, technology, and the future of intelligence!   Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI   (00:00) AI's Role in Warfare   (02:02) Introducing Dr. Mark Bailey  (04:02) Automation vs. Autonomy in Military AI   (12:02) AI Warfare: A Threat to Global Stability?   (17:10) Inside Dr. Bailey's Book: Ethics & AI in War   (20:05) AI Reliability in Warfare (23:28) The Future of AI Swarms & Autonomous Warfare   (24:17) Who Decides How AI is Used in War?   (28:05) The Future of AI & Military Ethics   

    #237 Pedro Domingos on Bayesians and Analogical Learning in AI

    Feb 9, 2025 · 56:43


    This episode is sponsored by Thuma. Thuma is a modern design company that specializes in timeless home essentials that are mindfully made with premium materials and intentional details. To get $100 towards your first bed purchase, go to http://thuma.co/eyeonai   In this episode of the Eye on AI podcast, Pedro Domingos, renowned AI researcher and author of The Master Algorithm, joins Craig Smith to explore the evolution of machine learning, the resurgence of Bayesian AI, and the future of artificial intelligence. Pedro unpacks the ongoing battle between Bayesian and Frequentist approaches, explaining why probability is one of the most misunderstood concepts in AI. He delves into Bayesian networks, their role in AI decision-making, and how they powered Google's ad system before deep learning. We also discuss how Bayesian learning is still outperforming humans in medical diagnosis, search & rescue, and predictive modeling, despite its computational challenges. The conversation shifts to deep learning's limitations, with Pedro revealing how neural networks might be just a disguised form of nearest-neighbor learning. He challenges conventional wisdom on AGI, AI regulation, and the scalability of deep learning, offering insights into why Bayesian reasoning and analogical learning might be the future of AI. We also dive into analogical learning—a field championed by Douglas Hofstadter—exploring its impact on pattern recognition, case-based reasoning, and support vector machines (SVMs). Pedro highlights how AI has cycled through different paradigms, from symbolic AI in the '80s to SVMs in the 2000s, and why the next big breakthrough may not come from neural networks at all. From theoretical AI debates to real-world applications, this episode offers a deep dive into the science behind AI learning methods, their limitations, and what's next for machine intelligence. Don't forget to like, subscribe, and hit the notification bell for more expert discussions on AI, technology, and the future of innovation!    Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI (00:00) Introduction (02:55) The Five Tribes of Machine Learning Explained   (06:34) Bayesian vs. Frequentist: The Probability Debate   (08:27) What is Bayes' Theorem & How AI Uses It   (12:46) The Power & Limitations of Bayesian Networks   (16:43) How Bayesian Inference Works in AI   (18:56) The Rise & Fall of Bayesian Machine Learning   (20:31) Bayesian AI in Medical Diagnosis & Search and Rescue   (25:07) How Google Used Bayesian Networks for Ads   (28:56) The Role of Uncertainty in AI Decision-Making   (30:34) Why Bayesian Learning is Computationally Hard   (34:18) Analogical Learning – The Overlooked AI Paradigm   (38:09) Support Vector Machines vs. Neural Networks   (41:29) How SVMs Once Dominated Machine Learning   (45:30) The Future of AI – Bayesian, Neural, or Hybrid?   (50:38) Where AI is Heading Next  

    #236 Vall Herard: The Future of AI-Driven Compliance (Saifr.ai)

    Feb 3, 2025 · 51:55


    This episode is sponsored by Netsuite by Oracle, the number one cloud financial system, streamlining accounting, financial management, inventory, HR, and more. NetSuite is offering a one-of-a-kind flexible financing program. Head to https://netsuite.com/EYEONAI to know more. In this episode of Eye on AI, Vall Herard, CEO of Saifr.ai, joins Craig Smith to explore how AI is transforming compliance in financial services. Saifr.ai acts as a "grammar check" for regulatory compliance, ensuring AI-generated content meets SEC, FINRA, and global financial regulations. Vall explains how Saifr integrates into Microsoft Word, Outlook, and Adobe, reducing compliance risks in marketing, emails, and AI chatbots. We also discuss Saifr.ai's partnership with Microsoft, AI's role in regulated industries, and how businesses can safely adopt generative AI without violating compliance laws. - How does AI reduce compliance friction? - Why is regulatory oversight a barrier to AI adoption? - What does AI safety really mean for financial services? Find out in this deep dive into AI, compliance, and the future of regulation. Like, subscribe, and hit the notification bell for more AI insights! Strengthen your compliance controls with AI: https://saifr.ai/ Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI (00:00) Introduction to Generative AI and Compliance (02:47) Meet Vall Herard, CEO of Saifr.ai (05:28) What Saifr.ai Does and Its Mission (08:25) How Saifr.ai Ensures Regulatory Compliance (12:13) Overcoming AI Adoption Barriers in Finance (19:58) Saifr.ai's Partnership with Microsoft (24:11) How Saifr Integrates with Microsoft Office (29:33) AI in Podcast and Audio Compliance Review (33:54) Saifr.ai's Business Model and Pricing (38:09) How Saifr.ai Works with Generative AI Chatbots (42:36) Supporting Multiple Languages for Compliance (50:08) Future Outlook

    #235 Tyler Xuan Saltsman: How AI is Shaping the Future of Combat & Warfare

    Jan 29, 2025 · 38:48


    In this episode of the Eye on AI podcast, Tyler Xuan Saltsman, CEO of Edgerunner, joins Craig Smith to explore how AI is reshaping military strategy, logistics, and defense technology—pushing the boundaries of what's possible in modern warfare. Tyler shares the vision behind Edgerunner, a company at the cutting edge of generative AI for military applications. From logistics and mission planning to autonomous drones and battlefield intelligence, Edgerunner is building domain-specific AI that enhances decision-making, ensuring national security while keeping humans in control. We dive into how AI-powered military agents work, including models fine-tuned with LoRA (Low-Rank Adaptation) to think and act like military specialists—whether in logistics, aircraft maintenance, or real-time combat scenarios. Tyler explains how retrieval-augmented generation (RAG) and small language models allow warfighters to access mission-critical intelligence without relying on the internet, bringing real-time AI support directly to the battlefield. Tyler also discusses the future of drone warfare—how AI-driven, vision-enabled drones can neutralize threats autonomously, reducing reliance on human pilots while increasing battlefield efficiency. With autonomous swarms, AI-powered kamikaze drones, and real-time situational awareness, the landscape of modern warfare is evolving fast. Beyond combat, we explore AI's role in security, including advanced weapons detection systems that can safeguard military bases, schools, and public spaces. Tyler highlights the urgent need for transparency in AI, contrasting Edgerunner's open and auditable AI models with the black-box approaches of major tech companies. Discover how AI is transforming military operations, from logistics to combat strategy, and what this means for the future of defense technology. Don't forget to like, subscribe, and hit the notification bell for more deep dives into AI, defense, and cutting-edge technology! Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI (00:00) Introduction – AI for the Warfighter (01:34) How AI is Transforming Military Logistics (04:44) Running AI on the Edge – No Internet Required (06:49) AI-Powered Mission Planning & Risk Mitigation (14:32) The Future of AI in Drone Warfare (22:17) AI's Role in Strategic Defense & Economic Warfare (26:34) The U.S.-China AI Race – Are We Falling Behind? (35:17) The Future of AI in Warfare

    #234 Matt Price: How Crescendo is Disrupting Customer Service with Gen AI

    Jan 26, 2025 · 44:53


    In this episode of the Eye on AI podcast, Matt Price, CEO of Crescendo, joins Craig Smith to discuss how generative AI is reshaping customer service and blending seamlessly with human expertise to create next-level customer experiences.   Matt shares the story behind Crescendo, a company at the forefront of revolutionizing customer service by integrating advanced AI technology with human-driven solutions. With a focus on outcome-based service delivery and quality assurance, Crescendo is setting a new standard for customer engagement.   We dive into Crescendo's innovative approach, including its use of large language models (LLMs) combined with proprietary IP to deliver consistent, high-quality support across 56 languages. Matt explains how Crescendo's AI tools are designed to handle routine tasks while enabling human agents to focus on complex, empathy-driven interactions—resulting in higher job satisfaction and better customer outcomes.   Matt highlights how Crescendo is redefining the BPO industry, combining AI and human capabilities to reduce costs while improving the quality of customer interactions. From enhancing agent retention to enabling scalable, multilingual support, Crescendo's impact is transformative.   Discover how Matt and his team are designing a future where AI and humans work together to deliver exceptional customer experiences—reimagining what's possible in the world of customer service.   Don't forget to like, subscribe, and hit the notification bell for more insights into AI, technology, and innovation! Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI (00:00) Introduction to Matt Price and Crescendo   (01:49) The rise of AI in customer service   (05:34) Using AI and human expertise for better customer experiences   (07:47) How Gen AI reduces costs and improves engagement   (09:37) Challenges in customer service design and innovation   (11:32) Moving from hidden chatbots to front-and-center customer interaction   (14:08) Training human agents to work seamlessly with AI   (17:02) Using AI to analyze and improve service interactions   (19:15) Outcome-based pricing vs traditional headcount models   (21:53) Improving contact center roles with AI integration   (25:08) The importance of curating accurate knowledge bases for AI   (28:05) Crescendo's acquisition of PartnerHero and its impact   (30:39) Scaling customer service with AI-human collaboration   (32:06) Multilingual support: AI in 56 languages   (33:49) The vast market potential of AI-driven customer service   (36:28) How Crescendo is reshaping customer service with AI innovation   (42:42) Building customer profiles for personalized support  

    #232 Sepp Hochreiter: How LSTMs Power Modern AI Systems

    Play Episode Listen Later Jan 22, 2025 51:08


    In this special episode of the Eye on AI podcast, Sepp Hochreiter, the inventor of Long Short-Term Memory (LSTM) networks, joins Craig Smith to discuss the profound impact of LSTMs on artificial intelligence, from language models to real-time robotics. Sepp reflects on the early days of LSTM development, sharing insights into his collaboration with Jürgen Schmidhuber and the challenges they faced in gaining recognition for their groundbreaking work. He explains how LSTMs became the foundation for technologies used by giants like Amazon, Apple, and Google, and how they paved the way for modern advancements like transformers. Topics include: - The origin story of LSTMs and their unique architecture. - Why LSTMs were crucial for sequence data like speech and text. - The rise of transformers and how they compare to LSTMs. - Real-time robotics: using LSTMs to build energy-efficient, autonomous systems. - The next big challenges for AI and robotics in the era of generative AI. Sepp also shares his optimistic vision for the future of AI, emphasizing the importance of efficient, scalable models and their potential to revolutionize industries from healthcare to autonomous vehicles. Don't miss this deep dive into the history and future of AI, featuring one of its most influential pioneers. (00:00) Introduction: Meet Sepp Hochreiter (01:10) The Origins of LSTMs (02:26) Understanding the Vanishing Gradient Problem (05:12) Memory Cells and LSTM Architecture (06:35) Early Applications of LSTMs in Technology (09:38) How Transformers Differ from LSTMs (13:38) Exploring XLSTM for Industrial Applications (15:17) AI for Robotics and Real-Time Systems (18:55) Expanding LSTM Memory with Hopfield Networks (21:18) The Road to XLSTM Development (23:17) Industrial Use Cases of XLSTM (27:49) AI in Simulation: A New Frontier (32:26) The Future of LSTMs and Scalability (35:48) Inference Efficiency and Potential Applications (39:53) Continuous Learning and Adaptability in AI (42:59) Training Robots with XLSTM Technology (44:47) NXAI: Advancing AI in Industry
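
    For readers who want to see the architecture in code, here is a minimal PyTorch sketch of an LSTM-based sequence classifier; the vocabulary size, dimensions, and dummy batch are placeholders chosen purely for illustration.

        # Minimal LSTM sequence classifier in PyTorch (shapes are illustrative).
        import torch
        import torch.nn as nn

        class LSTMClassifier(nn.Module):
            def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256, num_classes=2):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, embed_dim)
                # The gated memory cell is what lets gradients survive long
                # sequences, addressing the vanishing gradient problem.
                self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
                self.head = nn.Linear(hidden_dim, num_classes)

            def forward(self, token_ids):
                x = self.embed(token_ids)      # (batch, seq_len, embed_dim)
                _, (h_n, _) = self.lstm(x)     # final hidden state per sequence
                return self.head(h_n[-1])      # class logits

        model = LSTMClassifier()
        dummy = torch.randint(0, 10000, (4, 32))   # batch of 4 sequences, length 32
        print(model(dummy).shape)                  # torch.Size([4, 2])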

    #231 Paras Jain: The Future of AI Video Generation with Genmo

    Play Episode Listen Later Jan 16, 2025 47:49


    In this episode of the Eye on AI podcast, Paras Jain, CEO and Co-founder of Genmo, joins Craig Smith to explore the cutting-edge world of AI-driven video generation, the open-source revolution, and the future of creative storytelling. Paras shares the story behind Genmo, a company at the forefront of advancing video generation technologies, and their groundbreaking model, Mochi One. With a focus on motion quality and prompt adherence, Genmo is redefining what's possible in generative AI for video, offering unmatched precision and creative possibilities. We delve into the innovative approach behind Mochi One, including its state-of-the-art architecture, which enables fast, high-quality video generation. Paras explains how Genmo's commitment to open source empowers developers and researchers worldwide, fostering rapid advancements, customization, and the creation of new tools, like video-to-video editing. The conversation touches on key themes such as scalability, synthetic data pipelines, and the transformative potential of AI in creating immersive virtual worlds. Paras also explores how Genmo is bridging the gap between cutting-edge AI and practical applications, from TikTok-ready videos to future possibilities like interactive environments and real-time video game-like experiences. Discover how Paras and his team are shaping the future of video creation, blending art, science, and open collaboration to push the boundaries of generative AI. Don't forget to like, subscribe, and hit the notification bell for more insightful conversations on AI, technology, and innovation!   Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI (00:00) Introduction to Paras Jain and Genmo   (01:45) Video generation with Mochi One   (04:41) Open-source AI in video generation   (06:08) Building Mochi One (12:03) Simulating complex physics in video models   (14:35) Reducing latency: Fast video generation at scale   (20:34) Tackling cost challenges for longer videos   (23:14) Character consistency in AI-generated videos   (27:17) Why video models represent intelligence's next frontier   (30:18) How diffusion models create sharp, realistic videos   (34:02) Visualizing the denoising process in video generation   (39:36) Exploring user-generated video creations with Mochi One   (42:05) Monetizing open-source AI for video generation   (45:47) Video generation's potential in the metaverse   (47:19) Collaborating with universities to advance AI   (48:39) The future of generative AI
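
    The chapter list touches on how diffusion models refine pure noise into a final output. As a purely conceptual toy, unrelated to Mochi One's actual architecture, the denoising loop can be sketched like this (all numbers are arbitrary):

        # Toy diffusion-style sampler: start from noise and repeatedly subtract the
        # "predicted" noise. A real model learns that prediction; here it is faked
        # with a pull toward a target value, purely for illustration.
        import numpy as np

        rng = np.random.default_rng(0)
        target = 3.0
        x = rng.normal(size=1000)                 # step 0: pure noise

        for step in range(50):
            predicted_noise = x - target          # stand-in for the learned denoiser
            x = x - 0.1 * predicted_noise + 0.02 * rng.normal(size=x.shape)

        print(round(float(x.mean()), 2))          # the sample drifts toward 3.0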

    #230 Jamie Lerner: How Quantum Solves AI's Need for Unstructured Data Solutions

    Play Episode Listen Later Jan 13, 2025 53:23


    This episode is sponsored by Oracle.   Oracle Cloud Infrastructure, or OCI is a blazing fast and secure platform for your infrastructure, database, application development, plus all your AI and machine learning workloads. OCI costs 50% less for compute and 80% less for networking. So you're saving a pile of money. Thousands of businesses have already upgraded to OCI, including MGM Resorts, Specialized Bikes, and Fireworks AI.   Cut your current cloud bill in HALF if you move to OCI now:  https://oracle.com/eyeonai In this episode of the Eye on AI podcast, Jamie Lerner, CEO of Quantum, joins Craig Smith to discuss the future of data storage, unstructured data management, and AI's transformative role in modern workflows.   Jamie shares his journey leading Quantum, a company revolutionizing the storage and management of unstructured data for industries like healthcare, media, and AI research. With decades of expertise in creating innovative data solutions, Quantum is at the forefront of enabling efficient, secure, and scalable data workflows.   We dive into Quantum's cutting-edge technologies, from high-speed flash storage systems like the Myriad file system to cost-effective, long-term archival solutions such as tape systems. Jamie unpacks how Quantum supports AI-powered workflows, enabling seamless data movement, metadata tagging, and policy-driven automation for unstructured data like medical imaging, genomics, and video archives.   Jamie also explores the critical role of data sovereignty in today's global landscape, the growing importance of "forever archives," and how Quantum's tools help organizations balance exponential data growth with flat budgets. He sheds light on innovations like synthetic DNA and compressed storage mediums, providing a glimpse into the future of data storage.   Don't forget to like, subscribe, and hit the notification bell for more engaging discussions on AI, technology, and innovation! Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI (00:00) Introduction to Jamie Lerner and Quantum (02:21) Quantum's Focus on Unstructured Data Storage (05:19) Structured vs. Unstructured Data: Key Differences (07:52) Managing Data Workflows with AI and Automation (10:55) Quantum's Role in Long-Term Data Archives (13:32) Data Sovereignty and Security (16:18) How Data is Stored and Protected Across Mediums (19:54) Metadata in AI and Data Management (21:29) Quantum's Role in Building Forever Archives (24:16) Tape Storage: Efficiency and Longevity (29:11) Innovations in Data Storage (34:39) Competing in the Evolving Data Storage Industry (37:56) Innovations in Flash Storage (40:55) Balancing Cost and Efficiency in Data Storage (44:28) The Future of Data Storage and AI Integration (50:07) Quantum's Vision for the Future

    #229 Mitesh Agrawal: Why Lambda Labs' AI Cloud Is a Game-Changer for Developers

    Play Episode Listen Later Jan 8, 2025 56:07


    This episode is sponsored by Netsuite by Oracle, the number one cloud financial system, streamlining accounting, financial management, inventory, HR, and more.   NetSuite is offering a one-of-a-kind flexible financing program. Head to  https://netsuite.com/EYEONAI to know more.  In this episode of the Eye on AI podcast, we dive into the transformative world of AI compute infrastructure with Mitesh Agrawal, Head of Cloud/COO at Lambda.   Mitesh takes us on a journey from Lambda Labs' early days as a style transfer app to its rise as a leader in providing scalable, deep learning infrastructure. Learn how Lambda Labs is reshaping AI compute by delivering cutting-edge GPU solutions and accessible cloud platforms tailored for developers, researchers, and enterprises alike.   Throughout the episode, Mitesh unpacks Lambda Labs' unique approach to optimizing AI infrastructure—from reducing costs with transparent pricing to tackling the global GPU shortage through innovative supply chain strategies. He explains how the company supports deep learning workloads, including training and inference, and why their AI cloud is a game-changer for scaling next-gen applications.   We also explore the broader landscape of AI, touching on the future of AI compute, the role of reasoning and video models, and the potential for localized data centers to meet the growing demand for low-latency solutions. Mitesh shares his vision for a world where AI applications, powered by Lambda Labs, drive innovation across industries.   Tune in to discover how Lambda Labs is democratizing access to deep learning compute and paving the way for the future of AI infrastructure.   Don't forget to like, subscribe, and hit the notification bell to stay updated on the latest in AI, deep learning, and transformative tech! Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI (00:00) Introduction and Lambda Labs' Mission (01:37) Origins: From DreamScope to AI Compute Infrastructure (04:10) Pivoting to Deep Learning Infrastructure (06:23) Building Lambda Cloud: An AI-Focused Cloud Platform (09:16) Transparent Pricing vs. Hyperscalers (12:52) Managing GPU Supply and Demand (16:34) Evolution of AI Workloads: Training vs. Inference (20:02) Why Lambda Labs Sticks with NVIDIA GPUs (24:21) The Future of AI Compute: Localized Data Centers (28:30) Global Accessibility and Regulatory Challenges (32:13) China's AI Development and GPU Restrictions (39:50) Scaling Lambda Labs: Data Centers and Growth (45:22) Advancing AI Models and Video Generation (50:24) Optimism for AI's Future (53:48) How to Access Lambda Cloud

    #228 Rodrigo Liang: How SambaNova Systems Is Disrupting AI Inference

    Play Episode Listen Later Jan 1, 2025 23:54


    This episode is sponsored by RapidSOS. Close the safety gap and transform your emergency response with RapidSOS.   Visit https://rapidsos.com/eyeonai/ today to learn how AI-powered safety can protect your people and boost your bottom line. In this episode of the Eye on AI podcast, we explore the world of AI inference technology with Rodrigo Liang, co-founder and CEO of SambaNova Systems.   Rodrigo shares his journey from high-performance chip design to building SambaNova, a company revolutionizing how enterprises leverage AI through scalable, power-efficient solutions. We dive into SambaNova's groundbreaking achievements, including their record-breaking inference models, the Llama 405B and 70B, which deliver unparalleled speed and accuracy—all on a single rack consuming less than 10 kilowatts of power.   Throughout the conversation, Rodrigo highlights the seismic shift from AI training to inference, explaining why production AI is now about speed, efficiency, and real-time applications. He details SambaNova's approach to open-source models, modular deployment, and multi-tenancy, enabling enterprises to scale AI without costly infrastructure overhauls.   We also discuss the competitive landscape of AI hardware, the challenges of NVIDIA's dominance, and how SambaNova is paving the way for a new era of AI innovation. Rodrigo explains the critical importance of power efficiency and how SambaNova's technology is unlocking opportunities for enterprises to deploy private, secure AI systems on-premises and in the cloud.   Discover how SambaNova is redefining AI for enterprise adoption, enabling real-time AI, and setting new standards in efficiency and scalability.    Don't forget to like, subscribe, and hit the notification bell to stay updated on the latest breakthroughs in AI, technology, and enterprise innovation! Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

    #227 Sedarius Tekara Perrotta: The Importance of Data Quality in AI Systems

    Play Episode Listen Later Dec 29, 2024 45:00


    In this episode of the Eye on AI podcast, we dive into the critical issue of data quality for AI systems with Sedarius Perrotta, co-founder of Shelf. Sedarius takes us on a journey through his experience in knowledge management and how Shelf was built to solve one of AI's most pressing challenges—unstructured data chaos. He shares how Shelf's innovative solutions enhance retrieval-augmented generation (RAG) and ensure tools like Microsoft Copilot can perform at their best by tackling inaccuracies, duplications, and outdated information in real-time. Throughout the episode, we explore how unstructured data acts as the "fuel" for AI systems and why its quality determines success. Sedarius explains Shelf's approach to data observability, transparency, and proactive monitoring to help organizations fix "garbage in, garbage out" issues, ensuring scalable and trusted AI initiatives. We also discuss the accelerating adoption of generative AI, the future of data management, and why building a strategy for clean and trusted data is vital for 2025 and beyond. Learn how Shelf enables businesses to unlock the full potential of their unstructured data for AI-driven productivity and innovation. Don't forget to like, subscribe, and hit the notification bell to stay updated on the latest advancements in AI, data management, and next-gen automation! Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI (00:00) Introduction and Shelf's Mission   (03:01) Understanding SharePoint and Data Challenges   (05:29) Tackling Data Entropy in AI Systems   (08:13) Using AI to Solve Data Quality Issues   (12:30) Fixing AI Hallucinations with Trusted Data   (21:01) Gen AI Adoption Insights and Trends   (28:44) Benefits of Curated Data for AI Training   (37:38) Future of Unstructured Data Management
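
    As a rough illustration of the retrieval step in retrieval-augmented generation (a generic sketch, not Shelf's product), the snippet below embeds a handful of documents and returns the closest matches for a query; the embedding model name and the documents are placeholders.

        # Minimal RAG retrieval: embed documents, embed the query, and return the
        # best matches to pass to a language model as grounding context.
        import numpy as np
        from sentence_transformers import SentenceTransformer

        docs = [
            "Password reset instructions for the VPN portal.",
            "Holiday schedule for 2024.",
            "Steps to configure Microsoft Copilot access.",
        ]

        model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed embedding model
        doc_vecs = model.encode(docs, normalize_embeddings=True)

        def retrieve(query, k=2):
            q = model.encode([query], normalize_embeddings=True)[0]
            scores = doc_vecs @ q                 # cosine similarity on unit vectors
            return [docs[i] for i in np.argsort(-scores)[:k]]

        print(retrieve("How do I set up Copilot?"))

    Note that an outdated or duplicated entry in docs would be retrieved just as readily as a correct one, which is exactly the "garbage in, garbage out" risk the episode discusses.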

    #226 Nathan Michael: The Future of AI-Powered Drones and Autonomous Warfare (with Shield AI)

    Play Episode Listen Later Dec 23, 2024 58:17


    Launch, run, and protect your business to make it official TODAY at https://www.legalzoom.com/ and use promo code EYEONAI to get 10% off any LegalZoom business formation product excluding subscriptions and renewals. In this episode of the Eye on AI podcast, we explore the cutting-edge world of AI-powered autonomy with Nathan Michael, CTO of Shield AI.   Nathan shares his journey from academia to leading one of the most innovative companies in defense technology, revealing how Shield AI is transforming autonomous systems to operate in the most challenging environments.     Throughout the episode, Nathan dives into Shield AI's groundbreaking technologies, including the revolutionary Hivemind platform, which enables drones and uncrewed jets to think, adapt, and act independently—even in GPS-denied and communication-jammed conditions. He explains how these systems are deployed across defense and commercial applications, reshaping intelligence, surveillance, and reconnaissance missions with unprecedented precision and resilience.     We also discuss the future of warfare and technology, from the commoditization of AI-driven autonomy to the ethical and strategic considerations of deploying these systems at scale. Nathan offers insights into how rapid iteration cycles and resilient intelligence will define the next era of defense innovation, ensuring mission success in increasingly complex scenarios.     Don't forget to like, subscribe, and hit the notification bell to stay updated on the latest advancements in AI, robotics, and the future of autonomous systems! Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI (00:00) Introduction to Shield AI and Hivemind (03:02) Nathan Michael's journey to Shield AI (06:07) Shield AI's mission and technology overview (10:18) VBAT: A versatile tail-sitter aircraft (12:44) Hivemind's swarm capabilities and applications (14:32) Ethics and societal guardrails for autonomous systems (17:24) Combat applications of quadcopters (19:12) Navigating and mapping unknown environments (20:38) Use of VBAT in Ukraine operations (22:10) Intelligence gathering and mission versatility (24:42) Transforming warfare with AI-driven autonomy (28:47) Teaming and communication in autonomous swarms (32:29) Communication networks in jammed conditions (35:34) Coordinated exploration with autonomous systems (38:13) Exploring buildings with quadcopter teams (40:57) Crewed-uncrewed teaming in modern combat (43:28) Global competition in AI and autonomy (46:03) Use of autonomous drones in Ukraine (50:28) Shield AI's vision: Proliferation of resilient intelligence (55:26) Expanding AI autonomy to commercial applications  

    #225 Christian Marek: How AI is Shaping the Future of Product Management

    Play Episode Listen Later Dec 18, 2024 50:14


    This episode is sponsored by RapidSOS. Close the safety gap and transform your emergency response with RapidSOS.   Visit https://rapidsos.com/eyeonai/ today to learn how AI-powered safety can protect your people and boost your bottom line. In this episode of the Eye on AI podcast, we delve into the role of AI in product management with Christian Marek, VP of Product at Productboard.     Christian shares his journey in shaping the future of product management, exploring how AI is changing the way we ideate, build, and deliver customer-centric products.     Throughout the episode, Christian unveils Productboard's innovative AI-driven solutions, including the groundbreaking Productboard Pulse, which accelerates ideation and discovery by aggregating and analyzing customer feedback. He explains how AI enables product managers to prioritize features, write detailed specifications, and align product roadmaps with organizational goals—all while driving ROI and delivering exceptional customer experiences.     We also discuss the evolution of product management, from its origins to its rise as a mainstream discipline in organizations of all sizes. Christian offers insights into how AI tools are reshaping the product management landscape, making it more efficient, collaborative, and impactful than ever before.     Don't forget to like, subscribe, and hit the notification bell to stay updated on the latest breakthroughs in AI, product innovation, and customer-centric technology!   Checkout Productboard Pulse: https://www.productboard.com/product/voice-of-customer/ Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI (00:00) Introduction (03:18) What is Productboard? (05:04) Christian Marek's Journey (07:20) Defining Product Management (08:45) How Productboard Was Born (10:05) Transforming Product Management   (14:32) Exploring Use Cases: Existing and New Product Development   (18:13) Scaling Productboard (20:33) Managing Complex Projects Without Productboard   (24:59) Industries Using Productboard (26:29) Why Startups Benefit from Productboard   (29:25) Closing the Feedback Loop on Product Management   (33:16) Non-Digital Applications of Productboard (36:27) AI Model Selection (39:48) The Future of Product Management with AI (43:30) C-Suite Engagement and ROI with Productboard   (45:15) How to Get Started with Productboard    

    #224 Tariq Shaukat: How Safe Is AI-Assisted Coding?

    Play Episode Listen Later Dec 11, 2024 53:27


    This episode is sponsored by Oracle. Oracle Cloud Infrastructure, or OCI is a blazing fast and secure platform for your infrastructure, database, application development, plus all your AI and machine learning workloads. OCI costs 50% less for compute and 80% less for networking. So you're saving a pile of money. Thousands of businesses have already upgraded to OCI, including MGM Resorts, Specialized Bikes, and Fireworks AI.   Cut your current cloud bill in HALF if you move to OCI now:  https://oracle.com/eyeonai   In this episode of the Eye on AI podcast, Tariq Shaukat, CEO of Sonar, joins Craig Smith to explore the future of code quality, security, and AI's role in software development.   Tariq shares his journey from leading roles at Google Cloud and Bumble to helming Sonar, a company disrupting code assurance for developers worldwide. With over 7 million users and support for 30+ programming languages, Sonar has become a critical tool in ensuring clean, maintainable, and secure code.   We dive into Sonar's innovative AI Code Assurance Workflow, which integrates seamlessly with generative AI tools like Copilot and Codium. Tariq discusses how Sonar addresses the challenges of AI-generated code, tackling issues like security vulnerabilities, maintainability problems, and the accountability crisis in today's coding landscape.   Tariq also unpacks the importance of hybrid deterministic and AI-driven approaches, the role of design and architecture in modern software development, and how Sonar is helping companies manage tech debt across billions of lines of code.   With Sonar's recent enterprise-grade SaaS launch and commitment to reducing developer toil, this episode offers valuable insights for developers, tech leaders, and anyone interested in the evolving intersection of AI and software engineering.   Don't forget to like, subscribe, and hit the notification bell for more discussions on AI, technology, and innovation!     Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI     (00:00) Introduction to Tariq Shaukat and Sonar (01:23) Overview of SonarQube (03:03) Deterministic Systems and AI Integration (07:36) Challenges of AI-Generated Code (10:12) Early Issue Detection in Development (12:33) Accountability in AI Code Generation (16:20) Importance of Rigorous Code Reviews (19:34) Managing Tech Debt with Continuous Improvement (22:16) Why Sonar Focuses on Integration (25:08) Reviewing Billion-Line Code Bases with Sonar (29:37) Tailoring Sonar for Specific Codebases and Workflows (32:40) Avoiding Overwhelming Developers with Noise (37:49) Governance and Managing Complex Codebases (40:50) Addressing Tech Debt in Legacy Systems (45:07) Sonar's Open-Source Model and Philosophy (48:11) What's Next for Sonar

    #224 Matt Panousis: How AI Is Disrupting The Dubbing Industry (LipDub AI)

    Play Episode Listen Later Dec 8, 2024 52:23


    Launch, run, and protect your business to make it official TODAY at https://www.legalzoom.com and use promo code EYEONAI to get 10% off any LegalZoom business formation product excluding subscriptions and renewals.     In this episode of the Eye on AI podcast, Matt Panousis, Co-Founder & COO @ LipDub AI, shares the journey behind the platform, a cutting-edge AI-powered lip-syncing tool revolutionizing the dubbing industry.   Matt joins Craig to explore the future of content localization, the evolution of LipDub, and how generative AI is transforming how audiences experience video content worldwide.   As a leader in AI and visual effects, Matt discusses how LipDub integrates advanced AI to enable real-time lip-syncing for dubbed audio, making Hollywood-quality dubbing affordable and accessible. From processing hours of video footage to handling complex, dynamic scenes, LipDub is empowering content creators to reach global audiences like never before.   We explore the challenges of creating realistic lip movements, the importance of mouth internals, textures, and fidelity, and how tools like LipDub stand apart from competitors like Rasque AI. Matt also introduces Vanity AI, LipDub's sister technology for automated digital makeup and de-aging, used by MARZ for high-end visual effects work.   Learn how LipDub is redefining the possibilities of content localization with AI and what the future holds as this technology evolves.   Don't miss this deep dive into AI-powered innovation with Matt Panousis. Like, subscribe, and hit the notification bell for more episodes exploring cutting-edge advancements in AI!   Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI     (00:00) Introduction & Matt Panousis Background (02:45) Origins of MARZ and LipDub (04:20) LipDub vs. Competitors (06:15) Integration with Dubbing Solutions (08:00) LipDub Technical Process (10:20) Competition (12:20) Time and Cost (14:00) Target Markets and Use Cases (19:00) Future Developments (22:00) Translation and Timing Challenges (28:00) Ethical Concerns and Misuse (30:30) Digital Watermarking (32:25) Real-World Examples and Future Products (34:00) Team Building and Research (37:00) Building the Advisory Board (38:10) Securing Research Talent (40:00) Funding and the Board (40:35) Future Products

    #224 Matt Panousis: How AI Is Disrupting The Dubbing Industry (LipDub AI) | Chinese Version

    Play Episode Listen Later Dec 8, 2024 49:26


    Launch, run, and protect your business to make it official TODAY at https://www.legalzoom.com and use promo code EYEONAI to get 10% off any LegalZoom business formation product excluding subscriptions and renewals.     In this episode of the Eye on AI podcast, Matt Panousis, Co-Founder & COO @ LipDub AI, shares the journey behind the platform, a cutting-edge AI-powered lip-syncing tool revolutionizing the dubbing industry.   Matt joins Craig to explore the future of content localization, the evolution of LipDub, and how generative AI is transforming how audiences experience video content worldwide.   As a leader in AI and visual effects, Matt discusses how LipDub integrates advanced AI to enable real-time lip-syncing for dubbed audio, making Hollywood-quality dubbing affordable and accessible. From processing hours of video footage to handling complex, dynamic scenes, LipDub is empowering content creators to reach global audiences like never before.   We explore the challenges of creating realistic lip movements, the importance of mouth internals, textures, and fidelity, and how tools like LipDub stand apart from competitors like Rasque AI. Matt also introduces Vanity AI, LipDub's sister technology for automated digital makeup and de-aging, used by MARZ for high-end visual effects work.   Learn how LipDub is redefining the possibilities of content localization with AI and what the future holds as this technology evolves.   Don't miss this deep dive into AI-powered innovation with Matt Panousis. Like, subscribe, and hit the notification bell for more episodes exploring cutting-edge advancements in AI!   Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI     (00:00) Introduction & Matt Panousis Background (02:45) Origins of MARZ and LipDub (04:20) LipDub vs. Competitors (06:15) Integration with Dubbing Solutions (08:00) LipDub Technical Process (10:20) Competition (12:20) Time and Cost (14:00) Target Markets and Use Cases (19:00) Future Developments (22:00) Translation and Timing Challenges (28:00) Ethical Concerns and Misuse (30:30) Digital Watermarking (32:25) Real-World Examples and Future Products (34:00) Team Building and Research (37:00) Building the Advisory Board (38:10) Securing Research Talent (40:00) Funding and the Board (40:35) Future Products

    #223 Kai Beckmann: Why Next-Gen Chips Are Critical for AI's Future

    Play Episode Listen Later Dec 4, 2024 44:39


    This episode is sponsored by Netsuite by Oracle, the number one cloud financial system, streamlining accounting, financial management, inventory, HR, and more.   NetSuite is offering a one-of-a-kind flexible financing program. Head to  https://netsuite.com/EYEONAI to know more.  In this episode of the Eye on AI podcast, we explore the cutting-edge world of semiconductor innovation and its role in the future of artificial intelligence with Kai Beckmann, CEO of Merck KGaA.   Kai takes us on a journey into the heart of semiconductor manufacturing, revealing how next-generation chips are driving the AI revolution. From the complex process of creating advanced chips to the increasing demands of AI on semiconductor technology, Kai shares how Merck is pioneering materials science to unlock unprecedented levels of computational power.   Throughout the conversation, Kai explains how AI's growth is reshaping the semiconductor industry, with innovations like edge AI, heterogeneous integration, and 3D chip architectures pushing the boundaries of performance. He highlights how Merck is using artificial intelligence to accelerate material discovery, reduce experimentation cycles, and create smarter, more efficient processes for the chips that power everything from smartphones to data centers.   Kai also delves into the global landscape of semiconductor manufacturing, discussing the challenges of supply chains, the cyclical nature of the industry, and the rapid technological advancements needed to meet AI's demands. He explains why the semiconductor sector is entering the "Age of Materials," where breakthroughs in materials science are enabling the next wave of AI-driven devices.   Like, subscribe, and hit the notification bell to stay tuned for more episodes! Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI (00:00) Introduction  (02:48) Merck KGaA (05:21) Foundations of Semiconductor Manufacturing   (07:57) How Chips Are Made (09:24) Exploring Materials Science (13:59) Growth and Trends in the Semiconductor Industry   (15:44) Semiconductor Manufacturing   (17:34) AI's Growing Demands on Semiconductor Tech (20:34) The Future of Edge AI  (22:10) Using AI to Disrupt Material Discovery   (24:58) How AI Accelerates Innovation in Semiconductors   (27:32) Evolution of Semiconductor Fabrication Processes   (30:08) Advanced Techniques: Chiplets, 3D Stacking, and Beyond   (32:29) Merck's Role in Global Semiconductor Innovation   (34:03) Major Markets for Semiconductor Manufacturing   (37:18) Challenges in Reducing Latency and Energy Consumption   (40:21) Exploring New Conductive Materials for Efficiency   

    #222 Andrew Feldman: How Cerebras Systems Is Disrupting AI Inference Technology

    Play Episode Listen Later Nov 28, 2024 42:26


    This episode is sponsored by Shopify.  Shopify is a commerce platform that allows anyone to set up an online store and sell their products. Whether you're selling online, on social media, or in person, Shopify has you covered on every base. With Shopify you can sell physical and digital products. You can sell services, memberships, ticketed events, rentals and even classes and lessons.   Sign up for a $1 per month trial period at http://shopify.com/eyeonai In this episode of the Eye on AI podcast, Andrew D. Feldman, Co-Founder and CEO of Cerebras Systems, unveils how Cerebras is disrupting AI inference and high-performance computing.   Andrew joins Craig Smith to discuss the groundbreaking wafer-scale engine, Cerebras' record-breaking inference speeds, and the future of AI in enterprise workflows. From designing the fastest inference platform to simplifying AI deployment with an API-driven cloud service, Cerebras is setting new standards in AI hardware innovation.   We explore the shift from GPUs to custom architectures, the rise of large language models like Llama and GPT, and how AI is driving enterprise transformation. Andrew also dives into the debate over open-source vs. proprietary models, AI's role in climate mitigation, and Cerebras' partnerships with global supercomputing centers and industry leaders.   Discover how Cerebras is shaping the future of AI inference and why speed and scalability are redefining what's possible in computing.   Don't miss this deep dive into AI's next frontier with Andrew Feldman.   Like, subscribe, and hit the notification bell for more episodes! Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI (00:00) Intro to Andrew Feldman & Cerebras Systems (00:43) The rise of AI inference (03:16) Cerebras' API-powered cloud (04:48) Competing with NVIDIA's CUDA (06:52) The rise of Llama and LLMs (07:40) OpenAI's hardware strategy (10:06) Shifting focus from training to inference (13:28) Open-source vs proprietary AI (15:00) AI's role in enterprise workflows (17:42) Edge computing vs cloud AI (19:08) Edge AI for consumer apps (20:51) Machine-to-machine AI inference (24:20) Managing uncertainty with models (27:24) Impact of U.S.–China export rules (30:29) U.S. innovation policy challenges (33:31) Developing wafer-scale engines (34:45) Cerebras' fast inference service (37:40) Global partnerships in AI (38:14) AI in climate & energy solutions (39:58) Training and inference cycles (41:33) AI training market competition  

    #221 Varun Mohan: The AI Tool That's Changing How Code is Written (Codeium)

    Play Episode Listen Later Nov 25, 2024 23:57


    Launch, run, and protect your business to make it official TODAY at https://www.legalzoom.com/ and use promo code EYEONAI to get 10% off any LegalZoom business formation product excluding subscriptions and renewals.  In this episode of the Eye on AI podcast, Varun Mohan, Co-Founder and CEO of Codeium, shares the journey behind building one of the most innovative AI-powered coding tools available today.    Varun joins Craig Smith to explore the future of software development, the evolution of Codeium, and how generative AI is revolutionizing the way developers interact with codebases.    As a leader in AI and software engineering, Varun discusses how Codeium integrates advanced AI to enable real-time code completion, refactoring, and even end-to-end application building. From processing billions of lines of code to making complex legacy systems like COBOL more accessible, Codeium is empowering developers to work smarter and faster.    We delve into the challenges of managing massive codebases, the importance of maintaining security and self-hosted solutions, and how tools like Codeium stand apart from competitors like Copilot and CodeWhisperer. Varun also introduces Cascade, the AI assistant within Codeium's custom IDE, Windsurf, which merges human ingenuity with AI's reasoning capabilities for a seamless coding experience.   Other fascinating topics include the role of AI in onboarding developers to complex systems, its ability to handle rare programming languages, and how Codeium is reshaping the software lifecycle from design to deployment.   Learn how Codeium is redefining the possibilities of software development with AI, and what the future holds as this technology continues to evolve.    Don't miss this deep dive into AI-powered innovation with Varun Mohan. Like, subscribe, and hit the notification bell for more episodes exploring cutting-edge advancements in AI! Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI (00:00) Introduction to Varun Mohan and Codeium (05:32) Introducing Codeium: AI-powered coding revolution (08:42) Competing with Copilot and CodeWhisperer (11:46) Using Codeium to streamline software development (14:01) Features of Cascade: Refactoring and automation (17:26) Building applications from scratch with Codeium (22:00) Enterprise adoption and recognition by JPMorgan Chase (23:14) The pivot to Codeium and lessons learned (24:32) The vision for Codeium's future

    #220 Terry Sejnowski: The Future of AI, ChatGPT & Deep Learning

    Play Episode Listen Later Nov 22, 2024 60:28


    This episode of Eye on AI is sponsored by Citrusx. Unlock reliable AI with Citrusx! Our platform simplifies validation and risk management, empowering you to make smarter decisions and stay compliant. Detect and mitigate AI vulnerabilities, biases, and errors with ease.    Visit http://email.citrusx.ai/eyeonai to download our free fairness use case and see the solution in action.     In this episode of the Eye on AI podcast, Terry Sejnowski, a pioneer in neural networks and computational neuroscience, joins Craig Smith to discuss the future of AI, the evolution of ChatGPT, and the challenges of understanding intelligence.   Terry, a key figure in the deep learning revolution, shares insights into how neural networks laid the foundation for modern AI, including ChatGPT's groundbreaking generative capabilities. From its ability to mimic human-like creativity to its limitations in true understanding, we explore what makes ChatGPT remarkable and what it still lacks compared to human cognition.   We also dive into fascinating topics like the debate over AI sentience, the concept of "hallucinations" in AI models, and how language models like ChatGPT act as mirrors reflecting user input rather than possessing intrinsic intelligence. Terry explains how understanding language and meaning in AI remains one of the field's greatest challenges.   Additionally, Terry shares his perspective on nature-inspired AI and what it will take to develop systems that go beyond prediction to exhibit true autonomy and decision-making.   Learn why AI models like ChatGPT are revolutionary yet incomplete, how generative AI might redefine creativity, and what the future holds for AI as we continue to push its boundaries.   Don't miss this deep dive into the fascinating world of AI with Terry Sejnowski. Like, subscribe, and hit the notification bell for more cutting-edge AI insights!   Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI     (00:00) Introduction to Terry Sejnowski and His Work  (03:02) The Origins of Modern AI and Neural Networks  (05:29) The Deep Learning Revolution and ImageNet  (07:11) Understanding ChatGPT and Generative AI  (12:34) Exploring AI Creativity (16:03) Lessons from Gaming AI: AlphaGo and Backgammon  (18:37) Early Insights into AI's Affinity for Language  (24:48) Syntax vs. Semantics: The Purpose of Language  (30:00) How Written Language Transformed AI Training  (35:10) Can AI Become Sentient?  (41:37) AI Agents and the Next Frontier in Automation  (45:43) Nature-Inspired AI: Lessons from Biology  (50:02) Digital vs. Biological Computation: Key Differences  (54:29) Will AI Replace Jobs? (57:07) The Future of AI

    #219 Avthar Sewrathan: How to Build Smarter AI Applications with PostgreSQL

    Play Episode Listen Later Nov 18, 2024 51:09


    This episode is sponsored by Netsuite by Oracle, the number one cloud financial system, streamlining accounting, financial management, inventory, HR, and more.   NetSuite is offering a one-of-a-kind flexible financing program. Head to  https://netsuite.com/EYEONAI to know more.     In this episode of the Eye on AI podcast, Avthar Sewrathan, Lead Technical Product Marketing Manager at Timescale, joins Craig Smith to explore how Postgres is transforming AI development with cutting-edge tools and open-source innovation.   With its robust, extensible framework, Postgres has become the go-to database for AI applications, from semantic search to retrieval-augmented generation (RAG). Avthar takes us through Timescale's journey from its IoT origins to disrupting the way developers handle vector search, embedding management, and high-performance AI workloads—all within Postgres.   We dive into Timescale's tools like PGVector, PGVector Scale, and PGAI Vectorizer, uncovering how they help developers build AI-powered systems without the complexity of managing multiple databases. Avthar explains how Postgres seamlessly handles structured and unstructured data, making it the perfect foundation for next-gen AI applications.   Learn how Postgres supports AI-driven use cases across industries like IoT, finance, and crypto, and why its open-source ecosystem is key to fostering collaboration and innovation.    Tune in to discover how Postgres is redefining AI databases, why Timescale's tools are a game-changer for developers, and what the future holds for AI innovation in the database space.   Don't forget to like, subscribe, and hit the notification bell for more AI insights!      Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI (00:00) Introduction to Avthar and Timescale (02:35) The origins of Timescale and TimescaleDB (05:06) What makes Postgres unique and reliable (07:17) Open-source philosophy at Timescale (12:04) Timescale's early focus on IoT and time series data (16:17) Applications in finance, crypto, and IoT (19:03) Postgres in AI: From RAG to semantic search (22:00) Overcoming scalability challenges with PGVector Scale (24:33) PGAI Vectorizer: Managing embeddings seamlessly (28:09) The PGAI suite: Tools for AI developers (30:33) Vectorization explained: Foundations of AI search (32:24) LLM integration within Postgres (35:26) Natural language interfaces and database workflows (38:11) Structured and unstructured data in Postgres (41:17) Postgres for everything: Simplifying complexity (44:52) Timescale's accessibility for startups and enterprises (47:46) The power of open source in AI
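
    To make the vector-search idea concrete, here is a generic pgvector sketch driven from Python with psycopg2; it is not Timescale's tooling, and the connection string, table name, and three-dimensional embeddings are placeholders.

        # Nearest-neighbour search with the pgvector extension. The '<->' operator
        # is pgvector's Euclidean distance; ordering by it returns the closest rows.
        import psycopg2

        conn = psycopg2.connect("dbname=app user=app password=secret host=localhost")
        cur = conn.cursor()

        cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
        cur.execute("""
            CREATE TABLE IF NOT EXISTS documents (
                id bigserial PRIMARY KEY,
                body text,
                embedding vector(3)  -- real embeddings are typically 384-1536 dims
            );
        """)
        cur.execute(
            "INSERT INTO documents (body, embedding) VALUES (%s, %s::vector)",
            ("hello world", "[0.1, 0.2, 0.3]"),
        )
        cur.execute(
            "SELECT body FROM documents ORDER BY embedding <-> %s::vector LIMIT 5",
            ("[0.1, 0.2, 0.25]",),
        )
        print(cur.fetchall())
        conn.commit()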

    #218 Jeff Boudier: The Future of Open Source AI Development (Hugging Face)

    Play Episode Listen Later Nov 13, 2024 49:55


    This episode is sponsored by Oracle. Oracle Cloud Infrastructure, or OCI is a blazing fast and secure platform for your infrastructure, database, application development, plus all your AI and machine learning workloads. OCI costs 50% less for compute and 80% less for networking. So you're saving a pile of money. Thousands of businesses have already upgraded to OCI, including MGM Resorts, Specialized Bikes, and Fireworks AI.   Cut your current cloud bill in HALF if you move to OCI now:  https://oracle.com/eyeonai     In this episode of the Eye on AI podcast, Jeff Boudier, Head of Product and Growth at Hugging Face, joins Craig Smith to uncover how the platform is empowering AI builders and driving the open-source AI revolution.   With a mission to democratize AI, Jeff walks us through Hugging Face's journey from a chatbot for teens to the leading platform hosting over 1 million public AI models, datasets, and applications. We explore how Hugging Face is bridging the gap between enterprises and open-source innovation, enabling developers to build cutting-edge AI solutions with transparency and collaboration.   Jeff dives deep into Hugging Face's tools and features, from hosting private and public models to fostering a thriving ecosystem of AI builders. He shares insights on the transformative impact of technologies like Transformers, transfer learning, and no-code solutions that make AI accessible to more creators than ever before.   We also discuss Hugging Face's latest innovation, ‘Hugs,' designed to help enterprises seamlessly integrate open-source AI within their infrastructure while retaining full control over their data and models.   Tune in to discover how Hugging Face is shaping the future of AI development, why open-source models are catching up with proprietary ones, and what trends are driving innovation across AI disciplines.   Don't forget to like, subscribe, and hit the notification bell for more!     Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI     (00:00) Introduction to Jeff Boudier (02:16) How Hugging Face Empowers AI Builders (05:26) Transition from Chatbot to Leading AI Platform (07:07) Hosting AI Models: Public and Private Options (10:13) What Does Hosting Models on Hugging Face Mean? (14:22) Hugging Face vs. GitHub: Key Differences (19:09) Navigating 1 Million Models on the Hugging Face Hub (22:33) Leaderboards and Filtering AI Models (25:26) Building Applications with Hugging Face Models (28:03) AI Innovation: From Code to Model-Driven Development (30:45) Frameworks for Agentic Systems and Hugging Chat (35:20) Open Source vs. Proprietary AI: The Future (40:41) Introducing ‘Hugs': Open AI for Enterprises (44:59) The Role of No-Code in AI Development (47:26) Hugging Face's Vision  
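
    As a small, generic example of pulling a hosted model from the Hub and running it locally (standard transformers usage, not a specific feature discussed in the episode; the model choice is an assumption):

        # Download a model from the Hugging Face Hub and run inference through the
        # transformers pipeline API. The model name is an illustrative choice.
        from transformers import pipeline

        classifier = pipeline(
            "sentiment-analysis",
            model="distilbert-base-uncased-finetuned-sst-2-english",
        )
        print(classifier("Open-source AI keeps getting better."))
        # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]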

    #217 Ben Goertzel: The Path to Artificial General Intelligence, Decentralized AI and AI Consciousness

    Play Episode Listen Later Nov 6, 2024 56:30


    This episode is sponsored by Legal Zoom.  Launch, run, and protect your business to make it official TODAY at https://www.legalzoom.com/ and use promo code Smith10 to get 10% off any LegalZoom business formation product excluding subscriptions and renewals. In this episode of the Eye on AI podcast, we dive into the world of Artificial General Intelligence (AGI) with Ben Goertzel, CEO of SingularityNET and a leading pioneer in AGI development.   Ben shares his vision for building machines that go beyond task-specific capabilities to achieve true, human-like intelligence. He explores how AGI could reshape society, from revolutionizing industries to redefining creativity, learning, and autonomous decision-making.    Throughout the conversation, Ben discusses his unique approach to AGI, which combines decentralized AI systems and blockchain technology to create open, scalable, and ethically aligned AI networks. He explains how his work with SingularityNET aims to democratize AI, making AGI development transparent and accessible while mitigating risks associated with centralized control.   Ben also delves into the philosophical and ethical questions surrounding AGI, offering insights into consciousness, the role of empathy, and the potential for building machines that not only think but also align with humanity's best values. He shares his thoughts on how decentralized AGI can avoid the narrow, profit-driven goals of traditional AI and instead evolve in ways that benefit society as a whole.   This episode offers a thought-provoking glimpse into the future of AGI, touching on the technical challenges, societal impact, and ethical considerations that come with creating truly intelligent machines.   Ben's perspective will leave you questioning not only what AGI can achieve, but also how we can guide it toward a positive future.   Don't forget to like, subscribe, and hit the notification bell to stay tuned for more!     Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI     (00:00) Introduction to Ben Goertzel (01:21) Overview of "The Consciousness Explosion"  (02:28) Ben's Background in AI and AGI (04:39) Exploring Consciousness and AI (08:22) Panpsychism and Views on Consciousness (10:32) The Path to the Singularity (13:28) Critique of Modern AI Systems (18:30) Perspectives on Human-Level AI and Creativity (21:42) Ben's AGI Paradigm and Approach (25:39) OpenCog Hyperon and Knowledge Graphs (31:12) Integrating Perception and Experience in AI (34:02) Robotics in AGI Development (35:06) Virtual Learning Environment for AGI (39:01) Creativity in AI vs. Human Intelligence (44:21) User Interaction with AGI Systems (48:22) Funding AGI Research Through Cryptocurrency (53:03) Final Thoughts on Compassionate AI (55:21) How to Get "The Consciousness Explosion" Book

    #216 Nenshad Bardoliwalla: Inside Vertex AI - The Ultimate AI Toolkit by Google Cloud

    Play Episode Listen Later Oct 30, 2024 51:47


    This episode of the Eye on AI podcast is sponsored by JLL. JLL's AI solutions are transforming the real estate landscape, accelerating growth, streamlining operations and unlocking hidden value in properties and portfolios. From predictive analytics to intelligent automation, JLL is creating smarter buildings, more efficient workplaces and sustainable cities. To learn more about JLL and AI, visit: jll.com/AI     In this episode of the Eye on AI podcast, we explore the world of AI at Google Cloud with Nenshad Bardoliwalla, Director of Product Management for Vertex AI.   Nenshad unpacks the three core layers of Vertex AI: the Model Garden, where users can access and evaluate a diverse range of models; the Model Builder, which supports model fine-tuning and prompt optimization; and the Agent Builder, designed to develop AI agents that can perform complex, goal-oriented tasks.   He shares insights into model evaluation strategies, the role of Google's Tensor Processing Units (TPUs) in scaling AI infrastructure, and how enterprises can choose the right models based on performance, cost, and regulatory requirements.   Nenshad also delves into the challenges and opportunities of AI prompt optimization, highlighting Google's approach to ensuring consistent outputs across different models. He discusses the ethical considerations in AI design, emphasizing the need for human oversight and clear guardrails to maintain safety.   Whether you're in AI, tech, or curious about AI's potential impact, this episode is packed with insights on next-gen AI deployment.   Don't forget to like, subscribe, and turn on notifications for more episodes!     Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI (00:00) Introduction to Nenshad Bardoliwalla & Vertex AI (01:52) Overview of Vertex AI's Three Core Layers (05:35) Nenshad's Journey to Google Cloud (06:36) Choosing the Right AI Model (08:00) Google's AI Infrastructure & Tensor Processing Units (TPUs) (10:15) Model Builder: Fine-Tuning & Prompt Optimization (12:11) Agent Builder: Building AI Agents with Tools & Planning (17:57) Model Evaluation & Prompt Management (21:23) Generative AI for Business Analysts (23:24) AI Model Modality & Use Case Selection (25:23) Popularity Distribution of AI Models (28:18) Prompt Optimization Tools (34:20) Building AI Agents: Real-World Use Cases & Ethical Safeguards (40:13) The Capabilities & Limitations of AI Agents (45:48) TPU vs. GPU (50:33) Future of AI at Google Cloud
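
    A minimal sketch of calling a Model Garden model through the Vertex AI SDK might look like the following; the project ID, region, and model name are placeholders, and the SDK surface has changed between releases, so treat this as indicative rather than exact.

        # Hypothetical minimal call to a generative model on Vertex AI.
        # Project, location, and model name are placeholders.
        import vertexai
        from vertexai.generative_models import GenerativeModel

        vertexai.init(project="my-gcp-project", location="us-central1")
        model = GenerativeModel("gemini-1.0-pro")   # example Model Garden model
        response = model.generate_content("Summarize last quarter's support tickets.")
        print(response.text)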

    #215 Manuel Haug: How Celonis Uses AI to Optimize Business Processes

    Play Episode Listen Later Oct 23, 2024 44:23


    In this episode of the Eye on AI podcast, we dive into process intelligence with Manuel Haug, Field CTO at Celonis.   Manuel shares how process mining is transforming business operations by connecting directly to digital systems to map workflows, optimize processes, and boost efficiency. He explains how Celonis builds digital twins of business processes, allowing companies to visualize and resolve inefficiencies with data-driven insights.   Manuel also explores the role of AI in process optimization, discussing the integration of generative AI and AI agents in Celonis. From automating repetitive tasks to enabling strategic decision-making, Manuel details how AI agents can enhance workflows, reduce costs, and deliver measurable productivity gains across industries.   Tune in to learn how AI agents are evolving, the potential of process intelligence graphs, and how Celonis is pioneering the future of autonomous enterprises.   Whether you're in business, tech, or curious about the impact of AI, this episode offers valuable insights into next-gen process optimization.   Don't forget to like, subscribe, and turn on notifications for more episodes on AI, automation, and digital transformation!     This episode of Eye on AI is sponsored by Citrusx. Navigating the complexities of AI risk management? Citrusx has you covered. Their innovative platform helps you make better business decisions and achieve reliable AI outcomes while staying compliant with regulatory standards. Citrusx enables you to manage AI risks effortlessly, connecting all stakeholders and providing continuous validation. Their solution can detect and mitigate vulnerabilities, biases, and errors, ensuring the accuracy, robustness, and compliance of AI models Visit https://email.citrusx.ai/eyeonai to book your free demo today!     Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI     (00:00) Introduction to Manuel Haug & Celonis (01:07) Understanding Process Intelligence (07:14) Integrating AI into Process Mining (12:12) Generative AI's Role in Process Optimization (15:18) Challenges of Large-Scale Process Integration (20:24) Role of AI Agents in Process Automation (23:43) AI Maturity & Implementation in Enterprises (27:23) Real-Life AI Agent Examples (30:47) Building and Integrating AI Agents (34:53) Future of AI Agents in Enterprises (37:56) Introducing the Process Intelligence Graph (39:20) Addressing Data Privacy Concerns
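
    Process mining starts from an event log of cases, activities, and timestamps. A toy pandas sketch of the first step, counting which activity follows which, is shown below; the data and column names are made up and this is not the Celonis engine.

        # Toy process-mining step: derive activity-to-activity transitions from an
        # event log. These counts are the raw material for a process map.
        import pandas as pd

        log = pd.DataFrame({
            "case_id":  [1, 1, 1, 2, 2, 2],
            "activity": ["Create PO", "Approve PO", "Pay Invoice",
                         "Create PO", "Pay Invoice", "Approve PO"],
            "timestamp": pd.to_datetime([
                "2024-01-01", "2024-01-02", "2024-01-05",
                "2024-01-03", "2024-01-04", "2024-01-06"]),
        })

        log = log.sort_values(["case_id", "timestamp"])
        log["next_activity"] = log.groupby("case_id")["activity"].shift(-1)
        transitions = log.dropna().groupby(["activity", "next_activity"]).size()
        print(transitions)   # paying before approval in case 2 shows up as a deviation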

    #214 Ece Kamar: Why AI Agents Are the Next Big Thing in Tech (Microsoft Research)

    Play Episode Listen Later Oct 17, 2024 61:01


    This episode is sponsored by RapidSOS. Close the safety gap and transform your emergency response with RapidSOS. Visit https://rapidsos.com/eyeonai/ today to learn how AI-powered safety can protect your people and boost your bottom line.     In this episode of the Eye on AI podcast, we dive deep into the world of AI agents with Ece Kamar, VP of Research and Managing Director of AI Frontiers Lab at Microsoft.   Ece shares her unique insights on the future of AI, discussing how AI agents are reshaping the way we interact with technology and perform tasks.   Throughout the episode, Ece explains the groundbreaking potential of AI agents, describing how they act as autonomous entities that can perceive, learn, and carry out complex tasks in real time. She discusses the revolutionary shift from traditional AI models to agentic workflows, highlighting how multi-agent systems like Microsoft's AutoGen are creating scalable solutions for industries and everyday life. Ece also shares her thoughts on building responsible AI, touching on the ethical challenges and safety concerns that come with the rise of autonomous agents.   We explore how multi-agent systems can scale to millions of agents, and how they are transforming enterprises by automating complex workflows, personalizing customer experiences, and pushing the boundaries of AI development. Ece's perspective on the future of AI in scientific discovery, as well as her work in responsible AI, offers a thought-provoking glimpse into what lies ahead.   Don't forget to like, subscribe, and hit the notification bell to stay updated on the latest in AI, automation, and ethical tech!     Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI     (00:00) Preview and Introduction (03:11) What Are AI Agents? (04:43) Building Responsible AI at Microsoft (10:55) The Rise of Agentic Workflows (12:30) Multi-Agent Systems and AutoGen (18:04) Scaling Multi-Agent Systems (20:22) The Creation and Evolution of AutoGen (23:07) Real-World Applications of AutoGen (25:52) Large-Scale Simulations with AI Agents (27:36) The Role of AI Agents in Scientific Discovery (31:20) AI Agents and Complex Reasoning (36:49) Challenges in Defining Agent Boundaries (39:12) The Risk of Agents Interacting with Each Other (43:59) Building Trustworthy and Safe AI Agents (48:44) Learning from Human Factors in Automation (50:50) Why Speed and Coordination Matter in AI Development (55:08) The Future of AI Agents in Enterprises (57:47) Low-Code/No-Code Development for AI Agents
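
    To give a feel for the agentic workflow described here, below is a minimal two-agent sketch using the open-source AutoGen package (pyautogen 0.2-style API, which has since evolved); the model name and API key are placeholders.

        # Two-agent AutoGen sketch: an assistant plans and writes code, and a user
        # proxy executes it and feeds results back until the task is done.
        import autogen

        llm_config = {"config_list": [{"model": "gpt-4o-mini", "api_key": "YOUR_KEY"}]}

        assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)
        user_proxy = autogen.UserProxyAgent(
            name="user_proxy",
            human_input_mode="NEVER",      # fully autonomous loop for the sketch
            code_execution_config={"work_dir": "scratch", "use_docker": False},
        )

        user_proxy.initiate_chat(
            assistant,
            message="Write and run a Python snippet that prints the first 10 primes.",
        )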

    #213 Mark Surman: How Mozilla Is Shaping the Future of Open-Source AI

    Play Episode Listen Later Oct 13, 2024 47:14


    This episode is sponsored by Oracle. AI is revolutionizing industries, but needs power without breaking the bank. Enter Oracle Cloud Infrastructure (OCI): the one-stop platform for all your AI needs, with 4-8x the bandwidth of other clouds. Train AI models faster and at half the cost. Be ahead like Uber and Cohere.   If you want to do more and spend less like Uber, 8x8, and Databricks Mosaic - take a free test drive of OCI at https://oracle.com/eyeonai     In this episode of the Eye on AI podcast, we sit down with Mark Surman, President of Mozilla, to explore the future of open-source AI and how Mozilla is leading the charge for privacy, transparency, and ethical technology.   Mark shares Mozilla's vision for AI, detailing the company's innovative approach to building trustworthy AI and the launch of Mozilla AI. He explains how Mozilla is working to make AI open, accessible, and secure for everyone—just as it did for the web with Firefox. We also dive into the growing importance of federated learning and AI governance, and how Mozilla Ventures is supporting groundbreaking companies like Flower AI.   Throughout the conversation, Mark discusses the critical need for open-source AI alternatives to proprietary models like OpenAI and Meta's LLaMA. He outlines the challenges with closed systems and highlights Mozilla's work in giving users the freedom to choose AI models directly in Firefox.   Mark provides a fascinating look into the future of AI and how open-source technologies can create trillions in economic value while maintaining privacy and inclusivity. He also sheds light on the global race for AI innovation, touching on developments from China and the impact of public AI funding.   Don't forget to like, subscribe, and hit the notification bell to stay up to date with the latest trends in AI, open-source tech, and machine learning!     Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI   (00:00) Introduction to Mark Surman and Mozilla's Mission (02:01) The Evolution of Mozilla: From Firefox to AI (04:40) Open-Source Movement and Mozilla's Legacy (06:58) The Role of Open-Source in AI (11:06) Advancing Federated Learning and AI Governance (14:10) Integrating AI Models into Firefox (16:28) Open vs Closed Models (22:09) Partnering with Non-Profit AI Labs for Open-Source AI (25:08) How Meta's Strategy Compares to OpenAI and Others (27:58) Global Competition in AI Innovation (31:17) The Cost of Training AI Models (33:36) Public AI Funding and the Role of Government (37:40) The Geopolitics of AI and Open Source (41:35) Mozilla's Vision for the Future of AI and Responsible Tech
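
    Federated learning, mentioned above in connection with Flower AI, can be illustrated with a tiny federated-averaging toy in plain NumPy; this is a conceptual sketch of FedAvg, not Flower's API, and the data is synthetic.

        # Toy federated averaging: each client fits a model on its own data and only
        # the weights are shared; the server averages them, weighted by the number
        # of samples, without ever seeing the raw data.
        import numpy as np

        rng = np.random.default_rng(0)
        true_w = np.array([2.0, -1.0])

        def local_update(n_samples):
            X = rng.normal(size=(n_samples, 2))
            y = X @ true_w + rng.normal(scale=0.1, size=n_samples)
            w, *_ = np.linalg.lstsq(X, y, rcond=None)   # local "training" step
            return w, n_samples

        client_results = [local_update(n) for n in (50, 200, 120)]
        total = sum(n for _, n in client_results)
        global_w = sum(w * n for w, n in client_results) / total
        print(global_w)   # close to [2.0, -1.0]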

    #212 Thomas Dietterich: The Future of Machine Learning, Deep Learning and Computer Vision

    Play Episode Listen Later Oct 9, 2024 56:27


    This episode is sponsored by Speechmatics. Check it out at www.speechmatics.com/realtime     In this episode of the Eye on AI podcast, host Craig Smith sits down with Thomas G. Dietterich, a pioneer in the field of machine learning, to explore the evolving landscape of AI and its application in real-world problems.   Thomas shares his journey from the early days of AI, where rule-based systems dominated, to the breakthroughs in deep learning that have revolutionized computer vision. He delves into the challenges of detecting novelty in AI, emphasizing the importance of teaching machines to recognize "unknown unknowns."   The conversation highlights the growing field of computational sustainability, where AI is used to solve pressing environmental problems, from designing new materials to optimizing wildfire management. Thomas also provides insights into the role of transformers and generative AI, discussing their power and limitations, particularly in tasks like object recognition and problem formulation.   Join us for a deep dive into the future of AI, where Thomas explains why the development of novel materials and drugs may have the most transformative impact on our economy. Plus, hear about his latest work on multi-instance learning, weak supervision, and the role of reinforcement learning in real-world applications like wildfire management.   Don't forget to like, subscribe, and hit the notification bell to stay updated on the latest trends and insights in AI and machine learning!     Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI     (00:00) Introduction to Thomas Dietterich's Machine Learning Journey (02:34) The Early Days of Machine Learning and AI Systems (04:29) Tackling the Multiple Instance Problem in Drug Design (05:41) AI in Sustainability (07:17) The Challenge of Novelty Detection in AI Systems (08:00) Addressing the Open Set Problem in Cybersecurity and Computer Vision (09:11) The Evolution of Deep Learning in Computer Vision (11:21) How Deep Learning Handles Novel Representations (12:01) Foundation Models and Self-Supervised Learning (14:11) Vision Transformers vs. Convolutional Neural Networks (16:05) The Role of Multi-Instance Learning in Weakly Labeled Data (18:36) Ensemble Learning and Deep Networks in Machine Learning (20:33) The Future of AI: Large Language Models and Their Applications (23:51) Symbolic Regression and AI's Role in Scientific Discovery (34:44) AI in Wildfire Management: Using Reinforcement Learning (39:32) AI-Driven Problem Formulation and Optimization in Industry (41:30) The Future of AI Reasoning Systems and Problem Solving (45:03) The Limits of Large Language Models in Scientific Research (50:12) Closing Thoughts: Open Challenges and Opportunities in AI  

    #211 Stuart Hameroff: Why AI Will Never Fully Replicate Human Consciousness

    Play Episode Listen Later Oct 6, 2024 75:46


    This episode is sponsored by Netsuite by Oracle, the number one cloud financial system, streamlining accounting, financial management, inventory, HR, and more. NetSuite is offering a one-of-a-kind flexible financing program. Head to https://netsuite.com/EYEONAI to know more.   In this episode of the Eye on AI podcast, we dive into the world of quantum consciousness with Stuart Hameroff, a pioneer in the field of consciousness studies and co-developer of the controversial Orch OR theory.   Stuart Hameroff takes us on a journey through the intersection of quantum mechanics and the human mind, explaining how microtubules within neurons could be the key to unlocking the mysteries of consciousness.   Stuart delves into his work with physicist Roger Penrose, where they propose that consciousness arises from quantum processes in the brain, deeply embedded in the fabric of spacetime itself. We explore how this theory challenges mainstream neuroscience, which often reduces the mind to simple neural activity, and instead suggests that consciousness may have a profound connection to the universe's underlying structure.   Throughout the conversation, Stuart addresses the debate over AI consciousness, asserting that true conscious experience cannot arise from mere computation but requires quantum processes. He shares insights on the latest experiments in anesthesia and quantum biology, offering a fresh perspective on how the brain might function on a deeper, quantum level.   Join us as we unpack the groundbreaking Orch OR theory and what it could mean for the future of science, technology, and our understanding of reality.   Don't forget to like, subscribe, and hit the notification bell to stay updated on the latest cutting-edge discussions in AI, quantum theory, and consciousness research!     Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI     (00:00) Preview (03:26) Consciousness and Dualism vs. Materialism (04:51) Anesthesia and Consciousness: Hameroff's Perspective (07:30) Roger Penrose's Perspective on Consciousness (09:51) Penrose's Explanation of Quantum Superposition (12:52) The Collapse of Quantum Superposition and Consciousness (14:41) Microtubules and Their Role in Consciousness (17:08) Critique of Current Neuroscience Approaches (22:28) Discovering the Microtubule's Role in Information Processing (26:08) Microtubules as Cellular Automata (28:48) The Role of Frohlich Coherence in Quantum Biology (31:27) Meeting Roger Penrose and Connecting with His Work (33:52) Collaboration with Penrose: Developing the Theory (37:05) Challenges and Criticisms of the Theory (43:06) Advances in Quantum Consciousness Research (46:18) Hierarchical Models in the Brain (51:10) Entanglement and Consciousness (55:03) The Mystery of Anesthesia's Selective Impact on Consciousness (57:07) Quantum Effects and Anesthesia's Mechanism (01:00:22) The Search for Anesthesia's Target Protein (01:04:30) Experimental Evidence for Quantum Effects in Biology (01:09:33) Consciousness as a Quantum Physical Effect

    #210 Pedro Domingos: Exploring AI's Impact on Politics and Society

    Play Episode Listen Later Sep 30, 2024 57:27


    This episode is sponsored by Bloomreach. Bloomreach is a cloud-based e-commerce experience platform and B2B service specializing in marketing automation, product discovery, and content management systems.   Check out Bloomreach: https://www.bloomreach.com Explore Loomi AI: https://www.bloomreach.com/en/products/loomi Other Bloomreach products: https://www.bloomreach.com/en/products     In this episode of the Eye on AI podcast, we sit down with Pedro Domingos, professor of computer science and author of The Master Algorithm and 2040, to dive deep into the future of artificial intelligence, machine learning, and AI governance.   Pedro shares his expertise in AI, offering a unique perspective on the real dangers and potential of AI, far from the apocalyptic fears of superintelligence taking over. We explore his satirical novel, 2040, where an AI candidate for president—Prezibot—raises questions about control, democracy, and the flaws in both AI systems and human decision-makers.   Throughout the episode, Pedro sheds light on Silicon Valley's utopian dreams clashing with its dystopian realities, highlighting the contrast between tech innovation and societal challenges like homelessness. He discusses how AI has already integrated into our daily lives, from recommendation systems to decision-making tools, and what this means for the future.   We also unpack the ongoing debate around AI safety, the limits of current AI models like ChatGPT, and why he believes AI is more of a tool to amplify human intelligence rather than an existential threat. Pedro offers his insights into the future of AI development, focusing on how symbolic AI and neural networks could pave the way for more reliable and intelligent systems.   Don't forget to like, subscribe, and hit the notification bell to stay updated on the latest insights into AI, machine learning, and tech culture.   Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI     (00:00) Preview and Introduction (01:06) Pedro's Background and Contributions to AI (03:36) The Satirical Take on AI in '2040' (05:42) AI Safety Debate: Geoffrey Hinton vs. Yann LeCun (08:06) Debunking AI's Real Risks (12:45) Satirical Elements in '2040': HappyNet and Prezibot (17:57) AI as a Decision-Making Tool: Potential and Risks (22:55) The Limits of AI as an Arbiter of Truth (27:35) Crowdsourced AI: PreziBot 2.0 and Real-Time Decision Making (29:54) AI Governance and the Kill Switch Debate (37:42) Integrating AI into Society: Challenges and Optimism (47:11) Pedro's Current Research and Future of AI (55:17) Scaling AI and the Future of Reinforcement Learning  

    #209 Bill Moore: How BCG Created a Conversational AI Co-Host (Inside the Development of GENE)

    Play Episode Listen Later Sep 25, 2024 39:42


    In this episode of the Eye on AI podcast, Craig Smith sits down with Bill Moore of BCG X to explore the development of Gene, an AI-powered co-host transforming the way we engage with conversational AI.   Bill walks us through the journey of creating Gene, a real-time conversational agent built to interact seamlessly with human hosts. He shares insights into the technical challenges they faced, from improving latency to expanding context windows, and how Speechmatics' cutting-edge speech-to-text technology helped achieve over 90% accuracy in real time.   We dive deep into the ethics of AI, particularly gender representation, as the conversation touches on why assigning gender to AI assistants can reinforce outdated stereotypes. Gene, as a gender-neutral conversational AI, challenges these norms, providing a clear distinction between human-like interaction and AI-driven dialogue.   Join us for an in-depth discussion on the future of AI in media, the balance between human creativity and AI's analytical power, and the ethical considerations every company should be mindful of when integrating AI into their operations.   Don't forget to like, subscribe, and hit the notification bell for more deep dives into AI innovation, ethics, and the future of conversational agents!   Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI   (00:00) Introduction to Gene: BCG's Conversational AI (02:19) Bill Moore's Background (04:22) Early Challenges in Building Real-Time AI (07:20) Technical Aspects: Prompting and Configuring Gene (09:28) The Use of Vector Databases and Sparse Priming (12:01) How Gene Handles Long Conversations (14:46) Building Conversational AI (19:07) Gene Introduces Itself: AI as a Podcast Co-Host (22:00) The Ethics of AI (29:35) Should AI Have a Physical Representation? (32:05) AI and Children: Nurturing Healthy Interactions (36:16) Adoption of Conversational AI in Businesses (38:11) Ethical Considerations for AI in the Future

    #208 Michael Martin: How AI is Saving Lives Every Day (Inside RapidSOS's Groundbreaking Tech)

    Play Episode Listen Later Sep 18, 2024 54:26


    This episode is sponsored by Shopify. Shopify is a commerce platform that allows anyone to set up an online store and sell their products. Whether you're selling online, on social media, or in person, Shopify has you covered on every base. With Shopify you can sell physical and digital products. You can sell services, memberships, ticketed events, rentals and even classes and lessons.   Sign up for a $1 per month trial period at http://shopify.com/eyeonai     In this episode of the Eye on AI podcast, Craig Smith sits down with Michael Martin, CEO of RapidSOS, to explore how AI and connected devices are aiding emergency response and public safety.   With billions of sensor feeds from over 540 million devices—including wearables, vehicles, and security systems—RapidSOS integrates life-saving data to provide real-time support for 911 and first responders across the U.S. and beyond. Michael shares how their platform is reducing response times, improving accuracy, and transforming the way emergencies are handled by public safety agencies.   We dive deep into RapidSOS's AI-powered platform, which fuses human and machine intelligence to deliver faster, more effective emergency responses. Michael also discusses the challenges of scaling such technology globally, the role of AI in predictive emergency management, and the importance of integrating these solutions into legacy systems used by first responders.   Whether it's saving lives through faster 911 responses, preventing emergencies before they happen, or leveraging AI to enhance public safety, this episode offers a compelling look at the future of emergency services. Join us to uncover how technology is reshaping public safety and emergency response on a global scale.   Don't forget to like, subscribe, and hit the notification bell for more cutting-edge discussions on AI, emergency services, and technology innovation!   Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI     (00:00) Preview (01:58) RapidSOS and Michael Martin's Journey (06:39) Integration with Wearable Devices and Home Systems (11:59) RapidSOS's Use Cases (23:12) RapidSOS's Foundational Models (31:31) How RapidSOS Handles Data (36:11) How RapidSOS Aids Public Safety (49:56) Is RapidSOS Integrated into Hospitals? (53:00) EmergencyProfile.org's Role in Helping First Responders (55:36) The Future of Public Safety with AI and RapidSOS

    #207 Noa Srebrnik: How to Mitigate AI Risk in High-Stakes Industries

    Play Episode Listen Later Sep 15, 2024 47:23


    This episode is sponsored by Oracle. AI is revolutionizing industries, but needs power without breaking the bank. Enter Oracle Cloud Infrastructure (OCI): the one-stop platform for all your AI needs, with 4-8x the bandwidth of other clouds. Train AI models faster and at half the cost. Be ahead like Uber and Cohere.   If you want to do more and spend less like Uber, 8x8, and Databricks Mosaic - take a free test drive of OCI at https://oracle.com/eyeonai     In this episode of the Eye on AI podcast, Craig Smith sits down with Noa Srebrnik from Citrusx.ai to explore how AI is transforming risk management and compliance in high-risk industries like finance and insurance.   With a background spanning regulation, risk, and technology, Noa guides us through Citrusx.ai's innovative approach to enabling organizations to leverage AI responsibly while operating within complex regulatory environments.   We dive deep into Citrusx.ai's AI platform, designed to validate, monitor, and explain AI models, ensuring transparency, fairness, and compliance. Noa explains how their proprietary synthetic data technology identifies hidden risks, allowing companies to confidently scale AI solutions across different use cases without compromising governance.   We also discuss the evolving regulatory landscape, including the challenges of navigating fragmented AI regulations in the U.S. and the EU AI Act's influence on high-risk sectors.   Join us as we uncover how AI can drive innovation while maintaining regulatory accountability and why risk management is crucial for the future of AI in finance and beyond.   Don't forget to like, subscribe, and hit the notification bell for more in-depth discussions on AI, regulation, and risk management!   Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

    #206 Krishna Rangasayee: Why Edge AI is the Next Big Thing in Tech

    Play Episode Listen Later Sep 4, 2024 50:25


    This episode is sponsored by Netsuite by Oracle, the number one cloud financial system, streamlining accounting, financial management, inventory, HR, and more. NetSuite is offering a one-of-a-kind flexible financing program. Head to https://netsuite.com/EYEONAI to know more.     In this episode of the Eye on AI podcast, Craig Smith sits down with Krishna Rangasayee, CEO of SiMa.ai, to explore innovations in AI at the edge and how embedded systems are transforming industries like robotics, automotive, and industrial automation.   Krishna brings over 30 years of experience in software and silicon design, leading us through SiMa.ai's vision of disrupting edge computing with their Machine Learning System on Chip (MLSoC).   We dive deep into how SiMa.ai's chips are designed to accelerate AI workloads at the edge, providing unmatched performance and efficiency compared to cloud-based solutions.   We also discuss the future of AI hardware, including the role of open-source platforms, the challenges of scaling AI in embedded systems, and why the edge market is poised to outgrow the cloud in the coming years.   Krishna shares his insights on the importance of AI software architecture, how SiMa.ai's platform simplifies integration for developers, and the emerging potential of neuromorphic and quantum computing.   Join us as we delve into the next big AI gold rush, the edge market, and learn why AI-powered edge devices will reshape the way industries operate.   Don't forget to like, subscribe, and hit the notification bell for more exclusive insights into the future of AI and edge computing!   Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI   (00:00) Preview and Introduction (02:13) SiMa.ai's Focus on Chip Design for the Edge (05:38) How is SiMa.ai Different from NVIDIA's CUDA? (08:29) The Evolution of Chip Architecture (11:50) AI's Move to the Edge Market? (19:19) Industries Driving Demand for SiMa.ai's Technology (21:25) How Do Edge Applications Work? (25:09) Chip Design for Edge Devices (33:01) SiMa.ai's Scaling Plans (36:12) What is SiMa.ai Focusing On in the Future? (40:30) SiMa.ai's Fundraising Journey (42:57) Hiring and Retaining Top Talent in the Industry (46:01) The Future of Chip Technology
