In this episode, Dipendra Kumar, Staff Research Scientist, and Alnur Ali, Staff Software Engineer at Databricks, discuss the challenges of applying AI in enterprise environments and the tools being developed to bridge the gap between research and real-world deployment.

Highlights include:
- The challenges of real-world AI—messy data, security, and scalability.
- Why enterprises need high-accuracy, fine-tuned models over generic AI APIs.
- How QuickFix learns from user edits to improve AI-driven coding assistance.
- The collaboration between research & engineering in building AI-powered tools.
- The evolving role of developers in the age of generative AI.
Episode: Balancing AI and Human Connection in B2B Marketing

In this episode of the B2B Marketing Excellence Podcast, I'm diving into a topic that's close to my heart — how we can embrace AI while still preserving the human touch that builds strong, lasting business relationships.

At World Innovators, our family-run agency has spent the last 44 years helping B2B companies reach the right audience through strategic marketing rooted in trust, personalization, and genuine care. Today, AI is offering incredible tools to support those efforts — but it's how we use them that matters most.

In this episode, I share real stories (including a powerful post-conference message I received) and practical ways to use AI as a supportive partner — not a substitute. Whether it's using AI for note-taking, preparing for outreach, or personalizing communication, I walk through how to make these tools work for your brand while keeping your values front and center.

Let's talk about how to build deeper, more meaningful connections — with a little help from technology, and a lot of heart.

⏱️ Episode Breakdown:
• 00:00 – Introduction: Balancing AI and Human Connection
• 00:13 – The Legacy of Building Relationships
• 00:53 – Leveraging AI Without Losing the Human Touch
• 02:05 – Real-Life Experiences and Insights
• 02:57 – The Role of AI in Enhancing Relationships
• 05:47 – Practical Applications of AI in Business
• 08:40 – The Human Element in AI-Assisted Communication
• 12:31 – AI as a Support System, Not a Substitute
• 16:25 – Conclusion: Embracing AI for Deeper Connections

At World Innovators, we believe B2B marketing should be about building relationships, not just generating clicks. For 44 years, we've helped industrial and executive education brands find the right people through trusted media sources, curated email lists, and strategic outreach.
In this episode, Josiah Mackenzie shares some top takeaways from his latest research, including how 87% of hospitality professionals participating in the study already use AI to improve efficiency, creativity, and guest experience. Listen now for practical examples, underutilized AI opportunities, and actionable insights you can use in your hotel or hospitality business.

Also see:
- AI 2027 Project
- What AI Might Bring Hotels in 2025 - Martin Soler
- America's Chief AI Officer for Travel Shares Advice for 2025 - Janette Roush, Brand USA
- Less Ringing, More Hospitality: AI-Powered PBX To Give Our Teams More Time for Guests - Steven Marais, Noble House Hotels & Resorts
- AI & Hotel Tech Bets For Our People-First Approach - Dina Belon, Staypineapple Hotels
- The Future of Hotel Management: Automation, AI, and Innovation - Sloan Dean, Remington Hospitality
- AI's Impact On Our Business - Ernest Lee, citizenM
- How AI Helps Me Run More Profitable Hotels - Sean Murphy, The Bower
- 50 Days, 50 Concepts: Rethinking Experiential Hospitality with Generative AI - Dylan Barahona

A few more resources:
- If you're new to Hospitality Daily, start here.
- You can send me a message here with questions, comments, or guest suggestions.
- If you want to get my summary and actionable insights from each episode delivered to your inbox each day, subscribe here for free.
- Follow Hospitality Daily and join the conversation on YouTube, LinkedIn, and Instagram.
- If you want to advertise on Hospitality Daily, here are the ways we can work together.

If you found this episode interesting or helpful, send it to someone on your team so you can turn the ideas into action and benefit your business and the people you serve!

Music for this show is produced by Clay Bassford of Bespoke Sound: Music Identity Design for Hospitality Brands.
Alejandro and Julia of theluddite.org join us to debunk some terrible AI research, and the bad reporting compounding the problems on top of that. Also, what is AI? Can it ever think for itself? Are you an expert in something and want to be on the show? Apply here! Please support the show on patreon! You get ad free episodes, early episodes, and other bonus content! This content is CAN credentialed, which means you can report instances of harassment, abuse, or other harm on their hotline at (617) 249-4255, or on their website at creatoraccountabilitynetwork.org.
Members of Elon Musk's Department of Government Efficiency now have access to technical systems maintained by United States Citizenship and Immigration Services, according to a recent memorandum viewed by FedScoop. The memo, which was sent from and digitally signed by USCIS Chief Information Officer William McElhaney, states that Kyle Shutt, Edward Coristine, Aram Mogahaddassi and Payton Rehling were granted access to USCIS systems and data repositories, and that a Department of Homeland Security review was required to determine whether that access should continue.

Coristine, 19, is one of the more polarizing members of DOGE. He previously provided assistance to a cybercrime ring through a company he operated while he was in high school, according to other news outlets. Coristine worked for a short period at Neuralink, Musk's brain implant company, and was previously stationed by DOGE at the Cybersecurity and Infrastructure Security Agency.

The memo, dated March 28, asks DHS Deputy Secretary Troy Edgar to have his office review and provide direction for the four DOGE men regarding their access to the agency's “data lake” — called USCIS Data Business Intelligence Services — as well as two associated enabling technologies, Databricks and Github. The document says DHS CIO Antoine McCord and Michael Weissman, the agency's chief data officer, asked USCIS to enable Shutt and Coristine's access to the USCIS data lake in mid-March, and Mogahaddassi requested similar access days later.

A bipartisan bill to fully establish a National Science Foundation-based resource aimed at providing essential tools for AI research to academics, nonprofits, small businesses and others was reintroduced in the House last week. Under the Creating Resources for Every American To Experiment with Artificial Intelligence (CREATE AI) Act of 2025 (H.R. 2385), a full-scale National AI Research Resource would be codified at NSF. While that resource currently exists in pilot form, legislation authorizing the NAIRR is needed to continue that work. Rep. Jay Obernolte, R-Calif., who sponsors the bill, said in a written statement announcing the reintroduction: “By empowering students, universities, startups, and small businesses to participate in the future of AI, we can drive innovation, strengthen our workforce, and ensure that American leadership in this critical field is broad-based and secure.” The NAIRR pilot, as it stands, is a collection of resources from the public and private sectors — such as computing power, storage, AI models, and data — that are made available to those researching AI to make the process of accessing those types of tools easier.

The Daily Scoop Podcast is available every Monday-Friday afternoon. If you want to hear more of the latest from Washington, subscribe to The Daily Scoop Podcast on Apple Podcasts, Soundcloud, Spotify and YouTube.
OpenAI researcher Adam Kalai sits down with UC San Diego professor Mikhail Belkin to discuss his work in machine learning, algorithmic fairness, and artificial intelligence. Kalai has contributed research in areas like fairness in AI models, word embeddings, and human-AI collaboration. He has worked at Microsoft Research and has published influential papers on bias in machine learning models. His work has helped shape discussions on ethical AI and the development of more equitable AI systems. Series: "Data Science Channel" [Science] [Show ID: 40264]
In today's show: The news of the day; Czech family's plea for bone marrow donor sparks nationwide response; bringing global and space phenomena to life in Žatec; and for our feature, we have Jakub Ferenčík's interview with Lea-Ann Germinder, an AI researcher, on Czech-US alignment in AI regulation, and more.
Startups are changing quickly. That means venture capital is changing just as fast.
In this episode of the Security Matters podcast, host David Puner is joined by Lavi Lazarovitz, Vice President of Cyber Research at CyberArk Labs, to explore the transformative impact of AI agents on cybersecurity and automation. They discuss real-world scenarios where AI agents monitor security logs, flag anomalies, and automate responses, highlighting both the opportunities and risks associated with these advanced technologies.

Lavi shares insights into the evolution of AI agents, from chatbots to agentic AI, and the challenges of building trust and resilience in AI-driven systems. The conversation delves into the latest research areas, including safety, privacy, and security, and examines how different industries are adopting AI agents to handle vast amounts of data.

Tune in to learn about the critical security challenges posed by AI agents, the importance of trust in automation, and the strategies organizations can implement to protect their systems and data. Whether you're a cybersecurity professional or simply curious about the future of AI, this episode offers valuable insights into the rapidly evolving world of AI agents.
AI is changing the way businesses grow, and if you're not using it for research, you're falling behind. Whether it's market trends, audience insights, or competitor analysis, AI tools like ChatGPT and Gemini can give you a massive edge. In this podcast, I'll show you how to use AI research to make smarter business decisions, build a strong brand, and scale faster than ever.

Want to see exactly how it works? Watch now and start leveraging AI to boost your business today.
Kevin Reid-Morris has led much of MDM's qualitative AI industry research over the past few years, and he's set to share the results of his major two-year project at our SHIFT conference. Here, he and Tom Gale discuss the transformative impact of AI in distribution and that project, which identified over 50 practical AI use cases that distributors can implement to enhance operational efficiency and gain a competitive edge.
This episode is sponsored by Netsuite by Oracle, the number one cloud financial system, streamlining accounting, financial management, inventory, HR, and more. NetSuite is offering a one-of-a-kind flexible financing program. Head to https://netsuite.com/EYEONAI to learn more.

Can AI learn like humans? In this episode, Patrick Pilarski, Canada CIFAR AI Chair and professor at the University of Alberta, breaks down The Alberta Plan—a bold roadmap for achieving Artificial General Intelligence (AGI) through reinforcement learning and real-time experience-based AI. Unlike large pre-trained models that rely on massive datasets, The Alberta Plan champions continual learning, where AI evolves from raw sensory experience, much like a child learning through trial and error. Could this be the key to unlocking true intelligence?

Pilarski also shares insights from his groundbreaking work in bionic medicine, where AI-powered prosthetics are transforming human-machine interaction. From neuroprostheses to reinforcement learning-driven robotics, this conversation explores how AI can enhance—not just replace—human intelligence.

What You'll Learn in This Episode:
- Why reinforcement learning is a better path to AGI than pre-trained models
- The four core principles of The Alberta Plan and why they matter
- How AI-driven bionic prosthetics are revolutionizing human-machine integration
- The battle between reinforcement learning and traditional control systems in robotics
- Why continual learning is critical for AI to avoid catastrophic forgetting
- How reinforcement learning is already powering real-world breakthroughs in plasma control, industrial automation, and beyond

The future of AI isn't just about more data—it's about AI that thinks, adapts, and learns from experience. If you're curious about the next frontier of AI, the rise of reinforcement learning, and the quest for true intelligence, this episode is a must-watch. Subscribe for more AI deep dives!

(00:00) The Alberta Plan: A Roadmap to AGI
(02:22) Introducing Patrick Pilarski
(05:49) Breaking Down The Alberta Plan's Core Principles
(07:46) The Role of Experience-Based Learning in AI
(08:40) Reinforcement Learning vs. Pre-Trained Models
(12:45) The Relationship Between AI, the Environment, and Learning
(16:23) The Power of Reward in AI Decision-Making
(18:26) Continual Learning & Avoiding Catastrophic Forgetting
(21:57) AI in the Real World: Applications in Fusion, Data Centers & Robotics
(27:56) AI Learning Like Humans: The Role of Predictive Models
(31:24) Can AI Learn Without Massive Pre-Trained Models?
(35:19) Control Theory vs. Reinforcement Learning in Robotics
(40:16) The Future of Continual Learning in AI
(44:33) Reinforcement Learning in Prosthetics: AI & Human Interaction
(50:47) The End Goal of The Alberta Plan
In this episode of “Waking Up With AI,” Katherine Forrest and Anna Gressel examine the integration of end-to-end reasoning and agentic AI capabilities, with new developments from OpenAI, DeepMind and other leading AI labs. Katherine also shares her firsthand experience with OpenAI's new deep research capability, which is transforming academic applications of AI.

Learn More About Paul, Weiss's Artificial Intelligence Practice: https://www.paulweiss.com/practices/litigation/artificial-intelligence
Most people are barely scratching the surface of what generative AI can do. While some fear it will replace their jobs, others dismiss it as a passing trend—but both extremes miss the point. In this episode, Ashok Sivanand breaks down the real opportunity AI presents: not as a replacement for human judgment, but as a powerful tool that can act as both a dutiful intern and an expert consultant. Learn how to integrate AI into your daily work, from automating tedious tasks to sharpening your strategic thinking, all while staying in control.

Unlock the full potential of your product team with Integral's player coaches, experts in lean, human-centered design. Visit integral.io/convergence for a free Product Success Lab workshop to gain clarity and confidence in tackling any product design or engineering challenge.

Inside the episode...
- Why so few people are using generative AI daily—and why that needs to change
- The two key roles AI can play: the intern and the consultant
- How AI can help professionals streamline research, analysis, and decision-making
- Practical prompts and frameworks for getting the most out of AI tools
- The dangers of "AI autopilot" and why staying in the driver's seat is critical
- Security and privacy concerns: What every AI user should know
- The best AI tools for different use cases—beyond just ChatGPT
- How companies can encourage AI adoption without creating unnecessary friction

Mentioned in this episode:
- AI Tools: ChatGPT, Claude, Perplexity, Gemini, Copilot, Grok
- Amazon's six-page memo template for effective decision-making: https://medium.com/@info_14390/the-ultimate-guide-to-amazons-6-pager-memo-method-c4b683441593
- Ready Signal for external market factor analysis: https://www.readysignal.com/
- AI prompting frameworks from Geoff Woods of AI Leadership: https://www.youtube.com/watch?v=HToY8gDTk6E
- Andrej Karpathy's Deep Dive into LLMs: https://www.youtube.com/watch?v=7xTGNNLPyMI
- Books by Carmine Gallo: The Presentation Secrets of Steve Jobs & Talk Like TED: https://www.amazon.com/Presentation-Secrets-Steve-Jobs-Insanely/dp/1491514310

Subscribe to the Convergence podcast wherever you get podcasts—including video episodes on YouTube at youtube.com/@convergencefmpodcast

Learn something? Give the podcast a 5-star review and like the episode on YouTube. It's how the show grows.

Follow the Pod
Linkedin: https://www.linkedin.com/company/convergence-podcast/
X: https://twitter.com/podconvergence
Instagram: @podconvergence
Join Diane Gutiw, VP Global AI Research at CGI, as she discusses agentic systems - collaborative ecosystems of specialized AI tools that work together to solve complex problems. She explains how RAG is evolving as one component within broader agentic workflows, addresses challenges in moving AI from POC to production, and emphasizes pragmatic AI governance. Diane also explains digital triplets - AI layers built on existing data infrastructures that enable natural language conversations with information ecosystems across healthcare, utilities, and infrastructure management.
Artificial intelligence is radically transforming software development. AI-assisted coding tools are generating billions in investment, promising faster development cycles, and shifting engineering roles from code authors to code editors. But how does this impact software quality, security, and team dynamics? How can product teams embrace AI without falling into the hype?

In this episode, AI-assisted Agile expert Mike Gehard shares his hands-on experiments with AI in software development. From his deep background at Pivotal Labs to his current work pushing the boundaries of AI-assisted coding, Mike reveals how AI tools can amplify quality practices, speed up prototyping, and even challenge the way we think about source code. He discusses the future of pair programming, the evolving role of test-driven development, and how engineers can better focus on delivering user value.

Unlock the full potential of your product team with Integral's player coaches, experts in lean, human-centered design. Visit integral.io/convergence for a free Product Success Lab workshop to gain clarity and confidence in tackling any product design or engineering challenge.

Inside the episode...
- Mike's background at Pivotal Labs and why he kept returning
- How AI is changing the way we think about source code as a liability
- Why test-driven development still matters in an AI-assisted world
- The future of pair programming with AI copilots
- The importance of designing better software in an AI-driven development process
- Using AI to prototype faster and build user-facing value sooner
- Lessons learned from real-world experiments with AI-driven development
- The risks of AI-assisted software, from hallucinations to security

Mentioned in this episode:
- Mike's Substack: https://aiassistedagiledevelopment.substack.com/
- Mike's Github repo: https://github.com/mikegehard/ai-assisted-agile-development
- Pivotal Labs: https://en.wikipedia.org/wiki/Pivotal_Labs
- 12-Factor Apps: https://12factor.net/
- GitHub Copilot: https://github.com/features/copilot
- Cloud Foundry: https://en.wikipedia.org/wiki/Cloud_Foundry
- Lean Startup by Eric Ries: https://www.amazon.com/Lean-Startup-Entrepreneurs-Continuous-Innovation/dp/0307887898
- Refactoring by Martin Fowler and Kent Beck: https://www.amazon.com/Refactoring-Improving-Existing-Addison-Wesley-Signature/dp/0134757599
- Dependabot: https://github.com/dependabot
- Tessl CEO Guy Podjarny's talk: https://youtu.be/e1a3WuxTY-k
- Aider AI pair programming terminal: https://aider.chat/
- Gemini LLM: https://gemini.google.com/app
- Perplexity AI: https://www.perplexity.ai/
- DeepSeek: https://www.deepseek.com/
- Ian Cooper's talk on TDD: https://www.youtube.com/watch?v=IN9lftH0cJc
- Mike's newest mountain bike, IBIS Ripmo V2S: https://www.ibiscycles.com/bikes/past-models/ripmo-v2s
- Mike's recommended house slippers: https://us.giesswein.com/collections/mens-wool-slippers/products/wool-slippers-dannheim
- Sorba Chattanooga Mountain Biking Trails: https://www.sorbachattanooga.org/localtrails

Subscribe to the Convergence podcast wherever you get podcasts, including video episodes on YouTube at youtube.com/@convergencefmpodcast

Learn something? Give us a 5-star review and like the podcast on YouTube. It's how we grow.
There's a huge number of AI tools emerging and we're testing them to see if they can help with different aspects of investing. From filtering, researching, and valuing opportunities to constructing a portfolio and monitoring positions - the impact of AI on investing is going to be profound.

This year, we want to trial as many platforms as possible and share how we think about incorporating them into our process.

In today's episode we trial Google's NotebookLM. Check out Ren's notes.

Want to get involved in the podcast? Record a voice note or send us a message on our website and we'll play it on the podcast.

Keep up with the news moving markets with the Equity Mates daily email and podcast:
- Sign up to our daily email to get the news delivered to your inbox at 6am every weekday morning
- Prefer to hear the news? We've turned our email into a podcast using AI - listen on Apple or Spotify

Want more Equity Mates?
- Listen to our basics-of-investing podcast: Get Started Investing (Apple | Spotify)
- Watch Equity Mates on YouTube
- Pick up our books: Get Started Investing and Don't Stress, Just Invest
- Follow us on social media: Instagram, TikTok, & LinkedIn

In the spirit of reconciliation, Equity Mates Media and the hosts of Equity Mates Investing acknowledge the Traditional Custodians of country throughout Australia and their connections to land, sea and community. We pay our respects to their elders past and present and extend that respect to all Aboriginal and Torres Strait Islander people today.

Equity Mates Investing is a product of Equity Mates Media. This podcast is intended for education and entertainment purposes. Any advice is general advice only, and has not taken into account your personal financial circumstances, needs or objectives. Before acting on general advice, you should consider if it is relevant to your needs and read the relevant Product Disclosure Statement. And if you are unsure, please speak to a financial professional. Equity Mates Media operates under Australian Financial Services Licence 540697.

Hosted on Acast. See acast.com/privacy for more information.
It's that time of week where I'll take you through a rundown on some of the latest happenings at the critical intersection of business, tech, and human experience. While love is supposed to be in the air given it's Valentine's Day, I'm not sure the headlines got the memo.

With that, let's get started.

Elon's $97B OpenAI Takeover Stunt - Musk made a shock bid to buy OpenAI for $97 billion, raising questions about his true motives. Given his history with OpenAI and his own AI venture (xAI), this move had many wondering if he was serious or just trolling. Given OpenAI is hemorrhaging cash alongside its plans to pivot to a for-profit model, Altman is in a tricky position. Musk's bid seems designed to force OpenAI into staying a nonprofit, showing how billionaires use their wealth to manipulate industries, not always in ways that benefit the public.

Is Google Now Pro-Harmful AI? - Google silently removed its long-standing ethical commitment to not creating AI for harmful purposes. This change, combined with its growing partnerships in military AI, raises major concerns about the direction big tech is taking. It's worth exploring how AI development is shifting toward militarization and how companies like Google are increasingly prioritizing government and defense contracts over consumer interests.

The AI Agent Hype Cycle - AI agents are being hyped as the future of work, with companies slashing jobs in anticipation of AI taking over. However, there's more than meets the eye. While AI agents are getting more powerful, they're still unreliable, messy, and require human oversight. Companies are overinvesting in AI agents and quickly realizing they don't work as well as advertised. While that may sound good for human workers, I predict it will get worse before it gets better.

Does Microsoft Research Show AI is Killing Critical Thinking? - A recent Microsoft study is making waves with claims that AI is eroding critical thinking and creativity. This week, I took a closer look at the research and explained why the media's fearmongering isn't entirely accurate. And yet, we should take this seriously. The real issue isn't AI itself; it's how we use it. If we keep becoming over-reliant on AI for thinking, problem-solving, and creativity, it will inevitably lead to cognitive atrophy.

Show Notes:
In this Weekly Update, Christopher explores the latest developments at the intersection of business, technology, and the human experience. The episode covers Elon Musk's surprising $97 billion bid to acquire OpenAI, its implications, and the debate over whether OpenAI should remain a nonprofit. The discussion also explores the military applications of AI, Google's recent shift away from its 'don't create harmful AI' policy, and the consequences of large-scale investments in AI for militaristic purposes. Additionally, Christopher examines the rise of AI agents, their potential to change the workforce, and the challenges they present. Finally, Microsoft's study on the erosion of critical thinking and empathy due to AI usage is analyzed, emphasizing the need for thoughtful and intentional application of AI technologies.

00:00 - Introduction
01:53 - Elon Musk's Shocking Offer to Buy OpenAI
15:27 - Google's Controversial Shift in AI Ethics
27:20 - Navigating the Hype of AI Agents
29:41 - The Rise of AI Agents in the Workplace
41:35 - Does AI Destroy Critical Thinking in Humans?
52:49 - Concluding Thoughts and Future Outlook

#AI #OpenAI #Microsoft #CriticalThinking #ElonMusk
AI copilots have changed a range of professions, from healthcare to finance, by automating tasks and enhancing productivity. But can copilots also create value for people performing more mechanical, hands-on tasks or figuring out how to bring factories online? In this episode, Barbara welcomes Olympia Brikis, Director of AI Research at Siemens, to show how generative AI is shaping new industrial tech jobs at the convergence of the real and digital worlds. Olympia sheds light on the unique career opportunities in AI and what it takes to thrive in this dynamic, emerging field. Whether you're a tech enthusiast or someone curious about tech careers, this episode offers a unique perspective on how AI is reshaping the landscape of mechanical and industrial professions. Tune in to learn about the exciting innovations and the future of AI in industry!

Show notes
In this episode, Barbara asks Olympia to share some resources that can help all of us get smarter on industrial AI. Here are Olympia's recommendations:

For everyone just getting started with (Generative) AI:
- Elements of AI – great for learning how AI works and what it is: https://www.elementsofai.com/
- Generative AI for Everyone: https://www.coursera.org/learn/generative-ai-for-everyone
- Co-Intelligence: Living and Working with AI, by Ethan Mollick

For those who want to dive deeper into the technical aspects of Deep Neural Networks and Generative AI:
- Deep Learning Specialization: https://www.coursera.org/specializations/deep-learning
- Stanford University Lecture CS336: Language Modeling from Scratch: https://stanford-cs336.github.io/spring2024/
A groundbreaking research project funded by Lero, the Research Ireland Centre for Software, and the IRFU is deploying artificial intelligence (AI) to analyse the tackle event in rugby to enhance player welfare and performance, resulting in a more exciting and dynamic game.

The project team, led by Professor Anthony Ventresque, Director of the Complex Software Lab at the School of Computer Science and Statistics at Trinity College Dublin, has the potential to provide coaches, players and referees with incredible insights into tackle technique, identify areas for improvement, and ultimately reduce the risk of injury. "Our research is focused on developing AI that can understand the complexities of rugby tackles. By analysing large amounts of video data, we can identify patterns and trends that may not be apparent to the human eye. This information can be used to develop targeted training programs to improve tackle technique and player safety."

PhD researchers Will Connors and Caoilfhionn Ní Dheoráin are teaming up with Dr Kathryn Dane to harness the power of AI in the world of rugby. This collaborative research project aims to develop AI models capable of automatically identifying and analysing the tackle event, with the potential of improving training techniques.

Will Connors, who has represented Ireland at senior, U20, and sevens levels, said that as a rugby player with a Computer Science background, he is fascinated by AI's potential to analyse and optimise tackle technique. "I believe this research can help players at all levels improve their tackling skills and contribute to a more exciting and dynamic game."

Dr Kathryn Dane, who has also represented Ireland at senior international level, said this project highlights the crucial link between technique and safety in rugby at all levels of the game. "By using AI to analyse a large number of tackles, we can identify specific areas where technique can be improved to enhance both performance and player welfare."

Computer Scientist Caoilfhionn Ní Dheoráin said she is excited by the challenge of applying Machine Learning at scale to analyse rugby tackles in the domestic club and school game. "This project offers a unique opportunity to push the boundaries of AI and contribute to a deeper understanding of this complex and dynamic sport."

The IRFU's Medical Manager, Dr Caithríona Yeomans, who holds a PhD in Sports Sciences, said this research will be hugely helpful in enhancing player welfare in rugby. "By understanding the mechanics of tackles and identifying areas for improvement, we can help players develop safer and more effective techniques. The collaboration with Lero and the Complex Software Lab at Trinity College Dublin is invaluable in our ongoing efforts to make rugby a safer sport for all."

This collaboration stems from the IRFU's decision to lower the tackle height in the domestic game. The insights gained from the video analysis will help identify the trial's impact on player welfare and the overall game.

The IRFU's National Rugby Development Manager and Tackle Trial project lead, Colm Finnegan, says: "We are excited to work with Lero to be at the forefront of innovation in such an important area of Rugby, which reaffirms our aim of making the sport as safe as possible whilst also being an enjoyable game for all."
One of the project's recent publications, 'Frisbees and Dogs: Domain Adaptation For Object Detection with Limited Labels in Rugby Data', explores how AI can be trained to accurately detect essential elements in rugby videos, even with limited training data. This is a significant breakthrough in this area. Another recent publication, 'Are we tackle ready? Cross-sectional video analysis of match tackle characteristics in elite women's Rugby Union' examines tackle techniques in the women's game. This study found that many tackles lacked full completion of World Rugby's 'Tackle Ready' recommended techniques, highlighting the need for targeted interventions to r...
Hey tech lovers! In this episode of The LEO Podcast, we dive into three hot tech stories. First, El Salvador is officially stepping away from Bitcoin as legal tender—was the crypto experiment a failure, or just ahead of its time? Then, AI-powered research assistants from OpenAI and Google are changing how scientists work, but can we really trust them to get the facts right? And finally, Apple is making big moves in Latin America, with eSIM technology giving it an edge over Samsung in the region. Tune in for all this and more on The LEO Podcast!
This episode is sponsored by Thuma. Thuma is a modern design company that specializes in timeless home essentials that are mindfully made with premium materials and intentional details. To get $100 towards your first bed purchase, go to http://thuma.co/eyeonai

In this episode of the Eye on AI podcast, Pedro Domingos, renowned AI researcher and author of The Master Algorithm, joins Craig Smith to explore the evolution of machine learning, the resurgence of Bayesian AI, and the future of artificial intelligence.

Pedro unpacks the ongoing battle between Bayesian and Frequentist approaches, explaining why probability is one of the most misunderstood concepts in AI. He delves into Bayesian networks, their role in AI decision-making, and how they powered Google's ad system before deep learning. We also discuss how Bayesian learning is still outperforming humans in medical diagnosis, search & rescue, and predictive modeling, despite its computational challenges.

The conversation shifts to deep learning's limitations, with Pedro revealing how neural networks might be just a disguised form of nearest-neighbor learning. He challenges conventional wisdom on AGI, AI regulation, and the scalability of deep learning, offering insights into why Bayesian reasoning and analogical learning might be the future of AI.

We also dive into analogical learning—a field championed by Douglas Hofstadter—exploring its impact on pattern recognition, case-based reasoning, and support vector machines (SVMs). Pedro highlights how AI has cycled through different paradigms, from symbolic AI in the '80s to SVMs in the 2000s, and why the next big breakthrough may not come from neural networks at all.

From theoretical AI debates to real-world applications, this episode offers a deep dive into the science behind AI learning methods, their limitations, and what's next for machine intelligence.

Don't forget to like, subscribe, and hit the notification bell for more expert discussions on AI, technology, and the future of innovation!

Stay Updated:
Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

(00:00) Introduction
(02:55) The Five Tribes of Machine Learning Explained
(06:34) Bayesian vs. Frequentist: The Probability Debate
(08:27) What is Bayes' Theorem & How AI Uses It
(12:46) The Power & Limitations of Bayesian Networks
(16:43) How Bayesian Inference Works in AI
(18:56) The Rise & Fall of Bayesian Machine Learning
(20:31) Bayesian AI in Medical Diagnosis & Search and Rescue
(25:07) How Google Used Bayesian Networks for Ads
(28:56) The Role of Uncertainty in AI Decision-Making
(30:34) Why Bayesian Learning is Computationally Hard
(34:18) Analogical Learning – The Overlooked AI Paradigm
(38:09) Support Vector Machines vs. Neural Networks
(41:29) How SVMs Once Dominated Machine Learning
(45:30) The Future of AI – Bayesian, Neural, or Hybrid?
(50:38) Where AI is Heading Next
The release of DeepSeek's AI models at the end of January 2025 sent shockwaves around the world. The weeks that followed have been rife with hype and rumor, ranging from suggestions that DeepSeek has completely upended the tech industry to claims the efficiency gains ostensibly unlocked by DeepSeek are exaggerated. So, what's the reality? And what does it all really mean for the tech industry?

In this episode of the Technology Podcast, two of Thoughtworks' AI leaders — Prasanna Pendse (Global Director of AI Strategy) and Shayan Mohanty (Head of AI Research) — join hosts Prem Chandrasekaran and Ken Mugrage to provide a much-needed clear and sober perspective on DeepSeek. They dig into some of the technical details and discuss how the DeepSeek team was able to optimize the limited hardware at their disposal, and think through what the implications might be for the industry in the months to come.

Read Prasanna's take on DeepSeek on the Thoughtworks blog: https://www.thoughtworks.com/insights/blog/generative-ai/demystifying-deepseek
Niloofar is a postdoctoral researcher at the University of Washington with research interests in building privacy-preserving AI systems and studying the societal implications of machine learning models. She received her PhD in Computer Science from UC San Diego in 2023 and has received multiple awards and honors for her research contributions.

Time stamps of the conversation:
00:00:00 Highlights
00:01:35 Introduction
00:02:56 Entry point in AI
00:06:50 Differential privacy in AI systems
00:11:08 Privacy leaks in large language models
00:15:30 Dangers of training AI on public data on the internet
00:23:28 How auto-regressive training makes things worse
00:30:46 Impact of synthetic data for fine-tuning
00:37:38 Most critical stage in the AI pipeline to combat data leaks
00:44:20 Contextual integrity
00:47:10 Are LLMs creative?
00:55:24 Under- vs. overpromises of LLMs
01:01:40 Publish-or-perish culture in AI research recently
01:07:50 Role of academia in LLM research
01:11:35 Choosing academia vs. industry
01:17:34 Mental health and overarching

More about Niloofar: https://homes.cs.washington.edu/~niloofar/

And references to some of the papers discussed:
https://arxiv.org/pdf/2310.17884
https://arxiv.org/pdf/2410.17566
https://arxiv.org/abs/2202.05520

About the Host:
Jay is a PhD student at Arizona State University working on improving AI for medical diagnosis and prognosis.
Linkedin: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Homepage: http://jayshah.me/ for any queries.

Stay tuned for upcoming webinars!

***Disclaimer: The information in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***
How can AI revolutionize the way we research and understand complex topics? In this Tech Talks Daily episode, I speak with Mel Morris, founder and CEO of Corpora.AI. This research engine redefines how individuals, businesses, and institutions approach knowledge discovery. With traditional search engines struggling to deliver depth and AI tools often relying on outdated or incomplete data, Corpora.AI takes a different approach. The platform processes millions of documents per second using advanced AI and proprietary language graph technology, delivering research reports with real-time insights and source attribution. Unlike conventional AI models that generate content from limited datasets, Corpora.AI dynamically ingests over 100 petabytes of open-source intelligence, ensuring users can access the most comprehensive, accurate, and up-to-date information. Mel shares his vision for democratizing access to high-level research, making it possible for users across academia, medicine, law, finance, government, and journalism to gain deeper insights faster. We explore how Corpora.AI's real-time data ingestion and multilingual capabilities allow professionals to conduct advanced research in one language and receive results in another. From patent research and market analysis to education and rapid learning, the applications of this research engine extend far beyond what traditional AI-powered search tools can offer. We also discuss how Corpora.AI is tackling some of the biggest challenges in AI-driven research, including bias, credibility, and transparency. By providing research reports with 400-500 cited sources per query, the platform ensures that every insight is traceable, allowing users to verify information and make informed decisions. With AI reshaping how we access and interpret knowledge, what does the future hold for research, education, and data-driven decision-making? Will AI-driven research engines like Corpora.AI replace traditional search methods? And how can businesses and institutions leverage these tools to stay ahead of the curve? Join me for this fascinating discussion as we explore the future of AI-powered research and how Corpora.AI is setting a new standard for knowledge discovery.
In this episode of the Effortless Podcast, hosts Dheeraj Pandey and Amit Prakash sit down with Alex Dimakis, a renowned AI researcher and professor at UC Berkeley. With a background in deep learning, graphical models, and foundational AI frameworks, Alex provides unparalleled insights into the evolving landscape of AI.

The discussion delves into the details of foundation models, modular AI architectures, fine-tuning, and the role of synthetic data in post-training. They also explore practical applications, challenges in creating reasoning frameworks, and the future of AI specialization and generalization.

As Alex puts it, "To deep seek or not, that's the $1 trillion question." Tune in to hear his take on how companies can bridge the gap between large generalist models and smaller specialized agents to achieve meaningful AI outcomes.

Key Topics and Chapter Markers:
Introduction to Alex Dimakis & His Journey [0:00]
From Foundation Models to Modular AI Systems [6:00]
Fine-Tuning vs. Prompting: Understanding Post-Training [15:00]
Synthetic Data in AI Development: Challenges and Solutions [25:00]
The Role of Reasoning and Chain of Thought in AI [45:00]
AI's Future: Specialized Models vs. General Systems [1:05:00]
Alex's Reflections on AI Research and Innovation [1:20:00]

Hosts:
Dheeraj Pandey: Co-founder and CEO at DevRev, formerly Co-founder and CEO of Nutanix. A tech visionary with a deep interest in AI and systems thinking.
Amit Prakash: Co-founder and CTO at ThoughtSpot, formerly at Google AdSense and Bing, with extensive expertise in analytics and large-scale systems.

Guest:
Alex Dimakis: Professor at UC Berkeley and co-founder of Bespoke Labs, Alex has made significant contributions to deep learning, machine learning infrastructure, and the development of AI reasoning frameworks.

Follow the Hosts and the Guest:
Dheeraj Pandey: LinkedIn: Dheeraj Pandey | Twitter: @dheeraj
Amit Prakash: LinkedIn: Amit Prakash | Twitter: @amitp42
Alex Dimakis: LinkedIn: Alex Dimakis | Twitter: @AlexGDimakis

Share Your Thoughts:
Have questions, comments, or ideas for future episodes? Email us at EffortlessPodcastHQ@gmail.com

Don't forget to Like, Comment, and Subscribe for more in-depth discussions on AI, technology, and innovation!
Generative AI's popularity has led to a renewed interest in quality assurance — perhaps unsurprising given the inherent unpredictability of the technology. This is why, over the last year, the field has seen a number of techniques and approaches emerge, including evals, benchmarking and guardrails. While these terms all refer to different things, grouped together they all aim to improve the reliability and accuracy of generative AI. To discuss these techniques and the renewed enthusiasm for testing across the industry, host Lilly Ryan is joined by Shayan Mohanty, Head of AI Research at Thoughtworks, and John Singleton, Program Manager for Thoughtworks' AI Lab. They discuss the differences between evals, benchmarking and testing and explore both what they mean for businesses venturing into generative AI and how they can be implemented effectively. Learn more about evals, benchmarks and testing in this blog post by Shayan and John (written with Parag Mahajani): https://www.thoughtworks.com/insights/blog/generative-ai/LLM-benchmarks,-evals,-and-tests
Tune into our new episode of #SCIP I_STREAM and join Paul Santilli in conversation with Richard Klavans, Founder and CEO, SciTech Strategies. Discover how SciTech Strategies uses AI to identify promising research areas and how this can be used for business strategy. This podcast also discusses AI's role in Intelligence development, business resiliency, early-stage research challenges, and research and development expertise.#SCIP #Intelligence #Podcast #AI #Research #Data
This episode is sponsored by Netsuite by Oracle, the number one cloud financial system, streamlining accounting, financial management, inventory, HR, and more. NetSuite is offering a one-of-a-kind flexible financing program. Head to https://netsuite.com/EYEONAI to learn more.

In this episode of the Eye on AI podcast, we dive into the transformative world of AI compute infrastructure with Mitesh Agrawal, Head of Cloud/COO at Lambda.

Mitesh takes us on a journey from Lambda Labs' early days as a style transfer app to its rise as a leader in providing scalable, deep learning infrastructure. Learn how Lambda Labs is reshaping AI compute by delivering cutting-edge GPU solutions and accessible cloud platforms tailored for developers, researchers, and enterprises alike.

Throughout the episode, Mitesh unpacks Lambda Labs' unique approach to optimizing AI infrastructure—from reducing costs with transparent pricing to tackling the global GPU shortage through innovative supply chain strategies. He explains how the company supports deep learning workloads, including training and inference, and why their AI cloud is a game-changer for scaling next-gen applications.

We also explore the broader landscape of AI, touching on the future of AI compute, the role of reasoning and video models, and the potential for localized data centers to meet the growing demand for low-latency solutions. Mitesh shares his vision for a world where AI applications, powered by Lambda Labs, drive innovation across industries.

Tune in to discover how Lambda Labs is democratizing access to deep learning compute and paving the way for the future of AI infrastructure.

Don't forget to like, subscribe, and hit the notification bell to stay updated on the latest in AI, deep learning, and transformative tech!

Stay Updated:
Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

(00:00) Introduction and Lambda Labs' Mission
(01:37) Origins: From DreamScope to AI Compute Infrastructure
(04:10) Pivoting to Deep Learning Infrastructure
(06:23) Building Lambda Cloud: An AI-Focused Cloud Platform
(09:16) Transparent Pricing vs. Hyperscalers
(12:52) Managing GPU Supply and Demand
(16:34) Evolution of AI Workloads: Training vs. Inference
(20:02) Why Lambda Labs Sticks with NVIDIA GPUs
(24:21) The Future of AI Compute: Localized Data Centers
(28:30) Global Accessibility and Regulatory Challenges
(32:13) China's AI Development and GPU Restrictions
(39:50) Scaling Lambda Labs: Data Centers and Growth
(45:22) Advancing AI Models and Video Generation
(50:24) Optimism for AI's Future
(53:48) How to Access Lambda Cloud
What if AI were the key to innovation inside your company?

Today's guest suggests that AI puts innovation in the hands of people who aren't necessarily scientists or programmers.

Travis Hoppe is the Assistant Director of AI Research and Development at The White House Office of Science and Technology Policy. He co-authored The Pile, a pioneering open source dataset used for training large language models that served as a catalyst for promoting open science within the field of AI, and he holds a PhD in physics.

In this conversation, Daniel and Travis discuss everything AI–from the basics of machine learning and algorithms to implications for leaders to the most promising applications of AI.

“Now, people can experiment with some really good idea,” Travis says. About 20% of your organization really wants to build stuff. “Oftentimes you just need to bring them together and you need to give them the freedom to do so.”

Tune in to learn:
- Why guardrails in AI innovation are so important
- Why leaders have a unique opportunity to be pioneers right now
- Why you don't need to fear “the singularity”

Join us for a fascinating conversation about the present–and future–of AI.

In this episode:
1:35 – Introduction: Travis Hoppe
2:53 – What is AI?
9:25 – Algorithms: A Brief Review
13:05 – How Should Leaders Think About AI?
18:40 – AI Guidance for Teams and Businesses
28:00 – AI in Practice
32:40 – Lightning Round

Travis Hoppe profiles:
@metasemantic on X
LinkedIn
Google Scholar
The Pile
Memorandum M-24-10 (listed under “Memoranda 2024”)

Stewart Leadership Insights and Resources:
- 4 Ways to Encourage a Healthy Failure Culture
- The Power of Imagination in Planning
- 7 Ways to Prepare Leaders for Disruption
- 5 Advantages of Becoming a Digitally Literate Change Leader
- 5 Ways to Help Manage Your Team's Change Exhaustion
- AI-Powered Talent Retention
- Women and AI

If you liked this episode, please share it with a friend or colleague, or, better yet, leave a review to help other listeners find our show, and remember to subscribe so you never miss an episode. For more great content or to learn about how Stewart Leadership can help you grow your ability to lead effectively, please visit stewartleadership.com and follow us on LinkedIn, Instagram, and YouTube.
On this encore episode of the CMAJ Podcast, Dr. Blair Bigham and Dr. Mojola Omole discuss how artificial intelligence (AI) significantly improves the identification of hospital patients at risk of clinical deterioration compared to physician assessments alone. They are joined by Dr. Amol Verma, a general internist at St. Michael's Hospital in Toronto, an associate professor at the University of Toronto, and the holder of the Temerty Professorship in AI Research and Education, who shares findings from his recent CMAJ article, “Clinical evaluation of a machine learning-based early warning system for patient deterioration”.

Dr. Verma explains how the AI system, ChartWatch, analyzes over 100 variables from a patient's electronic medical record to predict deterioration more accurately than traditional early warning scores like the NEWS score. He discusses how the integration of AI into clinical workflows improves patient outcomes by complementing human decision-making, leading to better results than relying on physicians or AI alone.

The episode also looks at the potential future of AI in medicine, with Dr. Verma sharing insights on how AI tools should be thoughtfully integrated to support clinicians without overwhelming them. He stresses the need for AI systems to fit seamlessly into clinical workflows, ensuring patient care remains the priority. While AI is currently a tool to assist clinicians, Dr. Verma argues that the full extent of AI's role in healthcare—and its impact on the physician's place within it—remains ultimately unknowable.

Join us as we explore medical solutions that address the urgent need to change healthcare. Reach out to us about this or any episode you hear. Or tell us about something you'd like to hear on the leading Canadian medical podcast.

You can find Blair and Mojola on X @BlairBigham and @Drmojolaomole
X (in English): @CMAJ
X (en français): @JAMC
Facebook
Instagram: @CMAJ.ca

The CMAJ Podcast is produced by PodCraft Productions
Vivek is an Assistant Professor at Arizona State University. Prior to that, he was at the University of Pennsylvania as a postdoctoral researcher and completed his PhD in CS from the University of Utah. His PhD research focused on inference and reasoning for semi-structured data, and his current research spans reasoning in large language models (LLMs), multimodal learning, and instilling models with common sense for question answering. He has also received multiple awards and fellowships for his research works over the years.

Conversation time stamps:
00:01:40 Introduction
00:02:52 Background in AI research
00:05:00 Finding your niche
00:12:42 Traditional AI models vs. LLMs in semi-structured data
00:18:00 Why is reasoning hard in LLMs?
00:27:10 Will scaling AI models hit a plateau?
00:31:02 Has ChatGPT pushed boundaries of AI research
00:38:28 Role of academia in AI research in the era of LLMs
00:56:35 Keeping up with research: filtering noise vs. signal
01:09:14 Getting started in AI in 2024?
01:20:25 Maintaining mental health in research (especially AI)
01:34:18 Building good habits
01:37:22 Do you need a PhD to contribute to AI?
01:45:42 Wrap up

More about Vivek: https://vgupta123.github.io/
ASU lab website: https://coral-lab-asu.github.io/
And Vivek's blog on research struggles: https://vgupta123.github.io/docs/phd_struggles.pdf

About the Host:
Jay is a PhD student at Arizona State University working on improving AI for medical diagnosis and prognosis.
Linkedin: https://www.linkedin.com/in/shahjay22/
Twitter: https://twitter.com/jaygshah22
Homepage: http://jayshah.me/ for any queries.

Stay tuned for upcoming webinars!

***Disclaimer: The information in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***
Lin Qiao, the co-founder of Fireworks.ai, sits down for a deep dive into the future of AI. Lin ran the PyTorch team at Meta, which developed some of the most fundamental open-source AI software in use today. She's got a riveting perspective on the AI landscape that is a must-listen.

[0:00] Intro
[1:06] Fireworks: Revolutionizing AI Inference
[2:12] Challenges in AI Model Development
[4:05] The Future of AI: Compound Systems
[4:32] Designing Effective AI Tools
[10:26] Customization and Fine-Tuning in AI
[14:06] Human-in-the-Loop Automation
[16:38] Evaluating AI Models
[19:18] Building Complex AI Systems
[21:18] Function Calling and AI Orchestration
[26:52] AI Infrastructure and Hardware
[31:08] Small Expert Models
[31:27] Hyperscalers and Resource Management
[32:14] Inference Systems and Scalability
[33:08] Running Models Locally: Cost and Privacy
[35:20] Open Source Models and Meta's Role
[36:41] The Evolution of AI Training and Inference
[38:04] Fireworks' Vision and Market Strategy
[40:46] The Impact of Generative AI
[45:18] AI Research and Future Trends
[46:58] Building for a Rapidly Changing AI Landscape
[49:36] Quickfire

With your co-hosts:
@jacobeffron - Partner at Redpoint, Former PM Flatiron Health
@patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn
@ericabrescia - Former COO Github, Founder Bitnami (acq'd by VMWare)
@jordan_segall - Partner at Redpoint
Current AI practice is not engineering, even when it aims for practical applications, because it is not based on scientific understanding. Enforcing engineering norms on the field could lead to considerably safer systems. https://betterwithout.ai/AI-as-engineering

This episode has a lot of links! Here they are.

Michael Nielsen's “The role of ‘explanation' in AI”: https://michaelnotebook.com/ongoing/sporadica.html#role_of_explanation_in_AI
Subbarao Kambhampati's “Changing the Nature of AI Research”: https://dl.acm.org/doi/pdf/10.1145/3546954
Chris Olah and his collaborators:
“Thread: Circuits”: distill.pub/2020/circuits/
“An Overview of Early Vision in InceptionV1”: distill.pub/2020/circuits/early-vision/
Dai et al., “Knowledge Neurons in Pretrained Transformers”: https://arxiv.org/pdf/2104.08696.pdf
Meng et al.:
“Locating and Editing Factual Associations in GPT”: rome.baulab.info
“Mass-Editing Memory in a Transformer”: https://arxiv.org/pdf/2210.07229.pdf
François Chollet on image generators putting the wrong number of legs on horses: twitter.com/fchollet/status/1573879858203340800
Neel Nanda's “Longlist of Theories of Impact for Interpretability”: https://www.lesswrong.com/posts/uK6sQCNMw8WKzJeCQ/a-longlist-of-theories-of-impact-for-interpretability
Zachary C. Lipton's “The Mythos of Model Interpretability”: https://arxiv.org/abs/1606.03490
Meng et al., “Locating and Editing Factual Associations in GPT”: https://arxiv.org/pdf/2202.05262.pdf
Belrose et al., “Eliciting Latent Predictions from Transformers with the Tuned Lens”: https://arxiv.org/abs/2303.08112
“Progress measures for grokking via mechanistic interpretability”: https://arxiv.org/abs/2301.05217
Conmy et al., “Towards Automated Circuit Discovery for Mechanistic Interpretability”: https://arxiv.org/abs/2304.14997
Elhage et al., “Softmax Linear Units”: transformer-circuits.pub/2022/solu/index.html
Filan et al., “Clusterability in Neural Networks”: https://arxiv.org/pdf/2103.03386.pdf
Cammarata et al., “Curve circuits”: distill.pub/2020/circuits/curve-circuits/

You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks

If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold

Original music by Kevin MacLeod. This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.
This week, we are joined by Andrew Morris, Founder and CTO of GreyNoise, to discuss their work on "GreyNoise Intelligence Discovers Zero-Day Vulnerabilities in Live Streaming Cameras with the Help of AI." GreyNoise discovered two critical zero-day vulnerabilities in IoT-connected live streaming cameras, used in sensitive environments like healthcare and industrial operations, by leveraging its AI-powered detection system, Sift. The vulnerabilities, CVE-2024-8956 (insufficient authentication) and CVE-2024-8957 (OS command injection), could allow attackers to take full control of affected devices, manipulate video feeds, or integrate them into botnets for broader attacks. This breakthrough underscores the transformative role of AI in identifying threats that traditional systems might miss, highlighting the urgent need for robust cybersecurity measures in the expanding IoT landscape. The research can be found here: GreyNoise Intelligence Discovers Zero-Day Vulnerabilities in Live Streaming Cameras with the Help of AI Learn more about your ad choices. Visit megaphone.fm/adchoices
We're experimenting and would love to hear from you! In today's episode of Discover Daily, we begin with a development in artificial intelligence research. Harvard University has unveiled a comprehensive AI training dataset, marking a significant step forward in democratizing AI education and development. This release provides researchers and developers with high-quality, ethically sourced data that can accelerate machine learning applications while addressing crucial concerns about data privacy and bias in AI systems.

Google has launched Gemini 2.0, its most powerful and versatile AI model to date. This next-generation model demonstrates new capabilities in multimodal understanding, complex reasoning, and real-world problem-solving, setting new benchmarks in natural language processing and computational efficiency. Gemini 2.0's enhanced architecture promises to transform industries from healthcare to creative content generation.

Mathematicians have made a remarkable discovery in the study of infinity, identifying two entirely new types that challenge our fundamental understanding of mathematical concepts. This breakthrough expands the hierarchy of infinite numbers, building upon Cantor's groundbreaking work and opening new avenues for research in set theory and mathematical logic. The discovery has implications for both pure mathematics and theoretical computer science, potentially influencing how we approach computational limits and mathematical modeling.

From Perplexity's Discover Feed:
https://www.perplexity.ai/page/harvard-releases-ai-training-d-iDxkgfrfQZO79hEZ_5Ogdg
https://www.perplexity.ai/page/google-releases-gemini-2-0-.8X4jPJYT7CayycbJ5aBrQ
https://www.perplexity.ai/page/two-new-types-of-infinity-R4h9JUauS0OvbMKosWRH9w

Perplexity is the fastest and most powerful way to search the web. Perplexity crawls the web and curates the most relevant and up-to-date sources (from academic papers to Reddit threads) to create the perfect response to any question or topic you're interested in. Take the world's knowledge with you anywhere. Available on iOS and Android.

Join our growing Discord community for the latest updates and exclusive content. Follow us on: Instagram, Threads, X (Twitter), YouTube, LinkedIn
Noam Brown, renowned AI researcher and key figure at OpenAI, joins us for a deep dive into the o1 release. Recorded just one day before o1's full public debut, this episode explores the groundbreaking advancements and challenges behind this innovative test-time compute model.

We discuss the technical breakthroughs that set o1 apart, its unique capabilities compared to previous models, and how it disrupts traditional paradigms in AI development. Noam also shares insights into OpenAI's approach to innovation, the economic realities of scaling AI, and what the future holds for the field.

[0:00] Intro
[0:50] Scaling Model Capabilities and Economic Constraints
[2:48] Excitement Around Test Time Compute
[4:50] Challenges and Future Directions in AI Research
[8:11] Noam Brown's Journey and OpenAI's Research Focus
[16:08] The Role of Specialized Models and Tools
[21:18] Unexpected Use Cases and Future Milestones
[23:44] Proof of Concept: o1's Capabilities
[24:48] The Bitter Lesson: Insights from Richard Sutton
[25:59] Scaffolding Techniques and Their Future
[27:56] Challenges in Academia and AI Research
[30:30] Evaluating AI Models: Metrics and Trends
[34:47] The Role of AI in Social Sciences
[39:39] AI Agents and Emergent Communication
[40:17] Future of AI Robotics
[41:13] Advancing Scientific Research with AI
[43:30] Quickfire

With your co-hosts:
@jacobeffron - Partner at Redpoint, Former PM at Flatiron Health
@patrickachase - Partner at Redpoint, Former ML Engineer at LinkedIn
@ericabrescia - Former COO of GitHub, Founder of Bitnami (acq'd by VMware)
@jordan_segall - Partner at Redpoint
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today, we're joined by Arash Behboodi, director of engineering at Qualcomm AI Research, to discuss the papers and workshops Qualcomm will be presenting at this year's NeurIPS conference. We dig into the challenges and opportunities presented by differentiable simulation in wireless systems, the sciences, and beyond. We also explore recent work that ties conformal prediction to information theory, yielding a novel approach to incorporating uncertainty quantification directly into machine learning models. Finally, we review several papers enabling the efficient use of LoRA (Low-Rank Adaptation) on mobile devices (Hollowed Net, ShiRA, FouRA). Arash also previews the demos Qualcomm will be hosting at NeurIPS, including new video editing diffusion and 3D content generation models running on-device, Qualcomm's AI Hub, and more! The complete show notes for this episode can be found at https://twimlai.com/go/711.
How is Sharon AI adapting its GPU-as-a-service offerings for organizations of all sizes? On this episode of the Six Five On The Road at SC24, hosts Keith Townsend and David Nicholson are joined by Lenovo's Sinisa Nikolic and Sharon AI's Andrew Leece for a conversation on how Sharon AI, a leading GPU-as-a-service provider, together with Lenovo's innovative technology, is driving the AI research boom.

Catch the full episode and:
Discover how Sharon AI differentiates itself in this informative and engaging conversation
Learn how Sharon AI is optimizing its infrastructure for high-performance computing (HPC), serving higher education and research, and now expanding into enterprise and government sectors
Learn about the critical role of Lenovo's TruScale in enabling Sharon AI's rapid and capital-efficient scaling
Explore the advancements in water-cooling technology and their impact on data center sustainability and performance
Get details on how Lenovo TruScale has powered Sharon AI's offerings and what future developments to anticipate from the company
In this experimental episode, we're talking about a new AI research platform I am developing that blends scientific literature, lived experience shared on the podcast, and questions and comments from the community. You can learn more about the details, and some research proposals it has generated, at https://misophoniapodcast.com/research.

In this episode, I take it a step further and use AI to generate a conversation about the platform and some of the proposals. Does the world need an AI conversation about AI? Maybe not, but this is a window into making more research more accessible to more people. And I am more than happy to use this podcast to experiment.

-----
Web: https://misophoniapodcast.com
Order "Sounds like Misophonia," by Dr. Jane Gregory and me
Support the podcast at https://misophonia.shop
Email: hello@misophoniapodcast.com
Send me any feedback! Also, if you want some beautiful podcast stickers, shoot over your address.
YouTube channel (with caption transcriptions)
Social:
Instagram - @misophoniapodcast
Facebook - misophoniapodcast
Twitter/X - @misophoniashow
SoQuiet - Misophonia Advocacy: https://soquiet.org
Support the show
Julianna Ianni from Proscia joins me today.

What we discuss with Julianna:
Her background in biomedical engineering and biomedical imaging
Her transition from radiology to pathology and the overlap between the two fields
Her role as VP of AI Research and Development at Proscia, focusing on AI applications in digital pathology
An overview of foundation models
Embeddings and their role in feature extraction from data
Concentriq Embeddings by Proscia
The importance of data diversity in training AI models
The importance of collaboration with other companies in developing AI solutions
Future possibilities for foundation models in pathology, including multimodal applications

Links for this episode:
Health Podcast Network
LabVine Learning
Dress A Med scrubs
Digital Pathology Club
Concentriq Embeddings Overview
"How Proscia's AI R&D Team Leveraged Foundation Models at Scale to Build 80 Breast Cancer Biomarker Prediction Models in Under 24 Hours"
Foundation Models For Pathology AI Development At Your Fingertips
Accelerating Tumor Segmentation Model Development with Concentriq Embeddings
The Hidden Costs of AI Development in Pathology and How Concentriq Embeddings Helps Life Sciences Organizations Mitigate Them

People of Pathology Podcast:
Twitter
Instagram
Jonathan Frankle is the Chief AI Scientist at Databricks ($43B), which he joined through the acquisition of MosaicML in July 2023. Databricks has over 12,000 customers on the cutting edge of AI; Jonathan works to anticipate their needs and offer solutions even as the tech is rapidly evolving.

[0:00] Intro
[0:52] Incentives and Team Motivation at Databricks
[2:40] The Evolution of AI Models: Transformers vs. LSTMs
[5:27] Mosaic and Databricks: A Strategic Merger
[7:31] Guidance on AI Model Training and Fine-Tuning
[11:11] Building Effective AI Evaluations
[16:02] Domain-Specific AI Models and Their Importance
[19:37] The Future of AI: Challenges and Opportunities
[25:07] Ethical Considerations and Human-AI Interaction
[29:13] Customer Collaboration and AI Implementation
[30:45] Navigating AI Tools and Techniques
[35:41] The Role of Open Source Models
[36:46] AI Infrastructure and Partnerships
[48:27] Academia's Role in AI Research
[52:09] Ethics and Policy in AI
[57:47] Quickfire

With your co-hosts:
@jacobeffron - Partner at Redpoint, Former PM at Flatiron Health
@patrickachase - Partner at Redpoint, Former ML Engineer at LinkedIn
@ericabrescia - Former COO of GitHub, Founder of Bitnami (acq'd by VMware)
@jordan_segall - Partner at Redpoint
Danny joins Katie in London for the Times Tech Summit, where the co-founder and boss of Google DeepMind, Sir Demis Hassabis, sets out his startling view that AI has the potential "to cure all diseases" and could "have general human cognitive abilities within ten years." But fundamentally, do we really understand what AI is? Professor Neil Lawrence, the inaugural DeepMind Professor of Machine Learning at Cambridge University; Faculty AI CEO Marc Warner; and Naila Murray, Director of AI Research at Meta, share their views. And Danny and Katie ponder whether AI mania could be more about money than the mind. Hosted on Acast. See acast.com/privacy for more information.
Associate Professor at the University of Minnesota Law School and Lawfare Senior Editor Alan Rozenshtein sits down with Kevin Frazier, Assistant Professor of Law at St. Thomas University College of Law, Co-Director of the Center for Law and AI Risk, and a Tarbell Fellow at Lawfare. They discuss a new paper that Kevin has published as part of Lawfare's ongoing Digital Social Contract paper series titled “Prioritizing International AI Research, Not Regulations.”Frazier sheds light on the current state of AI regulation, noting that it's still in its early stages and is often under-theorized and under-enforced. He underscores the need for more targeted research to better understand the specific risks associated with AI models. Drawing parallels to risk research in the automobile industry, Frazier also explores the potential role of international institutions in consolidating expertise and establishing legitimacy in AI risk research and regulation.To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/c/trumptrials.Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.