Podcasts about ML

  • 3,781 podcasts
  • 12,997 episodes
  • 40 min average duration
  • 2 new episodes daily
  • Latest episode: Jun 2, 2025


Latest podcast episodes about ML

DanceSpeak
213 - Gerran Reese - Dance Industry Truths, Social Media, and Staying Rooted

DanceSpeak

Play Episode Listen Later Jun 2, 2025 107:02


In episode 213, host Galit Friedlander and guest Gerran Reese (Beyoncé, Kaytranada, Dancing With the Stars, Nike, Monsters of Hip-Hop) deconstruct virality in the dance world, Gerran's journey from a young working dancer in PDX to a sought-after teacher in LA and globally, and the deeper work of staying true to yourself in an industry that doesn't always make it easy.

Follow Galit: Instagram - https://www.instagram.com/gogalit
Website - https://www.gogalit.com/
On-Demand Workout Programs - https://galit-s-school-0397.thinkific.com/collections

You can connect with Gerran Reese on Instagram. Listen to DanceSpeak on Apple Podcasts and Spotify.

Machine Learning Guide
MLG 036 Autoencoders

Machine Learning Guide

Play Episode Listen Later May 30, 2025 65:55


Autoencoders are neural networks that compress data into a smaller "code," enabling dimensionality reduction, data cleaning, and lossy compression by reconstructing original inputs from this code. Advanced autoencoder types, such as denoising, sparse, and variational autoencoders, extend these concepts for applications in generative modeling, interpretability, and synthetic data generation.

Links:
Notes and resources at ocdevel.com/mlg/36
Try a walking desk - stay healthy & sharp while you learn & code
Build the future of multi-agent software with AGNTCY
Thanks to T.J. Wilder from intrep.io for recording this episode!

Fundamentals of Autoencoders
Autoencoders are neural networks designed to reconstruct their input data by passing it through a compressed intermediate representation called a "code." The architecture typically follows an hourglass shape: a wide input and output separated by a narrower bottleneck layer that enforces information compression. The encoder compresses input data into the code, while the decoder reconstructs the original input from this code.

Comparison with Supervised Learning
Unlike traditional supervised learning, where the output differs from the input (e.g., image classification), autoencoders use the same vector for both input and output.

Use Cases: Dimensionality Reduction and Representation
Autoencoders perform dimensionality reduction by learning compressed forms of high-dimensional data, making it easier to visualize and process data with many features. The compressed code can be used for clustering, visualization in 2D or 3D graphs, and as input to subsequent machine learning models, saving computational resources and improving scalability.

Feature Learning and Embeddings
Autoencoders enable feature learning by extracting abstract representations from the input data, similar in concept to learned embeddings in large language models (LLMs).
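The encoder-bottleneck-decoder idea can be illustrated with a tiny tied-weight linear autoencoder. This is a sketch of mine, not code from the episode: the data, starting weights, and learning rate are invented for illustration, and a one-number linear code is the simplest possible bottleneck (for linear tied weights it recovers the principal direction of the data).

```python
# Minimal tied-weight linear autoencoder (illustrative sketch, not the
# episode's code). Encoder: code = w . x; decoder reuses w: recon = code * w.
# Trained by gradient descent to reconstruct 2-D points through a 1-number code.

def encode(w, x):
    return w[0] * x[0] + w[1] * x[1]           # project the input onto w

def decode(w, code):
    return [code * w[0], code * w[1]]          # expand the code back to 2-D

def loss(w, data):
    total = 0.0
    for x in data:
        r = decode(w, encode(w, x))
        total += (x[0] - r[0]) ** 2 + (x[1] - r[1]) ** 2
    return total

def train(data, steps=3000, lr=0.002):
    w = [0.6, 0.1]                             # arbitrary starting direction
    for _ in range(steps):
        g = [0.0, 0.0]
        for x in data:
            c = encode(w, x)
            e = [x[0] - c * w[0], x[1] - c * w[1]]   # reconstruction error
            ew = e[0] * w[0] + e[1] * w[1]
            # analytic gradient of the squared reconstruction error w.r.t. w
            g[0] += -2 * (ew * x[0] + c * e[0])
            g[1] += -2 * (ew * x[1] + c * e[1])
        w = [w[0] - lr * g[0], w[1] - lr * g[1]]
    return w

# Points scattered near the line y = x: effectively one-dimensional data,
# so a single code number reconstructs them almost perfectly.
data = [[1.0, 1.1], [2.0, 1.9], [3.0, 3.05], [-1.0, -0.95], [-2.0, -2.1]]
w = train(data)
```

After training, `w` settles near the unit vector along y = x, which is the hourglass intuition in miniature: the bottleneck keeps the one direction that explains most of the data.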
While effective for many data types, autoencoder-based encodings are less suited to variable-length text than LLM embeddings.

Data Search, Clustering, and Compression
By reducing dimensionality, autoencoders facilitate vector search, efficient clustering, and similarity retrieval. The compressed codes enable lossy compression analogous to audio codecs like MP3, with the difference that autoencoders lack domain-specific optimizations for preserving perceptually important data.

Reconstruction Fidelity and Loss Types
Loss functions in autoencoders compare reconstructed outputs to the original inputs, often using different loss types depending on the input variable types (e.g., Boolean vs. continuous). Compression via autoencoders is typically lossy: some information is lost during reconstruction, and which information is lost may not be easily controlled.

Outlier Detection and Noise Reduction
Since reconstruction errors tend to move data toward the mean, autoencoders can be used to reduce noise and to identify outliers. A large reconstruction error can signal an atypical sample in the dataset.

Denoising Autoencoders
Denoising autoencoders are trained to reconstruct clean data from noisy inputs, making them valuable for image and audio denoising as well as signal smoothing. Iterative denoising also forms the basis of diffusion models, where repeated application of a denoising autoencoder gradually turns random noise into structured output.

Data Imputation
Autoencoders can aid data imputation by filling in missing values: train on complete records, then reconstruct missing entries for incomplete records using the learned code representations. This approach leverages the model's propensity to output 'plausible' values learned from the overall data structure.
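The reconstruction-error outlier idea can be sketched in a few lines (my own illustration, with invented numbers): fix a learned direction, reconstruct each sample through the one-number code, and flag the sample the model reconstructs worst.

```python
import math

# Score each sample by its reconstruction error under a (here, fixed)
# linear autoencoder: samples the model reconstructs poorly are flagged
# as outliers. The unit direction below stands in for learned weights.
w = [1 / math.sqrt(2), 1 / math.sqrt(2)]

def reconstruction_error(x):
    code = w[0] * x[0] + w[1] * x[1]           # encode
    recon = [code * w[0], code * w[1]]         # decode
    return (x[0] - recon[0]) ** 2 + (x[1] - recon[1]) ** 2

# Three on-pattern points near y = x, and one that breaks the pattern.
samples = [[1.0, 1.05], [2.0, 2.1], [3.0, 2.9], [2.0, -2.0]]
errors = [reconstruction_error(x) for x in samples]
outlier = samples[errors.index(max(errors))]
```

The off-pattern point gets by far the largest reconstruction error, which is exactly the signal used for outlier detection.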
Cryptographic Analogy
The separation of encoding and decoding invites parallels to encryption and decryption, though autoencoders are neither intended nor suitable for secure communication due to their inherent lossiness.

Advanced Architectures: Sparse and Overcomplete Autoencoders
Sparse autoencoders use constraints to encourage code representations with only a few active values, increasing interpretability and explainability. Overcomplete autoencoders have a code size larger than the input, often in applications that require extracting distinct, interpretable features from complex model states.

Interpretability and Research Example
Research such as Anthropic's "Towards Monosemanticity" applies sparse autoencoders to the internal activations of language models to identify interpretable features correlated with concrete linguistic or semantic concepts. These models can be used to monitor and potentially control model behavior (e.g., detecting specific language usage or enforcing safety constraints) by manipulating feature activations.

Variational Autoencoders (VAEs)
VAEs extend the autoencoder architecture by encoding inputs as distributions (means and standard deviations) instead of point values, enforcing a continuous, normalized code space. Decoding from sampled points within this space enables synthetic data generation, since any point near the center of the code space corresponds to plausible data according to the model.

VAEs for Synthetic Data and Rare Event Amplification
VAEs are powerful in domains with sparse data or rare events (e.g., healthcare), allowing generation of synthetic samples representing underrepresented cases. They can improve model performance by augmenting datasets without requiring changes to existing model pipelines.
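The VAE encoding-as-distribution idea can be sketched in a few lines. This is illustrative only (the function names and numbers are mine): the "reparameterization trick" writes a sampled code as mu + sigma * eps so gradients can flow through mu and sigma, and a KL term keeps codes near a standard normal so the code space stays continuous and centered.

```python
import math
import random

# Sketch of the VAE sampling step (illustrative, not the episode's code).
# The encoder outputs a mean and standard deviation instead of a point code;
# the code is then sampled as z = mu + sigma * eps, with the noise eps drawn
# outside the network so the step stays differentiable in mu and sigma.
def sample_code(mu, sigma, eps=None):
    if eps is None:
        eps = random.gauss(0.0, 1.0)
    return mu + sigma * eps

# KL divergence of N(mu, sigma^2) from N(0, 1): the regularizer that pulls
# codes toward the center of the code space, where decoded samples are
# most plausible.
def kl_to_standard_normal(mu, sigma):
    return 0.5 * (mu ** 2 + sigma ** 2 - 1.0 - 2.0 * math.log(sigma))
```

Sampling fresh codes near the origin and running them through the decoder is what turns a trained VAE into a synthetic-data generator.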
Conditional Generative Techniques
Conditional autoencoders extend VAEs by allowing controlled generation based on specified conditions (e.g., generating a house with a pool), via additional decoder inputs and conditional loss terms.

Practical Considerations and Limitations
Training autoencoders and their variants requires computational resources, and their stochastic training can produce different code representations across runs. Lossy reconstruction, the lack of domain-specific optimizations, and limited code interpretability restrict some use cases, particularly where exact data preservation or meaningful decompositions are required.
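A hypothetical sketch of the conditional-input mechanism (the weights and condition encoding are invented): the decoder consumes the concatenation of the code and a condition vector, so fixing the condition steers what gets generated.

```python
# Illustrative conditional decoder (not from the episode): it sees the
# sampled code *and* a condition vector (e.g., "has_pool = 1"), so the same
# code decodes differently depending on the condition.
def conditional_decode(code, condition, weights):
    # A linear decoder over the concatenated [code, condition] input.
    inputs = code + condition
    return [sum(w * v for w, v in zip(row, inputs)) for row in weights]

weights = [[0.5, 1.0], [1.0, -0.5]]                      # 2 outputs, 2 inputs
out_with = conditional_decode([0.8], [1.0], weights)     # condition on
out_without = conditional_decode([0.8], [0.0], weights)  # condition off
```

Same code, different condition, different output: that is the whole trick behind "generate a house with a pool."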

The Industrial Talk Podcast with Scott MacKenzie
Lisa Pansing with Fluke Reliability

The Industrial Talk Podcast with Scott MacKenzie

Play Episode Listen Later May 27, 2025 24:48


Industrial Talk is onsite at Xcelerate 2025 talking to Lisa Pansing, Key Account Manager at Fluke Reliability, about their "Powerful Vibration Asset Management Solution". Scott MacKenzie promotes the MEDevice Boston 2025 event, organized by Informa, highlighting its importance for the medical device industry. At the Xcelerate 2025 event in Austin, Texas, hosted by Fluke Reliability, Lisa, who works with the Azima brand, discusses advancements in maintenance, reliability, and operations. She emphasizes continuous improvement, the role of AI and ML in predictive maintenance, and the benefits of Azima's wireless and wired solutions for vibration and temperature monitoring, and touches on the future potential of augmented reality in maintenance. Scott encourages listeners to connect with industry professionals and stay updated on industry trends.

Action Items
[ ] @Scott MacKenzie - Promote the MEDevice Boston event on September 30 - October 1, 2025.
[ ] Reach out to Lisa on LinkedIn to discuss Azima's solutions further.

Outline

MEDevice Boston Event Overview
Scott MacKenzie introduces the MEDevice Boston event, scheduled for September 30 through October 1, 2025. The event is organized by Informa and aimed at professionals in the medical device industry. Scott emphasizes the importance of education, collaboration, and innovation at the event, describing it as a significant opportunity for networking and learning within the industry.

Introduction to Industrial Talk Podcast
Scott thanks the audience for joining the top industry-related podcast in the universe. The podcast celebrates industry professionals who innovate, collaborate, and solve problems daily. Scott mentions the podcast's current location: Xcelerate 2025 in Austin, Texas, organized by Fluke Reliability.

Interview with Lisa from Fluke Reliability
Scott introduces Lisa, who is with the Azima brand at Fluke Reliability.
Lisa shares her background of more than 20 years in maintenance, reliability, and operations. Discussion of advancements in technology, such as AI, augmented reality, and virtual reality, in the manufacturing world. Scott and Lisa talk about the ongoing conversations around reliability and the slow adoption of wireless technology by some companies.

Challenges in Reliability and Maintenance
Lisa discusses the importance of understanding each company's reliability journey. Scott and Lisa talk about the challenges of reactive thinking and the need for continuous training and improvement. The conversation touches on the importance of providing more than just financial compensation to retain employees, and Scott emphasizes the need for strong technology solutions to address the challenges of attracting and retaining talent.

Azima Solutions and Their Impact
Lisa explains the history and capabilities of Azima DLI, which began with vibration analysis for the Navy in the 1960s. Discussion of the solutions offered by Azima, including wireless and wired options for vibration and temperature monitoring. Scott and Lisa talk about the importance of catching equipment issues early to prevent catastrophic failures, and highlight the benefits of Azima's cloud-based solutions and the role of AI and ML in providing actionable insights. ...

ML Sports Platter
NFL Draft QB Recap.

ML Sports Platter

Play Episode Listen Later May 27, 2025 20:33


00:00-25:00: ML recaps the quarterbacks taken in the 2025 NFL Draft.

Dr. Joseph Mercola - Take Control of Your Health
How Vitamin D Protects Your Brain from Parkinson's - AI Podcast

Dr. Joseph Mercola - Take Control of Your Health

Play Episode Listen Later May 26, 2025 7:24


Story at-a-glance
  • Vitamin D may play a protective role in Parkinson's disease, with clinical trials showing improvements in balance and mobility for patients taking supplements of 1,000 to 10,000 IU daily
  • The "sunshine vitamin" has neuroprotective effects in the brain, particularly in a key area affected by Parkinson's that helps produce dopamine
  • Vitamin D deficiency is common in Parkinson's patients and contributes to disease progression, as this nutrient helps regulate inflammation and protects brain cells from damage
  • Sunlight is the optimal source of vitamin D; ideally get daily exposure until just before your skin turns slightly pink - though those with darker skin need longer exposure times
  • If sun exposure is limited, vitamin D supplements are useful to help maintain optimal levels (60 to 80 ng/mL); take supplements with healthy fats and monitor your levels with regular blood tests

Machine Learning Street Talk
"Blurring Reality" - Chai's Social AI Platform (SPONSORED)

Machine Learning Street Talk

Play Episode Listen Later May 26, 2025 50:59


"Blurring Reality" - Chai's Social AI Platform (sponsored)

This episode of MLST explores the groundbreaking work of Chai, a social AI platform that quietly built one of the world's largest AI companion ecosystems before ChatGPT's mainstream adoption. With over 10 million active users and just 13 engineers serving 2 trillion tokens per day, Chai discovered the massive appetite for AI companionship through serendipity while searching for product-market fit.

Chai sponsored this show because they want to hire amazing engineers.

CAREER OPPORTUNITIES AT CHAI
Chai is actively hiring in Palo Alto with competitive compensation ($300K-$800K + equity) for roles including AI Infrastructure Engineers, Software Engineers, Applied AI Researchers, and more. Fast-track qualification is available for candidates with significant product launches, open source contributions, or entrepreneurial success.
https://www.chai-research.com/jobs/

The conversation with founder William Beauchamp and engineers Tom Lu and Nischay Dhankhar covers Chai's innovative technical approaches, including reinforcement learning from human feedback (RLHF), model blending techniques that combine smaller models to outperform larger ones, and their unique infrastructure challenges running exaflop-class compute.

SPONSOR MESSAGES:
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focused on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers in Zurich and SF.
Go to https://tufalabs.ai/

Key themes explored include:
- The ethics of AI engagement optimization and attention hacking
- Content moderation at scale with a lean engineering team
- The shift from AI as utility tool to AI as social companion
- How users form deep emotional bonds with artificial intelligence
- The broader implications of AI becoming a social medium

We also examine OpenAI's recent pivot toward companion AI with April's new GPT-4o, suggesting a fundamental shift in how we interact with artificial intelligence - from utility-focused tools to companion-like experiences that blur the lines between human and artificial intimacy. The episode also covers Chai's unconventional approach to hiring only top-tier engineers, their bootstrap funding strategy focused on user revenue over VC funding, and their rapid experimentation culture where one in five experiments succeeds.

TOC:
00:00:00 - Introduction: Steve Jobs' AI Vision & Chai's Scale
00:04:02 - Chapter 1: Simulators - The Birth of Social AI
00:13:34 - Chapter 2: Engineering at Chai - RLHF & Model Blending
00:21:49 - Chapter 3: Social Impact of GenAI - Ethics & Safety
00:33:55 - Chapter 4: The Lean Machine - 13 Engineers, Millions of Users
00:42:38 - Chapter 5: GPT-4o Becoming a Companion - OpenAI's Pivot
00:50:10 - Chapter 6: What Comes Next - The Future of AI Intimacy

TRANSCRIPT: https://www.dropbox.com/scl/fi/yz2ewkzmwz9rbbturfbap/CHAI.pdf?rlkey=uuyk2nfhjzezucwdgntg5ubqb&dl=0

ML Sports Platter
Freddie Freeman. Best in Baseball?

ML Sports Platter

Play Episode Listen Later May 22, 2025 14:49


00:00-15:00: ML says Freddie Freeman is right there with Ohtani and Judge as the best in MLB.

Cell & Gene: The Podcast
Harnessing AI and Synthetic Biology for Cell Therapies with Generate:Biomedicines' Dr. Alex Snyder

Cell & Gene: The Podcast

Play Episode Listen Later May 22, 2025 19:28


We love to hear from our listeners. Send us a message.

Host Erin Harris talks to Generate:Biomedicines' EVP of R&D, Dr. Alex Snyder, about the convergence of AI, machine learning (ML), and synthetic biology in the development of next-generation therapies. They cover how AI is transforming drug discovery by enabling the rapid design and optimization of therapeutic candidates, particularly in complex fields like immuno-oncology and cell therapy. Dr. Snyder explains how, in the context of CAR-T therapies, AI-driven approaches are used to design and refine each component of the CAR construct. And Dr. Snyder sets the stage for a broader conversation about how the integration of AI and synthetic biology is not only accelerating drug development timelines but also expanding the realm of what's possible in cell-based therapeutics.

Subscribe to the podcast! Apple | Spotify | YouTube

The Steve Gruber Show
J.C. Sheppard | China says fentanyl issue is responsibility of the United States

The Steve Gruber Show

Play Episode Listen Later May 21, 2025 10:34


J.C. Sheppard is the founder of The Fentanyl Test, the world's first FDA-CLEARED, CLIA-waived dip-card test for over-the-counter use and the world's only harm-reduction test strip to reach the cutoff level of 1 ng/mL - also the only such test that does not require dilution of the tested substance. China says the fentanyl issue is the responsibility of the United States.

The Future of Customer Engagement and Experience Podcast
AI in esports: How generative AI + data analytics help Team Liquid win

The Future of Customer Engagement and Experience Podcast

Play Episode Listen Later May 21, 2025 8:07


Esports may be built on fast reflexes, but sustained victory comes from deep strategy. In this episode, we explore how Team Liquid, one of the biggest names in competitive gaming, is redefining the game using AI and analytics. Powered by SAP's Business Technology Platform, Team Liquid is tapping into over 6 million match records to drive smarter decisions during gameplay - and beyond. From AI-assisted draft picks to automated teamfight analysis, this episode breaks down the tools, tactics, and outcomes of Liquid's AI-driven transformation.

What You'll Learn in This Episode:

The Joe Reis Show
Ryan Russon - Practical ML Engineering

The Joe Reis Show

Play Episode Listen Later May 21, 2025 67:12


Ryan Russon is an ML Engineer. He stopped by my house for a practical and grounded chat about ML and AI. Enjoy!

---------

Join dbt Labs May 28 for the dbt Launch Showcase to hear from executives and product leaders about the latest features landing in dbt. See firsthand how these features will empower data practitioners and organizations in the age of AI. Thanks to dbt Labs for sponsoring this episode.

MLOps.community
A Candid Conversation Around MCP and A2A // Rahul Parundekar and Sam Partee // #316 SF Live

MLOps.community

Play Episode Listen Later May 21, 2025 64:42


Demetrios, Sam Partee, and Rahul Parundekar unpack the chaos of AI agent tools and the evolving world of MCP (Model Context Protocol). With sharp insights and plenty of laughs, they dig into tool permissions, security quirks, agent memory, and the messy path to making agents actually useful.

// Bio

Sam Partee
Sam Partee is the CTO and Co-Founder of Arcade AI. Previously a Principal Engineer leading the Applied AI team at Redis, Sam led the effort in creating the ecosystem around Redis as a vector database. He is a contributor to multiple OSS projects, including LangChain, DeterminedAI, LlamaIndex, and Chapel, among others. While at Cray/HPE he created the SmartSim AI framework, which is now used at national labs around the country to integrate HPC simulations like climate models with AI.

Rahul Parundekar
Rahul Parundekar is the founder of AI Hero. He graduated with a Master's in Computer Science from USC in 2010 and embarked on a career focused on Artificial Intelligence. From 2010-2017, he worked as a Senior Researcher at Toyota ITC on agent autonomy within vehicles. His journey continued as the Director of Data Science at Figure Eight (later acquired by Appen), where he and his team developed an architecture supporting over 36 ML models and managing over a million predictions daily. Since 2021, he has been working on AI Hero, aiming to democratize AI access, while also consulting on LLMOps (Large Language Model Operations) and AI system scalability.
Beyond his full-time role as a founder, he is passionate about community engagement, actively organizes MLOps events in SF, and contributes educational content on RAG and LLMOps at learn.mlops.community.

// Related Links
Websites: arcade.dev, aihero.studio

~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Rahul on LinkedIn: /rparundekar
Connect with Sam on LinkedIn: /sampartee

Timestamps:
[00:00] Agents & Tools, Explained (Without Melting Your Brain)
[09:51] MVP Servers: Why Everything's on Fire (and How to Fix It)
[13:18] Can We Actually Trust the Protocol?
[18:13] KYC, But Make It AI (and Less Painful)
[25:25] Web Automation Tests: The Bugs Strike Back
[28:18] MCP Dev: What Went Wrong (and What Saved Us)
[33:53] Social Login: One Button to Rule Them All
[39:33] What Even Is an AI-Native Developer?
[42:21] Betting Big on Smarter Models (High Risk, High Reward)
[51:40] Harrison's Bold New Tactic (With Real-Life Magic Tricks)
[55:31] Async Task Handoffs: Herding Cats, But Digitally
[1:00:37] Getting AI to Actually Help Your Workflow
[1:03:53] The Infamous Varma System Error (And How We Dodge It)

Machine Ethics podcast
100. DeepDive: AI and the Environment

Machine Ethics podcast

Play Episode Listen Later May 20, 2025 30:39


This is our 100th episode! A super special look at AI and the environment: we interviewed four experts for this DeepDive episode. We chatted about water stress; the energy usage of AI systems and data centres; using AI for fossil fuel discovery; the geopolitical nature of AI; GenAI vs other ML algorithms for energy use; demanding transparency on the energy used to train and operate AI; more AI regulation for carbon consumption; things we can change today, like picking renewable hosting solutions and publishing your data; including the environment whenever you do "responsible AI"; considering who controls the technology and what they want; and more...

[KBS] 스포츠 스포츠
(05/19/Mon) [Sports Sports] 2025 Pro Baseball: 4 Million Fans in Just 230 Games_Strike Zone

[KBS] 스포츠 스포츠

Play Episode Listen Later May 20, 2025 27:50


Sports Timeline ▸ Coach Lee Sang-min to lead KCC through 2028 ▸ Kim Hye-seong survives a fierce roster battle to stay in MLB ▸ Texas officially announces the signing of Kim Seong-jun, who joins as an international free agent ▸ Western Conference Finals set: Oklahoma City vs. Minnesota Strike Zone ▸ 2025 pro baseball draws 4 million fans in just 230 games - a KBO League record ▸ Hanwha sells out 18 straight home games; Samsung sets an attendance record ▸ A historic standings race, with Hanwha and Lotte occupying the top ranks ▸ Head shots, brushback pitches, and bench-clearing incidents: an eventful Classic Series ▸ Hanwha's Ponce strikes out 18 - tying Sun Dong-yol's record after 34 years ▸ "May in Gwangju" heats up again: KIA posts its first four-game winning streak of the season ▸ Choi Jeong opens the first 500-home-run era ▸ NC plays "home games" for the first time in a month: a three-game series in Ulsan from the 16th ▸ A first in KBO history: will the LG-Lotte-Hanwha top three last? Guests: reporters Kim Hyo-kyung (JoongAng Ilbo) and Jung Se-young (Munhwa Ilbo). Host: announcer Nam Hyun-jong.

The Gary Null Show
The Gary Null Show 5.16.25

The Gary Null Show

Play Episode Listen Later May 16, 2025 57:11


HEALTH NEWS
  • Black tea kombucha reduces harmful gut microbes linked to obesity
  • Adding nutrients to the diet may benefit people with COPD
  • Newborn vitamin D deficiency linked to higher risk of ADHD, schizophrenia, and autism
  • How 7,000 steps a day could help reduce your risk of cancer
  • Always tired? A mini-stroke you didn't notice could be why
  • Cancer risk significantly lower when vitamin D levels hit 40 ng/mL

Chasing Heroine: On This Day, Recovery Podcast
Strung Out, Trafficked as a Sex Worker and Stuck Homeless in Hawaii for Ten Years, Endocarditis and more, PLUS a Successful Methadone Taper from 170 ml, Christina Garofalo is a SURVIVOR

Chasing Heroine: On This Day, Recovery Podcast

Play Episode Listen Later May 15, 2025 94:56


Note from Jeannine: Christina's story is one of my favorite all-time episodes of the show. Just an incredible story of strength and resilience. This is an encore run of her episode; new episodes return next week after my TEDx Talk! Thank you for being patient with me, I love you guys!

TRIGGER WARNING: sex trafficking, domestic abuse, assault, SA, and pregnancy termination

My conversation today with Christina Garofalo will have you both laughing and crying. Christina is a survivor in the truest form of the word. I was blown away by her vulnerability, authenticity, and the strength she has shown in escaping the world she was trapped in, making it back to her hometown and family in San Diego, AND tapering down from an incredibly high dose of methadone (170 mL). Christina now has a sponsor, works steps, goes on twelve-step retreats, and does EMDR therapy - she has worked so hard to find healing, peace, and safety, and I am personally so proud of her.

Connect with Christina on Instagram
Connect with Christina on TikTok
DM me on Instagram
Message me on Facebook
Listen AD FREE & work out with me on Patreon
Connect with me on TikTok
Email me chasingheroine@gmail.com
See you next week!

Machine Learning Street Talk
Google AlphaEvolve - Discovering new science (exclusive interview)

Machine Learning Street Talk

Play Episode Listen Later May 14, 2025 73:58


Today Google DeepMind released AlphaEvolve: a Gemini coding agent for algorithm discovery. It beat the famous Strassen algorithm for matrix multiplication, set 56 years ago. Google has been killing it recently. We had early access to the paper and interviewed the researchers behind the work.

AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms
https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/
Authors: Alexander Novikov*, Ngân Vũ*, Marvin Eisenberger*, Emilien Dupont*, Po-Sen Huang*, Adam Zsolt Wagner*, Sergey Shirobokov*, Borislav Kozlovskii*, Francisco J. R. Ruiz, Abbas Mehrabian, M. Pawan Kumar, Abigail See, Swarat Chaudhuri, George Holland, Alex Davies, Sebastian Nowozin, Pushmeet Kohli, Matej Balog*
(* indicates equal contribution or special designation, if defined elsewhere)

SPONSOR MESSAGES:
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focused on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/

AlphaEvolve works like a very smart, tireless programmer. It uses powerful AI language models (like Gemini) to generate ideas for computer code. Then it uses an "evolutionary" process - like survival of the fittest for programs. It tries out many different program ideas, automatically tests how well they solve a problem, and then uses the best ones to inspire new, even better programs. Beyond this mathematical breakthrough, AlphaEvolve has already been used to improve real-world systems at Google, such as making their massive data centers run more efficiently and even speeding up the training of the AI models that power AlphaEvolve itself.
The discussion also covers how humans work with AlphaEvolve, the challenges of making AI discover things, and the exciting future of AI helping scientists make new discoveries. In short, AlphaEvolve is a powerful new AI tool that can invent new algorithms and solve complex problems, showing how AI can be a creative partner in science and engineering.

Guests:
Matej Balog: https://x.com/matejbalog
Alexander Novikov: https://x.com/SashaVNovikov

REFS:
MAP-Elites [Jean-Baptiste Mouret, Jeff Clune]
https://arxiv.org/abs/1504.04909
FunSearch [Bernardino Romera-Paredes, Mohammadamin Barekatain, Alexander Novikov, Matej Balog, M. Pawan Kumar, Emilien Dupont, Francisco J. R. Ruiz, Jordan S. Ellenberg, Pengming Wang, Omar Fawzi, Pushmeet Kohli & Alhussein Fawzi]
https://www.nature.com/articles/s41586-023-06924-6

TOC:
[00:00:00] Introduction: AlphaEvolve's Breakthroughs, DeepMind's Lineage, and Real-World Impact
[00:12:06] Introducing AlphaEvolve: Concept, Evolutionary Algorithms, and Architecture
[00:16:56] Search Challenges: The Halting Problem and Enabling Creative Leaps
[00:23:20] Knowledge Augmentation: Self-Generated Data, Meta-Prompting, and Library Learning
[00:29:08] Matrix Multiplication Breakthrough: From Strassen to AlphaEvolve's 48 Multiplications
[00:39:11] Problem Representation: Direct Solutions, Constructors, and Search Algorithms
[00:46:06] Developer Reflections: Surprising Outcomes and Superiority over Simple LLM Sampling
[00:51:42] Algorithmic Improvement: Hill Climbing, Program Synthesis, and Intelligibility
[01:00:24] Real-World Application: Complex Evaluations and Robotics
[01:05:39] Role of LLMs & Future: Advanced Models, Recursive Self-Improvement, and Human-AI Collaboration
[01:11:22] Resource Considerations: Compute Costs of AlphaEvolve

This is a trial of posting videos on Spotify - thoughts? Email me or chat in our Discord.
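The generate-test-select loop described above can be sketched as a toy evolutionary search. This is my illustration, not DeepMind's code: real AlphaEvolve mutates programs with an LLM and scores them with automatic evaluators, whereas here the "programs" are just parameter vectors and mutation is Gaussian noise.

```python
import random

# Toy sketch of an evolutionary search loop (mine, not AlphaEvolve's code):
# keep a small population of candidate "programs" (here, parameter vectors),
# score each with an automatic evaluator, and let the best candidates seed
# mutated offspring for the next generation.
random.seed(0)

TARGET = [3.0, -1.0, 2.0]  # stand-in for "the behaviour we want to discover"

def fitness(candidate):
    # Higher is better: negative squared distance to the target behaviour.
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def mutate(candidate):
    # Small random edits, standing in for LLM-proposed code changes.
    return [c + random.gauss(0.0, 0.1) for c in candidate]

def evolve(generations=200, population_size=20):
    population = [[0.0, 0.0, 0.0] for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:5]                  # survival of the fittest
        population = parents + [mutate(random.choice(parents))
                                for _ in range(population_size - 5)]
    return max(population, key=fitness)

best = evolve()
```

Even this crude loop reliably climbs toward the target; the interesting part of AlphaEvolve is making the mutation step as smart as a Gemini-powered programmer.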

The ERP Advisor
The ERP Minute Episode 186 - May 13th, 2025

The ERP Advisor

Play Episode Listen Later May 14, 2025 3:00


In ERP news this week, Oracle announced new cloud services - Oracle Banking Retail Lending Servicing Cloud Service and Oracle Banking Collections Cloud Service; Sage announced significant progress in its ongoing collaboration with Amazon Web Services (AWS); OneStream announced financial results for its first quarter, ended March 31, 2025; and Epicor announced, at its Epicor Insights 2025 user conference, the general availability of its latest agentic AI capabilities in Epicor Prism and the predictive ML of Epicor Grow AI.

Connect with us!
https://www.erpadvisorsgroup.com
866-499-8550
LinkedIn: https://www.linkedin.com/company/erp-advisors-group
Twitter: https://twitter.com/erpadvisorsgrp
Facebook: https://www.facebook.com/erpadvisors
Instagram: https://www.instagram.com/erpadvisorsgroup
Pinterest: https://www.pinterest.com/erpadvisorsgroup
Medium: https://medium.com/@erpadvisorsgroup

PING
DFOH,MVP & GILL: New ways of looking at BGP

PING

Play Episode Listen Later May 14, 2025 37:08


In this episode of PING, Professor Cristel Pelsser, who holds the chair of critical embedded systems at UCLouvain, discusses her work measuring BGP, and in particular the system described in the 2024 SIGCOMM "best paper" award-winning research "The Next Generation of BGP Data Collection Platforms". Cristel and her collaborators Thomas Alfroy, Thomas Holterbach, Thomas Krenc, and K. C. Claffy have built a system they call GILL, available on the web at https://bgproutes.io. The work also features a new service called MVP, which helps find the "most valuable vantage point" in the BGP collection system for your particular needs.

GILL has been designed for scale and will be capable of encompassing thousands of peerings. It also takes an innovative approach to holding BGP data, focused on removing demonstrably redundant information, and therefore achieves significantly higher compression of the data stream than, for example, holding MRT files. The MVP system uses machine learning methods to aid in selecting the most advantageous data collection point for a researcher's specific needs; applying ML here allows a large amount of data to be managed, with changes reflected in the selection of vantage points. Their system has already been able to support DFOH, an approach to detecting forged-origin attacks from the peering relationships seen online in BGP, as opposed to the peering expected from both location and declarations of intent inside systems like PeeringDB.
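As one illustration of what "removing demonstrably redundant information" can mean (a sketch of mine, not GILL's actual algorithm): drop any BGP announcement that merely repeats the previous state for the same peer and prefix, since it adds no routing information to the stream.

```python
# Illustrative only (not GILL's pipeline): one simple notion of "demonstrably
# redundant" BGP data is an announcement that repeats the previous state for
# the same (peer, prefix) pair. Dropping such duplicates shrinks the stream
# without losing any routing information.
def deduplicate(updates):
    last_state = {}   # (peer, prefix) -> last announced attributes
    kept = []
    for peer, prefix, attributes in updates:
        if last_state.get((peer, prefix)) != attributes:
            kept.append((peer, prefix, attributes))
            last_state[(peer, prefix)] = attributes
    return kept

# Hypothetical update stream: the middle announcement changes nothing.
updates = [
    ("peer1", "203.0.113.0/24", "path:64500 64501"),
    ("peer1", "203.0.113.0/24", "path:64500 64501"),  # redundant repeat
    ("peer1", "203.0.113.0/24", "path:64500 64502"),  # real change: kept
]
```

Real collection platforms must of course decide redundancy across many peers and attribute sets, which is where the scale of the problem comes from.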

ML Sports Platter
ML Archive: Say Yes Buffalo CEO Dave Rust.

ML Sports Platter

Play Episode Listen Later May 13, 2025 11:43


00:00-15:00: 2024 NFL/Buffalo Bills Inspire Change Changemaker Award winner/Say Yes Buffalo CEO Dave Rust chats about the big award and what it means, going to the Super Bowl, what's next for Say Yes Buffalo and what they have recently accomplished, Josh Allen's impact and more. Plus, ML and Dave's friendship through the years and Dave's belief in a Super Bowl win in Western NY.

Facts First with Christian Esguerra
Ep. 16: Leila De Lima set to join Congress

Facts First with Christian Esguerra

Play Episode Listen Later May 13, 2025 33:24


Christian Esguerra speaks with former senator Leila De Lima, who is set to join the House of Representatives as the top nominee of the ML party-list group.

Neurocareers: How to be successful in STEM?
From Hobby to Startup: Pi-EEG and Neurotech Education Tools with Ildar Rakhmatulin, PhD

Neurocareers: How to be successful in STEM?

Play Episode Listen Later May 9, 2025 63:45


How does a personal passion project turn into a groundbreaking neurotech startup? In this episode, we sit down with Dr. Ildar Rakhmatulin to explore his remarkable journey from academia to entrepreneurship — and how a global chip shortage sparked the creation of Pi-EEG, a Raspberry Pi-based BCI device that's transforming neuroscience education. Discover how Ildar's open-source innovation makes brain-computer interfaces more accessible, engaging both the research community and curious learners. We dive into the evolution of his work, from the RMBCI project to the Pi-EEG platform, and explore its exciting integration with tools like ChatGPT and P300 gaming applications. In this episode, you'll learn about: The evolution from RMBCI to the Pi-EEG device The power of open-source collaboration in neurotech How Pi-EEG connects with ChatGPT and brain-signal-based gaming The educational impact on neuroscience and signal processing Join us for an inspiring conversation on turning persistence and creativity into cutting-edge innovation in the world of brain-computer interfaces. Chapters: 00:00:02 - Launching Personal Projects in Neurotech 00:05:12 - Development of the Pi-EEG Device 00:09:31 - Benefits of Open Source Collaboration 00:13:55 - Challenges in EEG Device Development 00:17:16 - Motivation Behind Passion Projects 00:20:00 - Introducing the Latest Pi-EEG Device 00:25:49 - Measuring Multiple Biological Signals 00:29:02 - Introduction to EEG Signal Processing 00:31:06 - Understanding EEG and Signal Processing 00:38:52 - Finding Passion in Neurotechnology Careers 00:43:50 - Balancing Work and Passion Projects 00:47:49 - Real-World Problems and Neurotechnology Trends 00:50:43 - Careers in Neurotechnology 00:59:38 - Advancing Your Neurocareer About the Podcast Guest: Dr. Ildar Rakhmatulin is a scientist, engineer, and entrepreneur based in the United Kingdom, working at the intersection of neuroscience, biosignal processing, and brain-computer interface (BCI) innovation.
He is the founder of PiEEG, an open-source, low-cost BCI platform built on Raspberry Pi, designed to democratize access to neurotechnology for students, researchers, and developers around the world. With a Ph.D. in hardware and software engineering, Dr. Rakhmatulin specializes in real-time biodata acquisition, including EEG, PPG, and EKG, and applies machine learning and deep learning algorithms to brain signal classification. His engineering work bridges research and accessibility—helping transform neuroscience education and experimentation through affordable, modular tools.

Machine Learning Guide
MLG 035 Large Language Models 2

Machine Learning Guide

Play Episode Listen Later May 8, 2025 45:25


At inference, large language models use in-context learning with zero-, one-, or few-shot examples to perform new tasks without weight updates, and can be grounded with Retrieval Augmented Generation (RAG) by embedding documents into vector databases for real-time factual lookup using cosine similarity. LLM agents autonomously plan, act, and use external tools via orchestrated loops with persistent memory, while recent benchmarks like GPQA (STEM reasoning), SWE Bench (agentic coding), and MMMU (multimodal college-level tasks) test performance alongside prompt engineering techniques such as chain-of-thought reasoning, structured few-shot prompts, positive instruction framing, and iterative self-correction. Links Notes and resources at ocdevel.com/mlg/mlg35 Build the future of multi-agent software with AGNTCY Try a walking desk - stay healthy & sharp while you learn & code In-Context Learning (ICL) Definition: LLMs can perform tasks by learning from examples provided directly in the prompt without updating their parameters. Types: Zero-shot: Direct query, no examples provided. One-shot: Single example provided. Few-shot: Multiple examples, balancing quantity with context window limitations. Mechanism: ICL works through analogy and Bayesian inference, using examples as semantic priors to activate relevant internal representations. Emergent Properties: ICL is an "inference-time training" approach, leveraging the model's pre-trained knowledge without gradient updates; its effectiveness can be enhanced with diverse, non-redundant examples. Retrieval Augmented Generation (RAG) and Grounding Grounding: Connecting LLMs with external knowledge bases to supplement or update static training data. Motivation: LLMs' training data becomes outdated or lacks proprietary/specialized knowledge. Benefit: Reduces hallucinations and improves factual accuracy by incorporating current or domain-specific information.
RAG Workflow: Embedding: Documents are converted into vector embeddings (using sentence transformers or representation models). Storage: Vectors are stored in a vector database (e.g., FAISS, ChromaDB, Qdrant). Retrieval: When a query is made, relevant chunks are extracted based on similarity, possibly with re-ranking or additional query processing. Augmentation: Retrieved chunks are added to the prompt to provide up-to-date context for generation. Generation: The LLM generates responses informed by the augmented context. Advanced RAG: Includes agentic approaches—self-correction, aggregation, or multi-agent contribution to source ingestion, and can integrate external document sources (e.g., web search for real-time info, or custom datasets for private knowledge). LLM Agents Overview: Agents extend LLMs by providing goal-oriented, iterative problem-solving through interaction, memory, planning, and tool usage. Key Components: Reasoning Engine (LLM Core): Interprets goals, states, and makes decisions. Planning Module: Breaks down complex tasks using strategies such as Chain of Thought or ReAct; can incorporate reflection and adjustment. Memory: Short-term via context window; long-term via persistent storage like RAG-integrated databases or special memory systems. Tools and APIs: Agents select and use external functions—file manipulation, browser control, code execution, database queries, or invoking smaller/fine-tuned models. Capabilities: Support self-evaluation, correction, and multi-step planning; allow integration with other agents (multi-agent systems); face limitations in memory continuity, adaptivity, and controllability. Current Trends: Research and development are shifting toward these agentic paradigms as LLM core scaling saturates. Multimodal Large Language Models (MLLMs) Definition: Models capable of ingesting and generating across different modalities (text, image, audio, video). 
Architecture: Modality-Specific Encoders: Convert raw modalities (text, image, audio) into numeric embeddings (e.g., vision transformers for images). Fusion/Alignment Layer: Embeddings from different modalities are projected into a shared space, often via cross-attention or concatenation, allowing the model to jointly reason about their content. Unified Transformer Backbone: Processes fused embeddings to allow cross-modal reasoning and generates outputs in the required format. Recent Advances: Unified architectures (e.g., GPT-4o) use a single model for all modalities rather than switching between separate sub-models. Functionality: Enables actions such as image analysis via text prompts, visual Q&A, and integrated speech recognition/generation. Advanced LLM Architectures and Training Directions Predictive Abstract Representation: Incorporating latent concept prediction alongside token prediction (e.g., via autoencoders). Patch-Level Training: Predicting larger “patches” of tokens to reduce sequence lengths and computation. Concept-Centric Modeling: Moving from next-token prediction to predicting sequences of semantic concepts (e.g., Meta's Large Concept Model). Multi-Token Prediction: Training models to predict multiple future tokens for broader context capture. Evaluation Benchmarks (as of 2025) Key Benchmarks Used for LLM Evaluation: GPQA (Diamond): Graduate-level STEM reasoning. SWE Bench Verified: Real-world software engineering, verifying agentic code abilities. MMMU: Multimodal, college-level cross-disciplinary reasoning. HumanEval: Python coding correctness. HLE (Humanity's Last Exam): Extremely challenging, multimodal knowledge assessment. LiveCodeBench: Coding with contamination-free, up-to-date problems. MLPerf Inference v5.0 Long Context: Throughput/latency for processing long contexts. MultiChallenge Conversational AI: Multiturn dialogue, in-context reasoning. TAUBench/PFCL: Tool utilization in agentic tasks.
TruthfulQA: Measures tendency toward factual accuracy/robustness against misinformation. Prompt Engineering: High-Impact Techniques Foundational Approaches: Few-Shot Prompting: Provide pairs of inputs and desired outputs to steer the LLM. Chain of Thought: Instructing the LLM to think step-by-step, either explicitly or through internal self-reprompting, enhances reasoning and output quality. Clarity and Structure: Use clear, detailed, and structured instructions—task definition, context, constraints, output format, use of delimiters or markdown structuring. Affirmative Directives: Phrase instructions positively (“write a concise summary” instead of “don't write a long summary”). Iterative Self-Refinement: Prompt the LLM to review and improve its prior response for better completeness, clarity, and factuality. System Prompt/Role Assignment: Assign a persona or role to the LLM for tailored behavior (e.g., “You are an expert Python programmer”). Guideline: Regularly consult official prompting guides from model developers as model capabilities evolve. Trends and Research Outlook Inference-time compute is increasingly important for pushing the boundaries of LLM task performance. Agentic LLMs and multimodal reasoning represent the primary frontiers for innovation. Prompt engineering and benchmarking remain essential for extracting optimal performance and assessing progress. Models are expected to continue evolving with research into new architectures, memory systems, and integration techniques.
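The RAG retrieval step in these notes reduces to nearest-neighbor search by cosine similarity over embedded chunks. A minimal sketch of that step, with hand-rolled three-dimensional vectors standing in for a real sentence-transformer and vector database (the document names, vectors, and query embedding are all invented for illustration):

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: dot product normalized by vector lengths."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend chunk embeddings; a real system would embed text with a
# sentence-transformer and store the vectors in FAISS, ChromaDB, Qdrant, etc.
doc_vectors = {
    "refund policy": np.array([0.9, 0.1, 0.0]),
    "shipping times": np.array([0.1, 0.9, 0.2]),
    "privacy notice": np.array([0.0, 0.2, 0.9]),
}

def retrieve(query_vec: np.ndarray, k: int = 1) -> list[str]:
    """Return the k chunks most similar to the query embedding."""
    ranked = sorted(doc_vectors,
                    key=lambda d: cosine_sim(query_vec, doc_vectors[d]),
                    reverse=True)
    return ranked[:k]

query = np.array([0.8, 0.2, 0.1])   # embedding of e.g. "how do refunds work?"
context = retrieve(query, k=1)      # -> ["refund policy"]

# Augmentation: the retrieved chunk is prepended to the prompt for generation.
prompt = f"Context: {context[0]}\n\nAnswer using only the context above."
```

A production system swaps the dictionary for a vector database with approximate nearest-neighbor indexing, and may re-rank or post-process the retrieved chunks before the augmentation step.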

ML Sports Platter
NFL Draft RB Recap.

ML Sports Platter

Play Episode Listen Later May 8, 2025 15:47


00:00-15:00: ML recaps the running backs taken in the 2025 NFL Draft. Thanks to Rosie's Corner and CH Insurance.

Machine Learning Guide
MLG 034 Large Language Models 1

Machine Learning Guide

Play Episode Listen Later May 7, 2025 50:48


Explains advancements in large language models (LLMs): scaling laws - the relationships among model size, data size, and compute - and how emergent abilities such as in-context learning, multi-step reasoning, and instruction following arise once certain scaling thresholds are crossed. Covers the evolution of the transformer architecture with Mixture of Experts (MoE), describes the three-phase training process culminating in Reinforcement Learning from Human Feedback (RLHF) for model alignment, and explores advanced reasoning techniques such as chain-of-thought prompting, which significantly improve complex task performance. Links Notes and resources at ocdevel.com/mlg/mlg34 Build the future of multi-agent software with AGNTCY Try a walking desk - stay healthy & sharp while you learn & code Transformer Foundations and Scaling Laws Transformers: Introduced by the 2017 "Attention is All You Need" paper, transformers allow for parallel training and inference of sequences using self-attention, in contrast to the sequential nature of RNNs. Scaling Laws: Empirical research revealed that LLM performance improves predictably as model size (parameters), data size (training tokens), and compute are increased together, with diminishing returns if only one variable is scaled disproportionately. The "Chinchilla scaling law" (DeepMind, 2022) established the optimal model/data/compute ratio for efficient model performance: earlier large models like GPT-3 were undertrained relative to their size, whereas right-sized models with more training data (e.g., Chinchilla, LLaMA series) proved more compute and inference efficient. Emergent Abilities in LLMs Emergence: When trained beyond a certain scale, LLMs display abilities not present in smaller models, including: In-Context Learning (ICL): Performing new tasks based solely on prompt examples at inference time. Instruction Following: Executing natural language tasks not seen during training.
Multi-Step Reasoning & Chain of Thought (CoT): Solving arithmetic, logic, or symbolic reasoning by generating intermediate reasoning steps. Discontinuity & Debate: These abilities appear abruptly in larger models, though recent research suggests that this could result from non-linearities in evaluation metrics rather than innate model properties. Architectural Evolutions: Mixture of Experts (MoE) MoE Layers: Modern LLMs often replace standard feed-forward layers with MoE structures. Composed of many independent "expert" networks specializing in different subdomains or latent structures. A gating network routes tokens to the most relevant experts per input, activating only a subset of parameters—this is called "sparse activation." Enables much larger overall models without proportional increases in compute per inference, but requires the entire model in memory and introduces new challenges like load balancing and communication overhead. Specialization & Efficiency: Experts learn different data/knowledge types, boosting model specialization and throughput, though care is needed to avoid overfitting and underutilization of specialists. The Three-Phase Training Process 1. Unsupervised Pre-Training: Next-token prediction on massive datasets—builds a foundation model capturing general language patterns. 2. Supervised Fine Tuning (SFT): Training on labeled prompt-response pairs to teach the model how to perform specific tasks (e.g., question answering, summarization, code generation). Overfitting and "catastrophic forgetting" are risks if not carefully managed. 3. Reinforcement Learning from Human Feedback (RLHF): Collects human preference data by generating multiple responses to prompts and then having annotators rank them. Builds a reward model (often PPO) based on these rankings, then updates the LLM to maximize alignment with human preferences (helpfulness, harmlessness, truthfulness). 
Introduces complexity and risk of reward hacking (specification gaming), where the model may exploit the reward system in unanticipated ways. Advanced Reasoning Techniques Prompt Engineering: The art/science of crafting prompts that elicit better model responses, shown to dramatically affect model output quality. Chain of Thought (CoT) Prompting: Guides models to elaborate step-by-step reasoning before arriving at final answers—demonstrably improves results on complex tasks. Variants include zero-shot CoT ("let's think step by step"), few-shot CoT with worked examples, self-consistency (voting among multiple reasoning chains), and Tree of Thought (explores multiple reasoning branches in parallel). Automated Reasoning Optimization: Frontier models selectively apply these advanced reasoning techniques, balancing compute costs with gains in accuracy and transparency. Optimization for Training and Inference Tradeoffs: The optimal balance between model size, data, and compute is determined not only for pretraining but also for inference efficiency, as lifetime inference costs may exceed initial training costs. Current Trends: Efficient scaling, model specialization (MoE), careful fine-tuning, RLHF alignment, and automated reasoning techniques define state-of-the-art LLM development.
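The sparse activation in the MoE layers described in these notes comes from a gating network that routes each token through only its top-k experts. A toy numpy sketch of that routing for a single token (expert count, dimensions, and all weights are illustrative random stand-ins, not a trained model):

```python
import numpy as np

rng = np.random.default_rng(42)
d_model, n_experts, top_k = 16, 8, 2  # illustrative sizes

# Gating network and expert feed-forward weights (random stand-ins here).
W_gate = rng.normal(size=(d_model, n_experts))
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route one token vector through its top-k experts only (sparse activation)."""
    logits = x @ W_gate                       # one score per expert
    top = np.argsort(logits)[-top_k:]         # indices of the k best-scoring experts
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over the selected k
    # Only the selected experts run; the other n_experts - k are skipped entirely.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.normal(size=d_model)
out = moe_layer(token)  # shape (16,), computed by 2 of 8 experts
```

Real MoE layers batch this routing across whole sequences and add auxiliary load-balancing losses so experts are utilized evenly, which is the load-balancing challenge the notes mention.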

ITSPmagazine | Technology. Cybersecurity. Society
From Tools to Trust: Why Integration Beats Innovation Hype in Cybersecurity | A Brand Story with Vivin Sathyan from ManageEngine | An On Location RSAC Conference 2025 Brand Story

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later May 7, 2025 20:05


Organizations are demanding more from their IT management platforms—not just toolsets, but tailored systems that meet specific business and security objectives. Vivin Sathyan, Senior Technology Evangelist at ManageEngine, shares how the company is responding with an integrated approach that connects IT, security, and business outcomes. ManageEngine, a division of Zoho Corporation, now offers a suite of over 60 products that span identity and access management, SIEM, endpoint protection, service management, and analytics. These components don't just coexist—they interact contextually. Vivin outlines a real-world example from the healthcare sector, where a SIEM tool detects abnormal login behavior, triggers an identity system to challenge access, and then logs the incident for IT service resolution. This integrated chain reflects a philosophy where response is not just fast, but connected and accountable. At the heart of the platform's effectiveness is contextual intelligence—layered between artificial intelligence and business insights—to power decision-making that aligns with enterprise risk and compliance needs. Whether it's SOC analysts triaging events, CIS admins handling system hygiene, or CISOs aligning actions with corporate goals, the tools are tailored to fit roles, not just generic functions. According to Vivin, this role-based approach is critical to eliminating silos and ensuring teams speak the same operational and risk language. AI continues to play a role in enhancing that coordination, but ManageEngine is cautious not to follow hype for its own sake. The company has invested in its own AI and ML capabilities since 2012, and recently launched an agent studio—but only after evaluating how new models can meaningfully add value. Vivin points out that enterprise use cases often benefit more from small, purpose-built language models than from massive general-purpose ones. Perhaps most compelling is ManageEngine's global-first strategy.
With operations in nearly 190 countries and 18+ of its own data centers, the company prioritizes proximity to customers—not just for technical support, but for cultural understanding and local compliance. That closeness informs both product design and customer trust, especially as regulations around data sovereignty intensify. This episode challenges listeners to consider whether their tools are merely present—or actually connected. Are you enabling collaboration through context, or just stitching systems together and calling it a platform? Learn more about ManageEngine: https://itspm.ag/manageen-631623 Note: This story contains promotional content. Learn more. Guest: Vivin Sathyan, Senior Technology Evangelist, ManageEngine | https://www.linkedin.com/in/vivin-sathyan/ Resources: Learn more and catch more stories from ManageEngine: https://www.itspmagazine.com/directory/manageengine Learn more and catch more stories from RSA Conference 2025 coverage: https://www.itspmagazine.com/rsac25 ______________________ Keywords: sean martin, vivin sathyan, cybersecurity, ai, siem, identity, analytics, integration, platform, risk, brand story, brand marketing, marketing podcast, brand story podcast ______________________ Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage Want to tell your Brand Story Briefing as part of our event coverage? Learn More

Cybersecurity Where You Are
Episode 134: How GenAI Lowers Bar for Cyber Threat Actors

Cybersecurity Where You Are

Play Episode Listen Later May 7, 2025 39:48


In episode 134 of Cybersecurity Where You Are, Sean Atkinson is joined by Randy Rose, VP of Security Operations & Intelligence at the Center for Internet Security® (CIS®); and Timothy Davis, Lead Cyber Threat Intelligence (CTI) Analyst at CIS. Together, they discuss how generative artificial intelligence (GenAI) lowers the barrier of entry for cyber threat actors (CTAs). Here are some highlights from our episode: 01:37. CTAs' use of GenAI to improve their existing campaigns 03:38. The need for CTI teams to look beyond language in analyzing GenAI-enabled threats 07:22. The evolving impact of GenAI on phishing campaigns, malware development, deepfakes, and malicious Artificial Intelligence as a Service (AIaaS) offerings 12:28. How GenAI increases the speed at which CTAs can scale their efforts 17:29. Technical barriers and other limitations that shape CTAs' use of GenAI 22:46. A historical perspective of AI-enabled cybersecurity and how GenAI can support cybersecurity awareness training 26:50. The cybersecurity benefits of AI and machine learning (ML) capabilities for clustering data 29:05. What the future might hold for GenAI from an offensive and defensive perspective Resources: The Evolving Role of Generative Artificial Intelligence in the Cyber Threat Landscape; Episode 89: How Threat Actors Are Using GenAI as an Enabler; Episode 95: AI Augmentation and Its Impact on Cyber Defense; 12 CIS Experts' Cybersecurity Predictions for 2025; CIS Critical Security Controls®; Multi-State Information Sharing and Analysis Center®. If you have some feedback or an idea for an upcoming episode of Cybersecurity Where You Are, let us know by emailing podcast@cisecurity.org.

ML Soul of Detroit
Kwame Begs Your Pardon – May 6, 2025

ML Soul of Detroit

Play Episode Listen Later May 6, 2025 85:08


New Kwame Kilpatrick texts have Hizzoner back in the news, so ML and Marc chat with the lawmaker who says […]

Highest Aspirations
Colorado's investment in bilingualism with Alice Collins and Dr. Ester de Jong

Highest Aspirations

Play Episode Listen Later May 6, 2025 50:06


This episode of Highest Aspirations welcomes Alice Collins from the Colorado Department of Education and Dr. Ester de Jong from the University of Colorado Denver to explore the dynamic landscape of bilingual education in the state. Discover the innovative programs designed to support Colorado's growing number of multilingual learners and the crucial initiatives aimed at building a strong pipeline of qualified bilingual educators. Tune in to gain insights into the collaborative efforts between the state and universities to equip teachers with the specialized skills needed to serve multilingual students effectively. Learn about the various courses and programs available that empower educators to create inclusive and successful learning environments for all students, fostering academic growth and linguistic development. Key questions we address: What types of bilingual education programs are available for Colorado's multilingual learners? How does the University of Colorado Denver support the training and development of teachers for multilingual students? What are the key strategies discussed for supporting and retaining qualified teachers of multilingual learners in Colorado? For additional episode and community resources: Download the transcript here. Newcomer resources course - Free 1-hour webinar. Online, self-directed newcomer course with deep dives (ideal for Professional Learning Communities). Radical Candor: Be a Kick-Ass Boss Without Losing Your Humanity. Discover, Connect, Respond. Finding Me: A Memoir. T-PREP: The Partnership for Rural Educator Preparation at University of Colorado Denver. Learn more about the Ellevation Scholarship and how to apply. For additional free resources geared toward supporting English learners, visit our blog. Alice Collins is an ELD Senior Consultant with the Colorado Department of Education serving the state of Colorado in the office of Culturally and Linguistically Diverse Education.
She partners with districts across the state to ensure quality language programming for Multilingual Learners. Alice has many years of experience serving MLs in roles including teacher, CLDE Specialist, Instructional Coach, Assistant Principal, and CLDE Director. She has received multiple Teacher of the Year awards as well as CLDE Director of the Year. Alice is dedicated to providing every opportunity possible for ML students to succeed in education. Dr. Ester de Jong is a Professor in Culturally and Linguistically Diverse Education and Interim Associate Dean for Graduate Education and Advanced Programs at the University of Colorado Denver. Her research interests include two-way bilingual education and other integrated models for language minority schooling, educational language policy, and teacher preparation for bilingual students. Prior to UC Denver, she was the Director of the School of Teaching and Learning and Professor in ESOL/Bilingual Education at the University of Florida in Gainesville, Florida. She has been in the field of ESL/bilingual education for over thirty years, as a practitioner and a researcher. Her research focuses on preparing teachers to work with bilingual learners in K-12 schools, and integrated approaches to the schooling of bilingual learners, including two-way bilingual education. Her book, “Foundations of Multilingualism in Education,” lays out a principles-based approach to educational equity for bilingual learners. Dr. de Jong was President of TESOL International Association (2017-2018). She is the co-editor of the Handbook of Research on Dual Language Bilingual Education (Routledge, 2023) and co-Editor of the Bilingual Research Journal.

ACRO's Good Clinical Podcast
S3: E3 Future-Proofing Drug Development: AI, Old Data, and New Rules

ACRO's Good Clinical Podcast

Play Episode Listen Later May 6, 2025 33:10


On this week's episode, Lisa Moneymaker (SVP, Head of Strategic Customer Engagement, Medidata Solutions) and Adam Aten (Legislative & Regulatory Policy Lead, Verily) join the podcast to discuss how the clinical research industry must use insights from the past to better prepare our AI models and other technologies to meet the needs of patients in the present and future. They dive deeper into the role that collaboration between technologists and clinical scientists can play in helping to reduce bias in our AI models, what legislators and regulators should be keeping top of mind as they write new rules of the road for AI and ML, and ACRO's ongoing efforts to promote the responsible use of AI in clinical research.

From Lab to Launch by Qualio
Hunting the perfect molecule with Dr. James Field, CEO of LabGenius Therapeutics

From Lab to Launch by Qualio

Play Episode Listen Later May 6, 2025 23:17


Today we're heading across the Atlantic to speak to Dr. James Field, CEO of London-based LabGenius Therapeutics. LabGenius has a really exciting mission. James and his team of 60 scientists and engineers believe in the powerful combination of human and artificial intelligence, and are pioneering an ML-driven protein engineering platform - EVA - that can design and execute experiments for the development of next-gen antibodies, including complex multispecifics. LabGenius has already raised over $75m in venture capital and built R&D deals with some major pharma and biotech players. After founding the company in 2012, James was named in Forbes' ‘30 Under 30' list for science and healthcare, and even won the Innovator of the Year award from the UK's Biotech & Biological Sciences Research Council. Qualio website: https://www.qualio.com/ Previous episodes: https://www.qualio.com/from-lab-to-launch-podcast Apply to be on the show: https://forms.gle/uUH2YtCFxJHrVGeL8 Music by keldez

DanceSpeak
212 - Lisa Ebeyer – What Keeps You Dancing, Even When You Think You're Done

DanceSpeak

Play Episode Listen Later May 5, 2025 70:56


In episode 212, host Galit Friedlander welcomes beloved ballet educator and coach Lisa Ebeyer (Jackson Ballet, Snow White, and LA's go-to ballet whisperer for commercial dancers) for a no-holds-barred convo on dancing through every season of life—without apology. Join Galit and Lisa as they talk about ditching perfectionism, keeping your body strong without surgery, and why “every age is a transitional time.” From her early pro career at 16 to falling back in love with dance at 48, Lisa shares what it means to keep showing up, stretch your big toe (yes, really), and coach with zero BS. This episode is equal parts wisdom, grit, and the kind of honesty dancers don't hear enough of. Follow Galit: Instagram - https://www.instagram.com/gogalit Website - https://www.gogalit.com/ Fit From Home - https://galit-s-school-0397.thinkific.com/courses/fit-from-home You can connect with Lisa Ebeyer on Instagram: @lisaebeyer Listen to DanceSpeak on Apple Podcasts and Spotify.

Dr. Joseph Mercola - Take Control of Your Health
Fight Multiple Sclerosis With Vitamin D - AI Podcast

Dr. Joseph Mercola - Take Control of Your Health

Play Episode Listen Later May 1, 2025 9:44


Story at-a-glance: Multiple sclerosis (MS) is an autoimmune condition attacking myelin in the central nervous system. Symptoms vary based on nerve damage location and often begin as clinically isolated syndrome (CIS). Low vitamin D is consistently linked to higher MS risk, with people living closer to the equator having lower MS rates due to greater sun exposure. A 2025 clinical trial showed that high-dose vitamin D delayed MS progression in CIS patients, doubling time before new disease activity appeared compared to placebo. Vitamin D stimulates myelin-rebuilding cells, boosts neurotrophins, reprograms microglia from inflammatory to healing states, and protects the blood-brain barrier. Optimal vitamin D levels (60 to 80 ng/mL) can be maintained through sensible sun exposure or D3 supplementation, with regular testing recommended to adjust intake accordingly.

This Week in Startups
What's Next for AI Infrastructure with Amin Vahdat | AI Basics with Google Cloud

This Week in Startups

Play Episode Listen Later May 1, 2025 27:34


In this episode of AI Basics, Jason sits down with Amin Vahdat, VP of ML at Google Cloud, to unpack the mind-blowing infrastructure behind modern AI. They dive into how Google's TPUs power massive queries, why 2025 is the “Year of Inference,” and how startups can now build what once felt impossible. From real-time agents to exponential speed gains, this is a look inside the AI engine that's rewriting the future. *Timestamps: (0:00) Jason introduces today's guest Amin Vahdat (3:18) Data movement implications for founders and historical bandwidth perspective (5:29) The shift to inference and AI infrastructure trends in startups and enterprises (8:40) Evolution of productivity and potential of low-code/no-code development (11:20) AI infrastructure pricing, cost efficiency, and historical innovation (17:53) Google's TPU technology and infrastructure scale (23:21) Building AI agents for startup evaluation and supervised associate agents (26:08) Documenting decisions for AI learning and early AI agent development *Uncover more valuable insights from AI leaders in Google Cloud's 'Future of AI: Perspectives for Startups' report. Discover what 23 AI industry leaders think about the future of AI—and how it impacts your business. Read their perspectives here: https://goo.gle/futureofai *Check out all of the Startup Basics episodes here: https://thisweekinstartups.com/basics Check out Google Cloud: https://cloud.google.com/ *Follow Amin: LinkedIn: https://www.linkedin.com/in/vahdat/?trk=public_post_feed-actor-name *Follow Jason: X: https://twitter.com/Jason LinkedIn: https://www.linkedin.com/in/jasoncalacanis *Follow TWiST: Twitter: https://twitter.com/TWiStartups YouTube: https://www.youtube.com/thisweekin Instagram: https://www.instagram.com/thisweekinstartups TikTok: https://www.tiktok.com/@thisweekinstartups Substack: https://twistartups.substack.com

Gradient Dissent - A Machine Learning Podcast by W&B
Inside Cursor: The future of AI coding with Co-founder Sualeh Asif

Gradient Dissent - A Machine Learning Podcast by W&B

Play Episode Listen Later Apr 29, 2025 49:36


In this episode of Gradient Dissent, host Lukas Biewald talks with Sualeh Asif, the CPO and co-founder of Cursor, one of the fastest-growing and most loved AI-powered coding platforms. Sualeh shares the story behind Cursor's creation, the technical and design decisions that set it apart, and how AI models are changing the way we build software. They dive deep into infrastructure challenges, the importance of speed and user experience, and how emerging trends in agents and reasoning models are reshaping the developer workflow. Sualeh also discusses scaling AI inference to support hundreds of millions of requests per day, building trust through product quality, and his vision for how programming will evolve in the next few years. ⏳Timestamps: 00:00 How Cursor got started and why it took off 04:50 Switching from Vim to VS Code and the rise of CoPilot 08:10 Why Cursor won among competitors: product philosophy and execution 10:30 How user data and feedback loops drive Cursor's improvements 12:20 Iterating on AI agents: what made Cursor hold back and wait 13:30 Competitive coding background: advantage or challenge? 16:30 Making coding fun again: latency, flow, and model choices 19:10 Building Cursor's infrastructure: from GPUs to indexing billions of files 26:00 How Cursor prioritizes compute allocation for indexing 30:00 Running massive ML infrastructure: surprises and scaling lessons 34:50 Why Cursor chose DeepSeek models early 36:00 Where AI agents are heading next 40:07 Debugging and evaluating complex AI agents 42:00 How coding workflows will change over the next 2–3 years 46:20 Dream future projects: AI for reading codebases and papers

Cardiology Trials
Review of the V-HEFT I Trial

Cardiology Trials

Play Episode Listen Later Apr 29, 2025 12:58


N Engl J Med 1986; 314:1547-52Background Into the mid-1980's, digoxin and diuretics were the mainstay of chronic disease management for congestive heart failure. Vasodilator agents were also commonly used based on limited data of their favorable hemodynamic effects. No sufficiently powered trials in this space had been performed to assess whether administration of vasodilators or any other agents improved long-term morbidity or mortality for heart failure patients. The V-HEFT trial was undertaken to test the hypotheses that 2 widely used vasodilator regimens (prazosin or a combination of hydralazine and isosorbide dinitrate) were superior for reducing death versus placebo. The trial was sponsored by the Veterans Administration and only enrolled men.Patients Men between the ages of 18 and 75 were recruited from 11 participating Veterans Administration hospitals and had to have chronic congestive heart failure based on either evidence of cardiac dilatation or left ventricular dysfunction (EF 0.7 ng/mL and euvolemic volume status. Clinical evaluations and exercise-tolerance tests on 2 consecutive visits, two weeks apart, had to reveal clinical and exercise stability before randomization could occur. Following randomization, patients continued to receive the optimal dose of digoxin and diuretic along with 1 of 3 study regimens. The placebo group was given placebo tablets and placebo capsules and instructed to take them 4 times a day. The prazosin group took 2.5 mg prazosin capsules and placebo tablets 4 times a day. The hydralazine-isosorbide dinitrate group took 37.5 mg hydralazine capsules and 20 mg isosorbide dinitrate tablets 4 times a day.In all groups, therapy began with 1 capsule and 1 tablet to be taken 4 times a day. In the absence of side effects, this was increased to 2 capsules and 2 tablets 4 times a day for a total of 20 mg of prazosin or 300-160 mg of hydralazine-isosorbide dinitrate. 
If drug-related side effects occurred, the dose could be reduced to half a tablet 4 times per day or to one capsule 2 times per day. If the dose was reduced, an attempt was made later to reinstitute the full dose.In order to limit dropouts, rigorous criteria were established for “treatment failures.” Physicians were advised to hospitalize patients with worsening symptoms, and, if appropriate, to use temporary intravenous vasodilator or inotropic interventions for stabilization. Physicians were encouraged to resume study medications upon discharge. At least 2 such hospitalizations were required, along with objective evidence of deterioration, before the study medications were discontinued and replaced with known therapy.Endpoints The primary endpoint was all-cause mortality.Results 642 patients were enrolled (273 in placebo group, 183 in prazosin group and 186 in the hydralazine-isosorbide dinitrate group). Excluding discontinuations that took place within 1 month before death, 47 patients (17%) discontinued one or both types of placebos, 43 patients (23%) discontinued prazosin, and 60 patients (32%) discontinued either one or both drugs in the hydralazine-isosorbide group. Six months after randomization, the average prescribed doses were 18.6 mg per day of prazosin, 270 mg per day of hydralazine, and 136 mg per day of isosorbide dinitrate. More than 85% of the prescribed drugs were taken in each treatment group.The mean follow-up was 2.3 years (range 6 months to 5.7 years). Only 4 patients were lost to follow up (2 in placebo group, 1 in prazosin group, and 1 in hydralazine-dinitrate group). There were 120 deaths in placebo group (44%; 19 per 100 patient years), 91 in the prazosin group (50%; 22 per 100 patient years), 72 in the hydralazine-dinitrate group (39%; 17 per 100 patient years). 
A reduction in mortality over the entire follow-up period was observed in the hydralazine-nitrate group compared with placebo (p = 0.093 on the log-rank test and p = 0.046 on the generalized Wilcoxon test, which gives more weight to treatment differences occurring in the earlier part of the mortality curves and less weight to the latter part, where the numbers are smaller). The absolute difference in mortality between these groups increased during three years and then began to diminish. The absolute difference in mortality between the placebo group and hydralazine-isosorbide groups at years 1 through 4 was 7%, 9%, 11% and 4%, respectively.Prespecified subgroup analysis in CAD vs no CAD stratification showed no significant treatment effect heterogeneity for hydralazine-nitrate among those with CAD although the absolute difference in mortality between groups was numerically higher for patients with CAD.At 8 weeks and 1 year, SBP (-4.1 and -4.6 mmHg) and DBP (-3.2 and -2.7 mmHg) decreased the most in the prazosin group compared to placebo. Hydralazine-nitrate was not associated with a statistically significant nor clinically significant difference in BP with exception of DBP at 8 weeks. The EF rose significantly at 8 weeks and 1 year in the hydralazine-nitrate group (+2.9 and +4.2) compared to placebo but not in the prazosin group.Side effects were reported in 4.0% of placebo patients, 11% of prazosin patients and 19% of hydralazine-nitrate patients, respectively. The most common side effects were headache and dizziness. Headache was reported in 12% of hydralazine-nitrate patients.Conclusions This study compared the combination of hydralazine-isosorbide dinitrate or prazosin to placebo in patients with chronic congestive heart failure who were optimized on digoxin and diuretic therapy. 
In what appears to be a young (58 years) and highly selected population of clinically stable, male veterans with dilated cardiomyopathies and low symptom burdens, the combination of hydralazine-isosorbide reduced death by 2 per 100 patient years, increased EF by 4% at 1 year and did not significantly alter BP compared to placebo. Side effects were reported in approximately 1 out of 5 patients with the most common being headache and approximately 1 out of 3 discontinued 1 or both study drugs. Prazosin did not reduce death or increase EF but did reduce BP compared to placebo. The internal validity of the study is high with only a few minor imbalances in baseline characteristics, which do not appear clinically relevant nor to consistently favor any one group. Less than 1% of patients were lost to follow up with no significant imbalances between groups. The external validity is limited by the fact that this is a population of male veterans and the etiologic distribution of cardiomyopathy and heart failure is likely different from a general heart failure population; etiologic causes of death are also likely to be different. Furthermore, the population is highly selected and it's unclear how many patients from the general heart failure population would meet study criteria.

Cardiology Trial's Substack is a reader-supported publication. To receive new posts and support our work, consider becoming a free or paid subscriber. Get full access to Cardiology Trial's Substack at cardiologytrials.substack.com/subscribe
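The log-rank and generalized Wilcoxon tests quoted in the mortality analysis above differ only in how each event time is weighted. A simplified sketch (it assumes every observation is an event, i.e. no censoring, which the real trial analysis of course had to handle) of the two-sample weighted log-rank statistic:

```python
import math

def weighted_logrank(times1, times2, weight="logrank"):
    """Two-sample weighted log-rank Z statistic, no censoring.

    weight="logrank" uses w_j = 1 at every event time; weight="gehan"
    (the generalized Wilcoxon) uses w_j = n_j, the number still at risk,
    which gives more weight to differences early in the follow-up."""
    event_times = sorted(set(times1) | set(times2))
    U = V = 0.0
    for t in event_times:
        r1 = sum(1 for x in times1 if x >= t)            # group 1 at risk
        r = r1 + sum(1 for x in times2 if x >= t)        # total at risk
        d1 = times1.count(t)                             # group 1 deaths at t
        d = d1 + times2.count(t)                         # total deaths at t
        if r < 2:
            continue
        w = r if weight == "gehan" else 1.0
        e1 = d * r1 / r                                  # expected group 1 deaths
        v = d * (r1 / r) * (1 - r1 / r) * (r - d) / (r - 1)
        U += w * (d1 - e1)
        V += w * w * v
    return U / math.sqrt(V)
```

With identical event patterns in both groups the statistic is zero; when group 1 dies earlier, the early deviations dominate the Gehan-weighted sum, which is why the trial's Wilcoxon p-value emphasized the early separation of the curves.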

Dr. Joseph Mercola - Take Control of Your Health
Iron and the Stroke Crisis: A Hidden Catalyst for Brain Damage - AI Podcast

Dr. Joseph Mercola - Take Control of Your Health

Play Episode Listen Later Apr 28, 2025 11:15


Story at-a-glance

Ischemic strokes block blood flow to brain cells, causing damage through three distinct cell death mechanisms, with iron overload playing a key role
Excess iron accumulation during strokes accelerates cell death, functioning like "gasoline on a fire" and worsening brain damage significantly
New research shows targeting iron-related cell death could lead to better stroke treatments that protect more brain cells from damage
A simple blood test called serum ferritin measures your iron stores. Keeping levels below 100 ng/mL, ideally between 20 and 40 ng/mL, helps protect your brain
Regular blood donation (two to four times yearly) is an effective strategy to manage iron levels and reduce stroke risk and severity

Play Me or Fade Me Sports Betting Picks Podcast
That's a winner! Positive spike day: 9-2, up 5.99 units. 4 MLB Bets, 3 NBA Bets including props on Amen Thompson/Andrew Wiggins, and 2 NHL Bets for Monday.

Play Me or Fade Me Sports Betting Picks Podcast

Play Episode Listen Later Apr 28, 2025 19:19


Underdog Promo Code: PLAYME Signup Link: https://play.underdogfantasy.com/p-play-me-or-fade-me

Podcast Card:
New York Yankees/Baltimore Over 9.5 (-113)
NRFI: St. Louis/Cincinnati (-105)
Houston First 5 Team Total Over 1.5 (-135)
Athletics First 5 ML at Texas (-110)
Houston +3.5 at Golden State (-112)
Amen Thompson Over 11.5 (-115)
Andrew Wiggins Over 15.5 (-125)
Florida ML vs. Tampa Bay (-142)
Dallas ML vs. Colorado (+120)

Action YTD Results - Active:
NHL/4 Nations: 76-63 (54.7%), up 11.3753 units
MLB: 77-52 (59.7%), up 11.1823 units
Multi-Sport Parlays: 3-1 (75%), up 2.7372 units
NBA Prop Bets: 18-13 (58.1%), up 2.4438 units
PGA Golf: 16-20 (44.4%), up 1.0247 units
NBA Sides/Totals: 50-45 (52.6%), up 0.7571 units
NASCAR: 0-1 (0%), down 1 unit
Cricket: 0-1 (0%), down 1 unit
NLL: 0-1 (0%), down 1 unit

Discord Link: https://discord.gg/GUJN8VVv

Contact Me:
X: @MrActionJunkie1
Email: mractionjunkie@gmail.com

Learn more about your ad choices. Visit megaphone.fm/adchoices

Explaining Mexican History

Play Episode Listen Later Apr 28, 2025 125:51


In this episode of History 102, 'WhatIfAltHist' creator Rudyard Lynch and co-host Austin Padgett explore Mexican history from pre-colonial Mesoamerican civilizations through Spanish conquest to modern times. They examine cultural evolution, governance challenges, and social transformations while highlighting how geographic, racial, and colonial legacies shaped Mexico's development.

Podcast Notes Playlist: Latest Episodes
Explaining Native American History

Podcast Notes Playlist: Latest Episodes

Play Episode Listen Later Apr 27, 2025


"History 102" with WhatifAltHist's Rudyard Lynch and Erik Torenberg: Read the notes at podcastnotes.org. Don't forget to subscribe for free to our newsletter, the top 10 ideas of the week, every Monday

---------

In this episode of History 102, 'WhatIfAltHist' creator Rudyard Lynch and co-host Austin Padgett examine Native American genetic origins and cultural complexity, challenging popular narratives of "noble savages." They discuss surprising genetic links to Pacific Islanders and ancient Europeans, the sophisticated warfare and political structures of tribes like the Iroquois and Cherokee, and how European contact transformed indigenous societies through disease, technology, and shifting power dynamics.

The Cabral Concept
3368: Crusty Eyes in Morning, Slow Gut Motility, Demodex Mites, Kids & Testing, Natural Replacement for Testosterone (HouseCall)

The Cabral Concept

Play Episode Listen Later Apr 26, 2025 18:26


Welcome back to our weekend Cabral HouseCall shows! This is where we answer our community's wellness, weight loss, and anti-aging questions to help people get back on track! Check out today's questions:

Bryan: Hi Dr. Cabral. Dunno if there's a limit to how many questions I can ask but I'll try not to be too much of a pest with the house calls. Almost every single morning I seem to wake up with tears and "crud" around my eyes. Often when I yawn in bed. Did a search online and dry eyes kept coming up as the culprit. Maybe you've answered this one before but do you agree? Is there anything I can do about it? I usually put Omega 3 drops in my eyes before bed and thought they would take care of any dry eyes in morning but I guess not. Appreciate any tips or natural remedies you think might help this problem. Thanks,

Bryan: Hi Dr. C. About a year ago I did a Barium swallow test and was diagnosed with a slow gut and mild esophagus dysmotility. Took about 4+ hours for the barium I drank to pass into my stomach when for most people it takes like an hour or less. My gastroenterologist told me I had a "slow gut" and taking milk of magnesia before bed would help. Results have been just so-so. Wrote in before but want to let you know I do appreciate all your work and renaissance style approach to all things related to health and wellness. Best,

Katie: Please discuss your thoughts on Demodex mites. Especially possibly affecting eyes or skin? Ways to improve if necessary

Bob: Hi Dr Cabral I been listening to you for a few years on and off and really have a learned a lot from through your podcasts. My question is regards to kids. I have a son that is turning 7 this month and I was wondering what tests and also basic supplements I should be giving him. I remember with adults you say there is a level 1, 2 ,3 of supplements all depending on how far you want to take it. Do you have recommendations for kids on maintaining their health and making sure they get what they need and how often and which tests to run annually. Thanks and appreciate all you do for us

Kay: Dear Dr. Cabral, I am an IHP student and really love your modules and teaching approach! I'm writing because I have been on a very low dose of testosterone cypionate (25 mg/mL) 30 cc's subcutaneous injection every 10 days for the last several years. I am 59 years of age and I have a petite frame and build. My physician prescribed it to help with bone density, muscle retention and libido. My question is- if I wish to no longer take testosterone, what kinds of supplements can you recommend to target those areas? I really prefer to not have exogenous hormones if at all possible. Thank you,

Thank you for tuning into today's Cabral HouseCall and be sure to check back tomorrow where we answer more of our community's questions!

- - -

Show Notes and Resources: StephenCabral.com/3368

- - -

Get a FREE Copy of Dr. Cabral's Book: The Rain Barrel Effect

- - -

Join the Community & Get Your Questions Answered: CabralSupportGroup.com

- - -

Dr. Cabral's Most Popular At-Home Lab Tests:
> Complete Minerals & Metals Test (Test for mineral imbalances & heavy metal toxicity)
> Complete Candida, Metabolic & Vitamins Test (Test for 75 biomarkers including yeast & bacterial gut overgrowth, as well as vitamin levels)
> Complete Stress, Mood & Metabolism Test (Discover your complete thyroid, adrenal, hormone, vitamin D & insulin levels)
> Complete Food Sensitivity Test (Find out your hidden food sensitivities)
> Complete Omega-3 & Inflammation Test (Discover your levels of inflammation related to your omega-6 to omega-3 levels)

- - -

Get Your Question Answered On An Upcoming HouseCall: StephenCabral.com/askcabral

- - -

Would You Take 30 Seconds To Rate & Review The Cabral Concept? The best way to help me spread our mission of true natural health is to pass on the good word, and I read and appreciate every review!

Dr. Joseph Mercola - Take Control of Your Health
Is Vitamin D the New Steroid? - AI Podcast

Dr. Joseph Mercola - Take Control of Your Health

Play Episode Listen Later Apr 26, 2025 10:51


Story at-a-glance

Vitamin D influences key hormones like leptin, an energy balancer, and myostatin, which limits muscle growth. It plays a direct role in how your body manages energy, builds muscle and regulates fat storage through various metabolic pathways
A study published as a preprint on Research Square found that high-dose vitamin D increases muscle strength, reduces myostatin, shifts calories toward muscle development and enhances metabolic rate
Vitamin D mimics anabolic steroids by suppressing myostatin to optimize muscle growth, redirecting energy from fat storage to muscle tissue and boosting metabolic rate
Optimal vitamin D levels for health and disease prevention range from 60 to 80 ng/mL (150 to 200 nmol/L). Test twice a year and adjust your supplemental dose based on your results
Natural sunlight is the ideal vitamin D source, as it provides benefits beyond vitamin D production. However, make sure to reduce your consumption of vegetable oil before sun exposure
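The two units quoted above for 25-hydroxyvitamin D are related by a fixed conversion factor, commonly rounded to 2.5 nmol/L per ng/mL (more precisely 2.496); a quick sketch of the conversion:

```python
# Conversion between the two common 25-hydroxyvitamin D units.
# Labs report either ng/mL (US) or nmol/L (SI); the rounded factor 2.5
# is used here, matching the 60-80 ng/mL = 150-200 nmol/L range above.
FACTOR = 2.5

def ng_per_ml_to_nmol_per_l(ng_ml):
    return ng_ml * FACTOR

def nmol_per_l_to_ng_per_ml(nmol_l):
    return nmol_l / FACTOR

if __name__ == "__main__":
    print(ng_per_ml_to_nmol_per_l(60))  # 150.0
    print(ng_per_ml_to_nmol_per_l(80))  # 200.0
```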

ML Sports Platter
Will the SD Padres Ever Win the World Series?

ML Sports Platter

Play Episode Listen Later Apr 25, 2025 16:11


00:00-20:00: ML wonders if the Padres will ever break through and win it all.

ML Soul of Detroit
Tigers Town – April 22, 2025

ML Soul of Detroit

Play Episode Listen Later Apr 22, 2025 57:09


Shawn is back, bemoaning Detroit's preference of the Tigers over the Pistons, while Marc is still amazed that ML has […]

ML Sports Platter
Alexander Ovechkin Sets Goal Mark. A Look Back.

ML Sports Platter

Play Episode Listen Later Apr 21, 2025 14:43


00:00-15:00: ML looks back at Alexander Ovechkin breaking the all-time NHL goal record.

Paul's Security Weekly
The past, present, and future of enterprise AI - Matthew Toussain, Pravi Devineni - ESW #403

Paul's Security Weekly

Play Episode Listen Later Apr 21, 2025 131:51


In this interview, we're excited to speak with Pravi Devineni, who was into AI before it was insane. Pravi has a PhD in AI and remembers the days when machine learning (ML) and AI were synonymous. This is where we'll start our conversation: trying to get some perspective around how generative AI has changed the overall landscape of AI in the enterprise. Then, we move on to the topic of AI safety and whether that should be the CISO's job, or someone else's. Finally, we'll discuss the future of AI and try to end on a positive or hopeful note!

What a time to have this conversation! Mere days from the certain destruction of CVE, averted only in the 11th hour, we have a chat about vulnerability management lifecycles. CVEs are definitely part of them. Vulnerability management is very much a hot mess at the moment for many reasons. Even with perfectly stable support from the institutions that catalog and label vulnerabilities from vendors, we'd still have some serious issues to address, like:
disconnects between vulnerability analysts and asset owners
gaps and issues in vulnerability discovery and asset management
different options for workflows between security and IT: which is best?
patching it like you stole it

Oh, did we mention Matt built an open source vuln scanner? https://sirius.publickey.io/

In the enterprise security news:
lots of funding, but no acquisitions?
new companies and new tools, including a SecOps chrome plugin and a chrome plugin that tells you the price of enterprise software
prompt engineering tips from google
being an Innovation Sandbox finalist will cost you
Security brutalism
CVE dumpster fires
and a heartwarming story about a dog, because we need to end on something happy!

All that and more, on this episode of Enterprise Security Weekly. Visit https://www.securityweekly.com/esw for all the latest episodes!

Show Notes: https://securityweekly.com/esw-403

ML Soul of Detroit
Mike Young: Hollywood Baller – April 15, 2025

ML Soul of Detroit

Play Episode Listen Later Apr 15, 2025


Tax Day is no laughing matter, so comedian Mike Young tells Erika, ML and Marc some amazing true Hollywood stories. […]

Dr. Berg’s Healthy Keto and Intermittent Fasting Podcast

Did you know that many conditions treated by Big Pharma can be inexpensively corrected with vitamin D?

Leo Pharma and Almirall are two pharmaceutical companies that specialize in dermatology. Many of their products contain synthetic vitamin D for skin problems. Coincidentally, these companies also played a role in the sun-phobia movement. The sun is one of the best remedies for skin problems like eczema and psoriasis. You need at least 10,000 IUs of vitamin D3 every day to maintain healthy vitamin D levels!

Sunbathing was once popular worldwide and was used to prevent tuberculosis and various health conditions. However, after the development of antibiotics, its popularity drastically declined.

Dr. William Grant has done extensive research on vitamin D. He explains that 84,532 papers have been published on vitamin D, the most studied bioactive molecule. Despite the evidence, there is still controversy regarding the health benefits of vitamin D. Tens of thousands of research studies have been conducted on the benefits of vitamin D for cancer. Big Pharma has influenced studies that discredit the benefits of vitamin D. Only small amounts of vitamin D were used in most studies that do not show favorable outcomes with vitamin D.

Dr. Cicero Coimbra, who developed the Coimbra protocol, has seen a 90% success rate in using vitamin D for autoimmune diseases. Some people have vitamin D resistance, which involves a genetic problem with vitamin D. Blood tests cannot accurately determine vitamin D deficiency. It is vital to keep your vitamin D levels over 50 ng/mL to avoid vitamin D deficiency.

Dr. Eric Berg DC Bio:
Dr. Berg, age 59, is a chiropractor who specializes in Healthy Ketosis & Intermittent Fasting. He is the author of the best-selling book The Healthy Keto Plan, and is the Director of Dr. Berg Nutritionals. He no longer practices, but focuses on health education through social media.

Machine Learning Guide
MLA 024 Code AI MCP Servers, ML Engineering

Machine Learning Guide

Play Episode Listen Later Apr 13, 2025 43:38


Tool Use and Model Context Protocol (MCP)

Notes and resources at ocdevel.com/mlg/mla-24

Try a walking desk to stay healthy while you study or work!

Tool Use in Vibe Coding Agents

File Operations: Agents can read, edit, and search files using sophisticated regular expressions.
Executable Commands: They can recommend and perform installations like pip or npm installs, with user approval.
Browser Integration: Allows agents to perform actions and verify outcomes through browser interactions.

Model Context Protocol (MCP)

Standardization: MCP was created by Anthropic to standardize how AI tools and agents communicate with each other and with external tools.
Implementation:
MCP Client: Converts AI agent requests into structured commands.
MCP Server: Executes commands and sends structured responses back to the client.
Local and Cloud Frameworks:
Local (stdio MCP): Examples include utilizing Playwright for local browser automation and connecting to local databases like Postgres.
Cloud (SSE MCP): SaaS providers offer cloud-hosted MCPs to enhance external integrations.

Expanding AI Capabilities with MCP Servers

Directories: Various directories exist listing MCP servers for diverse functions beyond programming, e.g. modelcontextprotocol/servers.
Use Cases:
Automation Beyond Coding: Implementing MCPs that extend automation into non-programming tasks like sales, marketing, or personal project management.
Creative Solutions: Encourages innovation in automating routine tasks by integrating diverse MCP functionalities.

AI Tools in Machine Learning

Automating the ML Process:
AutoML and Feature Engineering: AI tools assist in transforming raw data, optimizing hyperparameters, and inventing new ML solutions.
Pipeline Construction and Deployment: Facilitates the use of infrastructure as code for deploying ML models efficiently.
Active Experimentation:
Jupyter Integration Challenges: While integrations are possible, they often lag and may not support the latest models.
Practical Strategies: Suggests alternating between Jupyter and traditional Python files to maximize tool efficiency.

Conclusion

Action Plan for ML Engineers: Set up structured folders and documentation to leverage AI tools effectively. Encourage systematic exploration of MCPs to enhance both direct programming tasks and associated workflows.
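The client/server split described in these notes can be sketched as a minimal in-process simulation. The message shape and the method and tool names below (`tools/list`, `tools/call`, `echo`) are simplified assumptions for illustration; the real protocol runs JSON-RPC 2.0 over stdio or SSE and carries more fields than shown here.

```python
import json

# Illustrative sketch of the MCP client/server split: the client wraps a
# request as a structured JSON command, the server executes it and returns
# a structured response. The "echo" tool is made up for this example.
TOOLS = {
    "echo": lambda args: args.get("text", ""),
}

def mcp_server_handle(raw_request):
    """Server side: execute a structured command, return a structured response."""
    req = json.loads(raw_request)
    if req["method"] == "tools/list":
        result = {"tools": sorted(TOOLS)}
    elif req["method"] == "tools/call":
        tool = TOOLS[req["params"]["name"]]
        result = {"output": tool(req["params"]["arguments"])}
    else:
        return json.dumps({"id": req["id"], "error": "unknown method"})
    return json.dumps({"id": req["id"], "result": result})

def mcp_client_call(method, params=None, _id=[0]):
    """Client side: convert an agent request into a structured command."""
    _id[0] += 1
    raw = json.dumps({"id": _id[0], "method": method, "params": params or {}})
    return json.loads(mcp_server_handle(raw))

if __name__ == "__main__":
    print(mcp_client_call("tools/list")["result"])  # {'tools': ['echo']}
    print(mcp_client_call("tools/call",
                          {"name": "echo", "arguments": {"text": "hi"}})["result"])
```

A real stdio MCP server would read these JSON messages from stdin and write responses to stdout, with the client (the coding agent) spawning it as a subprocess; the cloud/SSE variant replaces that pipe with an HTTP event stream.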