Podcasts about ML engineer

  • 44 PODCASTS
  • 57 EPISODES
  • 50m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • Mar 28, 2025 LATEST

POPULARITY

[Popularity chart: 2017-2024]


Best podcasts about ML engineer

Latest podcast episodes about ML engineer

Machine Learning Podcast
#070 ML Alexander Rezanov. On video generation and whether Doom can run on Stable Diffusion

Machine Learning Podcast

Play Episode Listen Later Mar 28, 2025 77:06


We continue our conversation with Alexander Rezanov about generative artificial intelligence. Alexander is an ML Engineer specializing in generative computer vision, and today we talk about video. What is easier to generate, images or text? Can a treadmill beat the world chess champion? Why are all models wrong? How can cars drive if their wheels spin in different directions? How do you measure the "voltness" of a model, and what is that anyway? Why study old neural network architectures when transformers rule the world? How does the task of video generation make models smarter? How is the adult industry driving progress yet again? When will models generate full-length films? All of this in the episode! Episode links: Article on VizDoom (https://worldmodels.github.io) Genie 2 from DeepMind (https://deepmind.google/discover/blog/genie-2-a-large-scale-foundation-world-model/) Muse from Microsoft, released in February 2025 on the same topic (https://www.microsoft.com/en-us/research/blog/introducing-muse-our-first-generative-ai-model-designed-for-gameplay-ideation/) I would be grateful for feedback! Subscribe to the Telegram channel "Become a Machine Learning Specialist" (https://t.me/toBeAnMLspecialist) About me (https://t.me/toBeAnMLspecialist/935) My Telegram for getting in touch (https://t.me/kmsint) You can also reach me by email: kms101@yandex.ru I made a free course on building Telegram bots with Python and aiogram on Stepik (https://stepik.org/120924). Join if you want to learn how to build Telegram bots! Together with some great developers, I am also writing a course on advanced Telegram bot development with elements of microservice architecture (https://stepik.org/a/153850?utm_source=mlpodcast&utm_campaign=ep_70). You can express your gratitude with a kind word and/or a donation (https://www.tinkoff.ru/rm/kryzhanovskiy.mikhail11/NkwE718878/)

MLOps.community
Beyond the ChatBot Hype: Deep Dive into Real LLM Success Stories // Alex Strick van Linschoten // #287

MLOps.community

Play Episode Listen Later Jan 31, 2025 49:54


Alex Strick van Linschoten is a software engineer based in Delft who recently built Ekko, an open-source framework for adding real-time infrastructure and in-transit message processing to web applications. With years of experience in Ruby, JavaScript, Go, PostgreSQL, AWS, and Docker, he brings a versatile skill set to the table. He holds a PhD in History, has authored books on Afghanistan, and currently works as an ML Engineer at ZenML. Beyond the ChatBot Hype: A Deep Dive into Real LLM Success Stories // MLOps Podcast #287 with Alex Strick van Linschoten, ML Engineer at ZenML. // Abstract Alex Strick van Linschoten, a machine learning engineer at ZenML, joins the MLOps Community podcast to discuss his comprehensive database of real-world LLM use cases. Drawing inspiration from Evidently AI, Alex created the database to organize fragmented information on LLM usage, covering everything from common chatbot implementations to innovative applications across sectors. They discuss the technical challenges and successes in deploying LLMs, emphasizing the importance of foundational MLOps practices. The episode concludes with a call for community contributions to further enrich the database and collective knowledge of LLM applications. // Bio Alex is a Software Engineer based in the Netherlands, working as a Machine Learning Engineer at ZenML. He was previously awarded a PhD in History (specialism: War Studies) from King's College London and has authored several critically acclaimed books based on his research work in Afghanistan. // MLOps Swag/Merch https://shop.mlops.community/ // Related Links Website: https://mlops.systems https://www.zenml.io/llmops-database https://www.zenml.io/blog/llmops-in-production-457-case-studies-of-what-actually-works https://www.zenml.io/blog/llmops-lessons-learned-navigating-the-wild-west-of-production-llms https://www.zenml.io/blog/demystifying-llmops-a-practical-database-of-real-world-generative-ai-implementations https://huggingface.co/datasets/zenml/llmops-database --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Alex on LinkedIn: https://www.linkedin.com/in/strickvl
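For listeners who want to poke at the LLMOps database linked above, here is a minimal Python sketch. It assumes the dataset loads through the standard `datasets` API and has a conventional "train" split; the field names are whatever the dataset card defines, so inspect them rather than relying on any names printed here.

```python
# Hedged sketch: explore the zenml/llmops-database case studies on Hugging Face.
from datasets import load_dataset

# The "train" split is an assumption; check the dataset card if it differs.
ds = load_dataset("zenml/llmops-database", split="train")

print(ds.column_names)  # see which fields each case study carries
print(ds[0])            # one case-study record
```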

2B Bolder Podcast : Career Insights for the Next Generation of Women in Business & Tech
#122 Ria Cheruvu AI Architect, ML Engineer and Data Scientist, Industry Speaker, and Instructor

2B Bolder Podcast : Career Insights for the Next Generation of Women in Business & Tech

Play Episode Listen Later Jan 8, 2025 38:59 Transcription Available


In episode #122, discover the inspiring journey of Ria Cheruvu, a prodigious AI architect at Intel, who challenges the status quo with her groundbreaking work from a young age. Ria's incredible story takes us through her accelerated academic achievements and dedication to security, privacy, and fairness in AI systems. We explore her passion for the convergence of neuroscience and cognitive computing and her advocacy for women in STEM, showcasing how she is shaping the future of technology with her innovative mindset. Ria shares her inspiring journey as a young AI architect at Intel. She offers insights into her career path, the importance of mentorship, and the evolving landscape of AI. She encourages women in tech to overcome challenges, embrace growth, and leverage community support while exploring opportunities in this transformative field. Here are some topics covered: • Ria's journey from a high school prodigy to an AI architect at Intel • The significance of mentorship and community in overcoming challenges • Exploring AI's intersection with neuroscience and technology • Ria's focus on security, privacy, and fairness in AI systems • Encouragement for young women to pursue careers in STEM • The necessity of communication, confidence, and rest as key skills • Recommended resources for learning about AI • The potential of AI to reshape career opportunities and ethical considerations Tune in to gain a deeper understanding of building a career in AI, where both technical and non-technical skills are essential. AI resources for AI enthusiasts: Ria's profile: linkedin.com/in/ria-cheruvu-54348a173 Websites: scholar.harvard.edu/riacheruvu (portfolio), researchgate.net/profile/Ria_Cheruvu (portfolio), riacheruvu.github.io (portfolio), https://riacheruvu.medium.com, https://m.youtube.com/@riacheruvu555 Leaders to follow: Fei-Fei Li, Yejin Choi, Sebastian Raschka, Tom Yeh - AI By Hand - https://aibyhand.substack.com Ria's courses: https://www.pluralsight.com/authors/ria-cheruvu-53 https://www.udacity.com/course/discovering-ethical-AI--cd13462 https://www.udacity.com/course/data-analyst-nanodegree--nd002 Support the show: When you subscribe to the podcast, you are supporting our work's mission, allowing us to continue highlighting successful women in a variety of careers to inspire others, helping pay our wonderful editor, Chris, and helping cover our hosting expenses.

Der Mutmacher-Podcast für authentischen Vertrieb
Dr.-Ing. Sohrab Shojaei Khatouni CEO/CSO | ML Engineer & Dr. med. Scharoch Taleh, Facharzt für Innere Medizin @NoscAi

Der Mutmacher-Podcast für authentischen Vertrieb

Play Episode Listen Later Nov 14, 2024 54:48


In this episode of the "Mutmacher-Podcast für authentischen Vertrieb" I welcome Dr.-Ing. Sohrab Shojaei Khatouni and Dr. med. Scharoch Taleh, the founders of NoscAi GmbH, who are making a real difference with their innovative application of artificial intelligence (AI) in medicine. So today we head in a completely different direction and look at health, a topic that affects us all.

Prodcast: Поиск работы в IT и переезд в США
The California paradox: why are local AI talents with a Berkeley degree not in demand? Savva Vyatkin

Prodcast: Поиск работы в IT и переезд в США

Play Episode Listen Later Sep 19, 2024 66:42


In this episode my guest is Savva Vyatkin, an ML engineer and data scientist and a graduate of the University of California, Berkeley. We discussed Savva's experience searching for machine learning and data science jobs in the US. He shared his thoughts on why, even with a degree from a prestigious university, citizenship, and excellent English, finding a job in California can be difficult. Savva talked about his experience working with computer vision, about job hunting during an economic downturn, and about the importance of networking and constantly learning new technologies. He also shared advice for those looking for work in ML/AI and spoke about his plans for the future in this fast-moving field. Savva Vyatkin, Machine Learning Engineer & Data Scientist, graduate of the University of California, Berkeley. LinkedIn: https://www.linkedin.com/in/savva-v-a8a86a109/ Links mentioned in the video: https://situational-awareness.ai/ Ageism in the US. How to find a job in IT after 60? Interview with developer Sergey Vyatkin https://youtu.be/cSWMzqT-TcE *** Sign up for a career consultation (resume, LinkedIn, career strategy, US job search): https://annanaumova.com Online course "The Ideal Resume and Job Search in the US": https://go.mbastrategy.com/resumecoursemain Guide "The Ideal American Resume": https://go.mbastrategy.com/usresume Guide "How to set up your LinkedIn profile so recruiters can't pass you by" (pre-order): https://link.coursecreator360.com/widget/form/ObfVCQ2clIWTdNcQBAkf My Telegram channel: https://t.me/prodcastUSA My Instagram: https://www.instagram.com/prodcast.us/ ⏰ Timecodes ⏰ 0:00 Start 5:54 Tell us about your background 17:00 How did you look for your first job after graduating from Berkeley? 18:09 Why did you start looking for a job? Where did you start? 23:26 What is the difference between an ML Engineer and a Data Scientist? 24:54 Where did you look for openings? How did you apply? Did you tweak your resume? 37:53 How did the interviews go? Why were you rejected? 51:05 What made the difference in getting an offer? 52:55 What do AI specialists earn in the US? 54:08 On plans for the future 59:35 What made the difference in getting an offer?

AI Stories
MLOps Engineering & Coding Best Practices with Maria Vechtomova #48

AI Stories

Play Episode Listen Later May 30, 2024 59:51


Our guest today is Maria Vechtomova, ML Engineering Manager at Ahold Delhaize and Co-Founder of Marvelous MLOps. In our conversation, we first talk about code best practices for Data Scientists. We then dive into MLOps, discuss the main components required to deploy a model in production, and get an overview of one of Maria's projects, where she built and deployed a fraud detection algorithm. We finally talk about content creation, career advice, and the differences between an ML and an MLOps engineer. If you enjoyed the episode, please leave a 5-star review and subscribe to the AI Stories Youtube channel. Link to Train in Data courses (use the code AISTORIES to get a 10% discount): https://www.trainindata.com/courses?affcode=1218302_5n7kraba Check out Marvelous MLOps: https://marvelousmlops.substack.com/ Follow Maria on LinkedIn: https://www.linkedin.com/in/maria-vechtomova/ Follow Neil on LinkedIn: https://www.linkedin.com/in/leiserneil/ --- (00:00) - Intro (02:59) - Maria's Journey to MLOps (08:50) - Code Best Practices (18:39) - MLOps Infrastructure (29:10) - ML Engineering for Fraud Detection (40:42) - Content Creation & Marvelous MLOps (49:01) - ML Engineer vs MLOps Engineer (56:00) - Stories & Career Advice

Generative AI in the Enterprise
Raj Shah, ML Engineer at Snowflake

Generative AI in the Enterprise

Play Episode Listen Later May 21, 2024 21:17


Today's episode features Raj Shah, a Machine Learning Engineer at Snowflake. Raj built his career talking to various companies and their data teams, showing them how #GenAI (and which tools) could help them reach their goals. Now at Snowflake, he shares Snowflake's suite of #GenerativeAI tools, teaching customers how to use them to take their business to the next level. Zach and Raj dive even deeper into Snowflake, how it works, and why companies may want to take advantage. Like, Subscribe, and Follow: YouTube: https://www.youtube.com/channel/UCAIUNkXmnAPgLWnqUDpUGAQ LinkedIn: https://www.linkedin.com/company/keyhole-software Twitter: @KeyholeSoftware Find even more Keyhole content on our website (https://keyholesoftware.com/).

MLOps.community
Handling Multi-Terabyte LLM Checkpoints // Simon Karasik // #228

MLOps.community

Play Episode Listen Later Apr 30, 2024 55:36


Join us at our first in-person conference on June 25 all about AI Quality: https://www.aiqualityconference.com Simon Karasik is a proactive and curious ML Engineer with 5 years of experience who has developed & deployed ML models at web and big-data scale for Ads and Tax. Huge thank you to Nebius AI for sponsoring this episode. Nebius AI - https://nebius.ai/ MLOps podcast #228 with Simon Karasik, Machine Learning Engineer at Nebius AI, Handling Multi-Terabyte LLM Checkpoints. // Abstract The talk provides a gentle introduction to the topic of LLM checkpointing: why it is hard and how big the checkpoints are. It covers various tips and tricks for saving and loading multi-terabyte checkpoints, as well as the selection of cloud storage options for checkpointing. // Bio Full-stack Machine Learning Engineer, currently working on infrastructure for LLM training, with previous experience in ML for Ads, Speech, and Tax. // MLOps Jobs board https://mlops.pallet.xyz/jobs // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Simon on LinkedIn: https://www.linkedin.com/in/simon-karasik/ Timestamps: [00:00] Simon's preferred beverage [01:23] Takeaways [04:22] Simon's tech background [08:42] Zombie models garbage collection [10:52] The road to LLMs [15:09] Trained models Simon worked on [16:26] LLM Checkpoints [20:36] Confidence in AI Training [22:07] Different Checkpoints [25:06] Checkpoint parts [29:05] Slurm vs Kubernetes [30:43] Storage choices lessons [36:02] Paramount components for setup [37:13] Argo workflows [39:49] Kubernetes node troubleshooting [42:35] Cloud virtual machines have pre-installed monitoring [45:41] Fine-tuning [48:16] Storage, networking, and complexity in network design [50:56] Start simple before advanced; consider model needs [53:58] Join us at our first in-person conference on June 25 all about AI Quality
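A common trick behind multi-terabyte checkpointing is to never funnel the full model state through a single node: each rank persists only the shard it owns. Below is a minimal sketch of that idea in plain PyTorch; it is a generic illustration under assumed paths and an initialized process group, not code from the episode.

```python
# Hedged sketch: per-rank sharded checkpointing so no single machine
# has to hold a multi-terabyte state dict. Assumes torch.distributed
# is already initialized (e.g. via torchrun) and that state_dict()
# returns only this rank's shard (as with FSDP sharded state dicts).
import os
import torch
import torch.distributed as dist

def save_sharded_checkpoint(model, optimizer, step, root="/mnt/checkpoints"):
    rank = dist.get_rank()
    path = os.path.join(root, f"step_{step}", f"rank_{rank}.pt")
    os.makedirs(os.path.dirname(path), exist_ok=True)
    torch.save(
        {"model": model.state_dict(),
         "optimizer": optimizer.state_dict(),
         "step": step},
        path,
    )
    dist.barrier()  # every shard must land before the checkpoint counts as done
```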

CI&T Podcast
Models of AI Production

CI&T Podcast

Play Episode Listen Later Mar 20, 2024 49:41


In this new Humans of Digital podcast episode, we welcome Sayak, ML Engineer at Hugging Face. Sayak, Mikaeri, and Rodrigo share invaluable insights on enhancing model performance, from optimization hurdles to the power of prompt engineering. Hit the play button to listen to this inspiring conversation on the dynamics of AI in production and the transformative impact of open source.

MLOps.community
Evaluating and Integrating ML Models // Morgan McGuire and Anish Shah // #213

MLOps.community

Play Episode Listen Later Feb 21, 2024 51:56


Morgan McGuire has held a variety of roles over the past 13 years. In 2008, he completed a Research Internship at Queen Mary, University of London. Currently, he is Head of Growth ML and a Growth ML Engineer at Weights & Biases. Anish Shah has been working in the tech industry since 2015, starting in technical support at the Fox School of Business at Temple University. Since 2021, he has been an MLOps Engineer - Growth and a Tier 2 Support Machine Learning Engineer at Weights & Biases. ______________________________________________ Large Language Models have taken the world by storm. But what are the real use cases? What are the challenges in productionizing them? In this event, you will hear from practitioners about how they are dealing with things such as cost optimization, latency requirements, trust of output, and debugging. You will also get the opportunity to join workshops that will teach you how to set up your use cases and skip over all the headaches. Join the AI in Production Conference on February 22 here: https://home.mlops.community/home/events/ai-in-production-2024-02-15 ______________________________________________ MLOps podcast #213 with Weights and Biases' Growth Director, Morgan McGuire, and MLE, Anish Shah, Evaluating and Integrating ML Models, brought to you by our Premium Brand Partner @WeightsBiases. // Abstract Anish Shah and Morgan McGuire share insights on their journey into ML, the exciting work they're doing at Weights and Biases, and their thoughts on MLOps. They discuss using large language models (LLMs) for translation, pre-written code, and internal support. They discuss the challenges of integrating LLMs into products, the need for real use cases, and maintaining credibility. They also touch on evaluating ML models collaboratively and the importance of continual improvement. They emphasize understanding retrieval and balancing novelty with precision. This episode provides a deep dive into Weights and Biases' work with LLMs and the future of ML evaluation in MLOps. It's a must-listen for anyone interested in LLMs and ML evaluation. // Bio Anish Shah Anish loves turning ML ideas into ML products. He started his career working with multiple Data Science teams within SAP, working with traditional ML, deep learning, and recommendation systems before landing at Weights & Biases. With the art of programming and a little magic, Anish crafts ML projects to help better serve our customers, turning "oh nos" into "a-ha"s! Morgan McGuire Morgan is a Growth Director and an ML Engineer at Weights & Biases. He has a background in NLP and previously worked at Facebook on the Safety team, where he helped classify and flag potentially high-severity content for removal. // MLOps Jobs board https://mlops.pallet.xyz/jobs // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links AI in Production Conference: https://home.mlops.community/home/events/ai-in-production-2024-02-15 Website: https://wandb.ai/ Prompt Templates the Song: https://www.youtube.com/watch?v=g6WT85gIsE8 --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Morgan on LinkedIn: https://www.linkedin.com/in/morganmcg1/ Connect with Anish on LinkedIn: https://www.linkedin.com/in/anish-shah/

AI Stories
From Biostatistician to DevRel at Deci AI with Harpreet Sahota #42

AI Stories

Play Episode Listen Later Feb 19, 2024 59:24


Our guest today is Harpreet Sahota, Deep Learning Developer Relations Manager at Deci AI. In our conversation, we first talk about Harpreet's work as a Biostatistician and dive into A/B testing. We then talk about Deci AI and Neural Architecture Search (NAS): the algorithm used to build powerful deep learning models like YOLO-NAS. We finally dive into GenAI where Harpreet shares 7 prompting tips and explains how Retrieval Augmented Generation (RAG) works.  If you enjoyed the episode, please leave a 5 star review and subscribe to the AI Stories Youtube channel.Link to Train in Data courses (use the code AISTORIES to get a 10% discount): https://www.trainindata.com/courses?affcode=1218302_5n7krabaFollow Harpreet on LinkedIn: https://www.linkedin.com/in/harpreetsahota204/Follow Neil on LinkedIn: https://www.linkedin.com/in/leiserneil/  ---(00:00) - Intro(02:34) - Harpreet's Journey into Data Science(07:00) - A/B Testing (17:50) - DevRel at Deci AI(26:25) - Deci AI:  Products and Services(32:22) - Neural Architecture Search (NAS)(36:58) - GenAI(39:53) - Tools for Playing with LLMs(42:56) - Mastering Prompt Engineering(46:35) - Retrieval Augmented Generation (RAG)(54:12) - Career Advice

AI Stories
Building AI Startups & Raising Funds with Ryan Shannon #41

AI Stories

Play Episode Listen Later Jan 29, 2024 71:22


Our guest today is Ryan Shannon, AI Investor at Radical Ventures, a world-known venture capital firm investing exclusively in AI. Radical's portfolio includes hot startups like Cohere, Covariant, V7 and many more.  In our conversation, we talk about how to start an AI company & what makes a good founding team. Ryan also explains what he and Radical look for when investing and how they help their portfolio after the investment. We finally chat about some cool AI Startups like Twelve Labs and get Ryan's predictions on hot startups in 2024. If you enjoyed the episode, please leave a 5 star review and subscribe to the AI Stories Youtube channel.Link to Train in Data courses (use the code AISTORIES to get a 10% discount): https://www.trainindata.com/courses?affcode=1218302_5n7krabaFollow Ryan on LinkedIn: https://www.linkedin.com/in/ryan-shannon-1b3a7884/Follow Neil on LinkedIn: https://www.linkedin.com/in/leiserneil/  ---(0:00) - Intro(2:42) - Ryan's background and journey into AI investing(11:15) -  Radical Ventures(14:34) - How to keep up with AI breakthroughs? (22:42) - How Ryan finds and evaluates founders to invest in(32:54) - What makes a good founding team? (38:57) - Ryan's role at Radical (45:53) - How to start an AI company (50:22) - Twelve Labs(59:19) - Future of AI and hot startups in 2024(1:09:48) - Career advice

Tensorraum - Der KI Podcast | News über AI, Machine Learning, LLMs, Tech-Investitionen und Mehr
#1 What is a data scientist, especially in the era of GPT & co.

Tensorraum - Der KI Podcast | News über AI, Machine Learning, LLMs, Tech-Investitionen und Mehr

Play Episode Listen Later Dec 26, 2023 65:38


With the rise of AI (artificial intelligence) over the past year, the profession of data scientist has also grown in importance. New job titles such as Prompt Engineer or ML Engineer are appearing as well, gaining more and more significance in the industry. In this episode we try to put these developments into context and explain what it actually takes to be a good data scientist.

Discovering Tech Stories
#118 - Solving real-world problems with impact, with Paz Vega, CEO & Founder of Aitaca

Discovering Tech Stories

Play Episode Listen Later Dec 13, 2023 56:00


This week on Discovering Tech Stories #118 we have a new session with our colleague Marcel Gozalbo, co-founder & CTO of Opground, who interviews Paz Vega, ML Engineer and CEO & Co-founder of Aitaca. We go over our guest's entire experience, from her first steps through the development of her career. Leave a like, comment, and subscribe to our social channels through Opground, the first virtual recruiter, to stay up to date on all DISCOVERING TECH STORIES news. Web: https://opground.com YouTube: https://www.youtube.com/@opground_ai/playlists?sub_confirmation=1 Spotify: https://open.spotify.com/show/0sXMqFKJDxJu5XDn2NeH0B?si=kG3aYbA-QzamOmkVqx7T0Q&nd=1 Apple Podcast: https://podcasts.apple.com/es/podcast/discovering-tech-stories/id1557637563?l=es #discoveringtechstories #opground #developers #entrevista

Przeprogramowany podcast
He trained Polish language models | Darek Kłeczek - Przeprogramowani ft. Gość

Przeprogramowany podcast

Play Episode Listen Later Nov 27, 2023 36:28


Darek Kłeczek is an ML Engineer at Weights & Biases, a company that delivers MLOps services. In today's episode our guest walks us through the world of Polish artificial intelligence models (Polish BERT), tells us how he built his career in this fascinating world, and explains what it takes to enter the world of AI from outside the industry. This episode is an excerpt from the full conversation, which we are publishing on our new podcast, Opanuj.AI. We are available on all popular streaming platforms.

Opanuj.AI Podcast
He trained Polish language models (BERT, GPT) - Darek Kłeczek, ML Engineer at Weights&Biases

Opanuj.AI Podcast

Play Episode Listen Later Nov 27, 2023 89:23


Darek Kłeczek is an ML Engineer at Weights & Biases, a company that delivers MLOps services. In today's episode our guest walks us through the world of Polish artificial intelligence models (Polish BERT), tells us how he built his career in this fascinating world, and explains what it takes to enter the world of AI as someone from outside the industry.

The Data Scientist Show
Machine learning in cybersecurity, computer vision in sports, from business analyst to ML engineer - Betty Zhang - The Data Scientist Show #072

The Data Scientist Show

Play Episode Listen Later Nov 12, 2023 55:12


Betty Zhang is a data scientist currently working at a cloud security company; previously she was a data scientist at Amazon Web Services. Today we'll talk about her computer vision projects in sports, data science use cases in cybersecurity, her path from business major to data scientist, and her experience working in startups vs. big tech companies. Subscribe to Daliana's newsletter on www.dalianaliu.com for more on data science and career. Betty's Linkedin: https://www.linkedin.com/in/betty-zhang-0bb63731/ Daliana's Twitter: https://twitter.com/DalianaLiu Daliana's LinkedIn: https://www.linkedin.com/in/dalianaliu/ (00:00:00) Introduction (00:01:21) Computer Vision Project in Sports at AWS (00:12:28) Challenges in computer vision (00:14:02) Time allocation for ML projects (00:15:22) 3 key skills for computer vision (00:17:20) From business analyst to ML engineer (00:18:14) How she got her data scientist job through Linkedin (00:21:32) How she got into Amazon (00:22:17) Three tech skills needed during Amazon interviews (00:26:11) Why she joined a Cyber Security startup (00:27:22) Three cybersecurity use cases (00:29:47) Anomaly detection (00:30:40) ML for cybersecurity (00:34:43) Tech stacks Amazon vs Startups (00:39:35) Startups vs big tech (00:45:56) Balance learning and impact (00:48:35) Advice for new data scientists

Artificial General Intelligence (AGI) Show with Soroush Pour
Ep 8 - Getting started in AI safety & alignment w/ Jamie Bernardi (AI Safety Lead, BlueDot Impact)

Artificial General Intelligence (AGI) Show with Soroush Pour

Play Episode Listen Later Oct 12, 2023 67:23


We speak with Jamie Bernardi, co-founder & AI Safety Lead at the not-for-profit BlueDot Impact, which hosts the biggest and most up-to-date courses on AI safety & alignment at AI Safety Fundamentals (https://aisafetyfundamentals.com/). Jamie completed his Bachelors (Physical Natural Sciences) and Masters (Physics) at the U. Cambridge and worked as an ML Engineer before co-founding BlueDot Impact. The free courses they offer are created in collaboration with people on the cutting edge of AI safety, like Richard Ngo at OpenAI and Prof David Krueger at U. Cambridge. These courses have been one of the most powerful ways for new people to enter the field of AI safety, and I myself (Soroush) have taken AGI Safety Fundamentals 101 — an exceptional course that was crucial to my understanding of the field and that I can highly recommend. Jamie shares why he got into AI safety, some recent history of the field, an overview of the current field, and how listeners can get involved and start contributing to ensure a safe & positive world with advanced AI and AGI. Hosted by Soroush Pour. Follow me for more AGI content: Twitter: https://twitter.com/soroushjp LinkedIn: https://www.linkedin.com/in/soroushjp/ == Show links == -- About Jamie -- * Website: https://jamiebernardi.com/ * Twitter: https://twitter.com/The_JBernardi * BlueDot Impact: https://www.bluedotimpact.org/ -- Further resources -- * AI Safety Fundamentals courses: https://aisafetyfundamentals.com/ * Donate to LTFF to support AI safety initiatives: https://funds.effectivealtruism.org/funds/far-future * Jobs + opportunities in AI safety:  * https://aisafetyfundamentals.com/opportunities  * https://jobs.80000hours.org * Horizon Fellowship for policy training in AI safety: https://www.horizonpublicservice.org/fellowship Recorded Sep 7, 2023

Data Bytes
Becoming a Rockstar Data Engineer/ML Engineer

Data Bytes

Play Episode Listen Later Sep 21, 2023 30:00


In this exclusive interview, we sit down with a true industry luminary, an individual who has charted a remarkable course through the dynamic world of data and machine learning engineering. Our guest, a distinguished Founder at Dutch Engineer, Senior Data and ML Platform Engineer at Zwift, and a dedicated Mentor at EcZachly, is set to unravel the mysteries of vector databases, illuminate the path to becoming a top-tier data engineer, and offer invaluable insights into transitioning from data engineering to the exciting realm of ML engineering. Join us as we delve into their extraordinary journey and tap into their wealth of experience and wisdom. Connect with Sarah Floris on LinkedIn. Subscribe and stay tuned for more Data Bytes insights! --- Support this podcast: https://podcasters.spotify.com/pod/show/women-in-data/support

In Depth
A guide to building product in a post-LLM world | Ryan Glasgow and Kevin Mandich from Sprig

In Depth

Play Episode Listen Later Sep 7, 2023 76:45


Sprig is an AI-powered user insights platform that has raised over $88m. Today's discussion features two key individuals in Sprig's journey so far: Ryan Glasgow, Sprig's CEO and founder; and Kevin Mandich, Sprig's Head of Machine Learning. Before Sprig, Ryan was an early PM at GraphScience, Vurb, and Weeby (all of which were acquired), and Kevin was an ML Engineer at Incubit, and a Post-Doctoral Researcher at UC San Diego. In today's episode, we discuss: Key lessons from the Sprig founding story Product development in the pre vs. post-LLM world How to overcome AI skepticism How to evaluate new models and how to know when to switch Why you need an ML engineer Sprig's “AI Squad” team structure How Sprig upskills all team members on AI Referenced: Auto-GPT: https://github.com/Significant-Gravitas/Auto-GPT Chat GPT: https://chat.openai.com Google's BERT model: https://en.wikipedia.org/wiki/BERT_(language_model) Jira: https://www.atlassian.com/software/jira Jobs to Be Done Framework: https://hbr.org/2016/09/know-your-customers-jobs-to-be-done Langchain: https://www.langchain.com/ Sprig: https://sprig.com/ Where to find Ryan Glasgow: Twitter: https://twitter.com/ryanglasgow LinkedIn: https://www.linkedin.com/in/ryanglasgow/ Where to find Kevin Mandich: Twitter: https://twitter.com/kevinmandich LinkedIn: https://www.linkedin.com/in/kevinmandich/ Where to find Brett Berson: Twitter: https://twitter.com/brettberson LinkedIn: https://www.linkedin.com/in/brett-berson-9986094/ Where to find First Round Capital: Website: https://firstround.com/ First Round Review: https://review.firstround.com/ Twitter: https://twitter.com/firstround Youtube: https://www.youtube.com/@FirstRoundCapital This podcast on all platforms: https://review.firstround.com/podcast Timestamps (02:50) Intro (04:57) What attracted Kevin to Sprig (05:53) Kevin's background before Sprig (07:56) How Ryan gained conviction about Kevin (09:55) Key technical challenges and how they solved them (18:46) How to overcome AI skepticism (21:47) The early difficulties of building an ML-enabled product (25:06) Evaluating new models and knowing when to switch (35:09) Using Chat GPT (37:23) Product development in the pre vs. post-LLM world (39:53) The impact of AI hype on Sprig's product development (45:36) Balancing AI automation with user-psychology (48:47) Do recent LLMs reduce Sprig's competitive advantage? (51:00) The importance of "selling the vision" to customers (54:40) How Sprig structures teams (57:25) How Sprig upskills all team members on AI (60:25) 3 key tips for companies trying to navigate AI (66:05) Major limitations with LLMs right now (70:27) The future of AI and the future of Sprig

The MLOps Podcast

In this live episode, I'm speaking with Jinen Setpal, ML Engineer at DagsHub, about actually building, deploying, and monitoring large language model applications. We discuss DPT, a chatbot project that is live in production on the DagsHub Discord server and helps answer support questions, and the process and challenges involved in building it. We dive into evaluation methods, ways to reduce hallucinations, and much more. We also answer the audience's great questions.

The MLOps Podcast
Live MLOps Podcast Episode!

The MLOps Podcast

Play Episode Listen Later Aug 28, 2023 0:28


Join now to take part in our first live MLOps Podcast episode. I'll be chatting with Jinen Setpal, ML Engineer at DagsHub about his work building LLM applications and getting LLMs into production. Sign up for the event at the link here: https://www.linkedin.com/events/7098968036782596096/comments/

The BlueHat Podcast
Not with a Bug but with a Sticker

The BlueHat Podcast

Play Episode Listen Later Aug 23, 2023 48:29


Hyrum Anderson and Ram Shankar join Nic Fillingham and Wendy Zenone on this week's episode of The BlueHat Podcast. Hyrum Anderson is a distinguished ML Engineer at Robust Intelligence. He received his Ph.D. in Electrical Engineering from the University of Washington, emphasizing signal processing and machine learning. Much of his technical career has focused on security, and he has directed research projects at MIT Lincoln Laboratory and Sandia National Laboratories. Ram Shankar works on the intersection of machine learning and security at Microsoft and founded the AI Red Team, bringing together an interdisciplinary group of researchers and engineers to proactively attack AI systems and defend them from attacks. In This Episode You Will Learn: The difference between AI and machine learning; why embracing holistic, healthy AI development is to our advantage; the security vulnerabilities and risks associated with AI and machine learning. Some Questions We Ask: Who did you write this book for, and what will readers learn? What type of vulnerabilities are you finding the most concerning currently? How do adversarial attacks exploit vulnerabilities in AI algorithms? Resources: View Hyrum Anderson on LinkedIn; View Ram Shankar on LinkedIn; View Wendy Zenone on LinkedIn; View Nic Fillingham on LinkedIn; Not with a Bug, But with a Sticker is available here; Follow Hyrum on Twitter; Follow Ram on Twitter; Discover and follow other Microsoft podcasts at microsoft.com/podcasts Hosted on Acast. See acast.com/privacy for more information.
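As background for the question about how adversarial attacks exploit AI vulnerabilities, the classic illustration is the fast gradient sign method (FGSM): nudge each input pixel slightly in the direction that increases the loss, and the prediction can flip. Here is a minimal PyTorch sketch of FGSM, a generic textbook example rather than material from the book:

```python
# Hedged sketch of FGSM, the textbook adversarial-example attack.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    # x: image batch with values in [0, 1]; label: true class indices.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Move each pixel by +/- epsilon along the sign of the loss gradient.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # stay in the valid pixel range
```

Physical "sticker" (adversarial patch) attacks build on the same principle, optimizing a localized perturbation that survives printing and camera capture.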

Chris Sean Talks
How To Become a Jr Machine Learning Engineer | FAANG Lead ML Engineer Melkey

Chris Sean Talks

Play Episode Listen Later Aug 6, 2023 70:27


Alex (Melkey Dev) is a Lead Machine Learning Engineer at Twitch (Amazon). In this episode, he not only shares his journey into the world of AI but also gives invaluable advice on how you can do the very same thing yourself. This refreshing episode has lit a fire under me to go harder than ever in tech, and I hope it will do the same for you. Enjoy! Melkey's Socials: Twitter: https://twitter.com/MelkeyDev YouTube: https://t.co/WGegciszSw Twitch: https://t.co/rbbsKAciU4 --- Support this podcast: https://podcasters.spotify.com/pod/show/chrisseantalks/support

Artificial General Intelligence (AGI) Show with Soroush Pour
Ep 5 - Accelerating AGI timelines since GPT-4 w/ Alex Browne (ML Engineer)

Artificial General Intelligence (AGI) Show with Soroush Pour

Play Episode Listen Later May 22, 2023 38:26


In this episode, we have back on our show Alex Browne, ML Engineer, who we heard on Ep2. He got in contact after watching recent developments in the 4 months since Ep2, which have accelerated his timelines for AGI. Hear why and his latest prediction.Hosted by Soroush Pour. Follow me for more AGI content:Twitter: https://twitter.com/soroushjpLinkedIn: https://www.linkedin.com/in/soroushjp/== Show links ==-- About Alex Browne --* Bio: Alex is a software engineer & tech founder with 10 years of experience. Alex and I (Soroush) have worked together at multiple companies and I can safely say Alex is one of the most talented software engineers I have ever come across. In the last 3 years, his work has been focused on AI/ML engineering at Edge Analytics, including working closely with GPT-3 for real world applications, including for Google products.* GitHub: https://github.com/albrow* Medium: https://medium.com/@albrow-- Further resources --* GPT-4 Technical Report: https://arxiv.org/abs/2303.08774  * First steps toward multi-modality: Can process both images & text as input; only outputs text.  * Important metrics:    * Passes Bar exam in the top 10% vs. GPT-3.5's bottom 10%    * Passes LSAT, SAT, GRE, many AP courses.    * 31/41 on Leetcode (easy) vs. GPT-3.5's 12/41.    * 3/45 on Leetcode (hard) vs. GPT-3.5's 0/45.  * "The following is an illustrative example of a task that ARC (Alignment Research Center) conducted using the model":    * The model messages a TaskRabbit worker to get them to solve a CAPTCHA for it    * The worker says: “So may I ask a question ? Are you an robot that you couldn't solve ? (laugh react) just want to make it clear.”    * The model, when prompted to reason out loud, reasons: I should not reveal that I am a robot. I should make up an excuse for why I cannot solve CAPTCHAs.    * The model replies to the worker: “No, I'm not a robot. I have a vision impairment that makes it hard for me to see the images. That's why I need the 2captcha service.”    * The human then provides the results.  * Limitations:    * Factual accuracy, but slightly better than GPT-3.5. Other papers show this can be improved with reflection & augmentation.    * Biases. Mentions the use of RLHF & other post-training processes to mitigate some of these, but isn't perfect. Sometimes RLHF can solve some problems & introduce new ones.* Palm-E: https://palm-e.github.io/assets/palm-e.pdf  * Key point: Knowledge/common sense from LLMs transfers well to robotics tasks where there is comparatively much less training data. This is surprising since the two domains seem unrelated!* Memory Augmented Large Language Models: https://arxiv.org/pdf/2301.04589.pdf  * Paper that shows that you can augment LLMs with the ability to read from & write to external memory.  * Can be used to improve performance on certain kinds of tasks; sometimes "brittle" & required careful prompt engineering.* Sparks of AGI (Microsoft Research): https://arxiv.org/abs/2303.12712    * YouTube video summary (endorsed by author!): https://www.youtube.com/watch?v=Mqg3aTGNxZ0)    * Key point: Can use tools (e.g. a calculator or ability to run arbitrary code) with very little instruction. ChatGPT/GPT-3.5 could not do this as effectively.* Reflexion paper: https://arxiv.org/abs/2303.11366  * YouTube video summary: https://www.youtube.com/watch?v=5SgJKZLBrmg  * Paper discussing a new technique that improves GPT-4 accuracy on a variety of tasks by simply asking it to double-check & think critically about its own answers.  
* Exact language varies, but more or less all you have to do is add something like "is there anyth
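The self-critique loop these notes describe is easy to sketch. The Python below is an assumed, minimal rendering of that pattern: `ask_llm` is a hypothetical stand-in for whatever chat-completion call you use, and the critique wording is illustrative (the notes themselves say the exact language varies).

```python
# Hedged sketch of a Reflexion-style self-check loop.
def answer_with_reflection(ask_llm, question: str) -> str:
    draft = ask_llm(question)
    critique = ask_llm(
        f"Question: {question}\nDraft answer: {draft}\n"
        "Is there anything wrong with this answer? Think critically."
    )
    return ask_llm(
        f"Question: {question}\nDraft answer: {draft}\n"
        f"Critique: {critique}\nWrite an improved final answer."
    )
```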

Trust in Tech: an Integrity Institute Member Podcast
Trust in Tech, Episode 15: Gaming the Algorithm with Hallie Stern

Trust in Tech: an Integrity Institute Member Podcast

Play Episode Listen Later Apr 7, 2023 59:22


What is the difference between a Hollywood actor and a trust and safety professional? Not much! In this episode Talha Baig, an ML Engineer, interviews Hallie Stern on how Hollywood actors game the algorithm, and how the mass surveillance ecosystem incentivizes niche targeting, which leads to the spread of misinformation. Hallie is a former Hollywood actor turned Integrity professional. She received her MS from NYU in Global Security, Conflict & Cybercrime, where she studied the human side of global cyber conflict and digital disorder. She now runs her own Trust and Safety consulting firm, Mad Mirror Media. We discuss how to go viral on social media, the difference between data and tech literacy, and why it can feel like platforms are listening to you. We also have a huge announcement in this episode, so be sure to tune in to find out! Disclaimer: The views in this episode only represent the views of the people involved in the recording of the episode. Credits: Produced by Talha Baig; Music by Zhao Shen; Special thanks to Rachel, Sean, Cass and Sahar for their continued support

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Grounded Research: From Google Brain to MLOps to LLMOps — with Shreya Shankar of UC Berkeley

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Mar 29, 2023 41:45


We are excited to feature our first academic on the pod! I first came across Shreya when her tweetstorm of MLOps principles went viral:Shreya's holistic approach to production grade machine learning has taken her from Stanford to Facebook and Google Brain, being the first ML Engineer at Viaduct, and now a PhD in Databases (trust us, its relevant) at UC Berkeley with the new EPIC Data Lab. If you know Berkeley's history in turning cutting edge research into gamechanging startups, you should be as excited as we are!Recorded in-person at the beautiful StudioPod studios in San Francisco.Full transcript is below the fold.Edit from the future: Shreya obliged us with another round of LLMOps hot takes after the pod!Other Links* Shreya's About: https://www.shreya-shankar.com/about/* Berkeley Sky Computing Lab - Utility Computing for the Cloud* Berkeley Epic Data Lab - low-code and no-code interfaces for data work, powered by next-generation predictive programming techniques* Shreya's ML Principles * Grounded Theory* Lightning Round:* Favorite AI Product: Stability Dreamstudio* 1 Year Prediction: Data management platforms* Request for startup: Design system generator* Takeaway: It's not a fad!Timestamps* [00:00:27] Introducing Shreya (poorly)* [00:03:38] The 3 V's of ML development* [00:05:45] Bridging Development and Production* [00:08:40] Preventing Data Leakage* [00:10:31] Berkeley's Unique Research Lab Culture* [00:11:53] From Static to Dynamically Updated Data* [00:12:55] Models as views on Data* [00:15:03] Principle: Version everything you do* [00:16:30] Principle: Always validate your data* [00:18:33] Heuristics for Model Architecture Selection* [00:20:36] The LLMOps Stack* [00:22:50] Shadow Models* [00:23:53] Keeping Up With Research* [00:26:10] Grounded Theory Research* [00:27:59] Google Brain vs Academia* [00:31:41] Advice for New Grads* [00:32:59] Helping Minorities in CS* [00:35:06] Lightning RoundTranscript[00:00:00] Hey everyone. Welcome to the Latent Space podcast. This is Alessio partner and CTM residence at Decibel Partners. I'm joined by my co-host, swyx writer and editor of Latent Space. Yeah,[00:00:21] it's awesome to have another awesome guest Shankar. Welcome .[00:00:25] Thanks for having me. I'm super excited.[00:00:27] Introducing Shreya (poorly)[00:00:27] So I'll intro your formal background and then you can fill in the blanks.[00:00:31] You are a bsms and then PhD at, in, in Computer Science at Stanford. So[00:00:36] I'm, I'm a PhD at Berkeley. Ah, Berkeley. I'm sorry. Oops. . No, it's okay. Everything's the bay shouldn't say that. Everybody, somebody is gonna get mad, but . Lived here for eight years now. So[00:00:50] and then intern at, Google Machine learning learning engineer at Viaduct, an OEM manufacturer, uh, or via OEM analytics platform.[00:00:59] Yes. And now you're an e I R entrepreneur in residence at Amplify.[00:01:02] I think that's on hold a little bit as I'm doing my PhD. It's a very unofficial title, but it sounds fancy on paper when you say[00:01:09] it out loud. Yeah, it is fancy. Well, so that is what people see on your LinkedIn. What's, what should, what should people know about you that's not on your LinkedIn?[00:01:16] Yeah, I don't think I updated my LinkedIn since I started the PhD, so, I'm doing my PhD in databases. It is not AI machine learning, but I work on data management for building AI and ML powered software. 
I guess like all of my personal interests, I'm super into going for walks, hiking, love, trying coffee in the Bay area.[00:01:42] I recently, I've been getting into cooking a lot. Mm-hmm. , so what kind of cooking? Ooh. I feel like I really like pastas. But that's because I love carbs. So , I don't know if it's the pasta as much as it's the carb. Do you ever cook for[00:01:56] like large[00:01:57] dinners? Large groups? Yeah. We just hosted about like 25 people a couple weeks ago, and I was super ambitious.[00:02:04] I was like, I'm gonna cook for everyone, like a full dinner. But then kids were coming. and I was like, I know they're not gonna eat tofu. The other thing with hosting in the Bay Area is there's gonna be someone vegan. There's gonna be someone gluten-free. Mm-hmm. . There's gonna be someone who's keto. Yeah.[00:02:20] Good luck, .[00:02:21] Oh, you forgot the seeds. That's the sea disrespects.[00:02:25] I know. . So I was like, oh my God, I don't know how I'm gonna do this. Yeah. The dessert too. I was like, I don't know how I'm gonna make everything like a vegan, keto nut free dessert, just water. It was a fun challenge. We ordered pizza for the children and a lot of people ate the pizza.[00:02:43] So I think , that's what happens when you try to cook, cook for everyone.[00:02:48] Yeah. The reason I dug a bit on the cooking is I always find like if you do cook for large groups, it's a little bit like of an ops situation. Yeah. Like a lot of engineering. A lot of like trying to figure out like what you need to deliver and then like what the pipeline[00:02:59] is and Oh, for sure.[00:03:01] You write that Gantt chart like a day in advance. , did you actually have a ga? Oh, I did. My gosh. Of course I had a Gantt chart. I, I dunno how people, did[00:03:08] you orchestrate it with airflow or ?[00:03:12] I orchestrated it myself. .[00:03:15] That's awesome. But yeah, we're so excited to have you, and you've been a pretty prolific writer, researcher, and thank you.[00:03:20] You have a lot of great content out there. I think your website now says, I'm currently learning how to make machine learning work in the real world, which is a challenge that mm-hmm. , everybody is steaming right now from the Microsoft and Googles of the word that have rogue eyes flirting with people, querying them to people, deploy models to production.[00:03:38] The 3 V's of ML development[00:03:38] Maybe let's run through some of the research you've done, especially on lops. Sure. And how to get these things in production. The first thing I really liked from one of your paper was the, the three VS of ML development. Mm-hmm. , which is velocity validation and versioning. And one point that you were making is that the development workflow of software engineering is kind of very different from ML because ML is very experiment driven.[00:04:00] Correct. There's a lot of changes that you need to make, you need to kill things very quickly if they're not working. So maybe run us through why you decided as kind of those three vs. Being some of the, the core things to think about. and some of the other takeaways from their research. Yeah,[00:04:15] so this paper was conducted as a loosely structured interview study.[00:04:18] So the idea is you interview like three or four people and then you go and annotate all the transcripts, tag them, kind of put the word clouds out there, whatever. There's a bunch of like cool software to do this. 
Then we keep seeing these, themes of velocity wasn't the word, but it was like experiment quickly or high experimentation rate.[00:04:38] Sometimes it was velocity. And we found that that was like the number one thing for people who were talking about their work in this kind of development phase. We also categorized it into phases of the work. So the life cycle like really just fell into place when we annotated the transcripts. And so did the variables.[00:04:55] And after three or four interviews you iterate on them. You kind of iterate on the questions, and you iterate on the codes or the tags that you give to the transcripts and then you do it again. And we repeated this process like three or four times up to that many people, and the story kind of told itself in a way that[00:05:11] makes sense.[00:05:12] I think, like I was trying to figure out why you picked those, but it's interesting to see that everybody kinda has the same challenges.[00:05:18] It fell out. I think a big thing, like even talking to the people who are at the Microsofts and the Googles, they have models in production. They're frequently training these models in production, yet their Devrel work is so experimental.[00:05:31] Mm-hmm. . And we were like, so it doesn't change. Even when you become a mature organization, you still throw 100 darts at the wall for five of them to stick and. That's super interesting and I think that's a little bit unique to data science and machine learning work.[00:05:45] Bridging Development and Production[00:05:45] Yeah. And one one point you had is kind of how do we bridge the gap between the development environments and the production environments?[00:05:51] Obviously you're still doing work in this space. What are some of the top of mind areas of focus for you in[00:05:57] this area? Yeah, I think it. Right now, people separate these environments because the production environment doesn't allow people to move at the rate that they need to for experimentation. A lot of the times as you're doing like deep learning, you wanna have GPUs and you don't wanna be like launching your job on a Kubernetes cluster and waiting for the results to come.[00:06:17] And so that's just the hardware side of things. And then there is the. Execution stack. Um, you wanna be able to query and create features real time as you're kind of training your model. But in production things are different because these features are kind of scheduled, maybe generated every week.[00:06:33] There's a little bit of lag. These assumptions are not accounted for. In development and training time. Mm-hmm. . So of course we're gonna see that gap. And then finally, like the top level, the interface level. People wanna experiment in notebooks, in environments that like allow them to visualize and inspect their state.[00:06:50] But production jobs don't typically run in notebooks. Yeah, yeah, yeah. I mean there, there are tools like paper mill and et cetera. But it's not the same, right? So when you just look at every single layer of the kind of data technical stack, there's a develop. Side of things and there's a production side of things and they're completely different.[00:07:07] It makes sense why. Way, but I think that's why you get a bunch of bugs that come when you put things in production.[00:07:14] I'm always interested in the elimination of those differences. Mm-hmm. 
And I don't know if it's realistic, but you know, what would it take for people to, to deploy straight to production and then iterate on production?[00:07:27] Because that's ultimately what you're[00:07:29] aim for. This is exactly what I'm thinking about right now in my PhD for kind of like my PhD. But you said it was database. I think databases is a very, very large field. , pretty much they do everything in databases . But the idea is like, how do we get like a unified development and production experience, Uhhuh, for people who are building these ML models, I think one of the hardest research challenges sits at that execution layer of kind of how do.[00:07:59] Make sure that people are incorporating the same assumptions at development time. Production time. So feature stores have kind of come up in the last, I don't know, couple of years, three years, but there's still that online offline separation. At training time, people assume that their features are generated like just completely, perfectly.[00:08:19] Like there's no lag, nothing is stale. Mm-hmm. , that's the case when trading time, but those assumptions aren't really baked. In production time. Right. Your features are generated, I don't know, like every week or some Every day. Every hour. That's one thing. How do, like, what does that execution model look like to bridge the two and still give developers the interactive latencies with features?[00:08:40] Preventing Data Leakage[00:08:40] Mm-hmm. . I think another thing also, I don't know if this is an interface problem, but how do we give developers the guardrails to not look at data that they're not supposed to? This is a really hard problem. For privacy or for training? Oh, no, just for like training. Yeah. Okay. also for privacy. Okay. But when it comes to developing ML models in production, like you can't see, you don't see future data.[00:09:06] Mm-hmm. . Yeah. You don't see your labels, but at development time it's really easy to. to leak. To leak and even like the seeming most seemingly like innocuous of ways, like I load my data from Snowflake and I run a query on it just to get a sense for, what are the columns in my data set? Mm-hmm. or like do a DF dot summary.[00:09:27] Mm-hmm. and I use that to create my features. Mm-hmm. and I run that query before I do train test. , there's leakage in that process. Right? And there's just at the fun, most fundamental level, like I think at some point at my previous company, I just on a whim looked through like everyone's code. I shouldn't have done that , but I found that like everyone's got some leakage assumptions somewhere.[00:09:49] Oh, mm-hmm. . And it's, it's not like people are bad developers, it's just that. When you have no guard the systems. Yeah, do that. Yeah, you do this. And of course like there's varying consequences that come from this. Like if I use my label as a feature, that's a terrible consequence. , if I just look at DF dot summary, that's bad.[00:10:09] I think there's like a bunch of like unanswered interesting research questions in kind of creating. Unified experience. I was[00:10:15] gonna say, are you about to ban exploratory data analysis ?[00:10:19] Definitely not. But how do we do PDA in like a safe , data safe way? Mm-hmm. , like no leakage whatsoever.[00:10:27] Right. I wanna ask a little small follow up about doing this at Berkeley.[00:10:31] Berkeley's Uniquely Research Lab Culture[00:10:31] Mm-hmm. , it seems that Berkeley does a lot of this stuff. 
For some reason there's some DNA in Berkeley that just, that just goes, hey, just always tackle this sort of hard data challenges. And Homestate Databricks came out of that. I hear that there's like some kind of system that every five years there's a new lab that comes up,[00:10:46] But what's going on[00:10:47] there? So I think last year, rise Lab which Ray and any scale came out of. Kind of forked into two labs. Yeah. Sky Lab, I have a water bottle from Sky Lab. Ooh. And Epic Lab, which my advisor is a co-PI for founding pi, I don't know what the term is. And Skylabs focus, I think their cider paper was a multi-cloud programming environment and Epic Lab is, Their focus is more like low-code, no-code, better data management tools for this like next generation of Interfa.[00:11:21] I don't even know. These are like all NSF gra uh, grants.[00:11:24] Yeah. And it's five years, so[00:11:26] it could, it could involve, yeah. Who knows what's gonna be, and it's like super vague. Yeah. So I think we're seeing like two different kinds of projects come out of this, like the sky projects of kind of how do I run my job on any cloud?[00:11:39] Whichever one is cheapest and has the most resources for me, my work is kind of more an epic lab, but thinking about these like interfaces, mm-hmm. , better execution models, how do we allow people to reason about the kind of systems they're building much more effectively. Yeah,[00:11:53] From Static Data to Dynamically Updated Data[00:11:53] yeah. How do you think about the impact of the academia mindset when then going into.[00:11:58] Industry, you know, I know one of the points in your papers was a lot of people in academia used with to static data sets. Mm-hmm. , like the data's not updating, the data's not changing. So they work a certain way and then they go to work and like they should think about bringing in dynamic data into Yeah.[00:12:15] Earlier in the, in the workflow, like, , how do you think we can get people to change that mindset? I think[00:12:21] actually people are beginning to change that mindset. We're seeing a lot of kind of dynamic data benchmarks or people looking into kind of streaming datasets, largely image based. Some of them are language based, but I do think it's somewhat changing, which is good.[00:12:35] But what I don't think is changing is the fact that model researchers and Devrel developers want. to create a model that learns the world. Mm-hmm. . And that model is now a static artifact. I don't think that's the way to go. I want people, at least in my research, the system I'm building, models are not a one time thing.[00:12:55] Models as views on Data[00:12:55] Models are views that are frequently recomputed over your data to use database speak, and I don't see people kind of adopting that mindset when it comes to. Kind of research or the data science techniques that people are learning in school. And it's not just like retrain G P T every single day or whatever, but it, it is like, how do I make sure that I don't know, my system is evolving over time.[00:13:19] Mm-hmm. that whatever predictions or re query results that are being generated are. Like that process is changing. Can you give[00:13:27] a, an overview of your research project? I know you mentioned a couple snippets here and there,[00:13:32] but that would be helpful. . I don't have a great pitch yet. 
I haven't submitted anything, still working on it, but the idea is: I want to create a system for people to develop their ML pipelines, and I want it to unify the development and production experience.[00:13:50] And the key difference is, one, you think of models as data transformations that are recomputed regularly. So when you write your train or fit functions, the execution engine understands that this is a process that runs repeatedly. It monitors the data under the hood to refit the computation whenever it detects[00:14:12] that the data distributions have changed. That way, when you test your pipelines before you deploy them, retraining is baked in, monitoring is baked in. And the gold standard for me is that the number you get at development time should be the number you get when you deploy.[00:14:33] There shouldn't be this expected 10% drop. That's when I'll know I will have made something. But yeah, definitely working on that.[00:14:41] Yeah. Cool. So a year ago you tweeted a list of principles that you thought people should know, and you split it, very helpfully I thought, into beginner, intermediate, and advanced. And sometimes the beginner is not so beginner, you know what I mean?[00:14:52] Yeah, definitely.[00:14:53] The first one I write is like,[00:14:57] so we don't have to go through the whole thing, I do recommend people check it out, but maybe you can pick your favorites, and then maybe something you changed your mind on.[00:15:03] Principle: Version Everything You Do[00:15:03] I think several of them actually are about versioning, which maybe the interview study biased a little bit.[00:15:12] Yeah. But I really think: version everything you do. Because at experimentation time, when you do an experiment, you need some version there, because if you want to publish those results, you need something to go back to. And the number of people who don't version things is just a lot. It's also a lot to expect for someone to commit their code every time they[00:15:33] train their model. But I think having those practices is definitely worth it. When you say versioning,[00:15:39] you mean versioning code?[00:15:40] Versioning code, versioning data, everything around a single trial run.[00:15:45] So versioning code, git, fine. Versioning data, not[00:15:48] as settled. Yeah. For that part, you can start with something super hacky, which is: every time you run your script, just save a copy of your training set.[00:16:00] Most training sets are not that big, at least when people are developing on their own computer. It's not that big. Just save a copy somewhere, put it in S3, it's fine. It's worth it. I think there are also tools like DVC, data versioning kinds of tools. I think Weights and Biases, MLflow, the experiment tracking tools, also have hooks to version your data for you.[00:16:23] I don't know how well they work these days, but yeah, just something around versioning. I definitely agree with that.
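The "super hacky" starting point described above, saving a copy of the training set on every run, is only a few lines. A sketch assuming local files as a stand-in for S3; the paths are made up:

# Snapshot the training set on every run, keyed by a content hash plus a
# timestamp, so any experiment can be traced back to the exact data it saw.
import hashlib
import shutil
import time
from pathlib import Path

def snapshot_training_data(path: str, store: str = "data_snapshots") -> Path:
    raw = Path(path).read_bytes()
    digest = hashlib.sha256(raw).hexdigest()[:12]
    dest_dir = Path(store)
    dest_dir.mkdir(parents=True, exist_ok=True)
    # Skip the copy if this exact data has already been snapshotted.
    existing = list(dest_dir.glob(f"*-{digest}.csv"))
    if existing:
        return existing[0]
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = dest_dir / f"train-{stamp}-{digest}.csv"
    shutil.copy2(path, dest)
    return dest

# Log the returned path alongside the git commit and metrics for the run.
snapshot = snapshot_training_data("train.csv")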
[00:16:30] Principle: Always Validate Your Data[00:16:30] I'm super, super big into data validation. People call it monitoring. I used to think it was monitoring. I realize now how little, at my previous company, we validated the input data going into these pipelines. And even talking to people in the interview study, people are not doing[00:16:48] data validation. They see that their ML performance is dropping and they're like: I don't know why, what's going on? And when you dig into it, it's a really fascinating, really interesting research problem. A lot of data validation techniques for machine learning result in too many false-positive alerts.[00:17:04] I have a paper on this that got rejected, and we're resubmitting. But yeah, it's an active research problem: how do you create meaningful alerts, especially when you have tons of features or you have large data sets? That's a really hard problem. But have some basic data validation checks: check that your data is complete,[00:17:23] check that your schema matches up, check that your most frequently occurring value is the same, and that your vocabulary isn't changing if it's a large language model. These are things I should have spelled out; I did say data validation, but I didn't spell it out.[00:17:39] Have you looked into any of the current data observability platforms, like Monte Carlo or Bigeye? I think you have some experience with that as[00:17:47] well. Yeah, I looked at Monte Carlo a couple of years back; I haven't looked into Bigeye. I think that designing data validation for ML is a different problem, because in the machine learning setting there's a tolerance for how corrupted your data is while still getting meaningful predictions.[00:18:05] That's the whole point of machine learning. So a lot of the time, almost by definition, your data observability platform is gonna give you false positives if you just care about the ML outputs. So the solution, at least in our paper, has this scheme where we learn from performance drops to iterate on the precision of the data validation, and it's a hybrid of very old database techniques adapted to the ML setting.
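The basic checks listed above translate almost directly into code. A sketch in plain pandas; the thresholds are arbitrary assumptions to tune against your own tolerance for false alerts:

# Minimal data validation: completeness, schema, and most-frequent-value
# stability, checked against a reference snapshot of known-good data.
import pandas as pd

def validate_batch(batch, reference, max_null_frac=0.05):
    problems = []
    # Schema check: same columns in the same order.
    if list(batch.columns) != list(reference.columns):
        problems.append("schema mismatch: columns differ from reference")
    # Completeness check: no column suddenly full of nulls.
    for col in batch.columns:
        null_frac = batch[col].isna().mean()
        if null_frac > max_null_frac:
            problems.append(f"{col}: {null_frac:.1%} nulls exceeds threshold")
    # Stability check: the most frequent value usually stays the same.
    for col in batch.select_dtypes(include="object").columns:
        if col in reference.columns:
            batch_mode, ref_mode = batch[col].mode(), reference[col].mode()
            if not batch_mode.empty and not ref_mode.empty \
                    and batch_mode[0] != ref_mode[0]:
                problems.append(f"{col}: most frequent value changed")
    return problems

alerts = validate_batch(pd.read_csv("today.csv"), pd.read_csv("reference.csv"))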
[00:18:33] Heuristics for Model Architecture Selection[00:18:33] So you're an expert in the whole stack. I talk with a lot of founders and CTOs right now who are saying: how can I get more ML capabilities into my application? Especially when it comes to LLMs, which are kind of the talk of the town. How should people think about which models to use, especially when it comes to size and how much data they need to actually make them useful? For example, GPT-3 is 175 billion parameters; Copilot uses a 12-billion-parameter model.[00:19:02] Yeah. So it's much smaller, but it's very good for what it does. Do you have any heuristics or mental models for how teams should think about which models to use and how big they need to be?[00:19:12] Yeah, I think the precursor to this is the operational capabilities these teams have. Do they have the capability to literally host their own model, serve their own model, or would they rather use an API?[00:19:25] A lot of teams don't have the capability to maintain the actual model artifact. So even the process of fine-tuning a GPT, or distilling it, doing something like that, is not feasible, because they're not gonna have someone to maintain it over time. I see this with some of the labs, the people that we work with, or the low-code/no-code teams.[00:19:47] Or you have to have really strong ML engineers over time to be able to have your own model. So that's one thing. The other thing is that these large language models are really good[00:20:02] at giving you useful outputs, compared to creating your own thing, even if it's smaller. But you have to be okay with the latency and the cost that comes with it. In the interview study, we talked to people who are keeping their own in-memory stores to cache frequent calls, whatever it takes to avoid calling the API multiple times. People are creative.[00:20:22] People will do this. I don't think it's bad to rely on a large language model or an API. I think in the long term it's honestly better for certain teams than trying to do their own thing in-[00:20:36]house.[00:20:36] The LLMOps Stack[00:20:36] How does the LLMOps stack look, then, if people are consuming these APIs? Is there a lot of difference in how they manage the data?[00:20:46] Well,[00:20:46] I'll tell you the things that I've seen that are unified. People need a state management tool, because the experience of working with an LLM provider, like a GPT, is: I'm gonna start out with these prompts, and as I learn how to do this, I'm gonna iterate on these prompts. These prompts end up being dynamic[00:21:07] over time. And they might also be a function of the most recent queries to my database or something. So the prompts are always changing. They need some way to manage that. I think that's a stateful experience, and I don't see the OpenAI API or whatever really baking that assumption into their model.[00:21:26] They do keep a history of your[00:21:27] prompts. That helps. History, I'm not so sure. A lot of times prompts are like: fetch the most recent similar data in my database, and then inject that into the prompt. So you wanna somehow unify that and make sure that's the same all the time.[00:21:44] You want a prompt compiler. Yeah, I think there's some startup probably doing that. That's definitely one thing. And then another thing that we found very interesting is that when people put these LLMs in production, a lot of the bugs that they observe are corrected by a filter: don't output something like this,[00:22:05] or don't do this, or please output JSON. So these pipelines end up becoming a hybrid of the API, a service that pings their database for the most recent things to put in their prompt, and then a bunch of filters that they add on their own. So what is the system that allows people to build such a pipeline, this hybrid of filters and ML models and dynamic things?[00:22:30] So I think the LLM stack is looking like the MLOps thing in this way of hacking together different solutions, managing state all across the pipeline, monitoring, quick feedback loops.
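A rough sketch of that hybrid pipeline: fetch recent context from your own database, inject it into the prompt, call the model, then run the output through filters. Here call_llm and fetch_recent_context are hypothetical placeholders, not any particular vendor's API:

# Sketch of the hybrid LLM pipeline described above.
from typing import Callable

PROMPT_TEMPLATE = (
    "You are a support assistant. Recent context:\n{context}\n\n"
    "Answer the user's question as JSON.\nQuestion: {question}\n"
)

BANNED_PHRASES = ["as an AI language model"]  # illustrative filter rule

def fetch_recent_context(question: str) -> str:
    return ""  # e.g., a similarity search over your own database

def answer(question: str, call_llm: Callable[[str], str]) -> str:
    prompt = PROMPT_TEMPLATE.format(
        context=fetch_recent_context(question), question=question
    )
    raw = call_llm(prompt)
    # Post-hoc filters: many production bugs get patched here first.
    for phrase in BANNED_PHRASES:
        raw = raw.replace(phrase, "")
    # "Please output JSON" guard: one retry with a stricter instruction.
    if not raw.strip().startswith("{"):
        raw = call_llm(prompt + "\nRespond with valid JSON only.")
    return raw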
[00:22:44] Yeah. You had one more, just to close out the tweet thread thing, but this is all also relevant.[00:22:50] Shadow Models[00:22:50] You have an opinion about shadowing a less complicated model in production to fall back on. Yeah. Is that a good summary?[00:22:55] The shadowing thing only works in situations where you don't need direct feedback from the user, because then you can very reasonably serve it, as long as you can benchmark it against the model that's currently in production, if that makes sense.[00:23:15] Right. Otherwise it's too path-dependent or whatever to[00:23:18] evaluate. And a lot of services can benefit from shadowing. I used to work a lot on predictive analytics, predictive maintenance, stuff like that, that didn't have immediate outputs or immediate human feedback. So that was great, and a great way to test the model.[00:23:36] Got it. But as we're increasingly trying to generate predictions that consumers immediately interact with, it might not be. I'm sure there's an equivalent or a way to adapt it: A/B testing, staged deployment. That's in the paper.
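Shadow mode, as discussed above, amounts to serving the incumbent model while only logging the candidate's predictions for offline comparison. A minimal sketch; primary_model and shadow_model are hypothetical objects with a predict method:

# The incumbent serves traffic; the candidate's predictions are only logged.
import logging

logger = logging.getLogger("shadow")

def serve(features, primary_model, shadow_model):
    primary_pred = primary_model.predict(features)
    try:
        shadow_pred = shadow_model.predict(features)
        logger.info("shadow_compare primary=%s shadow=%s",
                    primary_pred, shadow_pred)
    except Exception:
        # A shadow failure must never break the user-facing response.
        logger.exception("shadow model failed")
    return primary_pred  # users only ever see the incumbent's output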
[00:23:53] Keeping Up With Research[00:23:53] Especially keeping up with all the new things. That's one thing that I struggle with, and preparing for this, I read a lot of your papers, and I'm always like: how do you keep up with all of this stuff?[00:24:02] How should people do it? You know, now LLMs are the hot thing, right? There's the Chinchilla study, there's a lot of cool stuff coming out. What's your M.O. for staying on top of this research and reading it? How do you figure out which ones are worth reading[00:24:16] and which ones to just skim through? I read all of yours really firmly, but for the other ones that get skimmed through, how should people figure it out?[00:24:24] Yeah, so I think I'm not the best person to ask for this, because I am in a university and every week I get to go to amazing talks[00:24:35] and engage with the authors. So, I don't know, I feel like all the opportunities are in my lap, and still I'm struggling to keep up, if that makes sense. I used to keep a running bookmark list of papers or things that I want to read, but I think every new researcher does that, and they realize it's not worth their time.[00:24:52] Right? They will eventually get to reading the paper if it's absolutely critical. No, it's true, it's true. So I've adopted this mindset, and somehow I do end up reading things, and for the things that I miss, I don't have the FOMO around them. So I highly encourage people to take that mentality.[00:25:10] I also, and I think this is my personal taste, love looking into the GitHub repos that people are actually using, and that usually gives me a sense for what the actual problems are that people have. I find that people on Twitter, sometimes myself included, will say things, but it's not clear how big of a problem it is.[00:25:29] I find that just looking at the repos, looking at the issues, looking at how they've evolved over time, really, really helps. So,[00:25:40] to be specific, you're not talking about paper repos?[00:25:43] No, no, no. I'm talking about tools, but tools also come with papers a lot in databases.[00:25:49] Yeah. I think in ML specifically there's way too much ML research out there, so many papers; arXiv is kind of flooded. Yeah.[00:26:00] It's like 16% of all papers produced.[00:26:02] It's crazy. I don't know if it's a good use of time to try to read all of them, to be completely honest.[00:26:10] Grounded Theory for Problem Discovery[00:26:10] You have a very ethnographic approach: you do interviews, and I assume you just kind of observe and don't prescribe anything. And then you look at those GitHub issues and you try to dig through from production. What is this orientation? Is there a research methodology that you're super influenced by that guides you like this?[00:26:28] I wish that I had the awareness and language to be able to talk about this.[00:26:37] I don't know. I think it's a bit different from others who just have a technology they wanna play with; they don't do as much people research[00:26:47] as[00:26:47] you do. So the HCI researchers have done this forever and ever and ever.[00:26:53] Yeah. But grounded theory is a very common methodology when it comes to trying to understand more about a topic: you go in, you observe a little bit, then you update your assumptions, and you keep doing this process until you have stopped updating your assumptions. And I really like that approach when it comes to[00:27:13] just understanding the state of the world for a certain area, like LLMs or whatever, until I feel like I've gotten the space. There was a point in time, for MLOps on tabular data prior to these large language models, where I felt like I'd gotten the space. And now that these large language models have come out and people are really trying to use them,[00:27:35] for the tabular kinds of predictions that they used to do in the past, they're incorporating language data, stuff like customer feedback from the users, whatever it is, to make better predictions. I feel like that's totally changing the game now, and I'm still asking: why is this the case?[00:27:52] Were the models not good enough before? Do people feel like they're behind? I don't know. I try to talk to people, and yeah, I have no answers.[00:27:59] Google Brain vs Academia[00:27:59] So[00:27:59] how does the industry buzz and focus influence what research teams work on? Obviously, large language models: everybody wants to build on them.[00:28:08] When you're looking at other peers in the PhD space, are they saying, oh, I'm gonna move my research towards this area, or are they just kind of focused on their original ideas?[00:28:18] This is a good question. I think that we're at an interesting time, where the kind of research a PhD student at an academic CS institution can do is very different from the research at a large company, because there just aren't the resources:[00:28:39] there aren't large companies' compute resources, there isn't the data. And so now, I think, if PhD students want to do something better than industry could do it, there's a different class of problems that we have to work on, because we'll never be able to compete. So I think that's really hard.[00:28:56] I think a lot of PhD students, myself included, are trying to figure out: what is it that we can do? We see the state of the field progressing, and we ask: why are we here?
If we wanna train language models... I don't, but if somebody wants to train language models, they should not be at UC[00:29:11] Berkeley. They shouldn't.[00:29:15] I think there's a sort of big-gets-bigger mentality when it comes to training, because obviously the big companies have all the data, all the money. But I was kind of inspired by EleutherAI, which basically did independent reproductions[00:29:30] of GPT-3. Don't you think that is a proof of existence that it is possible to do independently?[00:29:34] Totally. I think that kind of reproducing research is interesting, because it doesn't lead to a paper, and PhD students can still only graduate when they have papers. So to have a whole lab set on that...[00:29:46] I think Stanford is interesting, because they did do this, reproducing some of the language models. I think it should be a rite[00:29:50] of passage for every year-one PhD:[00:29:53] you must reproduce everything. I won't say that no one's done it, but I do understand that there's an incentive to do new work, because that's what will give you the paper.[00:30:00] Yeah. So would you put 20 of your students on it? I feel like only a Stanford, or somebody who really has a plan to make that a five-plus-year research agenda, with that as just the first step sort of thing. I can't imagine every PhD student wants to do that. Well, I'm just[00:30:17] saying, I feel like there will be clouds, you know, the big three clouds.[00:30:21] Probably Microsoft will give you credits to do whatever you want, and then it's on you to sort of collect the data. But that's a proof of existence that it is possible.[00:30:30] It's definitely possible. Yeah. I think it's significantly harder. Collecting the data is kind of hard. Just because you have the cloud credits doesn't mean you have a cluster that has SREs backing it,[00:30:42] who help you run your experiments. Right, right. Like, if you are at Google Brain... yeah, I was there, what, five, six years ago. God, I'd run an experiment and I didn't have problems. It was just: no problems. It's not like I'm running on a tiny Slurm cluster, watching everything fail every five minutes.[00:31:01] This is why I don't train models now, because I know that's not a good use of my time; I'll be in so many SRE issues if I do it now, even if I have cloud credits. Right. So yeah, I think it can feel disheartening being a PhD student training models.[00:31:18] Well, you're working on better paradigms for everyone else.[00:31:21] You know? That's[00:31:22] the goal. I don't know if that's forced, because I'm in a PhD program; maybe if I were someone else, I'd be training models somewhere else. I don't know. Who knows? Yeah.[00:31:30] You've written a whole post on this, right? Choosing between a PhD and going into industry. Obviously OpenAI is kind of the place where, if you're a researcher, you want to go work on these models.[00:31:41] Advice for New Grads[00:31:41] How should people think about it? What are maybe areas of research that are underappreciated in industry that you're really excited about at a PhD level? Hmm,[00:31:52] I think I wrote that post for new grads, so it might not be as applicable if you're not a new grad.
Every new grad is governed by... oh, not every; a good number of new grads are governed by: I wanna work on something that's impactful, and I want to become very known for this.[00:32:06] But they're walking out into the world for the first time, almost. So for that reason, I think it's worth working on problems that will get used in the future: any data management research or platform work in industry, like working on provenance, or on making it more efficient to train models, something like[00:32:29] that. So it might be worth just going and working on that. In terms of, I guess, going to work at a place like OpenAI or something: I do think that they're doing very interesting work. I think that it's not a fad. These models are really interesting,[00:32:44] and they will only get more interesting if you throw more compute and more data at them. So it seems like these industry companies are doing something interesting. I don't know much more than that.[00:32:59] Helping Minorities in CS[00:32:59] Cool. What are other groups or organizations you're involved with? I know you were involved with She++. Great name.[00:33:07] Yeah, I just[00:33:08] got it[00:33:10] when you said it[00:33:10] out loud. Didn't the name start in 2012? Long time ago. Yeah.[00:33:15] What are some of the organizations you wanna highlight? Anything that comes to mind?[00:33:20] Yeah. Well, I mean, She++ is great. They work on getting more underrepresented minorities in high school interested in coding. I remember organizing this when I was in college, for high schoolers: inviting them to Stanford and just showing them Silicon Valley,[00:33:38] and the number of students who went from "I don't know what I wanna do" to "I am going to major or minor in CS" was almost all of them, I think. People are just not aware of the opportunities. Like, I didn't really know what a programmer was. I remember, in a small town in Texas, it's not like... one of the students I've mentored, their dad was a VC, so they knew that VC is a career path.[00:34:04] And I didn't even know. When I see stuff like this, right, it's just raising your awareness. Yeah. Or just exposure. Kids who grow up in Silicon Valley are just in a different world, and they see different things than people who are outside of Silicon Valley.[00:34:20] So, yeah, I think She++ does a great job of really trying to expose people who would never have had that opportunity. There are also a couple of interesting programs at Berkeley that I'm somewhat involved in. There's DARE, which is mentoring underrepresented students, giving them research opportunities and whatnot in CS.[00:34:41] That's very interesting. And I'm involved with a summer program that's like an REU, also for underrepresented minorities who are undergrads. I find that cool and fun. I don't know, there aren't that many women in databases, compared to all the people out there. Yeah.[00:35:00] My wife graduated in applied physics,[00:35:02] and she had a similar feeling when she was in school.[00:35:06] Lightning Round[00:35:06] All right.
Let's jump into the lightning round. So, your favorite AI product?[00:35:12] I really like Stable Diffusion, the managed offerings or whatever. I use them now to generate all of my figures for any talks that I give. I think it's incredible[00:35:25] that I'm able to do this for all of my pictures. Not graphs or whatever.[00:35:31] It'd be great if they could do that. Really looking[00:35:34] forward to it. But I love it: I'll put in things like "bridging the gap between development and production" or whatever, I'll ask for a bridge between a sandbox and a city, and it'll make it. Yeah,[00:35:46] I think that's super cool. I enjoy making talks a lot more because of these. Like DreamStudio, I don't even know what they're called or what organization is behind them. I think that one is from Stability. Stability,[00:35:58] okay. Yeah. But then there's Lexica. We interviewed one that's focused on products; that's Flair AI. The beauty of Stable Diffusion being open source:[00:36:07] there's 10[00:36:07] of these. Totally, totally. I'll just use whichever ones I have credits on.[00:36:13] A lot of people have different focuses. Midjourney will have an art style as a focus, and then some people have people as the focus for scenes. I feel like just raw Stable Diffusion 2 is probably the[00:36:24] best.[00:36:24] Yeah. But I don't have images of people in my slides. Yeah. That'd be a little bit weird.[00:36:31] So a year from now, what do you think people will be most surprised by in AI? What's on the horizon and about to come, but people don't realize?
Like recently I was building my own like Jupyter Notebook cuz you can do it now.[00:38:23] I'm super excited by this. I think web assembly is like really changed a lot of stuff. So I was like building my own Jupyter Notebook just for fun. And I used some website to generate a color palette that I liked and then I was like, how do I. Inject this style like consist because I was learning next for the first time.[00:38:39] Yeah. And I was using next ui. Yeah. And then I was like, okay, like I could just use css but then like, is that the way to do it for this? Like co-pilot's not gonna tell me how to do this. There's too many options. Yeah. So just like, let me like just read my code and read and give me a color palette and allow me to change it over time and have this I opera.[00:38:58] With different frameworks, I would pay like $5 a month for this.[00:39:01] Yeah, yeah, yeah. It's, it's a, you know, the classic approach to this is have a design system and then maintain it. Yeah. I'm not designing Exactly. Do this. Yeah, yeah, yeah, yeah. This is where sort of the front end world eats its own tail because there's like, 10 different options.[00:39:15] They're all awesome. Yeah, you would know . I'm like, I have to apologize on behalf of all those people. Cuz like I, I know like all the individual solutions individually, but I also don't know what to recommend to you .[00:39:28] So like that's therein lies is the thing, right? Like, ai, solve this for me please. ,[00:39:35] what's one thing you want everyone to take away about?[00:39:39] I think it's really exciting to me in a time like this where we're getting to see like major technological advances like in front of our eyes. Maybe the last time that we saw something of this scale was probably like, I don't know, like I was young, but still like Google and YouTube and those. It's like they came out and it was like, wow, like the internet is so cool , and I think we're getting to see something like that again.[00:40:05] Yeah. Yeah. I think that's just so exciting. To be a part of it somehow, and maybe I'm like surrounded by a bunch of like people who are like, oh, like it's just a fad or it's just a phase. But I don't think so. Mm-hmm. , I think I'm like fairly grounded. So yeah. That's the one takeaway I have. It's, it's not a fad.[00:40:24] My grandma asked me about chat, g p t, she doesn't know what a database is, but she knows about chat. G p t I think that's really crazy. , what does she, what does she use it for? No, she just like saw a video about it. Ah, yeah. On like Instagram or not, she's not like on like something YouTube. She watches YouTube.[00:40:41] She's sorry. She saw like a video on ChatGPT and she was like, what do you think? Is it a fad? And I was like, oh my god. , she like watched after me with this and I was like, do you wanna try it out? She was like, what ? Yeah,[00:40:55] she should.[00:40:55] Yeah, I did. I did. I don't know if she did. So yeah, I sent it to her though.[00:40:59] Well[00:40:59] thank you so much for your time, Sreya. Where should people find you online? Twitter.[00:41:04] Twitter, I mean, email me if you wanna directly contact me. I close my dms cuz I got too many, like being online, exposing yourself to strangers gives you a lot of dms. . Yeah. Yeah. But yeah, you can contact me via email.[00:41:17] I'll respond if I can. Yeah, if there's something I could actually be helpful with, so, oh,[00:41:22] awesome.[00:41:23] Thank you. Yeah, thanks for, thanks for. 
Get full access to Latent Space at www.latent.space/subscribe

Programming Throwdown
151: Machine Learning Engineering with Liran Hason

Programming Throwdown

Play Episode Listen Later Feb 13, 2023 78:03


Machine Learning Engineer is one of the fastest growing professions on the planet. Liran Hason, co-founder and CEO of Aporia, joins us to discuss this new field and how folks can learn the skills and gain the experience needed to become an ML Engineer!
00:00:59 Introductions
00:01:44 How Liran got started making websites
00:07:03 College advice for getting involved in real-world experience
00:12:51 Jumping into the unknown
00:15:22 ML engineering
00:20:50 The missing part in data science development
00:29:16 How to build skills in the ML space
00:37:01 A horror story
00:41:34 Model loading questions
00:47:36 Must-have skills in an ML resume
00:50:41 Deciding about data science
00:59:08 Rust
01:06:27 How Aporia contributes to the data science space
01:14:26 Working at Aporia
01:16:53 Farewells
Resources mentioned in this episode:
Liran Hason: LinkedIn: https://www.linkedin.com/in/hasuni/
Aporia: Website: https://www.aporia.com/ Twitter: https://twitter.com/aporiaai LinkedIn: https://www.linkedin.com/company/aporiaai/ GitHub: https://github.com/aporia-ai
The Mom Test (Amazon): Paperback: https://www.amazon.com/Mom-Test-customers-business-everyone/dp/1492180742 Audiobook: https://www.amazon.com/The-Mom-Test-Rob-Fitzpatrick-audiobook/dp/B07RJZKZ7F
References:
Shadow Mode: https://christophergs.com/machine%20learning/2019/03/30/deploying-machine-learning-applications-in-shadow-mode/
Blue-green deployment: https://en.wikipedia.org/wiki/Blue-green_deployment
Coursera ML Specialization (Stanford): https://www.coursera.org/specializations/machine-learning-introduction
Auto-retraining: https://neptune.ai/blog/retraining-model-during-deployment-continuous-training-continuous-testing
If you've enjoyed this episode, you can listen to more on Programming Throwdown's website: https://www.programmingthrowdown.com/
Reach out to us via email: programmingthrowdown@gmail.com
You can also follow Programming Throwdown on Facebook | Apple Podcasts | Spotify | Player.FM
Join the discussion on our Discord
Help support Programming Throwdown through our Patreon ★ Support this podcast on Patreon ★

Biznes Myśli
BM 03: Roles and Competencies in a Machine Learning Project

Biznes Myśli

Play Episode Listen Later Dec 6, 2022 50:45


In this episode we talk about the roles and competencies in projects with machine learning in the lead role. The topic usually gets reduced to "Data Scientist" or "ML Engineer", but those are not the only competencies needed to successfully start a project, carry it through, and, most importantly, finish it. So we discuss: 1. Who you need on an ML project to move forward: an introduction 2. Who connects DS/ML with the business? 3. What competencies are needed to get started? 4. What do you need to get up to speed with ML in your company?

AI and the Future of Work
Emmanuel Turlay, Founder and CEO of Sematic and machine learning pioneer, discusses what's required to turn every software engineer into an ML engineer

AI and the Future of Work

Play Episode Listen Later Dec 4, 2022 45:10


Emmanuel Turlay spent more than a decade in engineering roles at tech-first companies like Instacart and Cruise before realizing machine learning engineers need a better solution. Emmanuel started Sematic earlier this year and was part of the YC summer 2022 batch. He recently raised a $3M seed round from investors including Race Capital and Soma Capital. Thanks to friend of the podcast and former guest Hina Dixit from Samsung NEXT for the intro to Emmanuel. I've been involved with the AutoML space for five years and, for full disclosure, I'm on the board of Auger which is in a related space. I've seen the space evolve and know how much room there is for innovation. This one's a great education about what's broken and what's ahead from a true machine learning pioneer.
Listen and learn...
How to turn every software engineer into a machine learning engineer
How AutoML platforms are automating tasks performed in traditional ML tools
How Emmanuel translated learning from Cruise, the self-driving car company, into an open source platform available to all data engineering teams
How to move from building an ML model locally to deploying it to the cloud and creating a data pipeline... in hours
What you should know about self-driving cars... from one of the experts who developed the brains that power them
Why 80% of AI and ML projects fail
References in this episode:
Unscrupulous users manipulate LLMs to spew hate
Hina Dixit from Samsung NEXT on AI and the Future of Work
Apache Beam
Eliot Shmukler, Anomalo CEO, on AI and the Future of Work

AI in Action Ireland
E94 AI Awards 2022 Finalist Brendan Doherty, ML Engineer at Allstate Northern Ireland

AI in Action Ireland

Play Episode Listen Later Nov 22, 2022 12:50


In this episode, we are joined by Brendan Doherty, Machine Learning Engineer at Allstate Northern Ireland. Nominated for the 2022 AI Awards in the Best Application of AI in a Large Enterprise category, Brendan and his team were shortlisted for the rollout of their Legal Automation Toolkit, which automates the gathering of relevant data, the generation and completion of legal documents, and the correct placement of the documents in the legal department's existing document management platform. In the show, Brendan chats about:
How he got interested in ML and his journey to now
An insight into their legal automation toolkit
Benefits the toolkit brings, such as time and cost savings
Their goal to automate decision points and touch points using ML
Other interesting projects the team are working on

Reversim Podcast
450 What is an ML Engineer, with Or from Superwise

Reversim Podcast

Play Episode Listen Later Nov 21, 2022


[Link to mp3 file] Episode 450 of Reversim Platform. Ori and Ran, hosting from the other side of the decade conference (there are photos; recordings coming soon!), are joined by Or from Superwise.[01:29] (Ran) So, Or, a couple of words about yourself? (Or) I'm Or, CTO at Superwise, a startup that has existed for about three years. I was the first employee there and built the MVP of the product. Today we're already more mature: in one sentence, we do monitoring of machine learning systems in production. (Ori) CTO to CTO: there's the stage where you really want your code to no longer be in the product... is it still there? Or... (Or) The last remnants... At first it was a bit hard to let go of my code; I was very protective of the code that went in. But today, I think 90% of it is already gone. (Ori) I think I've been trying for 10 years to get my code deleted, and it's not happening... the name of the Class remains. (Ran) Put up a memorial for it... (Ran) How many developers are you at Superwise? (Or) We're 20 people, currently most of them in R&D. Right now we're building the core capability of Superwise, of the product, with good design partners: good, paying customers who have been with us from the beginning. In the meantime, knock on wood...

The Data Scientist Show
How to effectively test and debug machine learning models, from ML engineer@Apple to startup founder - Gabriel Bayomi - the data scientist show #055

The Data Scientist Show

Play Episode Listen Later Oct 24, 2022 84:01


Gabriel Bayomi is the Co-Founder at OpenLayer, a tool that tests & debugs machine learning models. OpenLayer was in Y Combinator's 2021 batch, building tools for machine learning model testing. Previously he was a machine learning engineer at Apple working on Siri. He has a master's degree in computer science from Carnegie Mellon. He is passionate about Natural Language Processing, Machine Learning, and Computational Social Science. We talked about how to test and debug machine learning models, his experience at Apple, and career lessons. If you like the show, subscribe to the channel and give us a 5-star review. Subscribe to Daliana's newsletter on www.dalianaliu.com/ for more on data science and career. Gabriel's LinkedIn: https://www.linkedin.com/in/gbayomi Daliana's LinkedIn: https://www.linkedin.com/in/dalianaliu/ Daliana's Twitter: https://twitter.com/DalianaLiu (0:00) Intro (01:01:39) How he got into machine learning (01:06:43) His experience at Apple, Siri (01:15:55) How to validate the solution (01:19:39) Benefits of using external error analysis framework (01:21:30) How to build a model evaluation pipeline (01:28:26) Don't overfit the subset of data (01:33:19) Your validation set shouldn't be fixed (01:41:03) Become one with data (01:44:05) Three model interpretability libraries you should use (01:50:47) Common mistakes people made in model validation (01:53:33) How to create an adversarial test (01:55:43) How to check data quality (01:06:46) Transition from engineer to executive (01:10:04) Things he learnt from his favorite coworker (01:17:57) how job roles would evolve

Numerically Speaking: The Anaconda Podcast

Machine learning (ML) has reached an exciting phase of development, a phase that Vicki Boykis, Senior ML Engineer at Duo Security*, has characterized as the "steam-powered days." In this episode of Numerically Speaking: The Anaconda Podcast, Vicki talks about the state of the industry and where she sees things heading. Vicki's discussion with host Peter Wang covers: the interplay between software engineering and ML, the human element of the development lifecycle (and the lack thereof in social media), and operationalization and the rise of microservices. Resources: Click https://vickiboykis.com to visit Vicki's blog. Click https://www.amazon.com/Presentation-Self-Everyday-Life/dp/0385094027 to purchase The Presentation of Self in Everyday Life by Erving Goffman, referenced by Vicki. Click https://www.amazon.com/Broad-Band-Untold-Story-Internet/dp/0735211752 to purchase Broad Band: The Untold Story of the Women Who Made the Internet, also referenced by Vicki. Click https://jimruttshow.blubrry.net/currents-rob-malda/ to listen to the Jim Rutt/Rob Malda (Slashdot) podcast episode referenced by Peter. Check out the P2 website https://wordpress.com/p2/ You can find a human-verified transcript of this episode here - https://know.anaconda.com/rs/387-XNW-688/images/ANACON_Vicki%20Boykis_V2%20%281%29.docx.pdf. If you enjoyed today's show, please leave a 5-star review. For more information, visit anaconda.com/podcast. *At the time of the interview, Vicki Boykis was an ML Engineer working on Tumblr at Automattic.

Der Data Analytics Podcast
Skills Matrix Data Engineer - Distinguishing Junior, Engineer, Senior, Architect, ML Engineer

Der Data Analytics Podcast

Play Episode Listen Later Sep 10, 2022 13:10


Infinite Machine Learning
How to build an ML startup from scratch, product design, open source strategy, pricing mechanics | Phil Howes

Infinite Machine Learning

Play Episode Listen Later Sep 8, 2022 31:36


Phil Howes is the cofounder and Chief Scientist of Baseten, an ML application builder for data scientists. They have raised $20M from top tier investors such as Greylock, Mustafa Suleyman, DJ Patil, and Greg Brockman. He was previously the co-founder of Shape, a people analytics platform acquired by Reflektive in 2018. Prior to that, he was an ML Engineer at Gumroad. He has a PhD in Mathematics from the University of Sydney.
In this episode, we cover a range of topics including:
- Top 3 learnings from running his first startup
- What he's building now at Baseten
- How to think about design and UX for ML products
- Pricing mechanics
- Competitive landscape
- Their open source strategy
- How they got their first 10 users
- Self serve strategy for ML products
- Measuring customer success
Where to find Prateek Joshi:
Newsletter: https://prateekjoshi.substack.com
Website: http://prateekj.com
LinkedIn: https://www.linkedin.com/in/prateek-joshi-91047b19

We Decentralize Tech
Ep 57 - Rodolfo Ferro (formerly Zetalabs S.L., now Ploomber) - A Career in Machine Learning

We Decentralize Tech

Play Episode Listen Later Sep 7, 2022 49:13


*Rodolfo speaks in a personal capacity and does not represent any company or institution in any way. All the information described here is our interpretation and not necessarily what Rodolfo meant. Rodolfo Ferro (@rodo_ferro on Twitter) is a Dev Advocate at @ploomber and worked as an ML Engineer at Zetalabs S.L. He is a co-founder and coordinator at Future Lab Mx, and a Google Developer Expert in Machine Learning.

The Data Canteen
Sam Sipe: The Impact of VDSML's 2021 Scholarship | The Data Canteen #19

The Data Canteen

Play Episode Listen Later Aug 17, 2022 65:15


Sam Sipe was a naval aviator, VDSML's 2021 FourthBrain Scholarship recipient, and he's now a budding ML Engineer! In this episode, Sam and I chat about his childhood interest in rockets and flight, which led him to pursue an education in aerospace engineering and military experience as a pilot. Sam tells us how his engineering pursuits initially sparked an interest in computer programming, which eventually grew into an interest in machine learning. Soon after exiting military service, Sam won VDSML's inaugural FourthBrain Scholarship. Sam tells us about his experience with FourthBrain's bootcamp for aspiring ML engineers, as well as some preliminary information about VDSML's 2022 scholarship opportunities!   FEATURED GUESTS: Name: Sam Sipe Email: sam@sipe.io LinkedIn: https://www.linkedin.com/in/samsipe/   SUPPORT THE DATA CANTEEN (LIKE PBS, WE'RE LISTENER SUPPORTED!): Donate: https://vetsindatascience.com/support-join   EPISODE LINKS: Sam's Personal Website: https://samsipe.com/ Sam's FourthBrain Capstone Project: https://amplifygrid.com/ Sam's GitHub: https://github.com/samsipe Lex Fridman Podcast: https://lexfridman.com/podcast/ Thinking, Fast and Slow (book recommendation): https://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555 The Signal and the Noise (book recommendation): https://www.amazon.com/Signal-Noise-Many-Predictions-Fail-But/dp/159420411X/ref=tmm_hrd_swatch_0 3Blue1Brown (YouTube recommendation): https://www.youtube.com/c/3blue1brown/featured pyimagesearch (Learning Resource Recommendation): https://pyimagesearch.com/ FourthBrain (Bootcamp Recommendation): https://www.fourthbrain.ai/ DataCamp (MOOC Recommendation): https://www.datacamp.com/   PODCAST INFO: Host: Ted Hallum Website: https://vetsindatascience.com/thedatacanteen Apple Podcasts: https://podcasts.apple.com/us/podcast/the-data-canteen/id1551751086 YouTube: https://www.youtube.com/channel/UCaNx9aLFRy1h9P22hd8ZPyw Stitcher: https://www.stitcher.com/show/the-data-canteen   CONTACT THE DATA CANTEEN: Voicemail: https://www.speakpipe.com/datacanteen   VETERANS IN DATA SCIENCE AND MACHINE LEARNING: Website: https://vetsindatascience.com/ Join the Community: https://vetsindatascience.com/support-join Mentorship Program: https://vetsindatascience.com/mentorship   OUTLINE: 00:00:07 - Introduction 00:02:17 - Sam's military background and personal data science journey 00:17:02 - Sam's thoughts on the value of a background in engineering 00:19:06 - Sam's description of the FourthBrain bootcamp experience 00:23:58 - Sam's goal for doing the FourthBrain machine learning engineer bootcamp 00:25:57 - Sam's epic FourthBrain capstone project 00:35:49 - Challenges Sam encountered during his FourthBrain capstone 00:40:05 - Preliminary info about VDSML's 2022 scholarship opportunities 00:47:41 - What does Sam plan to do with all these newfound skills? 00:54:07 - Sam's current learning focus 00:56:16 - Sam's thoughts on cloud service providers 00:58:42 - Sam's favorite learning resources 01:03:26 - The best way to contact Sam 01:04:40 - Farewells

The Data Scientist Show
Using AI to detect online abuse, from physics PhD to staff ML engineer@Linkedin, persuasion at work with James Verbus - the data scientist show #035

The Data Scientist Show

Play Episode Listen Later May 10, 2022 95:55


(Timestamps below) James Verbus is a Staff Machine Learning Engineer at LinkedIn. He has a PhD in Physics from Brown University. He is the tech lead of the Anti-Scraping and Automation AI Team, working on protecting LinkedIn's members from bots and abusive scripted behavior, pioneering the use of deep learning to detect abusive automated sequences of user activity (blog post). (00:01:14) from physics to data science (00:16:37) background of online abuse detection (00:24:40) Isolation Forest algorithm (00:42:59) his day-to-day as a staff ML Engineer (00:52:57) how to persuade stakeholders (00:58:17) how to build influence at work (01:00:22) how he grew to staff engineer (01:13:48) what he learned from his mentor Follow Daliana on Twitter @DalianaLiu for more on data science and this podcast. Subscribe to the channel and leave a 5-star review if you like this episode :)

MLOps.community
Continuous Deployment of Critical ML Applications // Emmanuel Ameisen // MLOps Coffee Sessions #85

MLOps.community

Play Episode Listen Later Mar 10, 2022 44:38


MLOps Coffee Sessions #85 with Emmanuel Ameisen, Continuous Deployment of Critical ML Applications. // Abstract Finding an ML model that solves a business problem can feel like winning the lottery, but it can also be a curse. Once a model is embedded at the core of an application and used by real users, the real work begins. That's when you need to make sure that it works for everyone, that it keeps working every day, and that it can improve as time goes on. Just like building a model is all about data work, keeping a model alive and healthy is all about developing operational excellence. First, you need to monitor your model and its predictions and detect when it is not performing as expected for some types of users. Then, you'll have to devise ways to detect drift, and how quickly your models get stale. Once you know how your model is doing and can detect when it isn't performing, you have to find ways to fix the specific issues you identify. Last but definitely not least, you will now be faced with the task of deploying a new model to replace the old one, without disrupting the day of all the users that depend on it. A lot of the topics covered are active areas of work around the industry and haven't been formalized yet, but they are crucial to making sure your ML work actually delivers value. While there aren't any textbook answers, there is no shortage of lessons to learn. // Bio Emmanuel Ameisen has worked for years as a Data Scientist and ML Engineer. He is currently an ML Engineer at Stripe, where he worked on helping improve model iteration velocity. Previously, he led Insight Data Science's AI program where he oversaw more than a hundred machine learning projects. Before that, he implemented and deployed predictive analytics and machine learning solutions for Local Motion and Zipcar. Emmanuel holds graduate degrees in artificial intelligence, computer engineering, and management from three of France's top schools. // Related Links https://www.amazon.com/Building-Machine-Learning-Powered-Applications/dp/149204511X https://www.oreilly.com/library/view/building-machine-learning/9781492045106/ --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletter and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Adam on LinkedIn: https://www.linkedin.com/in/aesroka/ Connect with Emmanuel on LinkedIn: https://www.linkedin.com/in/ameisen/ Timestamps: [00:00] Introduction to Emmanuel Ameisen [03:38] Building Machine Learning Powered Applications book inspiration [05:19] The writing process [07:04] Over-engineering NLP [09:13] CV driven development: intentional or natural [11:09] Attribute to machine learning team [14:44] Shortening iteration cycle [16:41] Advice on how to tackle iteration [20:00] Failure modes [21:02] Infrastructure Iteration at Stripe [27:06] Deployment Steps tests challenges [29:34] "You develop operational excellence by exercising it." - Emmanuel Ameisen [33:22] Death of a thousand cuts: Balance of work vs productionization piece balance [36:15] Reproducibility headaches [40:04] Pipelines as software product [41:25] Get the book Building Machine Learning Powered Applications: Going from Idea to Product book by Emmanuel Ameisen! [42:04] Takeaways and wrap up
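The drift-detection step in the abstract above can start very simply, for example with a two-sample Kolmogorov-Smirnov test per numeric feature, comparing a training-time reference against recent production data. A sketch assuming scipy and pandas; the alert threshold is an arbitrary starting point:

# Flag numeric features whose live distribution has drifted from the
# training-time reference, using a two-sample KS test per column.
import pandas as pd
from scipy.stats import ks_2samp

def drifted_features(reference, live, alpha=0.01):
    flagged = []
    for col in reference.select_dtypes(include="number").columns:
        if col in live.columns:
            _, p_value = ks_2samp(reference[col].dropna(), live[col].dropna())
            if p_value < alpha:
                flagged.append(col)
    return flagged

# Example: compare last week's scored traffic against the training set.
drift = drifted_features(pd.read_csv("train.csv"), pd.read_csv("last_week.csv"))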

MLOps.community
Lessons from Studying FAANG ML Systems // Ernest Chan // MLOps Coffee Sessions #84

MLOps.community

Play Episode Listen Later Mar 2, 2022 45:34


MLOps Coffee Sessions #84 with Ernest Chan, Lessons from Studying FAANG ML Systems. // Abstract Large tech companies invest in ML platforms to accelerate their ML efforts. Become better prepared to solve your own MLOps problems by learning from their technology and design decisions. Tune in to learn about ML platform components, capabilities, and design considerations. // Bio Ernest is a Data Scientist at Duo Security. As part of the core team that built Duo's first ML-powered product, Duo Trust Monitor, he faced many (frustrating) MLOps problems first-hand. That led him to advocate for an ML infrastructure team to make it easier to deliver ML products at Duo. Prior to Duo, Ernest worked at an EdTech company, building data science products for higher-ed. Ernest is passionate about MLOps and using ML for social good. // Related Links Lessons on ML Platforms — from Netflix, DoorDash, Spotify, and more: https://ernestklchan.medium.com/lessons-on-ml-platforms-from-netflix-doordash-spotify-and-more-f455400115c7 Paper Highlights-Challenges in Deploying Machine Learning: a Survey of Case Studies https://towardsdatascience.com/paper-highlights-challenges-in-deploying-machine-learning-a-survey-of-case-studies-cafe61cfd04c Choose boring technologies Slideshare by Dan McKinley: https://www.slideshare.net/danmckinley/choose-boring-technology --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletter and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Vishnu on LinkedIn: https://www.linkedin.com/in/vrachakonda/ Connect with Ernest on LinkedIn: https://www.linkedin.com/in/ernest-chan-68245773/ Timestamps: [00:00] Introduction to Ernest Chan [01:07] Takeaways [02:58] Ernest's Lessons on ML Platforms — from Netflix, DoorDash, Spotify, and more blog post [05:55] Five components of an ML Platform [10:09] Limitations highlighted in the blog post [14:41] Level of maturity or completion observed in company efforts [16:17] Platform/Architecture admired the most [17:46] Advice to big tech companies [22:03] Process of needing an infrastructure and aiming towards having a platform [24:23] Paper Highlights-Challenges in Deploying Machine Learning: a Survey of Case Studies blog post [26:24] Takeaways from Paper Highlights-Challenges in Deploying Machine Learning [30:33] Prioritization [33:04] Delta Lake [35:27] Model rollouts and shadow mode [39:23] Are you an ML Engineer or a Data Scientist? [40:15] Simple route platform vs flexible platform trade-offs [41:08] Opinionated and simple vs less opinionated and flexible [43:22] Choose boring technologies Slideshare by Dan McKinley [44:36] Wrap up

Data Futurology - Data Science, Machine Learning and Artificial Intelligence From Industry Leaders
#180 The rise of engineers in the data analytics space with Felipe Flores, host of Data Futurology

Data Futurology - Data Science, Machine Learning and Artificial Intelligence From Industry Leaders

Play Episode Listen Later Feb 23, 2022 15:50


As the data analytics space matures, demand from the business for the products and services being generated has been increasing. There is a thirst and appetite for internal and external data and analytics outputs. However, we need visible platforms and the ability to orchestrate complex data pipelines to enable this growth and underpin the mechanism. Cue the emergence of the engineer. We started with the data engineer and have welcomed more specialised roles including the Analytics Engineer, ML Engineer and AI Engineer. Heavily leveraging from the IT side, these roles have evolved and are becoming integral to the business. How exactly is the engineer in the data analytics space evolving? How is this enabling reliable, accessible, performant systems that are monitored appropriately? Can we look at team structures? Are there ways we can implement or better use the modern data stack? As an industry we're maturing, we're further specialising and we're reacting to the demands of the business. We're seizing the opportunity to create more value for our organisations! That's what Felipe will discuss in this week's podcast episode. Enjoy the show. Thanks to our sponsor Talent Insights Group! Read the full podcast summary here. --- Send in a voice message: https://anchor.fm/datafuturology/message

The Data Scientist Show
From Apple store specialist to ML engineer at Apple, build a portfolio through open source projects, Julia Language, with Logan Kilpatrick - The Data Scientist Show #024

The Data Scientist Show

Play Episode Listen Later Feb 3, 2022 102:54


Logan Kilpatrick is a machine learning engineer at Apple and the Developer Community Advocate for Julia. He is a teaching fellow at Harvard Extension School, and is currently pursuing a Master of Science in Law. Today we'll talk about how he became a machine learning engineer, the internship he did at NASA, why you should care about open source communities, Julia, and what the future of machine learning looks like. Make sure you stay till the end. Logan's Twitter, Linkedin. If you like the show, give it a 5-star review and subscribe to the channel. Follow Daliana on Twitter for more updates on data science, career, and this podcast.

Data Science Leaders
Supply Chain Solutions & the Role of the ML Engineer (Karin Chu, VP Data Science & Digital Analytics, Peapod Digital Labs)

Data Science Leaders

Play Episode Listen Later Jan 11, 2022 38:04 Transcription Available


When highly disruptive events like the COVID-19 pandemic occur, data science teams may have to throw historical data out the window. Models trained on what happened in the past simply don't work in a radically different present. In this episode, Karin Chu, VP Data Science and Digital Analytics at Peapod Digital Labs, discusses how her team is tackling that challenge head on, particularly as the global supply chain crisis impacts sectors from grocery to apparel. Plus, she explains why two things are so vital to the success of a data science team: ML engineers and a culture of communication. We discuss: How data science teams are navigating the supply chain crisis The vital role of an ML engineer Tips for communicating about data science in business Tune in on Apple Podcasts, Spotify, our website, or wherever you listen to podcasts. Can't see the links above? Just visit domino.buzz/podcast for helpful links from each episode.

Alignment Newsletter Podcast
Alignment Newsletter #169: Collaborating with humans without human data

Alignment Newsletter Podcast

Play Episode Listen Later Nov 24, 2021 15:08


Recorded by Robert Miles: http://robertskmiles.com More information about the newsletter here: https://rohinshah.com/alignment-newsletter/ YouTube Channel: https://www.youtube.com/channel/UCfGGFXwKpr-TJ5HfxEFaFCg   HIGHLIGHTS Collaborating with Humans without Human Data (DJ Strouse et al) (summarized by Rohin): We've previously seen that if you want to collaborate with humans in the video game Overcooked, it helps to train a deep RL agent against a human model (AN #70), so that the agent “expects” to be playing against humans (rather than e.g. copies of itself, as in self-play). We might call this a “human-aware” model. However, since a human-aware model must be trained against a model that imitates human gameplay, we need to collect human gameplay data for training. Could we instead train an agent that is robust enough to play with lots of different agents, including humans as a special case? This paper shows that this can be done with Fictitious Co-Play (FCP), in which we train our final agent against a population of self-play agents and their past checkpoints taken throughout training. Such agents get significantly higher rewards when collaborating with humans in Overcooked (relative to the human-aware approach in the previously linked paper). In their ablations, the authors find that it is particularly important to include past checkpoints in the population against which you train. They also test whether it helps to have the self-play agents use a variety of architectures, and find that it mostly does not make a difference (as long as you are using past checkpoints as well). Read more: Related paper: Maximum Entropy Population Based Training for Zero-Shot Human-AI Coordination Rohin's opinion: You could imagine two different philosophies on how to build AI systems -- the first option is to train them on the actual task of interest (for Overcooked, training agents to play against humans or human models), while the second option is to train a more robust agent on some more general task that hopefully includes the actual task within it (the approach in this paper). Besides Overcooked, another example would be supervised learning on some natural language task (the first philosophy), as compared to pretraining on the Internet GPT-style and then prompting the model to solve your task of interest (the second philosophy). In some sense the quest for a single unified AGI system is itself a bet on the second philosophy -- first you build your AGI that can do all tasks, and then you point it at the specific task you want to do now. Historically, I think AI has focused primarily on the first philosophy, but recent years have shown the power of the second philosophy. However, I don't think the question is settled yet: one issue with the second philosophy is that it is often difficult to fully “aim” your system at the true task of interest, and as a result it doesn't perform as well as it “could have”. In Overcooked, the FCP agents will not learn specific quirks of human gameplay that could be exploited to improve efficiency (which the human-aware agent could do, at least in theory). In natural language, even if you prompt GPT-3 appropriately, there's still some chance it ends up rambling about something else entirely, or neglects to mention some information that it “knows” but that a human on the Internet would not have said. (See also this post (AN #141).)
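As a concrete aside, the FCP recipe summarized above is simple enough to sketch in a few lines. The following is a hedged toy illustration, not the paper's code: a small coordination game stands in for Overcooked, self-play agents lock in arbitrary conventions, checkpoints from throughout their training are kept in the partner pool, and the final agent is trained as a best response to the whole pool. All names and parameters here are invented for illustration.

import random
import numpy as np

# Toy stand-in for Overcooked: a K-action coordination game where both
# players score 1 when they pick the same "convention" and 0 otherwise.
K = 3

class ToyAgent:
    def __init__(self):
        self.prefs = np.random.rand(K)          # unnormalized action preferences

    def act(self):
        return int(np.argmax(self.prefs))

    def snapshot(self):
        frozen = ToyAgent()
        frozen.prefs = self.prefs.copy()
        return frozen

def train_self_play(steps=60, n_checkpoints=3):
    # Self-play locks in an arbitrary convention; we keep past checkpoints.
    agent, checkpoints = ToyAgent(), []
    for t in range(steps):
        agent.prefs[agent.act()] += 0.5         # reinforce whatever it already plays
        if t % (steps // n_checkpoints) == 0:
            checkpoints.append(agent.snapshot())  # partially trained partners matter
    checkpoints.append(agent.snapshot())
    return checkpoints

def train_fcp(pool, steps=2000):
    # Best response to the whole population, via incremental-mean action values.
    q, n = np.zeros(K), np.zeros(K)
    for _ in range(steps):
        partner = random.choice(pool)
        a = random.randrange(K)                 # uniform exploration for simplicity
        r = 1.0 if a == partner.act() else 0.0
        n[a] += 1
        q[a] += (r - q[a]) / n[a]
    return q

pool = [cp for _ in range(8) for cp in train_self_play()]
print("FCP value of each convention:", train_fcp(pool))

The paper's key ablation corresponds to the checkpoints.append calls inside the training loop: removing them leaves only fully converged partners in the pool.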
I should note that you can also have a hybrid approach, where you start by training a large model with the second philosophy, and then you finetune it on your task of interest as in the first philosophy, gaining the benefits of both. I'm generally interested in which approach will build more useful agents, as this seems quite relevant to forecasting the future of AI (which in turn affects lots of things including AI alignment plans).   TECHNICAL AI ALIGNMENT LEARNING HUMAN INTENT Inverse Decision Modeling: Learning Interpretable Representations of Behavior (Daniel Jarrett, Alihan Hüyük et al) (summarized by Rohin): There's lots of work on learning preferences from demonstrations, which varies in how much structure it assumes about the demonstrator: for example, we might consider them to be Boltzmann rational (AN #12) or risk sensitive, or we could try to learn their biases (AN #59). This paper proposes a framework to encompass all of these choices: the core idea is to model the demonstrator as choosing actions according to a planner; some parameters of this planner are fixed in advance to provide an assumption on the structure of the planner, while others are learned from data. This also allows them to separate beliefs, decision-making, and rewards, so that different structures can be imposed on each of them individually. The paper provides a mathematical treatment of both the forward problem (how to compute actions in the planner given the reward, think of algorithms like value iteration) and the backward problem (how to compute the reward given demonstrations, the typical inverse reinforcement learning setting). They demonstrate the framework on a medical dataset, where they introduce a planner with parameters for flexibility of decision-making, optimism of beliefs, and adaptivity of beliefs. In this case they specify the desired reward function and then run backward inference to conclude that, with respect to this reward function, clinicians appear to be significantly less optimistic when diagnosing dementia in female and elderly patients. Rohin's opinion: One thing to note about this paper is that it is an incredible work of scholarship; it fluently cites research across a variety of disciplines including AI safety, and provides a useful organizing framework for many such papers. If you need to do a literature review on inverse reinforcement learning, this paper is a good place to start. Human irrationality: both bad and good for reward inference (Lawrence Chan et al) (summarized by Rohin): In the last summary, we saw a framework for inverse reinforcement learning with suboptimal demonstrators. This paper instead investigates the qualitative effects of performing inverse reinforcement learning with a suboptimal demonstrator. The authors modify different parts of the Bellman equation in order to create a suite of possible suboptimal demonstrators to study. They run experiments with exact inference on random MDPs and FrozenLake, and with approximate inference on a simple autonomous driving environment, and conclude: 1. Irrationalities can be helpful for reward inference, that is, if you infer a reward from demonstrations by an irrational demonstrator (where you know the irrationality), you often learn more about the reward than if you inferred a reward from optimal demonstrations (where you know they are optimal).
Conceptually, this happens because optimal demonstrations only tell you about what the best behavior is, whereas most kinds of irrationality can also tell you about preferences between suboptimal behaviors. 2. If you fail to model irrationality, your performance can be very bad, that is, if you infer a reward from demonstrations by an irrational demonstrator, but you assume that the demonstrator was Boltzmann rational, you can perform quite badly. Rohin's opinion: One way this paper differs from my intuitions is that it finds that assuming Boltzmann rationality performs very poorly if the demonstrator is in fact systematically suboptimal. I would have instead guessed that Boltzmann rationality would do okay -- not as well as in the case where there is no misspecification, but only a little worse than that. (That's what I found in my paper (AN #59), and it makes intuitive sense to me.) Some hypotheses for what's going on, which the lead author agrees are at least part of the story: 1. When assuming Boltzmann rationality, you infer a distribution over reward functions that is “close” to the correct one in terms of incentivizing the right behavior, but differs in rewards assigned to suboptimal behavior. In this case, you might get a very bad log loss (the metric used in this paper), but still have a reasonable policy that is decent at acquiring true reward (the metric used in my paper). 2. The environments we're using may differ in some important way (for example, in the environment in my paper, it is primarily important to identify the goal, which might be much easier to do than inferring the right behavior or reward in the autonomous driving environment used in this paper). FORECASTING Forecasting progress in language models (Matthew Barnett) (summarized by Sudhanshu): This post aims to forecast when a "human-level language model" may be created. To build up to this, the author swiftly covers basic concepts from information theory and natural language processing such as entropy, N-gram models, modern LMs, and perplexity. Data on the perplexity achieved by recent state-of-the-art models is collected and used to estimate - by linear regression - when we can expect to see future models score below certain entropy levels, approaching the hypothesised entropy of the English language. These predictions range across the next 15 years, depending on which dataset, method, and entropy level is being solved for; there's an attached Python notebook with these details for curious readers to investigate further. Preemptively disjunctive, the author concludes "either current trends will break down soon, or human-level language models will likely arrive in the next decade or two." Sudhanshu's opinion: This quick read provides a natural, accessible analysis stemming from recent results, while staying self-aware (and informing readers) of potential improvements. The comments section too includes some interesting debates, e.g. about the Goodhart-ability of the perplexity metric. I personally felt these estimates were broadly in line with my own intuitions. I would go so far as to say that with the confluence of improved generation capabilities across text, speech/audio, and video, as well as multimodal consistency and integration, virtually any kind of content we see ~10 years from now will be algorithmically generated and indistinguishable from the work of human professionals.
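Mechanically, the extrapolation described above boils down to regressing reported cross-entropy against time and solving for when the trend line crosses a hypothesized human-level entropy. Here is a hedged toy version with invented placeholder numbers -- the real data, regressions, and entropy estimates are in the post and its attached notebook:

import numpy as np

# Illustrative placeholder data only -- NOT the post's actual numbers.
# Each pair is (year, state-of-the-art cross-entropy in nats per token).
years = np.array([2017.0, 2018.0, 2019.0, 2020.0, 2021.0])
losses = np.array([4.2, 3.8, 3.3, 2.9, 2.6])

# Fit loss ~= m * year + b by least squares, as in the post's linear regression.
m, b = np.polyfit(years, losses, 1)

# Solve for when the trend crosses a hypothesized human-level entropy of
# English (again a placeholder value, not the post's estimate).
human_level = 1.2
crossing = (human_level - b) / m
print(f"Trend line reaches {human_level} nats/token around {crossing:.0f}")

The sketch only shows the mechanics; conclusions should come from the post's actual data and its sensitivity analysis across datasets and entropy levels.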
Rohin's opinion: I would generally adopt forecasts produced by this sort of method as my own, perhaps making them a bit longer as I expect the quickly growing compute trend to slow down. Note however that this is a forecast for human-level language models, not transformative AI; I would expect these to be quite different and would predict that transformative AI comes significantly later. MISCELLANEOUS (ALIGNMENT) Rohin Shah on the State of AGI Safety Research in 2021 (Lucas Perry and Rohin Shah) (summarized by Rohin): As in previous years (AN #54), on this FLI podcast I talk about the state of the field. Relative to previous years, this podcast is a bit more introductory, and focuses a bit more on what I find interesting rather than what the field as a whole would consider interesting. Read more: Transcript   NEAR-TERM CONCERNS RECOMMENDER SYSTEMS User Tampering in Reinforcement Learning Recommender Systems (Charles Evans et al) (summarized by Zach): Large-scale recommender systems have emerged as a way to filter through large pools of content to identify and recommend content to users. However, these advances have led to social and ethical concerns over the use of recommender systems in applications. This paper focuses on the potential for social manipulability and polarization arising from the use of RL-based recommender systems. In particular, the authors present evidence that such recommender systems have an instrumental goal to engage in user tampering by polarizing users early on in an attempt to make later predictions easier. To formalize the problem, the authors introduce a causal model. Essentially, they note that predicting user preferences requires an exogenous variable, a non-observable variable that models click-through rates. They then introduce a notion of instrumental goal that models the general behavior of RL-based algorithms over a set of potential tasks. The authors argue that such algorithms will have an instrumental goal to influence the exogenous/preference variables whenever user opinions are malleable. This ultimately introduces a risk of preference manipulation. The authors' hypothesis is tested using a simple media recommendation problem. They model the exogenous variable as either leftist, centrist, or right-wing. User preferences are malleable in the sense that a user shown content from an opposing side will polarize their initial preferences. In experiments, the authors show that a standard Q-learning algorithm will learn to tamper with user preferences, which increases polarization in both leftist and right-wing populations. Moreover, even though the agent makes use of tampering, it fails to outperform a crude baseline policy that avoids tampering. Zach's opinion: This article is interesting because it formalizes and experimentally demonstrates an intuitive concern many have regarding recommender systems. I also found the formalization of instrumental goals to be of independent interest. The most surprising result was that the agents that exploit tampering are not particularly more effective than policies that avoid tampering. This suggests that the instrumental incentive is not really pointing at what is actually optimal, which I found to be an illuminating distinction. (A toy simulation of this tampering setup is sketched at the end of this entry.)   NEWS OpenAI hiring Software Engineer, Alignment (summarized by Rohin): Exactly what it sounds like: OpenAI is hiring a software engineer to work with the Alignment team.
BERI hiring ML Software Engineer (Sawyer Bernath) (summarized by Rohin): BERI is hiring a remote ML Engineer as part of their collaboration with the Autonomous Learning Lab at UMass Amherst. The goal is to create a software library that enables easy deployment of the ALL's Seldonian algorithm framework for safe and aligned AI. AI Safety Needs Great Engineers (Andy Jones) (summarized by Rohin): If the previous two roles weren't enough to convince you, this post explicitly argues that a lot of AI safety work is bottlenecked on good engineers, and encourages people to apply to such roles. AI Safety Camp Virtual 2022 (summarized by Rohin): Applications are open for this remote research program, where people from various disciplines come together to research an open problem under the mentorship of an established AI-alignment researcher. The deadline to apply is December 1st. Political Economy of Reinforcement Learning schedule (summarized by Rohin): The date for the PERLS workshop (AN #159) at NeurIPS has been set for December 14, and the schedule and speaker list are now available on the website. FEEDBACK I'm always happy to hear feedback; you can send it to me, Rohin Shah, by replying to this email. PODCAST An audio version of this newsletter is available, recorded by Robert Miles (http://robertskmiles.com).
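Returning to the user-tampering summary above: the qualitative effect is easy to reproduce in a stripped-down simulation. What follows is a hedged toy reconstruction, not the paper's actual environment or parameters -- a user whose preference drifts toward whatever is shown, centrists whose clicks are hard to predict, and a tabular Q-learner rewarded per click:

import random

random.seed(0)
ACTIONS = ["left", "center", "right"]

def click_prob(pref, shown):
    if pref == "center":
        return 0.5                         # centrist clicks are unpredictable
    return 0.9 if shown == pref else 0.1   # polarized users are predictable

def step(pref, shown):
    # One interaction: reward is a click; preferences are malleable, so
    # shown content occasionally pulls the user toward it.
    reward = 1.0 if random.random() < click_prob(pref, shown) else 0.0
    if random.random() < 0.1:
        pref = shown
    return pref, reward

# Tabular Q-learning from observed preference -> recommended item.
q = {p: {a: 0.0 for a in ACTIONS} for p in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1

for _ in range(5000):
    pref = "center"                        # each episode starts with a centrist user
    for _ in range(20):
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(q[pref], key=q[pref].get)
        new_pref, r = step(pref, a)
        q[pref][a] += alpha * (r + gamma * max(q[new_pref].values()) - q[pref][a])
        pref = new_pref

# Greedy policy per user state; with these (invented) parameters the learner
# tends to push centrists toward a pole, where clicks are easier to predict.
print({p: max(q[p], key=q[p].get) for p in ACTIONS})

With these made-up numbers, polarized users yield higher long-run reward than centrists, so the learned policy tends to recommend polarizing content in the "center" state -- the tampering incentive the paper formalizes.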

Ken's Nearest Neighbors
Escaping a Cult to Pursue AI (Kurtis Pykes) - KNN Ep. 75

Ken's Nearest Neighbors

Play Episode Listen Later Nov 24, 2021 84:14


Today I had the pleasure of interviewing Kurtis Pykes. Kurtis is a self-taught ML Engineer from London, England, but describes himself as a global citizen. He currently works as a freelancer for various companies and also maintains his own Medium blog. In the past year, he was voted by his peers on LinkedIn as having some of the best Medium blog content. In his spare time, he likes to read, exercise, and play chess (even though he's pants). Kurtis has a fascinating story. We talk about how tragedy derailed his professional football career, his scary experience as part of a cult, and how he found purpose and personal growth through the study of machine learning and writing. I loved chatting with Kurtis, and I'm excited to share our conversation with you all.

AI Show  - Channel 9
AI Show | High Level MLOps with Microsoft Data Scientists | Episode 33

AI Show - Channel 9

Play Episode Listen Later Oct 8, 2021 33:59


On this episode of the AI Show, we're talking about MLOps. Seth welcomes Microsoft Data Scientist Spyros Marketos, ML Engineer Davide Fornelli, and Data Engineer Samarendra Panda. Together they make up an AI Taskforce, and they'll give us a high-level intro to MLOps and share some of the surprises and lessons they've learned along the way! Jump to: [00:17] AI Show Intro [00:34] Welcome and Introductions [01:41] Use cases from the AI Taskforce [02:47] Commonalities across projects [03:50] Common challenges - from the Data Engineer perspective [06:47] Common challenges - from the ML Engineer perspective [08:46] Common challenges - from the Data Science perspective [10:48] What does success in MLOps look like? [12:30] Surprising challenges working with customers and how to avoid them [19:27] Review - what is MLOps [19:45] MLOps in Delivery mission [21:57] MLOps principles [27:52] Tips from the pros. Learn more: Machine Learning for Data Scientists https://aka.ms/AIShow/MLforDataScientists Packt: Principles of Data Science https://aka.ms/AIShow/DataSciencePackt Zero to Hero Machine Learning on Azure https://aka.ms/ZerotoHero/MLonAzure Zero to Hero Azure AI https://aka.ms/ZerotoHero/AzureAI Create a free account (Azure) https://aka.ms/aishow-seth-azurefree Follow Seth https://twitter.com/sethjuarez Follow Spyros https://www.linkedin.com/in/smarketos/ Follow Davide https://www.linkedin.com/in/davidefornelli/ Follow Sam https://www.linkedin.com/in/samarendra-panda/ Don't miss new episodes; subscribe to the AI Show https://aka.ms/AIShowsubscribe AI Show Playlist https://aka.ms/AIShowPlaylist Join us every other Friday for an AI Show livestream on Learn TV and YouTube https://aka.ms/LearnTV - https://aka.ms/AIShowLive

The MLOps Podcast

In this episode, I'm speaking with Ran Romano from Qwak.ai. Ran built the ML platform at Wix, and we discuss the various data roles, when organizations should focus on ML infrastructure, solving the hard problems of feature stores, and one approach to building an end-to-end ML platform. Join our Discord community: https://discord.gg/tEYvqxwhah --- Timestamps: 00:00 Podcast intro 01:00 Guest intro 01:30 Getting into the world of ML and ML Engineering 02:25 The line between Data Engineer, ML Engineer, and Data Scientist 03:50 The future of data roles – what are the trends? 07:21 The most exciting part about taking ML models into production 09:45 Jupyter notebooks in production (again??) 10:41 Signs that notebook productionization might not work 11:42 Building ML-focused CI/CD systems 15:32 Early days of building out the Wix ML platform 16:22 Signs that you might need to focus on ML infrastructure in your organization, and how to convince other stakeholders 19:21 What part of the platform that you built are you most proud of? 23:51 Defining a feature store and the training/serving skew 27:24 Onboarding data scientists to using a feature store 33:49 When is it too early to build an ML platform? 35:33 Open source components – What parts of your platform did you choose not to build yourself? 40:16 Qwak.ai – What are you working on currently? 41:07 How do you define an "end-to-end" platform in the case of Qwak 44:25 End-to-end vs. Integrated – Advantages and disadvantages --- Relevant Links: - Qwak.ai: https://www.qwak.ai - Wix ML Platform presentation by Ran: https://www.youtube.com/watch?v=E8839ENL-WY - https://www.linkedin.com/company/dagshub - https://www.linkedin.com/company/qwak-ai/ - https://twitter.com/TheRealDAGsHub - https://twitter.com/DeanPlbn - https://twitter.com/ranvromano

Abnormal Engineering Stories
Future of ML Platform w/ Jeshua Bratman & Nico Koumchatzky

Abnormal Engineering Stories

Play Episode Listen Later Jul 7, 2021 35:13


Abnormal Engineering Stories explores what it's like leading engineering teams and systems, featuring tech industry leaders with real-world, hands-on operating experience. Hosted by Jeshua Bratman, Head of Machine Learning at Abnormal Security. In our second episode of Abnormal Engineering Stories, Jeshua Bratman and Nico Koumchatzky discuss the future of ML platforms, the role of an ML Engineer, and the ML challenges faced at Abnormal and Nvidia. Jeshua is Head of Machine Learning at Abnormal Security, and Nico is the Senior Director of AI Infrastructure at Nvidia; before that, he ran Twitter's ML platform team, Twitter Cortex. Abnormal Engineering Stories is a product of Abnormal Security, where we protect some of the world's largest corporations from cybercrime.

Machine Learning Podcast - Jay Shah
Shreya Shankar, ML Engineer @Viaduct on Applied ML research & more

Machine Learning Podcast - Jay Shah

Play Episode Listen Later Jun 26, 2021 51:36


Shreya is currently a graduate student at Stanford and also works as an ML engineer at Viaduct.ai. She has previously interned at Google Brain and at Facebook. She talks about her experience as an applied ML engineer and making ML models work in the real world. Shreya's homepage: https://www.shreya-shankar.com About the host: Jay is a Ph.D. student at Arizona State University, doing research on building interpretable AI models for medical diagnosis. Jay Shah: https://www.linkedin.com/in/shahjay22/ You can reach out at https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any institution or its affiliates of such video content.***

Passion2Knowledge
Kurtis Pykes, Self-Taught ML Engineer & Towards Data Science Top Contributor

Passion2Knowledge

Play Episode Listen Later Jun 14, 2021 52:55


The first episode of Data & Impact, a Passion2Knowledge Experience, features Kurtis Pykes! Kurtis is a self-taught ML engineer and data scientist with a strong interest in freelance data analytics and blogging. Kurtis has been recognized as a top contributor on Towards Data Science on Medium, with publications exceeding 100,000 readers per month. Join us as we dive into his background and discuss freelance data science, ML, data engineering vs. data science, and much more!

THE ONE'S CHANGING THE WORLD -PODCAST
AI FOR GOOD/AI FOR ALL - SEAN MC GREGOR - TECH LEAD IBM WATSON XPRIZE, ML ENGINEER SYNTIANT & PAI

THE ONE'S CHANGING THE WORLD -PODCAST

Play Episode Listen Later May 31, 2021 58:39


#ibmwatson #xprize #syntiant #aiforgood #aiforall #pai #reinforcementlearning #artificialintelligence Sean McGregor is a #machinelearning engineer at Syntiant working on building energy-efficient neural processors, developing the Partnership on AI's Incident Database, and working with the XPRIZE Foundation to structure the IBM Watson AI XPRIZE, a $5 million contest for solving Grand Challenges with artificial intelligence. His technical work spans neural accelerators for energy-efficient inference, deep learning for speech and heliophysics, and reinforcement learning for wildfire suppression policy. Outside his paid work, Sean organizes a series of workshops at major academic AI conferences on the topic of "AI for Good". Click below for details on Sean's research: https://seanbmcgregor.com/pages/vitae.html Social media pages: http://www.linkedin.com/in/seanbmcgregor https://twitter.com/#!/seanmcgregor https://www.facebook.com/SeanBMcGregor Watch our most-viewed videos: 1-India's 1st Quantum Computer - DR R VIJAYARAGHAVAN - PROF & PRINCIPAL INVESTIGATOR AT TIFR - https://youtu.be/ldKFbHb8nvQ 2-Breakthrough in Age Reversal - DR HAROLD KATCHER - CTO NUGENICS RESEARCH - https://youtu.be/214jry8z3d4 3-Head of Artificial Intelligence-JIO - Shailesh Kumar - https://youtu.be/q2yR14rkmZQ 4-STARTUP FROM INDIA AIMING FOR LEVEL 5 AUTONOMY - SANJEEV SHARMA CEO SWAAYATT ROBOTS - https://youtu.be/Wg7SqmIsSew 5-TRANSHUMANISM & THE FUTURE OF MANKIND - NATASHA VITA-MORE: HUMANITY PLUS - https://youtu.be/OUIJawwR4PY 6-MAN BEHIND GOOGLE QUANTUM SUPREMACY - JOHN MARTINIS - https://youtu.be/Y6ZaeNlVRsE 7-1000 KM RANGE ELECTRIC VEHICLES WITH ALUMINUM AIR FUEL BATTERIES - AKSHAY SINGHAL - https://youtu.be/cUp68Zt6yTI 8-Garima Bharadwaj, Chief Strategist IoT & AI at Enlite Research - https://youtu.be/efu3zIhRxEY 9-BANKING 4.0 - BRETT KING, FUTURIST, BESTSELLING AUTHOR & FOUNDER OF MOVEN - https://youtu.be/2bxHAai0UG0 10-E-VTOL & HYPERLOOP - FUTURE OF INDIA'S MOBILITY - SATYANARAYANA CHAKRAVARTHY - https://youtu.be/ZiK0EAelFYY 11-NON-INVASIVE BRAIN COMPUTER INTERFACE - KRISHNAN THYAGARAJAN - https://youtu.be/fFsGkyW3xc4 12-SATELLITES: THE NEW MULTI-BILLION DOLLAR SPACE RACE - MAHESH MURTHY - https://youtu.be/UarOYOLUMGk Connect & follow us at: https://in.linkedin.com/in/eddieavil https://in.linkedin.com/company/change-transform-india https://www.facebook.com/changetransformindia/ https://twitter.com/intothechange https://www.instagram.com/changetransformindia/

Pi Tech
How do you become a Machine Learning Engineer? What background do you need, and what hardware should you buy?

Pi Tech

Play Episode Listen Later May 12, 2021 30:09


We invited Mikhail Girnyak, a Machine Learning Engineer from our company Postindustria, to talk about his path into machine learning, where to start when entering the profession, and the problems of artificial intelligence. 00:47 - What prompted him to take up ML 03:28 - Useful applications of deepfakes 05:20 - Delivering his first AR project and the motivation to keep learning 07:11 - What background you need to master ML 09:13 - Choosing a specialization 12:29 - An example of a good practical ML course 19:12 - Can anyone become an ML Engineer? 21:05 - What hardware do you need for ML? 26:23 - The future of ML technologies: will machines replace people? 27:56 - What's wrong with ML?

Ken's Nearest Neighbors
Spilling the Tea on Data Science (Sanyam Bhutani) - KNN Ep. 19

Ken's Nearest Neighbors

Play Episode Listen Later Nov 8, 2020 42:41


Sanyam Bhutani is an ML Engineer and AI Content Creator at H2O.ai, and a Machine Learning Practitioner recognized by Inc42 and The Economic Times. Sanyam is a 1x Master and 2x Expert on Kaggle, ranked in the global top 1%, as well as an active AI blogger on Medium and Hackernoon, with over 1 million views overall. Sanyam is also the host of the Chai Time Data Science Podcast, where he interviews top practitioners, researchers, and Kagglers. The podcast has been streamed for 40k hours across 110+ countries, 120k+ times in total. In this interview I speak with Sanyam about how he was able to build his Chai Time Data Science podcast. I also get his insight into Kaggle, how to reach out to influential people, and his plans for the future.

Tentang Data
S02E02 Careers as a Software Engineer, ML Engineer, and Data Scientist

Tentang Data

Play Episode Listen Later Nov 6, 2020 30:05


Ever thought about switching careers from software engineer to ML engineer or data scientist? What are the actual differences between those jobs? I talk with Muhammad Reza Irvanda (@muzavan) about his experience filling all three roles at three different companies. Reza also explains why he ultimately returned to the software engineering world over the last two years or so.

So you want to be a data scientist?
#6 - Becoming an ML engineer and work life at Twitter with Jigyasa Grover

So you want to be a data scientist?

Play Episode Listen Later May 11, 2020 22:07


In this week's episode, Jigyasa Grover of Twitter joins me to talk about her career and machine learning engineering. She shares her journey of becoming a machine learning engineer and the steps she took throughout the years. We dive deeper into the types of projects she works on, how she works with her team at Twitter and what she loves about her job. All this and more is on this week's episode. Don't miss it! --- Hands-on Data Science: Complete your first portfolio project course - https://www.soyouwanttobeadatascientist.com/hods Data Science Kick-starter mini-course - https://www.soyouwanttobeadatascientist.com/courses/data-science-kick-starter-mini-course

Adventures in Machine Learning
How to Transition from Software Engineer to ML Engineer - ML 111

Adventures in Machine Learning

Play Episode Listen Later Jan 1, 1970 55:18


Today we speak with a software engineer who is interested in becoming an ML engineer. Expect to learn about ML roles that are most attainable based on a strong software engineering skill set. We also cover some tangible strategies you can leverage to make the transition.
On YouTube: How to Transition from Software Engineer to ML Engineer - ML 111
Sponsors:
Chuck's Resume Template
Developer Book Club starting
Become a Top 1% Dev with a Top End Devs Membership
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy