Machine Learning Podcast - Jay Shah


Conversations with young engineers and researchers working in Machine Learning and AI on how to get started and break through in the field.

Jay Shah


    • Apr 15, 2025 LATEST EPISODE
    • infrequent NEW EPISODES
    • 46m AVG DURATION
    • 94 EPISODES



    Latest episodes from Machine Learning Podcast - Jay Shah

    Why Open-Source AI Is the Future and Needs Its 'Linux Moment' | Manos Koukoumidis

    Apr 15, 2025 • 79:38


    Manos is the CEO of Oumi, a platform focused on open-sourcing the entire lifecycle of foundation and large models. Prior to that, he was at Google leading efforts on developing large language models within Cloud services. He also has experience working at Facebook on AR/VR projects and at Microsoft's cloud division developing machine learning-based services. Manos received his PhD in computer engineering from Princeton University and has extensive hands-on experience building and deploying models at large scale. Time stamps of the conversation 00:00:00 Highlights 00:01:20 Introduction 00:02:08 From Google to Oumi 00:08:58 Why big tech models cannot beat ChatGPT 00:12:00 Future of open-source AI 00:18:00 Performance gap between open-source and closed AI models 00:23:58 Parts of the AI stack that must remain open for innovation 00:27:45 Risks of open-sourcing AI 00:34:38 Current limitations of Large Language Models 00:39:15 DeepSeek moment 00:44:38 Maintaining AI leadership - USA vs. China 00:48:16 Oumi 00:55:38 Open-sourcing a model with AGI tomorrow, or wait for safeguards? 00:58:12 Milestones in open-source AI 01:02:50 Nurturing a developer community 01:06:12 Ongoing research projects 01:09:50 Tips for AI enthusiasts 01:13:00 Competition in AI nowadays More about Manos: https://www.linkedin.com/in/koukoumidis/ And Oumi: https://github.com/oumi-ai/oumi About the Host: Jay is a PhD student at Arizona State University working on improving AI for medical diagnosis and prognosis. Linkedin: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Homepage: https://jaygshah.github.io/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    Differential Privacy, Creativity & future of AI research in the LLM era | Niloofar Mireshghallah

    Feb 4, 2025 • 89:23


    Niloofar is a Postdoctoral researcher at the University of Washington with research interests in building privacy-preserving AI systems and studying the societal implications of machine learning models. She received her PhD in Computer Science from UC San Diego in 2023 and has received multiple awards and honors for her research contributions. Time stamps of the conversation 00:00:00 Highlights 00:01:35 Introduction 00:02:56 Entry point in AI 00:06:50 Differential privacy in AI systems 00:11:08 Privacy leaks in large language models 00:15:30 Dangers of training AI on public data on the internet 00:23:28 How auto-regressive training makes things worse 00:30:46 Impact of Synthetic data for fine-tuning 00:37:38 Most critical stage in AI pipeline to combat data leaks 00:44:20 Contextual Integrity 00:47:10 Are LLMs creative? 00:55:24 Under vs. Overpromises of LLMs 01:01:40 Publish vs. perish culture in AI research recently 01:07:50 Role of academia in LLM research 01:11:35 Choosing academia vs. industry 01:17:34 Mental Health and overarching More about Niloofar: https://homes.cs.washington.edu/~niloofar/ And references to some of the papers discussed: https://arxiv.org/pdf/2310.17884 https://arxiv.org/pdf/2410.17566 https://arxiv.org/abs/2202.05520 About the Host: Jay is a PhD student at Arizona State University working on improving AI for medical diagnosis and prognosis. Linkedin: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Homepage: http://jayshah.me/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    Reasoning in LLMs, role of academia and keeping up with AI research | Dr. Vivek Gupta

    Dec 24, 2024 • 108:32


    Vivek is an Assistant Professor at Arizona State University. Prior to that, he was at the University of Pennsylvania as a postdoctoral researcher and completed his PhD in CS from the University of Utah. His PhD research focused on inference and reasoning for semi-structured data, and his current research spans reasoning in large language models (LLMs), multimodal learning, and instilling models with common sense for question answering. He has also received multiple awards and fellowships for his research work over the years. Conversation time stamps: 00:01:40 Introduction 00:02:52 Background in AI research 00:05:00 Finding your niche 00:12:42 Traditional AI models vs. LLMs in semi-structured data 00:18:00 Why is reasoning hard in LLMs? 00:27:10 Will scaling AI models hit a plateau? 00:31:02 Has ChatGPT pushed boundaries of AI research 00:38:28 Role of Academia in AI research in the era of LLMs 00:56:35 Keeping up with research: filtering noise vs. signal 01:09:14 Getting started in AI in 2024? 01:20:25 Maintaining mental health in research (especially AI) 01:34:18 Building good habits 01:37:22 Do you need a PhD to contribute to AI? 01:45:42 Wrap up More about Vivek: https://vgupta123.github.io/ ASU lab website: https://coral-lab-asu.github.io/ And Vivek's blog on research struggles: https://vgupta123.github.io/docs/phd_struggles.pdf About the Host: Jay is a PhD student at Arizona State University working on improving AI for medical diagnosis and prognosis. Linkedin: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Homepage: http://jayshah.me/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    Time Series Forecasting using GPT models | Max Mergenthaler Canseco

    Sep 19, 2024 • 70:21


    Max is the CEO and co-founder of Nixtla, where he is developing highly accurate forecasting models using time series data and deep learning techniques, which developers can use to build their own pipelines. Max is a self-taught programmer and researcher with a lot of prior experience building things from scratch. 00:00:50 Introduction 00:01:26 Entry point in AI 00:04:25 Origins of Nixtla 00:07:30 Idea to product 00:11:21 Behavioral economics & psychology to time series prediction 00:16:00 Landscape of time series prediction 00:26:10 Foundation models in time series 00:29:15 Building TimeGPT 00:31:36 Numbers and GPT models 00:34:35 Generalization to real-world datasets 00:38:10 Math reasoning with LLMs 00:40:48 Neural Hierarchical Interpolation for Time Series Forecasting 00:47:15 TimeGPT applications 00:52:20 Pros and Cons of open-source in AI 00:57:20 Insights from building AI products 01:02:15 Tips to researchers & hype vs Reality of AI More about Max: https://www.linkedin.com/in/mergenthaler/ and Nixtla: https://www.nixtla.io/ Check out TimeGPT: https://github.com/Nixtla/nixtla About the Host: Jay is a PhD student at Arizona State University working on improving AI for medical diagnosis and prognosis. Linkedin: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Homepage: https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    Generative AI and the Art of Product Engineering | Golnaz Abdollahian

    Sep 5, 2024 • 35:22


    Golnaz Abdollahian is currently the Senior Director of Big Idea Innovation at Dolby Laboratories. She has extensive experience developing and shaping technological products around augmented and virtual reality, smart homes, and generative AI. Before joining Dolby, she worked at Microsoft, Apple, and Sony. She also holds a PhD in electrical engineering from Purdue University. Time stamps of the conversation 00:00 Highlights 01:08 Introduction 01:52 Entry point in AI 03:00 Leading Big Idea Innovation at Dolby 06:55 Generative AI, Entertainment and Dolby 08:45 How do content creators feel about AI? 10:30 From a Researcher to a Product person 14:27 Traditional Tech products versus AI products 17:52 From concept to product 20:35 Lessons in Product design from Apple, Microsoft, Sony & Dolby 25:34 Interpreting trends in AI 29:25 Good versus Bad Product 31:25 Advice to people interested in productization More about Golnaz: https://www.linkedin.com/in/golnaz-abdollahian-93938a5/ About the Host: Jay is a PhD student at Arizona State University working on improving AI for medical diagnosis and prognosis. Linkedin: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Homepage: https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    Future of Software Development with LLMs, Advice on Building Tech startups & more | Pritika Mehta

    Aug 14, 2024 • 37:31


    Pritika is the co-founder of Butternut AI, a platform that allows the creation of professional websites without hiring web developers. Before Butternut, Pritika gained entrepreneurship experience building other products, which were later acquired. Time stamps of the conversation 00:00 Highlights 01:15 Introduction 01:50 Entry point in AI 03:04 Motivation behind Butternut AI 05:00 Can software engineering be automated? 06:36 Large Language Models in Software Development 08:00 AI as a replacement vs assistant 10:32 Automating website development 13:40 Limitations of current LLMs 18:12 Landscape of startups using LLMs 19:50 Going from an idea to a product 27:48 Background in AI for building AI-based startup 30:00 Entrepreneurship 34:32 Startup Culture in USA vs. India More about Butternut AI: https://butternut.ai/ Pritika's Twitter: https://x.com/pritika_mehta And LinkedIn: https://www.linkedin.com/in/pritikam/ About the Host: Jay is a PhD student at Arizona State University working on improving AI for medical diagnosis and prognosis. Linkedin: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Homepage: https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    Instruction Tuning, Prompt Engineering and Self Improving Large Language Models | Dr. Swaroop Mishra

    Jul 9, 2024 • 91:39


    Swaroop is a research scientist at Google DeepMind, working on improving Gemini. His research expertise includes instruction tuning and different prompt engineering techniques to improve reasoning and generalization performance in large language models (LLMs) and tackle induced biases in training. Before joining DeepMind, Swaroop graduated from Arizona State University, where his research focused on developing methods that allow models to learn new tasks from instructions. Swaroop has also interned at Microsoft, Allen AI, and Google, and his research on instruction tuning has been influential in the recent developments of LLMs. Time stamps of the conversation: 00:00:50 Introduction 00:01:40 Entry point in AI 00:03:08 Motivation behind Instruction tuning in LLMs 00:08:40 Generalizing to unseen tasks 00:14:05 Prompt engineering vs. Instruction Tuning 00:18:42 Does prompt engineering induce bias? 00:21:25 Future of prompt engineering 00:27:48 Quality checks on Instruction tuning dataset 00:34:27 Future applications of LLMs 00:42:20 Trip planning using LLM 00:47:30 Scaling AI models vs making them efficient 00:52:05 Reasoning abilities of LLMs in mathematics 00:57:16 LLM-based approaches vs. traditional AI 01:00:46 Benefits of doing research internships in industry 01:06:15 Should I work on LLM-related research? 01:09:45 Narrowing down your research interest 01:13:05 Skills needed to be a researcher in industry 01:22:38 On publish or perish culture in AI research More about Swaroop: https://swarooprm.github.io/ And his research works: https://scholar.google.com/citations?user=-7LK2SwAAAAJ&hl=en Twitter: https://x.com/Swarooprm7 About the Host: Jay is a PhD student at Arizona State University working on improving AI for medical diagnosis and prognosis. Linkedin: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Homepage: https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    Role of Large Language Models in AI-driven medical research | Dr. Imon Banerjee

    Apr 23, 2024 • 46:49


    Dr. Imon Banerjee is an Associate Professor at Mayo Clinic in Arizona, working at the intersection of AI and healthcare research. Her research focuses on multi-modality fusion, mitigating bias in AI models (specifically in the context of medical applications), and more broadly on building predictive models using different data sources. Before joining the Mayo Clinic, she was at Emory University as an Assistant Professor and at Stanford as a Postdoctoral fellow. Time stamps of the conversation 00:00 Highlights 01:00 Introduction 01:50 Entry point in AI 04:41 Landscape of AI in healthcare so far 06:15 Research to practice 07:50 Challenges of AI Democratization 11:56 Era of Generative AI in Medical Research 15:57 Responsibilities to realize 16:40 Are LLMs a world model? 17:50 Training on medical data 19:55 AI as a tool in clinical workflows 23:36 Scientific discovery in medicine 27:08 Dangers of biased AI models in healthcare applications 28:40 Good vs Bad bias 33:33 Scaling models - the current trend in AI research 35:05 Current focus of research 36:41 Advice on getting started 39:46 Interdisciplinary efforts for efficiency 42:22 Personalities for getting into research More about Dr. Banerjee's lab and research: https://labs.engineering.asu.edu/banerjeelab/person/imon-banerjee/ About the Host: Jay is a PhD student at Arizona State University. Linkedin: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Homepage: https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    Algorithmic Reasoning, Graph Neural Nets, AGI and Tips to researchers | Petar Veličković

    Oct 27, 2023 • 72:29


    Dr. Petar Veličković is a Staff Research Scientist at Google DeepMind and an Affiliated Lecturer at the University of Cambridge. He is known for his research contributions in graph representation learning, particularly graph neural networks and graph attention networks. At DeepMind, he has been working on Neural Algorithmic Reasoning, which we talk about more in this podcast. Petar's research has been featured in numerous media articles and has been impactful in many ways, including improved predictions in Google Maps. Time stamps 00:00:00 Highlights 00:01:00 Introduction 00:01:50 Entry point in AI 00:03:44 Idea of Graph Attention Networks 00:06:50 Towards AGI 00:09:58 Attention in Deep learning 00:13:15 Attention vs Convolutions 00:20:20 Neural Algorithmic Reasoning (NAR) 00:25:40 End-to-end learning vs NAR 00:30:40 Improving Google Map predictions 00:34:08 Interpretability 00:41:28 Working at Google DeepMind 00:47:25 Fundamental vs Applied side of research 00:50:58 Industry vs Academia in AI Research 00:54:25 Tips to young researchers 01:05:55 Is a PhD required for AI research? More about Petar: https://petar-v.com/ Graph Attention Networks: https://arxiv.org/abs/1710.10903 Neural Algorithmic Reasoning: https://www.cell.com/patterns/pdf/S2666-3899(21)00099-4.pdf TacticAI paper: https://arxiv.org/abs/2310.10553 And his collection of invited talks: @petarvelickovic6033 About the Host: Jay is a PhD student at Arizona State University. Linkedin: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Homepage: https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    Combining Vision & Language in AI perception and the era of LLMs & LMMs | Dr. Yezhou Yang

    Oct 10, 2023 • 113:47


    Dr. Yezhou Yang is an Associate Professor at Arizona State University and director of the Active Perception Group at ASU. His research interests are in Cognitive Robotics and Computer Vision, understanding human actions from visual input, and grounding them in natural language. Prior to joining ASU, he completed his Ph.D. at the University of Maryland and his postdoctoral training at the Computer Vision Lab and the Perception and Robotics Lab. Timestamps of the conversation 00:01:02 Introduction 00:01:46 Interest in AI 00:17:04 Entry in Robotics & AI Perception 00:20:59 Combining Vision & language to Improve Robot Perception 00:23:30 End-to-end learning vs traditional knowledge graphs 00:28:28 What do LLMs learn? 00:30:30 Nature of AI research 00:36:00 Why vision & language in AI? 00:45:40 Learning vs Reasoning in neural networks 00:53:05 Bringing AI to the general crowd 01:00:10 Transformers in Vision 01:08:54 Democratization of AI 01:13:42 Motivation for research: theory or application? 01:18:50 Surpassing human intelligence 01:25:13 Open challenges in computer vision research 01:30:19 Doing research is a privilege 01:35:00 Rejections, tips to read & write good papers 01:43:37 Tips for AI Enthusiasts 01:47:35 What is a good research problem? 01:50:30 Dos and Don'ts in AI research More about Dr. Yang: https://yezhouyang.engineering.asu.edu/ And his Twitter handle: https://twitter.com/Yezhou_Yang About the Host: Jay is a PhD student at Arizona State University. Linkedin: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Homepage: https://www.public.asu.edu/~jgshah1/ for any queries. Check-out Rora: https://teamrora.com/jayshah Guide to STEM PhD AI Researcher + Research Scientist pay: https://www.teamrora.com/post/ai-researchers-salary-negotiation-report-2023 Stay tuned for upcoming webinars! ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    Risks of AI in real-world and towards Building Robust Security measures | Hyrum Anderson

    Jul 12, 2023 • 51:33


    Dr. Hyrum Anderson is a Distinguished Machine Learning Engineer at Robust Intelligence. Prior to that, he was Principal Architect of Trustworthy Machine Learning at Microsoft, where he also founded Microsoft's AI Red Team; he also led security research at MIT Lincoln Laboratory, Sandia National Laboratories, and Mandiant, and was Chief Scientist at Endgame (later acquired by Elastic). He's also the co-author of the book “Not with a Bug, But with a Sticker” and his research interests include assessing the security and privacy of ML systems and building robust AI models. Timestamps of the conversation 00:50 Introduction 01:40 Background in AI and ML security 04:45 Attacks on ML systems 08:20 Fractions of ML systems prone to Attacks 10:38 Operational risks with security measures 13:40 Solution from an algorithmic or policy perspective 15:46 AI regulation and policy making 22:40 Co-development of AI and security measures 24:06 Risks of Generative AI and Mitigation 27:45 Influencing an AI model 30:08 Prompt stealing on ChatGPT 33:50 Microsoft AI Red Team 38:46 Managing risks 39:41 Government Regulations 43:04 What to expect from the Book 46:40 Black in AI & Bountiful Children's Foundation Check out Rora: https://teamrora.com/jayshah Guide to STEM Ph.D. AI Researcher + Research Scientist pay: https://www.teamrora.com/post/ai-researchers-salary-negotiation-report-2023 Rora's negotiation philosophy: https://www.teamrora.com/post/the-biggest-misconception-about-negotiating-salary https://www.teamrora.com/post/job-offer-negotiation-lies Hyrum's Linkedin: https://www.linkedin.com/in/hyrumanderson/ And Research: https://scholar.google.com/citations?user=pP6yo9EAAAAJ&hl=en Book - Not with a Bug, But with a Sticker: https://www.amazon.com/Not-Bug-But-Sticker-Learning/dp/1119883989/ About the Host: Jay is a Ph.D. student at Arizona State University. Linkedin: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Homepage: https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    Being aware of Systematic Biases and Over-trust in AI | Meredith Broussard

    Jul 10, 2023 • 37:15


    Meredith is an associate professor at New York University and research director at the NYU Alliance for Public Interest Technology. Her research interests include using data analysis for good and ethical AI. She is also the author of the book “More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech,” which we discuss with her in this podcast. Time stamps of the conversation 00:42 Introduction 01:17 Background 02:17 Meaning of “it is not a glitch” in the book title 04:40 How are biases coded into AI systems? 08:45 AI is not the solution to every problem 09:55 Algorithm Auditing 11:57 Why don't organizations use algorithmic auditing more often? 15:12 Techno-chauvinism and drawing boundaries 23:18 Bias issues with ChatGPT and Auditing the model 27:55 Using AI for Public Good - AI on context 31:52 Advice to young researchers in AI Meredith's homepage: https://meredithbroussard.com/ And her Book: https://mitpress.mit.edu/9780262047654/more-than-a-glitch/ About the Host: Jay is a Ph.D. student at Arizona State University. Linkedin: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Homepage: https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    P2 Working at DeepMind, Interview Tips & doing a PhD for a career in AI | Dr. David Stutz

    Jul 10, 2023 • 102:28


    Part 2 of my podcast with David Stutz. (Part 1: https://youtu.be/J7hzMYUcfto) David is a research scientist at DeepMind working on building robust and safe deep learning models. Prior to joining DeepMind, he was a PhD student at the Max Planck Institute for Informatics. He also maintains a fantastic blog on various topics related to machine learning and graduate life, which is insightful for young researchers. 00:00:00 Working at DeepMind 00:08:20 Importance of Abstraction and Collaboration in Research 00:13:08 DeepMind internship project 00:19:39 What drives research projects at DeepMind 00:27:45 Research in Industry vs Academia 00:30:45 Interview tips for research roles, at DeepMind or other companies 00:44:38 Finding the right Advisor & Institute for PhD 01:02:12 Do you really need a Ph.D. to do AI/ML research? 01:08:28 Academia vs Industry: Making the choice 01:10:49 Pressure to publish more papers 01:21:35 Artificial General Intelligence (AGI) 01:33:24 Advice to young enthusiasts on getting started David's Homepage: https://davidstutz.de/ And his blog: https://davidstutz.de/category/blog/ Research work: https://scholar.google.com/citations?user=TxEy3cwAAAAJ&hl=en About the Host: Jay is a Ph.D. student at Arizona State University. Linkedin: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Homepage: https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    Negotiating Higher Salary for AI & Tech roles after Job Offer | Jordan Sale

    Jul 9, 2023 • 57:43


    Rora helps top AI researchers and professionals negotiate their pay -- often as they transition from academia into industry. Moving into tech is a huge transition for many PhDs and post-docs -- the pay is much more significant and the terms of employment are often quite different. In the past 5 years, the Rora team has helped over 1000 STEM professionals negotiate more than $10M in additional earnings from companies like DeepMind, OpenAI, Google Brain, and Anthropic -- and advocate for better roles, more alignment with their managers, and more flexible work. Referral link: https://teamrora.com/jayshah Guide to STEM Ph.D. AI Researcher + Research Scientist pay: https://www.teamrora.com/post/ai-researchers-salary-negotiation-report-2023 (the majority of the STEM PhDs we support are going into tech roles) Rora's negotiation philosophy: https://www.teamrora.com/post/the-biggest-misconception-about-negotiating-salary https://www.teamrora.com/post/job-offer-negotiation-lies https://www.teamrora.com/post/roras-3-keys-to-negotiating-a-new-job-offer 00:00 Highlights 00:55 Introduction 01:42 About Rora 05:40 Myths in Job Negotiations 08:58 Fear of losing job offers 12:36 30-60-90 day roadmap for negotiation 15:28 Knowing if you should negotiate 20:46 Negotiating with only one offer 24:40 What to negotiate? 29:00 Knowing if you're low-balled in offers 31:31 When negotiations don't work out 35:00 When & How to Negotiate? 43:00 Negotiating promotions 46:45 Is there always room for Negotiation? 49:42 Quick advice to people who have offers in hand 55:32 Wrong assumptions Learn more about Jordan: https://www.linkedin.com/in/jordansale And Rora: https://teamrora.com/jayshah Also check-out these talks on all available podcast platforms: https://jayshah.buzzsprout.com About the Host: Jay is a Ph.D. student at Arizona State University. Linkedin: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Homepage: https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    P1 Adversarial robustness in Neural Networks, Quantization and working at DeepMind | David Stutz

    Jul 9, 2023 • 92:28


    Part 1 of my podcast with David Stutz. (Part 2: https://youtu.be/IumJcB7bE20) David is a research scientist at DeepMind working on building robust and safe deep learning models. Prior to joining DeepMind, he was a Ph.D. student at the Max Planck Institute for Informatics. He also maintains a fantastic blog on various topics related to machine learning and graduate life, which is insightful for young researchers. Check out Rora: https://teamrora.com/jayshah Guide to STEM Ph.D. AI Researcher + Research Scientist pay: https://www.teamrora.com/post/ai-researchers-salary-negotiation-report-2023 00:00:00 Highlights and Sponsors 00:01:22 Intro 00:02:14 Interest in AI 00:12:26 Finding research interests 00:22:41 Robustness vs Generalization in deep neural networks 00:28:03 Generalization vs model performance trade-off 00:37:30 On-manifold adversarial examples for better generalization 00:48:20 Vision transformers 00:49:45 Confidence-calibrated adversarial training 00:59:25 Improving hardware architecture for deep neural networks 01:08:45 What's the tradeoff in quantization? 01:19:07 Amazing aspects of working at DeepMind 01:27:38 Learning the skills of Abstraction when collaborating David's Homepage: https://davidstutz.de/ And his blog: https://davidstutz.de/category/blog/ Research work: https://scholar.google.com/citations?user=TxEy3cwAAAAJ&hl=en About the Host: Jay is a Ph.D. student at Arizona State University. Linkedin: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Homepage: https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    Promises and Lies of ChatGPT - understanding how it works | Subbarao Kambhampati

    Jun 7, 2023 • 166:43


    Dr. Subbarao Kambhampati is a Professor of Computer Science at Arizona State University and the director of the Yochan lab, where his research focuses on decision-making and planning, specifically in the context of human-aware AI systems. He has been named a fellow of AAAI, AAAS, and ACM in recognition of his research contributions and has also received a distinguished alumnus award from the University of Maryland and IIT Madras. Check out Rora: https://teamrora.com/jayshah Guide to STEM Ph.D. AI Researcher + Research Scientist pay: https://www.teamrora.com/post/ai-researchers-salary-negotiation-report-2023 Rora's negotiation philosophy: https://www.teamrora.com/post/the-biggest-misconception-about-negotiating-salary https://www.teamrora.com/post/job-offer-negotiation-lies 00:00:00 Highlights and Intro 00:02:16 What is ChatGPT doing? 00:10:27 Does it really learn anything? 00:17:28 ChatGPT hallucinations & getting facts wrong 00:23:29 Generative vs Predictive Modeling in AI 00:41:51 Learning common patterns from Language 00:57:00 Implications in society 01:03:28 Can we fix ChatGPT hallucinations? 01:26:24 RLHF is not enough 01:32:47 Existential risk of AI (or ChatGPT) 01:49:04 Open sourcing in AI 02:04:32 OpenAI is not "open" anymore 02:08:51 Can AI program itself in the future? 02:25:08 Deep & Narrow AI to Broad & Shallow AI 02:30:03 AI as assistive technology - understanding its strengths & limitations 02:44:14 Summary Articles referred to in the conversation: https://thehill.com/opinion/technology/3861182-beauty-lies-chatgpt-welcome-to-the-post-truth-world/ More about Prof. Rao Homepage: https://rakaposhi.eas.asu.edu/ Twitter: https://twitter.com/rao2z Also check-out these talks on all available podcast platforms: https://jayshah.buzzsprout.com About the Host: Jay is a Ph.D. student at Arizona State University. Linkedin: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Homepage: https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.*** Checkout these Podcasts on YouTube: https://www.youtube.com/c/JayShahml About the author: https://www.public.asu.edu/~jgshah1/

    Building a company in the middle of War, Pandemic and Economic Crisis | Karyna Naminas

    Jun 4, 2023 • 74:10


    Karyna Naminas is the CEO of Label Your Data, which provides data annotation services to different organizations interested in developing AI-based solutions. Check out Rora: https://teamrora.com/jayshah Guide to STEM Ph.D. AI Researcher + Research Scientist pay: https://www.teamrora.com/post/ai-researchers-salary-negotiation-report-2023 Rora's negotiation philosophy: https://www.teamrora.com/post/the-biggest-misconception-about-negotiating-salary https://www.teamrora.com/post/job-offer-negotiation-lies 00:00:00 Introduction and Sponsors 00:02:28 Background before being a CEO 00:06:38 Fascinating aspects of AI 00:09:10 Data annotation outside of AI 00:10:21 Effect of COVID, Russia-Ukraine War, and economic crisis on Business 00:18:47 Sourcing data annotators 00:22:40 Challenges in annotation 00:31:00 Data annotation for Military applications in Ukraine 00:41:42 Tools used for annotation 00:44:56 Segment Anything and ChatGPT to facilitate annotation 00:51:00 Key responsibilities as a CEO 00:53:58 Metrics for performance evaluation 00:59:56 Building leadership 01:06:06 Advice to aspiring entrepreneurs 01:09:34 Dealing with failures as a CEO Learn more about Karyna: https://www.linkedin.com/in/karyna-naminas-923908200 Label Your Data: https://labelyourdata.com/ LinkedIn: https://www.linkedin.com/company/label-your-data/ Also check-out these talks on all available podcast platforms: https://jayshah.buzzsprout.com About the Host: Jay is a Ph.D. student at Arizona State University. Linkedin: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Homepage: https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.*** Checkout these Podcasts on YouTube: https://www.youtube.com/c/JayShahml About the author: https://www.public.asu.edu/~jgshah1/

    Video recommendations using Machine Learning at Facebook, News feed & Ads ranking | Amey Dharwadker

    Jun 4, 2023 • 76:06


    Amey Dharwadker works as a Machine Learning Tech Lead Manager at Meta, supporting Facebook's Video Recommendations Ranking team and working on building and deploying personalization models for billions of users. He has also been instrumental in driving a significant increase in user engagement and revenue for the company through his work on News Feed and Ads ranking ML models. As an experienced researcher, he has co-authored publications at various AI/ML conferences and patents in the fields of recommender systems and machine learning. He has undergraduate and graduate degrees from the National Institute of Technology Tiruchirappalli (India) and Columbia University. Time stamps of the conversation 00:00:46 Introduction 00:01:46 Getting into recommendation systems 00:05:25 Projects currently working on at Facebook, Meta 00:06:55 User satisfaction to improve recommendations 00:08:25 Implicit Metrics to improve engagement 00:11:34 Video vs product recommendations based on fixed attributes 00:13:20 Understanding video content 00:15:55 Working at Scale 00:20:02 Cold start problem 00:22:41 Data privacy concerns 00:24:36 Challenges of deploying machine learning models 00:30:56 Trade-off in metrics to boost user engagement 00:33:47 Introspecting recommender systems - Interpretability 00:37:14 Long video vs short video - how to adapt algorithms? 00:42:17 Being a Machine Learning Tech Lead Manager at Meta - work routine 00:45:00 Transitioning to leadership roles 00:50:55 Tips on interviewing for Machine Learning roles 00:57:23 Machine Learning job interviews 01:02:30 Finding your interest in AI/machine learning 01:05:24 Transitioning to ML roles within the industry 01:08:36 Remaining updated to research 01:12:00 Advice to young computer science students More about Amey: https://research.facebook.com/people/dharwadker-amey-porobo/ Linkedin: https://www.linkedin.com/in/ameydharwadker/ Also check-out these talks on all available podcast platforms: https://jayshah.buzzsprout.com About the Host: Jay is a Ph.D. student at Arizona State University. Linkedin: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Homepage: https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.*** Checkout these Podcasts on YouTube: https://www.youtube.com/c/JayShahml About the author: https://www.public.asu.edu/~jgshah1/

    Using AI to improve maternal & child health in underserved communities of India | Aparna Taneja

    May 11, 2023 • 75:15


    Dr. Aparna Taneja works at Google Research in India on innovative projects driving real-world social impact. Her team collaborates with an NGO called ARMMAN with the mission to improve maternal and child health outcomes in underserved communities of India. Prior to Google, she was a Post-Doc at Disney Research, Zurich, and has a PhD from the Computer Vision and Geometry Group at ETH Zurich and a Bachelor's in Computer Science from the Indian Institute of Technology, Delhi. Time stamps of the conversation 00:00:46 Introductions 00:01:20 Background and Interest in AI 00:03:59 Satellite imaging and AI at Google 00:08:30 Multi-Agent systems for social impact - part of AI for social good 00:10:30 Awareness of AI benefits in non-tech fields 00:13:42 Project SAHELI - improving maternal and child health using AI 00:20:05 Intuition for methodology 00:22:07 Measuring impact on health 00:27:42 Challenges when working with real-world data 00:32:58 Problem scoping and defining research statements 00:38:16 Disconnect between tech and non-tech communities while collaborating 00:43:22 What motivates you, the theoretical or application side of research 00:47:17 What research skills are a must when working on real-world challenges using AI 00:50:33 Factors considered before doing a PhD 00:54:08 Significance of Ph.D. for research roles in the industry 00:58:15 Choosing industry vs Academia 01:02:38 Managing personal life with a research career 01:07:58 Advice to young students interested in AI on getting started Learn more about Aparna here: https://research.google/people/106890/ Research: https://scholar.google.com/citations?user=XtMi1L0AAAAJ&hl=en About the Host: Jay is a Ph.D. student at Arizona State University. Linkedin: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Homepage: https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.*** Checkout these Podcasts on YouTube: https://www.youtube.com/c/JayShahml About the author: https://www.public.asu.edu/~jgshah1/

    Fixing fake news and misinformation online using Robust AI models | Prof. Srijan Kumar

    May 3, 2023 • 93:34


    Dr. Srijan Kumar is an Assistant Professor at Georgia Tech with research interests in combating misinformation and harmful content on online platforms, building AI models robust to adversarial attacks, and behavior modeling for more accurate recommender systems. Before joining Georgia Tech, he was a postdoctoral fellow at Stanford University and completed his Ph.D. in computer science from the University of Maryland. He has received multiple awards for his research work, including Forbes 30u30 and being named a Kavli Fellow by the National Academy of Sciences. Time stamps of the conversation 00:01:00 Introductions 00:01:45 Background and Interest in AI 00:05:27 Current research interests 00:09:50 What is misinformation? 00:15:07 ChatGPT and misinformation 00:23:40 How can AI help detect misinformation? 00:39:15 Twitter's Birdwatch platform to detect fake/misleading news 00:56:38 Detecting fake bots on Twitter 01:03:39 Adversarial training to build robust AI models 01:05:31 Robustness vs Generalizability in machine learning 01:11:40 Navigating your interest in the field of AI/machine learning 01:19:22 Doing a Ph.D. and working in Industry vs Academia 01:24:22 Focusing on Quality of Research rather than Quantity 01:31:23 Advice to young people interested in AI Dr. Kumar's homepage: https://cc.gatech.edu/~srijan/ Twitter: https://twitter.com/srijankedia Linkedin: https://www.linkedin.com/in/srijankr About the Host: Jay is a Ph.D. student at Arizona State University. Linkedin: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Homepage: https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.*** Checkout these Podcasts on YouTube: https://www.youtube.com/c/JayShahml About the author: https://www.public.asu.edu/~jgshah1/

    Combining knowledge of clinical medicine and Artificial Intelligence | Emma Rocheteau

    Mar 31, 2023 • 96:51


    Emma is a final-year medical student at the University of Cambridge who is also pursuing her Ph.D. in Machine Learning. With her knowledge of clinical decision-making, she is working on research projects that leverage machine-learning techniques to improve clinical workflow. She will take up a role as an academic doctor after her graduation. Time stamps of the conversation 00:00:00 Introduction 00:02:08 From clinical science to learning AI 00:13:15 Learning the basics of Artificial Intelligence 00:20:12 Promise of AI in medicine 00:30:13 Do we really need interpretable AI models for clinical decision-making? 00:38:47 Using AI for more clinically-useful problems 00:50:55 Facilitating interdisciplinary efforts 00:54:06 Predicting length of stay in ICUs using convolutional neural networks 01:03:04 AI for improving clinical workflows and biomarker discovery 01:07:55 Clustering disease trajectories in mechanically ventilated patients using machine learning 01:16:37 ChatGPT for medical research or clinical decision making 01:25:21 Quality over quantity of AI works published nowadays 01:31:07 Advice to researchers Emma's Homepage: https://emmarocheteau.com/ LinkedIn: https://www.linkedin.com/in/emma-rocheteau-125384132/ Also check-out these talks on all available podcast platforms: https://jayshah.buzzsprout.com About the Host: Jay is a Ph.D. student at Arizona State University. Linkedin: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Homepage: https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.*** Checkout these Podcasts on YouTube: https://www.youtube.com/c/JayShahml About the author: https://www.public.asu.edu/~jgshah1/

    Why are Transformers so effective in Large Language Models like ChatGPT

    Mar 29, 2023 • 9:43


    Understanding why and how transformers are so efficient in large language models nowadays such as #chatgpt and more. Watch the full podcast with Dr. Surbhi Goel here: https://youtu.be/stB0cY_fffo Find Dr. Goel on social media Website: https://www.surbhigoel.com/ Linkedin: https://www.linkedin.com/in/surbhi-goel-5455b25a Twitter: https://twitter.com/surbhigoel_?lang=en Learning Theory Alliance: https://let-all.com/index.html About the Host: Jay is a Ph.D. student at Arizona State University. Linkedin: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Homepage: https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.*** Checkout these Podcasts on YouTube: https://www.youtube.com/c/JayShahml About the author: https://www.public.asu.edu/~jgshah1/

    History of Large Language Models, Trustworthy AI, ChatGPT & more | Dr. Anupam Datta

    Feb 23, 2023 • 46:21


    Anupam is the co-founder and President of TruEra; prior to that, he was a Professor at Carnegie Mellon University for 15 years. TruEra provides AI solutions that help enterprises use machine learning, improve and monitor model quality, and build trust. His research and other efforts are focused on privacy, fairness, and building trustworthy machine-learning models. He holds a Ph.D. in computer science from Stanford University and a Bachelor's degree in the same field from IIT Kharagpur in India. Time stamps of the conversation 00:50 Introductions 01:45 Background and TruEra 05:30 Trustworthy AI 11:55 Validating Large models in the real world 16:15 History of NLP and large language models 29:25 Opportunities and challenges with ChatGPT 36:52 Evaluating the reliability of ChatGPT 39:10 Existing tools that aid explainability 43:12 AI trends to look for in 2023 More about Dr. Datta Website: https://www.andrew.cmu.edu/user/danupam/ Linkedin: https://www.linkedin.com/in/anupamdatta Research: https://scholar.google.com/citations?user=oK3QM1wAAAAJ&hl=en About TruEra: https://truera.com/ About the Host: Jay is a Ph.D. student at Arizona State University. Linkedin: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Homepage: https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.*** Checkout these Podcasts on YouTube: https://www.youtube.com/c/JayShahml About the author: https://www.public.asu.edu/~jgshah1/

    Theory of Machine Learning, Transformer models, ChatGPT & tips for research career | Dr. Surbhi Goel

    Feb 16, 2023 • 91:25


    Surbhi is an Assistant Professor at the University of Pennsylvania. She received her Ph.D. in Computer Science from UT Austin, and prior to joining UPenn as an Assistant Professor, she was a postdoctoral researcher at Microsoft Research NYC in the Machine Learning group. She has research expertise in theoretical computer science & machine learning, with a particular focus on developing theoretical foundations for modern deep learning paradigms. She also helps build the Learning Theory Alliance, a community that organizes and conducts several events useful to researchers and students in their careers. Time stamps of the conversation 00:00:54 Introduction 00:01:54 Background and research interests 00:05:03 Interest in Machine Learning Theory 00:13:02 Understanding how deep learning works 00:16:30 Transformer architecture 00:25:40 Scale of data and big models 00:31:28 Reasoning in deep learning 00:38:52 Theoretical perspective on AGI, consciousness, and sentience in AI 00:46:00 Remaining updated to the latest research 00:53:38 Should one do a Ph.D.? 00:57:45 Is a Ph.D. mandatory for machine learning industry positions? 01:01:38 What makes a good research thesis? 01:05:30 Some best practices in research 01:12:20 Learning Theory Alliance Group 01:14:25 Job interviews in academia for researchers 01:20:00 Advice to young researchers and students 01:25:02 Decision to become a Professor Find Dr. Goel on social media Website: https://www.surbhigoel.com/ Linkedin: https://www.linkedin.com/in/surbhi-goel-5455b25a Twitter: https://twitter.com/surbhigoel_?lang=en Learning Theory Alliance: https://let-all.com/index.html About the Host: Jay is a Ph.D. student at Arizona State University. Linkedin: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Homepage: https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.*** Checkout these Podcasts on YouTube: https://www.youtube.com/c/JayShahml About the author: https://www.public.asu.edu/~jgshah1/

    Making Machine Learning more accessible | Sebastian Raschka

    Dec 29, 2022 • 82:39


    Sebastian Raschka is the lead AI educator at GridAI. He is the author of the book "Machine Learning with PyTorch and Scikit-Learn" and also a few other books that cover the fundamentals of #machinelearning and #deeplearning techniques and how to implement them with Python. He is also an Assistant Professor of Statistics at the University of Wisconsin-Madison and has been actively involved in making ML more accessible to beginners through his blogs, video tutorials, tweets and, of course, his books. He also holds a doctorate in Computational and Quantitative Biology from Michigan State University. Time Stamps of the Podcast 00:00:00 Introductions 00:02:40 Entry point in AI/ML that made you interested in it 00:05:30 How did you go about learning the basics and implementation of various methods? 00:11:45 What makes Python ideal for learning Machine Learning recently? 00:21:54 What is your book about and who is this for? 00:33:55 What goes into writing a good technical book? 00:40:50 Applying ML to toy datasets vs real-world research problems 00:47:40 Choosing b/w machine learning methods & deep learning methods 00:56:22 Large models vs architecture efficient models 01:01:25 Interpretability & Explainability in AI 01:08:45 Insights for people interested in machine learning research, academia or PhD 01:14:17 Keeping up with research in deep learning Sebastian's homepage: https://sebastianraschka.com/ Twitter: https://mobile.twitter.com/rasbt LinkedIn: https://www.linkedin.com/in/sebastianraschka/ His book: https://www.amazon.com/Machine-Learning-PyTorch-Scikit-Learn-scikit-learn-ebook-dp-B09NW48MR1/dp/B09NW48MR1/ Video Tutorials: @SebastianRaschka About the Host: Jay is a Ph.D. student at Arizona State University. Linkedin: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Reach out to https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.*** Checkout these Podcasts on YouTube: https://www.youtube.com/c/JayShahml About the author: https://www.public.asu.edu/~jgshah1/

    Current and future state of Artificial Intelligence in Healthcare | Dr. Matthew Lungren

    Dec 28, 2022 • 65:31


    Dr. Matthew Lungren is currently the Chief Medical Information Officer at Nuance Communications, a Microsoft company, and also holds part-time appointments with the University of California, San Francisco as an Associate Clinical Professor and as adjunct faculty at Stanford and Duke University. He is a radiologist by training and has led and contributed to multiple projects that use AI and deep learning for medical imaging and precision medicine. Time stamps from the conversation 00:00:55 Introduction 00:01:46 Role as a Chief Medical Information Officer 00:05:25 Leading research projects in the industry 00:08:45 Is AI ready for primetime use cases in the real world? 00:12:40 Regulations on AI systems in healthcare 00:17:25 Interpretability vs a robust validation framework 00:25:22 Promising directions to mitigate data issues in medical research 00:32:24 Stable diffusion models 00:34:06 Making datasets public 00:39:00 Vision transformers for multi-modal models 00:44:35 Biomarker discovery 00:48:20 Sentiment of AI in medicine 00:53:26 Bridging the communication gap between computer scientists and medical experts 01:01:42 Advice to young researchers from medical and engineering schools Find Dr. Lungren on social media Twitter: https://twitter.com/mattlungrenmd LinkedIn: https://www.linkedin.com/in/mattlungrenmd/ About the Host: Jay is a Ph.D. student at Arizona State University. Linkedin: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Homepage: https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.*** Checkout these Podcasts on YouTube: https://www.youtube.com/c/JayShahml About the author: https://www.public.asu.edu/~jgshah1/

    AI for improving clinical trials & drug development, entrepreneurship & AI safety | Charles Fisher

    Oct 31, 2022 • 72:23


    Dr. Charles Fisher is the CEO and Founder of Unlearn.AI, which helps enable faster drug development and more efficient clinical trials. This year they also raised a Series B funding round of 50 million dollars. Charles holds a Ph.D. in biophysics from Harvard University; prior to founding Unlearn, he did a postdoctorate at Boston University, followed by roles as a principal scientist at Pfizer and a machine learning engineer at a virtual reality company in Silicon Valley. Time stamps of the conversation 00:00:30 Introduction 00:01:16 What got you into Machine Learning? 00:04:10 Learning the basics and implementation 00:07:55 Digital twins for clinical trials and drug development 00:13:06 Patient heterogeneity in medical research 00:16:05 Error quantification of models 00:17:17 ML models for drug development 00:22:45 Adoption of AI in medical applications 00:25:35 Building trust in AI systems 00:35:10 How to show AI models are safe in the real world? 00:38:38 Moving from academia to industry to entrepreneurship 00:45:08 Research projects in startups vs academia vs big companies 00:53:12 Routine as a CEO 00:57:50 Is a Ph.D. necessary for a research career in the industry? 01:01:20 Taking inspiration from biology to improve machine learning 01:05:25 Advice to young people About Charles: LinkedIn: https://www.linkedin.com/in/drckf/ More about Unlearn: https://www.unlearn.ai/ About the Host: Jay is a Ph.D. student at Arizona State University. Linkedin: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Homepage: https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    Recommendation systems, being an Applied Scientist & Building a good research career | Mina Ghashami

    Play Episode Listen Later Sep 14, 2022 75:26


    Mina Ghashami is an Applied Scientist on the Alexa Video team at Amazon Science and a lecturer at Stanford University. Prior to joining Amazon, she was a Research Scientist at Visa Research, working on recommendation systems built on user transactions, among other projects. She completed her Ph.D. in Computer Science at the University of Utah, followed by a postdoctoral position at Rutgers University. At Amazon, she focuses mainly on video-based ranking and recommendation systems, which we discuss in detail in this conversation.
    Time stamps of the conversation
    00:00:50 Introductions
    00:01:40 Alexa Video - Ranking and Recommendation research
    00:05:25 Feature engineering for recommendation systems
    00:08:30 Ground truth for training recommendation systems
    00:12:46 What does an Applied Scientist do? (at Amazon)
    00:19:17 What got you into AI? And specifically recommendation systems
    00:24:30 Matrix approximation
    00:27:15 Challenges in recommendation research
    00:32:00 What's more interesting, the theoretical or the applied side of research?
    00:37:10 Overparametrization vs generalizability
    00:39:55 Managing academic and industry positions at the same time
    00:46:26 Should one do a Ph.D. for research roles in the industry?
    00:50:00 Skills learned while pursuing a PhD
    00:54:22 Deciding industry vs academia
    00:56:20 Keeping up with research in deep learning
    01:02:14 What makes a good research dissertation?
    01:04:16 Advice to young students navigating their interest in machine learning
    To learn more about Mina:
    Homepage: https://mina-ghashami.github.io/
    Linkedin: https://www.linkedin.com/in/minaghashami
    Research: https://scholar.google.com/citations?user=msJHsYcAAAAJ&hl=en
    About the Host:
    Jay is a Ph.D. student at Arizona State University.
    Linkedin: https://www.linkedin.com/in/shahjay22/
    Twitter: https://twitter.com/jaygshah22
    Homepage: https://www.public.asu.edu/~jgshah1/ for any queries.
    Stay tuned for upcoming webinars!
    ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    Role of a Principal Scientist & AI in medicine | Alberto Santamaria-Pang, Microsoft

    Play Episode Listen Later Sep 12, 2022 94:20


    Alberto Santamaria-Pang is a Principal Applied Data Scientist at Microsoft. He received his Ph.D. in computer science from the University of Houston and has long experience in research and development across various AI projects, including but not limited to medical imaging and deep learning. Prior to Microsoft, he was a principal scientist at GE Research. He has led many research projects in industry as well as government-funded projects, a few of which we discuss today.
    Time stamps of the conversation:
    00:00:37 Introduction
    00:01:25 Background before you got into the industry
    00:04:17 Interest in AI and Medical Imaging
    00:05:54 What does a Principal Scientist do?
    00:10:00 What drives research in industry? Product or theoretical pursuit?
    00:11:35 Learning skills relevant to a principal scientist
    00:15:14 Principal Investigator vs Principal Scientist
    00:21:00 How do industry and academia collaborate on research projects?
    00:25:30 Promise & challenges of AI in medical research and applications
    00:31:53 What should explainable AI look like?
    00:38:35 Adoption of AI in medical research
    00:43:00 Is AI generalizable?
    00:44:36 AI for biomarker discovery
    00:51:42 Are large models useful in the AI & medicine space?
    00:58:00 Why is there a lack of datasets?
    01:01:02 Do you think AI is scary?
    01:04:00 Where do we need innovation in AI precisely?
    01:10:20 Getting inspiration from bio-research to improve algorithms
    01:13:19 AI and molecular pathology for cancer research
    01:20:30 Should one get a Ph.D.?
    01:27:38 Advice for young people
    About Alberto:
    His research works: https://scholar.google.com/citations?user=sVahJxsAAAAJ&hl=en
    LinkedIn: https://www.linkedin.com/in/alberto-santamaria
    About the Host:
    Jay is a Ph.D. student at Arizona State University.
    Linkedin: https://www.linkedin.com/in/shahjay22/
    Twitter: https://twitter.com/jaygshah22
    Homepage: https://www.public.asu.edu/~jgshah1/ for any queries.
    Stay tuned for upcoming webinars!
    ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    Explainability, Human Aware AI & sentience in large language models | Dr. Subbarao Kambhampati

    Play Episode Listen Later Jun 27, 2022 144:42


    Are large language models really sentient or conscious? What is explainability (XAI), and how can we create human-aware AI systems for collaborative tasks? Dr. Subbarao Kambhampati sheds light on these topics, on generating explanations for human-in-the-loop AI systems, and on understanding 'intelligence' in the context of AI systems. He is a Professor of Computer Science at Arizona State University and director of the Yochan lab at ASU, where his research focuses on decision-making and planning, specifically in the context of human-aware AI systems. He has received multiple awards for his research contributions, has been named a fellow of AAAI, AAAS, and ACM, and is a distinguished alumnus of the University of Maryland and, more recently, IIT Madras.
    Time stamps of the conversation:
    00:00:40 Introduction
    00:01:32 What got you interested in AI?
    00:07:40 Definition of intelligence that is not related to human intelligence
    00:13:40 Sentience vs intelligence in modern AI systems
    00:24:06 Human-aware AI systems for better collaboration
    00:31:25 Modern AI becoming a natural science instead of an engineering task
    00:37:35 Understanding symbolic concepts to generate accurate explanations
    00:56:45 Need for explainability, and where
    01:13:00 What motivates you for research, the associated application or the theoretical pursuit?
    01:18:47 Research in academia vs industry
    01:24:38 DALL-E performance and critiques
    01:45:40 What makes for a good research thesis?
    01:59:06 Different trajectories of a good CS PhD student
    02:03:42 Focusing on measures vs metrics
    02:15:23 Advice to students on getting started with AI
    Articles referred to in the conversation:
    AI as Natural Science?: https://cacm.acm.org/blogs/blog-cacm/261732-ai-as-an-ersatz-natural-science/fulltext
    Polanyi's Revenge and AI's New Romance with Tacit Knowledge: https://cacm.acm.org/magazines/2021/2/250077-polanyis-revenge-and-ais-new-romance-with-tacit-knowledge/fulltext
    More about Prof. Rao:
    Homepage: https://rakaposhi.eas.asu.edu/
    Twitter: https://twitter.com/rao2z
    About the Host:
    Jay is a Ph.D. student at Arizona State University.
    Linkedin: https://www.linkedin.com/in/shahjay22/
    Twitter: https://twitter.com/jaygshah22
    Homepage: https://www.public.asu.edu/~jgshah1/ for any queries.
    Stay tuned for upcoming webinars!
    ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    Tips & Insights into Program Manager Role | Divy Thakkar, PM at Google Research

    Play Episode Listen Later Jan 17, 2022 63:04


    Divy is a Program Manager and one of the founding members of Google Research in India. He is actively involved in leading strategic programs that connect academia and research at Google, programs focused on AI-for-Social-Good initiatives, and educational programs for schools in India with a particular focus on building computer science foundations.
    00:00:12 Introductions
    00:01:12 Background prior to joining Google
    00:08:40 Programs you are working on as a Program Manager at Google Research, and what are your responsibilities?
    00:14:35 Lifecycle of a program & its various phases
    00:17:55 Getting involved in research while being a Program Manager
    00:25:55 Learning skills for strategic thinking as a PM
    00:35:08 How did you get your PM role at Google, and what was the interview like?
    00:40:35 Resources people can use to prepare for PM interviews
    00:41:58 Difference b/w Product vs Program vs Technical Manager
    00:46:10 Previous experiences that helped develop skills for the Program Manager role
    00:53:58 Tips on being more organized with work
    Divy's Homepage: https://sites.google.com/view/divythakkar
    Twitter: https://twitter.com/divy93t
    LinkedIn: https://www.linkedin.com/in/divythakkar/
    About the Host:
    Jay is a Ph.D. student at Arizona State University, doing research on building Interpretable AI models for Medical Diagnosis.
    Jay Shah: https://www.linkedin.com/in/shahjay22/
    You can reach out to https://www.public.asu.edu/~jgshah1/ for any queries.
    Stay tuned for upcoming webinars!
    #research #programmanager #googleresearch #india #aiforsocialgood #manager
    ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    How do you decide which research problems to work on? Manish Gupta, Director of Google Research, India

    Play Episode Listen Later Jan 2, 2022 13:44


    Watch the full conversation with Dr. Manish Gupta here: https://youtu.be/-Tl6-DKxEMU
    Dr. Manish Gupta is currently the Director of Google Research in India. Prior to that, he was a Vice President at Xerox and led the Xerox Research Center in India, working mainly on data analytics and mobile computing. Before that, he was at IBM Research in India, building and leading a lab focused on high-performance computing and business analytics. He also led efforts at Goldman Sachs developing cloud, database, and networking technologies in support of business functions. He also co-founded, and was the CEO of, an educational technology startup called VideoKen.
    Dr. Manish Gupta's Homepage:
    https://www.iiitb.ac.in/faculty/manish-gupta
    https://research.google/people/106704/
    About the Host:
    Jay is a Ph.D. student at Arizona State University, doing research on building Interpretable AI models for Medical Diagnosis.
    Jay Shah: https://www.linkedin.com/in/shahjay22/
    You can reach out to https://www.public.asu.edu/~jgshah1/ for any queries.
    Stay tuned for upcoming webinars!
    #research #ai #machinelearning #googleresearch #india #aiforsocialgood
    ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    Writing a good Research Thesis | Dr. Hanie Sedghi, Google Research

    Play Episode Listen Later Dec 9, 2021 5:52


    Check out the full conversation with Hanie here: https://youtu.be/hFJLuqaSakA
    Hanie is a senior research scientist at Google Brain working on research problems related to understanding and improving deep learning techniques. She designs algorithms with theoretical guarantees that also work efficiently in real-world applications. Prior to that, she was a research scientist at the Allen Institute for AI, and before that a postdoctoral fellow at UC Irvine. She graduated from USC with a Ph.D., with minors in Mathematics.
    Dr. Hanie Sedghi's links:
    Twitter: https://twitter.com/haniesedghi?ref_src=twsrc%5Etfw
    Homepage: https://haniesedghi.com/
    About the Host:
    Jay is a Ph.D. student at Arizona State University, doing research on building Interpretable AI models for Medical Diagnosis.
    Jay Shah: https://www.linkedin.com/in/shahjay22/
    You can reach out to https://www.public.asu.edu/~jgshah1/ for any queries.
    Stay tuned for upcoming webinars!
    #theoryofmachinelearning #deeplearning #ai #machinelearning #fundamentals
    ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    Building a research lab from scratch & using AI for Social Good, Manish Gupta Google Research India

    Play Episode Listen Later Nov 11, 2021 72:26


    Dr. Manish Gupta is currently the Director of Google Research in India. Prior to that, he was a Vice President at Xerox and led the Xerox Research Center in India, working mainly on data analytics and mobile computing. Before that, he was at IBM Research in India, building and leading a lab focused on high-performance computing and business analytics. He also worked on the IBM Blue Gene supercomputer project in the early 2000s at the IBM T.J. Watson Research Center, for which IBM received the National Medal of Technology and Innovation from the President of the US. He also led efforts at Goldman Sachs developing cloud, database, and networking technologies in support of business functions. He also co-founded, and was the CEO of, an educational technology startup called VideoKen. He has received many distinguished awards for his work and has co-authored many academic papers in computer science.
    Time-Stamps
    00:00:00 Introductions
    00:01:50 What kind of research projects are you currently spearheading at Google Research, and what does your work routine look like?
    00:06:30 What was your thought process prior to joining Google Research?
    00:13:40 What's the difference between a Researcher | Senior Researcher | Director of Research?
    00:23:00 What should robust AI systems look like?
    00:33:22 How do you decide which research problems to work on?
    00:46:46 What kind of challenges have you encountered while working on AI research problems specific to India?
    00:56:15 How do you design and evaluate the impact of these AI projects?
    00:59:27 What made you consider shifting back to India after a long career in the USA? What factors did you consider?
    01:03:06 Any skills you would suggest students nurture apart from technical expertise?
    01:05:40 There's always a concern about AI usage and automation. Where do you think the balance lies?
    Dr. Manish Gupta's Homepage:
    https://www.iiitb.ac.in/faculty/manish-gupta
    https://research.google/people/106704/
    About the Host:
    Jay is a Ph.D. student at Arizona State University, doing research on building Interpretable AI models for Medical Diagnosis.
    Jay Shah: https://www.linkedin.com/in/shahjay22/
    You can reach out to https://www.public.asu.edu/~jgshah1/ for any queries.
    Stay tuned for upcoming webinars!
    #research #ai #machinelearning #googleresearch #india #aiforsocialgood
    ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    Benefits of understanding Theory of Deep Learning | Dr. Hanie Sedghi, Google Brain

    Play Episode Listen Later Nov 6, 2021 53:26


    Hanie is a senior research scientist at Google Brain working on research problems related to understanding and improving deep learning techniques. She designs algorithms with theoretical guarantees that also work efficiently in real-world applications. Prior to that, she was a research scientist at the Allen Institute for AI, and before that a postdoctoral fellow at UC Irvine. She graduated from USC with a Ph.D., with minors in Mathematics.
    Dr. Hanie Sedghi's links:
    Twitter: https://twitter.com/haniesedghi?ref_src=twsrc%5Etfw
    Homepage: https://haniesedghi.com/
    About the Host:
    Jay is a Ph.D. student at Arizona State University, doing research on building Interpretable AI models for Medical Diagnosis.
    Jay Shah: https://www.linkedin.com/in/shahjay22/
    You can reach out to https://www.public.asu.edu/~jgshah1/ for any queries.
    Stay tuned for upcoming webinars!
    #theoryofmachinelearning #deeplearning #ai #machinelearning #fundamentals
    ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    CNNs & ViTs (Vision Transformers) - Comparing the internal structures, Maithra Raghu, Google

    Play Episode Listen Later Oct 29, 2021 5:21


    Do Vision Transformers work in the same way as CNNs? Do the internal representational structures of ViTs and CNNs differ? An in-depth analysis article: https://arxiv.org/pdf/2108.08810.pdf
    Listen to the full conversation here: https://youtu.be/htnJxcwJqeA
    Dr. Maithra Raghu is a senior research scientist at Google working on analyzing the internal workings of deep neural networks so that we can deploy them better while keeping humans in the loop. She recently graduated from Cornell University with a Ph.D. in CS, and previously graduated from Cambridge University with a BA and Master's in Mathematics. She has received multiple awards for her research work, including the Forbes 30 Under 30.
    Maithra's Homepage: https://maithraraghu.com
    About the Host:
    Jay is a Ph.D. student at Arizona State University, doing research on building Interpretable AI models for Medical Diagnosis.
    Jay Shah: https://www.linkedin.com/in/shahjay22/
    You can reach out to https://www.public.asu.edu/~jgshah1/ for any queries.
    Stay tuned for upcoming webinars!
    #explainableai #reliableai #robustai #machinelearning
    ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    How to find a Research Topic that interests you?

    Play Episode Listen Later Oct 4, 2021 15:15


    How to decide on and choose a research topic or thesis to work on that interests you and is also relevant to current research directions.
    Full episodes with:
    Maithra, Google: https://youtu.be/htnJxcwJqeA
    Natasha, Google: https://youtu.be/8XpCnmvq49s
    Milind, Google: https://youtu.be/eqwF3NpZFb4
    Hima, Harvard University: https://youtu.be/8Ym4oYTd8Fo
    Ishan, Facebook AI: https://youtu.be/Pb5RQAEtznk
    About the Host:
    Jay is a PhD student at Arizona State University, doing research on building Interpretable AI models for Medical Diagnosis.
    Jay Shah: https://www.linkedin.com/in/shahjay22/
    You can reach out to https://www.public.asu.edu/~jgshah1/ for any queries.
    Stay tuned for upcoming webinars!
    #machinelearning #ai #phd #research #thesis
    ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    Learning the internals of Machine Learning systems and tips for PhD | Maithra Raghu, Google Brain

    Play Episode Listen Later Sep 30, 2021 75:27


    Dr. Maithra Raghu is a senior research scientist at Google working on analyzing the internal workings of deep neural networks so that we can deploy them better while keeping humans in the loop. She recently graduated from Cornell University with a PhD in CS, and previously graduated from Cambridge University with a BA and Master's in Mathematics. She has received multiple awards for her research work, including the Forbes 30 Under 30.
    Questions that we cover:
    00:00:00 Introductions
    00:01:00 To understand more about your research interests, can you tell us what kind of research questions you are interested in while working at Google Brain?
    00:04:45 What interested you about it and how did you get started?
    00:15:00 What is one thing that surprises/puzzles you about deep learning effectiveness to date?
    00:22:05 What's the difference between being a researcher in academia/a PhD student vs being a researcher at a big organization (Google)?
    00:28:35 In what use cases do you think ViTs might be a good choice for image analysis over CNNs, and where do you think CNNs still have an undoubted advantage?
    00:37:15 Why does ViT perform better than ResNet only on larger datasets and not on mid-sized or smaller datasets?
    00:43:55 In regards to medical imaging tasks, would it be theoretically wrong to pre-train the model on dataset A and fine-tune it on dataset B?
    00:47:35 Do you think ViT or transformer-based models already have/have the potential to cause a paradigm shift in the way we approach imaging tasks? Why?
    00:5:25 Medical datasets are often limited in size; what are your views on tackling these problems in the near future?
    00:55:55 From an internal representation perspective, do you think deep neural networks can have the ability of reasoning?
    00:58:20 How did you decide on your own PhD research topic? Advice you would give to graduate researchers trying to find a research problem for their thesis?
    01:04:00 Many times researchers/students feel stuck or overwhelmed with a particular project they are working on; based on your experience, how do you suggest tackling that?
    01:10:35 How do you now, and how did you as a graduate student, keep up with the latest research in ML/DL?
    Maithra's Homepage: https://maithraraghu.com
    Blogpost talked about: https://maithraraghu.com/blog/2020/Reflections_on_my_Machine_Learning_PhD_Journey/
    Her Twitter: https://twitter.com/maithra_raghu
    About the Host:
    Jay is a PhD student at Arizona State University, doing research on building Interpretable AI models for Medical Diagnosis.
    Jay Shah: https://www.linkedin.com/in/shahjay22/
    You can reach out to https://www.public.asu.edu/~jgshah1/ for any queries.
    Stay tuned for upcoming webinars!
    #explainableai #reliableai #robustai #machinelearning
    ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    How can students contribute to Impactful AI projects? | Dr. Milind Tambe, Google Research

    Play Episode Listen Later Sep 23, 2021 4:47


    How can students and researchers work on AI projects that create a real impact and difference in society? Watch the full podcast with Dr. Tambe here: https://youtu.be/eqwF3NpZFb4
    Dr. Milind Tambe is a Professor of Computer Science at Harvard University and Director of the Center for Research in Computation and Society. He is also the Director of AI for Social Good at Google Research in India. He has been leading and working on projects that create real impact, ranging from wildlife conservation to public health and safety, using AI techniques.
    Prof. Milind Tambe's Homepage: https://teamcore.seas.harvard.edu/tambe
    About the Host:
    Jay is a PhD student at Arizona State University, doing research on building Interpretable AI models for Medical Diagnosis.
    Jay Shah: https://www.linkedin.com/in/shahjay22/
    You can reach out to https://www.public.asu.edu/~jgshah1/ for any queries.
    Stay tuned for upcoming webinars!
    #aiforsocialgood #googleai #ai #machinelearning #socialimpact
    ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    Challenges and beauties of building a Tech Startup | Nasrin Mostafazadeh

    Play Episode Listen Later Jul 20, 2021 5:21


    Watch the full conversation with Nasrin here: https://youtu.be/59kRUmhA5yI
    Nasrin is the co-founder of Verneek, a deep-tech startup, and has been in the AI startup space for the past 5 years. Before that, she was a senior research scientist at Elemental Cognition and BenevolentAI, and prior to that she graduated with a Ph.D. from the University of Rochester. Her major research interests are in building intelligent systems that can demonstrate commonsense reasoning and generate causal explanations in order to improve human-AI collaboration. She was featured in the Forbes 30 Under 30 for her work in NLU.
    Nasrin's LinkedIn Profile: https://www.linkedin.com/in/nasrinm/
    Her startup Verneek: https://www.linkedin.com/company/verneek/
    About the Host:
    Jay is a Ph.D. student at Arizona State University, doing research on building Interpretable AI models for Medical Diagnosis.
    Jay Shah: https://www.linkedin.com/in/shahjay22/
    You can reach out to https://www.public.asu.edu/~jgshah1/ for any queries.
    Stay tuned for upcoming podcasts!
    ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    Intuition for research in Social Reinforcement Learning | Natasha Jaques

    Play Episode Listen Later Jul 19, 2021 6:52


    How can we build intuition for interdisciplinary fields in order to tackle challenges in social reinforcement learning?
    Natasha Jaques is currently a Research Scientist at Google Brain and a postdoctoral fellow at UC Berkeley, where her research interests are in designing multi-agent RL algorithms with a focus on social reinforcement learning. She received her Ph.D. from MIT and has received multiple awards for research published at venues like ICML and NeurIPS. She has interned at DeepMind and Google Brain, and is an OpenAI Scholars mentor.
    About the Host:
    Jay is a Ph.D. student at Arizona State University, doing research on building Interpretable AI models for Medical Diagnosis.
    Jay Shah: https://www.linkedin.com/in/shahjay22/
    You can reach out to https://www.public.asu.edu/~jgshah1/ for any queries.
    Stay tuned for upcoming webinars!
    ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    Challenges of productionizing Machine Learning Research in Industry | Aarti Bagul

    Play Episode Listen Later Jul 19, 2021 5:35


    Why and where do companies fail at productionizing ML models? Watch the full podcast with Aarti here: https://youtu.be/VWJXiszQpTU
    Aarti is a machine learning engineer at Snorkel AI. Prior to that, she worked closely with Andrew Ng in various capacities. She graduated with a master's in CS from Stanford and a bachelor's in CS and Computer Engineering from New York University, and interned at Microsoft Research with John Langford, where she contributed to Vowpal Wabbit, an open-source project.
    About the Host:
    Jay is a Ph.D. student at Arizona State University, doing research on building Interpretable AI models for Medical Diagnosis.
    Jay Shah: https://www.linkedin.com/in/shahjay22/
    You can reach out to https://www.public.asu.edu/~jgshah1/ for any queries.
    Stay tuned for upcoming webinars!
    ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    Doing a PhD & Excelling at Research | Dr. Hima Lakkaraju

    Play Episode Listen Later Jul 18, 2021 18:47


    Deciding whether to do a Ph.D. or not, and things to focus on while writing your research thesis.
    Watch the full podcast here: https://youtu.be/8Ym4oYTd8Fo
    Dr. Himabindu Lakkaraju is an Assistant Professor at Harvard University, and her major research interests are along the lines of explainability, fairness, and robustness in AI systems. Prior to that, she graduated with a Ph.D. from Stanford and has received multiple awards for her research work.
    Dr. Lakkaraju's homepage: https://himalakkaraju.github.io/
    About the Host:
    Jay is a Ph.D. student at Arizona State University, doing research on building Interpretable AI models for Medical Diagnosis.
    Jay Shah: https://www.linkedin.com/in/shahjay22/
    You can reach out to https://www.public.asu.edu/~jgshah1/ for any queries.
    Stay tuned for upcoming webinars!
    #explainableai #reliableai #robustai #machinelearning
    ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    Founding a Startup, doing a PhD, Interpretability | Dr. Geneviève Patterson

    Play Episode Listen Later Jul 2, 2021 93:22


    Dr. Geneviève Patterson is the head of applied research at VSCO. Prior to that, she was the CTO of TRASH, a video editing company she co-founded that was later acquired by VSCO. She holds a Ph.D. in CS from Brown University with a research focus on video understanding, and was a postdoctoral researcher at Microsoft Research, where she worked on interpreting deep neural networks and much more.
    Geneviève's Homepage: http://genp.github.io/
    About the Host:
    Jay is a Ph.D. student at Arizona State University, doing research on building Interpretable AI models for Medical Diagnosis.
    Jay Shah: https://www.linkedin.com/in/shahjay22/
    You can reach out to https://www.public.asu.edu/~jgshah1/ for any queries.
    Stay tuned for upcoming webinars!
    ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    Using AI for Social Good | Dr. Milind Tambe

    Play Episode Listen Later Jul 2, 2021 33:33


    Dr. Milind Tambe is a Professor of Computer Science at Harvard University and Director of the Center for Research in Computation and Society. He is also the Director of AI for Social Good at Google Research in India. He has been leading and working on projects that create real impact, ranging from wildlife conservation to public health and safety, using AI techniques.
    Time Stamps:
    00:00 Introductions
    01:05 In concrete terms, what projects are you currently working on?
    03:18 Do you think there is a disconnect between the scientific/tech communities and the social sector while trying to make use of AI? If so, who should take more of a lead in closing that gap?
    05:26 Do these applications in any way inspire novelty in the theoretical aspects of Machine Learning research?
    08:15 How do you design and evaluate the impact of these projects?
    11:20 How do you define Interpretable or Explainable AI at the intersection of social sciences and AI?
    16:50 Concerns about AI usage and automation. Where do you think the balance lies?
    19:50 What can students and researchers do to work on projects that have real impact, not just pure novelty?
    23:45 Roadblocks to more widespread adoption of AI tools for social good?
    29:18 What motivates you personally about using AI for social good, and not just the theoretical exploration of new techniques?
    Prof. Milind Tambe's Homepage: https://teamcore.seas.harvard.edu/tambe
    About the Host:
    Jay is a Ph.D. student at Arizona State University, doing research on building Interpretable AI models for Medical Diagnosis.
    Jay Shah: https://www.linkedin.com/in/shahjay22/
    You can reach out to https://www.public.asu.edu/~jgshah1/ for any queries.
    Stay tuned for upcoming webinars!
    #aiforsocialgood #googleai #ai #machinelearning #socialimpact
    ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    "What got you into AI?" - Answers from 30 different researchers & engineers

    Play Episode Listen Later Jun 26, 2021 35:17


    “What got you into AI? And what part of it really interests you?” This is a simple question I have asked more than 30 researchers from different backgrounds and application areas on this podcast to learn more about their motivation. I compiled their answers into one episode, in the hope that it proves insightful for anyone who wants to learn what drives researchers to explore and work in Machine Learning, and what got them started.
    About the Host:
    I am a Ph.D. student at Arizona State University, doing research on building Interpretable AI models for Medical Diagnosis.
    LinkedIn: https://www.linkedin.com/in/shahjay22/
    You can reach out to https://www.public.asu.edu/~jgshah1/ for any queries.
    Stay tuned for upcoming podcasts!
    #machinelearning #artificialintelligence #research
    ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    Choosing to build an AI startup instead of Corporate Jobs | Nasrin Mostafazadeh

    Play Episode Listen Later Jun 26, 2021 5:20


    Read more about this in her Twitter thread: https://twitter.com/nasrinmmm/status/1374372131207806976
    Nasrin is the co-founder of Verneek, a deep-tech startup, and has been in the AI startup space for the past 5 years. Before that, she was a senior research scientist at Elemental Cognition and BenevolentAI, and prior to that she graduated with a Ph.D. from the University of Rochester. Her major research interests are in building intelligent systems that can demonstrate commonsense reasoning and generate causal explanations in order to improve human-AI collaboration. She was featured in the Forbes 30 Under 30 for her work in NLU. We talk about her background and story in AI, some details of her research work, and insights about being in the AI startup space.
    Nasrin's LinkedIn Profile: https://www.linkedin.com/in/nasrinm/
    About the Host:
    Jay is a Ph.D. student at Arizona State University, doing research on building Interpretable AI models for Medical Diagnosis.
    Jay Shah: https://www.linkedin.com/in/shahjay22/
    You can reach out to https://www.public.asu.edu/~jgshah1/ for any queries.
    Stay tuned for upcoming podcasts!
    #machinelearning #artificialintelligence #aistartups #buildingastartup
    ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    How did I get into Machine Learning research? | Sara Hooker, Azalia Mirhoseini & Natasha Jaques - Google

    Play Episode Listen Later Jun 26, 2021 11:28


    Three research scientists from Google share how they became interested in Machine Learning research and how they got started with it.
    Watch full podcasts with each of these speakers:
    Azalia Mirhoseini: https://youtu.be/5LCfH8YiOv4
    Sara Hooker: https://youtu.be/MHtbZls2uts
    Natasha Jaques: https://youtu.be/8XpCnmvq49s
    About the Host:
    Jay is a Ph.D. student at Arizona State University, doing research on building Interpretable AI models for Medical Diagnosis.
    Jay Shah: https://www.linkedin.com/in/shahjay22/
    You can reach out to https://www.public.asu.edu/~jgshah1/ for any queries.
    Stay tuned for upcoming webinars!
    ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    Shreya Shankar, ML Engineer @Viaduct on Applied ML research & more

    Play Episode Listen Later Jun 26, 2021 51:36


    Shreya is currently a graduate student at Stanford and an ML engineer at Viaduct.ai. She has previously interned at Google Brain and Facebook. She talks about her experience as an applied ML engineer and about making ML models work in the real world.
    Shreya's homepage: https://www.shreya-shankar.com
    About the Host:
    Jay is a Ph.D. student at Arizona State University, doing research on building Interpretable AI models for Medical Diagnosis.
    Jay Shah: https://www.linkedin.com/in/shahjay22/
    You can reach out to https://www.public.asu.edu/~jgshah1/ for any queries.
    Stay tuned for upcoming webinars!
    ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    Machine Learning Engineer vs Data Scientist | Jineet Doshi, Data Scientist @Intuit

    Play Episode Listen Later Jun 26, 2021 4:22


    Full episode available here: https://youtu.be/V1mDR4x_JY0
    Jineet, a Data Scientist at Intuit and a graduate of Carnegie Mellon University, shares insights about data science and machine learning drawn from his own experience.
    The aim of these webinars is to connect you with the brightest minds in Machine Learning and Data Science so that you can learn how to break into the field and build an incredible career!
    You can reach out to https://www.public.asu.edu/~jgshah1/ for any queries.
    Stay tuned for upcoming webinars!
    ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    What drives Research: Pursuit of Theory or Application? | Ishan & Michal, Facebook AI

    Play Episode Listen Later Jun 23, 2021 5:29


    What really drives innovation in research projects? Is it a pure pursuit of theory, or is it application-oriented?
    Ishan is a Research Scientist at Facebook AI. Much of his recent research revolves around self-supervised learning, and he is known for works including SwAV and PIRL. Michal Drozdzal is also a Research Scientist at Facebook AI, with major research interests in computer vision, machine learning, and medical image analysis.
    The full conversation with Ishan: https://youtu.be/uOVxndMyasc
    And Michal: https://youtu.be/9gKwux0r0KY
    About the Host:
    Jay is a Ph.D. student at Arizona State University, doing research on building Interpretable AI models for Medical Diagnosis.
    Jay Shah: https://www.linkedin.com/in/shahjay22/
    You can reach out to https://www.public.asu.edu/~jgshah1/ for any queries.
    Stay tuned for upcoming podcasts!
    #artificialintelligence #machinelearning #aiinhealthcare #medicalimaging
    ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***
