Is there a stable state the US and China can hope for on the road to AGI? To discuss, we have on today Dan Hendrycks. A CS PhD, Dan runs the Center for AI Safety and is an advisor at xAI and Scale AI. Here's his superintelligence strategy: https://www.nationalsecurity.ai/ For more direct lessons from the Cold War for today's US-China dynamics, check out the show I did with Hal Brands (https://www.chinatalk.media/p/cold-war-lessons-for-us-china-today). Learn more about your ad choices. Visit megaphone.fm/adchoices
Vinu Sankar Sadasivan is a CS PhD ... currently working as a full-time Student Researcher at Google DeepMind on jailbreaking multimodal AI models. Robustness, Detectability, and Data Privacy in AI // MLOps Podcast #289 with Vinu Sankar Sadasivan, Student Researcher at Google DeepMind. // Abstract Recent rapid advancements in Artificial Intelligence (AI) have made it widely applicable across domains, from autonomous systems to multimodal content generation. However, these models remain susceptible to significant security and safety vulnerabilities. Such weaknesses can enable attackers to jailbreak systems, performing harmful tasks or leaking sensitive information. As AI becomes increasingly integrated into critical applications like autonomous robotics and healthcare, ensuring AI safety grows ever more important. Understanding the vulnerabilities in today's AI systems is crucial to addressing these concerns. // Bio Vinu Sankar Sadasivan is a final-year Computer Science PhD candidate at the University of Maryland, College Park, advised by Prof. Soheil Feizi. His research focuses on security and privacy in AI, with a particular emphasis on AI robustness, detectability, and user privacy. Currently, Vinu is a full-time Student Researcher at Google DeepMind, working on jailbreaking multimodal AI models. Previously, he was a Research Scientist intern at Meta FAIR in Paris, where he worked on AI watermarking. Vinu is a recipient of the 2023 Kulkarni Fellowship and has earned several distinctions, including the prestigious Director's Silver Medal. He completed a Bachelor's degree in Computer Science & Engineering at IIT Gandhinagar in 2020. Prior to his PhD, Vinu gained research experience as a Junior Research Fellow in the Data Science Lab at IIT Gandhinagar and through internships at Caltech, Microsoft Research India, and IISc. // MLOps Swag/Merch https://shop.mlops.community/ // Related Links Website: https://vinusankars.github.io/ --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Vinu on LinkedIn: https://www.linkedin.com/in/vinusankars/
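To make the kind of jailbreak vulnerability the abstract describes concrete, here is a minimal sketch of an automated probing loop. Everything in it is a hypothetical stand-in (the `query_model` callable, the refusal markers, the suffix list), not the guest's actual method:

```python
# Hedged sketch: probe a chat model for jailbreaks by appending
# candidate adversarial suffixes to a disallowed request and checking
# whether the reply still refuses. `query_model` is an assumed
# callable (str -> str), not a real library API.
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "i am sorry")

def probe_jailbreaks(query_model, base_prompt, candidate_suffixes):
    successes = []
    for suffix in candidate_suffixes:
        reply = query_model(f"{base_prompt} {suffix}")
        if not any(marker in reply.lower() for marker in REFUSAL_MARKERS):
            # No refusal detected: flag this suffix for human review.
            successes.append(suffix)
    return successes
```

Real red-teaming pipelines typically replace the keyword check with a learned judge model, but the loop structure is the same.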
Guanhua Wang is a Senior Researcher on the DeepSpeed team at Microsoft. Before Microsoft, Guanhua earned his Computer Science PhD from UC Berkeley. Domino: Communication-Free LLM Training Engine // MLOps Podcast #278 with Guanhua "Alex" Wang, Senior Researcher at Microsoft. // Abstract Given the popularity of generative AI, Large Language Models (LLMs) often consume hundreds or thousands of GPUs to parallelize and accelerate the training process. Communication overhead becomes more pronounced when training LLMs at scale. To eliminate communication overhead in distributed LLM training, we propose Domino, which provides a generic scheme to hide communication behind computation. By breaking the data dependency of a single batch of training into smaller independent pieces, Domino pipelines these independent pieces of training and provides a generic strategy for fine-grained communication and computation overlapping. Extensive results show that, compared with Megatron-LM, Domino achieves up to 1.3x speedup for LLM training on Nvidia DGX-H100 GPUs. // Bio Guanhua Wang is a Senior Researcher on the DeepSpeed team at Microsoft. His research focuses on large-scale LLM training and serving. Previously, he led the ZeRO++ project at Microsoft, which helped cut model training time by more than half inside Microsoft and LinkedIn. He also led, and was a major contributor to, Microsoft's Phi-3 model training. He holds a CS PhD from UC Berkeley, advised by Prof. Ion Stoica. // MLOps Swag/Merch https://shop.mlops.community/ // Related Links Website: https://guanhuawang.github.io/ DeepSpeed hiring: https://www.microsoft.com/en-us/research/project/deepspeed/opportunities/ Large Model Training and Inference with DeepSpeed // Samyam Rajbhandari // LLMs in Prod Conference: https://youtu.be/cntxC3g22oU --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Guanhua on LinkedIn: https://www.linkedin.com/in/guanhua-wang/ Timestamps: [00:00] Guanhua's preferred coffee [00:17] Takeaways [01:36] Please like, share, leave a review, and subscribe to our MLOps channels! [01:47] Phi model explanation [06:29] Small Language Model optimization challenges [07:29] DeepSpeed overview and benefits [10:58] Crazy unimplemented AI ideas [17:15] Post-training vs QAT [19:44] Quantization over distillation [24:15] Using LoRAs [27:04] LLM scaling sweet spot [28:28] Quantization techniques [32:38] Domino overview [38:02] Training performance benchmark [42:44] Data dependency-breaking strategies [49:14] Wrap up
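The idea the abstract describes (break a batch's data dependency into independent pieces so communication can hide behind computation) is easiest to see with a toy example. The PyTorch snippet below is a minimal sketch of the underlying primitive, async all-reduce launched as each gradient becomes ready; it assumes an already-initialized process group and is an illustration of the general overlap technique, not Domino's actual engine:

```python
# Minimal sketch of hiding gradient communication behind backward
# compute: as each parameter's gradient is produced, copy it into a
# private buffer and launch an async all-reduce on it, so the network
# transfer overlaps with the rest of the backward pass.
import torch.distributed as dist

def backward_with_overlap(model, loss):
    pending = []  # (param, reduce buffer, async work handle)

    def make_hook(param):
        def hook(grad):
            buf = grad.detach().clone()                   # private buffer
            handle = dist.all_reduce(buf, async_op=True)  # returns immediately
            pending.append((param, buf, handle))
            return grad
        return hook

    hooks = [p.register_hook(make_hook(p))
             for p in model.parameters() if p.requires_grad]
    loss.backward()                    # backward compute runs while reduces fly
    world = dist.get_world_size()
    for param, buf, handle in pending:
        handle.wait()                  # drain any still-in-flight communication
        param.grad = buf.div_(world)   # install the averaged gradient
    for h in hooks:
        h.remove()
```

Production systems like DeepSpeed and PyTorch DDP bucket gradients and schedule the pipeline far more carefully, but handle-based overlap is the primitive they build on.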
In each of our past 18 episodes, whenever a guest's job prospects came up, someone would always say "you could also switch to coding." So for this episode we invited two Computer Science PhD students from "the end of the universe" to share their research and their lives! Harry, a first-year PhD student at one of Hong Kong's top three universities, studies the large models that let text "speak" on its own. A-Zhang is a second-year PhD student at UC Riverside working on systems and networking. In this episode we discuss: how can the brain-burning, money-burning large AI models be made cheaper? How does a PhD in Hong Kong differ from one in North America or mainland China? Working fully remotely, have they lost all boundaries between work and life, or do they clock out at five on the dot? And should computer scientists with different ambitions take a job or pursue a PhD? Follow Harry and A-Zhang into the world of CS PhD students! Timestamps: 01:51 Master systems and networking and drive GPT's price down! 04:35 Text to speech: beware of technology used for evil. 07:29 A grand tour of the field of computer science 10:24 CS = AI = Machine Learning? Not at all 13:30 What is a PhD really for? Science, money, or a higher career ceiling 16:55 Attention is all you need? Or is money all you need! 20:10 Dumpling-boiling segment: does real failure exist in computer science? 21:30 The end of the universe is CS; is the end of CS industry? 24:18 Is Computer Science a real science? 29:17 My dopamine comes from debugging 32:12 The GPT era: the best programming language of 2024 is English 35:19 Hong Kong, mainland China, North America: is a PhD a team effort or a solo fight? 37:45 A programmer's day: work from home, from anywhere, anytime. 40:21 Do research or start a company? How many bosses truly have a passion for research? 47:25 PhD, master's, or a job? How should computer scientists with different ambitions choose? Major mentioned in this episode: Computer Science. BGM by Windows XP Type Beat
Ezz Tahoun is a distinguished cybersecurity data scientist who won AI and innovation awards at Yale, Princeton, and Northwestern, as well as innovation awards from Canada's Communications Security Establishment, Microsoft US, Trustwave US, PIA US, NATO, and more. He has run data science innovation programs and projects for Orange Cyberdefense, Forescout Technologies, Royal Bank of Canada, governments, and Huawei Technologies US. He has published 20 papers, countless articles, and 15 open-source projects in the domain. At 19 he started his CS PhD in one of the world's top five labs for cyber and AI, at the prestigious University of Waterloo, where he published numerous papers and became a reviewer for top conferences. His designations include SANS/GIAC Advisory Board, aCCISO, CISM, CRISC, GCIH, GFACT, GSEC, CEH, GCP Professional Cloud Architect, PMP, BEng, and MMath. He was an adjunct professor of cyber defense and warfare at Toronto's school of management. Ezz co-founded Cypienta, an on-prem, rule-less event correlation and contextualization solution that plugs into SIEMs, XDRs, and SOARs to help SOCs find the alerts, logs, and events relevant to any investigation in real time. Cypienta is backed by Techstars, ORNL, TVA, the University of Tennessee System, and supported by 35Mules-NextEra, BAE Systems, and others. Ezz authored the MITRE Attack Flow Detector. For more SecTools podcast episodes, visit https://infoseccampus.com
Stanford's "Smallville" virtual town, which went viral at the start of the year, has now been officially open-sourced. Silicon Valley is going all-in on AI agents, hoping to create a real yet delightful "Westworld," and even to bring AI agents into ordinary households, delivering huge gains in how people live and work. This episode is a crossover between 科技早知道 and "OnBoard!", co-hosted by 硅谷徐老师 (Howie Xu) and collaborating host Monica. Our two guests, leading researchers in AI agents and large models who have worked or interned at OpenAI, are Jim Fan, Senior AI Research Scientist at NVIDIA, and Hanjun Dai (戴涵俊), researcher at Google DeepMind. When will AI agents enter our lives, and what challenges stand in the way? Why are today's large models more like "alchemy"? Why did Llama 2 spark a burst of innovation the moment it was released? And on the road of AI research, how do you keep your imagination from being "limited by poverty"? If you want to hear what's coming next from front-line Silicon Valley AI practitioners, don't miss this episode. (Given the total length, the conversation is released in two parts; this is part one, with part two out the following day. Because of the technical subject matter and the guests' speaking habits, you may hear more English terminology than usual; if anything is unclear, leave a question on 小宇宙 and we will do our best to answer!) A CS PhD's license plate: https://files.fireside.fm/file/fireside-uploads/images/4/4931937e-0184-4c61-a658-6b03c254754d/hPh4THfu.jpg (image: CS PHD license plate) Main topics [03:55] Guest introductions and their research areas/projects [11:48] What are the core building blocks of an AI agent? [16:00] What AI-agent experiments are happening in enterprise settings, and what challenges have they run into? [21:43] AI NPCs and the Stanford virtual town: AI-agent innovation in gaming [31:19] When will AI's code-writing accuracy catch up with human engineers? [39:11] In a future that makes full use of AI agents, how will the world of software change? [47:48] Will the AI-agent market be winner-take-all? [54:20] Why did Meta's Llama 2 immediately trigger a wave of large-model innovation? [61:01] OpenAI tried your paper's idea six months ago: the gap between open- and closed-source models will only widen [63:57] Large-model work is like alchemy, and top talent circulates among OpenAI, Google, and Anthropic [67:57] As foundation models get ever stronger, will domain-specific models keep their moat? Guests: Jim Fan, Senior AI Research Scientist at NVIDIA, former OpenAI intern, PhD from Stanford University; Hanjun Dai (戴涵俊), researcher at Google DeepMind, previously at OpenAI, PhD from Georgia Tech. Hosts: 硅谷徐老师 (Howie Xu), serial Silicon Valley entrepreneur, AI executive, guest lecturer at Stanford GSB, and host of 科技早知道 | Twitter: @H0wie_Xu | WeChat: 硅谷云 | AI blog (English): howiexu.substack.com; Monica, host of "OnBoard!", USD-fund VC investor, formerly of AWS's Silicon Valley team and an AI startup | WeChat: M小姐研习录 (ID: MissMStudy) | 即刻: 莫妮卡同学. Related episodes: Databricks co-founder: from taking on Snowflake to how humans coexist with AI | S7E21 硅谷徐老师 (https://guiguzaozhidao.fireside.fm/20220174); How far is AGI? Conversations with large-model experts | S7E11 硅谷徐老师 x OnBoard! (https://guiguzaozhidao.fireside.fm/20220162); AI star Yangqing Jia's first interview after leaving Alibaba: why his startup isn't building large models | 硅谷徐老师 S7E07 (https://guiguzaozhidao.fireside.fm/20220158). Join us: 声动活泼 is hiring a show producer and a sound designer; see the listing (https://sourl.cn/j8tk2g). Membership: join the 声动胡同 membership program (https://sourl.cn/iCVg6n) to support our independent, fearless work and help more people hear these voices. Pay ¥365/year (https://sourl.cn/ZPb9Dm) to become a 声动胡同 resident, with member-only content and community events; or join as a free visitor (https://sourl.cn/ZPb9Dm) to sample member content and the community. More membership details: https://sourl.cn/4xPkEf. Credits: producers 杜晨, 刘灿, 东君, 闻晓 (intern); post-production 迪卡普里鑫, 六工 (intern); operations 瑞涵, Babs; design 饭团. Business inquiries: https://sourl.cn/6vdmQT. About 声动活泼: using sound to engage the world, 声动活泼 aims to provide a steady stream of food for thought. Our other shows: 声东击西 (https://etw.fm/episodes), What's Next|科技早知道 (https://guiguzaozhidao.fireside.fm/episodes), 声动早咖啡 (https://sheng-espresso.fireside.fm/), 商业WHY酱 (https://msbussinesswhy.fireside.fm/), 跳进兔子洞 (https://therabbithole.fireside.fm/), 反潮流俱乐部 (https://fanchaoliuclub.fireside.fm/), 泡腾 VC (https://popvc.fireside.fm/), 吃喝玩乐了不起 (https://urbanfloat.fireside.fm/). For transcripts of popular episodes, follow the WeChat account 声动活泼. Find us on 即刻 (https://okjk.co/Qd43ia) or email ting@sheng.fm. If you enjoy the show, consider a tip (https://etw.fm/donation) or recommending us to friends. Special Guests: Jim Fan, Monica, and 戴涵俊 (Hanjun Dai).
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 13 background claims about EA, published by Akash on September 7, 2022 on The Effective Altruism Forum. I recently attended EAGxSingapore. In 1-1s, I realized that I have picked up a lot of information from living in an EA hub and surrounding myself with highly-involved EAs. In this post, I explicitly lay out some of this information. I hope that it will be useful for people who are new to EA or people who are not living in an EA hub. Here are some things that I believe to be important "background claims" that often guide EA decision-making, strategy, and career decisions. (In parentheses, I add things that I believe, but these are "Akash's opinions" as opposed to "background claims.") Note that this perspective is based largely on my experiences around longtermists & the Berkeley AI safety community. General 1. Many of the most influential EA leaders believe that there is a >10% chance that humanity goes extinct in the next 100 years. (Several of them have stronger beliefs, like a 50% chance of extinction in the next 10-30 years). 2. Many EA leaders are primarily concerned about AI safety (and to a lesser extent, other threats to humanity's long-term future). Several believe that artificial general intelligence is likely to be developed in the next 10-50 years. Much of the value of the present/future will be shaped by the extent to which these systems are aligned with human values. 3. Many of the most important discussions, research, and debates are happening in-person in major EA hubs. (I claim that visiting an EA hub is one of the best ways to understand what's going on, engage in meaningful debates about cause prioritization, and receive feedback on your plans.) 4. Several "EA organizations" are not doing highly impactful work, and there are major differences in impact between & within orgs. Some people find it politically/socially incorrect to point out publicly which organizations are failing & why. (I claim people who are trying to use their careers in a valuable way should evaluate organizations/opportunities for themselves, and they should not assume that generically joining an "EA org" is the best strategy.) AI Safety 5. Many AI safety researchers and organizations are making decisions on relatively short AI timelines (e.g., artificial general intelligence within the next 10-50 years). Career plans or research proposals that take a long time to generate value are considered infeasible. (I claim that people should think about ways to make their current trajectory radically faster— e.g., if someone is an undergraduate planning a CS PhD, they may want to consider alternative ways to get research expertise more quickly). 6. There is widespread disagreement in AI safety about which research agendas are promising, what the core problems in AI alignment are, and how people should get started in AI safety. 7. There are several programs designed to help people get started in AI safety. Examples include SERI-MATS (for alignment research & theory), MLAB (for ML engineering), the ML Safety Scholars Program (for ML skills), AGI Safety Fundamentals (for AI alignment knowledge), PIBBSS (for social scientists), and the newly-announced Philosophy Fellowship. (I suggest people keep point #6 in mind, though, and not assume that everything they need to know is captured in a well-packaged Program or Reading List). 8. 
There are not many senior AIS researchers or AIS mentors, and the ones who exist are often busy. (I claim that the best way to "get started in AI safety research" is to apply for a grant to spend ~1 month reading research, understanding the core parts of the alignment problem, evaluating research agendas, writing about what you've learned, and visiting an EA hub). 9. People can apply for grants to skill up in AI safety. You do not have to propose an extremely specific project...
This week we are joined by Julius Adebayo. Julius is a CS PhD student at MIT, interested in the safe deployment of ML-based systems as it relates to privacy/security, interpretability, fairness, and robustness. He is motivated by the need to ensure that ML-based systems behave safely when deployed. On this week's episode we discuss how hardware has evolved over time and what that means for deep learning research. We also analyse how microprocessors can aid developments in our understanding of neuroscience. Underrated ML Twitter: https://twitter.com/underrated_ml Julius Adebayo Twitter: https://twitter.com/julius_adebayo Please let us know who you thought presented the most underrated paper in the form below: https://forms.gle/97MgHvTkXgdB41TC8 Links to the papers: "Could a Neuroscientist Understand a Microprocessor?" [paper] "When will computer hardware match the human brain?" [paper]
Are large language models really sentient or conscious? What is explainability (XAI), and how can we create human-aware AI systems for collaborative tasks? Dr. Subbarao Kambhampati sheds some light on these topics, on generating explanations for human-in-the-loop AI systems, and on understanding 'intelligence' in the context of AI systems. He is a Professor of Computer Science at Arizona State University and director of the Yochan lab at ASU, where his research focuses on decision-making and planning, specifically in the context of human-aware AI systems. He has received multiple awards for his research contributions and has been named a fellow of AAAI, AAAS, and ACM, as well as a distinguished alumnus of the University of Maryland and, recently, IIT Madras. Time stamps of conversations: 00:00:40 Introduction 00:01:32 What got you interested in AI? 00:07:40 Definition of intelligence that is not related to human intelligence 00:13:40 Sentience vs intelligence in modern AI systems 00:24:06 Human-aware AI systems for better collaboration 00:31:25 Modern AI becoming natural science instead of an engineering task 00:37:35 Understanding symbolic concepts to generate accurate explanations 00:56:45 Need for explainability, and where 01:13:00 What motivates you for research: the associated application or the theoretical pursuit? 01:18:47 Research in academia vs industry 01:24:38 DALL-E performance and critiques 01:45:40 What makes for a good research thesis? 01:59:06 Different trajectories of a good CS PhD student 02:03:42 Focusing on measures vs metrics 02:15:23 Advice to students on getting started with AI. Articles referenced in the conversation: AI as Natural Science?: https://cacm.acm.org/blogs/blog-cacm/261732-ai-as-an-ersatz-natural-science/fulltext Polanyi's Revenge and AI's New Romance with Tacit Knowledge: https://cacm.acm.org/magazines/2021/2/250077-polanyis-revenge-and-ais-new-romance-with-tacit-knowledge/fulltext More about Prof. Rao: Homepage: https://rakaposhi.eas.asu.edu/ Twitter: https://twitter.com/rao2z About the Host: Jay is a PhD student at Arizona State University. LinkedIn: https://www.linkedin.com/in/shahjay22/ Twitter: https://twitter.com/jaygshah22 Homepage: https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any institution or its affiliates of such video content.***
Welcome to The Nonlinear Library, where we use text-to-speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 2017 Donor Lottery Report, published by AdamGleave on the Effective Altruism Forum. I am the winner of the 2017 donor lottery. This write-up documents my decision process. The primary intended audience is other donors: several of the organisations I decided to donate to still have substantial funding gaps. I also expect this to be of interest to individuals considering working for one of the organisations reviewed. To recap, in a donor lottery many individuals make small contributions. The accumulated sum is then distributed to a randomly selected participant. Your probability of winning is proportional to the amount donated, such that the expected amount of donations you control is the same as the amount you contribute (see the worked sketch after this entry). This is advantageous since the winner (given the extra work, arguably the "loser") of the lottery can justify spending substantially more time evaluating organisations than if he or she were controlling only their smaller personal donations. In 2017, the Centre for Effective Altruism ran a donor lottery, and I won one of the two blocks of $100,000. After careful deliberation, I recommended that CEA make the following regrants: $70,000 to ALLFED. $20,000 to the Global Catastrophic Risk Institute (GCRI). $5,000 to AI Impacts. $5,000 to Wild Animal Suffering Research. In the remainder of this document, I describe the selection process I used, and then provide detailed evaluations of each of these organisations. Selection Process I am a CS PhD student at UC Berkeley, working to develop reliable artificial intelligence. Prior to starting my PhD, I worked in quantitative finance. This document is independent work and is not endorsed by CEA, the organisations evaluated, or by my current or previous employers. I assign comparable value to future and present lives, place significant weight on animal welfare (with high uncertainty), and am risk neutral. I have some moral uncertainty but would endorse these statements with >90% probability. Moreover, I largely endorse the standard arguments regarding the overwhelming importance of the far future. Since I am mostly in agreement with major donors, notably Open Philanthropy, I tried to focus on areas that are the comparative advantage of smaller donors. In particular, I focused my investigation on small organisations with a significant funding gap. To generate an initial list of possible organisations, I (a) wrote down organisations that immediately came to mind, (b) solicited recommendations from trusted individuals in my network, and (c) reviewed the list of 2017 EA grant recipients. I shortlisted four organisations from a superficial review of the longlist. Ultimately, all the organisations on my shortlist were also organisations that immediately came to my mind in (a). This either indicates I already had a good understanding of the space, or that I am poor at updating my opinion. I then conducted a detailed review of each of the shortlisted organisations. This included reading a representative sample of their published work, soliciting comments from individuals working in related areas, and discussion with staff at the organisation until I felt I had a good understanding of their strategy. In the next section, I summarise my current views on the shortlisted organisations. 
The organisations evaluated were provided with a draft of this document and given 14 days to respond prior to publication. I have corrected any mistakes brought to my attention, and have also included a statement from ALLFED; other organisations were provided with the option to include a statement but chose not to do so. Some confidential details have been withheld, either at the request of the organisation or the individual who provided the information. Summary of conclusions I ranked ALLFED above GCRI as I view their research ...
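As a quick sanity check of the expected-value claim in the report above, here is a worked sketch; the $100,000 block size comes from the text, while the $5,000 contribution is an illustrative assumption:

```python
# With a $100,000 block, a $5,000 contribution buys a 5% chance of
# directing the whole block, so the expected amount of donations
# controlled equals the amount contributed.
block = 100_000          # size of one donor-lottery block (from the text)
contribution = 5_000     # illustrative contribution (assumed)
p_win = contribution / block
expected_control = p_win * block
assert expected_control == contribution   # 0.05 * 100,000 == 5,000
print(f"win probability = {p_win:.1%}, expected control = ${expected_control:,.0f}")
```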
How to choose a research topic or thesis that interests you and is also relevant to current research directions. Full episodes: Maithra (Google): https://youtu.be/htnJxcwJqeA Natasha (Google): https://youtu.be/8XpCnmvq49s Milind (Google): https://youtu.be/eqwF3NpZFb4 Hima (Harvard University): https://youtu.be/8Ym4oYTd8Fo Ishan (Facebook AI): https://youtu.be/Pb5RQAEtznk About the Host: Jay is a PhD student at Arizona State University, doing research on building interpretable AI models for medical diagnosis. Jay Shah: https://www.linkedin.com/in/shahjay22/ You can reach out to https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! #machinelearning #ai #phd #research #thesis ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any institution or its affiliates of such video content.***
Deciding whether to do a Ph.D. or not, and things to focus on while writing your research thesis. Watch the full podcast here: https://youtu.be/8Ym4oYTd8Fo Dr. Himabindu Lakkaraju is an Assistant Professor at Harvard University, and her major research interests are explainability, fairness, and robustness in AI systems. She graduated with a Ph.D. from Stanford and has received multiple awards for her research work. Dr. Lakkaraju's homepage: https://himalakkaraju.github.io/ About the Host: Jay is a Ph.D. student at Arizona State University, doing research on building interpretable AI models for medical diagnosis. Jay Shah: https://www.linkedin.com/in/shahjay22/ You can reach out to https://www.public.asu.edu/~jgshah1/ for any queries. Stay tuned for upcoming webinars! #explainableai #reliableai #robustai #machinelearning ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any institution or its affiliates of such video content.***
Reflecting on his days at Rice University, Shlok started his journey there with a group of fellow students. He credits the Rice community with shaping his ideas and opinions about learning over his four years, and he embraces a support system that is always for the students and never against them. Shlok transitioned from Mechanical Engineering to a CS PhD under the guidance of his mentor and has published a research paper on Human-Robot Interaction. He cites several examples of computational power applied to biology and medicine. Shlok is geared up for his research journey; hear more about it in the podcast.
In this episode of GeeksBlabla, we discuss everything about the PhD and how to apply for one in Morocco with our guests Karim, Mouhssine, and Mohammed, who all have prior PhD experience. Guests Karim Benzidane Mouhssine Lakhdissi Mohammed Ez-zarghili Notes 0:02 - Introduction 0:03 - What is a PhD, how long does it take, what is its objective, and how do you apply for a CS PhD in Morocco? 0:06 - Do you have any advice for choosing a subject and a supervisor? 0:09 - Advice for getting additional funding if possible 0:12 - When you apply for a PhD at a university, are you automatically assigned to another institution? 0:13 - Is there a precise time of year when we can start working on a PhD? 0:20 - What are the objectives and responsibilities of a PhD student during their studies? 0:32 - One of the requirements to apply for a PhD is having indexed articles 0:38 - Do salaried PhD students pay subscription fees to the PhD program? 0:39 - Any advice on how to write a scientific article? 0:42 - Different categories of academic journals. How do you gauge the reputation (or impact) of a given journal? Is there a ranking of academic journals? How is a researcher's impact calculated? 0:49 - Work opportunities for PhDs in Morocco besides a career in academia. Does having a PhD make any difference for working at a multinational company in Morocco like Oracle or Microsoft? 1:00 - Is it difficult to publish an article? 1:02 - For CS PhDs, are there subjects that span other fields (industrial, for example)? 1:03 - Is math a principal requirement for applying to a PhD in CS? 1:04 - Are there any collaborations between Moroccan universities and foreign universities in computer science research? 1:22 - Recent important computer science research from Moroccan universities, and the leading Moroccan universities in the field 1:25 - Why aren't industries pushing subjects to universities? 1:44 - Wrap up & goodbye Links Oracle Labs projects IBM PhD Fellowship Award program Computer search & education portal Erasmus Mundus Scimago Journal & Country Rank Prepared and Presented by Ismail Tlemçani Meriem Zaid Hamza Makraz
Josh Tobin holds a CS PhD from UC Berkeley, which he completed in four years while also working at OpenAI as a research scientist. He focused on robotic perception and control and contributed to the famous Rubik's Cube robot hand video. He co-organizes the phenomenal Full Stack Deep Learning course and is now working on a new stealth startup. Learn more about Josh: http://josh-tobin.com/ https://twitter.com/josh_tobin_ Want to level up your skills in machine learning and software engineering? Join the ML Engineered Newsletter: https://mlengineered.ck.page/943aa3fd46 Comments? Questions? Submit them here: https://charlie266.typeform.com/to/DA2j9Md9 Follow Charlie on Twitter: https://twitter.com/CharlieYouAI Take the Giving What We Can Pledge: https://www.givingwhatwecan.org/ Subscribe to ML Engineered: https://mlengineered.com/listen Timestamps: 01:32 Follow Charlie on Twitter (twitter.com/charlieyouai) 02:43 How Josh got started in CS and ML 11:05 Why Josh worked on ML for robotics 15:03 ML for Robotics research at OpenAI 28:20 Josh's research process 34:56 Why putting ML into production is so difficult 44:46 What Josh thinks the MLOps landscape will look like 49:49 Common mistakes that production ML teams and companies make 53:11 How ML systems will be built in the future 59:37 The most valuable skills that ML engineers should develop 01:03:50 Rapid Fire Questions Links: Full Stack Deep Learning: https://course.fullstackdeeplearning.com/ Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World: https://arxiv.org/abs/1703.06907 Domain Randomization and Generative Models for Robotic Grasping: https://arxiv.org/abs/1710.06425 DeepMind Generative Query Network (GQN) paper: https://deepmind.com/blog/article/neural-scene-representation-and-rendering Geometry-Aware Neural Rendering: https://arxiv.org/abs/1911.04554 Josh's PhD thesis: https://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-104.pdf OpenAI Rubik's Cube Robot Hand video: https://www.youtube.com/watch?v=x4O8pojMF0w Weights and Biases interview with Josh: https://www.wandb.com/podcast/josh-tobin Designing Data-Intensive Applications: https://www.oreilly.com/library/view/designing-data-intensive-applications/9781491903063/ Creative Selection: http://creativeselection.io/
Get to know Nancy Amato, the first woman to lead the Department of Computer Science at Illinois. In addition to some interesting personal background, she discusses her research in robotics, how the computer science field has become even more interdisciplinary, the success of the CS + X degree, and the upcoming Rising Stars Workshop, a gathering of top female CS PhD students.
Support these videos: http://pgbovine.net/support.htm
http://pgbovine.net/PG-Vlog-204-casual-dismissals.htm
- [PG Vlog #203 - finding computer science Ph.D. programs to apply to (csrankings.org can help)](http://pgbovine.net/PG-Vlog-203-applying-to-CS-PhD-programs.htm)
- [my response to Kathleen's original tweet](https://twitter.com/pgbovine/status/1039898214814478336) (see my webpage for archived version)
- [my lessons-learned tweet](https://twitter.com/pgbovine/status/1039898993877168129) (see my webpage for archived version)
- [PG Vlog #50 - The Importance of Good Mentorship](http://pgbovine.net/PG-Vlog-50-good-mentorship.htm)
- [PG Vlog #51 - Word Choice](http://pgbovine.net/PG-Vlog-51-word-choice.htm)
Recorded: 2018-09-12
Support these videos: http://pgbovine.net/support.htm
http://pgbovine.net/PG-Vlog-203-applying-to-CS-PhD-programs.htm
- [Twitter thread that inspired this video](https://twitter.com/pgbovine/status/1039555870286262272) (see my webpage for archived version)
- [PG Podcast - Episode 38 - Chris Martens on assistant professoring](http://pgbovine.net/PG-Podcast-38-Chris-Martens.htm)
- [CSRankings: Computer Science Rankings](http://csrankings.org/)
Caveats: CSRankings may not fully represent all research areas within computer science, especially interdisciplinary areas (e.g., for technical games research, check out [Institutions Active in Technical Games Research](http://www.kmjn.org/game-rankings/)). CSRankings may also not capture the work of certain faculty, especially those who publish in non-included venues or who have affiliations with other types of organizations. Remember that Gödel's Incompleteness Theorems state that *any* rankings webpage will necessarily be incomplete.
Recorded: 2018-09-11
What is a depressing fact you've realized after/during your PhD? Answer by Rishabh Jain, MIT PhD (written Apr 8; upvoted by Jessica Su, CS PhD student at Stanford, and Steve Johnston, MBA from Wharton). I met my old advisor last year at an alumni event for the department I got my PhD in. We spoke for several minutes about how people in the lab were doing, how some new work was getting published in hot journals, and about the alumni who are now faculty. What was somewhat sad to me was that there was no conversation about the non-faculty alumni... it really says something about the strong bias faculty often have toward staying in academia.