Podcasts about research engineers

  • 114 podcasts
  • 145 episodes
  • 39m average duration
  • 1 new episode per month
  • Latest episode: Apr 9, 2025

POPULARITY (chart: 2017-2024)


Best podcasts about research engineers

Latest podcast episodes about research engineers

Thursday Breakfast
Mai Saif on Australia's Double Standards, The Pavilion School Part 2, We Outside Exhibition, Selective Attention and Cyclist Safety

Apr 9, 2025

Acknowledgement of Country// Headlines: Supreme Court ruling on public housing class action; Gaza updates; University of Melbourne updates wi-fi terms of use, surveils protestors; Geelong Council plans to close in-home aged care service; Myanmar earthquake updates//

We hear a speech by Palestinian activist Mai Saif from the Sunday 23rd March Rally for a Free Palestine, where Mai condemned the Australian government's double standards that suppress Palestinian voices and prioritise Jewish feelings over Muslim lives. Weekly rallies for Palestine are back every Sunday from 12PM outside the State Library of Victoria.//

We listen to part 2 of Priya's conversation with staff and students from The Pavilion School, a state school based in Naarm's northern suburbs that focuses on flexible education and a supportive environment for students who have been disengaged or excluded from mainstream schooling. Participating in this conversation are students Sarah, Nathaniel and Jannaya; student support worker and former student Stephanie; and Mes, who is also a student support worker and is doing their PhD in education at La Trobe. In the second half of this conversation, we hear about Mes' doctoral research, which has been co-designed with staff and students at The Pavilion School and aims to evaluate the impact of the school's alternative education model. Listen back to part 1 of our conversation here.//

We hear a conversation between Inez and Nigerian-Australian photographer Dr Ayooluwatomiwa ‘Ibukun' Oloruntoba about his exhibition We Outside, which captures event spaces created by and for young African-Australians in Melbourne. Video and images bring to life the resilience, diversity and richness of the African-Australian community. Ayooluwatomiwa is an analogue photographer based in Melbourne. With an MD/PhD in AI and dermatology from Monash University, his medical interest in the human condition informs his passion for documentary photography, capturing people, events, and cultural narratives. We Outside is running from Friday the 14th of March until Thursday the 17th of August, 10AM-5PM at the Grand Foyer, Community Gallery at Melbourne's Immigration Museum.//

Giulio Ponte, Research Engineer at the Centre for Automotive Safety Research at the University of Adelaide, joins us to discuss the risk to cyclists of selective attention by motorists, and what can be done to improve cyclist safety. Giulio has a degree in mechanical engineering and a graduate diploma in transport and traffic engineering, and has considerable experience in at-scene crash investigation and vehicle-pedestrian safety assessment. Giulio also has an ongoing involvement in the pedestrian testing program undertaken for the Australasian New Car Assessment Program. Read Giulio and Jamie McKenzie's article ‘‘Sorry mate, I didn't see you': when drivers look but don't see cyclists on the road,' published by The Conversation on April 2nd, here.//

Lay of The Land
#203: Geoff Bruder (Sonic Fire Tech) — Fighting Fires with Sound and Acoustic Waves

Mar 20, 2025 · 39:46

Geoff Bruder, co-founder of GhostWorks Engineering and Sonic Fire Tech—a company redefining fire suppression—is fighting fires with acoustic wave technology.

Geoff's journey is rooted in his work at NASA Glenn Research Center here in Cleveland, where he focused as a Research Engineer on projects like the Advanced Stirling Radioisotope Generator (ASRG) and thermoacoustic Stirling heat engine designs, and led the design for the Glenn Extreme Environment Rig (GEER) — the largest chamber on Earth able to recreate surface conditions on planet Venus, so they could, among many other things, extend the useful life of power and cooling systems there from 10 hours to nearly 243 days.

Since 2019, Geoff has been building Sonic Fire Tech, applying this expertise to develop fire suppression systems that use acoustics — silent infrasound waves — instead of water or chemicals to extinguish wood and chemical fires from a significant distance away from the fire itself. This approach offers a safer, more sustainable solution for wildfire prevention, home protection, industrial settings, and other applications where traditional fire suppression methods fall short.

I'd encourage anyone curious what this actually looks like in practice to reference the link to a YouTube video in our show notes that demonstrates how Geoff is putting out fire with sound… the video is worth thousands of words.

With over 15 patents, 3 publications, a Bachelor of Science in Aerospace Engineering from Embry-Riddle Aeronautical University, and graduate work at Case Western Reserve University, Geoff's journey from innovating at NASA to his own entrepreneurial ventures is one of ingenuity and impact, and I'm thrilled to share our fascinating conversation about his vision for transforming firefighting — please enjoy.

-----
LINKS:
https://www.linkedin.com/in/geoffreybruder/
https://sonicfiretech.com/
https://www.youtube.com/shorts/9cxlUOrIrcM

-----
SPONSORS: Impact Architects & Ninety
Lay of The Land is brought to you by Ninety. As a Lay of The Land listener, you can leverage a free trial with Ninety, the platform that helps teams build great companies and the only officially licensed software for EOS® — used by over 7,000 companies and 100,000 users!
This episode is brought to you by Impact Architects. As we share the stories of entrepreneurs building incredible organizations throughout NEO, Impact Architects helps those leaders — many of whom we've heard from as guests on Lay of The Land — realize their visions and build great organizations. I believe in Impact Architects and the people behind it so much that I have actually joined them personally in their mission to help leaders gain focus, align together, and thrive by doing what they love! As a listener, you can sit down for a free consultation with Impact Architects by visiting ia.layoftheland.fm!

-----
Stay up to date by signing up for Lay of The Land's weekly newsletter — sign up here.
Past guests include Justin Bibb (Mayor of Cleveland), Pat Conway (Great Lakes Brewing), Steve Potash (OverDrive), Umberto P. Fedeli (The Fedeli Group), Lila Mills (Signal Cleveland), Stewart Kohl (The Riverside Company), Mitch Kroll (Findaway — Acquired by Spotify), and over 200 other Cleveland entrepreneurs.
Connect with Jeffrey Stern on LinkedIn — https://www.linkedin.com/in/jeffreypstern/
Follow Jeffrey Stern on X @sternJefe — https://twitter.com/sternjefe
Follow Lay of The Land on X @podlayoftheland
https://www.jeffreys.page/

In the Passenger's Seat
... with Maggie Donnelly, ACV Research Engineer

Feb 14, 2025 · 7:43

In this special edition episode recorded live at NADA 2025, host Matt Mitteldorfer sits down with ACV Research Engineer, Maggie Donnelly, to discuss the power of AI and technology-driven solutions in the auto industry. Maggie breaks down how ACV is helping dealers streamline operations, make smarter decisions and drive profitability—all through cutting-edge innovation. They also unpack how ACV's APEX and Virtual Lift solutions, powered by a plethora of data, are giving dealers deeper insights and greater confidence in their decisions. AI isn't something to fear—it's a tool to help dealers uplevel their businesses and stay ahead in an evolving market. Tune in for an insightful conversation on the future of automotive technology!

MLOps.community
PyTorch for Control Systems and Decision Making // Vincent Moens // #276

Dec 4, 2024 · 56:39

Dr Vincent Moens is an Applied Machine Learning Research Scientist at Meta and an author of TorchRL and TensorDict in PyTorch. PyTorch for Control Systems and Decision Making // MLOps Podcast #276 with Vincent Moens, Research Engineer at Meta.

// Abstract
PyTorch is widely adopted across the machine learning community for its flexibility and ease of use in applications such as computer vision and natural language processing. However, supporting the reinforcement learning, decision-making, and control communities is equally crucial, as these fields drive innovation in areas like robotics, autonomous systems, and game-playing. This podcast explores the intersection of PyTorch and these fields, covering practical tips and tricks for working with PyTorch, an in-depth look at TorchRL, and discussions on debugging techniques, optimization strategies, and testing frameworks. By examining these topics, listeners will understand how to effectively use PyTorch for control systems and decision-making applications.

// Bio
Vincent Moens is a research engineer on the PyTorch core team at Meta, based in London. As the maintainer of TorchRL (https://github.com/pytorch/rl) and TensorDict (https://github.com/pytorch/tensordict), Vincent plays a key role in supporting the decision-making community within the PyTorch ecosystem. Alongside his technical role in the PyTorch community, Vincent also actively contributes to AI-related research projects. Before joining Meta, Vincent worked as an ML researcher at Huawei and AIG. Vincent holds a Medical Degree and a PhD in Computational Neuroscience.

// MLOps Swag/Merch
https://shop.mlops.community/

// Related Links
Musical recommendation: https://open.spotify.com/artist/1Uff91EOsvd99rtAupatMP?si=jVkoFiq8Tmq0fqK_OIEglg
Website: github.com/vmoens
TorchRL: https://github.com/pytorch/rl
TensorDict: https://github.com/pytorch/tensordict
LinkedIn post: https://www.linkedin.com/posts/vincent-moens-9bb91972_join-the-tensordict-discord-server-activity-7189297643322253312-Wo9J?utm_source=share&utm_medium=member_desktop

--------------- ✌️ Connect With Us ✌️ ---------------
Join our Slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Vincent on LinkedIn: https://www.linkedin.com/in/mvi/
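For listeners who want a feel for the two libraries discussed in the episode, the sketch below shows minimal TensorDict and TorchRL usage. It is only an illustration, not code from the episode: it assumes torch, tensordict, torchrl, and gymnasium are installed, and the environment name is just an example.

# Minimal sketch of the libraries discussed above (assumes torch,
# tensordict, torchrl, and gymnasium are installed).
import torch
from tensordict import TensorDict
from torchrl.envs import GymEnv

# TensorDict: a dict-like container whose tensors share a batch dimension,
# so a batch of observations/rewards moves around as one object.
batch = TensorDict(
    {"observation": torch.randn(4, 3), "reward": torch.zeros(4, 1)},
    batch_size=[4],
)
print(batch[0])          # index the shared batch dimension like a tensor
batch = batch.to("cpu")  # move every entry at once

# TorchRL: wrap a Gym environment and collect a short rollout;
# the result is itself a TensorDict of observations, actions, and rewards.
env = GymEnv("Pendulum-v1")
rollout = env.rollout(max_steps=10)
print(rollout)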

Second Amendment Radio
Weapons detectors at the St. Louis Zoo – brilliant or unnecessary?

Nov 16, 2024 · 38:05

Welcome to 2nd Amendment Radio & the Great Outdoors with Marc Cox & Bo Matthews – as always produced by Carl Middleman (Pew Pew)! The EVOLV system is coming to the St. Louis Zoo. Nikita Ermolaev, the Sr. Research Engineer on weapons detectors at IPVM, joins the show to discuss. Plus cabinet picks and Dan Kremer of Festus Firearms. Each week 2nd Amendment Radio talks about anything and everything that impacts the rights and passions of today's firearms enthusiasts. Conceal carry laws, hunting, training, new products and celebrity interviews are all part of this power-packed hour.

Multiply Your Success with Tom DuFore
221. A Franchisee's Story: From Corporate to Franchise Ownership—Adrienne Sienkowski, Owner, IMAGE Studios Salon Suites of Alpharetta

Sep 2, 2024 · 35:21

Do you know what a franchisee is thinking when they buy a franchise? Or maybe you should have a better idea than you do? If you are franchising your business, thinking about franchising your business, or in franchise sales, this episode is for you. Our guest today is Adrienne Sienkowski, and she shares with us the journey she and her husband have been on, from buying a franchise to opening and operating.

TODAY'S WIN-WIN: If you're on the fence and you want to do it, you eventually need to take that step or just go for it.

LINKS FROM THE EPISODE:
You can visit our guest's website at: https://imagestudios360.com/alpharetta/
Attend our Franchise Sales Training Workshop: https://bigskyfranchiseteam.com/franchisesalestraining/
If you are ready to franchise your business or take it to the next level: CLICK HERE.
Connect with our guest on social: https://www.instagram.com/imagestudios_alpharetta/

ABOUT OUR GUEST:
Adrienne comes from a family of entrepreneurs. Her great-grandfather opened a small community grocery store during the Great Depression in Hartsville, South Carolina. Her grandfather started a USDA-approved meat processing plant and organic farm during the height of the Civil Rights era. His company went on to become one of the first 8(a) certified businesses in South Carolina. Continuing on the entrepreneurial path, her father, Dr. Louis B. Lynn, a former R&D executive at Monsanto, went on to open the largest minority-owned commercial landscape and construction company in South Carolina. To say that small business holds a special place in Adrienne's heart is an understatement.

Adrienne started her career in corporate at Sonoco Products Company as a Sales Associate. She then went on to utilize her packaging degree as a Research Engineer at Sonoco headquarters, working on the iconic Pringles and Pillsbury brands. Adrienne continued her career at Campbell Soup Company, where she progressively moved up in packaging positions with key development and leadership roles focused on packaging improvements, cost savings, and production line efficiency improvements. She holds multiple patents in packaging and process improvements from her tenures at Sonoco and Campbell Soup Company.

When her husband, Philip, had a career opportunity to relocate from Philadelphia to Georgia, Adrienne joined ENVIRO AgScience, the company her father founded in 1985, as Chief Operations Officer. There she learned the nuances and challenges between Fortune 500 companies and small businesses. Opening the inaugural IMAGE Studios in Georgia will allow Adrienne to continue in the same robust entrepreneurial spirit that has run in her family for four generations. Adrienne holds a degree in Packaging from Clemson University. She currently serves as Chair of the Clemson University Board of Visitors and is an active member of her kids' PTA.

ABOUT BIG SKY FRANCHISE TEAM:
This episode is powered by Big Sky Franchise Team. If you are ready to talk about franchising your business, you can schedule your free, no-obligation franchise consultation online at https://bigskyfranchiseteam.com/ or by calling Big Sky Franchise Team at 855-824-4759.

Elixir em Foco
41. Explorando Elixir: A Trajetória de George Guimarães (CloudWalk e ex-Plataformatec) e suas Contribuições

Aug 7, 2024 · 68:17

In this episode of the Elixir em Foco podcast, Adolfo Neto, Herminio Torres, Cristine Guadelupe and Zoey Pessanha welcome George Guimarães, one of the founders of Plataformatec and currently Engineering Manager at CloudWalk. George shares his experience with Elixir and his work at CloudWalk, where he is involved in acquiring and AI-agent projects. He also discusses the impact of Elixir in the field of machine learning, presenting his GitHub repository, "Awesome Machine Learning and Elixir", which gathers valuable resources for the developer community. George also talks about his trajectory at Plataformatec and his experience as CEO of SourceLevel, a company focused on measuring the productivity of software teams. The episode also covers his initiative to create the Protótipo Podcast, where he shares what is happening in the Elixir ecosystem, and his participation in webinars and meetups that promote the language and its applications.

George Guimarães: LinkedIn profile https://www.linkedin.com/in/georgeguimaraes/ | GitHub https://github.com/georgeguimaraes | Twitter https://x.com/georgeguimaraes | Mastodon @georgeguimaraes@mastodon.social | Rebrandly https://rebrand.ly/georgeguimaraes
CloudWalk https://www.cloudwalk.io/
JIM https://www.jim.com/
Awesome Machine Learning and Elixir (a curated list of Machine Learning packages and resources for the Elixir programming language) https://github.com/georgeguimaraes/awesome-machine-learning-elixir
"Agilistas estão morrendo!" https://bit.ly/4d9LY7Y
"Spotify Model Nunca Existiu? Como Assim?" https://www.youtube.com/watch?v=1wstlCLmT7w
The Real Real https://www.youtube.com/watch?v=6A6eXFd8olw and https://www.youtube.com/watch?v=sTs_4T1ufLY
Source Level Elixir webinar, April 2020 ("from quarantine") https://archive.is/o7thF
Protótipo Podcast https://prototipo.tech/ and https://open.spotify.com/show/5MErBHgSpMATMy6So1UKIh
Castálio Podcast - 141 - George Guimarães - Code Review (2019) https://www.youtube.com/watch?v=mvLL5QS34No
Marcelo Park | Posicionamento Estratégico para diferenciação e tracionamento | UNITE 2020 https://www.youtube.com/watch?v=aUcm0bam2fc
Vagas.com chat https://www.vagas.com.br/
Geovane Fedrecheski, PhD, Research Engineer at Inria Paris, Poli alumnus: https://scholar.google.com/citations?hl=pt-BR&user=TBf0rLkAAAAJ&view_op=list_works&sortby=pubdate | https://www.linkedin.com/in/geovane-fedrecheski | https://www.teses.usp.br/teses/disponiveis/3/3142/tde-26072023-074626/pt-br.php | https://geonnave.com/ | https://x.com/geonnave
Venmo https://www.remessaonline.com.br/blog/venmo/
Plataformatec channel https://www.youtube.com/@PlataformatecCompany
A RAG for Elixir https://bitcrowd.dev/a-rag-for-elixir/
How even the simplest RAG can empower your team https://bitcrowd.dev/how-even-the-simplest-RAG-can-empower-your-team/
ElixirLab https://elixirlab.org/

Watch this interview on YouTube at https://www.youtube.com/@elixiremfoco. Listen to this interview on Spotify at https://podcasters.spotify.com/pod/show/elixiremfoco. Join the Erlang Ecosystem Foundation at https://bit.ly/3Sl8XTO; the foundation's website is https://bit.ly/3Jma95g. Our website is https://elixiremfoco.com. We are on X at @elixiremfoco (https://x.com/elixiremfoco). Our email is elixiremfoco@gmail.com.

The Nonlinear Library
LW - AI Alignment Research Engineer Accelerator (ARENA): Call for applicants v4.0 by James Fox

Jul 7, 2024 · 9:55

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Alignment Research Engineer Accelerator (ARENA): Call for applicants v4.0, published by James Fox on July 7, 2024 on LessWrong.

TL;DR: We are excited to announce the fourth iteration of ARENA (Alignment Research Engineer Accelerator), a 4-5 week ML bootcamp with a focus on AI safety! ARENA's mission is to provide talented individuals with the skills, tools, and environment necessary for upskilling in ML engineering, for the purpose of contributing directly to AI alignment in technical roles. ARENA will be running in person at LISA from 2nd September to 4th October (the first week is an optional review of the fundamentals of neural networks). Apply here before 23:59 July 20th anywhere on Earth!

Summary: ARENA has been successfully run three times, with alumni going on to become MATS scholars and LASR participants; AI safety engineers at Apollo Research, Anthropic, METR, and OpenAI; and even starting their own AI safety organisations! This iteration will run from 2nd September to 4th October (the first week is an optional review of the fundamentals of neural networks) at the London Initiative for Safe AI (LISA) in Old Street, London. LISA houses small organisations (e.g., Apollo Research, BlueDot Impact), several other AI safety researcher development programmes (e.g., LASR Labs, MATS extension, PIBBS, Pivotal), and many individual researchers (independent and externally affiliated). Being situated at LISA therefore brings several benefits, e.g. facilitating productive discussions about AI safety and different agendas, allowing participants to form a better picture of what working on AI safety can look like in practice, and offering chances for research collaborations post-ARENA.

The main goals of ARENA are to: help participants skill up in ML relevant for AI alignment; produce researchers and engineers who want to work in alignment and help them make concrete next career steps; and help participants develop inside views about AI safety and the paths to impact of different agendas. The programme's structure will remain broadly the same as ARENA 3.0 (see below); however, we are also adding an additional week on evaluations. For more information, see our website. Also, note that we have a Slack group designed to support the independent study of the material (join link here).

Outline of Content. The 4-5 week program will be structured as follows:

Chapter 0 - Fundamentals. Before getting into more advanced topics, we first cover the basics of deep learning, including basic machine learning terminology, what neural networks are, and how to train them. We will also cover some subjects we expect to be useful going forward, e.g. using GPT-3 and 4 to streamline your learning, good coding practices, and version control. Note: participants can optionally skip the program this week and join us at the start of Chapter 1 if they'd prefer this option and if we're confident that they are already comfortable with the material in this chapter. Topics include: PyTorch basics; CNNs and residual neural networks; optimization (SGD, Adam, etc.); backpropagation; hyperparameter search with Weights and Biases; GANs and VAEs.

Chapter 1 - Transformers & Interpretability. In this chapter, you will learn all about transformers and build and train your own. You'll also study LLM interpretability, a field which has been advanced by Anthropic's Transformer Circuits sequence and open-source work by Neel Nanda. This chapter will also branch into areas more accurately classed as "model internals" than interpretability, e.g. recent work on steering vectors. Topics include: GPT models (building your own GPT-2); training and sampling from transformers; TransformerLens; in-context learning and induction heads; indirect object identification; superposition; steering vectors.

Chapter 2 - Reinforcement Learning. In this chapter, you w...
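As a rough illustration of the Chapter 0 fundamentals listed above (PyTorch basics, optimization with SGD/Adam, backpropagation), a minimal training loop might look like the sketch below. This is not ARENA course material, just a generic example on synthetic data; it assumes only that torch is installed, and the layer sizes are arbitrary placeholders.

# Minimal PyTorch training loop illustrating the Chapter 0 topics
# (tensors, a small network, Adam, backpropagation). Generic sketch,
# not ARENA course code; data and sizes are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(256, 10)          # synthetic inputs
y = x.sum(dim=1, keepdim=True)    # synthetic targets

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()               # backpropagation
    optimizer.step()              # gradient update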

AgEmerge Podcast
133 AgEmerge Podcast With Chris Jones

Apr 9, 2024 · 54:34

Thanks for joining us! Today we welcome Chris Jones, a retired Research Engineer from the University of Iowa and author of The Swine Republic: Struggles with Truth About Agriculture and Water Quality. Today he and Monte discuss the agricultural efficiencies and practices that we can make happen in more environmentally responsible ways, including a systems approach to our entire set of management practices. Chris talks about how we can work to deploy these practices in enough areas that they can make a difference at the landscape scale. It's a powerful conversation, so let's jump right in.

Chris Jones retired in May of 2023 as a Research Engineer from IIHR-Hydroscience & Engineering at the University of Iowa, where his work focused on water quality and water monitoring in agricultural landscapes. Before that he worked at the Des Moines Water Works and the Iowa Soybean Association. Chris has a BA in Biology and Chemistry from Simpson College and a PhD in Analytical Chemistry from Montana State University. He has authored 55 scientific journal articles and several book chapters, and is the author of the book The Swine Republic: Struggles with Truth About Agriculture and Water Quality. He also writes a weekly column posted on Substack at Riverraccoon.substack.com. He lives in Iowa City, Iowa.

Tell us your background and how you reached today's line of work. I was born in Illinois and spent most of the rest of my life in Iowa, and have observed firsthand how the state and its agricultural production systems have changed since the 1960s. While some things about agriculture's impact on the environment have improved, the environmental consequences of cornbelt agriculture are still severe and affect the quality of life of the region's residents. Consolidation in agriculture since that time has also had dire consequences for the prosperity and vitality of small-town Iowa. My work has focused on these consequences and how the condition of our environment can be improved within the backdrop of intense crop and livestock production.

Got questions you want answered? Send them our way and we'll do our best to research and find answers. Know someone you think would be great on the AgEmerge stage or podcast? Send your questions or suggestions to kim@asn.farm; we'd love to hear from you.

Web3 Unlocked
DC Builder | Research Engineer at Worldcoin Foundation: Proving Personhood In A Digital Era

Apr 9, 2024 · 58:30

DC Builder, Research Engineer at Worldcoin Foundation, shares the project's approach towards decentralization and its mission to democratize digital identity and finance globally, ensuring fair wealth distribution. He highlights the challenges of the web of trust and biometrics in verifying personhood, shedding light on the Semaphore protocol's innovative use. We also delve into the emerging ZKML space and other crypto-AI projects that are reshaping the landscape. We hope you enjoy DC Builder's conversation with hosts Kenzi Wang, Sachi Kamiya, and Diksha Dutta, where he talks in-depth about his Web3 journey.

Connect
Real Intelligence on AI: Insights from the Experts

Mar 7, 2024 · 49:44

Artificial Intelligence is here. It's already impacting your daily life, and it's quickly evolving. So, what exactly is it (and what isn't it)?  How do you cut through all the hype? And how can you leverage it?In order to properly tackle the topic of AI, we required a panel of heavyweights. Accordingly, we're pleased to welcome three esteemed industry experts to this episode of Connect: Adina Trufinescu, Principal Program Manager at Microsoft, responsible for computer vision and cognitive services for MS Azure Cloud and Edge; Mert Karakaya, Research Engineer at IPVM, in charge of research and testing on video surveillance, AI and other technologies; and Mike Jude, Research Director and author at IDC, who oversees global market research, focusing on video surveillance and computer vision applications.In this episode we'll peel back the layers of AI to discover what makes it tick. Adina, Mert and Mike will discuss the building blocks of AI and the ongoing evolution. We'll delve into different types of AI, examining the latest developments in generative AI, computer vision and multimodal learning. From a more practical standpoint we'll take a behind-the-scenes look at product testing and discover the factors that contribute to performance, how to detect biases and what it means to break an algorithm. And of course, we'll evaluate the current market—and future prospects—for AI technology including adoption rates, use cases and major trends.Join Scott Dunn as he hosts the top experts from companies that are leading the way in AI development, testing and research. For more information about Axis Communications, visit us at www.axis.com Follow us on social media at Axis Communications - Home | Facebook Axis Communications: My Company | LinkedIn Axis North America (@Axis_NA) / Twitter

Red Planet Live
Red Planet Live -- Jose & Dr. Julio Hernandez

Mar 6, 2024 · 62:15

Ashton Zeth interviews former NASA astronaut Jose M. Hernandez alongside his son, Dr. Julio Hernandez, a researcher at Purdue University and former Mars analog astronaut.

Jose worked at NASA Johnson Space Center from 2001 to 2011, where he served as an astronaut, legislative analyst, branch chief, and materials research engineer. He was on the STS-128 shuttle mission, during which he oversaw the transfer of thousands of pounds of equipment between the shuttle and the International Space Station and helped with robotics operations. More recently, Jose serves as President and CEO of Tierra Luna Engineering, LLC, a California-based engineering consulting firm that focuses on aerospace engineering, non-destructive evaluation, failure analysis, and general engineering services. Last year, Amazon released "A Million Miles Away", a biopic film about Jose and his inspiring path as a migrant worker from Mexico to an American astronaut. Starring actor Michael Pena, the movie received high acclaim as a "great family movie with messages of determination and perseverance."

Julio is a Characterization and Research Engineer at the Hypersonics Advanced Manufacturing Technology Center within the Purdue Applied Research Institute. He obtained his Ph.D. in Aeronautical and Astronautical Engineering in July 2023, focusing on self-sensing composite materials and additive manufacturing. In addition, he previously served as a botanist for Crew 245 during his two-week tenure as a Mars Desert Research Station (MDRS) analog astronaut in 2021.

Knowledge Distillation with Helen Byrne
Papers of the Month with Charlie Blake, Research Engineer at Graphcore

Feb 2, 2024 · 43:48

Charlie Blake from Graphcore's research team discusses their AI Papers of the Month for January 2024. Graphcore research has been collating and sharing a review of the most consequential AI papers internally, every month, for a number of years. Now – for the first time – the research team is making this valuable resource public, to help the wider AI community keep up to date with the most exciting breakthroughs.

Papers of the Month for January 2024 (with some work from December 2023) includes:

Bad Students Make Great Teachers: Active Learning Accelerates Large-Scale Visual Understanding
https://arxiv.org/abs/2312.05328
Authors: Talfan Evans, Shreya Pathak, Hamza Merzic, et al. (Google DeepMind, UCL)

Beyond Chinchilla-Optimal: Accounting for Inference in Language Model Scaling Laws
https://arxiv.org/abs/2401.00448
Authors: Nikhil Sardana and Jonathan Frankle (MosaicML)

Analyzing and Improving the Training Dynamics of Diffusion Models
https://arxiv.org/abs/2312.02696
Authors: Tero Karras et al. (Nvidia, Aalto University)

Solving olympiad geometry without human demonstrations
https://www.nature.com/articles/s41586-023-06747-5
Authors: Trieu H. Trinh, Yuhuai Wu, Quoc V. Le, He He and Thang Luong (Google DeepMind, New York University)

To read about January's Papers of the Month, visit the Graphcore blog:
https://www.graphcore.ai/posts/great-teachers-and-beyond-chinchilla-papers-of-the-month-jan-2024

Drone Radio Show
Autonomous Air Taxis Are Coming! - Brian Yutko, CEO of Wisk Aero

Nov 14, 2023 · 28:24

How close are we to seeing autonomous air taxis? Brian Yutko is CEO of Wisk Aero, an advanced air mobility company dedicated to delivering safe, everyday flight for everyone. Wisk's self-flying, electric vertical takeoff and landing air taxi will make it possible for passengers to skip the traffic and get to their destination faster. Headquartered in Mountain View, California, with locations around the world, Wisk is an independent company backed by The Boeing Company and Kitty Hawk Corporation. With over a decade of experience and over 1600 test flights, Wisk is shaping the future of daily commutes and urban travel, safely and sustainably.

Prior to joining Wisk, Brian served as Vice President and Chief Engineer for Sustainability and Future Mobility at Boeing, where he led a team that leveraged the company's technical expertise in sustainability technologies as well as emerging, advanced aerial mobility applications. He was the Chief Technologist for Boeing NeXt and sat on the Board of Directors for Wisk following its founding as a joint venture with Boeing. Prior to his roles at Boeing, Brian was the Senior Vice President of Programs and Vice President of Research and Technology at Aurora Flight Sciences, an industry leader in aviation autonomy and rapid prototyping. Brian was a Research Engineer in the International Center for Air Transportation at MIT after earning a Ph.D. and Master of Science in Aeronautics and Astronautics from MIT. He also earned a Bachelor of Science in Aerospace Engineering from Pennsylvania State University.

In this edition of the Drone Radio Show, Brian talks about the progress that Wisk Aero is making to develop and deploy autonomous air taxis, including the results of a recent successful air taxi flight demonstration at the Long Beach Airport.

The Institute of World Politics
Professor Oskar Halecki - Polish Scholar In-Exile (1939 - 1973), with Dr. Alexander M. Jablonski

Nov 11, 2023 · 28:49

Dr. Alexander M. Jablonski discusses "Professor Oskar Halecki - Polish Scholar In-Exile (1939 - 1973)." This lecture is part of the 16th annual Kościuszko Chair Conference and the 4th Oskar Halecki Symposium.

About the Speaker: Dr. Alexander Maciej Jabłoński, P.Eng., received his BSc & MS (civil engineering) from the Technical University of Cracow, Poland (1970), MS (mechanics and materials engineering) from the University of Illinois at Chicago (1982), and PhD (structural dynamics) from Carleton University in Ottawa, Canada (1989). He has more than 50 years of experience in various fields of engineering, reconnaissance projects, project management and strategy planning, and about 130 publications. He has worked as an engineer in Poland, Finland, Norway, Germany, the USA, and Canada. Since 1992, he has been working as a Research Scientist, Research Engineer, and Manager in Canadian federal laboratories. He now works at the David Florida Laboratory, Canadian Space Agency. He is also an Adjunct Research Professor at the Department of Mechanical and Aerospace Engineering, Carleton University in Ottawa. He is a Fellow of the Canadian Aeronautics and Space Institute (CASI), an Associate Fellow of the American Institute of Aeronautics and Astronautics (AIAA), a Member of the Aerospace Division (ASD) of the American Society of Civil Engineers (ASD ASCE), and a recipient of various engineering and scientific awards. Since his early life in Poland, he has studied Polish and world history for decades. He writes historical essays and presentations, especially on the modern history of Poland, including World War II and the post-war era of the Soviet occupation. Currently, he is the President of the Oskar Halecki Institute in Canada and a member of the Program Council of the Institute of Heritage of the National Thought (IDMN), Warsaw, Poland.

About the Symposium: This virtual joint symposium is organized by The Institute of World Politics in Washington, D.C., USA, and The Oskar Halecki Institute in Ottawa, ON, Canada, to commemorate the 50th anniversary of the passing of Professor Oskar Halecki.

Sponsors: The Institute of World Politics, Washington, D.C., United States; The Oskar Halecki Institute, Ottawa, ON, Canada.

Co-Sponsors: Institute of History of the Polish Academy of Sciences (IH PAN); Institute of Heritage of the Polish National Thought (IDMN); Instytut Historii USKW (Stefan Cardinal Wyszynski University).

Learn more about IWP graduate programs: https://www.iwp.edu/academic-programs/
Make a gift to IWP: https://interland3.donorperfect.net/weblink/WebLink.aspx?name=E231090&id=18

The Nonlinear Library
EA - AI Alignment Research Engineer Accelerator (ARENA): call for applicants by TheMcDouglas

Nov 7, 2023 · 11:53

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Alignment Research Engineer Accelerator (ARENA): call for applicants, published by TheMcDouglas on November 7, 2023 on The Effective Altruism Forum.

TL;DR: Apply here for the third iteration of ARENA (Jan 8th - Feb 2nd)!

Introduction: We are excited to announce the third iteration of ARENA (Alignment Research Engineer Accelerator), a 4-week ML bootcamp with a focus on AI safety. Our mission is to prepare participants for full-time careers as research engineers in AI safety, e.g. at leading organizations or as independent researchers. The program will run from January 8th - February 2nd 2024[1], and will be held at the offices of the London Initiative for Safe AI. These offices are also being used by several safety orgs (BlueDot, Apollo, Leap Labs), as well as the current London MATS cohort and several independent researchers. We expect this to bring several benefits, e.g. facilitating productive discussions about AI safety and different agendas, and allowing participants to form a better picture of what working on AI safety can look like in practice. ARENA offers a unique opportunity for those interested in AI safety to learn valuable technical skills, work on their own projects, and make open-source contributions to AI safety-related libraries. The program is comparable to MLAB or WMLB, but extends over a longer period to facilitate deeper dives into the content, and more open-ended project work with supervision. For more information, see our website.

Outline of Content. The 4-week program will be structured as follows:

Chapter 0 - Fundamentals. Before getting into more advanced topics, we first cover the basics of deep learning, including basic machine learning terminology, what neural networks are, and how to train them. We will also cover some subjects we expect to be useful going forward, e.g. using GPT-3 and 4 to streamline your learning, good coding practices, and version control. Note: participants can optionally not attend the program during this week, and instead join us at the start of Chapter 1, if they'd prefer this option and if we're confident that they are already comfortable with the material in this chapter. Topics include: PyTorch basics; CNNs and residual neural networks; optimization (SGD, Adam, etc.); backpropagation; hyperparameter search with Weights and Biases; GANs and VAEs. Duration: 5 days.

Chapter 1 - Transformers & Interpretability. In this chapter, you will learn all about transformers, and build and train your own. You'll also study LLM interpretability, a field which has been advanced by Anthropic's Transformer Circuits sequence and open-source work by Neel Nanda. This chapter will also branch into areas more accurately classed as "model internals" than interpretability, e.g. recent work on steering vectors. Topics include: GPT models (building your own GPT-2); training and sampling from transformers; TransformerLens; in-context learning and induction heads; indirect object identification; superposition; steering vectors. Duration: 5 days.

Chapter 2 - Reinforcement Learning. In this chapter, you will learn about some of the fundamentals of RL, and work with OpenAI's Gym environment to run your own experiments. Topics include: fundamentals of RL; vanilla policy gradient; proximal policy gradient; RLHF (and finetuning LLMs with RLHF); Gym and Gymnasium environments. Duration: 5 days.

Chapter 3 - Paper Replications. We will conclude this program with paper replications, where participants will get guidance and mentorship while they replicate a paper containing material relevant to this course. This should draw on much of the skills and knowledge participants will have accumulated over the last 3 weeks. Duration: 5 days.

Below is a diagram of the curriculum as a whole, and the dependencies between sections. Note that this may change slightly in the lead-up to the program. Here is som...
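To give a concrete sense of the Gym/Gymnasium environments mentioned in the RL chapter above, a bare interaction loop looks roughly like the sketch below. This is a generic illustration rather than ARENA course code; it assumes the gymnasium package is installed, and CartPole is only an example environment.

# Bare Gymnasium interaction loop of the kind used in an RL chapter
# (generic sketch, not ARENA course code).
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)
total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()  # random policy placeholder
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
env.close()
print(f"episode return: {total_reward}")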

The Nonlinear Library
EA - EAGxVirtual: Speaker announcements, timings, and other updates by Sasha Berezhnoi

Nov 2, 2023 · 4:24

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EAGxVirtual: Speaker announcements, timings, and other updates, published by Sasha Berezhnoi on November 2, 2023 on The Effective Altruism Forum.

EAGxVirtual is fast approaching and we're excited to share some more details about the event! This post covers updates from the team, including dates and times, content, unique features, and demographic data. In the previous post, we covered the conference theme, reasons to attend, and reviews from previous attendees.

Content: what to expect. We're very excited to announce our key speakers for this event: Peter Singer on the most pressing moral issues facing humanity; Bruce Friedrich, President of The Good Food Institute, on longtermism and alternative proteins; Carl Robichaud, co-lead on nuclear policy grantmaking at Longview Philanthropy, on a turning point in the story of nuclear weapons; Olga Kikou, Head of the EU Office of Compassion in World Farming, on ending the cage age in the EU; and Neel Nanda, Research Engineer at DeepMind, on open problems in mechanistic interpretability. We are working hard on the program. Beyond the above talks (and many more talks and workshops!), you can expect office hours hosted by experts and EA orgs, fireside chats, group meetups and icebreakers, lightning talks from attendees, and unofficial satellite events. The tentative schedule is available here (all times are in UTC). Please note that the schedule is subject to change. The final schedule will be available on the Swapcard app, which we aim to launch next week.

Taking action anywhere in the world. We have already received 600 applications from people representing over 70 countries. We welcome all who have a genuine interest in learning more or connecting, including those who are new to effective altruism. If you are a highly engaged EA, you can make a difference by being responsive to requests from first-time attendees. The map below shows the geographical distribution of the participants. We would love to see more applications. If you know someone who you think should attend the conference, please encourage them to apply by sending them this link: eagxvirtual.com. The deadline for applications is 11:59 pm UTC on Thursday, 16 November. Apply here if you haven't already.

Dates and times. The conference will be taking place from 10 am UTC on Friday, November 17th, until 11:59 pm UTC on Sunday, November 19th. We don't expect you to always be online; you can be flexible with your participation! It's completely okay if you can attend only on one of the days. Recordings will be available for registered attendees, so you can watch the sessions you missed later. Friday will feature introductory-level content for participants who are relatively new to EA and a career fair on Gather Town. Saturday and Sunday will have full-day schedules, starting at 7 am UTC each day. There will be a break in the program on Sunday between 2 am and 7 am UTC.

Conference features. Our main content and networking platform for the conference is Swapcard. We will share access to the app with all the attendees on November 6 and provide guidance on how to use it and get the most out of the conference. We collaborate with EA Gather Town to make an always-available virtual venue for the attendees to spark more connections and unstructured discussions throughout the conference. Extensive stewardship program: we will highlight ambassadors across different cause areas whom you can speak to for advice or feedback on your career plans. Evergreen discussion space: we are inviting everyone to use the EA Anywhere Slack as a discussion space. No more Slacks that are abandoned immediately after the conference is over!

Ways to contribute. If you want to represent your organization at the career fair or host office hours, please fill out this form. Apply to give a Lightning talk if ...

Swapmoto Live Podcast
Cobra Moto's American Made Mini Bikes With CEO Sean Hilbert | Midweek Podcast

Oct 25, 2023 · 49:23

After five years of employment with the Ford Motor Company as a Research Engineer, Sean Hilbert decided to step back from the workforce and pursue higher education through the Massachusetts Institute of Technology and the MIT Sloan School of Management. During Hilbert's time at the prestigious schools, he put together a vision for the future of the powersports industry that would use "individualized focus" to make off-road motorcycles custom-built for the consumer, and soon gathered business partners and financial support that made the idea a reality. The group acquired Cobra Moto from Bud Maimone, a machinist who revolutionized amateur motocross with his American-made mini bikes, in 2003 and has spent the past twenty years growing the brand with a lineup of performance-focused models, an international dealer network, a large production facility in Michigan, and direct support for the racing families who are at the track every weekend.

MID-WEST FARM REPORT - MADISON
Building Your Barn From A Single Output

Oct 24, 2023 · 10:11

You may have heard that dairy barns are like cow spots – there are no two alike. Just like when building a house, you have your family in mind and what will be the most comfortable for you. The same goes for farmers: when they're designing a barn, they have their herd in mind. But Dr. Mario Mondaca, Senior Technical Applications and Research Engineer at VES-Artex, says that while every barn is different, the design phase can be automated. He explains how the tool he developed helps farmers simplify their barn design into a single output. He says, "regardless of who you're working with to build a barn, you should be asking questions about ventilation and design earlier during the building process rather than afterwards."

The Deep-Sea Podcast
PRESSURISED: 014 - Space pt 1 - Deep sea of other worlds with Kevin Hand and Casey Machado

Sep 29, 2023 · 29:43

Our short and to the point PRESSURISED version of episode 14. If you don't have time for the full episode and want to get right to the science without any of our waffle, this is the place to be! Read the show notes and find the full episode here: www.armatusoceanic.com/podcast/014-space-pt1     The topic for this double episode is, as ever, the deep sea. But this time we are talking about the deep sea… Innnnnn Spaaaaaaaace! It turns out there is deep sea outside of earth in our solar system in the ice-covered moons of the gas giants. How likely are they to contain deep-sea life completely independent from the life that originated on earth? Thom chats with astrobiologist Kevin Peter Hand, author of Alien Oceans: The search for life in the depths of space, and director of NASA's Ocean Worlds Lab. What are the best candidates for alien life in our solar system? What could that life be like and why does our own deep ocean provide a perfect test bed? It turns out that the early versions of the vehicles which may some day explore these ocean worlds are already in development. Thom chats with Casey Machado, Research Engineer at Woods Hole Oceanographic Institution about the Orpheus AUV, the first generation of such vehicle and a new asset to the exploration of our own deep ocean. Feel free to get in touch with us with questions or your own tales from the high seas on: podcast@armatusoceanic.com We are also on: Twitter: @ArmatusO Facebook: ArmatusOceanic Instagram: @armatusoceanic Read the show notes and find out more about us at: www.armatusoceanic.com

Awkward Silences
Designing Generative AI at Notion with Linus Lee

Sep 19, 2023 · 45:28

AI is a big part of Notion's strategy for the future of knowledge work, and they're exploring ways to use AI to make people's lives easier. Linus Lee, a Research Engineer at Notion, spends his time researching and learning about AI: how users interact with it, the challenges of building trustworthy AI systems, and its potential to help people work more effectively. In this episode of Awkward Silences, he sits down with Erin and Carol to discuss Notion's approach to designing interfaces for generative AI tools. Tune into this episode to learn more about the challenges of instructing AI, the importance of user feedback to the AI design process, security considerations, use cases, and more.

Highlights:
[00:05:03] Tools Linus is excited to see come to market
[00:06:50] Generative AI and its function at Notion
[00:10:22] Recognizing AI is useful versus finding the use cases for it
[00:13:58] Lessons from the Notion design process
[00:22:09] Breadth versus depth in workflows
[00:26:10] Learning about models in the wild
[00:28:52] Security considerations with AI products
[00:32:13] Understanding the difference between AI-generated and human-written text
[00:36:34] Should we be scared of AI?

Sources mentioned in the episode: Awkward Homepage

About Our Guest: Linus Lee is a Research Engineer at Notion, prototyping new software interfaces for augmenting our collaborative work and creativity with AI. He has spent the last few years experimenting with AI-augmented tools for thinking, like a canvas for exploring the latent space of neural networks and writing tools where ideas connect themselves. Before Notion, Linus spent a year as an independent researcher in New York City.

Send in a voice message: https://podcasters.spotify.com/pod/show/awkwardsilences/message

The Nonlinear Library
LW - Report on Frontier Model Training by YafahEdelman

Aug 31, 2023 · 39:27

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Report on Frontier Model Training, published by YafahEdelman on August 31, 2023 on LessWrong. Understanding what drives the rising capabilities of AI is important for those who work to forecast, regulate, or ensure the safety of AI. Regulations on the export of powerful GPUs need to be informed by understanding of how these GPUs are used, forecasts need to be informed by bottlenecks, and safety needs to be informed by an understanding of how the models of the future might be trained. A clearer understanding would enable policy makers to target regulations in such a way that they are difficult for companies to circumvent with only technically compliant GPUs, forecasters to avoid focus on unreliable metrics, and technical research working on mitigating the downsides of AI to understand what data models might be trained on. This doc is built from a collection of smaller docs I wrote on a bunch of different aspects of frontier model training I consider important. I hope for people to be able to use this document as a collection of resources, to draw from it the information they find important and inform their own models. I do not expect this doc to have a substantial impact on any serious AI labs capabilities efforts - I think my conclusions are largely discoverable in the process of attempting to scale AIs or for substantially less money than a serious such attempt would cost. Additionally I expect major labs already know many of the things in this report. Acknowledgements I'd like to thank the following people for their feedback, advice, and discussion: James Bradbury, Software Engineer, Google DeepMind Benjamin Edelman, Ph.D. Candidate, Harvard University Horace He, Software Engineer, PyTorch/Meta Lukas Finnveden, Research Analyst, Open Philanthropy Project Joanna Morningstar, Chief Scientific Officer, Nanotronics Keller Scholl, Ph.D. Candidate, Pardee RAND Graduate School Jaime Sevilla, Director, Epoch Cody Wild, Research Engineer, Google Index Cost Breakdown of ML Training Estimates the costs of training a frontier (state of the art) model, drawing on leaks and analysis. Power usage is a small portion of the cost, GPUs are likely a slim majority. Why ML GPUs Cost So Much ML GPUs are expensive largely because of their communication and memory capabilities - not because of their processing power. NVIDIA's best gaming GPU provides greater ML processing power than the GPU used to train GPT-4, for only a tenth the price. Note that NVIDIA's near monopoly plausibly explains some of the price differential. Contra FLOPs Argues that the most common metric of ML computing power - floating point operations - is flawed, due to the rise of different types of floating point numbers making standardization difficult and the cost of processing power representing a small portion of the cost of ML. ML Parallelism An overview of ML parallelism techniques, showing how the common notion that "ML is embarrassingly parallel" is simplistic and breaks down at large scales - where any simple method of parallelizing a model starts to hit bottlenecks as the capabilities of individual devices become bottlenecks regardless of the number of devices involved. 
We (Probably) Won't Run Out of Data There are many routes toward preventing data from becoming a major bottleneck to ML scaling, though it's not certain any of them enable scaling as fast as has occurred historically. AI Energy Use and Heat Signatures ML energy usage may become important in the near future, even if it's a relatively minor concern for frontier model training right now. If current trends continue, energy usage could limit scaling, determine major engineering challenges, and provide a novel approach to surveillance of training runs using satellites and multispectral photography. Cost Breakdown of ML Training This section is an att...
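The report sections above rest on back-of-envelope arithmetic relating model size, training tokens, and compute. As a rough illustration of that arithmetic (not a calculation from the report itself), the sketch below applies the common approximation of roughly 6 × parameters × tokens for training FLOPs and converts the result into GPU-hours and dollars; the model size, token count, per-GPU throughput, utilization, and price are all hypothetical placeholder figures.

```python
# Back-of-envelope estimate of training compute and cost.
# Illustrative only: the 6*N*D rule of thumb and every number below are
# common public approximations, not figures taken from the report.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training FLOPs as ~6 * parameters * tokens."""
    return 6 * n_params * n_tokens

def gpu_hours(total_flops: float, flops_per_gpu: float, utilization: float) -> float:
    """Convert total FLOPs into GPU-hours at a given hardware utilization."""
    return total_flops / (flops_per_gpu * utilization) / 3600

if __name__ == "__main__":
    flops = training_flops(n_params=70e9, n_tokens=1.4e12)          # Chinchilla-scale run
    hours = gpu_hours(flops, flops_per_gpu=3e14, utilization=0.4)   # ~300 TFLOP/s per GPU, 40% utilization
    print(f"~{flops:.2e} training FLOPs")
    print(f"~{hours:,.0f} GPU-hours, ~${hours * 2:,.0f} at a hypothetical $2/GPU-hour")
```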

The Microscopists
Jean-Yves Tinevez (Institut Pasteur)

The Microscopists

Play Episode Listen Later Aug 10, 2023 61:35


#64 — Jean-Yves Tinevez is a Research Engineer at Institut Pasteur. In this episode of The Microscopists, Jean-Yves joins Peter O'Toole to discuss why he finds supporting other researchers at a core imaging facility so rewarding. They also chat about storing his extensive collection of comic books in his Parisian apartment and some favorite countries to work in. Watch or listen to all episodes of The Microscopists: themicroscopists.bitesizebio.com

ITSPmagazine | Technology. Cybersecurity. Society
The Critical Role of Hypersonics for the Future Security of the United States | A Conversation with Dr. Mark Lewis | Leading Edge Discovery Podcast with Astronaut Charlie Camarda Ph.D

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Jul 13, 2023 67:16


Guest: Dr. Mark Lewis, CEO of Purdue Applied Research Institute [@Research_Purdue]
On Twitter | https://twitter.com/Research_Purdue
On LinkedIn | https://www.linkedin.com/in/mark-lewis-43b9742/
On YouTube | https://www.youtube.com/channel/UCOSY-LKMuJZn4HGsJZHfQDw
Host: Charlie Camarda Ph.D
On ITSPmagazine

ITSPmagazine | Technology. Cybersecurity. Society
Supercomputing Analysis for NASA Missions | A Conversation with Dr. Olaf Storaasli | Leading Edge Discovery Podcast with Astronaut Charlie Camarda Ph.D

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Jun 8, 2023 64:21



The Thriving Farmer Podcast
232. Chris Jones on the Precarious Future of Rural America

The Thriving Farmer Podcast

Play Episode Listen Later May 23, 2023 32:36


Has rural farming and small town America changed where you live?  On today's episode of the Thriving Farmer Podcast, our guest is Chris Jones who has recently served as a Research Engineer at the University of Iowa. While there, he studied and wrote about the declining quality of water as a result of the changes in agriculture and the environment over the past century. He shares those thoughts and critiques through his blog which was released as a book entitled The Swine Republic on May 19th. Through authoring books, advocacy and research, few have attracted more attention to the causes and sources of farm-related pollution than Chris Jones.  Tune in to hear about Chris' thoughts on how we can protect our environment and the future of rural America. In this episode, you'll hear: How Chris got started in water quality research 1:51 About the water quality of the places Chris has researched 5:12 More on what happens when nitrate gets into the water 9:39 About Chris' role throughout the majority of his career 13:11 How people in Iowa can give political pushback in support of cleaner water 21:00 What Chris' new book is about and the future state of rural America 26:17 Products mentioned in the show: Chris' book The Swine Republic (or pre-order via Amazon) About the Guest: Chris Jones is a Research Engineer with IIHR-Hydroscience & Engineering at the University of Iowa. He holds a PhD in Analytical Chemistry from Montana State University and a BA in chemistry and biology from Simpson College. Previous career stops include the Des Moines Water Works and the Iowa Soybean Association. As an avid outdoorsman, he enjoys fishing, bird watching, gardening, and mushroom hunting in both Iowa and Wisconsin. While he spends most of his time in Iowa City, he is especially fond of the Upper Mississippi River and the Driftless Area. Resources: Blog: https://riverraccoon.blogspot.com/ Twitter: https://twitter.com/RiverRaccoon Substack: https://riverraccoon.substack.com/     This podcast is sponsored by Small Farm University, THE go-to resource for gardeners, homesteaders, and farmers around the world. Small Farm University delivers classes online and on demand, with training on how to grow crops AND how to grow a profitable farm business that serves you, your family, and your community well.  Applying what you learn in SFU could save you countless hours and thousands of dollars. And, it can save you the agony of costly mistakes some make, just because they “don't know what they don't know.” Delivered by real farmers with hands-on experience and expertise, it is unique in its approach, using the RIPEN method for growing and building a farm or farm business. To learn more, visit:  GrowingFarmers.com today!  

ITSPmagazine | Technology. Cybersecurity. Society
Dr. Edwin Fasanella: The Senior Researcher that led the team that proved the technical cause of the Space Shuttle Columbia tragedy| Leading Edge Discovery Podcast with Astronaut Charlie Camarda Ph.D

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later May 5, 2023 51:15



MBIT: Venture Capital | Entrepreneurship | Technology
From A Research Engineer To Launching Future Ventures w/ Maryanna Saenko (Co-Founder)

MBIT: Venture Capital | Entrepreneurship | Technology

Play Episode Listen Later May 4, 2023 27:42


Maryanna Saenko is an early-stage venture capitalist and co-founder of Future Ventures. She started her career as a research engineer at Cabot Corporation and later moved on to venture capital, working at firms such as DFJ and Khosla Ventures. Maryanna invests in frontier technologies that make the world a better place, including nuclear fusion, sustainable agriculture and land management, bee immunology, and women's reproductive longevity. Twitter of Host (Shamus Madan): @mbitpodcast

The Nonlinear Library
LW - AI Alignment Research Engineer Accelerator (ARENA): call for applicants by TheMcDouglas

The Nonlinear Library

Play Episode Listen Later Apr 19, 2023 10:58


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Alignment Research Engineer Accelerator (ARENA): call for applicants, published by TheMcDouglas on April 17, 2023 on LessWrong. TL;DR Apply here for the second iteration of ARENA! Introduction We are excited to announce the second iteration of ARENA (Alignment Research Engineer Accelerator), a 6-week ML bootcamp with a focus on AI safety. Our mission is to prepare participants for full-time careers as research engineers in AI safety, e.g. at leading organizations or as independent researchers. The program will commence on May 22nd, 2023, and will be held at the Moorgate WeWork offices in London. This will overlap with SERI MATS, who are also using these offices. We expect this to bring several benefits, e.g. facilitating productive discussions about AI safety & different agendas, and allowing participants to form a better picture of what working on AI safety can look like in practice. ARENA offers a unique opportunity for those interested in AI safety to learn valuable technical skills, engage in their own projects, and make open-source contributions to AI safety-related libraries. The program is comparable to MLAB or WMLB, but extends over a longer period to facilitate deeper dives into the content, and more open-ended project work with supervision. For more information, see our website. Outline of Content The 6-week program will be structured as follows: Chapter 0 - Fundamentals Before getting into more advanced topics, we first cover the basics of deep learning, including basic machine learning terminology, what neural networks are, and how to train them. We will also cover some subjects we expect to be useful going forwards, e.g. using GPT-3 and 4 to streamline your learning, good coding practices, and version control. Topics include: PyTorch basics CNNs, Residual Neural Networks Optimization Backpropagation Hyperparameter search with Weights and Biases Model training & PyTorch Lightning Duration: 5 days Chapter 1 - Transformers & Mechanistic Interpretability In this chapter, you will learn all about transformers, and build and train your own. You'll also learn about Mechanistic Interpretability of transformers, a field which has been advanced by Anthropic's Transformer Circuits sequence, and open-source work by Neel Nanda. Topics include: GPT models (building your own GPT-2) Training and sampling from transformers TransformerLens In-context Learning and Induction Heads Indirect Object Identification Superposition Duration: 9 days Chapter 2 - Reinforcement Learning In this chapter, you will learn about some of the fundamentals of RL, and work with OpenAI's Gym environment to run their own experiments. Topics include: Fundamentals of RL Vanilla Policy Gradient PPO Deep Q-learning RLHF Gym & Gymnasium environments Duration: 6 days Chapter 3 - Training at Scale There are a number of techniques that are helpful for training large-scale models efficiently. Here, you will learn more about these techniques and how to use them. The focus is on hands-on learning, rather than just a theoretical understanding. Topics include: GPUs Distributed computing Data/tensor/pipeline parallelism Finetuning LLMs Duration: 4 days Chapter 4 - Capstone Projects We will conclude this program with capstone projects, where participants get to dig into something related to the course. 
This should draw on much of the skill and knowledge participants will have accumulated over the last 5 weeks. Duration: 6 days Below is a diagram of the curriculum as a whole, and the dependencies between sections. Here is some sample material from the course, which you will be able to fully understand once you reach that point in the course. This notebook is on Indirect Object Identification (from the chapter on Transformers & Mechanistic Interpretability), and it will represent one of a set of optional 2-day mi...
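For a sense of the level Chapter 0 starts from, here is a minimal PyTorch training loop of the kind such a fundamentals chapter typically builds toward. It is an illustrative sketch only, not actual ARENA course material; the model shape, stand-in data, and hyperparameters are placeholders.

```python
# Minimal "fundamentals" style exercise: define a small network and train it
# with a standard PyTorch loop. Illustrative sketch, not ARENA material.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in data; a real exercise would load MNIST or similar.
x = torch.randn(64, 28 * 28)
y = torch.randint(0, 10, (64,))

for step in range(100):
    optimizer.zero_grad()          # clear gradients from the previous step
    loss = loss_fn(model(x), y)    # forward pass and loss
    loss.backward()                # backpropagation
    optimizer.step()               # parameter update
```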

The Gradient Podcast
Soumith Chintala: PyTorch

The Gradient Podcast

Play Episode Listen Later Mar 30, 2023 68:20


In episode 66 of The Gradient Podcast, Daniel Bashir speaks to Soumith Chintala.Soumith is a Research Engineer at Meta AI Research in NYC. He is the co-creator and lead of Pytorch, and maintains a number of other open-source ML projects including Torch-7 and EBLearn. Soumith has previously worked on robotics, object and human detection, generative modeling, AI for video games, and ML systems research.Have suggestions for future podcast guests (or other feedback)? Let us know here!Subscribe to The Gradient Podcast:  Apple Podcasts  | Spotify | Pocket Casts | RSSFollow The Gradient on TwitterOutline:* (00:00) Intro* (01:30) Soumith's intro to AI journey to Pytorch* (05:00) State of computer vision early in Soumith's career* (09:15) Institutional inertia and sunk costs in academia, identifying fads* (12:45) How Soumith started working on GANs, frustrations* (17:45) State of ML frameworks early in the deep learning era, differentiators* (23:50) Frameworks and leveling the playing field, exceptions* (25:00) Contributing to Torch and evolution into Pytorch* (29:15) Soumith's product vision for ML frameworks* (32:30) From product vision to concrete features in Pytorch* (39:15) Progressive disclosure of complexity (Chollet) in Pytorch* (41:35) Building an open source community* (43:25) The different players in today's ML framework ecosystem* (49:35) ML frameworks pioneered by Yann LeCun and Léon Bottou, their influences on Pytorch* (54:37) Pytorch 2.0 and looking to the future* (58:00) Soumith's adventures in household robotics* (1:03:25) Advice for aspiring ML practitioners* (1:07:10) Be cool like Soumith and subscribe :)* (1:07:33) OutroLinks:* Soumith's Twitter and homepage* Papers* Convolutional Neural Networks Applied to House Numbers Digit Classification* GANs: LAPGAN, DCGAN, Wasserstein GAN* Automatic differentiation in PyTorch* PyTorch: An Imperative Style, High-Performance Deep Learning Library Get full access to The Gradient at thegradientpub.substack.com/subscribe
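For listeners unfamiliar with the "imperative style" that the PyTorch paper linked above refers to, the toy example below shows the idea: ordinary Python control flow, with autograd recording gradients dynamically through whichever branch actually runs. This is an illustrative sketch, not code from the episode.

```python
# Imperative, define-by-run differentiation in PyTorch (illustrative only).
import torch

x = torch.tensor(3.0, requires_grad=True)
y = x ** 2
if y > 4:              # ordinary Python branching participates in the computation
    z = y * 2
else:
    z = y + 1
z.backward()           # autograd differentiates through the branch that was taken
print(x.grad)          # dz/dx = 2 * 2x = 12.0 for x = 3
```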

Georgia Tech Research Podcast

Episode Summary: In this episode of the Georgia Tech Research Podcast, host Scott McAtee speaks with Dr. Carl DiSalvo, associate professor in the College of Computing at Georgia Tech, and Greg McCormick, senior Research Engineer in the Electro-Optical Systems Lab of the Georgia Tech Research Institute. This episode focuses on the concept of "smart cities" and what they can mean for society at large. Dr. DiSalvo's work for more than a decade has focused on how communities use technology. For four years, McCormick has been one of the leaders of the Georgia Smart Communities Challenge. Talking Points: In this episode, listeners will hear about: · What Smart Cities are. · The areas of impact for Smart Cities, including economic development, public safety, energy, environment, infrastructure and transportation. · The Georgia Smart Communities Challenge (Georgia Smart). · Public response and "buy-in." · Challenges and long-term outlook. Resources: Have feedback or additional questions about the podcast? Reach out to podcast@gtri.gatech.edu. Links: https://smartcities.gatech.edu/georgia-smart https://www.carldisalvo.com/portfolio/dataworks Episode Credits, Episode Team: · Host, Scott McAtee, chelsea.selby@gtri.gatech.edu · Producer, Dylan Husserl, Dylan.Husserl@gtri.gatech.edu · Editor, Christopher Weems, christopher.weems@gtri.gatech.edu · Announcer, Eric Klein, eric.klein@gtri.gatech.edu · Audio Engineer, Amanda Kieffer, Amanda.Kieffer@gtri.gatech.edu

Women Who Code Radio
Talks Tech #35: Measuring and Remediating Open Source Software Risks

Women Who Code Radio

Play Episode Listen Later Mar 16, 2023 27:15


Nirvi Badyal, Research Engineer at Scantist and Founder & President at HeForShe-NTU, shares her talk, “Measuring and Remediating Open Source Software Risks.” She talks about the convenience of using open source, as well as the risks, and discusses ways to measure vulnerabilities in dependency trees.

B Shifter
Lithium Ion Battery Fires with FSRI's Adam Barowy

B Shifter

Play Episode Listen Later Feb 5, 2023 46:23


This episode is co-hosted by Josh Blum and John Vance. On this B Shifter, our guest Adam Barowy, Research Engineer from UL's Fire Safety Research Institute, talks about fires involving lithium-ion batteries. Sign up for the B Shifter Buckslip, our free weekly newsletter here. Shop B Shifter here. The Hazard Zone Conference is back. Register before April 1 and save $100! Register here! Please subscribe and share. Thank you for listening! Below is the link for public education. It is updated as FDNY develops new and better information that can be translated to safety recommendations, and UL is supporting them with their research. https://www.fdnysmart.org/be-fdnysmart-when-using-any-devices-powered-by-lithium-ion-batteries/ Here is a link to the Australian group who is keeping tabs on EV fire incidents. Amazingly, their count is at only about 130. As Adam mentioned, that is likely underreported, but gives some impression that EVs are at least not bursting into flames everywhere with what is on the road today. https://www.evfiresafe.com/ Last but not least, here is the training course FSRI just released on li-ion battery fire and explosion hazards and a link to the tests we ran on e-bike thermal runaways. https://fsri.org/research-update/online-training-available-science-fire-and-explosion-hazards-lithium-ion-batteries https://fsri.org/research/examining-fire-safety-hazards-lithium-ion-battery-powered-e-mobility-devices-homes This episode was recorded on January 25, 2023 in Phoenix, AZ. Feb 4, 2023

Morning Majlis
Sharjah Sat 1 launches into space tonight (3.01.23)

Morning Majlis

Play Episode Listen Later Jan 3, 2023 13:32


Eng. Mohamed BinAshour, Research Engineer at Sharjah Academy for Astronomy, Space Sciences and Technology, joins us on the Morning Majlis to discuss what the Sharjah Sat 1 space mission will entail.

The Nonlinear Library
EA - Reflections on my 5-month AI alignment upskilling grant by Jay Bailey

The Nonlinear Library

Play Episode Listen Later Dec 28, 2022 12:44


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reflections on my 5-month AI alignment upskilling grant, published by Jay Bailey on December 28, 2022 on The Effective Altruism Forum. Five months ago, I received a grant from the Long Term Future Fund to upskill in AI alignment. As of a few days ago, I was invited to Berkeley for two months of full-time alignment research under Owain Evans's stream in the SERIMATS program. This post is about how I got there. The post is partially a retrospective for myself, and partially a sketch of the path I took so that others can decide if it's right for them. This post was written relatively quickly - I'm happy to answer more questions via PM or in the comments. Summary I was a software engineer for 3-4 years with little to no ML experience before I was accepted for my grant. I did a bunch of stuff around fundamental ML maths, understanding RL and transformers, and improving my alignment understanding. Having tutors, getting feedback on my plan early on, and being able to pivot as I went were all very useful for not getting stuck doing stuff that was no longer useful. I probably wouldn't have gotten into SERIMATS without that ability to pivot midway through. After SERIMATS, I want to finish off the last part of the grant while I find work, then start work as a Research Engineer at an alignment organisation. If in doubt, put in an application! My Background My background is more professional and less academic than most. Until I was 23, I didn't do much of anything - then I got a Bachelor of Computer Science from a university ranked around 1,000th, with little maths and no intent to study ML at all, let alone alignment. It was known for strong graduate employment though, so I went straight into industry from there. I had 3.5 years of software engineering experience (1.5 at Amazon, 2 as a senior engineer at other jobs) before applying for the LTFF grant. I had no ML experience at the time, besides being halfway through doing the fast.ai course in my spare time. Not going to lie, seeing how many Top-20 university PhD students I was sharing my cohort with (At least three!) was a tad intimidating - but I made it in the end, so industry experience clearly has a role to play as well. Grant The details of the grant are one of the main reasons I wrote this - I've been asked for 1:1's and details on this at least three times in the last six months, and if you get asked something from at least three different people, it might be worth writing it up and sharing it around. Firstly, the process. Applying for the grant is pretty painless. As long as you have a learning plan already in place, the official guidance is to take 1-2 hours on it. I took a bit longer, polishing it more than required. I later found out my plan was more detailed than it probably had to be. In retrospect, I think my level of detail was good, but I spent too much time editing. AI Safety Support helped me with administration. The main benefit that I got from it was that the tutoring and compute money was tax free (since I didn't get the money personally, rather I used a card they provided me) and I didn't have to worry about tax withholding throughout the year. Secondly, the money. I agonized over how much money to ask for. This took me days. 
I asked myself how much I really needed, then I asked myself how much I would actually accept gladly with no regrets, then I balked at those numbers, even knowing that most people ask for too little, not too much. I still balk at the numbers, to be honest, but it would have been so much easier to write this if I had other grants to go off. So, in the interest of transparency and hopefully preventing someone else going through the same level of anguish, I'm sharing the full text of my grant request, including money requested (in Australian dollars, but you can always convert it) here....

The Nonlinear Library
LW - Reflections on my 5-month alignment upskilling grant by Jay Bailey

The Nonlinear Library

Play Episode Listen Later Dec 27, 2022 12:42


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reflections on my 5-month alignment upskilling grant, published by Jay Bailey on December 27, 2022 on LessWrong. Five months ago, I received a grant from the Long Term Future Fund to upskill in AI alignment. As of a few days ago, I was invited to Berkeley for two months of full-time alignment research under Owain Evans's stream in the SERIMATS program. This post is about how I got there. The post is partially a retrospective for myself, and partially a sketch of the path I took so that others can decide if it's right for them. This post was written relatively quickly - I'm happy to answer more questions via PM or in the comments. Summary I was a software engineer for 3-4 years with little to no ML experience before I was accepted for my grant. I did a bunch of stuff around fundamental ML maths, understanding RL and transformers, and improving my alignment understanding. Having tutors, getting feedback on my plan early on, and being able to pivot as I went were all very useful for not getting stuck doing stuff that was no longer useful. I probably wouldn't have gotten into SERIMATS without that ability to pivot midway through. After SERIMATS, I want to finish off the last part of the grant while I find work, then start work as a Research Engineer at an alignment organisation. If in doubt, put in an application! My Background My background is more professional and less academic than most. Until I was 23, I didn't do much of anything - then I got a Bachelor of Computer Science from a university ranked around 1,000th, with little maths and no intent to study ML at all, let alone alignment. It was known for strong graduate employment though, so I went straight into industry from there. I had 3.5 years of software engineering experience (1.5 at Amazon, 2 as a senior engineer at other jobs) before applying for the LTFF grant. I had no ML experience at the time, besides being halfway through doing the fast.ai course in my spare time. Not going to lie, seeing how many Top-20 university PhD students I was sharing my cohort with (At least three!) was a tad intimidating - but I made it in the end, so industry experience clearly has a role to play as well. Grant The details of the grant are one of the main reasons I wrote this - I've been asked for 1:1's and details on this at least three times in the last six months, and if you get asked something from at least three different people, it might be worth writing it up and sharing it around. Firstly, the process. Applying for the grant is pretty painless. As long as you have a learning plan already in place, the official guidance is to take 1-2 hours on it. I took a bit longer, polishing it more than required. I later found out my plan was more detailed than it probably had to be. In retrospect, I think my level of detail was good, but I spent too much time editing. AI Safety Support helped me with administration. The main benefit that I got from it was that the tutoring and compute money was tax free (since I didn't get the money personally, rather I used a card they provided me) and I didn't have to worry about tax withholding throughout the year. Secondly, the money. I agonized over how much money to ask for. This took me days. 
I asked myself how much I really needed, then I asked myself how much I would actually accept gladly with no regrets, then I balked at those numbers, even knowing that most people ask for too little, not too much. I still balk at the numbers, to be honest, but it would have been so much easier to write this if I had other grants to go off. So, in the interest of transparency and hopefully preventing someone else going through the same level of anguish, I'm sharing the full text of my grant request, including money requested (in Australian dollars, but you can always convert it) here. Personal embarrassmen...

How to Disaster
008: How Building Materials Make a Difference in Disasters with Alister Watt and Daniel Gorham

How to Disaster

Play Episode Listen Later Nov 7, 2022 56:28


"We know that we're not powerless against wildfires. It's a question of, 'what are the steps? What are the actions that really make a difference?'" -Alister Watt   "Wildfire doesn't acknowledge boundaries— we need to be thinking about resilience and preparedness with that in mind." -Daniel Gorham Episode Description:  When it comes to disasters, building materials are not just useful—they're essential. Every time we build something that can help prevent fires from happening or spreading, we're influencing how wildfire impacts our properties and the community we live in.    The Insurance Institute for Business & Home Safety (IBHS), an independent, 501 (C) nonprofit scientific research and communications organization, works to provide practical solutions to build more resilient communities.    This week, we are joined by the organization's Chief Product Officer Alister Watt and Research Engineer, Daniel Gorham to talk about how our choice of building materials can make a difference during a disaster. They also help us understand the trend of fires in the wildlands vs urban communities, how the layout of an environment influences the extent and gravity of a disaster's impact, why we need to change our view of aesthetics, what ember casts are and how to prevent them from entering our homes, and how Wildfire Prepared, a project of IBHS, can help us get better insured. Connect with After The Fire USA:  Website: https://afterthefireusa.org/ Facebook: https://www.facebook.com/RebuildNorthBay/ Twitter: https://twitter.com/AfterTheFireUSA Instagram: https://www.instagram.com/afterthefireusa/ LinkedIn: https://www.linkedin.com/company/atf3r YouTube: https://www.youtube.com/channel/UCuTefxZgWUJkDVoZGZQpxgQ Highlights: 02:13 Meet Alister and Daniel and Their Work with IBHS 06:29 Understanding the Trend: Wildland vs Urban Fires 12:09 Rural Community Fire vs Condensed Fire 17:42 The Role of Insurance in Disaster Preparedness 21:04 A Change Needed in Our View of Aesthetics 24:50 It's All About Ember 30:24 How to Prevent Embers From Entering Our Homes 39:37 We Are Not Powerless Against Wildfires 45:13 Recreating Hazards: What Happens "During"  49:53 Spend More to Build It Right 52:58 The Benefits of Being Prepared

The Nonlinear Library
EA - I'm interviewing prolific AI safety researcher Richard Ngo (now at OpenAI and previously DeepMind). What should I ask him? by Robert Wiblin

The Nonlinear Library

Play Episode Listen Later Sep 29, 2022 1:24


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I'm interviewing prolific AI safety researcher Richard Ngo (now at OpenAI and previously DeepMind). What should I ask him?, published by Robert Wiblin on September 29, 2022 on The Effective Altruism Forum. Next week I'm interviewing Richard Ngo, current AI (Safety) Governance Researcher at OpenAI and previous Research Engineer at DeepMind. Before that he was doing a PhD in the Philosophy of Machine Learning at Cambridge, on the topic of "to what extent is the development of artificial intelligence analogous to the biological and cultural evolution of human intelligence?" He is focused on making the development and deployment of AGI more likely to go well and less likely to go badly. Richard is also a highly prolific contributor to online discussion of AI safety in a range of places, for instance: Moral strategies at different capability levels on his blog Thinking Complete The alignment problem from a deep learning perspective on the EA Forum Some conceptual alignment research projects on the AI Alignment Forum Richard Ngo and Eliezer Yudkowsky politely debating AI Safety on Less Wrong The AGI Safety from First Principles education series And on his Twitter What should I ask him? Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

B Shifter
FSRI's Study of Fire Service Residential Home Size-up and Search & Rescue Operations

B Shifter

Play Episode Listen Later Sep 14, 2022 52:48


You can find information on the FSRI study here. This three-year Research and Development Project examines fireground size-up measures and search and rescue operations as part of a coordinated fire attack on a residential structure. Our guest is Craig Weinschenk, Research Engineer. Craig is a Research Engineer with UL's Fire Safety Research Institute (FSRI). He holds a Master of Science and a doctorate in mechanical engineering from The University of Texas-Austin. During his graduate studies, Craig worked with the Austin Fire Department on analyzing firefighter compliance with changes in standard operating guidelines and on characterizing the impact of forced ventilation on room-scale fires. Since graduation, he has conducted full-scale residential fire experiments designed to characterize the thermal environment within the structure as well as exposed firefighter personal protective equipment. Craig is also a developer of NIST's Fire Dynamics Simulator (FDS) Version 6. He has used FDS to study the fire dynamics and thermal environment of fires that resulted in line-of-duty deaths and injuries to firefighters. Nick Brunacini is the leader of B Shifter and Blue Card. He joined the Phoenix Fire Department (PFD) in 1980. He served seven years as a firefighter on different engine companies before promoting to captain and working nine years on a ladder company. Nick served as a battalion chief for five years, and in 2001, he was promoted to shift commander. He then spent the next five years developing and teaching the Blue Card curriculum at the PFD's Command Training Center. His last assignment with the PFD was south shift commander; he retired from the department in 2009. Nick is the author of “B-Shifter—A Firefighter's Memoir.” He also co-wrote “Command Safety.” Contact Nick Brunacini at nick@bshifter.com Josh Blum is the Blue Card Program Manager and has been in the fire service since 1993. He began his service as a volunteer firefighter before accepting a full-time career position. Josh served as the deputy chief of operations for the Loveland Symmes (Ohio) Fire Department, where he measurably increased the department's training and operational effectiveness. Josh retired from Loveland Symmes in 2020 and now works for Blue Card as the program manager in all aspects of curriculum development and program delivery. Josh continues to work directly with departments throughout Southern Ohio assisting with training and fire department operations. Josh has developed and managed many fire service training programs throughout his career. He is a graduate of the Ohio Fire Executive Program. Contact Josh Blum at josh@bshifter.com Jeff King is our Professional Development Manager, Lead Instructor and a Houston Fire Department member. Jeffery L. King began his career with the Houston Fire Department in 2001 and currently serves as a safety officer. As a dedicated student of the fire service, Jeff earned a Bachelor of Science in fire science and a Master of Science in emergency services management. He has also earned the professional designations of fire service chief executive officer through the Mays School of Business at Texas A&M University and chief training officer through the Center for Public Safety Excellence. Jeff is a graduate of the City of Houston's Leadership Institute Program. He lives in Spring, Texas, with his wife, Tracy, and their daughters Savannah and Madigan. You can contact Jeff at jeffery@bshifter.com

Know It All: A Podcast For Construction
#7: Collaboration & New Ways of Working in the Construction Industry from Stanford University's PBL Lab

Know It All: A Podcast For Construction

Play Episode Listen Later Sep 14, 2022 40:41


As the construction industry prioritizes productivity and process growth and innovation, it's critical to engage the next great minds of the industry – students – to share their thoughts on what the future looks like. The Project Based Learning (PBL) Lab at the Center for Integrated Facility Engineering (CIFE) at Stanford University is doing just that by giving students the opportunity to experience, practice and innovate construction methodologies, practices and processes through real-world experiences. This program, started by Dr. Renate Fruchter, enables student teams with mentors from the construction space to push the envelope via a project-based learning course that allows them to use their skill set in a practical manner, arming them for their future. In this episode, Collaboration & New Ways of Working in the Construction Industry from Stanford University's PBL Lab, Justin Schwaiger, Director of Customer Success at Manufacton, and Dr. Fruchter, founding director of the PBL Lab, lecturer in the Dept. of Civil and Environmental Engineering, and Sr. Research Engineer thrust leader of “Collaboration Technologies” at CIFE at Stanford, discuss this program, its benefits, and what the industry at large can learn from it. To learn more about the course, please visit PBL Lab's website. To explore some of the projects discussed in the episode, please visit PBL Lab's Project Page. The video detailing the Atlantic2020 Project Evolution Story is on YouTube here.  

BUILDTank / buildCAST
#20-2022 Dan Gorham and the new Insurance Institute for Business and Home Safety Wildfire Prepared Home Program

BUILDTank / buildCAST

Play Episode Listen Later Sep 12, 2022 66:07


In episode #16 of 2022 I spoke with Fred Malik about the Fortified Construction Program. The FORTIFIED construction method is a voluntary construction standard and program designed to help protect homes against severe weather events, primarily wind-driven damage. FORTIFIED construction was created from decades of research that the Insurance Institute for Business & Home Safety (IBHS) has conducted. Their lab allows for the building and testing of full-size houses in a controlled environment. In this episode I am learning about a new IBHS program that has just been released and is of great interest to me because of my work as the Boulder County Marshall Fire New Homes Building Advisor. I spoke with Dan Gorham, a Research Engineer focusing on wildfire and its impact on the built environment at the Insurance Institute for Business and Home Safety. Like the FORTIFIED program's focus on severe wind events, IBHS has focused its attention on wildfires and their impact on homes. They have recently launched a new Wildfire Prepared Home Program, a research-driven program designed to meaningfully reduce risk and better protect homes against wildfire. Dan was super informative, and a pleasure to speak with, as he walked me through the IBHS research findings and the new Wildfire Prepared Home Program. Dan Gorham on LinkedIn Wildfire Prepared Home Program Insurance Institute for Business & Home Safety FireWise USA Colorado Firewise USA Fortified Program

The Nonlinear Library
EA - Levelling Up in AI Safety Research Engineering by Gabriel Mukobi

The Nonlinear Library

Play Episode Listen Later Sep 2, 2022 27:12


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Levelling Up in AI Safety Research Engineering, published by Gabriel Mukobi on September 2, 2022 on The Effective Altruism Forum. Summary: A level-based guide for independently up-skilling in AI Safety Research Engineering that aims to give concrete objectives, goals, and resources to help anyone go from zero to hero. Cross-posted to LessWrong. View a pretty Google Docs version here. Introduction I think great career guides are really useful for guiding and structuring the learning journey of people new to a technical field like AI Safety. I also like role-playing games. Here's my attempt to use levelling frameworks and break up one possible path from zero to hero in Research Engineering for AI Safety (e.g. jobs with the “Research Engineer” title) through objectives, concrete goals, and resources. I hope this kind of framework makes it easier to see where one is on this journey, how far they have to go, and some options to get there. I'm mostly making this to sort out my own thoughts about my career development and how I'll support other students through Stanford AI Alignment, but hopefully, this is also useful to others! Note that I assume some interest in AI Safety Research Engineering—this guide is about how to up-skill in Research Engineering, not why (though working through it should be a great way to test your fit). Also note that there isn't much abstract advice in this guide (see the end for links to guides with advice), and the goal is more to lay out concrete steps you can take to improve. For each level, I describe the general capabilities of someone at the end of that level, some object-level goals to measure that capability, and some resources to choose from that would help get there. The categories of resources within a level are listed in the order you should progress, and resources within a category are roughly ordered by quality. There's some redundancy, so I would recommend picking and choosing between the resources rather than doing all of them. Also, if you are a student and your university has a good class on one of the below topics, consider taking that instead of one of the online courses I listed. As a very rough estimate, I think each level should take at least 100-200 hours of focused work, for a total of 700-1400 hours. At 10 hours/week (quarter-time), that comes to around 16-32 months of study but can definitely be shorter (e.g. if you already have some experience) or longer (if you dive more deeply into some topics)! I think each level is about evenly split between time spent reading/watching and time spent building/testing, with more reading earlier on and more building later. Confidence: mid-to-high. I am not yet an AI Safety Research Engineer (but I plan to be)—this is mostly a distillation of what I've read from other career guides (linked at the end) and talked about with people working on AI Safety. I definitely haven't done all these things, just seen them recommended. I don't expect this to be the “perfect” way to prepare for a career in AI Safety Research Engineering, but I do think it's a very solid way. Level 1: AI Safety Fundamentals Objective‏‏‎ ‎ You are familiar with the basic arguments for existential risks due to advanced AI, models for forecasting AI advancements, and some of the past and current research directions within AI alignment/safety. 
Note: You should be coming back to these readings and keeping up to date with the latest work in AI Safety throughout your learning journey. It's okay if you don't understand everything on your first try—Level 1 kind of happens all the time. Goals: Complete an AI Safety introductory reading group fellowship. Write a reflection distilling, recontextualizing, or expanding upon some AI Safety topic and share it with someone for feedback. Figure out how convinced you are of the arg...

Alignment Newsletter Podcast
Alignment Newsletter #172: Sorry for the long hiatus!

Alignment Newsletter Podcast

Play Episode Listen Later Jul 5, 2022 5:52


Recorded by Robert Miles: http://robertskmiles.com More information about the newsletter here: https://rohinshah.com/alignment-newsletter/ YouTube Channel: https://www.youtube.com/channel/UCfGGFXwKpr-TJ5HfxEFaFCg   Sorry for the long hiatus! I was really busy over the past few months and just didn't find time to write this newsletter. (Realistically, I was also a bit tired of writing it and so lacked motivation.) I'm intending to go back to writing it now, though I don't think I can realistically commit to publishing weekly; we'll see how often I end up publishing. For now, have a list of all the things I should have advertised to you whose deadlines haven't already passed.   NEWS Survey on AI alignment resources (Anonymous) (summarized by Rohin): This survey is being run by an outside collaborator in partnership with the Centre for Effective Altruism (CEA). They ask that you fill it out to help field builders find out which resources you have found most useful for learning about and/or keeping track of the AI alignment field. Results will help inform which resources to promote in the future, and what type of resources we should make more of. Announcing the Inverse Scaling Prize ($250k Prize Pool) (Ethan Perez et al) (summarized by Rohin): This prize with a $250k prize pool asks participants to find new examples of tasks where pretrained language models exhibit inverse scaling: that is, models get worse at the task as they are scaled up. Notably, you do not need to know how to program to participate: a submission consists solely of a dataset giving at least 300 examples of the task. Inverse scaling is particularly relevant to AI alignment, for two main reasons. First, it directly helps understand how the language modeling objective ("predict the next word") is outer misaligned, as we are finding tasks where models that do better according to the language modeling objective do worse on the task of interest. Second, the experience from examining inverse scaling tasks could lead to general observations about how best to detect misalignment. $500 bounty for alignment contest ideas (Akash) (summarized by Rohin): The authors are offering a $500 bounty for producing a frame of the alignment problem that is accessible to smart high schoolers/college students and people without ML backgrounds. (See the post for details; this summary doesn't capture everything well.) Job ad: Bowman Group Open Research Positions (Sam Bowman) (summarized by Rohin): Sam Bowman is looking for people to join a research center at NYU that'll focus on empirical alignment work, primarily on large language models. There are a variety of roles to apply for (depending primarily on how much research experience you already have). Job ad: Postdoc at the Algorithmic Alignment Group (summarized by Rohin): This position at Dylan Hadfield-Menell's lab will lead the design and implementation of a large-scale Cooperative AI contest to take place next year, alongside collaborators at DeepMind and the Cooperative AI Foundation. Job ad: AI Alignment postdoc (summarized by Rohin): David Krueger is hiring for a postdoc in AI alignment (and is also hiring for another role in deep learning). The application deadline is August 2. Job ad: OpenAI Trust & Safety Operations Contractor (summarized by Rohin): In this remote contractor role, you would evaluate submissions to OpenAI's App Review process to ensure they comply with OpenAI's policies. Apply here by July 13, 5pm Pacific Time. 
Job ad: Director of CSER (summarized by Rohin): Application deadline is July 31. Quoting the job ad: "The Director will be expected to provide visionary leadership for the Centre, to maintain and enhance its reputation for cutting-edge research, to develop and oversee fundraising and new project and programme design, to ensure the proper functioning of its operations and administration, and to lead its endeavours to secure longevity for the Centre within the University." Job ads: Redwood Research (summarized by Rohin): Redwood Research works directly on AI alignment research, and hosts and operates Constellation, a shared office space for longtermist organizations including ARC, MIRI, and Open Philanthropy. They are hiring for a number of operations and technical roles. Job ads: Roles at the Fund for Alignment Research (summarized by Rohin): The Fund for Alignment Research (FAR) is a new organization that helps AI safety researchers, primarily in academia, pursue high-impact research by hiring contractors. It is currently hiring for Operation Manager, Research Engineer, and Communication Specialist roles. Job ads: Encultured AI (summarized by Rohin): Encultured AI is a new for-profit company with a public benefit mission: to develop technologies promoting the long-term survival and flourishing of humanity and other sentient life. They are hiring for a Machine Learning Engineer and an Immersive Interface Engineer role. Job ads: Fathom Radiant (summarized by Rohin): Fathom Radiant is a public benefit corporation that aims to build a new type of computer which they hope to use to support AI alignment efforts. They have several open roles, including (but not limited to) Scientists / Engineers, Builders and Software Engineer, Lab.

The Nonlinear Library
AF - DeepMind is hiring for the Scalable Alignment and Alignment Teams by Rohin Shah

The Nonlinear Library

Play Episode Listen Later May 13, 2022 15:10


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: DeepMind is hiring for the Scalable Alignment and Alignment Teams, published by Rohin Shah on May 13, 2022 on The AI Alignment Forum. We are hiring for several roles in the Scalable Alignment and Alignment Teams at DeepMind, two of the subteams of DeepMind Technical AGI Safety trying to make artificial general intelligence go well. In brief, The Alignment Team investigates how to avoid failures of intent alignment, operationalized as a situation in which an AI system knowingly acts against the wishes of its designers. Alignment is hiring for Research Scientist and Research Engineer positions. The Scalable Alignment Team (SAT) works to make highly capable agents do what humans want, even when it is difficult for humans to know what that is. This means we want to remove subtle biases, factual errors, or deceptive behaviour even if they would normally go unnoticed by humans, whether due to reasoning failures or biases in humans or due to very capable behaviour by the agents. SAT is hiring for Research Scientist - Machine Learning, Research Scientist - Cognitive Science, Research Engineer, and Software Engineer positions. We elaborate on the problem breakdown between Alignment and Scalable Alignment next, and discuss details of the various positions. “Alignment” vs “Scalable Alignment” Very roughly, the split between Alignment and Scalable Alignment reflects the following decomposition: Generate approaches to AI alignment – Alignment Team Make those approaches scale – Scalable Alignment Team In practice, this means the Alignment Team has many small projects going on simultaneously, reflecting a portfolio-based approach, while the Scalable Alignment Team has fewer, more focused projects aimed at scaling the most promising approaches to the strongest models available. Scalable Alignment's current approach: make AI critique itself Imagine a default approach to building AI agents that do what humans want: Pretrain on a task like “predict text from the internet”, producing a highly capable model such as Chinchilla or Flamingo. Fine-tune into an agent that does useful tasks, as evaluated by human judgements. There are several ways this could go wrong: Humans are unreliable: The human judgements we train against could be flawed: we could miss subtle factual errors, use biased reasoning, or have insufficient context to evaluate the task. The agent's reasoning could be hidden: We want to know not just what the system is doing but why, both because that might reveal something about what that we don't like, and because we expect good reasoning to better generalize to other situations. Even if the agent is reasoning well, it could fail in other situations: Even if the reasoning is correct this time, the AI could fail to generalize correctly to other situations. Our current plan to address these problem is (in part): Give humans help in supervising strong agents: On the human side, provide channels for oversight and advice from peers, experts in various domains, and broader society. On the ML side, agents should explain their behaviour and reasoning, argue against themselves when wrong, and cite relevant evidence. 
Align explanations with the true reasoning process of the agent: Ensure that agents are able and incentivized to show their reasoning to human supervisors, either by making reasoning explicit if possible or via methods for interpretability and eliciting latent knowledge. Red team models to exhibit failure modes that don't occur in normal use. We believe none of these pieces are sufficient by themselves: (1) without (2) can be rationalization, where an agent decides what to do and produces an explanation after the fact that justifies its answer. (2) without (1) doesn't scale: The full reasoning trace of the agent might be enormous, it might be terabytes of data even with com...

The Nonlinear Library
EA - DeepMind is hiring for the Scalable Alignment and Alignment Teams by Rohin Shah

The Nonlinear Library

Play Episode Listen Later May 13, 2022 15:05


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: DeepMind is hiring for the Scalable Alignment and Alignment Teams, published by Rohin Shah on May 13, 2022 on The Effective Altruism Forum. We are hiring for several roles in the Scalable Alignment and Alignment Teams at DeepMind, two of the subteams of DeepMind Technical AGI Safety trying to make artificial general intelligence go well. In brief, The Alignment Team investigates how to avoid failures of intent alignment, operationalized as a situation in which an AI system knowingly acts against the wishes of its designers. Alignment is hiring for Research Scientist and Research Engineer positions. The Scalable Alignment Team (SAT) works to make highly capable agents do what humans want, even when it is difficult for humans to know what that is. This means we want to remove subtle biases, factual errors, or deceptive behaviour even if they would normally go unnoticed by humans, whether due to reasoning failures or biases in humans or due to very capable behaviour by the agents. SAT is hiring for Research Scientist - Machine Learning, Research Scientist - Cognitive Science, Research Engineer, and Software Engineer positions. We elaborate on the problem breakdown between Alignment and Scalable Alignment next, and discuss details of the various positions. “Alignment” vs “Scalable Alignment” Very roughly, the split between Alignment and Scalable Alignment reflects the following decomposition: Generate approaches to AI alignment – Alignment Team Make those approaches scale – Scalable Alignment Team In practice, this means the Alignment Team has many small projects going on simultaneously, reflecting a portfolio-based approach, while the Scalable Alignment Team has fewer, more focused projects aimed at scaling the most promising approaches to the strongest models available. Scalable Alignment's current approach: make AI critique itself Imagine a default approach to building AI agents that do what humans want: Pretrain on a task like “predict text from the internet”, producing a highly capable model such as Chinchilla or Flamingo. Fine-tune into an agent that does useful tasks, as evaluated by human judgements. There are several ways this could go wrong: Humans are unreliable: The human judgements we train against could be flawed: we could miss subtle factual errors, use biased reasoning, or have insufficient context to evaluate the task. The agent's reasoning could be hidden: We want to know not just what the system is doing but why, both because that might reveal something about what that we don't like, and because we expect good reasoning to better generalize to other situations. Even if the agent is reasoning well, it could fail in other situations: Even if the reasoning is correct this time, the AI could fail to generalize correctly to other situations. Our current plan to address these problem is (in part): Give humans help in supervising strong agents: On the human side, provide channels for oversight and advice from peers, experts in various domains, and broader society. On the ML side, agents should explain their behaviour and reasoning, argue against themselves when wrong, and cite relevant evidence. 
Align explanations with the true reasoning process of the agent: Ensure that agents are able and incentivized to show their reasoning to human supervisors, either by making reasoning explicit if possible or via methods for interpretability and eliciting latent knowledge. Red team models to exhibit failure modes that don't occur in normal use. We believe none of these pieces are sufficient by themselves: (1) without (2) can be rationalization, where an agent decides what to do and produces an explanation after the fact that justifies its answer. (2) without (1) doesn't scale: The full reasoning trace of the agent might be enormous, it might be terabytes of data even wi...

ASTRO Journals
Red Journal Podcast March 15, 2022

ASTRO Journals

Play Episode Listen Later Feb 18, 2022 19:05


Radiation-Induced Senescence. Dr. Sue Yom, Editor-in-Chief, hosts a discussion of radiation-induced senescence with Dr. Marjan Boerma, Radiation Biology Section Editor at the Red Journal and Professor and Director of the Division of Radiation Health at the University of Arkansas for Medical Sciences, and Dr. Agnès François, Research Engineer in the Radiobiology of Medical Exposure Laboratory at the Institute for Radiological Protection and Nuclear Safety in Fontenay-aux-Roses, France, specializing in mouse models investigating the role of macrophages and bronchiolar epithelium in acute and late injury after focal lung irradiation. They discuss a paper recently published in the Red Journal, "Deciphering the Dynamic Molecular Program of Radiation-Induced Endothelial Senescence."

The Fatherhood Experience: Fitness, Family, Finance & Freedom
EP 134 - Living Longer at Your Highest Level with Chris Burres

The Fatherhood Experience: Fitness, Family, Finance & Freedom

Play Episode Listen Later Apr 19, 2021 57:31


In this episode, Jason interviews Chris Burres! He is a Research Engineer and Scientist whose primary mission is to help people live longer, healthier, and pain-free lives. Burres has a diverse background in the field of inventions and manufacturing, with a BS in Mechanical Engineering from the University of Houston. Burres is the co-owner of a company that manufactures a unique product containing a Nobel Prize-winning molecule that has been shown to extend the lifespan of test subjects by 90%. Burres discusses his personal health journey, and how he knew he needed to make a change. He then discusses his scientific research and what he has found in the molecule that can extend your lifespan. Burres then outlines the benefits of his MyVitalC supplement, and describes in detail the positive impact it can have on your life. Stay accountable for your health with community: check out our FREE Facebook Group - Discussions with Dads: join.discussionswithdadsgroup.com Book your free 15 min health consultation call here: https://calendly.com/dadbodhealth/health-consult Learn More About Chris Burres: Website | Institute. Stay Connected with Jason Priest: Website | Facebook | Instagram | LinkedIn. About The Dad Bod Pod: A podcast for men looking to improve their health and re-define their Dad Bods. A place for men to learn, grow and live the healthiest life possible. Don't forget to follow us on IG @thefatherhoodexperience!