Podcasts about DeepMind

  • 563 PODCASTS
  • 1,156 EPISODES
  • 42m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • Nov 25, 2022 LATEST
Popularity of the DeepMind topic by year, 2015–2022 (chart)

Latest podcast episodes about DeepMind

Not Boring
Anton Teaches Packy AI | Ep 2 | Chinchilla

Not Boring

Nov 25, 2022 · 62:45


We're back! In Episode 2, Anton Teaches Packy about Deepmind's March 2022 paper, Training Compute-Optimal Large Language Models, or as it's more commonly known, Chinchilla. Prior to Chinchilla, the best way to improve the performance of LLMs was thought to be by scaling up the size of the model. As a result, the largest models now have over 500 billion parameters. But there are only so many GPUs in the world, and throwing compute at the problem is expensive and energy intensive. In this paper, Deepmind found that the optimal way to scale an LLM is actually by scaling size (parameters) and training (data) proportionally. Given the race for size, today's models are plenty big but need a lot more data. In this conversation, we go deep on the paper itself, but we also zoom out to talk about the politics of AI, when AGI is going to hit, where to get more data, and why AI won't take our jobs. This one gets a lot more philosophical than our first episode as we explore the implications of Chinchilla and LLMs more generally. If you enjoyed this conversation, subscribe for more. We're going to try to release one episode per week, and we want to make this the best way to get a deeper understanding of the mind-blowing progress happening in AI and what it means for everything we do as humans. LINKS: Training Compute-Optimal Large Language Models: https://arxiv.org/abs/2203.15556 chinchilla's wild implications: https://www.lesswrong.com/posts/6Fpvc... Scaling Laws for Neural Language Models (Kaplan et al): https://arxiv.org/abs/2001.08361 --- Send in a voice message: https://anchor.fm/notboring/message
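
To make the proportional-scaling claim concrete, here is a rough Python sketch (my own illustration, not something from the episode). It leans on two commonly cited approximations: training compute C ≈ 6·N·D for N parameters and D training tokens, and a compute-optimal ratio of roughly 20 tokens per parameter; the paper fits these constants empirically, so treat the outputs as order-of-magnitude estimates.

```python
# Hedged sketch of Chinchilla-style compute-optimal scaling.
# Assumptions (labeled, not taken from the episode): C ~= 6 * N * D, and the
# compute-optimal point puts N and D in a roughly fixed ratio (~20 tokens per
# parameter), so both grow like the square root of the compute budget.

def compute_optimal_allocation(flops_budget: float, tokens_per_param: float = 20.0):
    """Split a compute budget between parameters (N) and training tokens (D)."""
    # C ~= 6 * N * D with D ~= tokens_per_param * N  =>  N ~= sqrt(C / (6 * tokens_per_param))
    n_params = (flops_budget / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

if __name__ == "__main__":
    # Roughly Chinchilla's training budget (~5.8e23 FLOPs) recovers ~70B params / ~1.4T tokens.
    n, d = compute_optimal_allocation(5.8e23)
    print(f"~{n / 1e9:.0f}B parameters, ~{d / 1e12:.1f}T tokens")
```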

Intelligent Design the Future
Orphan Proteins Spell Trouble for AlphaFold 2

Intelligent Design the Future

Nov 16, 2022 · 28:35


On this ID the Future, philosopher of biology Paul Nelson further explores AlphaFold 2, a cutting edge computer program from Google's DeepMind designed to rapidly suss out important secrets in the realm of proteins, indispensable molecular biological workhorses that come in thousands of different shapes and sizes. Nelson enthuses about AlphaFold 2 but also explains why he is convinced that AlphaFold's creators have hit a series of immovable obstacles. The watchword here—orphans. Tune in to learn what these mischievous orphan proteins are about, and what they suggest for AlphaFold, evolution, and intelligent design.

Intelligent Design the Future
Powerful Protein Folding Algorithm AlphaFold Foiled by Singletons

Intelligent Design the Future

Nov 14, 2022 · 29:43


Today's ID the Future spotlights AlphaFold, an artificial intelligence program in the news for its impressive breakthroughs at predicting a protein's 3D structure from its amino acid sequence. Philosopher of Biology Paul Nelson walks listeners through the importance of this "amazing breakthrough," as he describes it in a recent Evolution News article; but don't uncork the champagne bottles just yet. The reason, according to Nelson, is that while proteins, protein sequences, and protein folding promise to reveal much that is still mysterious in molecular biology, we now know that biological information involves far more than just an organism's proteome—that is, far more than the full suite of proteins expressed by an organism. Nelson uses analogies to manmade machines and cognates…

Learning Bayesian Statistics
#71 Artificial Intelligence, Deepmind & Social Change, with Julien Cornebise

Learning Bayesian Statistics

Nov 14, 2022 · 65:08


Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch! This episode will show you different sides of the tech world. The one where you research and apply algorithms, where you get super excited about image recognition and AI-generated art. And the one where you support social change actors — aka the "AI for Good" movement. My guest for this episode is, quite naturally, Julien Cornebise. Julien is an Honorary Associate Professor at UCL. He was an early researcher at DeepMind where he designed its early algorithms. He then worked as a Director of Research at ElementAI, where he built and led the London office and "AI for Good" unit. After his theoretical work on Bayesian methods, he had the privilege to work with the NHS to diagnose eye diseases; with Amnesty International to quantify abuse on Twitter and find destroyed villages in Darfur; with Forensic Architecture to identify teargas canisters used against civilians. Other than that, Julien is an avid reader, and loves dark humor and picking up his son from school at the "hour of the daddies and the mommies". Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ ! Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, Adam Bartonicek, William Benton, James Ahloy, Robin Taylor, Thomas Wiecki, Chad Scherrer, Nathaniel Neitzke, Zwelithini Tunyiswa, Elea McDonnell Feit, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Joshua Duncan, Ian Moran, Paul Oreto, Colin Caprani, George Ho, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Luis Iberico, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Aaron Jones, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, David Haas, Robert Yolken, Or Duek, Pavel Dusek and Paul Cox. Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;)
Links from the show:
Julien's website: https://cornebise.com/julien/
Julien on Twitter: https://twitter.com/JCornebise
Julien on LinkedIn:

The Nonlinear Library
AF - Some advice on independent research by Marius Hobbhahn

The Nonlinear Library

Nov 8, 2022 · 15:36


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some advice on independent research, published by Marius Hobbhahn on November 8, 2022 on The AI Alignment Forum. I have been doing independent research in addition to my Ph.D. for roughly a year now. For the next 6 months, I'll take a break from my Ph.D. and plan to do AI safety research full-time. I had chats with many people about independent research in the past, e.g. on EAGs or because 80K has connected me with people thinking about pursuing independent research. I had some great experiences with independent research but not everyone does. I think the variance for independent research is large and I'm worried that people get disheartened by bad experiences. So here are some considerations in which situations independent research might be a good idea and some tips that will hopefully improve your experience. I'd like to thank Magdalena Wache and Tilman Räuker for their feedback. TL;DR: At first glance, there is a bit of a paradoxical nature to independent research. If someone wants to pursue independent research they need a research agenda to work on. If they are able to construct a good research agenda, an existing institution often has incentives to hire them. On the flip side, if their research skills are not developed enough to be hired by an existing institution, their independent research might not be very successful. Thus, naively it would seem that there are few cases in which independent research makes sense. However, I think that there are many situations in which independent research or independent upskilling are a great option, e.g. when no established organization is working on the topic you find most promising, as a way to upskill for a job, to gain new research skills or to transition between jobs. Some tips for independent researchers include: getting feedback early on, aiming to collaborate with others and creating accountability mechanisms for yourself such as publishing your results. My most important advice for independent researchers is that you should probably be much more active than in other roles because there is less default structure and more responsibility on you. I'll mostly talk about AI safety research but many of these things probably also apply to other independent research. A perceived paradox Independent research is often presented as one of three default options for people seeking to do EA research, e.g. in AI safety: Academia, e.g. applying for Ph.D. and post-doc positions with labs that do research aligned with your goals. Research positions in industry, e.g. applying for Anthropic, Redwood Research, Deepmind, OpenAI or Conjecture. Independent research, e.g. supported by an EA grant. Doing independent research well requires a multitude of skills. The independent researcher needs to be able to set their own agenda, they require some basic research skills, self-discipline and some way of evaluating and correcting their own research. These are skills that usually don't come naturally but need to be learned and refined. In most standard career paths, e.g. within a Ph.D. or in an industry research team people have mentors who help them and ensure that they actually learn these skills. By default, independent research does not ensure that these skills are actually acquired. 
The perceived paradox is now that if someone has the skills required to do high-quality independent research, existing institutions often want to hire them. If they don't have these skills yet, the research they will produce independently is unlikely to be of high quality or conducted efficiently (unless they have mentorship or are especially talented). Thus, naively, it seems like there aren't that many situations in which independent research makes sense. However, I think there are many cases in which independent research makes a lot of sense and there there ar...

AI with AI
Drawing Outside the Box

AI with AI

Nov 4, 2022 · 33:19


Andy and Dave discuss the latest in AI-related news and research, including a bill from the EU that will make it easier for people to sue AI companies for harm or damages caused by AI-related technologies. The US Office of S&T Policy releases a Blueprint for an AI Bill of Rights, which further lays the groundwork for potential legislation. The US signs the AI Training for the Acquisition Workforce Act into law, requiring federal acquisition officials to receive training on AI, and it requires OMB to work with GSA to develop the curriculum. Various top robot companies pledge not to add weapons to their technologies and to work actively at not allowing their robots to be used for such purposes. Tesla reveals its Optimus robot at its AI Day. DARPA will hold a proposal session on 14 November for its AI Reinforcements effort. OpenAI makes DALL-E available for everybody, and Playground offers access to both DALL-E and Stable Diffusion. OpenAI also makes available the results of an NLP Community Metasurvey in conjunction with NY University, providing AI researchers' views on a variety of AI-related efforts and trends. And Nathan Benaich and Ian Hogarth release the State of AI Report 2022, which covers a summary of everything from research, politics, and safety, as well as some specific predictions for 2023. In research, DeepMind uses AlphaZero to explore matrix multiplication and discovers a slightly faster algorithm implementation for 4x4 matrices. Two research efforts look at turning text into video. Meta discusses its Make-A-Video for turning text prompts into video, leveraging text-to-image generators like DALL-E. And Google Brain discusses its Imagen Video (along with Phenaki, which produces long videos from a sequence of text prompts). Foundations of Robotics is the open-access book of the week from Damith Herath and David St-Onge. And the video of the week addresses AI and the Application of AI in Force Structure, with LtGen (ret) Groen, Dr. Sam Tangredi, and Mr. Brett Vaughan joining in on the discussion for a symposium at the US Naval Institute.

The Marketing AI Show
#23: Google Penalizes AI-Generated Content, Responsible AI Guidelines, and AI's Impact on Local News

The Marketing AI Show

Nov 2, 2022 · 43:55


This week Paul and Mike talk about three news stories and happenings in the world of artificial intelligence, and they break down their importance to marketers. In a word (or two): buckle up. Well-known marketer Neil Patel recently revealed the results of Google's latest algorithm updates on sites he owns that have AI-generated copy—and the results weren't pretty. Patel disclosed that he has "100 experimental sites that use AI-written content." He claims the sites are simply to figure out how Google perceives AI-written content, not to "game" the algorithm. Regardless of the motivation, he sure found out. Next, Boston Consulting Group (BCG) recently released its guidelines for how companies should approach AI based on its Responsible AI Leader Blueprint. BCG defines responsible AI as "developing and operating artificial intelligence systems that align with organizational values and widely accepted standards of right and wrong while achieving transformative business impact." And finally, earlier this year, the Partnership on AI did work on better understanding how AI will change the local news landscape by talking to 9 different experts in the space, including prominent media outlets and technologists. The Partnership on AI is a major nonprofit that was founded by Amazon, Facebook, Google, DeepMind, Microsoft, and IBM to research and share best practices around the development and deployment of artificial intelligence. Listen to the conversation.

The Nonlinear Library
AF - Caution when interpreting Deepmind's In-context RL paper by Sam Marks

The Nonlinear Library

Nov 1, 2022 · 7:36


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Caution when interpreting Deepmind's In-context RL paper, published by Sam Marks on November 1, 2022 on The AI Alignment Forum. Lots of people I know have had pretty strong reactions to the recent Deepmind paper, which claims to have gotten a transformer to learn an RL algorithm by training it on an RL agent's training trajectories. At first, I too was pretty shocked -- this paper seemed to provide strong evidence of a mesa-optimizer in the wild. But digging into the paper a bit more, I'm quite unimpressed and don't think that in-context RL is the correct way to interpret the experiments that the authors actually did. This post is a quick, low-effort attempt to write out my thoughts on this. Recall that in this paper, the authors pick some RL algorithm, use it to train RL agents on some tasks, and save the trajectories generated during training; then they train a transformer to autoregressively model said trajectories, and deploy the transformer on some novel tasks. So for concreteness, during training the transformer sees inputs that look like a long sequence of states, actions, and rewards ending at some state s_{n+c}, excerpted from an RL agent's training on some task (out of a set of training tasks) and spanning multiple episodes (i.e. at some point in this input trajectory, one episode ended and the next episode began). The transformer is trained to guess the action a_{n+c} that comes next. In deployment, the inputs are determined by the transformer's own selections, with the environment providing the states and rewards. The authors call this algorithmic distillation (AD). Many people I know have skimmed the paper and come away with an understanding something like: In this paper, RL agents are trained on diverse tasks, e.g. playing many different Atari games, and the resulting transcripts are used as training data for AD. Then the AD agent is deployed on a new task, e.g. playing a held-out Atari game. The AD agent is able to learn to play this novel game, which can only be explained by the model implementing a reasonably general RL algorithm. This sounds a whole lot like a mesa-optimizer. This understanding is incorrect, with two key issues. First, the training tasks used in this paper are all extremely similar to each other and to the deployment task; in fact, I think they only ought to count as different under a pathologically narrow notion of "task." And second, the tasks involved are extremely simple. Taken together, these complaints challenge the conclusion that the only way for the AD agent to do well on its deployment task is by implementing a general-purpose RL algorithm. In fact, as I'll explain in more detail below, I'd be quite surprised if it were. For concreteness, I'll focus here on one family of experiments, Dark Room, that appeared in the paper, but my complaint applies just as well to the other experiments in the paper. The paper describes the Dark Room environment as: a 2D discrete POMDP where an agent spawns in a room and must find a goal location. The agent only knows its own (x, y) coordinates but does not know the goal location and must infer it from the reward. The room size is 9 × 9, the possible actions are one step left, right, up, down, and no-op, the episode length is 20, and the agent resets at the center of the map. ... [T]he agent receives r = 1 every time the goal is reached. ... When not r = 1, then r = 0.
To be clear, Dark Room is not a single task, but an environment supporting a family of tasks, where each task corresponds to a particular choice of goal location (so there are 81 possible tasks in this environment, one for each location in the 9 x 9 room; note that this is an unusually narrow notion of which tasks count as different). The data on which the AD agent is trained look like: {many episodes of an agent learning to move towards goal position 1}, {many episodes of an agent learning to ...
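
To make the setup concrete, here is a minimal Python sketch of the Dark Room environment as it is described in the quoted passage above; the class and method names are illustrative, not Deepmind's code.

```python
# Illustrative reconstruction of Dark Room from the description above:
# a 9x9 grid, episode length 20, agent resets at the center, observes only its
# own (x, y) position, and gets reward 1 whenever it sits on the hidden goal.
import random

ACTIONS = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1), 4: (0, 0)}  # left, right, down, up, no-op

class DarkRoom:
    SIZE, EPISODE_LEN = 9, 20

    def __init__(self, goal):
        self.goal = goal          # one of the 81 possible "tasks"
        self.reset()

    def reset(self):
        self.pos = (self.SIZE // 2, self.SIZE // 2)  # center of the room
        self.t = 0
        return self.pos           # the observation is just the agent's coordinates

    def step(self, action):
        dx, dy = ACTIONS[action]
        x = min(max(self.pos[0] + dx, 0), self.SIZE - 1)
        y = min(max(self.pos[1] + dy, 0), self.SIZE - 1)
        self.pos, self.t = (x, y), self.t + 1
        reward = 1 if self.pos == self.goal else 0
        done = self.t >= self.EPISODE_LEN
        return self.pos, reward, done

# A random-policy rollout; the AD training data would instead be many such
# episodes taken from an RL agent as it gradually learns one fixed goal.
env = DarkRoom(goal=(random.randrange(9), random.randrange(9)))
obs, done = env.reset(), False
while not done:
    obs, reward, done = env.step(random.randrange(5))
```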

The Nonlinear Library
AF - Clarifying AI X-risk by Zachary Kenton

The Nonlinear Library

Nov 1, 2022 · 7:40


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Clarifying AI X-risk, published by Zachary Kenton on November 1, 2022 on The AI Alignment Forum. TL;DR: We give a threat model literature review, propose a categorization and describe a consensus threat model from some of DeepMind's AGI safety team. See our post for the detailed literature review. The DeepMind AGI Safety team has been working to understand the space of threat models for existential risk (X-risk) from misaligned AI. This post summarizes our findings. Our aim was to clarify the case for X-risk to enable better research project generation and prioritization. First, we conducted a literature review of existing threat models, discussed their strengths/weaknesses and then formed a categorization based on the technical cause of X-risk and the path that leads to X-risk. Next we tried to find consensus within our group on a threat model that we all find plausible. Our overall take is that there may be more agreement between alignment researchers than their disagreements might suggest, with many of the threat models, including our own consensus one, making similar arguments for the source of risk. Disagreements remain over the difficulty of the alignment problem, and what counts as a solution. Categorization Here we present our categorization of threat models from our literature review, based on the technical cause and the path leading to X-risk. It is summarized in the diagram below. In green on the left we have the technical cause of the risk, either specification gaming (SG) or goal misgeneralization (GMG). In red on the right we have the path that leads to X-risk, either through the interaction of multiple systems, or through a misaligned power-seeking (MAPS) system. The threat models appear as arrows from technical cause towards path to X-risk. The technical causes (SG and GMG) are not mutually exclusive, both can occur within the same threat model. The distinction between them is motivated by the common distinction in machine learning between failures on the training distribution, and when out of distribution. To classify as specification gaming, there needs to be bad feedback provided on the actual training data. There are many ways to operationalize good/bad feedback. The choice we make here is that the training data feedback is good if it rewards exactly those outputs that would be chosen by a competent, well-motivated AI. We note that the main downside to this operationalisation is that even if just one out of a huge number of training data points gets bad feedback, then we would classify the failure as specification gaming, even though that one datapoint likely made no difference. To classify as goal misgeneralization, the behavior when out-of-distribution (i.e. not using input from the training data), generalizes poorly about its goal, while its capabilities generalize well, leading to undesired behavior. This means the AI system doesn't just break entirely, it still competently pursues some goal, but it's not the goal we intended. The path leading to X-risk is classified as follows. 
When the path to X-risk is from the interaction of multiple systems, the defining feature here is not just that there are multiple AI systems (we think this will be the case in all realistic threat models), it's more that the risk is caused by complicated interactions between systems that we heavily depend on and can't easily stop or transition away from. (Note that we haven't analyzed the multiple-systems case very much, and there are also other technical causes for those kinds of scenarios.) When the path to X-risk is through Misaligned Power-Seeking (MAPS), the AI system seeks power in unintended ways due to problems with its goals. Here, power-seeking means the AI system seeks power as an instrumental subgoal, because having more power increases the options availab...

The CERN Sparks! Podcast - Future Intelligence
S2 Ep4: S2 #4 Healthtech & Ethics: Getting it Right

The CERN Sparks! Podcast - Future Intelligence

Oct 26, 2022 · 29:50


“We are so taken in by technology that we forget that technology is a tool that should be used with an outcome in mind.” - Soumya Swaminathan In this episode, host Bruno Giussani and his guests wade through the quagmire of healthtech ethics and fairness, exploring topics such as how the notions of right and wrong are changed by technology, data ownership and privacy, mind-manipulation technologies and the marvels of machine-learning systems which often are black boxes that not even the specialists understand. In conversation with Bruno are Soumya Swaminathan, chief scientist of the WHO; George Church, the founding father of genomics; Pushmeet Kohli from DeepMind; technoethicist and entrepreneur Juan Enriquez; neuroscientist Olaf Blanke of the Swiss Federal Institute of Technology; and Nobel laureate Jennifer Doudna. Guests: Soumya Swaminathan, George Church, Pushmeet Kohli, Juan Enriquez, Olaf Blanke, Jennifer Doudna Host: Bruno Giussani Production CERN, Geneva: Claudia Marcelloni, Lila Mabiala, Sofia Hurst Whistledown Productions, London: Will Yates and Sandra Kanthal Copyright: CERN, 2022

The CERN Sparks! Podcast - Future Intelligence
S2 Ep2: S2 #2 The Biological Revolution: Tools & Tells

The CERN Sparks! Podcast - Future Intelligence

Oct 26, 2022 · 28:28


“I think the way we do medicine these days is broken.” - Michael Snyder In this second episode, join host Bruno Giussani as he examines the specific tools powering the biological revolution. He is joined by Michael Snyder, geneticist and founder of the Snyder Lab at Stanford University, to talk about wearable technologies; by Pushmeet Kohli, AI for Science Lead at Deepmind (a subsidiary of Alphabet) to understand AlphaFold, the machine learning system capable of predicting the structure of nearly all proteins known to science, and its impacts; and Ben Perry, medicinal chemist at the Drugs for Neglected Diseases Initiative (DNDI) to talk about AlphaFold's benefits for drug development. Guests: Michael Snyder, Pushmeet Kohli, Ben Perry Host: Bruno Giussani Production CERN, Geneva: Claudia Marcelloni, Lila Mabiala, Sofia Hurst Whistledown Productions, London: Will Yates and Sandra Kanthal Copyright: CERN, 2022

The Nonlinear Library
AF - Paper: In-context Reinforcement Learning with Algorithm Distillation [Deepmind] by Lawrence Chan

The Nonlinear Library

Oct 26, 2022 · 1:48


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paper: In-context Reinforcement Learning with Algorithm Distillation [Deepmind], published by Lawrence Chan on October 26, 2022 on The AI Alignment Forum. Authors train transformers to imitate the trajectory of reinforcement learning (RL) algorithms. Find that the transformers learn to do in-context RL (that is, the transformers implement an RL algorithm)---the authors check this by having the transformers solve new RL tasks. Indeed, the transformers can sometimes do better than the RL algorithms they're trained to imitate. Seems like more evidence for the "a generative model contains agents" point. Abstract: We propose Algorithm Distillation (AD), a method for distilling reinforcement learning (RL) algorithms into neural networks by modeling their training histories with a causal sequence model. Algorithm Distillation treats learning to reinforcement learn as an across-episode sequential prediction problem. A dataset of learning histories is generated by a source RL algorithm, and then a causal transformer is trained by autoregressively predicting actions given their preceding learning histories as context. Unlike sequential policy prediction architectures that distill post-learning or expert sequences, AD is able to improve its policy entirely in-context without updating its network parameters. We demonstrate that AD can reinforcement learn in-context in a variety of environments with sparse rewards, combinatorial task structure, and pixel-based observations, and find that AD learns a more data-efficient RL algorithm than the one that generated the source data. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
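
A rough sketch of the training objective the abstract describes (autoregressive action prediction over cross-episode learning histories), written against generic PyTorch modules; the architecture, token layout, and hyperparameters here are placeholders of my own, not the paper's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ADTransformer(nn.Module):
    """Causal sequence model over (observation, previous action, previous reward) tokens."""
    def __init__(self, obs_dim, n_actions, d_model=128, n_heads=4, n_layers=4, max_len=2048):
        super().__init__()
        self.embed = nn.Linear(obs_dim + n_actions + 1, d_model)  # obs ++ one-hot prev action ++ prev reward
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_actions)                 # logits for the next action

    def forward(self, tokens):                                    # tokens: (B, T, obs_dim + n_actions + 1)
        B, T, _ = tokens.shape
        x = self.embed(tokens) + self.pos(torch.arange(T, device=tokens.device))
        causal = torch.triu(torch.full((T, T), float("-inf"), device=tokens.device), diagonal=1)
        return self.head(self.backbone(x, mask=causal))           # (B, T, n_actions)

def ad_loss(model, tokens, actions):
    """Cross-entropy between logits at step t and the action the source RL agent took at step t."""
    logits = model(tokens)                                        # actions: (B, T) int64 action indices
    return F.cross_entropy(logits.reshape(-1, logits.shape[-1]), actions.reshape(-1))
```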

Not As Crazy As You Think Podcast
AlphaGo the Movie, Big Data, and AI Psychiatry: Will Humans Be Left Behind? (S4, E14)

Not As Crazy As You Think Podcast

Oct 16, 2022 · 50:31


In the episode, "AlphaGo the Movie, Big Data, and AI Psychiatry: Will Humans Be Left Behind? (S4, E14)," I give a review of the film AlphaGo, an award-winning documentary that filled me with wonder and forgiveness towards the artificial intelligence movement in general. SPOILER ALERT: the episode contains spoilers, as would any news article on the topic as it was major world news and was a game-changer for artificial intelligence. DeepMind has a fundamental desire to understand intelligence. These AI creatives believe if they can crack the ancient game Go, then they've done something special. And if they could get their AlphaGo computer to beat Lee Sedol, the legendary historic 18-world champion player acknowledged as the greatest Go player of the last decade, then they can change history. The movie is suspenseful, and a noble match between human and machine, making you cheer on the new AI era we are entering and mourn the loss of humanity's previous reign all at once. And with how far AI has come, is big data the only path to achieve the best outcomes? Especially in regard to human healthcare? And what about the non-objective field of psychiatry? When so many mental health professionals and former consumers of the industry are criticizing psychiatry's ethics, scientific claims, and objective status as a real medical field, why are we rushing into using AI in areas that deal with human emotion in healthcare? Because that is where we have a large amount of data.  With bias in AI already showing itself in race and gender, the mad may be the next ready targets. #DeepMind #AlphaGo #DemisHassabis #LeeSedol #FanHui #AIHealthcare #westernpsychiatry #moviereview #psychiatryisnotscience #artificialintelligence #bigdata #globalAIsummit #GPT3 #madrights #healthsovereignty #bigpharma#mentalillness #suicide #mentalhealth #electronicmedicalrecordsDon't forget to subscribe to the Not As Crazy As You Think YouTube channel @SicilianoJenAnd please visit my website at: www.jengaitasiciliano.comConnect: Instagram: @ jengaitaLinkedIn: @ jensicilianoTwitter: @ jsiciliano

Grumpy Old Geeks
574: She's Got Legs

Grumpy Old Geeks

Oct 15, 2022 · 68:10


F around & find out; eggs, legs & self driving cars; Alex Jones; Meta, OnlyFans bribes & Within; what's the ROI for 1.2 billion & 38 users; the Metaverse is getting legs; more NFT & crypto news; Rivian recalls almost all it's trucks; DeepMind & DeepCake; DART worked; Netflix announces ad-tier, Apple is considering it; Succession & the Murdochs; Werewolf By Night; Sportsball; Shane MacGowan; eero Internet Backup; AirTags in luggage; 3D calling; Spark Mail; ePaper License Plates; Apple AR headsets; website tracking; patter & spoken word songs; Shatner gold!Show notes at https://gog.show/574Sponsors:Kolide - Kolide can help you nail third-party audits and internal compliance goals with endpoint security for your entire fleet. Learn more here.Hover - Go to Hover now and grab your very own domain or a few of them at hover.com/gog and get 10% off your first purchase.FOLLOW UPWhat's the magic number of steps to keep weight off? Here's what a new study saysSurvey: 42% of Tesla Autopilot Drivers Think Their Cars Can Entirely Drive ThemselvesIN THE NEWSAlex Jones ordered to pay $965 million after misinformation campaign targeting Sandy Hook familiesLawsuit accuses Meta executives of taking bribes from OnlyFansMeta files to dismiss FTC complaint over acquisition of VR fitness company Within$1.2 BILLION METAVERSE HORRIFIED BY REPORT IT ONLY HAD 30 ACTIVE USERSMeta's avatars are getting legs'The devices would have gotten us killed.' Microsoft's military smart goggles failed four of six elements during a recent test, internal Army report saysNFT Purchased by Logan Paul for 623k is now worth $10From Over $620K to $10: Logan Paul's Unsuccessful NFT InvestmentSatoshi Island project aims to turn a remote Pacific island into a city built on cryptocurrencySteering Defect Forces Tech Darling Rivian to Recall Nearly Every Car Its Ever MadeNew iPhone Crash Collision Feature Is Calling 911 For People On RollercoastersDeepMind breaks 50-year math record using AI; new record falls a week laterLabor Department proposal may lead to gig workers gaining employee statusNASA's DART spacecraft successfully altered the orbit of an asteroidMEDIA CANDYNetflix with ads launches Nov. 
3, will be missing up to 10% of Netflix catalogApple is quietly pushing a TV ad product with media agencies‘The Peripheral': Prime Video Drops New Trailer For Sci-Fi From Jonathan Nolan And Lisa JoyProsecutors drop charges against ‘Serial' podcast subject Adnan SyedSuccessionThe Murdochs: Empire of InfluenceWerewolf By NightShane MacGowan "waved willy" at trains passing Bono's houseAPPS & DOODADSWhat is eero Internet Backup?Lufthansa "bans AirTags in luggage" after passengers publicly shame it with location of lost bagsLufthansa Says Apple AirTags Are Once Again Allowed in Checked BagsDJI Mavic 3 - Flying Over Mount EverestGoogle's 3D video calling booths, Project Starline, will now be tested in the real worldGoogle completes iOS 16 Lock Screen widgets rollout with Maps and SearchSpark Mail 3.0SECURITY HAH!The CyberWireDave BittnerHacking HumansCaveatControl LoopE Paper License Plates Now Street-Legal in CaliforniaRPlate by ReviverApple's Mixed Reality Headset to Offer Iris Scanning for Payments, Logging InNreal Air AR Glasses, Smart Glasses with Massive 201" Micro-OLED Virtual Theater, Augmented Reality Glasses, Watch, Stream, and Game on PC/Android/iOS – Consoles & Cloud Gaming CompatibleBetter ingredients, better trackers: Papa John's sued for allegedly snooping on website usersBeware cars that may have come from the Florida hurricane Ian floodplains.Big John by Jimmy DeanRingo by Lorne GreenTeddy Bear by Red SovineConvoy by CW McCallWilliam Shatner - Good King WenceslasCLOSING SHOUT-OUTSAngela LansburySweeney Todd: The Demon Barber of Fleet Street - IMDBSweeney Todd (1982) - FULLSee Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

New Scientist Weekly
#141 Energy threat to international security; a new form of multiplication

New Scientist Weekly

Oct 13, 2022 · 30:10


The climate crisis is as great a threat to energy security as Russia's war on Ukraine, warns the World Meteorological Organization. The team finds out what sort of threats we're talking about, and discusses potential solutions. Imagine looking up at the sky, ready to take in a beautiful sunset, and there it is - a massive, Moon-sized advert, stretched out across the skyline. The team explains how it might be possible (and practical) to do it soon. The erect-crested penguin is the least studied penguin in the world - largely because it lives on remote islands off the coast of New Zealand. But Rowan and Alice find out more - as well as discovering the surprising sex lives of penguins. DeepMind's newest artificial intelligence has discovered a new way to multiply numbers - the first improvement in over 50 years. It's an algorithm for something called matrix multiplication, and the team finds out how it could speed up computers by as much as 20 per cent. To mark World Mental Health Day (Monday 10th October), Rowan speaks to 'Losing Eden' author Lucy Jones, and energy and climate scientist Gesche Huebner, to find out how the climate and nature crises are impacting our mental health - and what to do about it. On the pod are Rowan Hooper, Penny Sarchet, Madeleine Cuff and Matt Sparkes. To read about these subjects and much more, you can subscribe to New Scientist magazine at newscientist.com. Events and discount codes: Dow: newscientist.com/dow; New Scientist Autumn campaign: www.newscientist.com/pod13; Big Thinker: newscientist.com/spaceandmotion; Mental health resources: UK Samaritans; US National Institute for Mental Health; help with climate anxiety. Hosted on Acast. See acast.com/privacy for more information.

The Nonlinear Library
AF - Disentangling inner alignment failures by Erik Jenner

The Nonlinear Library

Oct 10, 2022 · 7:23


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Disentangling inner alignment failures, published by Erik Jenner on October 10, 2022 on The AI Alignment Forum. TL;DR: This is an attempt to disentangle some concepts that I used to conflate too much as just "inner alignment". This will be old news to some, but might be helpful for people who feel confused about how deception, distributional shift, and "sharp left turns" are related. I first discuss them as entirely separate threat models, and then talk about how they're all aspects of "capabilities are more robust than alignment". Here are three different threat models for how an AI system could very suddenly do catastrophic things: Deception: The AI becomes deceptively aligned at some point during training, and then does what we want only for instrumental reasons (because it wants to be deployed). Once we deploy, it starts pursuing its actual objective, which is catastrophic for humans. Distributional shift: The AI behaves well during training, perhaps using some messy set of heuristics and proxy objectives. We deploy, and there's distributional shift in the AI's inputs, which leads to the model's proxies no longer being aligned with human values. But it's still behaving capably, so we again get catastrophic outcomes. Capability gains/sharp left turn: At some point (while training or in deployment), the AI becomes much more capable, including at a bunch of things we didn't explicitly train for. This could happen quite suddenly, e.g. because it learns some crucial general skill in relatively few gradient steps, or because it starts learning from something other than gradients that's way faster. The properties of the AI that previously ensured alignment are too brittle and break during this transition. Note that these can be formulated as entirely distinct scenarios. For example, deception doesn't require a distributional shift nor capability gains; instead, the sudden change in model behavior occurs because the AI was "let out of the box" during deployment. Conversely, in the distributional shift scenario, the model might not be deceptive during training, etc. (One way to think about this is that they rely on changes along different axes of the training/deployment dichotomy). Examples I don't think we have any empirical examples of deception in AI systems, though there are thought experiments. We do see kind of similar phenomena in interactions between humans, basically whenever someone pretends to have a different goal than they actually do in order to gain influence. To be clear, here's one thing that is not an example of deception in the sense in which I'm using the word: an AI does things during training that only look good to humans even though they actually aren't, and then continues to do those things in deployment. To me, this seems like a totally different failure mode, but I've also seen this called "deception" (e.g. "Goodhart deception" in this post), thus the clarification. We do have experimental evidence for goal misgeneralization under distributional shift (the second scenario above). A well-known one is the CoinRun agent from Goal misgeneralization in Deep RL, and more recently, DeepMind published many more examples. A classic example for sudden capability gains is the history of human evolution. 
Relatively small changes in the human brain compared to other primates made cultural evolution feasible, which allowed humans to improve from a source other than biological evolutionary pressure. The consequences were extremely quick capability gains for humanity (compared to evolutionary time scales). This example contains both the "threshold mechanism", where a small change to cognitive architectures has big effects, and the "learning from another source mechanism", with the former enabling the latter. In ML, grokking might be an example for the "threshold ...

Lexman Artificial
Interview with Jeff Hawkins

Lexman Artificial

Oct 8, 2022 · 4:23


Jeff Hawkins, co-founder of the artificial intelligence software company Deep Mind, talks about his new book Fuddles and Roucou: Two Simple Machines that Explore the Complex World.

Noticias de Tecnología Express
Boston Dynamics promete no dar armas a sus robots – NTX 222

Noticias de Tecnología Express

Oct 6, 2022 · 7:03


Google unveils new devices, DeepMind reveals its AlphaTensor, and robotics companies promise they won't bring about the end of the world. You can support the production of this show with a subscription; more information here.
News:
- Alphabet subsidiary DeepMind published a paper in the journal Nature detailing AlphaTensor, an artificial intelligence designed to discover "new, efficient and provably correct algorithms".
- Google presented its own text-to-video system, Imagen Video.
- GFW Report says that, starting October 3, users in China reported that some TLS-based servers used to circumvent the country's internet censorship had been blocked.
- Google presented new devices, including the Pixel 7 and 7 Pro and a Pixel Watch, and shared details about its upcoming tablet, launching in 2023.
- Several robotics companies, including Boston Dynamics, Agility Robotics, ANYbotics, Clearpath Robotics, Open Robotics and Unitree Robotics, have pledged "not to support the weaponization of their products."
Analysis: On robotics and military uses. Become a member at https://plus.acast.com/s/noticias-de-tecnologia-express. Hosted on Acast. See acast.com/privacy for more information.

Daily Tech Headlines
DeepMind's AlphaTensor Discovers New Algorithms – DTH

Daily Tech Headlines

Oct 6, 2022


DeepMind’s AlphaTensor finds new algorithms to multiply matrices, Twitter rolls out support for mixed-media tweets, and Google shows off two text-to-video systems. MP3 Please SUBSCRIBE HERE. You can get an ad-free feed of Daily Tech Headlines for $3 a month here. A special thanks to all our supporters–without you, none of this would be possible.

The Nonlinear Library
AF - Paper: Discovering novel algorithms with AlphaTensor [Deepmind] by Lawrence Chan

The Nonlinear Library

Oct 5, 2022 · 2:40


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paper: Discovering novel algorithms with AlphaTensor [Deepmind], published by Lawrence Chan on October 5, 2022 on The AI Alignment Forum. The authors apply an AlphaZero-like algorithm to discover new matrix multiplication algorithms. They do this by turning matrix multiplication into a one-player game, where the state represents how far from correct the current output is, moves are algorithmic instructions, and the reward is -1 per step (plus a terminal reward of -rank(final state), if the final state is not a zero tensor). On small matrices, they find that AlphaTensor can discover algorithms that use fewer scalar multiplications than the best known human-designed matrix multiplication algorithms. They apply this to find hardware-specific matmuls (by adding an additional reward equal to -time to the terminal state) that have a 10-20% larger speedup than Strassen's algorithm on NVIDIA V100s and TPU V2s (saving 4%/7.5% wall clock time). Paper abstract: Improving the efficiency of algorithms for fundamental computations can have a widespread impact, as it can affect the overall speed of a large amount of computations. Matrix multiplication is one such primitive task, occurring in many systems—from neural networks to scientific computing routines. The automatic discovery of algorithms using machine learning offers the prospect of reaching beyond human intuition and outperforming the current best human-designed algorithms. However, automating the algorithm discovery procedure is intricate, as the space of possible algorithms is enormous. Here we report a deep reinforcement learning approach based on AlphaZero for discovering efficient and provably correct algorithms for the multiplication of arbitrary matrices. Our agent, AlphaTensor, is trained to play a single-player game where the objective is finding tensor decompositions within a finite factor space. AlphaTensor discovered algorithms that outperform the state-of-the-art complexity for many matrix sizes. Particularly relevant is the case of 4 × 4 matrices in a finite field, where AlphaTensor's algorithm improves on Strassen's two-level algorithm for the first time, to our knowledge, since its discovery 50 years ago. We further showcase the flexibility of AlphaTensor through different use-cases: algorithms with state-of-the-art complexity for structured matrix multiplication and improved practical efficiency by optimizing matrix multiplication for runtime on specific hardware. Our results highlight AlphaTensor's ability to accelerate the process of algorithmic discovery on a range of problems, and to optimize for different criteria. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
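
For intuition about what such a decomposition buys: Strassen's classic construction multiplies two 2x2 matrices with 7 scalar multiplications instead of 8, and AlphaTensor searches for decompositions of exactly this kind (each move in its game adds one rank-1 term). A quick Python check of Strassen's identities, included here as standard background rather than code from the paper:

```python
# Strassen's 2x2 matrix multiplication: 7 multiplications instead of 8.
# Each saved multiplication corresponds to one less rank-1 term in the
# decomposition of the matrix-multiplication tensor.
import random

def strassen_2x2(A, B):
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

def naive_2x2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

# Sanity check on random integer matrices.
A = [[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
B = [[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
assert strassen_2x2(A, B) == naive_2x2(A, B)
```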

Ask Doctor Dawn
Review of psychedelic drug research and teaching Herpes virus to fight cancer are top topics

Ask Doctor Dawn

Oct 2, 2022 · 53:03


KSQD 9-28-2022: Anti-inflammatories vs. turmeric for pain; The use of psychedelic drugs is finally being seriously researched for a variety of conditions; Deep Mind wins the Breakthrough Prize for the AI protein folding predictor Alpha Fold; Genetically modified Herpes virus attacks cancer cells; Review of Consumer Lab website; The importance of monitoring uric acid levels; Good ventilation at home is important to reduce trapped pollution; The different types of obesity

The Nonlinear Library
EA - I'm interviewing prolific AI safety researcher Richard Ngo (now at OpenAI and previously DeepMind). What should I ask him? by Robert Wiblin

The Nonlinear Library

Sep 29, 2022 · 1:24


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I'm interviewing prolific AI safety researcher Richard Ngo (now at OpenAI and previously DeepMind). What should I ask him?, published by Robert Wiblin on September 29, 2022 on The Effective Altruism Forum. Next week I'm interviewing Richard Ngo, current AI (Safety) Governance Researcher at OpenAI and previous Research Engineer at DeepMind. Before that he was doing a PhD in the Philosophy of Machine Learning at Cambridge, on the topic of "to what extent is the development of artificial intelligence analogous to the biological and cultural evolution of human intelligence?" He is focused on making the development and deployment of AGI more likely to go well and less likely to go badly. Richard is also a highly prolific contributor to online discussion of AI safety in a range of places, for instance:
Moral strategies at different capability levels on his blog Thinking Complete
The alignment problem from a deep learning perspective on the EA Forum
Some conceptual alignment research projects on the AI Alignment Forum
Richard Ngo and Eliezer Yudkowsky politely debating AI Safety on Less Wrong
The AGI Safety from First Principles education series
And on his Twitter
What should I ask him? Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

Tech News Weekly (MP3)
TNW 253: iPhone 14 Teardown - YouTube, Pico Headset, DeepMind

Tech News Weekly (MP3)

Sep 22, 2022 · 65:36


Shahram Mokhtari from iFixit joins the show to talk with Jason about the iPhone 14, 14 Pro, and 14 Pro Max, and iFixit's teardown of the devices. Becca Ricks from the Mozilla Foundation joins the show to talk with Mikah about the Mozilla Foundation's research into YouTube's algorithm and how it fails to prevent unwanted algorithmic recommendations. Jason talks about ByteDance's new VR headset, the Pico Headset, a competitor to Meta's Oculus Quest 2. Finally, Mikah talks about DeepMind's new AI chatbot, Sparrow, which is trained to talk with humans and answer questions using Google's search feature. Hosts: Jason Howell and Mikah Sargent Guests: Shahram Mokhtari and Becca Ricks Download or subscribe to this show at https://twit.tv/shows/tech-news-weekly. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: ClickUp.com use code TNW infrascale.com/TWIT ZipRecruiter.com/tnw

The Nonlinear Library
LW - Alignment Org Cheat Sheet by Akash

The Nonlinear Library

Sep 20, 2022 · 7:09


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Alignment Org Cheat Sheet, published by Akash on September 20, 2022 on LessWrong. Epistemic Status: Exploratory. Epistemic Effort: ~6 hours of work put into this document. Contributions: Akash wrote this, Thomas helped edit + discuss. Unless specified otherwise, writing in the first person is by Akash and so are the opinions. Thanks to Olivia Jimenez for comments. Thanks to many others for relevant conversations. There have been a few attempts to summarize the AI alignment research landscape. Many of them are long and detailed. As an exercise, I created a “cheat sheet” describing the work of researchers, research organizations, and training programs. The goal was to summarize key people/orgs in just one sentence. Note that I do not aim to cover everyone; I focus on the ones that I think are most promising or popular, with a bias towards ones that I know the most (For longer summaries/analyses, I recommend the ones by Thomas Larsen & Eli Lifland and Nate Soares). Here's my alignment org cheat sheet:
Researchers & Research Organizations
Aligned AI: Let's understand all of the possible ways a model could generalize out-of-distribution and then figure out how to select the way(s) that are safe. (see Why I'm co-founding Aligned AI) (see also Thomas's critique of their plan here).
Anthropic: Let's interpret large language models by understanding which parts of neural networks are involved with certain types of cognition, and let's build larger language models to tackle problems that will emerge as we get closer to AGI.
ARC (Alignment Research Center): Let's interpret an AGI by building a reporter which a) honestly answers our questions and b) is smart enough to translate the AGI's thoughts to us in a way that we can understand (see ELK report).
Conjecture: Let's focus on bets that make sense in worlds with short timelines, foster an environment where people can think about new and uncorrelated ways to solve alignment, create infrastructure that allows us to use large language models to assist humans in solving alignment, develop simulacra theory to understand large language models, build and interpret large language models, and maintain a culture of information security.
DeepMind Alignment Team (according to Rohin Shah): Let's contribute to a variety of projects and let our individual researchers pursue their own directions rather than having a unified agenda (note that DeepMind is also controversial for advancing AI capabilities; more here).
Eliezer & Nate: Let's help others understand why their alignment plans are likely to fail, in the hopes that people understand the problem better, and are able to take advantage of a miracle (see List of Lethalities & Why Various Plans Miss the Hard Parts).
Encultured: Let's create consumer-facing products that help AI safety researchers, and let's start with a popular video game that serves as a “testing ground” where companies can deploy AI systems to see if they cause havoc (bad) or do things that human players love (good).
Evan Hubinger: Let's understand how we might get deceptive models, develop interpretability tools that help us detect deception, and develop strategies to reduce the likelihood of training deceptive models (see Risks from Learned Optimization and A Transparency and Interpretability Tech Tree).
John Wentworth: Let's become less confused about agency/alignment by understanding if human concepts are likely to be learned “naturally” by AIs (natural abstractions), understanding how selection pressures work and why they produce certain types of agents (selection theorems), and broadly generating insights that make alignment less pre-paradigmatic (see The Plan).
MIRI: Let's support researchers who are working on a variety of research agendas, which makes it really hard to summarize what we're doing in one sentence, and...

Artificial Intelligence and You
117 - Guest: Chris Summerfield, Cognitive Scientist at Oxford and DeepMind, part 2

Artificial Intelligence and You

Sep 12, 2022 · 27:07


This and all episodes at: https://aiandyou.net/. If you want an expert on how today's AI compares to the human brain, it would be hard to beat an Oxford neuroscientist who also works at DeepMind. That describes Chris Summerfield, who runs Oxford University's Human Information Processing lab in the Department of Experimental Psychology and is the author of the upcoming book, "Natural General Intelligence." In part 2, we talk about the new image generators like DALL-E 2 and how they relate to human cognition, brain-computer interfaces and neuroplasticity, and purple pineapples. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.

The Nonlinear Library
EA - My emotional reaction to the current funding situation by Sam Brown

The Nonlinear Library

Play Episode Listen Later Sep 11, 2022 7:58


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My emotional reaction to the current funding situation, published by Sam Brown on September 11, 2022 on The Effective Altruism Forum. I'm allowed to spend two days a week at Trajan House, a building in Oxford which houses the Center for Effective Altruism (CEA), along with a few EA-related bodies. Two days is what I asked for, and what I received. The rest of the time I spend in the Bodleian Library of the University of Oxford (about £30/year, if you can demonstrate an acceptable “research need”), a desk at a coworking space in Ethical Property (which houses Refugee Welcome, among other non-EA bodies, for £200/month), Common Ground (a cafe/co-working space which I've recommended to people as a place where the staff explicitly explain, if you ask, that you don't need to order anything to stay as long as you like), a large family house I'm friends with, and various cafes and restaurants where I can sit for hours while only drinking mint tea. I'm allowed to use the hot-desk space at Trajan House because I'm a recipient of an EA Long Term Future Fund grant, to research Alignment. (I call this “AI safety” to most people, and sometimes have to explain that AI stands for Artificial Intelligence.) I judged that 6 months of salary at the level of my previous startup job, with a small expenses budget, came to about £40,000. This is what I asked for, and what I received. At my previous job I thought I was having a measurable, meaningful impact on climate change. When I started there, I imagined that I'd go on to found my own startup. I promised myself it would be the last time I'd be employed. When I quit that startup job, I spent around a year doing nothing-much. I applied to Oxford's Philosophy BPhil, unsuccessfully. I looked at startup incubators and accelerators. But mostly, I researched Alignment groups. I visited Conjecture, and talked to people from Deep Mind, and the Future of Humanity Institute. What I was trying to do, was to discern whether Alignment was “real” or not. Certainly, I decided, some of these people were cleverer than me, more hard-working than me, better-informed. Some seem deluded, but not all. At the very least, it's not just a bunch of netizens from a particular online community, whose friend earned a crypto fortune. During the year I was unemployed, I lived very cheaply. I'm familiar with the lifestyle, and – if I'm honest – I like it. Whereas for my holidays while employed I'd hire or buy a motorbike, and go travelling abroad, or scuba dive, instead my holidays would be spent doing DIY at a friend's holiday home for free board, or taking a bivi bag to sleep in the fields around Oxford. The exceptions to this thrift were both EA-related, and both fully-funded. In one, for which my nickname of “Huel and hot-tubs” never caught on, I was successfully reassured by someone I found very smart that my proposed Alignment research project was worthwhile. In the other, I and others were flown out to the San Francisco Bay Area for an all-expenses-paid retreat to learn how to better build communities. My hotel room had a nightly price written on the inside of the door: $500. Surely no one ever paid that. Shortly afterwards, I heard that the EA-adjacent community were buying the entire hotel. While at the first retreat, I submitted my application for funding. 
While in Berkeley for the second, I discovered my application was successful. (“I should hire a motorbike, while I'm here.” I didn't have time, between networking opportunities.) I started calling myself an “independent alignment researcher” to anyone who would listen and let me into offices, workshops, or parties. I fit right in. At one point, people were writing plans on a whiteboard for how we could spend the effectively-infinite amount of money we could ask for. Somehow I couldn't take it any more, so I ...

A hombros de gigantes
A hombros de gigantes - AlphaFold predicts the shape of the building blocks of life - 10/09/22

A hombros de gigantes

Play Episode Listen Later Sep 10, 2022 58:11


It was last year's story of the year for the journal Science and a genuine revolution in biology: AlphaFold, a system developed by DeepMind, Google's artificial-intelligence company, can predict the three-dimensional structure of proteins quickly and reliably. In collaboration with the European Molecular Biology Laboratory (EMBL), a database of 200 million proteins has been built, covering nearly all known proteins. The algorithm will help us understand the biology of every living thing on the planet and the mechanisms of some of the most prevalent diseases, from malaria to Alzheimer's and cancer. We interviewed José Antonio Márquez, head of the Crystallography Service at EMBL. Eva Rodríguez told us that large prehistoric mammals lived fast and died young (like the motto of certain rock stars), and how the MOXIE instrument aboard Perseverance is producing oxygen from the gases of the Martian atmosphere. With Alfonso Martínez Arias we analyzed the implications of one of the most notable news stories of recent weeks: the creation of mouse embryos from stem cells. Carlos Briones told us about a supergiant bacterium (Thiomargarita magnifica) that has broken every mold with its size (1 cm long) and its DNA enclosed in organelles. Javier Cacho recounted the remarkable journey of Tètè Michel Kpomassie, a young man from Togo who traveled to Greenland and fell in love with the country.

The Nonlinear Library
LW - My emotional reaction to the current funding situation by Sam

The Nonlinear Library

Play Episode Listen Later Sep 9, 2022 7:57


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My emotional reaction to the current funding situation, published by Sam on September 9, 2022 on LessWrong. I'm allowed to spend two days a week at Trajan House, a building in Oxford which houses the Center for Effective Altruism (CEA), along with a few EA-related bodies. Two days is what I asked for, and what I received. The rest of the time I spend in the Bodleian Library of the University of Oxford (about £30/year, if you can demonstrate an acceptable “research need”), a desk at a coworking space in Ethical Property (which houses Refugee Welcome, among other non-EA bodies, for £200/month), Common Ground (a cafe/co-working space which I've recommended to people as a place where the staff explicitly explain, if you ask, that you don't need to order anything to stay as long as you like), a large family house I'm friends with, and various cafes and restaurants where I can sit for hours while only drinking mint tea. I'm allowed to use the hot-desk space at Trajan House because I'm a recipient of an EA Long Term Future Fund grant, to research Alignment. (I call this “AI safety” to most people, and sometimes have to explain that AI stands for Artificial Intelligence.) I judged that 6 months of salary at the level of my previous startup job, with a small expenses budget, came to about £40,000. This is what I asked for, and what I received. At my previous job I thought I was having a measurable, meaningful impact on climate change. When I started there, I imagined that I'd go on to found my own startup. I promised myself it would be the last time I'd be employed. When I quit that startup job, I spent around a year doing nothing-much. I applied to Oxford's Philosophy BPhil, unsuccessfully. I looked at startup incubators and accelerators. But mostly, I researched Alignment groups. I visited Conjecture, and talked to people from Deep Mind, and the Future of Humanity Institute. What I was trying to do, was to discern whether Alignment was “real” or not. Certainly, I decided, some of these people were cleverer than me, more hard-working than me, better-informed. Some seem deluded, but not all. At the very least, it's not just a bunch of netizens from a particular online community, whose friend earned a crypto fortune. During the year I was unemployed, I lived very cheaply. I'm familiar with the lifestyle, and – if I'm honest – I like it. Whereas for my holidays while employed I'd hire or buy a motorbike, and go travelling abroad, or scuba dive, instead my holidays would be spent doing DIY at a friend's holiday home for free board, or taking a bivi bag to sleep in the fields around Oxford. The exceptions to this thrift were both EA-related, and both fully-funded. In one, for which my nickname of “Huel and hot-tubs” never caught on, I was successfully reassured by someone I found very smart that my proposed Alignment research project was worthwhile. In the other, I and others were flown out to the San Francisco Bay Area for an all-expenses-paid retreat to learn how to better build communities. My hotel room had a nightly price written on the inside of the door: $500. Surely no one ever paid that. Shortly afterwards, I heard that the EA-adjacent community were buying the entire hotel complex. While at the first retreat, I submitted my application for funding. While in Berkeley for the second, I discovered my application was successful.
(“I should hire a motorbike, while I'm here.” I didn't have time, between networking opportunities.) I started calling myself an “independent alignment researcher” to anyone who would listen and let me into offices, workshops, or parties. I fit right in. At one point, people were writing plans on a whiteboard for how we could spend the effectively-infinite amount of money we could ask for. Somehow I couldn't take it any more, so I left, crossed the ro...

The Nonlinear Library
AF - [An email with a bunch of links I sent an experienced ML researcher interested in learning about Alignment / x-safety.] by David Scott Krueger

The Nonlinear Library

Play Episode Listen Later Sep 8, 2022 9:44


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [An email with a bunch of links I sent an experienced ML researcher interested in learning about Alignment / x-safety.], published by David Scott Krueger on September 8, 2022 on The AI Alignment Forum.
[[FYI: I'm just copying this in and removing a few bits; apologies for formatting; I don't intend to post the attachments]]
EtA: I (embarrassingly and unfortunately) understated Richard Ngo's technical background in the email; I left what I originally wrote in strikethrough.
EtA (2): I thought a bit more context would be useful here:
- This email was designed for a particular person after several in-person conversations.
- I'm not proposing this as anything like "The best things to send to an arbitrary 'experienced ML researcher interested in learning about Alignment / x-safety'".
- I didn't put a ton of effort into this.
- I aimed to present a somewhat diverse and representative sampling of x-safety stuff.
EtA (3): Suggestions / Feedback welcome!
OK, I figure perfect is the enemy of the good and this is already a doozy of an email, so I'm just going to send it :) I think it would be great to get a better sense of what sort of materials are a good introduction for someone in your situation, so please let me know what you find most/least useful/etc.!
A few top recommendations...
- Read about DeepMind's "safety, robustness, and assurance" breakdown of alignment.
- Sign up for the Alignment and ML Safety newsletters, and skim through the archives.
- Look at this syllabus I mocked up for my UofT application (attached). I tried to focus on ML and include a lot of the most important ML papers, although there's a lot missing.
- Read about RL from human preferences if you haven't already. I imagine you might've seen the "backflipping noodle" blog post. I helped author a research agenda based on such approaches, Scalable agent alignment via reward modeling. The research I talked about in my talk is part of a project on understanding reward model hacking; we wrote a short grant proposal for that (attached). I'm in the midst of rethinking how large a role I expect reward modeling (or RL more generally) to play in future AI systems, but I previously considered this one of the highest priority directions to work on. Others (e.g. Jan, first author of the agenda) are working on showing what you can do with reward modeling; I'm more concerned with figuring out if it is a promising approach at all or not (I suspect not, because of power-seeking / instrumental convergence).
- Look at the "7 Alternatives for agent alignment" in the agenda for a brief overview of alternative approaches to specification.
- Look at "10.1 Related research agendas" in ARCHES for a quick overview of various research agendas.
- The most widely known/cited agenda is Concrete P...

The Nonlinear Library
AF - Monitoring for deceptive alignment by Evan Hubinger

The Nonlinear Library

Play Episode Listen Later Sep 8, 2022 12:10


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Monitoring for deceptive alignment, published by Evan Hubinger on September 8, 2022 on The AI Alignment Forum. This post is a follow-up to “AI coordination needs clear wins.” Thanks to Ethan Perez, Kate Woolverton, Richard Ngo, Anne le Roux, Sam McCandlish, Adam Jermyn, and Danny Hernandez for useful conversations, comments, and feedback. In this post, I want to propose a clear, concrete coordination task that I think might be achievable soon given the current landscape, would generate a clear coordination win, and that I think would be highly useful in and of itself. Specifically: I want DeepMind, OpenAI, and Anthropic to commit to actively monitor and look for evidence of deceptive alignment in their models—as well as run experiments to try to predict when and where deceptive alignment might occur before it does. Notably, I am specifically referring only to the narrow case of deceptive alignment here, not just any situation where models say false things. Deceptive alignment is specifically a situation where the reason the model looks aligned is because it is actively trying to game the training signal for the purpose of achieving some ulterior goal. I think this is a pretty minimal ask that would nevertheless be a clear win. Among all AI failure modes, deceptive alignment is one of the most unambiguously bad, which means, though I expect lots of disagreement on its likelihood, there should be little disagreement regarding its severity. Furthermore, things that I'm not asking for: that they make such a commitment highly public or legally binding, that they commit to any specific way of monitoring for or addressing the problem, that any organization has to share anything they find with any other organization, or that they commit to anything other than testing and monitoring. Nevertheless, as I stated, I think this would still be quite useful. First, I think deceptive alignment is clearly one of the most concerning problems in AI alignment. Though they don't always use the framing of deceptive alignment specifically, most presentations of AI risk feature the phenomenon of an AI that wants something in the world deceiving our oversight procedures to get that something. See: treacherous turns, schemers, influence-seeking behavior, power-seeking AI. Furthermore, I think there is a strong case for why deceptive alignment is reasonably likely to appear by default in advanced machine learning systems. Second, I think that it is plausible that, if deceptive alignment is eventually going to be a problem, we'll be able to see it and produce demonstrations of it before it becomes existentially dangerous. I'm less confident in this premise than the previous one, but my current sense is that the space of deceptively aligned models is large enough to include lots of deceptively aligned models that would be willing to display their deception in the presence of the right sort of honeypot. 
For example, you might be able to find deceptively aligned models that:[1] have relatively short time horizons such that they are very willing to defect even if their payoff and/or probability of successful defection is low, are confused about what level of simulation they're in such that they defect to break out of an inner simulation only to find themselves in an outer simulation, are bad enough at hiding their deception that simple transparency tools can detect them, are unwilling to cooperate with other deceptively aligned models to all defect simultaneously, etc. Of course, just being able to detect deception doesn't mean we'll be able to solve it—and in fact in many cases it seems likely that our default ways of addressing deception would result in deception that just hides itself better—but I still think that being able to detect it before it becomes catastrophic would be a huge win, since i...

Geekonomy - גיקונומי - פודקאסט שבועי על החיים עצמם
Episode #605 - Ram Rachum, Artificial Intelligence Researcher

Geekonomy - גיקונומי - פודקאסט שבועי על החיים עצמם

Play Episode Listen Later Sep 7, 2022 65:27


Ram Rachum is a software engineer who decided to embark on an independent research career after finishing his role at Google. Ram is looking for reciprocity and cooperation in artificial-intelligence agents. What we talked about: DeepMind, Google, independent research, artificial intelligence, machine learning, artificial neural networks, games, Prof. Israel Aumann. Our sponsors: Cato Networks, which is looking to fill its Product Marketing Director role. Links: Ram's research website; the amazing OpenAI Hide and Seek video; the Melting Pot project by Edgar and Joel from DeepMind, which aims to advance the field of Multi-Agent Reinforcement Learning; the book Edgar recommends, The Major Transitions in Evolution; and the CGP Grey YouTube channel.

One Planet Podcast
Nick Bostrom - Philosopher, Founding Director, Future of Humanity Institute, Oxford


One Planet Podcast

Play Episode Listen Later Sep 6, 2022 42:22


Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50. He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk. Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots. "On the one hand, if AI actually worked out in the ideal way, then it could be an extremely powerful tool for developing solutions to climate change and many other environmental problems that we have, for example, in developing more efficient clean energy technologies. There are efforts on the way now to try to get fusion reactors to work using AI tools, to sort of guide the containment of the plasma. Recent work with AlphaFold by DeepMind, which is a subsidiary of Alphabet, they're working on developing AI tools that can be used for molecular modeling, and you could imagine various uses of that for developing better solar panels or other kinds of remedial technologies to clean up or reduce pollution. So certainly the potential from AI to the environment are manyfold and will increase over time." https://nickbostrom.com https://www.fhi.ox.ac.uk www.creativeprocess.info www.oneplanetpodcast.org

One Planet Podcast
Highlights - Nick Bostrom - Founding Director, Future of Humanity Institute, Oxford

One Planet Podcast

Play Episode Listen Later Sep 6, 2022 11:19


"On the one hand, if AI actually worked out in the ideal way, then it could be an extremely powerful tool for developing solutions to climate change and many other environmental problems that we have, for example, in developing more efficient clean energy technologies. There are efforts on the way now to try to get fusion reactors to work using AI tools, to sort of guide the containment of the plasma. Recent work with AlphaFold by DeepMind, which is a subsidiary of Alphabet, they're working on developing AI tools that can be used for molecular modeling, and you could imagine various uses of that for developing better solar panels or other kinds of remedial technologies to clean up or reduce pollution. So certainly the potential from AI to the environment are manyfold and will increase over time."Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, and Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument and the concept of existential risk.Bostrom's academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been on Foreign Policy's Top 100 Global Thinkers list twice and was included in Prospect's World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.https://nickbostrom.comhttps://www.fhi.ox.ac.ukwww.creativeprocess.infowww.oneplanetpodcast.org

The Nonlinear Library
EA - Do AI companies make their safety researchers sign a non-disparagement clause? by ofer

The Nonlinear Library

Play Episode Listen Later Sep 5, 2022 0:43


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Do AI companies make their safety researchers sign a non-disparagement clause?, published by ofer on September 5, 2022 on The Effective Altruism Forum. Among AI companies that employ AI alignment/policy researchers (e.g. DeepMind, OpenAI, Anthropic, Conjecture), which companies make such researchers sign a non-disparagement clause? Also, what are the details of such non-disparagement clauses? (Do they aim to restrict such researchers indefinitely, even after they leave the company?) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Next Big Idea
DeepMind's Demis Hassabis on the future of AI (from The TED Interview)

The Next Big Idea

Play Episode Listen Later Sep 1, 2022 50:38 Very Popular


Demis Hassabis is one of tech's most brilliant minds. A chess-playing child prodigy turned researcher and founder of headline-making AI company DeepMind, Demis is thinking through some of the most revolutionary — and in some cases controversial — uses of artificial intelligence. From the development of computer program AlphaGo, which beat out world champions in the board game Go, to making leaps in the research of how proteins fold, Demis is at the helm of the next generation of groundbreaking technology. In this episode, he gives a peek into some of the questions that his top-level projects are asking, talks about how gaming, creativity, and intelligence inform his approach to tech, and muses on where AI is headed next. This is an episode of "The TED Interview," a podcast in the TED Audio Collective. It's hosted by author Steven Johnson. To check out the rest of their episodes, including a recent mini-series on the future of human intelligence, follow the show wherever you're listening to this.

Tech for Non-Techies
114. What is Deep Tech?

Tech for Non-Techies

Play Episode Listen Later Aug 31, 2022 13:41


Companies like DeepMind fascinate investors and innovators, but what is a deep tech company really, and how does it differ from other types of tech firms? Listen to this episode to find out.
Learning notes from this episode:
Deep Tech is a sub-sector of the technology sector where the emphasis is on tangible engineering innovation or scientific advances and discoveries. It includes artificial intelligence, robotics, blockchain, advanced material science, photonics and electronics, biotech and quantum computing.
Deep Tech is usually B2B: these companies usually sell their innovations to other businesses, rather than directly to consumers.
Deep Tech companies are usually founded by technical founders, and sometimes have non-technical co-founders who help them commercialise the innovation. A good example is biotech start-up Vitro Labs, where a scientist teamed up with a fashion industry expert to create laboratory-grown leather.
The biggest risk to Deep Tech companies is getting over-excited by technological innovation at the cost of seeing whether the new technology is creating any actual value.
"The winning company is not always the one with the best technology. Tech can be a differentiator, but usually it's only temporary. The job of a venture capitalist is not to figure out which company has the best tech. It's to figure out which company has the best business that can ultimately be the biggest impact," said Colin Beirne, co-founder of Two Sigma Ventures, a deep tech investor.
Resources mentioned in this episode: Episode 103. How I got into deep tech investing (with Colin Beirne, Two Sigma Ventures); Tech Target: DeepMind
----- If you like learning about how tech products and profits get made, you'll like our newsletter. It's funny too. Sign up here. ----- There are 2 ways to apply this work to your goals: For individuals, APPLY FOR A CONSULTATION CALL for Tech For Non-Techies membership. For companies: If you want to increase productivity, innovation and diversity, then your non-technical teams need to learn how to collaborate with the techies. BOOK A CALL to discuss bespoke training & consulting. We love hearing from our readers and listeners. So if you have questions about the content or working with us, just get in touch on info@techfornontechies.co. Say hi to Sophia on Twitter and follow her on LinkedIn. Following us on Facebook, Instagram and TikTok will make you smarter.

60-Second Science
This Artificial Intelligence Learns like a Widdle Baby

60-Second Science

Play Episode Listen Later Aug 26, 2022 2:36


Engineers at the company DeepMind built a machine-learning system based on research on how babies’ brains work, and it did better on certain tasks than its conventional counterparts.

Marketplace Tech
How machine learning is unfolding the mysteries of proteins

Marketplace Tech

Play Episode Listen Later Aug 10, 2022 6:53 Very Popular


Understanding proteins — like the spike protein of the coronavirus — is super important for the study of diseases and the development of drugs and vaccines. So there’s a lot of excitement about the AlphaFold Protein Structure Database, built by the artificial intelligence lab DeepMind with the European Molecular Biology Laboratory. Researchers there have used machine learning to predict and map more than 200 million protein structures from all kinds of organisms. Meghan McCarty-Carino of “Marketplace Tech” spoke with Matthew Higgins, professor of molecular parasitology at the University of Oxford. He studies malaria parasites for a potential vaccine, and he said the database has sped up that work.
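For readers who want to poke at the database Higgins describes, here is a minimal sketch of how one might download a single predicted structure and count its residues. It assumes the AlphaFold DB's public file-naming pattern (AF-<UniProt accession>-F1-model_v4.pdb) and uses the example accession P69905 (human hemoglobin subunit alpha); both are assumptions to check against the database's current documentation rather than a guaranteed interface.

import urllib.request

# Assumed file-naming pattern for the AlphaFold Protein Structure Database;
# verify the version suffix (v4 here) against the database documentation.
uniprot_id = "P69905"  # example accession: human hemoglobin subunit alpha
url = f"https://alphafold.ebi.ac.uk/files/AF-{uniprot_id}-F1-model_v4.pdb"

with urllib.request.urlopen(url) as response:
    pdb_text = response.read().decode("utf-8")

# A standard PDB file has one alpha-carbon (CA) atom per residue,
# so counting CA atom records gives the number of modeled residues.
ca_lines = [line for line in pdb_text.splitlines()
            if line.startswith("ATOM") and line[12:16].strip() == "CA"]
print(f"{uniprot_id}: {len(ca_lines)} residues in the predicted model")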

Marketplace All-in-One
How machine learning is unfolding the mysteries of proteins

Marketplace All-in-One

Play Episode Listen Later Aug 10, 2022 6:53


Understanding proteins — like the spike protein of the coronavirus — is super important for the study of diseases and the development of drugs and vaccines. So there’s a lot of excitement about the AlphaFold Protein Structure Database, built by the artificial intelligence lab DeepMind with the European Molecular Biology Laboratory. Researchers there have used machine learning to predict and map more than 200 million protein structures from all kinds of organisms. Meghan McCarty-Carino of “Marketplace Tech” spoke with Matthew Higgins, professor of molecular parasitology at the University of Oxford. He studies malaria parasites for a potential vaccine, and he said the database has sped up that work.

Daily Tech News Show
Open Source Protein Structures - DTNS 4332

Daily Tech News Show

Play Episode Listen Later Aug 5, 2022 32:59 Very Popular


Last week DeepMind announced it was releasing, free of charge, an expanded database of 200 million highly accurate protein structure predictions generated by its AI system AlphaFold. What's the significance of AlphaFold's work, and how will it help in other areas of research? Plus, Amazon plans to buy iRobot, and what's in the cards for Warner Media's streaming platforms after Discovery's buyout? Starring Tom Merritt, Sarah Lane, Dr. Niki Ackermans, Roger Chang, Joe. Link to the Show Notes. See acast.com/privacy for privacy and opt-out information. Become a member at https://plus.acast.com/s/dtns.

Daily Tech News Show (Video)
Open Source Protein Structures – DTNS 4332

Daily Tech News Show (Video)

Play Episode Listen Later Aug 5, 2022


Last week DeepMind announced it was releasing, free of charge, an expanded database of 200 million highly accurate protein structure predictions generated by its AI system AlphaFold. What's the significance of AlphaFold's work, and how will it help in other areas of research? Plus, Amazon plans to buy iRobot, and what's in the cards for Warner Media's streaming platforms after Discovery's buyout? Starring Tom Merritt, Sarah Lane, Dr. Niki Ackermans, Len Peralta, Roger Chang, Joe.

Babbage from Economist Radio
Babbage: How AI cracked biology's biggest problem

Babbage from Economist Radio

Play Episode Listen Later Aug 2, 2022 34:34 Very Popular


DeepMind's artificial-intelligence system AlphaFold has predicted the three-dimensional shape of almost all known proteins. The company's boss Demis Hassabis tells us how the AI was able to solve what was, for decades, biology's grand challenge. Plus, Gilead Amit, The Economist's science correspondent, explores the significance of the breakthrough for scientists tackling neglected diseases and designing new molecules. The leap forward could be AI's greatest contribution to biology to date, but how else could machine learning help science? Kenneth Cukier hosts. For full access to The Economist's print, digital and audio editions subscribe at economist.com/podcastoffer and sign up for our weekly science newsletter at economist.com/simplyscience. See acast.com/privacy for privacy and opt-out information.

Economist Radio
Babbage: How AI cracked biology's biggest problem

Economist Radio

Play Episode Listen Later Aug 2, 2022 34:34


DeepMind's artificial-intelligence system AlphaFold has predicted the three-dimensional shape of almost all known proteins. The company's boss Demis Hassabis tells us how the AI was able to solve what was, for decades, biology's grand challenge. Plus, Gilead Amit, The Economist's science correspondent, explores the significance of the breakthrough for scientists tackling neglected diseases and designing new molecules. The leap forward could be AI's greatest contribution to biology to date, but how else could machine learning help science? Kenneth Cukier hosts. For full access to The Economist's print, digital and audio editions subscribe at economist.com/podcastoffer and sign up for our weekly science newsletter at economist.com/simplyscience. See acast.com/privacy for privacy and opt-out information.

The TED Interview
DeepMind's Demis Hassabis on the future of AI

The TED Interview

Play Episode Listen Later Jul 28, 2022 48:48 Very Popular


Demis Hassabis is one of tech's most brilliant minds. A chess-playing child prodigy turned researcher and founder of headline-making AI company DeepMind, Demis is thinking through some of the most revolutionary—and in some cases controversial—uses of artificial intelligence. From the development of computer program AlphaGo, which beat out world champions in the board game Go, to making leaps in the research of how proteins fold, Demis is at the helm of the next generation of groundbreaking technology. In this episode, he gives a peek into some of the questions that his top-level projects are asking, talks about how gaming, creativity, and intelligence inform his approach to tech, and muses on where AI is headed next.

Lex Fridman Podcast
#306 – Oriol Vinyals: Deep Learning and Artificial General Intelligence

Lex Fridman Podcast

Play Episode Listen Later Jul 26, 2022 135:00 Very Popular


Oriol Vinyals is the Research Director and Deep Learning Lead at DeepMind. Please support this podcast by checking out our sponsors: – Shopify: https://shopify.com/lex to get 14-day free trial – Weights & Biases: https://lexfridman.com/wnb – Magic Spoon: https://magicspoon.com/lex and use code LEX to get $5 off – Blinkist: https://blinkist.com/lex and use code LEX to get 25% off premium EPISODE LINKS: Oriol's Twitter: https://twitter.com/oriolvinyalsml Oriol's publications: https://scholar.google.com/citations?user=NkzyCvUAAAAJ DeepMind's Twitter: https://twitter.com/DeepMind DeepMind's Instagram: https://instagram.com/deepmind DeepMind's Website: https://deepmind.com Papers: 1. Gato: https://deepmind.com/publications/a-generalist-agent 2. Flamingo: https://deepmind.com/blog/tackling-multiple-tasks-with-a-single-visual-language-model 3. Language Models are Few-Shot Learners: https://arxiv.org/abs/2005.14165 4. Emergent Abilities of Large Language Models: https://arxiv.org/abs/2206.07682 5. Attention Is

Lex Fridman Podcast
#299 – Demis Hassabis: DeepMind

Lex Fridman Podcast

Play Episode Listen Later Jul 1, 2022 137:02 Very Popular


Demis Hassabis is the CEO and co-founder of DeepMind. Please support this podcast by checking out our sponsors: – Mailgun: https://lexfridman.com/mailgun – InsideTracker: https://insidetracker.com/lex to get 20% off – Onnit: https://lexfridman.com/onnit to get up to 10% off – Indeed: https://indeed.com/lex to get $75 credit – Magic Spoon: https://magicspoon.com/lex and use code LEX to get $5 off EPISODE LINKS: Demis's Twitter: https://twitter.com/demishassabis DeepMind's Twitter: https://twitter.com/DeepMind DeepMind's Instagram: https://instagram.com/deepmind DeepMind's Website: https://deepmind.com Plasma control paper: https://nature.com/articles/s41586-021-04301-9 Quantum simulation paper: https://science.org/doi/10.1126/science.abj6511 The Emperor's New Mind (book): https://amzn.to/3bx03lo Life Ascending (book): https://amzn.to/3AhUP7z PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/