Podcasts about alphastar

Direct-to-home satellite broadcasting service

  • 55 PODCASTS
  • 89 EPISODES
  • 47m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • Mar 10, 2025 LATEST

POPULARITY

[Popularity chart: 2017–2024]


Best podcasts about alphastar

Latest podcast episodes about alphastar

ZD Tech : tout comprendre en moins de 3 minutes avec ZDNet
2025 Turing Award: the pioneers of reinforcement learning honored!

Mar 10, 2025 · 3:00


Today we're talking about the winners of the 2025 Turing Award, the highest distinction in computer science. It has just been given to two pioneering artificial intelligence researchers: Andrew Barto and Richard Sutton. So what is their contribution to computing? A technique known as reinforcement learning, the key approach that allowed AIs like AlphaZero and AlphaStar to excel at complex games such as chess. But before going further, let's look at what reinforcement learning actually is.

What is reinforcement learning? Imagine a mouse in a maze. At every decision, every direction it takes, it may or may not be rewarded depending on its progress toward the exit. A computer learns in exactly the same way: it explores different options, learns from its mistakes, and adjusts its strategy to maximize its rewards. This method has become essential for training intelligent systems (yes, everyone just says "artificial intelligence" nowadays) that are now capable of making autonomous decisions.

Chess, Go, and shogi as training grounds. Concretely, reinforcement learning has become a key technique for delivering on the promises of modern AI. It is this approach that allowed AlphaZero, the Google DeepMind program, to learn to play chess, Go, and shogi (a traditional Japanese board game) with no prior knowledge: the AI trained against itself on all three games until it mastered them.

In the same way, but this time in video games, the AlphaStar program reached "Grandmaster" level in StarCraft 2.

The first true computational theory of intelligence. Of course, the power of reinforcement learning now has an impact well beyond games. Richard Sutton and Andrew Barto argue that their vision of reinforcement learning rests on a deeper idea: they explain that reinforcement learning could be the first true computational theory of intelligence. And beyond the algorithms, they stress the importance of play and curiosity as fundamental drivers of learning, for humans and machines alike.

ZD Tech is available on all podcast platforms. Subscribe! Hosted by Ausha. Visit ausha.co/politique-de-confidentialite for more information.
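The mouse-in-a-maze analogy in this episode maps directly onto tabular Q-learning, the textbook form of the reinforcement learning that Barto and Sutton pioneered. Below is a minimal, self-contained sketch: a toy five-state corridor with a reward only at the exit. The corridor, constants, and names are invented for illustration; this is not the setup behind AlphaZero or AlphaStar, which use far larger neural-network agents.

```python
import random

random.seed(0)

# Toy "maze": a corridor of states 0..4, exit at state 4.
# Actions: 0 = step left, 1 = step right. Reward +1 only on reaching the exit.
N_STATES, EXIT = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]

for _ in range(500):  # episodes
    s = 0
    while s != EXIT:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == EXIT else 0.0
        # Core update: nudge Q toward reward plus discounted best future value.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s_next]) - Q[s][a])
        s = s_next

# After training, the greedy policy heads right (toward the exit) from every state.
policy = ["right" if Q[s][1] > Q[s][0] else "left" for s in range(EXIT)]
print(policy)  # → ['right', 'right', 'right', 'right']
```

The agent is never told where the exit is; it discovers the policy purely from trial, error, and reward, which is the whole point of the technique the award recognizes.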

Retirement Inside Out
A Glimpse Into 2025: Talking AI, Market Trends, and Economic Shifts with Tony Parish

Feb 7, 2025 · 29:17


We're shaking it up here on the Retirement Inside Out Podcast! In 2025, we're bringing you a fresh lineup of voices, featuring guest hosts from the Financial Independence Group team, along with advisors, industry experts, and business owners. But don't worry: Tom will still be making regular appearances to share his valuable insights! To kick off the first episode of the year, we're thrilled to welcome back the highly popular Tony Parish, CFA, CQF. As AI technology continues to evolve, its role in investment management is growing stronger. Tune in as we dive into a discussion with Tony about AI's potential and limitations in portfolio management, the latest market trends, and exciting updates from Alphastar. Here's some of what we discuss in this episode:

  • AI's role in investment management and market trends
  • The bullish outlook for 2025 and the potential for market reversals
  • The impact of global trade dynamics, tariffs, and immigration policies on the market
  • The outlook for the jobs and housing market in 2025
  • Updates on Alphastar's initiatives

Learn About FIG: https://www.figmarketing.com | 800-527-1155

Dj Murphy Podcast
September 2024 (Podcast 128)

Sep 28, 2024 · 83:53


A late September mix from me featuring all the new bouncies and bangers! Dj Murphy - September 2024 (Podcast 128)

Tracklisting:
Huts, Strings - 4 O'clock (In The Morning)
Billy Gillies, Hannah Boleyn - Right Here Now (Extended)
Hannah Laing, Muki - Ibizacore (Extended Mix)
Chase & Status ft Stormzy - Backbone (Sammy Porter Edit)
Oliver Heldens & David Guetta Feat. Fast Boy - Chills (Feel My Love) (Extended Mix)
Fragma vs Skepta - Toca Man (Dj Murphy's ReRub)
Hannah Laing - Poppin (Extended Mix)
Willie G Vs Poomstyles - Just Fine
David Ryan, Ev Wilde, Shanice Griffin - Here We Go Again (Kimmic Extended Mix)
Klaas, Michael Roman - Silence (Extended Mix)
Ewan Mcvicar, Kettama - The Miracle Makers (Original Mix)
Roman Messer & Rocco - To The Moon & Back (Extended Mix)
Hannah Laing, Jem Cooke - Stay (Extended Mix)
Vula, Schak - Got No Money (Extended)
S.J.J - Media Luna 2024
Danny Bond - Never Leave (Kimmic Extended Remix)
Ampris - Don't Say Goodnight (Extended Mix)
Salvatore Mancuso & Max Niklas - Baby, You're The One (Whole Again) (Extended Mix)
Alena - Turn It Around (Billy Gillies Extended Remix)
Bounce Wave Alliance - Sound Of Love
Klaas - Parallel Lines (Extended Mix)
Lady Luminis & Subcontrollz - Sad Part (Handsup Extended Mix)
Dbl Feat. Aurya - Waiting For You (Extended Mix)
Barcode Brothers & Braaheim - Ultra Love (Flute) (Extended Mix)
Dancecore N3Rd & Rainy - Sleepless (Dancecore N3Rd Extended Mix)
Regard - Call On Me (Extended Mix)
Noyesman & Alphastar! - Don't Go (Handsup Extended Mix)
Hannah Laing - I Need It More (Extended)
Jens O. - Remember The Day (2024 Club Edit)
Ferry Corsten - Just Breathe (Extended Mix)
Submersive & Christina Novelli - Together Again (Extended Mix)
Sixthema & H93 - Natalie Don't (Sixthema & H93 Remix)
Tom Franke & John Dyke - Always And Forever (Original Extended Mix)
Keanu Silva & Izko Feat. Felix Samuel - Give You The Moon (Extended Mix)
Da Blitz - Stay With Me (Santino Classic Remix)
Dj Gollum & Danny Suko, Empyre One Feat. Dj Squared - Spring (Extended Mix)
01:21:44 Crimore - All I Ever Wanted

Hosted on Acast. See acast.com/privacy for more information.

No Priors: Artificial Intelligence | Machine Learning | Technology | Startups
Google DeepMind's Vision for AI, Search and Gemini with Oriol Vinyals from Google DeepMind

Aug 1, 2024 · 46:08


In this episode of No Priors, hosts Sarah and Elad are joined by Oriol Vinyals, VP of Research and Deep Learning Team Lead at Google DeepMind and Technical Co-lead of the Gemini project. Oriol shares insights from his career in machine learning, including leading the AlphaStar team and building competitive StarCraft agents. We talk about Google DeepMind, forming the Gemini project, and integrating AI technology throughout Google products. Oriol also discusses the advancements and challenges in long-context LLMs, the reasoning capabilities of models, and the future direction of AI research and applications. The episode concludes with a reflection on AGI timelines, the importance of specialized research, and advice for future generations in navigating the evolving landscape of AI. Sign up for new podcasts every week. Email feedback to show@no-priors.com. Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @oriolvinyalsml

Show Notes:
(00:00) Introduction to Oriol Vinyals
(00:55) The Gemini Project and Its Impact
(02:04) AI in Google Search and Chat Models
(08:29) Infinite Context Length and Its Applications
(14:42) Scaling AI and Reward Functions
(31:55) The Future of General Models and Specialization
(38:14) Reflections on AGI and Personal Insights
(43:09) Will the Next Generation Study Computer Science?
(45:37) Closing thoughts

Retirement Inside Out
State of the Markets: A Mid-Year Review with Tony Parish

Jul 26, 2024 · 24:55


In this week's episode, we welcome back Tony Parish, CFA, CQF, Chief Investment Officer at Alphastar Capital Management, for another insightful conversation about the U.S. economy and market trends. This episode offers an exclusive preview of our upcoming State of the Markets presentation. Stay tuned as Tony sheds light on the overarching theme for 2024: the incredible persistence and resilience of the U.S. economy and markets. Despite global challenges and inflation concerns, the U.S. economy has demonstrated unparalleled strength and recovery, which is reflected in this year's market performance. Join us as Tony dives into the data, discusses the impact of elections on the markets, and offers insights on inflation trends. Then, at the end of the episode, he'll also share some exciting updates from Alphastar, including their new Orion Enterprise Experience, which is launching this year. Here's some of what we discuss in this episode:

  • The U.S. economy's impressive resilience and strength in 2024
  • The lack of a strong correlation between U.S. elections and market performance
  • Common misconceptions surrounding inflation and managing expectations about its impact on daily expenses
  • Exciting updates and projects from Alphastar this year

Learn About FIG: https://www.figmarketing.com | 800-527-1155

Wealth Guardians Radio
October 21, 2023 - Real Estate in Your Retirement Portfolio?

Oct 19, 2023 · 24:23


On this edition of the Wealth Guardians Radio Show, Doug Ray and Brice Payne discuss whether it's a good idea to include real estate investments in your retirement portfolio. The Wealth Guardians Radio show is hosted by Doug Ray and broadcasts live each Saturday morning at 9:30 on Greensboro, NC's 94.5 WPTI FM and each Sunday morning at 9:30 on Winston-Salem's WTOB 98.0 AM. _____________________ The information provided is for educational purposes only and is not intended as investment advice for any individual or entity. All information contained herein is believed to be from reliable sources; however, we make no representation as to its completeness or accuracy. The views presented today are those of Wealth Guardians and do not necessarily represent the views of Alphastar Capital Management. The opinions expressed are subject to change without notice and do not constitute financial, legal, or tax advice. Any comments regarding safe and secure investments and guaranteed income refer only to fixed insurance products offered by Wealth Guardians. They do not refer in any way to securities or investment advisory products. Please consult your financial professional before executing any financial strategy. Investment Advisory Services offered through Alphastar Capital Management, a registered investment adviser. Alphastar and Wealth Guardians are independent entities.

Wealth Guardians Radio
October 14, 2023 - Continuing to Expose Retirement Planning Complaints

Oct 12, 2023 · 24:31


On this edition of the Wealth Guardians Radio Show, Brice Payne and Garrett Ray continue their discussion about exposing common retirement planning complaints.

Wealth Guardians Radio
October 07, 2023 - Exposing Common Retirement Planning Complaints

Oct 6, 2023 · 25:15


On this edition of the Wealth Guardians Radio Show, Brice Payne and Garrett Ray expose common retirement planning complaints.

Let's Talk AI
#134 - Text-to-Speech, Gartner Hype Cycle, AI2 OLMo, AlphaStar Unplugged, China Regulations, AI Porn Marketplace

Aug 26, 2023 · 98:31


Our 134th episode with a summary and discussion of last week's big AI news! Apologies for the pod being a bit late this week! Read our text newsletter and comment on the podcast at https://lastweekin.ai/ Email us your questions and feedback at contact@lastweekin.ai

Timestamps + links:
(00:00) Intro / Banter
(02:30) Response to listener comments / corrections

Tools & Apps
(03:53) ElevenLabs Comes Out of Beta and Releases Eleven Multilingual v2 - a Foundational AI Speech Model for Nearly 30 Languages
(07:20) Meet Lilli, our generative AI tool that's a researcher, a time saver, and an inspiration
(09:55) Google Tests an A.I. Assistant That Offers Life Advice
(11:42) Runway launches new 'Watch' feature as CEO says Hollywood AI discourse 'needs to be more nuanced'
(12:45) The AI-powered Adobe Express is now generally available
(14:30) Snapchat is expanding further into generative AI with 'Dreams'
(17:35) NCSoft's new AI suite is trained to streamline game production

Applications & Business
(19:45) Gartner Places Generative AI on the Peak of Inflated Expectations on the 2023 Hype Cycle for Emerging Technologies
(28:52) State of AI Q2'23 Report
(35:45) China GPT? Tencent to Unleash Homegrown AI as Big Tech Races for Supremacy
(38:23) What you need to know about Sakana AI, the new startup from a transformer paper co-author
(43:00) AI startup Anthropic raises $100M from Korean telco giant SK Telecom
(45:58) OpenAI acquires AI design studio Global Illumination

Projects & Open Source
(48:13) Announcing AI2 OLMo, an open language model made by scientists, for scientists
(51:50) Introducing IDEFICS: An Open Reproduction of State-of-the-art Visual Language Model
(55:45) Introducing Arthur Bench: The Most Robust Way to Evaluate LLMs

Research & Advancements
(58:45) Self-Alignment with Instruction Backtranslation; Automatically Correcting Large Language Models: Surveying the landscape of diverse self-correction strategies
(01:07:00) DeepMind's AlphaStar Benchmark Improves RL Offline Agent With 90% Win Rate Against SOTA AlphaStar Supervised Agent
(01:12:12) RAVEN: In-Context Learning with Retrieval Augmented Encoder-Decoder Language Models
(01:14:16) BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents
(01:21:40) Risky Giant Steps Can Solve Optimization Problems Faster; Graph of Thoughts: Solving Elaborate Problems with Large Language Models

Policy & Safety
(01:24:08) China's new AI regulations begin to take effect
(01:28:08) The Associated Press sets AI guidelines for journalists
(01:30:45) AI Detection Tools Falsely Accuse International Students of Cheating - The Markup

Synthetic Media & Art
(01:32:08) Inside the AI Porn Marketplace Where Everything and Everyone Is for Sale
(01:35:10) AI Botched Their Headshots
(01:37:52) Outro

GPT Reviews
Zoom Keystroke Detection

Aug 14, 2023 · 14:04


Anthropic has released Claude Instant 1.2, a faster and safer model that outperforms its previous version in math, coding, and safety. Media organizations are calling for regulations to protect copyright in data used to train generative AI models, as it undermines their business models and reduces media diversity. Researchers have made a breakthrough in detecting keystrokes over Zoom calls, using machine learning and microphones to interpret remote keystrokes based on sound profiles of individual keys. The papers discussed in this episode showcase advancements in reinforcement learning for complex games like StarCraft II, language models that critique and refine their own outputs, and metacognitive prompting to improve the understanding abilities of Large Language Models. Contact: sergi@earkind.com

Timestamps:
00:34 Introduction
01:30 Anthropic Releases Claude Instant 1.2
03:01 News outlets demand new rules for AI training data
04:47 AI researchers claim 93% accuracy in detecting keystrokes over Zoom audio
05:48 Fake sponsor
07:40 AlphaStar Unplugged: Large-Scale Offline Reinforcement Learning
09:16 Shepherd: A Critic for Language Model Generation
10:57 Metacognitive Prompting Improves Understanding in Large Language Models
12:44 Outro

The Nonlinear Library
AF - AGI is easier than robotaxis by Daniel Kokotajlo

Aug 13, 2023 · 6:38


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI is easier than robotaxis, published by Daniel Kokotajlo on August 13, 2023 on The AI Alignment Forum. [Epistemic status: Hot take I wrote in 1 hour. We'll see in the comments how well it holds up.]

Who would win in a race: AGI, or robotaxis? Which will be built first? There are two methods: (1) tech companies build AGI/robotaxis themselves; (2) first they build AI that can massively accelerate AI R&D, then they bootstrap to AGI and/or robotaxis.

The direct method

Definitions: By AGI I mean a computer program that functions as a drop-in replacement for a human remote worker, except that it's better than the best humans at every important task (that can be done via remote workers). (h/t Ajeya Cotra for this language) And by robotaxis I mean at least a million fairly normal taxi rides a day are happening without any human watching ready to take over. (So e.g. if the Boring Company gets working at scale, that wouldn't count, since all those rides are in special tunnels.)

1. Scale advantage for AGI: Robotaxis are subject to crippling hardware constraints, relative to AGI. According to my rough estimations, Teslas would cost tens of thousands of dollars more per vehicle, and have 6% less range, if they scaled up the parameter count of their neural nets by 10x. Scaling up by 100x is completely out of the question for at least a decade, I'd guess. Meanwhile, scaling up GPT-4 is mostly a matter of purchasing the necessary GPUs and networking them together. It's challenging but it can be done, has been done, and will be done. We'll see about 2 OOMs of compute scale-up in the next four years, I say, and then more to come in the decade after that.
This is a big deal because roughly half of AI progress historically came from scaling up compute, and because there are reasons to think it's impossible or almost-impossible for a neural net small enough to run on a Tesla to drive as well as a human, no matter how long it is trained. (It's about the size of an ant's brain. An ant is driving your car! Have you watched ants? They bump into things all the time!)

2. Stakes advantage for AGI: When a robotaxi messes up, there's a good chance someone will die. Robotaxi companies basically have to operate under the constraint that this never happens, or happens only once or twice. That would be like DeepMind training AlphaStar except that the whole training run gets shut down after the tenth game is lost. Robotaxi companies can compensate by doing lots of training in simulation, and doing lots of unsupervised learning on real-world camera recordings, but still. It's a big disadvantage. Moreover, the vast majority of tasks involved in being an AGI are 'forgiving' in the sense that it's OK to fail. If you send a weirdly worded message to a user, or make a typo in your code, it's OK, you can apologize and/or fix the error. Only in a few very rare cases are failures catastrophic. Whereas with robotaxis, the opportunity for catastrophic failure is omnipresent. As a result, I think arguably being a safe robotaxi is just inherently harder than most of the tasks involved in being an AGI. (Analogy: Suppose that cars and people were indestructible, like in a video game, so that they just bounced off each other when they collided. Then I think we'd probably have robotaxis already; sure, it might take you 20% longer to get to your destination due to all the crashes, but it would be so much cheaper! Meanwhile, suppose that if your chatbot threatens or insults >10 users, you'd have to close down the project. Then Microsoft Bing would have been shut down, along with every other chatbot ever.)
Finally, from a regulatory perspective, there are ironically much bigger barriers to building robotaxis than building AGI. If you want to deploy a fleet of a million robotaxis there is a lot of red tape you need to cut th...

The Nonlinear Library
LW - Hooray for stepping out of the limelight by So8res

Apr 1, 2023 · 2:23


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hooray for stepping out of the limelight, published by So8res on April 1, 2023 on LessWrong. From maybe 2013 to 2016, DeepMind was at the forefront of hype around AGI. Since then, they've done less hype. For example, AlphaStar was not hyped nearly as much as I think it could have been. I think that there's a very solid chance that this was an intentional move on the part of DeepMind: that they've been intentionally avoiding making AGI capabilities seem sexy. In the wake of big public releases like ChatGPT and Sydney and GPT-4, I think it's worth appreciating this move on DeepMind's part. It's not a very visible move. It's easy to fail to notice. It probably hurts their own position in the arms race. I think it's a prosocial move. If you are the sort of person who is going to do AGI capabilities research—and I recommend against it—then I'd recommend doing it at places that are more likely to be able to keep their research private, rather than letting it contribute to an arms race that I expect would kill literally everyone. I suspect that DeepMind has not only been avoiding hype, but also avoiding publishing a variety of their research. Various other labs have also been avoiding both, and I applaud them too. And perhaps DeepMind has been out of the limelight because they focus less on large language models, and the results that they do have are harder to hype. But insofar as DeepMind was in the limelight, and did intentionally step back from it and avoid drawing tons more attention and investment to AGI capabilities (in light of how Earth is not well-positioned to deploy AGI capabilities in ways that make the world better), I think that's worth noticing and applauding. 
(To be clear: I think DeepMind could do significantly better on the related axis of avoiding publishing research that advances capabilities, and for instance I was sad to see Chinchilla published. And they could do better at avoiding hype themselves, as noted in the comments. At this stage, I would recommend that DeepMind cease further capabilities research until our understanding of alignment is much further along, and my applause for the specific act of avoiding hype does not constitute a general endorsement of their operations. Nevertheless, my primary guess is that DeepMind has made at least some explicit attempts to avoid hype, and insofar as that's true, I applaud the decision.) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library: LessWrong
LW - Hooray for stepping out of the limelight by So8res

Apr 1, 2023 · 2:23


Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hooray for stepping out of the limelight, published by So8res on April 1, 2023 on LessWrong. From maybe 2013 to 2016, DeepMind was at the forefront of hype around AGI. Since then, they've done less hype. For example, AlphaStar was not hyped nearly as much as I think it could have been. I think that there's a very solid chance that this was an intentional move on the part of DeepMind: that they've been intentionally avoiding making AGI capabilities seem sexy. In the wake of big public releases like ChatGPT and Sydney and GPT-4, I think it's worth appreciating this move on DeepMind's part. It's not a very visible move. It's easy to fail to notice. It probably hurts their own position in the arms race. I think it's a prosocial move. If you are the sort of person who is going to do AGI capabilities research—and I recommend against it—then I'd recommend doing it at places that are more likely to be able to keep their research private, rather than letting it contribute to an arms race that I expect would kill literally everyone. I suspect that DeepMind has not only been avoiding hype, but also avoiding publishing a variety of their research. Various other labs have also been avoiding both, and I applaud them too. And perhaps DeepMind has been out of the limelight because they focus less on large language models, and the results that they do have are harder to hype. But insofar as DeepMind was in the limelight, and did intentionally step back from it and avoid drawing tons more attention and investment to AGI capabilities (in light of how Earth is not well-positioned to deploy AGI capabilities in ways that make the world better), I think that's worth noticing and applauding.
(To be clear: I think DeepMind could do significantly better on the related axis of avoiding publishing research that advances capabilities, and for instance I was sad to see Chinchilla published. And they could do better at avoiding hype themselves, as noted in the comments. At this stage, I would recommend that DeepMind cease further capabilities research until our understanding of alignment is much further along, and my applause for the specific act of avoiding hype does not constitute a general endorsement of their operations. Nevertheless, my primary guess is that DeepMind has made at least some explicit attempts to avoid hype, and insofar as that's true, I applaud the decision.) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
LW - Contra "Strong Coherence" by DragonGod

Mar 5, 2023 · 7:03


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Contra "Strong Coherence", published by DragonGod on March 4, 2023 on LessWrong. Polished from my shortform. See also: Is "Strong Coherence" Anti-Natural? Introduction Many AI risk failure modes imagine strong coherence/goal directedness (e.g. [expected] utility maximisers). Such strong coherence is not represented in humans (or any other animal), seems unlikely to emerge from deep learning, and may be "anti-natural" to general intelligence in our universe. I suspect the focus on strongly coherent systems was a mistake that set the field back a bit, and it's not yet fully recovered from that error. I think most of the AI safety work for strongly coherent agents (e.g. decision theory) will end up inapplicable/useless for aligning powerful systems, because powerful systems in the real world are "of an importantly different type". Ontological Error? I don't think it nails everything, but on a purely ontological level, @Quintin Pope and @TurnTrout's shard theory feels a lot more right to me than e.g. HRAD. HRAD is based on an ontology that seems to me to be mistaken/flawed in important respects. The shard theory account of value formation (while lacking) seems much more plausible as an account of how intelligent systems develop values (where values are "contextual influences on decision making") than the immutable terminal goals in strong coherence ontologies. I currently believe that (immutable) terminal goals are just a wrong frame for reasoning about generally intelligent systems in our world (e.g. humans, animals and future powerful AI systems). 
Theoretical Justification and Empirical Investigation Needed I'd be interested in more investigation into what environments/objective functions select for coherence and to what degree said selection occurs. And empirical demonstrations of systems that actually become more coherent as they are trained for longer/"scaled up" or otherwise amplified. I want advocates of strong coherence to explain why agents operating in rich environments (e.g. animals, humans) or sophisticated ML systems (e.g. foundation models) aren't strongly coherent. And mechanistic interpretability analysis of sophisticated RL agents (e.g. AlphaStar, OpenAI Five [or replications thereof]) to investigate their degree of coherence. Conclusions Currently, I think strong coherence is unlikely (plausibly "anti-natural") and am unenthusiastic about research agendas and threat models predicated on strong coherence. Disclaimer The above is all low-confidence speculation, and I may well be speaking out of my ass. By "strong coherence/goal directedness" I mean something like: Informally: a system has immutable terminal goals. Semi-formally: a system's decision making is well described as (an approximation of) argmax over actions (or higher level mappings thereof) to maximise the expected value of a single fixed utility function over states. You cannot well predict the behaviour/revealed preferences of humans or other animals by the assumption that they have immutable terminal goals or are expected utility maximisers. The ontology that intelligent systems in the real world instead have "values" (contextual influences on decision making) seems to explain their observed behaviour (and purported "incoherencies") better. Many observed values in humans and other mammals (e.g. fear, play/boredom, friendship/altruism, love, etc.) 
seem to be values that were instrumental for increasing inclusive genetic fitness (promoting survival, exploration, cooperation and sexual reproduction/survival of progeny respectively). Yet, humans and mammals seem to value these terminally and not because of their instrumental value on inclusive genetic fitness. That the instrumentally convergent goals of evolution's fitness criterion manifested as "terminal" values in mammals is IMO strong empiric...
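The post's semi-formal definition of strong coherence can be written out explicitly. This is my reconstruction of the argmax formulation described above, with illustrative symbols (a policy pi, action set A, transition model P, and a single fixed utility function U over states):

```latex
% Strong coherence, semi-formally: every action choice maximises the
% expected value of one fixed utility function U over resulting states.
\pi^{*}(s) \in \operatorname*{arg\,max}_{a \in \mathcal{A}}
  \; \mathbb{E}_{s' \sim P(\cdot \mid s, a)}\!\left[ U(s') \right]
```

The contrast the author draws is with "values" as contextual influences on decision making, which need not admit any single such U.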


The Nonlinear Library
LW - Parameter Scaling Comes for RL, Maybe by 1a3orn

The Nonlinear Library

Play Episode Listen Later Jan 24, 2023 22:57


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Parameter Scaling Comes for RL, Maybe, published by 1a3orn on January 24, 2023 on LessWrong. TLDR Unlike language models or image classifiers, past reinforcement learning models did not reliably get better as they got bigger. Two DeepMind RL papers published in January 2023 nevertheless show that with the right techniques, scaling up RL model parameters can increase both total reward and sample-efficiency of RL agents -- and by a lot. Return-to-scale has been key for rendering language models powerful and economically valuable; it might also be key for RL, although many important questions remain unanswered. Intro Reinforcement learning models often have very few parameters compared to language and image models. The Vision Transformer has 2 billion parameters. GPT-3 has 175 billion. The slimmer Chinchilla, trained in accord with scaling laws emphasizing bigger datasets, has 70 billion. By contrast, until a month ago, the largest mostly-RL models I knew of were the agents for StarCraft and Dota 2, AlphaStar and OpenAI5, which had 139 million and 158 million parameters. And most RL models are far smaller, coming in well under 50 million parameters. The reason RL hasn't scaled up the size of its models is simple -- doing so generally hasn't made them better. Increasing model size in RL can even hurt performance. MuZero Reanalyze gets worse on some tasks as you scale network size. So does a vanilla SAC agent. There has been good evidence for scaling model size in somewhat... non-central examples of RL. For instance, offline RL agents trained from expert examples, such as DeepMind's 1.2-billion parameter Gato or Multi-Game Decision Transformers, clearly get better with scale. Similarly, RL from human feedback on language models generally shows that larger LMs are better. 
Hybrid systems such as PaLM SayCan benefit from larger language models. But all these cases sidestep problems central to RL -- they have no need to balance exploration and exploitation in seeking reward. In the typical RL setting -- there has generally been little scaling and little evidence for the efficacy of scaling. (Although there has not been no evidence.) None of the above means that the compute spent on RL models is small or that compute scaling does nothing for them. AlphaStar used only a little less compute than GPT-3, and AlphaGo Zero used more, because both of them trained on an enormous number of games. Additional compute predictably improves performance of RL agents. But, rather than getting a bigger brain, almost all RL algorithms spend this compute by (1) training on an enormous number of games or (2) (if concerned with sample-efficiency) revisiting the games that they've played an enormous number of times. So for a while RL has lacked: (1) The ability to scale up model size to reliably improve performance. (2) (Even supposing the above were around) Any theory like the language-model scaling laws which would let you figure out how to allocate compute between model size / longer training. My intuition is that the lack of (1), and to a lesser degree the lack of (2), is evidence that no one has stumbled on the "right way" to do RL or RL-like problems. It's like language modeling when it only had LSTMs and no Transformers, before the frighteningly straight lines in log-log charts appeared. In the last month, though, two RL papers came out with interesting scaling charts, each showing strong gains to parameter scaling. Both were (somewhat unsurprisingly) from DeepMind. This is the kind of thing that leads me to think "Huh, this might be an important link in the chain that brings about AGI." The first paper is "Mastering Diverse Domains Through World Models", which names its agent DreamerV3. 
The second is "Human-Timescale Adaptation in an Open-Ended Task Space", which names its agent Adaptive...
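The "frighteningly straight lines in log-log charts" mentioned above are power laws. A minimal sketch of what such a language-model scaling law looks like, using illustrative constants in the spirit of published LM laws (the specific values of `n_c` and `alpha` are assumptions for demonstration; nothing comparable is established for RL, which is the post's point):

```python
def lm_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Toy scaling law: loss falls as a power law in parameter count.

    n_c and alpha are illustrative placeholders; a real fit would come
    from training many models and regressing loss against size.
    """
    return (n_c / n_params) ** alpha

# On a log-log plot these points lie on a straight line of slope -alpha,
# which is exactly what RL lacked before the two papers discussed here.
losses = [lm_loss(n) for n in (1e8, 1e9, 1e10, 1e11)]
```

Having such a law is what lets you allocate a fixed compute budget between model size and training length, the missing item (2) in the list above.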


Pensacola Expert Panel
1/2/23 - Medicare Monday - Keith & Pam Giles - Owners of Verus Health Partners, Partners with AlphaStar Financial

Pensacola Expert Panel

Play Episode Listen Later Jan 2, 2023 24:03


https://www.verushp.com/about-us.php | 850-710-7196 | 2810 E. Cervantes St. | Hours: M-Th 9-4, Fri by appointment only, S&S closed | OEP (Open Enrollment Period: 1/1/23-3/31/23) | Options for Veterans at the VA Marketplace | Educational Events with AlphaStar and Verus Health Partners

Building With People For People: The Unfiltered Build Podcast
Ep. 14: Data is your destiny - Exploring data science with Ryan Valenza

Building With People For People: The Unfiltered Build Podcast

Play Episode Listen Later Sep 13, 2022 41:35


Have you used Google search today? Or listened to music recommended to you? The results and suggestions you receive from these types of services are powered by Data Science. In today's world, big data and insights are the new currency. While it's the machines that ultimately do the number crunching and provide the data, it's the human touch behind the scenes that makes it all possible. In today's episode we explore the field of Data Science and Machine Learning, how it permeates every walk of life, and the endless possibilities it provides. Our guest today, Ryan Valenza, is a scientist through and through. He earned his Bachelor of Science in Physics and Math from the University of Maryland, Baltimore County and his Master of Science and PhD in Physics from the University of Washington. He has held roles as a Data Engineer and Data Analyst at the Allen Institute for Brain Science and as Chief Data Scientist at Stackline, an e-commerce startup. He recently began a new job as the Director of Machine Learning at Bungie. If Bungie rings a bell it should: they created the gaming franchises of Halo, Destiny and Marathon, to name a few. When he is not teaching machines how to interpret data he is a big gamer himself with origins in Donkey Kong Country, a runner, and an adventure planner for his three-year-old daughter. 
Connect with Ryan: LinkedIn Twitter Twitch Show notes and helpful resources: Condensed matter physics Random forest and gradient descent algorithms Converting sound to image using Sonographic Sound Processing Convolutional Neural Network - a series of mathematical transformations Google DeepMind AlphaGo - the computer program that defeated the Go grand master AlphaStar - the computer that plays StarCraft II Words to live by - "You should never bring someone else down to bring yourself up" Ryan's intro to machine learning will cover supervised learning, unsupervised learning and reinforcement learning Data Science at UMBC Building something cool or solving interesting problems? Want to be on this show? Send me an email at jointhepodcast@unfilteredbuild.com Podcast produced by Unfiltered Build - dream.design.develop.
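Gradient descent, listed in the show notes above, fits in a few lines. This toy example is my own (not from the episode) and minimises a one-dimensional quadratic:

```python
def gradient_descent(grad, x0: float, lr: float = 0.1, steps: int = 100) -> float:
    """Repeatedly step against the gradient to approach a local minimum."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimise f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
# Each step shrinks the error by a factor of (1 - lr * 2) = 0.8.
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

The same idea, applied to millions of parameters at once, is what trains the neural networks discussed in the episode.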

The Nonlinear Library
AF - Oversight Leagues: The Training Game as a Feature by Paul Bricman

The Nonlinear Library

Play Episode Listen Later Sep 9, 2022 17:01


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Oversight Leagues: The Training Game as a Feature, published by Paul Bricman on September 9, 2022 on The AI Alignment Forum. This post is part of my hypothesis subspace sequence, a living collection of proposals I'm exploring at Refine. Followed by ideological inference engines. Thanks Adam Shimi for advice on putting more legible content out there. Thanks Eric Winsor, Leo Grinsztajn, Linda Linsefors, Lucas Texeira, Tammy Leake, Ze Shen for discussions which inspired this post. TL;DR: An oversight league is a training scheme which incentivizes an agent and an evaluator to constantly try to game each other, leading to synchronized increases in capability for the two players. However, the evaluator is being offered a host of additional learning signals to help it maintain a consistent (and potentially provable) lead over the agent. Oversight leagues heavily draw on ideas from capability literature, including: league training in AlphaStar, game theory in GANs, adversarial robustness, etc. Intro The whole project of oversight leagues relies on the following non-exhaustive list of assumptions: Assumption 1, "AGI Hard, Human Values Harder": We are unlikely to formulate the True Name of human values in closed-form before deploying transformative AI. The best we are likely to do before takeoff is model human values approximately and implement an imperfect evaluator. Assumption 2, "Linear Capability Ordering": Any fixed evaluator (e.g. a reward model) can be gamed by an agent above a certain threshold of capability. More generally, an agent whose capability improves consistently faster than the capability of an evaluator will eventually be able to game said evaluator. By "evaluator capability," I'm referring to its ability to prevent being gamed. 
Assumption 3, "Humans Are Not True Gamers": Human oversight is impractical because our capabilities as evaluators can't improve at an arbitrary large rate. Save for cyborgian schemes for human augmentation, human oversight would eventually be gamed by an agent of sufficient capability. Assumption 4, "Zone of Proximal Development": There is a relatively small interval of evaluator lead which allows the agent to improve from challenging it, even if the evaluator is slightly ahead. This assumption is motivated by the fact that GANs are actually working, despite an inevitable imbalance between the two components involved at any given time. Assumption 5, "Point of No Return": Even a minor lead in capability for the agent relative to the evaluator is catastrophic, as the agent can take advantage of this window of opportunity and maneuver itself into a persistent advantageous position by flying under the evaluator's radar. From the assumptions above, we can infer that if we are to deploy a robust evaluator as an operationalization of human values, it must certainly not be fixed. Not only that, but it must additionally improve in performance at least as fast as the agent being evaluated, so that it never gets overtaken. What's more, simple human oversight doesn't fulfill those necessary conditions, hence we should consider automated schemes. Proposal An oversight league is one such automated scheme for training agents and evaluators in a way which improves their performance in lockstep. The crux of this training regime is to supply most of the training through bilateral learning signals, and thus render the improvement of the two components interdependent. By ensuring that most of the learning opportunities of the agent come from playing against the evaluator and vice versa, the two sides form a positive feedback loop resembling patterns of co-evolution. 
The oversight league scheme implicitly attempts to cultivate "antifragility" by applying appropriate stressors on the evaluator in the form of ever more capable agents, a reliable way of impr...
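The lockstep dynamic the proposal describes can be caricatured numerically. This is a toy sketch of my own devising (scalar "capabilities", made-up update sizes), not the author's actual scheme:

```python
import random

def oversight_league(rounds: int = 50, target_lead: float = 0.1, seed: int = 0):
    """Toy oversight league: agent and evaluator capability rise in lockstep.

    The evaluator's auxiliary learning signals are modelled crudely as a
    floor that restores its target lead over the agent after every round.
    """
    rng = random.Random(seed)
    agent, evaluator = 0.0, target_lead
    history = []
    for _ in range(rounds):
        # Agent improves by challenging the evaluator (the "zone of
        # proximal development" assumption).
        agent += rng.uniform(0.0, 0.05)
        # Auxiliary signals keep the evaluator at least target_lead ahead,
        # guarding against the "point of no return" assumption.
        evaluator = max(evaluator + rng.uniform(0.0, 0.05), agent + target_lead)
        history.append((agent, evaluator))
    return history

history = oversight_league()
```

The invariant the scheme aims for (the evaluator never falls behind the agent) holds by construction in this toy; the hard part the post grapples with is making it hold for real learners.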

Retirement Inside Out
Ep 79: Mid-Year Economic Report & Investment Outlook with Tony Parish, CFA®, CQF

Retirement Inside Out

Play Episode Listen Later Aug 19, 2022 25:02


Hard to believe it's already time for another mid-year update with our good friend Tony Parish, CFA®, CQF, who is the Chief Investment Officer at Alphastar Capital Management.  We're grateful for his time twice each year on the podcast as he gives us a great overview of the economy and investment markets. Today he's here to provide a summation and culmination of everything that went on during the first half of the year, which was marked by volatility and inflation. Every year around August, Tony pulls together all the Alphastar market and economic data to put together a PowerPoint presentation that we provide to all of our advisors. It's a tremendous resource that you'll get some insight from on the show today. We think this is another highly informative show from someone who is plugged into US capital, investment, and economic markets. Here is some of what you'll learn on this episode: A recap of the brutal first half of the year from Tony's perspective. (3:19) His outlook on the second half of the year and why there's reason for investor optimism. (5:39) Why their data is telling them to be prepared for a long period of above-average inflation. (9:44) What tools are they offering advisors right now based on current conditions? (12:14) The three known variables Alphastar is watching closely the rest of the year. (14:45) What should advisors be doing right now? (19:50)   More About Our Guest: https://www.alphastarcm.com/who-we-are/

The Nonlinear Library
AF - We have achieved Noob Gains in AI by Aniruddha Nrusimha

The Nonlinear Library

Play Episode Listen Later May 18, 2022 12:16


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We have achieved Noob Gains in AI, published by Aniruddha Nrusimha on May 18, 2022 on The AI Alignment Forum. TL;DR I explain why I think AI research has been slowing down, not speeding up, in the past few years. How have your expectations for the future of AI research changed in the past three years? Based on recent posts in this forum, it seems that results in text generation, protein folding, image synthesis, and other fields have accomplished feats beyond what was thought possible. From a bird's eye view, it seems as though the breakneck pace of AI research is already accelerating exponentially, which would make the safe bet on AI timelines quite short. This way of thinking misses the reality on the front lines of AI research. Innovation is stalling beyond just throwing more computation at the problem, and the forces that made scaling computation cheaper or more effective are slowing. The past three years of AI results have been dominated by wealthy companies throwing very large models at novel problems. While this expands the economic impact of AI, it does not accelerate AI development. To figure out whether AI development is actually accelerating, we need to answer a few key questions: What has changed in AI in the past three years? Why has it changed, and what factors have allowed that change? How have those underlying factors changed in the past three years? By answering these fundamental questions, we can get a better understanding of how we should expect AI research to develop over the near future. And maybe along the way, you'll learn something about lifting weights too. We shall see. What has changed in AI research in the past three years? Gigantic models have achieved spectacular results on a large variety of tasks. How large is the variety of tasks? In terms of domain area, quite varied. 
Advances have been made in major hard science problems like protein folding, imaginative tasks like creating images from descriptions, and playing complex games like StarCraft. How large is the variety of models used? While each model features many domain specific model components and training components, the core of each of these models is a giant transformer trained with a variant of gradient descent, usually Adam. How large are these models? That depends. DALL-E 2 and AlphaFold are O(10GB), AlphaStar is O(1GB), and the current state of the art few shot NLP models (Chinchilla) are O(100GB). One of the most consistent findings of the past decade of AI research is that larger models trained with more data get better results, especially transformers. If all of these models are built on top of the same underlying architecture, why is there so much variation in size? Think of training models like lifting weights. What limits your ability to lift heavy weights? Data availability: (Nutrition) If you don't eat enough food, you'll never gain muscle! Data is the food that makes models learn, and the more "muscle" you want the more "food" you need. When looking for text on the internet, it is easy to get terabytes of data to train a model. This is harder for other tasks. Cost (exhaustion): No matter how rich your corporation is, training a model is expensive. Each polished model you see comes after a lot of experimentation and trials, which uses a lot of computational resources. AI labs are notorious cost sinks. The talent they acquire is expensive, and in addition to their salaries the talent demands access to top of the line computational resources. Training methodology (What exercises you do). NLP models only require training one big transformer. More complex models like DALL-E 2 and AlphaFold have many subcomponents optimized for their use cases. 
Training an NLP model is like deadlifting a loaded barbell and training AlphaFold is like lifting a box filled with stuff: at equiva...
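The O(1GB)/O(10GB)/O(100GB) figures above follow directly from parameter counts. A quick back-of-envelope helper, assuming 4-byte fp32 weights (the precision is my assumption; the parameter counts are the ones quoted in the post):

```python
def model_size_gb(n_params: float, bytes_per_param: int = 4) -> float:
    """Rough on-disk size of a dense model's weights, in gigabytes."""
    return n_params * bytes_per_param / 1e9

# Parameter counts quoted in the post:
alphastar_gb = model_size_gb(139e6)   # roughly 0.6 GB, i.e. O(1GB)
chinchilla_gb = model_size_gb(70e9)   # 280 GB, i.e. O(100GB)
```

Halving the precision to fp16 halves these figures, which is why quoted model sizes vary with the storage format.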

The Last Call
AI, ML, WTF? Pt. 3: Game over man! Game Over!

The Last Call

Play Episode Listen Later Apr 19, 2022 24:48


In part 3 of our conversation we dive into how AI/ML is evolving. David and Andy explore a Two Minute Papers video on AlphaStar (https://www.youtube.com/watch?v=jtlrWblOyP4). Is it game over for humans? Not quite yet.

Advisor Development Show with Karl Hoover
S3 E2: Becoming an Integrated Advisor with Alphastar

Advisor Development Show with Karl Hoover

Play Episode Listen Later Mar 16, 2022 39:23


Alphastar Capital Management is a FIG partner and registered investment advisor firm that started nearly 10 years ago, as they recognized the convergence of wealth management and insurance. Strategically they run portfolios, manage risk, and help advisors focus on what they're great at - developing relationships with their clients.  They are truly helping the next generation of advisors become integrated advisors. On today's episode, Sean, the Chief Sales Officer at Alphastar Capital Management, joins us to discuss his journey to Alphastar, risk protection, structured notes, unique solutions for wealthy clients, and more.   Alphastar Capital Management: https://www.alphastarcm.com/   TIMESTAMPS: 1:34 – Most common app you use? 1:59 – Special family traditions 2:45 – His journey to Alphastar 7:24 – What is Alphastar?  8:57 – What strategies do they use? 12:18 – How is Betashield different? 16:42 – 5 different portfolios 17:36 – Protecting your clients 18:32 – Structured notes 20:59 – Getting into an overpriced market 22:34 – Addressing taxes and real estate 28:20 – Providing better income solutions 30:00 – What makes Alphastar different? 33:34 – Being a partner with Alphastar 34:33 – Developing a strong mindset   MORE INFORMATION:  https://advisordevelopmentshow.com/

Retirement Inside Out
Ep 53: The State of the Market with Tony Parish

Retirement Inside Out

Play Episode Listen Later Feb 11, 2022 25:38


The market has been relatively stable over the past few years, but this could be changing. What do you need to know to help your clients make the best decisions possible? On today's show, Tony Parish joins us to discuss his state of the market presentation.   Tony provides advisors with thorough information on important financial topics like inflation, investments, and market trends. 2022 has started off rocky in January. To truly understand where we stand today, Tony explores the events leading up to the current state of the market and where things are likely to go. Using these tools, Tony hopes advisors have the information and knowledge they need for success in the new year.   Key Points:  1:50 – Welcome Tony! 2:55 – State of the market presentation 6:04 – How detailed is this presentation? 7:22 – Tracking inflation and its impacts 10:49 – The 5 goals for investors 14:36 – What can you add to your process? 18:01 – What does a CIO at Alphastar do? 21:15 – Empowering your business 23:08 – Insurance as an asset class   More About Our Guest:  https://gocheckers.com/

Choses à Savoir TECH
What is the AlphaCode AI?

Choses à Savoir TECH

Play Episode Listen Later Feb 8, 2022 2:35


On the artificial intelligence front, DeepMind, a Google subsidiary, is clearly one of the world leaders. After AlphaGo, which managed to beat the world's best Go player, and AlphaStar on the video game StarCraft II, AI is now moving into the field of software development. AlphaCode, the name of this new artificial intelligence, aims, quote, "to write computer programs at a competitive level," end quote. What is it exactly? Is it a revolution, or rather a threat to the developer community? That's what I propose to look at in this episode. In detail, AlphaCode was trained on code publicly available on the open-source platform GitHub. But lately it is on Codeforces, a platform that regularly organises ranked competitions for developers, that the AI has distinguished itself. To sum up, at the end of its participation, AlphaCode ranked within the best 54% of human answers. Broadening out a little, AlphaCode would even place among the top 28% of competitors of the last six months. Results that, quote, "exceed expectations" for Mike Mirzayanov, founder of Codeforces: "I was sceptical because even for simple problems it is often necessary not only to implement the algorithm, but also (and this is the hardest part) to invent it [...] AlphaCode managed to reach a level of performance comparable to that of a promising new competitor," end quote. For now, AlphaCode can only exercise its talents within coding competitions, under strict instructions. In short, seeing this AI write a program from scratch is not happening just yet. 
What's more, there is a real risk in handing code-writing over to an artificial intelligence: if the data used for its training contains security flaws, it is not impossible that the AI will reproduce them later. In any case, DeepMind is not alone in the race, since Microsoft and OpenAI are also very active in this field. That said, we are still quite a way from seeing programs designed entirely by robots.

Börsenradio to go Marktbericht
Market report, Wed. 02.02.2022 - DAX gives back all gains ahead of ECB meeting, euro-area inflation at 5.1%

Börsenradio to go Marktbericht

Play Episode Listen Later Feb 2, 2022 25:07


Investors seem to be growing more confident again. On Wednesday the DAX rose for the third day in a row, despite new record highs in coronavirus infections and incidence, reports such as the US deploying 2,000 additional soldiers to Europe amid the uncertainty surrounding the Ukraine crisis, and inflation in Europe which, at 5.1%, not only came in higher than expected but also rose to a record. Inflation in Germany has also recently come in higher than forecast. The DAX initially stayed strong but gave back its gains by the close. The reason was less the inflation data than the missing support from the US, where the Dow Jones opened without gains. The DAX closed nearly unchanged at 15,613 points. The ATX in Vienna rose 0.9% to 3,939 points, the ATX Total Return to 8,008 points. The strongest gainers in the DAX were cyclical stocks such as Symrise, Covestro and HeidelbergCement. Losers were the two aviation stocks Airbus and MTU, and the previous day's winner Delivery Hero. As expected, the EU Commission has classified nuclear power and gas as sustainable energy forms under its taxonomy. Also in focus: several company results. Alongside its figures, Teamviewer announced plans to buy back up to 10% of its own shares; the stock jumped a full +18%. On the market situation, hear fund advisers Lukas Spang of Tigris Capital and Felix Gode of Alphastar; on inflation in Europe coming in higher than expected, chief economist Thorsten Polleit of Degussa; on the takeover of VIB Vermögen by DIC Asset, analyst Stefan Scharff of SRC; on the results from Paypal and Alphabet, asset manager Burkhard Wagner of Partners; on the Vantage Towers results, CFO Thomas Reisten; and on the aifinyo results, CEO Stefan Kempf.

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Deep Learning, Transformers, and the Consequences of Scale with Oriol Vinyals - #546

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Play Episode Listen Later Dec 20, 2021 52:43


Today we're excited to kick off our annual NeurIPS coverage, joined by Oriol Vinyals, the lead of the deep learning team at DeepMind. We cover a lot of ground in our conversation with Oriol, beginning with a look at his research agenda and why its scope has remained wide even as the field has matured, and his thoughts on transformer models and whether they will get us beyond the current state of deep learning, or whether some other model architecture would be more advantageous. We also touch on his thoughts on the large language model craze, before jumping into his recent paper StarCraft II Unplugged: Large Scale Offline Reinforcement Learning, a follow-up to DeepMind's popular AlphaStar work from a few years ago. Finally, we discuss the degree to which the work that DeepMind and others are doing around games actually translates into real-world, non-game scenarios, and recent work on multimodal few-shot learning, and we close with a discussion of the consequences of the level of scale we've achieved thus far. The complete show notes for this episode can be found at twimlai.com/go/546

The Nonlinear Library: LessWrong Top Posts
Developmental Stages of GPTs by orthonormal

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 11, 2021 11:18


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Developmental Stages of GPTs, published by orthonormal on the AI Alignment Forum. Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. Epistemic Status: I only know as much as anyone else in my reference class (I build ML models, I can grok the GPT papers, and I don't work for OpenAI or a similar lab). But I think my thesis is original. Related: Gwern on GPT-3 For the last several years, I've gone around saying that I'm worried about transformative AI, an AI capable of making an Industrial Revolution-sized impact (the concept is agnostic on whether it has to be AGI or self-improving), because I think we might be one or two cognitive breakthroughs away from building one. GPT-3 has made me move up my timelines, because it makes me think we might need zero more cognitive breakthroughs, just more refinement / efficiency / computing power: basically, GPT-6 or GPT-7 might do it. My reason for thinking this is comparing GPT-3 to GPT-2, and reflecting on what the differences say about the "missing pieces" for transformative AI. My Thesis: The difference between GPT-2 and GPT-3 has made me suspect that there's a legitimate comparison to be made between the scale of a network architecture like the GPTs, and some analogue of "developmental stages" of the resulting network. Furthermore, it's plausible to me that the functions needed to be a transformative AI are covered by a moderate number of such developmental stages, without requiring additional structure. Thus GPT-N would be a transformative AI, for some not-too-large N, and we need to redouble our efforts on ways to align such AIs. 
The thesis doesn't strongly imply that we'll reach transformative AI via GPT-N especially soon; I have wide uncertainty, even given the thesis, about how large we should expect N to be, and whether the scaling of training and of computation slows down progress before then. But it's also plausible to me now that the timeline is only a few years, and that no fundamentally different approach will succeed before then. And that scares me. Architecture and Scaling GPT, GPT-2, and GPT-3 use nearly the same architecture; each paper says as much, with a sentence or two about minor improvements to the individual transformers. Model size (and the amount of training computation) is really the only difference. GPT took 1 petaflop/s-day to train 117M parameters, GPT-2 took 10 petaflop/s-days to train 1.5B parameters, and the largest version of GPT-3 took 3,000 petaflop/s-days to train 175B parameters. By contrast, AlphaStar seems to have taken about 30,000 petaflop/s-days of training in mid-2019, so the pace of AI research computing power projects that there should be about 10x that today. The upshot is that OpenAI may not be able to afford it, but if Google really wanted to make GPT-4 this year, they could afford to do so. Analogues to Developmental Stages There are all sorts of (more or less well-defined) developmental stages for human beings: image tracking, object permanence, vocabulary and grammar, theory of mind, size and volume, emotional awareness, executive functioning, et cetera. I was first reminded of developmental stages a few years ago, when I saw the layers of abstraction generated in this feature visualization tool for GoogLeNet. We don't have feature visualization for language models, but we do have generative outputs. And as you scale up an architecture like GPT, you see higher levels of abstraction. Grammar gets mastered, then content (removing absurd but grammatical responses), then tone (first rough genre, then spookily accurate authorial voice). 
Topic coherence is mastered first on the phrase level, then the sentence level, then the paragraph level. So too with narrative flow. Gwern's poetry experiments (GPT-2, GPT-3) are good examples. GPT-2 could more ...
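The compute figures quoted in the excerpt (1, 10, and 3,000 petaflop/s-days for 117M, 1.5B, and 175B parameters) are easy to sanity-check. The short script below is a back-of-the-envelope illustration, not something from the post itself; the FLOP-per-parameter ratio it prints is purely for intuition:

```python
# Back-of-the-envelope check of the training-compute figures quoted above.
# A petaflop/s-day is 10^15 FLOP/s sustained for 24 hours.
PFS_DAY = 1e15 * 86_400  # ≈ 8.64e19 FLOP

# name: (training compute in petaflop/s-days, parameter count)
models = {
    "GPT":   (1,     117e6),
    "GPT-2": (10,    1.5e9),
    "GPT-3": (3_000, 175e9),
}

for name, (pfs_days, params) in models.items():
    flop = pfs_days * PFS_DAY
    # Total training FLOP, and how much compute was spent per weight.
    print(f"{name}: {flop:.2e} FLOP total, {flop / params:.1e} FLOP per parameter")
```

Running this puts GPT-3's training run near 2.6e23 FLOP, consistent with the ~10^23 FLOP figures cited elsewhere in these posts.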

The Nonlinear Library: LessWrong Top Posts
The unexpected difficulty of comparing AlphaStar to humans by Richard Korzekwa

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 11, 2021 37:37


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The unexpected difficulty of comparing AlphaStar to humans, published by Richard Korzekwa on LessWrong. This is crossposted from the AI Impacts blog. Artificial intelligence defeated a pair of professional Starcraft II players for the first time in December 2018. Although this was generally regarded as an impressive achievement, it quickly became clear that not everybody was satisfied with how the AI agent, called AlphaStar, interacted with the game, or how its creator, DeepMind, presented it. Many observers complained that, in spite of DeepMind's claims that it performed at similar speeds to humans, AlphaStar was able to control the game with greater speed and accuracy than any human, and that this was the reason why it prevailed. Although I think this story is mostly correct, I think it is harder than it looks to compare AlphaStar's interaction with the game to that of humans, and to determine to what extent this mattered for the outcome of the matches. Merely comparing raw numbers for actions taken per minute (the usual metric for a player's speed) does not tell the whole story, and appropriately taking into account mouse accuracy, the differences between combat actions and non-combat actions, and the control of the game's "camera" turns out to be quite difficult. Here, I begin with an overview of Starcraft II as a platform for AI research, a timeline of events leading up to AlphaStar's success, and a brief description of how AlphaStar works. Next, I explain why measuring performance in Starcraft II is hard, show some analysis on the speed of both human and AI players, and offer some preliminary conclusions on how AlphaStar's speed compares to humans. After this, I discuss the differences in how humans and AlphaStar "see" the game and the impact this has on performance. 
Finally, I give an update on DeepMind's current experiments with Starcraft II and explain why I expect we will encounter similar difficulties when comparing human and AI performance in the future. Why Starcraft is a Target for AI Research Starcraft II has been a target for AI for several years, and some readers will recall that Starcraft II appeared on our 2016 expert survey. But there are many games and many AIs that play them, so it may not be obvious why Starcraft II is a target for research or why it is of interest to those of us that are trying to understand what is happening with AI. For the most part, Starcraft II was chosen because it is popular, and it is difficult for AI. Starcraft II is a real time strategy game, and like similar games, it requires a variety of tasks: harvesting resources, constructing bases, researching technology, building armies, and attempting to destroy their opponent's base are all part of the game. Playing it well requires balancing attention between many things at once: planning ahead, ensuring that one's units1 are good counters for the enemy's units, predicting opponents' moves, and changing plans in response to new information. There are other aspects that make it difficult for AI in particular: it has imperfect information2, an extremely large action space, and takes place in real time. When humans play, they engage in long term planning, making the best use of their limited capacity for attention, and crafting ploys to deceive the other players. The game's popularity is important because it makes it a good source of extremely high human talent and increases the number of people that will intuitively understand how difficult the task is for a computer. Additionally, as a game that is designed to be suitable for high-level competition, the game is carefully balanced so that competition is fair, does not favor just one strategy3, and does not rely too heavily on luck. 
Timeline of Events To put AlphaStar's performance in context, it helps to understand the ti...
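The excerpt's point that raw actions-per-minute numbers "do not tell the whole story" can be made concrete with a toy calculation. The sketch below is a hypothetical illustration, not AI Impacts' actual analysis: it contrasts a player's average APM over a game with the peak rate sustained in a short burst, which is where machine speed and precision matter most.

```python
def mean_apm(timestamps):
    """Average actions per minute over the whole game."""
    ts = sorted(timestamps)
    duration = ts[-1] - ts[0]  # seconds between first and last action
    return len(ts) * 60.0 / duration

def peak_apm(timestamps, window=5.0):
    """Highest actions-per-minute rate within any `window`-second span."""
    ts = sorted(timestamps)
    best = lo = 0
    for hi in range(len(ts)):
        # Slide the window's left edge forward until it spans <= `window` seconds.
        while ts[hi] - ts[lo] > window:
            lo += 1
        best = max(best, hi - lo + 1)
    return best * 60.0 / window

# A steady clicker: one action every 0.25 s for a minute.
steady = [i * 0.25 for i in range(240)]
# A bursty player: 100 actions in two seconds of combat, then a slow macro phase.
bursty = [i * 0.02 for i in range(100)] + [2.0 + i for i in range(140)]

print(mean_apm(steady), peak_apm(steady))  # mean and peak are close
print(mean_apm(bursty), peak_apm(bursty))  # peak is an order of magnitude above mean
```

Two players can report similar headline APM while one of them, like AlphaStar in its combat bursts, briefly acts far faster than any sustained human rate.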

The Nonlinear Library: LessWrong Top Posts
Fun with +12 OOMs of Compute by Daniel Kokotajlo

The Nonlinear Library: LessWrong Top Posts

Play Episode Listen Later Dec 11, 2021 29:38


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Fun with +12 OOMs of Compute, published by Daniel Kokotajlo on LessWrong. Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. Or: Big Timelines Crux Operationalized What fun things could one build with +12 orders of magnitude of compute? By 'fun' I mean 'powerful.' This hypothetical is highly relevant to AI timelines, for reasons I'll explain later. Summary (Spoilers): I describe a hypothetical scenario that concretizes the question "what could be built with 2020's algorithms/ideas/etc. but a trillion times more compute?" Then I give some answers to that question. Then I ask: How likely is it that some sort of TAI would happen in this scenario? This second question is a useful operationalization of the (IMO) most important, most-commonly-discussed timelines crux: "Can we get TAI just by throwing more compute at the problem?" I consider this operationalization to be the main contribution of this post; it directly plugs into Ajeya's timelines model and is quantitatively more cruxy than anything else I know of. The secondary contribution of this post is my set of answers to the first question: They serve as intuition pumps for my answer to the second, which strongly supports my views on timelines. The hypothetical In 2016 the Compute Fairy visits Earth and bestows a blessing: Computers are magically 12 orders of magnitude faster! Over the next five years, what happens? The Deep Learning AI Boom still happens, only much crazier: Instead of making AlphaStar for 10^23 floating point operations, DeepMind makes something for 10^35. Instead of making GPT-3 for 10^23 FLOPs, OpenAI makes something for 10^35. Instead of industry and academia making a cornucopia of things for 10^20 FLOPs or so, they make a cornucopia of things for 10^32 FLOPs or so. 
When random grad students and hackers spin up neural nets on their laptops, they have a trillion times more compute to work with. [EDIT: Also assume magic +12 OOMs of memory, bandwidth, etc. All the ingredients of compute.] For context on how big a deal +12 OOMs is, consider the graph below, from ARK. It's measuring petaflop-days, which are about 10^20 FLOP each. So 10^35 FLOP is 1e+15 on this graph. GPT-3 and AlphaStar are not on this graph, but if they were they would be in the very top-right corner. Question One: In this hypothetical, what sorts of things could AI projects build? I encourage you to stop reading, set a five-minute timer, and think about fun things that could be built in this scenario. I'd love it if you wrote up your answers in the comments! My tentative answers: Below are my answers, listed in rough order of how ‘fun' they seem to me. I'm not an AI scientist so I expect my answers to overestimate what could be done in some ways, and underestimate in other ways. Imagine that each entry is the best version of itself, since it is built by experts (who have experience with smaller-scale versions) rather than by me. OmegaStar: In our timeline, it cost about 10^23 FLOP to train AlphaStar. (OpenAI Five, which is in some ways more impressive, took less!) Let's make OmegaStar like AlphaStar only +7 OOMs bigger: the size of a human brain.[1] [EDIT: You may be surprised to learn, as I was, that AlphaStar has about 10% as many parameters as a honeybee has synapses! Playing against it is like playing against a tiny game-playing insect.] Larger models seem to take less data to reach the same level of performance, so it would probably take at most 10^30 FLOP to reach the same level of Starcraft performance as AlphaStar, and indeed we should expect it to be qualitatively better.[2] So let's do that, but also train it on lots of other games too.[3] There are 30,000 games in the Steam Library. 
We train OmegaStar long enough that it has as much time on each game as AlphaStar had on Starcraft. Wi...
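The unit conversions in the excerpt are quick to verify. The snippet below only restates numbers quoted above (petaflop-days of roughly 10^20 FLOP each, the 10^35 FLOP budget, AlphaStar's ~10^23 FLOP, and the ~10^30 FLOP upper estimate for "OmegaStar"); it is an illustrative check, not part of the original post:

```python
import math

PETAFLOP_DAY = 1e15 * 86_400  # one petaflop/s for a day ≈ 8.64e19 FLOP, roughly 10^20

budget_flop = 1e35  # the +12 OOMs hypothetical training budget
# 10^35 FLOP expressed in petaflop-days lands at ~1e+15, the top-right corner of ARK's graph.
print(f"budget in petaflop-days: {budget_flop / PETAFLOP_DAY:.1e}")

alphastar_flop = 1e23  # rough training cost of the real AlphaStar
omegastar_flop = alphastar_flop * 10**7  # the post's ~10^30 FLOP upper estimate for OmegaStar
# Even OmegaStar would leave five orders of magnitude of headroom in the budget:
print(f"headroom: ~10^{math.log10(budget_flop / omegastar_flop):.0f}x")
```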

The Nonlinear Library: Alignment Forum Top Posts
Developmental Stages of GPTs by orthonormal

The Nonlinear Library: Alignment Forum Top Posts

Play Episode Listen Later Dec 10, 2021 11:13


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Developmental Stages of GPTs, published by orthonormal on the AI Alignment Forum. Epistemic Status: I only know as much as anyone else in my reference class (I build ML models, I can grok the GPT papers, and I don't work for OpenAI or a similar lab). But I think my thesis is original. Related: Gwern on GPT-3 For the last several years, I've gone around saying that I'm worried about transformative AI, an AI capable of making an Industrial Revolution sized impact (the concept is agnostic on whether it has to be AGI or self-improving), because I think we might be one or two cognitive breakthroughs away from building one. GPT-3 has made me move up my timelines, because it makes me think we might need zero more cognitive breakthroughs, just more refinement / efficiency / computing power: basically, GPT-6 or GPT-7 might do it. My reason for thinking this is comparing GPT-3 to GPT-2, and reflecting on what the differences say about the "missing pieces" for transformative AI. My Thesis: The difference between GPT-2 and GPT-3 has made me suspect that there's a legitimate comparison to be made between the scale of a network architecture like the GPTs, and some analogue of "developmental stages" of the resulting network. Furthermore, it's plausible to me that the functions needed to be a transformative AI are covered by a moderate number of such developmental stages, without requiring additional structure. Thus GPT-N would be a transformative AI, for some not-too-large N, and we need to redouble our efforts on ways to align such AIs. The thesis doesn't strongly imply that we'll reach transformative AI via GPT-N especially soon; I have wide uncertainty, even given the thesis, about how large we should expect N to be, and whether the scaling of training and of computation slows down progress before then. 
But it's also plausible to me now that the timeline is only a few years, and that no fundamentally different approach will succeed before then. And that scares me. Architecture and Scaling GPT, GPT-2, and GPT-3 use nearly the same architecture; each paper says as much, with a sentence or two about minor improvements to the individual transformers. Model size (and the amount of training computation) is really the only difference. GPT took 1 petaflop/s-day to train 117M parameters, GPT-2 took 10 petaflop/s-days to train 1.5B parameters, and the largest version of GPT-3 took 3,000 petaflop/s-days to train 175B parameters. By contrast, AlphaStar seems to have taken about 30,000 petaflop/s-days of training in mid-2019, so the pace of AI research computing power projects that there should be about 10x that today. The upshot is that OpenAI may not be able to afford it, but if Google really wanted to make GPT-4 this year, they could afford to do so. Analogues to Developmental Stages There are all sorts of (more or less well-defined) developmental stages for human beings: image tracking, object permanence, vocabulary and grammar, theory of mind, size and volume, emotional awareness, executive functioning, et cetera. I was first reminded of developmental stages a few years ago, when I saw the layers of abstraction generated in this feature visualization tool for GoogLeNet. We don't have feature visualization for language models, but we do have generative outputs. And as you scale up an architecture like GPT, you see higher levels of abstraction. Grammar gets mastered, then content (removing absurd but grammatical responses), then tone (first rough genre, then spookily accurate authorial voice). Topic coherence is mastered first on the phrase level, then the sentence level, then the paragraph level. So too with narrative flow. Gwern's poetry experiments (GPT-2, GPT-3) are good examples. 
GPT-2 could more or less continue the meter of a poem and use words that fit the existing theme, but even...

The Nonlinear Library: Alignment Forum Top Posts
Fun with +12 OOMs of Compute by Daniel Kokotajlo

The Nonlinear Library: Alignment Forum Top Posts

Play Episode Listen Later Dec 10, 2021 29:34


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Fun with +12 OOMs of Compute, published by Daniel Kokotajlo on the AI Alignment Forum. Or: Big Timelines Crux Operationalized What fun things could one build with +12 orders of magnitude of compute? By ‘fun' I mean ‘powerful.' This hypothetical is highly relevant to AI timelines, for reasons I'll explain later. Summary (Spoilers): I describe a hypothetical scenario that concretizes the question “what could be built with 2020's algorithms/ideas/etc. but a trillion times more compute?” Then I give some answers to that question. Then I ask: How likely is it that some sort of TAI would happen in this scenario? This second question is a useful operationalization of the (IMO) most important, most-commonly-discussed timelines crux: “Can we get TAI just by throwing more compute at the problem?” I consider this operationalization to be the main contribution of this post; it directly plugs into Ajeya's timelines model and is quantitatively more cruxy than anything else I know of. The secondary contribution of this post is my set of answers to the first question: They serve as intuition pumps for my answer to the second, which strongly supports my views on timelines. The hypothetical In 2016 the Compute Fairy visits Earth and bestows a blessing: Computers are magically 12 orders of magnitude faster! Over the next five years, what happens? The Deep Learning AI Boom still happens, only much crazier: Instead of making AlphaStar for 10^23 floating point operations, DeepMind makes something for 10^35. Instead of making GPT-3 for 10^23 FLOPs, OpenAI makes something for 10^35. Instead of industry and academia making a cornucopia of things for 10^20 FLOPs or so, they make a cornucopia of things for 10^32 FLOPs or so. 
When random grad students and hackers spin up neural nets on their laptops, they have a trillion times more compute to work with. [EDIT: Also assume magic +12 OOMs of memory, bandwidth, etc. All the ingredients of compute.] For context on how big a deal +12 OOMs is, consider the graph below, from ARK. It's measuring petaflop-days, which are about 10^20 FLOP each. So 10^35 FLOP is 1e+15 on this graph. GPT-3 and AlphaStar are not on this graph, but if they were they would be in the very top-right corner. Question One: In this hypothetical, what sorts of things could AI projects build? I encourage you to stop reading, set a five-minute timer, and think about fun things that could be built in this scenario. I'd love it if you wrote up your answers in the comments! My tentative answers: Below are my answers, listed in rough order of how ‘fun' they seem to me. I'm not an AI scientist so I expect my answers to overestimate what could be done in some ways, and underestimate in other ways. Imagine that each entry is the best version of itself, since it is built by experts (who have experience with smaller-scale versions) rather than by me. OmegaStar: In our timeline, it cost about 10^23 FLOP to train AlphaStar. (OpenAI Five, which is in some ways more impressive, took less!) Let's make OmegaStar like AlphaStar only +7 OOMs bigger: the size of a human brain.[1] [EDIT: You may be surprised to learn, as I was, that AlphaStar has about 10% as many parameters as a honeybee has synapses! Playing against it is like playing against a tiny game-playing insect.] Larger models seem to take less data to reach the same level of performance, so it would probably take at most 10^30 FLOP to reach the same level of Starcraft performance as AlphaStar, and indeed we should expect it to be qualitatively better.[2] So let's do that, but also train it on lots of other games too.[3] There are 30,000 games in the Steam Library. 
We train OmegaStar long enough that it has as much time on each game as AlphaStar had on Starcraft. With a brain so big, maybe it'll start to do some transfer learning, acquiring g...

Retirement Inside Out
Ep 34: State of the Economy & Investing Outlook with Tony Parish, CFA®, CQF

Retirement Inside Out

Play Episode Listen Later Aug 20, 2021 26:22


Our guest today is Tony Parish, CFA®, CQF, the Chief Investment Officer at Alphastar Capital Management. As you can guess, he's keenly tapped into what's happening in all corners of finance, and we've invited him on to share some of his insight. During this conversation, he provides a great perspective on the economy during the first half of 2021, gives his thoughts on Bitcoin, and shares some insight into precious metals. It's a thorough breakdown of where things stand right now that we feel can benefit both financial professionals and retail investors.   About our guest: https://www.alphastarcm.com/who-we-are/#team    Key Points: 2:32 – His time with Alphastar 3:05 – RIAs 5:41 – His role as CIO 8:28 – Thoughts on first half of 2021   11:14 – Indicators he's watching 14:08 – Perspective on employment right now   20:34 – Bitcoin and cryptocurrency 23:11 – Precious metals and safer investments   Contact Retirement Inside Out: Email: tom.lamendola@figmarketing.com Phone: 800-527-1155 Web: figmarketing.com

ThePylonShow
Behind the scenes at ESL

ThePylonShow

Play Episode Listen Later Jul 18, 2021 137:31


We sat down with Heyoka from ESL to talk about the business side of making money in esports. 01.  00:01:39  Welcome back / Show road map / Guest introductions 02.  00:06:58  What you do to make SC events happen 03.  00:20:26  Philosophy approaches in esports vs tech companies? 04.  00:24:54  How the F**k does an esports company make money? 05.  00:39:52  Why do we see orgs with teams in different games? 06.  00:44:47  What do long term viability and support goals look like? 07.  00:53:23  Notable past experiences that have shaped your careers or organizations Part 1 08.  00:59:12  Success of StarCraft within ESL / TLMC and the map design process 09.  01:04:02  Notable past experiences that have shaped your careers or organizations Part 2 10.  01:20:05  Patron Q&A 11.  01:21:15  Is it fair to tap into the powers of the dark side to win games in tournaments? 12.  01:22:12  What is the strangest product to have a GSL/ASL commercial? 13.  01:24:13  You are invited to a Halloween party. What will your costume be? 14.  01:26:06  What do you want from the current contest and ladder maps it provides? 15.  01:29:53  What can we the community do now to help SC2 beyond the contract in 2023? 16.  01:35:54  What do you think MaxPax looks like? / What was the impact of AlphaStar? 17.  01:42:53  Final thoughts / Wrap up 18.  01:46:15  Cobra's update on the State of the Pylon Show 19.  01:54:48  This Week In StarCraft / Thanks for watching - Special thanks to the Pylon Show Team: Producer: https://twitter.com/CobraVe7nom7 Shownotes - Alisaunder: https://twitter.com/Daisemiin TWiSC Presenter - https://twitter.com/CreightonOlsen Timestamps: https://twitter.com/AllelujahTV Intro VFX Artist: https://twitter.com/BodyVii WebDev: NeosteelEnthusiast Asst. podcast editor: https://twitter.com/Kousta29 Track: Koven - Never Have I Felt This [NCS Release] Music provided by NoCopyrightSounds. Watch: https://youtu.be/-7fuHEEmEjs Stream: http://ncs.io/NeverHaveIFeltThisYO

TalkRL: The Reinforcement Learning Podcast

Kai Arulkumaran on AlphaStar and Evolutionary Computation, Domain Randomisation, Upside-Down Reinforcement Learning, Araya, NNAISENSE, and more!

TalkRL: The Reinforcement Learning Podcast

Roman Ring discusses the Research Engineer role at DeepMind, StarCraft II, AlphaStar, his bachelor's thesis, JAX, Julia, IMPALA and more!

Börsenradio to go Marktbericht
Market Report, Wed. Dec. 9, 2020 - Lockdown, Brexit and ECB meeting ahead; records for the MDAX and Wall Street

Börsenradio to go Marktbericht

Play Episode Listen Later Dec 9, 2020 15:36


That the DAX makes no big moves on Wednesday is almost logical: on one side, politicians' calls for a hard lockdown are growing louder, led by Chancellor Merkel; on the other, the ongoing Brexit negotiations have still produced no results, and the ECB meeting is due on Thursday. Hopes of positive signals from there and the momentum of the latest vaccine news nevertheless pushed the DAX into positive territory: up 0.5% to 13,340 points. The MDAX set a new record, climbing above the 29,800-point mark for the first time in its history. Vienna's ATX gained 1.3% to 2,670 points. On Wall Street, the Dow Jones and S&P 500 rose to new records before profit-taking set in. The biggest DAX winner was Delivery Hero with a clear +6.7%, helped by the very successful IPO of US food-delivery firm DoorDash; next came Covestro at +5% after raising its full-year targets, and BASF at +3% on rumors that it plans to sell US assets worth 400 million US dollars. The biggest DAX losers were the housing groups Vonovia at -1.1% and Deutsche Wohnen at -1.2%; Deutsche Bank brought up the rear at -1.6%. Hear Heiko Thieme on possibly too-good market sentiment and the DoorDash example, fund adviser Felix Gode of Alphastar reviewing the fund's performance, Lars Brandau of the German Derivatives Association on the results of its annual survey (most investors are in the black), plus Shop Apotheke executive Stefan Feltens and Traumhaus AG executive Otfried Sinner.

Börsenradio to go Marktbericht
Market Report, Mon. Sep. 21, 2020 - DAX falls sharply to start the week; Deutsche Bank dragged down by the FinCEN money-laundering leaks

Börsenradio to go Marktbericht

Play Episode Listen Later Sep 21, 2020 13:31


The week gets off to a clearly bad start: the DAX loses as much as 4.5% at times, sinking well below the 13,000-point mark. There are plenty of reasons: a correction was widely expected and overdue, and there is growing talk of returning coronavirus fears. The DAX closed at 12,542 points, down 4.4%. Vienna's ATX lost 3.7% to 2,125 points, and Wall Street also fell sharply at the open. Today's DAX winners are quickly listed: there are none. All 30 DAX stocks closed Monday in the red. The biggest losers were cyclical, economically sensitive names such as Covestro, BASF, HeidelbergCement and MTU. Deutsche Bank brought up the rear after the FinCEN money-laundering leaks; the entire banking sector was dragged down sharply, with Deutsche Bank losing more than 8%. This time, hear fund adviser Felix Gode of Alphastar on possible opportunities from the pullback, capital-markets analyst Folker Hellmeyer of Solvecon on the FinCEN leaks, Frank Benz of Benz AG on the new world of investment strategy, and Helge Rechberger of Raiffeisen Research on central-bank policy.

Voltec Tech Talk
Google DeepMind

Voltec Tech Talk

Play Episode Listen Later Sep 11, 2020 32:18


The various projects of Google DeepMind, including AlphaGo, AlphaFold, and AlphaStar

Entre Chaves
Entre Chaves #11 - The machine learning revolution

Entre Chaves

Play Episode Listen Later Sep 8, 2020 38:54


Can machines learn? They're already winning games like Dota and StarCraft, and even programming on their own! In today's episode we talk about machine learning (or, in good Portuguese, aprendizagem de máquina). Participants: @thechagas, @gabrielbckr, @magaluth, Marco Borges, Karen Stefany Martins, Professor Adriano Vilela Barbosa

EDC8 podcast
Sensors and Video Games

EDC8 podcast

Play Episode Listen Later Jul 6, 2020 86:22


Like and subscribe to our channel if you enjoy the content, and turn on notifications so you know when we publish a new video. EDC8 Facebook: https://www.facebook.com/EDC8Podcast Karla's LinkedIn: https://www.linkedin.com/in/karla-margarita-medrano-martínez-40223935/ Enrique Mendoza's LinkedIn: https://www.linkedin.com/in/enrique-mendoza-martinez-67405a172 Recommended videos: AlphaStar: https://www.youtube.com/watch?v=M3nn3K7u1R4 Deepnude: https://www.youtube.com/watch?v=ysEjAqnHp64 Music: RFM - Royalty Free Music (https://www.patreon.com/rfmofficialpage) --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app

Lex Fridman Podcast
#86 – David Silver: AlphaGo, AlphaZero, and Deep Reinforcement Learning

Lex Fridman Podcast

Play Episode Listen Later Apr 3, 2020 108:28


David Silver leads the reinforcement learning research group at DeepMind. He was lead researcher on AlphaGo and AlphaZero, co-lead on AlphaStar and MuZero, and has done a lot of important work in reinforcement learning. Support this podcast by signing up with these sponsors: – MasterClass: https://masterclass.com/lex – Cash App – use code "LexPodcast" and download: – Cash App (App Store): https://apple.co/2sPrUHe – Cash App (Google Play): https://bit.ly/2MlvP5w EPISODE LINKS: Reinforcement learning (book): https://amzn.to/2Jwp5zG This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, and Facebook.

Software 2.0
AlphaStar - Reinforcement Learning - Oriol Vinyals - Part 1

Software 2.0

Play Episode Listen Later Dec 29, 2019 21:57


Software 2.0 is a podcast about artificial intelligence. We interview Oriol Vinyals, one of the creators of AlphaStar, an AI capable of playing the video game StarCraft. In this first part of the interview Oriol tells us about his career and how AlphaStar has to be constrained so that the battle against its human rivals is fairer... despite which, we humans still don't come out ahead.

Software 2.0
AlphaStar - Aprendizaje por refuerzo - Oriol Vinyals - Parte 3 (Última)

Software 2.0

Play Episode Listen Later Dec 29, 2019 37:49


Software 2.0 is a podcast about Artificial Intelligence. This is the final part of the interview with Oriol Vinyals, one of the world's leading researchers in artificial intelligence. In the first two episodes, Oriol told us about his career and the fundamentals of AlphaStar. In this episode, he tells us more about AlphaStar, about open source at Google, about the advances in neural networks over recent years, and much more.

Software 2.0
AlphaStar - Aprendizaje por refuerzo - Oriol Vinyals - Parte 2

Software 2.0

Play Episode Listen Later Dec 29, 2019 28:28


Software 2.0 is a podcast about Artificial Intelligence. We interview Oriol Vinyals, one of the creators of AlphaStar, an AI capable of playing the video game StarCraft. In this second part we talk about the creativity of AlphaStar's novel strategies, how AlphaStar's existence will affect StarCraft, and the limits on the efficiency of reinforcement learning.

Biznes Myśli
BM71: Podsumowanie roku 2019

Biznes Myśli

Play Episode Listen Later Dec 23, 2019 65:05


The holiday season is a good time to sum up the year, think through next steps, and prepare properly for the new year. You'll hear the most interesting takeaways from the Artificial Intelligence Index 2019 report: the boom in AI research, investment in AI, and education. We also cover Pluribus (the successor to Libratus), AlphaStar, and more, plus a summary of the year at BiznesMyśli and DataWorkshop. https://biznesmysli.pl/71


Podcasts do Portal Deviante
A rápida ascensão do AlphaStar – 12 Nixian (Spin #765 – 15/12/19)

Podcasts do Portal Deviante

Play Episode Listen Later Dec 15, 2019 19:26


Welcome to the seven hundred and sixty-fifth Spin de Notícias, your daily spin of science news... at a subatomic scale. In this Spin de Notícias we'll be talking about... Mathematics and Physics! *This episode, like so many upcoming projects, was only made possible by the SciCast Patronage. If you'd like more episodes like this, contribute!*


ThePylonShow
TPS EP.72 LIVE from Blizzcon 2019 AlphaStar News & more

ThePylonShow

Play Episode Listen Later Dec 3, 2019 60:13


Timestamped Topics: 01. 0:01 - Welcome to Ep.#72 LIVE @ #Blizzcon19 - 02. 01:44 - Carbot Animations Story and Ann. of "StarCrafts" Series Final - 03. 09:57 - SC2 Announcements - 04. 21:27 - AlphaStar: News & Updates on AI Research Progress - 05. 31:02 - Cool moments from behind the scenes replays. - 06. 54:01 - AlphaStar at Blizzcon 2019 - 07. 54:50 - Thanks Everyone - 08. 55:03 - Geoff "iNcontroL" Robinson Tribute - En Taro iNcontroL - Special thanks to everyone who made this live episode possible. - https://twitter.com/CarBotAnimation - https://twitter.com/LiquidTLO - https://twitter.com/OriolVinyalsML - https://twitter.com/Solid_monk - https://twitter.com/DeepMindAI - https://twitter.com/Artosis - https://twitter.com/CobraVe7nom7 - https://twitter.com/MattShermanSC & the countless other people from the Pylon, DeepMind and Blizzard teams who have assisted. - - Power the Pylon: https://www.patreon.com/ThePylonShow - - Social - - Discord: https://discord.gg/ga5umfc Twitter: https://twitter.com/ThePylonShow Instagram: https://www.instagram.com/thepylonshow/ Facebook: https://www.facebook.com/ThePylonShow/ - Visit: https://ThePylonShow.com for Podcasts, VODs, Q&A submission link, countdown timer, and more. #ThePylonShow Live (most) Wednesdays @ 5:45pm PT on https://www.twitch.tv/Artosis/


AI with AI
When You Wish Upon an AlphaStar

AI with AI

Play Episode Listen Later Nov 22, 2019 35:10


In the news, Andy and Dave discuss the interim report from the National Security Commission on AI. DARPA’s new OFFensive Swarm-Enabled Tactics (OFFSET) program takes a look at swarm behavior. And DARPA picks the teams for its virtual Air Combat Competition (ACE). In research, DeepMind’s AlphaStar outperforms 99.8% of ranked human players at StarCraft II. A report on Mosaic Warfare looks at restoring the military competitiveness of US forces. Daniel Egel and Eric Robinson pen the latest response to the NSCAI call for ideas, examining the likely evolution, not revolution, of AI in Irregular Warfare. The Promise of AI: Reckoning and Judgment, by Brian Cantwell Smith, rounds out Andy’s pick for a trio of recent, interesting books on AI, taking a philosophical look at the topic. And mosaic warfare and multi-domain battle make the video of the week. Click here to visit our website and explore the links mentioned in the episode.

EdTech Situation Room by @techsavvyteach & @wfryer
EdTech Situation Room Episode 155

EdTech Situation Room by @techsavvyteach & @wfryer

Play Episode Listen Later Nov 15, 2019 65:20


Welcome to episode 155 of the EdTech Situation Room from November 13, 2019, where technology news meets educational analysis. This week Jason Neiffer (@techsavvyteach) and Wesley Fryer (@wfryer) discussed YouTube's newly announced terms of service to apparently pave the way for more channel / account takedowns, the latest 2018-19 report "Why Rural Matters," and the importance of addressing the rural/urban political divides which separate many voters in western states like Montana and Oklahoma. The "Long Tail" and the wonderful "Craft With Me" YouTube channel of Gayle Agostinelli were mentioned. The new PBS Frontline special "In the Age of AI," DeepMind AI and its triumph (AlphaStar) over StarCraft 2 world-class players, Android users who love the Apple Watch, and Apple's ongoing focus / market differentiation on privacy were discussed. Additional topics included the story of Carson King, College GameDay in Iowa, Venmo, and the raising of $1 million for a local children's hospital overshadowed by racist tweets from the past, as well as articles about the algorithmic darkness of YouTube. Google's forthcoming inclusion of "end of life" date information in ChromeOS settings, Jason's rebuttal to Phil Schiller's (of Apple) public criticisms of Chromebooks, and security articles including discussion of passwords and "security fatigue" and the importance of using a unique password for your Google account were also highlighted. Disinformation research from NPR's Fresh Air program, and resources highlighting both our "age of information disorder" (via @firstdraftnews) and the weaponization of Twitter to counter critics of Saudi Arabia were also discussed. Geeks of the Week included The Noun Project, Andrew Marantz's new book "Antisocial: Online Extremists, Techno-Utopians, and the Hijacking of the American Conversation," First Draft News' Informational Toolbox on Information Disorder, and an alarming video of MIT's Mini-Cheetahs rounded out the show.
Our show was live streamed and archived simultaneously on YouTube Live as well as our Facebook Live page via StreamYard.com. Please follow us on Twitter @edtechSR for updates, and join us LIVE on Wednesday nights if you can (normally) at 10 pm Eastern / 9 pm Central / 8 pm Mountain / 7 pm Pacific or 3 am UTC. All shownotes are available on http://edtechSR.com/links.

FriendlyFire Podcast
FriendlyFire Podcast - S05E26 - "El Halloween que no fue"

FriendlyFire Podcast

Play Episode Listen Later Nov 4, 2019 122:23


We return to live shows with the Halloween special that wasn't, but with an episode packed with reviews and news. News of the week: 1 - The witch hunt has begun against players who support the new Fallout 76 service 2 - Death Stranding comes to PC in 2020 3 - EA returns to Steam, with Jedi: Fallen Order topping the list 4 - ACTIVISION BLIZZARD TENCENT BLOCK, SPONSORED BY FONTI 5 - The update with improvements for Bloodstained arrives on Nintendo Switch 6 - The AlphaStar artificial intelligence earns the rank of StarCraft II Grandmaster 7 - Red Dead Redemption 2 for PC gets a launch trailer 8 - Are the creators of The Outer Worlds sending a message to Bethesda? 9 - EA sees potential in Titanfall 3, but... REVIEWS: DIGIMON STORY CYBER SLEUTH COMPLETE EDITION Read our review: http://friendlyfirepodcast.blogspot.com/2019/10/digimon-story-cyber-sleuth-complete.html#more DELIVER US THE MOON

Blended Podcasts
Talos Talks Shit EP 10 - Google DeepMind

Blended Podcasts

Play Episode Listen Later Nov 3, 2019 24:35


The second episode examining Google DeepMind. A deep look at AlphaStar and why it's so amazing.

Focus Wetenschap
Computerprogramma AlphaStar domineert strategisch videospel

Focus Wetenschap

Play Episode Listen Later Nov 1, 2019 7:17


Some twenty-five years ago we didn't think it possible that a computer program could beat a chess grandmaster or a professional Go player. But developments move fast: today there is a program that even managed to reach Grandmaster rank in a far more complex video game called StarCraft II. The program, AlphaStar, was able to beat almost anyone, according to the paper that appeared last night in Nature. We spoke about it with Frank van Caspel, philosopher of cognition and referee at international StarCraft events.

Coffee Break: Señal y Ruido
Ep239: Marte; Bacterias Espaciales; AlphaStar de Deepmind; Materia Oscura; IA y Olfato; Exocinturones de Clarke

Coffee Break: Señal y Ruido

Play Episode Listen Later Oct 31, 2019 167:02


The weekly round table in which we review the latest science news. In today's episode: Blade Runner (min 4:00); InSight: new problems on Mars (8:40); Bacteria and fungi on the ISS (16:10); AlphaStar, the Google DeepMind system that plays StarCraft II better than humans (38:00); Google Brain and its deep neural networks for understanding smell (57:00); The dark-matter "cusps" problem: a simulation failure? (1:34:40); A new paper on Clarke exobelts (2:00:00); Listener messages and audience questions (2:30:00). In the photo, top to bottom and left to right: Francis Villatoro, Carlos Westendorp, Héctor Socas. All opinions expressed during the discussion represent only the views of those who express them... and sometimes not even that. CB:SyR is a collaboration between the Museo de la Ciencia y el Cosmos de Tenerife and the Research Area and UC3 of the Instituto de Astrofísica de Canarias.

Linux Headlines
2019-10-31

Linux Headlines

Play Episode Listen Later Oct 31, 2019 2:59


SUSE comes to Oracle Cloud, it's time to move on from openSUSE LEAP 15.0, a new home for Vulkan code samples, and Google's AI takes on StarCraft II.

AI Buzz
Faster MRIs with AI, Tesla self-driving, Cortex AI, and AlphaStar!

AI Buzz

Play Episode Listen Later Oct 31, 2019 19:52


In this episode, I will discuss how Facebook and NYU are teaming up to speed up MRI scans, how the price to upgrade Teslas to full self-driving will increase on November 1st, what Cortex does differently compared to other cryptocurrencies, and how the new AlphaStar's performance sets a new bar for machine learning.


DeepMind: The Podcast
Life is like a game

DeepMind: The Podcast

Play Episode Listen Later Aug 20, 2019 26:53


Video games have become a favourite tool for AI researchers to test the abilities of their systems. In this episode, Hannah sits down to play StarCraft II - a challenging video game that requires players to control the onscreen action with as many as 800 clicks a minute. She is guided by Oriol Vinyals, an ex-professional StarCraft player and research scientist at DeepMind, who explains how the program AlphaStar learnt to play the game and beat a top professional player. Elsewhere, she explores systems that are learning to cooperate in a digital version of the playground favourite ‘Capture the Flag’. If you have a question or feedback on the series, message us on Twitter (@DeepMindAI, using the hashtag #DMpodcast) or email us at podcast@deepmind.com. Further reading The Economist: Why AI researchers like video games DeepMind blogs: Capture the Flag and Alphastar Professional StarCraft II player MaNa gives his impressions of AlphaStar and DeepMind Open AI’s work on Dota 2 The New York Times: DeepMind can now beat us at multiplayer games, too Royal Society: Machine Learning resources DeepMind: The Inside Story of AlphaStar Andrej Karpathy: Deep Reinforcement Learning: Pong from Pixels Interviewees: Research scientists Max Jaderberg and Raia Hadsell; Lead researchers David Silver and Oriol Vinyals, and Director of Research Koray Kavukcuoglu. Credits: Presenter: Hannah Fry Editor: David Prest Senior Producer: Louisa Field Producers: Amy Racs, Dan Hardoon Binaural Sound: Lucinda Mason-Brown Music composition: Eleni Shaw (with help from Sander Dieleman and WaveNet) Commissioned by DeepMind

ThePylonShow
ThePylonShow EP.56

ThePylonShow

Play Episode Listen Later Jul 12, 2019 170:57


We had some great balance discussions on the community update, Alphastar loose on the ladder battling humans and a lot more. #ThePylonShow Timestamps:  01. 00:00:44  Guest Introduction / Show Road Map 02. 00:05:51  Proposed patches in the community updates / Thoughts from the pros 03. 00:54:06  Thoughts on the voting system NationWars 04. 01:03:28  Alphastar: Loose on the Ladder 05. 01:11:28  WCS Summer 06. 01:16:34  HomeStory Cup XIX Results 07. 01:25:44  This Week in Starcraft II 08. 01:41:11  This Week in Brood War / This Week in Awesome  09. 01:47:26  Cobra's Clips 10. 01:53:19  GSL Season 2 11. 01:54:55  ASL Season 2 12. 01:57:27  Patreon Q & A   Shownotes: http://bit.ly/ep56TPS - Sponsors: https://afktea.com - Code Pylon & https://matcherino.com/thepylonshow - This weeks code: "Meow"   Visit our website for links to Discord, Podcasts, Social Platforms, The Q&A submission link, countdown timer, and more: https://ThePylonShow.com    Live Wednesdays @ 5:45pm PT on twitch.tv/iNcontroLTV   Hosts:  https://twitter.com/Artosis https://www.twitch.tv/Artosis https://twitter.com/iNcontroLTV https://www.twitch.tv/iNcontroLTV   Special thanks to the Pylon Show Team: Producer: https://twitter.com/CobraVe7nom7 Shownotes: Alisaunder - https://twitter.com/Daisemiin WebDevs: Diaxis & NeosteelEnthusiast VFX Artist: https://twitter.com/Bodypop_ Merch Artist: Lich&Famous  Timestamps: https://twitter.com/AllelujahTV Animator: https://twitter.com/DarkFirze

The Humans of Ai
Steven Brown, on Deepmind's PySC2 Starcraft Interface, Blizzcon 2017 & Alphastar.

The Humans of Ai

Play Episode Listen Later Jul 4, 2019 106:50


Steven Brown is one of the primary contributors to the PySC2 interface - the API that DeepMind used to defeat humans in StarCraft 2 in January of this year (2019). In this episode we discuss how Steven got involved in the StarCraft 2 AI community, what it was like to attend the original announcement of the Blizzard-DeepMind collaboration in 2017, his contribution to the interface - and a little about what might be on the horizon for the next phase of StarCraft AI development.

Talking in Stations
Northern FCs on the Tribute Conflict

Talking in Stations

Play Episode Listen Later Jun 1, 2019 102:00


We talk to the fleet commanders and leaders defending Tribute against the Imperium Invasion. Hedliner (PL), Elo Knight (Frat), Killah Bee (NC), Alphastar (Horde) get us up to speed on the events in Tribute, particularly the battle over the SH1-6P infrastructure (ihub). We also hear about Alphastar’s personal story in EVE. Elo Knight – Origin. Corporation [Fraternity] Hedliner – Sniggerdly Corporation [Pandemic Legion] Killah Bee – Shiva Corporation [Northern Coalition] Alphastarpilot – Horde Vanguard. Corporation [Pandemic Horde] Elise Randolph – Habitual Euthanasia Corporation [Pandemic Legion] Matterall – Destructive Influence Corporation [Northern Coalition] Carneros – Ancient Hittite Corporation [The Bastion]    ...

Lex Fridman Podcast
Oriol Vinyals: DeepMind AlphaStar, StarCraft, Language, and Sequences

Lex Fridman Podcast

Play Episode Listen Later Apr 29, 2019 106:07


Oriol Vinyals is a senior research scientist at Google DeepMind. Before that he was at Google Brain and Berkeley. His research has been cited over 39,000 times. He is one of the most brilliant and impactful minds in the field of deep learning. He is behind some of the biggest papers and ideas in AI, including sequence to sequence learning, audio generation, image captioning, neural machine translation, and reinforcement learning. He is a co-lead (with David Silver) of the AlphaStar project, creating an agent that defeated a top professional at the game of StarCraft.

Linear Digressions
AlphaStar

Linear Digressions

Play Episode Listen Later Mar 10, 2019 22:03


It’s time for our latest installment in the series on artificial intelligence agents beating humans at games that we thought were safe from the robots. In this case, the game is StarCraft, and the AI agent is AlphaStar, from the same team that built the Go-playing AlphaGo AI. StarCraft presents some interesting challenges though: the gameplay is continuous, there are many different kinds of actions a player must take, and of course there’s the usual complexities of playing strategy games and contending with human opponents. AlphaStar overcame all of these challenges, and more, to notch another win for the computers.

ThePylonShow
ThePylonShow EP.41

ThePylonShow

Play Episode Listen Later Mar 8, 2019 136:43


LiquidMana & JonSnow talking Katowice, Terran Balance, Nydus Worms, AlphaStar, + ASL & GSL updates 01. 00:02:58 This Week in Starcraft II 02. 00:12:03 This Week in Brood War 03. 00:14:15 This Week in Awesome 04. 00:19:10 IEM Katowice Recap 05. 00:24:05 The Current State of Balance @ IEM 06. 00:44:08 IEM Final Results - soO 07. 01:07:18 Round Table Topic: Thoughts on Nydus Worm 08. 01:26:42 Upcoming WESG / WCS Schedule 09. 01:29:41 Round Table Topic: AlphaStar 10. 01:35:16 GSL Season 1 11. 01:40:42 ASL Season 7 12. 01:46:30 Patreon Q & A 13. 02:11:16 Updates with Mana @JonSnow 14. 02:15:38 Thanks & Signoff ShowNotes: https://docs.google.com/document/d/1TvsbYnBuLZUsm5-_MedBouzA4ZHTeHK5zIjrDEtEiRs Code this week: "Cheesecake" https://matcherino.com/pylonshow 15% Discount code = “Pylon” @ https://afktea.com If you'd like to support us directly: https://www.patreon.com/ThePylonShow Follow us to stay up to date on SC happenings: https://twitter.com/ThePylonShow https://www.instagram.com/thepylonshow/ https://discordapp.com/invite/ga5umfc https://ThePylonShow.com - A countdown timer, links to the podcasts, and most everything else can be found on our website. Special thanks to the Pylon Show Team: Producer: https://twitter.com/CobraVe7nom7 Shownotes: Alisaunder https://twitter.com/Daisemiin VFX Artist: https://twitter.com/Bodypop_ Artist: Lich&Famous Timestamps: https://twitter.com/AllelujahTV Animator: https://twitter.com/DarkFirze - Guests - https://twitter.com/JonSnowSC2 https://twitter.com/Liquid_MaNa - Hosts - https://twitter.com/Artosis https://twitter.com/iNcontroLTV


Artificially Intelligent
73: DeepMind Does it Again

Artificially Intelligent

Play Episode Listen Later Feb 26, 2019 41:24


Google DeepMind releases the long-awaited AlphaStar in a high-profile public demonstration, showing it besting top pros in the game of StarCraft 2. We discuss the challenge and some of the interesting results from this latest round of man vs. machine. Links Ep 13: AI's new Alpha Dog Google DeepMind Blog DeepMind Video   Follow us and leave a rating! iTunes Homepage Twitter @artlyintelly Facebook

津津乐道中国版
vol.128 『乱槽之癫』AlphaStar 跟人类打星际,输赢怎么算?

津津乐道中国版

Play Episode Listen Later Feb 17, 2019 98:00


Not long ago, the news of AlphaStar playing StarCraft against humans shocked both the gaming and AI communities. Unlike Go, AI models for strategy games are far more complex: they must account for many human factors and even accommodate human play habits. What went on behind this match that we don't know about, and will AI end up taking away humanity's "toys"? In this episode, Googler 狗叔 and 管啸, a guest from a university in Sweden, take a deep dive into the story behind it. In this episode you will hear: - What exactly is AlphaStar? - An introduction to StarCraft for non-gamers - A replay of the match details - The differences between the AI's and a human player's view of the game - Why did DeepMind choose StarCraft as its breakthrough point? - The origin and evolution of game tactics - The effect of mental stability on winning and losing - The differences between StarCraft and Go - The difference between exploratory and computational behavior - The basics of neural networks - How does AlphaStar train its AI models? - The sources of training data - Handling and decision-making in battle scenarios - APM considerations - The impact of AI technology on professional esports players - Listener reply: what do you think of AI taking away humanity's toys? - Listener reply: how does it differ from ...? About 津津乐道播客: a casual podcast founded by a group of IT professionals. Host 朱峰 is a veteran internet entrepreneur; the other hosts and guests come from many cities and industries. Cross-disciplinary thinking and a wide range of guests are our hallmarks. Frequent topics include everyday life, new technology, travel, and industries you may not know about.



津津乐道
AlphaStar 跟人类打星际,输赢怎么算?

津津乐道

Play Episode Listen Later Feb 16, 2019 98:00


Host / 狗叔 Guest / 管啸 Guest coordination / 狗叔 Music / 姝琦 Post-production / 朱峰 Summary: Not long ago, the news of AlphaStar playing StarCraft against humans shocked both the gaming and AI communities. Unlike Go, AI models for strategy games are far more complex: they must account for many human factors and even accommodate human play habits. What went on behind this match that we don't know about, and will AI end up taking away humanity's "toys"? In this episode, Googler 狗叔 and 管啸, a guest from a university in Sweden, take a deep dive into the story behind it. In this episode you will hear: What exactly is AlphaStar? An introduction to StarCraft for non-gamers; A replay of the match details; The differences between the AI's and a human player's view of the game; Why did DeepMind choose StarCraft as its breakthrough point? The origin and evolution of game tactics; The effect of mental stability on winning and losing; The differences between StarCraft and Go; The difference between exploratory and computational behavior; The basics of neural networks; How does AlphaStar train its AI models? The sources of training data; Handling and decision-making in battle scenarios; APM considerations; The impact of AI technology on professional esports players; Listener reply: what do you think of AI taking away humanity's toys? Listener reply: how does it differ from Texas Hold'em? Tip the hosts: you're welcome to support this episode's guest and participating hosts with a tip; click here to tip right away, and once it goes through your name will be read out on the show. Music this episode: iRobot — artist: Jon Bellion. Contact us: Website: https://jinjinledao.org/ WeChat public account: 津津乐道播客 Weibo: https://weibo.com/jjldpodcast Twitter: @jinjinledaofm Telegram Group: https://t.me/htnpodcast Telegram Channel: https://t.me/jinjinledao Zhihu column: https://zhuanlan.zhihu.com/jinjinledao Email: hi@jinjinledao.org

Historias Cienciacionales: el podcast
T2E25 - Vórtice polar, Alpha Star juega Starcraft II, ¿se cura el VPH?, y memoria y lenguas

Historias Cienciacionales: el podcast

Play Episode Listen Later Feb 9, 2019 76:01


In this episode, we talk about a current of freezing air spilling over the north of the planet, an artificial intelligence that beats your favorite StarCraft II champions, an unusual and dubious announcement about a cure for the human papillomavirus, and we finish by discussing how the language you speak could affect your memory. We have the guest voice of Elisa T Hernández, who shares her questions and skepticism, along with her sharp observations on how we are communicating science (as a society (and as HC, of course, since we are part of it (of society, not of Elisa))). Menu 00:16 - Intro and introductions 01:42 - The polar vortex, explained 13:36 - AlphaStar dominates StarCraft II 38:31 - The news about the human papillomavirus cure 53:36 - The influence of language on memory 01:13:51 - Final thoughts and farewell Voices and content: Elisa T Hernández, Sofía Flores, Rodrigo Pacheco, Víctor Hernández. Editing and production: Víctor Hernández Voice on the rubric: Valeria Sánchez You can find Elisa on Twitter https://twitter.com/ElisaT_ . Here's an article on the polar vortex: https://www.weather.gov/safety/winter-spanish-polar And this one has truly spectacular photos: https://es.gizmodo.com/las-imagenes-mas-espectaculares-del-vortice-polar-en-es-1832255956 Here's one on AlphaStar: https://www.xataka.com/robotica-e-ia/alphastar-inteligencia-artificial-deepmind-que-ha-logrado-ganar-10-1-a-profesionales-starcraft-ii and a match it played against MaNa: https://www.youtube.com/watch?v=Y-Knq5XjCS4 Here's the IPN press release on the work of Dr.
Eva Gallegos: https://www.ipn.mx/CCS/comunicados/ver-comunicado.html?fbclid=IwAR0BQ0ecRkagsTGW3M3MGDC43YOf-oehPvcCkP3z6hbPJ1tv5f-xMFS7cIs&n=31&y=2019 And here's the paper Sofía discussed, in its English version, more direct and less polite: https://www.nature.com/articles/s41598-018-37654-9 And a couple of articles on language, mind, and philosophy around Arrival (dir. Denis Villeneuve), the 2016 film that came up and which we highly recommend: https://www.investigacionyciencia.es/blogs/ciencia-y-sociedad/98/posts/em-la-llegada-em-o-cmo-el-lenguaje-construye-realidades-14960 https://posgrado.ufm.edu/blog/sobre-extraterrestres-realidad-y-lenguaje/ LET'S GET VACCINATED AGAINST HPV! For more information on this virus, the consequences it can cause, and the vaccine, here's the link to the Instituto Mexicano del Virus del Papiloma Humano: https://virusdelpapilomahumano.com.mx/?gclid=EAIaIQobChMI8LuKosav4AIVB4lpCh3LZwXMEAAYASAAEgLt6fD_BwE This podcast is produced from a place where temperatures ranged between 13 and 18 degrees Celsius, and even so we were wearing hoodies (also called jerseys (also called pullovers)), because we find -50°C inconceivable, innocent children of summer that we are. Music Intro and outro: Little Lily Swing, by Tri-Tachyon, under a Creative Commons 3.0 Attribution license: freemusicarchive.org/music/Tri-Tachyon/ Rubric: Now son, by Podington Bear, freemusicarchive.org/music/Podington_Bear/ under a Creative Commons International Attribution-NonCommercial 3.0 license Eggs! Toast! Gas! Fish! by Elvis Herod is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.
Audio: a sound collage of videos about the polar vortex, in English and Spanish, all taken from YouTube; an introduction to the match between AlphaStar and MaNa, from the WinterStarcraft channel https://www.youtube.com/watch?v=H3MCb4W7-kM&t=43s ; a clip from an old film in which the IPN mambo is sung: https://www.youtube.com/watch?v=XXY1VSoN-4k ; and that scene where Mike Wazowski asks Sulley to do his paperwork, and he forgets the order of the folders (understandably).

AI with AI
Darcraft Shadows

AI with AI

Play Episode Listen Later Feb 8, 2019 54:44


In recent announcements, Andy and Dave discuss the launch by the National Endowment for Science, Technology and the Arts (Nesta) of a project that is ‘Mapping AI Governance;’ MIT Tech Review’s survey of AI and ML research suggests that the era of deep learning may be coming to an end (or does it?); a December 2018 survey shows strong opposition to “killer robots;” China has (internally) released a report on its view of the “State of AI in China;” and DARPA wants to build conscious robots using insect brains, announcing its µBRAIN Program. In research topics, Andy and Dave discuss the recent competition between DeepMind’s AlphaStar and human professional gamers in playing StarCraft II. MIT and Microsoft have created a model that can identify instances where autonomous systems have learned from training examples that don’t match what’s happening in the real world, thus creating blind spots. Boston University publishes research that allows an ordinary camera to “see” around corners using shadow projection, in essence turning a wall into a mirror – and doing so without any AI or ML techniques. In papers and reports, the Office of the Director of National Intelligence releases its AIM Initiative – a strategy for augmenting intelligence using machines; a report provides a survey of the state of self-driving cars; and another report surveys the state of AI/ML in medicine. Game Changer takes a look at AlphaZero’s chess strategies, while The Hundred-Page Machine Learning Book offers a condensed overview of ML. The Association for the Advancement of AI conference (27 Jan – 1 Feb) begins to release videos of the conference, including an Oxford-style debate on the Future of AI. And finally, Andy and Dave conclude with a “hype teaser” for next week – with SELF AWARE robots!

Inteligencia Artificial
Artificial Intelligence beats StarCraft professionals

Inteligencia Artificial

Play Episode Listen Later Feb 7, 2019 10:45


We talk about AlphaStar, the artificial intelligence created by Google's DeepMind that crushed two professional StarCraft players. Source

Rebuild
229: Imprisoned Underground (hak)

Rebuild

Play Episode Listen Later Feb 5, 2019 96:17


With guest Hakuro Matsuda, we talked about chip industry news, smartphones, FaceTime, Facebook, AlphaStar, and more. Show Notes Dragon Quest X AMD Keynote - CES 2019 PassMark Software GeForce RTX 2060 Graphics Card | NVIDIA Chipmakers Turn to 'Chiplets' Intel CEO reportedly sold shares after the company already knew about massive security flaws The Snapdragon 855 Performance Preview Mi MIX 3 All the Incoming Foldable Phones of 2019 Apple's dramatic Q1 2019 results Apple gets obliterated by OnePlus in India as sales drop by 50 percent Apple lays off over 200 from Project Titan autonomous vehicle group Apple TV Stick Release Date, Price and Features Rumours Apple exec met with teenager who found FaceTime bug Why every "Unko Button" unit is being exchanged Why Apple went to war with Facebook and Google this week Apple Developer Enterprise Program AlphaStar: Mastering the Real-Time Strategy Game StarCraft II Takahashi Meijin on Shikujiri Sensei

Les Décodeurs RTBF
Les Décodeurs-RTBF - Does Artificial Intelligence spell the death of music? - 03/02/2019

Les Décodeurs RTBF

Play Episode Listen Later Feb 3, 2019 60:07


Franz Schubert was 25 years old in 1822 when he began composing his Symphony No. 8 in B minor, better known as the Unfinished Symphony. Ill, he never completed it. Since then, the work has intrigued, and many composers have tried their hand at finishing it. And on February 4, the symphony will be presented at Cadogan Hall in London with its final two movements composed by... a Huawei artificial intelligence. "We used the capabilities of artificial intelligence to push the limits of what is humanly possible and thus see the positive impact that technology could have on modern culture," said Walter Ji, President of CBG, Huawei Western Europe. So can an AI replace human creativity? Can we still speak of art when it is a machine doing the producing? What is the point of asking an AI to compose music; is it anything more than a publicity stunt? We discuss this with our guests: Thierry Dutoit, professor at the University of Mons and president of the Numédiart institute, which studies the links between art and digital technology, and Pierre Barreau, co-founder of Aiva, a start-up that creates music precisely through artificial intelligence. FOCUS 2 / Journalism in a war zone: how do you train before leaving? An RTBF team is preparing to leave for Yemen, a region of the world that receives little media coverage... Yemen has been at war for four years and is in the grip of a catastrophic humanitarian situation. On the ground, they will have to be careful: avoid drawing attention, ensure their safety, pass through checkpoints, and be able to react quickly in case of danger. How does one train for all of this? Les Décodeurs attended the training and interviewed Gaetan Vannay, the trainer, a war reporter for 17 years in regions such as Libya, Syria, and Georgia. 
Tendances Pub - Super Bowl, Super Ads In his Tendances Pub segment, Frédéric Brébant looks at the most popular sporting event in the United States, which is also the biggest on-screen advertising fair: the Super Bowl, the American football championship final, which takes place overnight from Sunday to Monday for us European viewers. New Technologies - AI: winning at StarCraft II and conquering the world Google's artificial intelligence, AlphaStar, soundly defeated professional players of the video game StarCraft II, a real-time strategy game. It is a genuine achievement that opens the door to many other applications. All the way to military strategy? A column by Gilles Quoistiaux. L'Autre Web - The world is burning No, Instagram is not just a social network for posting photos of yourself at the beach! Did you know it is also a place where new forms of comics are finding a home? Lucie Rezsöhazy highlights a few Instagram accounts, from the futuristic comic "Le monde brûle" to Quebec illustrator Alex Lévesque and his irreverent account "Dessine Bandé". Presented by: Marie Vancutsem

Not Enough Resources
Episode 47 - Hearthstone, Anthem, GameStop can't find a buyer, Epic VS Steam, AlphaStar AI VS StarCraft II Pros, Smash Brothers Patch Notes

Not Enough Resources

Play Episode Listen Later Feb 2, 2019 50:52


Hold on to your hype, it is time for the latest episode of Not Enough Resources! We have a lot of cool things in the works, so keep an eye on Rogues Portal and follow us on Twitter @NERPodcast! As always, send us your comments and suggestions! You can subscribe to Not Enough Resources on iTunes or Google Play. This was our first episode recorded live, and we are looking into streaming our future episodes live as well. Now Playing: Dylan is back into Hearthstone with a brand new focus: single-player puzzle modes. They scratch his itch for a challenge without the frustration of multiplayer. Ryan finished Resident Evil 2, but has been itching to keep playing BioWare's latest, Anthem. With only an hour and a half of playtime in the demo, he is ready to make the jump to a full purchase. News: Converging markets might force GameStop to close their doors, while Epic and Steam face off in the digital space for your hard-earned dollars. The real winner in all of this news, though? Developers. Also, Metroid Prime 4 restarts development, so when are we actually going to see the game? Competitive Corner: Overwatch goes to Paris! The new map is beautiful and fun, and Ryan offers a potentially helpful strategy for taking the first point. Nintendo shows its support for the competitive Super Smash Brothers Ultimate scene by providing some of the most in-depth patch notes imaginable, and Piranha Plant enters the battle. Finally, we talk about AlphaStar, the Google DeepMind StarCraft II AI, and what it means for the future of competitive play and developing metas.

STG podcast (Science, Technology,Gaming and Stuff)
Ep.39 DeepMind's AI against professional Starcraft 2 players

STG podcast (Science, Technology,Gaming and Stuff)

Play Episode Listen Later Feb 1, 2019 58:35


In this episode we discuss the recent results of AlphaStar, the new deep learning agent from DeepMind, playing StarCraft 2. The AI went 10-1 against two different pro gamers in Protoss vs Protoss matches. The discussion is unfortunately not on the technical side, because we are not experts in deep learning in any way, but rather on the results. Does the AI really have better strategies, or are its micro and technical "skills" its main advantage? It's an interesting discussion from people who mostly know the SC2 side of things; we hope to find someone who can come speak with us about how the deep learning algorithm actually works. As always, come talk to us @STG_podcast and share this episode around! Find us also on PocketCasts, iTunes, Spotify, and probably your favorite podcast service! Podcast music by: Punch Deck

The Daily Crunch – Spoken Edition
StarCraft II-playing AI AlphaStar takes out pros undefeated

The Daily Crunch – Spoken Edition

Play Episode Listen Later Jan 30, 2019 7:15


Losing to the computer in StarCraft has been a tradition of mine since the first game came out in 1998. Of course, the built-in “AI” is trivial for serious players to beat, and for years researchers have attempted to replicate human strategy and skill in the latest version of the game. They've just made a huge leap with AlphaStar, which recently beat two leading pros 5-0.

Engineering IRL
Rev.27 - Deepmind AI AlphaStar and Artificial Intelligence Engineers

Engineering IRL

Play Episode Listen Later Jan 30, 2019 25:55


In this episode of the Engineering podcast, as you accompany me on the drive to work, we go through the milestone achieved in Artificial Intelligence when DeepMind's AI AlphaStar defeated some of the top professional gamers in the world at Blizzard's StarCraft 2, a popular, high-skill, and highly competitive video game. We also take a brief look at artificial intelligence engineers and focus on why this achievement is an important milestone. As referenced in the show: Pro StarCraft 2 player point-of-view video: https://youtu.be/bexWuHmV32A?t=27 As always, if you liked this episode and want more, please subscribe, ask one of your friends to listen, and join the conversation at www.facebook.com/engineerIRL or www.sariodev.com Remember to subscribe, and for more head to https://www.engineeringinreallife.com and become a member of the Engineering IRL Community. Facebook: www.Facebook.com/engineerIRL Twitter: www.Twitter.com/engineering_irl Instagram: https://www.instagram.com/engineeringinreallife To learn more about our partnerships and how to get in touch with the show, visit the top engineering podcast.

Diagnose Kaufsucht
#017 - Guest star Jan, faked reviews, iPhone battery replacement, AlphaStar, Back To Nokia, Logitech G910

Diagnose Kaufsucht

Play Episode Listen Later Jan 30, 2019 77:44


Our second guest is here. The good Jan did us the honor of joining us for an episode. The episode is packed with excitement, fun, and action, current news, and of course What's In Your Bag! Jan's Twitter https://twitter.com/LikeCodar Our Twitter https://twitter.com/KaufsuchtCast Our website https://DiagnoseKaufsucht.de/

WIRED Business – Spoken Edition
DeepMind Beats Pros at StarCraft in Another Triumph for Bots

WIRED Business – Spoken Edition

Play Episode Listen Later Jan 28, 2019 6:43


In London last month, a team from Alphabet's UK-based artificial intelligence research unit DeepMind quietly laid a new marker in the contest between humans and computers. Thursday, it revealed the achievement, in a three-hour YouTube stream in which aliens and robots fought to the death. DeepMind's broadcast showed its artificial intelligence bot, AlphaStar, defeating a professional player at the complex real-time-strategy videogame StarCraft II.

Radio Gimmick
Google's AI is ready to move to Korea

Radio Gimmick

Play Episode Listen Later Jan 26, 2019 1:30


AlphaStar, the algorithm created by Google to win at StarCraft II, crushed some of the champions of Blizzard's game, demonstrating its clarity and power. --- Watch DeepMind's AI Tackle 'Starcraft 2' - Motherboard

VZOO | E分钟
E分钟-0125: AlphaStar AI crushes human players at StarCraft II; Google Pixel 4 with Snapdragon 855 leaked

VZOO | E分钟

Play Episode Listen Later Jan 25, 2019 1:22


Humanity loses another round! The AlphaStar AI crushed professional StarCraft II players 5:0 in back-to-back matches. Powered by a Snapdragon 855 and Android Q? Google Pixel 4 benchmark scores leaked under the codename Coral ...