Game native to Japan
Japanese-Australian chess player, trainer, and content creator Junta Ikeda is the 2013 Japanese National Chess Champion and a runner-up in the 2020 Australian Championship. These days, with a full-time job outside of chess, Junta devotes most of his chess energy to helping others improve. He has shared countless insights on his excellent blog, Infinite Chess, which I've been reading religiously since its launch. There, Junta offers thoughtful advice on topics such as improving your tactics, managing the clock, and budgeting your study time. For this interview, I compiled questions based on some of his most insightful observations as we explored chess improvement from a wide range of angles. Toward the end of the conversation, we also touched on Junta's background, chess in Japan, and even picked up a few non-chess book recommendations.
Check out Chessiverse and take advantage of their end-of-year sale here: http://chessiverse.com/
Check out IM John Bartholomew's Comprehensive Scandinavian Course here: https://chessiverse.com/courses/scandi
Find out more about Chessdojo's classes here: https://www.chessdojo.club/blog/live-classes
Use code NY26 to get a free month of the tier program, and use code Ben to save 10% off anything.
0:04- Junta joins me! Does Junta's fellow Canberra, Australia resident, IM Andras Toth, exist in real life?
0:06- How does Junta respond to FM Nate Solon's inflammatory tweet about chess books? https://x.com/natesolon/status/1988955760965963898?s=20
0:11- Junta's coaching and content creation background
0:12- What are the most common mistakes Junta sees amateurs make?
0:19- What did Junta learn from the book How to Become a Deadly Chess Tactician?
22:00- Junta shares some advice from his lifelong battles with time trouble
Mentioned: What I needed to cure my time trouble: https://juntaikeda.substack.com/p/how-i-escaped-time-trouble-hell; In search of lost time: 20 Time Trouble Tips: https://juntaikeda.substack.com/p/1-in-search-of-lost-time-20-time; EP 383 with Dan Bock
24:00- How to learn to face your fears
Mentioned: The Uncool by Cameron Crowe
39:00- The impact of talent in chess
Mentioned: GM Moulthan Ly, GM Max Illingworth
47:00- How did "the worst openings player in Australia" learn to tolerate openings?
Mentioned: GM David Smerdon's The Complete Chess Swindler
51:00- Thanks to our sponsor, Chessable.com! Check out their holiday sale here: https://www.chessable.com/courses/all/all/offer/
52:00- What type of challenging exercises does Junta recommend in order to improve calculation?
Mentioned: IM Kostya Kavutskiy's Endgame Studies 101, IM Tatev Abrahamyan's Endgame Studies: Solve to Evolve, Domination by Kasparyan, Studies for Practical Players
Sign up for Chessable Pro here: https://www.chessable.com/pro/?utm_source=affiliate&utm_medium=benjohnson&utm_campaign=pro
1:01:00- Junta's recommended chess books and resources
Mentioned: Lichess, The Mammoth Book of the World's Greatest Chess Games, My Great Predecessors, My 10 Memorable Chess Books: https://juntaikeda.substack.com/p/my-10-memorable-chess-books
1:02:00- Is chess growing in Japan despite Shogi's popularity?
1:08:00- Balancing chess and content creation
1:10:00- Why Junta wishes he had committed more to chess than to university
1:13:00- Will Junta pursue the GM title?
Mentioned: Dojo Talks with IM-elect Gauri Shankar
1:15:00- Non-chess book recs!
Mentioned: Murakami, Infinite Jest, The Book of Disquiet, Finite & Infinite Games
1:19:00- Thanks to Junta for sharing his advice and perspective!
Here is how to keep up with his work:
Infinite Chess Blog: https://juntaikeda.substack.com/
YouTube: https://www.youtube.com/@juntaikeda
Website: https://juntaikeda.com/
Learn more about your ad choices. Visit podcastchoices.com/adchoices
The last episode of 2025. Still enough political problems, crime and creepy guys to get us to 2026. Send us a voice message https://www.speakpipe.com/ChunkMcBeefChest Linktree https://linktr.ee/chunkmcbeefchest
In my previous episode with Prof. Daston on rules, we also talked about games. Moreover, I am quite into board games, and this naturally brought me to Tom Vasel, probably the most prolific board game reviewer in the world and also an entrepreneur with his company, the Dice Tower. Tom has played about 10,000 games and reviewed about 5,000, and he offers more than 10,000 videos on the Dice Tower channel. He organises a number of board game events with the Dice Tower crew, among others Dice Tower East, Dice Tower West, and the Dice Tower Cruise.
My new book, Hexenmeister oder Zauberlehrling? Die Wissensgesellschaft in der Krise, is available! Have you got all your Christmas presents yet?
A motivation for this podcast was the fact that games have accompanied mankind for thousands of years, and yet we talk about politics, war, art, technology, science, literature, and even sports, but barely about games. Even though, as you will also find in my book, man is described as homo ludens, the playing man. Just as an inspiration, consider the following games that we played in the past, some of them up to the present day:
The Royal Game of Ur (4,600 years ago)
Mehen (3000 BC, Egypt)
Senet (circa 3100 BC, Egypt)
Chaturanga, the oldest known chess precursor (circa 6th century AD, India)
Ajax and Achilles' game of dice (530 BC, Athens)
Mahjong
Pachisi (at least 4th century AD, India)
The Game of the Goose (16th century)
Sugoroku (Japan, derived from earlier Chinese games)
Backgammon (precursors circa 3000 BC)
Snakes and Ladders (2nd century AD, India)
Dominoes (12th century AD, China)
Checkers (precursors circa 3000 BC, modern form around the 12th century)
Go (before 200 BC, China, and often dated much older)
Shogi (circa 8th–10th century AD, Japan)
This raises the question: why do we play? And considering that even animals play, and not only juveniles, who is playing?
What is a game? What makes a game worth playing?
What about gambling, slot machines, and the like?
How is the illusion (?) of choice relevant, and how many degrees of freedom are needed to make a good or bad game?
“We should strive to be more like children when we play.”
Is playing games about winning, or about the process of playing? What about good and bad losers?
Games as social connectors and meaningful relations, as opposed to social media... Solo games? How do they fit in?
What has changed with modern games? Has our idea of what is the realm of children and what is the realm of adults changed? Has society become more infantilised?
“My generation, Generation X, definitely does not want to grow up. We want our toys, we want our stuff. And the world caters to us at this point in time. Look at the movies. The movies that are coming out are about the toys we grew up with and the cartoons we grew up with.”
What about video games, which are also no longer a children's thing? Do we observe in games a development similar to that of comics? I mention the classic Donald Duck comics created by Carl Barks and translated into German by Dr. Erika Fuchs, which are seen as classics today. So, do these things mature, or do we become more infantile?
Can we, or children, learn something from playing games? Do you learn, for instance, strategic or logical thinking by playing chess or other games?
What constitutes the modern (board) gaming industry? How large is it, also in comparison to video games?
“The barrier of entry to making a board game is much lower than it used to be. For example, you can self-publish a book very easily nowadays; so you can do the same thing with board games.”
What role does the internet play in these processes? “Gaming has become a more popular hobby.”
What are important roots of modern board games?
Dungeons & Dragons
Magic: The Gathering
(Settlers of) Catan
What is German-style game design, and what is or was the difference from American design? How did the rest of the world get more and more involved? What happened due to globalisation?
How has game design changed over the years? What is a Eurogame? Does this terminology even make sense?
What does balancing mean? What is the relationship between pure-strategy and luck-based games?
What does complexity mean in terms of gaming? “A minute to learn, a lifetime to master.” Really?
What is the World Series of Board Gaming competition? One can master modern games too; it is not only a “chess” or “Go” phenomenon.
What does theming mean in (board) games? “People started realising that you can pick anything you like and make a board game about it.”
What about the Lindy effect applied to games? Which game of today will replace chess tomorrow? Or will that never happen?
“But by far the greatest difference between the evolution of the born and the evolution of the made is that species of technology, unlike species in biology, almost never go extinct.” — Kevin Kelly
Why has digital technology not replaced the analogue game? What is the interplay between digital and analogue, i.e., video/computer games vs. board/card games? Teaching games, upkeep, storytelling, structuring/rules.
Do we even experience a backlash against digital? Is the internet a niche amplifier and enabler, or rather a distraction?
What is happening globally with people playing board games?
If you played your last board game as a child, where should you start with board gaming anew?
Can we learn something from board games about our future? Living together instead of a fractured society?
Other Episodes
Episode 129: Rules, A Conversation with Prof. Lorraine Daston
Episode 123: Die Natur kennt feine Grade, Ein Gespräch mit Prof. Frank Zachos
References
Lorraine Daston, Rules, Princeton Univ. Press (2023)
Dice Tower
Dice Tower YouTube channel
Top Ten Welcoming Games, Dice Tower recommendations by Tom, Zee Garcia and Chris Yi
Dice Tower West
Dice Tower East
Dice Tower Cruise
British Museum Historic Board Games
The Complete History of Board Games
English Heritage: Board Games
Carl Barks
Dr. Erika Fuchs
Board Game Geek (Comprehensive Board Game Database)
Board Game Arena: Play Board Games Online
BG Stats
World Series of Board Gaming Competition
Kevin Kelly, What Technology Wants, Penguin (2011)
Spiel Essen Games
Pachinko
Slot machines
Chess
Bridge
Dungeons & Dragons
Magic: The Gathering
(Settlers of) Catan
Brass Birmingham
Heat: Pedal to the Metal
Ticket to Ride
Final Girl
Lunch atop a Skyscraper
Checkers
Backgammon
Codenames
Poker
Nintendo Gameboy
GameLink
Echoes (The dancer, example)
Carcassonne
Hot Streak
Japan Shogi Association to Relax Pro Tournament Rules for Pregnant Women Players
Japanese shogi sensation Sota Fujii earned "eisei" lifetime status for the Ryuo title on Thursday, becoming the third player in history to achieve the feat.
Learn about the showdown between AI and shogi masters.
Today we talk about the winners of the 2024 Turing Award, the highest distinction in computer science, which has just been awarded to two pioneering researchers in artificial intelligence: Andrew Barto and Richard Sutton. So what is their contribution to the world of computing? It is a technique known as reinforcement learning, the key approach that has allowed AIs such as AlphaZero and AlphaStar to excel at complex games, including chess. But before going any further, let's look at what reinforcement learning actually is.
What is reinforcement learning? Imagine a mouse in a maze. At every decision, every direction it takes, it may or may not be rewarded depending on its progress toward the exit. The learning a computer can perform works the same way: it explores different options, learns from its mistakes, and adjusts its strategy to maximise its rewards. This method has become essential for training intelligent systems (everyone just says "artificial intelligence" these days) that are now capable of making autonomous decisions.
Chess, Go, and shogi as training grounds. Concretely, reinforcement learning has become a key technique for delivering on the promises of modern AI. It is this approach that allowed AlphaZero, Google DeepMind's program, to learn to play chess, Go, and shogi (a traditional Japanese board game) with no prior knowledge: the AI trained against itself at all three games until it became an expert. In the same way, but this time in the field of video games, the AlphaStar program reached "grandmaster" level in StarCraft 2.
The first true computational theory of intelligence. The power of reinforcement learning now has an impact well beyond games, of course. Richard Sutton and Andrew Barto argue that their vision of reinforcement learning rests on a deeper idea: that it could be the first true computational theory of intelligence. And beyond the algorithms, they stress the importance of play and curiosity as fundamental drivers of learning, for humans and machines alike.
Le ZD Tech is on every podcast platform. Subscribe! Hosted by Ausha. Visit ausha.co/politique-de-confidentialite for more information.
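Reinforcement learning, as described in the episode above, is easy to sketch in a few lines of code. What follows is only a toy illustration of the mouse-in-a-maze idea, a minimal tabular Q-learning loop: the corridor environment, reward values, and hyperparameters are invented for the example and are unrelated to AlphaZero's actual training setup, which swaps the lookup table for deep networks and adds tree search.

import random

# Toy tabular Q-learning on a hypothetical 5-square corridor "maze".
# The agent starts on square 0 and is rewarded only for reaching square 4.
N_STATES = 5
ACTIONS = (-1, +1)                      # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment: move, stay inside the corridor, reward 1.0 only at the exit."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        if random.random() < EPSILON:   # explore occasionally...
            action = random.choice(ACTIONS)
        else:                           # ...otherwise exploit current estimates (random tie-break)
            action = max(ACTIONS, key=lambda a: (Q[(state, a)], random.random()))
        nxt, reward, done = step(state, action)
        # Core update: nudge Q toward the reward plus the discounted best future value.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After training, the greedy policy should point right on every square before the exit.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})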
Professor Swarat Chaudhuri from the University of Texas at Austin and visiting researcher at Google DeepMind discusses breakthroughs in AI reasoning, theorem proving, and mathematical discovery. Chaudhuri explains his groundbreaking work on COPRA (a GPT-based prover agent) and shares insights on neurosymbolic approaches to AI.
Professor Swarat Chaudhuri: https://www.cs.utexas.edu/~swarat/
SPONSOR MESSAGES:
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. https://centml.ai/pricing/
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on ARC and AGI; they just acquired MindsAI, the current winners of the ARC challenge. Are you interested in working on ARC, or getting involved in their events? Go to https://tufalabs.ai/
TOC:
[00:00:00] 0. Introduction / CentML ad, Tufa ad
1. AI Reasoning: From Language Models to Neurosymbolic Approaches
[00:02:27] 1.1 Defining Reasoning in AI
[00:09:51] 1.2 Limitations of Current Language Models
[00:17:22] 1.3 Neuro-symbolic Approaches and Program Synthesis
[00:24:59] 1.4 COPRA and In-Context Learning for Theorem Proving
[00:34:39] 1.5 Symbolic Regression and LLM-Guided Abstraction
2. AI in Mathematics: Theorem Proving and Concept Discovery
[00:43:37] 2.1 AI-Assisted Theorem Proving and Proof Verification
[01:01:37] 2.2 Symbolic Regression and Concept Discovery in Mathematics
[01:11:57] 2.3 Scaling and Modularizing Mathematical Proofs
[01:21:53] 2.4 COPRA: In-Context Learning for Formal Theorem-Proving
[01:28:22] 2.5 AI-driven theorem proving and mathematical discovery
3. Formal Methods and Challenges in AI Mathematics
[01:30:42] 3.1 Formal proofs, empirical predicates, and uncertainty in AI mathematics
[01:34:01] 3.2 Characteristics of good theoretical computer science research
[01:39:16] 3.3 LLMs in theorem generation and proving
[01:42:21] 3.4 Addressing contamination and concept learning in AI systems
REFS:
00:04:58 The Chinese Room Argument, https://plato.stanford.edu/entries/chinese-room/
00:11:42 Software 2.0, https://medium.com/@karpathy/software-2-0-a64152b37c35
00:11:57 Solving Olympiad Geometry Without Human Demonstrations, https://www.nature.com/articles/s41586-023-06747-5
00:13:26 Lean, https://lean-lang.org/
00:15:43 A General Reinforcement Learning Algorithm That Masters Chess, Shogi, and Go Through Self-Play, https://www.science.org/doi/10.1126/science.aar6404
00:19:24 DreamCoder (Ellis et al., PLDI 2021), https://arxiv.org/abs/2006.08381
00:24:37 The Lambda Calculus, https://plato.stanford.edu/entries/lambda-calculus/
00:26:43 Neural Sketch Learning for Conditional Program Generation, https://arxiv.org/pdf/1703.05698
00:28:08 Learning Differentiable Programs With Admissible Neural Heuristics, https://arxiv.org/abs/2007.12101
00:31:03 Symbolic Regression With a Learned Concept Library (Grayeli et al., NeurIPS 2024), https://arxiv.org/abs/2409.09359
00:41:30 Formal Verification of Parallel Programs, https://dl.acm.org/doi/10.1145/360248.360251
01:00:37 Training Compute-Optimal Large Language Models, https://arxiv.org/abs/2203.15556
01:18:19 Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, https://arxiv.org/abs/2201.11903
01:18:42 Draft, Sketch, and Prove: Guiding Formal Theorem Provers With Informal Proofs, https://arxiv.org/abs/2210.12283
01:19:49 Learning Formal Mathematics From Intrinsic Motivation, https://arxiv.org/pdf/2407.00695
01:20:19 An In-Context Learning Agent for Formal Theorem-Proving (Thakur et al., CoLM 2024), https://arxiv.org/pdf/2310.04353
01:23:58 Learning to Prove Theorems via Interacting With Proof Assistants, https://arxiv.org/abs/1905.09381
01:39:58 An In-Context Learning Agent for Formal Theorem-Proving (Thakur et al., CoLM 2024), https://arxiv.org/pdf/2310.04353
01:42:24 Programmatically Interpretable Reinforcement Learning (Verma et al., ICML 2018), https://arxiv.org/abs/1804.02477
The Japanese government said Saturday it will give the Medal of Honor for autumn 2024 to 786 people and 26 organizations, including shogi player Akira Watanabe and 54 gold medalists at the Paris Olympic and Paralympic Games.
If I download the shogi app, you know I'm cooked. Podcast art by Joey Rizk
This episode goes over a few of the variants that exist for the game of shogi. Although we cover only a few variants in this episode, feel free to share other variants that you're aware of with us so that we can share them with the world. So with that said, we hope you enjoy. Credits Writer - Bradley P. Thomas Producer - Bradley P. Thomas Voice Talent - ElevenLabs: Taylor Editor - Bradley P. Thomas Copyright Disclaimer: Copyright Disclaimer under section 107 of the Copyright Act 1976, allowance is made for “fair use” for purposes such as criticism, comment, news reporting, teaching, scholarship, education, and research. Fair use is a use permitted by copyright statute that might otherwise be infringing. Non-profit, educational, or personal use tips the balance in favor of fair use. https://www.copyright.gov/legislation/dmca.pdf
This episode goes over the version of chess known as shogi, which originated in Japan. It shares a fair number of similarities with other versions of chess, except that both players' pieces are the same color and have the same pentagonal shape, with the direction a piece points indicating who owns it. So with that said, we hope you enjoy. Credits Writer - Bradley P. Thomas Producer - Bradley P. Thomas Voice Talent - ElevenLabs: Taylor Editor - Bradley P. Thomas Copyright Disclaimer: Copyright Disclaimer under section 107 of the Copyright Act 1976, allowance is made for “fair use” for purposes such as criticism, comment, news reporting, teaching, scholarship, education, and research. Fair use is a use permitted by copyright statute that might otherwise be infringing. Non-profit, educational, or personal use tips the balance in favor of fair use. https://www.copyright.gov/legislation/dmca.pdf
David Silver is a principal research scientist at DeepMind and a professor at University College London. This interview was recorded at UMass Amherst during RLC 2024.
References
Discovering Reinforcement Learning Algorithms, Oh et al -- his keynote at RLC 2024 referred to a more recent, yet-to-be-published update to this work
Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm, Silver et al 2017 -- the AlphaZero algorithm was used in his recent work on AlphaProof
AlphaProof on the DeepMind blog
AlphaFold on the DeepMind blog
Reinforcement Learning Conference 2024
David Silver on Google Scholar
Xenia Bayer plays not only Chinese chess (Xiangqi) but now also Shogi. In this episode she teaches us all about the pieces in Shogi and the special features of this Japanese board game. Shogi is usually played on a 9x9 board. There are some similarities to chess, such as the goal of defeating the opposing king. Some of the pieces are different, though; for example, there is a lance. If you want to know more, just tune in! Download the episode directly ℹ The best chess materials in the Chess Tigers Online Shop: Chess Tigers Shop
Xenia Bayer plays not only Chinese chess (Xiangqi) but now also Shogi. In this episode she teaches us all about the pieces in Shogi and the special features of this Japanese board game. Shogi is usually played on a 9x9 board. There are some similarities to chess, such as the goal of defeating the opposing king. Some of the pieces are different, though; for example, there is a lance. If you want to know more, just tune in! Download the episode directly The best chess materials in the Chess Tigers Online Shop: Chess Tigers Shop Der Schach-Booster: the book by Michael Busse with the 10 best methods for improving ... Would you like to host your own podcast for free and earn money with it? Then have a look at www.kostenlos-hosten.de and find out more. There you will get all the information about our free podcast hosting offers. kostenlos-hosten.de is a product of Podcastbude. We are happy to support you with your podcast production.
Japanese shogi star Sota Fujii on Monday became the youngest winner of "eisei" lifetime status for a shogi title by fending off a challenge for his "Kisei" title.
Japanese shogi sensation Sota Fujii was beaten by his challenger for his "Eio" title Thursday, losing one of the eight major titles for the first time since he captured all of them in October last year.
Episode 454: This week the Summit Squad take on another recommendation from one of our awesome Patrons, Snowman, who recommended we watch March Comes In Like A Lion, the story of a reclusive 17-year-old shogi prodigy who lives on his own in Tokyo, and how meeting three sisters challenges the loneliness left over from his previous home life and the immense pressure put on him as a player. --- Send in a voice message: https://podcasters.spotify.com/pod/show/anime-summit/message
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: LLMs seem (relatively) safe, published by JustisMills on April 26, 2024 on LessWrong. Post for a somewhat more general audience than the modal LessWrong reader, but gets at my actual thoughts on the topic. In 2018 OpenAI defeated the world champions of Dota 2, a major esports game. This was hot on the heels of DeepMind's AlphaGo performance against Lee Sedol in 2016, achieving superhuman Go performance way before anyone thought that might happen. AI benchmarks were being cleared at a pace which felt breathtaking at the time, papers were proudly published, and ML tools like Tensorflow (released in 2015) were coming online. To people already interested in AI, it was an exciting era. To everyone else, the world was unchanged. Now Saturday Night Live sketches use sober discussions of AI risk as the backdrop for their actual jokes, there are hundreds of AI bills moving through the world's legislatures, and Eliezer Yudkowsky is featured in Time Magazine. For people who have been predicting, since well before AI was cool (and now passe), that it could spell doom for humanity, this explosion of mainstream attention is a dark portent. Billion dollar AI companies keep springing up and allying with the largest tech companies in the world, and bottlenecks like money, energy, and talent are widening considerably. If current approaches can get us to superhuman AI in principle, it seems like they will in practice, and soon. But what if large language models, the vanguard of the AI movement, are actually safer than what came before? What if the path we're on is less perilous than what we might have hoped for, back in 2017? It seems that way to me. LLMs are self limiting To train a large language model, you need an absolutely massive amount of data. The core thing these models are doing is predicting the next few letters of text, over and over again, and they need to be trained on billions and billions of words of human-generated text to get good at it. Compare this process to AlphaZero, DeepMind's algorithm that superhumanly masters Chess, Go, and Shogi. AlphaZero trains by playing against itself. While older chess engines bootstrap themselves by observing the records of countless human games, AlphaZero simply learns by doing. Which means that the only bottleneck for training it is computation - given enough energy, it can just play itself forever, and keep getting new data. Not so with LLMs: their source of data is human-produced text, and human-produced text is a finite resource. The precise datasets used to train cutting-edge LLMs are secret, but let's suppose that they include a fair bit of the low hanging fruit: maybe 5% of publicly available text that is in principle available and not garbage. You can schlep your way to a 20x bigger dataset in that case, though you'll hit diminishing returns as you have to, for example, generate transcripts of random videos and filter old mailing list threads for metadata and spam. But nothing you do is going to get you 1,000x the training data, at least not in the short run. Scaling laws are among the watershed discoveries of ML research in the last decade; basically, these are equations that project how much oomph you get out of increasing the size, training time, and dataset that go into a model. And as it turns out, the amount of high quality data is extremely important, and often becomes the bottleneck. 
It's easy to take this fact for granted now, but it wasn't always obvious! If computational power or model size was usually the bottleneck, we could just make bigger and bigger computers and reliably get smarter and smarter AIs. But that only works to a point, because it turns out we need high quality data too, and high quality data is finite (and, as the political apparatus wakes up to what's going on, legally fraught). There are rumbling...
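For readers who want to see what such a scaling law looks like on paper, here is a rough sketch. The functional form and constants follow the approximate published Chinchilla fit (Hoffmann et al., 2022), and the parameter and token counts are made up for the example; treat it as an illustration of the data bottleneck described in the post, not a statement about any particular model.

def scaling_loss(n_params, n_tokens,
                 E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
    """Chinchilla-style fit: L(N, D) = E + A / N^alpha + B / D^beta.
    Constants are the approximate published values, quoted here for illustration."""
    return E + A / n_params ** alpha + B / n_tokens ** beta

# Hold the dataset fixed at a hypothetical 300 billion tokens and grow the model:
for n_params in (1e9, 1e10, 1e11, 1e12):
    loss = scaling_loss(n_params, 300e9)
    print(f"{n_params:.0e} params, 300B tokens -> predicted loss {loss:.3f}")
# The data term B / D^beta never shrinks, which is the bottleneck the post describes.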
Welcome to Anime Watch Club, a bi-weekly group discussion and review where the hosts of the What Do You Say Anime podcast nominate and vote on shows that we either haven't seen or that will hopefully lead to a great discussion. On today's episode, we review the 2016 anime March Comes in Like a Lion.
Socials/Discord - https://linktr.ee/whatdoyousayanime
0:00 - Intro
2:32 - First Impressions
12:56 - Establishing the cast in the early episodes (Ep. 1-5)
23:53 - The relationship between Rei and Kyoko
29:56 - The "Child of God" arc (Ep 6-9)
42:05 - Would a knowledge of Shogi help your enjoyment?
50:49 - Rei's slice of life, backstory, and new developments (Ep. 10-15)
1:01:27 - The shogi workshop and leading up to the Kyoto tournament (Ep. 16-18)
1:06:04 - Tournament arc and wrapping up season 1 (Ep. 19-22)
1:13:32 - The Kawamotos and the impact of found family
1:22:52 - Closing thoughts and scores
1:30:38 - What We're Watching Next
--- Support this podcast: https://podcasters.spotify.com/pod/show/whatdoyousayanime/support
For Episode 29 we are back to our d20 random episode. The dice gods, by their grace, blessed us with March Comes in Like a Lion. Well regarded amongst the anime community, will this show be a hit or miss for the three weebs? If you enjoyed the show, please subscribe for future episodes every Wednesday. Additionally, we'd appreciate you following the podcast on Twitter @TheAnimeBacklog or leaving us a review on iTunes, Spotify, Stitcher, or wherever you get your podcasts. If you want to follow us individually on Twitter, our handles are - Dan: @Avarice77, Marcus: @Marcus R, Nick: @NickSpartz. Any questions or comments feel free to email us at TheAnimeBacklogPodcast@gmail.com Music: "Kawaii Friends" by Alexander Lisenkov
Surprise! We are kicking off our three-digit episodes with a big surprise: a special episode! Maxi and Fabi aka Shogoon have pulled themselves together for a new episode and simply talked about what has been going on over the last three years. A good place, then, for emotions, fun, a bit of gossip, and tons of anecdotes. It's going to be great!
Tickets for the Shogoon tour: https://shogoon.ticket.io/2ejxnnf7/ywej3s/?lang=de
All further info on Shogi & his music: https://workingclassmonopoly.com
Tickets for Maxi's tour: Shops and podcasts are nice, but a Max without a stage is not a proper Max. And the frustration of the last four years has to go somewhere. If you feel like seeing Max on tour this autumn... tickets are available here: https://krasserstoff.com/tours/max-rockstah-nachtsheim-762b4709-c2cd-46d8-a171-db22853ddaf8 Hosted on Acast. See acast.com/privacy for more information.
In this podcast episode, Gretta returns after a long hiatus to help Xan review a heartfelt manga series about adopted families and Shogi. Is it worth checking out? Well, sit back and find out as they review March Comes In Like A Lion by Chica Umino. As our hosts go over this mesmerizing manga series, Gretta also does a deep dive on the unique characters in the series and Xan goes over the latest manga releases for the week. Remember to Like, Share and Subscribe. Follow us @spiraken on Twitter and @spiraken on Instagram, subscribe to this podcast and our YouTube channel, support our Patreon, and if you would kindly, please go to www.tinyurl.com/helpxan and give us a great rating on Apple Podcasts. Also join our Discord, and thank you; we hope you enjoy this episode. #spiraken #mangareview #wheelofmanga #seinenmanga #dramamanga #sliceoflifemanga #chicaumino #shogimanga #marchcomesinlikealion #denpa #podcasthq #manga #spirakenreviewpodcast
Music Used in This Episode: Closing Theme-Trendsetter by Mood Maze (Uppbeat) Music from Uppbeat (free for Creators!): https://uppbeat.io/t/mood-maze/trendsetter License code: YEPNB5COHX56JVES
WHERE TO FIND US
Our Instagram https://www.instagram.com/spiraken/
Our Email Spiraken@gmail.com
Xan's Email xan@spiraken.com
Our Patreon https://www.patreon.podbean.com/spiraken or https://www.patreon.com/spiraken
Our Discord https://tinyurl.com/spiradiscord
Our Twitter https://twitter.com/spiraken
Our Youtube Channel https://www.youtube.com/@spiraken
Our Twitch https://www.twitch.tv/spiraken
Our Amazon Store http://www.amazon.com/shops/spiraken
Random Question of the Day: Have You Ever Played Shogi Before?
Pull up a chair (or cushion or something) and have a seat for a rousing game of shogi with The Best Boys! What is shogi you ask? You didn't?! Well Best Boys Dan and Justin are gonna tell you anyway! Some people refer to it as "Japanese Chess" while others hate that particular moniker. The truth, as is so often the case, lies somewhere between these extremes. So grab your drunken elephant and sit in a form-perfect seiza while The Best Boys break down the history and culture surrounding the game of shogi and its connections to anime! *Mobile Suit Gundam: Witch from Mercury spoilers start at 00:13:35 and end at 00:20:45* Follow The Best Boys on Instagram @bestboys_pod or send us an email at thebestboyspod@gmail.com. If you like what we do here, please go give us a review, it really helps us with the algorithm, especially on Apple Podcasts! Bubble Tea by Pikuseru (https://fanlink.to/bubbletea) No Copyright Breaking News Music (https://youtu.be/a4I0jlETu4g) Sayonara April by KODOMOi (https://soundcloud.com/kodomoimusic) Creative Commons - Attribution 3.0 Unported - CC BY 3.0 (https://creativecommons.org/licenses/…) Music promoted by Music Panda - Vlog No Copyright Free Music Video Link: https://youtu.be/jmuJp29d57Q
The Adult Improver Series returns with two insightful guests joining the podcast. WIM Natasha Regan is an author and actuary who, among many other chess accomplishments, recently became the British Over-50 Women's National Champion! Natasha recently collaborated on a Chessable course with Matthew Ball, a chess dad and dedicated improver who has made significant rating progress since returning to competitive chess in recent years. Natasha and Matthew shared lots of helpful chess study tips covering topics ranging from The Woodpecker Method, to the Chess Steps series, to whether one should alter their approach to a game against a younger opponent. We also discussed their fun and instructive new course, Zwischenzug: A Comprehensive Guide to Intermediate Moves. You can find timestamps for all of the topics discussed below.
0:00- Perpetual Chess is brought to you in part by Chessable.com! Check out Natasha and Matthew's new Chessable course here: https://www.chessable.com/zwischenzug-a-comprehensive-guide-to-intermediate-moves/course/139623/ You can check out some of my recommended courses here: https://go.chessable.com/perpetual-chess-podcast/
0:03- Matthew Ball and Natasha discuss their shared background as junior players, and how their paths recrossed in recent years.
7:30- Patreon mailbag question: Does Natasha have any different strategies when playing against kids as compared to adults?
17:00- Matthew came back into chess a few years back and has seen some rating gain. He discusses his training regimen. Mentioned: Chess for Life, Chess Steps Books, Woodpecker Method
22:00- More on the Woodpecker Method. Mentioned: Pump Up Your Rating by GM Axel Smith, Book Recap #6 on the Woodpecker Method
23:00- How does Natasha tune up for a tournament?
26:00- Natasha discusses some similarities between Shogi and Chess. Mentioned: Karolina Styczyńska of the Shogi Harbor Twitch Channel
32:00- Matthew shares a few more improvement recommendations.
35:00- Why did Natasha and Matthew decide to do a course on intermediate moves?
45:00- Natasha and Matthew discuss their approaches to openings
52:00- Do they work with coaches?
56:00- Natasha and Matt discuss their tournament and summer plans.
Thanks so much to Natasha and Matt for joining the show! Check out their course here: https://www.chessable.com/zwischenzug-a-comprehensive-guide-to-intermediate-moves/course/139623/
If you would like to help support Perpetual Chess via Patreon, you can do so here: https://www.patreon.com/perpetualchess
Learn more about your ad choices. Visit megaphone.fm/adchoices
Leave a message: https://www.speakpipe.com/chunkmcbeefchest chunkmcbeefchest@gmail.com Donate https://www.paypal.com/paypalme/chunkmcbeefchest Podcasts https://chunkmcbeefchest.com/ https://ninjanewsjapan.com/ Youtube Podcasts https://www.youtube.com/@chunkmcbeefchest Gaming https://www.youtube.com/@chunkmcbeefchestgames https://www.twitch.tv/chunkmcbeefchest Other things https://montanaeldiablo.com/ https://www.tiktok.com/@chunkmcbeefchest https://www.instagram.com/chunkmcbeefchest/ https://twitter.com/NinjaNewsJapan https://twitter.com/VelociPeter https://mstdn.social/@Chunkmcbeefchest https://www.facebook.com/ninjanewsjapan
Podcast Review | Best of the Bet #4: March Comes in Like a Lion (Keishi Ohtomo, 2017). High school student Rei Kiriyama is a ranked shogi player. He lives alone, away from his adoptive family, because his foster sister, like, hates him. He soon meets another family of sisters who care for him. Kiriyama struggles to become the best shogi player while facing the threatening high-ranking player Goto.
Tazza: The High Rollers (Choi Dong-hoon, 2007)
The God of Gamblers (Wong Jing, 1989)
Kaiji: The Ultimate Gambler (Tôya Satô, 2009)
March Comes in Like a Lion (Keishi Ohtomo, 2017)
YouTube | https://www.youtube.com/channel/UCzMwCEPYI47Mq7W997iJkbg?view_as=subscriber
Support us on Patreon! https://www.patreon.com/pastthesubtitles?fan_landing=true
Instagram | Pastthesubtitles
Twitter | @PastTheSubtitle
--- Support this podcast: https://anchor.fm/gps1/support
Today's broadcast is C1E65A for Wayback Wednesday, July 13th, 2022. Today's episode is part one of two in our "Best of 2020 / 2021" celebration – featuring St. John's picks, and unimaginatively called "The Best of 2020 / 2021 – part 1: St. John's picks".
Track# – Track – Game – System – Composer(s) – Original Episode – Originally selected by
01) Intro – 00:00:00
02) Funky Radio [In-Game vers] – Jet Grind Radio – Dreamcast – B.B. Rights – C1E51 – St. John – 00:09:44
03) Title – TMNT – NES – Jun Funahashi – C1E60 – Phillip Vaughn – 00:13:07
04) [REVERSE] Options – Daffy Duck in Hollywood – Genesis – Matt Furniss – Ch F: Backtracks: the OTHER 50 – St. John – 00:14:51
05) Roller Mobster – Hotline Miami – Multiplatform – Carpenter Brut – C1E60 – Adam Huisman – 00:17:10
06) Got Well Soon – Life is Strange – Multiplatform – Breton – C1E60 – Amber Pearey – 00:20:37
07) Signs of Love – Persona IV – PS2 – Shoji Meguro – C1E60 – Trey Johnson – 00:25:22
08) [SLOW] Can you Feel the Sunshine – Sonic R – Saturn – Richard Jacques – C1E57 – St. John – 00:28:16
09) [REVERSE] Mirage – Gran Turismo 5 – PS3 – Kemmei Adachi – C1E52 – St. John – 00:35:19
10) Star Guitar – Lumines II – PSP – The Chemical Bros – C1E51 – St. John – 00:39:24
11) Overworld Theme – Zelda II: Adventure of Link – NES – Akito Nakatsuka – C1E53/C1E58 – St. Jesse / St. John – 00:45:44
12) World D – Art of Balance – Multiplatform – Martin Schjøler – Ch F: C2S1 Complete – St. John – 00:47:13
13) Afternoon in Crossbell – Zero no Kiseki – PC / PSP – Takahiro Unisuga – C1E59 – Hugues Johnson – 00:49:32
14) [REVERSE] Cylic – Heavily Armored – Cosmic Carnage – 32X – Hikoshi Hashimoto – C1E52 – St. John – 00:51:51
15) Stage 02 – Contra III: The Alien Wars – SNES – Konami Kukeiha Club – C1E53 – St. John – 00:54:04
16) Illusion – Ys IV – PC Engine – Atsushi Shiakawa – C1E59 – Hugues Johnson – 00:56:55
17) Norfair – Super Smash Bros. Ultimate – Switch – c: Hirokazu ("Hip") Tanaka / a: Yuzo Koshiro – Ch F: C2S1 Complete – St. John – 00:58:58
18) Mystic Woods (aka Forest 2) – Grounseed (OPN vers) – PC98 – Daisuke Takahashi – C2E3 – St. John – 01:01:27
19) [REVERSE] Stage 4-3 – Bram Stoker's Dracula – Genesis – Matt Furniss – C1E52 – St. John – 01:05:03
20) [REVERSE] The Reverse Will – Silent Hill 2 – PS2 – Akira Yamaoka – C1E52 – St. John – 01:08:17
21) Voices of Urdak – DOOM Eternal – Multiplatform – Mick Gordon – C1E56 – St. John – 01:11:38
22) Anxious Heart – Final Fantasy VII – PS1 – Nobuo Uematsu – C1E58 – St. Jesse – 01:17:44
23) If you Open your Heart – Final Fantasy VII – PS1 – Nobuo Uematsu – C1E58 – St. Jesse – 01:21:37
24) Everything's Going to be Okay – Prey – Multiplatform – Mick Gordon – C1E51 – St. John – 01:24:31
25) [REVERSE] Process Control – Gran Turismo Sport – PS4 – Yasuhisa Inoue – C1E52 – St. John – 01:27:11
26) BGM 3 – Mario Paint – SNES – Hirokazu "Hip" Tanaka, Ryouji Yoshitomi, and/or Kazumi Totaka – C1E54 – St. John – 01:30:37
27) [SLOW] Fonction – n++ – Multiplatform – Broca – C1E57 – St. John – 01:33:57
28) Fonction – n++ – Multiplatform – Broca – Ch F: C2S1 Complete – St. John – 01:43:31
29) Kara Kara Bazaar – LoZ: Breath of the Wild – WiiU / Switch – Hajime Wakai, Manaka Kataoka, and/or Yasuaki Iwata – C1E60 – St. John – 01:50:26
30) Space – Yoshi's Crafted World – Switch – Kazufumi Umeda – C1E54 – St. John – 01:52:32
31) Footlight Lane – Super Mario 3D World – WiiU / Switch – Mahito Yokota, Toru Minegishi, Koji Kondo, and/or Yasuaki Iwata – C1E54 – St. John – 01:55:38
32) Course World – Super Mario Maker 2 – Switch – Atsuko Asahi, Toru Minegishi, and/or Sayoka Doi – C1E54 – St. John – 01:57:38
33) Gomoku, Shogi, Mini-Shogi, Hanafuda – Clubhouse Games: 51 Worldwide Classics – Switch – Chamy Ishi, and/or Toshiki Aida – C1E56 – St. John – 01:59:52
34) Prime #5 – Echochrome – PSP / PS3 – Hideki Sakamoto – C1E60 – Electric Boogaloo – 02:01:53
35) Skytown – Metroid Prime 3 – Wii – Kenji Yamamoto, Minako Hamano, and/or Masaru Tajima – C1E51 – St. John – 02:05:06
36) [REVERSE] The Blazing Sands – Final Fantasy X – PS2 – Masashi Hamauzu – C1E52 – St. John – 02:08:28
37) [REVERSE] Stage 4 – Chip Chan Kick – PCFX – Yoshio Furukawa, Masahara Iwata, and/or Hitoshi Sakamoto – C1E52 – St. John – 02:11:26
38) Division – BrandishVT – PC – Naoki Kaneda – C1E59 – Hugues Johnson – 02:13:25
39) Snif City – Paper Mario: The Origami King – Switch – Shoh Marakami, Yoshiaki Kimura, Hiroki Moishita, and/or Fumihiro Isobe – C1E54 / C2E4 – St. John – 02:15:24
40) Marionette, Marionette – Ys IX – PS4 – Mitsuo Singa – C1E59 – Hugues Johnson – 02:18:19
41) Toad's Turnpike – Mario Kart 64 – N64 – Kenta Nagata, Taroh Bandoh, and/or Yoji Inagaki – C1E54 – St. John – 02:20:34
42) Guruguru Majin De Pon – Gurumin – PC / PSP / 3DS – Takahide Murayama – C1E59 – Hugues Johnson – 02:23:28
43) Results Theme (aka Record of Samus) – Metroid Prime – Gamecube / Wii – Kenji Yamamoto (inspired by "Brinstar" from Metroid – NES – Hirokazu "Hip" Tanaka) – C1E60 – St. John – 02:27:08
44) Results Parade – Check Mii Out Channel – Wii – Kazumi Totaka – C1E56 – St. John – 02:30:05
45) A Battle's Conclusion – Hyrule Warriors: Age of Calamity – Switch – Kumi Tanioka, Reo Uratai, Ryotaro Yagi, and/or Haruki Yamada – C1E56 – St. John – 02:32:09
46) Tear-Stained Eyes – Snatcher – Sega CD / PC Engine CD – Konami Kukeiha Club – C1E51 – St. John – 02:34:18
47) Inn / Game Over – Ironsword: Wizards and Warriors II – NES – David Wise – C1E53 – St. John – 02:39:09
48) [REVERSE] The Serpent Trench – Final Fantasy VI – SNES – Nobuo Uematsu – C1E52 – St. John – 02:39:59
49) [REVERSE] What Can you Do? – Gran Turismo Sport – PS4 – Lenny Ibizarre – C1E52 – St. John – 02:41:56
50) [SLOW] Jump Up Super Star – Super Mario Odyssey – Switch – Shiho Fujii, and/or Koji Kondo – C1E57 – St. John – 02:46:54
51) [REVERSE] Ending – Super Castlevania IV – SNES – Masanori Adachi and/or Taro Kudo – Ch F: Backtracks – The OTHER 50 – St. John – 02:52:41
52) Outro – 02:57:55
Music Block Runtime: 02:48:26, Total Episode Runtime: 03:17:52
Our Intro Music is Funky Radio, from Jet Grind Radio on the Sega Dreamcast, composed by BB Rights. Our Outro Music is Results Parade from the Check Mii Out Channel on the Wii, composed by Kazumi Totaka.
Produced using Ardour 6 / Audacity 3 in Ubuntu Studio [Linux] 22.04
A few important milestones in 2020 / 2021 that I had failed to mention in the intro, but which deserve mentioning: A) we had our first Ch 1 episode by Hugues (which was also the first Ch 1 in the entire show's history that St. John had absolutely no hand whatsoever in producing).
It was a focus on music from Falcom games, and was quite succinctly named "The Falcom Episode". B) We made the migration from producing Nerd Noise Radio in GarageBand via MacOS on a Mac mini (and using primarily MP3s in production – mostly through YouTube-to-MP3 websites) to producing NNR in a mix of Audacity and Ardour via Ubuntu Studio (Linux) on a mix of a Dell Latitude laptop and my custom-built gaming PC (and using primarily WAVs in production – mostly through the youtube-dl command in the Linux terminal). The last episode to be produced on Mac was C1E52: Backtracks (April 1st, 2020), and the first episode to be produced on Linux was Channel F: Backtracks – the OTHER 50 (April or May 2020). I feel like the production value on balance is noticeably and appreciably better after the change. And since Hugues is also a Linux user, that means that both sides of our Channel 2 episodes are Linux productions.
You can find the Google Sheets spreadsheet showing St. John's work in arriving at his tracklist for this episode here (WARNING: MILD SPOILERS! Contains track lists for potential future Archive-exclusive SUPER BONUS supplemental music collections): https://docs.google.com/spreadsheets/d/1X6JpkqiGADAF9sHAsSpgyImjvimVWJv2mOATf5_ZZGc/edit?usp=sharing
You can also find all of our audio episodes on Archive.org, as well as the occasional additional release only available there, such as remixes of previous releases and other content.
Our YouTube channel is in dormancy for the time being, but will be returning with content, hopefully, in 2022. Meanwhile, all the old stuff is still there, and can be found here: https://www.youtube.com/user/NerdNoiseRadio
Our episodes (and occasionally, other content, including expanded show notes) can be found on our blog here: nerdnoiseradio.blogspot.com. Nerd Noise Radio is also available on The Retro Junkies Network at www.theretrojunkies.com, and is a member of the VGM Podcast Fans community at https://www.facebook.com/groups/VGMPodcastFans/
Or, if you wish to connect with us directly, we have two groups of our own:
Nerd Noise Radio - Easy Mode: https://www.facebook.com/groups/276843385859797/ for sharing tracks, video game news, or just general videogame fandom.
Nerd Noise Radio - Expert Mode: https://www.facebook.com/groups/381475162016534/ for going deep into video game sound hardware, composer info, and/or music theory.
You can also follow us on Twitter at @NerdNoiseRadio. And we are also now on Spotify, TuneIn, Pandora, iHeartRadio, Stitcher, and Vurbl.
Thanks for listening! Join us again in August (dates TBD) for a special guest on Channel 1, and our desert biome focus, "Just Deserts", on Channel 2. Delicious VGM, as well as Tasty VGM and Talk, on Nerd Noise Radio... and wherever you are... Fly the N!!! Cheers!
'March Comes in Like a Lion' is a masterpiece that beautifully delivers on character-driven stories via internal monologues, nuanced dialogue, and effective imagery centered around the world of shogi. Ionatan and Ravi dive deep into the series, discussing the strengths of Chica Umino's writing and Akiyuki Shinbo's adaptation, the realistic depictions of depression and bullying, and how the message of the series is ultimately optimistic in terms of personal growth and relationships.
In this episode, I share my experience learning to play shogi in Japan. Read the transcript on my website: https://teatimewithtayvinb.wordpress.com/2022/04/13/lets-go-for-shigi/ Talk with me on iTalki: www.italki.com/teacher/2889545 Follow me on Twitter: https://twitter.com/TEA_Time_Tayvin --- Support this podcast: https://anchor.fm/teatimewithtayvin/support
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Supervise Process, not Outcomes, published by stuhlmueller on April 5, 2022 on LessWrong. We can think about machine learning systems on a spectrum from process-based to outcome-based: Process-based systems are built on human-understandable task decompositions, with direct supervision of reasoning steps. Outcome-based systems are built on end-to-end optimization, with supervision of final results. This post explains why Ought is devoted to process-based systems. The argument is: In the short term, process-based ML systems have better differential capabilities: They help us apply ML to tasks where we don't have access to outcomes. These tasks include long-range forecasting, policy decisions, and theoretical research. In the long term, process-based ML systems help avoid catastrophic outcomes from systems gaming outcome measures and are thus more aligned. Both process- and outcome-based evaluation are attractors to varying degrees: Once an architecture is entrenched, it's hard to move away from it. This lock-in applies much more to outcome-based systems. Whether the most powerful ML systems will primarily be process-based or outcome-based is up in the air. So it's crucial to push toward process-based training now. There are almost no new ideas here. We're reframing the well-known outer alignment difficulties for traditional deep learning architectures and contrasting them with compositional approaches. To the extent that there are new ideas, credit primarily goes to Paul Christiano and Jon Uesato. We only describe our background worldview here. In a follow-up post, we'll explain why we're building Elicit, the AI research assistant. The spectrum Supervising outcomes Supervision of outcomes is what most people think about when they think about machine learning. Local components are optimized based on an overall feedback signal: SGD optimizes weights in a neural net to reduce its training loss Neural architecture search optimizes architectures and hyperparameters to have low validation loss Policy gradient optimizes policy neural nets to choose actions that lead to high expected rewards In each case, the system is optimized based on how well it's doing empirically. MuZero is an example of a non-trivial outcome-based architecture. MuZero is a reinforcement learning algorithm that reaches expert-level performance at Go, Chess, and Shogi without human data, domain knowledge, or hard-coded rules. The architecture has three parts: A representation network, mapping observations to states A dynamics network, mapping state and action to future state, and A prediction network, mapping state to value and distribution over next actions. Superficially, this looks like an architecture with independently meaningful components, including a “world model” (dynamics network). However, because the networks are optimized end-to-end to jointly maximize expected rewards and to be internally consistent, they need not capture interpretable dynamics or state. It's just a few functions that, if chained together, are useful for predicting reward-maximizing actions. Neural nets are always in the outcomes-based regime to some extent: In each layer and at each node, they use the matrices that make the neural net as a whole work well. 
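To make the three-part MuZero decomposition described above concrete, here is a deliberately skeletal sketch of the interfaces involved. Everything in it (the toy transition, the placeholder value and policy heads, the three-step rollout) is invented for illustration and is not DeepMind's implementation; the point is only that each function maps latent states to further states or predictions, and that all three parts are trained jointly against the final reward signal rather than supervised piece by piece.

# Schematic MuZero-style decomposition (illustrative placeholders only).
def representation(observation):
    """h: encode a raw observation into an abstract latent state."""
    return tuple(observation)                      # stand-in for a learned encoder

def dynamics(state, action):
    """g: map (latent state, action) to the next latent state and a predicted reward."""
    next_state = state + (action,)                 # toy transition, not a learned model
    predicted_reward = 0.0                         # stand-in for a learned reward head
    return next_state, predicted_reward

def prediction(state):
    """f: map a latent state to a value estimate and a policy over actions."""
    value = 0.0                                    # stand-in for a learned value head
    policy = {action: 1.0 / 3.0 for action in (0, 1, 2)}   # uniform stand-in policy
    return value, policy

# A planning rollout just chains the three functions. Nothing forces `state` to be
# interpretable: in the real system all three networks are optimized end-to-end so
# that the chain predicts reward-maximizing actions, which is the outcome-based
# regime described above.
state = representation(observation=(0, 0, 1))
for _ in range(3):
    value, policy = prediction(state)
    action = max(policy, key=policy.get)
    state, reward = dynamics(state, action)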
Supervising process If you're not optimizing based on how well something works empirically (outcomes), then the main way you can judge it is by looking at whether it's structurally the right thing to do (process). For many tasks, we understand what pieces of work we need to do and how to combine them. We trust the result because of this reasoning, not because we've observed final results for very similar tasks: Engineers and astronomers expect the James Webb Space Telescope to work because its deployment follows a well-understood plan, and it is built out of well-understo...
Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Supervise Process, not Outcomes, published by stuhlmueller on April 5, 2022 on LessWrong. We can think about machine learning systems on a spectrum from process-based to outcome-based: Process-based systems are built on human-understandable task decompositions, with direct supervision of reasoning steps. Outcome-based systems are built on end-to-end optimization, with supervision of final results. This post explains why Ought is devoted to process-based systems. The argument is: In the short term, process-based ML systems have better differential capabilities: They help us apply ML to tasks where we don't have access to outcomes. These tasks include long-range forecasting, policy decisions, and theoretical research. In the long term, process-based ML systems help avoid catastrophic outcomes from systems gaming outcome measures and are thus more aligned. Both process- and outcome-based evaluation are attractors to varying degrees: Once an architecture is entrenched, it's hard to move away from it. This lock-in applies much more to outcome-based systems. Whether the most powerful ML systems will primarily be process-based or outcome-based is up in the air. So it's crucial to push toward process-based training now. There are almost no new ideas here. We're reframing the well-known outer alignment difficulties for traditional deep learning architectures and contrasting them with compositional approaches. To the extent that there are new ideas, credit primarily goes to Paul Christiano and Jon Uesato. We only describe our background worldview here. In a follow-up post, we'll explain why we're building Elicit, the AI research assistant. The spectrum Supervising outcomes Supervision of outcomes is what most people think about when they think about machine learning. Local components are optimized based on an overall feedback signal: SGD optimizes weights in a neural net to reduce its training loss Neural architecture search optimizes architectures and hyperparameters to have low validation loss Policy gradient optimizes policy neural nets to choose actions that lead to high expected rewards In each case, the system is optimized based on how well it's doing empirically. MuZero is an example of a non-trivial outcome-based architecture. MuZero is a reinforcement learning algorithm that reaches expert-level performance at Go, Chess, and Shogi without human data, domain knowledge, or hard-coded rules. The architecture has three parts: A representation network, mapping observations to states A dynamics network, mapping state and action to future state, and A prediction network, mapping state to value and distribution over next actions. Superficially, this looks like an architecture with independently meaningful components, including a “world model” (dynamics network). However, because the networks are optimized end-to-end to jointly maximize expected rewards and to be internally consistent, they need not capture interpretable dynamics or state. It's just a few functions that, if chained together, are useful for predicting reward-maximizing actions. Neural nets are always in the outcomes-based regime to some extent: In each layer and at each node, they use the matrices that make the neural net as a whole work well. 
Shogi! Game of champions. Get ready for an entire anime about one introverted Shogi player's journey through the horrible world of Society and having to have relationships with other people in your life. The horror! March Comes In Like A Lion! Follow Deer on twitter: https://twitter.com/DeerBits_ Follow Gamb on twitter: https://twitter.com/DrewGamblord Follow Spooky on twitter: https://twitter.com/SpoonkyTweets Don't forget we also make public our commentary track so you can watch along with us or just listen to us go insane: https://soundcloud.com/friendsforcingfriends/march-comes-in-like-a-lion-commentary Patreon, where you can find commentary tracks, notes, and early access to next week's episode! https://www.patreon.com/gamblord
With new rules for chess? / With seaplanes? / With drones picking apples? / Sony buys Bungie / More legal drama in Epic vs Apple / Samsung's toxic spill / Windows' animated 3D emoji are still on the way / OpenRAN in Málaga
Sponsor: The folks at Colchón Morfeo are back with us so you can enjoy the best mattress in the universe https://www.colchonmorfeo.com/?utm_source=NL&utm_campaign=MIXX, which will give you the rest and vitality you need to fulfill your dreams. Shipping is free and arrives within 24 hours https://www.colchonmorfeo.com/?utm_source=NL&utm_campaign=MIXX and you get a 100-day trial with no commitment. Save 100 euros https://www.colchonmorfeo.com/?utm_source=NL&utm_campaign=MIXX with the code MIXX100.
♟️ Can algorithms improve the rules of chess? My conclusion after reading this interesting study on AlphaZero playing under different rules https://cacm.acm.org/magazines/2022/2/258230-reimagining-chess-with-alphazero/fulltext is that it would end up making the traditional game worse by giving the white pieces an even bigger advantage.
This week, Dr. Green is joined by Dr. Yoichi Funabashi, chairman of the Tokyo-based think tank Asia Pacific Initiative, to discuss geopolitical and economic trends in the Indo-Pacific and Japanese grand strategy. Dr. Funabashi talks about the evolution of Japan's foreign policy strategy, from the Abe administration to the new Kishida administration, as well as the role of the U.S.-Japan alliance in Japan's strategic thinking. The two also touch on Japan's relationship with South Korea, economic security, and Japan's prospects for acquiring strike capabilities.
Today's Whiteboard & Poll きょうのホワイトボード https://www.azumiism.com/category/podcast/
Our friends (and specifically Nichi) from Konichiwa desde Japón (a podcast you should already be listening to and subscribing to) visit us to clear up questions like: is everything we see from Japan in video games true? Are there video games in public bathrooms? How on earth do you play Shogi? Do people really walk around with brightly colored hair? All this and much more. Come in and find out from someone who lives in the land of the rising sun. Enjoy!
Ah, Chess. This one's been around a long time. But what if you're looking for something similar, with a different spin on it? In this episode, Dylan and Bill discuss some of the many variants that Chess has inspired, as well as a few games that are completely different but might scratch the same itch.
Director Kahlil Silver, along with his brother Shogi Silver, joins Silas to talk about their newest film In The Company of Women, a film set and shot in Seattle, WA. They also answer the world-famous Seven Questions. #SFCS
What if Hideo Kojima made a manga about Shogi that was a mashup of Fight Club and Joker? This episode, we and our guest Tim Batt from The Worst Idea of All Time discuss Kentarou Fukuda's Shonen Jump manga Double Taisei.
Show Notes:
You can reach us at shonenflop@gmail.com or on twitter @shonenflopcast
Episode art by Shannon (IG: illuminyatea)
You can find our guest on Twitter @Tim_batt, The Worst Idea of All Time: worstideaofalltime.com
If you're enjoying Shonen Flop and want to support us, or even just want access to exclusive content like bonus eps, warmup audio, or deleted scenes, consider becoming a Patron: patreon.com/shonenflop. Your support is extremely appreciated!
Our merch store has new designs! Get Shonen Flop art, including this episode's cover art, on a shirt, mug, print, or whatever else might catch your eye: teepublic.com/t-shirts?ref_id=22733
Become a member of our community by joining our Discord. You can hang out with us, play games, and even join our comic book discussion club! Find it at discord.gg/4hC3SqRw8r
Shoutouts:
Check out Newsly for free now from www.newsly.me and use promo code FL0P2021 to receive a 1-month free premium subscription
Blake and Spencer Get Jumped: linktr.ee/bandsgetjumped
The Broken Lords Tabletop RPG Podcast: linktr.ee/thebrokenlords
4 Eyes Academia: @4EyesAcademia
Support this podcast at --> https://anchor.fm/words-beyond-action/support
Watch the full video on YouTube --> https://www.youtube.com/watch?v=GQ7RwBWXI2g&t=4s
Visit our website --> https://www.wordsbeyondaction.com/
Follow us on Instagram: https://www.instagram.com/wordsbeyondaction/
Follow us on Facebook: https://www.facebook.com/wordsbeyondaction
--- Support this podcast: https://anchor.fm/words-beyond-action/support
Sailor Noob is the podcast where a Sailor Moon superfan and a total noob go episode by episode through the original Sailor Moon series!
It's a game of wits this week as Ami faces off against Berthier! Ami is a chess prodigy, but when Berthier challenges her to a match she'll have to put her life on the line to save her friends!
In this episode, we discuss chess in Japan, Shogi, the JCA, and Miyoko Watai. We also talk about Sailor ships, a tasty new OP, spoiler stew, eye boogers and nose boogers, playing to forget your loneliness, being mercurial, Japanese Jane Doe, warp speed chess, rubbing the goop, accepting Usagi into your heart, "Shogi's Gambit", dodgeball mercenaries, imaginary cake, and Tuxedo Mauve!
Project much?
We're on iTunes and your listening platform of choice! Please subscribe and give us a rating and a review! Arigato gozaimasu!
https://podcasts.apple.com/us/podcast/sailor-noob/id1486204787
Become a patron of the show and get access to even more Sailor Noob content!
http://www.patreon.com/sailornoob
Sailor Noob is a part of the Just Enough Trope podcast network. Check out our other shows about your favorite pop culture topics and join our Discord!
http://www.twitter.com/noob_sailor
http://www.justenoughtrope.com
http://www.instagram.com/noob_sailor
https://discord.gg/MYg6YN7v
Buy us a Kōhī on Ko-Fi!
https://ko-fi.com/E1E01M2UA
In this episode, Zoe tells us about Yokai - the folklore surrounding them as well as specific examples of the Kappa and Kitsune. We also talk about the ship of Theseus, Mike's badger beard, where (or when) to avoid when time travelling through history, and more!
Here's a rundown of the episode:
00:37 - Hello!
01:27 - Presented Piece - Yokai
09:49 - Promo - Strange Origins
10:51 - Discussion
41:15 - Thanks and Where You Can Find Us Online
44:27 - Fun Fact
Total Runtime: 45:02
Links:
Promo: Our promo partner this week is Strange Origins podcast, which you can check out here: https://www.listennotes.com/podcasts/strange-origins-fascinating-productions-3rvREZzXdp2/amp/
Research Links:
https://en.wikipedia.org/wiki/Princess_Mononoke
https://en.wikipedia.org/wiki/Animism
https://en.wikipedia.org/wiki/Yōkai
https://en.wikipedia.org/wiki/Mononoke
https://yokaiwatchfans.com/wiki/Walkappa
https://en.wikipedia.org/wiki/Kappa_(folklore)
https://www.tsunagujapan.com/6-famous-yokai-mystical-creatures-from-japan/
https://en.wikipedia.org/wiki/Shogi
https://en.wikipedia.org/wiki/Kitsune
Our Links:
Website: http://storiesofstrangeness.com - Sign up for email alerts, view the gallery, and other cool stuff.
Instagram: https://Instagram.com/storiesofstrangeness - Where we hang out the most.
Facebook: https://Facebook.com/storiesofstrangeness - We have a page and a group - come talk to us!
Twitter: https://Twitter.com/sostrangepod - Mike sends all our tweets - blame him!
Redbubble: https://www.redbubble.com/people/ZoeandMike/shop?asc=u&ref=account-nav-dropdown - where you can find all of our designs and illustrations on loads of products.
Patreon: https://www.patreon.com/storiesofstrangeness?fan_landing=true - if you'd like to support the show. We're starting with 2 tiers: £1 a month, to support the show, which is about $1.37, where we will thank you personally on the show - and £3 a month, which is about $4.11, that will enable you to listen to outtakes and other bonus content. More tiers may follow.
Support this podcast
The Habibis talk about games, shogi, and haggling.
Timestamps:
Persona 5 Strikers (00:45)
Loop Hero (06:37)
Shogi (20:40)
A Board With No Pieces (26:56)
That's it for this episode. Follow us on Twitter at @the_habibis, e-mail us your stories, feedback, or suggestions at info@thehabibis.com, or join our (very young) Discord server at https://discord.thehabibis.com/. Shokran, and salaam!
This Week on The Casual Hour… After seeing credits roll on The Last of Us Part 2, there are some OPINIONS around The Casual Hour offices. But we also manage to squeeze in some talk on the most interesting games coming out in July. Plus hear someone butcher the rules of Hanafuda. All that and more on this edition of The Casual Hour. Let's Talk! We love fan mail, send us some! thecasualhour@gmail.com You can follow our show on Twitter at: @thecasualhour Enjoy what we have to say? Please leave us a review! The Casual Hour is: Bobby Pease - Host - Twitter: @bobbypease | Website: www.Lumberjacksmack.com Chase Koeneke - Co-host - Twitter: @chase_koeneke | Website: www.gamersonthego.com Johnny Amizich - Co-host - Twitter @jamizich Love our theme music? It was created by Patric Brown. You can follow his antics on twitter @insaneanalog or check out more of his music and download our theme at www.insaneanalog.com If you are enjoying what we do and would like to further support us you can do so by making a monthly contribution through Anchor! --- Support this podcast: https://anchor.fm/thecasualhour/support
AlphaZero is a computer program that trained itself to play chess, Go, and Shogi at superhuman levels in 24 hours. The post What is AlphaZero? (https://drpepermd.com/episode/what-is-alphazero/) appeared first on Dr Peper MD (https://drpepermd.com).
In this episode, GameTek talks about Shogi, Dexter and the Chief take a look at Yomi, and we introduce the Voice of Ameritrash 2.0! Mary helps us with the news, Moritz discusses Grey Giant Games, and Fathergeek wonders if you should let your child win games. We take a look at some new games, including Thunderstone: Dragonspire and Cargo Noir, and talk about the game we're most looking forward to in 2011.